VV Unit-V
Automation saves time, as software can execute test cases faster than a human can.
The time thus saved can be used effectively by test engineers to
1. develop additional test cases to achieve better coverage;
2. perform some esoteric or specialized tests like ad hoc testing; or
3. perform some extra manual testing.
Test automation can free the test engineers from mundane tasks and let them
focus on more creative tasks. For example, ad hoc testing requires intuition and creativity
to test the product from perspectives that may have been missed by planned test cases. If
there are too many planned test cases that need to be run manually and adequate
automation does not exist, the test team may spend most of its time in test execution.
Automating the more mundane tasks gives the test engineers some time for creative
and challenging tasks.
Automated tests can be more reliable. When an engineer executes a particular test case
many times manually, there is a chance of human error. As with all machine-oriented
activities, automation can be expected to produce more reliable results every time, and it
eliminates the factors of boredom and fatigue.
Automation helps in immediate testing. Automation reduces the time gap between
development and testing, as scripts can be executed as soon as the product build is ready.
Automated testing need not wait for the availability of test engineers.
Automation can protect an organization against attrition of test engineers. Automation
can also be used as a knowledge transfer tool to train test engineers on the product, as it
holds a repository of different tests for the product.
Test automation opens up opportunities for better utilization of global resources.
Manual testing requires the presence of test engineers, but automated tests can be run
round the clock, twenty-four hours a day and seven days a week. This also enables
teams in different parts of the world, in different time zones, to monitor and control the
tests, thus providing round-the-clock coverage.
When a set of test cases is combined and associated with a set of scenarios, it is
called a “test suite.” A test suite is nothing but a set of automated test cases and the
scenarios associated with those test cases.
b. Second generation - Data-driven
This method helps in developing test scripts that generate the set of input
conditions and the corresponding expected output.
This enables the tests to be repeated for different input and output conditions.
This approach, however, can take as much time and effort as the product itself.
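As an illustration, the sketch below shows the data-driven idea in Python: one script is repeated over a table of input and expected-output conditions. The add function, the CSV file name, and its columns are all hypothetical.

import csv

def add(a, b):
    # Hypothetical function under test.
    return a + b

def run_data_driven_tests(data_file):
    # One test script, repeated for every row of input/expected-output data.
    with open(data_file, newline="") as f:
        for row in csv.DictReader(f):
            actual = add(int(row["a"]), int(row["b"]))
            expected = int(row["expected"])
            status = "PASS" if actual == expected else "FAIL"
            print(f"a={row['a']} b={row['b']} expected={expected} -> {status}")

# run_data_driven_tests("add_cases.csv")  # hypothetical CSV with columns a, b, expected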
c. Third generation-Action-driven
This technique enables a layman to create automated tests. There are no input
and expected output conditions required for running the tests.
All actions that appear on the application are automatically tested, based on a
generic set of controls defined for automation.
The set of actions are represented as objects and those objects are reused. The
user needs to specify only the operations and everything else that is needed for
those actions are automatically generated.
Hence, automation in the third generation involves two major aspects:
1) “test case automation” and 2) “framework design.”
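A minimal sketch of the action-driven idea, assuming a hypothetical keyword table in which each user-visible action maps to a reusable action object:

# Reusable action objects: each keyword maps to a callable.
def click(target):
    print(f"clicking {target}")

def type_text(target, text):
    print(f"typing '{text}' into {target}")

ACTIONS = {"click": click, "type": type_text}

# The user specifies only the operations; the framework supplies the rest.
test_steps = [
    ("type", "username_field", "admin"),
    ("type", "password_field", "secret"),
    ("click", "login_button"),
]

for keyword, *args in test_steps:
    ACTIONS[keyword](*args)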
SCOPE OF AUTOMATION
b. Regression tests: Regression tests are repetitive in nature. These test cases are
executed multiple times during the product development phases.
c. Functional tests: These kinds of tests may require a complex setup and therefore
specialized skill, which may not be available on an ongoing basis. Automating these
tests once, using the expert skill sets, enables less-skilled people to run them
on an ongoing basis.
Automating for standards provides a dual advantage. Test suites developed for
standards are not only used for product testing but can also be sold as test tools
for the market.
For integration testing, both internal interfaces and external interfaces have to be captured
in the design and architecture. In the architecture figure (not reproduced here), thin arrows
represent the internal interfaces and the direction of flow, and thick arrows show the external
interfaces. All the modules, their purpose, and the interactions between them are described
in the subsequent sections.
Architecture for test automation involves two major heads: a test infrastructure that
covers a test case database (TCDB) and a defect database or defect repository. Using this
infrastructure, the test framework provides a backbone that ties together the selection and
execution of test cases.
1. External Modules
There are two modules that are external to automation: TCDB and defect
DB. All the test cases, the steps to execute them, and the history of their execution
are stored in the TCDB.
The test cases in the TCDB can be manual or automated. The interface shown by thick
arrows represents the interaction between the TCDB and the automation framework only
for automated test cases.
Defect DB, or defect database or defect repository, contains details of all the defects
found in the various products tested in a particular organization. It
contains the defects and all related information. Test engineers submit the defects
manually for manual test cases; for automated test cases, the framework can
automatically submit the defects to the defect DB during execution.
Scenarios are nothing but information on “how to execute a particular test case”.
A configuration file contains a set of variables that are used in automation. A
configuration file is important for running the test cases under various execution conditions
and for various input and output conditions and states.
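For illustration, the variables in such a configuration file might look like the following Python sketch (all names and values are hypothetical):

# Hypothetical configuration variables for an automation run.
config = {
    "PRODUCT_BUILD": "5.2.1_build_0437",        # build under test
    "TEST_BED": "linux-x86_64",                 # execution condition
    "INPUT_DATA_DIR": "/testdata/unit5",        # input conditions
    "EXPECTED_RESULTS_DIR": "/expected/unit5",  # output conditions
    "REPEAT_COUNT": 3,                          # repetitions per test case
}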
A test framework is a module that combines “what to execute” and “how it has to
be executed.” It picks up the specific test cases that are automated from the TCDB,
picks up the scenarios, and executes them.
The test framework is considered the core of automation design. It subjects the test
cases to different scenarios. The test framework contains the main logic for interacting with,
initiating, and controlling all modules.
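A minimal sketch of such a framework loop, assuming hypothetical in-memory stand-ins for the TCDB and defect DB:

# Hypothetical stand-ins; a real TCDB and defect DB are external systems.
def automated_cases(tcdb):
    return [tc for tc in tcdb if tc["automated"]]

def run_framework(tcdb, scenarios, defect_db):
    # Combine "what to execute" (test cases) with "how" (scenarios).
    for tc in automated_cases(tcdb):
        for scenario in scenarios.get(tc["id"], []):
            passed = tc["run"](scenario)  # execute the test case under this scenario
            if not passed:
                # The framework submits defects automatically during execution.
                defect_db.append({"test": tc["id"], "scenario": scenario})

tcdb = [{"id": "TC-1", "automated": True, "run": lambda s: s != "low-memory"}]
scenarios = {"TC-1": ["default", "low-memory"]}
defects = []
run_framework(tcdb, scenarios, defects)
print(defects)  # [{'test': 'TC-1', 'scenario': 'low-memory'}]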
A setup for one test case may work negatively for another test case. Hence, it is
important not only to create the setup for a test case but also to clear or reset it once the
test case completes execution.
Requirement 5: Independent test cases
Each test case should be capable of being executed alone; there should be no dependency
between test cases, such as test case 2 having to be executed after test case 1, and so on. This
requirement enables the test engineer to select and execute any test case at random without
worrying about other dependencies.
Requirement 6: Test case dependency
Making a test case dependent on another makes it necessary for that particular test
case to be executed before or after the dependent test case whenever the latter is selected
for execution.
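A sketch of how a framework might honor such dependencies, assuming a hypothetical depends_on field on each test case:

def execute_with_dependencies(test_id, tests, executed=None):
    # Run any prerequisite test cases before the selected one.
    executed = executed if executed is not None else set()
    if test_id in executed:
        return
    for dep in tests[test_id].get("depends_on", []):
        execute_with_dependencies(dep, tests, executed)
    print(f"executing {test_id}")
    executed.add(test_id)

tests = {
    "TC-1": {},
    "TC-2": {"depends_on": ["TC-1"]},  # TC-2 requires TC-1 to run first
}
execute_with_dependencies("TC-2", tests)  # runs TC-1, then TC-2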
CHALLENGES IN AUTOMATION
Test automation presents some very unique challenges. The most important of
these challenges is management commitment.
Automation should not be viewed as a panacea for all problems nor should it
be perceived as a quick-fix solution for all the quality problems in a product.
The main challenge here is that, because of the heavy front-loading of test automation
costs, management starts to look for an early payback.
Successful test automation endeavors are characterized by unflinching
management commitment, a clear vision of the goals, and the ability to set
realistic short-term goals that track progress with respect to the long-term
vision.
Definition
Metrics are the means of measurement.
Metrics derive information from raw data with a view to helping in decision making.
Examples: number of defects, number of test cases, effort, schedule.
Metrics are needed to know test case execution productivity and to estimate the test
completion date.
Effort : actual time that is spent on a particular activity or a phase
Schedule : Elapsed days for a complete set of activities
Steps in a Metrics Program
WHY METRICS IN TESTING?
o To know the size of the product development team
o To prevent defects
o To minimize the cost of effort
o Test case execution productivity
o Test completion date
o Days needed for defect fixes
o Days needed to release
Total days needed for defect fixes = (outstanding defects yet to be fixed + defects that
can be found in future test cycles) / defect fixing capability

Days needed for release = Max(days needed for testing, days needed for defect fixes)
or
Days needed for release = Max(days needed for testing, (days needed for defect
fixes + days needed for regression fixes))
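A small sketch of these calculations in Python, with all numbers hypothetical:

def days_for_defect_fixes(outstanding, expected_future, fix_capability_per_day):
    # (outstanding defects yet to be fixed + defects expected in future cycles)
    # divided by the defect fixing capability (defects fixed per day).
    return (outstanding + expected_future) / fix_capability_per_day

def days_needed_for_release(days_testing, days_fixes, days_regression=0):
    return max(days_testing, days_fixes + days_regression)

fixes = days_for_defect_fixes(outstanding=40, expected_future=20, fix_capability_per_day=5)
print(days_needed_for_release(days_testing=10, days_fixes=fixes, days_regression=3))  # 15.0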
TYPES OF METRICS
Metrics can be classified into different types based on
1. what they measure
2. what area they focus on.
At a very high level, metrics can be classified as product metrics and process metrics.
Product metrics can be further classified as:
1. Project metrics A set of metrics that indicates how the project is planned and
executed.
2. Progress metrics A set of metrics that tracks how the different activities of the
project are progressing. The activities include both development activities and testing
activities. Progress metrics are monitored during the testing phases. Progress metrics
help in finding out the status of test activities, and they are also good indicators of
product quality. Progress metrics are further classified into
1) test defect metrics and
2) development defect metrics.
3. Productivity metrics A set of metrics that takes into account various productivity
numbers that can be collected and used for planning, estimating, and tracking testing
activities.
Types of metrics

Process metrics:
o Effort variance
o Schedule variance
o Effort distribution

Product metrics:
o Testing defect metrics:
  - Defect find rate
  - Defect fix rate
  - Outstanding defects rate
  - Priority outstanding rate
  - Defect trend
  - Defect classification trend
  - Weighted defects trend
  - Defect cause distribution
o Development defect metrics:
  - Component-wise defect distribution
  - Defect density and defect removal rate
  - Age analysis of outstanding defects
  - Introduced and reopened defects rate
o Productivity metrics:
  - Defects per 100 hours of testing
  - Test cases executed per 100 hours of testing
  - Test cases developed per 100 hours
  - Defects per 100 test cases
  - Defects per 100 failed test cases
  - Test phase effectiveness
  - Closed defects distribution
I) PROJECT METRICS
A typical project starts with requirements gathering and ends with product release.
All the phases that fall between these points need to be planned and tracked. In the
planning cycle, the scope of the project is finalized. The project scope gets translated into size
estimates, which specify the quantum of work to be done. The size estimate gets translated
into an effort estimate for each of the phases and activities by using the available
productivity data.
Baselined effort: the initial effort estimate.
Revised effort: as the project progresses, if the scope of the project changes, the
effort estimates are re-evaluated, and this re-evaluated effort estimate is called the
revised effort.
Two factors:
Effort: the actual time spent on a particular activity or phase.
Schedule: the elapsed days for a complete set of activities.
If the effort is tracked closely and met, the schedule can usually be met.
Even if the planned effort equals the actual effort, the project is not
considered successful if the schedule is not met.
The basic measurements are
1. the initial baselined effort and schedule;
2. the actual effort;
3. the revised estimate of effort and schedule.
Calculating effort variance for each of the phases provides a quantitative measure of
the relative difference between the revised and actual efforts.
[Figure: Effort variance chart. Person days (0-40) for the Req, Design, Coding, Testing, Doc, and Defect fixing phases, plotting the baselined estimate, revised estimate, and actual effort for each phase.]
All the baselined estimates, revised estimates, and actual effort are plotted together for
each of the phases, and the variance can be consolidated as shown in the chart above.
A variance of more than 5% in any of the SDLC phases indicates scope for
improvement in the estimation.
In the chart above, the variance is acceptable only for the coding and testing phases.
The variance can also be negative; a negative variance is an indication of an
overestimate.
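A hedged sketch of the variance calculation (assuming variance is expressed relative to the revised estimate, with the 5% threshold above; all person-day figures are hypothetical):

def effort_variance_pct(actual, revised):
    # Relative difference between actual and revised effort, in percent.
    # A negative variance indicates an overestimate.
    return (actual - revised) / revised * 100

phases = {"Req": (12, 11), "Design": (20, 21), "Coding": (30, 27)}  # (actual, revised) person days
for phase, (actual, revised) in phases.items():
    v = effort_variance_pct(actual, revised)
    flag = "needs attention" if abs(v) > 5 else "acceptable"
    print(f"{phase}: {v:+.1f}% ({flag})")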
2. Schedule Variance (Planned vs Actual)
Schedule variance is calculated for the overall project, at the end of every milestone, to find
out how well the project is doing with respect to the schedule.
To get a real picture of the schedule in the middle of project execution, it is important to
calculate the “remaining days yet to be spent” on the project and plot it along with the “actual
schedule spent,” as in the chart below. The “remaining days yet to be spent” can be calculated
by adding up the days for all remaining activities. If the remaining days yet to be spent on the
project are not calculated and plotted, the chart gives no value in the middle of the project,
because the deviation cannot be inferred visually from the chart. The remaining days in the
schedule become zero when the release is met.
[Figure: Schedule variance chart. Number of days (0-180) for the baselined, estimated, and actual/remaining schedules, with the estimated days spent (e.g., 110-136) and the remaining days (e.g., 56) stacked in each bar.]
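A minimal sketch of the “remaining days yet to be spent” calculation, with hypothetical activities and durations:

def remaining_days(remaining_activities):
    # Add up the estimated days for all activities not yet completed.
    return sum(remaining_activities.values())

activities_left = {"system testing": 30, "regression": 20, "documentation": 6}
spent = 110  # actual schedule days spent so far
print(f"spent: {spent} days, remaining: {remaining_days(activities_left)} days")  # remaining: 56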
Effort and schedule variance have to be analyzed in totality, not in isolation. This is
because while effort is a major driver of the cost, schedule determines how best a product
can exploit market opportunities. Variance can be classified into negative variance, zero
variance, acceptable variance, and unacceptable variance.
[Figure: Pie chart of effort distribution across phases: 23%, 22%, 18%, 17%, 15%, and 5%.]
Effort distribution: Req > Testing > Design > Bug fixing > Coding > Doc
Mature organizations spend at least 10-15% of their total effort in requirements and
approximately the same effort in the design phase. The effort percentage for testing depends
on the type of release and the amount of change to the existing code base and functionality.
Typically, organizations spend about 20-50% of their total effort on testing.
[Figure: Test progress chart. Stacked percentages (0-100%) of test cases in the Pass, Fail, Not run, and Blocked states over weeks 1-8.]
A scenario represented by such a progress chart shows that not only is testing
progressing well, but also that the product quality is improving. Had the chart shown a
trend in which, as the weeks progress, the “not run” cases do not reduce in number, the
“blocked” cases increase in number, or the “pass” cases do not increase, it would
clearly point to quality problems in the product that prevent the product from being ready
for release.
[Figure: Defect find rate plotted over time.]
b) Defect fix rate
If the goal of testing is to find defects as early as possible, it is natural to expect that the
goal of development should be to fix defects as soon as they arrive. There is a reason why
the defect fixing rate should match the defect arrival rate: if more defects are fixed later in
the cycle, they may not get tested properly for all possible side-effects.
Normally, only high-priority defects are tracked during the period closer to release. Some
high-priority defects may require a change in design or architecture and must be fixed
immediately.
e) Defect trend
The effectiveness of the analysis increases when several perspectives of find rate, fix rate,
outstanding defects, and priority outstanding defects are combined.
1. The find rate, fix rate, outstanding defects, and priority outstanding follow a bell
curve pattern, indicating readiness for release at the end of the 19th week.
2. A sudden downward movement, as well as an upward spike, in the defect fix rate needs
analysis (13th to 17th week in the chart above).
3. Looking at the priority outstanding rate, which shows close to zero defects in the 19th
week, it can be concluded that all outstanding defects are of low priority.
4. A smooth priority outstanding rate suggests that priority defects were closely tracked
and fixed.
Both “large defects” and a “large number of small defects” affect product release.
Defects per KLOC = (Total defects found in the product) / (Total executable lines
of code in KLOC)
A variant of this metric calculates it over the AMD (added, modified, deleted) code, to find
out how a release affects product quality:
Defects per KLOC (AMD) = (Total defects found in the product) / (Total executable AMD
lines of code in KLOC)
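A small sketch of the calculation, with hypothetical counts:

def defects_per_kloc(total_defects, executable_loc):
    # executable_loc is in lines; divide by 1000 to get KLOC.
    return total_defects / (executable_loc / 1000)

print(defects_per_kloc(total_defects=120, executable_loc=80_000))  # whole product: 1.5
print(defects_per_kloc(total_defects=30, executable_loc=12_000))   # AMD variant, over added/modified/deleted code: 2.5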
Testing is not meant to find the same defects again; release readiness should
consider the quality of defect fixes.
Defects per 100 hours of testing = (Total defects found in the product for a
period / Total hours spent to get those defects) * 100
Test cases executed per 100 hours of testing = (Total test cases executed for a period
/ Total hours spent in test execution) * 100
Test cases developed per 100 hours of testing = (Total test cases developed for a
period / Total hours spent in test case development) * 100
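All three metrics are the same normalization: a raw count scaled to 100 hours of effort. A sketch with hypothetical numbers:

def per_100_hours(count, hours):
    # Normalize a raw count by effort: count per 100 hours.
    return count / hours * 100

print(per_100_hours(count=45, hours=300))   # defects per 100 hours of testing: 15.0
print(per_100_hours(count=600, hours=300))  # test cases executed per 100 hours: 200.0
print(per_100_hours(count=90, hours=300))   # test cases developed per 100 hours: 30.0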
[Figure: Test phase effectiveness chart showing the proportion of defects found in each test phase, e.g., unit testing (UT) 39% and component testing (CT) 32%.]
The closed defect distribution helps in this analysis, as shown in the figure below. From the
chart, the following observations can be made.
1. Only 28% of the defects found by the test team were fixed in the product. This suggests
that product quality needs improvement before release.
2. Of the defects filed, 19% were duplicates. This suggests that the test team needs to
update itself on existing defects before new defects are filed.
3. Non-reproducible defects amounted to 11%. This means that the product has some
random defects or the defects are not filed with reproducible test cases. This area
needs further analysis.
4. Close to 40% of the defects were not fixed, for the reasons “as per design,” “will not fix,” and
“next release.” These defects may impact the customers. They need further
discussion, and defect fixes need to be provided to improve release quality.