III B.tech II Sem STM
Course Objectives:
1. Understand the basic concepts of software testing.
2. Understand the various techniques and strategies of software testing and
inspection, and point out the importance of testing in achieving high-
quality software.
3. Perform effective and efficient structural testing of software.
4. Integrate and test the various units and components of a software
system.
5. Perform effective and efficient functional testing of software.
6. Select the appropriate tests to regression test your software after changes
have been made.
7. Plan, track and control the software testing effort.
8. Understand the need for automated testing tools and the various kinds of
automated testing tools.
Unit – I
In the 1990s, testing tools finally came into their own. There was a flood of
various tools, which are absolutely vital to adequate testing of software
systems.
Goals of software testing:
a. Reliability
b. Quality
c. Customer satisfaction
d. Risk management
Post-implementation goals:
a. Reduced maintenance cost
b. Improved testing process
Definitions
1. Failure - when the software is tested, failure is the first term used. It
means the inability of a system or component to perform a required
function according to its specification. In other words, a failure exists
when the results or behavior of the system under test differ from the
specified expectations.
2. Fault/defect/bug - failure describes a problem on the output side of a
system, whereas a fault is the condition that actually causes the system
to produce a failure. Fault is synonymous with the words defect and bug.
It can therefore be said that failures are manifestations of bugs.
One failure may be due to one or more bugs, and one bug may cause one
or more failures. Some bugs are hidden in the sense that they are never
executed, so hidden bugs may not always produce failures; they may
execute only under certain rare conditions.
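As an illustration (a hypothetical Python sketch, not taken from the text), the
fault below is a wrong comparison operator; the failure is observed only for an
input that actually exercises the faulty condition:

    def is_passing(score):
        # Specification (assumed for this example): a score of 40 or more is a pass.
        return score > 40          # fault: should be "score >= 40"

    print(is_passing(75))          # True  - agrees with the specification (no failure)
    print(is_passing(10))          # False - agrees with the specification (no failure)
    print(is_passing(40))          # False - failure: the specification expects True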
When a failure is observed, the corresponding bug is classified, isolated and
then resolved.
Errors made in each phase, if they are not validated there, propagate to the
next phase.
1. Bugs-In Phase
2. Bugs-Out Phase
1. Bugs-In Phase
In this phase, a mistake made by the developer becomes a bug. If
verification is not done in the early phases, the bug propagates through
all the later phases and is added to the bugs of the current phase, which
makes it more difficult to deal with.
So verification is needed at the earlier phases.
2. Bugs-Out Phase
This phase deals with how the failures caused by bugs are overcome.
The activities that take place in this phase are:
a. Bug classification - classifying the bugs according to their nature; a bug
may be critical or catastrophic, or it may have no adverse effect.
Classifying the bugs helps us decide which should be solved first and
which can be postponed.
b. Bug isolation - locating the module in which the bug resides. This is done
by observing the incidents and tracing back through the design to reach
the faulty module; this is called bug isolation.
States of a Bug
The states are:
Test - the bug has been fixed by the development team and released to the
testing team for testing.
Verified/Fixed - when the developer reports that the bug is fixed, the tester
verifies whether the reported bug is really fixed; if it is, the state is Verified.
Reopened - if the bug still exists after it was fixed, the state is Reopened.
Closed - the bug is completely eliminated and confirmation has been received
from the tester and the other team members.
The bug life cycle therefore moves through the states Open, Assign, Deferred,
Rejected, Test, Verified, Reopened and Closed.
To err is human
Mistakes made by the development team add to the bugs of the current phase;
some of the reasons are given below:
A change in one module affects the other modules connected to it.
There is no guarantee that all the bugs are eliminated after testing.
If bugs are not verified at the earlier stages they propagate to the next phases.
A bug eliminated at an early stage costs very little; if it is not eliminated, the
cost of removing it grows with each further phase of the SDLC.
Critical bugs - this kind of bug stops or hangs the normal functioning of the
system. This is the worst situation; the user of the software becomes helpless.
Major bugs - this kind of bug does not stop the system, but it causes the
affected function to fail.
Minor bugs - these are mild bugs which do not affect the expected behavior.
Design Bugs
Design bugs are due to design mistakes and to bugs carried over from the
previous phase, for example a path that is not reachable or a path through the
flow that is missing.
Logic Bugs
Processing Bugs
Typical examples are uninitialized data, data initialized in the wrong format,
data initialized but not used, data used without initialization, and data
redefined without an intermediate use.
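A small hypothetical Python sketch (not from the text) showing how some of
these anomalies look in code:

    def demo(values):
        unused = 0               # initialized but never used afterwards
        total = 0                # defined
        total = 1                # redefined without an intermediate use
        for v in values:
            total = total + v    # used in a calculation, then redefined
        # print(grand_total)     # if uncommented: data used without initialization
        return total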
Testing Principles
These are
The tester's mind always tries to find more and more bugs in the program;
although this approach is destructive towards the program, through testing it
becomes constructive.
If modules are interconnected with other modules, the tester should first
concentrate on the module having the largest number of bugs; after that
module has been completely handled, move to the module with the next
largest number of bugs, and finally to the module with the fewest bugs.
6. The testing strategy should start at the smallest module level and expand
towards the whole program.
Testing must start at the unit level, then move to the integration level and
finally to the system level.
If the developers of a program are asked to test it, they are reluctant to find
bugs in the code they themselves wrote, so effective testing is not done.
Programmers have a positive feeling towards what they have developed,
whereas the tester's mindset must always be negative (looking for failures).
If a similar kind of project comes in for testing, it becomes easy for the tester
to classify which bugs have high priority and to judge which modules are
likely to contain the largest number of bugs.
The functionality must also be tested with invalid inputs, because invalid
inputs are likely to expose unexpected behavior.
If the tester participates in the specification and design reviews, he can easily
grasp where effective testing is needed.
Test planning
Test design
Test execution
Post-Execution/Test
review
Test Execution
In this phase all test cases are executed and test results are documented
in the test incident reports, test logs, testing status, and test summary
reports.
Testing levels versus responsibility
Test execution level    Responsibility
Unit                    Developers of the module
Integration             Testers and developers
System                  Testers, developers, end-users
Acceptance              Testers, end-users
The SDLC phases, starting from the end-user, are: requirement gathering,
requirement specification, functional design (HLD), low-level design (LLD) and
coding.
Coding
After preparing the LLD it is easy to write code, because the design shows the
way the code should be written.
Once coding is over, validation starts.
Verification
Verification is a set of activities that ensure correct implementation of specific
functions in software.
Why is verification needed?
If verification is not done at the earlier stages, bugs propagate to the next
phases and keep multiplying; they then become too difficult to remove and the
cost increases.
Verification also improves quality.
Everything must be verified
Here 'everything' means all the SDLC phases and their products.
Result verification may not be binary
A result may not simply be accepted or rejected; there is a deeper procedure
for deciding what to do with whatever result is obtained.
Even implicit qualities must be verified
Qualities that are not mentioned anywhere explicitly must also be verified.
Verification activities
There are four verification activities:
Verification of requirements and objectives
Verification of high level design
Verification of low level design
Verification of coding
Verification of requirements
First the tester prepares the acceptance criteria. Two parallel activities are
performed:
1. The acceptance criteria are checked for completeness, clarity and
testability.
2. The tester prepares the acceptance test plan.
Verification of objectives
In this activity too, the tester performs two parallel activities.
Unit verification
Verification of code cannot be done for the whole system at once; unit
verification is done by the corresponding module developer.
Here are some of the points to consider while doing unit verification:
Interfaces are verified for the information flows between modules.
Local data structures need to be verified.
Boundary conditions are checked.
Verification is done in such a way that all statements in a module are
executed at least once.
All error-handling paths are tested.
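A hedged sketch of this kind of unit verification using Python's unittest
module (the function and names are hypothetical): the tests exercise a normal
value, a boundary condition and the error-handling path.

    import unittest

    def safe_divide(a, b):
        # Hypothetical module under unit verification.
        if b == 0:
            raise ValueError("division by zero")
        return a / b

    class SafeDivideTest(unittest.TestCase):
        def test_normal_value(self):
            self.assertEqual(safe_divide(10, 2), 5)

        def test_boundary_condition(self):
            # boundary: smallest legal divisor
            self.assertEqual(safe_divide(5, 1), 5)

        def test_error_handling_path(self):
            # the error-handling path must be executed at least once
            with self.assertRaises(ValueError):
                safe_divide(1, 0)

    if __name__ == "__main__":
        unittest.main()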
Validation
Validation is a set of activities that ensure the software under
consideration has been built right and is traceable to customer
requirements.
Validation is performed after coding is over.
It determines whether the product satisfies the user's requirements, as stated
in the requirement specification.
It checks that the product's actual behavior matches the desired behavior.
All the stages up to coding should already be bug-free.
Validation testing provides the last chance to discover bugs.
Validation activities
Validation activities are divided into two parts
1) Validation test plan
2) Validation test execution
Validation test plan
To prepare the validation test plan the tester follows these points:
Testers must understand the current SDLC phase.
Testers study the relevant documents of the corresponding SDLC
phase.
Based on this information the tester prepares the test plans which are
used at the time of validation testing.
The test plans the tester prepares, and when each one is used:
Acceptance test plan - acceptance criteria are prepared in the requirement
phase based on user feedback, and the acceptance test plan is prepared
from those criteria. It is used at the time of acceptance testing.
System test plan - prepared to verify the objectives specified in the SRS; it
describes how the entire system behaves under different conditions.
This plan is used at the time of system testing.
Function test plan - prepared in the HLD phase; in it, test cases are
designed to test every functionality. This plan is used at the time of
functional testing.
Integration test plan - prepared to validate all the integrated modules and
to confirm that together they form the whole system. This plan is used
at the time of integration testing.
• Tutorial Questions:
How can you fill the gap between industry and academia?
• Assignment questions:
What psychology should a tester have?
Explain the Software Testing Life Cycle (STLC).
What are the states of a bug?
Explain the Bugs-In and Bugs-Out phases.
Online resources:
• www.csi-india.org
• www.infibeam.com
• www.manit.ac.in
Unit – II
• Objective of the Unit (Two to Three lines):
• Students know the purpose of testing.
• Understand the differences between tester and debugger, functional
versus structural testing, builder versus buyer, etc.; understand the
levels of testing and know the classification of bugs.
Notes: Testing consumes at least half of the time and work required to
produce a functional program.
History says that even well written programs still have 1-3 bugs per
hundred statements.
DICHOTOMIES:
Testing: testing is a demonstration of error or apparent correctness.
Debugging: debugging is a deductive process.
ENVIRONMENT:
PROGRAM:
If the simple view of the program is insufficient, we may have to include more
facts and details; and if that fails, we may have to modify the program.
BUGS:
TESTS
CONSEQUENCES OF BUGS:
1. Frequency: How often does that kind of bug occur? Pay more
attention to the more frequent bug types.
2. Correction Cost: What does it cost to correct the bug after it
is found? The cost is the sum of 2 factors: (1) the cost of
discovery and (2) the cost of correction.
TAXONOMY OF BUGS:
Control and sequence bugs include paths left out, unreachable code,
improper nesting of loops, incorrect loop-back or loop-termination
criteria, missing process steps, duplicated processing, unnecessary
processing, rampaging GOTOs, ill-conceived (not properly planned)
switches, spaghetti code and, worst of all, pachinko code.
One reason control-flow bugs are comparatively easy to deal with is that
this area is amenable (open) to theoretical treatment.
Most of the control flow bugs are easily tested and caught in unit
testing.
Another source of control-flow bugs is old code; ALP (assembly language)
and COBOL code in particular are dominated by control-flow bugs.
Logic Bugs:
If the bugs are part of logical (i.e. Boolean) processing not related to
control flow, they are characterized as processing bugs.
Processing Bugs:
Initialization Bugs:
DATA BUGS:
Data bugs include all bugs that arise from the specification of data
objects, their formats, the number of such objects, and their initial
values.
Data Bugs are at least as common as bugs in code, but they are often
treated as if they did not exist at all.
Static Data are fixed in form and content. They appear in the source
code or database directly or indirectly, for example a number, a
string of characters, or a bit pattern.
Compile time processing will solve the bugs caused by static data.
CODING BUGS:
Coding errors of all kinds can create any of the other
kind of bugs.
External Interfaces:
Internal Interfaces:
Hardware Architecture:
This approach may not eliminate the bugs but at least will localize
them and make testing easier.
Software Architecture:
Integration Bugs:
Integration bugs are bugs having to do with the integration of, and
with the interfaces between, working and tested components.
System Bugs:
Test Debugging: The first remedy for test bugs is testing and debugging the
tests. Test debugging, compared with program debugging, is easier because
tests, when properly designed, are simpler than programs.
• Tutorial Questions:
• Write the differences between Debugging and Testing?
• Write the differences between Functional versus Structural?
• Assignment questions:
• Explain the Data bugs, coding bugs and Resource management Problems
with examples?
• Online resources:
• www.researchgate.com
• www.cs.drexel
Unit – III
The name of a path is the name of the nodes along the path.
FUNDAMENTAL PATH SELECTION CRITERIA:
0. There are many paths between the entry and exit of a
typical routine.
1. Every decision doubles the number of potential paths.
And every loop multiplies the number of potential paths
by the number of different iteration values possible for
the loop.
Defining complete testing:
1. Exercise every path from entry to exit.
2. Exercise every statement or instruction at least once.
3. Exercise every branch and case statement, in each
direction, at least once.
If prescription 1 is followed then 2 and 3 are
automatically followed. But it is impractical for most
routines. It can be done for routines that have no
loops, in which case it is equivalent to prescriptions 2 and 3.
Execute all statements in the program at least once under some test.
If we do enough tests to achieve this, we are said to have achieved
100% statement coverage.
This is the weakest criterion in the family: testing less than this for
new software is unconscionable (unprincipled or cannot be accepted)
and should be criminalized.
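A minimal hypothetical routine showing the difference between statement
coverage and branch coverage:

    def absolute(x):
        if x < 0:          # the only decision
            x = -x
        return x

    # Statement coverage: the single test below executes every statement,
    # but the decision's "false" direction (x >= 0) is never taken.
    print(absolute(-3))

    # Branch coverage additionally needs the false direction of the decision:
    print(absolute(4))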
Practical Suggestions in Path Testing:
0. Draw the control flow graph on a single
sheet of paper.
1. Make several copies - as many as you will
need for coverage (C1+C2) and several more.
2. Use a yellow highlighting marker to trace
paths. Copy the paths onto a master sheet.
3. Continue tracing paths until all lines on the
master sheet are covered, indicating that
you appear to have achieved C1+C2.
LOOPS:
For example, if x1 and x2 are inputs, the predicate might be x1 + x2 >= 7.
Given the values of x1 and x2, the direction taken through the decision
based on this predicate is determined at input time and does not depend
on processing.
TESTING BLINDNESS:
Testing Blindness is a pathological (harmful)
situation in which the desired path is achieved for
the wrong reason.
There are three types of Testing Blindness:
Assignment Blindness:
Equality Blindness:
Self Blindness:
PATH SENSITIZING:
For any path in that set, interpret the predicates along the path as
needed to express them in terms of the input vector. In general
ADFGHIJKL + AEFGHIJKL + BCDFGHIJKL + BCEFGHIJKL
Each product term denotes a set of inequalities that if solved will yield
an input vector that will drive the routine along the designated path.
Solve any one of the inequality sets for the chosen path and you have
found a set of input values for the path.
If you can't find a solution to any of the sets of inequalities, the path is
unachievable.
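A hypothetical sketch of sensitizing one path: interpret the predicates along
the chosen path in terms of the inputs and pick any values that satisfy the
whole inequality set (the routine and numbers below are assumptions for
illustration).

    def route(x1, x2):
        if x1 + x2 >= 7:        # predicate P1
            y = x1 * 2
        else:
            y = x2
        if x1 < 5:              # predicate P2
            y = y + 1
        return y

    # To sensitize the path "P1 true, then P2 true" we need inputs satisfying
    # the inequality set { x1 + x2 >= 7, x1 < 5 }.  One solution is x1 = 4, x2 = 3.
    print(route(4, 3))          # drives the P1-true, P2-true path: 4*2 + 1 = 9

    # A path whose inequality set has no solution (e.g. requiring x1 >= 5 and
    # x1 < 5 at the same time) is unachievable.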
If every case produces the same outcome (here the outcome is two for every
case), the intention of a case statement, that each case should give a different
outcome, is defeated, and the outcome alone cannot tell us which path was
actually taken. To overcome this we move to path instrumentation.
Path instrumentation is achieved by tracing the path and recording the
intermediate values.
Instrument the links so that the link's name is recorded when the link is
executed.
Link markers are also added at the beginning and at the end of the path; the
path actually taken can then be identified correctly.
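A minimal sketch of link-marker instrumentation (names are hypothetical):
every link appends its name to a trace, so after the run the path actually
taken can be compared with the intended path.

    trace = []                      # records the names of the links executed

    def mark(link_name):
        trace.append(link_name)     # link marker: "I am at link <name>"

    def routine(x):
        mark("a")
        if x > 0:
            mark("b")
            result = x * 2
        else:
            mark("c")
            result = -x
        mark("d")
        return result

    routine(3)
    print(trace)                    # ['a', 'b', 'd'] - confirms which path was taken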
The objective of software rehosting is to change the environment and not the
rehosted software.
• Tutorial Questions:
• Write about Path Instrumentation Techniques?
• What is Path sensitization and explain it?
• Assignment questions:
• What is a predicate, and how are predicates helpful in achieving C1+C2?
• Write a program with at least 4 decisions and determine whether or not it
is fully covered by your tests.
• Online resources:
• www.mcr.org.in
• www.nri.edu.in
• www.ece.uc.edu
Unit-IV
Objective of the unit:
Students understand the possible states of variables and the reasons why
anomalies occur, so that they can recognize them and write programs free of
such bugs.
Topic: Data-Flow Testing
Motivation:
At least half of contemporary source code consists of data declaration
statements-that is, statements that define data structure, individual
objects, initial or default values and attributes.
o We will use a control graph to show what happens to data
objects of interest at that moment.
o Our objective is to expose deviations between the data flows
we have and the data flows we want.
o Data Object State and Usage:
Data Objects can be created, killed and used.
They can be used in two distinct ways: (1) In a
Calculation (2) As a part of a Control Flow Predicate.
The following symbols denote these possibilities:
1. Defined: d - defined, created, initialized etc
2. Killed or undefined: k - killed, undefined,
released etc
3. Usage: u - used for something (c - used in
Calculations, p - used in a predicate)
1. Defined (d):
An object is defined explicitly when it
appears in a data declaration.
Or implicitly when it appears on the left
hand side of the assignment.
It is also used to mean that a file has
been opened.
A dynamically allocated object has been
allocated.
Something is pushed onto the stack.
A record has been written.
2. Killed or Undefined (k):
An object is killed or undefined when it is
released or otherwise made unavailable.
When its contents are no longer known with
certitude (with absolute certainty /
perfectness).
Release of dynamically allocated objects
back to the availability pool.
Return of records.
The old top of the stack after it is popped.
This graph has three normal and three anomalous states, and it considers
the kk sequence not to be anomalous. The difference between this state graph
and the previous one is that redemption is possible: a proper action from any
of the three anomalous states returns the variable to a useful working state.
The out link of simple statements (statements with only one out link)
is weighted by the proper sequence of data-flow actions for that
statement. Note that the sequence can consist of more than one letter
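A small hypothetical fragment annotated with the data-flow actions for the
variable total; note that a single statement can carry more than one letter
(for example cd: a computational use followed by a definition).

    def total_price(prices, limit):
        total = 0                      # d  : total defined
        for p in prices:
            total = total + p          # cd : used in a calculation, then redefined
        if total > limit:              # p  : used in a predicate
            total = limit              # d  : redefined
        print(total)                   # c  : computational use
        del total                      # k  : killed / made undefined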
TERMINOLOGY:
The all du-paths strategy (ADUP) requires that every du path from every
definition of every variable to every use of that definition be exercised under
some test.
All Uses Strategy (AU):The all uses strategy is that at least one definition
clear path from every definition of every variable to every use of that
definition be exercised under some test. Just as we reduced our ambitions
by stepping down from all paths (P) to branch coverage (C2), say, we can
reduce the number of test cases by asking that the test set should include
at least one path segment from every definition to every use that can be
reached by that definition.
All p-uses/some c-uses strategy (APU+C) : For every variable and every
definition of that variable, include at least one definition free path from
the definition to every predicate use; if there are definitions of the
variables that are not covered by the above prescription, then add
computational use test cases as required to cover every definition.
All Definitions Strategy (AD) : The all definitions strategy asks only
every definition of every variable be covered by at least one use of that
variable, be that use a computational use or a predicate use.
o The right-hand side of this graph, along the path from "all
paths" to "all statements", is the more interesting hierarchy for
practical applications.
o Note that although ACU+P is stronger than ACU, both are
incomparable to the predicate-biased strategies. Note also
that "all definitions" is not comparable to ACU or APU.
SLICING AND DICING:
o A (static) program slice is a part of a program (e.g., a selected
set of statements) defined with respect to a given variable X
(where X is a simple variable or a data vector) and a
statement i: it is the set of all statements that could
(potentially, under static analysis) affect the value of X at
statement i - where the influence of a faulty statement could
result from an improper computational use or predicate use
of some other variables at prior statements.
o If X is incorrect at statement i, it follows that the bug must be
in the program slice for X with respect to i
o A program dice is a part of a slice in which all statements
which are known to be correct have been removed.
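A hypothetical illustration of a slice and a dice: for the variable z at the final
print statement, only the statements that can affect z belong to the slice;
removing the statements already known to be correct leaves the dice in which
the bug must sit.

    def compute(a, b):
        x = a + 1          # (1) affects x          -> in the slice for z
        y = b * 2          # (2) affects y only     -> not in the slice for z
        z = x * 3          # (3) defines z from x   -> in the slice for z
        print(y)           # (4) output of y        -> not in the slice for z
        print(z)           # (5) statement i: suppose z is wrong here
        # Static slice of z at (5): statements (1) and (3) plus the input a.
        # If (1) has already been shown correct, the dice is just statement (3).

    compute(2, 5)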
• Tutorial Questions:
• Write the differences between forgiving and unforgiving data flow state
graph?
• In what situations do we go for dynamic analysis?
• Assignment questions:
What is Data flow model? Explain with an example?
• Exercise questions / Long answer questions / Project possibilities
Explain all the data-flow testing strategies in order of their strength, each
with an example.
• Teacher observations (if any): If students know these concepts, simple
errors such as declaring a variable twice, missing declarations, or using
one variable for another purpose are all eliminated from their
programming.
Online resources:
www.worldscientific.com
www.eecs.yorku.ca
Unit-V
PATH EXPRESSION:
o Consider a pair of nodes in a graph and the set of paths
between those nodes.
PATH PRODUCTS:
o The name of a path that consists of two successive path
segments is conveniently expressed by the concatenation or
Path Product of the segment names.
o For example, if X and Y are defined as X=abcde,Y=fghij,then
the path corresponding to X followed by Y is denoted by
XY=abcdefghij
PATH SUMS:
o The "+" sign was used to denote the fact that path names
were part of the same set of paths.
o The "PATH SUM" denotes paths in parallel between nodes.
LOOPS:
o In the first way, we remove the self-loop and then multiply all
outgoing links by Z*.
o In the second way, we split the node into two equivalent
nodes, call them A and A' and put in a link between them
whose path expression is Z*. Then we remove node A' using
steps 4 and 5 to yield outgoing links whose path expressions
are Z*X and Z*Y.
A REDUCTION PROCEDURE - EXAMPLE:
o PARALLEL TERM STEP:
Removal of node 8 above led to a pair of parallel links between
nodes 4 and 5. Combine them into an equivalent link whose
path expression is c+gkh.
o LOOP TERM STEP:
Removing node 4 leads to a loop term. The graph has now
been replaced with the following equivalent simpler graph:
o Continue the process by applying the loop-removal step as
follows:
a(bgjf)*b(c+gkh)d((ilhd)*imf(bjgf)*b(c+gkh)d)*(ilhd)*e
APPLICATIONS
o Also mark each loop with the maximum number of times that
loop can be taken. If the answer is infinite, you might as well
stop the analysis because it is clear that the maximum
number of paths will be infinite.
o There are three cases of interest: parallel links, serial links,
and loops.
o EXAMPLE:
The following is a reasonably well-structured
program.
Suppose, as in this example, that the inner loop (fi) can be taken zero to
three times and the outer loop is taken exactly four times. The inner loop
then contributes
1^3 = 1^0 + 1^1 + 1^2 + 1^3 = 1 + 1 + 1 + 1 = 4
alternatives, and the routine as a whole has 2 x 8^4 x 4 = 32,768 paths.
Alternatively, you could have substituted a "1" for each link in the
path expression and then simplified, as follows:
a(b+c)d{e(fi)*fgj(m+l)k}*e(fi)*fgh
= 1(1 + 1)1(1(1 x 1)^3 1 x 1 x 1(1 + 1)1)^4 1(1 x 1)^3 1 x 1 x 1
= 2(1^3 x (2))^4 x (1^3)
= 2(4 x 2)^4 x 4
= 2 x 8^4 x 4 = 32,768
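A hedged Python sketch of the same arithmetic (assuming, as above, that the
inner loop runs 0 to 3 times and the outer loop exactly 4 times): every link
counts as 1, parallel alternatives add, serial links multiply, and a loop taken
at most n times contributes the sum of the powers 0..n.

    def loop_up_to(term, n):
        # loop taken 0..n times: sum of the term's powers 0..n
        return sum(term ** i for i in range(n + 1))

    link  = 1                                   # every individual link counts as 1
    inner = loop_up_to(link * link, 3)          # (fi)* taken 0..3 times -> 4
    body  = link * inner * link * link * link * (link + link) * link   # e(fi)*fgj(m+l)k -> 8
    outer = body ** 4                           # outer loop taken exactly 4 times -> 4096
    head  = link * (link + link) * link         # a(b+c)d -> 2
    tail  = link * inner * link * link * link   # e(fi)*fgh -> 4
    print(head * outer * tail)                  # 2 * 4096 * 4 = 32768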
EXAMPLE:
Path selection should be biased toward the low - rather than the
high-probability paths.
This raises an interesting question:
o EXAMPLE:
Here is a complicated bit of logic. We want to know
the probability associated with cases A, B, and C.
CASE B:
MEAN PROCESSING TIME OF A ROUTINE:
o Given the execution time of all statements or instructions for
every link in a flowgraph and the probability of each direction
for all decisions, the problem is to find the mean processing
time for the routine as a whole.
o The model has two weights associated with every link: the
processing time for that link, denoted by T, and the
probability of that link P.
o The arithmetic rules for calculating the mean time:
o EXAMPLE:
Start with the original flow graph annotated with
probabilities and processing time.
Combine the parallel links of the outer loop. The
result is just the mean of the processing times for the
links, because there aren't any other links leaving
those nodes.
Combine as many serial links as you can.
Use the cross-term step to eliminate a node and to
create the inner self - loop.
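A hedged sketch of the usual weight arithmetic (standard probability
reasoning; the rules and numbers here are assumptions, not taken verbatim
from the text): serial links multiply probabilities and add times, parallel links
form a probability-weighted average of the times, and a self-loop with looping
probability p adds p/(1-p) extra traversals on average.

    # Each link carries a (probability, mean processing time) pair.
    def serial(link1, link2):
        p1, t1 = link1
        p2, t2 = link2
        return (p1 * p2, t1 + t2)

    def parallel(link1, link2):
        p1, t1 = link1
        p2, t2 = link2
        p = p1 + p2
        return (p, (p1 * t1 + p2 * t2) / p)       # weighted mean of the two times

    def with_self_loop(loop_link, base_time):
        p_loop, t_loop = loop_link
        mean_iterations = p_loop / (1.0 - p_loop)  # geometric expectation
        return base_time + mean_iterations * t_loop

    # Hypothetical numbers: a decision taken with probability 0.8 / 0.2,
    # followed by a self-loop that repeats with probability 0.25.
    branch = parallel((0.8, 10.0), (0.2, 40.0))    # -> probability 1.0, mean time 16.0
    print(with_self_loop((0.25, 5.0), branch[1]))  # 16.0 + (1/3) * 5, about 17.67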
LIMITATIONS AND SOLUTIONS:
o The main limitation to these applications is the problem of
unachievable paths.
THE PROBLEM:
o The generic flow-anomaly detection problem (note: not just
data-flow anomalies, but any flow anomaly) is that of looking
for a specific sequence of operations, considering all possible
paths through a routine.
o Let the operations be SET and RESET, denoted by s and r
respectively, and suppose we want to know if there is a SET
followed immediately by a SET, or a RESET followed
immediately by a RESET (an ss or an rr sequence).
o Some more application examples (a small detection sketch for the
first one follows this list):
1. A file can be opened (o), closed (c), read (r), or written
(w). If the file is read or written to after it's been
closed, the sequence is nonsensical. Therefore, cr
and cw are anomalous. Similarly, if the file is read
before it's been written, just after opening, we may
have a bug. Therefore, or is also anomalous.
Furthermore, oo and cc, though not actual bugs, are
a waste of time and therefore should also be
examined.
2. A tape transport can do a rewind (d), fast-forward (f),
read (r), write (w), stop (p), and skip (k). There are
rules concerning the use of the transport; for
example, you cannot go from rewind to fast-forward
without an intervening stop or from rewind or fast-
forward to read or write without an intervening stop.
The following sequences are anomalous: df, dr, dw,
fd, and fr. Does the flowgraph lead to anomalous
sequences on any path? If so, what sequences and
under what circumstances?
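A minimal hypothetical sketch that scans the operation string collected along
one path of the file example and flags the anomalous two-letter pairs:

    # Anomalous two-operation sequences for the file example:
    # cr, cw - read/write after close; or - read just after open (nothing written yet);
    # oo, cc - double open / double close (wasteful rather than wrong).
    ANOMALOUS = {"cr", "cw", "or", "oo", "cc"}

    def find_anomalies(ops):
        # ops is a string of operations along one path, e.g. "owrrc"
        return [(i, ops[i:i + 2]) for i in range(len(ops) - 1)
                if ops[i:i + 2] in ANOMALOUS]

    print(find_anomalies("owrrc"))    # []                    - this path is clean
    print(find_anomalies("orwcr"))    # [(0, 'or'), (3, 'cr')] - two anomalies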
Motivation:
One solution to this problem is to represent the graphs as a matrix and to use
matrix operations equivalent to path tracing
Basic principle
A Graph matrix is a square array with one row and column for every node in
the graph. Each row column combination shows a relation.
Antisymmetric relations
Equivalence relations
A maximum element a is one for which the relation xRa does not
hold for any other element x.
A minimum element a is one for which the relation aRx does not hold
for any other element x.
a. Principle
The square of the matrix is obtained by replacing every entry with
aij = Σ aik akj, where the sum runs over k = 1 to n.
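A small hypothetical sketch of this principle: squaring the connection matrix
counts the two-link paths between every pair of nodes.

    def matrix_square(a):
        n = len(a)
        return [[sum(a[i][k] * a[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]

    # Connection matrix of a 3-node graph with links 1->2, 2->3 and 1->3.
    A = [[0, 1, 1],
         [0, 0, 1],
         [0, 0, 0]]
    print(matrix_square(A))   # entry [0][2] == 1: one two-link path from node 1 to node 3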
Loops
Partitioning algorithm
Partition the graph by grouping nodes in such a way that every loop is
contained within one group or another. Such a partitioned graph is partly
ordered. The partition is obtained from
(A + I)^n # (A + I)^nT
where # denotes the element-by-element intersection of the reachability
matrix with its transpose.
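A hedged Python sketch of this idea (the graph below is an assumption,
chosen so that its loops are 2 <-> 7 and 3 -> 4 -> 5 -> 3): compute the boolean
reachability matrix (A + I)^n, intersect it with its transpose, and read off the
groups of nodes that reach each other.

    def reachability(adj):
        # transitive closure of A + I (Warshall's algorithm)
        n = len(adj)
        r = [[adj[i][j] or (i == j) for j in range(n)] for i in range(n)]
        for k in range(n):
            for i in range(n):
                for j in range(n):
                    r[i][j] = r[i][j] or (r[i][k] and r[k][j])
        return r

    def partition(adj):
        r = reachability(adj)
        n = len(adj)
        both = [[r[i][j] and r[j][i] for j in range(n)] for i in range(n)]  # intersect with transpose
        classes, seen = [], set()
        for i in range(n):
            if i not in seen:
                group = [j + 1 for j in range(n) if both[i][j]]
                classes.append(group)
                seen.update(j - 1 for j in group)
        return classes

    edges = [(1, 2), (2, 7), (7, 2), (2, 3), (3, 4), (4, 5), (5, 3), (5, 6), (6, 8)]
    adj = [[False] * 8 for _ in range(8)]
    for a, b in edges:
        adj[a - 1][b - 1] = True
    print(partition(adj))   # [[1], [2, 7], [3, 4, 5], [6], [8]]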
(The worked example computes the reachability matrix (A + I)^n for an
eight-node graph and intersects it with its transpose; the matrices themselves
are shown as figures in the original and are not reproduced here.) The
resulting equivalence classes are
A= [1]
B= [2,7]
C= [3,4,5]
D= [6]
E=[8]
whose graph is the reduced graph on the equivalence-class nodes A, B, C, D
and E (figure not reproduced).
Node-Reduction Algorithm
Steps for node reduction algorithm
1. Select a node for removal; replace the node by equivalent links that bypass
that node and add those links to the links they parallel.
2. Combine the parallel terms and simplify as you go.
3. Observe loop terms and adjust the out-links of every node that had a
self-loop to account for the effect of the loop.
4. The result is a matrix whose size has been reduced by 1. Continue until
only the two nodes of interest exist.
(The worked example applies the node-reduction algorithm in matrix form;
the successive matrices, whose entries involve the terms G, R, G + R, R+
(because R*R = R+), GR+, G^2R+, G^3R+ and finally G(G + R)G^3R+, are not
reproduced here.)
• Tutorial Questions:
• Explain node reduction algorithm?
• Assignment questions:
• How can you calculate the probability and maximum path count
arithmetic for a routine with an example?
• Online resources:
• www.analytictech.com
• www.theoinf.uni-bayreuth.de
Unit VI
Notes: Introduction
Definition of automated testing - automating human testing activities in
order to validate the application is called automation testing.
Automated testing is performed using scripting languages or any one of the
third-party automation tools such as QTP, Selenium, WinRunner, Silk, etc.
Advantages
a. Fast in execution
b. More reliable
c. More consistency
d. Automation scripting is re-usable for different versions of builds to
validate
e. Automation script is repeatable
Disadvantages
a. Automation tools are expensive
b. Skilled automation test engineers are required
c. Tools may not support every environment or application
Automated testing should not be viewed as a replacement for manual testing.
It is not possible to automate everything; there are many activities in the
testing life cycle that cannot be automated.
• Notes:
Need for automation
There are some benefits to automation; those are:
1. Reduction of testing effort - if there are hundreds or thousands of test
cases it is difficult to execute them all manually, and much effort is
needed to do it well; with automation the same work is done well with
much less effort and time.
2. Reduces the tester's involvement in executing tests.
These tools, also called program monitors, perform the given activities:
i) List the number of times a component is called or a line of code is
executed.
ii) Report whether a decision point has branched in all directions.
iv) Generate report summary statistics providing a high-level view of the
percentage of statements, paths and branches covered.
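A hedged sketch of obtaining this kind of statement/branch summary with
the third-party Python coverage package (assumed to be installed; the
measured function is hypothetical):

    import coverage

    def grade(score):
        if score >= 50:
            return "pass"
        return "fail"

    cov = coverage.Coverage(branch=True)   # also track whether decisions branched both ways
    cov.start()
    grade(80)                              # only the "pass" branch is exercised
    cov.stop()
    cov.report(show_missing=True)          # prints per-file statement/branch percentages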
This is another kind of classification
Testing activity tools-these tools are based on activity performed
Activities can be categorized as
a) Reviews and inspection
b) Test planning
c) Test design and development
d) Test execution and evaluation
a) Tools for reviews and inspection - these are static analysis tools; a
few work with the specifications, but far more of them work with the
code. Examples are:
Complexity analysis tools - these help to analyse complexity so
that time and resources can be planned.
Code comprehension tools - these help to understand
dependencies, trace program logic and identify dead code.
b) Tools for test planning - the activities of these tools are creating
templates for test planning, test schedules and staffing estimates.
c) Tools for test design and development - these perform activities
such as test data generation and test case generation.
d) Test execution and evaluation tools - these are capture and
playback tools, coverage analysis tools, memory testing tools,
test management tools, network-testing tools and performance
testing tools.
3 Debugging the tests You debug the tests to check that they operate
smoothly and without interruption.
4 Running the tests on a new version of the application You run the tests
on a new version of the application in order to check the application's
behavior.
5 Examining the test results You examine the test results to pinpoint
defects in the application.
6 Reporting defects If you have the TestDirector 7.0i, the Web Defect
Manager (TestDirector 6.0), or the Remote Defect Reporter (TestDirector
6.0), you can report any defects to a database. The Web Defect Manager
and the Remote Defect Reporter are included in TestDirector, Mercury
Interactive's software test management tool.
Exploring the WinRunner Window Before you begin creating tests, you
should familiarize yourself with the WinRunner main window.
The first time you start WinRunner, the Welcome to WinRunner window
opens. From the welcome window you can create a new test, open an
existing test, or view an overview of WinRunner in your default browser.
To display the User toolbar choose Window > User Toolbar. When you
create tests, you can minimize the WinRunner window and work
exclusively from the tool bar.
You can configure the softkey combinations for your keyboard using the
Softkey Configuration utility in your WinRunner program group.
Choosing a GUI Map Mode Before you start teaching WinRunner the GUI
of an application, you should consider whether you want to organize
your GUI map files in the GUI Map File per Test mode or the Global GUI
Map File mode.
The GUI Map File per Test Mode In the GUI Map File per Test mode,
WinRunner automatically creates a new GUI map file for every new test
you create. WinRunner automatically saves and opens the GUI map file
that corresponds to your test. If you are new to WinRunner or to testing,
you may want to consider working in the GUI Map File per Test mode. In
this mode, a GUI map file is created automatically every time you create
a new test.
The GUI map file that corresponds to your test is automatically saved
whenever you save your test and automatically loaded whenever you
open your test. This is the simplest mode for inexperienced testers and
for ensuring that updated GUI Map files are saved and loaded.
The Global GUI Map File Mode In the Global GUI Map File mode, you can
use a single GUI map for a group of tests. When you work in the Global
GUI Map File mode, you need to save the information that WinRunner
learns about the properties into a GUI map file. When you run a test, you
must load the appropriate GUI map file.
7 Run the test. Click OK in the Run Test dialog box. WinRunner
immediately begins running the test. Watch how WinRunner opens each
window in the Flight Reservation application.
8 Review the test results. When the test run is completed, the test results
automatically appear in the WinRunner Test Results window. See the
next section to learn how to analyze the test result.
1. Tool Bar:
It contains menu options and icons to perform operations on QTP
2. Test Pane:
a. Expert View:
In this view by default script generates in VB Script
b. Keyword View:
In this view script generates in simple understandable language in terms
of “Item”, “Operation”, “Value” and “Documentation”
3. Active Screen:
During recording by default QTP captures snapshot of application for each
operation and those will be maintained in Active screen component
4. Data table:
In QTP we have a built-in data table where we can import/store the required
test data; from it we can parameterize the test script at runtime.
a. Global sheet
b. Action/Local sheet
Note: in QTP, within a test we can create a maximum of 255 actions; for each
action QTP provides an individual action sheet in the data table.
a. Global sheet:
By default the script executes multiple times, once for each row of test
data filled in the Global sheet.
Using test data from the Global sheet we can parameterize any action's
script.
b. Action/local sheet:
Irrespective of the number of rows filled with test data in the Action/local
sheet, the script executes only one time.
Using test data from an Action sheet we can parameterize that particular
action's script only.
5. Test Flow:
In this component we can view the sequence in which the actions of a test
execute, and we can re-order that execution flow by dragging and dropping
the actions.
6. Debug Viewer:
We use the Debug Viewer during an execution break to view the
intermediate values of variables and to update those values.
8. Missing Resources:
In general, for a test we associate different resource files such as shared
repositories, recovery scenarios, environment variables, test data, library
functions, etc.
If any associated resource file is not available for an opened test, that
information can be found in the "Missing Resources" component.
Procedure:
Click on “C”
Click on “5”
Click on “*”
Click on “6”
Click on “=”
Expected Result= 30
Navigation:
Click on “OK”
Stop recording
Checkpoints:
1. Standard checkpoint
2. Text checkpoint
3. Text Area Checkpoint
4. Bitmap checkpoint
5. DB checkpoint
6. XML checkpoint
7. Accessibility checkpoint
8. Image checkpoint
9. Table checkpoint
10. Page checkpoint
Selenium
Selenium Grid
Selenium Grid is a tool used together with Selenium RC to run parallel tests
across different machines and different browsers all at the same time. Parallel
execution means running multiple tests at once.
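A hedged Python sketch (assuming the selenium package is installed and a
Grid hub is reachable at the conventional local default address) of pointing a
test at the Grid instead of a local browser:

    from selenium import webdriver

    options = webdriver.ChromeOptions()
    driver = webdriver.Remote(
        command_executor="http://localhost:4444/wd/hub",   # assumed hub address
        options=options,
    )
    driver.get("https://example.org")
    print(driver.title)
    driver.quit()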
Features:
• Tutorial Questions:
• Explain components of QTP?
• Explain about selenium Testing Tool?
• Assignment questions:
• Explain about Test Director?
• Explain Win Runner Testing Tool?
• Online resources:
• www.qatestingtools.com
• www.opensourcetesting.org
• www.istqb.org
• www.csi-india.org