Software Testing Notes
Introduction to Software Testing
Learning Objectives:
Introduction to Software Testing.
Why is Testing necessary?
Participants in Testing
Best Practices in Testing
Skills required for Testing
Software Testing
What is Software Testing?
In software testing it is not sufficient to demonstrate that the software is doing what it is supposed to do; it must also be checked that the software does not do what it is not supposed to do.
What to test?
Who tests?
Why is Testing necessary?
Software testing is necessary to make sure the product or application is defect free and behaves as per the customer's specifications.
Start Testing – When?
Testing starts right from the requirements phase and
continues till the release time.
As soon as possible.
Objective of starting early:
Requirements-related defects caught later in the SDLC result in a higher cost to fix.
Participants in Testing
Developer
Tester
Auditor
Common problems in the software
development process
Poor Requirements – unclear, incomplete, too general.
Inadequate testing – no one will know whether or not the program is any good until the customer complains or the system crashes.
Features – a request to pile on new features after development is underway; extremely common.
Miscommunication – if developers don't know what is needed or customers have erroneous expectations, problems are guaranteed.
Best Practices in Testing
Test Planning
Code Testability
Test often
Test early
3. Early testing
4. Defect clustering
5. Pesticide paradox
Pesticide paradox
If the same tests are repeated over and over again, eventually the same set of test cases will no longer find any new bugs.
To overcome this 'pesticide paradox', the test cases need to be regularly reviewed and revised to potentially find more defects.
Testing shows presence of defects
Testing can show that defects are present, but cannot prove
that there are no defects.
Absence of error fallacy
Finding and fixing defects does not help if the system built is unusable and does not fulfill the users' needs and expectations.
Interview Questions
What are the principles of testing?
Learning Objectives:
Introduction to Software Process
PDCA Cycle
Phases in SDLC
SDLC Models
Waterfall Model
Incremental Model
Spiral Model
Agile Model
Software Process
Process – Projects – Products
A software process specifies a method of developing software.
PLAN
Establish the objectives and the processes necessary to deliver the expected results.
DO
Implement the new processes, often on a small scale if possible, to test their possible effects.
CHECK
Measure the new processes and compare the results against the expected results to ascertain any differences.
ACT
Analyze the differences to determine their cause; each difference will relate to one or more of the P-D-C-A steps, and the process is adjusted accordingly.
Incremental Model
Spiral Model
Agile Model
Waterfall Model
Phase – Output
Implementation – Code
No working software is produced until late during the life cycle.
2. Incremental Model
Advantages of Incremental Model
Generates working software quickly and early during the
software life cycle.
This model is more flexible –less costly to change scope and
requirements.
It is easier to test and debug during a smaller iteration.
In this model the customer can respond to each build.
Lowers initial delivery cost.
Easier to manage risk because risky pieces are identified and handled during their own iteration.
Disadvantages of Incremental Model
Needs good planning and design.
Spiral Model
Spiral Model Strengths
High amount of risk analysis.
Interview Questions
What are different phases in SDLC?
CHAPTER-4
V-Shaped Strengths
V-Shaped Weaknesses
Benefits of V-Model
The V-Model evolved from the Waterfall Model.
V-Shaped Strengths
Works well for small projects where requirements are
easily understood.
Level 3 – Defined
--The software process for both management and
engineering activities is documented, standardized and
integrated.
--All projects use a documented and approved version of the
organization’s process.
--Medium Quality & Medium Risk
Level 4 – Managed
--Detailed measures of the software process and product
quality are collected, understood and controlled.
--Management can identify ways to adjust and adapt the
process to particular projects without measurable losses
of quality.
--Higher Quality & Lower Risk
Level 5 – Optimizing
--Focus is on continually improving process performance
through both incremental and innovative technological
changes.
--Defects are minimized and products are delivered on time
and within the budget boundaries.
--Highest Quality & Lowest Risk
Interview Questions
Explain V-model architecture.
Chapter 5:
Test Planning
In this phase all planning about testing is done, such as:
What needs to be tested
How the testing will be done
Which test strategy will be followed
What the test environment will be
Which test methodologies will be followed
Hardware and software availability, resources, risks, etc.
Test Analysis
After the test planning phase is over, the test analysis phase starts. In this phase we need to dig deeper into the project and figure out what testing needs to be carried out in each SDLC phase.
Automation activities are also decided in this phase: whether automation needs to be done for the software product, how the automation will be done, how much time it will take to automate, and which features need to be automated.
Test Design
In this phase various black-box and white-box test design techniques are used to design the test cases for testing; testers start writing test cases by following those design techniques.
If automation testing needs to be done, then automation scripts also need to be written in this phase.
Test construction and verification
In this phase testers prepare more test cases by keeping in mind positive and negative scenarios, end-user scenarios, etc.
The test cases are executed and defects are reported in bug
tracking tool.
CHAPTER 6: Verification
Learning Objectives :
V&V Model
This model is called the Verification and Validation Model.
Build
An executable file of the application which is released by the development team.
Walkthrough –A step-by-step presentation by the author of
the document in order to gather information and to establish a
common understanding of its content.
Inspection – A type of peer review that relies on visual examination of documents to detect defects. This is the most formal review technique and is therefore always based on a documented procedure.
Technical Review –An evaluation of a product or project
status to ascertain discrepancies from planned results and to
recommend improvements.
Audits:
Internal: Done by the organization
External: Done by people external to the organization to
check the standards and procedures of project
Walkthrough
Meeting led by author
Open-ended sessions
To explain (knowledge transfer) and evaluate the contents
of the document
To establish a common understanding of the document
The meeting is led by the authors; often a separate scribe
is present
A walkthrough is especially useful for higher-level
documents, such as requirement specifications and
architectural documents.
Inspection
Led by trained moderator (not the author).
help the author to improve the quality of the document under
inspection.
Formal process based on rules and checklists.
Remove defects efficiently, as early as possible.
Pre-meeting preparation.
Improve product quality, by producing documents with a
higher level of quality.
Formal follow-up process.
Learn from defects found and improve processes in order to prevent similar defects.
Technical Review
It is often performed as a peer review without management participation.
Benefits of Verification
Include early defect detection and correction.
CHAPTER 7:
Validation
Learning Objectives:
Introduction to Validation.
1. Disciplined approach to evaluate whether the final
product fulfills its specific intended use.
Unit Testing -
Unit – the smallest testable piece of software.
Tests the functionality of units.
Typically done by the developers and not by testers.
It is typically used to verify control flow, data flow and
memory leak problems.
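A minimal sketch of a unit test, assuming JUnit as the framework and a hypothetical Calculator class (neither is named in these notes):

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical unit under test: the smallest testable piece of software.
class Calculator {
    int add(int a, int b) {
        return a + b;
    }
}

public class CalculatorTest {
    @Test
    public void addReturnsSumOfTwoNumbers() {
        Calculator calc = new Calculator();
        // Check the actual result against the expected result.
        assertEquals(30, calc.add(10, 20));
    }
}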
Integration Testing :-
Integration is a process of combining and testing multiple
components together.
Approaches:
Bottom Up-
Process of testing the very lowest layers of software first is
called the bottom up approach.
Top Down
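In the top-down approach the higher-level modules are tested first, and lower-level modules that are not yet ready are replaced by stubs. A small illustrative sketch (the class names and behavior are made up for this example):

// Interface of a lower-level module that is not yet implemented.
interface PaymentService {
    boolean pay(double amount);
}

// Stub standing in for the real lower-level module during top-down integration.
class PaymentServiceStub implements PaymentService {
    public boolean pay(double amount) {
        return true; // always succeeds so the higher-level module can be exercised
    }
}

// Higher-level module under test, integrated against the stub.
class OrderProcessor {
    private final PaymentService payments;
    OrderProcessor(PaymentService payments) { this.payments = payments; }
    String checkout(double amount) {
        return payments.pay(amount) ? "ORDER CONFIRMED" : "PAYMENT FAILED";
    }
}

public class TopDownIntegrationDemo {
    public static void main(String[] args) {
        System.out.println(new OrderProcessor(new PaymentServiceStub()).checkout(100.0));
    }
}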
System Testing -
System Testing
System testing is the testing of a finally integrated product
for compliance against user requirements.
1. Functionality testing
5. Manipulations coverage
(the correctness of outputs or outcomes).
6. Order of functionalities
(the existence of functionality w.r.t. customer requirements).
Non-Functionality Testing
Testing team concentrates on characteristics of S/W.
1. Recovery Testing
It is also called as Reliability testing.
Testing how well the system recovers from crashes, hardware failures or other sudden problems.
With the help of backup and recovery procedures, the application can go from an abnormal state back to a normal state.
Example :
While an application is receiving data from a network, unplug
the connecting cable. After some time, plug the cable back in
and analyze the application's ability to continue receiving data
from the point at which the network connection was
terminated.
Compatibility Testing-
Also known as Portability testing.
Installation Testing-
Performance Testing
Performance Testing is the process of determining the speed
or effectiveness of a computer, network, software program or
device.
1. Load Testing – Tests the performance of an application by loading the system with the maximum number of users at the same time.
Ex: websites such as Yahoo Mail or Gmail.
2. Stress Testing – Checks how the system behaves under extremes such as insufficient memory, inadequate hardware, etc.
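A very rough sketch of the load-testing idea using plain Java threads to simulate concurrent users against a hypothetical callApplication() step (real load tests would normally use a dedicated tool such as LoadRunner, mentioned later in these notes):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SimpleLoadTest {
    public static void main(String[] args) throws InterruptedException {
        int concurrentUsers = 50; // simulated "max users at the same time"
        ExecutorService pool = Executors.newFixedThreadPool(concurrentUsers);
        for (int i = 0; i < concurrentUsers; i++) {
            final int user = i;
            pool.submit(() -> {
                long start = System.nanoTime();
                callApplication(user); // placeholder for the real request to the system under test
                long millis = (System.nanoTime() - start) / 1_000_000;
                System.out.println("User " + user + " response time: " + millis + " ms");
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }

    // Hypothetical call to the application under test.
    static void callApplication(int user) {
        try { Thread.sleep(100); } catch (InterruptedException ignored) { }
    }
}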
Data Volume Testing-
Find weaknesses in the system with respect to its handling of
large amounts of data during short time periods.
Parallel Testing-
Security testing-
It is also known as Penetration testing.
Internationalization Testing
Checks the compatibility of an application to all possible languages.
Localization Testing
Localization is the process of adapting a globalized
application to a particular culture/locale.
Monkey Testing
Due to lack of time, the testing team concentrates on some
of the main activities in the software build for testing.
This style of testing is known as "Monkey testing", "Chimpanzee testing" or "Gorilla testing".
Buddy Testing
Due to lack of time, the management groups programmers &
testers as “Buddies”.
Every buddy group consists of programmers & testers.
Defect Seeding
To estimate the efficiency of test engineers, the
programmers add some bugs to the build. This task is called
defect seeding / Bebugging.
It is a method in which developers plant defects in the AUT
to test the efficiency of the testing process.
It is also used as a method to determine an approximate
count of “the number of defects remaining in the application”.
Example of Defect Seeding and practical usage: Let us take an assumption –
Seeded defects in the AUT = 100
Discovered seeded defects = 75
Actual defects = x
Discovered defects = 600
From the above statistics, we can infer that testers are able to find 75 out of 100 seeded defects; that is, about 75% of the total defects present in the AUT.
The number of defects remaining in the application can be calculated as: Actual Defects − Discovered Defects.
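Working the assumed numbers through (a common way this estimate is made; the inference below is the usual defect-seeding calculation, not spelled out in the notes):
Detection rate = discovered seeded defects / seeded defects = 75 / 100 = 0.75
Estimated actual defects x ≈ discovered defects / detection rate = 600 / 0.75 = 800
Estimated defects remaining ≈ actual defects − discovered defects = 800 − 600 = 200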
Interview Questions
What is the key difference between verification and
validation?
Performance Testing Concepts
Quality
Learning Objectives :
What is quality ?
Quality Views
Quality –Productivity
Software Quality
Quality Views
Customer’s view :
Delivering the right product
Quality Views
Supplier’s view :
Doing the right thing
Doing it the right way
Doing it right the first time
Doing it on time
Any difference between the two views can cause problems.
Quality Means
Consistently meeting customer needs in terms of
Requirements
Cost
Delivery schedule
Service
Quality –Productivity
Increase in Quality can directly lead to increase in
productivity.
Software Quality
SQA: Software Quality Assurance
SQC: Software Quality Control
SQA: Monitoring and measuring the strength of the development process is called SQA.
SQC: Validation of the final product before release to the customer is called SQC.
What is Quality Control(QC) ?
Attempts to test a product after it is built.
What is QMS?
QMS stands for Quality Management System.
Chapter-10
Equivalence partitioning
Equivalence partitioning is a black-box testing method
Divide the input domain of a program into classes of data
Derive test cases based on these partitions.
An equivalence class represents a set of valid or invalid states for an input condition.
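A minimal sketch of equivalence partitioning in code, assuming a hypothetical age field that must lie between 18 and 60 (the field, range and class name are illustrative, not taken from the notes):

public class AgeValidator {
    // Valid equivalence class: 18..60; invalid classes: below 18 and above 60.
    static boolean isValidAge(int age) {
        return age >= 18 && age <= 60;
    }

    public static void main(String[] args) {
        // One representative test value is derived from each equivalence class.
        System.out.println(isValidAge(35)); // valid class       -> true
        System.out.println(isValidAge(10)); // invalid (too low)  -> false
        System.out.println(isValidAge(75)); // invalid (too high) -> false
    }
}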
Decision Tables
Explore combinations of inputs, situations or events.
It is very easy to overlook specific combinations of inputs.
Start by expressing the input conditions of interest so that
they are either TRUE or FALSE.
Rule 1: Valid id – Vaishali; Valid password – Vaishali@123
Rule 2: Valid id – Vaishali; Invalid password – 1234Va@#$%^&123
Rule 3: Invalid id – Vai2468&*(; Valid password – Vaishali@123
Rule 4: Invalid id – Vaishal35768i; Invalid password – Vais@$%%hali@123
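The four rules above can be checked against a small decision-table style test. The authenticate() method below is a hypothetical system under test in which both conditions must be TRUE for the login to succeed:

public class LoginDecisionTableTest {
    // Hypothetical system under test: both conditions must be TRUE for a successful login.
    static boolean authenticate(String id, String password) {
        return "Vaishali".equals(id) && "Vaishali@123".equals(password);
    }

    public static void main(String[] args) {
        System.out.println(authenticate("Vaishali", "Vaishali@123"));          // Rule 1: valid id, valid password     -> true
        System.out.println(authenticate("Vaishali", "1234Va@#$%^&123"));       // Rule 2: valid id, invalid password   -> false
        System.out.println(authenticate("Vai2468&*(", "Vaishali@123"));        // Rule 3: invalid id, valid password   -> false
        System.out.println(authenticate("Vaishal35768i", "Vais@$%%hali@123")); // Rule 4: invalid id, invalid password -> false
    }
}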
State Transition Testing
The process of writing test cases for each and every possible transition where the application or system changes from one state to another state.
It is used when there is a sequence of events and associated conditions that apply to those events.
Example:
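One commonly used illustration (not given in the notes themselves): a login screen that locks the account after three wrong PIN entries. The states are 1st attempt, 2nd attempt, 3rd attempt, Access granted and Account locked; test cases are written so that every transition between these states is exercised at least once. A minimal sketch of that state machine:

// States and transitions are illustrative, not taken from the notes.
enum LoginState { FIRST_TRY, SECOND_TRY, THIRD_TRY, GRANTED, LOCKED }

class LoginStateMachine {
    LoginState state = LoginState.FIRST_TRY;

    void enterPin(boolean correct) {
        if (state == LoginState.GRANTED || state == LoginState.LOCKED) return; // terminal states
        if (correct) { state = LoginState.GRANTED; return; }
        switch (state) {
            case FIRST_TRY:  state = LoginState.SECOND_TRY; break;
            case SECOND_TRY: state = LoginState.THIRD_TRY;  break;
            default:         state = LoginState.LOCKED;     break; // third wrong attempt locks the account
        }
    }
}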
Statement coverage
Execute all the statements at least once
Weakest form of coverage, as it only requires every line of code to be executed at least once.
int a = 10;
int b = 20;
int d = 30; // This statement is never used, but statement coverage still requires it to be executed
int c = a + b; // 30
System.out.println(c);
Decision coverage(Branch coverage)
Exercise all logical decisions on their true and false sides.
To test a branch we must check the true condition once and the false condition once, as sketched below.
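A small illustrative sketch: to achieve decision (branch) coverage of the method below, at least two test cases are needed, one driving the decision TRUE and one driving it FALSE (the method itself is made up for this example):

public class BranchCoverageDemo {
    static String classify(int amount) {
        if (amount > 100) {   // the decision under test
            return "LARGE";   // true branch
        } else {
            return "SMALL";   // false branch
        }
    }

    public static void main(String[] args) {
        System.out.println(classify(150)); // covers the true branch
        System.out.println(classify(50));  // covers the false branch
    }
}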
Condition Coverage
Each individual (atomic) condition inside a decision must be evaluated to both true and false at least once.
Checks each of the ways a condition can be made true or false, as sketched below.
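A minimal sketch of condition coverage on a compound decision with two atomic conditions (the method is illustrative):

public class ConditionCoverageDemo {
    static boolean canLogin(boolean idValid, boolean passwordValid) {
        return idValid && passwordValid; // one decision containing two atomic conditions
    }

    public static void main(String[] args) {
        // Each atomic condition takes both TRUE and FALSE across these two tests.
        System.out.println(canLogin(true,  false)); // idValid = true,  passwordValid = false
        System.out.println(canLogin(false, true));  // idValid = false, passwordValid = true
        // Note: both tests make the overall decision FALSE, which shows why condition
        // coverage alone does not guarantee decision (branch) coverage.
    }
}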
Cyclomatic Complexity
It is important to testers because it provides an indication of the amount of testing needed (it equals the number of independent paths through the code).
For a control flow graph G, cyclomatic complexity V(G) is defined as:
V(G) = E − N + 2, where
E is the number of edges in G (arrows)
N is the number of nodes in G (boxes)
Examples: for a graph with 5 edges and 5 nodes, V(G) = 5 − 5 + 2 = 2; for a graph with 6 edges and 5 nodes, V(G) = 6 − 5 + 2 = 3.
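A small worked illustration of the formula on a made-up method: its control flow graph has 4 nodes and 4 edges, so V(G) = 4 − 4 + 2 = 2, meaning two independent paths and therefore at least two test cases:

public class CyclomaticDemo {
    // Flow graph: decision node -> (negative path | non-negative path) -> exit.
    // N = 4 nodes, E = 4 edges, so V(G) = E - N + 2 = 4 - 4 + 2 = 2.
    static int absoluteValue(int x) {
        if (x < 0) {
            return -x; // independent path 1 (x negative)
        }
        return x;      // independent path 2 (x zero or positive)
    }

    public static void main(String[] args) {
        System.out.println(absoluteValue(-5)); // exercises path 1 -> 5
        System.out.println(absoluteValue(7));  // exercises path 2 -> 7
    }
}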
Advantage of WBT
As knowledge of the internal code structure is a prerequisite, it becomes very easy to find out which type of input/data can help in testing the application effectively.
Test Plan
Test Plan Identifier: Provides a unique identifier for the document.
Introduction: State the purpose of the plan, specify the
goals and objectives.
Approach
Approach: This mentions the overall test Strategy/Approach
for your Test Plan.
e.g. Specify the testing method (Manual or Automated, White
box or black box etc. )
Test Deliverables
Test Deliverables: all the types of documents that are produced and used while implementing the testing, starting from sign-on through sign-off.
For example:
1. Test Plan  2. Test Cases  3. Test Log, etc.
Environmental needs :
Specify the properties of test environment: hardware,
software, network etc.
List any testing or related tools.
Responsibilities :
List the responsibilities of each team/role/individual.
Staffing and Training Needs
Staffing and training needs:
How many staff members are required to do the testing?
What are their required skills?
What kind of training will be required?
Schedule :
Built around the roadmap, but specific to testing, with testing milestones:
When will testing start
What will be the targets/milestones pertaining to time
When will testing cycle end
Risks and Contingencies
Any activity that is a threat to the testing schedule is a
planning risk.
What are the contingencies (a provision for an unforeseen event) in case the risk manifests itself?
What are the mitigation plans (alternate plans) for the risk?
Approvals :
Specify the names and roles of all persons who must
approve the plan.
Provide space for signatures and dates. (If the document is
to be printed.)
Interview Questions
What is a Test Plan?
What is the difference between suspension/resumption criteria and exit criteria?
Chapter-12
Test Cases
Learning Objectives
Structure of Test Cases
Sample Test Cases
Test Scenario
A Scenario is any functionality that can be tested. It is also
called Test Condition or Test Possibility.
Test Cases
A case that tests the functionality of a specific object.
Test case is a description of what to be tested, what data to
be given and what actions to be done to check the actual
result against the expected result.
It is a document which includes a step-by-step description of input and output conditions with test data, along with the expected results.
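A sample test case written in that structure, reusing the login data from the decision-table example (the field names follow a common convention and are not prescribed by the notes):

Test Case ID: TC_Login_001
Description: Verify login with a valid id and a valid password
Precondition: The login page is displayed
Steps: 1. Enter the id  2. Enter the password  3. Click the Login button
Test Data: id = Vaishali, password = Vaishali@123
Expected Result: The user is logged in and the home page is displayed
Actual Result / Status: Recorded during execution (Pass / Fail)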
Defect Management
Defect lifecycle
Types of Defect
What is Defect ?
"Anything missing, wrong, or not expected in the software, or anything that makes it difficult to understand or hard to use, is called a Defect."
Severity – Relative degree of impact of the defect on the system.
OR
Severity is the seriousness of the problem.
Priority – Relative degree of precedence given to a bug for fixing it.
OR
Priority is the urgency of fixing a problem.
Defect Management
Primary goal is to prevent defects
Should be integral part of development process
As much as possible should be automated
Defect information should be used for process
improvements
Defect submission
In the above defect submission process, the testing team uses defect tracking software,
e.g. Test Director, Bugzilla, ALM (Application Lifecycle Management), etc.
New : Tester found new bug and reports it to test lead.
Open : Team lead open this bug and assign it to the developer.
Assign: The developer has three choices after the bug is assigned:
1. Reject: He can say that this is not a bug because, due to hardware or other environment problems, you might be seeing the defect in the application.
2. Deferred: He can postpone fixing the bug according to the priority of the bug.
3. Duplicate: If the bug is repeated twice, or two bugs describe the same concept, then one bug's status is changed to "DUPLICATE".
Fixed: If none of the conditions above (reject, deferred, duplicate) apply, the developer has to fix the bug.
Incident management
Learning Objectives:
Incident Management
Incidents
Incident management
Incident: Any event that occurs during testing that requires subsequent investigation or correction.
actual results do not match expected results
– possible causes:
software fault
Incidents
May be used to monitor and improve testing
Should be logged
Should be tracked through stages,
e.g.:
initial recording
Incident Lifecycle
Interview Questions
What is an incident in software testing?
How to log an Incident in software testing?
Chapter-15
Risk Analysis
Learning Objectives :
What is Risk ?
Risk Analysis
Risk Management
Risk Mitigation
What is Risk ?
Risk is a potential that a chosen action or activity will lead to
a loss
or undesirable outcome.
Risk is a potential problem. It might happen or might not.
E.g.: There is a possibility that I might meet with an accident if I drive.
There is no possibility of an accident, if I do not drive at all.
Risk Analysis
Risk analysis and risk management are a series of steps that help the software team understand and manage uncertainty.
Interview Questions
What are Risk Analysis and Risk Management?
What are the common risks that lead to project failure?
Chapter-16
Mobile Device
A pocket-sized computing device, typically having a display screen with touch input or a miniature keyboard.
iOS (iPhone)
Symbian (Nokia)
J2ME
RIM (BlackBerry)
BREW
Hybrid apps
Hybrid apps are part native apps, part web apps.
Like web apps, hybrid apps rely on HTML being rendered in a browser, with the caveat that the browser is embedded within the app.
Interview Questions
What is the difference between Mobile device testing and
mobile application testing?
Automation Testing
Why Automation Testing ?
Every organization has unique reasons for automating
software quality activities, but several reasons are common
across industries.
What to Automate?
Tests that will be run many times.
Tests that will be run with different sets of data.
When to Automate?
When the Application under manual test is stable.
Test cases that need to be run for every build such as Login
Module or core functions like Calendar utility.
Tests that require execution of multiple data called
parameterization.
Identical test cases which have to be executed on different
hardware configuration.
When the project doesn’t have enough time for the initial
time required for writing automation test scripts.
GatherSpace
Bugzilla
Performance testing / load testing / stress testing tools
Performance testing tools monitor and report on how a
system behaves under a variety of simulated usage
conditions.
They simulate a load on an application, a database, or a
system environment, such as a network or server.
Example: LoadRunner
Interview Questions
How many types of automation tools are there?