
Manual Software Testing Concepts

Icon Software Solutions


Sandarsh
QA Lead
What is Software Testing?

Testing is checking the functionality of an application against the system requirements.
OR
Software testing is the process of executing an application with the intention of finding defects.
What is a Quality Product?
A quality product is one that is reliably bug-free, built within the estimated
cost, and delivered on time.

When can clients trust the software?

When there is a high probability that the software will not cause a failure in the
system for a specified time under specified conditions (i.e., the software is reliable).
What are the Causes for Software Bugs?

✓ Poor Requirements.
✓ Poor Management Approach.
✓ Lack of Experience.
✓ Unrealistic Schedules.
✓ Practical Difficulties of developing Software.
✓ Lack of Skills and Communication.
✓ Too many Assumptions.
Necessity of Testing (why do we need testing?)

✓ Delivery of a Quality Product to the Customers.
✓ Conformance to the Requirements.
✓ Reduced Cost of fixing defects.
SDLC (Software Development Life Cycle)
Various Activities involved in Software Development.
• Requirement Collection
• Feasibility Analysis
• Design
• Coding
• Testing
• Implementation
• Maintenance
Types of SDLC
• Waterfall Model
• Spiral Model
• V Model
• Prototype Model
• Hybrid Model
• Agile Methodology OR Framework
Waterfall Model

Requirement Collection → Feasibility Analysis → Design → Coding → Testing →
Implementation → Maintenance (each phase flows into the next, like a waterfall).
Advantages of waterfall Model:
• This model is simple and easy to understand and use.
• In this model phases are processed and completed one at a time. Phases do not overlap.
• Waterfall model works well for smaller projects where requirements are clearly defined and very
well understood
Disadvantages:
• Once an application is in the testing stage, it is very difficult to go back and change something
that was not well-thought out in the concept stage.
• High amounts of risk and uncertainty
• Not a good model for complex and object-oriented projects.
• Poor model for long and ongoing projects.
• Not suitable for the projects where requirements are at a moderate to high risk of changing.
When to use the waterfall model?
• This model is used only when the requirements are very well known,
clear and fixed.
• Product definition is stable.
• Technology is understood.
• There are no indefinite requirements
• Ample resources with required expertise are available freely
• The project is short or a mission-critical application.
What is Spiral Model?
The spiral model is similar to the incremental model, but with more emphasis on
risk analysis. The spiral model has four phases:
✓ Planning
✓ Risk Analysis
✓ Engineering
✓ Evaluation
Spiral Model (diagram): each loop of the spiral covers analysis (CRS/SRS), design
(HLD/LLD), prototyping, coding, and implementation & testing before the next loop begins.
Advantages of Spiral model:
• High amount of risk analysis; hence, avoidance of risk is enhanced.
• Good for large and mission-critical projects.
• Strong approval and documentation control.
• Additional Functionality can be added at a later date.
• Software is produced early in the software life cycle.
Disadvantages of Spiral model:
• Can be a costly model to use.
• Risk analysis requires highly specific expertise.
• Project’s success is highly dependent on the risk analysis phase.
• Doesn’t work well for smaller projects.
When to use Spiral model?
• When costs and risk evaluation is important
• For medium to high-risk projects
• Long-term project commitment unwise because of potential changes to
economic priorities
• Users are unsure of their needs
• Requirements are complex
• New product line
• Significant changes are expected (research and exploration)
What is V-model?
V- model means Verification and Validation model. Just like
the waterfall model, the V-Shaped life cycle is a sequential path of
execution of processes. Each phase must be completed before the
next phase begins. V-Model is one of the many software
development models.
Testing of the product is planned in parallel with a corresponding
phase of development in V-model.
V-Model (diagram): each development phase on the left arm is paired with a test
level on the right arm, with documentation, reviews, and test design in between.
• CRS & Feasibility ↔ UAT (User Acceptance Testing)
• SRS ↔ System Testing
• High-Level Design ↔ Integration Testing
• Detailed Design ↔ Unit/Functional Testing
• Coding sits at the bottom of the V.
Advantages of V-model:
• Simple and easy to use.
• Testing activities like planning and test design happen well before coding. This
saves a lot of time; hence, a higher chance of success over the waterfall model.
• Proactive defect tracking – that is, defects are found at an early stage.
• Avoids the downward flow of the defects.
• Works well for small projects where requirements are easily understood.
Disadvantages of V-model:
• Very rigid and least flexible.
• Software is developed during the implementation phase, so no early prototypes
of the software are produced.
• If any changes happen midway, then the test documents along with the
requirement documents have to be updated.
When to use the V-model?
• The V-shaped model should be used for small to medium sized
projects where requirements are clearly defined and fixed.
• The V-shaped model should be chosen when ample technical
resources are available with the needed technical expertise.
• High confidence from the customer is required for choosing the V-shaped
model approach. Since no prototypes are produced, there is a very
high risk involved in meeting customer expectations.
V and V (verification and validation):
Verification is the process of evaluating products of a development phase to
find out whether they meet the specified requirements. Validation is the
process of evaluating software at the end of the development process to
determine whether software meets the customer expectations and
requirements.

In this model we can do both verification and validation.


Verification – done through reviews and meetings that evaluate the documents, e.g.
walkthroughs, inspections, test case documentation, and the peer review process.
Validation – the actual testing done after verification (test case execution).
What is Prototype model?
The basic idea in Prototype model is that instead of freezing the requirements
before a design or coding can proceed, a throwaway prototype is built to
understand the requirements. This prototype is developed based on the currently
known requirements. Prototype model is a software development model. By using
this prototype, the client can get an “actual feel” of the system, since the
interactions with prototype can enable the client to better understand the
requirements of the desired system. Prototyping is an attractive idea for
complicated and large systems for which there is no manual process or existing
system to help determine the requirements.
Prototypes are usually not complete systems, and many of the details are
not built into the prototype. The goal is to provide a system with the overall
functionality.
Prototype model (diagram): requirement specification → minimal development of the
prototype → customer decision; if the prototype is not approved, it is refined and
reviewed again; once approved, the team follows any SDLC for the full development.
Advantages of Prototype model:
• Users are actively involved in the development
• Errors can be detected much earlier.
• Quicker user feedback is available leading to better solutions.
• Missing functionality can be identified easily
• Confusing or difficult functions can be identified
• Requirements validation.
Disadvantages of Prototype model:
• Leads to an "implement and then repair" way of building systems.
• An incomplete application may cause the application not to be used as the
full system was designed.
• Incomplete or inadequate problem analysis.
When to use Prototype model?
• Prototype model should be used when the desired system needs to
have a lot of interaction with the end users.
• Typically, online systems and web interfaces, which have a very high amount of
interaction with end users, are best suited for the Prototype model. It
might take a while for a system to be built that allows ease of use and
needs minimal training for the end user.
• Prototyping ensures that the end users constantly work with the
system and provide feedback, which is incorporated into the prototype
to result in a usable system. Prototypes are excellent for designing good
human-computer interface systems.
Hybrid Model (Combination of Basic Models)
Hybrid Prototype and Spiral (diagram): requirements → prototype design → prototype
testing → customer review, looping until the prototype is approved, then
design → coding → testing.
Hybrid V and Prototype (diagram): SRS → prototype development → prototype testing →
customer review; changes loop back into the prototype until it is approved, after
which the HLD and the remaining V-model phases follow.
Types Of Testing
• White box / structural/glass box/unit testing
• Gray box testing
• Black box / functional testing
White Box Testing
• Technique to check the internal structure of the system.
• Entire code must be available and this is done by developers.
• Each line of code is checked.
Techniques followed in White Box (see the sketch below):
Path testing: each independent path through the code is tested. A flow chart may be used for this.
Conditional testing: logical conditions are checked for both True and False outcomes.
Loop testing: every loop in the structure is checked for all of its loop cycles.
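A minimal sketch of these techniques in Python; the function classify_grade and its
test values are hypothetical, used only to illustrate path, conditional, and loop coverage.

```python
# Hypothetical unit under test: white box techniques target its internal structure.
def classify_grade(scores):
    total = 0
    for s in scores:              # loop testing: run for 0, 1, and many iterations
        total += s
    if not scores:                # conditional testing: True and False outcomes
        return "no data"
    average = total / len(scores)
    if average >= 50:             # path testing: each independent path is exercised
        return "pass"
    return "fail"

# Tests covering every path, condition outcome, and loop cycle count.
assert classify_grade([]) == "no data"          # empty loop, first condition True
assert classify_grade([80]) == "pass"           # single iteration, >= 50 path
assert classify_grade([10, 20, 30]) == "fail"   # many iterations, < 50 path
```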
White Box Testing not only uses the above techniques, but also tools, for example:
• Rational Quantify – identifies the portions of your application that
dominate its execution time.
• Rational Purify – a set of run-time analysis tools designed for improving
application reliability and performance.
Difference between White Box and Black Box Testing

• White Box: checking of internal code lines. Black Box: checking the functionality of the application.
• White Box: done by developers or white box testers. Black Box: done by test engineers.
• White Box: requires good design and programming language skills. Black Box: no programming knowledge needed.
• White Box: done after coding. Black Box: done after White Box Testing.
Black Box Testing
• Checking whether the application meets the requirements or not.
• Black Box Testing is done in many ways:
1. Functional Testing
2. Integration Testing
   - Incremental Integration (Top-Down, Bottom-Up)
   - Non-Incremental Integration (Big-Bang)
3. System Testing
Drawbacks of Non-Incremental Testing (Big-Bang Method)
• Some interfaces may be missed while covering the data flow.
• Root cause analysis of a defect becomes difficult.

Example of Top-Down, Bottom-Up (Incremental Integration): an organization hierarchy
(CEO → Managers → Engineers) can be integrated top-down, starting from the CEO module
and using stubs for the lower modules, or bottom-up, starting from the engineer
modules and using drivers for the upper modules.
During Integration Testing, to test the integration between an already built module
and a module that is not built yet:
• A Stub is a dummy module that stands in for a called (lower-level) module,
so that the built Module A can still be exercised.
• A Driver is a dummy module that stands in for a calling (higher-level) module.
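A minimal sketch of a stub and a driver in Python; the module and function names
(account_summary, loan_details_stub) are hypothetical, chosen only to illustrate the idea.

```python
# Stub: dummy stand-in for a lower-level module that is not built yet.
def loan_details_stub(account_id):
    # Returns canned data instead of calling the real (unbuilt) loans module.
    return {"account_id": account_id, "loan_balance": 0}

# Module A (built): depends on a loan service, injected so a stub can replace it.
def account_summary(account_id, loan_service):
    loan = loan_service(account_id)
    return f"Account {account_id}: loan balance {loan['loan_balance']}"

# Driver: dummy caller that exercises Module A because the real caller is not built.
def driver():
    print(account_summary("AC-1001", loan_details_stub))

driver()
```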
3. System Testing
• It is end-to-end testing, to check whether the product meets the
requirements or not.
• The testing environment will be just like the production environment:
developers work against development servers, testers work against testing
servers, and the testing servers are kept similar to the production
environment (usually with a lower configuration).
Acceptance Testing
• Testing done from the customer's side to check whether the product meets the
requirements or not.
• Testing may be done by:
   - the customer itself,
   - the customer's customer,
   - end users, or
   - another company that the customer hires for UAT.

Overall flow: Software → White Box Testing → Black Box Testing (Functional,
Integration, System) → Acceptance Testing.
4. Adhoc Testing
• Testing the application randomly (as an end user does) is Adhoc Testing.
• Requirements are not followed.

Exploratory Testing
Testing the application by exploring its features.
It is done when:
✓ There is a similar kind of application.
✓ There is less time to understand the application.

Adhoc vs. Smoke vs. Exploratory:
▪ Adhoc: negative testing; we may know the product very well; requirements are not followed.
▪ Smoke: positive testing; the product should be known; requirements are followed.
▪ Exploratory: positive or negative testing; no need to know the product well; requirements may or may not be followed.


Compatibility Testing
Testing the functionality of an application in different software and
hardware environments.

• Software environments:
   - Operating systems: Windows 98/2000/2003/XP/NT/ME, Unix, Solaris, HP, IBM OS/2,
     Linux (Red Hat, Fedora, SuSE), Macintosh
   - Browsers: IE 5.0/5.5/6.0, Netscape, Mozilla Firefox, Opera
• Hardware environments: RAM, processor, VGA
α and β Testing
α and β testing are done for product-based applications.

α Testing: done by testers before releasing the product to the market.
β Testing: done after releasing the product to the market (the β release), by
end users together with testers.

Flow: Application → White Box Testing → α-Testing → β-Release → Customers → β-Testing

Product-based company: the company understands the requirements and builds the
product itself.
Service-based company: the company collects the requirements from its customers
and builds the product for them.
Comparison Testing
This is done to learn the advantages and disadvantages of the product by
comparing it with other versions of the same product.

Guerilla Testing
This is done by an expert, possibly the person who gave the requirements.

Mutation Testing
Checking the automated scripts written to test the product, for example by
introducing small changes (mutations) and verifying that the scripts catch them.
Usability Testing
Checks the user-friendliness of an application:
• Accessibility
• Navigation
• Look and feel
• Easy language
• Easy to understand
• Simplicity
This will be done by end users, and their feedback will be implemented, for example
by deriving checklists such as:
• The login feature should have a "forgot password" option
• Alt tags for every link and diagram
• A link to the home page on every page
• All pages should have navigation
Accessibility Testing (ADA Testing, 508 Compliance Testing)
Checks whether the application can be used by disabled persons. Some of the rules are:
• All features of the application should be accessible by keyboard
• The color combination should not be red and green

The application can be run through a tool such as A-Prompt, which reports the
accessibility results.
Performance Testing
Checks the stability and response time of an application by applying load.
Stability – the ability to withstand the load.
Response Time – the time taken to get a response after a request:
RT = T1 + T2 + T3
where
T1 = time to send the request
T2 = execution time on the server
T3 = time to receive the response
Load – the number of users.
Some of the tools are:
• JMeter
• LoadRunner
• Silk Performer
• Astra LoadTest
• WebLOAD
• Rational Performance Tester
• BlazeMeter
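A minimal sketch of measuring response time (RT = T1 + T2 + T3 is observed end to end
here) by firing a few concurrent users with Python's standard library; the URL and user
count are hypothetical, and a real load test would use one of the tools listed above.

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "https://example.com/"   # hypothetical application under test
USERS = 5                      # load = number of concurrent users

def one_request(_):
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start   # end-to-end response time in seconds

with ThreadPoolExecutor(max_workers=USERS) as pool:
    times = list(pool.map(one_request, range(USERS)))

print(f"avg response time: {sum(times) / len(times):.3f}s, max: {max(times):.3f}s")
```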
Performance Testing setup (diagram): a main machine running the load-testing tool
drives several agent machines (M1 to M4), which together generate the load against
the server.
The load balancer balances the load across the servers: incoming requests pass
through the load balancer to Server 1 or Server 2, which share a database.

Before using a load balancer, the performance will first be improved by the
developers by optimizing the code.

• The performance will also depend on the server configuration, the number of
servers connected, and the testing machine configuration.

• Performance testing will be done for web applications, and only when the
application is functionally stable, typically after 3 to 4 releases.
Types of Performance Testing

Load Testing: performance when applying a load less than or equal to the maximum
for which the application is designed.
Stress Testing: performance when applying a load greater than the designed value.
Volume Testing: performance of an application when passing a huge volume of data
through it.
Soak Testing: stability and response time of an application when applying the load
continuously for a particular period of time.
Scalability Testing: finding the breaking point by gradually increasing the load.
Reliability Testing: functionality of an application when applying the load
continuously for a period of time.
Recovery Testing

Checks how well the application recovers from disasters.
Steps followed:
• Crash the application by introducing some defects.
• The crash should write a log message describing the proper reason.
• The application should kill its own process before it crashes.
• Reopen the application.
• The application should reopen with its previous settings.
Failover Testing
Checks that when one server behind the load balancer goes down, the application's
requests are redirected to another server that is still OK, so the application keeps
working.
Regression Testing
• Execution of some test cases in different releases/builds just to make
sure that changes made in an application don't introduce new defects.
• Changes may be additions, deletions, modifications, or even defect fixes.

Types of regression testing (see the sketch below):
• Unit Regression: testing only the unit/feature in which the changes were made.
• Regional Regression: testing the fixed defects and the related regions
that may have been affected by the changes.
• Full Regression: testing the complete application, due to a large number of
changes in the application.
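One common way to organize these regression levels is sketched below with pytest
markers; the test names, markers, and assertions are hypothetical and not part of the
original material, and markers would normally be registered in pytest.ini.

```python
# test_loans.py -- hypothetical tests tagged for regression selection.
import pytest

@pytest.mark.unit_regression
def test_loan_interest_change():
    # Only the changed feature: re-test the unit where the fix was made.
    assert round(1000 * 0.05, 2) == 50.0

@pytest.mark.regional_regression
def test_loan_summary_page():
    # Related region that the change may have affected.
    assert "loan" in "loan summary".split()

# Run examples:
#   pytest -m unit_regression        -> unit regression only
#   pytest -m regional_regression    -> fixed defect plus affected regions
#   pytest                           -> full regression (everything)
```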
Test Deliverables:
• Total Quality Management
• Test Plan
• Test Scenarios
• Test Cases
• RTM
• Defect Report
• Checklist
• Sprint Retrospective
• Release Notes
• Test Summary Report
• Test Closure Report
Test Cases:

A test case is a document that describes an input, an action, and an
expected response, to determine whether a feature of an application is
working correctly.
Test Case Design Techniques:
➢ Test case design techniques are used to design effective test
cases without missing any possible test case value.

Types of design techniques (see the sketch below):

➢ Error Guessing: the test case values are guessed based on the
requirements.

➢ Equivalence Partitioning: used when the test case input is a range.
The range is divided into equal parts (partitions) and tested
for valid and invalid values.

➢ Boundary Value Analysis: if the input range of values is between 'A'
and 'B', then design the test cases for 'A', 'A+1', 'A-1', 'B', 'B+1',
'B-1' values.
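A minimal sketch of equivalence partitioning and boundary value analysis in Python,
assuming a hypothetical rule that a valid age is between 18 and 60 (A = 18, B = 60);
the rule and values are illustrative only.

```python
# Hypothetical validation rule used only to illustrate the techniques.
def is_valid_age(age):
    return 18 <= age <= 60

# Equivalence partitioning: one representative value per partition.
partitions = [(10, False),   # invalid partition below the range
              (35, True),    # valid partition inside the range
              (75, False)]   # invalid partition above the range

# Boundary value analysis for A = 18, B = 60: A-1, A, A+1, B-1, B, B+1.
boundaries = [(17, False), (18, True), (19, True),
              (59, True), (60, True), (61, False)]

for value, expected in partitions + boundaries:
    assert is_valid_age(value) == expected, f"failed for {value}"
print("all equivalence and boundary cases passed")
```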
Test Case Review:
➢ Checking whether the test case template follows the standards.
➢ Checking whether the test cases cover all the testing techniques.
➢ Checking the header of the test case: naming convention, precondition, severity.
➢ The mistakes found are reported using a review template.
➢ Spelling and grammar mistakes.
➢ Ensure the team member captured the test data.
➢ Once the peer review (review by others) is done, ensure the review comments are incorporated.
STLC (Software Test Life Cycle)

Requirements → System Study → Write Test Plan → Write Test Cases → RTM →
Test Execution → Defect Tracking → Test Execution Report → Retrospective
Requirements:
The requirements given to the testing team may come in different forms:

1. CRS: the document given by the customer for the development of an application.

2. SRS: a detailed description of the customer requirements in technical terms.

3. FS (Functional Specification): describes the details of the functional
elements of the pages that are used to build the system.

4. Design document

5. Use case

6. User Stories (Agile)
RTM (Requirement Traceability Matrix)
Before executing the test cases, it is confirmed that a test case has been written
for every requirement by preparing the RTM. By writing the RTM we can ensure test
coverage.

Sl no | Module Name | Requirement Numbers | Test Case Name | Automation Script
1     | Login       | 1.1                 | TC_Login       | <NA>
2     | Loans       | 2.1.3               | TC_Loan_Home   | <NA>
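A small sketch of checking test coverage from an RTM kept as data; the requirement
numbers and test case names mirror the sample rows above, and the extra uncovered
requirement is hypothetical.

```python
# RTM rows: requirement number -> test case name (None means not yet covered).
rtm = {
    "1.1":   "TC_Login",      # Login module
    "2.1.3": "TC_Loan_Home",  # Loans module
    "2.1.4": None,            # hypothetical requirement with no test case yet
}

uncovered = [req for req, tc in rtm.items() if tc is None]
coverage = 100 * (len(rtm) - len(uncovered)) / len(rtm)
print(f"requirement coverage: {coverage:.0f}%, uncovered: {uncovered}")
```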
Defect Life Cycle (Tracking)
A defect is a deviation from the requirements.
A defect occurs if:
➢ There is a wrong implementation of a requirement.
➢ Some feature was missed during implementation.
➢ Some extra feature was implemented.
The defect life cycle gives the status of a bug at any point in time (diagram):
Open → Assigned → Fixed → Retest → Closed (if the retest passes) or Reopen (if it
fails), with Reopen going back to Assigned.
From Open/Assigned a defect may also be resolved as: Duplicate, Not a defect,
Cannot be fixed, Not reproducible, Request for enhancement, or Fixed in next
release.
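The life cycle can be pictured as a small state machine; the sketch below encodes only
the transitions described above (a simplification, not a complete tool workflow), and
the move helper is illustrative.

```python
# Allowed defect status transitions, as described in the life cycle above.
TRANSITIONS = {
    "Open":     {"Assigned", "Duplicate", "Not a defect", "Cannot be fixed",
                 "Fixed in next release", "Request for enhancement"},
    "Assigned": {"Fixed", "Not reproducible", "Cannot be fixed"},
    "Fixed":    {"Retest"},
    "Retest":   {"Closed", "Reopen"},   # pass -> Closed, fail -> Reopen
    "Reopen":   {"Assigned"},
}

def move(status, new_status):
    if new_status not in TRANSITIONS.get(status, set()):
        raise ValueError(f"invalid transition: {status} -> {new_status}")
    return new_status

# Happy path: Open -> Assigned -> Fixed -> Retest -> Closed.
status = move(move(move(move("Open", "Assigned"), "Fixed"), "Retest"), "Closed")
print(status)  # Closed
```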
Defect Report:
Defect ID: unique ID to identify the defect
Project name: name of the project
Module name: name of the module
Release/version: release name
Status: status of the bug (e.g., Open)
Severity: e.g., Critical
Priority: P1/P2/P3
Environment: testing environment
Test case ID: the test case during whose execution the bug was found
Brief description: one-line summary of the bug
Detailed description: steps to reproduce the bug, with the details of the inputs
used while testing
Precondition:
Test date:
Found by: tester who found the bug
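The same fields can be captured as a simple record; a sketch using a Python dataclass,
with hypothetical sample values (project, IDs, and descriptions are made up for
illustration and a few optional fields are omitted for brevity).

```python
from dataclasses import dataclass

@dataclass
class DefectReport:
    defect_id: str              # unique ID to identify the defect
    project_name: str
    module_name: str
    release_version: str
    status: str                 # e.g. Open
    severity: str               # e.g. Critical
    priority: str               # P1 / P2 / P3
    environment: str
    test_case_id: str           # test case during which the bug was found
    brief_description: str
    detailed_description: str   # steps to reproduce, with inputs used
    found_by: str

bug = DefectReport("DEF-007", "NetBanking", "Loans", "V1.0", "Open", "Critical",
                   "P1", "QA", "TC_Loan_Home_03", "Loan page crashes",
                   "1. Login 2. Open Loans 3. Click Home -> app crashes", "Tester1")
print(bug.defect_id, bug.status)
```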
Possible defect statuses:
• Open
• Assigned
• Closed
• Reopen
• Fixed
• Duplicate
• Not a defect
• Cannot be fixed
• Fixed in next release (Deferred)
• Request for enhancement
Not reproducible, possible reasons:
• Different environments on both sides
• Different settings
• The data used may be different
Defect Tracking Tools
• Rational ClearQuest
• Test Director
• Bugzilla
• Bug Station
• JTrac

Defects can be communicated using a defect tracking tool:
1. It is used to generate a unique defect ID.
2. The data is stored in a database.
3. The defect is automatically sent to the developer.
Typical screens in a defect tracking tool (mockups):
• Login screen: enter the URL, username, and password to reach the project and log
defects.
• "Log a defect" form: Project Name, Release Version, Module Name, Status, Severity,
Priority, Environment, Test Case ID, Brief Description, Steps to Reproduce, and a
screenshot attachment, with Submit/Cancel.
• Search and Advanced Search screens: look up a bug by Defect ID, Release Version,
Module, Status, Priority, or a search string to check its details; Advanced Search
is used to avoid logging duplicate bugs.


Tool administration (mockup): the project manager sends the project details to the
tool admin (project name, release version, modules, owner mail ID, status, priority,
environment, employee IDs). The tool admin creates, edits, or deletes projects and
users in the tool and shares the username and password with the team.
Defect Tracking Sheet (Defect Matrix)

It is used to keep track of all the bugs across different releases.

Release version | Module name | Defect ID | Status      | Priority | Severity
V 1.0           | Loans       | ID-007    | Closed      | P1       | Critical
V 1.0           | Payments    | ID-100    | Enhancement | P3       | Low
V 1.1           | Loans       |           |             |          |
V 1.1           | Payments    |           |             |          |
V 1.1           | Alerts      |           |             |          |

Test Plan
• May be a single document for the complete project.
• May be different for different releases.
• May be divided into a Master Test Plan and Module Test Plans (Module 1, Module 2, ...).
• May be divided into a Functional Test Plan and an Integration Test Plan.

You might also like