What is software?
• Software is a collection of computer programs that helps us perform a task.
• Types of Software:
• System software
Ex: Device drivers, OS, Servers, Utilities, etc.
• Programming software
Ex: compilers, debuggers, interpreters, etc.
• Application software
Ex: industrial automation, business software, games, telecoms, etc.
Product Vs Project
• If a software application is developed for a specific customer's
requirements, then it is called a Project.
• If a software application is developed for multiple customers'
requirements, then it is called a Product.
What is Software Testing?
• Software Testing is a part of the software development process.
• Software Testing is an activity to detect and identify the defects in
the software.
• The objective of testing is to release a quality product to the client.
Why do we need testing?
• Ensure that the software is bug-free.
• Ensure that the system meets customer requirements and software
specifications.
• Ensure that the system meets end-user expectations.
• Fixing bugs identified after release is expensive.
Software Quality
• Quality: Quality is defined as the satisfaction of all of a customer's
requirements in a product.
• Note: Quality is not defined in the product; it is defined in the customer's
mind.
• Quality software is reasonably
• Bug-free.
• Delivered on time.
• Within budget.
• Meets requirements and/or expectations.
• Maintainable.
Error, bug & failure
• Error: Any incorrect human action that produces a problem in the system is called
an error.
• Defect/Bug: A deviation of the actual behavior of the system from its
expected behavior is called a defect.
• Failure: The deviation identified by end-user while using the system is called a
failure.
Why are there bugs in software?
• Miscommunication or no communication
• Software complexity
• Programming errors
• Changing requirements
• Lack of skilled testers
• Etc.
Software Development Life Cycle (SDLC)
• SDLC, the Software Development Life Cycle, is a process used by the
software industry to design, develop and test high quality software.
• The SDLC aims to produce high quality software that meets
customer expectations.
SDLC Models
• Waterfall Model
• Incremental Model
• Spiral Model
• V-Model
• Agile Model
Waterfall Model
Incremental/Iterative Model
Spiral Model
V-Model
Agile Model
• In the Agile model, both development and testing activities are concurrent,
unlike the Waterfall model.
QA Vs QC/QE
• QA is process related; QC is the actual testing of the software.
• QA focuses on building in quality; QC focuses on testing for quality.
• QA is about preventing defects; QC is about detecting defects.
• QA is process oriented; QC is product oriented.
• QA spans the entire life cycle; QC covers the testing part of the SDLC.
Verification V/S Validation
• Verification checks whether we are building the system right.
• Verification typically involves:
• Reviews
• Walkthroughs
• Inspections
• Validation checks whether we are building the right system.
• Validation takes place after verifications are completed.
• Validation typically involves actual testing.
• System Testing
Static V/S Dynamic Testing
• Static testing is an approach to test project documents in the form of Reviews,
Walkthroughs and Inspections.
• Dynamic testing is an approach to test the actual software by giving inputs and
observing results.
Review, Walkthrough & Inspection
• Reviews:
• Conducted on documents to ensure correctness and completeness.
• Example:
Requirement Reviews
Design Reviews
Code Reviews
Test plan reviews
Test cases reviews etc.
• Walkthroughs:
• It is an informal review in which issues can be discussed/raised at
peer level.
• A walkthrough does not have minutes of the meeting; it can
happen at any time and conclude without any schedule as
such.
• Inspections:
• It is the most formal type of review, with a proper schedule.
• At least 3-8 people sit in the meeting: 1 reader, 1 writer (scribe), 1
moderator, plus concerned members.
• The inspection schedule is intimated via email to the concerned
developer/tester.
Levels of Software Testing
• Unit Testing
• Integration Testing
• System Testing
• User Acceptance Testing(UAT)
Unit Testing
• A unit is the smallest testable part of software. It usually has one
or a few inputs and usually a single output.
• Unit testing is conducted on a single program or a single module.
• Unit Testing is a white-box testing technique.
• Unit testing is conducted by the developers (a minimal code sketch
follows the techniques list below).
• Unit testing techniques:
• Basis path testing
• Control structure testing
• Conditional coverage
• Loops Coverage
• Mutation Testing
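As a minimal sketch of unit testing in Python (the function under test and all names are illustrative, not from the original deck):

# A unit test exercises one small function in isolation, using Python's
# built-in unittest module. discount_price is a hypothetical example.
import unittest

def discount_price(price, percent):
    """Return price reduced by the given percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestDiscountPrice(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(discount_price(200.0, 10), 180.0)

    def test_zero_discount_boundary(self):
        self.assertEqual(discount_price(200.0, 0), 200.0)

    def test_invalid_percent_raises(self):
        with self.assertRaises(ValueError):
            discount_price(200.0, 150)

if __name__ == "__main__":
    unittest.main()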
Integration Testing
• In Integration Testing, individual software modules are integrated
logically and tested as a group.
• Integration testing focuses on checking data communication
amongst these modules.
• Integration Testing is a white-box testing technique.
• Integration testing is conducted by the developers.
• Approaches:
• Top Down Approach
• Bottom Up Approach
• Sandwich Approach(Hybrid)
• Stub: is called by the module under test (used in the top-down approach).
• Driver: calls the module under test (used in the bottom-up approach); a
minimal sketch follows.
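A quick sketch of a stub and a driver-style check in top-down integration (all module and function names below are hypothetical):

# The order module (under test) depends on a payment module that is not
# ready yet, so a stub stands in for it and returns a canned response.
def payment_stub(amount):
    """Stub: called BY the module under test."""
    return {"status": "approved", "amount": amount}

def place_order(amount, pay=payment_stub):
    """Module under test: integrates with the payment module."""
    result = pay(amount)
    return "ORDER CONFIRMED" if result["status"] == "approved" else "ORDER FAILED"

# Driver-style check: a small piece of code that CALLS the module under test.
assert place_order(99.0) == "ORDER CONFIRMED"
print("top-down integration check passed")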
GUI Testing
• During GUI Testing, Test Engineers validate the user interface of the application for the
following aspects:
• Look & Feel
• Easy to use
• Navigations & Shortcut keys
• GUI Objects:
• Window, Dialog Box, Push Button, Radio Button, Radio Group, Tool bar, Edit Box, Text Box,
Check Box, List Box, Drop down Box, Combo Box, Tab, Tree view, progress bar, Table, Scroll
bar Etc.
Check list for GUI Testing
• It checks if all the basic elements are available on the page or not.
• It checks the spelling of the objects.
• It checks the alignment of the objects.
• It checks content display in web pages.
• It checks if the mandatory fields are highlighted or not.
• It checks consistency in background color, font color, font type and
font size, etc.
Usability Testing
• During this testing we validate whether the application provides
context-sensitive help to the user or not.
• Checking how easily the end users are able to understand
and operate the application is called usability testing.
Functional Testing
• Object Properties Coverage
• Input Domain Coverage (BVA, ECP)
• Database Testing/Backend Coverage
• Error Handling Coverage
• Calculations/Manipulations Coverage
• Links Existence & Links Execution
• Cookies & Sessions
Object Properties Testing
• Every object has certain properties.
• Ex: Enable, Disable, Focus, Text, etc.
• During Functional testing Test Engineers validate properties of
objects in run time.
Input Domain Testing
• During Input domain testing, Test Engineers validate the data
provided to the application w.r.t. value and length.
• There are two techniques in Input domain testing:
• Equivalence Class Partitioning (ECP)
• Boundary Value Analysis (BVA)
ECP & BVA
• Requirement:
• User name field allows only lower case letters, with min 6 and max 8 letters.
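For this requirement, one possible partitioning and set of boundary values (an illustrative worked example, not from the original deck):
• ECP valid class: lower case letters of length 6-8 (e.g., "abcdef").
• ECP invalid classes: upper case letters, digits, special characters, length < 6, length > 8.
• BVA on length: 5 (invalid, min-1), 6 (valid, min), 7 (valid), 8 (valid, max), 9 (invalid, max+1).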
Cookies & Sessions
• Session: Sessions are time slots which are allocated to the user at the
server side.
Non-Functional Testing
• Performance Testing
• Load Testing
• Stress Testing
• Volume Testing
• Security Testing
• Recovery Testing
• Compatibility Testing
• Configuration Testing
• Installation Testing
• Sanitation Testing
Performance Testing
• Load: Testing the speed of the system while increasing the load gradually
up to the customer-expected number of users.
• Stress: Testing the speed of the system while increasing/reducing the
load on the system, to check where it breaks.
• Volume: Checking how large a volume of data the system is able to
handle.
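A minimal load-testing sketch in Python using only the standard library (the URL and user counts are assumptions; real projects typically use dedicated tools such as JMeter or LoadRunner):

# Gradually increases the number of concurrent users and reports timings.
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "http://localhost:8080/"  # hypothetical endpoint

def one_request(_):
    start = time.perf_counter()
    with urlopen(URL, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

for users in (1, 5, 10, 25):  # ramp load up toward the expected user count
    with ThreadPoolExecutor(max_workers=users) as pool:
        timings = list(pool.map(one_request, range(users)))
    print(f"{users:>3} users: avg {sum(timings) / len(timings):.3f}s, "
          f"max {max(timings):.3f}s")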
Security Testing
• Testing security provided by the system.
• Types:
• Authentication
• Access Control/Authorization
• Encryption/Decryption
Recovery Testing
• Testing the recovery provided by the system: whether the system
recovers from an abnormal condition to a normal condition or not.
Compatibility Testing
• Testing Compatibility of the system w.r.t OS, H/W & Browsers.
• Operating System Compatibility
• Hardware Compatibility (Configuration Testing)
• Browser Compatibility
• Forward & Backward Compatibility
Installation Testing
• Testing installation of the application on customer-expected
platforms: checking installation steps, navigation, and how much
memory space is occupied.
• Check un-installation as well.
Sanitation/Garbage Testing
• Checks whether the application provides any extra/additional features
beyond the customer requirements.
Testing Terminology
• Adhoc Testing:
• Software testing performed without proper planning and
documentation.
• Testing is carried out with the knowledge of the tester about the
application and the tester tests randomly without following the
specifications/requirements.
• Monkey Testing:
• Testing the functionality randomly, without knowledge of the application and
without test cases, is called Monkey Testing.
Testing Terminology cont…
• Re-Testing:
• Testing functionality repetitively is called re-testing.
• Re-testing is introduced in the following two scenarios:
• Testing functionality with multiple inputs to confirm whether the business validations are
implemented or not.
• Testing functionality in the modified build to confirm whether the bug fixes are made correctly or
not.
• Regression Testing:
• It is a process of identifying various features in the modified build where there is a chance of
side-effects, and re-testing these features.
• New functionalities may be added to the existing system, or modifications may be made to the
existing system. It must be noted that a bug fix might introduce side-effects, and regression
testing is helpful to identify these side-effects.
Testing Terminology cont…
• Sanity Testing:
• This is basic functional testing conducted by a test engineer
whenever a build is received from the development team.
• Smoke Testing:
• This is also basic functional testing, conducted by a developer or tester
before releasing the build to the next cycle.
Testing Terminology cont…
• End to End Testing:
• Testing the overall functionalities of the system including the data
integration among all the modules is called end-to-end testing.
• Exploratory Testing:
• Exploring the application, understanding the functionalities, and
adding or modifying the existing test cases for better testing is called
exploratory testing.
Testing Terminology cont…
• Globalization Testing (or) Internationalization Testing (I18N):
• Checks whether the application has a provision for setting and changing
language, date and time format, currency, etc. If it is designed for
global users, it is called globalization testing.
• Localization Testing:
• Checks the default language, currency, date and time format, etc. If the
application is designed for a particular locality of users, it is called
localization testing.
Testing Terminology cont…
• Positive Testing (+ve):
• Testing conducted on the application with a positive approach, to determine what the
system is supposed to do, is called positive testing.
• Positive testing helps in checking whether the application justifies the customer
requirements or not.
• Negative Testing (-ve):
• Testing conducted on the application with a negative approach, to determine what the
system is not supposed to do, is called negative testing.
What is a Test Plan?
• A test plan identifies test items, the features to be tested, the testing tasks, who will do each
task, and any risks requiring contingency planning.
• It is a detail of how the test will proceed, who will do the testing, what will be tested,
how much time the test will take, and to what quality level the test will be
performed.
Test Plan Contents
• Objective
• Features to be tested
• Features not to be tested
• Test Approach
• Entry Criteria
• Exit Criteria
• Suspension & Resumption criteria
• Test Environment
• Test Deliverables & Milestones
• Schedules
• Risks & Mitigations
• Approvals
Use case, Test Scenario & Test Case
• Use Case:
• Use case describes the requirement.
• A use case contains three items:
• Actor
• Action/Flow
• Outcome
• Test Scenario:
• Possible area to be tested (What to test)
• Test Case:
• How to test
• Describes test steps, expected result & actual result.
Use Case V/s Test Case
• Use Case – describes a functional requirement; prepared by a
Business Analyst (BA).
• Test cases are derived (or written) from test scenarios. The scenarios are derived from
use cases.
• In short, a Test Scenario is ‘What to be tested’ and a Test Case is ‘How to be tested’.
• Example (a login page):
• TC1: Click the button without entering user name and password.
• TC2: Click the button after entering only the user name.
• TC3: Click the button after entering a wrong user name and wrong
password.
Positive V/s Negative Test Cases
• Requirement:
• For example, if a text box is listed as a feature, and the SRS mentions that the text box accepts
6-20 characters and only alphabets.
• Positive Test Cases:
• Textbox accepts 6 characters.
• Textbox accepts up to 20 characters in length.
• Textbox accepts any value between 6-20 characters in length.
• Textbox accepts all alphabets.
• Negative Test Cases:
• Textbox should not accept fewer than 6 characters.
• Textbox should not accept more than 20 characters.
• Textbox should not accept special characters.
• Textbox should not accept numbers.
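The same positive and negative cases can be expressed as automated checks. A minimal sketch in Python (the validation function is a stand-in for the real textbox; all names are illustrative):

# The textbox rule (6-20 characters, alphabets only) as a function, plus
# positive and negative checks using unittest's subTest for data-driven runs.
import unittest

def accepts(value):
    """Return True if value is 6-20 characters long and alphabets only."""
    return 6 <= len(value) <= 20 and value.isalpha()

class TestTextbox(unittest.TestCase):
    def test_positive_cases(self):
        for value in ("abcdef", "a" * 20, "abcdefghij"):
            with self.subTest(value=value):
                self.assertTrue(accepts(value))

    def test_negative_cases(self):
        for value in ("abcde", "a" * 21, "abc@def", "abc123"):
            with self.subTest(value=value):
                self.assertFalse(accepts(value))

if __name__ == "__main__":
    unittest.main()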
Test case Design Techniques
• Boundary Value Analysis (BVA)
• Equivalence Class Partitioning (ECP)
• Decision Table Testing
• State Transition Diagram
• Use Case Testing
Decision Table
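An illustrative decision table for a login form (the conditions, rules and actions are assumptions for demonstration):

Conditions          Rule 1   Rule 2   Rule 3   Rule 4
Valid user name?    T        T        F        F
Valid password?     T        F        T        F
Action              Login    Error    Error    Error

Each column (rule) becomes at least one test case, so all combinations of conditions are covered.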
State Transition Diagram
Use Case
Test Suite
• A Test Suite is a group of test cases which belong to the same category.
Test Case Contents
• Test Case ID
• Test Case Title
• Description
• Pre-condition
• Priority (P0, P1, P2, P3) – execution order
• Requirement ID
• Steps/Actions
• Expected Result
• Actual Result
• Test data
Test Case Template
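An illustrative filled-in test case using the contents listed above (all values are made up for demonstration):
• Test Case ID: TC_Login_001
• Test Case Title: Verify login with valid credentials
• Description: Checks that a registered user can log in successfully
• Pre-condition: User account exists
• Priority: P1
• Requirement ID: REQ_Login_01
• Steps/Actions: 1. Open the login page 2. Enter a valid user name and password 3. Click Login
• Expected Result: User is taken to the home page
• Actual Result: (filled in during execution)
• Test data: user name/password pairs from the test data sheet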
Requirement Traceability Matrix(RTM)
• RTM stands for Requirement Traceability Matrix.
• It is used for mapping requirements w.r.t. test cases.
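A small illustrative mapping (the IDs are made up for demonstration):
• REQ_001 → TC_001, TC_002, TC_003
• REQ_002 → TC_004, TC_005
Every requirement should trace to at least one test case; a requirement with no test cases against it reveals a coverage gap.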
Characteristics of good Test Case
• A good test case has certain characteristics, which are:
• It should be accurate and test what it is intended to test.
• No unnecessary steps should be included in it.
• It should be reusable.
• It should be traceable to requirements.
• It should be compliant with regulations.
• It should be independent, i.e., you should be able to execute it in any order
without any dependency on other test cases.
• It should be simple and clear; any tester should be able to understand it by
reading it once.
Test Data
• Test data is required for test cases.
• Test data is provided as input to the test cases.
• It is prepared by Test Engineers.
• It is maintained in Excel sheets.
Test Environment Setup (Test Bed)
• Test environment decides the software and hardware conditions under
which a work product is tested.
• Activities
• Understand the required architecture, environment set-up and prepare hardware
and software requirement list for the Test Environment.
• Setup test Environment and test data
• Perform smoke test on the build
• Deliverables
• Environment ready with test data set up
• Smoke Test Results.
Test Execution
• During this phase test team will carry out the testing based on the test
plans and the test cases prepared.
• Bugs will be reported back to the development team for correction and
retesting will be performed.
Defect Reporting
• Any mismatched functionality found in an application is called a
Defect/Bug/Issue.
• During test execution, Test Engineers report mismatches as defects
to developers through templates or using tools.
• Defect Reporting Tools:
• Clear Quest
• DevTrack
• Jira
• Quality Center
• Bugzilla, etc.
Defect Report Contents
• Defect_ID - Unique identification number for the defect.
• Defect Description - Detailed description of the defect including information about the module in which defect was found.
• Version - Version of the application in which defect was found.
• Steps - Detailed steps along with screenshots with which the developer can reproduce the defects.
• Date Raised - Date when the defect is raised
• Reference - where you provide references to documents like requirements, design, architecture, or even
screenshots of the error, to help understand the defect
• Detected By - Name/ID of the tester who raised the defect
• Status - Status of the defect (more on this later)
• Fixed by - Name/ID of the developer who fixed it
• Date Closed - Date when the defect is closed
• Severity - describes the impact of the defect on the application
• Priority - relates to the urgency of fixing the defect. Severity and Priority could each be High/Medium/Low,
based on the impact and on the urgency at which the defect should be fixed, respectively
Defect Management Process
Defect Classification
• Defects are categorized by Severity and Priority:
• Severity: Critical, High, Medium, Low
• Priority: P1, P2, P3
Defect Severity
• Severity describes the seriousness of defect.
• In software testing, defect severity can be defined as the degree of impact
a defect has on the development or operation of a component application
being tested.
• Defect severity can be categorized into four classes:
• Critical: This defect indicates a complete shut-down of the process; nothing can
proceed further
• High: It is a highly severe defect and collapses the system. However, certain parts of
the system remain functional
• Medium: It causes some undesirable behavior, but the system is still functional
• Low: It won't cause any major break-down of the system
Defect Priority
• Priority describes the importance of defect.
• Defect Priority states the order in which a defect should be fixed. Higher
the priority the sooner the defect should be resolved.
• Defect priority can be categorized into three classes:
• P1 (High): The defect must be resolved as soon as possible, as it affects the
system severely; the system cannot be used until it is fixed
• P2 (Medium): The defect should be resolved during the normal course of
development activities. It can wait until a new version is created
• P3 (Low): The defect is an irritant, but repair can be done once the more serious
defects have been fixed
Tips for determining Severity of a Defect
A software system can have:
• A very low severity defect with a high priority: a logo error on a shipment
website can be of low severity, as it is not going to affect the functionality of
the website, but of high priority, as you don't want any further shipment to
proceed with the wrong logo.
• The login function of a banking website does not work properly - Critical:
login is one of the main functions of the website; if this feature does not
work, it is a serious bug.
• The GUI of the website does not display correctly on mobile devices -
Medium: the defect affects users who view the website on a smartphone.
• The website could not remember the user login session - High: this is a
serious issue, since the user will be able to log in but not be able to perform
any further transactions.
• Some links don't work - Low: this is an easy fix for the development team,
and the user can still access the site without these links.
Tips for good Bug report
• Structure: test carefully (Use deliberate, careful approach to testing)
• Reproduce: test it again (rule of thumb 3 times)
• Isolate: test it differently (Change variables that may alter symptom)
• Generalize: test it elsewhere Does the same failure occur in other modules or locations?
• Compare: review results of similar tests (Same test run against earlier versions)
• Summarize: relate test to customers (Put a short “tag line” on each report)
• Condense: trim unnecessary information (Eliminate extraneous words or steps)
• Disambiguate: use clear words (Goal: Clear, indisputable statements of fact)
• Neutralize: express problem impartially (Deliver bad news gently)
• Review: be sure (Peer review)
Bug Life Cycle
Jira Defect Workflow
States of Defects
• New: A bug is reported and is yet to be assigned to developer
• Open: The test lead approves the bug
• Assigned: Assigned to a developer and a fix is in progress
• Need more info: When the developer needs information to reproduce the bug or to fix the bug
• Fixed: Bug is fixed and waiting for validation
• Closed: The bug is fixed, validated and closed
• Rejected: Bug is not genuine
• Deferred: Fix will be placed in future builds
• Duplicate: Two bugs mention the same concept
• Invalid: Not a valid bug
• Reopened: Bug still exists even after the bug is fixed by the developer
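As a sketch, the movement between these states can be modeled and checked in code. The transition map below is one reasonable reading of the workflow above, not a tool-defined standard:

# Defect life-cycle sketch: which state changes the workflow allows.
# The exact set of allowed transitions is an assumption for illustration.
TRANSITIONS = {
    "New":            {"Open", "Rejected", "Duplicate", "Invalid"},
    "Open":           {"Assigned", "Deferred"},
    "Assigned":       {"Need more info", "Fixed"},
    "Need more info": {"Assigned"},
    "Fixed":          {"Closed", "Reopened"},
    "Reopened":       {"Assigned"},
}

def move(current, new_state):
    """Validate a state change against the workflow; raise if illegal."""
    if new_state not in TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition: {current} -> {new_state}")
    return new_state

state = "New"
for nxt in ("Open", "Assigned", "Fixed", "Closed"):
    state = move(state, nxt)
print("defect reached state:", state)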
Test Cycle Closure
• Activities
• Evaluate cycle completion criteria based on Time, Test coverage, Cost, Software, Critical Business
Objectives, Quality
• Prepare test metrics based on the above parameters.
• Document the learning out of the project
• Prepare Test closure report
• Qualitative and quantitative reporting of quality of the work product to the customer.
• Test result analysis to find out the defect distribution by type and severity.
• Deliverables
• Test Closure report
• Test metrics
QA/Testing Activities
• Understanding the requirements and functional specifications of the application.
• Identifying required Test Scenarios.
• Designing Test Cases to validate application.
• Execute Test Cases to validate the application.
• Log Test results ( How many test cases pass/fail ).
• Defect reporting and tracking.
• Retest fixed defects of previous build
• Perform various type of testing assigned by Test Lead (Functionality, Usability, User Interface
and compatibility) etc.,
• Reports to Test Lead about the status of assigned tasks
• Participate in regular team meetings.
• Creating automation scripts for Regression Testing.
• Provides recommendation on whether or not the application / system is ready for production.
Test Metrics
SNO Required Data
1 No. of Requirements
2 Avg. No. of Test Cases written per Requirement
3 Total No. of Test Cases written for all Requirements
4 Total No. of Test Cases Executed
5 No. of Test Cases Passed
6 No. of Test Cases Failed
7 No. of Test Cases Blocked
8 No. of Test Cases Un-Executed
9 Total No. of Defects Identified
10 Critical Defects Count
11 High Defects Count
12 Medium Defects Count
13 Low Defects Count
14 Customer Defects
15 No. of Defects found in UAT
• % of Test cases Executed:
• (No. of Test cases executed / Total No. of Test cases written) * 100
• Defect Leakage:
• (No. of defects found in UAT / No. of defects found in Testing) * 100
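A quick worked example of both formulas in Python (the counts are made up for illustration):

# Sample counts, then the two metrics defined above.
executed, written = 180, 200
uat_defects, testing_defects = 5, 95

pct_executed = executed / written * 100                # 90.0
defect_leakage = uat_defects / testing_defects * 100   # ~5.3

print(f"% of test cases executed: {pct_executed:.1f}%")
print(f"Defect leakage: {defect_leakage:.1f}%")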