Testing Training (1) - Jan - Batch


Application software

System

Firmware

Programming

Driver

Freeware

Shareware

Open source software

Closed source software

What is testing?

Testing is done to ensure that the customer or client is satisfied

Testing is done to find defects in an application

Defect/bug: the variance between the expected and the actual result is known as a bug

Product: a ready-to-use solution that is built by the company and sold to different customers

A product is built for the market, based on market surveys

Project: a project takes requirements from a customer to build a solution

A project is for a particular client/customer

Purpose of testing

To prevent defects

To make sure that the end result meets business and user requirements

To ensure that it satisfies the BRS (BUSINESS REQUIREMENT SPECIFICATION) and SRS (SYSTEM REQUIREMENT SPECIFICATION)

To satisfy the customer by giving quality product

Software Quality

To meet the customer's requirements so as to satisfy the customer

Error: a mistake in coding is called an error

Failure: when a defect reaches the end customer, it is called a failure


When to start testing?

SDLC: Software Development Life Cycle

Steps taken to build the product starting from scratch, i.e., from gathering requirements till deploying the product

Phases: requirement gathering (the customer/client gives the requirements) - handled by the business analyst

BRS: BUSINESS REQUIREMENT SPECIFICATION

SRS: SOFTWARE/SYSTEM REQUIREMENT SPECIFICATION -> review

Planning

Design-review

Coding-

Testing

Deploy

Maintenance

Loss of time, money, raw materials (resources), effort…

Conclusion: testing has to be conducted at each and every phase of SDLC

Testing has to start from first phase itself and it has to be conducted at each and every
stage of SDLC

Stage containment: testing at each and every stage of the SDLC is known as stage containment

Without stage containment: loss of time, money, raw materials (resources), effort…

Testing

1. Static testing: reviews, walkthrough, inspection

Testing which is conducted without executing the code

Static testing is also known as verification testing

Though nothing is executed, tools are sometimes used

How static testing is conducted: reviews, walkthroughs, and inspections

Artifact: the document which has to be reviewed is the artifact in this scenario

A. Peer review: desk checking

Peer means a person of the same level

An informal way of review

B. Walkthrough: semi-formal way of review

Team size: 3 to 5 people

Some experienced people will also be present in the meeting

C. Technical reviews

D. Inspection: most formal way of conducting the review

Team size: 3 to 5 people

Very structured; planning is required

Certain roles need to be played:

1. Moderator: arranges and leads the meeting

2. Author: the person whose artifact is to be reviewed

3. Inspector: reviews the artifact during the meeting

E. Static code analysis

2. Dynamic testing: testing done by execution

A. White box testing: a testing strategy in which test design is based on the internal logic, structure and implementation of the AUT (Application Under Test)

Other names: structural testing, glass box testing, clear box testing, open box testing

Applied in unit and integration testing

Techniques:

1. Statement coverage

Execution of all statements of the source code at least once. It is used to calculate the number of executed statements in the source code out of the total statements present in the source code

Statement coverage = (no. of executed statements / total no. of statements) * 100

test(int a) {
    if (a > 4)
        a = a * 3;
    print(a);
}

Conclusion: a minimum of 1 test case is enough to achieve 100% statement coverage

Test data: a = 5

Test data/input data: the data generated for testing purpose

100% branch/decision coverage implies 100% statement coverage, but 100% statement coverage does not imply 100% branch coverage. Branch coverage and decision coverage are the same.

2. Decision coverage

A minimum of 2 test cases is compulsory for 100% coverage

For each if statement both the true and the false outcome must be checked compulsorily

One test case checks the true outcome and another checks the false outcome

if (a || b) {
    test1 = true;
} else {
    if (c) {
        test2 = true;
    }
}

Test data: a = -1, b = 0, c = 3

With this single test, (a || b) is true (a is non-zero), so only 1 of the 4 decision outcomes is exercised:

DC = (1 / 4) * 100 = 25%
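As a rough check of the decision-coverage arithmetic, the nested-if snippet can be mirrored in Python (`classify` and the extra test data are illustrative; note that -1 is truthy in Python, just as any non-zero value is true in C):

```python
def classify(a, b, c):
    # mirror of the nested-if snippet above (illustrative)
    test1 = test2 = False
    if a or b:        # decision 1
        test1 = True
    else:
        if c:         # decision 2, reached only when decision 1 is false
            test2 = True
    return test1, test2

# a = -1, b = 0, c = 3: decision 1 is true, so only test1 is set
assert classify(-1, 0, 3) == (True, False)

# two more tests cover the remaining decision outcomes:
assert classify(0, 0, 3) == (False, True)   # decision 1 false, decision 2 true
assert classify(0, 0, 0) == (False, False)  # both decisions false
```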

3. Branch

4. Condition

5. Multiple condition

6. Finite state machine

7. Path coverage

8. Control flow testing

9. Data flow testing


10. Loop coverage

B. black box testing:

Why black box needed?

Exhaustive testing is not possible due to time and budget so black box testing techniques helps to
reduce the test cases and also helps in conducting the testing in effective way.

Technique :

1. Equivalence partitioning:

Applied to all levels of testing, like unit, integration, system, etc.

In this technique the input data is divided into equivalent partitions, and a test case derived from each partition represents the whole partition. This reduces the time required for testing because of the smaller number of test cases.

Question: a bank offers 5% interest on balances from $0 to $100, 10% interest on balances over $100 up to $10,000, and 20% interest on balances over $10,000
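One way to answer the question is to pick a single representative value from each partition. A minimal sketch, assuming a hypothetical `interest_rate` function implementing the stated rules:

```python
def interest_rate(balance):
    # hypothetical implementation of the bank's stated rules
    if 0 <= balance <= 100:
        return 0.05   # $0 to $100 -> 5%
    elif balance <= 10_000:
        return 0.10   # over $100 up to $10,000 -> 10%
    else:
        return 0.20   # over $10,000 -> 20%

# one representative value from each equivalence partition is enough:
assert interest_rate(50) == 0.05
assert interest_rate(5_000) == 0.10
assert interest_rate(20_000) == 0.20
```

Three test cases (one per partition) stand in for every possible balance, which is the point of the technique.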

2. boundary value analysis:

Testing between extreme ends or boundaries between partitions of the input values.

If (Min, Max) is the range given for a field validation, then the boundary values come as follows:

Invalid boundary values: {Min-1, Max+1}

Valid boundary values: {Min, Min+1, Max-1, Max}

Question:

Requirements: the user name field allows only lowercase letters, minimum 6 and maximum 8 characters
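For this requirement Min = 6 and Max = 8, so the boundary values are 5, 6, 7, 8 and 9. A minimal sketch, assuming a hypothetical `is_valid_username` validator:

```python
def is_valid_username(name):
    # hypothetical validator: only lowercase letters, length 6 to 8
    return 6 <= len(name) <= 8 and name.isalpha() and name.islower()

assert not is_valid_username("abcde")      # Min-1 = 5 -> invalid
assert is_valid_username("abcdef")         # Min   = 6 -> valid
assert is_valid_username("abcdefg")        # Min+1 = Max-1 = 7 -> valid
assert is_valid_username("abcdefgh")       # Max   = 8 -> valid
assert not is_valid_username("abcdefghi")  # Max+1 = 9 -> invalid
```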

3. Decision Table Testing:

Example: user id valid AND password valid -> execute (login successful)

If either field is blank or invalid in any combination, then show "invalid credentials"
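The login example can be written out as a full decision table, one rule per combination of conditions (the rule table and names here are illustrative):

```python
# each rule maps (user id valid?, password valid?) to the expected action
RULES = {
    (True,  True):  "login successful",
    (True,  False): "invalid credentials",
    (False, True):  "invalid credentials",
    (False, False): "invalid credentials",
}

def login_outcome(user_id_valid, password_valid):
    return RULES[(user_id_valid, password_valid)]

assert login_outcome(True, True) == "login successful"
assert login_outcome(True, False) == "invalid credentials"
assert login_outcome(False, False) == "invalid credentials"
```

Each rule in the table becomes one test case, so the technique guarantees every combination of conditions is covered exactly once.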

4. State Transition Testing:


State Transition Testing is basically a black box testing technique that is carried out to observe
the behavior of the system or application for different input conditions passed in a sequence. In
this type of testing, both positive and negative input values are provided and the behavior of the
system is observed
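A common illustration is a PIN entry that blocks the card after three wrong attempts; the states, events and transition table below are made up for this sketch:

```python
# transition table: (current state, input event) -> next state
TRANSITIONS = {
    ("start",     "correct"): "logged_in",
    ("start",     "wrong"):   "attempt_2",
    ("attempt_2", "correct"): "logged_in",
    ("attempt_2", "wrong"):   "attempt_3",
    ("attempt_3", "correct"): "logged_in",
    ("attempt_3", "wrong"):   "blocked",
}

def run(events):
    # feed a sequence of inputs to the machine and return the final state
    state = "start"
    for event in events:
        state = TRANSITIONS[(state, event)]
    return state

# positive input sequence: a wrong attempt followed by the correct PIN
assert run(["wrong", "correct"]) == "logged_in"
# negative input sequence: three wrong attempts block the card
assert run(["wrong", "wrong", "wrong"]) == "blocked"
```

Test cases are sequences of events, so both valid and invalid sequences are exercised, matching the definition above.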

C. grey box testing

What are the reasons for bugs in software?

Changing requirement

Programming errors

Miscommunication or no communications

Software complexity

Time pressures

Lack of skilled testers

Why do projects fail?


1. Poorly defined project scope
2. Inadequate risk management
3. Failure to identify key assumptions
4. Project managers who lack experience and training
5. No use of formal methods and strategies
6. Lack of effective communication at all levels
7. Key staff leaving the project and/or company
8. Poor management of expectations
9. Ineffective leadership
10. Lack of detailed documentation
11. Failure to track requirements
12. Failure to track progress

SDLC Models

1. Waterfall model

Requirement—design—development—testing—deployment—maintenance

Suitable for projects where the requirements are very clear; not suitable for changing or unclear requirements

Advantage: very little chance of a defect being carried from one stage to another
Drawbacks: a very rigid, inflexible model; testing starts at a very late stage; defects found at a later stage are very difficult to fix

Prototyping model

A prototype of the end product is first developed, then tested and refined as per customer feedback repeatedly till a final acceptable prototype is achieved, which forms the basis for developing the final product

Iterative model

Suitable when:

Requirements are clearly defined and understood

A new technology is being used and is being learnt by the development team while working on the project

There is a time-to-market constraint

Spiral model

Suitable for:

Projects in which frequent releases are necessary

Medium- to high-risk projects

Projects where a long-term commitment is not feasible due to possibly changing economic priorities

V-shape model:

4 level of testing:

1. Unit testing/module testing: Testing each unit/module separately

2. Integration testing: integrating all the units and checking if the units have been integrated in proper
order and checking the correct flow of data from one module to another module.

3. System testing: Checking the functionality of the product as a whole

4. User acceptance testing: checking whether the product is according to the customer/client specification or not, so that the client accepts the product

-------------------------------------------------------------------------------------------------------------------------------------

SEVEN testing principles

Positive testing: checking for valid conditions is known as positive testing

Negative testing: checking for invalid conditions is known as negative testing

1. Testing shows the presence of defects:


The objective of testing is to find more and more hidden defects using different techniques and methods.
Testing can reveal undiscovered defects, but if no defects are found it does not mean that the software is defect free.
Testing can show that defects are present but cannot prove that there are no defects.
2. Exhaustive testing is not possible:
It is not possible to test all the functionalities with all valid and invalid combinations of input data during actual testing, i.e., to test the product in every possible scenario.
Instead of this approach, testing of a few combinations is considered based on priority, using different techniques; instead of exhaustive testing, we use risks and priorities to focus testing efforts.
Exhaustive testing would take unlimited effort, and most of that effort would be ineffective.
3. Early testing:
Testers need to get involved at an early stage of the Software Development Life Cycle (SDLC). Thus defects introduced during the requirement analysis phase, or any documentation defects, can be identified early. The cost involved in fixing such defects is much lower compared to those found during the later stages of testing.
4. Defect clustering:
During testing, it may happen that most of the defects found are related to a small number of modules. There might be multiple reasons for this: the modules may be complex, the coding related to such modules may be complicated, etc.
This is the Pareto Principle of software testing: 80% of the problems are found in 20% of the modules.

5. Pesticide paradox:
Pesticide Paradox principle says that if the same set of test cases are executed again
and again over the period of time then these set of tests are not capable enough to
identify new defects in the system.
In order to overcome this “Pesticide Paradox”, the set of test cases needs to be
regularly reviewed and revised. If required a new set of test cases can be added and
the existing test cases can be deleted if they are not able to find any more defects
from the system.
6. Testing is context dependent:
Different domains are tested differently, thus testing is purely based on the context of domain
or application. testing is done differently in different contexts. For example, safety-critical
software is tested differently from an e-commerce site
7. Absence of errors fallacy:
If the software is fully tested and no defects are found before release, then we can say that the software is 99% defect free. But what if this software was tested against wrong requirements? In such cases, even finding defects and fixing them on time would not help, as testing was performed on wrong requirements which are not as per the needs of the end user.

-------------------------------------------------------------------------------------------------------------------------------------

Agile

Scrum

Kanban

--------------------------------------------------------------------------------------------------------------------------------
How many test cases are needed for 100 % statement coverage?--1

How Many test cases are needed for 100 % decision coverage?---2

Read p
Read q

If p + q > 100 then

Print "large"

Endif

If p > 50 then

Print "p large"

Endif

Test data 1: p = 71, q = 30 (both decisions true)

Test data 2: p = 30, q = 70 (both decisions false)

SC -- 1 (test data 1 alone executes every statement)

DC -- 2 (test data 1 and test data 2 together cover both outcomes of each decision)
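The exercise can be checked mechanically with a Python version of the pseudocode (`program` is an illustrative name):

```python
def program(p, q):
    # Python version of the pseudocode exercise above
    output = []
    if p + q > 100:
        output.append("large")
    if p > 50:
        output.append("p large")
    return output

# test 1: p = 71, q = 30 -> both decisions true, every statement runs,
# so one test case gives 100% statement coverage
assert program(71, 30) == ["large", "p large"]

# test 2: p = 30, q = 70 -> both decisions false; together with test 1
# all four decision outcomes are covered -> 100% decision coverage
assert program(30, 70) == []
```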

-----------------------------------------------------------------------------------------------------------------------------------------

Integration testing:

Top down integration testing:

The higher-level modules are tested first, and then the lower-level modules are tested and integrated, in order to check the software functionality

main() - calling function
    Add(); - call to a developed module
    Sub(); - call to a module still under development
    Mul();

int Add(int a, int b) - called function
    int c;
    c = a + b;

int Sub(int a, int b) - called function, not yet developed
    return 0; - stub (hard-coded dummy response)

Bottom up integration testing:


main() - calling function, not yet developed
    Add();
    Sub();

Driver - dummy calling code that stands in for main()

int Add(int a, int b) - called function
    int c;
    c = a + b;

int Sub(int a, int b) - called function
    int c;
    c = a - b;

Stubs and Drivers :


Stubs and drivers are dummy stand-ins for modules that are still in development, missing, or not yet developed, so that the need for such modules can be met during testing. Drivers and stubs simulate the features and functionality that a module would provide. This avoids needless delay and makes the testing process faster.
• Stubs are mainly used in Top-Down integration testing while the Drivers are used in
Bottom-up integration testing, thus increasing the efficiency of testing process.
Stubs:

Used in top-down integration testing

Known as "called programs"

Stand in for modules of the software that are still under development

Drivers:

Used in bottom-up integration testing

Known as "calling programs"

Used to invoke the component that needs to be tested
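The two roles can be sketched in Python; all the function names here are made up for illustration:

```python
# top-down: the high-level module exists, the called module does not yet,
# so a stub stands in for it
def add_stub(a, b):
    return 0  # hard-coded dummy response, like the 'return 0' stub above

def main_logic(add=add_stub):
    # high-level module under test; calls whatever Add() it is given
    return add(2, 3)

assert main_logic() == 0  # the main logic is testable before Add() exists

# bottom-up: the low-level module exists, its caller does not yet,
# so a driver invokes it
def add(a, b):
    return a + b

def driver():
    # dummy calling code used only to exercise add()
    return add(2, 3)

assert driver() == 5
```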

Sandwich approach:
Combination of top down and bottom up
Big bang testing:

In which all the components or modules are integrated together at once and then tested as a unit. This
combined set of components is considered as an entity while testing. If all of the components in the unit
are not completed, the integration process will not execute.

Advantages: convenient for small systems

Acceptance testing

User acceptance testing is done to verify that the product meets customer requirements.
Alpha testing.
Another subset of acceptance testing, alpha testing uses internal team members to evaluate
the product. These team members should be knowledgeable of the project but not directly
involved in its development or testing. Where some builds might still be somewhat unstable,
alpha testing provides an immediate subset of testers to root out major bugs before the
software is seen by external users.
• Alpha testing example: In this functional testing example, a casino games provider releases a
new version of its app that includes video poker. The organization compiles a cross-functional
group of internal users that test whether the app functions correctly on their devices and how
the user experience can improve.
•Beta testing.
After the internal team tests the product and fixes bugs, beta testing occurs with a select
group of end users. Beta testing serves as a soft launch, enabling you to get feedback from
real users who have no prior knowledge of the app. Beta testing enables you to gather
feedback from unbiased users who may interact with the product differently than you
intended, perhaps identifying critical unknown bugs before release to a wide user base.
Database testing:
Database testing checks the integrity and consistency of data by verifying the schema, tables, triggers,
etc., of the application’s database that is being tested. In Database testing, we create complex queries to
perform the load or stress test on the database and verify the database’s responsiveness.

Performance testing:

Performance testing is a software testing process used for testing the speed, response time, stability, reliability, scalability and resource usage of a software application under a particular workload

Load testing – checks the application’s ability to perform under anticipated user loads. The objective is
to identify performance bottlenecks before the software application goes live.

•Stress testing – involves testing an application under extreme workloads to see how it handles high
traffic or data processing. The objective is to identify the breaking point of an application.

•Endurance testing – is done to make sure the software can handle the expected load over a long period
of time.

•Spike testing – tests the software’s reaction to sudden large spikes in the load generated by users.

•Volume testing – Under volume testing, a large volume of data is populated in the database and the overall software system's behavior is monitored. The objective is to check the software application's performance under varying database volumes.

•Scalability testing – The objective of scalability testing is to determine the software application’s
effectiveness in “scaling up” to support an increase in user load. It helps plan capacity addition to your
software system.

Principle of Security Testing:

Below are the six basic principles of security testing:

•Confidentiality

•Integrity

•Authentication

•Authorization

•Availability

•Non-repudiation

•Encryption/Decryption

Authentication and authorization:

In the authentication process, the identity of the user is checked before providing access to the system, using a username and password. In the authorization process, the user's authorities for accessing resources are checked, like read permission, write permission and execute permission. Authentication is done before the authorization process, whereas authorization is done after the authentication process.
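The ordering (authentication first, then authorization) can be sketched as follows; the user store and permissions are made-up test data:

```python
# made-up test data for the sketch
USERS = {"alice": "s3cret"}
PERMISSIONS = {"alice": {"read", "write"}}

def authenticate(username, password):
    # step 1: verify identity via username and password
    return USERS.get(username) == password

def authorize(username, permission):
    # step 2: check the user's rights; meaningful only after step 1
    return permission in PERMISSIONS.get(username, set())

assert authenticate("alice", "s3cret")
assert not authenticate("alice", "guess")
assert authorize("alice", "read")
assert not authorize("alice", "execute")
```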
Recovery testing
it is the activity of testing how well an application is able to recover from crashes, hardware failures and
other similar problems

Compatibility testing

It is part of non functional testing

Checking the functionality of an application on different software, hardware platforms, network, and
browsers is known as compatibility testing

Sanitation Testing:

It is also called garbage testing. During this test the test engineers look for extra features in the application build with respect to the customer requirements.

There are 2 types of enhancements

1) Application enhancements:

It is the process of analyzing whether the design permits new designs and changes to the existing features.

2) Performance enhancements:

To analyze whether the application supports a load beyond what is expected, without major changes in performance.

(Actually this is done by a performance tester)

REGRESSION TESTING:

•Repeated testing of an already tested program, after modification, to discover any defects introduced
or uncovered as a result of the changes in the software being tested or in another related or unrelated
software components.

•Usually, we do regression testing in the following cases:

1.New functionalities are added to the application

2.Change Requirement (In organizations, we call it as CR)

3.Defect Fixing

4.Performance Issue Fix

5.Environment change (E.g.. Updating the DB from MySQL to Oracle)

Retesting

Testing the same test cases again to confirm that the bug has been fixed

Re-running test cases that failed in the last run, to verify that they pass after the bugs are fixed, in order to confirm the success of the corrective actions

To ensure that the defects which were found and reported in an earlier build have been fixed in the current build

Adhoc testing:

Software testing performed without proper planning and documentation

Testing is carried out using the tester's knowledge of the application; the tester tests randomly without following the specifications/requirements

Monkey Testing:

Testing the functionality randomly, without knowledge of the application

Smoke testing:

Smoke testing is the means by which a software build’s stability is tested. It’s used to determine whether
a new build is ready for the next phase of testing, or whether that new build needs to be made stable
first.

•smoke test: A test suite that covers the main functionality of a component or system to determine
whether it works properly before planned testing begins.

Sanity testing:

At times the testing is even done randomly with no test cases. But remember, the sanity test should only
be done when you are running short of time, so never use this for your regular releases. Theoretically,
this testing is a subset of Regression Testing.

Exploratory testing: an informal approach in which the tester simultaneously learns the application, designs tests and executes them

Graphical User Interface Testing (GUI)

Graphical User Interface Testing (GUI) Testing is the process for ensuring proper functionality of the
graphical user interface (GUI) for a specific application. GUI testing generally evaluates a design of
elements such as layout, colors and also fonts, font sizes, labels, text boxes, text formatting, captions,
buttons, lists, icons, links, and content. GUI testing processes may be either manual or automatic and
are often performed by third-party companies, rather than developers or end users.

Usability testing:

Testing to determine the extent to which the software product is understood, easy to learn, easy to operate and attractive to users under specified conditions

Install/Uninstall Testing

Installation and Uninstallation Testing is done on full, partial, or upgraded install/uninstall processes on
different operating systems under different hardware or software environments.

Production testing
• Globalization Testing (or) Internationalization Testing (I18N):

• Checks whether the application provides for setting and changing languages, date and time formats, currency, etc. If it is designed for global users, this is called globalization testing.

Localization Testing:

• Checks the default language, currency, date and time format, etc. If the application is designed for a particular locality of users, this is called localization testing.

Functional requirements:

What does the system do?

Non-functional requirements:

How will the system do what it's designed to do?

These requirements describe how each feature should behave under what conditions, what limitations
there should be, and so on.

These are the constraints on the functions provided by the system.

The constraints, like how many processes the system can handle (performance), what are the (security)
issues the system needs to take care of such as SQL injections …

The rate of failure (reliability), what are the languages and tools that will be used (development), what
are rules you need to follow to ensure the system operates within the law of the organization
(legislative).

Requirement engineering

It is a process of gathering and defining what service should be provided by the system

Feasibility study-useful to the business

Check worth implementing and if it can be implemented under the current budget, technical skills,
schedule and if it does contribute to the whole organization objectives or not etc.,

Elicitation and analysis-discovering requirements

Specification-standard format(BRS)

Validation-customer satisfaction
It focuses on assessing if the system is useful to the business (feasibility study), discovering requirements
(elicitation and analysis), converting these requirements into some standard format (specification), and
checking that the requirements define the system that the customer wants (validation).

User requirements:

BRS

It describes the services that the system should provide and the constraints under which it must
operate. We don’t expect to see any level of detail, or what exactly the system will do, It’s more of
generic requirements.

It’s usually written in a natural language

System requirements:

SRS

The system requirements mean a more detailed description of the system services and the operational
constraints such as how the system will be used and development constraints such as the programming
languages.

This level of detail is needed by those who are involved in the system development, like engineers,
system architects, testers, etc

However, more specific functional system requirements describe the system functions, its inputs,
processing; how it’s going to react to a particular input, and what’s the expected output.

Non-functional and functional requirements are dependent

The 4 processes of requirement engineering:

1. Feasibility study
2. Requirement gathering
3. Software requirement specification
4. Software requirement validation

Feasibility study

When the client approaches the organization for getting the desired product developed, it comes up
with rough idea about what all functions the software must perform and which all features are expected
from the software.

Referring to this information, the analysts do a detailed study of whether the desired system and its functionality are feasible to develop.

This feasibility study is focused towards goal of the organization. This study analyzes whether the
software product can be practically materialized in terms of implementation, contribution of project to
organization, cost constraints and as per values and objectives of the organization. It explores technical
aspects of the project and product such as usability, maintainability, productivity and integration ability.

The output of this phase should be a feasibility study report that should contain adequate comments
and recommendations for management about whether or not the project should be undertaken.

Requirement Gathering

If the feasibility report is positive towards undertaking the project, next phase starts with gathering
requirements from the user. Analysts and engineers communicate with the client and end-users to know
their ideas on what the software should provide and which features they want the software to include.

Software Requirement Specification

SRS is a document created by system analyst after the requirements are collected from various
stakeholders.

SRS defines how the intended software will interact with hardware, external interfaces, speed of
operation, response time of system, portability of software across various platforms, maintainability,
speed of recovery after crashing, Security, Quality, Limitations etc.

SRS should come up with following features:

User Requirements are expressed in natural language.

Technical requirements are expressed in structured language, which is used inside the organization.

Design description should be written in Pseudo code.

Format of Forms and GUI screen prints.

Conditional and mathematical notations for DFDs etc.

Software Requirement Validation:

After requirement specifications are developed, the requirements mentioned in this document are
validated. User might ask for illegal, impractical solution or experts may interpret the requirements
incorrectly. This results in huge increase in cost if not nipped in the bud. Requirements can be checked
against following conditions -

If they can be practically implemented

If they are valid and as per functionality and domain of software

If there are any ambiguities

If they are complete

If they can be demonstrated

Test independence
As this type of testing is mainly performed by individuals, who are not related to the project directly or
are from a different organization, they are hired mainly to test the quality as well as the effectiveness of
the developed product. Test independence, therefore, helps developers and other stakeholders get
more accurate test results, which helps them build a better software product, with innovative and
unique features and functionality

What makes good requirements?

1. Requirement should be complete with no missing information


2. Requirement should be Consistent-should not contradict any other requirement
3. Requirement should be traceable-should meet business need as stated by the stake holder
4. Unambiguous
5. verifiable - the implementation of the requirement can be determined through one of 4 possible
methods – inspection, demonstration, test or analysis.

Functional testing

It is a kind of black-box testing that is performed to confirm that the functionality of an application or
system is behaving as expected.

It is to verify all the functionality of an application

Types of functional testing

1. unit
2. component
3. smoke
4. sanity
5. regression
6. integration
7. API
8. UI
9. System
10. White box
11. Blackbox
12. Acceptance
13. Alpha
14. Beta
15. Production

Entry criteria: the set of conditions (preconditions) required to start testing (any kind of testing)

1. Requirement specification documents is defined and approved


2. Test cases have been prepared
3. Test data has been created
4. The environment for testing is ready, all the tools that are required are available and ready
5. Complete or partial application is developed and unit tested and is ready for testing

Exit criteria: the set of conditions required to conclude testing (any kind of testing)


1. Execution of all the functional test cases has been completed.
2. No critical or P1, P2 bugs are open.
3. Reported bugs have been acknowledged

Steps Involved

The various steps involved in this testing are mentioned below:

1. The very first step involved is to determine the functionality of the product that needs to be
tested and it includes testing the main functionalities, error condition, and messages, usability
testing i.e. whether the product is user-friendly or not, etc.,
2. The next step is to create the input data for the functionality to be tested as per the
requirement specification.
3. Later, from the requirement specification, the output is determined for the functionality under
test.
4. Prepared test cases are executed.
5. Actual output i.e. the output after executing the test case and expected output (determined
from requirement specification) are compared to find whether the functionality is working as
expected or not.

Non Functional Testing:

1. Performance
Load
Stress
Volume Tests.
2. Security Tests.
3. Upgrade & Installation Tests.
4. Recovery Tests
5. Compatibility testing
6. Configuration testing
7. Sanitation testing
Black box Testing Types

1. GUI Testing
2. Usability Testing
3. Functional Testing
4. Non Functional Testing

QA Vs QC

Definition: QA is a set of activities for ensuring quality in the processes by which products are developed. QC is a set of activities for ensuring quality in products; these activities focus on identifying defects in the actual products produced.

Focus: QA aims to prevent defects, with a focus on the process used to make the product; it is a proactive quality process. QC aims to identify (and correct) defects in the finished product; quality control, therefore, is a reactive process.

Goal: The goal of QA is to improve development and test processes so that defects do not arise while the product is being developed. The goal of QC is to identify defects after a product is developed and before it is released.


How: QA establishes a good quality management system, assesses its adequacy, and performs periodic conformance audits of the operations of the system. QC finds and eliminates sources of quality problems through tools and equipment so that the customer's requirements are continually met.

What: QA is the prevention of quality problems through planned and systematic activities, including documentation. QC comprises the activities or techniques used to achieve and maintain the quality of the product, process and service.

What is CMM and TMM and their different levels?

Verification Vs Validation

Verification includes checking documents, design, code and programs. Validation includes testing and validating the actual product.

Verification is static testing. Validation is dynamic testing.

Verification does not include execution of the code. Validation includes execution of the code.

Methods used in verification are reviews, walkthroughs, inspections and desk-checking. Methods used in validation are black box testing, white box testing and non-functional testing.

Verification checks whether the software conforms to specifications. Validation checks whether the software meets the requirements and expectations of the customer.

Positive testing & Negative Testing

Positive testing is a process where the system or an application is tested against the valid input data.

Negative testing is also known as error path testing or failure testing.

It is implemented to check how gracefully the application can handle invalid input or unexpected user
behaviour.
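The two ideas above can be sketched with a small, hypothetical input validator (the `validate_age` function and its range limits are invented for illustration):

```python
# Hypothetical validator: accepts an integer age between 1 and 120.
def validate_age(value):
    if not isinstance(value, int) or isinstance(value, bool):
        return False  # graceful handling of wrong types, no crash
    return 1 <= value <= 120

# Positive testing: valid input checked against expected behaviour.
assert validate_age(25) is True

# Negative testing: invalid input must be handled gracefully.
assert validate_age(-5) is False     # out of range
assert validate_age("abc") is False  # wrong type
assert validate_age(None) is False   # missing value
```

Positive tests confirm the happy path; negative tests confirm the application does not crash on bad data.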

Quality: the degree to which a component, system or process meets specified requirements and
customer/user needs and requirements

What is root cause analysis?

Finding out the real reason of the defect/problem

Debugging: the process of finding, analysing and removing the causes of failures or defects

Testing : finding the defects/bugs

Test independence: separation of responsibilities, which encourages the accomplishment of objective
testing. A degree of independence avoids author bias and is often more effective at finding defects.

STLC (Software Testing Life Cycle):

1. Requirement analysis
2. Test planning
3. Test case development
4. Environment setup
5. Test execution
6. Test cycle closure

Test Plan
 A test plan is a detailed document which describes software testing areas and activities. It outlines the
test strategy, objectives, test schedule, required resources (human resources, software, and hardware),
test estimation and test deliverables.
 The test plan is the basis of all software testing.
 A document describing the scope, approach, resources and schedule of intended test activities

Test plan template

1. Test plan identifier


2. Introduction
3. Test items
4. Features to be tested
5. Features not to be tested
6. Approach
7. Item pass/fail criteria
8. Test deliverables
9. Test task
10. Environment needs
11. Responsibilities
12. Staffing and training needs
13. Schedule
14. Risks
15. Approvals

Test level: A group of test activities that are organized and managed together.

Test strategy: a high level description of the test levels for an organization

Test approach: the implementation of the test strategy for the specific project

Factors which help in defining the strategies:

1. Risks: Testing is about risk management, so consider the risks and level of risks

2. Skills: Which skills the testers possess and lack.

3. Objective

4.Business

Test manager/test lead: the person responsible for project management of testing activities and
resources. The test lead tends to be involved in planning, monitoring and controlling the test activities.

Risk Based Testing

 RBT is testing carried out based on product risks. RBT is to find out well in advance, as what is the
risk of the failure of a particular feature or functionality in the production and its impact on the
business in terms of cost and other damages by using a prioritization technique for the test cases.
 Hence, Risk-Based testing uses the principle of prioritizing the tests of the features, modules, and
functionalities of a product or software. The prioritization is based on the risk of the likelihood of
failure of that feature or functionality in production and its impact on the customers.

Attributes of good tester

#1) Be Skeptical

#2) Don’t Compromise On Quality

#3) Ensure End-User Satisfaction

#4) Think from the Users Perspective

#5) Prioritize Tests

#6) Never Promise 100% Coverage

#7) Be Open to Suggestions

#8) Start Early

#9) Identify and Manage Risks

#10) Do Market Research

#11) Develop Good Analyzing Skill

#12) Focus on the Negative Side as well

#13) Be a Good Judge of Your Product

#14) Learn to Negotiate

#15) Stop the Blame Game

#16) Finally, Be a Good Observer

•PUBLIC - Certified software testers shall act consistently with the public interest.

•CLIENT AND EMPLOYER - Certified software testers shall act in a manner that is in the best interests
of their client and employer, consistent with the public interest.

•PRODUCT - Certified software testers shall ensure that the deliverables they provide (on the
products and systems they test) meet the highest professional standards possible.

•JUDGMENT - Certified software testers shall maintain integrity and independence in their
professional judgment.

•MANAGEMENT - Certified software test managers and leaders shall subscribe to and promote an
ethical approach to the management of software testing.

•PROFESSION - Certified software testers shall advance the integrity and reputation of the profession
consistent with the public interest.
•COLLEAGUES - Certified software testers shall be fair to and supportive of their colleagues, and
promote cooperation with software developers.

•SELF - Certified software testers shall participate in lifelong learning regarding the practice of their
profession and shall promote an ethical approach to the practice of the profession.

Psychology of Testing in Software Testing

The psychology of testing improves mutual understanding among team members and helps them
work towards a common goal.

The three sections of the psychology of testing are:

• The mindset of Developers and Testers.


• Communication in a Constructive Manner.
• Test Independence.

Factors affecting test effort

1. Proper documentation
2. Software application size
3. Software life cycle
4. Process maturity
5. Time
6. Skilled team
7. Team and work relationship

Difficulties and challenges you will face when leading a project

1. Not enough time


2. Not enough resources
3. Budget is low
4. The testing team is not always in one place
5. Requirements are too complex to check and validate

Roles & Responsibilities of Tester

 To read all the documents and understand what needs to be tested.


 Based on the information procured in the above step decide how it is to be tested.
 Inform the test lead about what all resources will be required for software testing.
 Develop test cases and prioritize testing activities.
 Execute all the test cases and report defects, defining severity and priority for each defect.
 Carry out regression testing every time when changes are made to the code to fix defects.

Use case

'How will a system respond to a given scenario?' A use case is 'user-oriented', not 'system-oriented'. It is
'user-oriented': we specify what actions are done by the user and what the actors see in the system. It is
not 'system-oriented': we do not specify what inputs are given to the system or what outputs are
produced by the system.
Who uses 'Use Case' documents?

This documentation gives a complete overview of the distinct ways in which the user interacts with a
system to achieve the goal. Better documentation can help to identify the requirement for a software
system in a much easier way.

This documentation can be used by Software developers, software testers as well as Stakeholders.

Uses of the Documents:

Developers use the documents for implementing the code and designing it.

Testers use them for creating the test cases.

Business stakeholders use the document for understanding the software requirements.

#1) Sunny day Use Cases

They are the primary cases that are most likely to happen when everything goes well. These are given
higher priority than the other cases. Once we have completed these cases, we give them to the project
team for review and ensure that we have covered all the required cases.

# 2) Rainy day Use Cases

These can be defined as the list of edge cases. The priority of such cases will come after the 'Sunny Use
Cases'. We can seek the help of Stakeholders and product managers to prioritize the cases.

Elements in Use Cases

Given below are the various elements:

1) Brief description: A brief description explaining the case.

2) Actor: Users that are involved in Use Cases Actions.

3) Precondition: Conditions to be Satisfied before the case begins.

4) Basic Flow: 'Basic Flow' or 'Main Scenario' is the normal workflow in the system. It is the flow of
transactions done by the Actors on accomplishing their goals. When the actors interact with the system,
as it's the normal workflow, there won't be any error and the Actors will get the expected output.

5) Alternative flow: Apart from the normal workflow, a system can also have an 'Alternate workflow'.
This is the less common interaction done by a user with the system.

6) Exception: The flow that prevents a user from achieving the goal.

7) post conditions: The conditions that need to be checked after the case is completed.
Example: school management system

Use case name: Login
Use case description: A user logs in to the system to access the functionality of the system
Actors: Parents, students, teachers, admin
Precondition: System must be connected to the network
Post condition: After a successful login, a notification mail is sent to the user's mail id

Main scenario (actors/users):
1. Enter username and password
2. Validate username and password
3. Allow access to the system

Extensions:
1a. Invalid username: system shows an error message
2b. Invalid password: system shows an error message
3c. Invalid password entered 4 times: application closed
This is the use case diagram of the 'Login' case. Here, we have more than one
actor; they are all placed outside the system. Students, teachers, and parents
are considered primary actors, which is why they are all placed on the left side
of the rectangle.

Admin and Staff are considered as secondary actors, so we place them on the
right side of the rectangle. Actors can log in to the system, so we connect the
actors and login case with a connector.

Points to be noted

A common mistake that participants make with use cases is that they either contain too many details
about a particular case or not enough details at all.

Write the process steps in the correct order.

Example:

e-commerce site like amazon

Actors: buyers, sellers, wholesale dealers, auditors, suppliers, distributors, customer care etc.

A System is ‘whatever you are developing’. It can be a website, an app or any other software component.
It is generally represented by a rectangle. It Contains Use Cases. Users are placed outside the ‘rectangle’.

Use Cases are generally represented by Oval shapes specifying the Actions inside it.

Actors/Users are the people who use the system. But sometimes it can be other systems, person or any
other organization.

What is Use Case Testing?

It comes under the Functional Black Box testing technique. As it is a black box testing, there won’t be any
inspection of the codes. Several interesting facts about this are briefed in this section.
It ensures that the path used by the user works as intended, and that the user can
accomplish the task successfully.
For Example, Consider the ‘Show Student Marks’ case, in a School Management System.
Use case Name: Show Student Marks
Actors: Students, Teachers, Parents
Pre-Condition:
1) The system must be connected to the network.
2) Actors must have a ‘Student ID’.
Use Case for ‘Show Student Marks’:
Main scenario (A: Actor, S: System):
1. A: Enter Student Name
2. S: System validates Student Name
3. A: Enter Student ID
4. S: System validates Student ID
5. S: System shows Student Marks

Extensions:
3a. Invalid Student ID
    S: Shows an error message
3b. Invalid Student ID entered 4 times
    S: Application closes

Corresponding Test Case for 'Show Student Marks' case:

Test case A: View Student Mark List 1 - Normal Flow
1. Enter Student Name -> User can enter Student Name
2. Enter Student ID -> User can enter Student ID
3. Click on View Marks -> System displays Student Marks

Test case B: View Student Mark List 2 - Invalid ID
1. Repeat steps 1 and 2 of View Student Mark List 1
2. Enter Student ID -> System displays error message


Use case steps for atm machine
Actors: Bank Customer, Cashier, Bank, Maintenance Person
Pre-Condition:
1) The Automated Teller Machine is a remote unit connected to the bank computer systems.
2) The ATM system requires that each bank customer has an ATM card and remembers his
PIN code.

Main scenario (A: Actor/User, S: System/ATM):
1. A: Insert ATM card
2. S: System validates the user's card
3. A: User enters PIN
4. S: System validates PIN
5. Show balance or withdraw money

Extensions:
3a. Invalid PIN
    S: Show error
3b. Invalid PIN entered 3 times
    S: Card blocked

Types of Mobile Testing

There are broadly 2 kinds of testing that take place on mobile devices:

#1. Hardware testing:

The device itself is tested, including the internal processors, internal hardware, screen sizes, resolution,
storage or memory, camera, radio, Bluetooth, WiFi etc. This is sometimes referred to simply as "Mobile Testing".

#2. Software or Application testing:

The applications that work on mobile devices and their functionality are tested. It is called "Mobile
Application Testing" to differentiate it from the earlier method. Even among mobile applications, there are a
few basic differences that are important to understand:

a) Native apps: A native application is created for use on a platform like mobile and tablets.
b) Mobile web apps are server-side apps to access website/s on mobile using different browsers like
Chrome, Firefox by connecting to a mobile network or wireless network like WIFI.

c) Hybrid apps are combinations of a native app and a web app. They run on the device like a native app,
but are built with web technologies rendered inside a native container.
Native apps have single platform affinity while mobile web apps have the cross-platform affinity.

Native apps are written with platform-specific SDKs, while mobile web apps are written with web technologies
like HTML, CSS, ASP.NET, Java, PHP.

For a native app, installation is required but for mobile web apps, no installation is required.

A native app can be updated from the Play Store or App Store, while mobile web apps receive centralized
updates on the server.

Many native apps don’t require an Internet connection but for mobile web apps, it’s a must.

Native app works faster when compared to mobile web apps.

Native apps are installed from app stores like the Google Play Store or App Store, whereas mobile web apps are
websites and are only accessible through the Internet.

Use Case

The developers use the standard symbols to write a use case so that everyone will understand easily.
They will use the Unified modeling language (UML) to create the use cases.

There are various tools available that help to write a use case, such as Rational Rose. This tool has
predefined UML symbols; we drag and drop them to write a use case, and the developer can also
use these symbols to develop the use case.

Test Scenario and Test Cases

Test Scenario: Test Scenario gives the idea of what we have to test. Test Scenario is like a high-level
description of what you are going to test

For example: Verify the login functionality of the Gmail account.

Test cases: Here are some test cases.

1. Enter valid User Name and valid Password

2. Enter valid User Name and invalid Password

3. Enter invalid User Name and valid Password

4. Enter invalid User Name and invalid Password

Test cases are derived from Test Scenario

Test Scenarios are derived from use cases

Test Scenario is "what is to be tested" and test cases are "how it is to be tested"
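The four test cases above can be made executable as a small data-driven check. The `login` function and the credential store are invented for illustration; in a real project the system under test would be the actual login page:

```python
# Invented credential store and login checker, used only to make the
# four username/password scenarios executable.
VALID_USERS = {"alice": "s3cret"}

def login(username, password):
    return VALID_USERS.get(username) == password

test_cases = [
    ("alice",   "s3cret", True),   # 1. valid user name, valid password
    ("alice",   "wrong",  False),  # 2. valid user name, invalid password
    ("mallory", "s3cret", False),  # 3. invalid user name, valid password
    ("mallory", "wrong",  False),  # 4. invalid user name, invalid password
]

# Each row is one test case derived from the scenario "verify login".
for user, pwd, expected in test_cases:
    assert login(user, pwd) is expected
```

Note how one test scenario (login) yields several test cases, one per input combination.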

GUI and Usability Test Scenarios


1. All fields on the page (For Example, text box, radio options, drop-down lists) should be aligned
properly.

2. Numeric values should be justified correctly unless specified otherwise.

3. Enough space should be provided between field labels, columns, rows, error messages, etc.

4. The scrollbar should be enabled only when necessary.

5. Font size, style, and color for headline, description text, labels, infield data, and grid info should be
standard as specified in SRS.

6. The description text box should be multi-lined.

7. Disabled fields should be greyed out and users should not be able to set focus on these fields.

8. Upon clicking on the input text field, the mouse arrow pointer should get changed to the cursor.

9. The user should not be able to type in the drop-down select list.

10. Information filled out by users should remain intact

11. Check if proper field labels are being used in error messages.

12. Drop-down field values should be displayed in defined sort order.

13. Tab and Shift+Tab order should work properly.

14. Default radio options should be pre-selected on the page load.

15. Field-specific and page-level help messages should be available.

16. Check if the correct fields are highlighted in case of errors.

17. Check if the drop-down list options are readable and not truncated due to field size limits.

18. All buttons on the page should be accessible with keyboard shortcuts and the user should be able to
perform all operations using them.
Database Testing Test Scenarios

1. Check if the correct data is getting saved in the database upon a successful page submit.

2. Check values for columns that are not accepting null values.

3. Check for data integrity. Data should be stored in single or multiple tables based on the design.

4. Index names should be given as per the standards e.g.

IND_<Tablename>_<ColumnName>

5. Tables should have a primary key column.

6. Table columns should have description information available (except for audit columns like created
date, created by, etc.)
7. For every database add/update operation logs should be added.

8. Required table indexes should be created.

9. Check if data is committed to the database only when the operation is successfully completed.

10. Data should be rolled back in case of a failed transaction.

Scenarios for Image Upload Functionality

(Also applicable to other file upload functionality)

1. Check for the uploaded image path.

2. Check image upload and change functionality.

3. Check image upload functionality with image files of different extensions (For Example, JPEG, PNG,
BMP, etc.)

4. Check image upload functionality with images that have space or any other allowed special character
in the file name.

5. Check for duplicate name image upload.

6. Check the image upload with an image size greater than the max allowed size. Proper error message
should be displayed

7. Check image upload functionality with file types other than images (For Example, txt, doc, pdf, exe,
etc.). A proper error message should be displayed.

8. Check if images of specified height and width (if defined) are accepted or otherwise rejected.
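Scenarios 3, 6 and 7 above can be sketched as a hypothetical server-side validator. The allowed extensions and the 2 MB size limit are assumptions for illustration, not requirements from the text:

```python
import os

# Hypothetical validator; the extension list and 2 MB limit are assumed.
ALLOWED_EXTENSIONS = {".jpeg", ".jpg", ".png", ".bmp"}
MAX_SIZE_BYTES = 2 * 1024 * 1024

def validate_upload(filename, size_bytes):
    ext = os.path.splitext(filename)[1].lower()
    if ext not in ALLOWED_EXTENSIONS:
        return "error: unsupported file type"   # scenario 7
    if size_bytes > MAX_SIZE_BYTES:
        return "error: file too large"          # scenario 6
    return "ok"                                 # scenario 3, happy path

assert validate_upload("photo.png", 1024) == "ok"
assert validate_upload("notes.txt", 1024) == "error: unsupported file type"
assert validate_upload("big.jpg", 5 * 1024 * 1024) == "error: file too large"
```

Each error branch corresponds to one of the "proper error message should be displayed" checks above.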

Security Testing Test Scenarios

1. Check for SQL injection attacks.

2. Secure pages should use the HTTPS protocol.

3. Page crash should not reveal application or server info. The error page should be displayed for this.

4. Escape special characters in the input.

5. Error messages should not reveal any sensitive information.

6. All credentials should be transferred over to an encrypted channel.

7. Test password security and password policy enforcement.
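Scenario #1 is commonly addressed with parameterized queries. The sketch below (invented table and data, using Python's built-in sqlite3 module) shows a classic injection payload failing to match any row because it is bound as data, not executed as SQL:

```python
import sqlite3

# Invented table and data; sqlite3 ships with the Python standard library.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

payload = "' OR '1'='1"  # classic SQL injection attempt

# Parameterized query: the payload is treated as plain data, not SQL,
# so the injection attempt matches no rows.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ? AND password = ?",
    ("alice", payload),
).fetchall()
assert rows == []
```

Had the query been built by string concatenation, the same payload would have matched every row.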

Test Cases For Login Page

 Verify that the login page has the basic web elements which every login page contains, such as:
Username, Password, Login button, Forgot password, Remember me, Cancel button, Logo, Create
an account etc.
 Verify the login by entering the valid username and valid password.
 Check the login functionality by entering the valid username and invalid password (An error
message must be displayed).
 Verify the login functionality by entering the invalid username and valid password (An error
message must be displayed).
 Verify the functionality of login without entering the username and password (An error message
must be displayed).
 Verify by entering only username (An error message must be displayed).
 Check the login functionality by entering only password (An error message must be displayed).
 Verify that the password is masked when the user enters it.
 Verify that the "Remember me" check box is unselected by default (or as per SRS).
 Verify whether the user's login credentials are saved in the browser when the user logs in with the
"Remember me" check box selected.
 Verify the "forgot password" functionality; the user should be able to click on the "forgot password" link.
 Verify the functionality of the "reset" button; the user should be able to click it.
 Verify that the text boxes get cleared when clicking on the reset button.

Other Test scenarios Cases for login

 Verify the keyboard Tab functionality; the user should be able to move between web elements using
the Tab key.
 Verify that the user is able to activate the login button by pressing the Enter key.
 Verify whether the cursor blinks in the text box by default when the login page loads.
 Verify the placeholders for the text boxes (username, password).
 Verify that the user is able to log in with the new password after he/she has changed the
password.
 Verify the spelling, font size etc. of alert messages.

Design Related Test Cases For Login Page

 Verify the login screen will appear after clicking on a login link or login button.
 Verify all login related elements and fields are present on the login page.
 Verify the alignment of displayed elements on the login screen should be compatible in cross
browsers testing.
 Verify that the size, colour and UI of different elements should match with the specifications.
 Verify that the login page of the application is responsive and align properly on different screen
resolutions and devices.
 Verify login page title.
 After the user login page is open, the cursor should remain in the username text box by default.
 Verify that there is a checkbox with the label remember password on the login page.
 Verify the remember me checkbox should mark as checked after clicking on the label text and the
check box.

Functional Cases For login Page


 Verify the user credential remained on the field after clicking remember and get back to the
login screen again.
 Verify that the user will be able to log in with their account with the correct credential.
 Verify that the user will get into their dashboard screen after login with the correct credentials.
 Verify that the user can access all controls and elements by pressing the Tab key from the
keyboard.
 Verify that the user can log in by entering valid credentials and pressing Enter key.
 Verify that the user can log in by entering valid credentials and clicking on the login button.
 Verify that the password entered should be in encrypted form.
 Verify an eye icon is added on the password field or not.
 Verify that the user can be able to view the password by clicking on the eye icon.
 Verify line spacing added on password on mac.
 There should be an email verification check, as the user verifies the email address then the user
is able to view the dashboard and access features.
 Add captcha on the login form to prevent the robot attack.
 Verify the error message should display after just entering an email address and leaving the
password field blank.
 Verify the error message should display after just entering a password and leave the email field
blank.
 Verify the error message should display after entering the invalid credentials.
 Verify the error message should display after entering an invalid email format.
 Verify the displayed error message for invalid email format should be correct.
 Verify the displayed error message grammar should be correct.
 Verify the displayed error message spell should be correct.
 Check logged in user should not log out on closing the browser.

Test suite: a set of test cases grouped together for execution. (Diagram omitted.)

Mutation Testing

Mutation testing, also known as code mutation testing, is a form of white box testing in which testers
change specific components of an application's source code to ensure the software test suite will be able
to detect the changes. The changes introduced are intended to cause errors in the program: we insert
errors purposely into a program (under test) to verify whether the existing test cases can detect them.
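A minimal sketch of the idea, with invented functions: a "mutant" is created by changing one operator, and an adequate test suite "kills" it by failing:

```python
# Illustrative only: real mutation tools generate mutants automatically.
def add(a, b):             # original code under test
    return a + b

def add_mutant(a, b):      # mutant: '+' purposely replaced with '-'
    return a - b

def run_suite(fn):
    """A tiny test suite; returns True only if every check passes."""
    try:
        assert fn(2, 3) == 5
        assert fn(10, 0) == 10
        return True
    except AssertionError:
        return False

assert run_suite(add) is True          # original code passes the suite
assert run_suite(add_mutant) is False  # the mutant is detected ("killed")
```

If the suite had passed on the mutant too, that would signal a gap in the test cases.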

Requirement Traceability Matrix

Tracing the user requirements with test cases

it's a high-level document to map and trace user requirements with test cases to ensure that for each
and every requirement adequate level of testing is being achieved.

Requirement Traceability Matrix helps to link the requirements, Test cases, and defects accurately. The
whole of the application is tested by having Requirement Traceability (End to End testing of an
application is achieved).
Requirement Traceability assures good 'Quality' of the application as all the features are tested. Quality
control can be achieved as software gets tested for unforeseen scenarios with minimal defects and all
Functional and non-functional requirements being satisfied.

Advantage :

There is very little chance of requirements being missed, as each test case is mapped to the requirements,
so the test lead can easily make out if any requirement has been missed.

RTM assures good quality as we make sure that all features are tested as they can be easily traced from
this document

Types of traceability Matrix

1. Forward traceability: requirements are mapped to test cases


2. Backward traceability: test cases are mapped to the requirements

Forward Traceability

In forward traceability, requirements are mapped to test cases. It ensures that the project progresses in
the described direction and that every requirement is tested thoroughly.

Backward Traceability

The test cases are mapped to the requirements in 'Backward Traceability'. Its main purpose is to
ensure that the current product being developed is on the right track. It also helps to determine that
no extra, unspecified functionality is added, so the scope of the project is not affected.

Bi-directional traceability: forward+ backward

A good traceability matrix has references from test cases to requirements and vice versa (requirements
to test cases). This is referred to as 'Bi-directional Traceability'. It ensures that all test cases can be
traced to requirements and that each and every specified requirement has accurate and valid test cases
for it.
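Bi-directional traceability can be checked mechanically. This sketch (with invented IDs) verifies both directions of the mapping:

```python
# Invented IDs sketching a requirement-to-test-case traceability matrix.
requirement_to_tests = {
    "BR1": ["TS1.TC1", "TS1.TC2"],
    "BR2": ["TS2.TC1", "TS2.TC2"],
}
all_test_cases = {"TS1.TC1", "TS1.TC2", "TS2.TC1", "TS2.TC2"}

# Forward traceability: every requirement has at least one test case.
untested = [req for req, tcs in requirement_to_tests.items() if not tcs]
assert untested == []

# Backward traceability: every test case traces back to some requirement,
# i.e. no extra, unspecified functionality is being tested.
traced = {tc for tcs in requirement_to_tests.values() for tc in tcs}
orphans = all_test_cases - traced
assert orphans == set()
```

An empty `untested` list and an empty `orphans` set together correspond to bi-directional traceability.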

What Is Test Coverage?

•Test Coverage states which requirements of the customer are to be verified when the testing phase
starts. Test Coverage is a term that determines whether the test cases are written and executed in such
a way that the software application is tested completely, so that minimal or NIL defects are
reported.

How to achieve Test Coverage

• The maximum Test Coverage can be achieved by establishing good Requirement Traceability:

Mapping all internal defects to the test cases designed

Mapping all the Customer Reported Defects (CRD) to individual test cases for the future regression test
suite

Business Requirement # | Test Scenario # | Test Case #                        | Defects #
BR1                    | TS1             | TS1.TC1, TS1.TC2                   | D01
BR2                    | TS2             | TS2.TC1, TS2.TC2, TS2.TC3          | D02, D03
BR3                    | TS3             | TS1.TC1, TS2.TC1, TS3.TC1, TS3.TC2 | NIL

Test monitoring: a test management task that deals with the activities related to periodically checking
the status of a test project. Reports are prepared that compare the actuals to what was planned.

Test management: the planning, estimating, monitoring and control of test activities, typically carried
out by test manager or test lead

Test summary report: a document summarizing testing activities and results. It also contains an
evaluation of the corresponding test items against the exit criteria.

Test control: a test management task that deals with developing and applying a set of corrective actions
to get a test project on track.

What is test harness?

What is configuration management?

Software metrics: Software metrics are used to measure the quality of the product and the project.
Simply, a metric is a unit used for describing an attribute; a metric is a scale for measurement.
Generation of software test metrics is an important responsibility of the software test lead/manager.

Test metrics are used to:

1. Take decisions for the next phase of activities, such as estimating the cost & schedule of future
projects.

2. Understand the kind of improvement required for the project to succeed.

3. Take decisions on the process or technology to be modified, etc.

If metrics are used in the project, then the exact status of the work can be published with proper
numbers/data:

• %ge of work completed
• %ge of work yet to be completed
• Time to complete the remaining work
• Whether the project is going as per the schedule or lagging, etc.

Based on the metrics, if the project is not going to complete as per the schedule, then the manager will
raise the alarm with the client and other stakeholders, providing the reasons for the lag, to avoid
last-minute surprises.

Definitions and Formulas for Calculating Metrics:

#1) %ge Test cases Executed: This metric is used to obtain the execution status of the test cases in terms
of %ge.

%ge Test cases Executed = (No. of Test cases executed / Total no. of Test cases written) * 100. So, from
the above data,

%ge Test cases Executed = (65 / 100) * 100 = 65%

# 2) %ge Test cases not executed: This metric is used to obtain the pending execution status of the test
cases in terms of %ge.

%ge Test cases not executed = (No. of Test cases not executed / Total no. of Test cases written) * 100.

So, from the above data,

%ge Test cases not executed = (35 / 100) * 100 = 35%
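The two execution metrics, computed for the sample figures in the text (100 test cases written, 65 executed):

```python
# Sample figures from the text: 100 test cases written, 65 executed.
total_written = 100
executed = 65

# %ge Test cases executed = (executed / total written) * 100
pct_executed = executed * 100 / total_written

# %ge Test cases not executed = (not executed / total written) * 100
pct_not_executed = (total_written - executed) * 100 / total_written

assert pct_executed == 65.0
assert pct_not_executed == 35.0
```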


6) Defect Density = No. of Defects identified / size (Here "Size" is considered a requirement. Hence here
the Defect Density is calculated as a number of defects identified per requirement. Similarly, Defect
Density can be calculated as a number of Defects identified per 100 lines of code [OR] No. of defects
identified per module, etc.)

So, from the above data, Defect Density = (30/5) = 6

7) Defect Removal Efficiency (DRE) = (No. of Defects found during QA testing / (No. of Defects found
during QA testing + No. of Defects found by End-user)) * 100. DRE is used to identify the test
effectiveness of the system.


Project managers have a wide variety of metrics to choose from. We can classify the most commonly
used metrics into the following groups:

1. Process Metrics

These are metrics that pertain to process quality. They are used to measure the efficiency and
effectiveness of various processes.

2. Project Metrics

These are metrics that relate to project quality. They are used to quantify defects, cost, schedule,
productivity and estimation of various project resources and deliverables.

3. Product Metrics

These are metrics that pertain to product quality. They are used to measure cost, quality, and the
product's time-to-market.

4. Organizational Metrics

These metrics measure the impact of organizational economics, employee satisfaction,
communication, and organizational growth factors on the project.

5. Software Development Metrics

These metrics enable management to understand the quality of the software, the productivity of the
development team, code complexity, customer satisfaction, agile process, and operational metrics.

Project Quality Management: Perform Quality Assurance Vs Perform Quality Control

Clarify the difference between perform quality assurance and perform quality control in the project
quality management

#7) Defect Removal Efficiency (DRE) = (No. of Defects found during QA testing / (No. of Defects found
during QA testing + No. of Defects found by End-user)) * 100. DRE is used to identify the test effectiveness
of the system.

Suppose, During Development & QA testing, we have identified 100 defects.

After the QA testing, during Alpha & Beta testing, the end-user / client identified 40 defects, which could
have been identified during the QA testing phase. Now, the DRE will be calculated as:

DRE = [100 / (100 + 40)] * 100 = [100 / 140] * 100 = 71%

#8) Defect Leakage: Defect Leakage is the metric which is used to identify the efficiency of the QA
testing, i.e., how many defects are missed/slipped during the QA testing.
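The DRE example above, plus defect leakage as its complement, computed in a few lines (the leakage formula shown here is one common variant):

```python
# Figures from the worked example: 100 defects found during QA,
# 40 found afterwards by the end user / client.
qa_defects = 100
end_user_defects = 40

# DRE, as defined above.
dre = qa_defects * 100 / (qa_defects + end_user_defects)

# Defect leakage as the complement of DRE (one common formulation).
leakage = end_user_defects * 100 / (qa_defects + end_user_defects)

assert round(dre) == 71      # matches the 71% computed in the text
assert round(leakage) == 29  # the share of defects that slipped past QA
```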

What Is Efficiency Testing And How To Measure Test Efficiency

This tutorial explains what Efficiency Testing is, techniques to measure it, formulae for Test Efficiency, and Test Efficiency vs. Test Effectiveness.

Cyclomatic complexity (CYC) is a software metric used to determine the complexity of a program. It is a count of the number of decisions in the source code. The higher the count, the more complex the code.

CYC = E - N + 2P

In this equation:

• E = number of edges (transfers of control)

• N = number of nodes (sequential groups of statements containing only one transfer of control)

• P = number of disconnected parts of the flow graph (e.g., a calling program and a subroutine)

Second method: sum the number of binary decision statements (e.g., if, while, for) and add 1 to it.

CYC > 5: more complex
CYC < 5: less complex

Example 1:

A = 10
If B > C then
    A = B
Else
    A = C
Endif
Print A
Print B
Print C

Edges = 7, Nodes = 7, P = 1
CYC = E - N + 2P = 7 - 7 + (2 × 1) = 2

Example 2:

If A = 354
    Then if B > C
        Then A = B
        Else A = C
    Endif
Endif

Two binary decisions, so by the second method CYC = 2 + 1 = 3.
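The "second method" above (count decision statements and add 1) can be sketched in a few lines of Python over the language's syntax tree. This is only a demonstration; production tools such as radon or mccabe handle many more constructs.

```python
# Count binary decision statements in Python source and add 1,
# per the second method of computing cyclomatic complexity.
import ast

def cyclomatic_complexity(source):
    """Count decision points (if/while/for and conditional expressions), plus 1."""
    tree = ast.parse(source)
    decisions = sum(isinstance(node, (ast.If, ast.While, ast.For, ast.IfExp))
                    for node in ast.walk(tree))
    return decisions + 1

# Python equivalent of Example 1: one decision, so CYC = 1 + 1 = 2.
snippet = """
a = 10
if b > c:
    a = b
else:
    a = c
print(a)
"""
print(cyclomatic_complexity(snippet))  # 2
```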

Enlisted below are the widely used terminologies in Orthogonal Array Testing:

Runs – the number of rows, which represents the number of test conditions to be performed.

Factors – the number of columns, which represents the number of variables to be tested.

Levels – the number of values for a Factor.

 As the rows represent the number of test conditions (experiment tests) to be performed, the goal is to minimize the number of rows as much as possible.

 Factors indicate the number of columns, which is the number of variables.

 Levels represent the maximum number of values for a Factor (0 to Levels - 1). Together, the values in Levels and Factors are written as LRuns(Levels**Factors).

Implementation Technique Of OATS

The Orthogonal Array Testing technique has the following steps:

#1) Decide the number of variables that will be tested for interaction. Map these variables to the factors
of the array.

#2) Decide the maximum number of values that each independent variable will have. Map these values
to the levels of the array.

#3) Find a suitable orthogonal array with the smallest number of runs. The number of runs can be derived from published orthogonal array tables.

#4) Map the factors and levels onto the array.

#5) Translate them into suitable Test Cases.

#6) Look out for leftover or special Test Cases (if any).

After performing the above steps, your Array will be ready for testing with all the possible combinations
covered.
Example 1

Let’s say that the pages or links have three dynamic frames (Sections) that can be made as hidden or
visible.

Step 1: Determine the number of independent variables. There are three independent variables
(sections on the page) = 3 Factors.

Step 2: Determine the maximum number of values for each variable. There are two values (hidden and
visible) = 2 Levels.

Step 3: Determine the Orthogonal Array with 3 Factors and 2 Levels. Referring to standard orthogonal array tables, the number of rows required is 4.

The Orthogonal Array follows the pattern LRuns(Levels**Factors). Hence, in this example, the Orthogonal Array will be L4(2**3).
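Example 1 can be sketched concretely. The L4 array below is a standard design; the section names and the hidden/visible labels are the mapping from the example.

```python
# The L4(2**3) orthogonal array, with the three page sections mapped
# onto the factors and the two levels mapped to hidden/visible.
from itertools import combinations

L4 = [
    [0, 0, 0],
    [0, 1, 1],
    [1, 0, 1],
    [1, 1, 0],
]

factors = ["Section 1", "Section 2", "Section 3"]   # 3 factors
levels = {0: "hidden", 1: "visible"}                # 2 levels

# Translate each run into a readable test case.
for run, row in enumerate(L4, start=1):
    settings = ", ".join(f"{f}={levels[v]}" for f, v in zip(factors, row))
    print(f"Run {run}: {settings}")

# Orthogonality check: every pair of columns contains each of the four
# combinations (0,0), (0,1), (1,0), (1,1) exactly once.
for i, j in combinations(range(3), 2):
    assert len({(row[i], row[j]) for row in L4}) == 4
```

Four runs cover every pairwise interaction of the three sections, instead of the 2**3 = 8 runs exhaustive testing would need.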

What is path coverage?

Error handling refers to the response and recovery procedures for error conditions present in a software application. In other words, it is the process of anticipating, detecting, and resolving application errors, programming errors, or communication errors. Error handling helps maintain the normal flow of program execution. In fact, many applications face numerous design challenges when considering error-handling techniques.
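The anticipate/detect/resolve cycle described above can be sketched with ordinary exception handling. The function and file names here are made up for illustration.

```python
# A minimal sketch of error handling: anticipate an error condition,
# detect it, and resolve it so normal program flow continues.
def read_config(path):
    try:
        with open(path) as f:          # anticipate: the file may be missing
            return f.read()
    except FileNotFoundError:          # detect: the error condition occurred
        return ""                      # resolve: fall back to an empty config

print(repr(read_config("no_such_file.cfg")))  # ''
```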

Cookies and sessions:

Cookie – temporary internet files which are created on the client side when we open websites. These files contain user data.

Sessions – time slots which are allocated to the user on the server side.
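A cookie is just name/value data the browser stores and sends back to the server. This can be seen with Python's standard-library cookie parser; the cookie names and values below are made up.

```python
# Parsing client-side cookie data with the standard library.
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie.load("username=alice; theme=dark")

# Each cookie is a name/value pair stored by the browser and sent
# back to the server with subsequent requests.
print(cookie["username"].value)  # alice
print(cookie["theme"].value)     # dark
```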

Basic Difference Between Mobile and Desktop Application Testing:


Few obvious aspects that set mobile app testing apart from the desktop testing

On the desktop, the application is tested on a central processing unit. On a mobile device, the
application is tested on handsets like Samsung, Nokia, Apple, and HTC.

Mobile device screen size is smaller than a desktop's.

Mobile devices have less memory than desktops.

Mobiles use network connections like 2G, 3G, 4G or WIFI, whereas desktops use broadband or dial-up connections.

The automation tool used for desktop application testing might not work on mobile applications.

Types of Mobile App Testing:

 To address all the above technical aspects, the following types of testing are performed on mobile applications.
 Usability testing– To make sure that the mobile app is easy to use and provides a satisfactory user experience to the customers.
 Compatibility testing– Testing of the application on different mobile devices, browsers, screen sizes and OS versions according to the requirements.
 Interface testing– Testing of menu options, buttons, bookmarks, history, settings, and navigation flow of the application.
 Services testing– Testing the services of the application online and offline.
 Low-level resource testing– Testing of memory usage, auto-deletion of temporary files, and local database growing issues.
 Performance testing– Testing the performance of the application by changing the connection from 2G/3G to WIFI, sharing documents, battery consumption, etc.
 Operational testing– Testing of backups and the recovery plan if a battery goes down, or of data loss while upgrading the application from a store.
 Installation tests– Validation of the application by installing/uninstalling it on the devices.
 Security testing– Testing an application to validate whether the information system protects data or not.

Two kinds of automation tools are available to test mobile apps:

Object-based mobile testing tools– automation by mapping elements on the device screen into objects. This approach is independent of screen size and is mainly used for Android devices.

E.g., Ranorex, Jamo Solutions

Image-based mobile testing tools– create automation scripts based on the screen coordinates of elements.

E.g., Sikuli, Eggplant, RoutineBot


1. Documentation: Maintaining good documentation with detailed information adds an overall advantage to a software development project.

• Software Application Size:

• Software Life Cycle: Selection of the software development life cycle model is the most influential factor in the test effort. For each addition of new functionalities or features by the developers, the tester is bound to perform repetitive regression testing of the application.

• Process Maturity: In any software development, the maturity of the process significantly impacts the test effort. Introducing and managing changes in an already existing mature process is a difficult task to execute, as these changes need to be introduced with utmost care and need a considerable amount of time.

• Time: Everyone wants to reduce time-related factors to stay competitive in the market. The amount of time required to perform various tests by the testing team should be decided appropriately; otherwise the whole system will face a shortage of time, putting the project at risk.

• Skilled Team

• Team and Work Relationship

Defect report:

Also known as a bug report, it is a document that identifies and describes a defect detected by a tester. The purpose of a defect report is to state the problem as clearly as possible so that developers can replicate the defect easily and fix it.

Defect Reporting Tools:

• ClearQuest

• DevTrack

• Jira

• Quality Center (ALM), etc.

•Defect ID – Every bug or defect has its unique identification number

•Defect Description – This includes the abstract of the issue.

•Product Version – This includes the product version of the application in which the defect is found.

•Detail Steps – This includes the detailed steps of the issue with the screenshots attached so that
developers can recreate it.

•Date Raised – This includes the Date when the bug is reported

•Reported By – This includes the details of the tester who reported the bug like Name and ID
•Status – This field includes the Status of the defect like New, Assigned, Open, Retest, Verification,
Closed, Failed, Deferred, etc.

•Fixed by – This field includes the details of the developer who fixed it like Name and ID
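The fields listed above can be sketched as a simple data structure. The field names follow the list; the values in the example are made up, and none of this is tied to a specific defect-tracking tool.

```python
# A sketch of a defect report record with the fields listed above.
from dataclasses import dataclass

@dataclass
class DefectReport:
    defect_id: str          # unique identification number
    description: str        # abstract of the issue
    product_version: str    # version in which the defect was found
    detail_steps: str       # steps to reproduce, with screenshots attached
    date_raised: str        # date the bug was reported
    reported_by: str        # tester's name and ID
    status: str             # New, Assigned, Open, Retest, Closed, ...
    fixed_by: str = ""      # developer's name and ID, filled in once fixed

bug = DefectReport("DEF-101", "Login button unresponsive", "v2.3",
                   "1. Open app  2. Tap Login", "2024-01-15", "T-42", "New")
print(bug.status)  # New
```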

Status of Defect

• New: Tester provides New status while reporting (for the first time)

• Open: Developer / dev lead / DTT opens the defect

• Rejected: Developer / dev lead / DTT rejects the defect if it is invalid or a duplicate

• Fixed: Developer provides Fixed status after fixing the defect

• Deferred: Developer provides this status when the fix is postponed due to time constraints, etc.

• Closed: Tester provides Closed status after performing confirmation testing

• Re-open: Tester re-opens the defect with valid reasons and proofs
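The status list above forms a small state machine, which can be sketched as a transition table. The allowed transitions here are a simplification for illustration; real workflows vary by team and tool.

```python
# A simplified defect life cycle as a transition table.
TRANSITIONS = {
    "New":      {"Open", "Rejected", "Deferred"},
    "Open":     {"Fixed", "Rejected", "Deferred"},
    "Fixed":    {"Closed", "Re-open"},
    "Rejected": {"Re-open"},
    "Deferred": {"Open"},
    "Re-open":  {"Fixed"},
    "Closed":   set(),
}

def can_move(current, new):
    """Return True if the defect may move from `current` to `new`."""
    return new in TRANSITIONS.get(current, set())

print(can_move("New", "Open"))     # True
print(can_move("Closed", "Open"))  # False
```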

Severity of bug/defect – how badly the defect/bug affects the application. Testers decide the severity of the bug.

Priority – the effect/impact of the defect from the business point of view; the priority of fixing the bug is decided only from the business perspective.

Will all high-severity bugs have high priority (i.e., have to be fixed urgently)? Not necessarily.

Defect Severity: Severity describes the seriousness of a defect. In software testing, defect severity is the degree of impact a defect has on the development or operation of the component or application being tested.

Defect severity can be categorized into four classes:

• Critical: This defect indicates a complete shutdown of the process; nothing can proceed further (show-stopper defect).

• High: A highly severe defect that collapses the system. However, certain parts of the system remain functional.

• Medium: It causes some undesirable behaviour, but the system is still functional.

• Low: It won't cause any major breakdown of the system.

Low Severity and Low Priority

Any spelling mistakes / font casing / misalignment in a paragraph on the 3rd or 4th page of the application, and not on the main or front page / title.

• These defects occur when there is no functionality impact, but the standards are still not met to a small degree. Generally, cosmetic errors, or say the dimensions of a cell in a table on the UI, are classified here.

Defect Priority

• Priority describes the importance of a defect.

• Defect Priority states the order in which defects should be fixed. The higher the priority, the sooner the defect should be resolved.

• Defect priority can be categorized into three classes:

• P1 (High): The defect must be resolved as soon as possible, as it affects the system severely and the system cannot be used until it is fixed.

• P2 (Medium): The defect should be resolved during the normal course of development activities. It can wait until a new version is created.

• P3 (Low): The defect is an irritant, but the repair can be done once the more serious defects have been fixed.

Severity / Impact

1 (Critical):
• This bug is critical enough to crash the system, cause file corruption, or cause potential data loss.
• It causes an abnormal return to the operating system (a crash or a system failure message appears).
• It causes the application to hang and requires re-booting the system.

2 (High):
• It causes a lack of vital program functionality, with a workaround.

3 (Medium):
• This bug will degrade the quality of the system. However, there is an intelligent workaround for achieving the desired functionality, for example through another screen.
• This bug prevents other areas of the product from being tested. However, other areas can be independently tested.

4 (Low):
• There is an insufficient or unclear error message, which has minimum impact on product use.

5 (Cosmetic)
