Test Strategy


Test Strategy

A Test Strategy document is a high-level document, normally developed by the project
manager, that defines the software testing approach used to achieve the testing
objectives. The Test Strategy is normally derived from the Business Requirement
Specification document.
The Test Strategy document is a static document, meaning that it is not updated often. It
sets the standards for testing processes and activities, and other documents such as the Test
Plan draw their contents from the standards set in the Test Strategy document.
Some companies include the Test Approach or Strategy inside the Test Plan, which is fine
and is usually the case for small projects. However, larger projects have one Test
Strategy document and a number of Test Plans, one for each phase or level of testing.
Components of the Test Strategy document

Scope and Objectives

Business issues

Roles and responsibilities

Communication and status reporting

Test deliverables

Industry standards to follow

Test automation and tools

Testing measurements and metrics

Risks and mitigation

Defect reporting and tracking

Change and configuration management

Training plan

Test Plan
The Test Plan document, on the other hand, is derived from the Product Description,
Software Requirement Specification (SRS), or Use Case documents.
The Test Plan document is usually prepared by the Test Lead or Test Manager, and the focus
of the document is to describe what to test, how to test, when to test, and who will do which
test.
It is not uncommon to have one Master Test Plan, which is a common document for all the test
phases, with each test phase having its own Test Plan document.
There is much debate as to whether the Test Plan document should also be a static
document like the Test Strategy document mentioned above, or whether it should be updated
regularly to reflect changes in the direction of the project and its activities.
My own personal view is that when a testing phase starts and the Test Manager is
controlling the activities, the test plan should be updated to reflect any deviation from the
original plan. After all, Planning and Control are continuous activities in the formal test
process.

Test plan identifier

Introduction

Test items

Features to be tested

Features not to be tested

Test techniques

Testing tasks

Suspension criteria

Feature pass/fail criteria

Test environment (Entry criteria, Exit criteria)

Test deliverables

Staff and training needs

Responsibilities

Schedule

This is a standard approach to preparing test plan and test strategy documents, but things
can vary from company to company.

Test Plan: A document, developed by the Test Lead, which describes what to test,
how to test, when to test, and who will test.
Test Strategy: A document, developed by the Project Manager, which describes
which techniques to follow and which modules to test.
Test Scenario: A name given to a group of related test cases. Test scenarios
are handled by the Test Engineer.
Test Case: Also a document; it specifies a testable condition to validate
functionality. Test cases are handled by the Test Engineer.
Order in the STLC:
Test Strategy, Test Plan, Test Scenario, Test Cases.
1) Severity: The extent to which a defect can affect the software; in other words,
it defines the impact that a given defect has on the system.
For example: if an application or web page crashes when a remote link is clicked,
clicking that remote link is rare for a user, but the impact of the application
crashing is severe. So the severity is high but the priority is low.
2) Priority: Priority defines the order in which we should resolve a defect. Should we fix it
now, or can it wait? The priority is set by the tester for the developer, mentioning the
time frame within which to fix the defect. If high priority is mentioned, then the developer has to fix it at
the earliest.

A few very important scenarios related to severity and priority that are asked about
during interviews:
High Priority & High Severity: An error that occurs in the basic functionality of the
application and will not allow the user to use the system. (E.g., in a site maintaining student
details, if saving a record does not work at all, this is a high priority and
high severity bug.)
High Priority & Low Severity: A spelling mistake on the cover page,
heading, or title of an application.
High Severity & Low Priority: An error that occurs in the functionality of the
application (for which there is no workaround) and will not allow the user to use the system,
but only on clicking a link that is rarely used by the end user.
Low Priority & Low Severity: Any cosmetic or spelling issue within a
paragraph or a report (not on the cover page, heading, or title).
Navigation Testing
Good navigation is an essential part of a website, especially one that is complex and
provides a lot of information. Assessing navigation is a major part of usability testing.

Roles and responsibilities of the test team lead:

Test leaders tend to be involved in the planning, monitoring, and control of the
testing activities and tasks.

At the outset of the project, test leaders, in collaboration with the other
stakeholders, devise the test objectives, organizational test policies, test strategies, and
test plans.

They estimate the testing to be done and negotiate with management to acquire the
necessary resources.

They recognize when test automation is appropriate and, if it is, they plan the effort,
select the tools, and ensure training of the team. They may consult with other groups,
e.g., programmers, to help them with their testing.

They lead, guide, and monitor the analysis, design, implementation, and execution of
the test cases, test procedures, and test suites.

They ensure proper configuration management of the testware produced and
traceability of the tests to the test basis.

As test execution comes near, they make sure the test environment is put into place
before test execution and managed during test execution.

They schedule the tests for execution and then monitor, measure, control, and
report on the test progress, the product quality status, and the test results, adapting the
test plan and compensating as needed to adjust to evolving conditions.

During test execution and as the project winds down, they write summary reports on
test status.

Sometimes test leaders wear different titles, such as test manager or test
coordinator. Alternatively, the test leader role may wind up assigned to a project
manager, a development manager, or a quality assurance manager. Whoever is playing
the role, expect them to plan, monitor, and control the testing work.
Along with the test leaders, testers should also be included from the beginning of a
project, although most of the time the project doesn't need a full complement of testers
until the test execution period. So, now we will look at the various roles and their
responsibilities.
Role: Project Manager (2 to 5 hours per week)
Responsibilities:
Review and approval of the testing strategy, approach, and plans.
Provide formal sign-off on all testing deliverables and testing events.
Review testing results and defects to determine/assess impact on the overall project plan and implementation schedule.

Role: Quality Assurance Lead (30 to 40 hours per week)
Responsibilities:
Responsible for management of all Quality Assurance functions, including planning, strategy, testing execution, and tools.
Works with the Project Manager and other technical leaders to establish timetables and agree on a Quality Assurance plan for the KFS implementation.
Ensures that the QA process is documented, communicated, and adequate to ensure agreed quality levels for the application.
Ensures traceability of test cases to requirements, working with the project Business Analyst to ensure all requirements are tested.
Works with technical analysts to identify unit testing coverage and ensure any gaps are documented and addressed.
Works with the Testing Coordinator to ensure testing of functional areas is complete, tracked, and on schedule.
Coordinates performance testing and ensures that performance standards are communicated and documented.
Oversees determination of need, selection, implementation, and maintenance of QA tools.

Oversees, and is the point of escalation for, the Jira (defect tracking) database for all testing phases (update, follow up, and escalate overdue issues).
QA issue prioritization and resolution facilitation.
Facilitates weekly Quality Assurance meetings and maintains the agenda.

Role: Module Leads (1 to 4 hours per week per person)
Responsibilities:
Responsible for the functional test planning and coordination for their module.
Works with the QA Lead and Testing Coordinator to plan and prepare for testing; monitors testing progress and ensures testing is being assigned for a particular module area.
Provides input on, oversees, and reviews the writing of use cases and test cases; works with the Testing Coordinator and others to ensure use cases and test cases are written correctly to ensure complete test coverage.
Provides formal sign-off on all testing deliverables and events.
Attends the monthly QA meeting and may travel to group face-to-face testing meetings when needed to assist the Testing Coordinator in resolving issues.

Role: Testing Coordinator/KFS Lead Tester (30 to 40 hours)
Responsibilities:
Primary point of communication for testers.
Coordinates the team of testers and activities across all modules.
Provides training and assistance to testers to ensure they are following testing and defect reporting processes.
Coordinates assignments for testers and ensures that testers understand and are following the KFS QA process.
Ensures their team is active and meeting at least once a week to contribute to the testing effort, or testing independently if permitted.
Acts as first line of support for testing using the KFS QA process, including entering and closing issues and working with test cases.
Works with the Module Leads to ensure test cases and scenarios are assigned and being tested and that testers understand their assignments.
Provides bi-weekly status reports to module leads and the QA Lead.
Performs the duties of a Tester as time from coordination allows; tests the application to ensure it is working as specified, including use of test cases or on an ad-hoc basis, and reports defects and other issues found during testing in the defect tracking system.
Manages the Jira (defect tracking) database for all testing phases (update, follow up, and escalate overdue issues).
Works with the QA Lead to improve KFS's quality assurance capabilities and improve testing processes.
Attends testing meetings and may travel to group face-to-face meetings.

Role: Campus Lead Testers (2 - 10 hours)
Responsibilities:
Understand and follow KFS manual testing processes and responsibilities.
Have a deeper understanding of the business/functional requirements for their unit.
Provide guidance and assistance on the testing process to Campus Testers in their unit.
Provide feedback on the testing process to the KFS project QA team.
Assist in the validation of use cases and test cases.
Take on the Campus Tester role as time permits.
Assist in the verification of test data used in manual test scripts.
Identify and assess defects uncovered in testing.
Participate in User Acceptance Testing; report issues and test results as required.

Role: Campus Testers (Super Users/Champions/Volunteer Testers/DFA, Depts & Units; non-project staff) (2 - 5 hours)
Responsibilities:
Understand and follow KFS manual testing processes and responsibilities.
Participate in executing manual test scripts.
Record the result (pass/fail).
Record any new defects uncovered during test execution.
Provide comments on any defects discovered during testing.
Retest as necessary.
Participate in User Acceptance Testing; report issues and test results as required.

Role: Campus Readiness Lead (2 - 5 hours)
Responsibilities:
Works with the Testing Coordinator to coordinate acquisition and deployment of testing resources.
Coordinates communication about the status of QA efforts to campus stakeholders.

Role: CIT IS Test Team (1 @ 50%, 2 @ 50-100%)
Responsibilities:
Technical leadership for the test project, including the test approach.
Support for customers and the QA Team, automation test-tool introduction, test planning, staff supervision, and progress status reporting.
Verifying the quality of the requirements, including testability; requirement definition; test design; test-script and test-data development; test automation; test-environment configuration; test-script configuration management; and test execution.
Interaction with the test-tool vendor to identify the best ways to leverage the test tool on the project.
Staying current on the latest test approaches and tools, and transferring this knowledge to the test team.
Conducting test-design and test-procedure walkthroughs and inspections.
Implementing test-process improvements resulting from lessons learned and benefits surveys.
Maintaining the Test Traceability Matrix (tracing the test procedures to the test requirements).
Test-process implementation.
Ensuring that test-product documentation is complete.
Attending local testing meetings, and possibly traveling to group face-to-face testing meetings.

Role: Business Analyst (3 @ 33% time, or 1 @ 75%-100%)
Responsibilities:
Works with the Testing Coordinator, Module Leads, and QA Lead to assist with the writing of use cases and test cases.
Elicits use cases to meet the product capabilities.
Elicits acceptance criteria for use cases (business rules).
Elicits business objectives.
Elicits product capabilities / major features.
Engages the customer (SMEs, others).
Explores technical options.
Produces a high-level system architecture, i.e., what the system will do.
Produces a conceptual data model.
Analyzes data conversion needs.
Translates requirements for developers and others.

Role: Interfaces Technical Lead (2 - 15 hours)
Responsibilities:
Coordinates all inbound and outbound interfaces and files.
Coordinates test file receipt and/or distribution with outside providers.
Works with the Business Analyst to gather data requirements and files for the interfaces for the different phases of testing.
Works with the Testing Coordinator to plan test sessions and manage issues and risks.

Role: Configuration Management (2 - 15 hours)
Responsibilities:
Managing and tracking change to the KFS environment(s).
Communicating changes to the KFS environment(s).
Implementing change to the KFS environment(s).
Validating change to the KFS environment(s).

Role: Technical Support (includes development, conversion, and environment) (2 - 15 hours)
Responsibilities:
Assess defects uncovered during testing.
Create and maintain testing environments.
Develop modifications to fix issues.
Migrate objects to the appropriate test environments.
Allocate technical resources to address defects/issues during testing phases.

Role: Training (2 - 15 hours)
Responsibilities:
Work with the Testing Coordinator to produce training documentation for the testing effort, if necessary.
Assist the Testing Coordinator in training testers on the application functionality, if necessary.
Assist the Testing Coordinator in training testers on executing tests and following the KFS QA process, if necessary.

Role: Internal Audit (1 - 2 hours)
Responsibilities:
Selectively review test results and reconciliation for completeness and accuracy.

What is Estimation?
Estimation is the process of finding an estimate, or approximation, which is a value that
is usable for some purpose even if the input data may be incomplete, uncertain, or unstable
(Wikipedia definition).
An estimate is a prediction, or a rough idea, of how much effort it would take to
complete a defined task, where the effort could be time or cost. An estimate is a forecast or
approximation of what a piece of work would cost and roughly how long it would take to
complete; in particular, it is an approximate computation of the probable cost of a
piece of work.
Test estimation calculations are based on:

Past Data/Past experience

Available documents/Knowledge

Assumptions

Calculated risks

Before starting, one common question that arises in a tester's mind is: why do we
estimate? The answer is pretty simple: we estimate tasks to avoid exceeding
timescales and overshooting budgets for testing activities.
A few points need to be considered before estimating testing activities:

Check whether all requirements are finalized or not.

If they are not, how frequently are they going to change?

All responsibilities and dependencies are clear.

Check whether the required infrastructure is ready for testing or not.

Check that all assumptions and risks are documented before estimating the task.

Software Estimation Techniques

There are different software test estimation techniques which can be used for
estimating a task:
1) Delphi Technique
2) Work Breakdown Structure (WBS)
3) Three Point Estimation
4) Functional Point Method

1) Delphi Technique:
The Delphi technique is one of the most widely used software test estimation techniques.
The Delphi method is based on surveys and basically collects information from
participants who are experts. In this estimation technique each task is assigned to a
team member, and over multiple rounds surveys are conducted until a final estimate for the
task is settled. In each round the thoughts about the task are gathered and feedback is
provided. Using this method, you can get quantitative and qualitative results.
Among the available techniques, this one gives good confidence in the estimate, and it
can be used in combination with the other techniques.
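The round-by-round convergence described above can be sketched in a few lines; the expert estimates (in hours) below are invented for illustration:

```python
# Hypothetical Delphi rounds: after each round the experts see the group's
# feedback and revise their estimates, so the spread narrows.
rounds = [
    [40, 80, 55, 100],  # round 1: wide spread of expert estimates (hours)
    [50, 70, 60, 75],   # round 2: narrowing after feedback
    [60, 65, 62, 68],   # round 3: near consensus
]

# The final task estimate is taken from the last (consensus) round.
final_estimate = sum(rounds[-1]) / len(rounds[-1])
print(final_estimate)  # 63.75
```

The point of the technique is the narrowing spread between rounds, not the particular averaging used at the end.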
2) Work Breakdown Structure (WBS):
A big project is made manageable by first breaking it down into individual components in a
hierarchical structure, known as the Work Breakdown Structure, or WBS.
The WBS helps the project manager and the team to create the task scheduling and detailed cost
estimation of the project. By going through the WBS motions, the project manager and team will
have a pretty good idea whether or not they've captured all the necessary tasks, based on
the project requirements, that are going to need to happen to get the job done.
In this technique the complex project is divided into smaller pieces. The modules are divided
into smaller sub-modules. Each sub-module is further divided into functionalities, and each
functionality can be divided into sub-functionalities. After breaking down the work, all
functionality should be reviewed to check whether each and every functionality is covered in the
WBS.
Using this you can easily figure out all the tasks that need to be completed; because they are
broken down into detailed tasks, estimating the detailed tasks is much easier than
estimating the overall complex project in one shot.
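As a small sketch of this decomposition (the project and module names below are invented for illustration, not taken from the text):

```python
# Hypothetical WBS: project -> modules -> sub-modules -> lowest-level functionalities
wbs = {
    "Online Store": {
        "Checkout": {
            "Payment": ["card payment", "refund"],
            "Cart": ["add item", "remove item"],
        },
        "Catalog": {
            "Search": ["keyword search", "filters"],
        },
    },
}

def leaf_tasks(node):
    """Collect the lowest-level functionalities; these are what get estimated."""
    if isinstance(node, list):
        return node
    return [task for child in node.values() for task in leaf_tasks(child)]

print(len(leaf_tasks(wbs)))  # 6 small tasks to estimate instead of one big project
```

Estimating the six leaf tasks individually, then summing, is exactly the "estimate the detail, not the whole" idea described above.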
Work Breakdown Structure has four key benefits:

Work Breakdown Structure forces the team to create detailed steps:
In the WBS, all steps required to build or deliver the service are divided into detailed
tasks by the project manager, team, and customer. It helps to raise critical issues early
on, narrow down the scope of the project, and create a dialogue that brings out
assumptions and ambiguities.

Work Breakdown Structure helps to improve the schedule and budget:
The WBS enables you to make an effective schedule and good budget plans. As all tasks
are already available, it helps in generating a meaningful schedule and makes
drawing up a reliable budget easier.

Work Breakdown Structure creates accountability:
The level of detail in the task breakdown helps to assign a particular module's tasks to
individuals, which makes it easier to hold a person accountable for completing the task. Also,
with the detailed tasks in the WBS, people cannot hide under the cover of broadness.

Work Breakdown Structure creation breeds commitment:
The process of developing and completing a WBS breeds excitement and
commitment. Although the project manager will often develop the high-level WBS,
he will seek the participation of his core team to flesh out the extreme detail of the
WBS. This participation sparks involvement in the project.

3) Three Point Estimation:
Three point estimation is an estimation method based on statistical data. It is very
similar to the WBS technique: tasks are broken down into subtasks, and three types of estimate
are produced for each sub-piece.
Optimistic Estimate (best case scenario in which nothing goes wrong and all conditions are
optimal) = A
Most Likely Estimate (most likely duration; there may be some problems, but most
things will go right) = M
Pessimistic Estimate (worst case scenario in which everything goes wrong) = B
Formula for the estimate: E = (A + 4*M + B) / 6
Standard Deviation: SD = (B - A) / 6
Nowadays, planning poker and Delphi estimates are the most popular test estimation
techniques.
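The two formulas can be wrapped in a small helper; the sample values for A, M, and B are illustrative assumptions:

```python
def three_point_estimate(a, m, b):
    """PERT-style estimate: E = (A + 4*M + B) / 6, SD = (B - A) / 6."""
    estimate = (a + 4 * m + b) / 6
    std_dev = (b - a) / 6
    return estimate, std_dev

# Hypothetical subtask: optimistic 4 h, most likely 6 h, pessimistic 14 h
e, sd = three_point_estimate(4, 6, 14)
print(e)   # 7.0
print(sd)  # about 1.67
```

Note how the pessimistic value pulls the estimate above the most likely value, while the SD quantifies how uncertain the estimate is.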
4) Functional Point Method:
Function points are measured from a functional, or user, point of view.
The method is independent of computer language, capability, technology, or the development
methodology of the team. It is based on available documents such as the SRS and design documents.
In this technique we have to give a weightage to each functional point. Before starting the actual
estimation, functional points are divided into three groups: Complex, Medium, and
Simple. Based on similar projects and organization standards, we have to define an estimate per
function point.
Total Effort Estimate = Total Function Points * Estimate defined per Function Point
Let's take a simple example to make this clearer:

           Weightage   Function Points   Total
Complex    5           5                 25
Medium     3           20                60
Simple     1           35                35

Total Function Points: 120
Estimate defined per point: 4.15
Total Estimated Effort (Person Hours): 498
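The worked example can be reproduced in a few lines; the per-group weights (5, 3, 1) are the ones implied by the table's totals (60 / 20 = 3 for Medium, 35 / 35 = 1 for Simple), so treat them as an assumption:

```python
# (weight, function points) per complexity group, following the table above
groups = {"Complex": (5, 5), "Medium": (3, 20), "Simple": (1, 35)}

total_fp = sum(weight * points for weight, points in groups.values())
estimate_per_fp = 4.15  # person-hours per function point, as in the example
total_effort = round(total_fp * estimate_per_fp, 2)

print(total_fp)      # 120
print(total_effort)  # 498.0
```
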

Advantages of the Functional Point Method:

Estimates can be prepared in the pre-project stage.

Because it is based on requirement specification documents, the method's reliability is
relatively high.

Disadvantages of Software Estimation Techniques:

Estimates can be too high or too low due to hidden factors.

Not really accurate.

Based on judgment rather than measurement.

Involves risk.

May give false results.

Estimates can sometimes not be trusted.

Software Estimation Techniques Conclusion:

There may be other methods which can also be used effectively for
project test estimation; in this article we have seen the most popular software
estimation techniques used in project estimation. There can't be a single hard and fast rule
for estimating the testing effort for a project. It is recommended to keep adding to your
knowledge base of test estimation methods, and to keep estimation templates constantly
revised based upon new findings.

Definitions of Retesting and Regression Testing:

Re-Testing: After a defect is detected and fixed, the software should be retested to confirm
that the original defect has been successfully removed. This is called Confirmation Testing or
Re-Testing.
Regression Testing: Testing your software application when it undergoes a code change to
ensure that the new code has not affected other parts of the software.
What is User Acceptance Testing?
User Acceptance Testing (UAT) is the software testing process in which the system is tested for
acceptability and the end-to-end business flow is validated. This type of testing is executed by the
client in a separate environment (similar to the production environment) to confirm whether the
system meets the requirements of the requirement specification or not.
UAT is performed after System Testing is done and all or most of the major defects have
been fixed. It is conducted in the final stage of the Software Development Life
Cycle (SDLC), prior to the system being delivered to a live environment. UAT users or end users
concentrate on end-to-end scenarios, and UAT typically involves running a suite of tests on the
completed system.
Here are the top 10 tips to help you achieve your software testing documentation
goals:
1. QA should be involved from the very first phase of the project so that QA and documentation
work hand in hand.
2. The process defined by QA should be followed by technical people; this helps remove most of
the defects at a very early stage.
3. Creating and maintaining software testing templates is not enough on its own; get people
to use them.
4. Don't just create a document and leave it; update it as and when required.
5. Requirement changes are an important part of a project; don't forget to add them to the list.
6. Use version control for everything. This will help you manage and track your
documents easily.
7. Make the defect remediation process easier by documenting all defects. Make sure to include a
clear description of the defect, steps to reproduce it, the affected area, and details about the
author when documenting any defect.
8. Try to document what is required for you to understand your work and what you will need
to present to your stakeholders whenever required.
9. Use a standard template for documentation, such as an Excel sheet or Word document
template, and stick to it for all your documentation needs.
10. Keep all project-related documents in a single location, accessible to every team member
for reference as well as for updating whenever required.
I am not saying that by applying the above steps you will get sudden results. I know this
change won't happen in a day or two, but at least we can start so that these changes start
happening slowly.
After all, the documentation needs documentation too, doesn't it?

Test Case Defect Density

Total number of errors found in test scripts versus those developed and executed.

(Number of Failed Test Scripts / Total Test Scripts Executed) * 100

Example: total test scripts developed 1360, total test scripts executed 1280, total test scripts
passed 1065, and total test scripts failed 215.
So, the test case defect density is
215 X 100
---------------------------- = 16.8%
1280
This 16.8% value can also be called the test case efficiency %, which depends upon the total
number of test cases that uncovered defects.

Defect Slippage Ratio

Number of defects slipped (reported from production) versus number of defects reported
during execution.

Number of Defects Slipped / (Number of Defects Raised - Number of Defects Withdrawn)

Example: customer-filed defects are 21, total defects found while testing are 267, and the total
number of invalid defects is 17.
So, the slippage ratio is
[21 / (267 - 17)] X 100 = 8.4%

Requirement Volatility

Number of requirements agreed versus number of requirements changed.

(Number of Requirements Added + Deleted + Modified) * 100 / Number of Original Requirements

Ensure that the requirements are normalized or defined properly while estimating.

Example: the VSS 1.3 release had 67 requirements initially; later, 7 new requirements were
added, 3 of the initial requirements were removed, and 11 requirements were modified.
So, the requirement volatility is
(7 + 3 + 11) * 100 / 67 = 31.34%

Review Efficiency

Review efficiency is a metric that offers insight into the quality of reviews and testing.
Some organizations also call this static testing efficiency, and they aim to
find a minimum of 30% of defects in static testing.
Review Efficiency = 100 * Total number of defects found by reviews / Total number of project defects

Example: a project found a total of 269 defects in different reviews, which were fixed, and the
test team reported 476 defects that were valid.
So, the review efficiency is [269 / (269 + 476)] X 100 = 36.1%
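All four worked examples above follow the same percentage shape and can be checked with one small helper (the figures are the ones given in the text):

```python
def pct(numerator, denominator):
    """Return numerator / denominator as a percentage rounded to one decimal."""
    return round(100.0 * numerator / denominator, 1)

# Test case defect density: failed scripts vs. executed scripts
print(pct(215, 1280))       # 16.8

# Defect slippage ratio: production defects vs. valid defects raised in testing
print(pct(21, 267 - 17))    # 8.4

# Requirement volatility: (added + deleted + modified) vs. original requirements
print(pct(7 + 3 + 11, 67))  # 31.3

# Review efficiency: review defects vs. all project defects
print(pct(269, 269 + 476))  # 36.1
```
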

Efficiency & Effectiveness of Processes

Effectiveness: Doing the right thing. It deals with meeting the desirable
attributes that are expected by the customer.

Efficiency: Doing the thing right. It concerns the resources used for the service to
be rendered.

DEFECT DENSITY FORMULA

Defect Density = Number of Defects / Size

The defects counted are:

confirmed and agreed upon (not just reported);

dropped defects are not counted.

Size = the scope might be one of the following:

a duration (say, the first month, the quarter, or the year);

each phase of the software life cycle;

the whole of the software life cycle.
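A minimal sketch of the formula, assuming size is measured in KLOC (the text leaves the size unit open):

```python
def defect_density(confirmed_defects, size_kloc):
    """Defect density = confirmed, agreed-upon defects / size (here: KLOC).
    Dropped or withdrawn defects must be excluded before calling this."""
    return confirmed_defects / size_kloc

# Hypothetical figures: 30 confirmed defects found in a 12 KLOC component
print(defect_density(30, 12))  # 2.5 defects per KLOC
```

The same function works for any of the size/period choices above, as long as numerator and denominator cover the same scope.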

What is a Traceability Matrix?

The focus of any testing engagement is, and should be, maximum test coverage. Coverage simply
means that we need to test everything there is to be tested; the aim of any testing project should be
100% test coverage. A Requirements Traceability Matrix, to begin with, establishes a way to make sure
we place checks on the coverage aspect. It helps in creating a snapshot to identify coverage gaps.

How to Create a Traceability Matrix?

To begin with, we need to know exactly what it is that needs to be tracked or traced.
Testers start writing their test scenarios/objectives, and eventually the test cases, based on some input
documents: the Business Requirements document, Functional Specifications document, and Technical
Design document (optional).
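A traceability matrix is, at heart, a mapping from each requirement to the test cases that cover it; the requirement and test case IDs below are invented for illustration:

```python
# Hypothetical RTM: requirement id -> ids of test cases that cover it
rtm = {
    "REQ-1": ["TC-1", "TC-2"],
    "REQ-2": ["TC-3"],
    "REQ-3": [],  # no covering test case: a coverage gap
}

gaps = [req for req, cases in rtm.items() if not cases]
coverage_pct = round(100 * (len(rtm) - len(gaps)) / len(rtm), 1)

print(gaps)          # ['REQ-3']
print(coverage_pct)  # 66.7
```

The gap list is exactly the "snapshot of coverage gaps" the matrix is meant to expose.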

A web application is a three-tier application.

It has a browser (displays data; built using HTML, DHTML, XML, JavaScript) -> web server
(manipulates data; manipulations are done using programming languages or scripts such as
advanced Java, ASP, JSP, VBScript, JavaScript, Perl, ColdFusion, PHP) -> database server
(stores data; data storage and retrieval is done using databases such as Oracle, SQL Server,
Sybase, MySQL).

WEB TESTING
This is done for 3-tier applications (developed for the Internet / intranet / extranet).
Here we have a browser, a web server, and a DB server.
The applications accessible in the browser are developed in HTML, DHTML, XML, JavaScript, etc. (we
can monitor the application through these).
Applications for the web server are developed in Java, ASP, JSP, VBScript, JavaScript, Perl, Cold
Fusion, PHP, etc. (all the manipulations are done on the web server with the help of these programs).
The DB server runs Oracle, SQL Server, Sybase, MySQL, etc. (all data is stored in the database
available on the DB server).

What are Software Testing Metrics?

A metric is a quantitative measure of the degree to which a system, system component, or process
possesses a given attribute.
Metrics can be defined as STANDARDS OF MEASUREMENT.
Software metrics are used to measure the quality of the project. Simply put, a metric is a unit used for
describing an attribute; a metric is a scale for measurement.

For example, a kilogram is a metric for measuring the attribute weight. Similarly, in software,
"How many issues are found in a thousand lines of code?" involves two measurements: the number of
issues and the number of lines of code. A metric is defined from these two measurements.
Test metrics examples:

How many defects exist within the module?

How many test cases are executed per person?

What is the test coverage %?

What is Software Test Measurement?

Measurement is the quantitative indication of the extent, amount, dimension, capacity, or size of some
attribute of a product or process.
Test measurement example: total number of defects.
Please refer to the diagram below for a clear understanding of the difference between measurement and metrics.

Why Test Metrics?

Generation of software test metrics is the most important responsibility of the software test
lead/manager.
Test metrics are used to:
1. Take decisions for the next phase of activities, such as estimating the cost and schedule of
future projects.
2. Understand the kind of improvement required to make the project a success.
3. Take decisions on the process or technology to be modified, etc.

Importance of Software Testing Metrics:

As explained above, test metrics are the most important way to measure the quality of the software.
Now, how can we measure the quality of the software by using metrics?
Suppose a project does not have any metrics; then how will the quality of the work done by a test
analyst be measured?
For example, a Test Analyst has to:
1. Design the test cases for 5 requirements.
2. Execute the designed test cases.
3. Log the defects and fail the related test cases.
4. After a defect is resolved, re-test the defect and re-execute the corresponding failed
test case.
In the above scenario, if metrics are not followed, then the work completed by the test analyst will be
subjective, i.e., the test report will not have the proper information to know the status of his
work/project.
If metrics are involved in the project, then the exact status of his/her work, with proper numbers/data,
can be published.
That is, in the test report we can publish:
1. How many test cases have been designed per requirement?
2. How many test cases are yet to be designed?
3. How many test cases have been executed?
4. How many test cases have passed/failed/been blocked?
5. How many test cases have not yet been executed?
6. How many defects have been identified, and what is the severity of those defects?
7. How many test cases failed due to one particular defect? etc.
Based on the project needs, we can have more metrics than the above-mentioned list, to know the
status of the project in detail.
Based on the above metrics, the test lead/manager will get an understanding of the below-mentioned
key points:
a) Percentage of work completed
b) Percentage of work yet to be completed
c) Time to complete the remaining work
d) Whether the project is going as per schedule or lagging, etc.
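Points (a)-(d) can be derived mechanically from raw execution counts; all numbers below are hypothetical:

```python
# Hypothetical counts from a test execution report
total_cases = 200
executed = 150
elapsed_days = 6
planned_days = 10

pct_complete = 100 * executed / total_cases           # (a) 75.0% of work completed
pct_remaining = 100 - pct_complete                    # (b) 25.0% yet to be completed
rate = executed / elapsed_days                        # 25 cases executed per day so far
days_to_finish = (total_cases - executed) / rate      # (c) 2.0 more days at this rate
on_schedule = elapsed_days + days_to_finish <= planned_days  # (d) True

print(pct_complete, pct_remaining, days_to_finish, on_schedule)
```

Extrapolating from the observed execution rate is a simple assumption; a real report would also factor in blocked cases and retests.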

Mobile Device Checklist

1. The application should start when its URL is entered in the browser.
2. If the application is busy, it should show an hourglass or some other mechanism
to notify the user that it is processing.
3. Minimize and restore should work properly.
4. The title of the window and its information should make sense to the user.
5. If tab navigation is present, Tab should move focus in the forward direction and
Shift+Tab should move it in the backward direction.
6. Tab order should be left to right and top to bottom within a group box.
7. If focus is on any control, that menu item should be highlighted and
easily noticed by the user.
8. The user should not be able to select a grayed-out or disabled control. Try this by
tapping on that area multiple times.
9. Verify that text in paragraphs is left-justified.
10. Verify the animation of the error messages.
11. Check for correct and incorrect answer sound effects.
12. Check that the correct error message pops up when the user selects a correct
or incorrect answer.
13. For a list box, tapping on the arrow should give the list of available options.
14. The list should be scrollable, and the user should not be able to type in it.
15. For radio buttons, only one of the given options should be selectable at a
time.
