Test Plan Template 20
Amendment History:
Version Date Amendment History
1.0 27/07/2009 Re-drafted for Bolton
Reviewers:
This document must be reviewed by the following:
Name Signature Title / Responsibility Date Version
Approvals:
This document must be approved by the following:
Name Signature Title / Responsibility Date Version
Distribution:
Clinical Dashboard Programme Manager
Clinical Dashboard Test Co-ordinator
Document Status:
This is a controlled document.
Whilst this document may be printed, the electronic version maintained in FileCM is the
controlled copy. Any printed copies of the document are not controlled.
Related Documents:
These documents will provide additional information.
Ref no Doc Reference Number Title Version
1 NPFIT-FNT-TO-RM-GEN-0172 CAP Glossary of Terms 0.2
2 NPFIT-FNT-TO-TIN-1070 Common Assurance Process Procedure Latest
Glossary of Terms:
List any terms used in this document.
Term Acronym Definition
Common Assurance Process CAP CAP is a single end-to-end process for assuring the development and delivery of high quality and clinically safe systems.
Contents
1. Introduction
2. Features to be tested
3. Features not to be tested
4. Test Items
5. Item Pass/Fail Criteria
6. Approach
7. Suspension Criteria and Resumption Requirements
8. Test Deliverables
9. Testing Tasks
9.1. Test Planning
9.2. Test Estimating and Scheduling
9.3. Allocation and Assignment of Personnel to the testing Sub-Project
9.4. Test Analysis and Design
9.5. Test Specification
9.6. Test Data preparation
9.7. Test Execution
9.7.1. Resources
9.7.2. Re-Test Procedure
9.8. Test Management
9.9. Test Environment Planning and Co-ordination
9.10. Test Incident Management
9.11. Test Automation
9.12. Test Witnessing
9.12.1. Inspection of Test Results, Scripts and Procedures
9.13. Reporting
9.14. Ensuring Quality, Completeness and Relevance
10. Test Issue Management
11. Environmental Needs
12. Responsibilities
13. Staffing and Training Needs
14. Schedule
15. Risks and Contingencies
1. Introduction
This document describes how the Test Strategy for release 2009.2 of the Clinical Dashboard
Product will be implemented by detailing the processes by which testing is conducted and
the means by which it is recorded and managed.
It also details the scope of the tests to be conducted, together with their expected outcomes.
This scope is then carried into the Test Schedule, which details the tests required to cover
the areas defined in this document.
This release of the Clinical Dashboard software incorporates the changes required for the
Bolton PCT implementation, including rendering gauges within Reporting Services reports,
due to the practice-level data filtering specified by the Trust.
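As an illustration of the practice-level filtering referred to above, the sketch below shows how a dashboard dataset might be restricted to a single practice before gauges are rendered. This is a hypothetical Python/pandas example only; the column name and practice code are assumptions, and the actual filtering is applied within the Reporting Services reports themselves.

```python
import pandas as pd

def filter_to_practice(indicator_data: pd.DataFrame, practice_code: str) -> pd.DataFrame:
    """Return only the rows for the requested practice.

    Hypothetical illustration: the live dashboard applies the equivalent
    filter through a Reporting Services report parameter before the
    gauges are rendered.
    """
    return indicator_data[indicator_data["PracticeCode"] == practice_code]

# Example usage with a fictitious practice code:
# practice_view = filter_to_practice(all_practice_data, "P00001")
```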
2. Features to be tested
2.1. Functionality
Data Load and localisation
Reporting Services functionality
Sigma Business Intelligence functionality
Sharepoint functionality
2.2. Interfaces
(External and Internal if necessary)
2.3. Report Production
Reports include “Urgent Contacts” sheets for each Practice
2.4. Screen Functionality
All screens to be usable, legible and to display the correct data.
2.5. Simulated Business Running
Test data will be loaded to check end-to-end reconciliation and screen functionality.
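As an illustration only, the following sketch shows one way an end-to-end reconciliation check could compare counts from a hypothetical source extract with the figures displayed on the dashboard; the actual reconciliation rules for the loaded test data are not defined by this plan.

```python
def reconcile(source_counts: dict[str, int], dashboard_counts: dict[str, int]) -> list[str]:
    """Return the indicators whose dashboard figure does not match the source extract."""
    mismatches = []
    for indicator, expected in source_counts.items():
        actual = dashboard_counts.get(indicator)
        if actual != expected:
            mismatches.append(f"{indicator}: source shows {expected}, dashboard shows {actual}")
    return mismatches

# Example with made-up figures:
# reconcile({"Urgent Contacts": 42}, {"Urgent Contacts": 41})
# -> ['Urgent Contacts: source shows 42, dashboard shows 41']
```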
2.6. Backup/ Restore
Trust IT staff to have backed up data as instructed by the Technical team.
2.7. Failure/ Recovery
Centrally tested.
2.8. Performance / Capacity / Volume / Limit / Stress
Centrally tested.
2.9. Operational / Housekeeping / Database administration
All terminals in Practice offices to be functional and set to correct resolution. Infrastructure to
be in place and data connections made.
3. Features not to be tested
N.B.
A Requirements Matrix is one of the deliverables of CAP and will be created and submitted
at the Design Stage. This Requirements Matrix must be updated to reflect that all items in
this section are out of scope.
4. Test Items
A Test Item is anything that will be delivered; the physical things that are to be tested.
Identify all pre-requisite Test Items required to undertake the testing, and include their
version/revision level.
Also identify anything that is needed to convert, transfer or migrate Test Items for the
purpose of testing.
Supply references to the associated item documentation where it exists:
Requirements Specification
Design Specification
Package Components
Bespoke Components
Package Configuration Specifications
Data
Implementation Procedures
Operations Documentation
Training Documentation
Production Support / Production Acceptance Criteria
Maintenance Agency Acceptance Criteria
User Guides
- User
- Administration
Reference any outstanding fault or incident reports associated with any Test Item.
5. Item Pass/Fail Criteria
If the Entry and Exit Criteria for this Test Phase differ from those defined in the Test
Strategy, any update should be detailed in this section.
6. Approach
Testing Stages are necessary to ensure that the code and configuration of the delivered
Dashboard are working in accordance with the requirements listed. To ensure this, there
must be a rigid process through which agreed tests are executed, witnessed, documented
together with any ensuing issues, reviewed and reported on.
It is envisaged that the testing will be conducted over a life cycle consisting of four stages:
Module/System Testing
Integration Testing
Ready for Operations Testing
User Acceptance Testing
Go Live will take place after Ready for Operations testing has been completed. Integration,
Ready for Operations and User Acceptance testing will be conducted on the live
environment. System testing is conducted on a separate test environment at System C’s
Warrington Office.
7. Suspension Criteria and Resumption Requirements
Suspension of testing will be called for if there is an unacceptably high number of Level 5
faults/issues. The exact criteria for suspension are to be agreed.
Resumption of testing will be authorised once the number of Level 5 issues has fallen below
the agreed threshold. Authority to resume will be sought from the NHS CFH/Trust resources
by the Supplier Test Co-ordinator.
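Once the suspension criteria have been agreed, a check of the kind sketched below could be applied; the threshold value and issue-record fields here are hypothetical and do not pre-empt what is agreed.

```python
# Hypothetical threshold - the actual value is to be agreed with NHS CFH and the Trust.
SUSPENSION_THRESHOLD = 10

def should_suspend(open_issues: list[dict]) -> bool:
    """Call for suspension when the number of open Level 5 issues reaches the agreed threshold."""
    level_5_open = sum(
        1 for issue in open_issues
        if issue["level"] == 5 and issue["status"] == "open"
    )
    return level_5_open >= SUSPENSION_THRESHOLD
```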
8. Test Deliverables
The following test documents will be supplied:
Incident Reports
Test Plan
Test Specification
Test Schedule
Test Scripts and Results Documentation
Test Report
Release Notes.
9. Testing Tasks
9.1. Test Planning
To be done by the Test Manager in consultation with the Development Team, Trust
and Authority
9.2. Test Estimating and Scheduling
As above
9.3. Allocation and Assignment of Personnel to the testing Sub-Project
Test Manager to request personnel from Programme Manager and Product
Development Manager
9.4. Test Analysis and Design
This will be done by the Test Department in consultation with the Product
Development Manager and the Programme Manager
9.5. Test Specification
Separate Document
9.6. Test Data preparation
A mixture of test and live data will be used, with the live data coming in towards the end of
the test cycle. Live data will be anonymised if necessary.
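Where live data is anonymised, one possible approach is sketched below: direct identifiers are replaced with irreversible pseudonyms before the data is used for testing. The field names are hypothetical and the actual anonymisation rules will be agreed with the Trust.

```python
import hashlib

def anonymise_record(record: dict) -> dict:
    """Replace direct identifiers with pseudonyms (hypothetical field names)."""
    anonymised = dict(record)
    nhs_number = anonymised.pop("nhs_number", "")
    # A salted or keyed hash would be preferable in practice; a bare hash is shown for brevity.
    anonymised["patient_pseudonym"] = hashlib.sha256(nhs_number.encode()).hexdigest()[:12]
    anonymised.pop("patient_name", None)
    return anonymised

# Example:
# anonymise_record({"nhs_number": "9434765919", "patient_name": "TEST PATIENT", "practice": "P00001"})
```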
9.7. Test Execution
9.7.1. Resources
The formal testing activity is estimated to take 5 days elapsed time to complete. The
following resources are required to be available during some or all of the testing activity, as
indicated below.
The actual test execution time for the formal testing activity is estimated at a total of 30
hours, scheduled within the 5 days allocated to the test phase. The detailed programme for
testing will be agreed with the Trust prior to testing. The Authority resources required for the
testing, together with the detailed resource requirements, are listed in the Test Specification.
The Authority will be informed of the date and location of the testing and may provide a
witness if desired.
9.13. Reporting
A draft test report will be prepared prior to completion of each Test stage. This report will be
internally reviewed before formal issue.
The underlying technical test infrastructure and environment will be configured prior to
execution of all tests.
Tests will be executed according to the test schedule included in this Test Plan and the
associated Test Specification, or during available time after the scheduled test date if a test
could not be completed as planned.
Against each test, these results will include as a minimum:
Person who executed the test;
Date the test was executed;
Pass/fail result of the test.
There are two options when a test fails:
When the cause of the failure is obvious, such as a configuration error, then:
- the test failure is recorded;
- the cause of the failure is entered as a comment with the test result;
- the error is fixed immediately; and
- the test is repeated immediately; or
When the test fails and the cause is not obvious, a test issue will be raised in the test
management system.
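Purely as an illustration of the minimum result fields listed above and the two failure-handling options, the sketch below records one result line and notes where a test issue would be raised; the field names and the raise_test_issue helper are hypothetical, since results and issues will actually be held in the test management system.

```python
import csv
from datetime import date

RESULT_FIELDS = ["test_id", "executed_by", "date_executed", "result", "comment"]

def record_result(path: str, test_id: str, executed_by: str, result: str, comment: str = "") -> None:
    """Append one test result containing the minimum fields required by this plan."""
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=RESULT_FIELDS)
        writer.writerow({
            "test_id": test_id,
            "executed_by": executed_by,
            "date_executed": date.today().isoformat(),
            "result": result,    # "pass" or "fail"
            "comment": comment,  # e.g. the cause of an obvious failure
        })

# When a failure has no obvious cause, a test issue would be raised instead, e.g.
# raise_test_issue(test_id, description)  # hypothetical helper for the test management system
```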
Once all tests are completed, the development and test teams will review the test results and
make a recommendation on whether the software is ready for release. This team can make a
positive recommendation even if not all tests have passed, after carefully evaluating the
failures and their impact.
10. Test Issue Management
Test issues will be classified using the following severity levels:
Test Issue Level 1 – prevents a critical element of the Trust Services from
functioning or being performed, with a direct or indirect impact on patients
and/or End Users;
Test Issue Level 2 – all elements of the Trust Services can still function with a
workaround, however functionality or performance is severely impacted;
Test Issue Level 3 – all elements of the Trust Services can still function with a
workaround, however required functionality or performance is materially
impacted;
Test Issue Level 4 – all elements of the Trust Services can still function, however
there is a minor functionality/performance impact; and
Test Issue Level 5 – all elements of the Trust Services can still function, however
there are minor cosmetic defects with no functional impact and no impact
on patients or clinical services.
11. Environmental Needs
11.4. Security and access requirements to the test area and equipment
Access to test areas should be restricted to authorised System C, Trust and NHS CFH
resources.
11.5. Test tools and utilities required (including those supplied by NHS
CFH which must be installed and configured prior to Integration
Testing)
None are envisaged apart from Test Track.
11.9. Identify a source for any of the needs that are not currently
available to the test group
To be negotiated.
12. Responsibilities
This section is included to show who is responsible for which activities and deliverables.
Identify all of the groups responsible for test-related managing, designing, preparing,
executing, witnessing, checking, and resolving:
Detail who will prepare test cases, test data and expected results in each Test Stage
Detail who will execute the testing
Detail who will make the go/ no go decisions between Test Stages
In addition, identify the groups responsible for the Test Items identified in Section 4 - Test
Items, and the environmental needs identified in Section 11 - Environmental Needs.
These groups may include the developers, testers, operations/production support staff,
technical support, user representatives, data administration, and quality support staff.
Identify the point of contact for particular applications, platforms, systems and business
functions (especially for interface testing).
Identify the point of contact for inter-dependent projects
13. Staffing and Training Needs
Training may relate to the system under test, the business, the test techniques, or the tools
being used.
14. Schedule
Include test milestones identified in the Project Schedule as well as key testing milestones
and tasks included in Section 9 - Testing Tasks. However, include a warning that any dates
included are only to aid comprehension and that current dates should always be ascertained
from the current version of the Project Plan.
Detail when test cases, test data and expected results will be prepared
Detail when each Test Stage will be executed
Include test milestones, scheduled dates for test phase sign-off or for testing reviews
Identify additional testing activities
Provide a reference to Plan/ Gantt Chart appendix
Milestones for delivery of the software to the Test Team
Availability of the environment
Test deliverables
Dates for when NHS CFH can Witness Test
Document how milestone slippage is to be handled.
15. Risks and Contingencies
This section relates to what could go wrong and what will be done to minimise the adverse
impact if a risk does occur.
Assumptions are not explicit within the IEEE 829 standard; however, it is necessary to detail
any assumptions in this section. Identify the high-risk assumptions of the Test Plan and
specify the impact and contingency plans for each.
Possible risks are:
Changing Authority requirements
Changing end date
Quality problems
New technology
Lack of skilled personnel
Lack of good quality test environments.
It is necessary to focus on any risks to the testing outlined within this Test Plan, not on the
project itself. Note also that these risks will be managed via the internal (Supplier) and
external (NHS CFH) project risk registers.