ISTQB Advanced Knowledge Summary of 10 Chapters - V1.0


ISTQB ADVANCED LEVEL

Test Manager and Test Analyst

Summary document

Lecturer: Tạ Thị Thinh


Phone: 0986775464
Address: 3rd floor, Cultural House of Group 28
Lane 36/7, Duy Tân Street, Dịch Vọng Hậu, Cầu Giấy

Content

CHAPTER 1 - Test Basics and Test Metrics

CHAPTER 2 - Test Processes

CHAPTER 3 - Risk Management

CHAPTER 4 - Test Techniques

CHAPTER 5 - Quality Characteristics for Technical Testing

CHAPTER 6 - Review

CHAPTER 7 - Defect Management

CHAPTER 8 - Improving the Testing Process

CHAPTER 9 - Test Tools and Automation

CHAPTER 10 - People Skills and Team Composition

ISTQB Advanced introduction

Advanced-level questions can be quite long. They are still divided into the K1, K2, K3, and K4 levels, but the proportion of K1 and K2 questions is lower than at Foundation level.

Questions and points are distributed across the chapters as follows (chapter rows show points; X = no points allocated for that module):

Module / Chapter | Test Manager | Test Analyst | Technical Test Analyst
Total questions | 65 | 65 | 65
Total points | 105 | 82 | 79
Chapter 1 - Test basics | 16 | 22 | 15
Chapter 2 - Testing processes | 7 | 8 | 12
Chapter 3 - Test management | 43 | 11 | 9
Chapter 4 - Test techniques | 3 | 21 | X
Chapter 5 - Test of software characteristics | 3 | 7 | 7
Chapter 6 - Reviews | 6 | 3 | 7
Chapter 7 - Incident management | 8 | 2 | 6
Chapter 8 - Standards and test process improvement | 6 | 1 | X
Chapter 9 - Test tools and automation / Test techniques | 8 | 2 | 23
Chapter 10 - People skills and team composition | 5 | 5 | X

Tips for the Exams

This book contains solid features to help you prepare, including the following:

 You must work through all the exercises in the book
 You must work through all the sample exam questions in the book
 You must read the ISTQB glossary term definitions where they occur in the chapters
 You must read every chapter of this book and the entire ISTQB Advanced syllabus

CHAPTER 1-Test basics and Test Metrics

1.1 Development models


A- Sequential Lifecycle Models

Issues for testing that the test manager must manage:

 Schedule compression during testing at the end of the project


 Development groups, likewise pressured to achieve dates, delivering unstable and often
untestable systems to the test team
 The test team is involved late

B- Iterative or Incremental Lifecycle Models

 Regression test all the functions and capabilities after the first iteration
 Failure to plan for bugs and how to handle them
 The lack of rigor in and respect for testing
 The designs of the system will change
 Schedules can be quite unpredictable

C- Agile project:

 Use a less formalized process and a much closer working relationship that allows changes to
occur more easily within the project
 Less comprehensive test documentation in favor of a more rapid method of
communication such as daily "stand-up" meetings
 Require the earliest involvement from the Test Analyst, continuing throughout the project lifecycle:
o Working with the developers as they do their initial architecture and design work
o Reviews may not be formalized but are continuous as the software evolves
 Good change management and configuration management are critical for testing

1.2 Test Levels

Foundation test levels:

 Unit or component
 Integration
 System
 Acceptance

With Integration testing:

 Component integration testing—integrating a set of components to form a system, testing the builds throughout that process.
 System integration testing—integrating a set of systems to form a system of systems, testing the system of systems as it emerges from the conglomeration of systems.

Additional test levels in Advanced level:

 Hardware-software integration testing


 Feature interaction testing
 Customer product integration testing

You should expect to find most, if not all, of the following for each level:

 Clearly defined test goals and scope
 Traceability to the test basis (if available)
 Entry and exit criteria, as appropriate both for the level and for the system lifecycle
 Test deliverables, including results reporting that will be expected
 Test techniques that will be applied, as appropriate for the level, for the team, and for the risks
inherent in the system
 Measurements and metrics
 Test tools, where applicable and as appropriate for the level
 And, if applicable, compliance with organizational or other standards

1.3 Specific Systems

System of systems: Multiple heterogeneous, distributed systems that are embedded in networks
at multiple levels and in multiple domains and are interconnected, addressing large-scale
interdisciplinary common problems and purposes.

Projects might include the following characteristics and risks:

 The integration of commercial off-the-shelf (COTS) software, along with some amount of
custom development, often taking place over a long period.
 Significant technical, lifecycle, and organizational complexity and heterogeneity.
 Different development lifecycles and other processes among disparate teams, especially—as is
frequently the case—when distributed work, insourcing, and outsourcing are involved.
 Serious potential reliability issues due to intersystem coupling, where one inherently weaker
system creates ripple-effect failures across the entire system of systems.
 System integration testing, including interoperability testing, is essential.

Safety-Critical Systems:

Safety-critical system: A system whose failure or malfunction may result in death or serious injury to people, or loss or severe damage to equipment, or environmental harm. Safety-critical systems are those systems upon which lives depend.

 Defects can cause death, and deaths can cause civil and criminal penalties, so proof of adequate testing can be and often is used to reduce liability.
 Focus on quality as a very important project priority.
 Various regulations and standards often apply to safety-critical systems.
 Traceability extends all the way from regulatory requirements to test results and helps demonstrate compliance.

1.4 Test metrics


Test progress monitoring and control covers five dimensions: product risks, tests, coverage, defects, and confidence.

Metrics related to product risks include:

 Percentage of risks completely covered by passing tests


 Percentage of risks for which some or all tests fail
 Percentage of risk not yet completely tested
 Percentage of risks covered, sorted by risk category
 Percentage of risks identified after the initial quality risk analysis
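
These product-risk percentages are simple ratios over the set of identified risk items. As a minimal, hedged sketch (the risk items, test outcomes, and data layout below are invented for illustration), the first three metrics could be computed like this:

```python
# Hypothetical risk register: each risk item maps to the outcomes of the
# tests that cover it. An empty list means the risk is not yet tested.
risks = {
    "R1-login":   ["pass", "pass"],   # completely covered by passing tests
    "R2-payment": ["pass", "fail"],   # covered, but some tests fail
    "R3-reports": [],                 # not yet tested
}

total = len(risks)
fully_passing = sum(1 for t in risks.values() if t and all(r == "pass" for r in t))
with_failures = sum(1 for t in risks.values() if "fail" in t)
untested = sum(1 for t in risks.values() if not t)

print(f"Risks completely covered by passing tests: {100 * fully_passing / total:.0f}%")
print(f"Risks for which some or all tests fail:    {100 * with_failures / total:.0f}%")
print(f"Risks not yet completely tested:           {100 * untested / total:.0f}%")
```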

Metrics related to defects include:

 Cumulative number reported (found) versus cumulative number resolved (fixed)

 Mean time between failure or failure arrival rate
 Breakdown of the number or percentage of defects categorized by the following:

o Particular test items or components


o Root causes
o Source of defect (e.g., requirement specification, new feature, regression, etc.)
o Test releases
o Phase introduced, detected, and removed
o Priority/severity
o Reports rejected or duplicated

 Trends in the lag time from defect reporting to resolution


 Number of defect fixes that introduced new defects (sometimes called daughter bugs)

Metrics related to tests include:

 Total number of tests planned, specified (implemented), run, passed, failed, blocked, and skipped
 Regression and confirmation test status, including trends and totals for regression test and
confirmation test failures
 Hours of testing planned per day versus actual hours achieved
 Availability of the test environment (percentage of planned test hours when the test environment
is usable by the test team)

Metrics related to test coverage include:

 Requirements and design elements coverage


 Risk coverage
 Environment/configuration coverage
 Code coverage

Project metrics - measurements about progress:

Test planning and monitoring:
 Risk, requirements, and other test basis element coverage
 Defect discovery
 Planned versus actual hours to develop testware and execute test cases

Analysis:
 Number of test conditions identified
 Number of defects found during test analysis

Design:
 Percentage of test conditions covered by test cases
 Number of defects found during test design

Implementation activities:
 Percentage of test environments configured
 Percentage of test data records loaded
 Percentage of test cases automated

Execution activities:
 Percentage of planned test cases executed, passed, and failed
 Percentage of test conditions covered by executed (and/or passed) test cases
 Planned versus actual defects reported/resolved
 Planned versus actual coverage achieved

Progress and completion activities:
 Number of test conditions, test cases or test specifications planned and those executed, broken down by whether they passed or failed
 Total defects, often broken down by severity, priority, current state, affected subsystem, ...
 Number of changes required, accepted, built, and tested
 Planned versus actual cost
 Planned versus actual duration
 Planned versus actual dates for testing milestones
 Planned versus actual dates for test-related project milestones (e.g., code freeze)
 Product (quality) risk status, often broken down by those mitigated versus unmitigated, major risk areas, new risks discovered after test analysis, etc.
 Percentage loss of test effort, cost, or time due to blocking events or planned changes
 Confirmation and regression test status

Closure activities:
 Percentage of test cases executed, passed, failed, blocked, and skipped during test execution
 Percentage of test cases checked into the re-usable test case repository
 Percentage of test cases automated, or planned versus actual test cases automated
 Percentage of test cases integrated into the regression tests
 Percentage of defect reports resolved/not resolved
 Percentage of test work products identified and archived

Product metrics - measurements of the quality of testing:

a. Measurements about coverage:
 Number of coverage elements covered by the executed test procedures (e.g., code structures covered by the tests)
b. Measurements about incidents:
 Number of reported incidents
 Number of incidents of different classes, for example, faults, misunderstandings, and
enhancement requests
 Number of defects reported to have been corrected
 Number of closed incident reports
c. Measurements about confidence:
 Subjective statements about confidence from different stakeholders

1.5 Business Value of Testing


Quantitative values:
 finding defects
 reducing risk by running tests
 delivering information on project, process, and product status

Qualitative values:
 improved reputation for quality
 smoother and more predictable releases
 increased confidence
 protection from legal liability
 reducing risk of loss of whole missions or even lives
Cost of quality:
• Costs of prevention: training, early testing, build process
• Costs of detection: writing test cases, reviewing documents, test execution
• Costs of internal failure: fixing bugs prior to delivery, re-testing
• Costs of external failure: customer support, fixing bugs after delivery, regression testing
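
To make the categories concrete, a small worked calculation may help; the dollar figures below are invented for illustration, not taken from the syllabus. The usual argument for early testing is that the external-failure bucket tends to dominate the total:

```python
# Illustrative cost-of-quality breakdown (all figures are invented).
cost_of_quality = {
    "prevention":       12_000,  # training, early testing, build process
    "detection":        30_000,  # writing test cases, reviews, test execution
    "internal_failure": 20_000,  # fixing and re-testing bugs before delivery
    "external_failure": 90_000,  # support, post-delivery fixes, regression tests
}

total = sum(cost_of_quality.values())
share = 100 * cost_of_quality["external_failure"] / total
print(f"Total cost of quality: ${total:,}")     # $152,000
print(f"External failure share: {share:.0f}%")  # 59%
```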

CHAPTER 2- Test processes

2.1 Introduction

2.2 Test planning, monitoring and control


Test policy- The document that describes the organization's philosophy, objectives, and key metrics
for testing—and possibly quality assurance as well.

Test strategy - describes the organization’s general, project-independent methods for testing

Test approach- The implementation of the test strategy for a specific project

Master test plan (or project test plan) - describes the implementation of the test strategy for a
particular project

Level test plan (or phase test plan) - describes the particular activities to be carried out within each
test level

2.2.1 Test plan

Test plan template:

a. The strategy:

The test strategy describes the organization’s general test methodology


This includes:
 The way in which testing is used to manage product and project risks
 The division of testing into levels
 The high-level activities associated with testing
 Identify test priority and allocate test effort

Types of test strategies

Preventive vs. Reactive:

When:
- Preventive: early involvement of the test team; test planning, analysis, and design activities are used to identify problems in work products
- Reactive: responds to the system as it is actually presented to the test team

Advantages:
- Preventive: resolves these problems before test execution; prevents the late discovery of a large number of defects, which is a leading cause of project failure
- Reactive: avoids costs associated with development of test work products that might not be useful if the actual system differs from what was initially planned

Project type:
- Preventive: projects that are predictable, carefully planned, and executed according to plan
- Reactive: projects that are chaotic, poorly planned, or in constant fundamental change

Types of strategies:
- Preventive: analytical, methodical, process- or standard-compliant, consultative
- Reactive: model-based, dynamic or heuristic, regression testing

b. The approach:
 Define test levels
 The goals and objectives of each level
 What test techniques will be used at each level of testing
 Coverage of each level

c. Exit criteria
 Used to determine if we can stop the testing or if we have to go on to reach the objective of the testing
 Derived from the strategy and the risk analysis (the higher the risk, the stricter the exit criteria)
 Guide the specification of the tests and the selection of test case design techniques
 The most appropriate completion criteria:
o Specified coverage has been achieved
o Specified number of failures found per test effort has been achieved
o No known serious faults
o The benefits of the system are bigger than known problems
o (The time has run out)

d. Tools and templates:


Define templates, traceability matrix or tools to manage the complex relationships that may exist
between the test basis (e.g., specific requirements or risks), test conditions and the tests that cover
them
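
In its simplest form, such a traceability matrix is just a mapping from test basis items to the tests that cover them. A minimal sketch, assuming a plain dict with hypothetical requirement and test case IDs:

```python
# Hypothetical traceability matrix: test basis items -> covering test cases.
traceability = {
    "REQ-01": ["TC-001", "TC-002"],
    "REQ-02": ["TC-003"],
    "REQ-03": [],  # gap: no test case covers this requirement yet
}

uncovered = [req for req, tcs in traceability.items() if not tcs]
coverage = 100 * (len(traceability) - len(uncovered)) / len(traceability)
print(f"Test basis coverage: {coverage:.0f}%  (uncovered: {uncovered})")
```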

e. Environment
The Test Manager works with the project architects to define the initial test environment specification, verify availability of the required resources, ensure that the people who will configure the environment are committed to doing so, and understand the cost/delivery timescales and the work required to complete and deliver the test environment.

f. Service level agreements (SLA)


All external dependencies and associated service level agreements (SLAs) should be identified
and, if required, initial contact should be made.
Examples of dependencies are resource requests to outside groups, dependencies on other projects
(if working within a program), external vendors or development partners, the deployment team,
and database administrators.

2.2.2 Test Estimation


Estimation, as a management activity, is the creation of an approximate target for costs and
completion dates associated with the activities involved in a particular operation or project.

Based on fundamental test process, you can create your work breakdown structure with following major
activities:

 Planning
 Control
 Analysis
 Design
 Implementation
 Execution
 Results reporting and exit evaluation
 Closure

Factors Affecting Estimation:

 Process factors
 Material factors
 People factors
 Delaying factors

Some factors arise from the process by which work is done:

 The extent to which testing activities


 Clearly defined hand-offs between testing and the rest of the organization
 Well-managed change control processes for project and test plans, product requirements, design,
implementation, and testing
 The chosen system development or maintenance lifecycle, including the maturity of testing and
project processes within that lifecycle
 Timely and reliable bug fixes
 Realistic and actionable project and testing schedules and budgets
 Timely arrival of high-quality test deliverables
 Proper execution of early test phases (unit, component, and integration)

Some factors, the material factors, arise from the nature of the project, the tools at hand, the resources
available, and so forth:

 Existing, assimilated, high-quality test and process automation and tools


 The quality of the test system, by which I mean the test environment, test process, test cases, test
tools, and so forth
 An adequate, dedicated, and secure test environment
 Separate, adequate development debugging environment
 The availability of a reliable test oracle (so we can know a bug when we see one)
 Available, high-quality (clear, concise, accurate, etc.) project documentation like requirements,
designs, plans, and so forth
 Reusable test systems and documentation from previous, similar projects
 The similarity of the project and the testing to be performed to previous endeavors

Some factors arise from the people on the team:

 Inspired and inspiring managers and technical leaders


 An enlightened management team whose members are committed to appropriate levels of quality
and sufficient testing
 Realistic expectations across all participants, including the individual contributors, the managers,
and the project stakeholders

 Proper skills, experience, and attitudes in the project team, especially in the managers and key
players
 Stability of the project team, especially the absence of turnover
 Established, positive project team relationships, again including the individual contributors, the
managers, and the project stakeholders
 Competent, responsive test environment support
 Project-wide appreciation of testing, release engineering, system administration, and other non-glamorous but essential roles (i.e., not an "individual heroics" culture)
 Use of skilled contractors and consultants to fill gaps
 Honesty, commitment, transparency, and open, shared agendas among the individual
contributors, the managers, and the project stakeholders

Finally, there are certain delaying factors that, when present, always increase schedule and effort:

 High complexity of the process, project, technology, organization, or test environment


 Lots of stakeholders in the testing, the quality of the system, or the project itself
 Many sub-teams, especially when those teams are geographically separated
 The need to ramp up, train, and orient a growing test or project team
 The need to assimilate or develop new tools, techniques, or technologies at the testing or project
levels
 The presence of custom hardware
 Any requirement for new test systems, especially automated testware, as part of the testing effort
 Any requirement to develop highly detailed, unambiguous test cases, especially to an unfamiliar standard of documentation
 Tricky timing of component arrival, especially for integration testing and test development
 Fragile test data, such as data that is time-sensitive
 The requirements for a high level of quality of the system, as would be the case for mission-
critical and safety-critical systems
 A large, complex system under test
 Extensive or total newness, which would mean that no historical data from previous test projects
or applicable benchmark data could be used for estimation or for testing

Estimating Techniques:

 Intuition, guesses, and experience


 Work breakdown structures, such as Gantt charts and tools like Microsoft Project
 Team estimation sessions, either unstructured or structured with some approach like the Delphic
oracle, three point, or wideband
 Test Point Analysis, which is usable when function points are available
 Company standards and norms, which can influence and in some cases determine test estimates
 The typical or believed-typical percentages of the overall project effort or staffing level, such as
the infamous tester-developer ratio
 Organizational history and metrics, including proven tester-developer ratios within the company
 Industry averages, metrics, and other predictive models
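
As a worked example of the three-point technique mentioned above, the common PERT-style calculation combines optimistic, most likely, and pessimistic estimates into a weighted mean; the effort figures below are invented:

```python
# Three-point (PERT) estimate with invented effort figures in person-days.
optimistic, most_likely, pessimistic = 10, 16, 30

estimate = (optimistic + 4 * most_likely + pessimistic) / 6  # beta-weighted mean
std_dev = (pessimistic - optimistic) / 6                     # common spread approximation

print(f"Estimated test effort: {estimate:.1f} person-days (+/- {std_dev:.1f})")
```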

2.2.3 Monitoring and control

Monitoring: compare actual progress against the plan; report the status of executing the plan, including
any deviations that arise.

Monitoring includes measuring the completeness of the test process and creating reports. Some measurements:

 Risk and test coverage

 Defect discovery rates and other defect information
 Planned versus actual hours to develop testware and execute test cases

Control: develop and apply a set of corrective actions to get a test project back on track when it deviates from the plan.

Test control actions:

 Respond to the changes in the mission, strategies, and objectives of testing, or to changes in the
project.
 Re-allocate the remaining effort and reprioritize the tests.
 The creation or updating of documents

2.3 Test analysis

Test analysis is the activity that defines "what" is to be tested in the form of test conditions.
Activities:

 Analyze the test basis


 Identify the test conditions

Test case: A set of input values, execution preconditions, expected results, and execution post
conditions developed for a particular objective or test condition

Test condition: An item or event of a component or system that could be verified by one or more
test cases (e.g., a function, transaction, feature, quality attribute, or structural element).

Test execution: The process of running a test on the component or system under test, producing
actual result(s).

Test procedure: A document specifying a sequence of actions for the execution of a test. Also
known as test script or manual test script.
Test conditions should:
 Be identified by analysis of the test basis, test objectives, and product risks
 Be viewed as the detailed measures and targets for success
 Be identified and traced back to each product risk
 Be traceable back to the test basis and defined strategic objectives
 Be traceable forward to test designs and other test work products

The level of detail of test conditions is determined by:

 Level of testing
 Level of detail and quality of the test basis
 System/software complexity
 Project and product risk
 The relationship between the test basis, what is to be tested and how it is to be tested
 Software development lifecycle in use
 Test management tool being utilized
 Skills and knowledge of the test analysts
 The level of maturity of the test process and the organization
 Availability of other project stakeholders for consultation

Some advantages of specifying test conditions at a detailed level include:

 Providing better and more detailed monitoring and control for a Test Manager
 Contributes to defect prevention by occurring early in a project for higher levels of testing
 Relates testing work products to stakeholders in terms that they can understand
 Enables test design, implementation and execution
 Provides the basis for clearer horizontal traceability within a test level

Some disadvantages of specifying test conditions at a detailed level include:

 Potentially time-consuming
 Maintainability can become difficult in a changing environment
 Level of formality needs to be defined and implemented across the team

Test conditions are specified in more detail in cases such as:

 Lightweight test design documentation methods
 Little or no formal requirements or other development work products are available as the test
basis
 The project is large-scale, complex or high risk and requires a level of monitoring and control that
cannot be delivered by simply relating test cases to development work products
Test conditions are specified in less detail in cases such as:
 Component level testing
 Less complex projects where simple hierarchical relationships exist between what is to be tested
and how it is to be tested
 Acceptance testing where use cases can be utilized to help define tests

2.4 Test design

Test design: the activity that defines "how" something is to be tested.

Test design defines the following:

 Preconditions
 Test environment requirements
 Test inputs and other test data requirements
 Expected results
 Post conditions
Activities:

 Determine in which test areas low-level (concrete) or high-level (logical) test cases are most
appropriate
 Determine the test case design technique(s) that provide the necessary test coverage
 Create test cases that exercise the identified test conditions
 Apply the prioritization criteria identified during risk analysis and test planning

A good test design:

 Allows the test conditions to be used as a guide for unscripted testing


 The pass/fail criteria should be clearly identified
 Can be understandable by other testers, not just the author
 Can be understandable by other stakeholders such as developers- who will review the tests, and
auditors- who may have to approve the tests
 Cover all the interactions of the software with the actors (e.g., end users, other systems), not just
the interactions that occur through the user-visible interface
 Mitigate risks: inter-process communications, batch execution, interrupts, and other ways of interacting with the software can contain defects

 Interfaces between the various test objects

2.4.1 Concrete and Logical Test Cases

Concrete test cases provide all the specific information and procedures needed for the tester to execute the test case and verify the results. Logical test cases provide guidelines for what should be tested, allowing the tester to vary the actual data and the procedure that is followed when executing the test.

Concrete test cases are useful when:
 Requirements are well-defined
 The testing staff is less experienced
 External verification of the tests, such as audits, is required

Logical test cases are best used when:
 Requirements are not well-defined
 The Test Analyst is experienced with both testing and the product
 Formal documentation is not required (e.g., no audits will be conducted)

Concrete test cases:
 Provide excellent reproducibility
 Require a significant amount of maintenance effort
 Limit tester ingenuity during execution

Logical test cases:
 Give better coverage than concrete test cases
 May make defects hard to reproduce

2.4.2 Creation of Test Cases


Test case design includes:

 Objective
 Preconditions
 Test data requirements
 Expected results
 Post-conditions

Template:

The IEEE 829 test design specification

The IEEE 829 test case specification

The level of detail of a test case is determined by:

 The cost to develop
 The level of repeatability required during execution
 Less detail gives more flexibility when executing and investigating potentially interesting areas
 Less detail also means less reproducibility

Creation of test work products can be affected by:

 Project risks (what must/must not be documented)
 The "value added" which the documentation brings to the project
 Standards to be followed and/or regulations to be met
 Lifecycle model used (e.g., an Agile approach aims for "just enough" documentation)
 The requirement for traceability from the test basis through test analysis and design

2.4.3. Test Oracles


A test oracle is a source we use to determine the expected results of a test.

A test oracle can be:

 The existing system
 A user manual
 An individual's specialized knowledge

Never use the code itself as an oracle, even for structural testing.

Using all available oracles—written and mental, provided and derived—the tester can define expected
results before and during test execution
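
As a minimal sketch of the "existing system" oracle, the legacy implementation below supplies the expected result for the system under test; both functions and the tax rate are hypothetical stand-ins:

```python
def legacy_tax(amount: float) -> float:
    """Existing system acting as the test oracle (hypothetical)."""
    return round(amount * 0.21, 2)

def new_tax(amount: float) -> float:
    """System under test (hypothetical reimplementation)."""
    return round(amount * 0.21, 2)

# Expected results come from the oracle, never from the new code itself.
for amount in (0.0, 9.99, 100.0, 12345.67):
    expected = legacy_tax(amount)
    actual = new_tax(amount)
    assert actual == expected, f"Mismatch for {amount}: {actual} != {expected}"
print("All oracle checks passed")
```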

2.5 Test Implementation


Test implementation is the fulfillment of the test design

 Creating automated tests


 Organizing tests (both manual and automated) into execution order
 Finalizing test data and test environments
 Forming a test execution schedule, including resource allocation
 Checking against explicit and implicit entry criteria for the test level in question
 Ensuring that the exit criteria for the previous steps in the process have been met; failing to do so can result in delayed schedules, insufficient quality, and unexpected extra effort

The execution order:

 Organize the test cases into test suites (i.e., groups of test cases)
 Related test cases are executed together
 Risk priority order may dictate the execution order for the test cases
 The availability of the right people, equipment, data and the functionality to be tested

The level of detail of the documents is:

 Influenced by the detail of the test cases and test conditions
 Higher when regulatory rules apply; tests should provide evidence of compliance to applicable standards such as DO-178B/ED-12B

Test data for both inputs and the test environment should be available at this point.

2.6 Test Execution


Pre-conditions for test execution include: testware, test environment, configuration management, and defect management. Test execution begins when the following are in place:

 Readiness of the test environment
 Configuration and release management for the system under test
 Readiness of a defect management system and a test management system

Test execution tasks:

 Execute the tests per the test procedures


 Compare the actual results with the expected results
 Observe anomalies, and identify and report incidents
 Log the results

Test log: A chronological record of relevant details about the execution of tests. Test
logging: The process of recording information about tests executed into a test log.

The IEEE 829 standard for test log template:

2.7 Evaluating Exit Criteria and Reporting


Documentation and reporting for test progress monitoring and control
Activities:

 Collect the information necessary about test results


 Measure progress toward completion, detect deviation from the plan

To measure completeness of the testing with respect to exit criteria, we can measure properties of the test
execution process such as the following:

 Number of test conditions, cases, or test procedures planned, executed, passed, and failed
 Total defects, classified by severity, priority, status, or some other factor
 Change requests proposed, accepted, and tested
 Planned versus actual costs, schedule, effort
 Quality risks, both mitigated and residual

 Lost test time due to blocking events
 Confirmation and regression test results

IEEE 829 summary report template

2.8 Test Closure Activities


Objective: once testing is determined to be complete, collect the key outputs and pass them to the relevant person, or archive them.
Test closure activities fall into four main groups:

Test completion:
• All planned tests should be either run or skipped
• All known defects should be either fixed, deferred, or accepted
• Regression tests documented?

Test artifacts handover:
• Known defects communicated to the operation and support team?
• Tests and test environments handed over to the maintenance team?

Lessons learned:
• Quality risk analysis?
• Metrics analysis?
• Root cause analysis and actions defined?
• Process improvement?
• Any unanticipated deviations?

Archiving:
• All test work products archived in configuration management?

2.9 Test document templates


The Test Manager should ensure consistency and quality of these work products through the
following activities

 Establishing and monitoring metrics that monitor the quality of these work products
 Working with the Test Analysts and Technical Test Analysts to select and customize appropriate
templates for these work products
 Working with the Test Analysts and Technical Test Analysts to establish standards for these work
products, such as the degree of detail necessary in tests, logs, and reports
 Reviewing testing work products using the appropriate techniques and by the appropriate
participants and stakeholders

IEEE-829 templates: the test design template, test case template, test procedure template, and test transmittal report template.

CHAPTER 3. Risk management


Risk is the possibility of a negative or undesirable outcome or event.

3.1 Risk management

3.2 Risk-Based Testing

3.2.1 Benefits of Risk-based testing


A risk-based testing strategy tests the areas and bug types that are worrisome and ignores the ones that aren't.

3.2.2 Risk categories

Project risk (planning risk):
- A risk to the project's capability to deliver its products: scope, cost, time, quality
- Related to management and control of the (test) project, e.g., lack of staffing, strict deadlines, changing requirements
- The primary effect of a potential problem is on the overall success of the project

Product risk (quality risk):
- A risk to the quality of the product
- Directly related to the test object
- Example: a possible reliability defect that could cause a system to crash during normal operation

Project Risks

Typical test-related project risks include:

 Test environment and tool readiness


 Test staff availability and qualification
 Low quality of test deliverables
 Too much change in scope or product definition
 Sloppy, ad hoc testing effort

However, some project risks can and should be mitigated successfully by the Test Manager and
Test Analyst:
• Preparing the testware earlier,
• Pre-testing of test environments,
• Pre-testing of early versions of the product,
• Applying tougher entry criteria to testing,
• Enforcing requirements for testability,
• Participating in reviews of early project work products,
• Participating in change management,

3.2.3 Extents of testing


Extents of testing:

1. Extensive: run a large number of tests that are both broad and deep, exercising combinations and
variations of interesting conditions.

2. Broad : run a medium number of tests that exercise many different interesting conditions

3. Cursory: run a small number of tests that sample the most interesting conditions

4. Opportunity: leverage other tests or activities to run a test or two of an interesting condition, but
invest very little time and effort

5. Report bugs only: not test at all, but, if bugs related to this risk arise during other tests, report those
bugs

6. None: neither test for these risks nor report related bugs

3.4 How to Do Risk-Based Testing

Risk management includes three primary activities:

a. Risk identification: figuring out what the different project and quality risks are for the project

Risk identification using techniques like these:

 Expert interviews
 Independent assessments
 Use of risk templates
 Project retrospectives
 Risk workshops and brainstorming
 Checklists
 Calling on past experience

b. Risk analysis: assessing the level of risk—typically based on likelihood and impact—for each identified risk item

Technical factors to consider:

 Complexity of technology and teams


 Personnel and training issues
 Intrateam and interteam conflict/communication
 Supplier and vendor contractual problems
 Geographical distribution of the development organization, as with out-sourcing
 Legacy or established designs and technologies versus new technologies and designs
 The quality—or lack of quality—in the tools and technology used
 Bad managerial or technical leadership
 Time, resource, and management pressure, especially when financial penalties apply
 Lack of earlier testing and quality assurance tasks in the lifecycle
 High rates of requirements, design, and code changes in the project
 High defect rates
 Complex interfacing and integration issues
 Lack of sufficiently documented requirements

Business factors to consider:

 The frequency of use and importance of the affected feature


 Potential damage to image
 Loss of customers and business
 Potential financial, ecological, or social losses or liability
 Civil or criminal legal sanctions
 Loss of licenses, permits, and the like
 The lack of reasonable workarounds
 The visibility of failure and the associated negative publicity

Lightweight techniques allow the use of weighting of the likelihood and impact factors to emphasize
business or technical risks:

 Use only two factors, likelihood and impact; and,


 Use simple, qualitative judgments and scales.
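
A minimal sketch of this lightweight scoring, assuming 1-5 qualitative scales and a risk score of likelihood times impact (the risk items and values are invented judgments):

```python
# (name, likelihood 1-5, impact 1-5) -- all values are illustrative.
risk_items = [
    ("Payment calculation wrong",  4, 5),
    ("Report layout off by a row", 3, 2),
    ("Login rejects valid users",  2, 5),
]

# Higher score = test earlier and more extensively.
for name, likelihood, impact in sorted(risk_items, key=lambda r: r[1] * r[2], reverse=True):
    print(f"score {likelihood * impact:>2}  {name}")
```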

Heavy-weight end of the scale has a number of options available:

 Hazard analysis, which extends the analytical process upstream, attempting to identify the
hazards that underlie each risk.
 Cost of exposure, where the risk assessment process involves determining, for each quality risk
item, three factors:
o 1) the likelihood (expressed as a percentage) of a failure related to the risk item;
o 2) the cost of a loss (expressed as a financial quantity) associated with a typical
failure related to the risk item, should it occur in production; and,

o 3) The cost of testing for such failures.
 Failure Mode and Effect Analysis (FMEA) and its variants, where quality risks, their
potential causes, and their likely effects are identified and then severity, priority and
detection ratings are assigned.
 Quality Function Deployment (QFD), which is a quality risk management technique
with testing implications, specifically, being concerned with quality risks that arise from an
incorrect or insufficient understanding of the customers’ or users’ requirements.
 Fault Tree Analysis (FTA), where various actual observed failures (from testing or from
production), or potential failures (quality risks), are subjected to a root cause analysis starting
with defects that could cause the failure, then with errors or defects that could cause those defects,
continuing on until the various root causes are identified.
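
For the cost-of-exposure option in the list above, the decision rule reduces to simple arithmetic: testing pays off when the expected loss (likelihood times cost of a typical failure) exceeds the cost of testing. A worked example with invented figures:

```python
likelihood = 0.10          # 10% chance of a production failure for this risk item
cost_of_failure = 50_000   # typical loss if the failure occurs in production
cost_of_testing = 3_000    # effort to test for such failures

expected_loss = likelihood * cost_of_failure   # 5,000
if expected_loss > cost_of_testing:
    print(f"Test it: expected loss {expected_loss:,.0f} > testing cost {cost_of_testing:,}")
else:
    print("Testing this item costs more than the risk exposure it removes")
```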

c. Risk mitigation (sometimes called "risk control" because it consists of mitigation, contingency, transference, and acceptance actions for various risks)

3.5 Distributed, Outsourced, and Insourced Testing

Distributed testing - Test effort occurs at multiple locations

Outsourced testing - Test effort is carried out by people who are not co-located with the project team and are not fellow employees

Insourced testing -Test effort is carried out by people who are co-located with the project team but who
are not fellow employees

Main issues:

• How to divide the test work across the multiple locations explicitly


• Gaps and areas of overlap between teams
• Inefficiency, potential rework during test execution
• Confusion about the meaning of the test results if the results for similar tests disagree

Test Manager Solutions:

• Clear channels of communication and well-defined expectations for missions, tasks, and
deliverables
• Alignment of methodologies
• For distributed testing, the division of the test work across the multiple locations must be explicit
and intelligently decided
• Develop and maintain trust that all of the test team(s) will carry out their roles properly in spite of organizational, cultural, language, and geographical boundaries

Test Analyst solutions:

• Pay special attention to effective communication and information transfer


• Defects that are accurately recorded can be routed to co-workers for follow-up as needed

3.6 Test management issue

These issues are in four areas:

 Managing reactive test strategies and experience-based test techniques


 Managing system of systems testing
 Managing safety-critical systems testing

 Managing nonfunctional testing

3.6.1 System of Systems Issues

A key issue for systems of systems is the complexity of distributed testing: multiple different levels of testing will occur at different locations, at different levels of formality, performed by different groups.

Suggestions:

 Test management will involve a master test plan that spans these various levels. This master test
plan is much easier to write and to manage if the entire project team follows a single, formal
lifecycle model.
 A formal quality assurance process and plan that includes verification and validation throughout
the lifecycle, including static and dynamic testing and formalization of all test levels, is very
useful
 Formal configuration management, change management, and release management plans and
processes must define agreed-upon touch points and hand-offs between testing and the rest of the
project in these areas.

3.6.2 Safety-Critical System Issues

For many safety critical systems, there are industry-specific standards that apply to testing and quality
assurance:

 For automotive systems, there is the voluntary Motor Industry Software Reliability Association
(MISRA) standard.
 For medical systems, government entities like the United States Food and Drug Administration
apply strict mandatory standards.
 Military systems are generally subject to various standards

A safety-critical system is safety critical because it might directly or indirectly kill or injure someone if it
fails. This risk creates legal liability as well as moral responsibility on the vendor or vendors creating the
system. While not all risks of failure can be reduced to zero likelihood, we can use formal, rigorous
techniques, including the following:

 Bidirectional requirements, design, and risks traceability through to code and to tests
 Minimum test coverage levels
 Quality-focused acceptance criteria, including ones focused on non-functional quality
characteristics
 Standardized, detailed, and mandatory test documentation, including of results

3.6.3 Nonfunctional Testing Issues

 Nonfunctional requirements are often poorly specified or missing completely
 A major problem arises from untestable nonfunctional requirements

Improve the testability of nonfunctional requirements by the following:

 Specify requirements with testability in mind.
 Put requirements and related communication in writing.
 Avoid ambiguous requirements.
 Mitigate non-functional risks during early levels of testing; reviews are helpful.
 Use a standard format for requirements.
 Specify requirements quantitatively where possible and appropriate.

 Non-functional tests should be prioritized and sequenced according to risk.

In iterative lifecycles, Test design and implementation activities that take longer than the timescales
of a single iteration should be organized as separate work activities outside of the iterations.

3.6.4 Managing reactive test strategies and experience-based test techniques


There are three chief complaints about reactive test strategies such as bug hunting, software attacks, and exploratory testing:

 They are difficult to manage, they don't document what was tested, and they don't tend to scale to large teams of testers very easily
 While some proponents of exploratory testing have reacted defensively to these criticisms, others have attempted to respond in a productive fashion by devising techniques for managing these strategies

Solution:

 Using session-based test management: you break the test execution effort into test sessions. A test
session is the basic unit of testing work. It is generally limited in time, typically between 30 and
120 minutes in length. The period assigned to the test session is called the time box.
 Test sessions should focus on a specific test object. They should focus on a specific test objective,
the one documented in the test charter
 Out of this session comes a report, which captures the details of what the tester tested

Some proponents of session-based test management use an acronym, PROOF:

 Past: What happened during the session? What did you do, and what did you see?
 Results: What was achieved during the session? What bugs did you find? What worked?
 Outlook: What still needs to be done? What subsequent test sessions should we try?
 Obstacles: What got in the way of good testing? What couldn't you do that you wanted to?
 Feelings: How do you feel about this test session?
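
One lightweight way to capture such session reports is a small record keyed to the PROOF questions; the structure and field contents below are a hypothetical illustration, not a prescribed format:

```python
from dataclasses import dataclass

@dataclass
class SessionReport:
    charter: str     # the specific test objective of the session
    time_box: int    # minutes, typically 30-120
    past: str        # what happened: what you did and saw
    results: str     # what was achieved, bugs found
    outlook: str     # what still needs to be done, follow-up sessions
    obstacles: str   # what got in the way of good testing
    feelings: str    # how the tester feels about the session

report = SessionReport(
    charter="Explore discount calculation with mixed currencies",
    time_box=90,
    past="Exercised checkout with EUR and USD carts",
    results="Found a rounding bug in currency conversion",
    outlook="Try refunds and partial payments next",
    obstacles="Environment lacked realistic exchange-rate data",
    feelings="Confident the main purchase flow is solid",
)
```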

CHAPTER 4- Test Techniques
The Advanced syllabus divides dynamic tests into five main types:

 Black-box (also called specification-based or behavioral): We test based on the way the system is
supposed to work. Two main subtypes: functional and nonfunctional, following the ISO 9126
standard.
 White-box (also called structural): We test based on the way the system is built.
 Experience-based: We test based on our skills and intuition, along with our experience with
similar applications or technologies.
 Dynamic analysis: We analyze an application while it is running, usually via some kind of
instrumentation in the code.
 Defect-based: We use our understanding of the type of defect targeted by a test as the basis for
test design, with tests derived systematically from what is known about the defect.

Test design techniques

Specification-based: Equivalence Partitioning, Boundary Value Analysis, Decision Tables, Cause-Effect Graphing, Classification Tree, Pairwise Testing, State Transition Testing, Use Case Testing

Structure-based: Statement testing, Branch testing, Condition testing, Multiple condition testing, LCSAJ

Experience-based: Error Guessing, Exploratory Testing, Defect Taxonomies, Checklist Testing

4.3 Structure based techniques


Structure-based technique (or white-box test design technique): Procedure to derive and/or select test
cases based on an analysis of the internal structure of a component or system.

Statement testing: A white-box test design technique in which test cases are designed to execute
statements.

Branch testing: A white-box test design technique in which test cases are designed to execute branches.

Condition testing: A white-box test design technique in which test cases are designed to execute
condition outcomes.

Condition outcome: The evaluation of a condition to True or False.

Condition determination testing: A white-box test design technique in which test cases are designed to execute single condition outcomes that independently affect a decision outcome.

Control flow: A sequence of events (paths) in the execution through a component or system.

Control flow analysis: A form of static analysis based on a representation of sequences of events (paths)
in the execution through a component or system.

Data flow analysis: A form of static analysis based on the definition and usage of variables.

Decision testing: A white-box test design technique in which test cases are designed to execute decision
outcomes.

LCSAJ: A Linear Code Sequence and Jump, consisting of the following three items (conventionally
identified by line numbers in a source code listing): the start of the linear sequence of executable
statements, the end of the linear sequence, and the target line to which control flow is transferred at the
end of the linear sequence.

Path testing: A white-box test design technique in which test cases are designed to execute paths.

Path: A sequence of events, e.g., executable statements, of a component or system from an entry point to
an exit point.
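
To see why branch testing is stronger than statement testing, consider this hypothetical function: the single test x = -3 executes every statement (100% statement coverage) yet never takes the False outcome of the decision, so branch coverage stays at 50% until a second test is added:

```python
def clamp_negative_to_zero(x: int) -> int:
    result = x
    if x < 0:       # decision with True and False outcomes
        result = 0
    return result

assert clamp_negative_to_zero(-3) == 0  # executes all statements: 100% statement coverage
assert clamp_negative_to_zero(5) == 5   # adds the False branch: 100% branch coverage
```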

4.4 Defect- and Experience-based Techniques


Defect-based technique (or defect-based test design technique): A procedure to derive and/or select test
cases targeted at one or more defect categories, with tests being developed from what is known about the
specific defect category.

Defect-based testing derives tests from defect taxonomies (i.e., categorized lists) that may be
completely independent from the software being tested. The taxonomies can include lists of
defect types, root causes, failure symptoms and other defect-related data

Defect taxonomy: A system of (hierarchical) categories designed to be a useful aid for reproducibly
classifying defects.

The Test Analyst uses the taxonomy data to determine the goal of the testing, which is to find a specific type of defect, drawing on a defect taxonomy of known and common defects of that type.

Experienced-based technique (or experienced-based test design technique): Procedure to derive and/or
select test cases based on the tester's experience, knowledge, and intuition.

4.5 Test techniques summary


Elementary techniques - focused on analyzing the input/output parameters; can be combined with other techniques:
 Equivalence Partitioning: use when input and output parameters have a large number of possible values
 Boundary Value Analysis: use when parameter values have explicit (e.g., clearly defined in documentation) or implicit (e.g., known technical limitations) boundaries and ranges

Combinatorial strategies - combine possible values of multiple input/output parameters and reduce the number of possible values:
 Decision Table Testing: use when there is a set of parameter combinations whose outputs are described by business or other rules
 Pairwise Testing: use when the number of input combinations is extremely large and should be reduced to an acceptable set of cases

Advanced techniques - analyze the system from the perspective of the business logic, hierarchical relations, scenarios, etc.:
 Classification Tree Method: use when you have hierarchically structured data, or data can be represented in the form of a hierarchical tree
 State Transition Testing: use when there are obvious states in the functionality with transitions regulated by rules (e.g., flows)
 Cause-Effect Graphing: use when causes (inputs) and effects (outputs) are connected by a large number of complex logical dependencies
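
As a minimal sketch of boundary value analysis from the list above, assume a hypothetical rule that a valid age lies in the inclusive range 18..65; two-value BVA tests each boundary and its nearest invalid neighbor:

```python
def is_valid_age(age: int) -> bool:
    """Hypothetical rule under test: valid ages are 18..65 inclusive."""
    return 18 <= age <= 65

# Two-value boundary value analysis: each boundary plus its nearest invalid neighbor.
for age, expected in [(17, False), (18, True), (65, True), (66, False)]:
    assert is_valid_age(age) == expected, f"BVA check failed at age {age}"
print("Boundary checks passed")
```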

CHAPTER 5- Quality Characteristics for Technical Testing

5.1 Quality Attributes for Domain Testing


Technical test analysts should be able to identify opportunities to use test techniques that are appropriate
for testing:

■ Accuracy

■ Suitability

■ Interoperability

■ Usability

■ Security

5.2.1 Accuracy
Accuracy is defined as the capability of a system to provide the provably correct results to the needed
level of precision

Decision tables will be required when the system calculates values based on multiple inputs that interact.

During unit testing:

- Use the correct data types to ensure that the correct data is stored in memory.

During integration testing: A technical test analyst must be ready to test the entire data path:

- Ensure that precision is not lost as data travels between interfaces.


- Truncated and rounded values are a possible source of problems during this transfer.
- Incorrect buffer size could easily be an issue.
- When dealing with APIs to COM and DCOM objects, the operating system, interoperating
systems, remote procedure calls, middleware, etc., data may be manipulated, massaged, and
squeezed, causing a lack of precision.

Test types: Equivalence partitioning, Boundary value analysis, Decision tables, combination techniques

5.2.2 Suitability
The capability of the software product to provide an appropriate set of functions for specified tasks and
user objectives

- Static testing at the low-level design and code phases would be predominately done by technical
testers with domain testers being more involved in the requirements and high-level design phases.
- Once code becomes available, technical test analysts would likely be more involved at the lower
levels (unit and integration testing) and test analysts would be more involved at the system test
level.
- Ongoing feedback from users would be an important input to make sure suitability errors are
rectified in later releases.

Test types: use cases, scenarios, static testing

5.2.3 Interoperability
The capability of the software product to interact with one or more specified components or systems.

TA should test to:

- Ensuring the smooth functioning of data transfers between systems


- Checking that the interfaces between systems integrate smoothly
- All parts of the integration between systems are fair game, including hardware, middleware,
firmware, operating systems, network configurations, and so forth.
- At system integration, the software between all of the involved systems should be able to work
together.
- Static testing of the plans for integration will be an important part of preparing for the integration
process

Some of the testing that we would expect to do might include the following items:

- Use of industry-wide communications protocols and standards, including XML, JDBC, RPCs ,
DCOM, etc.
- Efficient data transfer between systems, with special emphasis on systems with different data
representations and precisions
- The ability to self-configure, that is, for the system to detect and adapt itself to the needs of other
systems
- Relative performance when working with other systems

Test types: use cases, pairwise testing, classification trees, and other techniques

5.2.4 Usability
The capability of the software to be understood, learned, used, and attractive to the user when used under
specified conditions.

TA and TTA should have a background in psychology rather than simply being technologists or domain
experts. Knowledge of sociology and ergonomics is also helpful. An understanding of national standards
related to accessibility can be important for applications subject to such standards.

Three main techniques for usability testing:

- Inspection (evaluation or review): consider the specification and designs from a usability point of view; heuristic evaluation is a static usability test technique for determining the compliance of a user interface with recognized usability principles
- Validation of the actual implementation: look at the inputs, outputs, and results; usability test scenarios cover various usability attributes, such as speed of learning or operability; interviews with the users performing the tests; syntax tests
- Surveys and questionnaires: standard and publicly available surveys such as the Software Usability Measurement Inventory (SUMI) and the Website Analysis and Measurement Inventory (WAMMI)

CHAPTER 6. Review

6.1 Introduction
Types of reviews in Foundation:
 Informal review
 Walkthrough
 Technical review
 Inspection
In addition to these, Test Managers may also be involved in:
 Management reviews
 Audits

A formal review should include the following essential roles and responsibilities:

 The manager: The manager allocates resources, schedules reviews, and the like. However, they
might not be allowed to attend based on the review type.
 The moderator or leader: This is the chair of the review meeting.
 The author: The person who wrote the item under review. A review meeting, done properly,
should not be a sad or humiliating experience for the author.
 The reviewers: The people who review the item under review, possibly finding defects in it.
Reviewers can play specialized roles, based on their expertise or based on some type of defect
they should target.
 The scribe or secretary or recorder: The person who writes down the findings.

6.2 Management Reviews and Audits

6.2.1 Management Reviews


Objective:

 Monitor progress, assess status, and make decisions about future actions
 Decisions about the future of the project, such as adapting the level of resources,
implementing corrective actions or changing the scope of the project

6.2.2 Audits
Objectives:
 Demonstrate conformance to a defined set of criteria, most likely an applicable standard,
regulatory constraint, or a contractual obligation.
 Provide independent evaluation of compliance to processes, regulations, standards, etc.

6.3 Managing Reviews


Determining the optimal time to perform reviews depends on:

 The availability of the items to review in a sufficiently final format


 The availability of the right personnel for the review
 The time when the final version of the item should be available
 The time required for the review process

The test leader should define the following review items:

 Metrics
 Objectives
 Number of reviews,
 Type of reviews
 Participants
 Roles and responsibilities

 Risk

Review success factors:


Plan for review
 The availability of reviewers with sufficient technical knowledge
 Commitment of all teams
 Allocating sufficient time
 The reviews at appropriate points in the project schedule
 Any required technical or process training for the reviewers
 Backup reviewers
During review:

 Adequate measurements
 Checklists
 Defect severity and priority evaluation

After each review:

 Collect the review metrics


 Determining the return on investment (ROI)
 Provide feedback information to stakeholders
 Provide feedback to review participants

Causes of defects escaping reviews:


 Problems with the review process (e.g., poor entry/exit criteria)
 Improper composition of the review team
 Inadequate review tools (checklists, etc.)
 Insufficient reviewer training and experience
 Too little preparation and review meeting time

6.4 Managing Formal Reviews

Different phases of a formal review:


 Planning
 Kick-off meeting
 Individual preparation
 Review meeting
 Rework
 Follow-up

Formal reviews have a number of characteristics such as:

30
 Defined entry and exit criteria
 Checklists to be used by the reviewers
 Deliverables such as reports, evaluation sheets or other review summary sheets
 Metrics for reporting on the review effectiveness, efficiency, and progress

If the prerequisite conditions for the formal review are not fulfilled, the review leader may propose one
of the following decisions:

 Redefinition of the review with revised objectives


 Corrective actions necessary for the review to proceed
 Postponement of the review

CHAPTER 7. Defect Management

7.1 Introduction
BS 7925-1 defines: An incident is every (significant) unplanned event observed during testing, and
requiring further investigation.

IEEE 1044 uses the term "anomaly" instead of "incident": "Any condition that deviates from the
expected based on requirements specifications, design documents, user documents, standards, etc., or
from someone's perception or experience."

Anomaly: Any condition that deviates from expectation based on requirements specifications,
design documents, user documents, standards, etc.

7.2 The Defect Lifecycle and the Software Development Lifecycle


At each stage, the incident report must contain the following types of information according to IEEE
1044:

 Supporting data
 Classification
 Identified impact

 Recognition
- Classification: project activity, project phase, symptom
- Identified impact: severity, project schedule, project cost
 Investigation
- Classification: actual cause, source to be changed, bug type
- Identified impact: severity, project schedule, project cost
 Action
- Classification: resolution
- Possible actions may be: 1. Nothing (no failure after all, or the failure is too
insignificant); 2. Nothing right now (changes are postponed); 3. Changes must be
implemented immediately where necessary
 Disposition
- Classification: disposition (why was the incident closed?)
- Closed status: Closed / Deferred (pending) / Merged / Referred

The earlier each defect is detected and removed:

 The lower the overall cost of quality for the system
 Cost of quality for a given level of defects is minimized when each defect is removed in the same
phase in which it was introduced
If the phase in which a defect is introduced and the phase in which it is removed are the same, then
perfect phase containment has been achieved:
 Phase containment means the defect was introduced and found in the same phase and
didn't "escape" to a later phase.
 Phase containment is an effective way to reduce the costs of defects (a simple containment
metric is sketched below).
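
A minimal sketch of how phase containment can be measured, assuming defect records that carry the phase of introduction and the phase of detection (hypothetical field names):

```python
# Minimal sketch of a phase containment metric; the record fields
# "phase_introduced" and "phase_detected" are hypothetical, not standardized.
from collections import defaultdict

def phase_containment(defects):
    """Per phase: the fraction of defects introduced in that phase that were
    also detected in it, i.e., that did not escape to a later phase."""
    introduced = defaultdict(int)
    contained = defaultdict(int)
    for d in defects:
        introduced[d["phase_introduced"]] += 1
        if d["phase_introduced"] == d["phase_detected"]:
            contained[d["phase_introduced"]] += 1
    return {phase: contained[phase] / introduced[phase] for phase in introduced}

defects = [
    {"phase_introduced": "design", "phase_detected": "design"},
    {"phase_introduced": "design", "phase_detected": "system test"},
    {"phase_introduced": "coding", "phase_detected": "coding"},
]
print(phase_containment(defects))  # {'design': 0.5, 'coding': 1.0}
```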

7.2.1 Defect Workflow and States
IEEE 1044 incident management lifecycle:

 Recognition: an anomaly is observed, in any phase
 Investigation: related issues are revealed and solutions proposed
 Action: the defect is resolved, future occurrences are prevented, re-tests and regression tests are run
 Disposition: further information is captured and the report moves to a terminal state

Defect terminal status may be one of the following (a state-machine sketch of the workflow follows the list):


 Closed: defect is fixed and the fix verified through a confirmation test
 Cancelled: the defect report is invalid
 Irreproducible: the anomaly can no longer be observed
 Deferred: the anomaly relates to a real defect, but that defect will not be fixed during the project
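
The workflow can be pictured as a state machine. The sketch below is a minimal illustration; the non-terminal state names and the allowed transitions are assumptions, not prescribed by IEEE 1044:

```python
# Minimal sketch of a defect report state machine; the terminal states come
# from the list above, while the transition rules are an assumption.
from enum import Enum

class DefectState(Enum):
    RECOGNIZED = "recognized"
    INVESTIGATED = "investigated"
    ACTIONED = "actioned"
    CLOSED = "closed"                  # fix verified through a confirmation test
    CANCELLED = "cancelled"            # the defect report is invalid
    IRREPRODUCIBLE = "irreproducible"  # the anomaly can no longer be observed
    DEFERRED = "deferred"              # real defect, not fixed during the project

ALLOWED = {
    DefectState.RECOGNIZED: {DefectState.INVESTIGATED},
    DefectState.INVESTIGATED: {DefectState.ACTIONED, DefectState.CANCELLED,
                               DefectState.IRREPRODUCIBLE, DefectState.DEFERRED},
    DefectState.ACTIONED: {DefectState.CLOSED},  # after re-test and regression test
}

def advance(current: DefectState, target: DefectState) -> DefectState:
    """Move the report to a new state, rejecting illegal transitions."""
    if target not in ALLOWED.get(current, set()):
        raise ValueError(f"illegal transition {current.name} -> {target.name}")
    return target

state = advance(DefectState.RECOGNIZED, DefectState.INVESTIGATED)
```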

7.2.2 Defect Report Fields
The defect information serves several goals:

 Obtain a fix for the problem


 Support accurate classification
 Risk analysis
 Process improvement

Defect reports require (an illustrative record structure is sketched below):

 Information that clearly identifies the scenario in which the problem was detected
 Non-functional defect reports: require more details regarding the environment, other
performance parameters (e.g., size of the load), the sequence of steps and expected results
 Usability failures: state what the user expected the software to do
 In some cases, the tester may use the "reasonable person" test to determine that the usability is
unacceptable
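
An illustrative record structure covering these fields, with assumed (non-standardized) field names:

```python
# Illustrative sketch of a defect report record; the field names are
# assumptions, not a schema from IEEE 1044 or the syllabus.
from dataclasses import dataclass

@dataclass
class DefectReport:
    summary: str               # one-line description of the failure
    steps_to_reproduce: str    # the scenario in which the problem was detected
    expected_result: str       # for usability failures: what the user expected
    actual_result: str
    severity: str              # identified impact on the product
    priority: str              # urgency of the fix
    environment: str = ""      # vital detail for non-functional defects
    load_parameters: str = ""  # e.g., size of the load, for performance defects
    classification: str = ""   # supports accurate classification
    root_cause: str = ""       # filled in later; supports process improvement
```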

7.4 Root Cause Analysis


Purpose:

 Determine what caused the defect to occur


 Provide data to support process changes that will remove root causes that are responsible
for a significant portion of the defects

Root cause analysis is usually conducted by the person who investigates and either fixes the problem or
determines the problem should not or cannot be fixed

Typical root causes include:

 Unclear requirement
 Missing requirement
 Wrong requirement
 Incorrect design implementation
 Incorrect interface implementation
 Code logic error
 Calculation error
 Hardware error
 Interface error
 Invalid data

This root cause information is aggregated to determine common issues that are resulting in the
creation of defects.

Root cause information can be used for (a simple aggregation is sketched after this list):

 Process improvement helps an organization to monitor the benefits of effective process changes
 Quantify the costs of the defects that can be attributed to a particular root cause
 Provide funding for process changes that might require purchasing additional tools and
equipment as well as changing schedule timing
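
A minimal sketch of such aggregation, counting root causes across defect reports to surface the most common ones (the input format is an assumption):

```python
# Minimal sketch of aggregating root cause data (a simple Pareto-style count);
# the input format is an assumption.
from collections import Counter

def top_root_causes(defects, n=3):
    """Return the n most frequent root causes with their share of the total."""
    counts = Counter(d["root_cause"] for d in defects)
    total = sum(counts.values())
    return [(cause, count, count / total) for cause, count in counts.most_common(n)]

defects = [{"root_cause": c} for c in
           ["unclear requirement", "code logic error", "unclear requirement",
            "calculation error", "unclear requirement"]]
print(top_root_causes(defects))
# [('unclear requirement', 3, 0.6), ('code logic error', 1, 0.2),
#  ('calculation error', 1, 0.2)]
```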

7.5 Managing Invalid and Duplicate Defect Reports


An anomaly may occur not as the symptom of a defect but due to:
 Test environment
 The test data
 The testware
 Tester’s own misunderstanding

34
Defect reports are typically cancelled or closed as invalid when:
 The report is subsequently found not to relate to a defect in the work product under test, that is, a
false-positive result.
 Two or more defect reports are filed which subsequently are found to relate to the same root
cause (duplicate defect reports).
The Test Manager should be aware that:
 Attempting to eliminate all invalid and duplicate defect reports can increase the number of
false-negatives, as testers become too cautious about reporting
 The objective remains to increase defect detection effectiveness
 Reducing invalid and duplicate reports is a matter of process improvement

CHAPTER 8. Improving the Testing Process

8.1 Introduction
Once established, an organization’s overall test process should undergo continuous improvement.

8.2 Test Improvement Process


Test improvement models:

 Test Maturity Model integration (TMMi®)


 Systematic Test and Evaluation Process (STEP),
 Critical Testing Processes (CTP)
 TPI Next®

8.2.1 Introduction to Process Improvement


Deming improvement cycle: Plan, Do, Check, Act, has been used for many decades, and is still
relevant when testers need to improve the process in use today.

8.2.2 Types of Process Improvement


A common method of improving test processes is the use of tried and trusted practices, with or without reference models:

Process reference models:
- Provide a maturity measurement
- Compare the organization with the model
- Evaluate the organization within the framework of the model
- Provide a roadmap for improving
- Two types: staged models and continuous models

Content reference models:
- Provide business-driven evaluations
- Guide the organization to address its highest-priority issues to improve
- Select the appropriate roadmap
- One type: continuous models

Without models:
- Analytical approaches and retrospective meetings
- Select the appropriate roadmap

8.3 Improving the Testing Process


The IDEAL model defines the steps to implement process improvement:

 Initiating: the objectives, goals, scope and coverage are agreed on by the stakeholders; the
improvement model, criteria and measures are selected
 Diagnosing: a current test assessment report and a list of possible process improvements are
produced
 Establishing: the priority order of the improvements is established and a plan for their delivery
is drawn up
 Acting: the plan is implemented, including any required training or mentoring, piloting of
processes and ultimately their full deployment
 Learning: verify which benefits were received, check which of the success criteria were met,
and start the next cycle at the next level of maturity

8.4 Improving the Testing Process with TMMi

The Testing Maturity Model integration (TMMi) is composed of five maturity levels and is intended
to complement CMMI.

8.5 Improving the Testing Process with TPI Next


The TPI Next model defines 20 key areas, each of which covers a specific aspect of the test process,
such as test strategy, metrics, test tools and test environment.

Four maturity levels are defined in the model:

 Level 0 - Initial: "ad hoc" testing
 Level 1 - Controlled: doing the right things (e.g., a defined test strategy)
 Level 2 - Efficient: doing things the right way
 Level 3 - Optimizing: continuously adapting; doing the right things in the right way

Characteristics:

- Specific checkpoints are defined to assess each key area


- Summarized and visualized by means of a maturity matrix which covers all key areas
- The definition of improvement objectives and their implementation can be tailored according
to the needs and capacity of the testing organization

There are two dimensions of maturity in the TPI Next maturity matrix:

 Process by process: Each process has some degree of maturity, measured on one of four possible
scales: none or A; none, A, or B; none, A, B, or C; or none, A, B, C, or D.
 Overall: Based on the maturity of each process, we can assess the overall maturity by moving
along the upper row from 0 to 13 until we find that one or more processes has not achieved the
required level of maturity to move to the next number.

Suppose you start at level 0 of overall maturity.

To move to level 1, you need to make the following improvements:

 Take the Test Strategy process area to level A.


 Take the Lifecycle Model process area to level A.
 Take the Test Specification Techniques process area to level A.
 Take the Office Environment process area to level A.
 Take the Commitment and Motivation process area to level A.
 Take the Reporting process area to level A.
 Take the Defect Management process area to level A.
 Take the Test Process Management process area to level A.

Now, to move from level 1 to level 2, you need to make the following improvements (a sketch of the
overall level computation follows this list):

 Take the Moment of Involvement process area to level A.


 Take the Communication process area to level A.
 Take the Testware Management process area to level A.
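
A minimal sketch of that computation, encoding only the two improvement steps listed above (the real TPI Next matrix covers all 20 key areas up to level 13):

```python
# Minimal sketch of deriving the overall maturity level from per-key-area
# letter levels; only the two steps listed above are encoded, so this is an
# illustration, not the full TPI Next maturity matrix.
LEVEL_REQUIREMENTS = {
    1: {"Test Strategy": "A", "Lifecycle Model": "A",
        "Test Specification Techniques": "A", "Office Environment": "A",
        "Commitment and Motivation": "A", "Reporting": "A",
        "Defect Management": "A", "Test Process Management": "A"},
    2: {"Moment of Involvement": "A", "Communication": "A",
        "Testware Management": "A"},
}

def overall_level(achieved):
    """Walk up from level 0 until some key area misses its required letter
    level ("" < "A" < "B" < "C" < "D" compares correctly as strings)."""
    level = 0
    for target, required in sorted(LEVEL_REQUIREMENTS.items()):
        if all(achieved.get(area, "") >= letter for area, letter in required.items()):
            level = target
        else:
            break
    return level

achieved = {"Test Strategy": "A", "Lifecycle Model": "A"}  # others not yet at A
print(overall_level(achieved))  # 0
```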

8.6 Improving the Testing Process with CTP


The premise of the Critical Testing Processes (CTP) assessment model is that certain testing processes
are critical. CTP is primarily a content reference model.

12 critical testing processes:


• Testing

• Establishing context
• Quality risk analysis
• Test estimation
• Test planning
• Test team development
• Test system development
• Test release management
• Test execution
• Bug reporting
• Results reporting
• Change management

8.7 Improving the Testing Process with STEP


STEP (Systematic Test and Evaluation Process)
STEP is primarily a content reference model
The STEP methodology stresses "test then code" by using a requirements-based testing strategy to
ensure that early creation of test cases validates the requirements specification prior to design and
coding.

Basic premises of the methodology include:

o A requirements-based testing strategy

o Testing starts at the beginning of the lifecycle

o Tests are used as requirements and usage models

o Testware design leads software design

o Defects are detected earlier or prevented altogether

o Defects are systematically analyzed

o Testers and developers work together

CHAPTER 9. Test Tools and Automation

9.1 Introduction

Test execution tool scripting techniques (both are sketched below):

 Data-driven scripting technique: data files store test inputs and expected results in a table or
spreadsheet; typically built on capture/playback tools
 Keyword-driven scripting technique: data files store test inputs, expected results and keywords
in a table or spreadsheet; the supporting scripts are written manually
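
A minimal sketch contrasting the two techniques; the CSV file names, column names and the login/keyword examples are illustrative assumptions:

```python
# Minimal sketch of data-driven vs. keyword-driven test execution; file and
# column names, and the login/keyword examples, are assumptions.
import csv

# Data-driven: one fixed script; inputs and expected results come from a table.
def run_data_driven(login, datafile="login_data.csv"):
    with open(datafile, newline="") as f:
        for row in csv.DictReader(f):  # columns: user, password, expected
            assert login(row["user"], row["password"]) == row["expected"]

# Keyword-driven: the table also names the action (keyword) to execute, so
# new tests can be composed from existing keywords without writing code.
def run_keyword_driven(keywords, testfile="login_test.csv"):
    with open(testfile, newline="") as f:
        for row in csv.DictReader(f):  # columns: keyword, argument, expected
            result = keywords[row["keyword"]](row["argument"])
            if row["expected"]:
                assert str(result) == row["expected"]
```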

9.2 Tool Selection

9.2.1 Open-Source Tools


Open-source tools are available for almost any facet of the testing process, from test case
management to defect tracking to test case automation

Open-source tool benefits:

 No high initial purchase cost
 Can be modified or extended by their users

Open-source tool disadvantages:

 There may not be any formal support available for the tool
 Many open-source tools solve a specific problem or address a single issue; therefore, the tool may
not perform all of the functions of a similar vendor tool

A Test Manager must ensure that:

 The team does not start using open-source tools just for the sake of using them; as with
other tools, the effort must always be targeted at deriving a positive ROI.

9.2.2 Custom Tools


The testing organization may find that they have a specific need for which no vendor or open-source tool
is available. Reasons may be a proprietary hardware platform, a customized environment or a process that

has been modified in an unusual way. In such cases, if the core competence exists on the team, the Test
Manager may wish to consider developing a custom tool.

9.2.3 Return on Investment (ROI)


ROI weighs the benefits gained from a tool against the costs of acquiring and owning it (a worked
sketch follows the cost lists). Non-recurring costs include the following:

- Defining tool requirements to meet the objectives and goals


- Evaluating and selecting the correct tool and tool vendor
- Purchasing, adapting or developing the tool
- Performing the initial training for the tool
- Integrating the tool with other tools
- Procuring the hardware/software needed to support the tool

Recurring costs include the following:

- Owning the tool


- Licensing and support fees
- Maintenance costs for the tool itself
- Maintenance of artifacts created by the tool
- Ongoing training and mentoring costs
- Porting the tool to different environments
- Adapting the tool to future needs
- Improving the quality and processes to ensure optimal use of the selected tools
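
A minimal sketch of weighing these costs against the benefits, with an assumed benefit model (effort hours saved) and illustrative figures:

```python
# Minimal sketch of a tool ROI estimate; the benefit model (hours saved per
# year) and all figures are illustrative assumptions.
def tool_roi(non_recurring, recurring_per_year, hours_saved_per_year,
             hourly_rate, years=3):
    """ROI = (benefits - costs) / costs over the evaluation period."""
    costs = non_recurring + recurring_per_year * years
    benefits = hours_saved_per_year * hourly_rate * years
    return (benefits - costs) / costs

# Example: $50k non-recurring, $10k/year recurring, saving 800 hours/year
# at $60/hour, evaluated over 3 years.
print(f"{tool_roi(50_000, 10_000, 800, 60):.0%}")  # 80%
```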

CHAPTER 10. People Skills and Team Composition

10.1 Individual Skills


An individual's ability to test software can be obtained through experience or by education and training.
The tester's knowledge base:

 Use of software systems


 Knowledge of the domain or business
 Participation in various phases of the software development process activities including analysis,
development and technical support
 Participation in software testing activities

Test skills:

 Technical skills (hard skills)
 Interpersonal skills (soft skills)

1. Tester’s skills:
o Skills in Foundation (requires curiosity, professional pessimism, a critical eye, attention to
detail, good communication)
o The ability to analyze a specification, participate in risk analysis, design test cases, and the
diligence for running tests and recording the results.
2. Test Manager's skills:
o Project management (making a plan, tracking progress and reporting to stakeholders)
o Technical skills, interpersonal skills
o Work effectively with others, the successful test professional must also be well-organized,
attentive to detail and possess strong written and verbal communication skills.

10.3 Fitting Testing Within an Organization


Cross-team testing has three types:

 Distributed testing
 In-sourced testing
 Out-sourced testing

Advantages: the inherent independence of testing, plus a lower budget and overcoming staff shortages.

Disadvantages: people who have not taken a close part in the development will be less prepared for the
testing task and may take longer to produce test specifications.

The risks related to distributed, in-sourced, and outsourced testing fall within the areas of:

 Process descriptions
 Distribution of work
 Quality of work
 Culture
 Trust

10.4 Motivation
Testers can be motivated through:
• Recognition
• Approval
• Respect
• Responsibility
• Rewards

10.5 Communication
Test Managers must:

 Communication must be effective for the target audience (users, project team members,
management, external testing groups and customers)
 Communicate effectively within the testing group to pass on news, instructions, changes in
priorities and other standard information that is imparted in the normal process of testing.
 Produce professional quality documentation that is representative of an organization that
promotes quality

