ISTQB Advanced Knowledge Summary - 10 Chapters - V1.0
ISTQB Advanced introduction
Questions at the Advanced level can be quite long. They are still classified into the K1, K2, K3, and K4 levels, but the proportion of K1 and K2 questions is lower than at the Foundation level.
This book contains solid features to help you do that, including the following:
You must read every chapter of this book and the entire ISTQB Advanced syllabus
CHAPTER 1. Test Basics and Test Metrics
Regression test all the functions and capabilities after the first iteration
Failure to plan for bugs and how to handle them
The lack of rigor in and respect for testing
The designs of the system will change
Schedules can be quite unpredictable
C- Agile project:
Use a less formalized process and a much closer working relationship that allows changes to
occur more easily within the project
Less comprehensive test documentation in favor of more rapid methods of
communication such as daily "stand up" meetings
Requires the earliest involvement of the Test Analyst, continuing throughout the project lifecycle:
o Working with the developers as they do their initial architecture and design work
o Reviews may not be formalized but are continuous as the software evolves
Good change management and configuration management are critical for testing
Unit or component
Integration
System
Acceptance
You should expect to find most, if not all, of the following for each level:
Clearly defined test goals and scope
Traceability to the test basis (if available)
Entry and exit criteria, as appropriate both for the level and for the system lifecycle
Test deliverables, including results reporting that will be expected
Test techniques that will be applied, as appropriate for the level, for the team, and for the risks
inherent in the system
Measurements and metrics
Test tools, where applicable and as appropriate for the level
And, if applicable, compliance with organizational or other standards
System of systems: Multiple heterogeneous, distributed systems that are embedded in networks
at multiple levels and in multiple domains and are interconnected, addressing large-scale
interdisciplinary common problems and purposes.
The integration of commercial off-the-shelf (COTS) software, along with some amount of
custom development, often taking place over a long period.
Significant technical, lifecycle, and organizational complexity and heterogeneity.
Different development lifecycles and other processes among disparate teams, especially—as is
frequently the case—when distributed work, insourcing, and outsourcing are involved.
Serious potential reliability issues due to intersystem coupling, where one inherently weaker
system creates ripple-effect failures across the entire system of systems.
System integration testing, including interoperability testing, is essential.
Safety-Critical Systems:
Safety-critical system: A system whose failure or malfunction may result in death or serious
injury to people, or loss or severe damage to equipment, or environmental harm. - Safety-
critical systems are those systems upon which lives depend
Defects can cause death, and deaths can cause civil and criminal penalties, so proof of adequate
testing can be and often is used to reduce liability.
Focus on quality as a very important project priority.
Various regulations and standards often apply to safety-critical systems
Traceability all the way from regulatory requirements to test results helps demonstrate
compliance.
Mean time between failure or failure arrival rate
Breakdown of the number or percentage of defects categorized by the following:
Total number of tests planned, specified (implemented), run, passed, failed, blocked, and skipped
Regression and confirmation test status, including trends and totals for regression test and
confirmation test failures
Hours of testing planned per day versus actual hours achieved
Availability of the test environment (percentage of planned test hours when the test environment
is usable by the test team)
Progress and completion activities:
Number of test conditions, test cases, or test specifications planned and those executed, broken down by whether they passed or failed
Total defects, often broken down by severity, priority, current state, affected subsystem, etc.
Number of changes required, accepted, built, and tested
Planned versus actual cost
Planned versus actual duration
Planned versus actual dates for testing milestones
Planned versus actual dates for test-related project milestones (e.g., code freeze)
Product (quality) risk status, often broken down by mitigated versus unmitigated, major risk areas, new risks discovered after test analysis, etc.
Percentage loss of test effort, cost, or time due to blocking events or planned changes
Confirmation and regression test status

Closure activities:
Percentage of test cases executed, passed, failed, blocked, and skipped during test execution
Percentage of test cases checked into a re-usable test case repository
Percentage of test cases automated, or planned versus actual test cases automated
Percentage of test cases integrated into the regression tests
Percentage of defect reports resolved/not resolved
Percentage of test work products identified and archived
• Costs of internal failure
o Fix bug prior to delivery
o Re-test
• Costs of external failure
o Support customer
o Fix bug after delivery
o Regression test
2.1 Introduction
Test strategy - describes the organization’s general, project-independent methods for testing
Test approach- The implementation of the test strategy for a specific project
Master test plan (or project test plan) - describes the implementation of the test strategy for a
particular project
Level test plan (or phase test plan) - describes the particular activities to be carried out within each
test level
a. The strategy:
Preventive strategy:
When: early involvement of the test team; test planning, analysis, and design activities are used to identify problems in work products
Advantage: resolves these problems before test execution; prevents the late discovery of a large number of defects, which is a leading cause of project failure
Project type: for projects that are predictable, carefully planned, and executed according to plan

Reactive strategy:
When: responding to the system as it is actually presented to the test team
Advantage: avoids costs associated with development of test work products that might not be useful if the actual system differs from what was initially planned
Project type: for projects that are chaotic, poorly planned, or in constant fundamental change

Types of strategies: analytical, model-based, methodical, dynamic or heuristic, process- or standard-compliant, regression testing, consultative
b. The approach:
Define test levels
The goals and objectives of each level
What test techniques will be used at each level of testing
Coverage of each level
c. Exit criteria
Used to determine whether we can stop testing or must continue to reach the testing objectives
Derived from the strategy and risk analysis; the higher the risk, the more stringent the criteria
Guide the specification of the test and the selection of test case design techniques
The most appropriate completion criteria (see the sketch after this list):
o Specified coverage has been achieved
o Specified number of failures found per test effort has been achieved
o No known serious faults
o The benefits of the system are bigger than known problems
o (The time has run out)
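As a rough illustration, here is a minimal sketch in Python of evaluating such completion criteria; the function name, thresholds, and parameters are hypothetical, not part of the syllabus:

```python
# Minimal sketch: evaluating illustrative exit criteria for a test level.
# All thresholds and parameter names are assumptions for this example.

def exit_criteria_met(coverage: float, open_serious_defects: int,
                      failures_per_hour: float, time_left_hours: float) -> bool:
    """Return True when testing may stop under these example criteria."""
    if time_left_hours <= 0:                # (the time has run out)
        return True
    return (coverage >= 0.95                # specified coverage achieved
            and open_serious_defects == 0   # no known serious faults
            and failures_per_hour < 0.1)    # failure find rate has dropped

print(exit_criteria_met(coverage=0.97, open_serious_defects=0,
                        failures_per_hour=0.05, time_left_hours=40))  # True
```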
e. Environment
The Test Manager works with the project architects to define the initial test environment
specification, verify availability of the required resources, ensure that the people who will
configure the environment are committed to doing so, and understand the cost/delivery timescales and
the work required to complete and deliver the test environment.
Based on the fundamental test process, you can create your work breakdown structure with the following major
activities:
Planning
Control
Analysis
Design
Implementation
Execution
Results reporting and exit evaluation
Closure
Process factors
Material factors
People factors
Delaying factors
Some factors, the material factors, arise from the nature of the project, the tools at hand, the resources
available, and so forth:
Proper skills, experience, and attitudes in the project team, especially in the managers and key
players
Stability of the project team, especially the absence of turnover
Established, positive project team relationships, again including the individual contributors, the
managers, and the project stakeholders
Competent, responsive test environment support
Project-wide appreciation of testing, release engineering, system administration, and other non-
glamorous but essential roles (i.e., not an "individual heroics" culture)
Use of skilled contractors and consultants to fill gaps
Honesty, commitment, transparency, and open, shared agendas among the individual
contributors, the managers, and the project stakeholders
Finally, there are certain delaying factors that, when present, always increase schedule and effort:
Estimating Techniques:
Monitoring: compare actual progress against the plan; report the status of executing the plan, including
any deviations that arise.
Monitoring includes measuring the completeness of the test process and creating reports. Some
measurements:
Defect discovery rates and other defect information
Planned versus actual hours to develop testware and execute test cases
Control: develop and apply a set of corrective actions to get a test project on track when monitoring shows a deviation from what was planned.
Respond to the changes in the mission, strategies, and objectives of testing, or to changes in the
project.
Re-allocate the remaining effort and reprioritize the tests.
The creation or updating of documents
Test analysis is the activity that defines "what" is to be tested in the form of test conditions
Activities:
Test case: A set of input values, execution preconditions, expected results, and execution post
conditions developed for a particular objective or test condition
Test condition: An item or event of a component or system that could be verified by one or more
test cases (e.g., a function, transaction, feature, quality attribute, or structural element).
Test execution: The process of running a test on the component or system under test, producing
actual result(s).
Test procedure: A document specifying a sequence of actions for the execution of a test. Also
known as test script or manual test script.
Test conditions:
Be identified by analysis of the test basis, test objectives, and product risks
Be viewed as the detailed measures and targets for success
Be identified and traced back to each product risk
Be traceable back to the test basis and defined strategic objectives
Be traceable forward to test designs and other test work products
Providing better and more detailed monitoring and control for a Test Manager
Contributes to defect prevention by occurring early in a project for higher levels of testing
Relates testing work products to stakeholders in terms that they can understand
Enables test design, implementation and execution
Provides the basis for clearer horizontal traceability within a test level
Preconditions
Test environment requirements
Test inputs and other test data requirements
Expected results
Post conditions
Activities:
Determine in which test areas low-level (concrete) or high-level (logical) test cases are most
appropriate
Determine the test case design technique(s) that provide the necessary test coverage
Create test cases that exercise the identified test conditions
Apply the prioritization criteria identified during risk analysis and test planning
Interfaces between the various test objects
Low-level (concrete) test cases provide all the specific information and procedures needed for the
tester to execute the test case and verify the results.
High-level (logical) test cases provide guidelines for what should be tested, leaving the tester to
choose the actual data and the procedure to follow when executing the test.
Both provide:
Objective
Preconditions
Test data requirements
Expected results
Post-conditions
Level of detail is considered by:
Using all available oracles—written and mental, provided and derived—the tester can define expected
results before and during test execution
Organize the test cases into test suites (i.e., groups of test cases)
Related test cases are executed together
Risk priority order may dictate the execution order for the test cases
The availability of the right people, equipment, data and the functionality to be tested
Influenced by the detail of the test cases and test conditions
Regulatory rules apply
Tests should provide evidence of compliance to applicable standards such as DO-178B/ED-12B
Test data for both inputs and the test environment is available
Test log: A chronological record of relevant details about the execution of tests.
Test logging: The process of recording information about tests executed into a test log.
To measure completeness of the testing with respect to exit criteria, we can measure properties of the test
execution process such as the following (a small sketch follows this list):
Number of test conditions, cases, or test procedures planned, executed, passed, and failed
Total defects, classified by severity, priority, status, or some other factor
Change requests proposed, accepted, and tested
Planned versus actual costs, schedule, effort
Quality risks, both mitigated and residual
Lost test time due to blocking events
Confirmation and regression test results
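As a minimal sketch (field names and numbers are hypothetical), such execution metrics can be summarized like this:

```python
# Minimal sketch: summarizing test execution metrics against exit criteria.
from dataclasses import dataclass

@dataclass
class ExecutionStatus:
    planned: int
    executed: int
    passed: int
    failed: int

    @property
    def execution_rate(self) -> float:
        return self.executed / self.planned

    @property
    def pass_rate(self) -> float:
        return self.passed / self.executed

status = ExecutionStatus(planned=200, executed=180, passed=162, failed=18)
print(f"executed {status.execution_rate:.0%}, passed {status.pass_rate:.0%}")
# -> executed 90%, passed 90%
```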
Test artifacts
Test completion:
• All planned tests should be either run or skipped?
• All known defects should be either fixed, deferred, or accepted?
• Regression test documented?

Handover:
• Known defects communicated to operation and support team?
• Tests and test environments handed over to the maintenance team?
• Any unanticipated deviation?

Lessons learned:
• Quality risk analysis?
• Metrics analysis?
• Root cause analysis and actions defined?
• Process improvement?

Archiving:
• All test work products archived in CM?
Establishing and monitoring metrics that monitor the quality of these work products
Working with the Test Analysts and Technical Test Analysts to select and customize appropriate
templates for these work products
Working with the Test Analysts and Technical Test Analysts to establish standards for these work
products, such as the degree of detail necessary in tests, logs, and reports
Reviewing testing work products using the appropriate techniques and by the appropriate
participants and stakeholders
IEEE-829 Templates
Test case template
Test transmittal report template
3.2.2 Risk categories
Project risk (planning risk):
A risk to the project's capability to deliver its products: scope, cost, time
Related to management and control of the (test) project, e.g., lack of ATM staffing, strict deadlines, changing requirements
The primary effect of a potential problem is on the overall success of the project

Product risk (quality risk):
A risk to the quality of the product
Directly related to the test object
Example: a possible reliability defect that could cause a system to crash during normal operation
Project Risks
However, some project risks can and should be mitigated successfully by the Test Manager and
Test Analyst:
• Preparing the testware earlier,
• Pre-testing of test environments,
• Pre-testing of early versions of the product,
• Applying tougher entry criteria to testing,
• Enforcing requirements for testability,
• Participating in reviews of early project work products,
• Participating in change management,
1. Extensive: run a large number of tests that are both broad and deep, exercising combinations and
variations of interesting conditions.
2. Broad: run a medium number of tests that exercise many different interesting conditions
3. Cursory: run a small number of tests that sample the most interesting conditions
4. Opportunity: leverage other tests or activities to run a test or two of an interesting condition, but
invest very little time and effort
5. Report bugs only: not test at all, but, if bugs related to this risk arise during other tests, report those
bugs
6. None: neither test for these risks nor report related bugs
a. Risk identification: figuring out what the different project and quality risks are for the project
Expert interviews
Independent assessments
Use of risk templates
Project retrospectives
Risk workshops and brainstorming
Checklists
Calling on past experience
b. Risk analysis: assessing the level of risk—typically based on likelihood and impact—for each
identified risk item
Lightweight techniques allow weighting of the likelihood and impact factors to emphasize
business or technical risks (see the sketch after this list):
Hazard analysis, which extends the analytical process upstream, attempting to identify the
hazards that underlie each risk.
Cost of exposure, where the risk assessment process involves determining, for each quality risk
item, three factors:
o 1) the likelihood (expressed as a percentage) of a failure related to the risk item;
o 2) the cost of a loss (expressed as a financial quantity) associated with a typical
failure related to the risk item, should it occur in production; and,
o 3) the cost of testing for such failures.
Failure Mode and Effect Analysis (FMEA) and its variants, where quality risks, their
potential causes, and their likely effects are identified and then severity, priority and
detection ratings are assigned.
Quality Function Deployment (QFD), which is a quality risk management technique
with testing implications, specifically, being concerned with quality risks that arise from an
incorrect or insufficient understanding of the customers’ or users’ requirements.
Fault Tree Analysis (FTA), where various actual observed failures (from testing or from
production), or potential failures (quality risks), are subjected to a root cause analysis starting
with defects that could cause the failure, then with errors or defects that could cause those defects,
continuing on until the various root causes are identified.
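As a minimal sketch (the weights, ratings, and names are assumptions, not from the syllabus), weighted likelihood/impact scoring and the cost-of-exposure calculation might look like this:

```python
# Minimal sketch: lightweight risk scoring with weighted likelihood/impact,
# plus a cost-of-exposure calculation. All values are illustrative.

TECH_WEIGHT, BIZ_WEIGHT = 1.0, 2.0   # assumption: emphasize business impact

def risk_priority(likelihood: int, impact: int) -> float:
    """Score a risk item; likelihood and impact rated 1 (low) to 5 (high)."""
    return (TECH_WEIGHT * likelihood) * (BIZ_WEIGHT * impact)

def cost_of_exposure(failure_probability: float, cost_of_loss: float) -> float:
    """Expected loss if untested: likelihood (as a fraction) times cost."""
    return failure_probability * cost_of_loss

# Testing for a risk pays off when the expected loss exceeds the cost of
# testing for such failures (factor 3 above).
exposure = cost_of_exposure(failure_probability=0.10, cost_of_loss=100_000)
print(risk_priority(likelihood=4, impact=5), exposure > 8_000)  # 40.0 True
```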
c. Risk mitigation (also called "risk control" because it consists of mitigation, contingency, transference, and
acceptance actions for various risks)
Distributed testing - Test effort is carried out at more than a single location
Outsourced testing - Test effort is carried out by resources that are not co-located with the project team and are not of the same company
Insourced testing - Test effort is carried out by people who are co-located with the project team but who are not fellow employees
Main issues:
• Clear channels of communication and well-defined expectations for missions, tasks, and
deliverables
• Alignment of methodologies
• For distributed testing, the division of the test work across the multiple locations must be explicit
and intelligently decided
• Develop and maintain trust that all of the test team(s) will carry out their roles properly in spite
of organizational, cultural, language, and geographical differences
Managing nonfunctional testing
A challenge with systems of systems is the complexity of distributed system testing (multiple different levels of testing will
occur at different locations and at different levels of formality, performed by different groups)
Suggest:
Test management will involve a master test plan that spans these various levels. This master test
plan is much easier to write and to manage if the entire project team follows a single, formal
lifecycle model.
A formal quality assurance process and plan that includes verification and validation throughout
the lifecycle, including static and dynamic testing and formalization of all test levels, is very
useful.
Formal configuration management, change management, and release management plans and
processes must define agreed-upon touch points and hand-offs between testing and the rest of the
project in these areas.
For many safety critical systems, there are industry-specific standards that apply to testing and quality
assurance:
For automotive systems, there is the voluntary Motor Industry Software Reliability Association
(MISRA) standard.
For medical systems, government entities like the United States Food and Drug Administration
apply strict mandatory standards.
Military systems are generally subject to various standards
A safety-critical system is safety critical because it might directly or indirectly kill or injure someone if it
fails. This risk creates legal liability as well as moral responsibility on the vendor or vendors creating the
system. While not all risks of failure can be reduced to zero likelihood, we can use formal, rigorous
techniques, including the following:
Bidirectional requirements, design, and risks traceability through to code and to tests
Minimum test coverage levels
Quality-focused acceptance criteria, including ones focused on non-functional quality
characteristics
Standardized, detailed, and mandatory test documentation, including of results
Non-functional tests should be prioritized and sequenced according to risk.
In iterative lifecycles, test design and implementation activities that take longer than the timescales
of a single iteration should be organized as separate work activities outside of the iterations.
Difficult to manage, don't document what was tested, and don't tend to scale to large teams of
testers very easily
While some proponents of exploratory testing have reacted defensively to these criticisms, others
have responded productively by devising techniques for managing exploratory testing.
Solution:
Using session-based test management: you break the test execution effort into test sessions. A test
session is the basic unit of testing work. It is generally limited in time, typically between 30 and
120 minutes in length. The period assigned to the test session is called the time box.
Test sessions should focus on a specific test object. They should focus on a specific test objective,
the one documented in the test charter
Out of this session comes a report, which captures the details of what the tester tested
Past: What happened during the session? What did you do, and what did you see?
Results: What was achieved during the session? What bugs did you find? What worked?
Outlook: What still needs to be done? What subsequent test sessions should we try?
Obstacles: What got in the way of good testing? What couldn't you do that you wanted to?
Feelings: How do you feel about this test session?
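A minimal sketch (the structure and field names are hypothetical) of a session report capturing the charter, time box, and the debrief items above:

```python
# Minimal sketch: a session-based test management report. The debrief
# questions above map to the past/results/outlook/obstacles/feelings fields.
from dataclasses import dataclass, field

@dataclass
class SessionReport:
    charter: str              # the specific test objective for the session
    time_box_minutes: int     # typically between 30 and 120 minutes
    past: str                 # what happened during the session
    results: str              # what was achieved; what worked
    outlook: str              # what still needs to be done
    obstacles: str            # what got in the way of good testing
    feelings: str             # how the tester feels about this session
    bugs: list = field(default_factory=list)

report = SessionReport(
    charter="Explore the checkout flow with invalid coupon codes",
    time_box_minutes=90,
    past="Tried expired, malformed, and duplicate coupons",
    results="Two defects logged; price recalculation held up",
    outlook="Cover coupon stacking in a follow-up session",
    obstacles="Test environment was reset twice",
    feelings="Confident about single-coupon coverage",
    bugs=["expired coupon accepted", "error text garbled"],
)
print(report.charter)
```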
CHAPTER 4. Test Techniques
The Advanced syllabus divides dynamic tests into five main types:
Black-box (also called specification-based or behavioral): We test based on the way the system is
supposed to work. Two main subtypes: functional and nonfunctional, following the ISO 9126
standard.
White-box (also called structural): We test based on the way the system is built.
Experience-based: We test based on our skills and intuition, along with our experience with
similar applications or technologies.
Dynamic analysis: We analyze an application while it is running, usually via some kind of
instrumentation in the code.
Defect-based: We use our understanding of the type of defect targeted by a test as the basis for
test design, with tests derived systematically from what is known about the defect.
Statement testing: A white-box test design technique in which test cases are designed to execute
statements.
Branch testing: A white-box test design technique in which test cases are designed to execute branches.
24
Condition testing: A white-box test design technique in which test cases are designed to execute
condition outcomes.
Condition determination testing: A white-box test design technique in which test cases are designed to
execute single condition outcomes that independently affect a decision outcome.
Control flow: A sequence of events (paths) in the execution through a component or system.
Control flow analysis: A form of static analysis based on a representation of sequences of events (paths)
in the execution through a component or system.
Data flow analysis: A form of static analysis based on the definition and usage of variables.
Decision testing: A white-box test design technique in which test cases are designed to execute decision
outcomes.
LCSAJ: A Linear Code Sequence and Jump, consisting of the following three items (conventionally
identified by line numbers in a source code listing): the start of the linear sequence of executable
statements, the end of the linear sequence, and the target line to which control flow is transferred at the
end of the linear sequence.
Path testing: A white-box test design technique in which test cases are designed to execute paths.
Path: A sequence of events, e.g., executable statements, of a component or system from an entry point to
an exit point.
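To make the first few techniques concrete, here is a small hypothetical sketch (function and tests invented for illustration) showing that 100% statement coverage does not imply 100% branch coverage:

```python
# Hypothetical example: statement coverage vs. branch coverage.
def discounted_total(amount: float, is_member: bool) -> float:
    discount = 0.0
    if amount > 100:          # decision D1
        discount = 0.10
    if is_member:             # decision D2
        discount += 0.05
    return amount * (1 - discount)

# This single test executes every statement (100% statement coverage) ...
assert discounted_total(200, True) == 200 * 0.85
# ... but misses the False branches of D1 and D2. Branch coverage also needs:
assert discounted_total(50, False) == 50.0
# Each decision here has a single atomic condition, so these two tests also
# give each condition both a True and a False outcome (condition testing).
```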
Defect-based testing derives tests from defect taxonomies (i.e., categorized lists) that may be
completely independent from the software being tested. The taxonomies can include lists of
defect types, root causes, failure symptoms and other defect-related data
Defect taxonomy: A system of (hierarchical) categories designed to be a useful aid for reproducibly
classifying defects.
The Test Analyst uses the taxonomy data to determine the goal of the testing, which is to find a specific
type of defect; the taxonomy catalogs known and common defects of a particular type.
Experienced-based technique (or experienced-based test design technique): Procedure to derive and/or
select test cases based on the tester's experience, knowledge, and intuition.
Combinatorial strategies (combine possible values of multiple input/output parameters to reduce the number of cases):
Decision Table Testing - there is a set of parameter combinations and their outputs described by business or other rules
Pairwise Testing - the number of input combinations is extremely large and should be reduced to an acceptable set of cases
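A minimal sketch of pairwise reduction (the parameters and values are hypothetical; a greedy heuristic stands in for a real pairwise tool):

```python
# Minimal sketch: greedy pairwise reduction. Full combination testing of
# 3 browsers x 3 OSes x 3 locales = 27 cases; covering every PAIR of
# values needs far fewer.
from itertools import combinations, product

params = {
    "browser": ["Chrome", "Firefox", "Safari"],
    "os": ["Windows", "macOS", "Linux"],
    "locale": ["en", "de", "vi"],
}
names = list(params)

def pairs_of(case):
    """All (parameter, value) pairs covered by one test case."""
    return set(combinations(zip(names, case), 2))

uncovered = set()
for case in product(*params.values()):
    uncovered |= pairs_of(case)

suite = []
while uncovered:  # greedily pick the case covering the most uncovered pairs
    best = max(product(*params.values()),
               key=lambda c: len(pairs_of(c) & uncovered))
    suite.append(best)
    uncovered -= pairs_of(best)

print(len(suite), "cases instead of 27")  # typically around 9-11 cases
```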
CHAPTER 5. Quality Characteristics for Technical Testing
■ Accuracy
■ Suitability
■ Interoperability
■ Usability
■ Security
5.2.1 Accuracy
Accuracy is defined as the capability of a system to provide the provably correct results to the needed
level of precision
Decision tables will be required when the system calculates values based on multiple inputs that interact.
- Use the correct data types to ensure that the correct data is stored in memory.
During integration testing, a technical test analyst must be ready to test the entire data path.
Test types: equivalence partitioning, boundary value analysis, decision tables, combination techniques (a small sketch follows)
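A minimal boundary value analysis sketch (the accepted range and function are hypothetical):

```python
# Minimal sketch: boundary value analysis for an accuracy check.
# Assume a field that accepts integers from 1 to 100 inclusive.
LOW, HIGH = 1, 100

def accept(value: int) -> bool:
    return LOW <= value <= HIGH

# Two-value BVA: test each boundary and its nearest invalid neighbor.
boundary_tests = {0: False, 1: True, 100: True, 101: False}
for value, expected in boundary_tests.items():
    assert accept(value) == expected, f"boundary failure at {value}"
print("all boundary checks passed")
```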
5.2.2 Suitability
The capability of the software product to provide an appropriate set of functions for specified tasks and
user objectives
- Static testing at the low-level design and code phases would be predominantly done by technical
testers, with domain testers being more involved in the requirements and high-level design phases.
- Once code becomes available, technical test analysts would likely be more involved at the lower
levels (unit and integration testing) and test analysts would be more involved at the system test
level.
- Ongoing feedback from users would be an important input to make sure suitability errors are
rectified in later releases.
5.2.3 Interoperability
The capability of the software product to interact with one or more specified components or systems.
Some of the testing that we would expect to do might include the following items:
- Use of industry-wide communications protocols and standards, including XML, JDBC, RPCs,
DCOM, etc.
- Efficient data transfer between systems, with special emphasis on systems with different data
representations and precisions
- The ability to self-configure, that is, for the system to detect and adapt itself to the needs of other
systems
- Relative performance when working with other systems
Test types: use cases, pairwise testing, classification trees, and other techniques
5.2.4 Usability
The capability of the software to be understood, learned, used, and attractive to the user when used under
specified conditions.
TA and TTA should have a background in psychology rather than simply being technologists or domain
experts. Knowledge of sociology and ergonomics is also helpful. An understanding of national standards
related to accessibility can be important for applications subject to such standards.
- Inspection (evaluation or review): considers the specification and designs from a usability point of
view
- Heuristic evaluation: a static usability test technique to determine the compliance of a user
interface with recognized usability principles
- Validation of the actual implementation: looks at the inputs, outputs, and results; usability test
scenarios examine various usability attributes, such as speed of learning or operability; interviews
of the users performing the tests; syntax tests
- Surveys and questionnaires: standard and publicly available surveys like Software Usability
Measurement Inventory (SUMI) and Website Analysis and Measurement Inventory (WAMMI).
CHAPTER 6. Review
6.1 Introduction
Types of reviews in Foundation:
Informal review
Walkthrough
Technical review
Inspection
In addition to these, Test Managers may also be involved in:
Management reviews
Audits
A formal review should include the following essential roles and responsibilities:
The manager: The manager allocates resources, schedules reviews, and the like. However, they
might not be allowed to attend based on the review type.
The moderator or leader: This is the chair of the review meeting.
The author: The person who wrote the item under review. A review meeting, done properly,
should not be a sad or humiliating experience for the author.
The reviewers: The people who review the item under review, possibly finding defects in it.
Reviewers can play specialized roles, based on their expertise or based on some type of defect
they should target.
The scribe or secretary or recorder: The person who writes down the findings.
6.2.1 Management reviews
Objectives:
Monitor progress, assess status, and make decisions about future actions
Decisions about the future of the project, such as adapting the level of resources,
implementing corrective actions or changing the scope of the project
6.2.2 Audits
Objectives:
Demonstrate conformance to a defined set of criteria, most likely an applicable standard,
regulatory constraint, or a contractual obligation.
Provide independent evaluation of compliance to processes, regulations, standards, etc.
Metrics
Objectives
Number of reviews
Type of reviews
Participants
Roles and responsibilities
Risk
Adequate measurements
Checklists
Defect severity and priority evaluation
Individual preparation
Review meeting
Rework
Follow-up
Defined entry and exit criteria
Checklists to be used by the reviewers
Deliverables such as reports, evaluation sheets or other review summary sheets
Metrics for reporting on the review effectiveness, efficiency, and progress
If the prerequisite conditions for the formal review are not fulfilled, the review leader may propose a final
decision:
CHAPTER 7. Defect Management
7.1 Introduction
BS 7925-1 defines: An incident is every (significant) unplanned event observed during testing, and
requiring further investigation.
IEEE 1044 uses the term "anomaly" instead of "incident": "Any condition that deviates from the
expected based on requirements specifications, design documents, user documents, standards, etc. or from
someone's perception or experience."
Anomaly: Any condition that deviates from expectation based on requirements specifications,
design documents, user documents, standards, etc.
Supporting data
Classification
Identified impact
7.2.1 Defect Workflow and States
IEEE 1044 incident management lifecycle:
Recognition: observe an anomaly (in any phase)
Investigation: reveal related issues; propose solutions
Action: resolve the defect; prevent future defects; re-test and regression test
Disposition: capture further info; move to a terminal state
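A minimal sketch (states and transition table are a simplification of the lifecycle above, not the full IEEE 1044 state model):

```python
# Minimal sketch: a defect report advancing through the four IEEE 1044
# lifecycle steps summarized above.
from enum import Enum, auto

class Step(Enum):
    RECOGNITION = auto()    # an anomaly is observed, in any phase
    INVESTIGATION = auto()  # related issues revealed, solutions proposed
    ACTION = auto()         # defect resolved; re-test and regression test
    DISPOSITION = auto()    # further info captured; terminal state reached

TRANSITIONS = {
    Step.RECOGNITION: Step.INVESTIGATION,
    Step.INVESTIGATION: Step.ACTION,
    Step.ACTION: Step.DISPOSITION,
}

def advance(step: Step) -> Step:
    if step is Step.DISPOSITION:
        raise ValueError("already in a terminal state")
    return TRANSITIONS[step]

step = Step.RECOGNITION
while step is not Step.DISPOSITION:
    step = advance(step)
print(step)  # Step.DISPOSITION
```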
7.2.2 Defect Report Fields
The defect information has these goals:
The information: clearly identify the scenario in which the problem was detected
Non-functional defect reports: require more details regarding the environment, other
performance parameters (e.g., size of the load), sequence of steps and expected results
Usability failure: state what the user expected the software to do
In some cases, the tester may use the "reasonable person" test to determine that the usability is
unacceptable
Root cause analysis is usually conducted by the person who investigates and either fixes the problem or
determines the problem should not or cannot be fixed
Unclear requirement
Missing requirement
Wrong requirement
Incorrect design implementation
Incorrect interface implementation
Code logic error
Calculation error
Hardware error
Interface error
Invalid data
This root cause information is aggregated to determine common issues that are resulting in the
creation of defects.
Process improvement helps an organization to monitor the benefits of effective process changes
Quantify the costs of the defects that can be attributed to a particular root cause
Provide funding for process changes that might require purchasing additional tools and
equipment as well as changing schedule timing
Defect reports are typically cancelled or closed as invalid in these cases:
The report is subsequently found not to relate to a defect in the work product under test; that is a false-positive result
Two or more defect reports are filed that are subsequently found to relate to the same root cause (duplicate defect reports)
The Test Manager takes actions to:
Attempt to eliminate invalid and duplicate defect reports
Reduce the number of false-negatives
Increase the defect detection effectiveness (a small sketch follows)
Improve the process
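Defect detection effectiveness (often called defect detection percentage) can be computed as below; the numbers are hypothetical:

```python
# Minimal sketch: defect detection effectiveness (DDE/DDP) - the share of
# all known defects that testing caught before release.
def dde(found_in_test: int, found_after_release: int) -> float:
    return found_in_test / (found_in_test + found_after_release)

print(f"{dde(found_in_test=180, found_after_release=20):.0%}")  # 90%
```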
CHAPTER 8. Improving the Testing Process
8.1 Introduction
Once established, an organization’s overall test process should undergo continuous improvement.
(IDEAL improvement model phases: Initiating, Diagnosing, Establishing, Acting, Learning)
8.4 Improving the Testing Process with TMMi
The Testing Maturity Model integration (TMMi) is composed of five maturity levels and is intended
to complement CMMI.
Maturity levels: Level 0 - Initial, Level 1 - Controlled, Level 2 - Efficient, Level 3 - Optimizing
Characteristics:
Process by process: Each process has some degree of maturity, measured on one of four possible
scales: none or A; none, A, or B; none, A, B, or C; or none, A, B, C, or D.
Overall: Based on the maturity of each process, we can assess the overall maturity by moving
along the upper row from 0 to 13 until we find that one or more processes has not achieved the
required level of maturity to move to the next number.
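A minimal sketch of that overall assessment (the key areas, required levels, and matrix entries are invented for illustration; the real maturity matrix differs):

```python
# Minimal sketch: derive an overall maturity number by walking the scale
# 0..13 until some process has not reached its required level.
LEVEL_ORDER = ["none", "A", "B", "C", "D"]

# Hypothetical requirements: for each scale point, the level each key
# area must have reached.
REQUIRED = {
    1: {"test_strategy": "A"},
    2: {"test_strategy": "A", "metrics": "A"},
    3: {"test_strategy": "B", "metrics": "A", "reporting": "A"},
}
achieved = {"test_strategy": "B", "metrics": "A", "reporting": "none"}

def meets(area: str, needed: str) -> bool:
    return LEVEL_ORDER.index(achieved[area]) >= LEVEL_ORDER.index(needed)

overall = 0
for point in sorted(REQUIRED):
    if all(meets(area, lvl) for area, lvl in REQUIRED[point].items()):
        overall = point
    else:
        break
print(overall)  # 2: "reporting" has not reached level A at point 3
```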
Now, to move from level 1 to level 2, you need to make the following improvements:
• Establishing context
• Quality risk analysis
• Test estimation
• Test planning
• Test team development
• Test system development
• Test release management
• Test execution
• Bug reporting
• Results reporting
• Change management
CHAPTER 9. Test Tools and Automation
9.1 Introduction
Data-driven automation: data files store test input and expected results in a table or spreadsheet.
Keyword-driven automation: data files store test input, expected results, and keywords in a table or spreadsheet.
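A minimal keyword-driven sketch (the keywords, actions, and table rows are hypothetical):

```python
# Minimal sketch: a keyword-driven table interpreter. Each row holds a
# keyword plus its data, as it might be read from a spreadsheet.
test_table = [
    ("login",       {"user": "alice", "password": "s3cret"}),
    ("add_to_cart", {"item": "ISTQB book"}),
    ("check_total", {"expected": "42.00"}),
]

def login(user, password):   print(f"logging in as {user}")
def add_to_cart(item):       print(f"adding {item}")
def check_total(expected):   print(f"asserting total == {expected}")

KEYWORDS = {"login": login, "add_to_cart": add_to_cart,
            "check_total": check_total}

for keyword, data in test_table:
    KEYWORDS[keyword](**data)   # dispatch each row to its implementation
```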
The team does not start using open-source tools just for the sake of using them; as with
other tools, the effort must always be targeted at deriving a positive ROI.
has been modified in an unusual way. In such cases, if the core competence exists on the team, the Test
Manager may wish to consider developing a custom tool.
CHAPTER 10. People Skills - Team Composition
Test skills
1. Tester's skills:
o Skills in Foundation (requires curiosity, professional pessimism, a critical eye, attention to
detail, good communication)
o The ability to analyze a specification, participate in risk analysis, design test cases, and the
diligence for running tests and recording the results.
2. Test Manager’s skill
o Project management (making a plan, tracking progress and reporting to stakeholders)
o Technical skills, interpersonal skills
o To work effectively with others, the successful test professional must also be well-organized,
attentive to detail, and possess strong written and verbal communication skills.
Distributed testing
In-sourced testing
Out-sourced testing
Advantages: the inherent independence of testing, a lower budget, and overcoming a shortage of staff.
Disadvantages: people who have not taken a close part in the development will be less prepared for the
testing task and might take longer to produce test specifications.
The risks related to distributed, in-sourced, and outsourced testing fall within the areas of:
Process descriptions
Distribution of work
Quality of work
Culture
Trust
10.4 Motivation
• Recognition
• Approval
• Respect
• Responsibility
• Rewards
10.5 Communication
Test Managers communicate with:
Communication must be effective for the target audience (users, project team members,
management, external testing groups and customers)
Communicate effectively within the testing group to pass on news, instructions, changes in
priorities and other standard information that is imparted in the normal process of testing.
Produce professional quality documentation that is representative of an organization that
promotes quality