CSTE Body of Knowledge Course 2006


CSTE Body of Knowledge

Knowledge Category 1

Software Testing Principles and Concepts

The “basics” of software testing are represented by the vocabulary of testing, testing
approaches, methods and techniques as well as the materials used by testers in
performing their test activities. Specifically, this knowledge category will address:

• Testing Techniques - Understand the various approaches used in testing, including static (e.g., desk checking), white-box (logic driven), black-box (requirements driven), load testing, coverage testing, and regression testing. Also included are the methods for designing and conducting tests. (A brief black-box illustration follows this list.)
• Levels of Testing - Identify the levels of testing such as unit, string, integration, system, recovery, acceptance, parallel, performance, and interface testing.
• Testing Different Types of Software - The changes in the approach to testing software built with different development approaches, such as batch processing, client/server, web-based, object-oriented, and wireless systems.
• Independent Testing - Testing by individuals other than those involved in the
development of the product or system.
• Vocabulary - The technical terms used to describe various testing techniques,
tools, principles, concepts, and activities.
• The Multiple Roles of Software Testers - The test objectives that can be incorporated into the mission of software testers. This includes testing to determine whether requirements are met, testing for effectiveness and efficiency, testing user needs versus software specifications, and testing software attributes such as maintainability, ease of use, and reliability.
• Tester's Workbench - An overview of the process that testers use in performing a specific test activity, such as developing a test plan and preparing test data.
• The “V” Concept of Testing - The “V” concept relates the build components of
the development phases to the test components that occur during the test phases.
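
As a minimal illustration of the black-box (requirements-driven) approach named under Testing Techniques above, the sketch below exercises a hypothetical discount rule purely through its inputs and expected outputs, with no reference to internal logic. The function, rule, and values are invented for illustration.

import unittest

def discount(order_total):
    # Hypothetical requirement: orders of $100 or more receive a 10% discount.
    return order_total * 0.9 if order_total >= 100 else order_total

class BlackBoxDiscountTest(unittest.TestCase):
    """Requirements-driven tests: only inputs and expected outputs are used."""

    def test_below_threshold_gets_no_discount(self):
        self.assertEqual(discount(99.99), 99.99)

    def test_at_threshold_gets_discount(self):
        self.assertAlmostEqual(discount(100), 90.0)

    def test_above_threshold_gets_discount(self):
        self.assertAlmostEqual(discount(200), 180.0)

if __name__ == "__main__":
    unittest.main()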

Knowledge Category 2

Building the Test Environment

The test environment is comprised of all the conditions, circumstances, and influences
surrounding and affecting the testing of software. The environment includes the
organization’s policies, procedures, culture, attitudes, rewards, test processes, test tools,
methods for developing and improving test processes, management’s support of software
testing, as well as any test labs developed for the purpose of testing software and multiple
operating environments.
This category also includes assuring that the test environment fairly represents the production environment, to enable realistic testing to occur. Specifically, this knowledge category will address:

Knowledge of Test Process Selection and Analysis

1. Concepts of Test Processes – the concepts of policies, standards, and procedures and their integration into the test process.
2. Test Process Selection – selecting test processes that lead to efficient and effective
testing activities and products.
3. Acquisition or Development of a Test Bed/Test Lab/Test Processes – designing, developing, and acquiring a test environment that simulates “the real world,” including the capability to create and maintain test data (a brief test-bed sketch follows this list).
4. Test Quality Control – test quality control to assure that the test process has been
performed correctly.
5. Analysis of the Test Process – the test process should be analyzed to ensure:

a. The effectiveness and efficiency of test processes.
b. The test objectives are applicable, reasonable, adequate, feasible, and
affordable.
c. The test program meets the test objectives.
d. The correct test program is being applied to the project.
e. The test methodology, including the processes, infrastructure, tools,
methods, and planned work products and reviews, is adequate to ensure
that the test program is conducted correctly.
f. The test work products are adequate to meet the test objectives.
g. Test progress, performance, processes, and process adherence are assessed
to determine the adequacy of the test program.
h. Adequate, not excessive, testing is performed.

6. Continuous Improvement – identifying and making improvements to the test process using formal process-improvement methods.
7. Adapting the Test Environment to Different Software Development Methodologies – the test environment must be established to properly test software built using methodologies such as waterfall, web-based, object-oriented, agile, etc.
8. Competency of the Software Testers – management must provide the training
necessary to assure that their software testers are competent in the processes and
tools included in the test environment.
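
A test bed that simulates “the real world” (item 3 above) usually includes a repeatable way to create and reset test data. The sketch below is one possible approach using Python's standard sqlite3 module; the table and seed rows are invented for illustration.

import sqlite3

def build_test_bed():
    """Create an in-memory database seeded with known test data,
    so every test run starts from the same, repeatable state."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, balance REAL)")
    seed_rows = [(1, "Alice", 150.0), (2, "Bob", 0.0)]  # invented sample data
    conn.executemany("INSERT INTO customers VALUES (?, ?, ?)", seed_rows)
    conn.commit()
    return conn

conn = build_test_bed()
print(conn.execute("SELECT COUNT(*) FROM customers").fetchone()[0])  # -> 2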

Test Tools

1. Tool Development and/or Acquisition – understand the processes for acquiring and using test tools and methods, and the skills needed to use tools for test development, execution, tracking, and analysis (both manual and automated tools, including test management tools).
2. Tool Usage – understanding of how tools are used for:

a. automated regression testing
b. defect management
c. performance/load testing
d. manual testing, using checklists, test scripts, and decision tables (a brief decision-table sketch follows this list); traceability
e. code coverage
f. test case management
g. common aids to testing, such as an Excel spreadsheet.
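
Item (d) above mentions decision tables. One possible way to make such a manual tool double as automated test input is sketched below in Python; the login rule, conditions, and actions are invented for illustration.

# Hypothetical decision table for a login rule:
# (valid_user, valid_password) -> expected action.
decision_table = {
    (True, True): "grant_access",
    (True, False): "reject_password",
    (False, True): "reject_user",
    (False, False): "reject_user",
}

def expected_action(valid_user, valid_password):
    return decision_table[(valid_user, valid_password)]

# Each row of the table becomes one test case.
for (user_ok, pw_ok), action in decision_table.items():
    assert expected_action(user_ok, pw_ok) == action
print("all decision-table rows covered")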

Management Support for Effective Software Testing

1. Management must create a “tone” that encourages software testers to do their work in an efficient and effective manner. This is accomplished through test policies, management support of those policies, open communication between management and testers, and enforcing compliance to policies and processes.
2. Test processes must align with organizational goals, user business objectives,
release cycles and different developmental methodologies.

Knowledge Category 3

Managing the Test Project

Software testing is a project with almost all the same attributes as a software development project. Software testing involves project planning, project staffing, scheduling and budgeting, communicating, assigning and monitoring work, and ensuring that changes to the project plan are incorporated into the test plan. Specifically, this knowledge category will address:

Test Administration and Organizational Structure

1. Test planning, scheduling, and budgeting.

2. Alignment – assurance that the test processes are aligned with organizational goals, user business objectives, release cycles, and different development methodologies.
3. Test Performance – monitoring test performance for adherence to the plan,
schedule and budget, reallocating resources as required, and averting undesirable
trends.
4. Staffing – acquiring, training, and retaining a competent test staff.
5. Management of Staff – keeping staff appropriately informed, and effectively
utilizing the test staff.
6. Organizational Differences – the differences between traditional management, which uses a hierarchical structure, and quality management, which uses a flattened organizational structure.

Personal and Organizational Effectiveness

1. Communication Skills

a. Written Communication – providing written confirmation and explanation of a variance from expectations. Being able to describe on paper a sequence of events to reproduce the defect. The ability to analyze information so that all pertinent information is recorded and communicated to the proper person.
b. Oral Communication – understand how to communicate problems and/or
defects in a non-offensive manner that will not incite ill feelings or
defensiveness on the part of the developers. The ability to articulate a
sequence of events in an organized and understandable manner. Includes
effective participation in team activities.
c. Listening Skills – actively listening to what is said; asking for clarification
when needed, and providing feedback statements to acknowledge
understanding; documenting conclusions.
d. Interviewing Skills – developing and asking questions for the purpose of
collecting data for analysis or evaluation; includes documenting
conclusions.
e. Analyzing Skills – determining how to use the information received.

2. Personal Effectiveness Skills

a. Negotiation – working together with one or more parties to develop options that will satisfy all parties.
b. Conflict Resolution – bringing a situation into focus and satisfactorily
concluding a disagreement or difference of opinion between parties.
c. Influence and Motivation – using techniques and methods to invoke a desired effect on another person; influencing others to act toward a desired goal.
d. Judgment – applying beliefs, standards, guidelines, policies, procedures,
and values to a decision.
e. Facilitation – helping a group to achieve its goals by providing objective
guidance.

3. Project Relationships – software testers need to develop an effective working relationship with project management, software customers and users, as well as other stakeholders with a vested interest in the success of the software project.
4. Recognition – recognition is showing appreciation to individuals and teams for work accomplished. This also means publicly giving credit where due and promoting others' credibility.
5. Motivation – encouraging individuals to do the right thing and do it effectively
and efficiently.
6. Mentoring – working with testers to assure they master the needed skills.
7. Management and Quality Principles – understanding the principles needed to
build a world class testing organization.

Leadership

1. Meeting Chairing – organizing and conducting meetings to provide maximum productivity over the shortest time period.
2. Facilitation – helping the progress of an event or activity. Formal facilitation includes well-defined roles, an objective facilitator, a structured meeting, decision-making by consensus, and defined goals to be achieved.
3. Team Building – aiding a group in defining a common goal and working together
to improve team effectiveness.

Knowledge Category 4

Test Planning

Testers need the skills to plan tests, including the selection of techniques and methods to
be used to validate the product against its approved requirements and design. Test
planning assesses the business and technical risks of the software application, and then
develops a plan to determine if the software minimizes those risks. Test planners must
understand the development methods and environment to effectively plan for testing,
including regression testing. Specifically, this knowledge category will address:

Prerequisites to Test Planning

1. Risk Analysis and Risk Management

a. Identifying Software Risks – knowledge of the most common risks associated with
software development and the platform on which you are working.

b. Identifying Testing Risks – knowledge of the most common risks associated with software testing for the platform you are working on, the tools being used, and the test methods being applied.

c. Identifying Premature Release Risk – understand how to determine the risk associated
with releasing unsatisfactory, untested software products.

d. Risk Contributors – the ability to identify contributors to risk.


e. Identifying Business Risks – knowledge of the most common risks associated with the
business using the software.

f. Risk Methods – understanding of the strategies and approaches for identifying risks or problems associated with implementing and operating information technology, products, and processes; assessing their likelihood; and initiating strategies to test for those risks.

2. Managing Risks

a. Risk Magnitude – the ability to calculate and rank the severity of a risk quantitatively (a brief exposure-ranking sketch follows this list).

b. Risk Reduction Methods – the strategies and approaches that can be used to minimize
the magnitude of a risk.
c. Contingency Planning – plans to reduce the magnitude of a known risk should the risk
event occur.
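
One common way to quantify risk magnitude (item a above) is exposure = likelihood x impact, with risks ranked by exposure. The sketch below assumes that formula; the risks, scales, and dollar figures are invented for illustration.

# Hypothetical risks scored on a 1-5 likelihood scale and an impact in dollars.
risks = [
    ("untested payment path", 4, 50_000),
    ("browser incompatibility", 3, 5_000),
    ("data conversion error", 2, 80_000),
]

# Rank by exposure = likelihood x impact, highest first.
ranked = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)
for name, likelihood, impact in ranked:
    print(f"{name}: exposure = {likelihood * impact:,}")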

Test Planning Entrance Criteria

1. Pre-Planning Activities

a. Success Criteria/Acceptance Criteria – the criteria, established by the business at the inception of a project, that must be validated through testing to provide user management with the information needed to make an acceptance decision.

b. Test Objectives – understanding of the objectives to be accomplished through testing.

c. Assumptions – establishing those conditions that must exist for testing to be comprehensive and on schedule; for example, software must be available for testing on a given date, hardware configurations available for testing must include XYZ, etc.

d. Issues – identifying specific situations/products/processes which, unless mitigated, will impact forward progress.

e. Constraints – limiting factors to success.

f. Entrance Criteria/Exit Criteria – the criteria that must be met prior to moving to the next level of testing, or into production; how to enforce these criteria realistically; and, at minimum, how to reduce risk to the testing organization when external pressure from other organizations forces movement to the next level without the entrance/exit criteria being met.

2. Test Planning

a. Test Scope – what is to be tested.

b. Test Plan – the deliverables to meet the test’s objectives; the activities to produce the
test deliverables; and the schedule and resources to complete the activities.
c. Requirements/Traceability – defines the tests needed and relates those tests to the requirements to be validated (a brief traceability sketch follows this list).

d. Estimating – determines the amount of resources and timeframes required to accomplish the planned activities.

e. Scheduling – establishes milestones for completing the testing effort and their
dependencies on meeting the rest of the schedule.

f. Staffing – selecting the size and competency of staff needed to achieve the test plan
objectives.

g. Approach – methods, tools, coverage, and techniques used to accomplish test objectives.

h. Test Check Procedures (i.e., test quality control) – set of procedures based on the test
plan and test design, incorporating test cases that ensure that tests are performed correctly
and completely.

i. Maximizing Test Effectiveness – methods to assure test resources will be used most
effectively.
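
As referenced in item (c) above, a requirements-traceability matrix relates each requirement to the tests that validate it. A minimal sketch, with invented requirement and test case identifiers:

# Hypothetical traceability matrix: requirement id -> test case ids.
traceability = {
    "REQ-001": ["TC-01", "TC-02"],
    "REQ-002": ["TC-03"],
    "REQ-003": [],  # no test yet: a coverage gap
}

untested = [req for req, tests in traceability.items() if not tests]
print("requirements without tests:", untested)  # -> ['REQ-003']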

3. Maintaining the Most Current Test Plan

a. Software Configuration Management (SCM) – SCM is the organization of the components of a software system, including documentation, so that they fit together in working order. It includes change management and version control.

b. Change Management – modifies and controls the test plan in relation to actual progress and the scope of the system development.

c. Version Control – the methods to control, monitor, and track change.

Knowledge Category 5

Executing the Test Plan

The skills needed to execute tests, design test cases, use test tools, and monitor testing to ensure correctness and completeness. Specifically, this knowledge category will address:

Test Design and Test Data/Scripts Preparation

1. Specifications – assure that test data and scripts meet the objectives included in the test plan.
2. Cases – development of test cases, including techniques and approaches for validation
of the product. Determination of the expected result for each test case.

3. Test Design – considerations include the types of tests (functional, negative, performance, load/stress); test design strategies (e.g., small modular tests, scenario-based tests); and test design attributes (repeatable, reusable, level of detail: the trade-offs between specificity and test case maintenance, and how to organize tests, e.g., by feature, by test type, or by application architectural area such as client/server).

4. Scripts – development of the on-line steps to be performed in testing, focusing on the purpose and preparation of procedures; emphasizing entrance and exit criteria.

5. Data – development of test inputs and use of data generation tools; determination of the data set or sub-sets needed to ensure a comprehensive test of the system; the ability to determine data that suits boundary value analysis and stress testing requirements (a brief boundary-value sketch follows this list).

6. Test Coverage – achieving the coverage objectives in the test plan for specific system components.

7. Platforms – identify the minimum configuration and platforms on which the test must
function.

8. Test Cycle Strategy

a. Determination of the number of test cycles to be conducted during the test execution
phase of testing.

b. Determination of what type of testing will occur during each test cycle.
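
For item 5 above, boundary value analysis selects test data at and just around the edges of each valid input range. A minimal sketch; the quantity range of 1 to 100 is an invented requirement.

def boundary_values(low, high):
    """Return boundary-value test inputs for a closed range [low, high]:
    just outside, on, and just inside each boundary."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

# Hypothetical requirement: order quantity must be between 1 and 100.
print(boundary_values(1, 100))  # -> [0, 1, 2, 99, 100, 101]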

Performing Tests

1. Execute Tests – perform the activities necessary to execute tests in accordance with the test plan and test design (including setting up tests, preparing databases, obtaining technical support, and scheduling resources).

2. Compare Actual versus Expected Results – determine if the actual results met expectations (note: comparisons may be automated; a brief sketch follows this list).

3. Documenting Test Results – recording test results in a desired form. Information to be recorded must be defined. Results can include incidents not related to testing that can impact software quality, such as time required to process a business transaction or ease of use.

4. Use of Test Results – how the results of testing are to be used, and who has access to
the test results.
5. Record Discrepancies – documenting defects as they happen including supporting
evidence.
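
A minimal sketch of the automated comparison noted in item 2 above, assuming each executed test carries an expected and an actual result; the identifiers and values are invented.

# (test id, expected result, actual result) triples captured during execution.
results = [
    ("TC-01", 90.0, 90.0),
    ("TC-02", 180.0, 181.5),
]

for test_id, expected, actual in results:
    status = "PASS" if expected == actual else "FAIL"
    print(f"{test_id}: expected={expected} actual={actual} -> {status}")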

Defect Tracking

(Note: defect tracking begins by recording a variance from expectations; the variance is not considered a true defect until the originator acknowledges it as an incorrect condition.)

1. Defect Recording – defect recording is used to describe and quantify deviations from
requirements/expectations.

2. Defect Reporting – reports the status of defects; including severity and location.

3. Defect Tracking – monitoring defects from the time of recording until satisfactory resolution has been determined and implemented (a minimal defect-record sketch follows).
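
A minimal sketch of a defect record and the tracking lifecycle described above, assuming a simple recorded-to-resolved flow; all field names and values are invented.

from dataclasses import dataclass, field

@dataclass
class Defect:
    """A variance from expectations; it becomes a true defect only once
    the originator acknowledges it as an incorrect condition."""
    defect_id: str
    description: str
    severity: str             # e.g., "critical", "major", "minor"
    location: str             # module or screen where observed
    status: str = "recorded"  # recorded -> acknowledged -> resolved
    history: list = field(default_factory=list)

    def advance(self, new_status):
        self.history.append(self.status)
        self.status = new_status

d = Defect("DEF-042", "total off by 1.5 on TC-02", "major", "billing module")
d.advance("acknowledged")
d.advance("resolved")
print(d.status, d.history)  # -> resolved ['recorded', 'acknowledged']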

Testing Software Changes

1. Static Testing – evaluating changed code and associated documentation at the end of the change process to ensure correct implementation of the change.

2. Regression Testing – testing the whole product to ensure that unchanged functionality
performs as it did prior to implementing a change.

3. Verification – reviewing requirements, design, and associated documentation to ensure they are updated correctly as a result of a change.

Knowledge Category 6

Test Status, Analysis and Reporting

The testers need to demonstrate the ability to develop testing status reports. These reports
should show the status of the testing based on the test plan. Reporting should document
what tests have been performed and the status of those tests. To properly report status, the
testers should review and conduct statistical analysis on the test results and discovered
defects. The lessons learned from the test effort should be used to improve the next
iteration of the test process.

Metrics of Testing

Metrics specific to testing include data collected on testing, defect tracking, and software performance. Quantitative measures and metrics are used to manage the planning, execution, and reporting of software testing, and should focus on whether test objectives and goals are being reached.

Test Status Reports

These reports present the status of testing as specified in the test plan, and would include information on:

1. Test Plan Coverage – percent of test plan completed.

2. Code Coverage – monitoring the execution of software and reporting on the degree of
coverage at the statement, branch, or path level.

3. Requirement Coverage – monitoring and reporting on the number of requirements tested, and whether or not they are correctly implemented.

4. Test Status Metrics:

a. Metrics Unique to Test – includes metrics such as Defect Removal Efficiency, Defect Density, and Mean Time to Last Failure (a brief metrics sketch follows this list).

b. Complexity Measurements – quantitative values accumulated by a predetermined method, which measure the complexity of a software product.

c. Project Metrics – status of project including milestones, budget and schedule variance
and project scope changes.

d. Size Measurements – methods primarily developed for measuring the software size of
information systems, such as lines of code, and function points. These can also be used to
measure software testing productivity. Sizing is important in normalizing data for
comparison to other projects.

e. Defect Metrics – values associated with numbers or types of defects, usually related to system size, such as “defects/1000 lines of code” or “defects/100 function points”; severity of defects, uncorrected defects, etc.

f. Product Measures – measures of a product’s attributes such as performance, reliability, failure rate, and usability.
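
The metrics in item (a) above reduce to simple formulas: defect removal efficiency (DRE) is the share of total defects found before release, and defect density normalizes defect counts by size. A minimal sketch with invented project figures:

# Hypothetical project figures.
defects_found_in_test = 45
defects_found_after_release = 5
kloc = 12.5  # size in thousands of lines of code

total_defects = defects_found_in_test + defects_found_after_release
dre = defects_found_in_test / total_defects * 100  # percent
defect_density = total_defects / kloc              # defects per KLOC

print(f"DRE = {dre:.1f}%")                              # -> 90.0%
print(f"Density = {defect_density:.1f} defects/KLOC")   # -> 4.0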

Final Test Reports

1. Reporting Tools – use of word processing, database, defect tracking, and graphic tools
to prepare test reports.

2. Test Report Standards – defining the components that should be included in a test
report.

3. Statistical Analysis – the ability to draw statistically valid conclusions from quantitative test results.

Knowledge Category 7
User Acceptance Testing

The objective of software development is to develop software that meets the true needs of the user, not just the system specifications. To accomplish this, testers should work with the users early in a project to clearly define the criteria that would make the software acceptable in meeting user needs. As much as possible, once the acceptance criteria have been established, testers should integrate those criteria into all aspects of development. This same process can be used by software testers when users are unavailable for testing, when diverse users use the same software, and for beta testing software.

Concepts of Acceptance Testing

1. Acceptance testing is a formal testing process conducted under the direction of the
software users to determine if the operational software system meets their needs and is
usable by their staff.

2. Understand the difference between system test and acceptance test.

Roles and Responsibilities

The software testers need to work with users in developing an effective acceptance plan, and to ensure the plan is properly integrated into the overall test plan. If users are not available, the software testers may become responsible for acceptance testing.

Acceptance Test Planning Process

The acceptance test plan should include the same type of analysis used to develop the
system test plan with emphasis on:

1. Defining the acceptance criteria (a brief illustration of an executable acceptance criterion follows this list).

2. Developing an acceptance test plan for execution by user personnel.

3. Ensuring that test data is use-case oriented.
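
As referenced in item 1 above, acceptance criteria are most useful when stated so they can be checked objectively. A minimal sketch that encodes one invented criterion as an executable check:

# Hypothetical acceptance criterion: 95% of invoices in the users' sample
# workload must process in under 2 seconds.
processing_times = [0.8, 1.1, 0.9, 1.7, 2.4, 1.0, 0.6, 1.2, 0.7, 0.9]

within_limit = sum(1 for t in processing_times if t < 2.0)
pass_rate = within_limit / len(processing_times)
print("acceptance criterion met:", pass_rate >= 0.95)  # -> False (9/10 = 90%)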

Acceptance Test Execution

1. Execute the acceptance test plan.

2. Develop an acceptance decision based on the results of acceptance testing.

3. Obtain sign-off by users upon successful completion of the acceptance test plan.

Knowledge Category 8
Testing Software Developed by Outside Organizations

Many organizations do not have the resources to develop the type and/or volume of software needed to effectively manage their business. The solution is to obtain or contract for software developed by another organization. Software can be acquired by purchasing commercial off-the-shelf (COTS) software, or by contracting for all or parts of the software development to be done by outside organizations, often referred to as outsourcing. Software testers need to be involved in the process of testing software acquired from outside organizations. Specifically, this category addresses:

The difference in testing software developed in-house versus software developed by outside organizations.

Differences between testing software developed in-house and software developed by outside organizations:

1. COTS Software – testers normally do not have access to the methods by which the software was developed or to the people who developed it.

2. Contractors/Outsourced – the contractual provisions will determine whether testers can perform verification activities during development, and the ability of testers to access the developers.

Selection Process for Acquired Software:

1. Selecting COTS Software. This involves first determining the needed requirements; second, identifying the available software packages that might meet the requirements; and third, evaluating those packages against the selection criteria (a brief scoring sketch follows). Testers can perform, or should participate in, this process. Note that the acquisition of test tools follows this same process.
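
One possible way to carry out the third step, evaluating packages against selection criteria, is a weighted scoring matrix; the criteria, weights, and scores below are invented for illustration.

# Criterion weights (sum to 1.0) and 1-5 scores for each candidate package.
weights = {"fit to requirements": 0.5, "vendor support": 0.3, "cost": 0.2}
scores = {
    "Package A": {"fit to requirements": 4, "vendor support": 3, "cost": 5},
    "Package B": {"fit to requirements": 5, "vendor support": 4, "cost": 2},
}

for name, s in scores.items():
    total = sum(weights[c] * s[c] for c in weights)
    print(f"{name}: weighted score = {total:.1f}")
# -> Package A: 3.9, Package B: 4.1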

2. Selecting organizations to build all or part of the needed software. Testers should be
involved in these activities, specifically to:

a. Assure that requirements are testable.

b. Review the adequacy of the test plan to be performed by the outsourcing organization.

c. Oversee acceptance testing.

d. Issue a report on the adequacy of the software to meet the contractual specifications.

e. Assure compatibility of software standards, communications, change control, etc. between the two organizations.

Testing Acquired Software

Testing acquired software uses the same approach as for in-house software, but the approach may need to be modified based on the documentation available from the developer.

Testers' Involvement in Testing Changes for Purchased/Contracted Software

The objectives of involving testers in testing changes include:

1. Testing the changed portion of the software.

2. Performing regression testing.

3. Comparing the documentation to the actual execution of the software.

4. Issuing a report regarding the status of the new version of the software.

Knowledge Category 9

Testing Software Controls and the Adequacy of Security Procedures

The software system of internal control includes the totality of the means developed to ensure the integrity of the software system and the products created by the software. Controls are employed to control the processing components of software and to assure that software processing is in accordance with the organization's policies and procedures and with applicable laws and regulations. Software systems are divided into two parts: the part that performs the processing and the part that controls processing. The control part includes a system of controls as well as the means employed to assure processing cannot be penetrated by outside sources. This category addresses all the components of the software system of internal control and security procedures.

Principles and Concepts of a Software System of Internal Control and Security

1. Vocabulary of Internal Control and Security – the vocabulary of internal control and
security which includes terms such as risk, threat, control, exposure, vulnerability and
penetration.

2. Internal Control and Security Models – includes internal control and security models. The most widely accepted current model is the COSO model. (The Committee of Sponsoring Organizations, COSO, is composed of five major U.S. accounting associations.)

Testing the System of Internal Controls

The test process for testing the system of internal controls in software is:

1. Perform risk analysis – determine the risks faced by the transactions/events processed
by the software.
2. Determine the controls for each of the processing segments for transactions processing
including:

a. transaction origination

b. transaction entry

c. transaction processing

d. database control

e. transaction results

3. Determine whether the identified controls are adequate to reduce the risks to an
acceptable level.

4. When all components of the control system are present and functioning effectively, the
internal control process can be deemed “effective.”

Testing the Adequacy of Security for a Software System

Testers need to evaluate the security for an individual software system. The tests should
include:

1. Evaluate the adequacy of management’s security environment.

2. Security Risk Assessment – determining the types of risk requiring security controls.

3. Identify the most probable points where the software would be penetrated.

4. Determine the controls at those points of penetration.

5. Test/assess whether those controls are adequate to reduce the security risks to an
acceptable level. These tests should include:

a. Security awareness of the software stakeholders

b. Adequacy of management’s security environment.

Knowledge Category 10

Testing New Technologies


Testers require skills in their organization’s current technology, as well as a general
understanding of the new information technology that might be acquired by their
organization. This knowledge category addresses:

An Understanding of the New Testing Challenges with These Technologies:

1. New application architectures including:

a. web-based applications

b. PDAs

2. New application business models including:

a. e-commerce

b. e-business

3. New communication methods including:

a. wireless

4. New testing tools including:

a. test automation software

Evaluating New Technologies to Fit into the Organization’s Policies and Procedures

Assessing the adequacy of the controls within the technology and the changes to existing
policies and procedures that will be needed before the new technology can be
implemented effectively. This would include:

1. Testing new technology to evaluate actual performance versus the supplier's stated performance (a brief timing sketch follows this list).

2. Determine whether current policies and procedures are adequate to control the operation of the new technology, and modify them as needed to bring them up to date.

3. Assess the need to acquire new staff skills to effectively implement the new
technology.
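
For item 1 above, a simple timing harness can compare observed performance against the supplier's stated figure. The operation and the 200 ms claim below are invented for illustration.

import time

def operation_under_test():
    # Stand-in for a call into the new technology being evaluated.
    time.sleep(0.05)

STATED_LIMIT_SECONDS = 0.2  # hypothetical supplier claim: under 200 ms

start = time.perf_counter()
operation_under_test()
elapsed = time.perf_counter() - start
print(f"elapsed = {elapsed * 1000:.0f} ms; within stated limit: {elapsed <= STATED_LIMIT_SECONDS}")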

Bibliographic References
IMPORTANT: It is each candidate's responsibility to stay current in the field and to be
aware of published works and materials available for professional study and
development. Software Certifications recommends that candidates for certification
continually research and stay aware of current literature and trends in the field. There are
many valuable references that have not been listed here. These references are offered for
informational purposes only.
