CSTE Body of Knowledge Course 2006
Knowledge Category 1
The “basics” of software testing are represented by the vocabulary of testing; testing approaches, methods, and techniques; and the materials used by testers in performing their test activities. Specifically, this knowledge category will address:
The test environment comprises all the conditions, circumstances, and influences surrounding and affecting the testing of software. The environment includes the organization’s policies, procedures, culture, attitudes, rewards, test processes, test tools, methods for developing and improving test processes, and management’s support of software testing, as well as any test labs developed for the purpose of testing software and multiple operating environments.
This category also includes ensuring that the test environment fairly represents the production environment, so that realistic testing can occur. Specifically, this knowledge category will address:
Test Tools
Software testing is a project with almost all the same attributes as a software development project. It involves project planning, project staffing, scheduling and budgeting, communicating, assigning and monitoring work, and ensuring that changes to the project plan are incorporated into the test plan. Specifically, this knowledge category will address:
1. Communication Skills
Leadership
Test Planning
Testers need the skills to plan tests, including the selection of techniques and methods to
be used to validate the product against its approved requirements and design. Test
planning assesses the business and technical risks of the software application, and then
develops a plan to determine if the software minimizes those risks. Test planners must
understand the development methods and environment to effectively plan for testing,
including regression testing. Specifically this knowledge category will address:
a. Identifying Software Risks – knowledge of the most common risks associated with
software development and the platform on which you are working.
b. Identifying Testing Risks – knowledge of the most common risks associated with software testing on the platform you are working on, the tools being used, and the test methods being applied.
c. Identifying Premature Release Risk – understanding how to determine the risk associated with releasing unsatisfactory, untested software products.
f. Risk Methods – understanding of the strategies and approaches for identifying risks or problems associated with implementing and operating information technology, products, and processes; assessing their likelihood; and initiating strategies to test for those risks.
2. Managing Risks
a. Risk Magnitude – ability to calculate and rank the severity of a risk quantitatively.
b. Risk Reduction Methods – the strategies and approaches that can be used to minimize
the magnitude of a risk.
c. Contingency Planning – plans to reduce the magnitude of a known risk should the risk
event occur.
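The quantitative ranking of risk magnitude described above is commonly computed as exposure = probability of occurrence × impact if the risk occurs. The following sketch illustrates the idea; the risk names, probabilities, and impact figures are hypothetical examples, not values from this course.

```python
# Hypothetical risks: each has an estimated probability and impact (cost).
risks = [
    {"name": "requirements churn", "probability": 0.6, "impact": 50_000},
    {"name": "untested third-party component", "probability": 0.3, "impact": 120_000},
    {"name": "test environment outage", "probability": 0.1, "impact": 20_000},
]

# Risk exposure = probability x impact.
for risk in risks:
    risk["exposure"] = risk["probability"] * risk["impact"]

# Rank by exposure, highest first, to prioritize reduction and contingency effort.
for risk in sorted(risks, key=lambda r: r["exposure"], reverse=True):
    print(f'{risk["name"]}: exposure = {risk["exposure"]:,.0f}')
```

Ranking by exposure rather than probability alone keeps attention on low-likelihood, high-impact risks, which is the point of the risk-magnitude calculation.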
1. Pre-Planning Activities
f. Entrance Criteria/Exit Criteria – the criteria that must be met before moving to the next level of testing, or into production; how to realistically enforce those criteria; and, at a minimum, how to reduce the risk to the testing organization when external pressure (from other organizations) forces a move to the next level without the entrance/exit criteria being met.
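An exit-criteria gate can be made checkable so that any unmet criteria are reported explicitly as residual risk rather than silently waived. This is a minimal sketch; the criteria names, thresholds, and metric fields are hypothetical, not part of any standard.

```python
# Hypothetical exit criteria for a test level, each expressed as a check
# against the measured test metrics.
EXIT_CRITERIA = {
    "all planned test cases executed": lambda m: m["executed"] == m["planned"],
    "no open severity-1 defects": lambda m: m["open_sev1"] == 0,
    "requirements coverage >= 95%": lambda m: m["req_coverage"] >= 0.95,
}

def evaluate_exit(metrics):
    """Return the list of unmet criteria; an empty list means the gate passes."""
    return [name for name, check in EXIT_CRITERIA.items() if not check(metrics)]

unmet = evaluate_exit(
    {"executed": 180, "planned": 200, "open_sev1": 0, "req_coverage": 0.97}
)
print(unmet)  # any unmet criteria become documented residual risk
```

If management decides to proceed anyway, the list of unmet criteria gives the testing organization a concrete record of the risk accepted.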
2. Test Planning
b. Test Plan – the deliverables to meet the test’s objectives; the activities to produce the
test deliverables; and the schedule and resources to complete the activities.
c. Requirements/Traceability – defines the tests needed and relates those tests to the
requirements to be validated.
e. Scheduling – establishes milestones for completing the testing effort and their
dependencies on meeting the rest of the schedule.
f. Staffing – selecting the size and competency of staff needed to achieve the test plan
objectives.
h. Test Check Procedures (i.e., test quality control) – set of procedures based on the test
plan and test design, incorporating test cases that ensure that tests are performed correctly
and completely.
i. Maximizing Test Effectiveness – methods to assure test resources will be used most
effectively.
b. Change Management – modifies and controls the test plan in relationship to actual progress and the scope of the system development.
c. Version Control – the methods used to control, monitor, and manage change.
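The requirements/traceability item above can be represented as a simple traceability matrix mapping each requirement to the tests that validate it. The sketch below uses hypothetical requirement and test-case identifiers; its value is that it answers two questions directly: which tests cover each requirement, and which requirements have no test at all.

```python
# Hypothetical requirements-to-test traceability matrix.
trace = {
    "REQ-001": ["TC-01", "TC-02"],
    "REQ-002": ["TC-03"],
    "REQ-003": [],            # no test defined yet -- a coverage gap
}

# Requirements with no associated test are untested scope.
uncovered = [req for req, tests in trace.items() if not tests]
print("Requirements without tests:", uncovered)
```

Keeping the matrix current during change management also shows which tests must be rerun when a requirement changes.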
The skills needed to design test cases, execute tests, use test tools, and monitor testing to ensure correctness and completeness. Specifically, this knowledge category will address:
1. Specifications – ensure that test data and scripts meet the objectives included in the test plan.
2. Cases – development of test cases, including techniques and approaches for validation
of the product. Determination of the expected result for each test case.
5. Data – development of test inputs, use of data generation tools. Determination of the
data set or sub-sets needed to ensure a comprehensive test of the system. The ability to
determine data that suits boundary value analysis and stress testing requirements.
6. Test Coverage – achieving the coverage objectives in the test plan for specific system components.
7. Platforms – identify the minimum configuration and platforms on which the test must
function.
a. Determination of the number of test cycles to be conducted during the test execution
phase of testing.
b. Determination of what type of testing will occur during each test cycle.
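The boundary value analysis mentioned under test data development selects inputs just below, at, and just above each boundary of a valid range. A minimal sketch, assuming a hypothetical numeric field with an inclusive valid range of 1 to 100:

```python
def boundary_values(lo, hi):
    """Return the classic boundary value analysis inputs for an
    inclusive [lo, hi] range: just below, at, and just above each bound."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

print(boundary_values(1, 100))  # [0, 1, 2, 99, 100, 101]
```

The values outside the range (0 and 101) test error handling; the values at and adjacent to the bounds catch the off-by-one defects that cluster at boundaries.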
Performing Tests
1. Execute Tests – perform the activities necessary to execute tests in accordance with the test plan and test design (including setting up tests, preparing databases, obtaining technical support, and scheduling resources).
2. Compare Actual versus Expected Results – determine if the actual results met
expectations (note: comparisons may be automated).
4. Use of Test Results – how the results of testing are to be used, and who has access to
the test results.
5. Record Discrepancies – documenting defects as they happen including supporting
evidence.
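Steps 2 and 5 above (automated comparison of actual versus expected results, and recording discrepancies with supporting evidence) can be sketched together. The test cases and the stand-in system under test below are hypothetical.

```python
def system_under_test(x):
    return x * 2          # hypothetical stand-in for the real system

# Hypothetical test cases; TC-02 carries a deliberately wrong expectation
# so a discrepancy is recorded.
cases = [
    {"id": "TC-01", "input": 2, "expected": 4},
    {"id": "TC-02", "input": 5, "expected": 11},
]

discrepancies = []
for case in cases:
    actual = system_under_test(case["input"])
    if actual != case["expected"]:
        # Record the variance with its supporting evidence:
        # input, expected result, and actual result.
        discrepancies.append({**case, "actual": actual})

print(discrepancies)
```

Recording the input alongside both results gives the evidence needed when the variance is later reviewed to decide whether it is a true defect.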
Defect Tracking
(Note: defect tracking begins by recording a variance from expectations; the variance is not considered a true defect until the originator acknowledges it as an incorrect condition.)
1. Defect Recording – defect recording is used to describe and quantify deviations from
requirements/expectations.
2. Defect Reporting – reports the status of defects; including severity and location.
3. Defect Tracking – monitoring defects from the time of recording until satisfactory
resolution has been determined and implemented.
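The recording-to-resolution lifecycle above can be modeled as a small state machine. The states and transitions below are illustrative assumptions; real lifecycles are defined by the defect tracking tool in use. Note the first transition mirrors the note above: a recorded variance only becomes a true defect once acknowledged.

```python
# Hypothetical defect lifecycle: allowed transitions per current status.
ALLOWED = {
    "recorded": {"acknowledged", "rejected"},
    "acknowledged": {"fixed"},
    "fixed": {"verified", "reopened"},
    "reopened": {"fixed"},
}

class Defect:
    def __init__(self, defect_id, severity, location):
        self.defect_id, self.severity, self.location = defect_id, severity, location
        self.status = "recorded"

    def move_to(self, new_status):
        # Reject transitions the lifecycle does not allow.
        if new_status not in ALLOWED.get(self.status, set()):
            raise ValueError(f"illegal transition {self.status} -> {new_status}")
        self.status = new_status

d = Defect("DEF-42", severity="high", location="billing module")
d.move_to("acknowledged")
d.move_to("fixed")
d.move_to("verified")
print(d.status)  # verified
```

Enforcing transitions in code is what lets the tracking system guarantee that every defect is monitored from recording until a satisfactory resolution is verified.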
1. Static Testing – Evaluating changed code and associated documentation at the end of
the change process to ensure correct implementation of the change.
2. Regression Testing – testing the whole product to ensure that unchanged functionality
performs as it did prior to implementing a change.
The testers need to demonstrate the ability to develop testing status reports. These reports
should show the status of the testing based on the test plan. Reporting should document
what tests have been performed and the status of those tests. To properly report status, the
testers should review and conduct statistical analysis on the test results and discovered
defects. The lessons learned from the test effort should be used to improve the next
iteration of the test process.
Metrics of Testing
Metrics specific to testing include data collected on testing, defect tracking, and software performance. Quantitative measures and metrics used to manage the planning, execution, and reporting of software testing should focus on whether test objectives and goals are being reached.
2. Code Coverage – monitoring the execution of software and reporting on the degree of
coverage at the statement, branch, or path level.
a. Metrics Unique to Test – includes metrics such as Defect Removal Efficiency, Defect
Density, and Mean Time to Last Failure.
c. Project Metrics – status of project including milestones, budget and schedule variance
and project scope changes.
d. Size Measurements – methods primarily developed for measuring the software size of
information systems, such as lines of code, and function points. These can also be used to
measure software testing productivity. Sizing is important in normalizing data for
comparison to other projects.
e. Defect Metrics – values associated with numbers or types of defects, usually related to system size, such as “defects/1000 lines of code” or “defects/100 function points”; severity of defects, uncorrected defects, etc.
f. Product Measures – measures of a product’s attributes such as performance, reliability, failure rate, and usability.
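Two of the metrics named above, defect density (a defect metric normalized by size) and Defect Removal Efficiency, are simple ratios. A minimal sketch with hypothetical figures:

```python
def defect_density(defects, lines_of_code):
    """Defects per 1000 lines of code (KLOC); size normalization lets
    defect counts be compared across projects of different sizes."""
    return defects / (lines_of_code / 1000)

def defect_removal_efficiency(found_before_release, found_after_release):
    """Fraction of all defects that were removed before release."""
    total = found_before_release + found_after_release
    return found_before_release / total

print(defect_density(45, 30_000))          # 1.5 defects/KLOC
print(defect_removal_efficiency(90, 10))   # 0.9
```

Defect Removal Efficiency can only be computed after field defects start arriving, which is why it is usually reported for a prior release when planning the next test effort.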
1. Reporting Tools – use of word processing, database, defect tracking, and graphic tools
to prepare test reports.
2. Test Report Standards – defining the components that should be included in a test
report.
The objective of software development is to develop software that meets the true needs of the user, not just the system specifications. To accomplish this, testers should work with the users early in a project to clearly define the criteria that would make the software acceptable in meeting the user needs. As much as possible, once the acceptance criteria have been established, they should integrate those criteria into all aspects of development. This same process can be used by software testers when users are unavailable for testing, when diverse users use the same software, and for beta testing software.
1. Acceptance testing is a formal testing process conducted under the direction of the
software users to determine if the operational software system meets their needs and is
usable by their staff.
The software testers need to work with users in developing an effective acceptance plan and to ensure the plan is properly integrated into the overall test plan. If users are not available, the software testers may become responsible for acceptance testing.
The acceptance test plan should include the same type of analysis used to develop the
system test plan with emphasis on:
3. Sign off by users upon successful completion of the acceptance test plan.
Many organizations do not have the resources to develop the type and/or volume of software needed to effectively manage their business. The solution is to obtain or contract for software developed by another organization. Software can be acquired by purchasing commercial off-the-shelf (COTS) software or by contracting for all or part of the software development to be done by outside organizations, often referred to as outsourcing. Software testers need to be involved in the process of testing software acquired from outsourcers. Specifically, this category addresses:
1. COTS Software – testers normally do not have access to the methods by which the software was developed or to the people who developed it.
2. Contractors/Outsourced – the contractual provisions will determine whether testers can perform verification activities during development, and the extent of the testers’ access to the developers.
1. Selecting COTS Software. This involves, first, determining the needed requirements; second, identifying the available software packages that might meet those requirements; and third, evaluating those packages against the selection criteria. Testers should perform, or at least participate in, this process. Note that the acquisition of test tools follows this same process.
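The third step, evaluating candidate packages against selection criteria, is often done with a weighted scoring matrix. The criteria, weights, and scores below are hypothetical; as noted above, the same approach applies when selecting test tools.

```python
# Hypothetical selection criteria with relative weights (summing to 1.0).
weights = {"functional fit": 0.5, "vendor support": 0.3, "cost": 0.2}

# Hypothetical candidate packages scored 0-10 on each criterion.
candidates = {
    "Package A": {"functional fit": 8, "vendor support": 6, "cost": 9},
    "Package B": {"functional fit": 9, "vendor support": 7, "cost": 6},
}

def weighted_score(scores):
    """Weighted sum of a candidate's criterion scores."""
    return sum(weights[c] * scores[c] for c in weights)

best = max(candidates, key=lambda name: weighted_score(candidates[name]))
print(best, round(weighted_score(candidates[best]), 2))
```

Making the weights explicit forces the selection team to agree on priorities before any package is scored, which keeps the evaluation from being fitted to a preferred answer.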
2. Selecting organizations to build all or part of the needed software. Testers should be
involved in these activities, specifically to:
b. Review the adequacy of the test plan to be performed by the outsourcing organization.
d. Issue a report on the adequacy of the software to meet the contractual specifications.
4. Issue a report regarding the status of the new version of the software.
The software system of internal control includes the totality of the means developed to ensure the integrity of the software system and the products created by the software. Controls are employed to govern the processing components of software and to assure that software processing is in accordance with the organization's policies and procedures and with applicable laws and regulations. Software systems are divided into two parts: the part that performs the processing and the part that controls processing. The control part includes a system of controls as well as the means employed to assure that processing cannot be penetrated by outside sources. This category addresses all the components of the software system of internal control and security procedures.
1. Vocabulary of Internal Control and Security – the vocabulary of internal control and
security which includes terms such as risk, threat, control, exposure, vulnerability and
penetration.
2. Internal Control and Security Models – the models of internal control and security. The most widely accepted current model is the COSO model. (The Committee of Sponsoring Organizations, COSO, comprises five major U.S. accounting associations.)
The test process for testing the system of internal controls in software is:
1. Perform risk analysis – determine the risks faced by the transactions/events processed
by the software.
2. Determine the controls for each of the processing segments of transaction processing, including:
a. transaction origination
b. transaction entry
c. transaction processing
e. transaction results
3. Determine whether the identified controls are adequate to reduce the risks to an
acceptable level.
4. When all components of the control system are present and functioning effectively, the
internal control process can be deemed “effective.”
Testers need to evaluate the security for an individual software system. The tests should
include:
2. Security Risk Assessment – determining the types of risk requiring security controls.
3. Identify the most probable points where the software would be penetrated.
5. Test/assess whether those controls are adequate to reduce the security risks to an
acceptable level. These tests should include:
a. e-commerce
b. e-business
c. wireless
Evaluating New Technologies to Fit into the Organization’s Policies and Procedures
Assessing the adequacy of the controls within the technology and the changes to existing
policies and procedures that will be needed before the new technology can be
implemented effectively. This would include:
2. Determine whether current policies and procedures are adequate to control the operation of the new technology, and modify them as needed to bring them up to date.
3. Assess the need to acquire new staff skills to effectively implement the new
technology.
Bibliographic References
IMPORTANT: It is each candidate's responsibility to stay current in the field and to be
aware of published works and materials available for professional study and
development. Software Certifications recommends that candidates for certification
continually research and stay aware of current literature and trends in the field. There are
many valuable references that have not been listed here. These references are offered for
informational purposes only.