CNIWP - 074 - DO-178C Handbook
2. Planning 6
2.1 Thinking about verification early 7
3. Configuration management 8
3.1 Control categories 8
3.2 Baselines 9
4. Quality assurance 11
4.1 SQA independence 12
5. Development 14
5.1 High-level requirements 14
5.2 Design 17
5.3 Implementation 19
5.3.3 Using model-based design (DO-331) 23
6. Verification 25
6.1 Preparing for SOI#3 26
8. The authors 65
1. Introduction to DO-178C
DO-178 was originally developed in the late 1970s and released in 1982 to define a
prescriptive set of design assurance processes for airborne software that focused
on documentation and testing. In the 1980s, DO-178 was updated to DO-178A, which
suggested different levels of activities dependent on the criticality of the software, but
the process remained prescriptive. Released in 1992, DO-178B was a total re-write
of DO-178 to move away from the prescriptive process approach and define a set of
activities and associated objectives that a design assurance process must meet. This
update allowed flexibility in the development approaches that could be followed, but
also specified fundamental attributes that a design assurance process must have, which
were derived from airworthiness regulations. These included, for example, demonstrating
implementation of intended function, identifying potential unintended function, and
verification of an integrated build running on the target hardware.
1.2 Objectives and activities

The recommendations given in DO-178 fall into two types:
• Objectives, which are process requirements that should be met in order to demonstrate compliance to regulations
• Activities, which are tasks that provide the means of meeting objectives

In total, DO-178C includes 71 objectives, 43 of which are related to verification. The number of objectives that must be met reduces as the Design Assurance Level of the system reduces (Figure 2).

1.3 Supplementary objectives and guidance

DO-178C introduced three technology supplements to provide an interpretation of the DO-178C activities and objectives in the context of using specific technologies. The three technologies are Model Based Development and Verification (DO-331), Object Oriented Technology and related technologies (DO-332), and Formal Methods (DO-333). Each supplement describes the technology, defines the scope of its use within airborne software, lists additional or alternative activities and objectives that must be met when the technology is used, and includes specific FAQs (Frequently Asked Questions) that clarify objectives and activities relating to the technology.

DO-330
DO-330 provides tool-specific guidance for developing airborne and ground-based software. As DO-330 can be applied independently, it is not considered as a supplement to DO-178C and is therefore titled differently from DO-178C’s specialized technology supplements, e.g. DO-331, DO-332 and DO-333.
Figure 2 – Number of objectives and verification objectives in DO-178C Design Assurance Levels
Figure 3 – Supplementary documents such as CAST-32A have provided additional clarification or explanations for DO-178C

Many other documents support DO-178C by providing additional clarification or explanations that can help developers to correctly interpret the guidance and implement appropriate design assurance processes. The Supporting Information (DO-248C) supplementary document includes FAQs relating to DO-178C, and the document is commonly referred to by the title Frequently Asked Questions. In addition to the FAQs in DO-248C, the document provides the rationale for the activities and objectives listed in DO-178C and includes discussion papers that provide clarification on specific topics related to software development and verification. A series of documents produced by the Certification Authorities Software Team (CAST) since the release of DO-178B provided information on specific topics of concern to certification authorities in order to harmonize approaches to compliance. These topics have had a greater scope than just software concerns, and much of the content in CAST documents has been implemented in guidance updates such as DO-178C, or formed the basis of authority publications, such as A(M)C¹ 20-193 to address the use of multicore processors in avionics and A(M)C 20-152A on the development of airborne electronics hardware. CAST has remained inactive since October 2016 and links to most previous CAST papers have been removed from the FAA’s website. The FAA plans to remove the link to CAST-32A after the publication of A(M)C 20-193.

¹ A(M)C refers to either EASA AMC (Acceptable Means of Compliance) or FAA AC (Advisory Circular) documents.

The CAST team
The Certification Authorities Software Team (CAST) is a team of software specialists from certification authorities in the United States, Europe and Canada. The team released a number of documents that provide supplementary compliance guidance since the release of DO-178B, but the team has remained inactive since October 2016.
1.4 Demonstrating compliance

The basic structure of a Design Assurance process consists of three components:
1. Planning
2. Development
3. Integral processes
Planning should occur first and this follows the basic design assurance principle that you
say what you are going to do before you do it so you can ensure that what you plan to do
will meet the required DO-178C objectives and provide evidence to demonstrate this.
The Integral processes cover the activities needed to ensure that the products of the
Planning and Development processes meet their requirements and are well managed,
and that the processes you follow match those you planned to follow. The Certification
Liaison activities focus on ensuring that there are sufficient data and checkpoints for the
certification authorities to be able to determine compliance and approve the software.
The typical process for the certification authority to determine compliance is based on four Stage of Involvement (SOI) reviews. These reviews are:
1. SOI#1 – the Planning review
2. SOI#2 – the Development review
3. SOI#3 – the Verification review
4. SOI#4 – the Final Certification review
Each of these reviews focuses on an aspect of the process and evaluates the evidence
that demonstrates compliance incrementally throughout the development life cycle. We
discuss each of the SOIs in more detail later. Generally, certification authorities require
that each SOI is passed before a project can proceed to the next SOI. SOIs thus mark key
milestones in a DO-178C project.
2. Planning

TIP
Planning is a crucial part of DO-178C compliance and it sets the scene for your project’s activities and the efficiency of those activities in later stages. You can save effort in your DO-178C planning by using a set of template planning documents as a starting point.

Importance of the PSAC
The Plan for Software Aspects of Certification (PSAC) is especially important and is essentially a compliance contract between your specific project and your certification authority. It must explain all features of the software and the associated processes that are salient to demonstrating compliance to the objectives of DO-178C and any applicable supplements.

The development of a set of plans covering all components of the Design Assurance process is a cornerstone of DO-178C. As part of this activity, the following plans must be developed:

1. Plan for Software Aspects of Certification (PSAC): a description of the software you plan to develop, the hardware environment it will be used in, the design assurance processes you will follow, and how you will demonstrate compliance, including how you will verify your implemented code and any commercial tools you will use in your verification. This acts as the parent planning document.

2. Software Development Plan (SDP): a description of the software development processes and the software life cycle that is used to satisfy DO-178C objectives.

3. Software Verification Plan (SVP): a description of the verification processes (Reviews, Analyses and Tests) used to satisfy DO-178C objectives.

4. Software Configuration Management Plan (SCMP): a description of the configuration management processes used throughout the software life cycle to satisfy DO-178C objectives.

5. Software Quality Assurance Plan (SQAP): a description of the quality assurance processes used throughout the software life cycle to satisfy DO-178C objectives.
For DALs C and higher, as well as producing the above planning documents, you will also need to establish and document the standards you will use for Requirements, Design, and Code, documenting the methods, rules, and tools you will use to ensure complete and consistent generation of these items in a way that will meet DO-178C and project objectives. It pays to put effort into writing your standards – having well thought out standards reduces the chance of incurring rework costs later due to issues that could be addressed through having better standards, such as producing unverifiable requirements.

Reducing planning costs
Creating plans and standards for a project working towards DO-178C compliance requires a lot of effort, but you can save effort by using a complete set of templates as a starting point. Some companies offer such templates for a fee.

2.1 Thinking about verification early

TIP
The earlier you begin to evaluate verification methods, the better. Knowing early on how you’re going to satisfy each verification objective, including which verification tools you’re going to use (if any), and how you’re going to qualify any such tools, will reduce the cost of your verification.
As you need to describe the processes and tools you are going to use for verification
during the Planning stage of a DO-178C project, you’ll need to ensure that what you plan
to do for your verification is feasible and efficient before you have designed any of your
code, let alone implemented it. The earlier you invest in evaluating options for verification,
the better. It pays to evaluate whether you will use manual processes or tools to satisfy
each verification objective and evaluate the tools you intend to use while considering any
tool qualification needs. Any changes to your verification processes after the completion
of SOI#1 would require renegotiation of your compliance strategy with your certification
authority, which could add cost to and delay your project.
3. Configuration Management

3.1 Control categories

A configuration item is a software component or item of software life cycle data that is treated as a unit for software configuration management purposes.

Configuration items that you track through configuration management fall into two Control Categories, which determine the level of control that must be applied to items in that category. DO-178C specifies which items must be treated as Control Category 1 or 2 items based on the project’s Design Assurance Level. Items treated as Control Category 1 (CC1) must undergo full problem reporting processes, and formal change review and release processes. Configuration items classified as Control Category 2 (CC2) items do not need to undergo these more formal processes, but items in this category must still comply with configuration identification and traceability needs, be protected against unauthorized changes, and meet applicable data retention requirements.
3.2 Baselines
An important concept relating to Configuration Management is the baseline – a record
of the state of specific compliance artifacts at a specific point in time. What counts as a
baseline varies from project to project, but it always has the following characteristics:
• The artifacts within one baseline must be consistent with one another.
Typically, you will create a baseline for each phase of your software life cycle. For example,
once high-level requirements have passed reviews and quality assurance checks, and
you are about to transition to the software design phase, you may create a baseline
representing the entire set of high-level requirements, reviews, analyses, the QA records,
and traceability to system level requirements.
If your development is organized into modules, you can create a baseline per module. For
example, you might establish a design baseline for each of three modules independently.
Your configuration index submitted for approval should identify the relevant baseline, or
baselines, of the accompanying compliance data.
3.3 Problem reporting

DO-178C Control Category 1 items (see Control categories on page 8) are subject to a formal problem reporting process, which you will need to set up and implement. This process should capture how you plan to respond to problems identified for Control Category 1 items during your project, such as issues with implemented code not meeting requirements/standards, or verification activities not yielding results needed to demonstrate DO-178C compliance. You will need to write your problem reporting process into your Software Configuration Management Plan. Your problem reporting process should cover:

• How change requests, problem reports, issue reports or similar are raised. Your process should define the problem to be addressed or the need to be met. It must identify the configuration item and version affected, and especially in the case of a feature request, may include some notes on the expected solution.

• How you will triage requests to make sure that they adequately define scopes. How agreements are made on whether to proceed with a change, defer a change, or reject a change. Agreements to proceed with a change should capture specific resources and timescales. Such decisions are usually made by a Change Control Board.

• When changes are made, how you will update your documentation with details of the change, particularly identifying the configuration items and versions that resolve the problem.

Change Control Boards
A Change Control Board is a team of development experts (often including software engineers, testing experts, etc.) and managers (e.g. Quality Assurance managers) that decides whether proposed changes should proceed.
4. Quality assurance

Software Quality Assurance (SQA) activities include:

1. Checking that the PSAC and the associated plans and standards align with the objectives of DO-178C.
3. Checking that software life cycle processes comply with approved plans and standards.
5. Checking that software activity follows the software life cycle processes – in particular,
checking that transition criteria between life cycle processes are satisfied.
These activities must be conducted whether software activities are internal to one
organization or part of an integrator-supplier relationship. In the latter case, each
organization that has further suppliers must coordinate SQA activities so that the
integrator’s evidence includes evidence from their supplier’s SQA activities.
4.1 SQA independence
According to DO-178C, a project’s SQA team must have the authority to ensure corrective action, i.e. the organization must be arranged such that, if a software quality engineer discovers a problem, SQA cannot be overruled by some other part of the organization.
This can be achieved both through processes (e.g. explicitly requiring SQA authorizations)
and through the reporting structures within the organization (e.g. SQA should not report
to engineering managers). SQA authorization or approval is typically needed for the data
items (plans, standards, requirements, configuration index and so on) that form each
baseline.
SQA should be the primary authors of the Software Quality Assurance Plan
(SQAP, see Planning on page 6), which should cover the initial, ongoing and conformity
review activities that SQA will manage, as discussed here and below.
When supplier organizations are involved in your program, it is best to coordinate SQA
activities between the organizations from the early stages of a project. For each SQA-
specific item in your own plans, you should coordinate how you will obtain that information
from the supplier, in what form, and at which milestones. This can be especially challenging
for witnessing and participation activities (see Ongoing SQA activities on page 12), where
you may need to delegate some authority to the supplier organization and then audit the
responses that you obtain.
Ongoing SQA activities

These activities include:
• Participation – activities in which SQA have a continuous active role. This can include,
for example, participation in reviews, change control boards, and problem reporting
(see Problem reporting on page 10). Software quality engineers would typically be
present at some percentage of these meetings to help identify issues such as process
deviations, missed checklist items, or confusing instructions. As well as generating
software quality reports to explain attendance and identifying issues and corrective
actions applied, another benefit of participation is to correct processes and update
supporting materials as early as possible. This can help avoid the need for significant
later rework to correct quality problems.
• Witnessing – this involves members of the SQA team witnessing activities such
as testing and the build and load process. Witnessing can be in multiple forms, for
example witnessing the original activity being performed, witnessing an engineer redo
the activity, or independently performing the activity. Using a mix of these techniques
will help to give a good spread between the efficiency of the checking process and the
independence needed to find inconsistencies.
During the software conformity review, SQA checks, for example, that:

• Planned processes were followed and generated traceable, controlled life cycle data,
with complete and consistent baselines, including any data relating to changes from a
previous baseline.
• Software test, build and load activities were performed from repeatable, well-defined
instructions.
5. Development
Requirements development for DO-178C projects
• Quality assurance records from this stage can be generated by having SQA
personnel involved in some of the requirements reviews, independently
performing reviews and analyses, or having some proportion of the
requirements re-reviewed or re-analyzed.
It is important to ensure that the requirements you write can be tested using the verification
methodology you said you were going to use in your PSAC. This is important for both high
and low-level requirements. If you generate requirements that can’t be tested, you’ll have
two options: rewrite your requirements or change your verification methodology (and
renegotiate the changes with your certification authority). The cost of both options is high,
so it’s best to ensure that your requirements are verifiable.
Issues that can lead to unverifiable requirements include, for example:

• Not including specific verifiable and quantitative acceptance criteria, such as values or behaviors, in requirements.
Having a well thought out Requirements standard should help your project avoid
generating unverifiable requirements and reduce rework costs.
5.1.2 The importance of requirements traceability
TIP
Most projects working towards DO-178C compliance choose to use a tool (either
commercial or in-house) to manage requirements and generate traceability information. It
is worth considering how well any tools you plan to use for verification interface with your
requirements management tool. Some tools may offer features to import requirements
information from your requirements management tool, thus helping you view your
verification results in the context of your requirements. This makes it easier to map
between your verification results and the requirements you’re verifying, likely saving time
and effort during your verification process.
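As a minimal illustration of what requirement-to-test traceability can look like at the code level (the requirement identifier, tagging convention, unit under test and test function below are all hypothetical, not taken from the handbook or from any particular tool):

```c
#include <assert.h>

/* Hypothetical high-level requirement (illustrative only):
 * HLR-042: Fuel quantity shall be reported in kilograms, converted
 *          from the sensor's raw reading in pounds.                  */

/* Unit under test (hypothetical). */
static double fuel_quantity_kg(double sensed_lb)
{
    return sensed_lb * 0.45359237;   /* pounds to kilograms */
}

/* Test case FT-042-01 - traces to HLR-042.
 * Keeping the requirement identifier with the test (here as a comment;
 * in practice usually as structured data in a requirements management
 * or testing tool) lets results be reported per requirement.          */
static void test_fuel_quantity_reported_in_kg(void)
{
    double kg = fuel_quantity_kg(1000.0);
    assert(kg > 453.5 && kg < 453.7);
}

int main(void)
{
    test_fuel_quantity_reported_in_kg();
    return 0;
}
```

However the trace is recorded, the goal is the same: from any verification result you can identify the requirement it verifies, and from any requirement you can find the tests that exercise it.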
Figure 6 – Requirements traceability concept and dedicated requirements page in a RapiTest report
5.2 Design
When your requirements are available, the next step is to design the architecture of your
software and define its low-level requirements in order to define a system that can meet
the high-level requirements you have identified. Design in DO-178C is the combination of
architecture and low-level requirements. You will need to follow the design standards you
said you would follow in your PSAC and provide evidence that you have done this through
the SQA Integral process (see Quality assurance on page 11). Your certification authority is
likely to check this evidence during future SOIs.
As software is designed, you will often discover items that need low-level requirements which do not trace to any high-level requirement. These requirements, known as derived requirements, can include requirements for things such as reusable libraries, scheduling behavior, or the numerical accuracy of the software.
When considering bi-directional traceability, you must provide information to show where
derived requirements come from. This could be a reference to the design process or
a design standard, for example. Recording this information makes it easy to show that
your software design is the product of a repeatable process. This type of traceability is
also important for change impact analysis, as these decisions often affect more than one
software component.
Software architecture defines the components that software will have, including the
functionality that each component will achieve, and the relationships between components,
specifically data flows and control flows. Your architectural decisions can have a significant
impact on your project life cycle, not only for implementation but also for verification,
particularly considering data coupling and control coupling.
TIP
Architectural decisions you make can affect the efficiency of data coupling and control coupling coverage analysis during your verification. To make this analysis easier, you may want to:
• Clearly define how each of the data and control couples in your design should operate, including specifying mode information.
• Specify the control flow in terms of when and how control passes into and out of each component, including error cases and exceptions.
• Where software components interact with hardware, specify data and control interactions in the same way as for software-software interactions.
The more such information that your interface definitions include, the easier your verification will be.

Decisions you make in your software architecture will impact the verification effort needed for DO-178C compliance. As verification requires much more effort than implementation, it makes sense to consider the impact your decisions will have and choose a strategy to reduce total effort.
One way in which your design decisions impact verification is in the data coupling and control coupling of the interfaces that you plan to include in your implementation – to comply with DO-178C guidance at DALs C and above, you need to verify that you have exercised the data coupling and control coupling interfaces in the implementation (see Data coupling and control coupling coverage analysis on page 55), and your design decisions will affect the ease of this verification.

Data coupling and control coupling
Data coupling and control coupling occur between components of your code that are separate within your architecture, but through which data can be passed or control decisions can be affected.
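As a minimal sketch of these concepts (the component and function names below are hypothetical and not taken from the handbook; in a real project the two components would live in separate source files), the fragment shows a data couple and a control couple between a sensor component and a controller component:

```c
#include <stdbool.h>

/* --- sensor component (hypothetical) --------------------------------- */

static bool  sensor_degraded = false;
static float sensor_pressure_kpa = 101.3f;

/* Data couple: pressure data produced by the sensor component and
 * consumed by the controller component.                                 */
float sensor_read_pressure_kpa(void) { return sensor_pressure_kpa; }

/* Control couple: a decision made in the sensor component that changes
 * which path the controller component takes.                            */
bool sensor_is_degraded(void) { return sensor_degraded; }

/* --- controller component (hypothetical) ----------------------------- */

float controller_compute_valve_cmd(void)
{
    if (sensor_is_degraded()) {        /* control coupling */
        return 0.0f;                   /* fail-safe command */
    }
    return sensor_read_pressure_kpa() * 0.01f;   /* data coupling */
}
```

Data coupling and control coupling coverage analysis would then need to show that requirements-based tests exercise both the data passed through sensor_read_pressure_kpa() and both outcomes of the sensor_is_degraded() decision in the controller.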
After developing the software architecture, the next step is to develop low-level
requirements. This involves specifying requirements for the functionality you have assigned
to each component in your software architecture. While most low-level requirements will
typically map to high-level requirements, some components may include functionality
not identified in high-level requirements and thus require that you develop derived
requirements (see Design on page 17).
Depending on your implementation decisions later on, you may need to develop new
low-level requirements that do not trace to high-level software requirements, known as
derived requirements.
5.3 Implementation
When your requirements and design are ready, the next step will be to implement your
product in code. This is only a comparatively small part of the overall compliance process, usually taking 5% or less of the overall effort.
As in every stage, you will need to ensure that your implementation follows the processes
and standards you said you would follow in your PSAC and provide evidence that you
have done this through the SQA Integral process (see Quality assurance on page 11). Your
certification authority is likely to request evidence demonstrating that you have done so
in future SOIs.
5.3.1 Implementation decisions can affect verification efficiency
TIP
The decisions you make on how to implement your product can have major effects on
your verification later on. These can make your verification much easier or much more
difficult, and as a result cause your project to either run smoothly or incur delays. As
verification takes much more effort than implementation, it’s worth considering the effects
your decisions will have and making decisions that will reduce your verification effort. Here
are a few things you may want to consider:
• Generally, the fewer and simpler the code constructs you use, the less effort you’ll need to verify your software.
• Some coding standards require code to be written in a way that can cause development of untestable code, such as a standard requiring that default cases are included in every switch statement. At higher DALs, you’ll need to review this code even if you can’t test it, which can incur extra effort (this point and the code complexity point below are illustrated in the sketch after this list).
• DO-178C states that your coding standard should include code complexity
restrictions. Limiting code complexity in general makes it easier to verify the
code, while limiting the use of specific language features may be necessary for
your source code to be compatible with the verification techniques and tools that
you intend to use. If you are working at DAL A, you may need to take care to
structure your low-level requirements so that your corresponding source code
does not need an excessive number of conditions per decision. This will reduce
the effort needed to achieve and maintain full MC/DC of your source code later
(see Structural coverage analysis on page 46).
• Choice of hardware platform and operating system – your choice of hardware
platform and operating system can have huge effects on the timing behavior and
predictability of the final software, especially if you are using multicore processors.
Some platforms and operating systems may have in-built features that make your
verification easier – it’s worth evaluating a range of options early and considering
downstream effects on verification. For example, using hardware with on-chip tracing
capabilities can make worst-case execution time analysis much easier than if only off-
chip tracing capabilities are available.
• Choice of compiler and compiler options – the compiler and compiler options you use can affect how easy it is to verify your code. Your compiler and compiler options can especially affect the efficiency of your structural coverage analysis. Structural coverage analysis can be performed either by analyzing execution of source code or object code constructs (see Source code vs. object code structural coverage analysis on page 51). If object code is analyzed, additional verification may be needed to map the object code coverage to the source code. Compilers can optimize the code, making it require more effort to do this mapping. For DAL A software, it is also necessary to verify any additional code that is introduced by the compiler. In both cases, the more compiler optimizations that you allow to occur, the more effort you’ll need to verify your code.

• Use of emerging technologies – if your implementation uses emerging technologies such as using GPU devices rather than CPU devices for computation, using multicore systems, or data-driven computation, your verification may be more challenging and time-consuming. While verification methodologies are well established for “standard” implementations, this is not the case for implementations using emerging technologies, and verification tool support is less available.

The cost of changing any of the items above becomes increasingly large the further you are through a DO-178C project, so you should ideally consider them before your DO-178C Planning.

GPUs for compute
In recent times, there has been an increasing use of Graphical Processing Units (GPUs) for computation in the avionics industry. Using GPUs for compute offers benefits when performing computationally intensive tasks due to high parallelization. By developing COTS GPU libraries to be used for safety-critical software, GPU manufacturers are supporting the use of GPUs by industries including the avionics industry.
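As a brief, hypothetical illustration of the coding standard and code complexity points above (the code below is invented for illustration and is not taken from the handbook), the first function contains a defensive default branch that requirements-based tests cannot reach, and the remaining functions show how limiting the number of conditions per decision keeps the MC/DC test set for each decision small:

```c
#include <stdbool.h>

typedef enum { MODE_IDLE, MODE_RUN, MODE_SHUTDOWN } op_mode_t;

/* A standard that mandates a default case in every switch can create a
 * branch that no requirement exercises: every legal mode is handled
 * above, so the default cannot be reached by requirements-based tests
 * and must instead be justified by review or analysis.                 */
int mode_priority(op_mode_t mode)
{
    switch (mode) {
    case MODE_IDLE:     return 0;
    case MODE_RUN:      return 1;
    case MODE_SHUTDOWN: return 2;
    default:            return -1;    /* defensive, untestable branch */
    }
}

/* A single decision with four conditions needs at least five test cases
 * to achieve MC/DC.                                                     */
bool valve_open_complex(bool p_ok, bool t_ok, bool cmd, bool maint)
{
    return (p_ok && t_ok && cmd) || maint;
}

/* Structuring the low-level requirements (and the corresponding code) so
 * that each decision carries fewer conditions keeps the MC/DC cases per
 * decision smaller and easier to maintain when the logic changes.       */
bool sensors_ok(bool p_ok, bool t_ok)
{
    return p_ok && t_ok;
}

bool valve_open_simple(bool p_ok, bool t_ok, bool cmd, bool maint)
{
    if (maint) {
        return true;
    }
    return sensors_ok(p_ok, t_ok) && cmd;
}
```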
5.3.2 It may pay to start verification early
Some verification activities, such as structural coverage analysis, take a long time. If your
implementation generates code that can’t be verified using your chosen methodology,
you’ll incur extra costs and potentially delays by needing to return to your implementation.
This can be mitigated by verifying your code as you write it. A sensible approach is to write
tests to ensure that you can get full coverage at the granularity you need it, then implement
these as regression tests that are run throughout your development. While these are not
requirements-based tests, and as such you won’t be able to use them to support your
compliance argument (see Unit testing is not enough on page 39), they give you confidence
that you can test your code fully, and you’ll be made aware of any changes to coverage
from your regression tests so that you can rectify them. Some software verification tools
can help you automatically generate test vectors to test your code in this way, reducing the
effort needed to do so.
Figure 9 – Starting verification early and setting up regression tests can help you avoid
unwanted surprises late in the compliance process
5.3.3 Using model-based design (DO-331)

Model-based design technologies can reduce the effort needed to design and test compliant software. Some benefits of using model-based design include being able to simulate software behavior, supporting unambiguous expression of requirements and software architecture, enabling automated code generation, and being able to perform some verification activities earlier in the software life cycle than would otherwise be possible.

RTCA DO-331 (Model-Based Development and Verification Supplement to DO-178C and DO-278A) provides additional objectives that apply when using model-based design in DO-178C projects, and clarification of how existing DO-178C objectives and activities apply to projects using model-based design.

One of the key additional verification activities discussed in DO-331 is model coverage analysis, which aims to detect unintended functionality in the model. As per DO-331, performing model coverage analysis does not eliminate the need to perform coverage analysis of the generated code that will actually execute.
The simulation tools provided by model-based design tools can reduce verification effort
by providing a means to produce evidence that a model complies with its requirements.
As per DO-331, using simulation does not eliminate the need to perform testing of the
executable object code on the target hardware.
If you choose to use model-based design processes in a DO-178C project, you will need
to understand the guidance in DO-331 and identify model-based design tools, as well as
your verification and tool qualification strategies, in your DO-178C planning documents
(see Planning on page 6).
5.3.4 Using object-oriented technology (DO-332)

Object-oriented programming
Object-oriented programming and related techniques offer benefits that can be attractive to software engineers, such as allowing encapsulation, increased code reusability and easier troubleshooting of code issues. As such, these technologies are often used in the aerospace industry.

Some examples of extra objectives that must be achieved when using object-oriented programming include verifying local type consistency where inheritance and method overriding are used, and verifying that any use of dynamic memory management is robust.
If you choose to use object-oriented programming in a DO-178C project, you will need to
understand the guidance in DO-332 and consider the impact that using the technology will
have on all stages of your DO-178C life cycle, including verification. You will need to write
up your implementation and verification strategies in your DO-178C planning documents
(see Planning on page 6).
6. Verification
A number of verification activities are required by DO-178C. Some of these activities involve
verification of the developed software (see Verification of DO-178C software on page 31), while
some involve verification that your DO-178C verification process was correctly followed (see
Verification of the verification process on page 46).
Many verification activities can be performed either manually or by using automated tools
to help run the analysis. When automated tools are used to achieve a DO-178C objective
without their output being verified, those tools must be qualified for use following the DO-330 guidelines (see Qualifying verification tools (DO-330) on page 60). Qualification materials
are often available from COTS tool providers.
As we discussed earlier, the number of verification objectives you must meet depends on the DAL of your software.
6.1 Preparing for SOI#3

TIP
To prepare for SOI#3, make sure that you begin verification early to produce compliance artifacts for every verification activity you need to perform.

During SOI#3, you will need to provide evidence that you have produced compliance artifacts for each of the verification activities you need to comply with based on your software’s DAL (see The Verification milestone (SOI#3) on page 62). Because of this, it makes sense to begin your verification activities early and not leave verification activities until late in your project life cycle.

Independence in DO-178C testing
RTCA defines independence as a separation of responsibilities that ensures objective evaluation; for software verification activities, independence is achieved when the verification activity is performed by a person other than the developer of the item being verified (a tool may be used to achieve equivalence to the human activity).

The ideal method used to perform a final run for score (see The final run for score on page 63) to produce verification results is to run tests on an integrated hardware/software environment representative of your final flight system. This stems from the following activities in DO-178C:
• 4.4.3.b – “The differences between the target computer and the emulator or
simulator, and the effects of these differences on the ability to detect errors and verify
functionality, should be considered. Detection of those errors should be provided by
the software verification process and specified in the Software Verification Plan.”
It is rarely practical to run all tests on the final system, however, due to two major limitations:
• Cost – on-target testing is much more expensive than on-host testing. Due to the cost
of hardware test rigs, the number and availability of these can be a limitation, while the
availability of systems for on-host testing is never limited in this way.
• Availability of test rigs – test rigs may not be available until relatively late in your
Development, meaning that if you choose to only do on-target testing, your testing
may be delayed by lack of test rigs, potentially introducing delays in your project.
Typically, a range of test environments is defined as part of the Software Verification Plan
activities, for example the following:
• On-host – on-host testing environments are useful for early stages of verification to
set up and evaluate testing processes, especially when targets may not yet be available.
On-host tests provide less assurance that test behavior is representative of behavior
on the final flight system for a number of reasons, for example because the compiler
and compiler options used may not be representative of the final target. When
running tests on-host, some devices may not be available and not all functionality can
be tested.
• On-host simulator – simulator environments are again useful for early stages of
verification. A simulator of the target will typically use a cross-compiler with the same
compiler options that will be used on the final target and tests run on a simulator will
thus be more representative than tests run on-host. While tests run on a simulator
provide more assurance than those run on-host, they are still not enough to provide
assurance of the final flight system – simulators may not be able to accurately simulate
all devices, and not all functionality can be tested.
• A range of on-target test environments may be available during a project, which are
more or less representative of the final flight system. While many types of environment
may exist and may be given different names by different DO-178C applicants, we will
describe some example environments as Model A-C environments:
• Model B (red label, open box) – a Model B environment may be very similar
to the final system e.g. including the intended RTOS, BSP and devices, but still in
development. The hardware in these environments may be modified compared
to that intended for the final flight system to support testing and debugging. For
example, the Model B environment may have extra hardware set up to expose
GPIO pins or debugger headers for testing and debugging purposes. All devices in
the final system would typically be available.
• Model C (black label, closed box) – a Model C environment will be the final board
itself. This may have limited or no connectivity to support debugging. Typically, all
tests for the run for score would be performed on a Model C environment, but
due to the cost of this testing, this would only be done after it is known that the
system is working and that the tests are complete (after testing using previous test
environments).
A sensible approach to testing is to start on-host testing early in your project (see It may
pay to start verification early on page 22), then start on-target testing (e.g. Model A, B or C)
later, when test rigs are available.
Whether or not it is best to use a verification tool for a specific verification activity depends on the project. For relatively simple verification activities for which little rework is likely to be required, using manual processes may be the better option, while for complex and time-consuming verification activities for which rework is likely, an automated tool may be the better option. You should always consider whether your use of any tool will need to be qualified and whether qualification kits are available for any tools you evaluate.
It is possible to do all of the verification needed for DO-178C compliance manually, but it
would be extremely inefficient.
Developing an internal tool for a specific verification activity can be a good idea as you can develop it to best meet your specific use cases; however, the cost of developing and maintaining such a tool can be high, and if you need to provide tool qualification evidence for an internal tool you use (see Qualifying verification tools (DO-330) on page 60), you will need to either develop this internally or find another company to do the qualification for you.
If you have any specific needs from a verification tool and these aren’t addressed by any of
the tools on the market, it may pay to contact several vendors and discuss whether they
are willing to develop features for you.
Ultimately, the decision of whether to use tools or not should depend on whether this will
make your verification more efficient. While tools may make some verification projects
more efficient, every project has different needs, and in some cases, for example in a small
project, using a tool may not improve your efficiency. It is best to consider the specific
needs of your project and compare the efficiency of using manual processes vs. using an
automated tool.
For more information on Tool qualification and DO-330, see Qualifying verification tools
(DO-330) on page 60.
The verification-related life cycle data you produce will typically include the following documents:

• Software Verification Results (SVR), which lists the verification activities conducted
(reviews, analyses and tests), gives the results, summarizes problem reports for
any failed verification activities, and typically also includes traceability and coverage
data. This document is sometimes submitted and sometimes just made available for
external review.
• Software Verification Cases and Procedures (SVCP), which gives the design of
each review, analysis and test to conduct. This includes details such as test environment
setup, schedules, staffing, auditing, and efficiency concerns. It isn’t expected that this
document is submitted.
Other documents may be expected if you are using emerging technologies in your
development.
The above list makes a one-to-one correspondence between a named DO-178C output
and a document. In practice, you can structure your deliverables into whatever document
arrangement best suits your development approach. This includes splitting out one output
into multiple documents (e.g. to put all your resource usage verification planning into a
separate document) or supporting one document with another more detailed document
(such as a general execution time test strategy document supporting the resource usage
part of the SVCP).
The form of the SVR will depend on the verification activities performed.
• For reviews, the SVR would typically summarize the completion of checklists, while the checklists themselves are stored under appropriate configuration control with unique identifiers.
• For analyses, you will often create dedicated analysis reports. For example, your
resource usage results and structural coverage analysis results might be best
presented in their own reports and then summarized in the SVR. For consistency and
maintainability, consider generating the summary as part of the detailed document,
and then copying that across to the SVR. This will make it easier to deploy review
effort: experts reviewing the detailed document can check the detailed meaning of the
summary, while the consistency of the text between the documents can be handled
with a review checklist item.
• For tests, you will keep the results of test execution (e.g. log files or test tool output
files) and summarize those in the SVR. Make sure that you can trace each test execution
result back to the corresponding test procedure and to the verification environment
configuration that was used for the testing.
If part of your SQA activity involves witnessing or repeating verification activities (see
Ongoing SQA activities on page 12), you will typically document those as part of the SQA
record. If the result of a repeated activity is different to the original result, you should
invoke your problem reporting process (see Problem reporting on page 10) to analyze and
repeat the activity.
6.8 Verification of DO-178C software

DO-178C tests, reviews, and analyses
DO-178C verification activities can include reviews and analyses as well as tests. According to DO-178C, a test exercises a component to verify that it satisfies requirements (both normal range and robustness requirements), while a review provides a qualitative assessment of correctness, and an analysis provides repeatable evidence of correctness.

For DO-178C compliance, software must be verified against both functional and non-functional requirements. DO-178C software must be shown to comply with the coding standards you have agreed to follow in your DO-178C planning (see Checking compliance to coding standards on page 31). Functional requirements in DO-178C projects should be verified by requirements-based functional testing (see Requirements-based functional testing on page 33), while non-functional requirements are typically verified with a combination of test, review, and analysis, including analysis of the resource usage of the DO-178C system (see Resource usage analysis on page 39).

6.8.1 Checking compliance to coding standards

For DO-178C compliance, you must demonstrate that you have followed the coding standards that you said you would follow in your Software Development Plan (see Planning on page 6).
TIP
It pays to check your code’s compliance to coding standards early, and this should
ideally be a continuous process throughout your implementation. This is because
any deviations of your implementation from your coding standard will necessitate
changes to your code, which will have a downstream effect on all other verification
activities, requiring rework.
Manual vs. automated compliance checking
TIP
• Does the tool support the coding standard you plan to use?
• If you’re not using a commonly used standard, does the tool let you define
custom rules?
• Does the tool integrate with the continuous integration software you are
using?
Compliance to coding standards can be checked manually or supported through the use
of automated compliance checking tools.
Compliance can be analyzed manually by reviewing every section of the code against your
coding standard. This can take a long time, the process is prone to human error, and if any
changes are made to the code after a specific component is checked, it may incur extra
rework. Nonetheless, if your implementation uses relatively few lines of code, following a
manual approach may be preferred.
Due to the factors above, most companies working towards DO-178C compliance use
automated tools to check compliance to coding standards, which significantly reduces the
effort needed to check for compliance. Most tools support checking against commonly used coding standards such as the MISRA standards for C and C++ code⁴. If the coding standard you plan to follow includes rules not present in these standards, some tools may offer features to let you define custom rules to check your code against. Most compliance checking tools will integrate with any continuous integration tools you use, so compliance checking is automatically performed with each update of the source code. If you choose to use a compliance checking tool, you should consider whether the tool is qualifiable for DO-178C (see Qualifying verification tools (DO-330) on page 60) – if not, you’ll need to manually review the results generated by the tool.
⁴ https://www.misra.org.uk/MISRAChome/tabid/181/Default.aspx
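As a hypothetical illustration of the kind of rule such tools check (the rule paraphrased below is typical of MISRA-style standards, but the coding standard you committed to in your planning governs what actually applies):

```c
#include <stdbool.h>

/* Non-compliant with a typical MISRA-style rule: the result of an
 * assignment is used as the controlling expression of the if, which is
 * easy to mistake for an intended comparison and is flagged by most
 * compliance checking tools.                                           */
bool arm_if_requested_noncompliant(int request, int *state)
{
    if ((*state = request)) {
        return true;
    }
    return false;
}

/* Compliant rework: the assignment and the test are separated, making
 * the intent explicit without changing the behavior.                   */
bool arm_if_requested_compliant(int request, int *state)
{
    *state = request;
    return (request != 0);
}
```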
6.8.2 Requirements-based functional testing
DO-178C requires evidence to demonstrate that the function of the code has been tested
adequately with respect to the system’s high and low-level requirements in two ways:
• Normal range testing provides evidence that the software will provide intended
aircraft function and that it complies with high and low-level requirements
• Robustness testing provides evidence that the software will provide intended aircraft
function for all foreseeable operating conditions
With the exception of DAL D software, for which testing need only be done against high-
level requirements, the expectation when following DO-178C guidance is that every high
and low-level requirement in the system has been tested by both normal range and
robustness tests. When tests are complete, one or more test cases should be directly
traceable from every requirement in the system.
The execution of requirements-based tests can be done manually, but this is a laborious
process, so automated functional testing tools are often used.
Requirements-based functional testing in DO-178C
For DAL D software, functional testing is only required against high-level requirements.
For higher DALs, it is required against both high and low-level requirements. For DAL B
software, there must be independence between those implementing the code and those
verifying that it complies with low-level requirements, while for DAL A software, there must
be independence between those implementing the code and those verifying that it both
complies with and is robust with low-level requirements.
Normal range testing involves testing how the software responds to normal inputs and
conditions. Each normal range test used to claim compliance must trace to a high or low-
level requirement. As normal range tests used to claim DO-178C compliance must be
requirements-based, unit testing is not appropriate for demonstrating compliance, though
it can still be useful (see Unit testing is not enough on page 39).
Robustness testing involves stressing the system, including comprehensively testing it with the full range of equivalence classes of data, and testing its behavior when abnormal conditions
and faults are present. In robustness testing, abnormal conditions should be tested, such
as testing a function with input values outside the expected range.
When testing a component for robustness with respect to data inputs, not all possible
values must be tested – DO-178C defines the concept of equivalence classes, or classes
of data that behave in an identical way, and it is only necessary to test a single member of
each equivalence class. To make your generation of robustness tests more efficient, you
should ensure that you identify the equivalence classes in your system early, and you may
want to design the interfaces and data dictionary of your system to limit the number of
equivalence classes you will need to test.
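As a minimal, hypothetical sketch of these ideas (the function, requirement and limits below are invented for illustration; in a real project such checks would be requirements-based tests run in your chosen test environment rather than bare assertions):

```c
#include <assert.h>

/* Hypothetical low-level requirement (illustrative only):
 * LLR-17: clamp_airspeed shall return its input when it is within
 *         0..400 knots, and shall return 0 for any input outside
 *         that range.                                                 */
static int clamp_airspeed(int knots)
{
    if (knots < 0 || knots > 400) {
        return 0;                      /* out-of-range input rejected */
    }
    return knots;
}

int main(void)
{
    /* Normal range tests: a representative of the valid equivalence
     * class plus its boundary values.                                  */
    assert(clamp_airspeed(0)   == 0);
    assert(clamp_airspeed(250) == 250);
    assert(clamp_airspeed(400) == 400);

    /* Robustness tests: one representative of each invalid equivalence
     * class (below range, above range); other members of each class
     * are expected to behave identically, so they need not all be
     * tested.                                                          */
    assert(clamp_airspeed(-1)  == 0);
    assert(clamp_airspeed(401) == 0);

    return 0;
}
```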
To comply with DO-178C guidance, you will need to collect evidence that the Executable
Object Code meets its requirements while it executes on a target as representative of the
final environment as possible. Typically, access to such a target may not be available until
later stages of the project (see Testing on-host vs. on-target on page 26).
While you may not be able to claim credit for on-host testing, using this approach can still
be sensible as it allows you to de-risk your on-target testing as much as possible until you
have access to test rigs for on-target testing. In such an approach, you may develop and run
tests on-host during the early stages of the project (see It may pay to start verification early
on page 22), then repeat them on a Model A/B/C system (see Testing on-host vs. on-target on
page 26) when the tests are relatively complete and these test environments are available.
It may not be possible, however, to run tests against some high-level requirements on-
host, as these tests may need the devices intended for the system to be available.
Manual vs. automated functional testing
TIP
• Which test formats are available with the tool, and how can they help you
reduce test authoring effort?
• Can the tool help you write and run robustness tests through forcing
abnormal software behavior and injecting faults?
• Will the tool integrate with your existing system, or do you need to modify
your system to use it?
• Can the tool help you merge results from different tests and builds?
• Does the tool integrate with any continuous integration tools you are
using so you can view your results throughout your project?
• Is the tool qualifiable for DO-178C, and are any of its features that you
want to use qualifiable (if needed)?
Functional testing can either be performed mostly manually (the software will of course
still need to be compiled and run) or can be supported by using an automated functional
testing tool.
Manual functional testing can take a huge amount of time, the process is prone to human
error, and if any changes are made to the code after the functional testing is performed,
it may incur significant extra rework. For this reason, most companies working to DO-
178C guidance use automated functional testing tools, which improve testing efficiency by
automating or supporting some or all of the following:
• Writing functional tests – most tools provide one or more methods to write
tests that are more efficient for functional testing than using raw code in common
programming languages such as C. Testing formats are typically designed to make it
easy to write and review tests. The test formats supported by different tools differ, and
tools offered by some vendors may better fit your project and use cases, so it pays to
consider your options. You should consider whether the tool you select can help you
test abnormal conditions by forcing abnormal software behavior and injecting faults.
• Converting tests into test harnesses that can be run on the host computer or
target hardware – functional testing tools convert test cases written in an appropriate
input format into a test harness that can be executed to collect results. As on-target
testing is crucial for DO-178C compliance (see Testing on-host vs. on-target on page 26),
you should ensure that any tools you evaluate create a test harness that can be run
on-target.
• Running the tests on-host or on-target to collect results – some functional testing
tools may integrate with your build system to automatically run generated tests
on your local machine or target system, saving effort as you don’t need to do this
manually. As on-target testing is crucial for DO-178C compliance (see Testing on-host
vs. on-target on page 26), you may want to ensure that any tools you evaluate can run
tests on your target hardware. Note that it should be possible for a tool to integrate
with your build system to run tests – if a tool you’re evaluating isn’t able to offer this
and requires you to integrate with a specific build system or compiler, for example, you
may want to consider other options.
• Analyzing results – functional testing tools will typically provide features that help you efficiently analyze test results and make any modifications needed, for example filtering options that let you quickly identify failing tests. Some tools may provide further features that can help you analyze results, such as integrating with any continuous integration system(s) you may be using so test results can be automatically collected during every build of the code, letting you track test results throughout the project. As mentioned above (Writing functional tests), the testing format(s) used by a tool can help reduce review effort. When selecting a tool, it is sensible to consider how the available features may support your verification.

• Merging results – you’ll likely produce test results from a large number of different builds. Some functional testing tools can help you merge results from those tests into a single report. If you use such a feature, you should consider whether the feature is qualified for DO-178C (see Qualifying verification tools (DO-330) on page 60) – if not, you’ll need to manually review the results.

• Traceability – traceability information is crucial in DO-178C compliance, and you may be asked to demonstrate traceability artifacts during SOI audits (see The importance of requirements traceability on page 16). Functional testing tools can make it much easier to trace between your artifacts. Some tools support traceability by integrating with requirements management tools so you can efficiently import requirements information to add to your test results and potentially export requirements information back to your requirements management tool.

• Exporting results and traceability results – when providing DO-178C compliance evidence for functional testing, you will need to include a summary of your test results, and you’ll likely have many such results. Most functional testing tools will include features to export your results much more efficiently than using manual processes. In some cases, exports will also include traceability results. If you plan to use export features to generate your final compliance results, you should consider whether any export features you plan to use are qualified for DO-178C (see Qualifying verification tools (DO-330) on page 60) – if not, you’ll need to manually review the exports.

Merging test results
Some functional testing tools, like the Rapita Verification Suite, enable users to merge results from separate test runs. An example use case of this feature is merging results collected from different testing strategies (system, integration and lower level testing) and different builds to combine test results into a single report. Along with test results, structural coverage and timing data can also be merged.
When evaluating functional testing tools, it is important to consider whether any features
you plan to use are qualifiable for DO-178C (see Qualifying verification tools (DO-330) on
page 60). If they aren’t and they need to be, using such a feature may be a false economy
as you may need to manually review any relevant artifacts generated by the tool.
Unit testing is not enough
TIP
While unit tests can’t be used to claim credit for DO-178C compliance,
they can be a good way to informally test your code and set up a suite
of regression tests that can help you identify issues earlier and reduce
overall verification effort.
In many software industries, functional testing is achieved in part by traditional unit testing
(testing a specific subcomponent) of the code. Traditional unit tests may, for example, test
that expected output values are returned from a subcomponent of the code when a range
of input values are tested. When developing software according to DO-178C guidance,
however, traditional unit tests are not an appropriate means of compliance – functional
tests used to claim compliance must be written to satisfy high or low-level requirements
of the system.
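As a minimal illustration of the difference, the sketch below shows a requirements-based functional test written directly in C. The requirement ID, function and limits are invented for this example, and a real project would usually express the test in a test tool's own format rather than raw code:

#include <assert.h>
#include <stdio.h>

/* Unit under test (hypothetical): clamps a commanded actuator position to
 * the range 0..100, as required by the invented requirement HLR-EXAMPLE-042. */
static int clamp_position(int commanded)
{
    if (commanded < 0)   { return 0; }
    if (commanded > 100) { return 100; }
    return commanded;
}

/* Requirements-based test for HLR-EXAMPLE-042 (hypothetical):
 * "Commanded positions outside 0..100 shall be limited to that range."
 * Normal range cases exercise expected inputs; robustness cases exercise
 * abnormal inputs that the requirement says must be tolerated. */
int main(void)
{
    /* Normal range */
    assert(clamp_position(0)   == 0);
    assert(clamp_position(50)  == 50);
    assert(clamp_position(100) == 100);

    /* Robustness: out-of-range commands */
    assert(clamp_position(-1)  == 0);
    assert(clamp_position(250) == 100);

    printf("HLR-EXAMPLE-042: PASS\n");
    return 0;
}

A traditional unit test would look similar in mechanics, but would be written against the structure of the code rather than traced to a requirement, which is why it cannot be claimed as compliance evidence.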
While unit test results can't be submitted for DO-178C compliance, it may still be useful to do unit testing: unit tests can be created at the same time the code is implemented (which may not be the case for requirements-based tests), they can be used to check that it is possible to reach 100% coverage of the implemented code, and they can contribute to a suite of regression tests (see It may pay to start verification early on page 22).
Ensuring completeness
As you approach your run for score (see The final run for score on page 63), you should
ensure that all of your requirements have been completely covered by tests. This means
not only that you should have tests traceable to every requirement, but also that you
should have all of the tests needed to fully test the requirement, including any robustness
tests.
Resource usage analysis in DO-178C
DO-178C software must typically meet non-functional requirements relating to its use of system resources. Examples of such requirements include:
• Memory requirements such as memory usage, stack usage and cache management
• Requirements related to contention in areas of the system such as bus loading and
input/output hardware
The ability of software to meet these requirements does not typically decompose easily
across components. Instead, resource usage requirements relate to emergent properties
of the fully integrated software. While you cannot completely determine these properties
before formal verification, you can take steps to control and monitor the ability to meet
these requirements (and hence likely save overall compliance effort) throughout the
software life cycle. As with most verification activities, it makes sense to begin considering
how you will analyze resource usage early in your DO-178C process.
DO-178C includes three objectives related to resource usage analysis, as shown in Table 2.
Table 2. DO-178C objectives relating to resource usage analysis (Source: RTCA DO-178C)
Budgeting and measuring execution time is the most challenging aspect of resource usage
management. We focus on specific issues relating to execution time analysis in Worst-case
execution time analysis on page 43.
Analyzing resource usage
TIP
As is the case with all verification activities, development and implementation of a robust,
efficient methodology for analyzing resource usage involves considering the activity
throughout the DO-178C life cycle, beginning with Planning (see Planning on page 6).
For DO-178C compliance, you will need to be able to check that your requirements are
feasible when you review them. As such, when you create and review your requirements
standard, you should ensure that it explains how to write requirements for resource
usage that you will be able to verify (see Your requirements should be verifiable on page
15). In your processes, you should also include the steps you will take if you discover that
resource usage requirements are infeasible. If you discover this early, you may be able to
negotiate with system design and safety processes, or the hardware design, to implement
an alternative approach. If you only discover these issues later in the process, however,
you will have fewer options, and may even be forced to re-allocate budgets and re-work
unrelated parts of the software to meet resource usage requirements.
When you develop high-level requirements (see High-level requirements on page 14), you
should identify which of your requirements relate to emergent resource usage. This will
allow you to track the status of your resource usage analysis by using traceability from
these requirements.
When you develop the architecture for your software (see Design on page 17), decomposing
your resource usage requirements and allocating them to each component in your
software architecture can help you to de-risk late discovery of resource usage issues.
There is no general analytical method to apply when
allocating resources to components, but the accepted
practice is to divide up your available resources with
a tentative budget for each component, and monitor
component resource usage against that budget.
Your Software Verification Plan should explain how
verification at lower levels relates to the achievement
of requirements at higher levels. For memory usage,
including all of the different linker sections as well
as stack and heap size budgets can help to manage
and trade off budgets throughout development.
Your target and/or tool chain may impose limits here
as well – for example, your linker may be unable to
handle data storage that spans multiple on-target
memory segments. If your software architecture
includes partitioning, you will need to demonstrate that partition breaches are prevented. Even if you are not using partitioning, you should still consider how to mitigate the effects of exceeding your component budget.
Figure 13 – RapiTask displaying CPU utilization metrics
Your review processes for your software architecture should include feasibility checks after
your architecture has been defined but before your software has been implemented. For
execution time analysis, you can use schedulability analysis to show that, if tasks fit within
their budget, the software as a whole will meet its deadlines. For static memory usage, you
can usually just add up the allocated sizes to show that they fit, but you should be aware
that some memory may need to be allocated in specific devices to be able to support
your software’s functionality. For dynamic memory usage, you should consider the call tree
within and between components as well as any dynamic allocations made during start-up.
In all cases, you should consider how you will demonstrate a repeatable analysis process
to be able to meet SQA objectives.
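As a rough sketch of the kind of early feasibility check described above, the budgets can be checked arithmetically long before the software exists. The component names, budget figures and total RAM below are invented for illustration:

#include <stdio.h>

/* Hypothetical per-component budgets: static RAM in bytes, plus an execution
 * time budget and period (microseconds) for each component's periodic task. */
struct component_budget {
    const char *name;
    unsigned    ram_bytes;
    double      exec_us;
    double      period_us;
};

int main(void)
{
    const struct component_budget budgets[] = {
        { "sensor_io",   16u * 1024u, 300.0,  5000.0 },
        { "control_law", 32u * 1024u, 900.0, 10000.0 },
        { "bus_comms",   24u * 1024u, 400.0, 20000.0 },
    };
    const unsigned total_ram = 96u * 1024u;   /* assumed RAM available to these components */

    unsigned ram_used = 0;
    double utilization = 0.0;
    for (unsigned i = 0; i < sizeof budgets / sizeof budgets[0]; i++) {
        ram_used    += budgets[i].ram_bytes;
        utilization += budgets[i].exec_us / budgets[i].period_us;
    }

    /* Static memory feasibility: the allocated sizes simply have to fit. */
    printf("Static RAM budgeted: %u of %u bytes (%s)\n",
           ram_used, total_ram, ram_used <= total_ram ? "fits" : "over budget");

    /* Execution time feasibility: a utilization figure like this feeds into a
     * schedulability argument for the chosen scheduling scheme. */
    printf("CPU utilization implied by budgets: %.1f%%\n", utilization * 100.0);
    return 0;
}

The same idea extends to individual linker sections and to stack and heap budgets, which is what allows budgets to be traded off between components as the design matures.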
Collating your current progress and achievements for verifying your software’s resource
usage in a Resource Usage Report throughout your DO-178C project can be an
efficient way of showing that your development meets the relevant DO-178C objectives
(see Resource usage analysis in DO-178C on page 39). This document should include links to
your resource usage requirements.
Worst-case execution time analysis

Worst-case execution time
The worst-case execution time (WCET) of a piece of software is the longest time that piece of software could possibly take to execute. As such, if the WCET of a piece of software is lower than its timing deadline, that piece of software will execute before its deadline every time it executes.

To adhere to DO-178C guidance, you will need to produce evidence that demonstrates that your software meets its timing deadlines as defined in high and low-level requirements. This is demonstrated by performing worst-case execution time (WCET) analysis.
Two main methods are used to analyze software WCET: static analysis and measurement-based approaches.
WCET analysis can be performed manually, but this approach is laborious, so automated WCET analysis tools are often used.

TIP
As you'll need to document the method by which you plan to analyze the worst-case execution time of your code in your DO-178C planning documents, you should evaluate methods early during your DO-178C planning. If you choose to use any automated analysis tools, you should also evaluate these early.
Static analysis vs. measurement-based approaches to WCET analysis
Two major approaches are used for worst-case execution time analysis: static analysis and
measurement-based approaches.
In the static analysis approach, a model of the system’s hardware and software is generated
and used to estimate* the worst-case execution time of software components. A major
benefit of using this approach is that when a model is available, the analysis can be run
automatically by verification tools, and that in this case there is no need to generate tests
for software timing behavior. A major drawback is the need for a model of the system
– if no model is available for the system, it can be difficult, time-consuming and costly
to generate an accurate model. Model accuracy is also a factor – the more complex
the system, the more costly and less feasible it is to generate an accurate model of the
system. Furthermore, a system may demonstrate behavior that only occurs infrequently
and is unlikely to be modelled, such as a system’s execution time being affected by the
temperature of the hardware.
* While the calculated worst-case execution time for each software component is a true worst-
case execution time for the model, it is intractable to create a model that accurately reflects any
but the simplest system, so the real worst-case execution time will differ from that estimated from
the model.
In the measurement-based approach, tests that exercise the software are run in an
environment representative of the final flight system (on-target tests), and the timing
behavior of the software is measured during these tests. The worst-case execution time
of the software is then estimated± from the tests that take the longest time to run. Major
benefits of this approach include that the results it generates are less pessimistic than
those generated using a static analysis approach and that the results reflect the actual
hardware environment. A major drawback is that extensive testing of the software must be
done (and must be done on the target system, which may have limited availability) to use
this method of testing. Another drawback is that as the results do not actually reflect the
software’s pathological worst-case execution time±, they may be optimistic, yielding a value
less than the actual worst-case execution time of the software. The better the testing, the
more assurance that results are not optimistic.
± This method does not actually identify the worst-case execution time of software, but its high
water mark execution time. The accuracy of the results depends on the quality of testing. Typically,
engineers performing timing analysis using this method will apply a safety margin on top of any
results generated using this method (e.g. add 20% to the calculated high water mark execution
time to generate an estimated worst-case execution time).
A hybrid approach can also be used that combines the best of the static analysis and
measurement-based approaches. This approach, used by some commercial WCET tools
such as RapiTime, uses results from both static analysis of the software and on-target
testing to generate results that are not optimistic and are less pessimistic than those
generated by static analysis approaches. In this approach, timing tests are run to generate
a high water mark execution time, and the software’s control flow is then used to estimate
a worst-case execution time from this.
Manual vs. automated WCET testing
Software worst-case execution time can be analyzed manually by developing and running
a test suite that intends to exercise worst-case execution time behavior for the software
(including considering variability in behavior across different execution paths through the
code) and reviewing results. This can take a huge amount of time, the process is prone
to human error, and if any changes are made to the code after the analysis is run, it may
incur significant extra rework. For this reason, most companies working towards DO-178C
compliance use automated verification tools to support the process, which significantly
reduce the effort needed to analyze software execution time behavior regardless of
whether they use a static analysis, measurement-based, or hybrid approach (see Static
analysis vs. measurement-based approaches to WCET analysis on page 43). Regardless of
whether WCET testing is done manually or supported through an automation tool,
typically, a safety margin is added to recorded high-water mark execution times to provide
additional assurance that the software will meet its timing deadlines.
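As a minimal sketch of the arithmetic involved (the measurements and the 20% figure below are invented; the margin a real project applies has to be chosen and justified for that project):

#include <stdio.h>

/* Hypothetical execution-time measurements for one function, in microseconds,
 * collected over many requirements-based test runs on the target. */
static const double measured_us[] = { 410.2, 395.7, 455.0, 430.1, 448.3 };

int main(void)
{
    /* High-water mark: the longest observed execution time. */
    double hwm = measured_us[0];
    for (unsigned i = 1; i < sizeof measured_us / sizeof measured_us[0]; i++) {
        if (measured_us[i] > hwm) {
            hwm = measured_us[i];
        }
    }

    /* Estimated WCET = high-water mark plus a safety margin (20% here). */
    double estimated_wcet = hwm * 1.20;

    printf("High-water mark: %.1f us\n", hwm);
    printf("Estimated WCET (HWM + 20%%): %.1f us\n", estimated_wcet);
    return 0;
}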
Zero instrumentation timing analysis
Two main approaches exist for automated timing analysis: techniques that apply instrumentation to source code to measure timing behavior during execution, and instrumentation-free approaches. The latter use methods such as interpreting branch traces collected from a platform during execution to understand the timing behavior of an application.

If you perform worst-case execution time analysis using an automated tool, your verification efficiency may be affected by the resources available on your system. This is because of the way that most worst-case execution time analysis tools work, by adding additional "instrumentation" code to your source code and then using a mechanism to store or capture additional data while your code runs. Limitations in the availability of the following resources may cause additional verification overheads:
• Code size – as most tools add additional code to "instrument" the native code, it may be possible that the tool cannot add instrumentation to all of your code at once, and you may need to split your analysis over multiple builds.
• Memory – one method of collecting timing results on-target is to have data written to an area of memory and later output the data from this area of memory. If you use such a mechanism to collect timing results but do not have enough available memory to collect data for your entire code base at once, you may need to split your analysis over multiple builds, pause your software's execution during testing to capture data before clearing the memory location so more data can be captured, or use a different method to collect data.
• Bandwidth – another commonly used method to collect timing results on-target is to have timing data written to an external device such as a debugger or datalogger and collected on-the-fly. If your system is not capable of writing accurate and thread-safe instrumentation data to an IO port fast enough, or of writing this data to the device capturing the data fast enough, you may need to split your analysis over multiple builds or use a different method to collect timing data.
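To make these resource overheads concrete, the sketch below shows roughly what source-level timing instrumentation looks like. Real tools generate and manage this automatically; the macro name, buffer size and timer function here are invented:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical hardware timer read - on a real target this would read a
 * free-running counter register. */
static uint32_t read_cycle_counter(void) { static uint32_t t; return t += 17; }

/* Each instrumentation point ("Ipoint") costs code size (the macro expansion)
 * and memory (a slot in the trace buffer) - the resources discussed above. */
#define TRACE_CAPACITY 1024
static struct { uint16_t id; uint32_t timestamp; } trace[TRACE_CAPACITY];
static unsigned trace_len;

#define IPOINT(point_id)                                        \
    do {                                                        \
        if (trace_len < TRACE_CAPACITY) {                       \
            trace[trace_len].id = (point_id);                   \
            trace[trace_len].timestamp = read_cycle_counter();  \
            trace_len++;                                        \
        }                                                       \
    } while (0)

static void control_step(void)
{
    IPOINT(1);          /* function entry */
    /* ... application code ... */
    IPOINT(2);          /* function exit */
}

int main(void)
{
    control_step();
    /* After the test run, the buffer is drained (e.g. over a debug link or by
     * writing to an IO port) and analyzed off-target. */
    for (unsigned i = 0; i < trace_len; i++) {
        printf("ipoint %u at %lu\n", (unsigned)trace[i].id,
               (unsigned long)trace[i].timestamp);
    }
    return 0;
}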
If you plan to use a commercial tool to perform worst-case execution time analysis, it
makes sense to consider how well suited the tools you evaluate are in handling each of
the limitations above, especially the ones you are likely to encounter on your system. The
amount of additional code needed for instrumentation can vary greatly between different
tools, so using some tools may reduce the number of builds you need to fully test your
code. Across a project, this alone can make a huge difference in the cost of on-target
testing. Some tools may include features that help you automatically apply partitions,
reducing the effort needed to verify your entire code base.
To demonstrate that requirements-based testing has sufficiently exercised the software, DO-178C requires two types of coverage analysis:
• Structural coverage analysis (see Structural coverage analysis on page 46), which
demonstrates that the DO-178C software has been thoroughly tested such that it can
be claimed that the software includes no unintended functionality
• Data coupling and control coupling coverage analysis (see Data and control
coupling coverage analysis on page 55), which demonstrates that the data coupling
and control coupling interfaces in DO-178C software have been sufficiently exercised
during testing
Structural coverage analysis
DO-178C requires evidence to demonstrate that the code has been covered by
requirements-based testing. Either source code or object code can be tested, and each
type of testing has its benefits and drawbacks, as described in Source code vs. object code
structural coverage analysis on page 51.
The expectation when applying for DO-178C compliance is that 100% of the code has
been executed during testing or that, in cases where this is not possible (e.g. for defensive
code that cannot be executed under normal conditions), the reason this code is untested
is justified.
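As an illustration of the kind of code that typically attracts such a justification (the example is invented), a defensive default branch may be impossible to reach through requirements-based tests:

#include <stdio.h>

typedef enum { MODE_IDLE, MODE_RUN } op_mode_t;

static const char *mode_name(op_mode_t mode)
{
    switch (mode) {
    case MODE_IDLE: return "idle";
    case MODE_RUN:  return "run";
    /* Defensive default: unreachable as long as 'mode' holds a valid
     * enumerator, so no requirements-based test can execute it under normal
     * conditions - its lack of coverage would be justified rather than tested. */
    default:        return "invalid";
    }
}

int main(void)
{
    printf("%s %s\n", mode_name(MODE_IDLE), mode_name(MODE_RUN));
    return 0;
}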
TIP
As you’ll need to document the method by which you plan to verify the structural
coverage of your software testing in your DO-178C planning documents, you
should evaluate methods early during your DO-178C planning. If you choose to
use any structural coverage analysis tools, you should also evaluate these early.
Structural coverage can be analyzed at different granularities. The broadest granularity of
analysis considers the proportion of functions in the code that have been called, while the
narrowest granularity considers the proportion of conditions in the code that have been
shown to independently affect the outcome in the decisions in which they occur.
Structural coverage testing can be done manually, but this is a laborious process, so
automated structural coverage analysis tools are often used. Coverage can either be
tested at the source code or object code level, and each type of testing has benefits and
drawbacks.
The need to perform structural coverage analysis for DO-178C compliance is listed in 3
DO-178C objectives – 6.4.4.2, 6.4.4.2a and 6.4.4.2b (Table 3).
Table 3 - DO-178C objectives relating to structural coverage analysis (Source: RTCA DO-178C)
For DAL C, independence is not required between the individuals implementing and verifying the code.
For DALs A and B, independence is required for all structural coverage analysis activities.
Granularities of structural coverage
DO-178C lists 3 different types of structural coverage analysis that may need to be
performed, depending on the Design Assurance Level (DAL) of the tested application.
These are:
• Statement Coverage – this analyzes whether statements in the code have been
executed, and the overall coverage percentage indicates the proportion of statements
that have been executed.
• Decision Coverage – this analyzes whether decisions in the code have been executed and observed to have both a true and false outcome, and the overall coverage percentage indicates the proportion of decisions that have been executed and observed to have both true and false outcomes.
• Modified Condition/Decision Coverage (MC/DC) – this analyzes whether each condition within a decision has been shown to independently affect the outcome of that decision, and the overall coverage percentage indicates the proportion of conditions that have been shown to do so.
Modified Condition/Decision Coverage (Continued)
Pairs of test vectors (shown below) can show that each condition (kettle, cup and
coffee) independently affects the outcome of the decision:
To fully test this code for MC/DC, you’d need to run tests 4, 6, 7 and 8.
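The code and the full table of test vectors from this example are not reproduced above, so the sketch below is a reconstruction that assumes the decision combines the three conditions with a logical AND; with the usual eight-row truth table numbering, it is consistent with the test numbers quoted above:

#include <stdbool.h>
#include <stdio.h>

/* Assumed form of the example decision: coffee can be made only if a kettle,
 * a cup and some coffee are all available. */
static bool can_make_coffee(bool kettle, bool cup, bool coffee)
{
    return kettle && cup && coffee;
}

int main(void)
{
    /* Truth table rows numbered 1-8 over (kettle, cup, coffee):
     *   4: F T T -> false    6: T F T -> false
     *   7: T T F -> false    8: T T T -> true
     * Pair (4,8) flips only 'kettle' and changes the outcome, pair (6,8)
     * flips only 'cup', and pair (7,8) flips only 'coffee', so tests
     * 4, 6, 7 and 8 together achieve MC/DC for this decision. */
    printf("test 4: %d\n", can_make_coffee(false, true,  true));
    printf("test 6: %d\n", can_make_coffee(true,  false, true));
    printf("test 7: %d\n", can_make_coffee(true,  true,  false));
    printf("test 8: %d\n", can_make_coffee(true,  true,  true));
    return 0;
}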
Structural coverage can be collected from system tests, integration tests, and lower level
(component) tests, so long as each type of testing represents requirements-based tests.
Higher level testing (such as system testing) generally produces more structural coverage
per test than lower-level testing (such as component testing) does. As such, it is common
practice to generate as much coverage as possible through higher-level tests before
running lower-level tests to cover sections of the code that haven’t already been covered.
A sensible approach to structural coverage analysis is to develop and run tests on-host
during the early stages of the project (see It may pay to start verification early on page 22),
then repeat them on a Model A/B system (see Testing on-host vs. on-target on page 49)
when the tests are complete and such a system is available. When collecting structural
coverage results from system level tests, it may only be possible to do so in Model B or C
environments as system tests need all devices in the system to be available.
Manual vs. automated structural coverage analysis
TIP
• Does the tool let you mark sections of your code as covered by manual
analysis?
• Will the tool integrate with your existing system, or do you need to modify your
system to use it?
• Can the tool help you merge results from different tests and builds?
• Does the tool integrate with any continuous integration tools you are using so
you can view your results throughout your project?
• Is the tool qualifiable for DO-178C, and are any tool features you may want to
use qualifiable (if needed)?
Structural coverage can be analyzed manually by following the execution trace of test
runs of the code, analyzing the source code and the test steps needed to test the code,
and marking every section of code that was covered. This can take a huge amount of
time, the process is prone to human error, and if any changes are made to the code after
the structural coverage analysis is run, it may incur significant extra rework, which some
structural coverage analysis tools can mitigate through various features. Because of the
above, most companies working towards DO-178C compliance use automated structural
coverage analysis tools, which significantly reduce the effort needed to analyze structural
coverage by automatically applying coverage instrumentation to code and analyzing the
code segments that are executed during testing (see How coverage instrumentation works
info box on page 51).
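As a much-simplified sketch of what that instrumentation looks like for statement coverage (real tools generate the probes automatically and use far more compact encodings; the macro and array names here are invented):

#include <stdio.h>

/* Coverage array: one flag per instrumented statement. A real tool would
 * generate the probe identifiers and array size, not a human. */
#define NUM_PROBES 4
static unsigned char covered[NUM_PROBES];
#define COV(id) (covered[(id)] = 1)

static int saturate(int x)
{
    COV(0); if (x > 100) {
        COV(1); x = 100;
    }
    COV(2); return x;
}

int main(void)
{
    COV(3); saturate(42);   /* this input never takes the x > 100 branch */

    /* After the tests run, the coverage data is read back and reported;
     * probe 1 remains uncovered for this test. */
    for (int i = 0; i < NUM_PROBES; i++) {
        printf("statement %d: %s\n", i, covered[i] ? "covered" : "NOT covered");
    }
    return 0;
}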
Structural coverage analysis tools can provide further benefits over manual analysis
approaches:
• They often let you justify the reason that any untested code has not been tested and
let you manage these justifications through an intuitive interface.
• They usually let you automatically export your analysis results into a format that is
appropriate for providing to a certification authority.
• They may be able to integrate with continuous integration tools so you can automatically
collect coverage results with every build.
• They can significantly reduce the number of tests you need to run to reverify your
code if it changes through features such as merging results from multiple reports and
code change analysis.
How coverage instrumentation works
Source code vs. object code structural coverage analysis
TIP
There are benefits to both source code and object code structural analysis,
but source code analysis is used much more often in DO-178C projects. When
selecting a methodology to collect coverage results, you should consider:
• Whether you have access to source code for all of the software in your project.
• The DAL of your software – for DAL A software, if you perform coverage
analysis on object code, you will need to provide additional evidence to show
that any additional code introduced by your compiler has been verified.
Structural coverage analysis can be performed on the source code, the object code, or
both. There are benefits and drawbacks to both types of analysis.
Coverage analysis is most often performed on source code, especially when using
automated tools for the analysis. One major benefit of analyzing source code is that
this often makes it easier to modify tests to produce 100% coverage (though these tests
should still be written to test any relevant requirement(s), not just to produce coverage!)
– while most testers can analyze source code to improve tests, this is not necessarily the
case for object code. Another benefit is that most structural coverage analysis tools are
designed to analyze source code and can’t analyze object code, and using these tools can
significantly reduce effort as we saw in Manual vs. automated structural coverage analysis on
page 50. A major drawback of source code coverage analysis is that the typical approach of
instrumenting the source code to collect coverage results can necessitate running multiple
builds, especially for resource-limited systems (see Coverage analysis on resource-limited
systems on page 52).
Object code structural coverage analysis is performed less often than source code
coverage analysis in the aerospace industry. A major benefit of this type of analysis is that
it can be performed even when there is no access to source code (for example if you are
using a third-party library for which you don’t have access to source code). Another benefit
is that some methods for object code coverage analysis do not require instrumentation of
the code, allowing testing of unmodified code. These methods typically impose restrictions
on compatible hardware environments, however. A major drawback of this type of analysis
is that if you choose to analyze coverage of object code, you will need to demonstrate that
your analysis provides the same level of confidence that you would get from analysis at the
source level. Further guidance on this was given in the CAST-12 position paper. Another
drawback is that less support is available from commercial tool vendors to perform
coverage analysis of object code as most structural coverage analysis tools cannot do this.
Regardless of whether you perform structural coverage analysis against source code or
object code, if you are working on a DAL A development, you will need to provide additional
evidence to show that any additional code introduced by your compiler has been verified
(see Verifying additional code introduced by compilers on page 54).
Coverage analysis on resource-limited systems
If you perform structural coverage analysis using an automated tool, your verification
efficiency may be affected by the resources available on your system. This is because
of the way that most structural coverage analysis tools work, by adding additional
“instrumentation” code to your source code and then using some mechanism to store or
capture additional data while your code runs.
Limitations in the availability of the following resources may cause additional verification overheads:
• Code size – as most tools add additional code to "instrument" the native code, it may be possible that the tool cannot add instrumentation to all of your code at once, and you may need to split your analysis over multiple builds.
• Memory – a commonly used method to collect structural coverage results on-target is to have coverage data written to an area of memory and later output the data from this area of memory. If you use such a mechanism to collect coverage results but do not have enough available memory to collect data for your entire code base at once, you may need to split your analysis over multiple builds, pause your software's execution during testing to capture data before clearing the memory location so more data can be captured, or use a different method to collect coverage data.
• Bandwidth – another commonly used method to collect coverage results on-target is to have coverage data written to an external device such as a debugger or datalogger and collected on-the-fly. If your system is not capable of outputting data fast enough, you may need to split your analysis over multiple builds or use a different method to collect coverage data.
If you plan to use a commercial tool to perform structural coverage analysis, it makes sense to consider how well suited the tools you evaluate are in handling each of the limitations above, especially the ones you are likely to encounter on your system. The amount of additional code needed for instrumentation can vary greatly between different tools, so some tools may require you to run fewer builds than others. Across a project, this alone can make a huge difference in the cost of on-target testing. Some tools may include features that help you automatically apply partitions, reducing the effort needed to verify your entire code base.

Zero instrumentation coverage analysis
Two main approaches exist for automated coverage analysis: techniques that apply instrumentation to source code to measure coverage achieved during test execution, and instrumentation-free approaches. The latter use methods such as interpreting branch traces collected from a platform during execution to measure the coverage achieved. Thus, they analyze the object code rather than source code for an application.
If you do need to run your analysis across multiple builds, it pays to consider how the tools
you are evaluating can help you combine those results into a single report that you can
analyze and provide to your certification authority. Some tools may include features to
combine results, while some may not. For tools that do include such features, the result
merging feature may or may not be included in the qualification kits provided by the tool
vendor. If it isn’t, you would either need to qualify the feature yourself or not use it for your
final run for score (see Qualifying verification tools (DO-330) on page 60).
If your project is DAL C or above, you will need to provide evidence that your requirements-
based testing and analysis has covered or justified 100% of your code. Most code should
be coverable by requirements-based tests, but often some code cannot be, or cannot
easily be, covered by such tests. Some causes of untestable, or not easily testable code,
include:
• Coding standard – your coding standard may require the use of untestable coding
structures such as default case statements that can never be executed, or destructors.
• Programming language – your choice of programming language may lead to the
generation of code structures that cannot be executed, such as default base class
implementations in object-oriented programming languages.
• Unused library code – a general-purpose library may include routines that are never
called during your tests.
• Initialization code and shutdowns – some code executes before you can set up the testing environment, making it difficult to test directly. Other tests may need to invoke resets of the target hardware during the test (e.g. to check that non-volatile memory is used correctly on a warm restart first pass), making it hard to record test results.
Verifying additional code introduced by compilers
TIP
If you are working on a DAL A project, you need to verify any additional code introduced by your compiler. Hence, it makes sense to take steps early to mitigate the effort needed to verify additional code introduced by your compiler.
Compilers transform source code into equivalent object code. Along the way, they can
add and remove code as needed. For example, they can perform optimizations to make
best use of the facilities offered by the target CPU and platform, which typically removes
branches, avoids repeated calculations, or creates copies of object code (inlining) instead
of creating calls to existing object code. Compilers can also add new code structures that
have no equivalent in the original source code. For example, additional code may be
generated that includes loops to copy data in and out of functions, decision structures for
array bounds checking, or calls to library functions to implement operators (e.g. modulus)
that are not supported in a CPU’s native instruction set.
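A small sketch of source constructs that commonly cause a compiler to emit object code with no direct source equivalent; whether any particular compiler does so, and what it emits, depends on the compiler, target and options, so the comments below describe possibilities rather than guarantees:

#include <stdint.h>
#include <stdio.h>

struct frame { uint8_t payload[32]; };

/* Passing and returning a structure by value may cause the compiler to emit
 * a copy loop or a call to a memcpy-like routine that does not appear in the
 * source code. */
static struct frame echo(struct frame f) { return f; }

/* On targets whose instruction set has no divide/modulus instruction, the
 * compiler may emit a call to a runtime support library routine here. */
static uint32_t wrap_index(uint32_t i) { return i % 7u; }

int main(void)
{
    struct frame f = { { 0 } };
    f = echo(f);
    printf("%u %u\n", (unsigned)f.payload[0], (unsigned)wrap_index(23u));
    return 0;
}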
If your software is DAL A, then you must verify the correctness of any additions made
to your code by your compiler. This is covered in objective 6.4.4.2.b of DO-178C. Such
verification can include analysis, review, and testing as necessary:
• Analysis: to identify the combinations of code constructs and compiler options that
you use that lead to the insertion of additional code. This typically involves extensive
review of the object code generated from sample input source files. Based on this
analysis, you may decide to update your coding standard to limit the use of some
language constructs. Accordingly, you should perform this analysis early on to ensure
that you are aware of the behavior of your compiler environment before verification
begins in earnest.
• Review: for example, to identify whether you are following your coding standard and
keeping the analysis up to date whenever you change compiler settings, or to show
that all added code falls into the categories that you identified in your analysis.
Verification specialists may be able to help with all three aspects of verifying additional
object code.
Data coupling and control coupling coverage analysis
DO-178C requires evidence that the data coupling and control coupling interfaces in
software have been tested. As we saw in Considering data coupling and control coupling
on page 17, data coupling interfaces and control coupling interfaces are points in the
code at which one component provides data to, or affects the choice to execute, another
component. DO-248C: Supporting Information for DO-178C and DO-278A provides the
following definitions for data coupling and control coupling, defining:
• Control coupling as “The manner or degree by which one software component influences
the execution of another software component.”
Data coupling and control coupling coverage are metrics of the completeness of testing
of data coupling and control coupling interfaces in the software. By demonstrating that
your testing has exercised 100% of the data coupling and control coupling interfaces in
the software architecture, you are also demonstrating the sufficiency of your software-
software and hardware-software integration testing.
TIP
As you’ll need to document the method by which you plan to verify the data
coupling and control coupling coverage you have achieved of your code in your
DO-178C planning documents, you should evaluate methods early during your
DO-178C planning. If you choose to use any automated analysis tools to produce
coupling coverage results, you should also evaluate these early.
The need to analyze coverage of data coupling and control coupling interfaces is listed in a
single DO-178C objective – 6.4.4.d (Table 4).
Table 4 - DO-178C objectives related to data coupling and control coupling coverage
(Source: RTCA DO-178C)
This objective applies to DAL A-C systems, and the analysis should be with independence
for DAL A and B systems.
Testing data coupling and control coupling coverage
TIP
The key to efficient data coupling and control coupling coverage analysis is defining
data coupling and control coupling interfaces with enough information with which
to write test cases to test those interfaces in your requirements-based tests. You
can then satisfy the objective for data coupling and control coupling coverage
analysis by analyzing the coverage you achieve during your tests.
For DO-178C compliance, you need to ensure that your requirements-based testing has
exercised 100% of the data coupling and control coupling interfaces in your code.
To make your data coupling and control coupling coverage analysis efficient, it is important
to include relevant interface requirements in your software architecture description
(see Considering data coupling and control coupling on page 17). These requirements
describe the control (including sequencing, timing and concurrency) and data interchange
dependencies between components. This will make it easier to write requirements for
both normal range and robustness testing (see Requirements-based functional testing on
page 33) that test the interfaces in your code. You can then collect coverage results during
these tests to demonstrate that the data coupling and control coupling interfaces in your
code are exercised during them. Typically, this will involve analyzing statement coverage of
relevant read/write and procedure call statements.
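A minimal sketch of what such interfaces look like in code (component and variable names are invented). The write and read of the shared data item form a data coupling interface, and the conditional call forms a control coupling interface; coverage of those statements during requirements-based tests is what the analysis examines:

#include <stdbool.h>
#include <stdio.h>

/* Data coupling: component A writes a value that component B reads. */
static double airspeed_kts;                        /* shared data item */

static void sensor_component_update(double raw)    /* component A */
{
    airspeed_kts = raw;                            /* write side of the interface */
}

/* Control coupling: component B decides whether component C executes. */
static void alert_component_raise(void)            /* component C */
{
    printf("overspeed alert\n");
}

static void monitor_component_step(void)           /* component B */
{
    bool overspeed = airspeed_kts > 250.0;         /* read side of the interface */
    if (overspeed) {
        alert_component_raise();                   /* call side of the interface */
    }
}

int main(void)
{
    /* Requirements-based tests would exercise both outcomes of the coupling:
     * the call taken and not taken, with representative data values. */
    sensor_component_update(180.0); monitor_component_step();
    sensor_component_update(300.0); monitor_component_step();
    return 0;
}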
Data coupling and control coupling coverage analysis can be achieved manually or can be
supported by the use of automated tools. If you use an automated tool to support your
data coupling and control coupling coverage analysis, you should ensure that the tool is
flexible and able to adapt to the way you have mapped between your requirements and
software.
Multicore processors are increasingly being used in avionics systems, for reasons including the following:
• Their improved size, weight and power (SWaP) characteristics allow system developers
to add more functionality to systems per unit volume, satisfying the increasing
expectations of the industry.
• It is slowly becoming more difficult to source single core processors as the majority of
the wider embedded software industry has moved towards using multicore processors
due to their better SWaP characteristics.
While using multicore systems offers great benefits, it also necessitates additional
verification activities (and additional planning activities) to meet the design assurance
required for DO-178C certification.
[5] https://www.faa.gov/aircraft/air_cert/design_approvals/air_software/cast/media/cast-32A.pdf
[6] A(M)C refers to either EASA AMC (Acceptable Means of Compliance) or FAA AC (Advisory Circular) documents.
While DO-178C itself offers no specific guidance on the use of multicore systems (they were rarely if ever used in DO-178C applications when DO-178C launched), it has been addressed in supplementary documents. First, supplementary guidance was given in CAST-32A [5]. This guidance is due to be superseded by A(M)C [6] 20-193 in 2021.
CAST-32A lists 10 additional objectives that should be followed when developing DO-178C
software hosted on multicore systems. The first 2 objectives relate to additional planning
activities:
• MCP_Planning_1 – this objective requires that the multicore system (hardware and
RTOS) is described and that development and verification methodologies are planned.
• MCP_Planning_2 – this objective requires that the shared resources in the multicore
system are identified along with any methods used to mitigate interference effects
from contention on those resources. It also requires the identification of any active
dynamic hardware features used on the multicore system such as DVFS (dynamic voltage and frequency scaling).
• MCP_Software_2 – this objective requires the data coupling and control coupling
of the multicore system to be verified (this is in addition to verification of the data
coupling and control coupling needed for all DO-178C projects, see Data coupling and
control coupling coverage analysis on page 55).
• MCP_Accomplishment_Summary – this objective requires evidence from the
verification activities above to be incorporated into a Software Accomplishment
Summary (Documenting verification results on page 29) so as to describe how each
CAST-32A objective was met.
Worst-case execution time analysis of the multicore software will in most cases be the
most challenging and time and resource-consuming activity when complying with
CAST-32A guidance. While the timing behavior of single core systems is deterministic, the
timing behavior of multicore systems is non-deterministic as it is affected by interference
as separate cores access shared resources (see below).
Multicore interference
While A(M)C 20-193 has not yet been published, the objectives of the new document are
expected to largely follow the guidance provided in CAST-32A, while also providing insight
from the certification authorities on approaches that can be used to comply with the
objectives.
6.10.3 Complying with CAST-32A / A(M)C 20-193
TIP
• How can your choice of hardware and RTOS help you mitigate interference?
• How will you perform worst-case execution time analysis on your multicore
system?
Due to the challenging nature of compliance with CAST-32A objectives, the large number
of tests that must be run for compliance, and uncertainty in the scope of how much testing
must be performed, compliance with the additional objectives needed to follow CAST-32A
can be very expensive. Due to this high potential cost, it pays to make multicore verification
as efficient as possible.
As is the case with single core systems, decisions on which hardware to use will have an impact on the cost of multicore certification. In recent years, silicon manufacturers, board suppliers and RTOS vendors have been developing and commercializing new mechanisms that can help to mitigate the effects of multicore interference.
The design decisions you make will also affect the worst-case execution time and predictability of timing behavior of your software. This includes, for example, your allocations of software functionality into components and of hardware I/O communication into software components.

Discover multicore verification
You can learn more about CAST-32A objectives and strategies for multicore verification at rapitasystems.com/cast-32a.
As you’ll need to describe how you plan to qualify any tools you use that replace
or mitigate DO-178C processes and for which you don’t manually verify the output
in your Plan for Software Aspects of Certification (PSAC), you should consider
tool qualification early during your DO-178C planning. If you plan to use any
commercial verification tools that need to be qualified, you would benefit from
finding out whether qualification kits are available for them.
Qualifying verification tools (DO-330)
As per DO-178C, you need to qualify any software tool you use that replaces or mitigates
any DO-178C process and for which the output is not manually verified. The qualification
process ensures that such software tools can be relied upon to produce appropriate and
repeatable results.
DO-178C itself describes when a tool must be qualified, but does not go into detail on how
this should be done. The DO-330: Software Tool Qualification Considerations [7] supplement
to DO-178C expands on this guidance by defining corresponding objectives for the
specification, development and verification of qualified tools.
DO-178C defines 3 sets of tool assessment criteria which, when combined with the DAL level of your software, are used to classify tools at one of 5 different Tool Qualification Levels (TQLs) as shown in Table 5.
Table 5 - Assessment criteria for tool qualification levels (Source: RTCA DO-330)
For example, a code generator tool that converts an architectural description of the
software into package or class structures fulfils criteria 1. Verification tools typically fall
into Criteria 3 (and are thus classified at TQL-5) as they neither create airborne software nor eliminate or reduce any processes other than the ones for which they are intended.
[7] The DO-330 document is available for purchase from various sources online, including https://standards.globalspec.com/std/1461615/RTCA%20DO-330
TIP
• Are all of the tool features that you want to use included in the tool qualification
kit?
• What additional support and services are available to help complete the tool user
activities?
Criteria 2 typically applies in cases such as model-based testing with a qualified code
generator. In this case, the task of verifying the generated code is eliminated or reduced in
favor of testing the model, and so the model-based testing tool meets criteria 2.
If you use any commercial verification tools to automate DO-178C verification processes
and don’t plan on manually reviewing output from the tools, they will need to be qualified
at the appropriate tool qualification level.
DO-330 defines some tool qualification activities that must be performed by the tool
developer and some that must be performed by the tool user (you).
Many commercial verification tools have supporting qualification kits, which include
evidence needed to demonstrate that the activities the tool developer must perform have
been performed. Generally, not all features of verification tools are qualified. For each
feature you intend to use and for which the way you intend to use it would require tool
qualification, you should check with the tool developer whether the feature is included in
the qualification kit. All qualification kits should include all of the evidence needed from the
tool developer. Some qualification kits may also include supporting material to help meet
tool user objectives. It may pay to ask tool vendors what the scope of their qualification kits
is and how they can help you qualify the tool.
You do not need to complete the verification of your software before SOI#3, but this
milestone is usually reached after:
• Around half or more of the total expected test cases and procedures have been developed and reviewed
• The approach you’ll take for verifying your verification process (e.g. structural coverage, data coupling and control
coupling coverage) is demonstrable (ideally with at least sample data being available)
• The approach you’ll take for verifying non-functional properties of your software,
including resource usage, is demonstrable (ideally with at least sample data being
available)
Similarly to SOI#2, the review will examine traceability information and also examine how
changes have been managed throughout the life cycle of your project.
The final run for score
As you prepare for your final run for score, you should ensure that all of your
requirements-based tests are passing and that you have 100% coverage. Doing
so will make your final run for score go more smoothly.
As you approach the end of the Verification stage of your project, you'll need to prepare for your final run for score, which will produce comprehensive verification results and compliance evidence for your final software baseline in the context of your final product. This means that the test environment should be the same as the final flight system, including all hardware and with all software components integrated. The final run for score is typically performed after the Verification milestone (SOI#3), but is necessary to produce the compliance artifacts you'll need for certification.

Did you know?
For large projects, running a final "run for score" may take weeks or even months.
The most efficient strategy when preparing for the final run for score is to ensure that
when you go ahead with it, you’ll collect all of the results you need to. This includes results
for all of your requirements-based tests and the evidence you need to show that you have
achieved 100% data coupling and control coupling coverage and structural coverage of
your final code (through either tests or manual analysis and justification) at the required
coverage granularities for your DAL.
Your requirements-based test results should be collected from the final code as it will be
written in your product. This means that, if you have instrumented your code to support
coverage analysis, you will need to run your requirements-based tests on code that has
not been instrumented and collect your coverage results separately.
7. The Certification milestone
(SOI#4)
SOI#4 is more than just a double-check of the release baseline – it is a detailed examination
of all the development and verification artifacts to ensure that a complete and consistent
design baseline is documented. This may involve the certification authority asking to see you build and load a specific baseline and run tests against that baseline to reproduce the results you reported previously, as well as desk reviews of documents and face-to-face audits. During the SOI, you may be asked to re-review any artifact or evidence
produced during the whole life cycle of your project, so it’s important to make sure that
all of your life cycle data is kept up to date with the current baseline through all stages of
development and verification.
8. The Authors
Martin Beeby
Head of Advanced Avionics Systems
Martin has been working in Software, Hardware, and Systems development for
over 30 years. He has worked on a wide range of avionics applications including
Engine Controllers, Display Systems, Communication Systems and Cabin Systems.
He also participates in industry working groups developing new guidance and
was part of the working group that defined DO-178C.
[email protected] [email protected]
+1 248-957-9801 (US)
+1 858-444-6762
+44 (0)1904 413945 (int.)
Rapita Systems provides on-target software verification tools and services to the
embedded aerospace and automotive electronics industries. Its solutions help to
increase software quality, deliver evidence to meet safety and certification objectives
(including DO-178C) and reduce project costs. With offices in the UK and US, it delivers its solutions globally.
Daniel Wright
Technical Marketing Executive
Daniel’s roles at Rapita Systems include creating and curating technical marketing
content, including collateral, blogs and videos, and capturing and maintaining
data that is used to further develop Rapita’s verification solutions to best meet
the needs of the global aerospace market. Daniel received a PhD in Structural
Biology from the University of York in 2016.
Zoë Stephenson
Head of Software Quality
About ConsuNova
Engineering Augmentation
Contact
Headquarters
9920 Pacific Heights Blvd, Suite 150
+1 858-444-6762
www.ConsuNova.com
UK Office
71-75 Shelton St., Covent Garden,
London WC2H 9JQ, UK
linkedin.com/company/ConsuNova