
EUROPEAN STANDARD
NORME EUROPÉENNE
EUROPÄISCHE NORM

EN 17640

October 2022

ICS 35.030

English version

Fixed-time cybersecurity evaluation methodology for ICT products

French version: Méthode d'évaluation de la cybersécurité pour produits TIC
German version: Zeitlich festgelegte Cybersicherheitsevaluationsmethodologie für IKT-Produkte
This European Standard was approved by CEN on 15 August 2022.

CEN and CENELEC members are bound to comply with the CEN/CENELEC Internal Regulations which stipulate the conditions for
giving this European Standard the status of a national standard without any alteration. Up-to-date lists and bibliographical
references concerning such national standards may be obtained on application to the CEN-CENELEC Management Centre or to
any CEN and CENELEC member.

This European Standard exists in three official versions (English, French, German). A version in any other language made by
translation under the responsibility of a CEN and CENELEC member into its own language and notified to the CEN-CENELEC
Management Centre has the same status as the official versions.

CEN and CENELEC members are the national standards bodies and national electrotechnical committees of Austria, Belgium,
Bulgaria, Croatia, Cyprus, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Iceland, Ireland, Italy,
Latvia, Lithuania, Luxembourg, Malta, Netherlands, Norway, Poland, Portugal, Republic of North Macedonia, Romania, Serbia,
Slovakia, Slovenia, Spain, Sweden, Switzerland, Türkiye and United Kingdom.

CEN-CENELEC Management Centre: Rue de la Science 23, B-1040 Brussels

© 2022 CEN/CENELEC. All rights of exploitation in any form and by any means reserved worldwide for CEN national Members and for CENELEC Members. Ref. No. EN 17640:2022 E

Contents

European foreword
Introduction
1 Scope
2 Normative references
3 Terms and definitions
4 Conformance
5 General concepts
5.1 Usage of this methodology
5.2 Knowledge of the TOE
5.3 Development process evaluation
5.4 Attack Potential
5.5 Knowledge building
6 Evaluation tasks
6.1 Completeness check
6.1.1 Aim
6.1.2 Evaluation method
6.1.3 Evaluator competence
6.1.4 Evaluator work units
6.2 FIT Protection Profile Evaluation
6.2.1 Aim
6.2.2 Evaluation method
6.2.3 Evaluator competence
6.2.4 Evaluator work units
6.3 Review of security functionalities
6.3.1 Aim
6.3.2 Evaluation method
6.3.3 Evaluator competence
6.3.4 Evaluator work units
6.4 FIT Security Target Evaluation
6.4.1 Aim
6.4.2 Evaluation method
6.4.3 Evaluator competence
6.4.4 Evaluator work units
6.5 Development documentation
6.5.1 Aim
6.5.2 Evaluation method
6.5.3 Evaluator competence
6.5.4 Work units
6.6 Evaluation of TOE Installation
6.6.1 Aim
6.6.2 Evaluation method
6.6.3 Evaluator competence
6.6.4 Evaluator work units
6.7 Conformance testing
6.7.1 Aim
6.7.2 Evaluation method
6.7.3 Evaluator competence
6.7.4 Evaluator work units
6.8 Vulnerability review
6.8.1 Aim
6.8.2 Evaluation method
6.8.3 Evaluator competence
6.8.4 Evaluator work units
6.9 Vulnerability testing
6.9.1 Aim
6.9.2 Evaluation method
6.9.3 Evaluator competence
6.9.4 Evaluator work units
6.10 Penetration testing
6.10.1 Aim
6.10.2 Evaluation method
6.10.3 Evaluator competence
6.10.4 Evaluator work units
6.11 Basic crypto analysis
6.11.1 Aim
6.11.2 Evaluation method
6.11.3 Evaluator competence
6.11.4 Evaluator work units
6.12 Extended crypto analysis
6.12.1 Aim
6.12.2 Evaluation method
6.12.3 Evaluator competence
6.12.4 Evaluator work units
Annex A (informative) Example for a structure of a FIT Security Target (FIT ST)
Annex B (normative) The concept of a FIT Protection Profile (FIT PP)
Annex C (informative) Acceptance Criteria
Annex D (informative) Guidance for integrating the methodology into a scheme
Annex E (informative) Parameters of the methodology and the evaluation tasks
Annex F (normative) Calculating the Attack Potential
Annex G (normative) Reporting the results of an evaluation
Bibliography


European foreword

This document (EN 17640:2022) has been prepared by Technical Committee CEN/CLC/JTC 13
“Cybersecurity and Data Protection”, the secretariat of which is held by DIN.

This European Standard shall be given the status of a national standard, either by publication of an
identical text or by endorsement, at the latest by April 2023, and conflicting national standards shall be
withdrawn at the latest by April 2023.

Attention is drawn to the possibility that some of the elements of this document may be the subject of
patent rights. CEN shall not be held responsible for identifying any or all such patent rights.

Any feedback and questions on this document should be directed to the users’ national standards body.
A complete listing of these bodies can be found on the CEN website.

According to the CEN-CENELEC Internal Regulations, the national standards organisations of the
following countries are bound to implement this European Standard: Austria, Belgium, Bulgaria, Croatia,
Cyprus, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Iceland, Ireland,
Italy, Latvia, Lithuania, Luxembourg, Malta, Netherlands, Norway, Poland, Portugal, Republic of North
Macedonia, Romania, Serbia, Slovakia, Slovenia, Spain, Sweden, Switzerland, Türkiye and the United
Kingdom.


Introduction

The foundation for a sound product certification is a reliable, transparent and repeatable evaluation
methodology. Several product or scheme dependent evaluation methodologies exist. The Cybersecurity
Act (CSA) [1] will cause new schemes to be created which in turn require (new) methodologies to
evaluate the cybersecurity functionalities of products. These new methodologies are required to describe
evaluation tasks defined in the CSA. This methodology also adds a concept, independent of the
requirements of the CSA, namely the evaluation in a fixed time. Existing cybersecurity evaluation
methodologies (e.g. EN ISO/IEC 15408 in combination with EN ISO/IEC 18045) are not explicitly
designed to be used in a fixed time.
Scheme developers are encouraged to implement the evaluation methodology in their schemes, whether general purpose schemes or dedicated (vertical domain) schemes, by selecting those aspects needed for self-assessment or third-party evaluation. Self-assessment may be performed at CSA assurance level “basic”; third-party evaluations may be performed at CSA assurance level “basic”, “substantial” or “high”, as required by the scheme. The evaluation criteria and methodology might also be subject to further tailoring, depending on the requirements of the individual scheme. This cybersecurity evaluation methodology caters for all of these needs: it has been designed so that it can (and needs to be) adapted to the requirements of each scheme.
This document provides the minimal set of evaluation activities defined in the CSA to achieve the desired
CSA assurance level as well as optional tasks, which might be required by the scheme. Selection of the
various optional tasks is accompanied by guidelines so scheme developers can estimate the impact of
their choices. Further adaptation to the risk situation in the scheme can be achieved by choosing among the
different evaluation tasks defined in the methodology or by using the parameters of the evaluation tasks, e.g.
the number of days for performing certain tasks.
If scheme developers choose tasks that are not defined in this evaluation methodology, it will be the
responsibility of the scheme developer to define a set of companion requirements or re-use another
applicable evaluation methodology.
Nonetheless, it is expected that individual schemes will instantiate the general requirements laid out in
this evaluation methodology and provide extensive guidance for manufacturers (and all other parties)
about the concrete requirements to be fulfilled within the scheme.
Evaluators, testers and certifiers can use this methodology to conduct the assessment, testing or
evaluation of the products and to perform the actual evaluation/certification according to the
requirements set up by a given scheme. It also contains requirements for the level of skills and knowledge
of the evaluators and thus will also be used by accreditation bodies or National Cybersecurity
Certification Authorities during accreditation or authorization, where appropriate, and monitoring of
conformity assessment bodies.
Manufacturers and developers will find the generic type of evidence required by each evaluation task
listed in the evaluation methodology to prepare for the assessment or evaluation. The evidence and
evaluation tasks are independent of whether the evaluation is done by the
manufacturer/developer (i.e. 1st party) or by someone else (2nd/3rd party).
Users of certified products (regulators, user associations, governments, companies, consumers,
etc.) may also use this document to inform themselves about the assurance provided by
certificates issued under this evaluation methodology. Again, it is expected that scheme developers provide
additional information, tailored to the domain of the scheme, about the assurance obtained by
evaluations/assessments under this methodology.


Furthermore, this methodology is intended to enable scheme developers to create schemes which
attempt to reduce the burden on the manufacturer as much as possible (implying additional burden on
the evaluation lab and the certification body).
NOTE In this document the term “Conformity Assessment body” (CAB) is used for CABs doing the evaluation.
Other possible roles for CABs are not considered in this document.

It should be noted that this document cannot be used “stand alone”. Each domain (scheme) needs to
provide domain specific cybersecurity requirements (“technical specifications”) for the objects to be
evaluated / certified. This methodology is intended to be used in conjunction with those technical
specifications containing such cybersecurity requirements. The relationship of the methodology
provided in this document to the activities in product conformity assessment is shown in Figure 1.

Figure 1 — Relationship of this document to the activities in product conformity assessment


1 Scope
This document describes a cybersecurity evaluation methodology for ICT products that can be
implemented using pre-defined time and workload resources. It is intended to be applicable for all three
assurance levels defined in the CSA (i.e. basic, substantial and high).
The methodology comprises different evaluation blocks including assessment activities that comply with
the evaluation requirements of the CSA for the mentioned three assurance levels. Where appropriate, it
can be applied both to third-party evaluation and self-assessment.

2 Normative references
There are no normative references in this document.

3 Terms and definitions


For the purposes of this document, the following terms and definitions apply.
ISO and IEC maintain terminological databases for use in standardization at the following addresses:
• IEC Electropedia: available at https://www.electropedia.org/

• ISO Online browsing platform: available at https://www.iso.org/obp

3.1
evaluator
individual that performs an evaluation

Note 1 to entry: Under accreditation the term “tester” is used for this individual.

3.2
auditor
individual that performs an audit

3.3
certifying function
people or group of people responsible for deciding upon certification

Note 1 to entry: Depending on the scheme the certifying function may use evidence beyond the ETR (3.13) as a basis
for the certification decision.

3.4
scheme developer
person or organization responsible for a conformity assessment scheme

Note 1 to entry: For schemes developed under the umbrella of the CSA the so-called “ad hoc group” helps the scheme
developer.

Note 2 to entry: This definition is based on and aligned with the definition of “scheme owner” in EN ISO/IEC 17000.

3.5
confirm
<evaluation verb> declare that something has been reviewed in detail with an independent
determination of sufficiency

[SOURCE: ISO/IEC 18045:2022, definition 3.2 with NOTE removed]


3.6
verify
<evaluation verb> rigorously review in detail with an independent determination of sufficiency

Note 1 to entry: Also see “confirm”. This term has more rigorous connotations. The term “verify” is used in the
context of evaluator actions where an independent effort is required of the evaluator.

[SOURCE: ISO/IEC 18045:2022, definition 3.22]

3.7
determine
<evaluation verb> affirm a particular conclusion based on independent analysis with the objective of
reaching a particular conclusion

Note 1 to entry: The usage of this term implies a truly independent analysis, usually in the absence of any previous
analysis having been performed. Compare with the terms “confirm” or “verify” which imply that an analysis has
already been performed which needs to be reviewed.

[SOURCE: ISO/IEC 18045:2022, definition 3.5]

3.8
ICT product
product with information and/or communication technology

Note 1 to entry: ICT covers any product that will store, retrieve, handle, transmit, or receive information
electronically in a digital form (e.g., personal computers, smartphones, digital television, email systems, robots).

3.9
Target of Evaluation
TOE
product (or parts thereof, if the product is not fully evaluated) with a clear boundary, which is subject to the
evaluation

3.10
FIT Security Target
FIT ST
documented information describing the security properties and the operational environment of the TOE
(3.9)

Note 1 to entry: The FIT ST may have different content, structure and size depending on the CSA assurance level.

3.11
FIT Protection Profile
FIT PP
implementation-independent statement of security needs for a TOE (3.9) type

[SOURCE: ISO/IEC 15408-1:2022, definition 3.68]

3.12
Secure User Guide
SUG
documented information describing the steps necessary to set up the TOE (3.9) into the intended secure
state (3.16)


3.13
Evaluation Technical Report
ETR
documented information describing the results of the evaluation

3.14
scheme-specific checklist
list of items defining the required level of detail and granularity of the documentation, specified by the
scheme

3.15
knowledge
facts, information, truths, principles or understanding acquired through experience or education

Note 1 to entry: An example of knowledge is the ability to describe the various parts of an information assurance
standard.

Note 2 to entry: This concept is different from the concept “Knowledge of the TOE”.

[SOURCE: ISO/IEC TS 17027:2014, 2.56, modified — Note 1 to entry has been added from
ISO/IEC 19896-1:2018, Note 2 to entry is new]

3.16
secure state
state in which all data related to the TOE (3.9) security functionality are correct, and security functionality
remains in place

3.17
self-assessment
conformity assessment activity that is performed by the person or organization that provides the TOE
(3.9) or that is the object of conformity assessment

[SOURCE: EN ISO/IEC 17000:2020, definition 4.3 with Notes and Examples removed]

3.18
evaluation task parameter
parameter required to be set when using this document to define how the evaluation task shall be
executed by the evaluator (3.1)

4 Conformance
Table 1 provides a reference for how the evaluation tasks should be chosen for a certain
scheme at the different CSA assurance levels:


Table 1 — Evaluation tasks vs. CSA assurance level conformance claim

The Basic, Substantial and High columns give the status of each task under the corresponding CSA assurance level conformance claim.

Evaluation tasks                    | Reference | Basic       | Substantial    | High
Completeness check                  | 6.1       | Required    | Required       | Required
Review of security functionalities  | 6.3       | Required    |                |
FIT Security Target Evaluation      | 6.4       |             | Required       | Required
Development documentation 1)        | 6.5       | Required    | Required       | Required
Evaluation of TOE Installation      | 6.6       | Recommended | Required       | Required
Conformance testing                 | 6.7       | Recommended | Required       | Required
Vulnerability review                | 6.8       |             | Recommended    | Required (or done with 6.9)
Vulnerability testing               | 6.9       |             | Recommended    |
Penetration testing                 | 6.10      |             |                | Required
Basic crypto analysis               | 6.11      | Recommended | Recommended 2) |
Extended crypto analysis            | 6.12      |             |                | Required

1) The scheme-specific checklist may be empty for a particular scheme, in which case this evaluation task would not apply.
2) If crypto functionality is at the core of the product, it is sensible for scheme developers to include this task, e.g. by defining appropriate product classes.
NOTE 1 FIT Protection Profile Evaluation is a dedicated process and not part of the evaluation of a TOE. While
the FIT PP specifies for which CSA assurance level it is applicable, the evaluation of a FIT PP is agnostic to this.

To implement the methodology for a certain scheme, the following steps shall be performed:
1. The scheme developer needs to perform a (domain) risk assessment, reviewing the domain under
consideration.

2. The scheme developer shall assign the Attack Potential (cf. Clause 5.4 and Annex F) to each CSA
assurance level used in the scheme.

3. For each CSA assurance level the scheme developer shall select those evaluation tasks required for
this level; these are the tasks marked “Required” in Table 1.

4. For each task chosen, the scheme developer shall review the parameters for this evaluation task and
set them suitably based on the risk assessment and the determined attack potential. For the
evaluation task “development documentation” this includes setting up a scheme-specific checklist
(which may be empty).
5. For each CSA assurance level the scheme developer shall review whether inclusion of the evaluation
tasks recommended for this level is sensible; these tasks are marked “Recommended” in
Table 1.

6. For each CSA assurance level the scheme developer shall review whether the chosen evaluation tasks are
sufficient for the scheme based on the determined Attack Potential. If not, the scheme developer shall
select additional evaluation tasks (e.g. from the same CSA assurance level), tasks from a higher CSA
assurance level or additional tasks not defined in this methodology. These may replace tasks already
chosen.

7. For each new or updated task chosen, the scheme developer shall review the parameters for this
task and set them suitably based on the risk assessment and the attack potential. A non-normative sketch of this task selection is given below.
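The steps above amount to configuring a set of evaluation tasks per CSA assurance level. Purely as an illustration of steps 3 and 5 in executable form, the following sketch encodes Table 1 as a lookup structure; the data structure, names and representation are assumptions made for this example, not defined by this document.

```python
# Non-normative sketch: Table 1 encoded as {task: {level: status}}.
# Names and representation are illustrative assumptions only.
TABLE_1 = {
    "Completeness check (6.1)": {"basic": "required", "substantial": "required", "high": "required"},
    "Review of security functionalities (6.3)": {"basic": "required"},
    "FIT Security Target Evaluation (6.4)": {"substantial": "required", "high": "required"},
    "Development documentation (6.5)": {"basic": "required", "substantial": "required", "high": "required"},
    "Evaluation of TOE Installation (6.6)": {"basic": "recommended", "substantial": "required", "high": "required"},
    "Conformance testing (6.7)": {"basic": "recommended", "substantial": "required", "high": "required"},
    "Vulnerability review (6.8)": {"substantial": "recommended", "high": "required"},  # at high: or done with 6.9
    "Vulnerability testing (6.9)": {"substantial": "recommended"},
    "Penetration testing (6.10)": {"high": "required"},
    "Basic crypto analysis (6.11)": {"basic": "recommended", "substantial": "recommended"},
    "Extended crypto analysis (6.12)": {"high": "required"},
}

def select_tasks(level: str) -> tuple[list[str], list[str]]:
    """Step 3: collect required tasks; step 5: collect tasks to review for inclusion."""
    required = [task for task, levels in TABLE_1.items() if levels.get(level) == "required"]
    review = [task for task, levels in TABLE_1.items() if levels.get(level) == "recommended"]
    return required, review

required, review = select_tasks("substantial")
print("Step 3, required:", required)
print("Step 5, review for inclusion:", review)
```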

If the scheme developers want to include development process evaluation/assessment, there is an
additional task for the scheme: the scheme developer needs to decide on the validity of process
evaluation/assessment results for future product evaluations. This means that the development related
tasks may be performed once, and the output of these tasks is subsequently used in several product
evaluations (e.g. as a precondition). Optionally, the auditors may define a list of artefacts which are to be
provided in each subsequent product evaluation, to show that the audited processes are still operational.
If schemes intend to include this development process evaluation/assessment re-use mechanism, they
shall ensure that reuse is limited to cases where the development process is the same in all evaluations
or assessments, i.e. the site(s), the knowledge of the people and the actual processes are identical or
equivalent. To achieve this, the initial evaluation or assessment may be scoped more broadly (i.e. cover a
larger part of the development). Additionally, the scheme shall limit the maximum period of time during
which the results are to be acceptable.
EXAMPLE A usual maximal age of these results is two years. This means that if the evaluation/assessment occurs
more than two years after the results of the development process evaluation have been produced, the development
process needs to be re-assessed/re-evaluated. If an existing SDL (Software Development Lifecycle) certificate is used,
its validity period can also serve as the validity period for this evidence.

For some evaluation tasks the scheme may require additional inputs from the developer, e.g. an
architectural overview. This additional input should be limited as much as possible, especially if this
documentation is typically only prepared for the assessment or evaluation, i.e. not readily available for
the TOE anyhow.
NOTE 2 Requiring design information might preclude some products from assessment or certification, as this
information might not be available due to the fact that some third-party components, including hardware, might be
proprietary without the possibility to obtain this design information. This is in general not applicable if white box
testing is performed (if this is an option in the scheme). Furthermore, composition of certified parts is an option to
mitigate this problem.

An example of the integration of this methodology in a scheme is given in Annex D.

5 General concepts
5.1 Usage of this methodology

Clause 5 describes elements of an evaluation methodology for fixed-time security certification and self-
assessment.


To instantiate a specific evaluation methodology based on this generic methodology, the required
evaluation tasks are selected depending on the intended CSA assurance level according to the CSA.
Depending on the domain, certain evaluation tasks are required, while others are optional (see Clause 4).
For sample-based evaluation tasks, the scheme needs to devise the sample size and sampling strategy as
well as the absolute or relative weight, i.e. the number of person days or the percentage of overall
evaluation time. Additional constraints on sampling might be provided, e.g. on the limits of sampling
depending on the CSA assurance level.
To use this methodology, it is not necessary to require all evaluation tasks described in Clause 6 for every
CSA assurance level. For example, a scheme designed for CSA assurance level “substantial” might require
a “Basic crypto analysis” evaluation task or might omit it and possibly integrate the necessary parts into
the “Conformance testing” task instead.
NOTE This document and the resulting scheme do not define the exact structure of the documents used or
produced by the evaluation, e.g. the FIT ST or the ETR. These are scheme dependent.

5.2 Knowledge of the TOE

The scheme will require different sets of information or information with different levels of detail. On
the one hand, this depends on the assurance required; on the other hand, additional information might
speed up certain evaluation tasks.
In general, the developer shall provide a FIT Security Target and a Secure User Guide. The latter may not
be needed if the TOE goes into the secure state as defined in the FIT ST automatically, i.e. if no further
guidance is necessary.
The scheme may require additional information for certain activities. This is indicated in the respective
evaluation tasks where applicable.
The evaluator shall have access to information (like standards, protocol specifications) regarding the
technology implemented in the TOE, where this information is publicly available.
NOTE Publicly available does not imply that it is available free of charge.

5.3 Development process evaluation

This methodology is concerned with ICT product evaluation, and a scheme might limit its evaluation tasks
to pure ICT product related activities. However, experience in ICT product certification has shown that it
is sensible and valuable to evaluate the development process as well. This concerns both the initial
development (e.g. regarding security during design and construction of the product, including site
security) as well as aspects beyond delivery of the product, e.g. vulnerability and update management
processes. To improve usage of audit results in future product evaluations, the auditor may define a set
of artefacts (e.g. meeting reports, listing of configuration management systems, filled in checklists) which
will then be requested in every subsequent product evaluation to verify that the processes audited have
been adhered to in this instance.
Generic standards for development process evaluations should be reused where possible, applicable or
available.
5.4 Attack Potential

To determine the necessary evaluation tasks and their parameters, it is necessary to define the expected
threat agent, characterized by a specific strength, also called Attack Potential. The vulnerability analysis
task of the evaluator may include penetration testing assuming the Attack Potential of the threat agent.
The following levels of Attack Potential are assumed in this document; the categorization is based on [2]
and [5].
— Basic


— Enhanced Basic

— Moderate

— High

NOTE 1 Attack Potential Moderate and High are unlikely to be addressable in a fixed-time evaluation scheme:
systematic availability of detailed documentation will probably be necessary to allow evaluators to be on par with
high level threat agents.

The CSA [1] defines three assurance levels: basic, substantial and high. Each level has an implicitly defined
attack scenario assigned. Scheme developers are advised to review the definitions in the CSA to align the
CSA assurance levels (as applicable to their domain) with the attack potential used in this methodology.
NOTE 2 The terms used in the context of attack potential are used as defined in this document and deviate from
the meaning of similar terms used in the CSA.

In the end evaluators will assess whether a threat agent possessing a given Attack Potential is able to
bypass or break the security functionality of the TOE.
The calculation of the attack potential is given in Annex F.
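The normative calculation itself is defined in Annex F and is not reproduced here. Purely as an illustration of the general shape of such a calculation, as known from ISO/IEC 18045-style ratings, the following sketch sums factor values and maps the total to a level. Every factor value and threshold below is an invented placeholder, not a value taken from Annex F.

```python
# Illustrative sketch only: Annex F defines the normative calculation.
# Factor values and thresholds below are placeholders, NOT Annex F values.
FACTORS = {
    "elapsed_time": {"<=1 day": 0, "<=1 week": 1, "<=1 month": 4, ">1 month": 10},
    "expertise": {"layman": 0, "proficient": 3, "expert": 6},
    "knowledge_of_toe": {"public": 0, "restricted": 3, "sensitive": 7},
    "window_of_opportunity": {"unnecessary": 0, "easy": 1, "moderate": 4},
    "equipment": {"standard": 0, "specialised": 4, "bespoke": 7},
}

def attack_potential(attack: dict[str, str]) -> str:
    """Sum the factor values of an attack and map the total to a level."""
    total = sum(FACTORS[factor][value] for factor, value in attack.items())
    if total < 10:
        return "Basic"
    if total < 14:
        return "Enhanced Basic"
    if total < 20:
        return "Moderate"
    return "High"

print(attack_potential({
    "elapsed_time": "<=1 week",
    "expertise": "proficient",
    "knowledge_of_toe": "public",
    "window_of_opportunity": "easy",
    "equipment": "standard",
}))  # -> "Basic" (total of 5 under the placeholder values)
```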
5.5 Knowledge building

Ensuring that each evaluation task produces the expected results requires certain knowledge and
competence by the evaluators. This knowledge is briefly described for each evaluation task and needs to
be refined when setting up the scheme.
To ensure that an overall evaluation produces the expected results, the competent evaluators need to
work as a good team. In particular, the evaluators who work on the documentation parts of the evaluation
need to collaborate very closely with the evaluators performing the actual testing; ideally, they are the same
(set of) persons, especially if the total time span of the evaluation is short.

6 Evaluation tasks
6.1 Completeness check
6.1.1 Aim

The aim of this evaluation task is to verify that all evidence required for evaluation is provided.
6.1.2 Evaluation method

This evaluation task is a completeness check of the evidence required by the scheme. No access to internal
documents is required. Depending on the TOE, access to publicly available specifications or other
documents distributed with or referenced by the TOE might be necessary.
6.1.3 Evaluator competence

The evaluators shall know the scheme requirements regarding evidence.


6.1.4 Evaluator work units

6.1.4.1 Work unit 1

The evaluators shall check that all evidence required for the evaluation is present. This includes a
sufficient number of TOEs.


Several samples of the TOE might be required, e.g. because a TOE may completely fail during testing, may
be rendered unusable by some tests, or because parallel testing (to speed up the evaluation) is implemented.
6.1.4.2 Work unit 2

The evaluator shall check that the manufacturer has provided the testing environment required to carry
out the TOE evaluation activities, if applicable to the testing activities and the TOE.
6.1.4.3 Work unit 3

If white box cryptography is part of the evaluation, the evaluators shall check that the evidence provided
covers the cryptography specified in the FIT ST and the requirements for evidence of the applicable
cryptographic work units.
This can be done by identifying all security functions which contain cryptography (as given in the FIT ST)
and checking that the corresponding evidence for the cryptographic part is present.
NOTE The evidence is usually source code, pseudo code or schematics and the crypto description/specification
provided in the FIT ST or in a separate document.
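As a minimal illustration of this cross-check, the following sketch maps each security function that contains cryptography (as it might be listed in a FIT ST) to the evidence items provided and flags gaps; all names and items are invented for this example.

```python
# Hypothetical sketch of the coverage check in 6.1.4.3; all names are invented.

# Cryptographic security functions as listed in the FIT ST (assumed example).
crypto_functions_in_st = {"SF.TLS", "SF.StorageEncryption", "SF.FirmwareSignature"}

# Evidence bundle received from the developer: function -> evidence items.
evidence = {
    "SF.TLS": ["tls_source.tar.gz", "tls_crypto_spec.pdf"],
    "SF.StorageEncryption": ["aes_pseudocode.txt"],
}

missing = sorted(f for f in crypto_functions_in_st if not evidence.get(f))
if missing:
    print("Evidence missing for:", ", ".join(missing))  # -> SF.FirmwareSignature
else:
    print("All cryptographic security functions are covered by evidence.")
```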

6.2 FIT Protection Profile Evaluation


6.2.1 Aim

The aim of this evaluation task is to verify that the FIT PP is well constructed, consistent and suitable as
a basis for future FIT STs.
NOTE Background for the FIT PP concept is provided in Annex B.

6.2.2 Evaluation method

This evaluation task is a complete review of the protection profile. No access to any actual TOE is required.
Depending on the FIT PP, access to publicly available specifications (e.g. standards) for technology
described in the FIT PP might be necessary.
6.2.3 Evaluator competence

The evaluators shall have knowledge of the typical TOEs implementing the FIT PP. They shall be able to
review documentation like standards or usage scenarios.
6.2.4 Evaluator work units

6.2.4.1 Work unit 1

The evaluators shall check that the FIT PP follows the structural requirements stated by the scheme.
The evaluators shall verify that the information contained in the FIT PP is free of
contradictions/discrepancies and inconsistencies.
The evaluators shall confirm that the FIT PP describes real security properties of the class of TOEs,
especially that it is not “misleading” with respect to the security properties.
The evaluators shall confirm that the FIT PP describes security properties relevant for the intended usage.
The evaluators shall verify that the security functions in the FIT PP are relevant for the intended use case.
The evaluators shall confirm that the assumptions stated in the FIT PP are realistic assumptions for the
intended use case of the class of TOEs.
The evaluators shall confirm that the set of threat agents is realistic considering the intended use case of
the class of TOEs.


The evaluators shall confirm that the boundaries of the class of TOEs and the boundaries of the evaluation
are clearly and unambiguously defined in the FIT PP.
If the scheme maintains cryptographic requirements for the intended CSA assurance level and product
class, then the evaluator shall examine the cryptographic specification provided in the FIT PP to
determine that the cryptographic mechanisms are suitable to meet their security objective as stated in
the FIT PP.
The evaluators shall verify that the FIT PP is understandable by the potential end customers.
“Understandability” in this context means that the language and depth of description in the FIT PP are
commensurate with the knowledge of the anticipated end customer, including the expected knowledge
about terms and concepts.
The evaluators shall confirm that the security functionality stated in the FIT PP is conformant to
applicable requirements, e.g. those stated in requirements in the scheme.
The evaluators shall check that all additional actions (to be later instantiated by the FIT ST author)
introduced in the FIT PP (called operations) are suitably marked.
NOTE “Suitably marked” implies that the operations can be easily found, and possible values (including free
text) are denoted for any variable defined in a given operation.

The evaluators shall verify that FIT PP operations with provided values do not lead to any contradiction
with the security objectives for any of the provided values.
6.3 Review of security functionalities
6.3.1 Aim

The aim of this evaluation task is to review that the security functionalities are sound and that no obvious
contradictions or omissions exist compared to best practices as defined by the scheme.
NOTE This evaluator task is the counterpart of the FIT ST evaluation for CSA assurance level basic.

6.3.2 Evaluation method

This evaluation task is a complete review of the security functionalities in the FIT ST. No access to the
TOE or internal documents is required. Depending on the TOE, access to publicly available sources might
be necessary.
6.3.3 Evaluator competence

The evaluators shall have knowledge of the TOE domain and current best security practices. They shall
be able to review documentation and research common sources for best practices for security
functionalities.
6.3.4 Evaluator work units

6.3.4.1 Work unit 1

The evaluator shall verify that the purpose of the TOE is well defined and clearly described.
The evaluators shall confirm that each security functionality mentioned in the FIT Security Target meets
current best security practices, both for the domain and in general.
NOTE It is up to the scheme (developer) to define the best practices, e.g. using interpretation groups.

The evaluators shall confirm that the security functionality mentioned in the FIT Security Target is
understandable by the potential end customers.


6.4 FIT Security Target Evaluation


6.4.1 Aim

The aim of this evaluation task is to verify that the FIT ST is well constructed, consistent and suitable as
basis for further evaluation.
NOTE This evaluator task is the counterpart of the Review of security functionalities (6.3) for CSA assurance
levels substantial and high.

6.4.2 Evaluation method

This evaluation task is a complete review of the FIT ST. No access to the TOE is required. Access to all
documents provided for evaluation is necessary. Depending on the TOE, access to publicly available
specifications might be necessary.
If cryptography is part of the evaluation, the cryptographic specification needs to be considered part of
the evidence as well.
6.4.3 Evaluator competence

The evaluators shall have knowledge of the TOE domain. They shall be able to review documentation.
6.4.4 Evaluator work units

6.4.4.1 Work unit 1a

This work unit is applicable if the FIT ST is not based on a FIT PP.
The evaluators shall check that the FIT ST follows the structural requirements stated by the scheme.
NOTE Annex A provides guidance on the typical contents of STs.

The evaluators shall verify that the information contained in the FIT ST is free of contradictions and
inconsistencies.
The evaluators shall verify that the information in the FIT ST is free of contradictions to other information
provided along with the TOE, especially the SUG and other (design relevant) documents mandated by the
scheme (if any).
The evaluators shall confirm that the FIT ST describes real security properties of the TOE, especially that
it is not “misleading” with respect to the security properties.
The evaluators shall verify that the FIT ST describes security properties relevant for the intended usage.
The evaluators shall verify that the security functions in the FIT ST are relevant for the intended use case.
The evaluators shall confirm that the assumptions stated in the FIT ST are realistic assumptions for the
intended use case of the TOE.
The evaluators shall confirm that the set of threat agents is realistic considering the intended use case of
the TOE.
The evaluators shall confirm that the scope and boundaries of the TOE are clearly and unambiguously
defined in the FIT ST.
If the scheme maintains cryptographic requirements for the intended CSA assurance level and product
class, then the evaluators shall check that the cryptographic algorithms listed in the FIT ST are contained
in the set of cryptographic algorithms approved by the scheme.
The evaluators shall review the cryptographic specification to determine that the cryptographic
mechanisms are suitable to meet their security objective.


The evaluators shall verify that the FIT ST is understandable by the potential end customers.
“Understandability” in this context means that the language and depth of description in the FIT ST are
commensurate with the knowledge of the anticipated end customer, including the expected knowledge
about terms and concepts.
The evaluators shall confirm that the security functionality stated in the FIT ST is conformant to
applicable requirements, e.g. those stated in the scheme.
6.4.4.2 Work unit 1b

This work unit is applicable if the FIT ST is based on a FIT PP.


The evaluators shall confirm that all operations, stated in the FIT PP, have assigned values in the FIT ST.
The evaluators shall verify that the information in the FIT ST is free of contradictions to other information
provided along with the TOE, especially the SUG and other (design relevant) documents mandated by the
scheme (if any).
6.5 Development documentation
6.5.1 Aim

The aim of this evaluation task is to assess the development documentation (including, if applicable,
development process documentation) and evidence available to the evaluator.
In some other methodologies this task is usually the most time-consuming, and schemes should keep this
task as light as possible and consider substituting it with other evaluation tasks (e.g. penetration testing)
wherever applicable.
6.5.2 Evaluation method

This evaluation task is a complete analysis of the entries given by the scheme-specific checklist. This can
include verifying
— the specification of the functionalities of the product, at a coarse level or down to the exact interface
parameters;
— development process documentation including vulnerability and patch management;
— the design of the product, at a subsystem or module level;
— an additional description of the security architecture (secure boot, self-protection, domain
separation, and so on);
— source code.

6.5.3 Evaluator competence

The evaluators shall have knowledge of the TOE domain and experience with secure development
processes.
6.5.4 Work units

6.5.4.1 Work unit 1

Evaluators shall check that the documentation is complete according to the scheme-specific checklist.
6.6 Evaluation of TOE Installation
6.6.1 Aim

The aim of this evaluation task is to verify that the TOE can be installed as described in the SUG.


NOTE The TOE might enter the secure state automatically. In this case, the aim of this evaluation task is to
verify that this final state is indeed the intended secure state.

6.6.2 Evaluation method

This evaluation task is a complete installation of the TOE. Unless the installation fails, no access to the
documented information besides the SUG is needed.
NOTE It is possible that the SUG is split over several documents, e.g. parts of it are in other manuals.

6.6.3 Evaluator competence

The evaluators shall be able to observe the installation of the TOE. The knowledge and skills required to
install the TOE shall be comparable to those of the persons defined as end users of the TOE. In
case the installation does not work as described, the evaluators may be required to have additional
domain expertise to complete the installation despite the defects in the guidance.
6.6.4 Evaluator work units

6.6.4.1 Work unit 1

The evaluators shall check that all systems necessary to install the TOE according to the SUG, and to use
it for the purposes described in the FIT ST, are present and correctly set up. The setup of those additional
systems might be done together with the developer and is itself not part of the evaluation. The
evaluation shall not proceed until the setup of additional systems, if any, has been successfully completed.
EXAMPLE The TOE needs a backend cloud service. In this case, the developer needs to provide test accounts
or a local (working) cloud installation to the evaluator.

The evaluator shall set up the TOE according to the SUG. If the SUG exists in several languages, the
guidance in the language of the FIT ST shall be used for this setup.
NOTE The language of the FIT ST is defined by the scheme or the CAB.

The evaluator shall record in the ETR whether the TOE is operating as described in the FIT ST (secure state)
after completion of the TOE setup.
If the TOE is not in the secure state then the evaluators shall use their general expert knowledge and the
remaining information (besides the SUG) to set up the TOE in the secure state. The evaluators shall record
the additional (or changed) steps compared to the SUG in the ETR.
6.6.4.2 Work unit 2

The evaluators shall check if the SUG follows the applicable scheme requirements (if any).
6.6.4.3 Work unit 3

The evaluator shall determine how hard it is to get the TOE out of the secure state or whether a warning
is presented to the user if the TOE is no longer in the secure state.
6.7 Conformance testing
6.7.1 Aim

The aim of this evaluation task is to verify that the TOE complies with the functional claims made in the
FIT ST.
6.7.2 Evaluation method

This evaluation task is a conformity testing of the TOE, which makes use of laboratory equipment.


Access to scheme defined external documents (e.g. standards, sector specific minimum requirements,
blueprints of architectural requirements) is required. Access to scheme defined internal documents is
required. Access to publicly available specifications is required. Access to the TOE (and possibly
background systems provided by the vendor) is required.
NOTE For CSA assurance level “high” usually the scheme mandates some additional information, e.g. an
architectural overview or a structural overview of the update mechanism.

6.7.3 Evaluator competence

The evaluators shall have substantial knowledge of the TOE domain. They shall be able to execute the
required conformity tests. Evaluators shall have the skills to perform all tests independently from the TOE
developer (and possibly to modify the tests), even if they are part of a larger test suite or tool under
normal circumstances.
For every interface where no automated tools are available or feasible, the evaluators shall have
substantial knowledge of such interfaces used in the TOE domain. They shall be capable of understanding
complex specifications and transforming them into tests.
6.7.4 Evaluator work units

6.7.4.1 Work unit 1

The evaluator shall devise a test plan. This test plan shall fulfil the scheme requirements regarding
coverage and depth and/or effort.
NOTE 1 The scheme can mandate full coverage, irrespective of the effort required, or some kind of sampling
strategy, usually within a time-limited period.

The decision on the sampling strategy is a scheme decision.


In the context of conformity testing, “risk based” has two meanings:
a) The likelihood of nonconformity (based on the professional judgement of the evaluators); and

b) The impact of a potential nonconformity for the TOE (i.e. its security functions and assets).

If risk-based sampling is used, the evaluators shall set up a suitable sampling strategy, taking into account
previous evaluation results, information received from the certification body and the experience with
similar TOEs. The evaluators shall further employ the entire documentation received for the TOE.
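One possible (non-normative) way to operationalize meanings a) and b) above is to score candidate tests by likelihood of nonconformity and impact, and to fill the fixed effort budget from the top. The scales, tests and budget in the following sketch are invented examples, not requirements of this document.

```python
# Illustrative sketch of one risk-based sampling strategy (not mandated by this
# document): rank tests by likelihood of nonconformity x impact, then fill a
# fixed effort budget. Scales and the budget are assumed example values.
candidate_tests = [
    # (name, likelihood 1-5, impact 1-5, effort in person-hours)
    ("TLS renegotiation handling", 4, 5, 6),
    ("Password policy enforcement", 2, 3, 2),
    ("Update signature verification", 3, 5, 8),
    ("Log rotation", 1, 1, 1),
]

budget_hours = 12  # fixed-time constraint set by the scheme (assumed value)

ranked = sorted(candidate_tests, key=lambda t: t[1] * t[2], reverse=True)
plan, used = [], 0
for name, likelihood, impact, effort in ranked:
    if used + effort <= budget_hours:
        plan.append(name)
        used += effort

print(plan)  # -> ['TLS renegotiation handling', 'Password policy enforcement', ...]
```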
If acceptance criteria are used, the evaluator shall verify for each test case that a “pass” implies fulfilment
of the respective acceptance criterion as well.
The evaluators shall use validated tools where possible to complete this task. Where tools are used, their
coverage shall be analysed and if necessary additional (manual) tests shall be performed.
NOTE 2 The rigor of the testing depends on the CSA assurance level chosen and needs to be defined by the
scheme.

6.7.4.2 Work unit 2

A test case can be completely covered by a tool, possibly considering only part of its output. If the tool
is incomplete, insufficient or not present, then manual tests need to be derived and the given functional
security requirements for the TOE shall be transferred into test cases by the evaluators. A test case shall
be described with at least the following characteristics:
• test description with test expectation, test preparation, and testing steps;

• test result;


• assessment (pass/fail).

The test expectation is the expected test result, which will occur if the component functions correctly.
The test expectation shall result from the component’s intended behaviour, possibly backed by an
acceptance criterion. The test result is the behaviour of the component detected during the testing steps.
The process model for transferring requirements to test cases consists of the following steps:
1. Determine the security requirements to be tested.

2. If applicable, analyse the corresponding acceptance criteria (see Annex C).

3. Define the appropriate test cases.

During conformity testing it shall be examined whether the chosen technical implementation produces
the expected results.
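The characteristics above map naturally onto a simple record; the following sketch is one possible non-normative representation of such a test case, with invented example content.

```python
# Non-normative sketch of the test case record described in 6.7.4.2.
from dataclasses import dataclass, field

@dataclass
class TestCase:
    description: str  # what is tested
    expectation: str  # expected result per the component's intended behaviour
    preparation: str  # test preparation
    steps: list[str] = field(default_factory=list)  # testing steps
    result: str = ""  # observed behaviour during the testing steps

    @property
    def assessment(self) -> str:
        """Pass if the observed result matches the expectation (cf. 6.7.4.3)."""
        return "pass" if self.result == self.expectation else "fail"

tc = TestCase(
    description="Reject login after 5 failed attempts",
    expectation="account locked",
    preparation="create test account",
    steps=["attempt 6 wrong passwords", "observe account state"],
)
tc.result = "account locked"
print(tc.assessment)  # -> pass
```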
6.7.4.3 Work unit 3

The evaluators shall execute the defined tests from the test plan to find nonconformities of the TOE with
respect to the FIT ST. For each security requirement mentioned in the test plan, the evaluators shall
perform the defined tests and record the results. For each failed conformity test, the evaluators shall
describe the failure exactly (test case, expected result, observed result) and record the results.
NOTE 1 This might require the evaluator to repeat the test with changed parameters.

If the test result corresponds with the test expectation, the evaluation will be successful (pass verdict). If
the test result deviates, the evaluation will be unsuccessful (fail verdict).
If no test case can be specified for a security requirement, e.g. if one implementation detail cannot be
addressed via an external interface, an alternative proof of correct implementation shall be given. This
can be done as part of a different evaluation method.
NOTE 2 Further details on validation and calibration of equipment can be found in EN ISO/IEC 17025, 6.4. [4]

The evaluators shall record the testing strategy and the results in the ETR.
6.8 Vulnerability review
6.8.1 Aim

The aim of this evaluation task is to review that the TOE is not vulnerable to publicly known
vulnerabilities for the intended use in the intended environment.
NOTE The scheme might define certain “pre-defined” sources for potential vulnerabilities (e.g. for lower
assurance levels).

6.8.2 Evaluation method

This evaluation task is a search for public vulnerabilities, possibly using the sources provided by the
scheme. The evaluators shall employ the available documentation received for the TOE. No access to
internal documents is required.
NOTE When considering threat-agents with attack potential “basic”, the evaluator will only take into account
publicly known vulnerabilities/vulnerability classes, previous evaluation results, information received from the
certification body and the experience with similar TOEs.


6.8.3 Evaluator competence

The evaluators shall have knowledge of the TOE domain. They shall be able to review vulnerability
descriptions to check if they apply to a TOE. If no pre-defined sources are mandated by the scheme, they
shall be able to efficiently search for public vulnerabilities in a wide range of sources. In case evidence
from the development process is present, the evaluators shall be able to review such evidence.
6.8.4 Evaluator work units

6.8.4.1 Work unit 1

If the scheme provides a pre-defined set of sources for vulnerabilities, the evaluator shall select those
potential vulnerabilities which technically might apply to the TOE, based on the FIT ST.
Otherwise, the evaluators shall search for vulnerabilities analysing the available evidence (public and
proprietary), including the FIT ST, taking into account the TOE technology.
The evaluators shall review if the vulnerabilities determined apply to the TOE.
NOTE 1 The review can be performed by comparing version numbers of TOE parts (e.g. libraries), checking for
countermeasures against the determined vulnerabilities (e.g. additional software parts like filters which prevent
certain types of “risky” data transport) or some limited testing.

NOTE 2 Only attacks which are possible in the operational environment specified in the FIT ST (including
assumptions on existing measures) are considered here.
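As an informative illustration of NOTE 1, the following sketch screens a pre-defined vulnerability source against the version numbers of TOE parts; all names and data (`Advisory`, `toe_components`, the identifiers) are hypothetical.

```python
# Informative sketch: screening a pre-defined vulnerability source against the
# TOE bill of materials by version comparison (all names are hypothetical).
from dataclasses import dataclass

@dataclass
class Advisory:
    component: str
    affected_versions: set[str]  # versions known to be vulnerable
    identifier: str              # e.g. a CVE-style identifier

toe_components = {"libexample": "2.4.1", "tls-stack": "1.0.9"}  # from the FIT ST

predefined_source = [
    Advisory("libexample", {"2.4.0", "2.4.1"}, "VULN-0001"),
    Advisory("tls-stack", {"0.9.8"}, "VULN-0002"),
]

applicable = [
    adv for adv in predefined_source
    if toe_components.get(adv.component) in adv.affected_versions
]
for adv in applicable:
    # A hit still requires review: countermeasures or the operational
    # environment may render the vulnerability inapplicable to the TOE.
    print(f"{adv.identifier}: review whether this applies to the TOE")
```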

6.8.4.2 Work unit 2

This work unit is only applicable if evidence from the development process is present.
The evaluators shall employ the available documentation received for the TOE. The document bundle
should comprise:
— Reports of vulnerability scans no older than three months;

— Vulnerability remediation actions (at least, a sample) annexed to the organization’s internal
vulnerability management policy where remediation delays are clearly indicated;

— Risk management analysis covering at least 12 months prior to the evaluation. The document should bear unambiguous version tracking labels to reflect the evolution of the document.

Additional relevant documents may be requested by the evaluators to consolidate the TOE’s adherence
to the security standards set out in the certification scheme. This evaluation task may also involve a
search for public vulnerabilities deploying the sources provided by the scheme.
6.9 Vulnerability testing
6.9.1 Aim

The aim of this evaluation task is to verify that the TOE is not vulnerable to publicly known vulnerabilities
for the intended use in the intended environment.
NOTE The scheme might define certain “pre-defined” sources for potential vulnerabilities (e.g. for lower
assurance levels).

6.9.2 Evaluation method

This evaluation task is a search for public vulnerabilities, possibly using the sources provided by the
scheme. The evaluators shall employ the available documentation received for the TOE. No access to
internal documents is required.
NOTE When considering threat agents with attack potential “basic”, the evaluator will only take into account
publicly known vulnerabilities/vulnerability classes, previous evaluation results, information received from the
certification body and the experience with similar TOEs.

6.9.3 Evaluator competence

The evaluators shall have knowledge of the TOE domain. They shall be able to execute the required tests.
If no pre-defined tests are available, the evaluator shall be able to devise tests for the vulnerabilities. If
no pre-defined sources are mandated by the scheme, they shall be able to efficiently search for public
vulnerabilities in a wide range of sources. In case evidence from the development process is present, the
evaluators shall be able to review such evidence.
6.9.4 Evaluator work units

6.9.4.1 Work unit 1

If the scheme provides a pre-defined set of sources for vulnerabilities, the evaluator shall select those
potential vulnerabilities which technically might apply to the TOE, based on the FIT ST.
Otherwise, the evaluators shall search for vulnerabilities analysing the available evidence (public and
proprietary), including the FIT ST, taking into account the TOE technology.
NOTE Only attacks which are possible in the operational environment specified in the FIT ST (including
assumptions on existing measures) are considered here.

6.9.4.2 Work unit 2

The evaluator shall devise a test plan. This test plan shall fulfil the scheme requirements regarding
coverage and depth and/or effort.
NOTE 1 The scheme can mandate a full coverage, irrespective of the effort required or some kind of sampling
strategy, usually within a time limited period.

If risk-based sampling is used, the evaluators shall set up a suitable sampling strategy, taking into account previous evaluation results, information received from the certification body and the experience with similar TOEs. The evaluators shall further employ the entire documentation received for the TOE. An informative sketch of such a risk-based selection is given after the notes below.
In the context of vulnerability testing, “risk based” has two meanings:
a) The likelihood of vulnerability, i.e. how likely such a vulnerability is (using the professional
judgement of the evaluators); and

b) The impact of a potential vulnerability for the security measures of the TOE.

NOTE 2 The decision on the sampling strategy is a scheme decision.

NOTE 3 The rigor of the testing depends on the CSA assurance level chosen and needs to be defined by the
scheme.

NOTE 4 This work unit provides a limited sampling strategy only. For the CSA assurance level “high” and if
further assurance is required, the evaluation task “Penetration Testing” is available (as replacement).
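The following informative sketch shows one way to order candidate tests by likelihood and impact under a fixed time budget; the 1-to-5 scoring scale, the effort figures and all names are hypothetical, and the actual strategy remains a matter of professional judgement and scheme requirements.

```python
# Informative sketch: risk-based ordering of candidate tests under a fixed
# time budget. The 1-5 scoring scale and all data are hypothetical.
candidates = [
    # (name, likelihood 1-5, impact 1-5, estimated effort in hours)
    ("default-credentials", 4, 5, 2),
    ("tls-downgrade", 3, 4, 6),
    ("debug-interface-exposed", 2, 5, 4),
]

budget_hours = 8
# Rank by likelihood x impact (descending), mimicking professional judgement.
ranked = sorted(candidates, key=lambda c: c[1] * c[2], reverse=True)

selected, used = [], 0
for name, likelihood, impact, effort in ranked:
    if used + effort <= budget_hours:
        selected.append(name)
        used += effort

print(selected)  # tests to perform within the sampling budget
```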

6.9.4.3 Work unit 3

For each listed potential vulnerability that is selected, the evaluators shall produce one or several test(s).
NOTE Only attacks which are possible in the operational environment specified in the FIT ST (including
assumptions on existing measures) are considered here.

A test case shall be described with at least the following characteristics:


— test description with test preparation, and high-level testing steps;

— expected test result;

— explanation, when test result is considered “fail”.

The expected test result is the outcome of the test, which will occur if the component functions correctly,
i.e. the vulnerability is not present. The expected test result shall result from the component’s intended
behaviour. The test result is the actually detected behaviour of the component during the testing steps
(see 6.9.4.4).
During vulnerability testing it shall be verified that the chosen technical implementation is not vulnerable to known potential weaknesses.
6.9.4.4 Work unit 4

For each test case mentioned in the test plan the evaluators shall perform tests and record the results.
The evaluators shall use validated tools where possible to complete this task. Where tools are used, their
coverage shall be analysed and if necessary additional (manual) tests shall be performed.
If the test result corresponds with the test expectation, the evaluation will be positive (pass). If the test
result deviates, the evaluation will be negative (fail).
For each failed vulnerability test the evaluators shall review the impact of the failure. If the impact cannot
be deduced from the test results, the evaluator shall calculate the relevant attack potential.
NOTE This might require the evaluator to repeat the test with changed parameters.

The evaluators shall record the testing strategy and the results in the ETR.
6.9.4.5 Work unit 5

This work unit is only applicable if evidence from the development process is present.
The evaluators shall employ the available documentation received for the TOE. The document bundle
should comprise:
— Reports of vulnerability scans no older than three months;

— Vulnerability remediation actions (at least, a sample) annexed to the organization’s internal
vulnerability management policy where remediation delays are clearly indicated;

— Risk management analysis covering at least 12 months prior to the evaluation. The document would
bear unambiguous version tracking labels to reflect the evolution of the document.

Additional relevant documents may be requested by the evaluators to verify that the TOE’s development
adheres to the security development standards applicable in the scheme.

6.10 Penetration testing


6.10.1 Aim

The aim of this evaluation task is to verify, by means of a sampling testing strategy based on the Flaw Hypothesis Methodology, that the TOE does not contain vulnerabilities from the class of known vulnerabilities.
NOTE This evaluation task is a superset of the “Vulnerability testing” task.

6.10.2 Evaluation method

6.10.2.1 General

This evaluation task is penetration testing of the TOE. It is based on lessons learned from similar products / technologies and on threat analysis, and uses methods such as attack trees and a search for publicly known vulnerabilities. Access to scheme-defined general information about the TOE domain is required. Access to scheme-defined internal documents may be required. Depending on the TOE, access to publicly available specifications might be necessary.
NOTE For the CSA assurance level “high” usually the scheme mandates some additional information, e.g. an
architectural overview or a structural overview of the update mechanism.

6.10.2.2 Flaw Hypothesis Methodology

The Flaw Hypothesis Methodology is a general approach to vulnerability assessment.


The approach consists of several main steps:
1. The evaluator first gathers information relevant to the security of the TOE. This includes the available
developer documentation, but also evidence produced by the evaluator themselves during previous
evaluation activities and other sources of information such as:

— public vulnerabilities applicable to the product;

— unexpected behaviour of the product during the conformance tests;

— inconsistencies discovered during the analysis of the FIT ST.

Relevant information includes the state-of-the-art for technologies of product types related to the
TOE:

— public academic state-of-the-art;

— standards;

— guidelines or requirements from the scheme.

2. The evaluator hypothesizes flaws in the product to be tested. The evaluator cannot possibly test all possible attacks on the product; therefore, they shall:

— exclude attacks that are already known to require too much effort to be considered (the effort
estimate typically relies on the attack potential calculation);

— exclude attacks that are not possible due to the context defined in the FIT ST (for example,
physical attacks will not be assessed when an assumption in the security target states that the
product can be physically accessed only by trusted users);

— prioritize the remaining attack scenarios, in order to assess whether flaws are actually present
(evaluator may choose for example to test first the flaws that are easier to exploit, or flaws that
are believed to be more frequent in the family of products under evaluation).

3. If needed, the evaluator will perform actual penetration testing to further characterize the attack
potential required to exploit the flaws (“exploit” means here “using the flaw to compromise an asset
or realize a threat as they are defined in the FIT Security Target”). The results of the testing may be
used again as evidence (see step 1) to update the hypotheses and reorient the test plan in a feedback
loop.

4. The evaluator eventually synthesizes their findings by describing the attack scenarios they consider
applicable to the product, and their cost in terms of attack potential.

It is not expected that evaluators follow strictly each of these steps, but they should be able to
demonstrate that their applied approach is equivalent. Moreover, certification schemes may further
refine the methodology, especially when considering a smaller subset of product types.
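The following informative sketch caricatures the four steps above as a feedback loop; every function is a stub standing in for evaluator activity, and all names and data are hypothetical.

```python
# Informative sketch of the Flaw Hypothesis Methodology loop (steps 1-4).
# Every function is a stub for evaluator judgement; data are hypothetical.
def gather_evidence(prior):                           # step 1
    return ["developer docs", "public vulnerabilities"] + prior

def hypothesize(evidence, excluded, max_potential):   # step 2
    # In reality the evidence drives the hypotheses; here they are fixed.
    flaws = [("weak-default-config", 2), ("physical-probe", 9)]
    return sorted(
        (f for f in flaws if f[0] not in excluded and f[1] <= max_potential),
        key=lambda f: f[1],                           # easiest to exploit first
    )

def pen_test(flaw):                                   # step 3 (stubbed verdict)
    return f"{flaw[0]}: not exploitable within attack potential"

findings, evidence = [], gather_evidence([])
# "physical-probe" is excluded, e.g. by a FIT ST assumption of physical trust.
for flaw in hypothesize(evidence, excluded={"physical-probe"}, max_potential=5):
    findings.append(pen_test(flaw))
    evidence = gather_evidence(findings)              # feedback loop to step 1

print("\n".join(findings))                            # step 4: synthesize in ETR
```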
EXAMPLE The JIL document [14] is an example of such a restriction.

6.10.3 Evaluator competence

Evaluators shall be able to execute the required attacks. They shall have substantial knowledge of the
TOE domain. They shall be proficient in performing penetration testing.
6.10.4 Evaluator work units

6.10.4.1 Work unit 1

The evaluators shall attempt to bypass or break the security functionality of the TOE. For this the
evaluators shall set up a risk-based sampling strategy following the Flaw Hypothesis Methodology, taking
into account publicly known vulnerabilities / vulnerability classes, previous evaluation results,
information received from the certification body and the experience with similar TOEs. The evaluators
shall further employ the entire documentation received for the TOE.
NOTE 1 Bypassing does not mean actually providing the code to circumvent the security functionality. It is
sufficient to demonstrate that such code could be provided with effort commensurate with the expected skills of the
threat agents (as per defined attack potential).

The evaluators shall record in the ETR:


— the pentesting strategy; and

— actual tests performed.

NOTE 2 The breadth and depth of the penetration testing depends on the assurance level according to the CSA and needs further refinement by the scheme.

6.10.4.2 Work unit 2

If the evaluator discovered a potential vulnerability which cannot be exploited by itself but the evaluator
judges that such an exploit could be developed with more effort than available in the sample strategy, the
evaluator shall produce an expert judgment of the effort expected in the ETR.
Additionally, the evaluator shall record in the ETR each listed potential vulnerability and the evaluator
verdict on exploitability and applicability to the product.

NOTE 1 Evaluators are not required to actually circumvent the security functionality, since exploits are costly to
develop. It is sufficient to estimate the effort required to circumvent it, based on the evaluator experience (and
scheme-specific guidance).

NOTE 2 The breadth and depth of the penetration testing depends on the attack potential and needs further refinement by the scheme.

6.11 Basic crypto analysis


6.11.1 Aim

The aim of this evaluation task is to validate that the cryptography (e.g. techniques, methods and services)
implemented by the TOE complies with the cryptographic specifications provided by the sponsor and the
guidelines provided by the scheme.
6.11.2 Evaluation method

This task is probabilistic conformance testing tailored to cryptographic protocols and algorithms. For
this, the evaluator needs to receive a cryptographic specification.
NOTE 1 The focus of this work unit is on conformance testing, please refer to 6.12.4.2 in respect to sampling.

NOTE 2 The cryptographic specification can be part of the FIT ST.

6.11.3 Evaluator competence

The evaluators need a substantial knowledge of cryptography and the cryptographic requirements of the
scheme. The evaluators shall be proficient in devising tests for cryptographic protocols and algorithms.
6.11.4 Evaluator work units

6.11.4.1 Work unit 1

The evaluators shall attempt to find nonconformities of the TOE in respect to the cryptographic
requirements mandated by the scheme. For this the evaluators shall set up a risk-based sampling
strategy, taking into account previous evaluation results, information received from the certification
body and the experience with similar TOEs. The evaluators shall further employ the entire documentation
received for the TOE as well as the scheme documents in respect to cryptography.
In the context of conformity testing, “risk based” has two meanings:
a) The likelihood of nonconformity, i.e. how likely a nonconformity is (using the professional judgement
of the evaluators); and

b) The impact of a potential nonconformity for the TOE.

The evaluators shall use validated tools where possible to complete this task.
EXAMPLE If properties on an interface are claimed to be random, a suitable tool can check if obvious statistical
defects in the random number generator or processor exist.

Positive test cases for cryptographic algorithms and schemes should comprise randomly generated known-answer tests and iterated Monte-Carlo tests if applicable. The test vectors should be generated or verified by an independent, known-good implementation and should not be static. Algorithms accepting variable-length inputs should be tested with inputs of different lengths (including corner cases like length zero).
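As an informative illustration, the following sketch applies the above to a hash function, using Python's hashlib as the independent known-good implementation; `toe_sha256` is a hypothetical stand-in for the TOE's interface.

```python
# Informative sketch: randomized known-answer and Monte-Carlo testing of a
# hash implementation against a known-good reference (hashlib).
import hashlib
import os

def toe_sha256(data: bytes) -> bytes:
    # Stub standing in for the TOE's hash interface; replace with a real call.
    return hashlib.sha256(data).digest()

# Inputs of different lengths, including the corner case of length zero and
# values around the 64-byte SHA-256 input block boundary.
for n in (0, 1, 55, 56, 64, 65, 1000):
    msg = os.urandom(n)  # vectors are freshly generated, not static
    assert toe_sha256(msg) == hashlib.sha256(msg).digest(), f"mismatch at length {n}"

# Iterated Monte-Carlo style test: feed each digest back as the next input
# and compare the TOE chain against a reference chain.
chain_toe = chain_ref = os.urandom(32)
for _ in range(1000):
    chain_toe = toe_sha256(chain_toe)
    chain_ref = hashlib.sha256(chain_ref).digest()
assert chain_toe == chain_ref, "Monte-Carlo iteration mismatch"
print("known-answer and Monte-Carlo tests passed")
```

The lengths 55, 56, 64 and 65 are chosen around the 64-byte SHA-256 input block boundary, where implementation errors in padding are most likely to surface.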
Positive testing of cryptographic protocols may be done by communicating with an independent, known-
good implementation. The cryptographic algorithms and schemes used by the protocol should be tested
separately as described above.
Negative test cases for cryptographic algorithms and schemes should be specifically crafted to trigger
certain error conditions (e.g. illegal-value errors, out-of-bounds errors, padding errors, etc.).
Negative testing of cryptographic protocols should comprise test cases for unspecified configurations
(unspecified ciphers, protocol version downgrade, etc.), test cases for illegal inputs (e.g. malformed
packets, oversized packets, etc.), and test cases for illegal transitions in the protocol’s state machine (e.g.
insertion of unexpected packets, omission of required packets, etc.).
Random sources should be tested using a statistical test suite.
NOTE Statistical test suites can only detect very specific statistical defects of a random source (e.g. caused by
implementation errors). It is not possible to assess the quality of random numbers with automatic tests.
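As an informative illustration, a minimal frequency (monobit) test in the spirit of common statistical suites (e.g. NIST SP 800-22) is sketched below; as the NOTE states, passing such a test only rules out gross defects and says nothing about cryptographic quality.

```python
# Informative sketch: frequency (monobit) test of a random source.
import math
import os

def monobit_p_value(bits: bytes) -> float:
    n = len(bits) * 8
    ones = sum(bin(b).count("1") for b in bits)
    s = abs(2 * ones - n) / math.sqrt(n)
    return math.erfc(s / math.sqrt(2))  # p-value under the fair-source hypothesis

sample = os.urandom(125000)  # 1,000,000 bits from the source under test
p = monobit_p_value(sample)
print(f"p-value = {p:.4f}: "
      f"{'no obvious defect' if p >= 0.01 else 'defect suspected'}")
```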

The evaluators shall record the testing strategy and the results in the ETR. If certain algorithms, interfaces or cryptographic functions have not been analysed during sampling, the evaluators shall provide a rationale for this.
6.12 Extended crypto analysis
6.12.1 Aim

The aim of this evaluation task is to verify that the cryptography used complies with the cryptographic
specification provided by the applicant and the state-of-the-art cryptography as defined by the scheme.
It consists of mechanism verification (correct choice) and implementation verification (review for errors
in the implementation).
The scheme developer will specify which state-of-the-art should be considered by the evaluator (e.g. SOG-
IS crypto [15], French RGS [10], German TR-02102 [11], Spanish CCN-STIC 807 [12]).
6.12.2 Evaluation method

The task consists of two steps: a theoretical analysis based on the documents provided, and a verification and vulnerability analysis based on the implementation representation and the TOE.
The theoretical analysis rests mainly on the documents provided. Its objective is to detect any vulnerabilities in the cryptographic mechanisms used to achieve the product's security objectives in its operating environment. In the case of cryptographic services, these mechanisms' resistance shall be analysed in their usage context.
The second step consists of a verification of the implementation’s conformity for all cryptographic
algorithms used and a vulnerability analysis of the source code.
For this, the evaluator needs to have at their disposal the cryptographic specification document used in the first step and the source or pseudo code of the cryptographic routines or functions (including the calling parameters where applicable). If the cryptography is done in hardware, the hardware specification (schematics) is the equivalent of source code.
NOTE The focus of this work unit is on independent analysis which goes beyond conformance testing of 6.11,
please refer to 6.12.4.2 in respect to sampling.

6.12.3 Evaluator competence

The evaluators need a substantial knowledge of cryptography and the cryptographic requirements of the
scheme. The evaluators shall be proficient in analysing the implementation representation and devising
tests for cryptographic protocols and algorithms.
6.12.4 Evaluator work units

6.12.4.1 Work unit 1

The evaluator shall check first that the documents which were delivered to carry out the analysis are
consistent with the other documents provided to evaluate the product.
The evaluator shall analyse the specification of all the following types of mechanisms of the TOE,
according to the state-of-the-art chosen for the evaluation:
1. Cryptographic algorithms, modes of operation and relevance with regard to objectives (confidentiality, integrity, availability, authenticity, performance, and so on);

2. Cryptographic protocols designed to achieve the expected security functionality;

3. Procedures and mechanisms for key generation and key management.

Additionally, on the basis of the (developer's) description of the random number generator (RNG) the
evaluator shall identify the type of the RNG, e.g. whether it is a deterministic RNG, a physical RNG with /
without cryptographic post-processing algorithm or a non-physical true RNG. For deterministic RNGs (if
applicable) the evaluator may confirm the conformance or a partial conformance to recommendations of
RNGs in the applied scheme.
The evaluator shall produce an analysis report, which shall indicate any potential weaknesses or
vulnerabilities detected. The evaluator may issue recommendations regarding the use of cryptography
(e.g. if mechanisms might be superseded soon by better ones).
6.12.4.2 Work unit 2
The evaluator shall consider the results of the previous analysis (where applicable) to verify the
conformity of the implementation and to look for any vulnerabilities. The evaluator shall determine
whether these vulnerabilities can actually be exploited in the product's operating environment. The
analysis covers:
— Cryptographic implementation conformity (including for the random number generator); the form of this analysis depends on the evidence elements available;
— Cryptographic implementation vulnerability analysis (consider if the implementation allows for
attacks that possibly impact the TOE's security objective, independent of the intrinsic algorithm
resistance).

The analysis is performed by testing and source code analysis.


It contains three main tasks:
1. The evaluators shall examine the implementation representation to determine if the implementation
is compliant with the cryptographic specification’s document used in Work Unit 1.
2. The evaluators shall perform conformance testing tasks (Monte-Carlo Test, Known Answer Test, etc.) for all implemented cryptographic mechanisms used in the system in order to establish that the implementation correctly realizes them (see 6.11.4.1 for more details).
3. The evaluators shall perform a source code analysis and determine whether the vulnerabilities detected can actually be exploited in the product's operating environment. Depending on the type of the analysis demanded and given that the evaluators have limited time, they would probably have to choose which vulnerabilities to investigate.

For this the evaluators shall set up a risk-based sampling strategy, taking into account previous
evaluation results, information received from the certification body and the experience with similar
TOEs. The evaluators shall further employ the entire documentation received for the TOE as well as the
scheme documents in respect to cryptography.
In the context of conformity testing, “risk-based” has two meanings:
a) The likelihood of nonconformity, i.e. how likely a nonconformity is (using the professional judgment
of the evaluators); and
b) The impact of a potential nonconformity for the TOE.

The evaluators shall examine the implementation representation to determine that the implementation
is compliant with the cryptographic specification.
If the scheme maintains requirements for secure coding, then the evaluator shall examine the
implementation representation to determine that it adheres to secure coding guidelines.
The evaluators may use automated tools to facilitate human analysis (e.g. static/dynamic program
analysis tools).
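As an informative illustration of such tool support, the following sketch flags candidate hard-coded secrets for subsequent human review; the patterns and paths are hypothetical, and real static-analysis tools are far more capable.

```python
# Informative sketch: a crude automated aid for source review, flagging
# candidate hard-coded secrets for human analysis. Patterns are illustrative.
import re
from pathlib import Path

SUSPECT = re.compile(
    r"""(key|secret|passwd|password)\s*=\s*["'][A-Za-z0-9+/=]{8,}["']""",
    re.IGNORECASE,
)

def scan(source_root: str) -> list[tuple[str, int, str]]:
    hits = []
    for path in Path(source_root).rglob("*.c"):  # hypothetical C code base
        text = path.read_text(errors="ignore")
        for lineno, line in enumerate(text.splitlines(), 1):
            if SUSPECT.search(line):
                hits.append((str(path), lineno, line.strip()))
    return hits  # each hit still requires evaluator judgement

for path, lineno, line in scan("./toe-src"):
    print(f"{path}:{lineno}: {line}")
```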
The evaluators shall augment the analysis of the implementation representation by appropriate
conformance tests as specified in 6.7.
The evaluators shall record the testing strategy and the results in the ETR. If certain algorithms, interfaces or cryptographic functions have not been analysed during sampling, the evaluators shall provide a rationale for this.
If recommendations on the use of a cryptographic service were issued during the previous analysis, the
evaluator shall check that these recommendations are clearly indicated in the product usage and/or
administration guides.

Annex A
(informative)

Example for a structure of a FIT Security Target (FIT ST)

A.1 General
The exact contents and structure of a FIT ST should be refined by the scheme. This annex summarizes a typical outline of a FIT ST and can be used as a starting point for schemes.
When developing the requirements for the scheme, care should be taken to impose the least effort possible on the developer. A typical FIT ST should be “easy” for developers to prepare. It should remain flexible but can also be adapted to the domain of the scheme where necessary.
A FIT ST targeting the CSA assurance level “basic” may contain less content, e.g. only the identification
and the list of security functions.

A.2 Example structure


A typical FIT ST may have the following sections.
1) Introduction

a) Context of this document

b) Product identification

c) Reference / Acronyms

2) Product description

a) General description

b) Features

c) Product usage

d) Operating environment

3) Security perimeter

a) Users

b) Assumptions

c) Assets

d) Threat model: Threat agents and threats

e) Security functions

f) Rationale

4) Limits of evaluation

A.3 Typical content of a FIT ST


The following paragraphs describe typical content in a FIT ST, the references are to the example structure
given in A.2.
Section 1 should give some background (e.g. the scheme), identify the product in a unique way and explain all terms used, as well as provide (external) references for background information. This includes, if applicable, information on re-use of already certified components (“composition”).
Section 2 should describe the product, both in general and in terms of its features. It should also clearly state how the product is intended to be used, including the environment (e.g. other products, physical environment, expected personnel).
In Section 3 the product security context shall be documented, i.e. all the assumptions about the environment shall be documented in order to achieve the security level for which the product is designed. This could include:
a) The roles/users relevant for the product.

b) Physical or cyber security provided by the environment where the product will be deployed
(“Assumptions”).

c) Security assets that the product is protecting.

d) Potential impact (for example, loss of life, injury, loss of production, etc.) (threat agents and threats).

e) Technical capability to mitigate the identified threats. (This shall cover all relevant interfaces of the product, local and remote.)

f) A rationale for how all of this is consistent, e.g. why a certain technical capability (security function) in this environment addresses the described threats. This could be in the form of one or more matrices (threat matrix).

Section 4 should describe what the limits are, e.g. functionality explicitly out of scope or threats which
are typically addressed but are of no relevance in this environment.

Annex B
(normative)

The concept of a FIT Protection Profile (FIT PP)

B.1 General
This annex describes the basic concepts of FIT Protection Profiles. For the evaluation of FIT Protection
Profiles, please refer to 6.2.

B.2 Aim and basic principles of a FIT PP


A FIT Protection Profile is an agnostic way to describe the environment and security functionality for a
family of TOEs. FIT Protection Profiles are usually provided by a group of users (e.g. an industry
association), by governmental bodies, or Standards Development Organisations. Product developers can
base their FIT ST on the FIT PP and claim conformance to it, thus showing customers that their product
fulfils their requirements and is targeted for their intended environment.
EXAMPLE A hospital association could write a FIT PP for a security gateway for hospitals, defining the needed
functionality as well as the typical hospital environment for operation.

Customers benefit from FIT PPs as they can be sure their requirements (written in the PPs) are fulfilled
if a certified product complies with this FIT PP.
Since the FIT PPs are usually certified independently before the product is evaluated, the effort for
evaluating the FIT ST is reduced, because many of the necessary evaluation steps have already been
performed for the FIT PP.
While many parts of the FIT PP describe certain (fixed) parameters, some options are usually wanted.
EXAMPLE A FIT PP for a firewall might mandate certain protocols but also provide the option to include additional protocols.

Therefore, a FIT PP might provide certain “operations”, i.e. a set of actions by the FIT ST author. Usually this means that the FIT ST author can provide additional values or select some items from a list.
The exact contents and structure of a FIT Protection Profile (FIT PP) should be refined by the scheme,
based on the scheme definitions for FIT STs (see Annex A).

B.3 Guidance for schemes to implement the FIT PP concept


Schemes implementing the FIT PP concept should ensure that the elaboration includes all relevant
stakeholders. If the FIT PP is provided by industry it might make sense to only accept FIT PPs where this
involvement has been ensured. Alternatively, schemes could provide a process to develop FIT PPs which
automatically includes all relevant stakeholders.
Secondly, FIT PPs provide their full capability only if they are of high quality. To ensure this, schemes should consider mandating a certification of FIT PPs before they can be used. This ensures that the FIT PP authors are available for updates and that FIT STs can subsequently be safely based on these FIT PPs. To enable this, evaluation tasks for FIT PP evaluation are provided in this methodology.

Annex C
(informative)

Acceptance Criteria

C.1 Introduction
The objective of Acceptance Criteria is to help evaluators to specify test cases. Acceptance Criteria are an
implementation-independent definition of test case “expected results” criteria.
Acceptance Criteria are not security requirements. This annex identifies security requirements on a more
detailed level called security requirement attributes. For each of these attributes, a list of general
acceptance criteria is given.
The Acceptance Criteria listed in the following subclauses are organized by security requirement classes.
Each class contains the following structure: the first column lists the security requirement attribute,
which is always the link to (e.g. vertical or domain-specific) security requirements. The second column
lists the related acceptance criteria that are recommended to be considered when designing test cases.
For each category, some examples from the IT or Industry domain are given. The examples are presented
in a very brief format. In all examples, the statement “complies with the Acceptance Criteria” means that
some test cases were designed that use the Acceptance Criteria as part of the expected result, and these
test cases have to pass successfully (and therefore meet the Acceptance Criteria).
For the selection of the categories of Acceptance Criteria, two security requirements standards with high
interest were selected. One standard is focusing on industrial IT (IEC 62443-4-2:2019 [9]) and the second
standard on consumer IT (ETSI EN 303 645, Version 2.1.1 [7]). These standards represent a wide range
of requirements from different application domains.

C.2 Identification, Authentication Control, and Access Control


Table C.1 shows Security Requirement Attributes and Evaluation Acceptance Criteria for Identification,
Authentication Control, and Access Control.
Table C.1 — Identification, Authentication Control, and Access Control

Security Requirement Attributes — Evaluation Acceptance Criteria

Mechanisms

General:
— authentication mechanism is capable of preventing well-known and non-sophisticated attacks

user authentication:
— authentication of human users on the relevant interfaces with human access

unique user authentication:
— unique authentication for every human user on the relevant interfaces

multifactor user authentication:
— capability to employ multifactor authentication on the relevant interfaces with human user access

process/device authentication:
— authenticates to any other process/device using an authenticator with desired security strength

unique process/device authentication:
— uniquely authenticates to any other process/device using an authenticator

account/identifier management:
— capability to integrate into a system-level account management system

authenticator management:
— support of (initial) authenticator content
— enforced change of default authenticators after installation or initial configuration
— warning in case of unchanged default authenticators
— periodic change of authenticators
— protection against unauthorised disclosure or modification of authenticators
— no transmission of cleartext authenticators

Access/Authorization Tokens

Lifetime:
— time-limited validity of the access token
— re-authentication after invalidity of the access token

General Authentication

unsuccessful login:
— enforce (configurable) limit of consecutive invalid access attempts for (configurable) time
— deny access for a specified period of time or until unlocked

login feedback:
— no disclosure of sensitive data concerning the authentication process
— no different feedback for wrong password or username
— no timing differences for error and correct login

Password Authentication

enforced change of password, e.g. during initialisation or after factory reset:
— user dialogues for the change of password are not bypassable
— login is only allowed after the change of password

change of password:
— the dialogue is tested and the password is never shown in plain text

management/enforcement:
— enforce configurable password strength
— configurable password strength according to recognized and proven password guidelines

Public-Key Authentication

certificate checks:
— use of accepted algorithms, cf. C.4 Cryptography
— validating the signature of a given certificate, including certificate properly assigned to identity by checking either the subject name, common name or distinguished name, and integrity of the public key
— validate the certificate chain
— in case of self-signed certificates, certificates are deployed to all communicating hosts
— validate certificate revocation status

private keys:
— access protection of private keys

One-Time Password Authentication

General:
— One-Time-PIN single use, i.e. only one authentication attempt for one One-Time-PIN
— limited lifetime of each One-Time-PIN
— One-Time-PIN can only be used for the linked account
— non-guessable One-Time-PIN, i.e. sufficient length
— combination of One-Time-PIN length and token validity

Transfer:
— different communication channel for the second factor

Storage:
— if local secrets are stored, those shall be secured, i.e. no cleartext storage

standard-based:
— use of standard methods

domain separation:
— if the first factor was entered on a device, then One-Time-PIN generation is blocked
EXAMPLE

IT Domain

Web-based authentication that is implemented using a TLS secure channel (and configured and operated according to the state of the art) fulfils the acceptance criteria that the authentication mechanism is capable of preventing attacks like man-in-the-middle or spoofing attacks.

Web-based applications that are available from public networks often require advanced protection methods for
authentication. In this case, a second factor is often implemented. A second factor that is implemented based on the
OATH-HOTP protocol complies with the acceptance criteria to use standard methods.

C.3 Secure Boot


Table C.2 shows Security Requirement Attributes and Evaluation Acceptance Criteria for Secure Boot.

Table C.2 — Secure Boot

Security Requirement Attributes — Evaluation Acceptance Criteria

Verification:
— integrity and authentication verification of boot-process-relevant firmware, software, and configuration
— verification happens before use

root-of-trust:
— use of the product supplier's roots-of-trust for verification

EXAMPLE

IT Domain

For standard PCs, the secure boot functionality is able to authorize the start of the operating system. The signatures
of all boot-critical drivers are verified by some process before loading these drivers and the verification complies
with the acceptance criteria.

Industry domain

In an industrial environment, the secure boot mechanism ensures that only authenticated (genuine) software is
executed. The verification of the firmware that leads to preventing booting the attacked device complies with the
given acceptance criteria.

C.4 Cryptography
Table C.3 shows Security Requirement Attributes and Evaluation Acceptance Criteria for Cryptography.
Table C.3 — Cryptography

Security Requirement Attributes — Evaluation Acceptance Criteria

accepted algorithms:
— the scheme defines a list of accepted cryptographic algorithms, e.g. the SOG-IS scheme defined such a list in the document “SOG-IS Agreed Cryptographic Mechanisms” [15]

appropriate cryptographic configuration:
— the scheme references standards/collections of standards providing proper cryptographic guidance ensuring secure configurations

appropriate use of cryptographic algorithms:
— authenticity and integrity of data are protected by the correct application of signatures or MACs, possibly in conjunction with hash functions
— the secrecy of data is protected by the correct application of symmetric or public-key encryption schemes
— random numbers used for security purposes (e.g. in encryption or key generation algorithms) need to be generated by secure random number generators
— sensitive data are only transmitted over cryptographically secure channels
— sensitive data are securely stored by using encryption, signatures/MACs and/or hash functions, dependent on the use case (e.g. user data are recommended to be encrypted and signed, passwords are recommended to be hashed and salted)
— cryptographic keys are securely generated and stored, dependent on the primitives used (e.g. different encryption schemes require specific key generation algorithms)
— cryptographic secret keys are not hard coded
EXAMPLE

IT Domain

A device that supports encryption often has to generate RSA public-private key pairs for different purposes. During RSA key generation, the algorithms used are recommended to prevent generating keys with known weaknesses. A device that implements the following constraints complies (for this aspect) with the acceptance criteria: the RSA modulus n = pq and public exponent e are recommended to satisfy log2(n) > 3000 and log2(e) > 16 (cf. SOG-IS Agreed Cryptographic Mechanisms, v. 1.2, January 2020). SOG-IS Agreed Cryptographic Mechanisms, v. 1.2, recommends RSA key pairs with additional requirements, see Section 7.3, in particular Note 53-RSAKeyGen and Note 54-SmallD. Moreover, an agreed RSA key generation method (see B.3) using an agreed prime generation method (see B.1 and B.2) and an agreed primality test (Section 7.3) is recommended to be used.
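As an informative illustration, the size constraints quoted above can be checked mechanically; the key material below is a hypothetical stand-in, and this check does not cover the further SOG-IS requirements referenced.

```python
# Informative sketch: checking only the quoted size constraints
# log2(n) > 3000 and log2(e) > 16 for a generated RSA key.
import math

def meets_size_constraints(n: int, e: int) -> bool:
    return math.log2(n) > 3000 and math.log2(e) > 16

# Hypothetical key material: a 3072-bit modulus stand-in and e = 65537.
n = (1 << 3071) + 11  # stand-in for a real modulus p*q
e = 65537
print(meets_size_constraints(n, e))  # True: log2(n) ~ 3071, log2(e) ~ 16.00002
```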

C.5 Secure State After Failure


Table C.4 shows Security Requirement Attributes and Evaluation Acceptance Criteria for Secure State
After Failure.
Table C.4 — Secure State After Failure

Security Requirement Attributes — Evaluation Acceptance Criteria

known secure state:
— known secure state can be gained from the developer's documentation
— reach a known secure state after disruption or failure

secure values:
— system parameters (either default or configurable) are set to secure values
— security-related configuration settings are re-established

backup recovery:
— security-critical patches are reinstalled
— components are reinstalled and configured with established settings
— recovery uses a backup selected explicitly by an authorized person, or the recovery uses an internal authentic backup source

documentation and procedures:
— system documentation and operating procedures are available

EXAMPLE

IT Domain

Secure State After Failure mechanisms are implemented in network firewalls. After an incident is identified, the
firewall might run into a default state often called “deny/deny”. This behaviour complies with the acceptance
criteria.

A DBMS (Database Management System) is required to handle transactions in the event of a system failure properly.
DBMS failures may not leave transactions in an inconsistent state. This behaviour complies with the acceptance
criteria.

Industry Domain

In the industrial domain, the safe state after failure depends on the context where the product is going to be used. In a critical process, it is vital that the product returns to a state where it performs all the critical functions. An example can be a product responsible for monitoring an industrial process and, in particular, for helping to protect the infrastructure against the risk of explosion. In the event of an error, the product is to return at least to a state where it continues to perform its primary function. This behaviour complies with the acceptance criteria.

NOTE The industry domain example does not focus on the system or process perspective. It addresses only the
component level.

C.6 Least Functionality


Table C.5 shows Security Requirement Attributes and Evaluation Acceptance Criteria for Least
Functionality.
Table C.5 — Least Functionality

Security Requirement Attributes — Evaluation Acceptance Criteria

security-by-configuration:
— capability to restrict the use of unnecessary functions, ports, protocols, and/or services

function deactivation:
— functions beyond a baseline configuration are recommended to be able to be deactivated

least privilege principle:
— implementation of, for example, roles, functionality, and product internal processes follows the least privilege principle
EXAMPLE

IT Domain

For a Linux system, typical hardening is done according to some kind of security policy that defines, among others,
checking open ports and blocking them if unused or assigning strict access rights to files or folders. Such an
implementation complies with the security-by-configuration acceptance criteria.

For a DBMS, unused database components that are integrated into the DBMS and cannot be uninstalled but can be disabled (e.g. using the DISABLE TRIGGER function provided by SQL Server) comply with the function deactivation acceptance criteria.

Industry Domain

In the case of industrial equipment, the security of the infrastructure depends on the context in which this equipment is used. In many cases, the maintenance of critical infrastructure, or a factory, involves many different people who may belong to several subcontractors. They may have to intervene on the equipment for different reasons: change firmware in case of a patch, change the configuration, check the security logs, etc. It is often essential that the product is able, according to a previously defined security policy, to limit its access or functionality. Especially the deactivation capabilities comply with the function deactivation Acceptance Criteria.

C.7 Update Mechanism


Table C.6 shows Security Requirement Attributes and Evaluation Acceptance Criteria for Update
mechanisms.
Table C.6 — Update Mechanism

Security Requirement Attributes — Evaluation Acceptance Criteria

update capability:
— capability to be updated and upgraded once installed

authenticity and integrity:
— the authenticity and integrity of any update is validated before installation

cryptography:
— use of digital signatures
— use of approved algorithms, cf. C.4 Cryptography

verification by the device:
— generate a log entry and deliver notification of the log entry to a trusted peer

check for security updates:
— after initialisation and periodically (e.g. at a random time)

impact on essential functions:
— if essential functions are executed, then patching and updating is not allowed to impact those essential functions

dependent component or environment dependencies:
— independent but required update mechanisms embedded in the product are checked
EXAMPLE

IT Domain

For a device, a secure communication channel to the vendor to check for updates at a defined or configured period complies with the Acceptance Criteria.

Concurrent update mechanisms for the operating system and database server (e.g. to provide a patch to database
environment by using database templates) comply with the Acceptance Criteria “independent, but required update
mechanisms are checked”.

Industrial Domain

In the case of industrial equipment, a new firmware or software version can be proposed in the event of a change
in the functionalities of a product or in the case where a vulnerability in an existing firmware or software has been
fixed. It is critical to ensure that the firmware or software installed in the product is a genuine version supplied by
the vendor and that the integrity of this version is checked before the product restarts and activates the update. In
such a case, a mechanism like a certificate-based verification mechanism for new firmware or software before
installation complies with the Acceptance Criteria.
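As an informative illustration, the following sketch verifies a firmware image's signature before installation using the pyca/cryptography package; the choice of library, of RSA-PSS and of SHA-256 is an assumption of this sketch, since the accepted algorithms are governed by the scheme (cf. C.4).

```python
# Informative sketch: verifying a firmware image's signature before
# installation. Library and algorithm choices are assumptions of this sketch;
# the scheme's accepted algorithms govern in practice.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def verify_firmware(image: bytes, signature: bytes, vendor_pub_pem: bytes) -> bool:
    public_key = serialization.load_pem_public_key(vendor_pub_pem)
    try:
        # RSA-PSS over SHA-256 with the vendor's public key.
        public_key.verify(
            signature,
            image,
            padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                        salt_length=padding.PSS.MAX_LENGTH),
            hashes.SHA256(),
        )
        return True   # authentic and intact: safe to activate the update
    except InvalidSignature:
        return False  # reject: do not install or restart into this image
```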

Annex D
(informative)

Guidance for integrating the methodology into a scheme

D.1 General
D.1.1 Introduction

This annex provides guidance to scheme developers on how to use this standard while developing a
scheme.
While performing the following steps, scheme developers should bear in mind that depending on the level
of assurance required for evaluated products, the level of cooperation between evaluators and
developers/manufacturers can be different, i.e. at higher levels of assurance a closer co-operation
between these actors can be expected.
D.1.2 Perform a risk assessment, reviewing the vertical domain under consideration

In case the scheme is to be implemented in a vertical domain, the developer of such a scheme needs to perform a domain risk assessment. Such a risk assessment forms the foundation of the scheme and is the basis for the scheme development.
The scheme needs to set the parameters as described in Annex E according to the risk assessment.
NOTE ISO/IEC 31000 provides guidance for performing risk assessments.

D.1.3 Assign the attack potential to the CSA assurance levels

The scheme developer shall assign the attack potential (cf. 5.4) to the CSA assurance level.
Based on the definitions in the CSA for the three assurance levels and the definitions for the attack
potential the scheme developer needs to assign the attack potential to the assurance level(s) which are
considered.
D.1.4 Select the evaluation tasks required for this CSA assurance level

For each CSA assurance level the scheme developer shall select those evaluation tasks required for this level; these are noted as “shall” and marked grey in Table D.2.
This is the minimum set necessary to fulfil the requirements of this document and the CSA.
D.1.5 Review and set the parameters for the tasks

For each task chosen, the scheme developers shall review the parameters for this task and set them
suitably based on the risk assessment and the determined attack potential.
For each task chosen the parameters need to be set, taking into account the risk assessment. The
parameters for each task are listed as summary in Annex E.
NOTE For some evaluation tasks this includes the decision if sampling is used and possibly providing additional
requirements or guidance for sampling, taking into account the expertise of the evaluators.

D.1.6 Possible selection of additional or higher tasks

For each CSA assurance level, the scheme developer shall review if those evaluation tasks are sufficient
for the scheme based on the determined attack potential. If not, the scheme developer shall select
additional evaluation tasks (e.g. from the same CSA assurance level), tasks from a higher CSA assurance
level or additional tasks not defined in this methodology. This might replace tasks already chosen.
EXAMPLE 1 Typically the development process related tasks are added, which augment the evaluation tasks
related to the product.

EXAMPLE 2 The CSA does not mandate testing tasks for CSA assurance level “basic”. To ensure that the TOE is
at least configurable as stated, the scheme should augment “Evaluation of TOE installation”, i.e. 6.6.

D.1.7 Review and set the parameters for the additional tasks

For each new or updated task chosen the scheme developers shall review the parameters for this task
and set them suitably based on the risk assessment and the attack potential.
For each additional or updated task chosen the parameters need to be set, taking into account the risk
assessment. The parameters for each task are listed as summary in E.3.
NOTE For some evaluation tasks this includes the decision if sampling is used and possibly providing additional
requirements or guidance for sampling, taking into account the expertise of the evaluators.

D.1.8 Set up and maintain further scheme requirements and guidelines

Depending on the set of evaluation tasks and the parameters chosen, the scheme may require further
documents, e.g. a reference set for vulnerabilities, best practices. These requirements and / or guidelines
need to be continuously maintained.
EXAMPLE

The following Table D.1 is used to determine the workload for certain domains within the French CSPN scheme
[13]:

Table D.1 — Example workload

Task: SW functions of clients and card readers
Workload (person days): 20
Notes: Evaluation tasks shall include the analysis of communications:
— between clients and card readers;
— within the system (between server and clients);
— on any other functional interface.

Task: SW functions of the server
Workload (person days): 10
Notes: Evaluation tasks shall include the analysis of communications:
— between the server and its resources (PKI, database or directory, and so on), whether they are part of the TOE or not;
— between the TOE and its environment: enterprise networks, internet, and so on;
— on any other functional interface.

Task: HW security
Workload (person days): 5
Notes: Evaluation tasks shall include:
— evaluation of the robustness of the HW interfaces of the clients and card readers;
— documentary analysis of the card readers (certification reports, guidance).

Task: Cryptography
Workload (person days): 5 / 10
Notes: 10 person days if implemented by the product or an open source library. 5 person days if implemented by a closed source library; the analysis will then only be focused on how the product uses this library, and the certification report will mention that the cryptographic mechanisms themselves are not evaluated.

D.2 Example
To illustrate the integration of the methodology into a scheme, a small “toy” example is presented below.
The chosen values are arbitrary and are used for illustration purposes only. Also, a fully developed scheme will contain much more information; here only the relevant excerpt is presented.
Consider a scheme for smart screws. It supports self-assessment (intended for smart screws for home
use), CSA assurance level “basic” (for smart screws in industrial environments), “substantial” (for smart
screws in industrial environments with safety impact) and “high” (for smart screws in critical
infrastructures).
NOTE CSA assurance level basic applies to self-assessment as well. In this example, no differentiation between
self-assessment and third-party assessment for level basic is made.

The scheme developer performed a risk assessment and derived the attack potential “basic” for the CSA
assurance level “basic” and “substantial” and attack potential “enhanced basic” for the CSA assurance
level “high” using the Table F.2 given in the example in Annex F.
For CSA assurance level “basic” and “substantial” the scheme specific checklist on documentation
contains only a technical data sheet, which shall mention if the smart screw can report its chirality (an
optional feature in smart screws). For CSA assurance level “high” the scheme mandates an additional
document from the developer, namely the “architectural overview” (its specification is not reproduced in
this toy example). Additionally, the acceptance criteria are refined by the scheme (not reproduced in this
example).
The scheme maintains a document on “best practices” for smart screws, available both to developers and
evaluators. The scheme also maintains a list of potential vulnerabilities (together with industry), named
the “screw threats openly published” (STOP).
The scheme developer derives the fixed evaluation time for each CSA assurance level and the sampling
used.
Based on this, the scheme developer derived the following evaluation tasks given in Table D.2:

42
EN 17640:2022 (E)

Table D.2 — Chosen evaluation tasks and parameters

CSA assurance level: basic
— Completeness check
— Review of security functionality
— Development documentation (the checklist contains only the technical data sheet)
— Evaluation of the TOE installation

CSA assurance level: substantial
— Completeness check
— FIT Security Target Evaluation
— Development documentation (the checklist contains only the technical data sheet)
— Evaluation of the TOE installation
— Conformance testing (the evaluators shall use sampling of n days per smart functionality, and additional m days if chirality support is included)
— Vulnerability testing (the evaluators shall use the STOP list as pre-defined source and use sampling of x days per smart functionality, and additional y days if chirality support is included)
— Basic crypto analysis (the SOG-IS crypto catalogue is mandated)

CSA assurance level: high
— Completeness check
— FIT Security Target Evaluation
— Development documentation (the checklist contains the technical data sheet and the architectural overview)
— Evaluation of the TOE installation
— Conformance testing (the evaluators shall completely test the conformance (full coverage) using the scheme-defined Acceptance Criteria)
— Penetration testing (the evaluators shall use sampling of u days per smart functionality, and additional v days if chirality support is included; vulnerability research beyond STOP is required)
— Extended crypto analysis (the SOG-IS crypto catalogue is mandated; the vulnerability analysis shall be performed within z days)

The scheme developer decides to mandate the structure and content of STs as given in Annex A.
Finally, the scheme allows for FIT Protection Profiles but only at the CSA assurance level “high”. For this,
some additional guidance (not reproduced here) is contained in the scheme.
Evaluator competence:
In addition to the requirements of the standard, the evaluators shall hold a bachelor's degree in smart screw design or have comparable work experience of at least 3 years. They also need to demonstrate their knowledge by at least one project in smart screw security (where their part is at least 5 full working days) if they want to perform evaluations for the CSA assurance level substantial or high.

Annex E
(informative)

Parameters of the methodology and the evaluation tasks

E.1 General
This annex summarizes the parameters for the individual evaluation tasks. These parameters need to be
set when using this document to develop a scheme. See Clause 4 for details.

E.2 Parameters of the methodology


When setting up a methodology, the following global parameters need to be defined:
The scheme can require specific artefacts (a given number of samples, availability of open samples, source code, and so on).
The extent of each evaluation task (as person days or percentage of the overall evaluation time) needs to be devised.
The general competence requirements need to be devised.

E.3 Parameters of the evaluation tasks


E.3.1 Parameters for 6.1 “Completeness check”

The scheme needs to devise the required additional documents (if any), their content and the number of
samples of TOE required for evaluation.
EXAMPLE For some CSA assurance levels the scheme might require an additional document “Architectural
overview”.

E.3.2 Parameters for 6.2 “FIT Protection Profile Evaluation”

The scheme needs to decide if the concept of FIT PPs is used in the scheme, possibly limited to some
assurance levels according to the CSA. If the scheme decides to do so, then the scheme needs to review
whether the requirements for FIT PPs as given in Annex B are sufficient and, if not, amend or refine those
requirements.
E.3.3 Parameters for 6.3 “Review of security functionalities”

No additional scheme requirement foreseen.


E.3.4 Parameters for 6.4 “Security Target Evaluation”

The scheme needs to review whether the requirements for FIT STs as given in Annex A are sufficient and,
if not, amend or refine those requirements.
E.3.5 Parameters for 6.5 “Development documentation”

The scheme needs to provide the “scheme-specific checklist”.


E.3.6 Parameters for 6.6 “Evaluation of TOE Installation”

The scheme needs to define how it handles different languages for the SUG, i.e. which languages are
acceptable.


E.3.7 Parameters for 6.7 “Conformance testing”

The scheme needs to define which resources it maintains for evaluators to use, e.g. sectoral minimum
requirements for TOEs.
The scheme needs to define if it requires full coverage of the entire functionality or if sampling is used. In
the latter case, the scheme may provide requirements or guidance for the sampling strategy.
The scheme needs to decide if Acceptance Criteria are to be used and needs to maintain them.
E.3.8 Parameters for 6.8 “Vulnerability review”

The scheme needs to decide if “Vulnerability review” is sufficient or “Vulnerability testing” is required.
The scheme needs to define how the search should be performed, especially if the sources are mandated
(or a minimum set of sources for vulnerabilities is mandated).
The scheme needs to decide if the developer is required to provide artefacts from this development
process (cf. Work Unit 2).
E.3.9 Parameters for 6.9 “Vulnerability testing”

The scheme needs to define how the search should be performed, especially if the sources are mandated
(or a minimum set of sources for vulnerabilities is mandated).
The scheme needs to decide if risk-based sampling is used. The scheme may provide requirements or
guidance for the sampling strategy.
The scheme needs to decide if the developer is required to provide artefacts from this development
process (cf. Work Unit 5).
E.3.10 Parameters for 6.10 “Penetration testing”

The scheme may provide additional requirements or guidance for the flaw hypothesis methodology.
E.3.11 Parameters for 6.11 “Basic crypto analysis”

The scheme needs to provide guidelines and requirements for cryptography, e.g. from SOG-IS.
E.3.12 Parameters for 6.12 “Extended crypto analysis”

The scheme needs to define the accepted state-of-the-art cryptography. See 6.12.1 for examples.


Annex F
(normative)

Calculating the Attack Potential

F.1 General
Before calculating the Attack Potential, the evaluator shall verify if the attack under consideration is
possible in the intended environment.
EXAMPLE 1 The TOE is operated in a physically trusted environment (according to the FIT ST or the FIT PP).
Then all attacks requiring physical access to the TOE are by default not possible and no attack calculation is
necessary for these scenarios.

EXAMPLE 2 The same situation as in EXAMPLE 1, but the TOE has a hard-coded default password (used in all
instances of the TOE). In this case the threat agent needs only one instance of the TOE to obtain this default
password and is then able to attack any other instance. In this scenario a calculation of the Attack Potential is
necessary (for example to factor the cost of buying one instance of the TOE into the attack scenario).

F.2 Factors for Attack Potential


The following factors shall be considered during calculation of the Attack Potential required to exploit a
vulnerability:

a) Time taken to identify and exploit (Elapsed Time);

b) Specialist technical expertise required (Specialist Expertise);

c) Knowledge of the TOE design and operation (Knowledge of the TOE);

d) Window of opportunity;

e) IT hardware/software or other equipment required for exploitation.

More information on the individual categories can be found in [5].

F.3 Numerical factors for Attack Potential


F.3.1 General

For the vulnerability under consideration, the evaluator shall assign to each factor listed in F.2 the
appropriate value taken from Table F.1, considering the operational environment as given in the FIT ST
or FIT PP.
NOTE This can be a theoretical exercise or can be supported by penetration testing, see 6.10.


F.3.2 Default rating table

Table F.1 — Calculation of Attack Potential

| Factor | Value |
|---|---|
| Elapsed Time | |
| <= one day | 0 |
| <= one week | 1 |
| <= two weeks | 2 |
| <= one month | 4 |
| <= two months | 7 |
| <= three months | 10 |
| <= four months | 13 |
| <= five months | 15 |
| <= six months | 17 |
| > six months | 19 |
| Expertise | |
| Layman | 0 |
| Proficient (even if multiple distinct ones) | 3 |
| Expert | 6 |
| Multiple Expert | 8 |
| Knowledge of TOE | |
| Public | 0 |
| Restricted | 3 |
| Sensitive | 7 |
| Critical | 11 |
| Window of Opportunity | |
| Unnecessary / Unlimited | 0 |
| Easy | 1 |
| Moderate | 4 |
| Difficult | 10 |
| None, i.e. not exploitable in the environment | |
| Equipment | |
| Standard | 0 |
| Specialized | 4 |
| Bespoke, including complex set of specialized | 7 |
| Multiple bespoke | 9 |
The sum of the values selected is the Attack Potential.

Finally, the evaluator shall compare the calculated Attack Potential to the Attack Potential specified for
the selected assurance level according to the CSA (i.e. selected by the scheme). If the calculated value is
lower than the scheme-defined value, then the attack is deemed possible.
As the various factors are not independent, several attack scenarios might exist for the same
vulnerability; the Attack Potential is then the minimum of the values calculated for these scenarios.
EXAMPLE

To exploit a certain vulnerability a layman (0) would require more than one month (7) with restricted knowledge
(3) and specialized equipment (4) with an easy window of opportunity (1). The sum is 15. The same vulnerability
could be exploited by an expert (6) in one week (1) with the same restricted knowledge (3) but standard equipment
(0) with an easy window of opportunity (1). Here the sum is 11. Therefore, the final Attack Potential is the minimum
of the two values, i.e. 11.
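The calculation in this example can be reproduced mechanically. The following Python sketch encodes the values of Table F.1 and computes the two scenario sums (15 and 11) and their minimum; the scheme-defined threshold is an assumed example value, not one set by this document.

```python
# Illustrative sketch only: the Attack Potential calculation of F.3,
# using the values of Table F.1. The scheme-defined threshold below is
# an assumed example value, not one set by this document.

TABLE_F1 = {
    "elapsed_time": {"<= one day": 0, "<= one week": 1, "<= two weeks": 2,
                     "<= one month": 4, "<= two months": 7,
                     "<= three months": 10, "<= four months": 13,
                     "<= five months": 15, "<= six months": 17,
                     "> six months": 19},
    "expertise": {"layman": 0, "proficient": 3, "expert": 6,
                  "multiple expert": 8},
    "knowledge_of_toe": {"public": 0, "restricted": 3, "sensitive": 7,
                         "critical": 11},
    "window_of_opportunity": {"unnecessary/unlimited": 0, "easy": 1,
                              "moderate": 4, "difficult": 10},
    "equipment": {"standard": 0, "specialized": 4, "bespoke": 7,
                  "multiple bespoke": 9},
}

def scenario_sum(ratings: dict) -> int:
    """Sum the Table F.1 values selected for one attack scenario."""
    return sum(TABLE_F1[factor][rating] for factor, rating in ratings.items())

# The two scenarios from the EXAMPLE above:
layman_scenario = scenario_sum({
    "elapsed_time": "<= two months", "expertise": "layman",
    "knowledge_of_toe": "restricted", "window_of_opportunity": "easy",
    "equipment": "specialized"})                    # 7 + 0 + 3 + 1 + 4 = 15
expert_scenario = scenario_sum({
    "elapsed_time": "<= one week", "expertise": "expert",
    "knowledge_of_toe": "restricted", "window_of_opportunity": "easy",
    "equipment": "standard"})                       # 1 + 6 + 3 + 1 + 0 = 11

attack_potential = min(layman_scenario, expert_scenario)  # minimum over scenarios: 11

SCHEME_THRESHOLD = 20                 # assumed value defined by the scheme
attack_deemed_possible = attack_potential < SCHEME_THRESHOLD
print(attack_potential, attack_deemed_possible)            # 11 True
```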

F.3.3 Adaptation of the rating table

The “default” values in the table shown earlier are intended to be replaced or refined depending on the
context (technology, type of product, etc.). More generally, the rating table has added value as a shared
vocabulary between experts when discussing an attack scenario, while the values themselves are a
parameter that can be adapted by the scheme developers. The definition of a set of values shared in a
given scheme is a non-trivial achievement and an unavoidable step towards any meaningful recognition
between schemes.
The values from this historical default CEM table are well suited to smartcards and, more generally,
security hardware. However, schemes have adapted this table to use different values when assessing e.g.
pure software products.
As an example, CSPN [13] uses three different tables; Table F.2 shows the one used by default.


Table F.2 — Table used by CSPN

| Factor | Rating | Value |
|---|---|---|
| Time taken for the exploitation | <= 1 day | 0 |
| | <= 1 week | 1 |
| | <= 2 weeks | 2 |
| | <= 1 month | 4 |
| | <= 2 months | 7 |
| | <= 3 months | 10 |
| | <= 4 months | 13 |
| | <= 5 months | 15 |
| | <= 6 months | 17 |
| | > 6 months | 19 |
| Attacker expertise | Layman | 0 |
| | Competent | 3 |
| | Expert | 6 |
| | Multiple experts | 8 |
| Knowledge required by the attacker | None a | 0 |
| | Restricted information | 3 |
| | Sensitive information | 7 |
| | Critical information | 11 |
| Access to the product by the attacker | Not necessary/unlimited | 0 |
| | Easy | 1 |
| | Moderate | 4 |
| | Difficult | 10 |
| | None | * b |
| Type of equipment needed c | None/standard | 0 |
| | Specialized software | 2 |

a Including the use of public documentation.

b Indicates that the attack is not feasible due to counter-measures implemented in the operational
environment of the TOE.

c These values factor in application note 18 (ANSSI-CC-NOTE-18), which differs from the [CEM]. If an
attack requires physical intervention and the use of hardware, the whole attack shall be scored using the
[JIL_HW] or [JIL_HWD] scoring table (the evaluators will choose whichever seems most appropriate for
the evaluated product).

The note 18 referred to in this table states that no commercial software tool can be considered higher
than “specialized”. If the threat agent needs to build dedicated and complex software itself, this additional
effort is taken into account as Expertise rather than Type of equipment.
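Building on the previous sketch, the following lines illustrate how a scheme could substitute its own factor values, using the CSPN equipment values of Table F.2 as the override; the helper name adapt_table is an assumption for illustration.

```python
# Illustrative sketch only: substituting scheme-specific factor values
# into the default rating table (cf. F.3.3). The helper name and the
# override shown (CSPN's equipment values from Table F.2) are for
# illustration; TABLE_F1 is the dictionary from the previous sketch.

def adapt_table(default_table: dict, overrides: dict) -> dict:
    """Return a copy of a rating table with whole factor value sets
    replaced by scheme-specific ones."""
    adapted = {factor: dict(values) for factor, values in default_table.items()}
    adapted.update(overrides)
    return adapted

CSPN_EQUIPMENT = {"none/standard": 0, "specialized software": 2}
# With TABLE_F1 from the previous sketch:
# cspn_table = adapt_table(TABLE_F1, {"equipment": CSPN_EQUIPMENT})
```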


Annex G
(normative)

Reporting the results of an evaluation

G.1 General
This annex describes how the results of the evaluation shall be reported.

G.2 Written reporting


The evaluators involved in reporting shall be able to communicate the testing strategies, the evaluation
results and any auxiliary information required by the respective evaluation tasks. The evaluators shall be
able to provide the results in a precise and unambiguous form for readers not involved in the evaluation
and not familiar with every detail of the TOE.
The evaluators shall report all results and a summary of all evidence required by the evaluation tasks in
an ETR.
The evaluators shall report each identified nonconformity in sufficient detail so that security experts are
capable of reproducing it (a nonconformity might be a vulnerability).
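As an informal illustration (not mandated by this annex), a reproducible nonconformity record could carry at least the following fields; the structure and field names are assumptions:

```python
# Informal illustration only: fields a scheme might require for each
# nonconformity reported in the ETR so that it can be reproduced.
# The structure and field names are assumptions, not mandated here.

from dataclasses import dataclass, field

@dataclass
class Nonconformity:
    identifier: str                  # scheme-assigned finding identifier
    affected_functionality: str      # part of the TOE concerned
    description: str                 # observed deviation (possibly a vulnerability)
    reproduction_steps: list[str]    # step-by-step instructions for reproduction
    tools_used: list[str]            # tools and exact versions/configurations
    evidence: list[str] = field(default_factory=list)  # logs, captures, samples
```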
If no complete attack paths are identified, but there are insufficiencies in the implemented algorithms,
cryptography, principles or in the underlying properties, the evaluators shall propose a verdict for the
evaluation and a rationale for that verdict.
An evaluator not involved in the preparation of the ETR shall be designated as reviewer. The reviewer
shall check that the ETR complies with all formal requirements of the scheme. The reviewer shall
determine that the ETR is complete and free of obvious errors.
The evaluators shall check that all information generated during the evaluation (including full protocol
outputs of tools used) is easily accessible.
The evaluators shall provide the ETR to the certifying function of the scheme by means mandated by the
scheme.
NOTE This usually includes securing the ETR during transmission against unauthorized disclosure and
tampering.

G.3 Oral defence of the results obtained


The evaluators shall be able to communicate orally the testing strategies, the evaluation results and any
auxiliary information required by the respective evaluation tasks. The evaluators shall be able to present
the results in a precise and unambiguous form and shall be able to answer any technical question
regarding the evaluation.
The evaluators shall record the results of the evaluation in an easily presentable format. The presentation
shall contain all results from previous evaluation tasks as well as auxiliary information, including the
testing strategy chosen and its development over the course of the evaluation.
The evaluators shall select representatives who are able to explain and defend the entire evaluation
(including the cryptographic part, if applicable), the testing strategy, the distribution of the allocated time
(“sampling”), the selection of evaluators and the rating of the results.
The representatives shall present the results to the certifying function. The representatives shall be able
to defend and justify all results obtained. The representatives shall be able to access additional
information (this might include contact with other evaluators or access to detailed logs made during the
evaluation) to answer questions arising during the oral defence.
The representatives shall record all open and unresolved issues and summarize them at the end of the
oral defence. These open and unresolved issues typically need to be answered by (very limited)
additional evaluation.
The evaluators shall perform additional evaluation (if necessary) based on the open and unresolved
issues from the oral defence and update the ETR accordingly.
The evaluators shall provide the ETR to the certifying function of the scheme by means mandated by the
scheme.


Bibliography

[1] Regulation (EU) 2019/881 of the European Parliament and of the Council of 17 April 2019 on
ENISA (the European Union Agency for Cybersecurity) and on information and communications
technology cybersecurity certification and repealing Regulation (EU) No 526/2013 (the
Cybersecurity Act, CSA), available from https://eur-lex.europa.eu/eli/reg/2019/881/oj?locale=en

[2] EN ISO/IEC 15408, Information technology — Security techniques — Evaluation criteria for IT
security

[3] EN ISO/IEC 17000, Conformity assessment — Vocabulary and general principles

[4] EN ISO/IEC 17025, General requirements for the competence of testing and calibration laboratories

[5] EN ISO/IEC 18045, Information technology — Security techniques — Methodology for IT security
evaluation

[6] ISO/IEC 19896-1, IT security techniques — Competence requirements for information security
testers and evaluators — Part 1: Introduction, concepts and general requirements

[7] ETSI EN 303 645 V2.1.1, Cyber Security for Consumer Internet of Things

[8] IEC 62443-4-1:2018, Security for industrial automation and control systems - Part 4-1: Secure
product development lifecycle requirements

[9] IEC 62443-4-2:2019, Security for industrial automation and control systems - Part 4-2: Technical
security requirements for IACS components

[10] ANSSI RGS, Référentiel Général de Sécurité (General Security Framework), Version 2.0

[11] BSI TR-02102, Kryptographische Verfahren: Empfehlungen und Schlüssellängen (Cryptographic
Mechanisms: Recommendations and Key Lengths)

[12] CCN-STIC 807, Criptología de empleo en el Esquema Nacional de Seguridad (Cryptology for use in
the National Security Framework)

[13] CSPN - First Level Security Certification For Information Technology Products

[14] Joint Interpretation Library, Application of Attack Potential to Smartcards

[15] Joint Interpretation Library, SOG-IS Agreed Cryptographic Mechanisms

[16] Methodology for a Sectoral Cybersecurity Assessment, available from
https://www.enisa.europa.eu/publications/methodology-for-a-sectoral-cybersecurity-assessment/@@download/fullReport
