EN 17640:2022
EUROPEAN STANDARD
NORME EUROPÉENNE
EUROPÄISCHE NORM
October 2022
ICS 35.030
English version
Fixed-time cybersecurity evaluation methodology for ICT products
CEN and CENELEC members are bound to comply with the CEN/CENELEC Internal Regulations which stipulate the conditions for
giving this European Standard the status of a national standard without any alteration. Up-to-date lists and bibliographical
references concerning such national standards may be obtained on application to the CEN-CENELEC Management Centre or to
any CEN and CENELEC member.
This European Standard exists in three official versions (English, French, German). A version in any other language made by
translation under the responsibility of a CEN and CENELEC member into its own language and notified to the CEN-CENELEC
Management Centre has the same status as the official versions.
CEN and CENELEC members are the national standards bodies and national electrotechnical committees of Austria, Belgium,
Bulgaria, Croatia, Cyprus, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Iceland, Ireland, Italy,
Latvia, Lithuania, Luxembourg, Malta, Netherlands, Norway, Poland, Portugal, Republic of North Macedonia, Romania, Serbia,
Slovakia, Slovenia, Spain, Sweden, Switzerland, Türkiye and United Kingdom.
© 2022 CEN/CENELEC All rights of exploitation in any form and by any means reserved worldwide for CEN national Members and for CENELEC Members.
Ref. No. EN 17640:2022 E
European foreword
This document (EN 17640:2022) has been prepared by Technical Committee CEN/CLC/JTC 13
“Cybersecurity and Data Protection”, the secretariat of which is held by DIN.
This European Standard shall be given the status of a national standard, either by publication of an
identical text or by endorsement, at the latest by April 2023, and conflicting national standards shall be
withdrawn at the latest by April 2023.
Attention is drawn to the possibility that some of the elements of this document may be the subject of
patent rights. CEN shall not be held responsible for identifying any or all such patent rights.
Any feedback and questions on this document should be directed to the users’ national standards body.
A complete listing of these bodies can be found on the CEN website.
According to the CEN-CENELEC Internal Regulations, the national standards organisations of the
following countries are bound to implement this European Standard: Austria, Belgium, Bulgaria, Croatia,
Cyprus, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Iceland, Ireland,
Italy, Latvia, Lithuania, Luxembourg, Malta, Netherlands, Norway, Poland, Portugal, Republic of North
Macedonia, Romania, Serbia, Slovakia, Slovenia, Spain, Sweden, Switzerland, Türkiye and the United
Kingdom.
Introduction
The foundation for a sound product certification is a reliable, transparent and repeatable evaluation
methodology. Several product or scheme dependent evaluation methodologies exist. The Cybersecurity
Act (CSA) [1] will cause new schemes to be created which in turn require (new) methodologies to
evaluate the cybersecurity functionalities of products. These new methodologies are required to describe
evaluation tasks defined in the CSA. This methodology also adds a concept, independent of the
requirements of the CSA, namely the evaluation in a fixed time. Existing cybersecurity evaluation
methodologies (e.g. EN ISO/IEC 15408 in combination with EN ISO/IEC 18045) are not explicitly
designed to be used in a fixed time.
Scheme developers are encouraged to implement the evaluation methodology in their schemes. This can
be done for general-purpose schemes or in dedicated (vertical domain) schemes, by selecting aspects for self-assessment or for third-party assessment. Self-assessment may be performed at CSA assurance level “basic”; third-party evaluations may be performed at CSA assurance level “basic”, “substantial” or “high”. The evaluation criteria and methodology might additionally be subject to tailoring, depending on the requirements of the individual scheme. This cybersecurity evaluation methodology
caters for all of these needs. This methodology has been designed so that it can (and needs to be) adapted
to the requirements of each scheme.
Scheme developers are encouraged to implement the evaluation methodology for the intended use of
the scheme, applicable for general purpose or in dedicated (vertical) domains, by selecting those aspects
needed for self-assessment at CSA assurance level “basic” or third-party evaluation at any CSA assurance
level required by the scheme.
This document provides the minimal set of evaluation activities defined in the CSA to achieve the desired
CSA assurance level as well as optional tasks, which might be required by the scheme. Selection of the
various optional tasks is accompanied by guidelines so scheme developers can estimate the impact of
their choices. Further adaptation to the risk situation in the scheme can be achieved by choosing the
different evaluation tasks defined in the methodology or using the parameters of the evaluation tasks, e.g.
the number of days for performing certain tasks.
If scheme developers choose tasks that are not defined in this evaluation methodology, it will be the
responsibility of the scheme developer to define a set of companion requirements or re-use another
applicable evaluation methodology.
Nonetheless, it is expected that individual schemes will instantiate the general requirements laid out in
this evaluation methodology and provide extensive guidance for manufacturers (and all other parties)
about the concrete requirements to be fulfilled within the scheme.
Evaluators, testers and certifiers can use this methodology to conduct the assessment, testing or
evaluation of the products and to perform the actual evaluation/certification according to the
requirements set up by a given scheme. It also contains requirements for the level of skills and knowledge
of the evaluators and thus will also be used by accreditation bodies or National Cybersecurity
Certification Authorities during accreditation or authorization, where appropriate, and monitoring of
conformity assessment bodies.
Manufacturers and developers will find the generic type of evidence required by each evaluation task
listed in the evaluation methodology to prepare for the assessment or evaluation. The evidence and
evaluation tasks are independent of whether the evaluation is done by the manufacturer/developer (i.e. 1st party) or by someone else (2nd/3rd party).
Users of certified products (regulators, user associations, governments, companies, consumers, etc.) may also use this document to inform themselves about the assurance provided by certificates issued under this evaluation methodology. Again, it is expected that scheme developers provide
additional information, tailored to the domain of the scheme, about the assurance obtained by
evaluations / assessments under this methodology.
Furthermore, this methodology is intended to enable scheme developers to create schemes which
attempt to reduce the burden on the manufacturer as much as possible (implying additional burden on
the evaluation lab and the certification body).
NOTE In this document the term “Conformity Assessment Body” (CAB) is used for CABs doing the evaluation.
Other possible roles for CABs are not considered in this document.
It should be noted that this document cannot be used “stand-alone”. Each domain (scheme) needs to provide domain-specific cybersecurity requirements (“technical specifications”) for the objects to be
evaluated / certified. This methodology is intended to be used in conjunction with those technical
specifications containing such cybersecurity requirements. The relationship of the methodology
provided in this document to the activities in product conformity assessment is shown in Figure 1.
1 Scope
This document describes a cybersecurity evaluation methodology for ICT products that can be implemented using pre-defined time and workload resources. It is intended to be applicable for all three
assurance levels defined in the CSA (i.e. basic, substantial and high).
The methodology comprises different evaluation blocks including assessment activities that comply with
the evaluation requirements of the CSA for the mentioned three assurance levels. Where appropriate, it
can be applied both to third-party evaluation and self-assessment.
2 Normative references
There are no normative references in this document.
3 Terms and definitions
3.1
evaluator
individual that performs an evaluation
Note 1 to entry: Under accreditation the term “tester” is used for this individual.
3.2
auditor
individual that performs an audit
3.3
certifying function
people or group of people responsible for deciding upon certification
Note 1 to entry: Depending on the scheme the certifying function may use evidence beyond the ETR (3.13) as a basis
for the certification decision.
3.4
scheme developer
person or organization responsible for a conformity assessment scheme
Note 1 to entry: For schemes developed under the umbrella of the CSA the so-called “ad hoc group” helps the scheme
developer.
Note 2 to entry: This definition is based on and aligned with the definition of “scheme owner” in EN ISO/IEC 17000.
3.5
confirm
<evaluation verb> declare that something has been reviewed in detail with an independent
determination of sufficiency
3.6
verify
<evaluation verb> rigorously review in detail with an independent determination of sufficiency
Note 1 to entry: Also see “confirm”. This term has more rigorous connotations. The term “verify” is used in the
context of evaluator actions where an independent effort is required of the evaluator.
3.7
determine
<evaluation verb> affirm a particular conclusion based on independent analysis with the objective of
reaching a particular conclusion
Note 1 to entry: The usage of this term implies a truly independent analysis, usually in the absence of any previous
analysis having been performed. Compare with the terms “confirm” or “verify” which imply that an analysis has
already been performed which needs to be reviewed.
3.8
ICT product
product with information and/or communication technology
Note 1 to entry: ICT covers any product that will store, retrieve, handle, transmit, or receive information electronically in a digital form (e.g. personal computers, smartphones, digital television, email systems, robots).
3.9
Target of Evaluation
TOE
product (or parts thereof, if the product is not fully evaluated) with a clear boundary, which is subject to the
evaluation
3.10
FIT Security Target
FIT ST
documented information describing the security properties and the operational environment of the TOE
(3.9)
Note 1 to entry: The FIT ST may have different content, structure and size depending on the CSA assurance level.
3.11
FIT Protection Profile
FIT PP
implementation-independent statement of security needs for a TOE (3.9) type
3.12
Secure User Guide
SUG
documented information describing the steps necessary to set up the TOE (3.9) into the intended secure
state (3.16)
3.13
Evaluation Technical Report
ETR
documented information describing the results of the evaluation
3.14
scheme-specific checklist
list of items defining the required level of detail and granularity of the documentation, specified by the
scheme
3.15
knowledge
facts, information, truths, principles or understanding acquired through experience or education
Note 1 to entry: An example of knowledge is the ability to describe the various parts of an information assurance
standard.
Note 2 to entry: This concept is different from the concept “Knowledge of the TOE”.
[SOURCE: ISO/IEC TS 17027:2014, 2.56, modified — Note 1 to entry has been added from
ISO/IEC 19896-1:2018, Note 2 to entry is new]
3.16
secure state
state in which all data related to the TOE (3.9) security functionality are correct, and security functionality
remains in place
3.17
self-assessment
conformity assessment activity that is performed by the person or organization that provides the TOE
(3.9) or that is the object of conformity assessment
[SOURCE: EN ISO/IEC 17000:2020, definition 4.3 with Notes and Examples removed]
3.18
evaluation task parameter
parameter required to be set when using this document to define how the evaluation task shall be
executed by the evaluator (3.1)
4 Conformance
The following Table 1 indicates how the evaluation tasks should be chosen for a certain scheme for the different CSA assurance levels:
To implement the methodology for a certain scheme, the following steps shall be performed:
1. The scheme developer needs to perform a (domain) risk assessment, reviewing the domain under
consideration.
2. The scheme developer shall assign the Attack Potential (cf. 5.4 and Annex F) to each CSA assurance level used in the scheme.
3. For each CSA assurance level the scheme developer shall select those evaluation tasks required for this level; these are marked grey in Table 1.
4. For each task chosen, the scheme developer shall review the parameters for this evaluation task and set them suitably based on the risk assessment and the determined attack potential. For the evaluation task “development documentation” this includes setting up a scheme-specific checklist (which may be empty).
5. For each CSA assurance level the scheme developer shall review whether inclusion of the evaluation tasks recommended for this level is sensible; these tasks contain the word “Recommended” in Table 1.
6. For each CSA assurance level the scheme developer shall review whether the selected evaluation tasks are sufficient for the scheme based on the determined Attack Potential. If not, the scheme developer shall select additional evaluation tasks (e.g. from the same CSA assurance level), tasks from a higher CSA assurance level or additional tasks not defined in this methodology. These may replace tasks already chosen.
7. For each new or updated task chosen, the scheme developer shall review the parameters for this task and set them suitably based on the risk assessment and the attack potential.
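The following Python sketch is informative only; it illustrates how the outcome of these steps could be recorded as a machine-readable scheme definition. All identifiers (EvaluationTask, SchemeDefinition) and the example task selection and parameter values are illustrative assumptions, not defined by this document.

    from dataclasses import dataclass, field

    @dataclass
    class EvaluationTask:
        name: str
        required: bool                 # required vs. "Recommended" in Table 1
        parameters: dict = field(default_factory=dict)  # set in steps 4 and 7

    @dataclass
    class SchemeDefinition:
        csa_level: str                 # "basic", "substantial" or "high"
        attack_potential: str          # assigned in step 2 (cf. 5.4 and Annex F)
        tasks: list = field(default_factory=list)

    # Hypothetical instantiation for CSA assurance level "substantial".
    scheme = SchemeDefinition(
        csa_level="substantial",
        attack_potential="Enhanced Basic",
        tasks=[
            EvaluationTask("Completeness check", required=True),
            EvaluationTask("Conformance testing", required=True,
                           parameters={"effort_person_days": 10,
                                       "sampling": "risk-based"}),
            EvaluationTask("Development documentation", required=False,
                           parameters={"checklist": []}),  # may be empty
        ],
    )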
For some evaluation tasks the scheme may require additional inputs from the developer, e.g. an architectural overview. This additional input should be limited as much as possible, especially where such documentation is typically prepared only for the assessment or evaluation, i.e. is not readily available for the TOE anyway.
NOTE 2 Requiring design information might preclude some products from assessment or certification, as this information might not be available: some third-party components, including hardware, might be proprietary, with no possibility of obtaining their design information. This is in general not applicable if white-box testing is performed (if this is an option in the scheme). Furthermore, composition of certified parts is an option to mitigate this problem.
5 General concepts
5.1 Usage of this methodology
Clause 5 describes elements of an evaluation methodology for fixed-time security certification and self-assessment.
To instantiate a specific evaluation methodology based on this generic methodology, the required evaluation tasks are selected depending on the intended CSA assurance level.
Depending on the domain, certain evaluation tasks are required, while others are optional (see Clause 4).
For sample-based evaluation tasks, the scheme needs to devise the sample size and sampling strategy as
well as the absolute or relative weight, i.e. the number of person days or the percentage of overall
evaluation time. Additional constraints on sampling might be provided, e.g. on the limits of sampling
depending on the CSA assurance level.
To use this methodology, it is not necessary to require all evaluation tasks described in Clause 6 for every
CSA assurance level. For example, a scheme designed for CSA assurance level “substantial” might require
a “Basic crypto analysis” evaluation task or might omit it and possibly integrate the necessary parts into
the “Conformance testing” task instead.
NOTE This document and the resulting scheme do not define the exact structure of the documents used or
produced by the evaluation, e.g. the FIT ST or the ETR. These are scheme dependent.
The scheme will require different sets of information, or information with different levels of detail. This depends, on the one hand, on the assurance required; on the other hand, additional information might speed up certain evaluation tasks.
In general, the developer shall provide a FIT Security Target and a Secure User Guide. The latter may not be needed if the TOE enters the secure state as defined in the FIT ST automatically, i.e. if no further guidance is necessary.
The scheme may require additional information for certain activities. This is indicated in the respective
evaluation tasks where applicable.
The evaluator shall have access to information (like standards, protocol specifications) regarding the
technology implemented in the TOE, where this information is publicly available.
NOTE Publicly available does not imply that it is available free of charge.
This methodology is concerned with ICT product evaluation, and a scheme might limit its evaluation tasks to purely ICT product-related activities. However, experience in ICT product certification has shown that it
is sensible and valuable to evaluate the development process as well. This concerns both the initial
development (e.g. regarding security during design and construction of the product, including site
security) as well as aspects beyond delivery of the product, e.g. vulnerability and update management
processes. To improve usage of audit results in future product evaluations, the auditor may define a set
of artefacts (e.g. meeting reports, listing of configuration management systems, filled-in checklists) which
will then be requested in every subsequent product evaluation to verify that the processes audited have
been adhered to in this instance.
Generic standards for development process evaluations should be reused where possible, applicable or
available.
5.4 Attack Potential
To determine the necessary evaluation tasks and their parameters, it is necessary to define the expected threat agent, characterized by a specific strength, also called Attack Potential. The vulnerability analysis task of the evaluator may include penetration testing assuming the Attack Potential of the threat agent. The following levels of Attack Potential are assumed in this document; the categorization is based on [2] and [5]:
— Basic
— Enhanced Basic
— Moderate
— High
NOTE 1 Attack Potential Moderate and High are unlikely to be addressable in a fixed-time evaluation scheme: systematic availability of detailed documentation will probably be necessary to allow evaluators to be on par with high-level threat agents.
The CSA [1] defines three assurance levels: basic, substantial and high. Each level has an implicitly defined
attack scenario assigned. Scheme developers are advised to review the definitions in the CSA to align the
CSA assurance levels (as applicable to their domain) with the attack potential used in this methodology.
NOTE 2 The terms used in the context of attack potential are used as defined in this document and deviate from
the meaning of similar terms used in the CSA.
In the end, evaluators will assess whether a threat agent possessing a given Attack Potential is able to bypass or break the security functionality of the TOE.
The calculation of the attack potential is given in Annex F.
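The following Python sketch is informative only; it illustrates the general shape of such a calculation: each attack is rated against a set of factors, the points are summed, and the total is mapped to a level. The factor names follow common attack-potential rating practice, but all point values and thresholds below are illustrative placeholders; the normative factors and values are those of Annex F.

    # All point values and thresholds are illustrative placeholders; the
    # normative factors and values are defined in Annex F.
    FACTOR_POINTS = {
        "elapsed_time": {"one day": 0, "one week": 1, "one month": 4},
        "expertise": {"layman": 0, "proficient": 2, "expert": 5},
        "knowledge_of_toe": {"public": 0, "restricted": 2, "sensitive": 4},
        "window_of_opportunity": {"unnecessary": 0, "easy": 1, "moderate": 4},
        "equipment": {"standard": 0, "specialised": 3, "bespoke": 7},
    }

    LEVELS = [(9, "Basic"), (14, "Enhanced Basic"), (20, "Moderate")]

    def rate_attack(attack: dict) -> str:
        """Sum the factor points of an attack and map the total to a level."""
        points = sum(FACTOR_POINTS[factor][value]
                     for factor, value in attack.items())
        for threshold, level in LEVELS:
            if points < threshold:
                return level
        return "High"

    print(rate_attack({"elapsed_time": "one week", "expertise": "expert",
                       "knowledge_of_toe": "public",
                       "window_of_opportunity": "easy",
                       "equipment": "specialised"}))   # prints "Enhanced Basic"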
5.5 Knowledge building
Ensuring that each evaluation task produces the expected results requires certain knowledge and competence of the evaluators. This knowledge is briefly described for each evaluation task and needs to
be refined when setting up the scheme.
To ensure that an overall evaluation produces the expected results, the competent evaluators need to work well together as a team. In particular, the evaluators who work on the documentation parts of the evaluation need to collaborate very closely with the evaluators performing the actual testing; ideally, they are the same (set of) persons, especially if the total time span of the evaluation is short.
6 Evaluation tasks
6.1 Completeness check
6.1.1 Aim
The aim of this evaluation task is to verify that all evidence required for evaluation is provided.
6.1.2 Evaluation method
This evaluation task is a completeness check of the evidence required by the scheme. No access to internal
documents is required. Depending on the TOE, access to publicly available specifications or other
documents distributed with or referenced by the TOE might be necessary.
6.1.3 Evaluator competence
6.1.4 Evaluator work units
6.1.4.1 Work unit 1
The evaluators shall check that all evidence required for the evaluation is present. This includes a sufficient number of TOEs.
Several samples of the TOE might be required, e.g. because a TOE may completely fail during testing, may be rendered unusable by some tests, or because parallel testing (to speed up the evaluation) is implemented.
6.1.4.2 Work unit 2
The evaluator shall check that the manufacturer has provided the testing environment required to carry
out the TOE evaluation activities, if applicable to the testing activities and the TOE.
6.1.4.3 Work unit 3
If white box cryptography is part of the evaluation, the evaluators shall check that the evidence provided
covers the cryptography specified in the FIT ST and the requirements for evidence of the applicable
cryptographic work units.
This can be done by identifying all security functions which contain cryptography (as given in the FIT ST)
and checking that the corresponding evidence for the cryptographic part is present.
NOTE The evidence is usually source code, pseudo code or schematics and the crypto description/specification
provided in the FIT ST or in a separate document.
6.2 FIT Protection Profile evaluation
6.2.1 Aim
The aim of this evaluation task is to verify that the FIT PP is well constructed, consistent and suitable as a basis for future FIT STs.
NOTE Background for the FIT PP concept is provided in Annex B.
6.2.2 Evaluation method
This evaluation task is a complete review of the protection profile. No access to any actual TOE is required. Depending on the FIT PP, access to publicly available specifications (e.g. standards) for technology described in the FIT PP might be necessary.
6.2.3 Evaluator competence
The evaluators shall have knowledge of the typical TOEs implementing the FIT PP. They shall be able to
review documentation like standards or usage scenarios.
6.2.4 Evaluator work units
The evaluators shall check that the FIT PP follows the structural requirements stated by the scheme.
The evaluators shall verify that the information contained in the FIT PP is free of
contradictions/discrepancies and inconsistencies.
The evaluators shall confirm that the FIT PP describes real security properties of the class of TOEs, especially that it is not “misleading” with respect to the security properties.
The evaluators shall confirm that the FIT PP describes security properties relevant for the intended usage.
The evaluators shall verify that the security functions in the FIT PP are relevant for the intended use case.
The evaluators shall confirm that the assumptions stated in the FIT PP are realistic assumptions for the
intended use case of the class of TOEs.
The evaluators shall confirm that the set of threat agents is realistic considering the intended use case of
the class of TOEs.
The evaluators shall confirm that the boundaries of the class of TOEs and the boundaries of the evaluation
are clearly and unambiguously defined in the FIT PP.
If the scheme maintains cryptographic requirements for the intended CSA assurance level and product
class, then the evaluator shall examine the cryptographic specification provided in the FIT PP to
determine that the cryptographic mechanisms are suitable to meet their security objective as stated in
the FIT PP.
The evaluators shall verify that the FIT PP is understandable by the potential end customers. “Understandability” in this context means that the language and depth of description in the FIT PP are commensurate with the knowledge of the anticipated end customer, including the expected knowledge of terms and concepts.
The evaluators shall confirm that the security functionality stated in the FIT PP is conformant to applicable requirements, e.g. those stated in the scheme.
The evaluators shall check that all additional actions (to be later instantiated by the FIT ST author)
introduced in the FIT PP (called operations) are suitably marked.
NOTE “Suitably marked” implies that the operations can be easily found, and possible values (including free
text) are denoted for any variable defined in a given operation.
The evaluators shall verify that FIT PP operations with provided values do not lead to any contradiction
with the security objectives for any of the provided values.
6.3 Review of security functionalities
6.3.1 Aim
The aim of this evaluation task is to review that the security functionalities are sound and that no obvious contradictions or omissions exist compared to best practices as defined by the scheme.
NOTE This evaluator task is the counterpart of the FIT ST evaluation for CSA assurance level basic.
6.3.2 Evaluation method
This evaluation task is a complete review of the security functionalities in the FIT ST. No access to the TOE or internal documents is required. Depending on the TOE, access to publicly available sources might be necessary.
6.3.3 Evaluator competence
The evaluators shall have knowledge of the TOE domain and current best security practices. They shall
be able to review documentation and research common sources for best practices for security
functionalities.
6.3.4 Evaluator work units
The evaluator shall verify that the purpose of the TOE is well defined and clearly described.
The evaluators shall confirm that each security functionality mentioned in the FIT Security Target meets
current best security practices both for the domain as well as general best practices.
NOTE It is up to the scheme (developer) to define the best practices, e.g. using interpretation groups.
The evaluators shall confirm that the security functionality mentioned in the FIT Security Target is
understandable by the potential end customers.
6.4 FIT Security Target evaluation
6.4.1 Aim
The aim of this evaluation task is to verify that the FIT ST is well constructed, consistent and suitable as a basis for further evaluation.
NOTE This evaluator task is the counterpart of the Review of security functionalities (6.3) for CSA assurance levels substantial and high.
6.4.2 Evaluation method
This evaluation task is a complete review of the FIT ST. No access to the TOE is required. Access to all documents provided for evaluation is necessary. Depending on the TOE, access to publicly available specifications might be necessary.
If cryptography is part of the evaluation, the cryptographic specification needs to be considered part of
the evidence as well.
6.4.3 Evaluator competence
The evaluators shall have knowledge of the TOE domain. They shall be able to review documentation.
6.4.4 Evaluator work units
6.4.4.1 Work unit 1a
This work unit is applicable if the FIT ST is not based on a FIT PP.
The evaluators shall check that the FIT ST follows the structural requirements stated by the scheme.
NOTE Annex A provides guidance on the typical contents of STs.
The evaluators shall verify that the information contained in the FIT ST is free of contradictions and
inconsistencies.
The evaluators shall verify that the information in the FIT ST is free of contradictions to other information provided with the TOE, especially the SUG and other (design-relevant) documents mandated by the scheme (if any).
The evaluators shall confirm that the FIT ST describes real security properties of the TOE, especially that it is not “misleading” with respect to the security properties.
The evaluators shall verify that the FIT ST describes security properties relevant for the intended usage.
The evaluators shall verify that the security functions in the FIT ST are relevant for the intended use case.
The evaluators shall confirm that the assumptions stated in the FIT ST are realistic assumptions for the
intended use case of the TOE.
The evaluators shall confirm that the set of threat agents is realistic considering the intended use case of
the TOE.
The evaluators shall confirm that the scope and boundaries of the TOE are clearly and unambiguously
defined in the FIT ST.
If the scheme maintains cryptographic requirements for the intended CSA assurance level and product
class, then the evaluators shall check that the cryptographic algorithms listed in the FIT ST are contained
in the set of cryptographic algorithms approved by the scheme.
The evaluators shall review the cryptographic specification to determine that the cryptographic
mechanisms are suitable to meet their security objective.
The evaluators shall verify that the FIT ST is understandable by the potential end customers. “Understandability” in this context means that the language and depth of description in the FIT ST are commensurate with the knowledge of the anticipated end customer, including the expected knowledge of terms and concepts.
The evaluators shall confirm that the security functionality stated in the FIT ST is conformant to
applicable requirements, e.g. those stated in the scheme.
6.4.4.2 Work unit 1b
6.5 Development documentation
6.5.1 Aim
The aim of this evaluation task is to assess the development documentation (including, if applicable, development process documentation) and evidence available to the evaluator.
In some other methodologies this task is usually the most time-consuming; schemes should keep this task as light as possible and consider substituting it with other evaluation tasks (e.g. penetration testing) wherever applicable.
6.5.2 Evaluation method
This evaluation task is a complete analysis of the entries given by the scheme-specific checklist. This can
include verifying
— the specification of the functionalities of the product, at a coarse level or down to the exact interface
parameters;
— development process documentation including vulnerability and patch management;
— the design of the product, at a subsystem or module level;
— an additional description of the security architecture (secure boot, self-protection, domain
separation, and so on);
— source code.
6.5.3 Evaluator competence
The evaluators shall have knowledge of the TOE domain and experience with secure development processes.
6.5.4 Work units
Evaluators shall check that the documentation is complete according to the scheme-specific checklist.
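The following Python sketch is informative only; it illustrates a mechanical completeness check against a scheme-specific checklist. The checklist entries and file names are illustrative assumptions, not defined by this document.

    # Scheme-specific checklist and developer-provided documentation
    # (all entries and file names are illustrative).
    checklist = [
        "functional specification",
        "vulnerability and patch management process",
        "subsystem-level design",
        "security architecture description",
    ]

    provided = {
        "functional specification": "doc/func_spec_v3.pdf",
        "subsystem-level design": "doc/design_v2.pdf",
    }

    missing = [item for item in checklist if item not in provided]
    if missing:
        # Missing items are reported back, e.g. in the ETR.
        print("Documentation incomplete; missing:", ", ".join(missing))
    else:
        print("Documentation complete according to the scheme-specific checklist.")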
6.6 Evaluation of TOE Installation
6.6.1 Aim
The aim of this evaluation task is to verify that the TOE can be installed as described in the SUG.
NOTE The TOE might enter the secure state automatically. In this case, the aim of this evaluation task is to
verify that this final state is indeed the intended secure state.
6.6.2 Evaluation method
This evaluation task is a complete installation of the TOE. Unless the installation fails, no access to the documented information besides the SUG is needed.
6.6.3 Evaluator competence
The evaluators shall be able to observe the installation of the TOE. The knowledge and skills required to install the TOE shall be comparable to those of the persons defined as end users of the TOE. In case the installation does not work as described, the evaluators may be required to have additional domain expertise to complete the installation despite the defects in the guidance.
6.6.4 Evaluator work units
6.6.4.1 Work unit 1
The evaluators shall check that all systems necessary to install the TOE according to the SUG and to use it for the purposes described in the FIT ST are present and correctly set up. The setup of those additional systems might be done together with the developer and is in itself not part of the evaluation. The evaluation shall not proceed until the setup of additional systems, if any, has been successfully completed.
EXAMPLE The TOE needs a backend cloud service. In this case, the developer needs to provide test accounts
or a local (working) cloud installation to the evaluator.
The evaluator shall set up the TOE according to the SUG. If the SUG exists in several languages, the guidance in the language of the FIT ST shall be used for this setup.
NOTE The language of the FIT ST is defined by the scheme or the CAB.
The evaluator shall record in the ETR if the TOE is operating as described in the FIT ST (secure state)
after completion of the TOE set up.
If the TOE is not in the secure state then the evaluators shall use their general expert knowledge and the
remaining information (besides the SUG) to set up the TOE in the secure state. The evaluators shall record
the additional (or changed) steps compared to the SUG in the ETR.
6.6.4.2 Work unit 2
The evaluators shall check if the SUG follows the applicable scheme requirements (if any).
6.6.4.3 Work unit 3
The evaluator shall determine how hard it is to get the TOE out of the secure state or whether a warning
is presented to the user if the TOE is no longer in the secure state.
6.7 Conformance testing
6.7.1 Aim
The aim of this evaluation task is to verify that the TOE complies with the functional claims made in the
FIT ST.
6.7.2 Evaluation method
This evaluation task is a conformity testing of the TOE, which makes use of laboratory equipment.
Access to scheme defined external documents (e.g. standards, sector specific minimum requirements,
blueprints of architectural requirement) is required. Access to scheme defined internal documents is
required. Access to publicly available specifications is required. Access to the TOE (and possibly
background systems provided by the vendor) is required.
NOTE For CSA assurance level “high” usually the scheme mandates some additional information, e.g. an
architectural overview or a structural overview of the update mechanism.
6.7.3 Evaluator competence
The evaluators shall have substantial knowledge of the TOE domain. They shall be able to execute the required conformity tests. Evaluators shall have the skills to perform all tests independently from the TOE developer (and possibly to modify the tests), even if these are part of a larger test suite or tool under normal circumstances.
For every interface where no automated tools are available or feasible, the evaluators shall have
substantial knowledge of such interfaces used in the TOE domain. They shall be capable of understanding
complex specifications and transforming them into tests.
6.7.4 Evaluator work units
6.7.4.1 Work unit 1
The evaluator shall devise a test plan. This test plan shall fulfil the scheme requirements regarding coverage and depth and/or effort.
NOTE 1 The scheme can mandate full coverage, irrespective of the effort required, or some kind of sampling strategy, usually within a time-limited period.
In the context of conformity testing, “risk-based” has two meanings:
a) The likelihood of nonconformity, i.e. how likely a nonconformity is (using the professional judgement of the evaluators); and
b) The impact of a potential nonconformity for the TOE (i.e. its security functions and assets).
If risk-based sampling is used, the evaluators shall set up a suitable sampling strategy, taking into account previous evaluation results, information received from the certification body and the experience with similar TOEs. The evaluators shall further employ the entire documentation received for the TOE.
If acceptance criteria are used, the evaluator shall verify for each test case that a “pass” implies fulfilment
of the respective acceptance criterion as well.
The evaluators shall use validated tools where possible to complete this task. Where tools are used, their coverage shall be analysed and, if necessary, additional (manual) tests shall be performed.
NOTE 2 The rigor of the testing depends on the CSA assurance level chosen and needs to be defined by the
scheme.
The test case can be completely covered by a tool, possibly considering only part of its output. If the tool
is incomplete, insufficient or not present, then manual tests need to be derived and the given functional
security requirements for the TOE shall be transferred into test cases by the evaluators. A test case shall
be described with at least the following characteristics:
• test description with test expectation, test preparation, and testing steps;
• test result;
• assessment (pass/fail).
The test expectation is the expected test result, which will occur if the component functions correctly.
The test expectation shall result from the component’s intended behaviour, possibly backed by an
acceptance criterion. The test result is the behaviour of the component detected during the testing steps.
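The following Python sketch is informative only; it shows one possible representation of a test case record carrying the minimum characteristics listed above. The field names are illustrative assumptions, not defined by this document.

    from dataclasses import dataclass, field

    @dataclass
    class TestCase:
        # Test description: expectation, preparation and testing steps.
        expectation: str          # expected result if the component functions correctly
        preparation: str
        steps: list = field(default_factory=list)
        result: str = ""          # behaviour detected during the testing steps
        verdict: str = ""         # assessment: "pass" or "fail"

        def assess(self) -> str:
            """Pass if the test result corresponds with the test expectation."""
            self.verdict = "pass" if self.result == self.expectation else "fail"
            return self.verdict

    tc = TestCase(expectation="connection rejected",
                  preparation="set up TOE according to the SUG",
                  steps=["connect with an expired certificate"])
    tc.result = "connection rejected"
    print(tc.assess())   # prints "pass"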
The process model for transferring requirements to test cases comprises the following steps:
1. Determine the security requirements to be tested
During conformity testing it shall be examined whether the chosen technical implementation produces
the expected results.
6.7.4.3 Work unit 3
The evaluators shall execute the defined tests from the test plan to find nonconformities of the TOE with respect to the FIT ST. For each security requirement mentioned in the test plan, the evaluators shall perform the defined tests and record the results. For each failed conformity test, the evaluators shall describe the failure exactly (test case, expected result, observed result) and record the results.
NOTE 1 This might require the evaluator to repeat the test with changed parameters.
If the test result corresponds with the test expectation, the evaluation will be successful (pass verdict). If
the test result deviates, the evaluation will be unsuccessful (fail verdict).
If no test case can be specified for a security requirement, e.g. if one implementation detail cannot be
addressed via an external interface, an alternative proof of correct implementation shall be given. This
can be done as part of a different evaluation method.
NOTE 2 Further details on validation and calibration of equipment can be found in EN ISO/IEC 17025, 6.4. [4]
The evaluators shall record the testing strategy and the results in the ETR.
6.8 Vulnerability review
6.8.1 Aim
The aim of this evaluation task is to review that the TOE is not vulnerable to publicly known
vulnerabilities for the intended use in the intended environment.
NOTE The scheme might define certain “pre-defined” sources for potential vulnerabilities (e.g. for lower
assurance levels).
6.8.2 Evaluation method
This evaluation task is a search for public vulnerabilities, possibly using the sources provided by the scheme. The evaluators shall employ the available documentation received for the TOE. No access to internal documents is required.
NOTE When considering threat agents with attack potential “basic”, the evaluator will only take into account publicly known vulnerabilities/vulnerability classes, previous evaluation results, information received from the certification body and the experience with similar TOEs.
6.8.3 Evaluator competence
The evaluators shall have knowledge of the TOE domain. They shall be able to review vulnerability descriptions to check if they apply to a TOE. If no pre-defined sources are mandated by the scheme, they shall be able to efficiently search for public vulnerabilities in a wide range of sources. In case evidence from the development process is present, the evaluators shall be able to review such evidence.
6.8.4 Evaluator work units
If the scheme provides a pre-defined set of sources for vulnerabilities, the evaluator shall select those
potential vulnerabilities which technically might apply to the TOE, based on the FIT ST.
Otherwise, the evaluators shall search for vulnerabilities analysing the available evidence (public and
proprietary), including the FIT ST, taking into account the TOE technology.
The evaluators shall review whether the vulnerabilities determined apply to the TOE.
NOTE 1 The review can be performed by comparing version numbers of TOE parts (e.g. libraries), checking for
countermeasures against the determined vulnerabilities (e.g. additional software parts like filters which prevent
certain types of “risky” data transport) or some limited testing.
NOTE 2 Only attacks which are possible in the operational environment specified in the FIT ST (including
assumptions on existing measures) are considered here.
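The following Python sketch is informative only; it illustrates the version-number comparison mentioned in NOTE 1. The component inventory and vulnerability entries are illustrative assumptions; in practice the sources are those mandated by the scheme or public vulnerability databases.

    # TOE component inventory and vulnerability entries are illustrative;
    # real sources are those mandated by the scheme (e.g. public CVE feeds).
    toe_components = {"libexample": "1.2.3", "tlslib": "0.9.8"}

    # (component, affected-version predicate, identifier); plain string
    # comparison is a simplification of real version-scheme parsing.
    known_vulnerabilities = [
        ("tlslib", lambda v: v < "1.0.0", "VULN-2021-0001"),
        ("libexample", lambda v: v < "1.2.0", "VULN-2020-0042"),
    ]

    for component, affected, ident in known_vulnerabilities:
        version = toe_components.get(component)
        if version is not None and affected(version):
            # Candidates for further review: countermeasures, limited testing.
            print(ident, "potentially applies to", component, version)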
This work unit is only applicable if evidence from the development process is present.
The evaluators shall employ the available documentation received for the TOE. The document bundle should comprise:
— reports of vulnerability scans no older than three months;
— vulnerability remediation actions (at least a sample) annexed to the organization’s internal vulnerability management policy, where remediation deadlines are clearly indicated;
— a risk management analysis covering at least the 12 months prior to the evaluation; the document should bear unambiguous version-tracking labels to reflect its evolution.
Additional relevant documents may be requested by the evaluators to corroborate the TOE’s adherence to the security standards set out in the certification scheme. This evaluation task may also involve a search for public vulnerabilities using the sources provided by the scheme.
6.9 Vulnerability testing
6.9.1 Aim
The aim of this evaluation task is to verify that the TOE is not vulnerable to publicly known vulnerabilities
for the intended use in the intended environment.
NOTE The scheme might define certain “pre-defined” sources for potential vulnerabilities (e.g. for lower
assurance levels).
6.9.2 Evaluation method
This evaluation task is a search for public vulnerabilities, possibly using the sources provided by the scheme. The evaluators shall employ the available documentation received for the TOE. No access to internal documents is required.
NOTE When considering threat agents with attack potential “basic”, the evaluator will only take into account
publicly known vulnerabilities/vulnerability classes, previous evaluation results, information received from the
certification body and the experience with similar TOEs.
6.9.3 Evaluator competence
The evaluators shall have knowledge of the TOE domain. They shall be able to execute the required tests. If no pre-defined tests are available, the evaluators shall be able to devise tests for the vulnerabilities. If no pre-defined sources are mandated by the scheme, they shall be able to efficiently search for public vulnerabilities in a wide range of sources. In case evidence from the development process is present, the evaluators shall be able to review such evidence.
6.9.4 Evaluator work units
If the scheme provides a pre-defined set of sources for vulnerabilities, the evaluator shall select those
potential vulnerabilities which technically might apply to the TOE, based on the FIT ST.
Otherwise, the evaluators shall search for vulnerabilities analysing the available evidence (public and
proprietary), including the FIT ST, taking into account the TOE technology.
NOTE Only attacks which are possible in the operational environment specified in the FIT ST (including
assumptions on existing measures) are considered here.
The evaluator shall devise a test plan. This test plan shall fulfil the scheme requirements regarding
coverage and depth and/or effort.
NOTE 1 The scheme can mandate full coverage, irrespective of the effort required, or some kind of sampling strategy, usually within a time-limited period.
If risk-based sampling is used, the evaluators shall set up a suitable sampling strategy, taking into account previous evaluation results, information received from the certification body and the experience with similar TOEs. The evaluators shall further employ the entire documentation received for the TOE.
In the context of vulnerability testing, “risk-based” has two meanings:
a) The likelihood of vulnerability, i.e. how likely such a vulnerability is (using the professional
judgement of the evaluators); and
b) The impact of a potential vulnerability for the security measures of the TOE.
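The following Python sketch is informative only; it illustrates how the two meanings above can be combined into a simple prioritization of candidate tests within a fixed-time budget. The candidate tests, scoring scales and budget are illustrative assumptions.

    # Candidate tests scored by the two meanings of "risk-based":
    # a) likelihood and b) impact (1..5 scales by professional judgement;
    # all entries and the budget are illustrative).
    candidates = [
        {"test": "default credentials accepted", "likelihood": 4, "impact": 5},
        {"test": "TLS version downgrade", "likelihood": 2, "impact": 4},
        {"test": "debug interface exposed", "likelihood": 3, "impact": 5},
    ]

    budget = 2  # number of tests the fixed-time budget allows

    ranked = sorted(candidates,
                    key=lambda c: c["likelihood"] * c["impact"], reverse=True)
    for candidate in ranked[:budget]:
        print(candidate["test"], "risk =",
              candidate["likelihood"] * candidate["impact"])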
NOTE 3 The rigor of the testing depends on the CSA assurance level chosen and needs to be defined by the
scheme.
NOTE 4 This work unit provides a limited sampling strategy only. For the CSA assurance level “high” and if
further assurance is required, the evaluation task “Penetration Testing” is available (as replacement).
For each listed potential vulnerability that is selected, the evaluators shall produce one or several test(s).
NOTE Only attacks which are possible in the operational environment specified in the FIT ST (including
assumptions on existing measures) are considered here.
The expected test result is the outcome of the test which will occur if the component functions correctly, i.e. if the vulnerability is not present. The expected test result shall follow from the component’s intended behaviour. The test result is the behaviour of the component actually detected during the testing steps (see 6.9.4.4).
During vulnerability testing it shall be examined whether the chosen technical implementation is not
vulnerable to known potential weaknesses.
6.9.4.4 Work unit 4
For each test case mentioned in the test plan the evaluators shall perform tests and record the results.
The evaluators shall use validated tools where possible to complete this task. Where tools are used, their
coverage shall be analysed and if necessary additional (manual) tests shall be performed.
If the test result corresponds with the test expectation, the evaluation will be positive (pass). If the test
result deviates, the evaluation will be negative (fail).
For each failed vulnerability test the evaluators shall review the impact of the failure. If the impact cannot
be deduced from the test results, the evaluator shall calculate the relevant attack potential.
NOTE This might require the evaluator to repeat the test with changed parameters.
The evaluators shall record the testing strategy and the results in the ETR.
6.9.4.5 Work unit 5
This work unit is only applicable if evidence from the development process is present.
The evaluators shall employ the available documentation received for the TOE. The document bundle should comprise:
— reports of vulnerability scans no older than three months;
— vulnerability remediation actions (at least a sample) annexed to the organization’s internal vulnerability management policy, where remediation deadlines are clearly indicated;
— a risk management analysis covering at least the 12 months prior to the evaluation; the document should bear unambiguous version-tracking labels to reflect its evolution.
Additional relevant documents may be requested by the evaluators to verify that the TOE’s development
adheres to the security development standards applicable in the scheme.
6.10 Penetration testing
6.10.1 Aim
The aim of this evaluation task is to verify, using a sampling testing strategy based on the Flaw Hypothesis Methodology, that the TOE does not contain vulnerabilities from the class of known vulnerabilities.
NOTE This evaluation task is a superset of the “Vulnerability testing” task.
6.10.2 Evaluation method
6.10.2.1 General
This evaluation task is penetration testing of the TOE. It is based on lessons learned on similar products/technologies and on threat analysis, and uses methods such as attack trees and a search for publicly known vulnerabilities. Access to scheme-defined general information about the TOE domain is required. Access to scheme-defined internal documents may be required. Depending on the TOE, access to publicly available specifications might be necessary.
NOTE For the CSA assurance level “high” usually the scheme mandates some additional information, e.g. an
architectural overview or a structural overview of the update mechanism.
Relevant information includes the state-of-the-art for technologies of product types related to the
TOE:
— standards;
2. The evaluator hypothesizes flaws in the product to be tested. The evaluator cannot possibly test all possible attacks on the product; therefore they shall:
— exclude attacks that are already known to require too much effort to be considered (the effort
estimate typically relies on the attack potential calculation);
— exclude attacks that are not possible due to the context defined in the FIT ST (for example,
physical attacks will not be assessed when an assumption in the security target states that the
product can be physically accessed only by trusted users);
— prioritize the remaining attack scenarios, in order to assess whether flaws are actually present (the evaluator may choose, for example, to first test the flaws that are easier to exploit, or flaws that are believed to be more frequent in the family of products under evaluation).
3. If needed, the evaluator will perform actual penetration testing to further characterize the attack
potential required to exploit the flaws (“exploit” means here “using the flaw to compromise an asset
or realize a threat as they are defined in the FIT Security Target”). The results of the testing may be
used again as evidence (see step 1) to update the hypotheses and reorient the test plan in a feedback
loop.
4. The evaluator eventually synthesizes their findings by describing the attack scenarios they consider
applicable to the product, and their cost in terms of attack potential.
It is not expected that evaluators strictly follow each of these steps, but they should be able to demonstrate that their applied approach is equivalent. Moreover, certification schemes may further refine the methodology, especially when considering a smaller subset of product types.
EXAMPLE The JIL document [14] is an example of such a restriction.
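The following Python sketch is informative only; it illustrates the exclusion and prioritization of hypothesized flaws described in step 2. All flaw entries, scores and the threshold are illustrative assumptions, not defined by this document.

    # Hypothesized flaws with an estimated attack potential (cf. Annex F),
    # an in-scope flag derived from the FIT ST, and an ease-of-testing score.
    # All entries and the threshold are illustrative.
    hypotheses = [
        {"flaw": "fault injection on secure boot", "attack_potential": 25,
         "in_scope": False, "ease": 1},  # FIT ST assumes trusted physical access
        {"flaw": "firmware update without signature check",
         "attack_potential": 6, "in_scope": True, "ease": 4},
        {"flaw": "command injection on management interface",
         "attack_potential": 9, "in_scope": True, "ease": 3},
    ]

    THRESHOLD = 14  # maximum attack potential considered by the scheme

    # Exclude attacks that require too much effort or are out of scope,
    # then test the easier-to-exploit hypotheses first.
    workable = [h for h in hypotheses
                if h["in_scope"] and h["attack_potential"] <= THRESHOLD]
    for h in sorted(workable, key=lambda h: h["ease"], reverse=True):
        print("test:", h["flaw"])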
6.10.3 Evaluator competence
Evaluators shall be able to execute the required attacks. They shall have substantial knowledge of the TOE domain. They shall be proficient in performing penetration testing.
6.10.4 Evaluator work units
The evaluators shall attempt to bypass or break the security functionality of the TOE. For this the evaluators shall set up a risk-based sampling strategy following the Flaw Hypothesis Methodology, taking into account publicly known vulnerabilities/vulnerability classes, previous evaluation results, information received from the certification body and the experience with similar TOEs. The evaluators shall further employ the entire documentation received for the TOE.
NOTE 1 Bypassing does not mean actually providing the code to circumvent the security functionality. It is
sufficient to demonstrate that such code could be provided with effort commensurate with the expected skills of the
threat agents (as per defined attack potential).
NOTE 2 The breadth and width of the penetration testing depends on the assurance level according to the CSA
and needs further refinements by the scheme.
If the evaluator discovers a potential vulnerability which cannot be exploited by itself, but judges that such an exploit could be developed with more effort than is available within the sampling strategy, the evaluator shall record an expert judgement of the expected effort in the ETR.
Additionally, the evaluator shall record in the ETR each listed potential vulnerability and the evaluator
verdict on exploitability and applicability to the product.
NOTE 1 Evaluators are not required to actually circumvent the security functionality, since exploits are costly to
develop. It is sufficient to estimate the effort required to circumvent it, based on the evaluator experience (and
scheme-specific guidance).
If the evaluator discovers a potential vulnerability which cannot be exploited by itself, but judges that such an exploit could be developed with more effort than is available within the penetration testing strategy, the evaluator shall provide a professional judgement of the expected effort in the ETR.
NOTE 2 The breadth and width of the penetration testing depends on the attack potential and needs further
refinements by the scheme.
6.11 Basic crypto analysis
6.11.1 Aim
The aim of this evaluation task is to validate that the cryptography (e.g. techniques, methods and services) implemented by the TOE complies with the cryptographic specifications provided by the sponsor and the guidelines provided by the scheme.
6.11.2 Evaluation method
This task is probabilistic conformance testing tailored to cryptographic protocols and algorithms. For this, the evaluator needs to receive a cryptographic specification.
NOTE 1 The focus of this work unit is on conformance testing; refer to 6.12.4.2 with respect to sampling.
6.11.3 Evaluator competence
The evaluators need substantial knowledge of cryptography and of the cryptographic requirements of the scheme. The evaluators shall be proficient in devising tests for cryptographic protocols and algorithms.
6.11.4 Evaluator work units
The evaluators shall attempt to find nonconformities of the TOE with respect to the cryptographic requirements mandated by the scheme. For this the evaluators shall set up a risk-based sampling strategy, taking into account previous evaluation results, information received from the certification body and the experience with similar TOEs. The evaluators shall further employ the entire documentation received for the TOE as well as the scheme documents with respect to cryptography.
In the context of conformity testing, “risk-based” has two meanings:
a) The likelihood of nonconformity, i.e. how likely a nonconformity is (using the professional judgement of the evaluators); and
b) The impact of a potential nonconformity for the TOE.
The evaluators shall use validated tools where possible to complete this task.
EXAMPLE If properties on an interface are claimed to be random, a suitable tool can check if obvious statistical
defects in the random number generator or processor exist.
Positive test cases for cryptographic algorithms and schemes should comprise randomly generated known-answer tests and iterated Monte-Carlo tests, if applicable. The test vectors should be generated or verified by an independent, known-good implementation and should not be static. Algorithms accepting variable-length inputs should be tested with inputs of different lengths (including corner cases like length zero).
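The following Python sketch is informative only; it illustrates randomly generated known-answer tests and an iterated Monte-Carlo test for a hash function. The function toe_sha256 is a hypothetical stand-in for the TOE’s interface; Python’s hashlib serves as the independent, known-good reference implementation.

    import hashlib
    import os

    def toe_sha256(data: bytes) -> bytes:
        """Hypothetical stand-in for the TOE's hash interface."""
        return hashlib.sha256(data).digest()   # placeholder for the TOE call

    # Randomly generated known-answer tests with inputs of different lengths,
    # including corner cases like length zero; hashlib is the known-good
    # reference and the test vectors are not static.
    for length in (0, 1, 55, 56, 64, 65, 1000):
        msg = os.urandom(length)
        assert toe_sha256(msg) == hashlib.sha256(msg).digest(), \
            f"nonconformity at input length {length}"

    # Iterated Monte-Carlo test: feed each digest back as the next input.
    ref = toe = os.urandom(32)
    for _ in range(10000):
        ref = hashlib.sha256(ref).digest()
        toe = toe_sha256(toe)
    assert ref == toe, "nonconformity in iterated Monte-Carlo test"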
Positive testing of cryptographic protocols may be done by communicating with an independent, known-
good implementation. The cryptographic algorithms and schemes used by the protocol should be tested
separately as described above.
Negative test cases for cryptographic algorithms and schemes should be specifically crafted to trigger
certain error conditions (e.g. illegal-value errors, out-of-bounds errors, padding errors, etc.).
Negative testing of cryptographic protocols should comprise test cases for unspecified configurations
(unspecified ciphers, protocol version downgrade, etc.), test cases for illegal inputs (e.g. malformed
packets, oversized packets, etc.), and test cases for illegal transitions in the protocol’s state machine (e.g.
insertion of unexpected packets, omission of required packets, etc.).
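The following Python sketch is informative only; it illustrates how such negative test cases might be organized. The driver function handshake and all case parameters are hypothetical assumptions; a conformant TOE is expected to reject every case.

    # Negative test cases for a protocol implementation; each case is expected
    # to be rejected by a conformant TOE. The handshake driver and all case
    # parameters are hypothetical.
    NEGATIVE_CASES = [
        ("unspecified cipher", {"cipher": "NULL-MD5"}),
        ("protocol version downgrade", {"version": "1.0"}),
        ("malformed packet", {"payload": b"\x16\x03"}),       # truncated header
        ("oversized packet", {"payload": b"A" * 1_000_000}),
        ("unexpected packet", {"inject": "Finished before KeyExchange"}),
    ]

    def handshake(**params) -> bool:
        """Hypothetical driver; returns True if the TOE accepts the exchange."""
        raise NotImplementedError("provided by the test bench")

    for name, params in NEGATIVE_CASES:
        try:
            accepted = handshake(**params)
        except NotImplementedError:
            break   # no test bench attached in this sketch
        # A conformant TOE rejects every negative case.
        print(name, "->", "fail" if accepted else "pass")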
Random sources should be tested using a statistical test suite.
NOTE Statistical test suites can only detect very specific statistical defects of a random source (e.g. caused by
implementation errors). It is not possible to assess the quality of random numbers with automatic tests.
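The following Python sketch is informative only; it shows a single test of the kind found in common statistical test suites (the monobit frequency test). The acceptance threshold is an illustrative assumption; as the NOTE above states, passing such tests does not demonstrate the quality of the random numbers.

    import math
    import os

    def monobit_p_value(data: bytes) -> float:
        """Monobit frequency test: detects a gross bias between ones and
        zeros; passing says nothing about cryptographic quality."""
        n = len(data) * 8
        ones = sum(bin(byte).count("1") for byte in data)
        s_obs = abs(2 * ones - n) / math.sqrt(n)
        return math.erfc(s_obs / math.sqrt(2))

    sample = os.urandom(12500)   # placeholder for output of the TOE's source
    # Illustrative decision rule: this test fails if the p-value is below 0.01.
    p = monobit_p_value(sample)
    print("monobit p-value:", p, "->", "pass" if p >= 0.01 else "fail")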
The evaluators shall record the testing strategy and the results in the ETR. If certain algorithms, interfaces or cryptographic functions have not been analysed during sampling, the evaluators shall provide a rationale for this.
6.12 Extended crypto analysis
6.12.1 Aim
The aim of this evaluation task is to verify that the cryptography used complies with the cryptographic
specification provided by the applicant and the state-of-the-art cryptography as defined by the scheme.
It consists of mechanism verification (correct choice) and implementation verification (review for errors
in the implementation).
The scheme developer will specify which state-of-the-art should be considered by the evaluator (e.g. SOG-IS crypto [15], French RGS [10], German TR-02102 [11], Spanish CCN-STIC 807 [12]).
6.12.2 Evaluation method
The task consists of two steps: a theoretical analysis based on the documents provided, and a verification and vulnerability analysis based on the implementation representation and the TOE.
The theoretical analysis rests mainly on the documents provided. Its objective is to detect any vulnerabilities in the cryptographic mechanisms used to achieve the product’s security objectives in its operating environment. In the case of cryptographic services, these mechanisms’ resistance shall be analysed in their usage context.
The second step consists of a verification of the implementation’s conformity for all cryptographic algorithms used and a vulnerability analysis of the source code.
For this, the evaluator needs to have at their disposal the cryptographic specification used in the first step and the source or pseudo code of the cryptographic routines or functions (including the calling parameters where applicable). If the cryptography is implemented in hardware, the hardware specification (schematics) is the equivalent of source code.
NOTE The focus of this work unit is on independent analysis which goes beyond conformance testing of 6.11,
please refer to 6.12.4.2 in respect to sampling.
6.12.3 Evaluator competence
The evaluators need substantial knowledge of cryptography and of the cryptographic requirements of the
scheme. The evaluators shall be proficient in analysing the implementation representation and devising
tests for cryptographic protocols and algorithms.
6.12.4 Evaluator work units
6.12.4.1 Work unit 1
The evaluator shall first check that the documents which were delivered to carry out the analysis are
consistent with the other documents provided to evaluate the product.
The evaluator shall analyse the specification of all the following types of mechanisms of the TOE,
according to the state-of-the-art chosen for the evaluation:
1. Cryptographic algorithms, modes of operation and relevance with regard to objectives
(confidentiality, integrity, availability, authenticity, performance, and so on);
Additionally, on the basis of the developer's description of the random number generator (RNG), the
evaluator shall identify the type of the RNG, e.g. whether it is a deterministic RNG, a physical RNG with
or without a cryptographic post-processing algorithm, or a non-physical true RNG. For deterministic
RNGs (if applicable) the evaluator may confirm full or partial conformance to the RNG recommendations
of the applied scheme.
The evaluator shall produce an analysis report, which shall indicate any potential weaknesses or
vulnerabilities detected. The evaluator may issue recommendations regarding the use of cryptography
(e.g. if mechanisms might be superseded soon by better ones).
6.12.4.2 Work unit 2
The evaluator shall consider the results of the previous analysis (where applicable) to verify the
conformity of the implementation and to look for any vulnerabilities. The evaluator shall determine
whether these vulnerabilities can actually be exploited in the product's operating environment. The
analysis covers:
— Cryptographic implementation conformity (including for the random number generator); The form
of this analysis depends on the evidence elements available;
— Cryptographic implementation vulnerability analysis (consider if the implementation allows for
attacks that possibly impact the TOE's security objective, independent of the intrinsic algorithm
resistance).
Given the extent of the analysis demanded and the evaluators' limited time, they will probably have to
choose which vulnerabilities to investigate.
For this, the evaluators shall set up a risk-based sampling strategy, taking into account previous
evaluation results, information received from the certification body and experience with similar
TOEs. The evaluators shall further employ the entire documentation received for the TOE as well as the
scheme documents with respect to cryptography.
In the context of conformity testing, “risk-based” has two meanings:
a) The likelihood of nonconformity, i.e. how likely a nonconformity is (using the professional judgement
of the evaluators); and
b) The impact of a potential nonconformity for the TOE.
The evaluators shall examine the implementation representation to determine that the implementation
is compliant with the cryptographic specification.
If the scheme maintains requirements for secure coding, then the evaluator shall examine the
implementation representation to determine that it adheres to secure coding guidelines.
The evaluators may use automated tools to facilitate human analysis (e.g. static/dynamic program
analysis tools).
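As a toy illustration of such tool support (not a substitute for proper static analysis), the following Python sketch scans an assumed implementation representation for identifiers of deprecated primitives; the identifier list and source tree path are illustrative assumptions:

```python
import pathlib
import re

# Identifier list is an assumption; real evaluations use the scheme's
# crypto requirements (e.g. SOG-IS) and dedicated analysis tools.
DEPRECATED = re.compile(r"\b(MD5|SHA-?1|DES|RC4)\b", re.IGNORECASE)

def scan(source_root: str):
    """Yield (file, line number, line) for hits on deprecated identifiers."""
    for path in pathlib.Path(source_root).rglob("*.[ch]"):
        for lineno, line in enumerate(
                path.read_text(errors="ignore").splitlines(), start=1):
            if DEPRECATED.search(line):
                yield path, lineno, line.strip()

for hit in scan("./toe-src"):  # hypothetical implementation representation
    print(*hit)
```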
The evaluators shall augment the analysis of the implementation representation by appropriate
conformance tests as specified in 6.7.
The evaluators shall record the testing strategy and the results in the ETR. If certain algorithms, interfaces
or cryptographic functions have not been analysed during sampling, the evaluators shall provide a
rationale for this.
If recommendations on the use of a cryptographic service were issued during the previous analysis, the
evaluator shall check that these recommendations are clearly indicated in the product usage and/or
administration guides.
Annex A
(informative)
A.1 General
The exact contents and structure of a FIT ST should be refined by the scheme. This annex summarizes a
typical outline of a FIT ST and can be used as a starting point for schemes.
When developing the requirements for the scheme, care should be taken to impose the least effort
possible on the developer. A typical FIT ST should be “easy” for developers to prepare. It should remain
flexible but can also be adapted to the domain of the scheme where necessary.
A FIT ST targeting the CSA assurance level “basic” may contain less content, e.g. only the identification
and the list of security functions.
b) Product identification
c) Reference / Acronyms
2) Product description
a) General description
b) Features
c) Product usage
d) Operating environment
3) Security perimeter
a) Users
b) Assumptions
c) Assets
d) Threats
e) Security functions
f) Rationale
4) Limits of evaluation
b) Physical or cyber security provided by the environment where the product will be deployed
(“Assumptions”).
d) Potential impact (for example, loss of life, injury, loss of production, etc.) (threat agents and threats).
e) Technical capability to mitigate the identified threats. (This shall cover all relevant interfaces
of the product, local and remote.)
f) A rationale explaining how all of this is consistent, e.g. why a certain technical capability (security
function) in this environment addresses the described threats. This could be in the form of one or
more matrices (threat matrix), as illustrated by the sketch below.
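As a purely illustrative sketch of such a consistency check (all names invented), the threat matrix can be treated as data and checked mechanically:

```python
# Hypothetical threat matrix: which security function covers which threat.
threat_matrix = {
    "T.Eavesdrop": ["SF.TLS_Channel"],
    "T.Tamper": ["SF.SecureBoot", "SF.IntegrityCheck"],
    "T.BruteForce": ["SF.Lockout", "SF.PasswordPolicy"],
}

# Consistency check for the rationale: no threat is left uncovered.
uncovered = [t for t, sfs in threat_matrix.items() if not sfs]
assert not uncovered, f"threats without a security function: {uncovered}"
```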
Section 4 should describe what the limits of the evaluation are, e.g. functionality explicitly out of scope
or threats which are typically addressed but are of no relevance in this environment.
Annex B
(normative)
B.1 General
This annex describes the basic concepts of FIT Protection Profiles. For the evaluation of FIT Protection
Profiles, please refer to 6.2.
Customers benefit from FIT PPs as they can be sure their requirements (written in the PPs) are fulfilled
if a certified product complies with this FIT PP.
Since the FIT PPs are usually certified independently before the product is evaluated, the effort for
evaluating the FIT ST is reduced, because many of the necessary evaluation steps have already been
performed for the FIT PP.
While many parts of the FIT PP describe certain (fixed) parameters, some options are usually wanted.
EXAMPLE A FIT PP for a firewall might mandate certain protocols but also provide the option to include
additional protocols.
Therefore, a FIT PP might provide certain “operations”, i.e. a set of actions to be performed by the FIT ST
author. Usually this means that the FIT ST author can provide additional values or select some items
from a list.
The exact contents and structure of a FIT Protection Profile (FIT PP) should be refined by the scheme,
based on the scheme definitions for FIT STs (see Annex A).
Annex C
(informative)
Acceptance Criteria
C.1 Introduction
The objective of Acceptance Criteria is to help evaluators to specify test cases. Acceptance Criteria are an
implementation-independent definition of test case “expected results” criteria.
Acceptance Criteria are not security requirements. This annex identifies security requirements on a more
detailed level called security requirement attributes. For each of these attributes, a list of general
acceptance criteria is given.
The Acceptance Criteria listed in the following subclauses are organized by security requirement classes.
Each class is presented using the following structure: the first column lists the security requirement
attribute, which is always the link to (e.g. vertical or domain-specific) security requirements. The second
column lists the related acceptance criteria that are recommended to be considered when designing test
cases.
For each category, some examples from the IT or Industry domain are given. The examples are presented
in a very brief format. In all examples, the statement “complies with the Acceptance Criteria” means that
some test cases were designed that use the Acceptance Criteria as part of the expected result, and these
test cases have to pass successfully (and therefore meet the Acceptance Criteria).
For the selection of the categories of Acceptance Criteria, two security requirements standards of high
interest were selected. One standard focuses on industrial IT (IEC 62443-4-2:2019 [9]) and the second
on consumer IT (ETSI EN 303 645, Version 2.1.1 [7]). These standards represent a wide range of
requirements from different application domains.
IT Domain
Web-based authentication that is implemented using a TLS secure channel (configured and operated
according to the state of the art) fulfils the acceptance criterion that the authentication mechanism is
capable of preventing attacks like man-in-the-middle or spoofing.
Web-based applications that are available from public networks often require advanced protection methods for
authentication. In this case, a second factor is often implemented. A second factor that is implemented based on the
OATH-HOTP protocol complies with the acceptance criteria to use standard methods.
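For illustration, a minimal Python sketch of the HOTP computation standardized in RFC 4226 (the "standard method" referenced above), validated against the first test vector from the RFC's Appendix D:

```python
import hashlib
import hmac
import struct

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation
    code = int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Test vector from RFC 4226, Appendix D.
assert hotp(b"12345678901234567890", 0) == "755224"
```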
EXAMPLE
IT Domain
For standard PCs, the secure boot functionality is able to authorize the start of the operating system. The signatures
of all boot-critical drivers are verified by some process before loading these drivers and the verification complies
with the acceptance criteria.
Industry Domain
In an industrial environment, the secure boot mechanism ensures that only authenticated (genuine) software is
executed. The verification of the firmware, which prevents an attacked device from booting, complies with the
given acceptance criteria.
C.4 Cryptography
Table C.3 shows Security Requirement Attributes and Evaluation Acceptance Criteria for Cryptography.
Table C.3 — Cryptography
IT Domain
A device that supports encryption often has to generate RSA public-private key pairs for different purposes.
During RSA key generation, the algorithms used are recommended to prevent generating keys with known
weaknesses. A device that implements the following constraints complies (for this aspect) with the acceptance
criteria: for the RSA primitives with n = pq, it is recommended that log2(n) > 3000 and log2(e) > 16 (cf. SOG-IS
Agreed Cryptographic Mechanisms, v. 1.2, January 2020). SOG-IS Agreed Cryptographic Mechanisms, v. 1.2,
recommends RSA key pairs with additional requirements, see Section 7.3, in particular Note 53-RSAKeyGen and
Note 54-SmallD. Moreover, an agreed RSA key generation method (see B.3) using an agreed prime generation
method (see B.1 and B.2) and an agreed primality test (Section 7.3) is recommended to be used.
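A hedged sketch (Python, assuming the "cryptography" package) of checking only the quoted size constraints, not the full set of Section 7.3 requirements:

```python
import math

from cryptography.hazmat.primitives.asymmetric import rsa

# A 3072-bit modulus gives log2(n) > 3000; e = 65537 gives log2(e) > 16.
key = rsa.generate_private_key(public_exponent=65537, key_size=3072)
pub = key.public_key().public_numbers()

assert pub.n.bit_length() > 3000, "modulus too small: log2(n) <= 3000"
assert math.log2(pub.e) > 16, "public exponent too small: log2(e) <= 16"
```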
known secure state
— known secure state can be gained from the developer's documentation
— reach a known secure state after disruption or failure

secure values
— system parameters (either default or configurable) are set to secure values
— security-related configuration settings are re-established

backup recovery
— security-critical patches are reinstalled
— components are reinstalled and configured with established settings
— recovery uses a backup selected explicitly by an authorized person, or the recovery uses an internal
authentic backup source

documentation and procedures
— system documentation and operating procedures are available
EXAMPLE
IT Domain
Secure State After Failure mechanisms are implemented in network firewalls. After an incident is identified, the
firewall might run into a default state often called “deny/deny”. This behaviour complies with the acceptance
criteria.
A DBMS (Database Management System) is required to properly handle transactions in the event of a system
failure. DBMS failures must not leave transactions in an inconsistent state. This behaviour complies with the
acceptance criteria.
Industry Domain
In the industrial domain, the safe state after failure depends on the context where the product is going to be used.
In a critical process, it is vital that the product returns to a state where it performs all the critical functions. An
example can be a product responsible for monitoring an industrial process and, in particular, for helping to protect
the infrastructure against the risk of explosion. In the event of an error, the product is to return at least to a state
where it continues to perform its primary function. This behaviour complies with the acceptance criteria.
NOTE The industry domain example does not focus on the system or process perspective. It addresses only the
component level.
IT Domain
For a Linux system, typical hardening is done according to a security policy that defines, among other things,
checking open ports and blocking them if unused, or assigning strict access rights to files and folders. Such an
implementation complies with the security-by-configuration acceptance criteria.
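A toy Python sketch in the spirit of such a policy check, flagging listening TCP ports that are not on an assumed allow-list:

```python
import socket

ALLOWED = {22, 443}  # illustrative allow-list from the security policy

# Probe the local loopback interface for listening TCP ports; any open port
# not on the allow-list is flagged for review (or blocking).
for port in range(1, 1024):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.05)
        if s.connect_ex(("127.0.0.1", port)) == 0 and port not in ALLOWED:
            print("unexpected open port:", port)
```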
For a DBMS, unused database components that are integrated into the DBMS and cannot be uninstalled but can be
disabled (e.g. using the DISABLE TRIGGER statement provided by SQL Server) comply with the function
deactivation acceptance criteria.
Industry Domain
In the case of industrial equipment, the security of the infrastructure depends on the context in which this
equipment is used. In many cases, the maintenance of critical infrastructure, or a factory, involves many different
people who may belong to several subcontractors. They may have to intervene on the equipment for different
reasons: change firmware in case of a patch, change the configuration, check the security logs, etc. It is often essential
that the product is able, according to a previously defined security policy, to limit its access or functionality.
Especially the deactivation capabilities comply with the function deactivation Acceptance Criteria.
IT Domain
For a device, a secure communication channel to the vendor to check for updates at defined or configured intervals
complies with the Acceptance Criteria.
Concurrent update mechanisms for the operating system and database server (e.g. to provide a patch to the
database environment by using database templates) comply with the Acceptance Criteria “independent, but
required update mechanisms are checked”.
Industry Domain
In the case of industrial equipment, a new firmware or software version can be proposed in the event of a change
in the functionalities of a product or in the case where a vulnerability in an existing firmware or software has been
fixed. It is critical to ensure that the firmware or software installed in the product is a genuine version supplied by
the vendor and that the integrity of this version is checked before the product restarts and activates the update. In
such a case, a mechanism like a certificate-based verification mechanism for new firmware or software before
installation complies with the Acceptance Criteria.
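As a simplified illustration of the verify-before-activate step (Python, assuming the "cryptography" package; a raw Ed25519 key stands in for the certificate-based mechanism the text describes):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

vendor_key = Ed25519PrivateKey.generate()  # stands in for the vendor's signing key
firmware = b"firmware image bytes"         # stands in for the downloaded image
signature = vendor_key.sign(firmware)      # created on the vendor side

try:
    vendor_key.public_key().verify(signature, firmware)
    # Only after successful verification may the update be activated and
    # the product restarted.
except InvalidSignature:
    raise SystemExit("update rejected: not a genuine vendor image")
```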
Annex D
(informative)
D.1 General
D.1.1 Introduction
This annex provides guidance to scheme developers on how to use this document while developing a
scheme.
While performing the following steps, scheme developers should bear in mind that depending on the level
of assurance required for evaluated products, the level of cooperation between evaluators and
developers/manufacturers can be different, i.e. at higher levels of assurance a closer cooperation
between these actors can be expected.
D.1.2 Perform a risk assessment, reviewing the vertical domain under consideration
In case the scheme is to be implemented in a vertical domain, the developer of such a scheme needs to
perform a domain risk assessment. Such a risk assessment forms the foundation of the scheme and is the
base for the scheme development.
The scheme needs to set the parameters as described in Annex E according to the risk assessment.
NOTE ISO/IEC 31000 provides guidance for performing risk assessments.
D.1.3 Assign the attack potential to the CSA assurance levels
The scheme developer shall assign the attack potential (cf. 5.4) to the CSA assurance level.
Based on the definitions in the CSA for the three assurance levels and the definitions for the attack
potential, the scheme developer needs to assign the attack potential to the assurance level(s) which are
considered.
D.1.4 Select the evaluation tasks required for this CSA assurance level
For each CSA assurance level the scheme developer shall select those evaluation tasks required for this
level; these are noted as “shall” and marked grey in Table D.2.
This is the minimum set necessary to fulfil the requirements of this document and the CSA.
D.1.5 Review and set the parameters for the tasks
For each task chosen, the scheme developers shall review the parameters for this task and set them
suitably based on the risk assessment and the determined attack potential.
For each task chosen the parameters need to be set, taking into account the risk assessment. The
parameters for each task are listed as summary in Annex E.
NOTE For some evaluation tasks this includes the decision if sampling is used and possibly providing additional
requirements or guidance for sampling, taking into account the expertise of the evaluators.
D.1.6 Review the selected evaluation tasks and select additional tasks
For each CSA assurance level, the scheme developer shall review whether those evaluation tasks are
sufficient for the scheme based on the determined attack potential. If not, the scheme developer shall
select additional evaluation tasks (e.g. from the same CSA assurance level), tasks from a higher CSA
assurance level or additional tasks not defined in this methodology. This might replace tasks already
chosen.
EXAMPLE 1 Typically the development process related tasks are added, which augment the evaluation tasks
related to the product.
EXAMPLE 2 The CSA does not mandate testing tasks for CSA assurance level “basic”. To ensure that the TOE is
at least configurable as stated, the scheme should augment “Evaluation of TOE installation”, i.e. 6.6.
D.1.7 Review and set the parameters for the additional tasks
For each new or updated task chosen the scheme developers shall review the parameters for this task
and set them suitably based on the risk assessment and the attack potential.
For each additional or updated task chosen the parameters need to be set, taking into account the risk
assessment. The parameters for each task are listed as summary in E.3.
NOTE For some evaluation tasks this includes the decision if sampling is used and possibly providing additional
requirements or guidance for sampling, taking into account the expertise of the evaluators.
Depending on the set of evaluation tasks and the parameters chosen, the scheme may require further
documents, e.g. a reference set of vulnerabilities or best practices. These requirements and/or guidelines
need to be continuously maintained.
EXAMPLE
The following Table D.1 is used to determine the workload for certain domains within the French CSPN scheme
[13]:
Table D.1 — CSPN workload example

Task: SW functions of clients and card readers
Workload (person days): 20
Notes: Evaluation tasks shall include the analysis of communications:
— between clients and card readers;
— within the system (between server and clients);
— on any other functional interface.

Task: SW functions of the server
Workload (person days): 10
Notes: Evaluation tasks shall include the analysis of communications:
— between the server and its resources (PKI, database or directory, and so on), whether they are part of
the TOE or not;
— between the TOE and its environment: enterprise networks, internet, and so on;
— on any other functional interface.

Task: HW security
Workload (person days): 5
Notes: Evaluation tasks shall include:
— evaluation of the robustness of the HW interfaces of the clients and card readers;
— documentary analysis of the card readers (certification reports, guidance).

Task: Cryptography
Workload (person days): 5 / 10
Notes: 10 person days if implemented by the product or an open source library; 5 person days if
implemented by a closed source library, in which case the analysis will only be focused on how the
product uses this library and the certification report will mention that the cryptographic mechanisms
themselves are not evaluated.
D.2 Example
To illustrate the integration of the methodology into a scheme, a small “toy” example is presented below.
The chosen values are arbitrary and are used for illustration purposes only. Also, a fully developed
scheme will contain much more information; here only the relevant excerpt is presented.
Consider a scheme for smart screws. It supports self-assessment (intended for smart screws for home
use), CSA assurance level “basic” (for smart screws in industrial environments), “substantial” (for smart
screws in industrial environments with safety impact) and “high” (for smart screws in critical
infrastructures).
NOTE CSA assurance level basic applies to self-assessment as well. In this example, no differentiation between
self-assessment and third-party assessment for level basic is made.
The scheme developer performed a risk assessment and derived the attack potential “basic” for the CSA
assurance levels “basic” and “substantial”, and the attack potential “enhanced basic” for the CSA assurance
level “high”, using Table F.2 given in the example in Annex F.
For CSA assurance level “basic” and “substantial” the scheme specific checklist on documentation
contains only a technical data sheet, which shall mention if the smart screw can report its chirality (an
optional feature in smart screws). For CSA assurance level “high” the scheme mandates an additional
document from the developer, namely the “architectural overview” (its specification is not reproduced in
this toy example). Additionally, the acceptance criteria are refined by the scheme (not reproduced in this
example).
The scheme maintains a document on “best practices” for smart screws, available both to developers and
evaluators. The scheme also maintains a list of potential vulnerabilities (together with industry), named
the “screw threats openly published” (STOP).
The scheme developer derives the fixed evaluation time for each CSA assurance level and the sampling
used.
Based on this, the scheme developer derived the following evaluation tasks given in Table D.2:
Table D.2 — Chosen evaluation tasks per CSA assurance level (toy example)

CSA assurance level “basic”:
— Completeness check
— Review of security functionality
— Development documentation: the checklist contains only the technical data sheet
— Evaluation of the TOE installation

CSA assurance level “substantial”:
— Completeness check
— FIT Security Target evaluation
— Development documentation: the checklist contains only the technical data sheet
— Evaluation of the TOE installation
— Conformance testing: the evaluators shall use sampling of n days per smart functionality (and
additional m days if chirality support is included)
— Vulnerability testing: the evaluators shall use the STOP list as pre-defined source and use sampling of
x days per smart functionality (and additional y days if chirality support is included)
— Basic crypto analysis: the SOG-IS crypto catalogue is mandated

CSA assurance level “high”:
— Completeness check
— FIT Security Target evaluation
— Development documentation: the checklist contains the technical data sheet and the architectural
overview
— Evaluation of the TOE installation
— Conformance testing: the evaluators shall completely test the conformance (full coverage) using the
scheme-defined Acceptance Criteria
— Penetration testing: the evaluators shall use sampling of u days per smart functionality (and
additional v days if chirality support is included); vulnerability research beyond STOP is required
— Extended crypto analysis: the SOG-IS crypto catalogue is mandated; the vulnerability analysis shall
be performed within z days
The scheme developer decides to mandate the structure and content of STs as given in Annex A.
Finally, the scheme allows for FIT Protection Profiles but only at the CSA assurance level “high”. For this,
some additional guidance (not reproduced here) is contained in the scheme.
Evaluator competence:
In addition to the requirements of this document, the evaluators shall hold a bachelor's degree in smart
screw design or have comparable work experience of at least 3 years. They also need to demonstrate
their knowledge by at least one project in smart screw security (where their part is at least 5 full working
days) if they want to perform evaluations for the CSA assurance level “substantial” or “high”.
Annex E
(informative)
E.1 General
This annex summarizes the parameters for the individual evaluation tasks. These parameters need to be
set when using this document to develop a scheme. See Clause 4 for details.
The scheme needs to devise the required additional documents (if any), their content and the number of
TOE samples required for evaluation.
EXAMPLE For some CSA assurance levels the scheme might require an additional document “Architectural
overview”.
The scheme needs to decide if the concept of FIT PPs is used in the scheme, possibly limited to some
assurance levels according to the CSA. If the scheme decides to do so, then:
The scheme needs to review if the requirements for FIT PPs as given in Annex B are sufficient and, if not,
amend or refine those requirements.
E.3.3 Parameters for 6.3 “Review of security functionalities”
The scheme needs to review if the requirements for FIT STs as given in Annex A are sufficient and if not
amend or refine those requirements.
E.3.5 Parameters for 6.5 “Development documentation”
The scheme needs to define how it handles different languages for the SUG, i.e. which languages are
acceptable.
The scheme needs to define which resources it maintains for evaluators to use, e.g. sectoral minimum
requirements for TOEs.
The scheme needs to define if it requires full coverage of the entire functionality or if sampling is used. In
the latter case the scheme may provide requirements or guidance for the sampling strategy.
The scheme needs to decide if Acceptance Criteria are to be used and needs to maintain them.
E.3.8 Parameters for 6.8 “Vulnerability review”
The scheme needs to decide if “Vulnerability review” is sufficient or “Vulnerability testing” is required.
The scheme needs to define how the search should be performed, especially if the sources are mandated
(or a minimum set of sources for vulnerabilities is mandated).
The scheme needs to decide if the developer is required to provide artefacts from this development
process (cf. Work Unit 2).
E.3.9 Parameters for 6.9 “Vulnerability testing”
The scheme needs to define how the search should be performed, especially if the sources are mandated
(or a minimum set of sources for vulnerabilities is mandated).
The scheme needs to decide if risk-based sampling is used. The scheme may provide requirements or
guidance for the sampling strategy.
The scheme needs to decide if the developer is required to provide artefacts from this development
process (cf. Work Unit 5).
E.3.10 Parameters for 6.10 “Penetration testing”
The scheme may provide additional requirements or guidance for the flaw hypothesis methodology.
E.3.11 Parameters for 6.11 “Basic crypto analysis”
The scheme needs to provide guidelines and requirements for cryptography, e.g. from SOG-IS.
E.3.12 Parameters for 6.12 “Extended crypto analysis”
The scheme needs to define the accepted state-of-the-art cryptography. See 6.12.1 for examples.
Annex F
(normative)
F.1 General
Before calculating the Attack Potential, the evaluator shall verify if the attack under consideration is
possible in the intended environment.
EXAMPLE 1 The TOE is operated in a physically trusted environment (according to the FIT ST or the FIT PP).
Then all attacks requiring physical access to the TOE are by default not possible and no attack calculation is
necessary for these scenarios.
EXAMPLE 2 The same situation as in EXAMPLE 1, but the TOE has a hard coded default password (used in all
instances of the TOE). In this case the threat agent needs only one TOE to obtain this default password and then the
threat agent is able to attack any other TOE. In this scenario a calculation of the Attack Potential is necessary (for
example to factor the cost of buying one instance of the TOE into the attack scenario).
The Attack Potential is calculated from the following factors:
a) Elapsed time;
b) Expertise;
c) Knowledge of the TOE;
d) Window of opportunity;
e) Equipment.
For the vulnerability under consideration the evaluator shall assign for each factor listed in F.2 the
appropriate values taken from Table F.1, considering the operational environment as given in the FIT ST
or FIT PP.
NOTE This can be a theoretical exercise or can be supported by penetration testing, see 6.10.
Table F.1 — Rating table (historical default CEM values)

Factor: Elapsed Time
<= 1 day: 0
<= 1 week: 1
<= 2 weeks: 2
<= 1 month: 4
<= 2 months: 7
<= 3 months: 10
<= 4 months: 13
<= 5 months: 15
<= 6 months: 17
> 6 months: 19

Factor: Expertise
Layman: 0
Proficient: 3
Expert: 6
Multiple Expert: 8

Factor: Knowledge of TOE
Public: 0
Restricted: 3
Sensitive: 7
Critical: 11

Factor: Window of Opportunity
Unnecessary / Unlimited: 0
Easy: 1
Moderate: 4
Difficult: 10

Factor: Equipment
Standard: 0
Specialized: 4
Bespoke: 7
Multiple bespoke: 9
EXAMPLE To exploit a certain vulnerability, a layman (0) would require more than one month (7) with
restricted knowledge (3) and specialized equipment (4) with an easy window of opportunity (1). The sum is 15.
The same vulnerability could be exploited by an expert (6) in one week (1) with the same restricted knowledge (3)
but standard equipment (0) with an easy window of opportunity (1). Here the sum is 11. The final Attack Potential
is the lowest sum over all identified attack scenarios, therefore 11.
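The calculation can be illustrated with a short Python sketch; the scenario names and factor ordering are illustrative:

```python
# Factor values for each identified attack scenario, taken from the rating
# table; the final Attack Potential is the minimum over all scenario sums
# (the easiest identified attack).
scenarios = {
    "layman, > 1 month, specialized equipment": [0, 7, 3, 1, 4],  # sum 15
    "expert, 1 week, standard equipment":       [6, 1, 3, 1, 0],  # sum 11
}

sums = {name: sum(values) for name, values in scenarios.items()}
print(sums, "-> final Attack Potential:", min(sums.values()))  # 11
```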
The “default” values in the table shown above are intended to be replaced or refined depending on the
context (technology, type of product, etc.). More generally, the rating table has added value as a shared
vocabulary between experts when discussing an attack scenario, while the values themselves are a
parameter that can be adapted by the scheme developers. The definition of a set of values shared in a
given scheme is a non-trivial achievement and an unavoidable step towards any meaningful recognition
between schemes.
The values from this historical default CEM table are well-suited to smartcards and, more generally,
security hardware. However, schemes have adapted this table to use different values when assessing e.g.
pure software products.
As an example, CSPN [13] uses three different tables – Table F.2 shows the table used by default.
Table F.2 — Default CSPN rating table

Factor: Time taken for the exploitation
<= 1 day: 0
<= 1 week: 1
<= 2 weeks: 2
<= 1 month: 4
<= 2 months: 7
<= 3 months: 10
<= 4 months: 13
<= 5 months: 15
<= 6 months: 17
> 6 months: 19

Factor: Attacker expertise
Layman: 0
Competent: 3
Expert: 6
Multiple experts: 8

Factor: Knowledge required by the attacker
None a: 0
Restricted information: 3
Sensitive information: 7
Critical information: 11

Factor: Window of opportunity
Not necessary/unlimited: 0
Easy: 1
None: *b

Factor: Type of equipment needed c
None/standard: 0
Specialized software: 2
If the attack requires physical intervention and the use of hardware, the whole attack shall be scored
using the [JIL_HW] or [JIL_HWD] scoring table (the evaluators will choose whichever seems most
appropriate for the evaluated product).
The note 18 referred to in this section states that no commercial SW tool can be considered higher than
“specialised”. If the threat agent needs to build dedicated and complex software itself, this additional
effort will be taken into account as Expertise rather than Type of equipment.
Annex G
(normative)
G.1 General
This annex describes how the results of the evaluation shall be reported.
information (this might include contact with other evaluators or access to detailed logs made during the
evaluation) to answer questions arising during the oral defence.
The representatives shall record all open and unresolved issues and summarize them at the end of the
oral defence. These open and unresolved issues typically need to be answered by (very limited)
additional evaluation.
The evaluators shall perform additional evaluation (if necessary) based on the open and unresolved
issues from the oral defence and update the ETR accordingly.
The evaluators shall provide the ETR to the certifying function of the scheme by means mandated by the
scheme.
Bibliography
[1] Regulation (EU) 2019/881 of the European Parliament and of the Council of 17 April 2019 on
ENISA (the European Union Agency for Cybersecurity) and on information and communications
technology cybersecurity certification and repealing Regulation (EU) No 526/2013 (the
Cybersecurity Act, CSA), available from https://eur-lex.europa.eu/eli/reg/2019/881/oj?locale=en
[2] EN ISO/IEC 15408, Information technology — Security techniques — Evaluation Criteria for IT
security
[4] EN ISO/IEC 17025, General requirements for the competence of testing and calibration laboratories
[5] EN ISO/IEC 18045, Information technology — Security techniques — Methodology for IT security
evaluation
[6] ISO/IEC 19896-1, IT security techniques — Competence requirements for information security
testers and evaluators — Part 1: Introduction, concepts and general requirements
[7] ETSI EN 303 645, Version 2.1.1, Cyber Security for Consumer Internet of Things
[8] IEC 62443-4-1:2018, Security for industrial automation and control systems - Part 4-1: Secure
product development lifecycle requirements
[9] IEC 62443-4-2:2019, Security for industrial automation and control systems - Part 4-2: Technical
security requirements for IACS components
[10] ANSSI RGS, Référentiel Général de Sécurité (General Security Framework), Version 2.0
[13] CSPN - First Level Security Certification For Information Technology Products