Safety Assurance of Artificial Intelligence-Based Systems
ABSTRACT The objective of this research is to present the state of the art of the safety assurance of Artificial
Intelligence (AI)-based systems and guidelines on future correlated work. For this purpose, a Systematic
Literature Review comprising 5090 peer-reviewed references relating safety to AI has been carried out, with
focus on a 329-reference subset in which the safety assurance of AI-based systems is directly conveyed.
From 2016 onwards, the safety assurance of AI-based systems has experienced significant effervescence
and leaned towards five main approaches: performing black-box testing, using safety envelopes, designing
fail-safe AI, combining white-box analyses with explainable AI, and establishing a safety assurance process
throughout systems’ lifecycles. Each of these approaches has been discussed in this paper, along with their
features, pros and cons. Finally, guidelines for future research topics have also been presented. They result
from an analysis based on both the cross-fertilization among the reviewed references and the authors’
experience with safety and AI. Spanning 15 research themes, these guidelines reinforce the need to deepen guidance on
the safety assurance of AI-based systems by, e.g., analyzing datasets from a safety perspective,
designing explainable AI, setting and justifying AI hyperparameters, and assuring the safety of hardware-
implemented AI-based systems.
INDEX TERMS Artificial intelligence, formal verification, learning systems, machine learning, neural
networks, product safety engineering, risk analysis, safety.
of transportation and foresees major architecture changes on future generation AI-based systems, which may be fully distributed aboard trains instead of featuring centralized elements as in current Communication-Based Train Control (CBTC) systems [5].

Based on this context, the safety assurance of AI-based systems is deemed of paramount importance to allow the successful deployment of these systems in their respective applications [6]. Safety assurance is herein defined as the set of activities, means, and methods that shall be considered, throughout the lifecycle of a system, to produce results towards building arguments that confidently support that the safety requirements / targets of such a system have been met. This concept extends the definitions of 'safety assurance' coined by McDermid et al. [7] and Habli et al. [8] with the 'safety management' concept of several standards of the safety métier, such as IEC 61508:2010, DO-254:2000, CENELEC EN 50126-1:2017, and CENELEC EN 50129:2018, related to the process of building and maintaining safety arguments throughout the lifecycle of a system [9], [10], [11], [12]. The original definition of 'safety assurance' presented by McDermid et al. is "(...) justified confidence or certainty in a system's capabilities, including its safety", whereas Habli et al. [8] state that "safety assurance is concerned with demonstrating confidence in a system's safety".

The first step towards improving the safety assurance of AI-based systems is establishing the state of the art of scientific and technical advance in the theme and identifying potential gaps for future research. Even though reviews with similar motivation have been identified, they are not deemed able to fully characterize such a theme due to the following reasons, further detailed and justified within this paper's section II:
a) Given the significant effervescence of research on means, methods and tools to support the design, verification, and validation of safety-critical AI from 2018 onwards, there is an increasing need to keep track of these updates and report them in a didactic yet detailed way to the research community. Hence, even rather recent literature reviews, published in the last couple of years, are potentially unable to fully capture the current landscape towards assuring the safety of AI-based systems;
b) There are literature reviews focused on exploring the safety assurance of AI-based systems in specific application domains only (e.g., autonomous vehicles). As a result, it is deemed that they are potentially restrained in reporting relevant general-purpose research for safety-critical AI-based systems as a whole;
c) Guidance for future research on the safety assurance of AI-based systems has not been detected in some literature reviews. As a result, one might not be able to easily identify themes that might still be of interest for the research community;
d) Finally, there are reviews which lack proper methodological systematization or whose authors themselves suggest that additional investigation of the literature shall be carried out in the future.

Hence, this research has been conceived to fill the aforementioned gaps of preexisting literature reviews on the theme. The objectives of this paper are thus to (i.) present the state of the art of the safety assurance of AI-based systems, including methods and techniques to do so, and (ii.) identify the main challenges needing further research – notably towards a general-purpose, application-independent method with guidelines for the safety assurance of AI-based systems. For that purpose, a Systematic Literature Review (SLR) of peer-reviewed material formally published up to August 26th, 2022 has been carried out along with critical cross-fertilization among the reviewed research to draw more extensive conclusions on both objectives.

The remainder of the paper is structured in six sections. Section II aims to justify the contribution of this research, notably comparing and contrasting it with other SLR-based papers. The SLR itself is covered in sections III to VI. The SLR method is presented in section III, whereas the SLR results themselves are split into three parts: bibliometrics analyses are discussed in section IV, the state of the art on the safety assurance of AI-based systems is presented in section V, and the guidelines for future work in the area are covered in section VI. Finally, section VII closes the paper with the conclusions of the research.

II. CONTRIBUTION JUSTIFICATION: DIFFERENCES FROM OTHER LITERATURE REVIEWS
During this research, the literature review-oriented papers by Ballingall et al. [13], Chia et al. [14], Dey and Lee [15], Kabir [16], Nascimento et al. [17], Rajabli et al. [18], Rawson and Brito [19], Siedel et al. [20], Tahir and Alexander [21], Tambon et al. [22], Wang and Chapman [23], Wang and Chung [24], Wen et al. [25], Zhang and Li [26], and Zhang et al. [27] have been identified as somehow relating safety to AI-based systems. The objective of this section is to justify that the present SLR either differs from these or goes beyond their scope, hence supporting the contribution of the present research as a broader and deeper literature review on the safety assurance of AI-based systems.

The approach employed by Chia et al. [14], Nascimento et al. [17], Rajabli et al. [18], and Tahir and Alexander [21] focuses on safety analysis findings specifically related to autonomous ground vehicles (i.e., autonomous cars). Similarly, Rawson and Brito [19] have concentrated their research on reviewing the application of AI in maritime applications, notably on automating the prediction and the assessment of risks and accidents involving automated ships. The approach used in the present SLR not only includes both of the aforementioned applications, but it also goes beyond them in exploring research not only related to other application domains, but also unrelated to specific target applications (i.e., which concentrates on general-purpose technical aspects of AI and safety).
The review presented by Kabir [16] focuses on how to employ Fault Tree Analysis (FTA) and its extensions on Model-Based Dependability Analysis, hence not fully addressing the problem of the safety assurance of AI-based systems. In the present SLR, several safety analysis techniques in addition to FTA are also covered; moreover, the scope of the present SLR is broader, as approaches and gaps on the safety assurance of AI-based systems are also discussed.

The SLR performed by Zhang and Li [26] has the objective of analyzing methods and approaches for the testing and the verification of AI-based systems and identifying challenges and gaps for future studies on the area. Since testing and verification are means to build safety arguments to ensure that a safety-critical system is safe, this work is deemed relevant and somewhat overlapping with this SLR in that regard. On the other hand, the SLR of Zhang and Li [26] has two limitations which have been overcome in the present SLR. Firstly, Zhang and Li [26] have restricted their analyses to neural networks only; secondly, the reviewed publications are within the timespan 2011-2018. In the herein reported SLR, the scope of AI approaches, techniques and algorithms has been significantly broadened – hence not restrained to neural networks – and, ultimately, research published up to August 26th, 2022 has been considered. Based on the effervescence of research on AI-based safety-critical systems identified and discussed in this paper, significant advancements have occurred since 2018.

The SLR performed by Tambon et al. [22] shares similarity with the present research with regard to the research objectives themselves – namely, (i.) providing a landscape on means and methods for assuring that AI systems are sufficiently safe for their certification and (ii.) suggesting future work yet to be explored in the area. Despite these similarities, there are methodological and scope differences which support the relevance of the herein reported SLR in contributing to the safety assurance of AI-based systems.

Firstly, Tambon et al. [22] have focused their review efforts specifically on software-implemented machine learning, whereas other types and implementations of AI are also explored in the present SLR (for instance, knowledge-based systems and hardware-implemented AI). Secondly, the controlled search vocabulary utilized by Tambon et al. [22] is more restrictive than the one considered in the present research with regard not only to expressions for the safety and AI métiers, but also to potential application domains (only transportation in Tambon et al. [22], and unrestricted in the present SLR).

Finally, Tambon et al. [22] have constrained the themes of interest for future work to six major topics, therein referred to as 'robustness', 'uncertainty and out-of-distribution', 'explainability', 'formal and non-formal verification', 'safety considerations in reinforcement learning', and 'direct certification'. In the present SLR, these themes have been covered (albeit with potentially different names) along with others, making up a total of 15 major research areas for future work towards the safety assurance of AI-based systems.

The literature reviews presented by Ballingall et al. [13], Dey and Lee [15], Siedel et al. [20], Wang and Chapman [23], Wang and Chung [24], Wen et al. [25], and Zhang et al. [27] all share two main similarities with the herein presented SLR: (i.) the lack of scope limitation to specific applications and (ii.) the objective of presenting an overview of safety analysis techniques for intelligent systems. However, all of them differ from the present one in aspects that make the latter broader and/or more accurate in depicting the state of the art and the gaps on the safety assurance of AI-based systems. This is justified for each of the seven aforementioned reviews in the following paragraphs.

The literature review by Ballingall et al. [13] was not carried out systematically, and the authors not only justify studying the safety assurance of AI-based systems based on a single application (automated driving systems), but they also conclude that future investigation on the matter is still needed. Based on these remarks, the aim of the present research is to fill the gap of the review by Ballingall et al. [13] by means of a systematic and reproducible literature review spanning a broader search range.

The SLR performed by Dey and Lee [15] comprises three limitations, namely (i.) potentially non-peer-reviewed papers (e.g., available on arXiv), (ii.) conflicting information regarding the timespan of the considered papers (2005-2020 and 2015-2020 intervals are mentioned by the authors) and (iii.) brief discussion on future work, which are deemed better addressed in the present SLR. This is justified by four arguments: (i.) covering a wider range of official peer-reviewed reference databases (e.g., Engineering Village and Web of Science), (ii.) disregarding information from research which has not been formally published yet (e.g., arXiv-sourced papers have not been considered in this SLR), (iii.) defining a clear and wide timespan for the considered publications (all papers published until August 26th, 2022 with no starting date limit) and (iv.) dedicating a full paper section to the discussion of future work stemming from the gaps identified in current research on the safety assurance of AI-based systems.

Siedel et al. [20], in turn, have four main limitations. Firstly, the controlled vocabulary used in searches comprises limited expressions from the safety and AI areas and also includes marginally-related topics, such as reliability. Secondly, Scopus was the only search engine considered by the authors. Thirdly, the keywords from the search vocabulary were checked only in the titles of publications. Finally, the authors have not explored how the safety assurance of AI-based systems has evolved with time, nor captured technical trends of the area for future work. In the present SLR, an in-depth search language enriched in both breadth and depth of expressions related to safety and AI has been crafted. Moreover, the search domain has been expanded to four search engines other than Scopus and, in addition to the publication titles, searches have also been performed within the abstracts and the keywords of the indexed publications. Finally, a detailed landscape of the safety assurance of AI-based systems has also been presented. It includes bibliometrics, trends of the
area throughout its past and present, and guidelines for future work.

The SLR developed by Wang and Chapman [23] is limited to presenting the link between risk analyses and the control of autonomous systems, focusing on reviewing the main variants and algorithms of AI that are used on such applications and how their safety is ensured. In the present SLR, safety-critical AI applications other than the risk analysis of autonomous systems have also been covered, such as the usage of AI within the core control of safety-critical systems. Such an expansion of scope also allowed identifying more means to potentially build safety-critical AI-based systems and ensure their safety in comparison to those observed by Wang and Chapman [23].

Wang and Chung [24] have performed an SLR aiming to describe how AI has been used in safety-critical systems and propose potential future work to further promote such usage. Despite the partial convergence of results obtained by Wang and Chung [24] with the conclusions of the present study, there are noteworthy remarks that justify the relevance of the herein reported SLR. Firstly, the search expressions used by Wang and Chung [24] to characterize the AI and safety métiers are simpler and more restrictive than the ones of this SLR. Secondly, Wang and Chung [24] have added dependability within the scope of the SLR: since dependability comprises concepts other than safety, such as reliability and availability, some results obtained by the authors are not related to safety per se. In order to circumvent this, the present SLR has been conceived with a tighter link to the safety area. Thirdly, as with Dey and Lee [15], non-peer-reviewed papers (e.g., available on arXiv) have been considered by Wang and Chung [24], whereas only publications which have been formally published after acceptance in peer review have been taken into consideration in the present SLR. These limitations also translate into the numbers of retrieved and analyzed publications: while Wang and Chung [24] have assessed 3087 research papers and identified 92 of them as potentially relevant, the herein reported SLR starts with a set of 5090 publications, among which 329 were deemed relevant for the safety assurance of AI-based systems.

The SLR performed by Wen et al. [25] has as its main limitation the fact that the reviewed papers were randomly sampled from a set of papers. The authors themselves claim that a major contribution of their work is to exhaustively analyze publications on the safety assurance of AI-based systems, which is exactly one of the present research's objectives.

Finally, the SLR published by Zhang et al. [27] is the one that most closely resembles the present SLR with regard to identifying relevant future work on the safety assurance of AI-based systems. Despite such similarities, Zhang et al. [27] lags behind the herein documented research on two main aspects: (i.) the criteria employed to collect and review reference studies, which are not clearly stated by the authors, and (ii.) the lack of a detailed description on how the state of the art of AI-based safety-critical systems has evolved up to the present time. In this SLR, a full section has been devoted to presenting the work method, and two sections, to characterizing how the relationship between AI and safety has emerged and progressed up to 2022. Furthermore, the guidelines for future work also include further themes and additional discussions on feasibility which remained unexplored by Zhang et al. [27].

III. SYSTEMATIC LITERATURE REVIEW METHOD
The objective of this section is to present the SLR process which bases the findings and the conclusions of this research. This section has been structured in such a way as to (i.) provide grounds showing that the literature review method is systematic and sound for the research purpose, and (ii.) define and present information and notation from the SLR itself which is utilized in the remainder of the paper.

The SLR was carried out with six main activities in the following order: (A) the definition of the search keywords, (B) the decision on the search engines which were part of the search scope, (C) the collection of search results and duplicate removal, (D) the Title-Abstract-Keywords (TAK) filtering, (E) the definition of the questionnaire for the full-text semantic filtering, and, finally, (F) the full-text semantic filtering.

These activities are illustrated in the workflow of Figure 1 and detailed in the forthcoming subsections. Since the analyses corresponding to steps (C), (D), and (F) are progressively finer filters towards obtaining relevant research papers on the safety assurance of AI-based systems, the SLR process depicted in Figure 1 was shaped as a horizontal funnel-shaped process with three filtering stages, each of which represents one of the aforementioned steps. Moreover, the numeric results depicted in Figure 1 for steps (C), (D) and (F) will be explored in section IV.

Similarly to Nascimento et al. [17], the tasks suggested by Asadollah et al. [28] and Petersen et al. [29] were employed to guide the crafting of keywords considered in step (A), as well as the questionnaire of step (E). Furthermore, all activities of the SLR were led by the first author of this paper (A. V. Silva Neto), and the results were discussed with other researchers in walkthroughs in order to ensure their validity and the adjudication of potential conflicts.

A. DEFINITION OF SEARCH KEYWORDS: SLR SEARCH LANGUAGE
In order to formally define logical expressions to guide the searches of relevant publications on the search engines, a formal regular search language, herein called 'SLR Search Language', was conceived using Wirth's notation. An overview of the structure of the SLR Search Language is presented in Figure 2, in which 'AND', 'OR', and 'NOT' gates are employed to express the relationship among the SLR Search Language expression groups.

The SLR Search Language comprises expressions which shall simultaneously satisfy three criteria: (i.) the presence of terms related to the safety assurance domain ('Safety Assurance Area' on Figure 2), (ii.) the presence of terms
related to AI ('AI Area' on Figure 2), and (iii.) the absence of terms related to applications which are not related to the technical safety domain ('Unrelated Area' on Figure 2). The SLR Search Language was iteratively crafted in preliminary phases of the SLR based on two aspects: (i.) the relevance of expressions for the scope of the research, and (ii.) the retrieval of a quantity of publications feasible for the objectives of this SLR whilst maximizing as much as possible the number of relevant papers on the safety assurance of AI-based systems.

The expressions of the 'Safety Assurance Area' shall include at least one term from the 'Safety Assurance Process Items' subgroup and one term from the 'Safety Assurance Systematization' subgroup, as per Figure 2. The 'Safety Assurance Process Items' subgroup includes a set of verbal and non-verbal expressions (in singular and plural forms whenever applicable) with relevant activities and/or products of the safety assurance process. The safety assurance activities that have been considered are 'safety analysis', 'safety assessment', 'safety assurance', 'safety verification', 'safety validation', 'safety evaluation', 'risk analysis', 'risk assessment', 'risk evaluation', 'hazard analysis', 'hazard assessment', and 'hazard evaluation'. The safety assurance products, in turn, include 'safety case', 'assurance case', and 'assurance pattern'. Even though 'assurance pattern' is not a safety assurance result per se, it was considered part of the SLR Search Language given the increasing efforts on building reusable de facto patterns for assuring the safety of similar AI-based systems [30].

The 'Safety Assurance Systematization' subgroup, in turn, comprises expressions that are related to systematizing a system's safety lifecycle. It includes the words 'method', 'methodology', 'technique', 'approach', 'framework', and their corresponding plurals.

The 'AI Area' group gathers terms related to the artificial intelligence métier. These are divided in six subgroups: 'General AI', 'Machine Learning', 'Adaptive Systems', 'Industry 4.0', 'Data Science', and 'Knowledge-Based Systems' (formerly known as 'expert systems'). The 'OR' gate interconnecting them in Figure 2 indicates that each terminal expression from any group suffices as a search expression for 'AI Area'.
Each subgroup's keywords have been defined to cover the high level definitions of each subgroup (e.g., 'artificial intelligence', 'AI', 'machine learning', 'data mining', etc.). Wherever applicable, two other subsets of expressions have also been included within each subgroup: (i.) the names of relevant AI variants (e.g., 'supervised learning', 'reinforcement learning', 'data clustering'), and (ii.) relevant formalisms and algorithms that implement the corresponding AI variants (e.g., 'neural network').

Finally, the 'Unrelated Areas' group has been iteratively built as a non-exhaustive set of expressions which shall lead to the exclusion of texts containing them regardless of whether expressions from the 'Safety Assurance Area' and 'AI Area' groups are satisfied. The 'Unrelated Areas' group expressions include four main application domains that have been manually identified as unrelated to the safety assurance of AI-based systems at the initial stages of the SLR – namely, pharmaceutical research (drug(s)), natural disaster prediction (flood(s), earthquake(s)), stock market analyses (credit(s), asset(s), insurance(s), portfolio(s), investment(s), stock(s)), and environment, health and safety (EHS) as a whole. The EHS field comprises public health policies (pregnancy, drug(s)), ergonomics, and issues related to occupational risk management of the oil industry, coal mines, dams, and construction sites.

B. CHOICE OF SEARCH ENGINES
With the exception of ScienceDirect and SpringerLink, all of the remaining search engines employed by Nascimento et al. [17] – namely ACM, Engineering Village, Scopus, Web of Science, and Wiley – were considered in this SLR. Pre-prints (e.g., sourced directly from private repositories on arXiv and Zenodo) are pruned at this step for quality concerns.

ScienceDirect was not considered in this SLR because its TAK-related indexed content, which is the starting point for this SLR, is entirely within Scopus [31]. SpringerLink, in turn, was disregarded after it had been assessed that relevant Springer-sourced publications, both periodic (e.g., journals) and non-periodic (e.g., conference proceedings and book chapters), have been successfully captured by means of Engineering Village, Scopus, and Web of Science.

C. COLLECTION OF SEARCH RESULTS AND DUPLICATE REMOVAL
The results of this research comprise references retrieved from TAK searches performed in all search engines from the previous section on August 26th, 2022. The results were exported in either BibTeX or Research Information Systems (RIS) formats and loaded onto the Mendeley and JabRef tools for a semiautomatic duplicate removal (i.e., aided by tools but manually confirmed or rejected case by case).

D. TAK FILTER
The TAK filter was based on manual semantic analysis of the TAK information of each and every reference retrieved from the previous step and allowed classifying the obtained results into six categories (C0 to C5). These categories were defined taking into account a didactic approach to split the obtained results into proper semantic groups related to this research up to some extent.
• C0: Not relevant to the research;
• C1: Research on AI applied in off-line safety assurance;
• C2: Research on AI applied in safety-critical functions, but without evidence of safety assessment per se;
• C3: Research on the safety assurance of AI-based safety-critical functions;
• C4: Contextualization of AI on Industry 4.0 applications;
• C5: Research on AI employed in security-critical functions with potential impact on safety.

It is worth noting that a reference can be classified into more than a single category from C1 to C5 based on its TAK information, since these categories are not mutually exclusive. Since the aim of the research is to study the safety assessment of AI-based safety-critical systems, category C3 is considered the most important for that purpose, whereas the other ones (except for C0) are deemed marginally relevant due to the following reasons:
a) Studies within categories C1 and C2 provide examples of AI-based safety-critical systems and applications which can, thus, benefit from the safety assurance of AI-based systems;
b) References within categories C4 and C5 deal with correlated themes and, as a result, shed some light on the contextualization of this research and may also provide guidance for potential future work.

E. QUESTIONNAIRE FOR THE FULL-TEXT SEMANTIC FILTER
Six questions (Q1 to Q6) were conceived to extract relevant information from the full-text review of references in line with the SLR objectives. The questionnaire has only been applied to category C3 references, since these are directly related to the research theme (i.e., safety assurance of AI-based systems).
• Q1: What is the objective of the research?
• Q2: Which AI techniques were considered in the research?
• Q3: How has the safety assurance of AI been considered within the study?
• Q4: Which results were obtained in the research (including potential shortcomings)?
• Q5: Which future research topics were identified by the authors?
• Q6: What other strengths and weaknesses were identified in the study during its review?

It is worth noting that Q6 resorts to the researchers' knowledge in assessing positive and negative aspects of the reviewed references. This is an important question to fulfill the objective of this research in providing an overview of future work which goes beyond what is directly proposed by the reviewed references' authors.

F. FULL-TEXT SEMANTIC FILTER
In this step, the full text of every reference in category C3 is reviewed based on the questionnaire defined in step E. A brief report of the results obtained for each text is developed, and the results of these reports are compiled based on the objectives of this SLR.

In addition to questions Q1 to Q6, an integer quality metric ranging from 0 to 6 – herein referred to as Q-index – has been crafted in order to rate how well each reference contributes to the objectives of the present research based on its overall quality and the covered topics. The Q-index includes two definitions: a discrete definition for each of the valid integer values within the interval [0; 6] and a categorized definition in three fuzzy groups: low quality, average quality and high quality.

Low Quality References
• Q = 0: Highly restrictive relevance;
• Q = 1: Weak relevance.
Average Quality References
• Q = 2: Fair relevance;
• Q = 3: Sufficient relevance.
High Quality References
• Q = 4: Above average relevance;
• Q = 5: Very good relevance;
• Q = 6: Strong relevance.

IV. SYSTEMATIC LITERATURE REVIEW BIBLIOMETRICS RESULTS
The objective of this section is to present an overview of the bibliometrics extracted from the SLR results. Such an analysis is deemed relevant because it allows characterizing major trends on how research which joins AI and safety assurance has evolved with time. The first analysis, presented in subsection IV-A, is based on how the number of reviewed references of each category C1 to C5 has evolved with time. Since the focus of this SLR is on the safety assurance of AI-based systems and such a theme corresponds to the scope of C3-categorized publications, the bibliometrics analyses of C3 are enriched by correlating them to the Q-index attributed to each C3 publication as part of the SLR method. This analysis is covered in subsection IV-B.

Finally, the concluding remarks of the bibliometrics analysis and the justification of its importance to the remainder of the research are summarized in subsection IV-C.

A. OVERALL BIBLIOMETRICS FOR CATEGORIES C1 TO C5
A total of 5090 references, as shown in Figure 1, was obtained after filtering duplicates from the set of results retrieved on August 26th, 2022 from the search engines listed in subsection III-B when applying the SLR Search Language defined in subsection III-A.

After applying the TAK filter to the 5090 references obtained at the previous step, 4112 of them (80.8%) were included into category C0 and, hence, they were not considered relevant for this research. The remaining 978 (19.2%) were classified as part of at least one of the categories C1 to C5 defined in subsection III-D according to the quantities presented in Figure 1 for each category – namely, 414 references on C1, 487 references on C2, 329 references on C3, 32 references on C4 and 29 references on C5.

There are two reasons why summing the results for each of the C1 to C5 categories yields a result greater than the 978 references which were classified as somehow relevant by means of the TAK filter. The first one, discussed in subsection III-D, is that categories C1 to C5 are not mutually exclusive. The second reason is that, since the TAK filter is somewhat coarse to ensure proper classification of every single reference into its actual categories, some references were also conservatively classified in additional categories whenever the latter ones could not be categorically ruled out. This is especially important for C3, as the analysis of all references within it would mandatorily progress up to their full-text semantic analysis (as mentioned in subsections III-E and III-F).
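As a side illustration of the bookkeeping implied by the two reasons above, the sketch below shows how multi-label category assignments make per-category sums exceed the number of classified references, and how the discrete Q-index maps onto its fuzzy groups. The reference identifiers and label sets are hypothetical.

```python
# Illustrative bookkeeping only: categories C1-C5 are not mutually exclusive,
# so a reference may carry several labels and per-category totals exceed the
# 978 TAK-relevant references (414 + 487 + 329 + 32 + 29 = 1291 label
# assignments in the reported results).
relevant_refs = {  # hypothetical entries; each reference maps to its label set
    "ref-001": {"C1"},
    "ref-002": {"C2", "C3"},   # conservative double classification
    "ref-003": {"C3", "C5"},
}

per_category = {c: sum(c in labels for labels in relevant_refs.values())
                for c in ("C1", "C2", "C3", "C4", "C5")}
assert sum(per_category.values()) >= len(relevant_refs)  # sums may exceed the total

def q_index_group(q: int) -> str:
    """Map the discrete Q-index (0-6) defined in step (F) to its fuzzy group."""
    return "low" if q <= 1 else "average" if q <= 3 else "high"
```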
The graph presented in Figure 3 depicts the evolution of the quantity of publications which have been classified as part of C1, C2, C3, C4 and C5 between 1986 and 2022 (up to August 26th). 1986 corresponds to the year of the very first reference relating AI to safety assurance.

Two main results can be inferred from Figure 3. The first result is that, after initial exploratory research carried out up to the mid-2000s, there have been two waves of significant increase in research correlating safety to AI. The first of them occurred between 2008 and 2014 and has been mostly concentrated on C1 and C2 publications. Hence, one can infer that the relationship between AI and safety on this first research wave has at most dealt with using AI as a tool to support the safety analysis of safety-critical systems, regardless of AI being present in such assessed systems (C1), as well as with initial research on using AI in safety-critical systems without formal coverage on assuring that such safety-critical AI is indeed safe (C2).

The second wave of research in which AI and safety have been jointly addressed is significantly more vigorous than the first one and comprises a trend of steep increase in all categories from 2016 onwards, except for outliers in 2018 and 2021 on C1. This indicates that further efforts on other areas – remarkably assuring that safety-critical AI reaches its safety requirements (C3) – have been increasingly investigated along with those already covered in the first wave.

In addition to the identified growth waves, another result worth identifying and analyzing is the drop in publications of all categories but C1 in 2021. Since such a decrease has occurred for a single year so far, it is deemed that it is still insufficient to characterize a consistent loss of interest in this category. It is also worth noting that C1 has still had significantly more publications in 2021 than in 2019 according to Figure 3, which suggests that, along with the increase of the other categories, further studies on AI and safety assurance are still relevant despite the aforementioned decrease of C1 in 2021. This is particularly true for research directly related to the safety assurance of AI-based systems – which bases all C3-categorized publications. C3 has actually reached a higher share among all categories in 2021 than in 2020 given its steeper increase in 2021 than the other categories with most published research (i.e., C1 and C2).

Finally, with regard to 2022 data, even though direct analyses are not feasible because of the restricted timespan of the preliminary results (up to August 26th), two trends can be identified. From a qualitative standpoint, it is noticeable that the effervescence on research joining AI and safety still persists, as 2022 data up to August 26th are comparable to the whole set of publications of 2019. Moreover, if one assumes the hypothesis that 2022 will follow the same publication rate observed up to August 26th (i.e., after 238 days since the year started), an estimate of the number of publications for 2022 can be obtained by multiplying the current 2022 results, depicted in Figure 3, by 365 days / 238 days = 1.53. By doing so and comparing the 2022 estimates with 2021, one can infer near-stability for categories C2 (97 vs. 99, respectively) and C5 (6 vs. 6, respectively), a 10% decrease on C3 (84 vs. 72, respectively), and steeper reductions on C1 (59 vs. 32) and C4 (11 vs. 2, respectively). This might indicate a lowered interest on C1 and C4, followed by a trend of continued interest on C2, C3, and C5. Since the latter three categories represent relevant themes towards full AI autonomy within safety-critical contexts, whereas the former two categories are closer to general-purpose applications of AI, such a behavior would not be unexpected if effectively confirmed.

B. C3 PUBLICATIONS Q-INDEX ANALYSIS
It is possible to expand on the bibliometrics of the 329 C3-classified publications by cross-analyzing them with the Q-index attributed to each of the C3 references. In order to improve the readability of the graphs used for this purpose, the fuzzy Q-index classification defined in subsection III-F (i.e., low, average and high) is herein adopted.

Among all the 329 C3 references, 115 (35.0%) were classified as low quality, 70 (21.3%) were classified as average quality, and 144 (43.8%) were classified as high quality. The rather significant quantity of low quality papers for C3 stems from the conservative C3 classification criterion explained in subsection III-D. By this criterion, some references were initially classified in C3 together with other categories because their TAK information was not significant enough to rule this classification out. After the full-text review, 73 of the 161 references jointly classified in C3 and in at least another category were deemed of low quality for C3. These 73 references represent 63.5% of the C3 low quality group.

Figure 4 shows how the yearly average Q-index has evolved with time from 1994 up to August 26th, 2022. This period has been defined because 1994 is the year in which the first C3 research paper was published. Moreover, the period from 1995 to 2002 has been omitted from the graph to improve its readability because no C3 publications have been identified for any of these years. In addition, the quantity of yearly C3 publications is explicitly listed in Table 1 to improve the understanding of the analyses.

It is possible to notice that, after two isolated peaks between 2003 and 2007 and in 2012, the Q-index has consistently increased with time during its 2016-2022 growing wave. The yearly average Q-index started at 1.0 in 2015 and has continuously grown up to 3.43 in 2022 except for two drops: one in 2017, when the growth wave was still at its beginning, and another one in 2021, with a small relative reduction of 7.6% in relation to 2020 (from 3.31 to 3.08).
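For reference, the annualization used in the trend estimate of subsection IV-A can be reproduced with the simple scaling below. The partial count shown is a hypothetical placeholder chosen only to be consistent with the order of magnitude of the reported C3 estimate.

```python
# Back-of-envelope annualization discussed in subsection IV-A (illustrative only):
# counts observed up to August 26th (day 238 of 2022) are scaled by 365/238 ≈ 1.53
# under the assumption that the publication rate stays constant for the rest of the year.
DAYS_OBSERVED = 238
DAYS_IN_YEAR = 365

def annualize(partial_count: int) -> int:
    """Scale a partial-year publication count to a full-year estimate."""
    return round(partial_count * DAYS_IN_YEAR / DAYS_OBSERVED)

# Hypothetical partial count of 47 C3 papers yields an estimate of ~72,
# i.e., the order of magnitude of the C3 figure reported above.
print(annualize(47))
```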
TABLE 1. Number of C3 publications per year.

It is worth highlighting that the isolated peaks between 2003 and 2007, as well as that in 2012, result from the fact that most of the few C3 publications in each of these years (no higher than 4 C3 publications, as per Figure 3) were deemed of high importance to the safety assurance of AI-based systems. This stems from early and successful attempts on addressing means to either assess if neural network-based control systems are safe [32], [33], [34], [35], [36], [37] or conceive fail-safe AI-based systems [38].

The remainder of the Figure 4 behavior can be understood by the analysis of Figure 5, which shows the relative growth of C3 references during the same period of Figure 4. Figure 5 shows, for each of the assessed years, the percentage of texts from each year which were rated with either a low quality Q-index (i.e., 0 or 1), an average quality Q-index (i.e., 2 or 3), or a high quality Q-index (i.e., 4, 5 or 6).

Up to 2015, when no more than 4 C3 publications were available per year, the percentage attributed to each of the aforementioned categories varies significantly. Starting in 2016, such oscillations have reduced due to the increase of C3 publications, and high quality publications have consistently risen in participation since then, with small drops in 2017 and 2021. Moreover, high quality C3 publications have become the most prevalent among all C3 references from 2019 up to 2022 (August 26th), yearly peaking at between 40% and 55% of the total of C3 publications in this timespan.

C. CONCLUDING REMARKS ON THE BIBLIOMETRICS ANALYSIS
The results presented in sections IV-A and IV-B are highly suggestive that the safety assurance of AI-based systems has
been increasingly deemed worthy of relevant research by the research community especially from 2016 onwards. This reinforces the importance of the present work not only in compiling, assessing and reporting the progressively effervescent state of the art and future work raised by the research community on the safety assurance of AI-based systems, but also in expanding on these subjects by cross-fertilizing current research in order to draw further conclusions on these matters. Technical aspects regarding this 'expanded overview' on the state of the art and future work on the safety assurance of AI-based systems will be covered in the forthcoming sections of this paper.

V. STATE OF THE ART RELATED TO THE SAFETY ASSURANCE OF AI-BASED SYSTEMS
The objective of this section is to present the state of the art related to the safety assurance of AI-based safety-critical systems. It starts in subsection V-A with a brief introduction on how the relationship between AI and safety has evolved so far and up to the point when the safety assurance of AI-based systems became a major research problem on its own. Afterwards, the state of the art related to the safety assurance of AI-based systems per se is explored in subsection V-B.

A. RELATIONSHIP BETWEEN AI AND SAFETY: ORIGINS AND EVOLUTION UNTIL THE SAFETY ASSURANCE OF AI-BASED SYSTEMS
A summary of the overall evolution on how AI and safety have been combined with time is depicted in Figure 6, in which three major 'waves' of research are presented. These are detailed throughout this section.

The earliest records of research addressing the usage of AI in safety-critical applications date back to the mid-to-late-1980s and are related to using knowledge-based systems as a means to detect potential faults in nuclear power plants and report such faults to human operators. In this context, the information produced by the knowledge-based systems would support the decision-making process of human operators on triggering, e.g., preventive maintenance and emergency actions to respectively avoid and contain potentially unsafe scenarios [39], [40], [41], [42], [43]. Such a trend of using knowledge-based systems to support human decision-making in safety-critical applications, which represents the 'first wave' shown in Figure 6, was still highly prevalent through the 1990s [44], [45], [46], [47], [48], during which only scarce efforts on other AI approaches, such as machine learning, have been carried out [49], [50].

In the early 2000s, a 'second wave', slightly stronger than the first, emerged with two major changes in the relationship between AI and safety. Firstly, machine learning techniques started replacing knowledge-based systems as the preferred AI technique used in research related to safety-critical systems. Secondly, efforts in including AI within the control loop of safety-critical systems, rather than just supporting the decision-making of human operators, also became increasingly more frequent.

One of the earliest research efforts towards these changes is the one carried out by Wei [51], who presented an Artificial Neural Network (ANN)-based system that supports drivers of ground vehicles in performing safe lane-changing operations by supervised learning of potentially safe scenarios from video recordings of human drivers. Even though the system was still not developed aiming at fully autonomous vehicles – which ultimately still makes it a human decision-making support system –, the results obtained by the author showed that his system was successful in mimicking human behavior in
safely recommending lane-changing operations yet retaining high driving performance (e.g., higher speed than state-of-the-art solutions by that time) while moving from one lane to another [51].

FIGURE 5. Relative evolution of yearly C3 papers per Q-index fuzzy group with time.

Another early study worthy of mention is the one by Kurd and Kelly [32], who have developed a white-box, fuzzy map-based model to design and represent ANNs used in safety-critical applications and which would be further successfully exercised within a Gas Turbine Aero-Engine system in the following years [33], [36]. With this effort, the authors have not only introduced the possibility of using AI as part of the control loop of safety-critical applications, but also discussed and exercised explainable AI, which would only emerge on its own as a concept and research theme within the AI field in the mid-2010s, based on its unconscious awareness at the expert systems era [52]. Moreover, the works by
Kurd and Kelly [32], [33], [36] represent the earliest studies that are highly relevant to the safety assurance of AI-based systems as per details covered in subsection V-B.

Since then, research that combines the areas of AI and safety assurance experienced significant effervescence. This trend, which characterizes the 'third wave' of research in Figure 6, emerged in 2009 as greater than the previous waves and became even stronger especially from 2016 onwards, as sustained by the bibliometrics analyzed in section IV. The main justification for the steep increase of publications involving AI and safety stems from the increase in cost-effective sensing and data processing capabilities of computer-based systems throughout the 2000s and 2010s, supported by parallel computing and, more recently, cloud computing [53]. With these features, computationally costly AI solutions – notably those related to deep learning – have become feasible to pave the way towards implementing complex functions with AI [53] – including safety-critical ones.

So far, research involving AI and safety without directly addressing the safety assurance of AI-based systems (i.e., research included within categories C1, C2, C4, and C5, as per subsection III-D) has spanned a multitude of application domains and AI techniques. The target applications include, but are not limited to, power plants, transportation systems of several means, medical systems, process industries, and automating safety analyses. AI techniques, in turn, comprise a non-exhaustive list with several variants of ANNs (including deep learning – DL), logistic regression, k-nearest neighbors (kNN), decision trees (DTs), random forests (RFs), support vector machines (SVMs), boosting, and reinforcement learning (RL).

B. SAFETY ASSURANCE OF AI-BASED SYSTEMS: THE ROAD SO FAR
As introduced in subsection V-A, the series of research papers by Kurd and Kelly [32], [33], [36] has been considered the first meaningful effort in exploring how to ensure that AI-implemented safety-critical functions indeed meet their related safety requirements. These were followed by 326 other research papers aiming to explore the safety assurance of AI-based safety-critical systems up to August 26th, 2022, with 144 of them¹ meeting the criteria for high relevance to the area, as explained in subsection IV-B.

The objective of this section is to expand on the safety assurance of AI-based systems' state of the art and contextualize it with the guidance of questions Q1 to Q4 from the SLR method (defined in subsection III-E). Special focus is given to the responses for these questions stemming from the 144 C3 papers that were deemed to highly contribute to the safety assurance of AI-based systems theme.

¹ This quantity includes the work by Kurd and Kelly [32], [33], [36].

1) OBJECTIVES OF RESEARCHING THE SAFETY ASSURANCE OF AI-BASED SYSTEMS – QUESTION Q1
By means of the SLR question Q1, the analysis of the C3 publications allowed identifying four mutually exclusive objective groups (OGs) related to their goals towards the safety assurance of AI-based systems:
• OG1: The research only aims to review and/or spot gaps on AI-based systems verification, validation and safety activities;
• OG2: The research aims to propose means to ensure that an AI-based system/function is safe and present supporting results;
• OG3: The research aims to apply methods defined in other research to ensure that an AI-based system is safe;
• OG4: The research covers other topics which may be either marginally related or unrelated to OG1, OG2 and OG3.

Table 2 summarizes the absolute and relative results of papers belonging to each of these OGs considering all C3 texts and only those rated with high quality as per their Q-index. The results indicate that high quality C3 references focus especially on OG2, which is expected given the scope of this research.

2) AI TECHNIQUES CONSIDERED IN THE SAFETY ASSURANCE OF AI-BASED SYSTEMS – QUESTION Q2
In order to evaluate what AI techniques have indeed been covered in the references on the safety assurance of AI-based systems, this information has been collected from each of the reviewed research papers by means of the SLR question Q2. During the analysis of the C3 publications, however, it has been noticed that some of them do not explicitly mention AI, and even those which do mention it do so with varying degrees of depth with regard to AI variants, machine learning (ML) categories, and even specific AI and ML techniques. The observed variations are listed as follows:
a) AI Variants:
i. No explicit mention to AI;
ii. AI in general;
iii. Machine Learning in general (ML);
iv. 'Classic AI' search, game theory and evolutionary algorithms;
v. Knowledge-Based Probabilistic Models (KBPMs), such as Bayesian approaches, Kalman and Particle Filters, and Dempster-Shafer Theory.
b) ML categories:
i. Supervised Learning (SL);
ii. Unsupervised Learning (UL);
iii. Reinforcement Learning (RL);
iv. Deep Learning (DL).
c) Specific AI and ML techniques:
i. Artificial Neural Networks (ANNs);
ii. Decision Trees (DTs) and Random Forests (RFs);
iii. Support Vector Machines (SVMs).
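One convenient way to read the Q2 variations above is as a small tagging scheme applied to each reviewed C3 reference. The sketch below is merely illustrative: the label strings paraphrase items a) to c), and the example record is hypothetical.

```python
# Illustrative sketch of the Q2 tagging scheme described above.
# The label sets mirror items a), b) and c); the example record is hypothetical.
AI_VARIANTS = {"no explicit AI", "AI in general", "ML in general",
               "classic AI / game theory / evolutionary", "KBPM"}
ML_CATEGORIES = {"SL", "UL", "RL", "DL"}
TECHNIQUES = {"ANN", "DT/RF", "SVM"}

def tag_reference(variants, categories, techniques):
    """Validate and store the Q2 tags attributed to one C3 reference."""
    assert set(variants) <= AI_VARIANTS
    assert set(categories) <= ML_CATEGORIES
    assert set(techniques) <= TECHNIQUES
    return {"variants": set(variants),
            "categories": set(categories),
            "techniques": set(techniques)}

# Hypothetical example: a paper on DNN-based controllers could be tagged as
example = tag_reference({"ML in general"}, {"SL", "DL"}, {"ANN"})
```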
FIGURE 7. Proportion of yearly C3 high quality publications between 2018 and 2021 for each AI variant, ML category and AI and ML technique.

b: SAFETY ASSURANCE BASED ON SAFETY ENVELOPES
The basic idea of this approach is to restrain the behavior of AI-based systems by design within a deterministic (i.e., non-AI-implemented) safety envelope (alternatively referred to as safety cage). In this context, such an envelope constrains the overall system response to a knowingly safe image set by design regardless of its AI, thus leading the underlying AI elements to play at most a minor role on safety. As a result, typical safety assurance methods for non-AI-based systems would suffice, since these would be solely applied to the non-AI-related safety envelope elements [18].

An overview of the advantages and disadvantages of the safety envelope approach, thoroughly reviewed in the remainder of this section along with representative references, is presented in Figure 10. Green text boxes indicate potential advantages of the approach, whereas red text boxes indicate its disadvantages and difficulties.

Solutions of this category have been presented and discussed by, e.g., Machin et al. [75], Shafaei et al. [76], Kuutti et al. [77], and Lazarus et al. [58]. Further information on each of these research papers is presented henceforth.

Machin et al. [75] have presented a general framework to translate safety requirements into predicate logic rules that formally define safety envelopes for active safety monitors. A case study of a mobile manipulator robot for co-working led to only partially successful results in defining safe constraints for the robot operation, with the following limitations:
a) Some safety requirements could not be addressed by means of the proposed framework due to the lack of observable data to generate safety envelopes [75];
b) Physical tests evidenced that the generated safety envelopes still allowed violating some safety requirements [75];
c) There is no explicit mention as to whether AI has indeed been used within the case study robot design.

Shafaei et al. [76] have crafted a set of recommended actions to reduce the impacts of the underlying uncertainties of machine learning for safety-critical components used in UGVs. Among the recommended actions, the authors highlight creating ontologies to enforce design level decisions and translate them into a safe envelope that limits the response of ML-based components. Since the research paper solely focuses on presenting the proposed method for dealing with ML-related uncertainties, no practical results have been obtained by the authors [76].

Kuutti et al. [77] have explored implementing redundant safety envelopes on the control loop of a UGV so that the safety envelopes avoid front collisions based on two AI-based movement controllers: a Deep Neural Network (DNN)-based controller for optimum performance, and a suboptimal ANN-based controller with fewer layers. Within a simulated environment, Kuutti et al. [77] have observed that the safety envelopes prevented unsafe scenarios and that such safe action was required with a higher frequency when the suboptimal ANN-based controller was in charge of controlling the UGV instead of the DNN-based one.

Lazarus et al. [58] have developed an RL approach to synthesize safety envelopes which aims to increase their flexibility by means of dynamic boundaries determined according to operational characteristics. The authors have presented positive results of their approach in simulated case studies involving Unmanned Aerial Vehicles (UAVs), since none of the simulated UAVs went outside their respective safety envelopes even when adverse operation conditions (e.g., strong winds) were exercised. Nevertheless, since RL determines the synthesized safety envelopes, assessing a priori if underlying RL models are sufficiently safe is still needed to ensure that the safety envelopes synthesized with it are themselves indeed safe. This last aspect has not been discussed by Lazarus et al. [58].

Safety envelope-based solutions are mostly criticized for two main reasons: (i.) underlying difficulties in formally defining them, and (ii.) their inherent feature of overly constraining the performance gains that AI can introduce [78].
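As a minimal illustration of the 'safety cage' idea reviewed above, the sketch below wraps a hypothetical AI command in a deterministic envelope that clamps it to a statically known safe range and overrides it with a fallback action when a safety condition is violated. Signal names, bounds and the fallback are placeholders, not taken from the cited works.

```python
# Minimal illustration of a deterministic safety envelope ("safety cage")
# wrapped around an AI controller. Names, bounds and the fallback action are
# hypothetical and not drawn from the cited works.
from dataclasses import dataclass

@dataclass
class Envelope:
    min_cmd: float            # lowest command the envelope allows
    max_cmd: float            # highest command the envelope allows
    min_safe_distance: float  # below this distance, override with braking

def enforce(envelope: Envelope, ai_command: float, distance_ahead: float) -> float:
    """Return a command guaranteed (by construction) to stay inside the envelope."""
    if distance_ahead < envelope.min_safe_distance:
        return envelope.min_cmd                        # deterministic fallback: full braking
    # otherwise, clamp the AI output to the statically verified range
    return max(envelope.min_cmd, min(envelope.max_cmd, ai_command))

# Example: the AI requests strong acceleration while an obstacle is close.
print(enforce(Envelope(-1.0, 1.0, 5.0), ai_command=0.9, distance_ahead=3.2))  # -> -1.0
```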
Sha et al. [84] have explored their proposed overapproximation scheme by simulating a mass-spring damper system controlled by a DNN and varying the latter's architecture (e.g., number of neurons, number of hidden layers, and activation functions) in each experiment. With the aid of a prototyping tool of their own overapproximation model, the authors were able to create 10 safe DNNs out of the 12 exercised architectures [84]. Since at least a single solution is sufficient to deal with a specific problem, it is possible to consider that the authors were successful in crafting a proven-as-safe AI-based solution for their case study.

Claviere et al. [85], in turn, have developed means to overapproximate ANNs strictly based on the ReLU (Rectified Linear Unit) activation function. Their approach has been exercised with a case study that involves a modified version of the Airborne Collision Avoidance System for Unmanned Aircraft (ACAS Xu), in which ANNs were introduced as a means to generate safe UAS maneuvers whilst reducing the storage needs of the original ACAS system. The case study scenario, involving two aircraft in potentially conflicting routes to be resolved by the action of ACAS Xu, showed that the overapproximation of the ANNs yielded safe situations in 98.8% of the tested settings, whereas the remaining 1.2% could not be proven as safe (i.e., they are not necessarily unsafe, but there is not sufficient evidence to say otherwise).

Wang et al. [86] have also focused their efforts on ReLU-based ANNs with three main objectives: (i.) tightening the overapproximations more than reference studies, (ii.) improving the overall processing time in overapproximating neural networks, and (iii.) incorporating underlying uncertainties of input data into the overapproximation calculations. By means of a case study of an advanced cruise control system for UGVs, the authors have reached the overapproximate set of its ANN and the conclusion that such an overapproximation would be safe if the uncertainties of inputs were constrained to a specific interval. When exercising the ANN with input data within and outside such an uncertainty range, the authors have obtained proof to support the soundness of the previous conclusion: all outputs of the ANN were safe when the uncertainty input bounds were respected, and unsafe states were reached when this condition was not met [86].

Other research in which tools for formally verifying ANNs are covered are the ones by Zhu et al. [87] (ReachNN – Reachability of Neural Networks), Ivanov et al. [88], [89] (Verisig and Verisig 2.0), Sidrane et al. [90] (OVERT²), Tran et al. [91] (NNV – Neural Network Verification), Fahmy et al. [92] (HUDD – Heatmap-Based Unsupervised Debugging of Neural Networks), Pulina and Tacchella [93] (Neural Networks Verifier – NeVer), and Katz et al. [94] (Marabou). The DeepCert tool by Paterson et al. [95] mixes formal verification of ANNs and DNNs used in image processing functions with black-box tests that aim to model potential image corruptions due to, e.g., haze, blur and contrast changes. Moreover, other tools that can aid the verification of discrete-time systems with AI, such as dReal, dReach, and Flow*, have also been covered in the research by Tuncali et al. [96] and Val et al. [97].

² A formal definition of the acronym 'OVERT' is missing in its originating reference by Sidrane et al. [90].

Phan et al. [98] and Shukla et al. [99] have proposed architectural models which define the so-called simplex architecture for safety-critical ML-based systems. This architecture comprises four main modules: (i.) an AI controller and three non-AI-based elements: (ii.) a reference controller, which has been proven to safely accomplish the same safety-critical function of the AI controller albeit with subpar performance, (iii.) a safety-critical controller switch, which chooses the output of the AI controller if it is safe or the output of the reference controller otherwise, and (iv.) an optional AI controller adapter, which improves the AI controller with time by means of a learning process whenever it produces an incorrectly permissive (unsafe) output. The objective of the preexistent non-AI-based safe elements is twofold: (i.) ensuring safety when AI-based controllers fail to do so, and (ii.) leveraging the runtime learning of AI-based controllers so that the safe controller is used as little as possible with time.

Phan et al. [98] have exercised the full simplex architecture by means of two simulated case studies with ANN-based AI controllers: a moving-target tracking system for UGVs and an automated insulin pump for medical patients with diabetes. In both scenarios, the simplex architecture as a whole led both systems to behave safely: the UGV was able to track a moving target and avoid colliding with it, and the insulin pump avoided long-term hyperglycemia and short- or long-term hypoglycemia [98].

Shukla et al. [99], in turn, designed an ANN-based control system for UAVs based on the simplex architecture, albeit disregarding the AI controller adapter in their model. Case studies carried out in a simulated environment and in a hardware-in-the-loop scheme (i.e., with the physical implementation of the UAV control system) supported that the UAV control system met its safety requirements, since no collisions with other elements have occurred. Furthermore, the authors have also observed proper switching between the AI controller and the reference controller whenever needed to avoid unsafe scenarios [99].

In addition to Phan et al. [98] and Shukla et al. [99], other recent research has explored the usage of the simplex architecture for safety-critical systems with AI. In 2022, for instance, four research papers report its usage: Chen et al. [100], Peng et al. [101], and Wang et al. [56] have used the simplex architecture to support safety-critical functions on UGVs, whereas Thumm and Althoff [102] have experimented with its usage in industrial environments with human-robot collaboration.

In all three UGV-related research papers, the authors have conceived an RL-based AI controller and a proven-as-safe reference controller to perform driving control functions.
In all three UGV-related research papers, the authors have conceived an RL-based AI controller and a proven-as-safe reference controller to perform driving control functions. It is important to highlight that, whereas Phan et al. [98] and Shukla et al. [99] have crafted a non-AI-based element as the reference controller, Chen et al. [100], Peng et al. [101], and Wang et al. [56] opted for using AI controllers which have been proven as safe by means of overapproximate mathematical models. In all three research papers, the authors have performed simulated case studies in which the safety-critical UGV functions they explore run on controllers included in the simulation loop, and they reached overall positive conclusions with regard to safety assurance whilst also retaining adequate performance.
Finally, Mehmood et al. [103] have extended the original simplex architecture by adding to it a look-ahead mechanism which loosens the safety requirements of the reference controller, allowing the latter to be also AI-based whilst ensuring global system safety. In the approach proposed by the authors, the safety-critical controller switch is augmented with two capabilities: firstly, it is able to process the immediate-future safety states of the whole system; secondly, it carries out reachability analyses on the reference controller to check whether or not it will reach the near-future safe states. If safety is not ensured, two safe actions are possible: (i.) the reference controller downgrades to previous versions until it meets the safety constraints, or (ii.) the augmented safety-critical controller switch takes a deterministic safety decision if the downgrading of the reference controller times out.
The authors have exercised the extended simplex architecture with two simulated case studies: a model-predictive control for multi-robot coordination, and a collision avoidance mechanism for aircraft. The authors have not identified potentially unsafe scenarios from a systems point of view, but highlighted the difficulty in using their extended simplex architecture because it requires a significant amount of storage space for the look-up tables of the reference controller (e.g., hundreds of gigabytes for the aircraft collision avoidance controller) [103].
The main limitation in designing fail-safe AI is the high computational cost of reachability analyses, even with simple, non-deep AI models with few input variables. This is due to the inherently NP-hard computational complexity of the involved models, which require techniques such as Satisfiability Modulo Theories (SMT) and Linear Programming to be solved [104]. Furthermore, even if simplification schemes such as overapproximations are considered, these can either mask potential safety issues if misconceived, or even lead the resulting system to be 'excessively safe', to the point that an allegedly better performance introduced by the AI might be unjustified by the added complexity [74], [82], [87].
Moreover, when fail-safe architectures make use of non-AI-related fail-safe elements to mitigate potentially unsafe responses of the AI, they also share the corresponding limitations of safety envelopes. On the other hand, though, as the AI elements are designed to learn from their previous unsafe responses, greater flexibility can still be achieved than with safety envelopes per se. Finally, since there is still no consensus on which types of AI redundancy support the design of fail-safe AI, no 'design patterns' towards fail-safe AI architectures have been established so far [37].
d: SAFETY ASSURANCE BASED ON EXPLAINABLE AI AND WHITE-BOX ANALYSES
Explainable AI (XAI) is also deemed an emergent topic to address the safety assurance of AI-based systems, since it aims to build AI elements which clearly allow humans to identify the decisions taken by the AI and their underlying reasoning. This ultimately makes it easier to assess AI-based systems with white-box analyses, which are the norm of traditional approaches used with non-AI-based safety-critical systems, and also allows building robust safety arguments due to the in-depth analyses [105].
A summary of the advantages and disadvantages of the approach combining XAI with white-box analyses, discussed in more detail throughout this subsection of the paper, is presented in Figure 12. Green text boxes indicate potential advantages of the approach, whereas red text boxes indicate its disadvantages and difficulties.
Kurd et al. [37] and Kurd and Kelly [32], [33], [36] have developed a W-shaped systems lifecycle to build and analyze safety-critical hybrid ANNs based on Fuzzy Self-Organizing Maps (FSOMs). The lifecycle introduces the concept that the aforementioned hybrid ANNs can be assessed as safe because they are explainable. Such explainability results from the ANN being generated by a gradual refinement of the data used in the ANN learning as its design progresses, which led these ANNs to be called Safety-Critical Artificial Neural Networks (SCANNs). This refinement, in turn, is automatically achieved by the FSOMs, which are created by human experts through fuzzy rules that explicitly define the ANNs' expected behavior.
A case study of a hybrid, FSOM-based ANN to control a gas turbine has been presented by the authors, along with results that support that the system is explainable and safe for the three safety requirements they defined – namely, (i.) avoiding engine surge, (ii.) avoiding turbine blade overheating, and (iii.) avoiding engine overspeed [37].
Grushin et al. [105] have conceived an overapproximation model to translate Long Short-Term Memory (LSTM) ANNs into explainable models by clearly defining hyperplanes which characterize the image set of the LSTM ANNs. The case study explored by the authors has aimed to conceive and assess the safety of an explainable LSTM ANN-derived model which is in charge of predicting if an aircraft will reach a degraded state. In this context, the LSTM ANN decides whether a degradation is expected based on both the aircraft's internal systems' health and the operational context of the global airspace as monitored by the aircraft itself [105].
The authors have presented two main results. Firstly, the hyperplanes which define the boundaries between operational and degraded states corroborate the explainability of the model. Secondly, the example of the case study leaned towards safety.
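One way to read hyperplane-style explanations such as the ones above is as linear surrogates of a black-box classifier's decision boundary. The sketch below is not Grushin et al.'s method [105]; it merely fits, under simplified assumptions, a separating hyperplane to the operational/degraded labels produced by a hypothetical black-box predictor, so that the boundary can be inspected and argued over explicitly.

```python
import numpy as np

rng = np.random.default_rng(1)

def black_box_degradation_predictor(x):
    """Hypothetical opaque model (stand-in for an LSTM): flags degradation for a nonlinear region."""
    return (1.4 * x[:, 0] - 0.8 * x[:, 1] + 0.2 * x[:, 0] * x[:, 1] > 0.5).astype(float)

# Sample the assumed operating envelope and query the black box.
X = rng.uniform(-1.0, 1.0, size=(2000, 2))   # e.g., normalized health and airspace-context features
y = black_box_degradation_predictor(X)

# Fit a separating hyperplane w.x + b = 0 as a surrogate explanation (logistic regression by gradient descent).
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * float(np.mean(p - y))

agreement = np.mean((X @ w + b > 0).astype(float) == y)
print(f"surrogate hyperplane: {w[0]:.2f}*x1 + {w[1]:.2f}*x2 + {b:.2f} = 0")
print(f"agreement with the black box on the sampled envelope: {agreement:.1%}")
```

The explicit hyperplane and its measured agreement with the original model are the kind of analytic, geometry-based artefacts that make white-box safety arguments tractable; a low agreement would instead indicate that a single linear boundary cannot explain the model's behaviour.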
It has been considered that more testing criteria are still needed for increased test coverage in, e.g., a context of automated test generation for safety-critical systems [109].
DeepXplore has a motivation similar to that of DeepGauge in leveraging test coverage by using data extracted from the internal structure of an AI-based model, and it relies on using AI for that purpose as well. According to Pei et al. [111], experiments have allowed not only detecting that the assessed DNNs failed in dealing with specific types of corner cases, but also increasing the accuracy of the models by 3% once improvements had been introduced to the design of the DNNs. The authors have not discussed, though, how DeepXplore's AI has been ensured as appropriate for such an application.
Another relevant avenue for building safety arguments based on white-box analyses and tests is using fault injection techniques in such a way that faults are injected into the internal elements of an AI model. By injecting faults into safety-critical AI elements, one can assess how resiliently these faults are tolerated and whether an unsafe state otherwise undetected in regular tests and analyses can be reached. Two interconnected tools developed by an overlapping group of researchers – namely, TensorFI (TensorFlow Fault Injection) [112] and BinFI (Binary Fault Injection) [113] – are herein highlighted as relevant research on this theme.
TensorFI represents the core fault injection engine for ML-based components in the research by Chen et al. [112], [113]. Even though its development was targeted towards ML implemented with the TensorFlow framework, it is claimed that other frameworks and libraries can also benefit from the underlying fault injection techniques if properly adapted. By assuming the hypothesis that hardware and software faults can be equally represented by corrupting a TensorFlow internal operator, Chen et al. [112] have crafted a tool that allows modeling a wide and plausible set of random and systematic faults that can affect the elements of a computer-based safety-critical system. Experiments with ANNs used in image recognition functions, including those embedded in autonomous driving systems, allowed inferring that TensorFI helps improve the robustness of ML-based elements to faults. Moreover, increasing TensorFI's flexibility to support other frameworks for developing ML, including the C++ version of TensorFlow, is listed as a needed improvement [112].
BinFI, in turn, is a binary search-based approach to identify safety-critical ML elements and concentrate the fault injection strategy on these elements instead of performing a more comprehensive and random fault injection strategy. By means of experiments with DNNs used in autonomous driving systems, Chen et al. [113] have identified that, by using BinFI along with TensorFI, the binary search strategy outperformed a random fault injection strategy in making ML safer. On the other hand, it has been stressed that such positive results of the BinFI strategy only apply to ML elements whose error propagation functions are at least approximately monotonic [113].
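The weight-corruption idea behind TensorFI/BinFI-style fault injection can be sketched independently of any framework. The example below is not TensorFI or BinFI code; it injects sign-flip-and-amplify perturbations into the weights of a small NumPy network and counts how often a hypothetical safety-relevant output bound is violated.

```python
import numpy as np

rng = np.random.default_rng(2)

def forward(weights, x):
    """Tiny 2-8-1 ReLU network used as a stand-in for a safety-relevant ML component."""
    (W1, b1), (W2, b2) = weights
    h = np.maximum(W1 @ x + b1, 0.0)
    return float(W2 @ h + b2)

def inject_fault(weights, scale=10.0):
    """Corrupt one randomly chosen weight (a crude stand-in for a hardware bit flip)."""
    (W1, b1), (W2, b2) = [(W.copy(), b.copy()) for (W, b) in weights]
    M = W1 if rng.random() < 0.5 else W2
    i = tuple(int(rng.integers(0, s)) for s in M.shape)
    M[i] *= -scale                      # flip the sign and amplify the selected weight
    return [(W1, b1), (W2, b2)]

weights = [(rng.normal(size=(8, 2)), rng.normal(size=8)),
           (rng.normal(size=(1, 8)), rng.normal(size=1))]
x = np.array([0.3, -0.2])
SAFE_BOUND = abs(forward(weights, x)) + 5.0     # hypothetical safety requirement on the output magnitude

violations = sum(abs(forward(inject_fault(weights), x)) > SAFE_BOUND for _ in range(1000))
print(f"unsafe outputs under injected faults: {violations}/1000")
```

A campaign of this kind, scaled up and driven by a catalogue of plausible fault models, is what allows arguing how resiliently a safety-critical ML element tolerates random and systematic faults.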
Finally, Jia et al. [114] have discussed the theoretical relationship between XAI and safety and illustrated their findings by means of a case study in which different types of AI are employed to guide the process of extubation of patients in intensive care units. The main conclusion of the authors is that, even though XAI plays an important role for safety, since it allows tracing back the reasoning performed by the AI in a way that humans are able to understand, XAI is not sufficient for ensuring safety on its own.
Based on the previous discussion, the benefits of XAI and white-box testing come at the expense of the following burdens:
a) There is a higher need for multidisciplinary experts on both safety and AI to support the safety assurance of AI-based systems;
b) XAI on its own poses challenges because it is still an emerging area [52];
c) XAI does not ensure that a system is safe [114];
d) White-box analyses of rather opaque AI models, such as large ANNs and DNNs, are challenging enough to the point of leaning towards unfeasibility in many applications. If their internals could be properly understood during white-box analyses, simpler and more explainable AI models could have been conceived beforehand instead [57], [58]. For instance, among all reviewed research papers of this class, only those by Grushin et al. [105] and Kurd et al. [37] have provided clear, analytic geometry-related XAI capabilities to ANNs.
e: DEFINITION OF SAFETY ASSURANCE PROCESSES SPECIFICALLY FOR AI-BASED SYSTEMS
The last relevant approach for the safety assurance of AI-based systems is that it shall be continuously carried out with a process-oriented approach, starting at requirements elicitation and extending up to system operation, by monitoring the system outputs over time and comparing them with the expected results. It has been argued that such a process-oriented approach shall take into account specific safety assurance techniques for AI [70], [78] and that the typical V-shaped method from non-AI-based safety standards, such as IEC61508 and CENELEC EN50129 [9], [11], is not deemed enough to deal with AI-based systems [115], [116].
In addition to the previous characteristics, extending the safety assurance process of AI-based systems so that it is continuously performed during system operation up to its decommissioning is another difference from non-AI-based systems. This is mostly important for online learning-based systems, since their constant learning changes their architecture as they operate, and the original safety arguments that supported their safety prior to revenue service can be undermined by new, on-demand learned settings.
A landscape of the advantages and disadvantages of crafting a safety assurance process for AI-based systems, based on the further analyses in the present subsection, is presented in Figure 13. As in Figure 12, green text boxes indicate potential advantages, whereas red text boxes indicate disadvantages and difficulties.
Mock et al. [120] have proposed a 12-step lifecycle model for ML components used within UGV systems. The 12 steps are as follows: (i.) specifying customer-facing functionalities; (ii.) specifying the operational design domain context; (iii.) specifying the system architecture; (iv.) specifying system functions, notably AI-related ones; (v.) specifying and acquiring training and development data; (vi.) designing ML models; (vii.) pre-processing data; (viii.) training (supervised) ML models; (ix.) post-processing data; (x.) performing tests, verification and validation activities; (xi.) monitoring the system operation; and (xii.) performing maintenance whenever needed. The authors have neither discussed recommended techniques for any of these steps nor presented a practical application of the proposed model by means of, e.g., a case study [120].
Following a similar approach, Häring et al. [118] have developed an 8-step process to guide the lifecycle of AI-based systems – including those with online learning. The eight steps defined by the authors are (i.) context analysis, scope, and aim formulation; (ii.) AI method selection; (iii.) data selection and spotting; (iv.) data preprocessing; (v.) AI model development and training; (vi.) model testing, verification, and validation; (vii.) model application; and (viii.) model modification and updating. Even though the authors have identified that Generative Adversarial Networks (GANs) are useful tools to support the generation of safety-critical scenarios in steps ''(i.)'' and ''(vi.)'', the research has limitations similar to those of Mock et al. [120] – namely, no deepening of the needed technical activities for each step, and no examples or guidelines for its application [118].
The same trend has also been followed by Pedroza and Adedjouma [116], who have proposed an iterative lifecycle for developing safe-by-design AI-based systems. Each lifecycle iteration includes the following set of 11 steps: (i.) defining missions and goals; (ii.) structuring AI principles; (iii.) performing decompositional analyses; (iv.) structuring AI knowledge bases; (v.) allocating AI techniques; (vi.) selecting knowledge bases; (vii.) designing the detailed AI architecture; (viii.) developing and integrating AI models; (ix.) settling validation benchmarks; (x.) evaluating the AI performance; and (xi.) implementing and deploying the AI system. Safety entwines with this lifecycle by means of situational analyses and the identification of hazards, safety goals, and AI-related malfunctions and faults. Even though the authors have applied the proposed method to build the conceptual design of an autonomous shuttle system in Systems Modeling Language (SysML), no further technical aspects and/or practice have been presented.
The research by Pereira and Thomas [121] also follows a similar approach. In this study, the authors advocate that the lifecycle of an ML-based system shall have at least five steps – namely (i.) requirements specification; (ii.) data management; (iii.) model development; (iv.) model testing and verification; and (v.) model deployment. Furthermore, the authors present a non-exhaustive list of hazards that shall be addressed for safety-critical systems at each of these steps. They also highlight that safety assurance techniques of regular, non-AI-based systems shall be used along with specific techniques for AI to build sound safety arguments [121].
The conceptual lifecycle and hazards identified by Pereira and Thomas [121] are further exercised on a case study of a self-driving vehicle used in a collaborative human-robot industrial environment. In this case study, the authors illustrate how the lifecycle and the underlying ML hazards can be expanded from a technical standpoint; nevertheless, techniques for assuring that AI is safe are not further explored by the authors.
The SafeML approach proposed by Aslansefat et al. [125] establishes a safety assurance process for classifiers (i.e., supervised learning-based components with discrete outputs). Its safety-related activities range from the selection of appropriate datasets for building and training the classifiers up to the monitoring of the system during its operation. Statistical criteria – notably the cumulative distribution functions of each output class – are used to assess whether safety has been reached with a given confidence. SafeML has been built in such a way as to provide proper integration with XAI and security, given their contributions to safety.
Even though the case studies performed by Aslansefat et al. [125] focus on experimental datasets and ML elements not necessarily with a tight link to actual safety-critical applications, additional studies have been carried out by other researchers, as per the SafeML official GitHub project history [126]. Bergler [127], for instance, has applied it to an autonomous driving system, focusing especially on the training dataset safety activities. The overall results were positive and supported the soundness of SafeML for ML-based systems used in typical safety-critical applications [127].
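The statistical-distance idea behind SafeML can be illustrated with a small, framework-free sketch. The code below is not from the SafeML repository [126]; it merely compares, per output class and under simplified assumptions, the empirical cumulative distribution function (ECDF) of a feature observed at design time with the ECDF observed in operation, and raises a flag when a Kolmogorov-Smirnov-style distance exceeds a hypothetical threshold.

```python
import numpy as np

def ecdf_distance(sample_a, sample_b):
    """Maximum gap between the two empirical CDFs, evaluated on the pooled sample."""
    grid = np.sort(np.concatenate([sample_a, sample_b]))
    cdf_a = np.searchsorted(np.sort(sample_a), grid, side="right") / len(sample_a)
    cdf_b = np.searchsorted(np.sort(sample_b), grid, side="right") / len(sample_b)
    return float(np.max(np.abs(cdf_a - cdf_b)))

def monitor(training_features, runtime_features, threshold=0.15):
    """Per-class drift check between design-time and operation-time feature distributions."""
    report = {}
    for label, train_sample in training_features.items():
        dist = ecdf_distance(train_sample, runtime_features[label])
        report[label] = (dist, "OK" if dist <= threshold else "REVIEW")
    return report

rng = np.random.default_rng(3)
# Hypothetical one-dimensional feature per predicted class, at design time and in operation.
training = {"clear_track": rng.normal(0.0, 1.0, 5000), "obstacle": rng.normal(3.0, 1.0, 5000)}
runtime = {"clear_track": rng.normal(0.1, 1.0, 800), "obstacle": rng.normal(4.2, 1.3, 800)}

for label, (dist, status) in monitor(training, runtime).items():
    print(f"{label}: ECDF distance = {dist:.3f} -> {status}")
```

A "REVIEW" flag does not by itself mean the system is unsafe; it signals that the operational data no longer matches the data used to build the safety argument, which is precisely the trigger a process-oriented approach needs to revisit that argument during operation.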
An important aspect worth highlighting is that all previously discussed references have included the safety assurance of the datasets employed in safety-critical AI design as part of the systems' lifecycle. This is because datasets can affect the training and the validation of AI-based systems, leading to, e.g., overfitting and underfitting issues, overly sensitive corner cases and susceptibility to adversarial attacks if they are inappropriate for the target application.
Most research on the safety assurance of datasets used in safety-critical AI-based systems aims to circumvent the aforementioned issues. For instance, Aoki et al. [128] have developed a method to assess labeled datasets used in supervised learning schemes which combines statistical analyses with FTA to assess potential faults in the datasets, and have exercised it with the recognition of handwritten characters. Boulineau [129] has discussed, among other topics related to safety-critical AI based on supervised learning, a taxonomy of failure modes applicable to labeled datasets and applied it to a train control system which automatically detects track signals. Gauerhof et al. [130] have followed an approach similar to that of Boulineau [129] and defined, among other characteristics of safety-critical AI, means to elicit and assess dataset-related safety requirements. Gauerhof et al. [130] have also explored the practical application of their method on an image dataset applied to obstacle detection by UGVs.
Klaes et al. [131] have discussed the importance of incorporating uncertainty quantification into the safety assurance of AI-based systems; such uncertainties are tied to the quality of the input data and to the underlying architecture and mathematical models of the AI, and are handled by means of a model called Uncertainty Wrapper. Finally, Subbaswamy et al. [132] have presented a framework for analyzing the robustness of ML models to changes in datasets and illustrated its application with a random forest model employed to predict sepsis in hospital patients with different health profiles.
Another theme of interest for systematizing the safety assurance of AI-based systems is conceiving safety assurance patterns for specific AI variants, categories, and/or techniques. A safety assurance pattern is a meta-model whose structure defines, for a specific class of systems, a set of safety goals, the contexts in which they are inserted, and the arguments that are needed, along with those contexts, to fulfill the safety goals [133]. Some research aiming to establish safety assurance patterns includes the efforts by Bragg and Habli [134], Gauerhof et al. [135], and Salay et al. [30].
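The goal/context/argument structure of such assurance patterns can be captured in a small data model. The sketch below is not any of the cited patterns; it is a hypothetical, simplified representation (loosely inspired by goal-structuring notation) showing how a top-level safety goal can be decomposed into sub-goals with contexts and supporting evidence, so that completeness checks can be automated.

```python
from dataclasses import dataclass, field

@dataclass
class Goal:
    """One node of a simplified assurance pattern: a claim, its context, and what supports it."""
    claim: str
    context: str = ""
    evidence: list = field(default_factory=list)   # references to analyses, tests, proofs, ...
    subgoals: list = field(default_factory=list)

    def undeveloped(self):
        """Return the claims that have neither evidence nor sub-goals yet."""
        if not self.evidence and not self.subgoals:
            return [self.claim]
        return [c for g in self.subgoals for c in g.undeveloped()]

# Hypothetical instantiation for an ML-based object detection function.
pattern = Goal(
    claim="ML object detection function meets all of its safety requirements",
    context="Operational design domain: urban driving, daylight",
    subgoals=[
        Goal("Training and validation datasets are adequate for the operational design domain",
             evidence=["dataset coverage analysis", "label quality audit"]),
        Goal("Residual misclassification risk is acceptable at system level",
             evidence=["system-level hazard analysis"]),
        Goal("Runtime monitoring detects out-of-distribution inputs"),   # still undeveloped
    ],
)

print("undeveloped claims:", pattern.undeveloped())
```

Keeping the pattern machine-readable makes it straightforward to report which safety goals still lack arguments or evidence, which is one of the practical benefits assurance patterns are expected to bring.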
Bragg and Habli [134] have developed the foundations for an assurance pattern that might be used to support the safety assurance of RL-based systems. The authors have defined that an RL system can be safe in its environment if it satisfies four lower-level goals: (i.) achieving a safe configuration, (ii.) performing a safe reconfiguration when needed, (iii.) transitioning to a fail-safe state when needed, and (iv.) reverting to a safe state when needed. Despite the relevance of assurance patterns as a tool to systematize and simplify the safety assurance of systems, the authors themselves recognized that the lower-level goals still need to be further expanded, notably with regard to three main themes: (i.) means to constrain RL for safety, (ii.) means to implement dynamic safety monitoring mechanisms, and (iii.) how to guarantee that online RL ensures its safety on its own [134].
Gauerhof et al. [135] have conceived a safety assurance case for a pedestrian detection function, which is a typical part of a UGV. Even though specific features of the system have been taken into account when building the safety case, such as the usage of convolutional neural networks (CNNs) for image processing, it serves as a relevant pattern not only for other pedestrian detection functions, but also for other object detection features that rely on CNNs. This stems from the fact that the upper goal of the model, defined as 'machine learning function meets all of its safety requirements', is broad enough for such a generalization.
Salay et al. [30] have proposed a safety case template – hence, an assurance pattern – with a systematic method to generate safety arguments among systems-level and unit-level components of computer vision AI-based systems. The authors have instantiated their template for an object detection task and included a semi-literal solution, which led not only to a qualitative assurance pattern, but also to bounded probabilities of reaching the applicable safety goals. Even though the research by Salay et al. [30] represents a relevant advance in establishing assurance patterns per se, the authors have not provided further details on the means that shall be considered to collect the needed evidence that supports the underlying safety arguments of their assurance pattern [30].
Finally, Cheng et al. [136] have developed an open-source toolbox, called nn-dependability-kit, to support the engineering of ANN-based systems used in autonomous driving systems. The foundations of nn-dependability-kit are based on an assurance pattern with four major safety goals linked to the AI-based system lifecycle: (i.) ensuring appropriate data collection prior to designing the ANN, (ii.) ensuring proper ANN performance during training and validation, (iii.) ensuring that no potentially unsafe behavior emerges during tests and design generalization, and (iv.) ensuring that no potentially unsafe behavior emerges during actual operation. In order to allow users to reach these goals, specific design and verification techniques (e.g., based on formal methods) are available as part of the toolbox, which has also been positively referred to in previously analyzed studies such as those by Klaes et al. [131] and Gauerhof et al. [135].
4) CONCLUDING REMARKS ON THE STATE OF THE ART OF AI-BASED SYSTEMS
In summary, research correlating safety to AI has significantly evolved since it first emerged in the mid-1980s. The technological evolution of computer systems – notably related to their processing and storage capabilities – has paved the way towards using AI in safety-critical systems and, hence, made the safety assurance of such AI-based systems a major research concern from 2016 onwards. An overview of the most relevant research towards assuring that AI-based safety-critical systems indeed meet their safety requirements shows that most research in the area spans five major methods towards that objective. These include (i.) black-box testing of AI, (ii.) designing non-AI-based safety envelopes that limit the AI response, (iii.) designing fail-safe AI, (iv.) combining explainable AI with white-box analyses, and (v.) establishing a process-oriented approach throughout systems' lifecycles considering specific technical aspects of AI.
Furthermore, the main AI variants that have been exercised follow the current trend of AI research itself, leaning towards machine learning and, more specifically, neural networks, deep learning, and reinforcement learning. Even though this allows the safety and AI areas to evolve together, it is deemed that focusing on rather opaque and hard-to-understand models such as deep neural networks is rather challenging for safety, notably because further advancements on simpler and easier-to-understand AI models are still needed.
Finally, with regard to the final results of research papers on the safety assurance of AI-based systems, two main categories have been identified.
The first of them comprises research whose aim is just to propose means to address the safety assurance of AI-based systems. In this case, the methods themselves are the main results presented by the authors, and unless formal mathematical proof is provided to support the methods' soundness, further research on case studies is usually indicated as the aim of future research. Hence, in these scenarios, one might assume that the research leans towards improving the safety of AI-based systems; nevertheless, there is still no strong conclusion on whether such alleged safety improvements could indeed be reached, due to the lack of formal or practical results. This is the case, for instance, of the research by Häring et al. [118], Koopman and Wagner [73], Koopman et al. [119], Mock et al. [120], Pedroza and Adedjouma [116], Salay and Czarnecki [122], Shafaei et al. [76], Tarrisse et al. [123], and Watanabe and Wolf [68].
The second variant includes research in which, along with safety assurance methods, case studies with simulated or real world-based tests are also presented to support the application of the proposed methods. As per the analyses performed throughout subsection ''V-B-3)'', the results presented by the authors are typically positive and supportive of their proposed methods, with further research being proposed for additional improvements. This is the case of Aoki et al. [128], Aslansefat et al. [125], Bergler [127], Boulineau [129], Chen et al. [100], Chen et al. [112], [113], Cheng et al. [136], Claviere et al. [85], Corso et al. [65], Douthwaite and Kelly [117], Gauerhof et al. [130], [135], Gerasimou et al. [108], Gillula and Tomlin [38], Groza et al. [61], Grushin et al. [105], Hussain et al. [70], Jaeger et al. [79], Jia et al. [114], Klaes et al. [131], Kozal and Ksieniewicz [71], Kurd et al. [37], Kurd and Kelly [32], [33], [36], Kuutti et al. [77], Lazarus et al. [58], Ma et al. [109], Mehmood et al. [103], Meltz and Guterman [66], [67], Nahata et al. [107], Peng et al. [101], Pereira and Thomas [121], Pei et al. [111], Peruffo et al. [83], Phan et al. [98], Salay et al. [30], Salay et al. [106], Sha et al. [84], Shukla et al. [99], Subbaswamy et al. [132], Wang et al. [56], Wang et al. [86], and Zhao et al. [81]. There are exceptions, though, in which the authors themselves consider that their objectives have not been fully reached, such as Bragg and Habli [134], Lin et al. [80], Machin et al. [75], Sun et al. [69], Tahir and Alexander [21], and Zhao et al. [82].
As a result, one can infer that overall improvements in ensuring safety could be reached in most studies of the second variant. This is either because the proposed methods themselves have been applied with successful results, or because, even if issues were identified, the authors have discussed relevant future work to circumvent the issues towards allegedly better safety assurance methods or approaches.
VI. NEXT STEPS TOWARDS SAFE AI-BASED SYSTEMS: GUIDELINES FOR FUTURE RESEARCH ON THE SAFETY ASSURANCE OF AI-BASED SYSTEMS
The objective of this section is to present an analysis of future work regarding the safety assurance of AI-based safety-critical systems and to establish guidelines with relevant research themes yet to be explored in further research towards filling the current gaps on the matter. Since these results stem from the answers to questions Q5 and Q6 (defined in subsection III-E) for all the 329 full-text reviewed C3 references, the guidelines herein presented have a twofold origin. Hence, they not only cover relevant future work identified by the authors of the reviewed research themselves (subsection VI-A), but also work based on the cross-fertilization among the reviewed research and the present research authors' experience with AI and safety-critical systems (subsection VI-B). Finally, the main conclusions of the presented guidelines are covered in subsection VI-C.
A. FIRST PART OF THE GUIDELINES: FUTURE RESEARCH SUGGESTED IN PUBLISHED RESEARCH – QUESTION Q5
Out of the 329 C3 papers, 58 (17.6%) lack a discussion on future work. Hence, the remaining 271 papers in which this topic has been covered served as reference to establish the first part of the guidelines for future work related to the safety assurance of AI-based systems.
An overview of the eleven major items that are part of the guidelines for future work, as per the research recommendations in the reviewed references, is presented in Figure 14. Further details on each of them, including specific themes and recommended practice stemming from the higher-level future work areas, are covered in the following subsections.
1) ADVANCING ON SYSTEMATIC MEANS AND METHODS TO ORIENT THE SAFETY ASSURANCE OF AI-BASED SYSTEMS
The first point of concern for future research is the need to deepen the current efforts towards establishing a process-oriented approach for the safety assurance of AI-based systems during their lifecycle. Such a path has been identified, for instance, by Pedroza and Adedjouma [116] and by Tarrisse et al. [123], who have reinforced that there are few initiatives on the subject [116], most of which are still work-in-progress and lacking details [123].
Investing in such future research is considered of paramount importance because, in order to assess whether AI effectively meets the desired safety goals of an application, safety practitioners need beforehand the guidance of means and methods on how to assess safety per se. This is a concerning aspect especially because current safety practitioners are not expected to have a deep knowledge of AI, and teaching and training multidisciplinary professionals with expertise in both safety and AI is deemed a hard and time-consuming task.
In this sense, establishing a systems-oriented, process-based means to deal with the safety assurance of AI-based systems throughout systems' lifecycles, and defining an extensive set of 'recommended practice' for each lifecycle step – e.g., focusing on particular techniques for specifying, designing, verifying and validating different AI/ML variants and techniques – is a relevant direction for future research.
This could be reached, for example, by merging the achieved advancements on the safety assurance approaches identified throughout section ''V-B-3)'' – notably, (i.) black-box testing of AI, (ii.) designing non-AI-based safety envelopes that limit the AI response, (iii.) designing fail-safe AI, and (iv.) combining explainable AI with white-box analyses – with the research on 'safety assurance processes for AI-based systems' (subsection ''V-B-3)-e)''). Furthermore, joint efforts along with other future research themes defined in these guidelines would also be of benefit for that.
Special attention shall also be given to the safety assurance of AI-based systems during their operation and maintenance phase. This is relevant especially for systems with online learning, as safety arguments built prior to their operation can become void as the systems learn with new data. Potential future work on the assurance of safety-critical systems with online learning requires not only assessing the rate at which safety arguments shall be reviewed, but also further advancements in performing automatic safety analyses of AI. Some initial seeds on this subject have been scattered by Cheng and Yan [137] and by Mehmood et al. [103]. Cheng and Yan have reinforced the need for additional research to improve the performance of automated safety analysis tools given the real-time requirements of safety-critical applications [137].
The justification behind advancing on systematic means and methods for the safety assurance of AI-based systems is that, by deepening the definition of activities and recommended practice to avoid and/or mitigate random and systematic faults throughout the lifecycle of safety-critical AI-based systems, safety practitioners would be better prepared to deal with the safety assurance of AI-based systems. With such detailed means and methods at hand, safety practitioners would be able to act in a way similar to the processes oriented by, e.g., IEC61508:2010 [9] and CENELEC EN50129:2018 [11] for non-AI-based systems, circumventing part of the searching and learning efforts practitioners would require to apply a technology-agnostic safety assurance standard for safety-critical AI-based systems, such as the current version of ANSI/UL4600.
2) DEFINING JUSTIFIABLE AI VARIANTS AND HYPERPARAMETERIZATION OF AI MODELS
For safety-critical systems without AI, a system could only be considered safe if, among other conditions, application-specific settings were properly verified and validated as correct and safe for regular and degraded operational modes [138]. With AI, there are two other degrees of freedom when conceiving a solution for a specific application. Firstly, one specific variant of AI shall be chosen among different variants within the same AI type/category (e.g., for discrete supervised learning, ANNs, DT/RFs and SVMs are some of the resources at hand). Secondly, even after a specific AI variant is selected, the input data employed while conceiving the models can also influence the internal architecture of the selected AI variant itself by means of the so-called hyperparameters and hyperfunctions. Hence, the choice of AI variants and their hyperparameters and hyperfunctions has a strong potential to influence the behavior of the AI, thus directly impacting safety. Wen et al. [25] have raised this specific point of concern in their research.
Hence, future research aiming to explore and improve the current methods of the AI métier in justifying the selection of specific AI types and their settings (i.e., hyperparameters and hyperfunctions) when designing safety-critical systems is relevant. This includes the following topics (item ''b)'' is illustrated by a sketch after the list):
a) Exploring and crafting strategies and techniques to select AI and ML types for safety-critical functions – for instance, by augmenting hazard and risk analyses methods with AI-specific features and failure modes;
b) Formally defining the domains of hyperparameters and hyperfunctions, as well as the strategies to tune them for the target application (e.g., cross-validation schemes, range, scale and step of hyperparameters' variation on each experiment);
c) Making redundant preliminary designs using multiple AI variants and comparing and contrasting them with regard to their performance metrics, distributional shifts, and adversarial attacks within the input datasets. Fault injection mechanisms, such as those implemented in the TensorFI and BinFI tools [112], [113], can be of benefit for this purpose.
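A small sketch of a documented hyperparameter search, in the spirit of item ''b)'', is given below. The code does not come from any of the cited works; it uses a hypothetical k-nearest-neighbours classifier and plain k-fold cross-validation in NumPy to show how the explored range, step and selected value of a hyperparameter can be recorded as evidence for later safety argumentation.

```python
import numpy as np

rng = np.random.default_rng(4)

def knn_predict(train_x, train_y, test_x, k):
    """Plain k-nearest-neighbours majority vote (stand-in for any parameterized ML variant)."""
    d = np.linalg.norm(test_x[:, None, :] - train_x[None, :, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]
    return (train_y[nearest].mean(axis=1) > 0.5).astype(int)

def cross_validate(x, y, k_values, folds=5):
    """Record accuracy per hyperparameter value; the full table is kept as design evidence."""
    chunks = np.array_split(rng.permutation(len(x)), folds)
    table = {}
    for k in k_values:
        scores = []
        for f in range(folds):
            test_idx = chunks[f]
            train_idx = np.concatenate([chunks[i] for i in range(folds) if i != f])
            pred = knn_predict(x[train_idx], y[train_idx], x[test_idx], k)
            scores.append(np.mean(pred == y[test_idx]))
        table[k] = float(np.mean(scores))
    return table

# Hypothetical binary classification data standing in for a safety-relevant function.
x = rng.normal(size=(300, 2))
y = (x[:, 0] + 0.5 * x[:, 1] > 0).astype(int)

evidence = cross_validate(x, y, k_values=range(1, 16, 2))   # explicit range and step of the search
best_k = max(evidence, key=evidence.get)
print("cross-validation table:", evidence)
print("selected hyperparameter k =", best_k)
```

The point of the sketch is not the particular model, but that the search space, the tuning strategy and the justification for the selected setting are all explicit and reviewable, which is what a safety argument about hyperparameterization ultimately needs.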
3) ASSESSING THE IMPACT OF INPUT DATASETS ON SAFETY
Several researchers have highlighted that the datasets employed throughout the design and the operation of safety-critical AI-based systems play a relevant role in the safety levels that these systems, as a whole, actually achieve (e.g., Burton et al. [115], Gauerhof et al. [130], Gupta et al. [139], Rajabli et al. [18], Salay and Czarnecki [122], Subbaswamy et al. [132], Watanabe and Wolf [68], Wen et al. [25], Zhang et al. [27]). As mentioned in subsection ''VI-A-2)'', the main reason for this is that datasets themselves influence the architecture of the AI instances crafted for a specific application. Hence, open topics on dataset-related features consist of important themes for further research within the context of AI-based safety-critical systems.
A relevant theme for future research is defining the attributes that a dataset shall possess in order to be deemed adequate for a safety-critical application. Even though high-level foundations for aspects to be observed and avoided are presented in current research (e.g., data bias, dataset shift, concept shift, out-of-domain data [27], quality of labels [140]), general-purpose recommended practice on building and analyzing datasets has not been extensively researched yet.
One point of concern is related to dealing with the representativeness of safety-critical scenarios, which tend to be scarce, in proportion, in datasets which also include records of regular operation of a system. Even though current research indicates that simulation-based approaches are an interesting means to obtain data for safety-critical scenarios without exposing real systems to potentially harmful situations and to inject dataset-related faults (e.g., [25], [87], [115], [139]), the processing of unbalanced databases for
safety-critical functions has not been extensively explored. Moreover, the impact of distributional shifts over safety-critical AI, which tends to be greater due to the inherent unbalancing of datasets, has not been explored in depth either.
FIGURE 14. Summary of themes for future research on the safety assurance of AI-based systems suggested on reference papers.
The scarcity of data related to safety-critical scenarios also compromises the proper exploration of safety-critical corner cases within datasets. Since corner cases can lead an AI module to present a potentially unsafe behavior under small input perturbations [110], studying them is a relevant concern for safety-critical applications. Future research on this theme shall take into account defining means to identify representative corner cases within datasets and methods to assess how AI elements take them into account. Even though general-purpose recommendations can be built to guide that, it is deemed that most efforts shall be application-specific, i.e., that identifying and assessing corner cases depends on the target application per se. Currently, these themes have been explored in more detail only for computer vision (e.g., [135], [139]).
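A simple way to screen a dataset for such corner cases is to measure how much a model's output changes under small input perturbations. The sketch below is generic and hypothetical (it is not the method of [110]): it perturbs each dataset record within a small noise ball and flags the records whose predicted class flips, which are then candidates for application-specific corner-case analysis.

```python
import numpy as np

rng = np.random.default_rng(5)

def predict(x):
    """Hypothetical frozen classifier under assessment (stand-in for a trained ML component)."""
    return (np.sin(3.0 * x[:, 0]) + x[:, 1] > 0.0).astype(int)

def corner_case_candidates(dataset, epsilon=0.05, trials=20):
    """Flag records whose predicted class flips under perturbations of magnitude <= epsilon."""
    base = predict(dataset)
    flagged = np.zeros(len(dataset), dtype=bool)
    for _ in range(trials):
        noise = rng.uniform(-epsilon, epsilon, size=dataset.shape)
        flagged |= predict(dataset + noise) != base
    return np.where(flagged)[0]

dataset = rng.uniform(-1.0, 1.0, size=(1000, 2))   # hypothetical recorded operational data
candidates = corner_case_candidates(dataset)
print(f"{len(candidates)} of {len(dataset)} records sit near a decision boundary "
      f"and deserve application-specific review")
```

Whether a flagged record is actually safety-relevant still depends on the application, which is consistent with the observation above that corner-case assessment cannot be made fully general-purpose.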
Another important avenue for future work is investigating means to quantify and deal with data uncertainty. This theme is relevant because the uncertainties of dataset contents not only reflect their own quality, but also influence the underlying AI architecture and mathematical models [131], thus impacting the degree of trust in the outputs produced by such AI. Despite some efforts on that subject by Mjeda and Botterweck [141] and especially the Uncertainty Wrapper by Klaes et al. [131], themes such as the systematic propagation of epistemic uncertainty throughout an AI-based model could still benefit from future research.
Moreover, evaluating how to deal with subpar data (e.g., images collected with dirty lenses), data losses, and corrupted data (due to, e.g., failure of sensors or adversarial attacks) is also a subset of studies to be considered. The ultimate target of this research area is to ensure that, even in the presence of uncertainties, the safety requirements of AI-based systems are met.
Transfer learning, which can be considered as a means to reuse data from one application as an initial reference for another [142], is another topic worthy of consideration for further research. So far, Corso and Kochenderfer [142], who have presented a prominent and comprehensive study on the matter, have clearly stated that they still needed further comprehension of transfer learning mechanisms to interpret some of their results and emphasized that insights on transfer learning algorithms are still needed.
Specifically regarding transfer learning, a first step in future research shall take into account simpler cases in which the datasets employed in the design of AI are sourced from simulated datasets and/or public datasets from the same application domain, but collected in a different environment. Only then could further insight on different application domains be drawn.
Finally, establishing a cost-effective process to ensure safe labeling of datasets for supervised learning solutions also deserves additional investigation in further research. On the one hand, manual human labeling is bound to a significantly high failure rate inherent to human beings [140], which might prevent a single labeling chain from being used in safety-critical applications. On the other hand, automated labeling typically relies on semi-supervised or unsupervised learning algorithms, whose safety assurance, in turn, depends on methods which are still under development, as discussed throughout this paper. Based on such insight, future work aiming to address these limitations is considered welcome to the community.
4) SYSTEMATIZING AI FAILURE MODES
Researchers such as Boulineau [129], Douthwaite and Kelly [117], McDermid et al. [7], and Zhang et al. [27] have highlighted the importance of establishing plausible failure modes for AI models, variants, and techniques.
The main goal of further research on such a theme would be to craft a well-established list of random and systematic AI failure modes, similar in structure and organization to preexisting lists of hardware failure modes, such as the one within CENELEC EN50129:2018 [11]. It is deemed that such a list would feature, for each AI type / variant / approach / technique, an as-exhaustive-as-possible relation of the failure modes that could affect it. For instance, the failure modes of an ANN would include potential causes that could change its architecture (e.g., loss of connection between neurons, improper neuron weight), as well as improper hyperparameters (e.g., change in the number of hidden neurons, change in the number of neurons per layer) and improper inputs (e.g., an inadequate input dataset).
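Such a failure-mode list can be represented in a machine-readable catalogue so that each entry can later drive analyses and fault-injection campaigns. The snippet below is only a hypothetical illustration of the structure, not a validated catalogue; the entries mirror the ANN examples mentioned above.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FailureMode:
    ai_variant: str          # which AI type / variant the mode applies to
    element: str             # affected element (architecture, hyperparameter, input, ...)
    mode: str                # how the element fails
    nature: str              # 'random' or 'systematic'
    candidate_mitigation: str

# Hypothetical seed entries for an ANN, mirroring the examples in the text.
catalogue = [
    FailureMode("ANN", "architecture", "loss of connection between neurons", "random",
                "redundant inference channels and plausibility checks"),
    FailureMode("ANN", "architecture", "improper (corrupted) neuron weight", "random",
                "weight checksums and fault-injection testing"),
    FailureMode("ANN", "hyperparameters", "improper number of hidden neurons or neurons per layer", "systematic",
                "documented hyperparameter search and independent review"),
    FailureMode("ANN", "inputs", "inadequate or unrepresentative input dataset", "systematic",
                "dataset adequacy analysis against the operational design domain"),
]

# Example use: list the systematic modes that a design review must address.
for fm in catalogue:
    if fm.nature == "systematic":
        print(f"[{fm.ai_variant}/{fm.element}] {fm.mode} -> {fm.candidate_mitigation}")
```

A curated catalogue of this shape could play, for AI elements, a role analogous to the hardware failure-mode tables of EN50129, feeding both hazard analyses and fault-injection campaigns.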
As already stated in subsection ''VI-A-2)'', having such a systematic list of AI failure modes would bring benefits to further research on the means to select the most adequate type of AI for an application. Furthermore, it would also allow defining the needed actions to control and mitigate potential safety issues stemming from random and systematic AI failure modes.
5) SPECIFYING AI REQUIREMENTS AND PROVIDING TRACEABILITY TO SAFETY ARGUMENTS
One of the main reasons to consider AI within the functions of an engineering system is that specifying these functions is non-trivial, in such a way that they can be specified neither formally nor exhaustively enough to generate a closed-form solution [143]. As a result, the specification of AI-based functions tends to be incomplete, ambiguous and at most partially formal. Such an issue has been discussed and exemplified by several researchers, among which Barzamini et al. [144], Boulineau [129], Dey and Lee [15], Koopman and Wagner [73], Kurd et al. [37], and Machin et al. [75].
Moreover, like every non-formal specification, these specifications are also subject to problems related to the semiotic perception triangle of 'what the system should do', 'what the system actually does' and 'what the system is perceived to do' [7]. These problems can ultimately be translated into systematic failures introduced, consciously or not, at design time.
As a result, further research on improving the precision, the exhaustiveness, and the formalism of the specification of AI-based safety-critical systems would be of paramount importance towards assuring that AI-based systems are safe. One possible path for that is to provide positive and negative specifications for functions and/or concepts, which clearly state what is within and outside the scope of the said entity.
Another theme worthy of additional research, and which can also benefit from advancements on the aforementioned requirements specification, refers to improving the traceability between system-level requirements and their AI component-specific counterparts. Assuming that better requirements (i.e., more precise, exhaustive and formal) are conceived for AI-related functions, a natural step forward is to apply the same specification techniques to refine the systems-level requirements into component-level requirements. With such a refinement, the traceability among requirements of different levels becomes less difficult, which has the potential of facilitating the propagation of evidence to build systems-level safety arguments in a bottom-up way (i.e., starting from the low-level safety-critical AI components and moving up the system chain up to its top goals). A starting point towards this is the research by Husen et al. [145], who have briefly explored, among other subjects, AI requirements traceability in a conceptual case study.
6) INVESTIGATING REDUNDANCY OF AI FOR SAFETY-CRITICAL FUNCTIONS
For safety-critical systems without AI, using redundant elements has been considered a feasible approach to meet safety requirements by using 'building blocks' which are not sufficiently safe on their own. Various schemes of redundancy, such as physical redundancy and information redundancy, are also recommended in several standards for safety-critical systems (e.g., [9], [11], [138]).
For AI-based systems, further investigation on whether redundancy is a useful practice in leveraging safety is still needed. This concern has been raised by researchers such as Groza et al. [61], Kurd et al. [37], and Shafaei et al. [76]. Some correlated topics which are worth analysis in future research involve (i.) assessing the impacts of redundant information in datasets on the robustness of AI instances with regard to safety functions, (ii.) evaluating different schemes of redundant AI elements, and (iii.) assessing potential common-mode failure modes that can affect redundant AI elements.
On (i.), one shall consider that redundant information is not necessarily a replicated representation of the very same data, but instead the presence of multiple different variables which might translate into similar conclusions for the phenomenon of interest. For instance, if one wishes to estimate a person's monthly income, social class and assets net value might be
sufficiently correlated to the point of leading to a converging conclusion.
As per (ii.), a relevant research line comprises conceiving different architectures of redundant AI and comparing and contrasting the results retrieved by their finished designs. The redundant structures might include, for example, several instances of the same AI variant/technique crafted by using different data partitions (e.g., akin to de facto standardized AI models, such as random forests), or even building ensembles with different AI variants (e.g., building an ANN, an SVM and a DT for the same application). Fault injection tools, such as TensorFI and BinFI [112], [113], can be useful in these activities. Furthermore, mechanisms to build consensus (e.g., majority voting, or choosing the result with the highest degree of confidence) shall also be further investigated; a minimal sketch of such a consensus mechanism is presented below. So far, the study by Groza et al. [61] is a starting point for exploring this area.
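The sketch below is hypothetical and not taken from Groza et al. [61]: three diverse placeholder classifiers vote on each input, and the decision logic is a conservative variant of majority voting in which any disagreement among the redundant AI elements triggers a safe fallback decision.

```python
import numpy as np

def classifier_a(x):   # placeholder for an ANN trained on one data partition
    return int(x[0] + x[1] > 0.0)

def classifier_b(x):   # placeholder for an SVM-like model trained on another partition
    return int(1.1 * x[0] + 0.9 * x[1] > 0.05)

def classifier_c(x):   # placeholder for a decision tree
    return int(x[0] > 0.0 if abs(x[0]) > abs(x[1]) else x[1] > 0.0)

SAFE_FALLBACK = 0      # conservative decision (e.g., 'obstacle present', 'brake')

def consensus(x, voters=(classifier_a, classifier_b, classifier_c)):
    """Accept the common decision only when the redundant elements agree; otherwise fall back."""
    votes = [v(x) for v in voters]
    counts = np.bincount(votes, minlength=2)
    if counts.max() == len(voters):          # unanimous: accept the shared decision
        return votes[0], "unanimous"
    return SAFE_FALLBACK, f"disagreement {votes}: safe fallback"

for x in (np.array([0.6, 0.4]), np.array([0.05, -0.02])):
    decision, reason = consensus(x)
    print(f"input {x}: decision={decision} ({reason})")
```

How much safety such schemes actually add depends on the diversity of the redundant elements and on common-mode failures, which is exactly item (iii.) discussed next.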
Finally, (iii.) is directly connected to (i.) and (ii.), since the extent to which common-mode failure modes manifest themselves allows assessing how, and to what extent, each of the diverse elements of a redundant architecture is affected by the occurrence of the failure mode. It is recommended to perform a case-by-case investigation, depending on the redundancy schemes considered in the assessed safety-critical AI-based systems.
7) ASSESSING THE RELATIONSHIP BETWEEN AI MODELS AND ENVIRONMENTAL CONSTRAINTS
Researchers such as Alexander and Kelly [146], Gauerhof et al. [135], Ruan et al. [147], and Tuncali et al. [96] have warned that safety-critical AI-based systems can behave in an unexpected and potentially unsafe way if their design does not take into account the actual conditions and constraints of the environment in which they will effectively operate. As a result, an aspect worthy of concern for future research is to develop means and methods to assess whether environment-related hypotheses are indeed sound for the revenue service of safety-critical systems with AI.
Ensuring that the models which represent the operational environment of AI-based components are sufficiently faithful to the real world, and that they are coherently applied throughout the system lifecycle, becomes a major concern especially because the actual exercise of safety-critical systems in their environments is hardly feasible – especially when dealing with safety-critical scenarios that involve near-misses, as mentioned in subsection ''VI-A-3)''. In these scenarios, simulators, data collected from supposedly similar environments, and transfer learning might be used to fill this gap. Hence, one shall have the means and tools to assess whether these additional elements do not introduce uncertainties and inaccuracies that might undermine the faithfulness and trustworthiness that are necessary for building sound evidence for the safety assurance of AI-based systems.
In reinforcement learning, for instance, these analyses are directly related to the modeling of important hyperparameters and hyperfunctions, such as rewards, the learning rate, and the exploratory test rate. All these features are responsible for balancing exploration and exploitation robustly enough to ensure safety in a dynamic environment.
8) INCORPORATING MORAL AND ETHICAL ASPECTS INTO SAFETY-CRITICAL AI
Burton et al. [148] and Lin and Liu [149] have dealt with a research theme of significant importance especially for fully autonomous safety-critical AI-based systems: the dilemma in which a system, after entering an irreversible state, has to make a decision among a set of alternatives that all lead to undesired, catastrophic outcomes. A hypothetical situation which illustrates such a dilemma could occur, for example, with a UGV that, at some instant of time, is faced with only two possible decisions: colliding at high speed with the infrastructure, causing the certain death of its single occupant, or colliding at a somewhat lower speed with another vehicle, with the certainty of injuries to the occupants of both vehicles but a lower probability of death for the involved personnel.
The contribution provided by Burton et al. [148] is relevant to deal with such a dilemma, as the authors have identified three gaps – namely, the semantic gap, the responsibility gap, and the liability gap. Awareness of these gaps allows designers to become conscious of potential dilemmas in safety-critical AI-based systems, to take the needed action to mitigate them whenever possible, and to establish clearer boundaries on when they cannot be avoided and on who is to be liable for potentially unsafe scenarios arising from them. The authors provide a twofold recommendation for future research: (i.) the safety assurance process shall be multidisciplinary, involving all potential stakeholders and including, in addition to engineering itself, expert knowledge on law, regulation and governance; and (ii.) providing means for dynamically monitoring and updating safety assurance in order to bridge the underlying gaps of the engineered system.
9) DEEPENING THE RELATIONSHIP BETWEEN SAFETY AND SECURITY ASSURANCE FOR AI-BASED SYSTEMS
Within the context of Industry 4.0, the usage of AI in safety-critical systems has emerged along with a significant reliance of engineered systems on fast wireless communication networks [150]. For instance, the safety-critical systems of smart cities, along with the UGVs running on them, can heavily benefit from the infrastructure of public 5G networks [150]. In this context, a real-time fog computing architecture, in which public processing units can be requested on demand for data processing, might be used for safety-critical purposes as well [151].
As a result of such a distributed architecture with public networks, a more intricate relationship between security and safety emerges for safety-critical systems that are part of Industry 4.0. Regardless of the usage of AI, assuring proper protection from security attacks is a necessary condition for achieving safety goals, as pointed out by Dey and Lee [15], and Seon and Kim [152].
Specifically when AI is within the loop of safety-critical systems, additional concerns for security shall be considered.
ing all potential stakeholders and including, in addition to engineering itself, expert knowledge for law, regulation, and governance; and (ii.) providing means for dynamically monitoring and updating safety assurance in order to bridge the underlying gaps of the engineered system.

9) DEEPENING THE RELATIONSHIP BETWEEN SAFETY AND SECURITY ASSURANCE FOR AI-BASED SYSTEMS
Within the context of Industry 4.0, the usage of AI in safety-critical systems has emerged along with a significant reliance of engineered systems on fast wireless communication networks [150]. For instance, safety-critical systems of smart cities, along with the UGVs running in them, can heavily benefit from the infrastructure of public 5G networks [150]. In this context, a real-time fog computing architecture, in which public processing units can be requested on demand for data processing, might be used for safety-critical purposes as well [151].

As a result of such a distributed architecture with public networks, a more intricate relationship between security and safety emerges for safety-critical systems that are part of Industry 4.0. Regardless of the usage of AI, assuring proper protection from security attacks is a necessary condition for achieving safety goals, as pointed out by Dey and Lee [15], and Seon and Kim [152].

Specifically when AI is within the loop of safety-critical systems, additional concerns for security shall be considered. For instance, specific ML variants and models, such as RL, ANNs, and online learning, are susceptible to corner cases in which small perturbations of inputs and/or AI hyperparameters/hyperfunctions can lead to significant changes in outputs [15]. This is significantly worrying because, if an adversarial attack exploiting these corner cases is performed through a security breach exploited by an intruder, an otherwise safe system can be led to a potentially unsafe state, in which users and the surrounding environment are subject to catastrophic outcomes.

Consequently, future research which aims to deepen the relationship between safety and security for AI-based systems, especially for scenarios in which complying with security requirements is needed for meeting safety requirements, represents important progress for conceiving safety-critical systems with AI. Anastasi et al. [153], for instance, have highlighted the importance of including security analysis techniques within the safety lifecycle of AI-based systems. The usage of graph-based machine learning to leverage security, indicated by Gupta et al. [154] for autonomous vehicles, represents another starting point for that purpose. Furthermore, frameworks for adversarial machine learning, such as the Adversarial Robustness Toolbox [155] and Jespipe [156], are also noteworthy starting points towards improving the resilience of safety-critical AI-based systems to adversarial attacks and their implications.
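To make the concern about small input perturbations concrete, the hypothetical sketch below crafts a minimal FGSM-style perturbation against a linear scikit-learn classifier by pushing each feature along the sign of the model's own weights; the synthetic dataset, the perturbation budget eps, and the choice of a linear model are illustrative assumptions only, and mature toolkits such as the Adversarial Robustness Toolbox [155] provide far more complete attack and defense implementations.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    # Illustrative data and model; a real assessment would use the system's actual inputs.
    X, y = make_classification(n_samples=500, n_features=10, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X, y)

    eps = 0.3           # assumed perturbation budget per feature
    w = model.coef_[0]  # gradient direction of the decision function w.r.t. the input

    x = X[0:1]
    # Push the sample against its currently predicted class: an FGSM-like step along sign(w).
    direction = -np.sign(w) if model.predict(x)[0] == 1 else np.sign(w)
    x_adv = x + eps * direction

    print("clean prediction    :", model.predict(x)[0], "(true label:", y[0], ")")
    print("perturbed prediction:", model.predict(x_adv)[0])
    print("max input change    :", np.max(np.abs(x_adv - x)))

Whether such a bounded perturbation actually flips the output depends on the model and budget, which is precisely the kind of corner-case analysis the cited frameworks are meant to systematize.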
10) EXPANDING THE KNOWLEDGE OF EXPLAINABLE AI (XAI) TO IMPROVE SAFETY
Based on the information reviewed in subsection ‘‘V-B-3)-d)’’, XAI is still an emerging area per se [52]. As a result, joint efforts in combining XAI with safety still remain a research area with room for significant contributions towards assuring that safety-critical systems with AI meet their safety targets.

Researchers such as Confalonieri et al. [52], Dey and Lee [15], Groza et al. [61], Jia et al. [114], Koopman and Wagner [73], Rajabli et al. [18], and Ward and Habli [157] have highlighted the need for additional research on XAI and suggested a trend that includes the following themes:
a) Conceiving guidelines for the structure of arguments generated by XAI in such a way that human practitioners can benefit from XAI (e.g., by improving the link between safety assurance properties and AI interpretability);
b) Assessing desired and necessary features of explainable-by-construction AI, so that sound evidence can be collected for its safety assurance;
c) Deepening the analysis on how to propagate uncertainties throughout the AI reasoning and report them for the relevant AI processing steps;
d) Further investigating means to generate approximate XAI models for hard-to-understand, black-box AI models (e.g., DNNs), as illustrated by the sketch after this list;
e) Developing systematic evaluation metrics for XAI methods to guide the selection of different types of XAI according to the needs of applications (linked to item ‘‘c)’’, for example);
f) Investigating means to generate safety assurance patterns for XAI.
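One common way to obtain such approximate explanations, used here purely as a hypothetical illustration of item ‘‘d)’’, is to distill a black-box model into an interpretable global surrogate and measure the surrogate's fidelity to the black box; the models, dataset, and tree depth below are illustrative choices rather than recommendations drawn from the reviewed references.

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text
    from sklearn.metrics import accuracy_score

    X, y = make_classification(n_samples=2000, n_features=8, random_state=1)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

    # "Black-box" model whose internal reasoning is hard to inspect directly.
    black_box = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=1)
    black_box.fit(X_train, y_train)

    # Global surrogate: a shallow tree trained to imitate the black box's predictions.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=1)
    surrogate.fit(X_train, black_box.predict(X_train))

    # Fidelity: how often the surrogate agrees with the black box on unseen data.
    fidelity = accuracy_score(black_box.predict(X_test), surrogate.predict(X_test))
    print(f"surrogate fidelity to black box: {fidelity:.2f}")
    print(export_text(surrogate))  # human-readable approximation of the decision logic

The fidelity score also gives a first, quantitative handle for the systematic evaluation metrics mentioned in item ‘‘e)’’.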
11) EXPANDING TOWARDS QUANTUM COMPUTING AND QUANTUM MACHINE LEARNING
Incorporating quantum computing into a safety assurance process for AI-based systems is another theme worthy of consideration in future research for two main reasons. Firstly, quantum computing is able to overcome the NP-hard complexity of the reachability problems into which the formal verification of AI and ML is typically translated [158]. Secondly, the emergence of quantum machine learning per se can also leverage the conception of new ML models and make their usage feasible in increasingly more intricate safety-critical applications, such as automated medical diagnosis [159] and physics and chemistry processes [160].

Despite the existence of commercial libraries to deal with quantum machine learning [161], its high-scale usage is foreseen as long-term research rather than a short-to-mid-term prospect. This is supported by the current challenges related to building the actual hardware to meet the intended purposes [160], as well as by the potentially prohibitive short-term costs of a quantum computing infrastructure for the typical consumer-grade applications trending in safety-critical AI research (e.g., transportation and medical support applications) [161].
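For readers unfamiliar with what such quantum machine learning libraries look like in practice, the fragment below is a minimal, hypothetical sketch of a two-qubit variational circuit written with PennyLane, the library used in [161]; the circuit layout, the simulator device, and the encoded values are arbitrary illustrative choices, not a recommended quantum ML design.

    import pennylane as qml
    from pennylane import numpy as np

    # Simulated two-qubit device; real hardware back-ends are one of the open challenges noted above.
    dev = qml.device("default.qubit", wires=2)

    @qml.qnode(dev)
    def circuit(weights, x):
        # Encode a classical input into qubit rotations.
        qml.RY(x[0], wires=0)
        qml.RY(x[1], wires=1)
        # Trainable variational layer.
        qml.RX(weights[0], wires=0)
        qml.RX(weights[1], wires=1)
        qml.CNOT(wires=[0, 1])
        return qml.expval(qml.PauliZ(1))

    weights = np.array([0.1, 0.2], requires_grad=True)
    x = np.array([0.5, -0.3], requires_grad=False)

    # The expectation value plays the role of a model output; gradients are available for training.
    print("output  :", circuit(weights, x))
    print("gradient:", qml.grad(circuit, argnum=0)(weights, x))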
B. SECOND PART OF THE GUIDELINES: SELF-EXPERIENCE AND CROSS-FERTILIZED FUTURE RESEARCH (QUESTION Q6)
The second part of the guidelines for future work related to the safety assurance of AI-based systems results from a critical review of each of the C3 publications in order to identify other potential future work. This critical review is not only based on the present SLR authors' experience with safety-critical systems, but also (and especially) on the cross-fertilization among the publications which were reviewed during this SLR.

By means of this critical review, four additional opportunities for future work not discussed within the reviewed publications have been identified. They are depicted in Figure 15 and discussed in the following subsections.

1) ASSURING THAT TOOLS AND THIRD-PARTY LIBRARIES FOR DEVELOPING AI-BASED SYSTEMS ARE SAFE
Developing AI typically requires the usage of supporting tools, such as simulators and database management systems, for tasks such as generating datasets, preprocessing datasets, and testing the behavior of safety-critical AI. In addition to these tools, third-party libraries which implement AI models, such as scikit-learn [162], might also be considered for reuse within the design of safety-critical AI.
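As a first, hypothetical illustration of what assurance evidence for such third-party components could look like, the sketch below pins the library version and runs a deterministic known-answer regression test against scikit-learn [162] before it is allowed into a safety-critical pipeline; the qualified version, the expected accuracy, and the tolerance are placeholders that would have to be derived from a real qualification campaign.

    import sklearn
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    # Assumed qualification record: version and expected result captured when the library was assessed.
    QUALIFIED_VERSION = "1.1.2"   # placeholder: the version actually assessed for this project
    EXPECTED_ACCURACY = 0.97      # placeholder known-answer value recorded during qualification
    TOLERANCE = 0.02

    def qualify_sklearn():
        if sklearn.__version__ != QUALIFIED_VERSION:
            raise RuntimeError(
                f"scikit-learn {sklearn.__version__} differs from qualified version {QUALIFIED_VERSION}"
            )
        # Deterministic known-answer test: fixed data, fixed seed, fixed model configuration.
        X, y = load_iris(return_X_y=True)
        model = LogisticRegression(max_iter=200, random_state=0).fit(X, y)
        accuracy = accuracy_score(y, model.predict(X))
        if abs(accuracy - EXPECTED_ACCURACY) > TOLERANCE:
            raise RuntimeError(f"known-answer test failed: accuracy {accuracy:.3f}")
        return accuracy

    if __name__ == "__main__":
        print("qualification check passed, accuracy =", qualify_sklearn())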
As a result, the very last item proposed as future work is assessing the costs of applying a process-oriented approach for the safety assurance of AI-based systems in real engineering projects, notably to compare and contrast the needed efforts and technical skills of professionals with the costs of current non-AI-based solutions. This could give professionals time and cost estimates for assuring that AI-based systems are safe and, hence, allow companies and professionals to evaluate potential differences not only in the costs of developing and/or buying safety-critical AI, but also in structuring their organizations to optimize them for developing and/or using safety-critical systems with AI.

C. CONCLUDING REMARKS ON THE GUIDELINES FOR FUTURE WORK ON THE SAFETY ASSURANCE OF AI-BASED SYSTEMS
The guidelines for future work presented in this section illustrate that, despite the increasing interaction of the safety and AI research communities in jointly exploring both areas, as evidenced in this SLR, there is still plenty of room for research on the safety assurance of AI-based systems. A set of eleven areas of research was derived from future work suggested in the research papers reviewed in this SLR, whereas four additional topics were conceived based on a critical analysis carried out by combining the cross-fertilization of the reviewed research papers with the expertise of the authors of this SLR in AI and safety-critical systems.

It is deemed that, among all the raised topics for future work, establishing a process-oriented method for the safety assurance of AI-based systems is a natural first step. The reasoning that backs this recommendation up is that, as with traditional safety-critical systems lacking AI, the training of safety practitioners with expertise for AI-based systems is facilitated once there is a systematic approach for dealing with the safety lifecycle, along with the needed safety activities and recommended practice for each of its steps.

An outlook for future research in this area is to use preexisting safety assurance processes for AI-based systems, established in, e.g., the AI technology-agnostic ANSI/UL4600:2020 standard and the research papers discussed in subsection ‘‘V-B-3)-e)’’, as templates and to improve them in a twofold way. Firstly, it is pertinent to make sure that gaps in the safety lifecycles of the ‘template’ processes are filled with steps and activities that cover the relevant AI-related safety-critical themes they lack. Secondly, it is worth compiling preexisting safety assurance techniques which have not been contextualized within the ‘template’ processes (e.g., techniques cited in subsections ‘‘V-B-3)-a)’’ to ‘‘V-B-3)-d)’’) and mapping them onto the safety lifecycle steps along with recommendations on their usage. This could be achieved by departing from the positive and negative outcomes of these techniques, as reported in subsections ‘‘V-B-3)-a)’’ to ‘‘V-B-3)-d)’’ of this paper and the research therein quoted.

For instance, if further research on refining the simplex architecture defined in subsection ‘‘V-B-3)-c)’’ is performed, it is worth considering that an approach similar to that of Mehmood et al. [103] might be unfeasible for, e.g., practical safety-critical embedded systems due to its stringent storage requirements. As a result, expanding correlated research shall start by learning from the limitations of existing solutions and ultimately trying either to optimize them or to follow a different approach if such an optimization is deemed unfeasible or unjustifiable.
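A minimal sketch of the simplex idea being refined here, with all thresholds, dynamics, and controller behaviors invented purely for illustration: an advanced (possibly learned) controller is used as long as a lightweight monitor predicts that the system stays inside a safety envelope, and a simple baseline controller takes over otherwise. Mehmood et al. [103] and the works in subsection ‘‘V-B-3)-c)’’ describe far more rigorous formulations.

    def advanced_controller(position, velocity):
        # Stand-in for a high-performance (e.g., learned) controller: aggressive braking profile.
        return -2.0 * position - 0.5 * velocity

    def baseline_controller(position, velocity):
        # Simple, conservatively tuned controller assumed to be verified by traditional means.
        return -0.5 * position - 1.0 * velocity

    def predicted_next_state(position, velocity, command, dt=0.1):
        # Illustrative one-step model used by the decision module to look ahead.
        return position + velocity * dt, velocity + command * dt

    SAFE_SPEED_LIMIT = 3.0  # assumed safety envelope on velocity

    def decision_module(position, velocity):
        """Simplex-style switching: keep the advanced command only if the look-ahead stays safe."""
        command = advanced_controller(position, velocity)
        _, next_velocity = predicted_next_state(position, velocity, command)
        if abs(next_velocity) > SAFE_SPEED_LIMIT:
            command = baseline_controller(position, velocity)  # fall back to the safe baseline
        return command

    # Tiny simulation loop showing the switch-over behavior.
    position, velocity = 5.0, 0.0
    for step in range(20):
        command = decision_module(position, velocity)
        position, velocity = predicted_next_state(position, velocity, command)
        print(f"t={step:2d}  pos={position:6.2f}  vel={velocity:6.2f}  u={command:6.2f}")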
Moreover, other avenues of future research involve exploring specific techniques to be used within steps of the aforementioned systems lifecycle. These specific technical topics include (i.) tightening the justification of AI hyperparameterization, (ii.) analyzing the adequacy of datasets, (iii.) systematizing AI failure modes, (iv.) improving the specification of AI, (v.) exploring AI redundancy, (vi.) tightening assumptions and improving models of the environment in which the AI is used, (vii.) dealing with moral and ethical aspects, (viii.) deepening the relationship between safety and security, (ix.) exploring explainable AI to improve safety, (x.) expanding on quantum computing and quantum machine learning, (xi.) assuring the safety of PLD-implemented AI, (xii.) assuring the safety of AI development tools, (xiii.) reusing practice to assure the safety of non-AI-based systems, and (xiv.) estimating costs for assuring AI-based systems.

Specifically for items (i.) to (x.), a starting point for incorporating them in future research should also take into consideration the positive and negative results of their prior experimentation in preexisting research, quoted throughout subsections ‘‘VI-A-2)’’ to ‘‘VI-A-11)’’. For instance, when dealing with transfer learning as part of item ‘(ii.) analyzing the adequacy of datasets’, future research can initially be constrained to the challenges posed by the state of the art, namely a better understanding of transfer learning mechanisms [142], and only then widened to other relevant related areas (e.g., investigating different transfer learning applications in safety-critical systems and developing assurance patterns for transfer learning).
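A small, hypothetical example of what an entry point into ‘(ii.) analyzing the adequacy of datasets’ could look like: each feature of the training dataset is compared against data collected from the (supposedly similar) target environment with a two-sample Kolmogorov-Smirnov test, so that significant shifts are flagged before the dataset is accepted as representative; the synthetic data, the shift injected into one feature, and the significance level are all illustrative assumptions.

    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(0)

    # Illustrative "training" data and "operational" data from the target environment.
    training_data = rng.normal(loc=0.0, scale=1.0, size=(1000, 4))
    operational_data = rng.normal(loc=0.0, scale=1.0, size=(1000, 4))
    operational_data[:, 2] += 0.8  # assumed shift in one feature, e.g., a sensor drift

    ALPHA = 0.01  # assumed significance level for flagging a distribution shift

    for feature in range(training_data.shape[1]):
        statistic, p_value = ks_2samp(training_data[:, feature], operational_data[:, feature])
        flagged = "SHIFT FLAGGED" if p_value < ALPHA else "ok"
        print(f"feature {feature}: KS statistic={statistic:.3f}, p-value={p_value:.4f} -> {flagged}")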
For items (xi.) to (xiv.), in turn, the lack of consistent previous research makes it harder to constrain the starting points of such themes to potentially promising paths. In these scenarios, the general guidance provided in subsections ‘‘VI-B-1)’’ to ‘‘VI-B-4)’’ is recommended for that purpose.

Finally, it is considered that future research could also benefit from a tighter integration between AI and safety researchers, since these topics relate to adapting typical practice of the AI field to the stricter requirements imposed by safety-critical applications. This integration is also important because it is envisioned that safety professionals shall excel in multidisciplinary knowledge of both safety and AI in order to coordinate and perform the tasks needed to ensure that these systems meet their safety requirements.

VII. CONCLUSION
The objective of this paper was to present an overview of the state of the art and guidelines for future research on the safety assurance of AI-based systems by means of an SLR comprising texts published until August 26th, 2022. As justified in section II, the main contribution of this research is not only to go beyond the scope of other SLRs by covering a broader range of applications and following a well-controlled and reproducible process on peer-reviewed publications only, but also to present an updated landscape of the safety assurance of systems with AI and to introduce guidelines for relevant future work. The latter has been reached through a critical analysis of the reviewed references stemming from both the cross-fertilization among the reviewed references and the SLR authors' own experience with safety assurance and AI.

The six-step SLR, carried out as per section III and leading to the results presented in section IV, covered a total of 5090 references, among which a subset of 329 publications that directly address the safety assurance of AI-based systems was considered in further steps. By means of these 329 publications, it has been concluded that research on the theme has sharply increased especially over the last years (2016 onwards) and that increasingly more research is also expected for the forthcoming years.

Based on the detailed review of the aforementioned 329-publication subset, it has been identified that the safety assurance of AI-based systems has been carried out following five main approaches to build safety arguments: (i.) performing exhaustive black-box testing of AI, (ii.) constraining the response of safety-critical AI by means of a non-AI-dependent safety envelope, (iii.) designing fail-safe AI, (iv.) combining explainable AI with white-box analyses, and (v.) establishing a continuous safety assurance process throughout systems' lifecycles. The overall conclusion is that current research on the safety assurance of AI-based systems indicates significant improvements towards allowing AI to be used and proven as safe; nevertheless, further advancements are still needed to fully reach this result. Details on each of the aforementioned safety assurance approaches, including their pros, cons, state of the art and current limitations, were explored in section V.

Guidelines for potential future research topics have also been presented in this research. These include not only recurrent themes indicated by other researchers, but also additional topics which stemmed from both the cross-fertilization of the reviewed references and the experience of the authors of this SLR with AI and safety. These guidelines are presented in two parts in section VI. Among all their items, two main conclusions are highlighted. The first of them is the need for a better integration of the AI and safety métiers, so that the resulting methods and approaches for the safety assurance of AI-based systems can be combined in an effective way. It is expected that this research helps pave this way.

The second highlight of the guidelines is the need for further research towards a systematic, process-oriented approach for the safety assurance of AI-based systems which includes recommended technical guidelines to deal with AI-specific aspects. These include, but are not limited to, improving the breadth and the depth of preexisting safety assurance processes, analyzing datasets, eliciting AI requirements, choosing and adjusting AI hyperparameters, defining the desired features of safety-critical explainable AI (e.g., uncertainty propagation), systematizing AI failure modes, and defining means to design, verify and validate safety-critical AI-based functions implemented on PLDs. The basic strategy recommended to deal with these themes is to consider the positive and negative results of preexisting research as starting points to guide research efforts, and then to broaden the scope of such research to other themes once the known gaps have been either filled or discarded. It is ultimately expected that, by following the herein defined guidelines, safety practitioners are provided with a safety assurance experience closer to that of non-AI-related standards (e.g., IEC61508-derived ones) than the technology-agnostic approach of the 2020 de jure ANSI/UL4600 standard for safety-critical AI-based systems.

Finally, it is worth mentioning that the results herein reported are the first step of the authors' research. The next envisioned step is to contribute to filling the gaps identified in the guidelines for future work by establishing a process-oriented safety assurance approach that includes a set of recommended techniques for each of its steps and a detailed workflow to guide its application. The ultimate objective of the research is to apply this safety assurance method to safety-critical AI-based systems and evaluate its convergence towards results that ensure that a system conceived with it is indeed safe.

SUPPLEMENTARY MATERIAL
Additional results of the SLR, including the formal definition of the SLR Search Language, further justification on its expressions, its instantiations for each search engine, and the full list of reviewed references along with their analyses (e.g., attribution of categories C1-C5, attribution of the Q-index, answers to questions Q1-Q6 and additional bibliometrics), are available within the technical report [165]. This report has been made public on Zenodo.org not only as an open science effort, but also as a means to increase the transparency of the research and support the findings reported in this paper.

REFERENCES
[1] Z. Allam and Z. A. Dhunny, ‘‘On big data, artificial intelligence and smart cities,’’ Cities, vol. 89, pp. 80–91, Jun. 2019, doi: 10.1016/j.cities.2019.01.032.
[2] M. Xu and H. Liu, ‘‘A flexible deep learning-aware framework for travel time prediction considering traffic event,’’ Eng. Appl. Artif. Intell., vol. 106, Nov. 2021, Art. no. 104491, doi: 10.1016/J.ENGAPPAI.2021.104491.
[3] C. Liu, W. B. Rouse, and D. Belanger, ‘‘Understanding risks and opportunities of autonomous vehicle technology adoption through systems dynamic scenario modeling—The American insurance industry,’’ IEEE Syst. J., vol. 14, no. 1, pp. 1365–1374, Mar. 2020, doi: 10.1109/JSYST.2019.2913647.
[4] EASA. (2020). Artificial Intelligence Roadmap—A Human-Centric Approach to AI in Aviation. Accessed: Nov. 23, 2022. [Online]. Available: https://www.easa.europa.eu/en/downloads/109668/en
[5] J. Doppelbauer. (2018). Command and Control 4.0. IRSE News. Accessed: Nov. 23, 2022. [Online]. Available: https://www.era.europa.eu/system/files/2022-10/Command%20and%20Control%204.0.pdf
[6] I. Allende, N. M. Guire, J. Perez-Cerrolaza, L. G. Monsalve, [26] J. Zhang and J. Li, ‘‘Testing and verification of neural-network-based
J. Petersohn, and R. Obermaisser, ‘‘Statistical test coverage for safety-critical control software: A systematic literature review,’’
linux-based next-generation autonomous safety-related systems,’’ IEEE Inf. Softw. Technol., vol. 123, Jul. 2020, Art. no. 106296, doi:
Access, vol. 9, pp. 106065–106078, 2021, doi: 10.1109/ACCESS.2021. 10.1016/j.infsof.2020.106296.
3100125. [27] X. Zhang, F. T. S. Chan, C. Yan, and I. Bose, ‘‘Towards risk-aware arti-
[7] J. McDermid, Y. Jia, and I. Habli, ‘‘Towards a framework for safety ficial intelligence and machine learning systems: An overview,’’ Decis.
assurance of autonomous systems,’’ in Proc. CEUR Workshop, vol. 2419, Support Syst., vol. 159, Aug. 2022, Art. no. 113800, doi: 10.1016/j.
2019, pp. 1–7. dss.2022.113800.
[8] I. Habli, T. Lawton, and Z. Porter, ‘‘Artificial intelligence in health care: [28] S. A. Asadollah, D. Sundmark, S. Eldh, H. Hansson, and W. Afzal,
Accountability and safety,’’ Bull. World Health Org., vol. 98, no. 4, ‘‘10 years of research on debugging concurrent and multicore software:
pp. 251–256, Apr. 2020, doi: 10.2471/BLT.19.237487. A systematic mapping study,’’ Softw. Quality J., vol. 25, no. 1, pp. 49–82,
[9] Functional Safety of Electrical/Electronic/Programmable Electronic Jan. 2016, doi: 10.1007/S11219-015-9301-7.
Safety-Related Systems (7 Parts), document IEC, ISO/IEC61508:2010, [29] K. Petersen, R. Feldt, S. Mujtaba, and M. Mattsson, ‘‘System-
Geneve, Switzerland, 2010. atic mapping studies in software engineering,’’ in Proc. EASE 12th
[10] Railway Applications—The Specification and Demonstration of Relia- Int. Conf. Eval. Assessment Softw. Eng., Jun. 2008, pp. 68–77, doi:
bility, Availability, Maintainability and Safety (RAMS)—Part 1: Generic 10.14236/ewic/EASE2008.8.
RAMS Process, document CENELEC, EN50126-1:2017, Brussels, Bel- [30] R. Salay, K. Czarnecki, H. Kuwajima, H. Yasuoka, V. Abdelzad,
gium, 2017. C. Huang, M. Kahn, V. D. Nguyen, and T. Nakae, ‘‘The missing link:
[11] Railway Applications Communication, Signalling and Processing Sys- Developing a safety case for perception components in automated
tems Safety-Related Electronic Systems for Signalling, document CEN- driving,’’ in Proc. SAE Tech. Paper Ser., Mar. 2022, pp. 1–13, doi:
ELEC, EN50129:2018, Brussels, Belgium, 2018. 10.4271/2022-01-0818.
[12] Design Assurance Guidance for Airborne Electronic Hardware, docu- [31] Elsevier B.V. (2022). What is the Difference Between ScienceDirect
ment RTCA, DO-254, Washington, DC, USA, 2000. and Scopus Data? Data as a Service Support Center. Accessed:
[13] S. Ballingall, M. Sarvi, and P. Sweatman, ‘‘Safety assurance concepts for Sep. 30, 2022. [Online]. Available: https://service.elsevier.com/
automated driving systems,’’ SAE Int. J. Adv. Current Practices Mobility, app/answers/detail/a_id/28240/supporthub/dataasaservice/p/17729/
vol. 2, no. 3, pp. 1528–1537, 2020, doi: 10.4271/2020-01-0727. [32] Z. Kurd and T. P. Kelly, ‘‘Using fuzzy self-organising maps for safety
[14] W. M. D. Chia, S. L. Keoh, C. Goh, and C. Johnson, ‘‘Risk assess- critical systems,’’ in Computer Safety, Reliability, and Security (Lecture
ment methodologies for autonomous driving: A survey,’’ IEEE Trans. Notes in Computer Science), vol. 3219. Berlin, Germany: Springer, 2004,
Intell. Transp. Syst., vol. 23, no. 10, pp. 16923–16939, Oct. 2022, doi: pp. 17–30.
10.1109/TITS.2022.3163747. [33] Z. Kurd and T. P. Kelly, ‘‘Using safety critical artificial neural networks
[15] S. Dey and S.-W. Lee, ‘‘Multilayered review of safety approaches in gas turbine aero-engine control,’’ in Proc. 24th Int. Conf. Comput.
for machine learning-based systems in the days of AI,’’ J. Syst. Saf., Rel., Secur. (SAFECOMP), vol. 3688, 2005, pp. 136–150, 2005, doi:
Softw., vol. 176, Jun. 2021, Art. no. 110941, doi: 10.1016/j.jss.2021. 10.1007/11563228_11.
110941. [34] B. Cukic, E. Fuller, M. Mladenovski, and S. Yerramalla, ‘‘Run-time
[16] S. Kabir, ‘‘An overview of fault tree analysis and its application in model assessment of neural network control systems,’’ in Methods and Proce-
based dependability analysis,’’ Expert Syst. Appl., vol. 77, pp. 114–135, dures for the Verification and Validation of Artificial Neural Networks.
Jul. 2017, doi: 10.1016/j.eswa.2017.01.058. New York, NY, USA: Springer, 2006, pp. 257–269, doi: 10.1007/0-387-
[17] A. M. Nascimento, ‘‘A systematic literature review about the impact 29485-6_10.
of artificial intelligence on autonomous vehicle safety,’’ IEEE Trans. [35] L. Pullum and B. J. Taylor, ‘‘Risk and hazard analysis for neural network
Intell. Transp. Syst., vol. 21, no. 12, pp. 4928–4946, Dec. 2020, doi: systems,’’ in Methods and Procedures for the Verification and Validation
10.1109/tits.2019.2949915. of Artificial Neural Networks. New York, NY, USA: Springer, 2006,
[18] N. Rajabli, F. Flammini, R. Nardone, and V. Vittorini, ‘‘Software pp. 33–49, doi: 10.1007/0-387-29485-6_3.
verification and validation of safe autonomous cars: A systematic [36] Z. Kurd and T. P. Kelly, ‘‘Using fuzzy self-organising maps for safety
literature review,’’ IEEE Access, vol. 9, pp. 4797–4819, 2021, doi: critical systems,’’ Rel. Eng. Syst. Saf., vol. 92, no. 11, pp. 1563–1583,
10.1109/ACCESS.2020.3048047. Nov. 2007, doi: 10.1016/j.ress.2006.10.005.
[19] A. Rawson and M. Brito, ‘‘A survey of the opportunities and [37] Z. Kurd, T. Kelly, and J. Austin, ‘‘Developing artificial neural net-
challenges of supervised machine learning in maritime risk anal- works for safety critical systems,’’ Neural Comput. Appl., vol. 16, no. 1,
ysis,’’ Transp. Rev., vol. 43, no. 1, pp. 108–130, Jan. 2023, doi: pp. 11–19, Oct. 2006, doi: 10.1007/s00521-006-0039-9.
10.1080/01441647.2022.2036864. [38] J. H. Gillula and C. J. Tomlin, ‘‘Guaranteed safe online learning
[20] G. Siedel, S. Voß, and S. Vock, ‘‘An overview of the research landscape via reachability: Tracking a ground target using a quadrotor,’’ in
in the field of safe machine learning,’’ in Proc. Saf. Eng., Risk, Rel. Proc. IEEE Int. Conf. Robot. Autom., May 2012, pp. 2723–2730, doi:
Anal., Res. Posters, vol. 13, Nov. 2021, Art. no. V013T14A045, doi: 10.1109/ICRA.2012.6225136.
10.1115/IMECE2021-69390. [39] G. Mancini, ‘‘Collection, processing and use of data,’’ Nucl. Eng.
[21] Z. Tahir and R. Alexander, ‘‘Coverage based testing for V&V and Des., vol. 93, nos. 2–3, pp. 181–186, 1986, doi: 10.1016/0029-5493(86)
safety assurance of self-driving autonomous vehicles: A system- 90217-7.
atic literature review,’’ in Proc. IEEE Int. Conf. Artif. Intell. Test. [40] T. Washio, M. Kitamura, K. Kotajima, and K. Sugiyama, ‘‘Automated
(AITest), Aug. 2020, pp. 23–30, doi: 10.1109/AITEST49225.2020. generation of nuclear power plant safety information,’’ in Proc. Power
00011. Plant Dyn., Control Test. Symp., vol. 1, 1986, pp. 39.01–39.17.
[22] F. Tambon, G. Laberge, L. An, A. Nikanjam, P. S. N. Mindom, [41] F. L. Cho, ‘‘Expert system application in equipment risk assessment
Y. Pequignot, F. Khomh, G. Antoniol, E. Merlo, and F. Laviolette, ‘‘How for nuclear power plants,’’ in Proc. Comput.-Aided Eng. Appl. Pressure
to certify machine learning based safety-critical systems? A system- Vessels Piping Conf., vol. 126, 1987, pp. 27–32.
atic literature review,’’ Automated Softw. Eng., vol. 29, no. 2, pp. 1–74, [42] R. C. Erdmann and B. K.-H. Sun, ‘‘Expert system approach for safety
Apr. 2022, doi: 10.1007/S10515-022-00337-X. diagnosis,’’ Nucl. Technol., vol. 82, no. 2, pp. 162–172, Aug. 1988, doi:
[23] Y. Wang and M. P. Chapman, ‘‘Risk-averse autonomous systems: A 10.13182/NT88-A34105.
brief history and recent developments from the perspective of opti- [43] R. E. Uhrig, ‘‘Use of probabilistic risk assessment (PRA) in expert
mal control,’’ Artif. Intell., vol. 311, Oct. 2022, Art. no. 103743, doi: systems to advise nuclear plant operators and managers,’’ Proc. SPIE,
10.1016/j.artint.2022.103743. vol. 937, pp. 210–215, Mar. 1988, doi: 10.1117/12.946977.
[24] Y. Wang and S. H. Chung, ‘‘Artificial intelligence in safety-critical sys- [44] B. Frisch, J. Lecinena, C. Preyssl, A. Saleem, F. Stolle, and E. Tosini,
tems: A systematic review,’’ Ind. Manage. Data Syst., vol. 122, no. 2, ‘‘ERES—An expert system for ESA risk assessment and management,’’
pp. 442–470, Feb. 2022, doi: 10.1108/IMDS-07-2021-0419. Sci. Technol. Ser., vol. 93, pp. 161–171, Jan. 1997.
[25] H. Wen, F. Khan, M. T. Amin, and S. Z. Halim, ‘‘Myths and miscon- [45] R. Vaidhyanathan and V. Venkatasubramanian, ‘‘Experience with an
ceptions of data-driven methods: Applications to process safety analy- expert system for automated HAZOP analysis,’’ Comput. Chem. Eng.,
sis,’’ Comput. Chem. Eng., vol. 158, Feb. 2022, Art. no. 107639, doi: vol. 20, pp. S1589–S1594, Jan. 1996, doi: 10.1016/0098-1354(96)
10.1016/j.compchemeng.2021.107639. 00270-0.
[46] J. Fox, ‘‘Expert systems for safety-critical applications: Theory, technol- [66] D. Meltz and H. Guterman, ‘‘RobIL—Israeli program for research and
ogy and applications,’’ in Proc. IEE Colloq. Knowl.-Based Syst. Saf. Crit. development of autonomous UGV: Performance evaluation methodol-
Appl., no. 109, May 1994, pp. 5-1–5-5. ogy,’’ in Proc. IEEE Int. Conf. Sci. Electr. Eng. (ICSEE), Nov. 2016,
[47] M. Kitamura, ‘‘Knowledge engineering approach to risk management pp. 1–5, doi: 10.1109/ICSEE.2016.7806157.
and decision-making problems,’’ Rel. Eng. Syst. Saf., vol. 38, nos. 1–2, [67] D. Meltz and H. Guterman, ‘‘Functional safety verification for
pp. 67–70, Jan. 1992. autonomous UGVs—Methodology presentation and implementation
[48] G. Xie, D. Xue, and S. Xi, ‘‘TREE-EXPERT: A tree-based expert sys- on a full-scale system,’’ IEEE Trans. Intell. Vehicles, vol. 4, no. 3,
tem for fault tree construction,’’ Reliab. Eng. Syst. Saf., vol. 40, no. 3, pp. 472–485, Sep. 2019, doi: 10.1109/TIV.2019.2919460.
pp. 295–309, 1993, doi: 10.1016/0951-8320(93)90066-8. [68] T. Watanabe and D. Wolf, ‘‘Verisimilar percept sequences tests for
[49] E. A. Averbukh, ‘‘Neural network models and statistical tests as flexible autonomous driving intelligent agent assessment,’’ in Proc. Latin
base for intelligent fault diagnosis,’’ Annu. Rev. Autom. Program., vol. 17, Amer. Robotic Symp., Brazilian Symp. Robot. (SBR) Workshop
no. 10, pp. 259–266, 1992, doi: 10.1016/S0066-4138(09)91043-6. Robot. Educ. (WRE), Nov. 2018, pp. 188–193, doi: 10.1109/LARS/
[50] X. Z. Wang, B. H. Chen, S. H. Yang, and C. McGreavy, ‘‘Neural nets, SBR/WRE.2018.00048.
fuzzy sets and digraphs in safety and operability studies of refinery [69] J. Sun, H. Zhou, H. Xi, H. Zhang, and Y. Tian, ‘‘Adaptive design of
reaction processes,’’ Chem. Eng. Sci., vol. 51, no. 10, pp. 2169–2178, experiments for safety evaluation of automated vehicles,’’ IEEE Trans.
1996, doi: 10.1016/0009-2509(96)00074-7. Intell. Transp. Syst., vol. 23, no. 9, pp. 14497–14508, Sep. 2022, doi:
[51] C.-H. Wei, ‘‘Developing freeway lane-changing support systems using 10.1109/TITS.2021.3130040.
artificial neural networks,’’ J. Adv. Transp., vol. 35, no. 1, pp. 47–65, [70] M. Hussain, N. Ali, and J.-E. Hong, ‘‘DeepGuard: A framework for
Sep. 2001, doi: 10.1002/atr.5670350105. safeguarding autonomous driving systems from inconsistent behaviour,’’
[52] R. Confalonieri, L. Coba, B. Wagner, and T. R. Besold, ‘‘A historical Automated Softw. Eng., vol. 29, no. 1, pp. 1–32, May 2022, doi:
perspective of explainable artificial intelligence,’’ WIREs Data Min- 10.1007/s10515-021-00310-0.
ing Knowl. Discovery, vol. 11, no. 1, Jan. 2021, Art. no. e1391, doi: [71] J. Kozal and P. Ksieniewicz, ‘‘Imbalance reduction techniques applied
10.1002/WIDM.1391. to ECG classification problem,’’ in Proc. Int. Conf. Intell. Data Eng.
[53] A. Miller, ‘‘The intrinsically linked future for human and artificial intel- Automated Learn., vol. 11872, 2019, pp. 323–331, doi: 10.1007/978-3-
ligence interaction,’’ J. Big Data, vol. 6, no. 1, pp. 1–9, May 2019, doi: 030-33617-2_33.
10.1186/S40537-019-0202-7. [72] C. Harper and P. Caleb-Solly, ‘‘Towards an ontological framework for
[54] D. Liu, H. Kong, X. Luo, W. Liu, and R. Subramaniam, ‘‘Bringing AI environmental survey hazard analysis of autonomous systems,’’ in Proc.
to edge: From deep learning’s perspective,’’ Neurocomputing, vol. 485, CEUR Workshop, vol. 2808, 2021, pp. 1–7.
pp. 297–320, May 2022, doi: 10.1016/J.NEUCOM.2021.04.141. [73] P. Koopman and M. Wagner, ‘‘Toward a framework for highly automated
[55] G. Giray, ‘‘A software engineering perspective on engineering machine vehicle safety validation,’’ in Proc. SAE Tech. Paper Ser., Apr. 2018,
learning systems: State of the art and challenges,’’ J. Syst. Softw., vol. 180, pp. 1–13, doi: 10.4271/2018-01-1071.
Oct. 2021, Art. no. 111031, doi: 10.1016/J.JSS.2021.111031. [74] H. Wu, D. Lv, T. Cui, G. Hou, M. Watanabe, and W. Kong, ‘‘SDLV:
[56] Q. Wang, G. Kou, L. Chen, Y. He, W. Cao, and G. Pu, ‘‘Runtime Verification of steering angle safety for self-driving cars,’’ Formal Aspects
assurance of learning-based lane changing control for autonomous driv- Comput., vol. 33, no. 3, pp. 325–341, Jun. 2021, doi: 10.1007/s00165-
ing vehicles,’’ J. Circuits, Syst. Comput., vol. 31, no. 14, Sep. 2022, 021-00539-2.
Art. no. 2250249, doi: 10.1142/S0218126622502498. [75] M. Machin, J. Guiochet, H. Waeselynck, J.-P. Blanquart, M. Roy, and
[57] F. Flammini, S. Marrone, R. Nardone, M. Caporuscio, and M. D’Angelo, L. Masson, ‘‘SMOF: A safety monitoring framework for autonomous sys-
‘‘Safety integrity through self-adaptation for multi-sensor event detection: tems,’’ IEEE Trans. Syst., Man, Cybern. Syst., vol. 48, no. 5, pp. 702–715,
Methodology and case-study,’’ Future Gener. Comput. Syst., vol. 112, May 2018, doi: 10.1109/TSMC.2016.2633291.
pp. 965–981, Nov. 2020, doi: 10.1016/j.future.2020.06.036. [76] S. Shafaei, S. Kugele, M. H. Osman, and A. Knoll, ‘‘Uncertainty in
[58] C. Lazarus, J. G. Lopez, and M. J. Kochenderfer, ‘‘Runtime safety machine learning: A safety perspective on autonomous driving,’’ in Proc.
assurance using reinforcement learning,’’ in Proc. AIAA/IEEE Int. Conf. Comput. Saf., Rel., Secur., vol. 11094, 2018, pp. 458–464, doi:
39th Digit. Avionics Syst. Conf. (DASC), Oct. 2020, pp. 1–9, doi: 10.1007/978-3-319-99229-7_39.
10.1109/DASC50938.2020.9256446. [77] S. Kuutti, R. Bowden, H. Joshi, R. de Temple, and S. Fallah, ‘‘Safe deep
[59] Y. Chandak, S. M. Jordan, G. Theocharous, M. White, and P. S. Thomas, neural network-driven autonomous vehicles using software safety cages,’’
‘‘Towards safe policy improvement for non-stationary MDPs,’’ in in Proc. 20th Int. Conf. Intell. Data Eng. Automated Learn., (IDEAL),
Proc. 34th Conf. Neural Inf. Process. Syst. (NeurIPS), Dec. 2020, vol. 11872, 2019, pp. 150–160, doi: 10.1007/978-3-030-33617-2_17.
pp. 9156–9168. Accessed: Nov. 23, 2022. [Online]. Available: https://dl. [78] S. Schirmer, C. Torens, F. Nikodem, and J. Dauer, ‘‘Considerations of
acm.org/doi/10.5555/3495724.3496492 artificial intelligence safety engineering for unmanned aircraft,’’ in Proc.
[60] J. Hernández-Orallo, F. Martínez-Plumed, S. Avin, J. Whittlestone, and Workshops, ASSURE, DECSoS, SASSUR, STRIVE, WAISE Co-Located
S. Ó. Héigeartaigh, ‘‘AI paradigms and AI safety: Mapping artefacts and With 37th Int. Conf. Comput. Saf., Rel. Secur., (SAFECOMP), vol. 11094.
techniques to safety issues,’’ in Proc. 24th Eur. Conf. Artif. Intell., (ECAI) Berlin, Germany: Springer, 2018, pp. 465–472, doi: 10.1007/978-3-319-
Including 10th Conf. Prestigious Appl. Artif. Intell. (PAIS), vol. 325, 2020, 99229-7_40.
pp. 2521–2528, doi: 10.3233/FAIA200386. [79] G. Jager, J. Schleiss, S. Usanavasin, S. Stober, and S. Zug, ‘‘Analyzing
[61] A. Groza, L. Toderean, G. A. Muntean, and S. D. Nicoara, ‘‘Agents that regions of safety for handling shared data in cooperative systems,’’ in
argue and explain classifications of retinal conditions,’’ J. Med. Biol. Eng., Proc. 25th IEEE Int. Conf. Emerg. Technol. Factory Autom. (ETFA),
vol. 41, pp. 730–741, Sep. 2021, doi: 10.1007/s40846-021-00647-7. Sep. 2020, pp. 628–635, doi: 10.1109/ETFA46521.2020.9211932.
[62] I. Ruchkin, M. Cleaveland, R. Ivanov, P. Lu, T. Carpenter, O. Sokolsky, [80] X. Lin, H. Zhu, R. Samanta, and S. Jagannathan, ‘‘Art: Abstraction
and I. Lee, ‘‘Confidence composition for monitors of verifica- refinement-guided training for provably correct neural networks,’’ in
tion assumptions,’’ in Proc. ACM/IEEE 13th Int. Conf. Cyber-Phys. Proc. 20th Conf. Formal Methods Comput.-Aided Design, (FMCAD),
Syst. (ICCPS), May 2022, pp. 1–12, doi: 10.1109/ICCPS54341.2022. Jan. 2020, pp. 148–157, doi: 10.34727/2020/isbn.978-3-85448-042-
00007. 6_22.
[63] P. Musau, N. Hamilton, D. M. Lopez, P. Robinette, and T. T. Johnson, [81] H. Zhao, X. Zeng, T. Chen, and Z. Liu, ‘‘Synthesizing barrier certificates
‘‘On using real-time reachability for the safety assurance of machine using neural networks,’’ in Proc. 23rd Int. Conf. Hybrid Syst., Comput.
learning controllers,’’ in Proc. IEEE Int. Conf. Assured Auton- Control, Apr. 2020, pp. 1–11, doi: 10.1145/3365365.3382222.
omy (ICAA), Mar. 2022, pp. 1–10, doi: 10.1109/ICAA52185.2022. [82] Q. Zhao, X. Chen, Y. Zhang, M. Sha, Z. Yang, W. Lin, E. Tang,
00010. Q. Chen, and X. Li, ‘‘Synthesizing ReLU neural networks with two
[64] Y. Bai, Z. Huang, H. Lam, and D. Zhao, ‘‘Rare-event simulation for neural hidden layers as barrier certificates for hybrid systems,’’ in Proc. 24th
network and random forest predictors,’’ ACM Trans. Model. Comput. Int. Conf. Hybrid Syst., Comput. Control, May 2021, pp. 1–11, doi:
Simul., vol. 32, no. 3, pp. 1–33, Jul. 2022, doi: 10.1145/3519385. 10.1145/3447928.3456638.
[65] A. Corso, R. Moss, M. Koren, R. Lee, and M. Kochenderfer, ‘‘A sur- [83] A. Peruffo, D. Ahmed, and A. Abate, ‘‘Automated and formal synthesis
vey of algorithms for black-box safety validation of cyber-physical of neural barrier certificates for dynamical models,’’ in Proc. Int. Conf.
systems,’’ J. Artif. Intell. Res., vol. 72, pp. 377–428, Oct. 2021, doi: Tools Algorithms Construct. Anal. Syst., Mar. 2021, pp. 370–388, doi:
10.1613/JAIR.1.12716. 10.1007/978-3-030-72016-2_20.
[84] M. Sha, X. Chen, Y. Ji, Q. Zhao, Z. Yang, W. Lin, E. Tang, [101] Y. Peng, G. Tan, H. Si, and J. Li, ‘‘DRL-GAT-SA: Deep reinforcement
Q. Chen, and X. Li, ‘‘Synthesizing barrier certificates of neural net- learning for autonomous driving planning based on graph attention net-
work controlled continuous systems via approximations,’’ in Proc. 58th works and simplex architecture,’’ J. Syst. Archit., vol. 126, May 2022,
ACM/IEEE Design Autom. Conf. (DAC), Dec. 2021, pp. 631–636, doi: Art. no. 102505, doi: 10.1016/j.sysarc.2022.102505.
10.1109/DAC18074.2021.9586327. [102] J. Thumm and M. Althoff, ‘‘Provably safe deep reinforcement learn-
[85] A. Claviere, E. Asselin, C. Garion, and C. Pagetti, ‘‘Safety verification of ing for robotic manipulation in human environments,’’ in Proc.
neural network controlled systems,’’ in Proc. 51st Annu. IEEE/IFIP Int. IEEE Int. Conf. Robot. Autom., May 2022, pp. 6344–6350, doi:
Conf. Dependable Syst. Netw. Workshops (DSN-W), Jun. 2021, pp. 47–54, 10.1109/ICRA46639.2022.9811698.
doi: 10.1109/DSN-W52860.2021.00019. [103] U. Mehmood, S. Sheikhi, S. Bak, S. A. Smolka, and S. D. Stoller, ‘‘The
black-box simplex architecture for runtime assurance of autonomous
[86] Z. Wang, C. Huang, and Q. Zhu, ‘‘Efficient global robustness certi-
CPS,’’ in Proc. 14th Int. Symp. NASA Formal Methods, (NFM),
fication of neural networks via interleaving twin-network encoding,’’
vol. 13260. Cham, Switzerland: Springer, 2022, pp. 231–250, doi:
in Proc. Design, Autom. Test Eur. Conf. Exhib. (DATE), Mar. 2022,
10.1007/978-3-031-06773-0_12.
pp. 1087–1092. [104] M. Fazlyab, M. Morari, and G. J. Pappas, ‘‘Probabilistic verification and
[87] Q. Zhu, W. Li, H. Kim, Y. Xiang, K. Wardega, Z. Wang, Y. Wang, reachability analysis of neural networks via semidefinite programming,’’
H. Liang, C. Huang, J. Fan, and H. Choi, ‘‘Know the unknowns: in Proc. IEEE Conf. Decis. Control, Dec. 2019, pp. 2726–2731, doi:
Addressing disturbances and uncertainties in autonomous systems,’’ in 10.1109/CDC40024.2019.9029310.
Proc. 39th Int. Conf. Comput.-Aided Design, Nov. 2020, pp. 1–9, doi: [105] A. Grushin, J. Nanda, A. Tyagi, D. Miller, J. Gluck, N. C. Oza, and
10.1145/3400302.3415768. A. Maheshwari, ‘‘Decoding the black box: Extracting explainable deci-
[88] R. Ivanov, J. Weimer, R. Alur, G. J. Pappas, and I. Lee, ‘‘Verisig: Verifying sion boundary approximations from machine learning models for real
safety properties of hybrid systems with neural network controllers,’’ in time safety assurance of the national airspace,’’ in Proc. AIAA Scitech
Proc. 22nd ACM Int. Conf. Hybrid Syst., Comput. Control, Apr. 2019, Forum, Jan. 2019, p. 136, doi: 10.2514/6.2019-0136.
pp. 169–178, doi: 10.1145/3302504.3311806. [106] R. Salay, M. Angus, and K. Czarnecki, ‘‘A safety analysis method
[89] R. Ivanov, T. Carpenter, J. Weimer, R. Alur, G. Pappas, and I. Lee, for perceptual components in automated driving,’’ in Proc. IEEE
‘‘Verisig 2.0: Verification of neural network controllers using Taylor 30th Int. Symp. Softw. Rel. Eng. (ISSRE), Oct. 2019, pp. 24–34, doi:
model preconditioning,’’ in Proc. Int. Conf. Comput. Aided Verification, 10.1109/ISSRE.2019.00013.
vol. 12759, 2021, pp. 249–262, doi: 10.1007/978-3-030-81685-8_11. [107] R. Nahata, D. Omeiza, R. Howard, and L. Kunze, ‘‘Assessing and explain-
ing collision risk in dynamic environments for autonomous driving
[90] C. Sidrane, A. Maleki, A. Irfan, and M. J. Kochenderfer, ‘‘OVERT: An
safety,’’ in Proc. IEEE Int. Intell. Transp. Syst. Conf. (ITSC), Sep. 2021,
algorithm for safety verification of neural network control policies for
pp. 223–230, doi: 10.1109/ITSC48978.2021.9564966.
nonlinear systems,’’ J. Mach. Learn. Res., vol. 23, no. 117, pp. 1–45,
[108] S. Gerasimou, H. F. Eniser, A. Sen, and A. Cakan, ‘‘Importance-
2022. [Online]. Available: https://www.jmlr.org/papers/volume23/21-
driven deep learning system testing,’’ in Proc. ACM/IEEE 42nd
0847/21-0847.pdf
Int. Conf. Softw. Eng., Companion, Oct. 2020, pp. 322–323, doi:
[91] H.-D. Tran, F. Cai, M. L. Diego, P. Musau, T. T. Johnson, and 10.1145/3377812.3390793.
X. Koutsoukos, ‘‘Safety verification of cyber-physical systems [109] L. Ma, F. Juefei-Xu, F. Zhang, J. Sun, M. Xue, B. Li, C. Chen,
with reinforcement learning control,’’ ACM Trans. Embedded T. Su, L. Li, Y. Liu, J. Zhao, and Y. Wang, ‘‘DeepGauge: Multi-
Comput. Syst., vol. 18, no. 5s, pp. 1–22, Oct. 2019, doi: 10.1145/ granularity testing criteria for deep learning systems,’’ in Proc. 33rd
3358230. ACM/IEEE Int. Conf. Automated Softw. Eng., Sep. 2018, pp. 120–131,
[92] H. Fahmy, F. Pastore, and L. Briand, ‘‘HUDD: A tool to debug DNNs doi: 10.1145/3238147.3238202.
for safety analysis,’’ in Proc. IEEE/ACM 44th Int. Conf. Softw. Eng., [110] S. Wang, K. Pei, J. Whitehouse, J. Yang, and S. Jana, ‘‘Efficient for-
Companion Proc. (ICSE-Companion), May 2022, pp. 100–104, doi: mal safety analysis of neural networks,’’ in Proc. Adv. Neural Inf.
10.1109/ICSE-Companion55297.2022.9793750. Process. Syst., Dec. 2018, pp. 6367–6377. https://proceedings.neurips.
[93] L. Pulina and A. Tacchella, ‘‘NeVer: A tool for artificial neural networks cc/paper/2018/file/2ecd2bd94734e5dd392d8678bc64cdab-Paper.pdf
verification,’’ Ann. Math. Artif. Intell., vol. 62, nos. 3–4, pp. 403–425, [111] K. Pei, Y. Cao, J. Yang, and S. Jana, ‘‘DeepXplore: Automated white-
Jul. 2011, doi: 10.1007/s10472-011-9243-0. box testing of deep learning systems,’’ Commun. ACM, vol. 62, no. 11,
pp. 137–145, Oct. 2019, doi: 10.1145/3361566.
[94] G. Katz, ‘‘The marabou framework for verification and analysis of
[112] Z. Chen, N. Narayanan, B. Fang, G. Li, K. Pattabiraman, and
deep neural networks,’’ in Proc. Int. Conf. Comput. Aided Verification,
N. DeBardeleben, ‘‘TensorFI: A flexible fault injection framework for
vol. 11561, 2019, pp. 443–452, doi: 10.1007/978-3-030-25540-4_26.
TensorFlow applications,’’ in Proc. IEEE 31st Int. Symp. Softw. Rel. Eng.
[95] C. Paterson, H. Wu, J. Grese, R. Calinescu, C. S. Pǎsăreanu, and (ISSRE), Oct. 2020, pp. 426–435, doi: 10.1109/ISSRE5003.2020.00047.
C. Barrett, ‘‘DeepCert: Verification of contextually relevant robustness [113] Z. Chen, G. Li, K. Pattabiraman, and N. DeBardeleben, ‘‘BinFI: An
for neural network image classifiers,’’ in Proc. Int. Conf. Comput. efficient fault injector for safety-critical machine learning systems,’’ in
Saf., Rel., Secur., vol. 12852, 2021, pp. 3–17, doi: 10.1007/978-3-030- Proc. Int. Conf. High Perform. Comput., Netw., Storage Anal., Nov. 2019,
83903-1_5. pp. 1–23, doi: 10.1145/3295500.3356177.
[96] C. E. Tuncali, J. Kapinski, H. Ito, and J. V. Deshmukh, ‘‘Reasoning about [114] Y. Jia, J. McDermid, T. Lawton, and I. Habli, ‘‘The role of explainabil-
safety of learning-enabled components in autonomous cyber-physical ity in assuring safety of machine learning in healthcare,’’ IEEE Trans.
systems,’’ in Proc. 55th Annu. Design Autom. Conf., Jun. 2018, pp. 1–6, Emerg. Topics Comput., vol. 10, no. 4, pp. 1746–1760, Oct. 2022, doi:
doi: 10.1145/3195970.3199852. 10.1109/TETC.2022.3171314.
[97] J. Val, R. Wisniewski, and C. S. Kallesoe, ‘‘Safe reinforcement [115] S. Burton, ‘‘Safety assurance of machine learning for chassis control
learning control for water distribution networks,’’ in Proc. IEEE functions,’’ in Proc. Int. Conf. Comput. Saf., Rel. Secur., vol. 12852, 2021,
Conf. Control Technol. Appl. (CCTA), Aug. 2021, pp. 1148–1153, doi: pp. 149–162, doi: 10.1007/978-3-030-83903-1_10.
10.1109/CCTA48906.2021.9659138. [116] G. Pedroza and A. Morayo, ‘‘Safe-by-design development method for
artificial intelligent based systems,’’ in Proc. Int. Conf. Softw. Eng. Knowl.
[98] D. T. Phan, R. Grosu, N. Jansen, N. Paoletti, S. A. Smolka, and
Eng., Jul. 2019, pp. 391–397, doi: 10.18293/SEKE2019-094.
S. D. Stoller, ‘‘Neural simplex architecture,’’ in Proc. 12th Int. Symp. [117] M. Douthwaite and T. Kelly, ‘‘Establishing verification and validation
NASA Formal Methods, (NFM), vol. 12229. Cham, Switzerland: Springer, objectives for safety-critical Bayesian networks,’’ in Proc. IEEE Int.
2020, pp. 97–114, doi: 10.1007/978-3-030-55754-6_6. Symp. Softw. Rel. Eng. Workshops (ISSREW), Oct. 2017, pp. 302–309,
[99] D. Shukla, R. Lal, D. Hauptman, S. S. Keshmiri, P. Prabhakar, doi: 10.1109/ISSREW.2017.60.
and N. Beckage, ‘‘Flight test validation of a safety-critical neural [118] I. Haring, F. Luttner, A. Frorath, M. Fehling-Kaschek, K. Ross,
network based longitudinal controller for a fixed-wing UAS,’’ in T. Schamm, S. Knoop, D. Schmidt, A. Schmidt, Y. Ji, Z. Yang, A. Rupalla,
Proc. AIAA AVIATION FORUM, Jun. 2020, pp. 1–15, doi: 10.2514/ F. Hantschel, M. Frey, N. Wiechowski, C. Schyr, D. Grimm, M. R. Zofka,
6.2020-3093. and A. Viehl, ‘‘Framework for safety assessment of autonomous driving
[100] S. Chen, Y. Sun, D. Li, Q. Wang, Q. Hao, and J. Sifakis, ‘‘Runtime safety functions up to SAE level 5 by self-learning iteratively improving con-
assurance for learning-enabled control of autonomous driving vehicles,’’ trol loops between development, safety and field life cycle phases,’’ in
in Proc. Int. Conf. Robot. Autom. (ICRA), May 2022, pp. 8978–8984, doi: Proc. IEEE 17th Int. Conf. Intell. Comput. Commun. Process. (ICCP),
10.1109/ICRA46639.2022.9812177. Oct. 2021, pp. 33–40, doi: 10.1109/ICCP53602.2021.9733699.
[119] P. Koopman, U. Ferrell, F. Fratrik, and M. Wagner, ‘‘A safety standard [139] S. Gupta, I. Ullah, and M. G. Madden, ‘‘Coyote: A dataset of challenging
approach for fully autonomous vehicles,’’ in Proc. Int. Conf. Comput. scenarios in visual perception for autonomous vehicles,’’ in Proc. CEUR
Saf., Rel., Secur., vol. 11699, 2019, pp. 326–332, doi: 10.1007/978-3-030- Workshop, vol. 2916, 2021, pp. 1–9.
26250-1_26. [140] C. J. Hong and V. R. Aparow, ‘‘System configuration of Human-in-the-
[120] M. Mock, ‘‘An integrated approach to a safety argumentation for AI-based loop simulation for level 3 autonomous vehicle using IPG CarMaker,’’ in
perception functions in automated driving,’’ in Proc. Int. Conf. Comput. Proc. IEEE Int. Conf. Internet Things Intell. Syst. (IoTaIS), Nov. 2021,
Saf., Rel., Secur., vol. 12853, 2021, pp. 265–271, doi: 10.1007/978-3-030- pp. 215–221, doi: 10.1109/IoTaIS53735.2021.9628587.
83906-2_21. [141] A. Mjeda and G. Botterweck, ‘‘Uncertainty entangled; modelling safety
[121] A. Pereira and C. Thomas, ‘‘Challenges of machine learning applied to assurance cases for autonomous systems,’’ Electron. Commun. EASST,
safety-critical cyber-physical systems,’’ Mach. Learn. Knowl. Extraction, vol. 79, pp. 1–10, May 2020. [Online]. Available: https://journal.ub.tu-
vol. 2, no. 4, pp. 579–602, Nov. 2020, doi: 10.3390/make2040031. berlin.de/eceasst/article/download/1124/1072
[122] R. Salay and K. Czarnecki, ‘‘Improving ML safety with partial specifi- [142] A. Corso and M. J. Kochenderfer, ‘‘Transfer learning for effi-
cations,’’ in Proc. Int. Conf. Comput. Saf., Rel., Secur., vol. 11699, 2019, cient iterative safety validation,’’ in Proc. AAAI Conf. Artif. Intell.,
pp. 288–300, doi: 10.1007/978-3-030-26250-1_23. vol. 35, no. 8, May 2021, pp. 7125–7132. [Online]. Available:
[123] A. Tarrisse and F. Massé, ‘‘Locks for the use of IEC 61508 to ML safety- https://ojs.aaai.org/index.php/AAAI/article/view/16876/16683
critical applications and possible solutions,’’ in Proc. 31st Eur. Saf. Rel. [143] C. Smith, E. Denney, and G. Pai, ‘‘Hazard contribution modes of machine
Conf. (ESREL), 2021, pp. 3459–3466, doi: 10.3850/978-981-18-2016- learning components,’’ in Proc. CEUR Workshop, vol. 2560, 2020,
8_661-cd. pp. 14–22.
[124] U. D. Ferrell and A. H. A. Anderegg, ‘‘Applicability of UL 4600 to [144] H. Barzamini, M. Shahzad, H. Alhoori, and M. Rahimi, ‘‘A multi-
unmanned aircraft systems (UAS) and urban air mobility (UAM),’’ in level semantic web for hard-to-specify domain concept, pedestrian, in
Proc. AIAA/IEEE 39th Digit. Avionics Syst. Conf. (DASC), Oct. 2020, ML-based software,’’ Requirements Eng., vol. 27, no. 2, pp. 161–182,
pp. 1–7, doi: 10.1109/DASC50938.2020.9256608. Jun. 2022, doi: 10.1007/s00766-021-00366-0.
[125] K. Aslansefat, I. Sorokos, D. Whiting, R. T. Kolagari, and [145] J. H. Husen, H. Washizaki, H. T. Tun, N. Yoshioka, Y. Fukazawa,
Y. Papadopoulos, ‘‘SafeML: Safety monitoring of machine learning and H. Takeuchi, ‘‘Traceable business-to-safety analysis framework for
classifiers through statistical difference measures,’’ in Proc. Int. Symp. safety-critical machine learning systems,’’ in Proc. 1st Int. Conf. AI
Model-Based Saf. Assessment, vol. 12297, 2020, pp. 197–211, doi: Eng., Softw. Eng. (AI), May 2022, pp. 50–51, doi: 10.1145/3522664.
10.1007/978-3-030-58920-2_13. 3528619.
[126] K. Aslansefat, W. Bridges, I. Sorokos, and D. Whiting. (Jun. 10, 2020). [146] R. Alexander and T. Kelly, ‘‘Supporting systems of systems hazard anal-
GitHub—ISorokos/SafeML: Exploring Techniques for Estimating Safety ysis using multi-agent simulation,’’ Saf. Sci., vol. 51, no. 1, pp. 302–318,
of Machine Learning Classifiers. Accessed: Nov. 24, 2022. [Online]. Jan. 2013, doi: 10.1016/j.ssci.2012.07.006.
Available: https://github.com/ISorokos/SafeML [147] W. Ruan, X. Huang, and M. Kwiatkowska, ‘‘Reachability analysis of deep
[127] M. Bergler, R. T. Kolagari, and K. Lundqvist, ‘‘Case study on the use neural networks with provable guarantees,’’ in Proc. 27th Int. Joint Conf.
of the SafeML approach in training autonomous driving vehicles,’’ in Artif. Intell., Jul. 2018, pp. 2651–2659.
Proc. Int. Conf. Image Anal. Process., vol. 13233, 2022, pp. 87–97, doi: [148] S. Burton, I. Habli, T. Lawton, J. McDermid, P. Morgan, and Z. Porter,
10.1007/978-3-031-06433-3_8. ‘‘Mind the gaps: Assuring the safety of autonomous systems from an engi-
[128] T. Aoki, D. Kawakami, N. Chida, and T. Tomita, ‘‘Dataset fault tree analy- neering, ethical, and legal perspective,’’ Artif. Intell., vol. 279, Feb. 2020,
sis for systematic evaluation of machine learning systems,’’ in Proc. IEEE Art. no. 103201, doi: 10.1016/j.artint.2019.103201.
25th Pacific Rim Int. Symp. Dependable Comput. (PRDC), Dec. 2020, [149] H. Lin and W. Liu, ‘‘Risks and prevention in the application of AI,’’
pp. 100–109, doi: 10.1109/PRDC50213.2020.00021. in Proc. Int. Conf. Mach. Learn. Big Data Anal. IoT Secur. Privacy,
[129] J. F. Boulineau, ‘‘Safe recognition A.I. of a railway signal by on- vol. 1283, 2021, pp. 700–704, doi: 10.1007/978-3-030-62746-1_104.
board camera,’’ in Proc. 16th Eur. Dependable Comput. Conf. (EDCC), [150] P. Sarathy, S. Baruah, S. Cook, and M. Wolf, ‘‘Realizing the promise
vol. 1279. Paris, France: Springer, 2020, pp. 5–19, doi: 10.1007/978-3- of artificial intelligence for unmanned aircraft systems through behav-
030-58462-7_1. ior bounded assurance,’’ in Proc. IEEE/AIAA 38th Digit. Avionics Syst.
[130] L. Gauerhof, R. Hawkins, C. Picardi, C. Paterson, Y. Hagiwara, and Conf. (DASC), Sep. 2019, pp. 1–8, doi: 10.1109/DASC43569.2019.
I. Habli, ‘‘Assuring the safety of machine learning for pedestrian detection 9081649.
at crossings,’’ in Proc. 39th Int. Conf. Comput. Saf., Rel. Secur., (SAFE- [151] A. Causevic, A. V. Papadopoulos, and M. Sirjani, ‘‘Towards a framework
COMP), vol. 12234. Renningen, Germany: Springer, 2020, pp. 197–212, for safe and secure adaptive collaborative systems,’’ in Proc. IEEE 43rd
doi: 10.1007/978-3-030-54549-9_13. Annu. Comput. Softw. Appl. Conf. (COMPSAC), Jul. 2019, pp. 165–170,
[131] M. Klaes, R. Adler, I. Sorokos, L. Joeckel, and J. Reich, ‘‘Handling uncer- doi: 10.1109/COMPSAC.2019.10201.
tainties of data-driven models in compliance with safety constraints for [152] S. Seon and J.-W. Kim, ‘‘Designing a modular safety certifi-
autonomous behaviour,’’ in Proc. 17th Eur. Dependable Comput. Conf. cation system for convergence products–focusing on autonomous
(EDCC), Sep. 2021, pp. 95–102, doi: 10.1109/EDCC53658.2021.00021. driving cars,’’ J. Korean Soc. Quality Manag., vol. 46, no. 4,
[132] A. Subbaswamy, R. Adams, and S. Saria, ‘‘Evaluating model robustness pp. 1001–1014, 2018. [Online]. Available: https://koreascience.kr/article/
and stability to dataset shift,’’ in Proc. 24th Int. Conf. Artif. Intell. Statist. JAKO201816842430631.page
(AISTATS), vol. 130, 2021, pp. 2611–2619. [153] S. Anastasi, M. Madonna, and L. Monica, ‘‘Implications of embed-
[133] J. Firestone and M. B. Cohen, ‘‘The assurance recipe: Facilitating assur- ded artificial intelligence–machine learning on safety of machin-
ance patterns,’’ in Proc. Int. Conf. Comput. Saf., Rel., Secur., vol. 11094, ery,’’ Proc. Comput. Sci., vol. 180, pp. 338–343, Jan. 2021, doi:
2018, pp. 22–30, doi: 10.1007/978-3-319-99229-7_3. 10.1016/j.procs.2021.01.171.
[134] J. Bragg and I. Habli, ‘‘What is acceptably safe for reinforcement learn- [154] B. B. Gupta, A. Gaurav, E. C. Marin, and W. Alhalabi, ‘‘Novel graph-
ing?’’ in Proc. Int. Conf. Comput. Saf., Rel., Secur., vol. 11094, 2018, based machine learning technique to secure smart vehicles in intelligent
pp. 418–430, doi: 10.1007/978-3-319-99229-7_35. transportation systems,’’ IEEE Trans. Intell. Transp. Syst., early access,
[135] L. Gauerhof, P. Munk, and S. Burton, ‘‘Structuring validation targets May 30, 2022, doi: 10.1109/TITS.2022.3174333.
of a machine learning function applied to automated driving,’’ in Proc. [155] Linux Foundation AI & Data Foundation. (Nov. 15, 2022). GitHub
Int. Conf. Comput. Saf., Rel., Secur., vol. 11093, 2018, pp. 45–58, doi: Trusted-AI/Adversarial-Robustness-Toolbox: Adversarial Robustness
10.1007/978-3-319-99130-6_4. Toolbox (ART) Python Library for Machine Learning Security Evasion,
[136] C.-H. Cheng, C.-H. Huang, and G. Nuhrenberg, ‘‘Nn-dependability-kit: Poisoning, Extraction, Inference Red and Blue Teams. Accessed:
Engineering neural networks for safety-critical autonomous driving Nov. 23, 2022. [Online]. Available: https://github.com/Trusted-
systems,’’ in Proc. IEEE/ACM Int. Conf. Comput.-Aided Design AI/adversarial-robustness-toolbox
(ICCAD), Nov. 2019, pp. 1–6, doi: 10.1109/ICCAD45719.2019. [156] S. Alemany, J. Nucciarone, and N. Pissinou, ‘‘Jespipe: A plugin-based,
8942153. open MPI framework for adversarial machine learning analysis,’’ in Proc.
[137] C.-H. Cheng and R. Yan, ‘‘Continuous safety verification of neural IEEE Int. Conf. Big Data (Big Data), Dec. 2021, pp. 3663–3670, doi:
networks,’’ in Proc. Design, Autom. Test Eur. Conf. Exhib. (DATE), 10.1109/BIGDATA52589.2021.9671385.
Feb. 2021, pp. 1478–1483, doi: 10.23919/DATE51398.2021.9473994. [157] F. R. Ward and I. Habli, ‘‘An assurance case pattern for the interpretabil-
[138] Railway Applications Communication, Signalling and Processing Sys- ity of machine learning in safety-critical systems,’’ in Proc. Int. Conf.
tems Software for Railway Control and Protection Systems, docu- Comput. Saf., Rel., Secur., 2020, pp. 395–407, doi: 10.1007/978-3-030-
ment CENELEC, EN50128:2011, Brussels, Belgium, 2011. 55583-2_30.
[158] I. Ramezani, K. Moshkbar-Bakhshayesh, N. Vosoughi, and JOÃO B. CAMARGO JR. received the bachelor’s
M. B. Ghofrani, ‘‘Applications of soft computing in nuclear power plants: degree in electronic engineering and the M.Sc. and
A review,’’ Prog. Nucl. Energy, vol. 149, Jul. 2022, Art. no. 104253, doi: Ph.D. degrees from the School of Engineering,
10.1016/j.pnucene.2022.104253. University of São Paulo (Poli-USP), São Paulo,
[159] S. Iqbal, T. M. Khan, K. Naveed, S. S. Naqvi, and S. J. Nawaz, Brazil, in 1981, 1989, and 1996, respectively.
‘‘Recent trends and advances in fundus image analysis: A review,’’
He is currently an Associate Professor with the
Comput. Biol. Med., vol. 151, Dec. 2022, Art. no. 106277, doi:
Department of Computer Engineering and Digital
10.1016/J.COMPBIOMED.2022.106277.
[160] T. M. Khan and A. Robles-Kelly, ‘‘Machine learning: Quantum Systems (PCS), Poli-USP, where he is also the
vs classical,’’ IEEE Access, vol. 8, pp. 219275–219294, 2020, doi: Coordinator of the Safety Analysis Group (GAS).
10.1109/ACCESS.2020.3041719. He has 40 articles in scientific journals, five orga-
[161] Y. Kwak, W. J. Yun, S. Jung, J.-K. Kim, and J. Kim, ‘‘Introduction to nized books, six published book chapters, and 115 complete works published
quantum reinforcement learning: Theory and PennyLane-based imple- in proceedings of conferences. He is a Reviewer in different scientific
mentation,’’ in Proc. Int. Conf. Inf. Commun. Technol. Converg. (ICTC), journals, such as the IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION
Oct. 2021, pp. 416–420, doi: 10.1109/ICTC52510.2021.9620885. SYSTEMS, Journal of Intelligent and Robotic Systems, Risk Analysis, the
[162] Scikit-Learn. (2022). Scikit-Learn: Machine Learning in Python Scikit- Journal of Advanced Transportation, Reliability Engineering and System
Learn 1.1.2 Documentation. Accessed: Oct. 7, 2022. [Online]. Available: Safety, Transportation Research—Part C, and the IEEE SOFTWARE.
https://scikit-learn.org/stable/
[163] E. Wozniak, C. Carlan, E. Acar-Celik, and H. J. Putzer, ‘‘A safety case
pattern for systems with machine learning components,’’ in Proc. Int.
Conf. Comput. Saf., Rel., Secur., vol. 12235, 2020, pp. 370–382, doi:
10.1007/978-3-030-55583-2_28.
[164] A. V. da Silva Neto, L. F. Vismari, R. A. V. Gimenes, D. B. Sesso,
J. R. de Almeida, P. S. Cugnasca, and J. B. Camargo, ‘‘A practical
analytical approach to increase confidence in PLD-based systems safety
analysis,’’ IEEE Syst. J., vol. 12, no. 4, pp. 3473–3484, Dec. 2018, doi:
10.1109/JSYST.2017.2726178.
[165] A. V. S. Neto and P. S. Cugnasca, ‘‘Technical research report—Remarks JORGE R. ALMEIDA JR. received the bachelor’s
on the systematic literature review on the safety assurance of AI-based
degree in electronic engineering and the M.Sc. and
systems—Version 6,’’ Saf. Anal. Group (GAS), Dept. Comput. Eng.
Ph.D. degrees from the School of Engineering,
Digit. Syst., Escola Politécnica, Universidade de São Paulo (USP),
São Paulo, Brazil, 2022, doi: 10.5281/zenodo.7358711.
University of São Paulo (Poli-USP), São Paulo,
Brazil, in 1981, 1989, and 1995, respectively.
He is currently an Associate Professor with the
Department of Computer Engineering and Dig-
ital Systems (PCS), Poli-USP, where he is also
ANTONIO V. SILVA NETO was born in São Paulo, a member of the Safety Analysis Group (GAS).
Brazil, in 1988. He received the bachelor’s degree His research interests include reliable and safe
in electrical engineering and the M.Sc. degree computational systems for critical application.
from the School of Engineering, University of
São Paulo (Poli-USP), in 2010 and 2014, respec-
tively, where he is currently pursuing the D.Sc.
degree with the Safety Analysis Group (GAS),
working on methods for the safety assurance of
artificial intelligence-based systems, supported by
the Brazilian institutions CAPES (Coordenação de
it Aperfeiçoamento de Pessoal de Nível Superior) and FDTE (Fundação para
o Desenvolvimento Tecnológico da Engenharia).
During his D.Sc. degree, he has also been a Teaching Assistant with the
Digital Laboratory undergraduate courses offered as part of the computer PAULO S. CUGNASCA received the bachelor’s
engineering undergraduate courses, since 2021. Before starting his D.Sc. degree in electronic engineering and the M.Sc. and
degree, he was with Alstom Brazil, in 2013 and (2018–2020), respec- Ph.D. degrees from the School of Engineering,
tively, where he acted as the Safety Assurance Manager for research & University of São Paulo (Poli-USP), São Paulo,
development projects and a Safety Assurance Engineer for metro signaling Brazil, in 1987, 1993, and 1999, respectively.
projects on Brazil, Mexico, Panama, and Chile. Moreover, he was also He is currently an Associate Professor with the
with FDTE (2009–2018), where he was an Independent System Safety Department of Computer Engineering and Dig-
Analyst for Brazilian safety-critical projects on metro and air traffic control ital Systems (PCS), Poli-USP, where he is also
domains. He has five papers in scientific journals, six conference papers, and a member of the Safety Analysis Group (GAS).
11 participations on examination boards of Poli-USP electrical and computer His research interests include reliable and safe
engineering undergraduate dissertations. computational systems for critical application.
Mr. Silva Neto is also a Reviewer for the IEEE SYSTEMS JOURNAL.