Technology in Society 68 (2022) 101926

Contents lists available at ScienceDirect

Technology in Society
journal homepage: www.elsevier.com/locate/techsoc

Sustainable AI: An integrated model to guide public sector decision-making


Christopher Wilson, PhD *, Maja van der Velden, PhD
University of Oslo, Norway

A R T I C L E  I N F O

Keywords:
Artificial intelligence
Public administration
Sustainability
Social sustainability
AI governance

A B S T R A C T

Ethics, explainability, responsibility, and accountability are important concepts for questioning the societal impacts of artificial intelligence and machine learning (AI), but are insufficient to guide the public sector in regulating and implementing AI. Recent frameworks for AI governance help to operationalize these by identifying the processes and layers of governance in which they must be considered, but do not provide public sector workers with guidance on how they should be pursued or understood. This analysis explores how the concept of sustainable AI can help to fill this gap. It does so by reviewing how the concept has been used by the research community and aligning research on sustainable development with research on public sector AI. Doing so identifies the utility of boundary conditions that have been asserted for social sustainability according to the Framework for Strategic Sustainable Development, and which are here integrated with prominent concepts from the discourse on AI and society. This results in a conceptual model that integrates five boundary conditions to assist public sector decision-making about how to govern AI: Diversity, Capacity for learning, Capacity for self-organization, Common meaning, and Trust. These are presented together with practical approaches for their preservation, and guiding questions to aid public sector workers in making the decisions that are required by other operational frameworks for ethical AI.

1. Introduction

The long-term societal impacts of artificial intelligence and machine learning technologies (AI) are widely speculated, but poorly understood. This poses dual governance challenges for the public sector, which must not only regulate how AI interacts with society, but is itself increasingly adopting AI to automate government workflows and deliver services [1,2]. The challenges of regulating and implementing AI are distinct but related, in so far as they both must confront widely discussed, but often unspecified risks related to AI's bias, misuse, and perceptions of illegitimacy.

Efforts to address these risks are confounded by the subtle and complex ways in which novel technologies like AI interact with society. This is in part due to the novel and pervasive nature of the technology, and some applications are so new that we must wait to see what consequences unfold. Other risks are systematic and can be modeled, but a recent review of AI initiatives aiming to improve ecological sustainability notes that even these are generally "poorly elaborated, and more often than not, overlooked" [3, p. 2]. Nor is this simply a problem at scale, as the opacity surrounding human-machine interaction persists at the micro-level of allocating responsibility for algorithm-assisted decision-making [4]. As Cath [5] rightly notes, moreover, the inscrutability of AI's effects on society increases as AI becomes more widespread and normalized: "the more AI matters the less one may be able to realise how much it does" (p. 507). This is particularly challenging for public sector workers mandated to protect the public good, but who may not recognize the societal risks and ethical challenges posed by AI. Research on public administration has demonstrated that recognition of the problems that public policy should solve is dependent not only on individuals' personal values [6], but on the international normative frameworks to which they are exposed and the salience of policy problems within those networks [7]. Even when public sector workers recognize these challenges, however—and they increasingly do [8]—individuals often lack the skills and capacities to address them [9–11]. A recent review of research on public sector AI describes this knowledge gap as "a critical development barrier" for many governments, as the slowly burgeoning thought leadership on AI governance fails to match "the pace with which AI applications are infiltrating government globally" [10, p. 2].

There are limited resources to help public sector workers manage this complexity in governing AI. Though there has been a proliferation of conceptual frameworks and principles, labels such as responsible,

* Corresponding author.
E-mail addresses: [email protected] (C. Wilson), [email protected] (M. van der Velden).

https://doi.org/10.1016/j.techsoc.2022.101926
Received 30 October 2020; Received in revised form 4 February 2022; Accepted 4 February 2022
Available online 7 February 2022
0160-791X/© 2022 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).

ethical, and explainable AI tend to be highly conceptual and not well suited to guide decision-making. The discourse of ethical AI is instructive in this regard. In applied contexts, ethics and ethical responsibility have proved remarkably challenging to define [13–15]. As a discourse, the abstract quality of AI ethics has proved vulnerable to elite capture [16] and hijacking [17–19]. More operational tools, meanwhile, tend to target users with technical expertise, like the Equity Evaluation Corpus or the TensorFlow Privacy library (see Ref. [20] for a comparison), and are not easily accessible or operationalized for decision-making by public managers, administrators, or civil servants. The few practical resources designed to assist non-technical public sector workers with AI governance challenges tend to emphasize deliberative and participatory processes in the interest of government accountability (see for example [4,21]), and their lack of use may be due to perceptions in the public sector that the burdens of participation and accountability initiatives outweigh any potential benefits [22–24]. This is a problem for public sector governance of AI, which requires frameworks that are both operational and conceptually coherent in order to actually be ethical or responsible.

The aim of this paper is to explore if the concept of sustainable AI, as grounded in the theory and practice of sustainable development, might be better suited to guide decision-making about how to regulate and implement AI. Though also abstract at first glance, alignment to the Sustainable Development Goals (SDGs) framework allows the concept of sustainable AI to draw on the policies and practices associated with the SDGs, which have been elaborated, road-tested, and integrated into public sector practice for several years now. This can increase the salience and accessibility of the concept and makes it easier to operationalize in public sector decision-making than more diffuse notions of responsibility or ethics.

The notion of sustainable AI is often casually, but increasingly, referenced in research on AI governance [25], in policy debate [20], and even in national strategies for dealing with AI [26]. The concept is not clearly or consistently defined, however, and it is not clear how or to what degree it would support public sector decision-making in governing AI. To explore this potential, this research asks: How can the concept of sustainable AI be defined and operationalized to guide public sector decision-making?

The article proceeds as follows. Section 2 presents background on the context for public sector decision-making about AI governance and on how the concept of sustainability has been conceptualized in the context of sustainable development and the SDGs. Section 3 then presents our research approach, and section 4 presents the results of our review of the literature, with an emphasis on social sustainability, and operationalizes social sustainability through boundary conditions necessary to preserve social functions. It then analyses these boundary conditions by integrating the literature on sustainability and public sector AI in order to arrive at a preliminary conceptual model of sustainable AI for the public sector. Section 5 discusses the analysis and presents the integrated model of boundary conditions for sustainable AI, together with operational considerations for how it can be applied in public sector decision-making. The final part of this section presents some concluding remarks as well as outlines the limitations of this study.

2. Background and context

2.1. Public sector decision-making about AI governance

The rapid diffusion of AI technologies presents the public sector with a novel set of regulatory and adoption challenges. Public managers, administrators, and civil servants must make decisions about how to balance the potential benefits of these technologies against their potential harms. We refer to this collectively as AI governance, whether it involves decisions about how AI should be adopted or regulated. Notably, the potential benefits of AI differ across each of these dynamics, with regulatory decision-making oriented towards the maximization of economic and societal benefits [5,27], while decisions about AI adoption emphasize administrative efficiency and improved public service delivery [28,29]. The risks implied in both types of decision-making are largely synonymous, however, and are often articulated in relationship to high-level concepts such as bias, fairness, and privacy, and the obligation to uphold fundamental democratic principles [10].

Whether deciding on appropriate disclosure requirements, private sector data management, or quality assurance processes for algorithmic decision-making in case management, addressing these risks is challenging in the public sector due to a variety of factors, including a fast-moving policy discourse [30], skeptical publics [31,32], and the limited capacities of government workers [9] and institutions [33]. These challenges are salient at all phases of the policy process and "permeate all layers of application," as noted in a recent systematic review of research on AI in the public sector [1, p. 1]. As discussed in this article's introduction, however, most frameworks and concepts for addressing these risks are difficult to operationalize in the public sector and tend either towards high levels of abstraction or technical detail.

One notable exception to this trend is the "Integrated AI Governance Framework for Public Administration" developed by Wirtz et al. [8], which is oriented towards helping the public sector to manage the most salient challenges identified in a previous literature review, and grouped as shown in Table 1. The challenges identified here are dissimilar in many regards, spanning both practical and abstract considerations, but nevertheless provide important clarity in the otherwise sprawling literature on AI and society, by indicating the types of issues and challenges that are most salient in the context of public sector decision-making.

Table 1
Main governance challenges for the public sector, adapted from Wirtz et al. [8].

AI & Society: workforce transformation; societal acceptance; transformation of human-to-machine interaction.
AI Law and Regulation: governance of autonomous intelligence systems; responsibility and accountability; privacy and safety.
AI Ethics: AI-rulemaking for human behavior; compatibility of AI vs human value judgement; moral dilemmas; AI discrimination.

Wirtz et al.'s integrative framework further suggests that public sector workers address these challenges through a combination of regulatory, policy, and collaborative efforts, which include specific components such as "hazard identification" and "monitoring of unintended effects" at the regulatory layer, or the development of standards at the public policy layer. This is also a valuable contribution insofar as it moves past the conceptual ambiguity of notions like "ethical AI" to provide a menu of activities that public sector workers can pursue to manage risks and challenges related to AI governance. The framework does not, however, provide substantive guidance that public sector workers can use to make actual decisions about how to conduct risk/benefit analysis or develop standards.

Though the integrated framework suggested by Wirtz et al. does not provide concrete guidance on how to make decisions about AI regulation or implementation, it does highlight the salience of societal and social issues for public sector decision-making, and how these are linked to concepts like safety, justice, and fairness. This provides a frame for narrowing the concept of sustainable AI for public sector decision-making, with a focus on balancing AI's potential societal benefits against AI's potential societal harms.

2.2. Operationalization of sustainability in the context of the SDGs

The concept of sustainable development is anchored in the 1987

Brundtland Report as "development that meets the needs of the present without compromising the ability for future generations to meet their own needs" [34], and has since been elaborated in a variety of contexts and has driven global collaboration. This broad notion of sustainability is widely understood to consist of three pillars: environmental, social, and economic sustainability, whose relationships and interdependencies are conceptualized in a variety of ways within the sustainable development paradigm [35].

International collaboration for sustainable development has culminated in the Sustainable Development Goals (SDGs), which were adopted by all 193 UN member states in 2015 [36]. The SDGs are highly operational, consisting of 17 broad goals, in turn consisting of 169 specific targets and nearly 300 indicators, and countries' progress towards achieving these targets is closely monitored by UN agencies as well as independent organizations and citizen initiatives [37]. Because country contexts and capacities vary so significantly, the United Nations Development Programme provides support to countries to institutionalize efforts to achieve sustainable development, through the creation of regulatory structures such as national councils, and processes for coordinating between legislative, executive, and other public service and administration agencies [38]. A recent mapping of sustainable development policy intermediaries found over 120 independent online resources to support countries in this regard [39], and early reporting on countries' self-assessments to the UN suggests that this has supported broad diffusion of the SDG framework across developed and developing country contexts [40].

In parallel with the diffusion of the sustainable development paradigm among national governments, significant work has been done to define sustainability across sectors and in other operational contexts. Most notably, over 25 years of academic collaboration and review led to the development of a Framework for Strategic Sustainable Development (FSSD) [41]. Notably, the FSSD operationalizes the concept of sustainability by defining "boundary conditions" and setting red lines that must be respected in order to protect "the basic conditions that are necessary to fulfill for the ecological and social systems to not degrade systematically" [41]. As an articulation of sustainability in negative terms of what cannot be compromised, this conceptualization contrasts strongly with most positive conceptualizations of sustainability as an objective in the public sector [42], including the SDGs.

Since its launch, the FSSD has been tested and applied in a variety of contexts and with a variety of actors from the public and non-profit sector, testing its utility to

give guidance on how any region, organization or project can develop a vision framed by principles for social and ecological sustainability, analyse and assess the current situation in relation to that vision and thus clarify the gap, generate ideas for possible actions that could help to bridge the gap, and prioritize such actions into a step-wise and economically attractive plan [43].

These efforts have highlighted the importance of an iterative approach to simultaneously operationalizing and defining sustainability concepts.

3. Research approach

Despite significant operationalization of the concept of sustainability, and a clear operational need for public sector decision-making about AI governance, the key challenge for this analysis is that the concept of sustainable AI is asserted regularly but inconsistently. It has been casually referenced in regard to the environmental consequences of advanced computing power [44,45], as a corporate strategy [46,47], as a measure of the degree to which AI threatens human safety [48], and as a social movement oriented towards social justice [49]. Many of these references are highly casual and tangential, resulting in a scope of literature that is too broad and diffuse to provide guidance. The research question guiding this study could thus be formulated as follows: How to operationalize the concept of sustainable AI for public sector decision-making?

To answer this question, we adopted a six-step approach to i) review the literature on sustainable AI; ii) establish the applicability of sustainable AI to the target context of public sector decision-making about AI governance; and iii) formulate conceptual and operational definitions of sustainable AI for the public sector. An overview of the six steps is presented below in Table 2.

In the first step, we aimed to capture deliberate conceptualizations of sustainable AI in research. We did this by querying the Scopus and Web of Science databases on the precise search terms 'sustainable AI' and 'sustainable artificial intelligence', and results were filtered to include only those in which these terms appeared in the title, abstract, or keywords of the sources. Google Scholar was queried for articles with the same terms in titles only. This returned 16, 14, and 21 articles from each query respectively. Eight articles from the Scopus search, 5 articles from the Web of Science search, and 12 articles from the Google Scholar search fulfilled our criteria. After eliminating the duplicates and applying the filters in step 2, 15 articles remained that contained definitions or descriptions of one of the search terms. These articles also referenced three research institutions with a mandate for exploring sustainable AI that had that term in their name. These were also included in the review, in order to assess how sustainable AI is conceptualized in the research community beyond peer-reviewed research. This resulted in 7 definitions and 10 descriptions of sustainable AI. Despite their limited number, the results of this search provide a strong and representative baseline of how sustainable AI is conceptualized in the research community, and align with the lack of consistency and clarity described in reviews of the sustainability literature. In the fifth step we compared the definitions and descriptions with the target context of public sector decision-making described in the previous section, in order to narrow our review to a subset of the relevant literature focused on the Framework for Strategic Sustainable Development and social sustainability, and to define a structure for aligning the literatures on sustainability and AI in the public sector.

Table 2
Process for literature review and analysis.

Step 1 (Database query): Scopus, Web of Science, Google Scholar: "Sustainable AI" or "Sustainable artificial intelligence". Result: n = 51.
Step 2 (Filter): 1. removal of duplicates; 2. search terms in title, abstract, or keywords; 3. contain descriptions or definitions of sustainable AI. Result: n = 15.
Step 3 (Expansion): research centers mentioned in articles that have an explicit nominal focus on sustainable AI. Result: the Nordic center for Sustainable and Trustworthy AI Research (Nordstar) in Oslo; the AI Sustainability Center in Stockholm; the Sustainable AI Lab in Berlin.
Step 4 (Review): definitions and descriptions of sustainable AI. Result: 7 definitions and 10 descriptions, presented in Table 3.
Step 5 (Comparison with target context): operational context for public sector decision-making about AI governance. Result: focused review on social sustainability and the FSSD, and identification of 5 boundary conditions for sustainable AI.
Step 6 (Integration of the literatures on sustainability and AI): boundary conditions for maintaining social and environmental sustainability. Result: 5 boundary conditions for social sustainability that can be applied in the context of sustainable AI.
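The screening logic of steps 1 and 2 (pooling the database queries, removing duplicates, and keeping only records in which a search term appears in the title, abstract, or keywords) can be sketched as follows. The record structure, sample data, and helper names are hypothetical illustrations, not part of the study's actual tooling.

```python
# Hypothetical sketch of the screening in steps 1-2: pool results from the
# database queries, drop duplicate titles, and keep only records whose title,
# abstract, or keywords contain one of the search terms. Sample data invented.

TERMS = ("sustainable ai", "sustainable artificial intelligence")

def matches(record: dict) -> bool:
    """True if any search term appears in the record's title, abstract, or keywords."""
    text = " ".join(
        [record.get("title", ""), record.get("abstract", "")]
        + record.get("keywords", [])
    ).lower()
    return any(term in text for term in TERMS)

def screen(*result_sets: list) -> list:
    """Merge query results, remove duplicates by title, and apply the term filter."""
    seen, kept = set(), []
    for results in result_sets:
        for record in results:
            key = record["title"].strip().lower()
            if key not in seen and matches(record):
                seen.add(key)
                kept.append(record)
    return kept

scopus = [{"title": "Sustainable AI for government", "abstract": "", "keywords": []}]
wos = [
    {"title": "Sustainable AI for government", "abstract": "", "keywords": []},
    {"title": "Green computing", "abstract": "", "keywords": ["efficiency"]},
]
print(len(screen(scopus, wos)))  # prints 1
```

On the invented sample records, the duplicate entry and the record matching no search term are both discarded, leaving a single article, mirroring the reduction from 51 results to 15 screened articles described above.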

The final step integrates those literatures according to five boundary conditions and asserts operational and conceptual definitions of sustainable AI for the public sector.

4. Results and analysis

4.1. Mapping conceptualization of sustainable AI in the research community

A review of how research and research institutions have conceptualized sustainable AI results in 7 specific definitions and 10 specific descriptions, which are presented below in Table 3. All of these conceptualizations explicitly reference the conceptual paradigm of sustainable development associated with the SDGs, but differ in their focus on the different dimensions of sustainability. They also differ according to their contexts of application (specific sectors, industries, or legal contexts) and the relationship between AI and sustainability (AI applications that are themselves sustainable vs AI applications that support or promote sustainability). These differences are broadly but unevenly distributed across the conceptualizations returned from the first steps in our literature search (see Table 3).

In regard to the dimensions of sustainability, four instances focus only on environmentally sustainable AI [55,57,63,65], while others only focus on socially sustainable AI [52,60,62]. Half of the articles have a holistic understanding of sustainability. Economic sustainability is not featured independently. Rohde et al. [50] and Tsafack Chetsa's [56] definitions are close to the so-called Triple Bottom Line [67] or Three Pillars [68] definitions of sustainability, while others refer to the Brundtland definition of sustainability [54,58]. Vinuesa et al. [53] define sustainable AI as AI that enables achieving the Sustainable Development Goals (SDGs), while the SDGs and sustainable development are also described as guiding frameworks for the development and evaluation of sustainable AI [58,61,62].

In regard to application contexts, several conceptualizations discuss sustainable AI on a general policy level [49,50,53,62], referring to Agenda 2030 and the EU. The sustainability of AI as a technological product is the focus of three articles [57,63,64]. Several articles discuss sustainable AI in a particular field: consumer autonomy [54], business innovation [55], and human rights [59].

In regard to the relationships between sustainability and AI, eight resources discuss the need for the sustainability of AI [49,50,55,57–59,62–64]. They mention, for example, the need "to foster change in the lifecycle of AI products (i.e., idea generation, training, re-tuning, implementation, governance) towards greater ecological integrity and social justice" [49] and to "have the minimum carbon footprint" [63]. Two resources focus on how AI can contribute to achieving sustainability [53,54]. The final three resources mention aspects related to both AI for sustainability and the sustainability of AI [51,60,61].

Table 3
Definitions and descriptions of sustainable AI.

Definitions
1) "[D]eveloping, implementing, and using AI in a way that minimizes negative social, ecological and economic impacts of the applied algorithms (sustainable AI)" [50].
2) "The AI Sustainability Center supports an approach in which the positive and negative impacts of AI on people and society are as important as the commercial benefits or efficiency gains. We call it Sustainable AI" [51,52].
3) "Sustainable AI is a movement to foster change in the entire lifecycle of AI products (i.e. idea generation, training, re-tuning, implementation, governance) towards greater ecological integrity and social justice" [49].
4) "Sustainable AI is AI that enables reaching the SDGs" [53].
5) "(…) the extent to which AI technology is developed in a direction that meets the needs of the present without compromising the ability of future generations to meet their own needs" [54].
6) "[S]ustainable artificial intelligence that is not harmful but beneficial for human life" [55].
7) "One can think of sustainable AI/DS as AI subjected to organizing principles, including, but not limited to, processes which could be organization specific, regulations, best practices, and definitions/standards for meeting the transformative potential of DS while simultaneously protecting the environment, enabling economic growth, and social equity" [56].

Descriptions
1) "This work explores the environmental impact of AI from a holistic perspective. More specifically, we present the challenges and opportunities to designing sustainable AI computing (…)" [57].
2) "[S]ustainable development (SD) (Brundlandt) should be the guiding framework for research and development of artificial intelligence (AI). Instead of merely focusing on ethics or human rights, scholars and policy makers should acknowledge sustainable AI development (SAID) as guiding framework" [58].
3) "The growth in AI and automation will continue regardless of how the space is regulated and monitored. Whether it is sustainable in the long run, though, and is viewed positively rather than negatively, will depend on taking a responsible, rights-based approach" [59].
4) "Exploring the connections between an AI's technical design and its social implications will be key in ensuring feasible and sustainable AI systems that benefit society and that people want to use" [60].
5) "[T]he SDGs provide an ideal framework to test the desirability of AI solutions" [61].
6) "The objective of an inclusive, sustainable, and human-centered AI in Europe will likely require a normative framework at the European level. Financial and regulatory stimuli are required to foster SDG-driven AI and public–private collaboration in the sharing of technology and data (…). Furthermore, a human-centered AI should be human rights-based (…). Although there has been some limited discussion at the European government level of the impact of AI on human rights, especially regarding the right to privacy, the impact on social, economic, and cultural rights has so far received little attention" [62].
7) "For example, time has come to focus on sustainable AI (Pal, 2018b). Here we like to refer to two issues: The first issue is that the development (training) of the AI system should have the minimum carbon footprint. To achieve human-like performance often this important issue is ignored. To illustrate the severity of this issue we consider a recent study which used an evolution-based search to find a better architecture for machine translation and language modeling than the Transformer model (So et al., 2019). The architecture search ran for 979 M training steps requiring about 32,623 h on TPUv2, equivalently 274,120 h on 8 P100 GPUs. This may result in 626,155 lbs of CO2 emission, about 5 times the lifetime average emission by an American car (Strubell et al., 2019). The second point is that the solutions provided by an AI system should be sustainable with the minimum impact on the environment" [63].
8) "From the perspective of AI Ethics, Aimee van Wynsberghe defined the term sustainable AI as '. . . a field of research that applies to the technology of AI (. . .) while addressing issues of AI sustainability and/or sustainable development' [3]. In other words, the term of sustainable AI takes into consideration the entire AI lifecycle, from training to its implementation and use" [64].
9) "First, state-of-the-art algorithms in AI demand massive computing power and energy: to handle the ever-increasing Big Data repositories, AI systems must scale in proportion to all the available data. In other words, future AI must be sustainable" [65].
10) "measuring & assessing the environmental impact of AI, ways of making AI systems more sustainable, and directing AI towards the sustainable development goals" [66].

4.2. Narrowing the review towards the target context

Three conclusions can be drawn from the broad review of how sustainable AI has been deliberately conceptualized by the research community. Firstly, consistent reference to the sustainable development paradigm justifies the use of that framework to operationalize the concept of sustainable AI. Secondly, the variety of application contexts and AI sustainability relationships suggests a complex and fragmented operational landscape. This aligns broadly with the vast array of substantive issues at play in public sector contexts [1], and with efforts by Wirtz et al. [8] to integrate these with the challenges, mechanisms, and layers of AI governance in the public sector. It also confirms the need for more hands-on operational tools, such as those developed under the Framework for Strategic Sustainable Development, and suggests that this framework might be used to operationalize the concept of sustainable AI for public sector decision-making.

Lastly, differences in how the literature attends to the different

dimensions of sustainability, social, environmental, and economic, highlights the importance of social sustainability for public governance decision-making, and the emphasis on protecting societal values related to justice, fairness, and safety in AI governance, as described in section 2.1. This focus is not exclusive, and interacts significantly with the other dimensions of sustainability [35], as emphasized in a recent review of AI initiatives for environmental sustainability [3]. The review found that their application introduced systemic social risks “since the application of AI-technologies in combination with globalization processes, are likely to create novel connections between humans, machines and the living planet including ecosystems and the climate system” [3]. As a point of departure, however, this suggests that application of the FSSD to social sustainability provides useful tools for operationalizing the sustainable AI concept for public sector decision-making.

4.3. Social sustainability in the context of strategic sustainable development

Of the three forms of sustainability, social sustainability has often been regarded as the most undertheorized [69], and has been defined differently across disciplines [35,70–75]. This lack of clarity is perhaps also why social sustainability has often been overlooked in national frameworks and reporting for sustainable development [35,76], and has prompted efforts to assert a scientifically grounded and operational definition of social sustainability through the FSSD [43,77]. Their effort leveraged a literature review and systems mapping to “better aid more concrete planning and decision-making for sustainable development” across sectors, in a way that is broadly analogous to the current analysis (p. 34). This resulted in a conceptual model and principled definition. Most relevant for public sector decision-making, the authors also applied the boundary conditions articulated in the FSSD as boundaries that any program or initiative must avoid violating in order to preserve social sustainability:

    By clustering a myriad of down-stream impacts into overriding mechanisms of degradation and equipping them with a “not” to serve as exclusion criteria, boundary conditions for redesign are derived. The sustainability of the [social and ecological] systems and the definition of the goal (sustainability) at the principle level then creates the space and opportunity for people to meet their needs in whatever way they chose and for societies to create scenarios to prosper and flourish [42, p. 35].

The authors identified five specific boundary conditions: diversity, capacity for learning, capacity for self-organization, common meaning, and trust. The remaining subsections will consider each of these in the context of the broader debate on AI, ethics, and society, and the operational implications this has for public sector decision-making about AI governance.

4.4. Boundary conditions for sustainable AI

4.4.1. Diversity

Missimer et al. [43] describe three types of diversity as boundary conditions for sustainable development, including a mix of social components “whose history and accumulated experience help cope with change,” diverse types of knowledge used to understand systems, and “diversity in governance as a source for resilience” (pp. 36-37). This resonates strongly with calls to ensure “diversity and inclusion within system development teams and stakeholders, broadening and diversifying the sources of knowledge, expertise, disciplines and perspectives” [78] that inform the design, implementation, and regulation of AI.

Diversity and inclusion in AI processes are particularly important for the public sector, and governments have been prominently encouraged to initiate multi-stakeholder consultative and deliberative processes to develop governance systems for AI [5,13]. The OECD [28] urges the public sector to “provide for multi-disciplinary, diverse, and inclusive perspectives” in shaping national approaches to AI and argues that the inclusion of diverse perspectives is “perhaps the main enabling factor to achieving AI initiatives that are both effective and ethical, both successful and fair” (p. 101). Of particular importance in this regard is representation of the perspectives and lived realities of different social groups who will in some way interact with the AI; particularly historically disadvantaged groups that may not have equal access to AI services or the processes through which they are developed, or that are at risk of further marginalization as a result of the use of AI-based public services [79–81]. Thus UNESCO [82] argues that “anyone or any entity with a legitimate or bona fide interest in an issue brought about by the AI development can be considered as a relevant stakeholder,” and that multi-stakeholder participation can help “prevent the domination of the Internet and other new technologies by one constituency at the expense of another” (p. 116).

In the context of public sector decision-making about AI governance, the boundary condition of diversity can thus be understood as avoiding the degradation of social sustainability through elite capture of AI systems or the exclusion of affected stakeholders from AI governance. This is challenging insofar as the FSSD framework implies a starting point where the sustainability of systems has not yet been degraded, but there appears to be broad agreement in much of the literature on AI and society that diversity and inclusion in AI development and implementation are exceedingly rare. We may be starting from a default position of degradation in regard to this boundary condition, because of the inherently opaque and esoteric nature of AI.

Simultaneously, widespread criticism of this state of affairs has produced a wealth of literature providing guidance on how to avoid exclusion and capture of AI systems through the inclusive participation of stakeholders in the development, implementation, monitoring, or review of AI platforms. This includes the use of national level multi-stakeholder fora [82], national commissions for regulation [5] or trust-building [83], or citizen assemblies to provide a broad representation of inputs to the design and monitoring of AI implementation [84]. At a more operational level, toolkits have been developed to help governments conduct public deliberative processes [85] and inclusive impact assessments for AI [21]. Frameworks for embedding these perspectives in AI systems will be discussed in section 4.4.4 below.

4.4.2. Capacity for learning

The boundary condition of learning capacity is described as the ability “to sense changes and respond to them effectively […] and includes social memory, the capacity to learn from experience, as a mechanism” [43]. The capacity for societies to learn from their interactions with AI platforms and systems raises a number of questions and preconditions. The most fundamental of these is basic awareness of AI, regarding how these technologies work and how they are being implemented. Surveys suggest that awareness in this regard is generally low among the public [86,87], and that this can be closely linked to the public’s skepticism toward AI [8]. Indeed, there appears to be a fundamental human tendency to blame AI when things go wrong [88].

While AI’s technical complexity and opacity have often been suggested as a reason why it is not feasible to engage with the general public on AI issues [89], there has been a dramatic surge of research and advocacy aiming to make algorithms and related processes explainable [90–92]. Though some critics have argued that the notion of algorithmic explainability assumes the existence of an engaged and critically informed audience to whom AI might be explained [93], a number of practical frameworks have been developed to help governments engage non-experts in technical discourse [94]. Balaram et al. [86] note, moreover, that deliberative processes have been demonstrated to be particularly well suited to making sense of contentious or technically complicated topics. Robbins [95] suggests a gradated understanding of knowledge about AI, in which some types of meta-level knowledge, for
example regarding things like “training data, inputs, functions, outputs, and boundaries,” can be leveraged for regulation and monitoring without requiring more detailed technical knowledge and know-how (p. 391).

In the context of public sector decision-making about AI governance, the boundary condition would thus not protect fully transparent AI as lines of code, but the fundamental discoverability of algorithmic processes and systemic interactions, such that it is possible for stakeholders to determine how AI systems are developed and where decisions are made in their interaction with human agents. As with the boundary condition of diversity, AI’s inherent opacity has already led to a status quo of degraded systems in regard to explainability. Public sector decision-making needs to go beyond the preservation anticipated in the FSSD and proactively seek to make AI systems discoverable and decipherable to the general public. In an operational sense, this most immediately requires that public sector workers not simply accept the idea that AI is an unknowable black box, and instead explore mechanisms for inclusive audits of AI systems [21], participatory technology assessments [96], systematic disclosure systems [97], the appointment of data stewards [98], or the use of external organizations as knowledge brokers to specific stakeholder groups [99].

4.4.3. Capacity for self-organization

As “complex adaptive systems are usually self-organized systems without system-level intent or centralized control,” Missimer et al. [43] argue that the capacity for social systems to self-organize “is especially important when confronted with a sudden change in the environment” (p. 37). Without that capacity, social systems are unable to rapidly adapt and respond to disruptive changes, such as those posed by AI to the “informational foundations” of contemporary society [96].

As with the previous boundary conditions, critical research suggests that AI is already degrading society’s capacity to self-organize and manage the societal impacts of AI, primarily because the “power” to understand, engage, and shape AI is increasingly concentrated in specific societal groups [82], while other groups are increasingly vulnerable to AI bias and discriminatory outcomes, and their capacity to do something about it is increasingly diminished because they lack the means to organize and engage [100,101]. While this is often framed as a problem of agency, empowerment, and accountability, it is equally a problem of consent, because AI systems are simultaneously ubiquitous and invisible [102]. These systemic challenges to agency and engagement are further exacerbated by the ways in which specific algorithms have been used to fragment political communication and undermine political agency in the public sphere [82,103].

The preservation of the public sphere falls outside the scope of most public sector decision-making about AI governance, but this boundary condition can be read to obligate the preservation of institutional mechanisms by which social groups engage with AI systems through public institutions and in the public sphere. This understanding resonates strongly with notions of algorithmic accountability [104] that have been prominent in popular debate, but which have failed to find any operational purchase [105].

In seeking to operationalize this condition, public sector decision-making might instead rely on self-sovereign identity frameworks that help users of algorithms and data systems to manage their data ownership and consent [98], alternative platforms and formats for informing the consent of system users [106], toolkits for non-expert engagement in algorithmic design and review [94], or institutional mechanisms for grievances and redress that are familiar from other institutional contexts, but which are easily adapted to AI governance [107].

4.4.4. Common meaning

In explaining the boundary condition of common meaning, Missimer et al. [43] emphasize

    “the role of common culture and meaning in the creation of social capital, both horizontal and vertical. Particularly in the absence of a long history of reciprocity and the trust which that engenders, stakeholders will often make the decision to enter into the initial reciprocities on the basis of their belief that they share representations, interpretations, and systems of meaning” (p. 37).

These issues are best understood in regard to the values that are embedded in AI systems, and how they influence AI’s interactions with society and societal outcomes [108]. Drawing on UNESCO’s [109] notion of power differentials in any given society, the question is whose values are pursued and embodied by AI. For the public sector, this implies a boundary condition of ensuring that AI systems do not embody or manifest values that are antithetical to societal values or the values held by affected stakeholder groups.

Protecting this boundary condition requires the public sector to identify key societal values and to ensure that they are embedded in AI platforms whose behavior will to some degree be opaque and unpredictable. The deliberative and consultative processes discussed in regard to diversity are good mechanisms for identifying and defining values, though Dignum [78] notes that the most determinant values in AI systems are often implicit and dependent on socio-cultural context, and so require specific methodologies to make them explicit in AI design and implementation. This call has been met by a host of technical and procedural methods for value-sensitive algorithmic design [110–112].

Simply embedding values in AI is likely insufficient to protect this boundary condition, however, because AI systems are capable of pursuing conflicting objectives simultaneously [113], and can develop in surprising and opaque ways over time, either through their own learning processes [114,115] or through hidden feedback loops with human actors and other algorithmic processes [116,117].

In line with Neyland’s [105] notion of accountability in action, protecting the boundary condition of common meaning and societal values requires a process-based approach to embedding values in AI systems, and Rahwan’s [118] society-in-the-loop (SITL) paradigm describes an architecture with which to do so. This builds on the concept of human-in-the-loop systems, whereby individual people interact with AI processes and outcomes to improve their accuracy, identify deviance from desired outcomes, and provide accountability. Because AI systems increasingly serve broad social functions and humans are prone to bias and fallibility, Rahwan proposes a paradigm in which questions about fundamental rights, ethical values, and societal preferences are directly and regularly directed to the human controllers interacting with AI systems. Rahwan identifies several mechanisms for monitoring and enforcing AI compliance with societal values, including reporting, auditing, and oversight programs. The precise mechanisms that are most appropriate in any given context will in turn be defined in regard to societal values and should be determined through the inclusive mechanisms described above.

4.4.5. Trust

Missimer et al. [43] understand trust as a driver of social capital, closely linked to the boundary of common meaning described above. The concept of trust is, however, an exceptionally prominent marker for good AI in the debate on AI and society (see the European Commission’s Ethics guidelines for trustworthy AI,1 the OECD Principles on Artificial Intelligence,2 and IBM’s Principles for Trust and Transparency3) and deserves special consideration. As a boundary condition, Missimer et al. [43] define trust as “a quality of connection, which allows the system to remain together despite the level of internal complexity” (p. 37), which resonates strongly with the notion that societal trust is a necessary condition for pursuing social good outcomes through AI [13,119], which in turn requires specific investments from the public sector [120].

1 https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai.
2 http://www.oecd.org/going-digital/ai/principles/.
3 https://www.ibm.com/blogs/policy/trust-principles/.

The over-simplified rhetoric of trust-building in global policy discourse, e.g. Ref. [121], belies the complex and highly contested quality of trust as a concept [122]. In the context of AI and society, several scholars have noted that social trust in AI systems should not be blind, but appropriate [123,124], suggesting that the boundary condition in this instance is to preserve the foundation for trust, rather than to preserve trust itself. This in turn requires “allowing people to determine the conditions and parameters under which algorithms operate and to redefine the boundaries between trust and privacy” [8, p. 373]. This is complicated, however, by the fact that most people engaging with AI systems, knowingly or not, will lack the technical and practical expertise to make informed judgements about whether to trust those systems [125]. Often, an individual’s trust in AI will be extended through trust in proxies for those systems. In some instances, this can be explicit. For example, Bodó [126] has described trust mediators, which vouch for and explain the conditions of trust, in much the same way as Janssen et al. [98] describe data stewards. In other instances the proxy nature of trust is implicit, because trust is systemic in nature, invested in the larger system of public and private actors that are associated with the AI at issue [127].

Understood as such, the boundary condition of trust compels the public sector not only to protect active and explicit trust in AI, but to protect the trust that is implied by the use of those AI and related systems and services, knowingly or not. This can be most easily operationalized in terms of protecting the trustworthiness of AI; in other words, ensuring that AI is not designed and used in any way that, if known, would erode trust in that AI or the systems and services in which it is embedded. Google’s earlier and now infamous motto, don’t be evil, becomes the histrionic corollary to this boundary condition for public sector decision-making: don’t betray implied trust.

Operationally, several of the approaches and mechanisms described above can contribute to a hands-on evaluation of whether AI governance decisions risk betraying implied trust in AI, including adapted mechanisms for facilitating informed consent [102]. Publicly visible institutional mechanisms such as third-party auditing, sharing of incidents, and bias bounties have also been recommended [128]. National dialogues and the development of national institutions mandated to foster public trust in AI can support all of these [5,83]. It is also worth noting the potential backfire of superficial efforts to build public trust through marketing and public relations [129].

5. Discussion and conclusion

5.1. An integrated model of boundary conditions for sustainable AI

The preceding section presented the results of our 6-step literature review and analysis. As described in Table 2, this began with a broad query about how sustainable AI has been deliberately conceptualized by the research community and concluded with a narrower discussion of boundary conditions for social sustainability in the FSSD. The section closed by discussing how each boundary condition is reflected in the broader debate about AI and society, and how those conditions might be operationalized for public sector decision-making about AI.

In doing so, we found that the boundary conditions for strategic social sustainability identified by Missimer et al. [43] align well with contemporary debate on AI ethics and society, and have a good conceptual fit with the notion of sustainable AI mapped at the beginning of our literature review. Importantly, we find that the broader literature on AI and society helps to identify clear criteria and approaches for how each of these boundary conditions can be operationalized in public sector decision-making. This is presented in Table 4, where boundary conditions for sustainable development are aligned with corresponding concepts from the broader literature on AI and society, suggesting operational criteria, approaches, and guiding questions for each (references for operational approaches can be found at the end of each sub-section on boundary conditions in the previous section).

Table 4
An integrated model of boundary conditions for sustainable AI.

Diversity
• Corresponding concepts: inclusive participation.
• Criteria for preservation: all stakeholders affected by or interacting with the AI system; emphasis on stakeholders with traditionally limited access.
• Operational approaches (a): national level multi-stakeholder dialogues and assemblies; national commissions for regulation or trust-building; deliberative processes; inclusive impact assessments for AI.
• Guiding questions: Does the design and implementation of AI incorporate the views and needs of all affected stakeholder groups?

Capacity for learning
• Corresponding concepts: transparency and explainability.
• Criteria for preservation: discoverability and knowability of process.
• Operational approaches (a): inclusive audits of AI systems; participatory technology assessments; systematic disclosure systems; external knowledge brokers.
• Guiding questions: Do people who are affected understand how it works and what the outcomes are?

Capacity for self-organization
• Corresponding concepts: agency, consent, and accountability.
• Criteria for preservation: AI systems are subject to democratic principles and institutions, in regard to the design, implementation, and monitoring phases.
• Operational approaches (a): grievance and complaint mechanisms; informed consent; toolkits for non-expert engagement; systematic disclosure systems.
• Guiding questions: Do people know they are using it? Are affected stakeholders invited to engage in the design and review of AI as it is implemented? Is there a clear complaint mechanism for affected stakeholders?

Common meaning
• Corresponding concepts: embedded values.
• Criteria for preservation: understanding which values are represented by AI systems; identifying the values held by society and by affected stakeholder groups.
• Operational approaches (a): society in the loop (facilitated social debate on values); human controllers that oversee and update AI systems.
• Guiding questions: Does the implementation of AI match the values held by affected stakeholder groups and society in general?

Trust
• Corresponding concepts: appropriate and systemic trust.
• Criteria for preservation: trust is defined by those who actively choose to trust AI systems; trust is deserved.
• Operational approaches (a): alternative platforms for informed consent; trust mediators; national institutions for trust-building.
• Guiding questions: Should people trust the AI at issue?

(a) The approaches listed here are also described, with references, at the end of each subsection 4.4.1–4.4.5.

Collectively, we present these conditions as an integrated model for public sector decision-making that is both holistic and preliminary. Holistic, because the boundary conditions are interdependent and regularly redundant. This is evident both in the operational approaches, many of which may well help to protect boundary conditions other than those with which they are associated in Table 4, and in the interdependence of many of the corresponding concepts (i.e., neither trust nor agency is obviously feasible without transparency and explainability).

The model is also preliminary; it is intended to be refined and applied iteratively, because it is premised on protecting a status quo which does not in fact seem to exist. As mentioned in the discussion of several boundary conditions, the inherent opacity and inaccessibility of AI technologies has led to a situation in which many of these boundary conditions are already degraded. AI technologies and systems are by default neither inclusive nor transparent. As a result, public sector decision-making must take its own limited mandate and the obligation to do no further harm as its point of departure in applying this integrated model to decision-making. Discrete decisions about how algorithms and machine learning are used in managing social welfare cases will not be able to protect the public’s capacity to self-organize in any grand sense. They will, however, be able to assess how the public could organize in response to a specific initiative, and that in turn will have knock-on effects in regard to the other boundary conditions and beyond the specific AI implementation at issue.

This model is in keeping with how the FSSD was intended to be leveraged [43]. It also provides a clear complement to discrete tools that are intended to support the public sector in specific aspects of AI governance, and to the integrated framework asserted by Wirtz et al. [8], insofar as it provides a decision-making framework for actual implementation of its various components.

5.2. Concluding remarks and limitations

This paper aimed to explore how the concept of sustainable AI can be defined and operationalized to guide public sector decision-making, in order to support efforts to operationalize ethical AI in the public sector. In doing so it found that there is no widely held definition of sustainable AI, despite its increasing use in the research context. Conceptualizations of sustainable AI nevertheless did consistently reference the sustainable development framework associated with the SDGs, validating the use of that paradigm to elaborate the concept in the context of public sector governance. In addition, aligning a narrower review of that literature with public sector decision-making identified five boundary conditions for sustainable AI, which the public sector should aim to preserve. Considering these in the context of AI governance resulted in the following boundary conditions for sustainable AI in the public sector: 1) diversity and inclusion; 2) capacity for learning, transparency, and explainability; 3) capacity for self-organization, agency, and accountability; 4) common meaning and embedded values; and 5) systemic and implied trust. These five conditions were presented as an integrated model, together with operational conditions and approaches that can be leveraged to inform public sector decision-making.

By proposing this integrated model, this paper makes several contributions to both the theory and practice of sustainable AI. Most immediately, this involves contributing clarity and rigor to what is in danger of becoming a buzzword in both policy and research discourse about AI and society. Most notably, the boundary conditions here are presented in a manner that facilitates the elaboration of empirical indicators in line with concept explications that have been advanced in communications theory [130], or Goertz’s seminal method for defining social science concepts [131]. Careful applications of these methods would provide a framework for elaborating some of the theoretical premises implied above and the types of contexts in which they are theoretically sound [132]. This is a crucial first step before empirically assessing whether the theories implicit in these boundary conditions do in fact hold (for example, that inclusive participation does indeed strengthen and protect social sustainability).

In terms of policy and practice, this paper contributes towards operationalizing vague discourses about how AI should be considered in the public sector. Building on important work by Wirtz et al. and others [8,21,86,133], the integrated model for sustainable AI presented here provides a series of practical tests that can be applied by public sector workers at varying degrees of detail when making decisions about how to use or regulate AI. This does not in itself solve the inherent knowledge and capacity gaps that are manifest around AI in the public sector [12]. The integrated model, and the approaches it references, require further exploration, consideration, and contextual analysis to determine if and how they should be applied. It is not certain that this model is immediately applicable across contexts, and implementation of the model may well suggest adjustments to the boundary conditions or associated concepts outlined here. As with Missimer et al.’s [43] model of social sustainability more generally, this is “a starting point, expandable and condensable if necessary” (p. 38).

We nevertheless propose that this model provides a useful and accessible starting point for non-technical or substantive experts, and that its alignment with the SDG policy framework can significantly strengthen recognition and policy salience in a public sector context. In particular, we believe the conceptual framework, suggested approaches, and especially the guiding questions, can make a significant contribution to helping public sector workers understand, anticipate, and manage AI’s societal impact. It does so by building on and complementing recent efforts towards the operationalization of ethical AI, and particularly the integrative framework asserted by Wirtz et al., which elaborates the processes and layers at which public sector workers must engage to avoid harms caused by AI. To this end, the current model provides guidance on how the public sector should make decisions about AI governance, by elaborating red lines that cannot be crossed if values related to fairness and safety are to be preserved, conceptualized here as social sustainability. This is one of many preliminary steps towards ensuring that the public sector governs AI sustainably.

However, several limitations should be noted. Firstly, and as described above, the literature considering AI and society as relevant to public sector decision making is vast and diffuse, and our review has been deliberate, but not comprehensive. There is much literature which we have not considered, and our efforts to narrow the conceptual focus of sustainable AI have closed some doors which might be worth keeping open in other contexts. Notions of AI sustainability more closely linked to environmental or economic concerns, for example, may be important for other policy or research pursuits. We do not see these as mutually exclusive, however, and are convinced that despite these conceptual limitations, this analysis makes an important conceptual and theoretical contribution by providing the foundation for rigorously explicating the notion of sustainable AI in the public sector.

CRediT author statement

Christopher Wilson: conceptualization, methodology, formal analysis, investigation, writing – original draft, writing – review & editing. Maja van der Velden: conceptualization, methodology, formal analysis, investigation, writing – original draft, writing – review & editing.

References

[1] W.G. de Sousa, E.R.P. de Melo, P.H.D.S. Bermejo, R.A.S. Farias, A.O. Gomes, How and where is artificial intelligence in the public sector going? A literature review and research agenda, Govern. Inf. Q. 36 (2019) 101392, https://doi.org/10.1016/j.giq.2019.07.004.
[2] C. Wilson, Public engagement and AI: a values analysis of national strategies, Govern. Inf. Q. 39 (2022) 101652, https://doi.org/10.1016/j.giq.2021.101652.

[3] V. Galaz, M.A. Centeno, P.W. Callahan, A. Causevic, T. Patterson, I. Brass, S. Baum, D. Farber, J. Fischer, D. Garcia, T. McPhearson, D. Jimenez, B. King, P. Larcey, K. Levy, Artificial intelligence, systemic risks, and sustainability, Technol. Soc. 67 (2021) 101741, https://doi.org/10.1016/j.techsoc.2021.101741.
[4] R.K.E. Bellamy, K. Dey, M. Hind, S.C. Hoffman, S. Houde, K. Kannan, P. Lohia, S. Mehta, A. Mojsilovic, S. Nagar, K.N. Ramamurthy, J. Richards, D. Saha, P. Sattigeri, M. Singh, K.R. Varshney, Y. Zhang, Think your artificial intelligence software is fair? Think again, IEEE Softw. 36 (2019) 76–80, https://doi.org/10.1109/MS.2019.2908514.
[5] C. Cath, S. Wachter, B. Mittelstadt, M. Taddeo, L. Floridi, Artificial intelligence and the ‘good society’: the US, EU, and UK approach, Sci. Eng. Ethics 24 (2018) 505–528, https://doi.org/10.1007/s11948-017-9901-7.
[6] E.M. Witesman, L.C. Walters, Modeling public decision preferences using context-specific value hierarchies, Am. Rev. Publ. Adm. 45 (2015) 86–105, https://doi.org/10.1177/0275074014536603.
[7] L. Reardon, Networks and problem recognition: advancing the multiple streams approach, Pol. Sci. 51 (2018) 457–476, https://doi.org/10.1007/s11077-018-9330-8.
[8] B.W. Wirtz, J.C. Weyerer, B.J. Sturm, The dark sides of artificial intelligence: an integrated AI governance framework for public administration, Int. J. Publ. Adm. 43 (2020) 818–829, https://doi.org/10.1080/01900692.2020.1749851.
[9] A.A. Guenduez, T. Mettler, K. Schedler, Technological frames in public administration: what do public managers think of big data? Govern. Inf. Q. 37 (2020) 101406, https://doi.org/10.1016/j.giq.2019.101406.
[10] M. Janssen, G. Kuk, The challenges and limits of big data algorithms in technocratic governance, Govern. Inf. Q. 33 (2016) 371–377, https://doi.org/10.1016/j.giq.2016.08.011.
[11] D. Kolkman, The usefulness of algorithmic models in policy making, Govern. Inf. Q. 37 (2020) 101488, https://doi.org/10.1016/j.giq.2020.101488.
[12] A. Zuiderwijk, Y.-C. Chen, F. Salem, Implications of the use of artificial intelligence in public governance: a systematic literature review and a research agenda, Govern. Inf. Q. 38 (2021) 101577, https://doi.org/10.1016/j.giq.2021.101577.
[28] J. Berryhill, K.K. Heang, R. Clogher, K. McBride, Hello, World: Artificial Intelligence and its Use in the Public Sector, OECD, 2019, https://doi.org/10.1787/726fd39d-en.
[29] J. Reis, P.E. Santo, N. Melão, Artificial intelligence in government services: a systematic literature review, in: Á. Rocha, H. Adeli, L.P. Reis, S. Costanzo (Eds.), New Knowl. Inf. Syst. Technol., Springer International Publishing, Cham, 2019, pp. 241–252, https://doi.org/10.1007/978-3-030-16181-1_23.
[30] J. Eager, M. Whittle, J. Smit, G. Cacciaguerra, E. Lale-Demoz, Opportunities of Artificial Intelligence, Think Tank, European Parliament, Brussels, 2020. https://www.europarl.europa.eu/thinktank/en/document/IPOL_STU(2020)652713. (Accessed 1 February 2022).
[31] A. Ingrams, W. Kaufmann, D. Jacobs, In AI we trust? Citizen perceptions of AI in government decision making, Pol. Internet (2021) 1–20, https://doi.org/10.1002/poi3.276.
[32] P.D. König, G. Wenzelburger, The legitimacy gap of algorithmic decision-making in the public sector: why it arises and how to address it, Technol. Soc. 67 (2021), https://doi.org/10.1016/j.techsoc.2021.101688.
[33] K. Yeung, M. Lodge, Algorithmic regulation, in: K. Yeung, M. Lodge (Eds.), Algorithmic Regul., Oxford University Press, Oxford, 2019, pp. 1–18, https://doi.org/10.1093/oso/9780198838494.003.0001.
[34] World Commission on Environment and Development, Report of the World Commission on Environment and Development: Our Common Future, 1987, https://doi.org/10.1080/07488008808408783.
[35] S. McKenzie, Social Sustainability: towards some definitions, Hawke Res. Inst. Work. Pap. Ser. (2004) 31.
[36] United Nations, Transforming Our World: the 2030 Agenda for Sustainable Development, 2015, https://doi.org/10.1201/b20466-7.
[37] R. Saner, L. Yiu, M. Nguyen, Monitoring the SDGs: digital and social technologies to ensure citizen participation, inclusiveness and transparency, Dev. Pol. Rev. (2019) 1–18, https://doi.org/10.1111/dpr.12433.
[38] United Nations Development Programme, Institutional and Coordination Mechanisms: Guidance Note on Facilitating Integration and Coherence for SDG Implementation, 2017.
[39] O.M. Van Den Broek, R. Klingler-Vidra, The UN Sustainable Development Goals as
[13] S.J. Mikhaylov, M. Esteve, A. Campion, Artificial intelligence for the public a North Star : How an Intermediary Network Makes , Takes , and Retro Fi Ts the
sector: opportunities and challenges of cross-sector collaboration, Philos. Trans. Meaning of the Sustainable Development Goals, Regulation and, 2021, https://
R. Soc. Math. Phys. Eng. Sci. 376 (2018) 20170357, https://doi.org/10.1098/ doi.org/10.1111/rego.12415.
rsta.2017.0357. [40] United Nations Department for Economic and Social Affairs, Compendium of
[14] W. Orr, J.L. Davis, Attributions of ethical responsibility by Artificial Intelligence National Institutional Arrangements for Implementing the 2030 Agenda for
practitioners, Inf. Commun. Soc. 23 (2020) 719–735, https://doi.org/10.1080/ Sustainable Development, United Nations Department for Economic and Social
1369118X.2020.1713842. Affairs, New York, 2019. https://sustainabledevelopment.un.org/content/
[15] P. 6, Ethics, regulation and the new artificial In℡ligence, Part I: accountability documents/22008UNPAN99132.pdf.
and power, Inf. Commun. Soc. 4 (2001) 199–229, https://doi.org/10.1080/ [41] G.I. Broman, K.H. Robèrt, A framework for strategic sustainable development,
713768525. J. Clean. Prod. 140 (2017) 17–31, https://doi.org/10.1016/j.
[16] A. Gupta, V. Heath, AI Ethics Groups Are Repeating One of Society’s Classic jclepro.2015.10.121.
Mistakes, MIT Technol. Rev., 2020. https://www.technologyreview.com/202 [42] E.S. Zeemering, Sustainability management , strategy and reform in local
0/09/14/1008323/ai-ethics-representation-artificial-intelligence-opinion/. government, Publ. Manag. Rev. 20 (2018) 136–153, https://doi.org/10.1080/
(Accessed 28 January 2022). 14719037.2017.1293148.
[17] T. Metzinger, Ethics Washing Made in Europe, Tagesspiegel Online, 2019. http [43] M. Missimer, K.-H. Robèrt, G. Broman, A strategic approach to social
s://www.tagesspiegel.de/politik/eu-guidelines-ethics-washing-made-in-europe sustainability – Part 1: exploring the social system, J. Clean. Prod. 140 (2017)
/24195496.html. (Accessed 28 January 2022). 32–41, https://doi.org/10.1016/j.jclepro.2016.03.170.
[18] B. Rossi, Why the Government’s Data Science Ethical Framework Is a Recipe for [44] R. Messerschmidt, S. Ullrich, A European Way towards Sustainable AI, Soc. Eur.,
Disaster, Inf. Age, 2016. https://www.information-age.com/why-governments- 2020. https://www.socialeurope.eu/a-european-way-towards-sustainable-ai.
data-science-ethical-framework-recipe-disaster-123461541/. (Accessed 28 (Accessed 5 June 2020).
January 2022). [45] A. Gupta, The Imperative for Sustainable AI Systems, the Gradient, 2021.
[19] L. Vesnic-Alujevic, S. Nascimento, A. Pólvora, Societal and ethical impacts of https://thegradient.pub/sustainable-ai/. (Accessed 31 December 2021).
artificial intelligence: critical notes on European policy frameworks, [46] M. Chavosh Nejad, S. Mansour, A. Karamipour, An AHP-based multi-criteria
Telecommun. Pol. 44 (2020) 101961, https://doi.org/10.1016/j. model for assessment of the social sustainability of technology management
telpol.2020.101961. process: a case study in banking industry, Technol. Soc. 65 (2021) 101602,
[20] J. Cussins Newman, Decision Points in AI Governance: Three Case Studies Explore https://doi.org/10.1016/j.techsoc.2021.101602.
Efforts to Operationalize AI Principles, CLTC UC Berkeley Center for Long-Term [47] G. Myers, K. Nejkov, Developing Artificial Intelligence Sustainably: toward a
Cybersecurity, Berkeley, CA, 2020. https://cltc.berkeley.edu/ai-decision-points/. Practical Code of Conduct for Disruptive Technologies, International Finance
(Accessed 28 January 2022). Corporation, Washington, DC, 2020, https://doi.org/10.1596/33613.
[21] D. Reisman, S. Schultz, K. Crawford, M. Whittaker, Algorithmic Impact [48] N. Aliman, L. Kester, P. Werkhoven, Sustainable AI safety? Delphi - interdiscip.
Assessment: A Practical Framework for Public Agency Accountability, AI Now Rev. Emerg. Technol. 2 (2020) 226–233, https://doi.org/10.21552/delphi/
Institute, 2018. https://ainowinstitute.org/aiareport2018.pdf. 2019/4/12.
[22] R.A. Irvin, J. Stansbury, Citizen participation in decision making: is it worth the [49] A. van Wynsberghe, Sustainable AI: AI for sustainability and the sustainability of
effort? Publ. Adm. Rev. 64 (2004) 55–65, https://doi.org/10.1111/j.1540- AI, AI Ethics 1 (2021) 213–218, https://doi.org/10.1007/s43681-021-00043-6.
6210.2004.00346.x. [50] F. Rohde, M. Gossen, J. Wagner, T. Santarius, Sustainability challenges of
[23] J. Haas, K.M. Vogt, Ignorance and investigation, in: Routledge Int. Handb. artificial intelligence and policy implications, Ökol. Wirtsch. - Fachz. 36 (2021)
Ignorance Stud., Routledge, 2015. 36–40, https://doi.org/10.14512/OEWO360136.
[24] B.W. Wirtz, R. Piehler, M.-J. Thomas, P. Daiser, Resistance of Public Personnel to [51] S. Larsson, M. Anneroth, A. Felländer, F. Heintz, R.C. Ångström, Sustainable AI:
Open Government: a cognitive theory view of implementation barriers towards an Inventory of the State of Knowledge of Ethical, Social, and Legal Challenges
open government data, Publ. Manag. Rev. 18 (2016) 1335–1364, https://doi.org/ Related to Artificial Intelligence, AI Sustainability Center, Stockholm, 2019.
10.1080/14719037.2015.1103889. [52] AI Sustainability Center, AI Sustainability Center, 2021. https://aisustainability.
[25] E. Thelisson, J.-H. Morin, J. Rochel, AI governance: digital responsibility as a org.
building block, Delphi - interdiscip, Rev. Emerg. Technol. 2 (2020) 167–178, [53] R. Vinuesa, H. Azizpour, I. Leite, M. Balaam, V. Dignum, S. Domisch, A. Felländer,
https://doi.org/10.21552/delphi/2019/4/6. S.D. Langhans, M. Tegmark, F. Fuso Nerini, The role of artificial intelligence in
[26] S.C. Robinson, Trust, transparency, and openness: how inclusion of cultural achieving the Sustainable Development Goals, Nat. Commun. 11 (2020) 233,
values shapes Nordic national public policy strategies for artificial intelligence https://doi.org/10.1038/s41467-019-14108-y.
(AI), Technol. Soc. (2020) 101421, https://doi.org/10.1016/j. [54] L. Bjørlo, Ø. Moen, M. Pasquine, The role of consumer autonomy in developing
techsoc.2020.101421. sustainable AI: a conceptual framework, Sustainability 13 (2021) 2332, https://
[27] O.J. Erdelyi, J. Goldsmith, Regulating Artificial Intelligence: Proposal for a Global doi.org/10.3390/su13042332.
Solution, Social Science Research Network, Rochester, NY, 2018. https://papers. [55] J.J. Yun, D. Lee, H. Ahn, K. Park, T. Yigitcanlar, Not deep learning but
ssrn.com/abstract=3263992. (Accessed 1 February 2022). autonomous learning of open innovation for sustainable artificial intelligence,
Sustain. Switz. 8 (2016), https://doi.org/10.3390/su8080797.

C. Wilson and M. van der Velden Technology in Society 68 (2022) 101926

[56] G.L. Tsafack Chetsa, Towards Sustainable Artificial Intelligence: A Framework to Create Value and Understand Risk, Apress, Berkeley, CA, 2021, https://doi.org/10.1007/978-1-4842-7214-5.
[57] C.-J. Wu, R. Raghavendra, U. Gupta, B. Acun, N. Ardalani, K. Maeng, G. Chang, F.A. Behram, J. Huang, C. Bai, M. Gschwind, A. Gupta, M. Ott, A. Melnikov, S. Candido, D. Brooks, G. Chauhan, B. Lee, H.-H.S. Lee, B. Akyildiz, M. Balandat, J. Spisak, R. Jain, M. Rabbat, K. Hazelwood, Sustainable AI: Environmental Implications, Challenges and Opportunities, ArXiv:2111.00364 [Cs], 2021. http://arxiv.org/abs/2111.00364. (Accessed 19 December 2021).
[58] C. Djeffal, Sustainable AI Development (SAID): on the Road to More Access to Justice, Social Science Research Network, Rochester, NY, 2018, https://doi.org/10.2139/ssrn.3298980.
[59] K. Porter, Shaping the future of sustainable AI and automation: why human rights still matter, Hum. Rights Def. 28 (2019) 33–35.
[60] E. Dahlin, Mind the gap! On the future of AI research, Humanit. Soc. Sci. Commun. 8 (2021) 1–4, https://doi.org/10.1057/s41599-021-00750-9.
[61] S.-C. Yeh, A.-W. Wu, H.-C. Yu, H.C. Wu, Y.-P. Kuo, P.-X. Chen, Public perception of artificial intelligence and its connections to the sustainable development goals, Sustainability 13 (2021) 9165, https://doi.org/10.3390/su13169165.
[62] C. Fernández-Aller, A.F. de Velasco, Á. Manjarrés, D. Pastor-Escuredo, S. Pickin, J.S. Criado, T. Ausín, An inclusive and sustainable artificial intelligence strategy for Europe based on human rights, IEEE Technol. Soc. Mag. 40 (2021) 46–54, https://doi.org/10.1109/MTS.2021.3056283.
[63] N.R. Pal, In search of trustworthy and transparent intelligent systems with human-like cognitive and reasoning capabilities, Front. Robot. AI 7 (2020), https://doi.org/10.3389/frobt.2020.00076.
[64] I. Kindylidi, T.S. Cabral, Sustainability of AI: the case of provision of information to consumers, Sustainability 13 (2021) 12064, https://doi.org/10.3390/su132112064.
[65] OsloMet, Nordic Center for Sustainable and Trustworthy AI Research (NordSTAR), 2021. https://www.oslomet.no/nordstar. (Accessed 28 January 2022).
[66] University of Bonn, Sustainable AI Lab, Sustain. AI Lab, 2021. https://sustainable-ai.eu/. (Accessed 28 January 2022).
[67] J. Elkington, Cannibals with Forks: the Triple Bottom Line of 21st Century Business, Capstone, Oxford, 1997.
[68] E.B. Barbier, The concept of sustainable economic development, Environ. Conserv. 14 (1987) 101–110, https://doi.org/10.1017/S0376892900011449.
[69] A. Colantonio, Social Sustainability: an Exploratory Analysis of its Definition, Assessment Methods, Metrics and Tools, Oxford Brookes University, Oxford, UK, 2007. http://www.brookes.ac.uk/schools/be/oisd/sustainable_communities/. (Accessed 12 June 2020).
[70] Ş.Y. Balaman, Chapter 4 - Sustainability issues in biomass-based production chains, in: Ş.Y. Balaman (Ed.), Decis.-Mak. Biomass-Based Prod. Chains, Academic Press, 2019, pp. 77–112, https://doi.org/10.1016/B978-0-12-814278-3.00004-2.
[71] L. Karbasi, Social Sustainability | UN Global Compact, n.d. https://www.unglobalcompact.org/what-is-gc/our-work/social. (Accessed 14 June 2020).
[72] S. Woodcraft, Design for Social Sustainability: A Framework for Creating Thriving New Communities, The Young Foundation, London, UK, 2012.
[73] A. Widok, Social Sustainability: Theories, Concepts, Practicability, Berlin, 2009, p. 9.
[74] G. Assefa, B. Frostell, Social sustainability and social acceptance in technology assessment: a case study of energy technologies, Technol. Soc. 29 (2007) 63–78, https://doi.org/10.1016/j.techsoc.2006.10.007.
[75] K. De Fine Licht, A. Folland, Defining “social sustainability”: towards a sustainable solution to the conceptual confusion, Etikk Praksis - Nord. J. Appl. Ethics (2019) 21–39, https://doi.org/10.5324/eip.v13i2.2913.
[76] M. Cuthill, Strengthening the ‘social’ in sustainable development: developing a conceptual framework for social sustainability in a rapid urban growth region in Australia, Sustain. Dev. 18 (2010) 362–373, https://doi.org/10.1002/sd.397.
[77] M. Missimer, K.-H. Robèrt, G. Broman, A strategic approach to social sustainability – Part 2: a principle-based definition, J. Clean. Prod. 140 (2017) 42–52, https://doi.org/10.1016/j.jclepro.2016.04.059.
[78] V. Dignum, Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way, Springer Nature, 2019.
[79] P. Alston, Digital Technology, Social Protection and Human Rights, OHCHR, Geneva, 2019. https://www.ohchr.org/EN/Issues/Poverty/Pages/DigitalTechnology.aspx. (Accessed 21 June 2020).
[80] J. Niklas, Conceptualizing Socio-Economic Rights in the Discussion on Artificial Intelligence, Social Science Research Network, Rochester, NY, 2019, https://doi.org/10.2139/ssrn.3569780.
[81] J. von Braun, AI and Robotics Implications for the Poor, Social Science Research Network, Rochester, NY, 2019, https://doi.org/10.2139/ssrn.3497591.
[82] UNESCO, Steering AI and Advanced ICTs for Knowledge Societies: a Rights, Openness, Access, and Multi-Stakeholder Perspective, UNESCO Digital Library, 2018. https://unesdoc.unesco.org/ark:/48223/pf0000372132. (Accessed 22 June 2020).
[83] G. Mulgan, A Machine Intelligence Commission for the UK: How to Grow Informed Public Trust and Maximise the Positive Impact of Smart Machines, 2016. https://media.nesta.org.uk/documents/a_machine_intelligence_commission_for_the_uk_-_geoff_mulgan.pdf.
[84] A. Hintz, Towards Civic Participation in the Datafied Society: can citizen assemblies democratize algorithmic governance? AoIR Sel. Pap. Internet Res. (2021), https://doi.org/10.5210/spir.v2021i0.11943.
[85] The Forum for Ethical AI, Democratising Decisions about Technology: A Toolkit, RSA, London, 2019.
[86] B. Balaram, T. Greenham, J. Leonard, Engaging Citizens in the Ethical Use of AI for Automated Decision-Making, RSA, London, 2018. https://www.thersa.org/globalassets/pdfs/reports/rsa_artificial-intelligence—real-public-engagement.pdf.
[87] J. Anderson, L. Rainie, Artificial intelligence and the future of humans, Pew Res. Cent. Internet Sci. Tech. (2018). https://www.pewresearch.org/internet/2018/12/10/artificial-intelligence-and-the-future-of-humans/. (Accessed 28 January 2022).
[88] D.B. Shank, A. DeSanti, T. Maninger, When are artificial intelligence versus human agents faulted for wrongdoing? Moral attributions after individual and joint decisions, Inf. Commun. Soc. 22 (2019) 648–663, https://doi.org/10.1080/1369118X.2019.1568515.
[89] T. Bucher, Neither black nor box: ways of knowing algorithms, in: S. Kubitschko, A. Kaun (Eds.), Innov. Methods Media Commun. Res., Springer International Publishing, Cham, 2016, pp. 81–98, https://doi.org/10.1007/978-3-319-40700-5_5.
[90] B. Goodman, S. Flaxman, European Union regulations on algorithmic decision-making and a “right to explanation”, AI Mag. 38 (2017) 50–57, https://doi.org/10.1609/aimag.v38i3.2741.
[91] A. Abdul, J. Vermeulen, D. Wang, B.Y. Lim, M. Kankanhalli, Trends and trajectories for explainable, accountable and intelligible systems: an HCI research agenda, in: Proc. 2018 CHI Conf. Hum. Factors Comput. Syst., Association for Computing Machinery, New York, NY, USA, 2018, pp. 1–18, https://doi.org/10.1145/3173574.3174156. (Accessed 28 January 2022).
[92] C.T. Wolf, K.E. Ringland, Designing accessible, explainable AI (XAI) experiences, ACM SIGACCESS Access. Comput. 6 (2020) 1, https://doi.org/10.1145/3386296.3386302.
[93] J. Kemper, D. Kolkman, Transparent to whom? No algorithmic accountability without a critical audience, Inf. Commun. Soc. 22 (2019) 2081–2096, https://doi.org/10.1080/1369118X.2018.1477967.
[94] A. Vestby, J. Vestby, Machine learning and the police: asking the right questions, Polic. J. Pol. Pract. 15 (2021) 44–58, https://doi.org/10.1093/police/paz035.
[95] S. Robbins, AI and the path to envelopment: knowledge as a first step towards the responsible regulation and use of AI-powered machines, AI Soc. 35 (2020) 391–400, https://doi.org/10.1007/s00146-019-00891-1.
[96] P.D. König, G. Wenzelburger, Opportunity for renewal or disruptive force? How artificial intelligence alters democratic politics, Govern. Inf. Q. 37 (2020) 101489, https://doi.org/10.1016/j.giq.2020.101489.
[97] J. Berscheid, F. Roewer-Despres, Beyond transparency, AI Matters 5 (2019) 13–22, https://doi.org/10.1145/3340470.3340476.
[98] M. Janssen, P. Brous, E. Estevez, L.S. Barbosa, T. Janowski, Data governance: organizing data for trustworthy artificial intelligence, Govern. Inf. Q. 37 (2020) 101493, https://doi.org/10.1016/j.giq.2020.101493.
[99] R. McGee, R. Carlitz, Learning Study on the Users in Technology for Transparency and Accountability Initiatives: Assumptions and Realities, 2013.
[100] V. Chiao, Fairness, accountability and transparency: notes on algorithmic decision-making in criminal justice, Int. J. Law Context 15 (2019) 126–139, https://doi.org/10.1017/S1744552319000077.
[101] V. Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, 2018.
[102] M.L. Jones, E. Edenberg, Troubleshooting AI and consent, in: M.D. Dubber, F. Pasquale, S. Das (Eds.), Oxf. Handb. Ethics AI, Oxford University Press, Oxford, UK, 2020, pp. 357–374, https://doi.org/10.1093/oxfordhb/9780190067397.013.23.
[103] M. Latonero, Governing Artificial Intelligence: Upholding Human Rights & Dignity, Data & Society, 2018. https://apo.org.au/sites/default/files/resource-files/2018-10/apo-nid196716.pdf.
[104] J.A. Kroll, J. Huey, S. Barocas, E.W. Felten, J.R. Reidenberg, D.G. Robinson, H. Yu, Accountable Algorithms, Social Science Research Network, Rochester, NY, 2016. https://papers.ssrn.com/abstract=2765268. (Accessed 28 January 2022).
[105] D. Neyland, Accountability and the algorithm, in: D. Neyland (Ed.), Everyday Life Algorithm, Springer International Publishing, Cham, 2019, pp. 45–71, https://doi.org/10.1007/978-3-030-00578-8_3.
[106] A.J. Andreotta, N. Kirkham, M. Rizzi, AI, big data, and the future of consent, AI Soc. (2021) 1–14, https://doi.org/10.1007/s00146-021-01262-5.
[107] L. Floridi, J. Cowls, M. Beltrametti, R. Chatila, P. Chazerand, V. Dignum, C. Luetge, R. Madelin, U. Pagallo, F. Rossi, B. Schafer, P. Valcke, E. Vayena, AI4People—an ethical framework for a good AI society: opportunities, risks, principles, and recommendations, Minds Mach. 28 (2018) 689–707, https://doi.org/10.1007/s11023-018-9482-5.
[108] H. Surden, Values Embedded in Legal Artificial Intelligence, Social Science Research Network, Rochester, NY, 2017, https://doi.org/10.2139/ssrn.2932333.
[109] UNESCO, Humanistic Futures of Learning: Perspectives from UNESCO Chairs and UNITWIN Networks, UNESCO, Paris, France, 2020. https://unesdoc.unesco.org/ark:/48223/pf0000372577. (Accessed 27 January 2022).
[110] Q.V. Liao, M. Muller, Enabling value sensitive AI systems through participatory design fictions, Preprint (2019) 7.
[111] H. Zhu, B. Yu, A. Halfaker, L. Terveen, Value-sensitive algorithm design: method, case study, and lessons, Proc. ACM Hum.-Comput. Interact. 2 (2018) 194:1–194:23, https://doi.org/10.1145/3274463.
[112] D. Loi, T. Lodato, C.T. Wolf, R. Arar, J. Blomberg, PD manifesto for AI futures, in: Proc. 15th Particip. Des. Conf. Short Pap. Situated Actions Workshop Tutor., vol. 2, Association for Computing Machinery, New York, NY, USA, 2018, pp. 1–4, https://doi.org/10.1145/3210604.3210614.


[113] P. Vamplew, R. Dazeley, C. Foale, S. Firmin, J. Mummery, Human-aligned artificial intelligence is a multiobjective problem, Ethics Inf. Technol. 20 (2018) 27–40, https://doi.org/10.1007/s10676-017-9440-6.
[114] S. Das, A. Dey, A. Pal, N. Roy, Applications of artificial intelligence in machine learning: review and prospect, Int. J. Comput. Appl. 115 (2015) 31–41.
[115] Z. Ghahramani, Probabilistic machine learning and artificial intelligence, Nature 521 (2015) 452–459, https://doi.org/10.1038/nature14541.
[116] D. Ensign, S.A. Friedler, S. Neville, C. Scheidegger, S. Venkatasubramanian, Runaway Feedback Loops in Predictive Policing, ArXiv:1706.09847 [Cs, Stat], 2017. http://arxiv.org/abs/1706.09847. (Accessed 28 January 2022).
[117] S. Milano, M. Taddeo, L. Floridi, Recommender systems and their ethical challenges, AI Soc. 35 (2020) 957–967, https://doi.org/10.1007/s00146-020-00950-y.
[118] I. Rahwan, Society-in-the-loop: programming the algorithmic social contract, Ethics Inf. Technol. 20 (2018) 5–14, https://doi.org/10.1007/s10676-017-9430-8.
[119] N. Tomašev, J. Cornebise, F. Hutter, S. Mohamed, A. Picciariello, B. Connelly, D.C.M. Belgrave, D. Ezer, F.C. van der Haert, F. Mugisha, G. Abila, H. Arai, H. Almiraat, J. Proskurnia, K. Snyder, M. Otake-Matsuura, M. Othman, T. Glasmachers, W. de Wever, Y.W. Teh, M.E. Khan, R.D. Winne, T. Schaul, C. Clopath, AI for social good: unlocking the opportunity for positive impact, Nat. Commun. 11 (2020) 2468, https://doi.org/10.1038/s41467-020-15871-z.
[120] T. Harrison, L.F. Luna-Reyes, T. Pardo, N. De Paula, M. Najafabadi, J. Palmer, The data firehose and AI in government: why data management is a key to value and ethics, in: Proc. 20th Annu. Int. Conf. Digit. Gov. Res., Association for Computing Machinery, New York, NY, USA, 2019, pp. 171–176, https://doi.org/10.1145/3325112.3325245.
[121] European Commission, White Paper on Artificial Intelligence: a European Approach to Excellence and Trust, European Commission, Brussels, 2020. https://ec.europa.eu/info/publications/white-paper-artificial-intelligence-european-approach-excellence-and-trust_en. (Accessed 28 January 2022).
[122] F. Bannister, R. Connolly, Trust and transformational government: a proposed framework for research, Govern. Inf. Q. 28 (2011) 137–147, https://doi.org/10.1016/j.giq.2010.06.010.
[123] M. Chui, M. Harryson, J. Manyika, R. Roberts, R. Chung, A. van Heteren, P. Nel, Applying AI for Social Good, McKinsey, San Francisco, 2018. https://www.mckinsey.com/featured-insights/artificial-intelligence/applying-artificial-intelligence-for-social-good. (Accessed 28 January 2022).
[124] P. Vassilakopoulou, Sociotechnical Approach for Accountability by Design in AI Systems, ECIS 2020 Res.-in-Prog. Pap., 2020. https://aisel.aisnet.org/ecis2020_rip/12.
[125] K.S. Gill, AI&Society: editorial volume 35.2: the trappings of AI agency, AI Soc. 35 (2020) 289–296, https://doi.org/10.1007/s00146-020-00961-9.
[126] B. Bodó, Mediated trust: a theoretical framework to address the trustworthiness of technological trust mediators, New Media Soc. 23 (2021) 2668–2690, https://doi.org/10.1177/1461444820939922.
[127] R. Steedman, H. Kennedy, R. Jones, Complex ecologies of trust in data practices and data-driven systems, Inf. Commun. Soc. (2020), https://doi.org/10.1080/1369118X.2020.1748090.
[128] M. Brundage, S. Avin, J. Wang, H. Belfield, G. Krueger, G. Hadfield, H. Khlaaf, J. Yang, H. Toner, R. Fong, T. Maharaj, P.W. Koh, S. Hooker, J. Leung, A. Trask, E. Bluemke, J. Lebensold, C. O’Keefe, M. Koren, T. Ryffel, J.B. Rubinovitz, T. Besiroglu, F. Carugati, J. Clark, P. Eckersley, S. de Haas, M. Johnson, B. Laurie, A. Ingerman, I. Krawczuk, A. Askell, R. Cammarota, A. Lohn, D. Krueger, C. Stix, P. Henderson, L. Graham, C. Prunkl, B. Martin, E. Seger, N. Zilberman, S.Ó. hÉigeartaigh, F. Kroeger, G. Sastry, R. Kagan, A. Weller, B. Tse, E. Barnes, A. Dafoe, P. Scharre, A. Herbert-Voss, M. Rasser, S. Sodhani, C. Flynn, T.K. Gilbert, L. Dyer, S. Khan, Y. Bengio, M. Anderljung, Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims, ArXiv:2004.07213 [Cs], 2020. http://arxiv.org/abs/2004.07213. (Accessed 1 February 2022).
[129] C. Bourne, AI cheerleaders: public relations, neoliberalism and artificial intelligence, Publ. Relat. Inq. 8 (2019) 109–125, https://doi.org/10.1177/2046147X19835250.
[130] S.H. Chaffee, Explication (Communication Concepts), Sage Publications, London, 1991.
[131] G. Goertz, Social Science Concepts: A User’s Guide, Princeton University Press, 2006, https://doi.org/10.2307/j.ctvcm4gmg.
[132] A.L. George, A. Bennett, Case Studies and Theory Development in the Social Sciences, MIT Press, Cambridge, MA, USA, 2005.
[133] L. Floridi, J. Cowls, M. Beltrametti, R. Chatila, P. Chazerand, V. Dignum, C. Luetge, R. Madelin, U. Pagallo, F. Rossi, B. Schafer, P. Valcke, E. Vayena, AI4People—an ethical framework for a good AI society: opportunities, risks, principles, and recommendations, Minds Mach. 28 (2018) 689–707, https://doi.org/10.1007/s11023-018-9482-5.

