of the nature of being and human ability to make sense of this. They also go to the
heart of the nature of humans and humanity.
These metaphysical issues are mostly related to artificial general intelligence
(AGI) or good old-fashioned AI (GOFAI), which is typically conceptualised in terms
of a symbolic and logical representation of the world. The idea is that AGI (which may
build on GOFAI, but does not have to) would display human reasoning abilities. To
reiterate a point made earlier: there currently are no AGI systems available, and there
is considerable disagreement about their possibility and likelihood. I am personally
not convinced that they are possible with current technologies, but I cannot prove
the point any more definitively than others, so I will remain agnostic on the point of
fundamental possibility. What seems abundantly clear, however, is that progress in
the direction of AGI is exceedingly slow. Hence, I do not expect any technology that
would be accepted as AGI by the majority of the expert communities to come into
existence during the coming decades.
The metaphysical ethical issues raised by AGI are therefore not particularly urgent,
and they do not drive policy considerations in the way that issues like discrimination
or unemployment do. Most policy documents on AI ignore these issues, on the
implicit assumption that they are not in need of policy development. In the empirical
research presented earlier in this section, these metaphysical issues were not identified
as issues that organisations currently engage with. There is probably also an element of fear on the part of scholars and experts of being stigmatised as unserious or unscholarly, because these metaphysical issues are a staple of science fiction.
I nevertheless include them in this discussion of ethical issues of AI for two
reasons. Firstly, these questions are thought-provoking, not only for experts but for
media and society at large, because they touch on many of the fundamental questions
of ethics and humanity. Secondly, some of these issues can shed light on the practical
issues of current AI by forcing clearer reflection on key concepts, such as autonomy
and responsibility and the role of technology in a good society.
The techno-optimistic version of AGI is that there will be a point when AI is
sufficiently advanced to start to self-improve, and an explosion of intelligence –
the singularity (Kurzweil 2006) – will occur due to a positive feedback loop of AI
onto itself. This will lead to the establishment of super-intelligence (Bostrom 2016).
The implication is that AGI will then not only be better than humans at most or all
cognitive tasks, but will also develop consciousness and self-awareness (Torrance
2012). The contributors to this discussion disagree on what would happen next. The
super-intelligent AGI might be benevolent and make human life better, it might see
humans as competitors and destroy us, or it might reside in a different sphere of
consciousness, ignoring humanity for the most part.
Speculations along those lines are not particularly enlightening: they say more
about the worldview of the speculator than anything else. But what is interesting is
to look at some of the resulting ethical issues in light of current technologies. One
key question is whether such AGIs could be subjects of responsibility. Could we
hold them morally responsible for their actions or the consequences of these actions
(Bechtel 1985)? To put it differently, is there such a thing as artificial morality
(Wallach and Allen 2008, Wallach et al. 2011)? This question is interesting because
it translates into the question: can we hold current AIs responsible? And this is a
practical question in cases where AIs can create morally relevant consequences, as
is the case for autonomous vehicles and many other systems that interact with the
world.
The question whether an entity can be a subject of moral responsibility, i.e.
someone or something of which or whom we can say, “X is responsible,” hinges
on the definition of responsibility (Fischer 1999). There is a large literature on this
question, and responsibility subjects typically have to fulfil a number of requirements: for example, an understanding of the situation, a causal role in events, the freedom to think and act, and the power to act.
The question of whether computers can be responsible is therefore somewhat
similar to the question of whether they can think. One could argue that, if they can
think, they can be responsible. However, Turing (1950) held the question of whether
machines can think to be meaningless and proposed the imitation game, i.e. the Turing
test, instead. Given the difficulty of the question, it is not surprising that an analogous approach to machine responsibility was devised: the moral Turing test, under which the moral status of a machine would be established by its being recognised as a moral agent by an independent interlocutor. The problem with that approach is
that it does not really address the issue. I have elsewhere suggested that a machine
that can pass the Turing test could probably also pass a moral Turing test (Stahl
2004).
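To make the structure of a moral Turing test concrete, here is a minimal sketch, with entirely hypothetical respondent and judge functions, since the literature does not prescribe a protocol at this level of detail. An interrogator poses dilemmas to a hidden respondent and guesses whether the answers came from a human or a machine:

```python
import random

# Hypothetical respondents: in a real test these would be a human
# participant and an AI system, each answering moral dilemmas.
def human_respondent(dilemma: str) -> str:
    return f"I would feel torn, but faced with '{dilemma}' I would act to protect others."

def machine_respondent(dilemma: str) -> str:
    return f"Weighing the expected outcomes of '{dilemma}', I would minimise total harm."

def moral_turing_test(dilemmas, judge) -> float:
    """Fraction of trials in which the judge correctly identifies the
    machine. A score near 0.5 means the machine's moral talk is
    indistinguishable from a human's, i.e. it 'passes' the test."""
    correct = 0
    for dilemma in dilemmas:
        respondent = random.choice([human_respondent, machine_respondent])
        answer = respondent(dilemma)
        guess = judge(dilemma, answer)  # judge returns "human" or "machine"
        actual = "machine" if respondent is machine_respondent else "human"
        correct += (guess == actual)
    return correct / len(dilemmas)

# A naive illustrative judge: treats explicit outcome-weighing as machine-like.
def keyword_judge(dilemma: str, answer: str) -> str:
    return "machine" if "Weighing" in answer else "human"

print(moral_turing_test(["switch the points", "lie to protect a friend"], keyword_judge))
```

The sketch also makes the objection visible: such a test measures only whether moral talk is recognised as human-like, not whether anything behind the talk amounts to moral agency.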
Much of the discussion of the moral status of AI hinges on the definition of
“ethics”. If one takes a utilitarian position, for example, it would seem plausible to
assume that computers would be at least as good as humans at undertaking a moral
calculus, provided they had the data to comprehensively describe possible states of
the world. This seems to be the reason why the trolley problem is so prominent in
the discussion of the ethics of autonomous vehicles (Wolkenstein 2018). The trolley
problem,3 which is based on the premise that an agent has to make a dilemmatic
decision between two alternatives, either of which will typically kill different actors,
has caught the attention of some scholars because it seems to map to possible real-
world scenarios in AI, notably with regard to the programming or behaviour of
autonomous vehicles. An autonomous vehicle can conceivably be put in a situation
that is similar to the trolley problem in that it has to make a rapid decision between
two ethically problematic outcomes. However, I would argue that this is based on
a misunderstanding of the trolley problem, which was devised by Philippa Foot
(1978) as an analytical tool to show the limitations of moral reasoning, in particular
utilitarianism. The dilemma structure is geared towards showing that there is not one
“ethically correct” response. It has therefore been argued (Etzioni and Etzioni 2017), rightly in my opinion, that the trolley problem does not help us determine whether machines can be ethical, because it can be fully resolved with recourse to existing standards of human responsibility.

3 A typical trolley problem would see an agent standing near the points where two railway lines merge into a single track. From the single track, a train approaches. Unaware of the train, a number of children are playing on the left-hand track, whereas a single labourer, also unaware of the train, is working on the right-hand track. The train is set to hit the children. By switching the points, the agent can divert the train onto the right-hand track, thereby saving the children’s lives but causing a single death. What should the agent do? That is the trolley problem.
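To see concretely why the dilemma structure defeats a purely utilitarian calculus, the following toy sketch (in Python; all numbers and rules are invented for illustration, not a proposal for vehicle software) evaluates the footnoted scenario under a utilitarian scoring function and under a crude deontological constraint:

```python
# Toy representation of the two available actions in the footnoted scenario.
outcomes = {
    "do_nothing":    {"deaths": 3, "agent_redirects_harm": False},  # children are hit
    "switch_points": {"deaths": 1, "agent_redirects_harm": True},   # labourer is hit
}

def utilitarian_score(outcome: dict) -> int:
    # Fewer deaths is better; trivial to compute given full data on world states.
    return -outcome["deaths"]

def deontologically_permissible(outcome: dict) -> bool:
    # A crude constraint: never actively redirect harm onto a person.
    return not outcome["agent_redirects_harm"]

best_utilitarian = max(outcomes, key=lambda a: utilitarian_score(outcomes[a]))
permitted = [a for a in outcomes if deontologically_permissible(outcomes[a])]

print(best_utilitarian)  # 'switch_points': the calculus prefers one death to three
print(permitted)         # ['do_nothing']: the constraint forbids switching
```

The two evaluations disagree, and everything ethically contentious is hidden in the choice of evaluation function, which is exactly the limitation the trolley problem was designed to expose.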
I have argued earlier that the key to understanding ethics is an understanding of
the human condition. We develop and use ethics because we are corporeal, and hence
vulnerable and mortal, beings who can feel empathy with others who have fears and
hopes similar to our own. This is the basis of our social nature and hence of our
ethics. If we use this starting point, then AI, in order to be morally responsible and
an ethical agent, would have to share these characteristics. At the moment no system
comes close to empathy. This has nothing to do with AI’s computational abilities,
which far exceed ours and have done for some time, but arises from the fact that AI
is simply not in the same category as we are.
This does not mean that we cannot assign a moral status to AI, or to some type
of AI. Humans can assign such a status to non-humans and have always done so,
for example by viewing parts of nature or artefacts as divine or by protecting certain
entities from being treated in certain ways.
Such a view of AI has the advantage of resolving some of the metaphysical
questions immediately. If an existentialist commitment to our shared social world is
a condition of being an ethical agent, then current AI simply falls out of the equation.
This does not mean that developers of autonomous vehicles do not need to worry any
more, but it does mean that they can use established mechanisms of responsibility,
accountability and liability to make design decisions. It also does not fundamentally
rule out artificial moral agents, but these would have to be of a very different nature
from current computing technologies.
This position does not solve all metaphysical questions. There are interesting
issues arising from the combination of humans and machines that need attention.
Actor-networks containing AI-enabled artefacts may well change some of our ethical
perceptions. The more AI gets integrated into our nature, the more new questions it raises. This starts with seemingly trivial questions about the prevalence of ubiquitous devices such as mobile phones and what they do to our agency. Cutting-edge technologies,
such as AI-supported brain computer interfaces, change what we can do, but they can
also change how we ascribe responsibility. In this sense questions of posthumanism
(Barad 2003) and human enhancement (Bostrom and Sandberg 2009, Coeckelbergh
2011) may be more interesting from the AI ethics perspective because they start with
existing ethical agency that may need to be adjusted.
Much more could of course be said about ethical issues of AI, but this chapter
has hopefully given a good overview and provided a useful categorisation of these
issues, as shown in Table 4.1.
The categorisation in Table 4.1 is not authoritative, and others are possible. A
different view that would come to similar conclusions would focus on the temporal
nature of the issues. Ordering ethical issues of AI by temporal proximity and urgency
is not new. Baum (2018) has suggested the distinction between “presentists” and
“futurists”, calling attention to near-term and long-term AI issues. Extending this
thought to the discussion of ethical issues of AI as presented in this chapter, one can
say that the ethical issues of machine learning are the most immediate ones and the
metaphysical ones are long-term, if not perpetual, questions. The category of issues
arising from living in the digital world is located somewhere in between. This view may
also have implications for the question of how, when and by whom ethical issues in
AI can be addressed, which will be discussed in the next chapter.
References
Access Now Policy Team (2018) The Toronto declaration: protecting the right to equality and
non-discrimination in machine learning systems. Access Now, Toronto. https://www.accessnow.org/cms/assets/uploads/2018/08/The-Toronto-Declaration_ENG_08-2018.pdf. Accessed 26
Sept 2020
Adler M, Ziglio E (eds) (1996) Gazing into the oracle: the Delphi method and its application to
social policy and public health. Jessica Kingsley, London
AI Now Institute (2017) AI Now 2017 report. https://ainowinstitute.org/AI_Now_2017_Report.pdf. Accessed 26 Sept 2020
Babuta A, Oswald M, Janjeva A (2020) Artificial intelligence and UK national security: policy
considerations. RUSI Occasional Paper. Royal United Services Institute for Defence and Security
Studies, London. https://rusi.org/sites/default/files/ai_national_security_final_web_version.pdf.
Accessed 21 Sept 2020
Barad K (2003) Posthumanist performativity: toward an understanding of how matter comes to
matter. Signs 28:801–831. https://doi.org/10.1086/345321
Baum SD (2018) Reconciliation between factions focused on near-term and long-term artificial
intelligence. AI Soc 33:565–572. https://doi.org/10.1007/s00146-017-0734-3
Bechtel W (1985) Attributing responsibility to computer systems. Metaphilosophy 16:296–306.
https://doi.org/10.1111/j.1467-9973.1985.tb00176.x
Berendt B (2019) AI for the common good?! Pitfalls, challenges, and ethics pen-testing. Paladyn J
Behav Robot 10:44–65. https://doi.org/10.1515/pjbr-2019-0004
BmVI (2017) Ethik-Kommission: Automatisiertes und vernetztes Fahren. Bundesministerium für
Verkehr und digitale Infrastruktur, Berlin. https://www.bmvi.de/SharedDocs/DE/Publikationen/DG/bericht-der-ethik-kommission.pdf?__blob=publicationFile. Accessed 26 Sept 2020
Boden MA (2018) Artificial intelligence: a very short introduction, Reprint edn. Oxford University
Press, Oxford
Bostrom N (2016) Superintelligence: paths, dangers, strategies, Reprint edn. Oxford University
Press, Oxford and New York
Bostrom N, Sandberg A (2009) Cognitive enhancement: methods, ethics, regulatory challenges.
Sci Eng Ethics 15:311–341
Busch T (2011) Capabilities in, capabilities out: overcoming digital divides by promoting corporate
citizenship and fair ICT. Ethics Inf Technol 13:339–353
Buttarelli G (2018) Choose humanity: putting dignity back into digital. In: Speech at 40th interna-
tional conference of data protection and privacy commissioners, Brussels. https://www.privacyconference2018.org/system/files/2018-10/Choose%20Humanity%20speech_0.pdf. Accessed 26
Sept 2020
CDEI (2019) Interim report: Review into bias in algorithmic decision-making. Centre for Data
Ethics and Innovation, London. https://www.gov.uk/government/publications/interim-reports-from-the-centre-for-data-ethics-and-innovation/interim-report-review-into-bias-in-algorithmic-decision-making. Accessed 26 Sept 2020
Coeckelbergh M (2011) Human development or human enhancement? A methodological reflection
on capabilities and the evaluation of information technologies. Ethics Inf Technol 13:81–92.
https://doi.org/10.1007/s10676-010-9231-9
Coeckelbergh M (2019) Artificial Intelligence: some ethical issues and regulatory challenges. In:
Technology and regulation, pp 31–34. https://doi.org/10.26116/techreg.2019.003
Kaplan A, Haenlein M (2019) Siri, Siri, in my hand: who’s the fairest in the land? On the
interpretations, illustrations, and implications of artificial intelligence. Bus Horiz 62:15–25
Kleinig J, Evans NG (2013) Human flourishing, human dignity, and human rights. Law Philos
32:539–564. https://doi.org/10.1007/s10982-012-9153-2
Krafft T, Hauer M, Fetic L et al (2020) From principles to practice: an interdisciplinary framework
to operationalise AI ethics. VDE and Bertelsmann Stiftung. https://www.ai-ethics-impact.org/resource/blob/1961130/c6db9894ee73aefa489d6249f5ee2b9f/aieig—report—download-hb-data.pdf. Accessed 26 Sept 2020
Kurzweil R (2006) The singularity is near. Gerald Duckworth & Co, London
Latonero M (2018) Governing artificial intelligence: upholding human rights & dignity. Data
& Society. https://datasociety.net/wp-content/uploads/2018/10/DataSociety_Governing_Artificial_Intelligence_Upholding_Human_Rights.pdf. Accessed 26 Sept 2020
Lessig L (1999) Code: and other laws of cyberspace. Basic Books, New York
Linstone HA, Turoff M (eds) (2002) The Delphi method: techniques and applications. Addison-
Wesley Publishing Company, Advanced Book Program. https://web.njit.edu/~turoff/pubs/delphibook/delphibook.pdf. Accessed 26 Sept 2020
Macnish K, Ryan M, Gregory A et al. (2019) SHERPA deliverable D1.1 Case studies. De Montfort
University. https://doi.org/10.21253/DMU.7679690.v3
McSorley K (2003) The secular salvation story of the digital divide. Ethics Inf Technol 5:75–87.
https://doi.org/10.1023/A:1024946302065
Müller VC (2020) Ethics of artificial intelligence and robotics. In: Zalta EN (ed) The Stanford
encyclopedia of philosophy, Fall 2020. Metaphysics Research Lab, Stanford University, Stanford,
CA
Nemitz P (2018) Constitutional democracy and technology in the age of artificial intelligence. Phil
Trans R Soc A 376:20180089. https://doi.org/10.1098/rsta.2018.0089
Nicas J (2020) Apple reaches $2 trillion, punctuating big tech’s grip. The New York Times. https://www.nytimes.com/2020/08/19/technology/apple-2-trillion.html. Accessed 26 Sept 2020
Parayil G (2005) The digital divide and increasing returns: contradictions of informational
capitalism. Inf Soc 21:41–51. https://doi.org/10.1080/01972240590895900
Raso FA, Hilligoss H, Krishnamurthy V et al. (2018) Artificial intelligence & human rights: oppor-
tunities & risks. Berkman Klein Center Research Publication No. 2018-6. http://dx.doi.org/10.2139/ssrn.3259344
Richardson R, Schultz J, Crawford K (2019) Dirty data, bad predictions: how civil rights violations
impact police data, predictive policing systems, and justice. N Y Univ Law Rev Online 192.
https://ssrn.com/abstract=3333423. Accessed 26 Sept 2020
Ryan M, Gregory A (2019) Ethics of using smart city AI and big data: the case of four large
European cities. ORBIT J 2. https://doi.org/10.29297/orbit.v2i2.110
Sachs JD (2012) From millennium development goals to sustainable development goals. Lancet
379:2206–2211. https://doi.org/10.1016/S0140-6736(12)60685-0
Santiago N (2020) Shaping the ethical dimensions of smart information systems: a European
perspective. SHERPA Delphi study, round 1 results. SHERPA project. https://www.project-sherpa.eu/wp-content/uploads/2020/03/sherpa-delphi-study-round-1-summary-17.03.2020.docx.pdf. Accessed 26 Sept 2020
Sharkey A, Sharkey N (2010) Granny and the robots: ethical issues in robot care for the elderly.
Ethics Inf Technol. https://doi.org/10.1007/s10676-010-9234-6
Sharkey N (2017) Why robots should not be delegated with the decision to kill. Conn Sci 29:177–
186. https://doi.org/10.1080/09540091.2017.1310183
Spinello RA, Tavani HT (2004) Intellectual property rights in a networked world: theory and
practice. Information Science Publishing, Hershey PA
Stahl BC (2004) Information, ethics, and computers: the problem of autonomous moral agents.
Minds Mach (Dordr) 14:67–83. https://doi.org/10.1023/B:MIND.0000005136.61217.93
Tao J, Tan T, Picard R (2005) Affective computing and intelligent interaction. Springer, Berlin
Topol EJ (2019) High-performance medicine: the convergence of human and artificial intelligence.
Nat Med 25:44–56. https://doi.org/10.1038/s41591-018-0300-7
Torrance S (2012) Super-intelligence and (super-)consciousness. Int J Mach Conscious 4:483–501.
https://doi.org/10.1142/S1793843012400288
Turing AM (1950) Computing machinery and intelligence. Mind 59:433–460. https://doi.org/10.1093/mind/LIX.236.433
United Nations (2020) Sustainable development knowledge platform. https://sustainabledevelopment.un.org. Accessed 25 May 2020
USACM (2017) Statement on algorithmic transparency and accountability. ACM US Public
Policy Council, Washington DC. https://www.acm.org/binaries/content/assets/public-policy/2017_usacm_statement_algorithms.pdf. Accessed 26 Sept 2020
Wallach W, Allen C (2008) Moral machines: teaching robots right from wrong. Oxford University
Press, New York
Wallach W, Allen C, Franklin S (2011) Consciousness and ethics: artificially conscious moral
agents. Int J Mach Conscious 3:177–192. https://doi.org/10.1142/S1793843011000674
Wiener N (1954) The human use of human beings. Doubleday, New York
Wolkenstein A (2018) What has the trolley dilemma ever done for us (and what will it do in the
future)? On some recent debates about the ethics of self-driving cars. Ethics Inf Technol 1–11.
https://doi.org/10.1007/s10676-018-9456-6
Yin RK (2003a) Applications of case study research, 2nd edn. Sage Publications, Thousand Oaks
CA
Yin RK (2003b) Case study research: design and methods, 3rd edn. Sage Publications, Thousand
Oaks, CA
Zuboff S (2019) The age of surveillance capitalism: the fight for a human future at the new frontier
of power. Profile Books, London
Chapter 5
Addressing Ethical Issues in AI
Abstract This chapter reviews the proposals that have been put forward to address
ethical issues of AI. It divides them into policy-level proposals, organisational
responses and guidance for individuals. It discusses how these mitigation options
are reflected in the case studies exemplifying the social reality of AI ethics. The
chapter concludes with an overview of the stakeholder groups affected by AI, many
of whom play a role in implementing the mitigation strategies and addressing ethical
issues in AI.
The number of policy papers on AI is significant. Jobin et al. (2019) have provided
a very good overview, but it is no longer comprehensive, as the publication of policy
papers continues unabated. Several individuals and groups have set up websites,
databases, observatories or other types of resources to track this development. Some
of the earlier ones seem to have been one-off overviews that are no longer maintained,
such as the websites by Tim Dutton (2018), Charlotte Stix (n.d.) and NESTA (n.d.).
Others remain up to date, such as the website run by AlgorithmWatch (n.d.), or have
only recently come online, such as the websites by Ai4EU (n.d.), the EU’s Joint
Research Centre (European Commission n.d.) and the OECD (n.d.).
What most of these policy initiatives seem to have in common is that they aim to
promote the development and use of AI, while paying attention to social, ethical and
human rights concerns, often using the term “trustworthy AI” to indicate attention
to these issues. A good example of high-level policy aims that are meant to guide
further policy development is provided by the OECD (2019). It recommends to its
member states that they develop policies for the following five aims:
• investing in AI research and development
• fostering a digital ecosystem for AI
• shaping an enabling policy environment for AI
• building human capacity and preparing for labour market transformation
• international co-operation for trustworthy AI
Policy initiatives aimed at following these recommendations can cover a broad
range of areas, most of which have relevance to ethical issues. They can address
questions of access to data, distribution of costs and benefits through taxation or
other means, environmental sustainability and green IT, to give some prominent
examples.
These policy initiatives can be aspirational or more tangible. In order for them to
make a practical difference, they need to be translated into legislation and regulation,
as will be discussed in the following section.
At the time of writing this text (European summer 2020), there is much activity in
Europe directed towards developing appropriate EU-level legislation and regulation
of AI. The European Commission has launched several policy papers and proposals
(e.g. European Commission 2020c, d), notably including a White Paper on AI (Euro-
pean Commission 2020a). The European Parliament has shared some counterpro-
posals (European Parliament 2020a, b) and the political process is expected to lead
to legislative action in 2021.
Using the categories developed in this book, the question is whether – for the
purpose of the legislation – AI research, development and use will be framed in
terms of human flourishing, efficiency or control. The EC’s White Paper (European
Commission 2020a) is an interesting example to use when studying the relationship
between these different purposes. To understand this relationship, it is important to
see that the EC uses the term “trust” to represent ethical and social aspects, following
the High-Level Expert Group on AI (2019). This suggests that the role of ethics is
to allow people to trust a technology that has been pre-ordained or whose arrival
is inevitable. In fact, the initial sentences in the introduction to the White Paper
state exactly that: “As digital technology becomes an ever more central part of every
aspect of people’s lives, people should be able to trust it. Trustworthiness is also a
prerequisite for its uptake” (European Commission 2020a: 1). Ethical aspects of AI
are typically discussed by European bodies using the terminology of trust. The document largely follows this narrative, and its focus is on the economic advantages of AI, including improving the EU’s competitive position in the perceived international AI race.
However, there are other parts of the document that focus more on the human flour-
ishing aspect: AI systems are described as having a “significant role in achieving the
Sustainable Development Goals” (European Commission 2020a: 2), environmental
sustainability and ethical objectives. It is not surprising that a high-level policy initia-
tive like the EC’s White Paper combines different policy objectives. What is never-
theless interesting to note is that the White Paper contains two main areas of policy
objectives: excellence and trust. In Section 4, entitled “An ecosystem of excellence”,
the paper lays out policies to strengthen the scientific and technical bases of Euro-
pean AI, covering European collaboration, research, skills, work with SMEs and
the private sector, and infrastructure. Section 5, the second main part of the White
Paper, under “An ecosystem of trust”, focuses on risks, potential harms, liability
and similar regulatory aspects. This structure of the White Paper can be read to
suggest that excellence and trust are fundamentally separate, and that technical AI
development is paramount, requiring ethics and regulation to follow.
When looking at the suitability of legislation and regulation to address ethical
issues of AI, one can ask whether and to what degree these issues are already covered
by existing legislation. In many cases the question thus is whether legislation is fit
for purpose or whether it needs to be amended in light of technical developments.
Examples of bodies of law with clear relevance to some of the ethical issues are
intellectual property law, data protection law and competition law.
One area of law that is likely to be relevant and has already led to much high-level
debate is that of liability law. Liability law is used to deal with risks and damage
sustained from using (consumer) products, whether derived from new technologies
or not. Liability law is likely to play a key role in distributing risks and benefits of
AI (Garden et al. 2019). This explains the various EU-level initiatives (Expert Group
on Liability and New Technologies 2019; European Commission 2020b; European
Parliament 2020b) that try to establish who is liable for which aspects of AI. Relat-
edly, the allocation of strict and tort liabilities will set the scene for the greater AI
environment, including insurance and litigation.
Another body of existing legislation and regulation being promoted to address the
ethical issues of AI is that of human rights legislation. It has already been highlighted
that many of the ethical issues of AI are simultaneously human rights issues, such
as privacy and discrimination. Several contributors to the debate therefore suggest
that existing human rights regulation may be well suited to addressing AI ethics
issues. Proposals to this effect can focus on particular technologies, such as machine
learning (Access Now Policy Team 2018), or on particular application areas, such as
health (Committee on Bioethics 2019), or broadly propose the application of human
rights principles to the entire field of AI (Latonero 2018, Commissioner for Human
Rights 2019, WEF 2019).
The discussion of liability principles at EU level is a good example of the more
specific regulatory options that are being explored. In a recent review of regulatory options for the legislative governance of AI, in particular at the European level,
Rodrigues et al. (2020) surveyed the current legislative landscape and identified the
following proposals that are under active discussion:
• the adoption of common EU definitions
• algorithmic impact assessments under the General Data Protection Regulation
(GDPR)
• creating electronic personhood status for autonomous systems
• the establishment of a comprehensive EU system of registration of advanced
robots
• an EU task force of field-specific regulators for AI/big data
• an EU-level special list of robot rights
• a general fund for all smart autonomous robots
• mandatory consumer protection impact assessment
• regulatory sandboxes
• three-level obligatory impact assessments for new technologies
• the use of anti-trust regulations to break up big tech and appoint regulators
• voluntary/mandatory certification of algorithmic decision systems
All of these proposals were evaluated using a pre-defined evaluation strategy. The evaluation suggested that many of the options were broad in scope and lacked specific requirements (Rodrigues et al. 2020). They over-focused on well-established issues like bias and discrimination while neglecting other human rights concerns, and some, such as the creation of regulatory agencies and the mandating of impact assessments, would be highly resource-intensive.
Without going into more detail than appropriate for a Springer Brief, what seems
clear is that legislation and regulation will play a crucial role in finding ways to
ensure that AI promotes human flourishing. A recent review of the media discourse
of AI (Ouchchy et al. 2020) shows that regulation is a key topic, even though it is by
no means agreed whether and which regulation is desirable.
There is, however, one regulatory option currently being hotly debated that has
the potential to significantly affect the future shape of technology use in society, and
which I therefore discuss separately in the next section.
5.1.3 AI Regulator
The creation of a regulator for AI is one of the regulatory options. It only makes sense
to have one if there is something to regulate, i.e. if there is regulation that needs to
be overseen and enforced. In light of the multitude of regulatory options outlined in
the previous section, one can ask whether there is a need for a specific regulator for
AI, given that it is unclear what the regulation will be.
It is again instructive to look at the current EU discussion. The EC’s White Paper
(European Commission 2020a) treads very carefully in this respect and discusses
under the heading of “Governance” a network of national authorities as well as
sectoral networks of regulatory authorities. It furthermore proposes that a committee
of experts could provide assistance to the EC. This shows a reluctance to create a
new institution. The European Parliament’s counterproposal (2020a) takes a much
stronger position. It renews an earlier call for the designation of a “European Agency
for Artificial Intelligence”. Article 14 of the proposed regulation suggests the creation
of a supervisory authority in each European member state (see, e.g., Datenethikkom-
mission 2019) that would be responsible for enforcing ways of dealing with ethical
issues of AI. These national supervisory authorities will have to collaborate closely
with one another and with the European Commission, according to the proposal from
the European Parliament.
A network of regulators, or even the creation of an entire new set of regulatory
bodies, will likely encounter significant opposition. One key matter that needs to
be addressed is the exact remit of the regulator. A possible source of confusion is
indicated in the titles of the respective policy proposals. Where the EC speaks only
of artificial intelligence, the European Parliament speaks of AI, robotics and related
technologies. The lack of a clear definition of AI is likely to create problems.
A second concern relates to the distribution of existing and potential future respon-
sibilities. The question of the relationship between AI supervisory authorities and
existing sectoral regulators is not clear. If, for example, a machine learning system
used in the financial sector were to raise concerns about bias and discrimination, it
is not clear whether the financial regulator or the AI regulator would be responsible
for dealing with the issue.
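To illustrate the kind of evidence such a jurisdictional dispute would turn on, here is a minimal, hypothetical check of one widely used bias measure, the demographic parity difference, applied to invented lending decisions. Real assessments would involve many more metrics, and the flagging threshold below is made up:

```python
# Invented loan decisions (1 = approved) for two protected groups.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 1), ("group_a", 0),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

def approval_rate(group: str) -> float:
    outcomes = [approved for g, approved in decisions if g == group]
    return sum(outcomes) / len(outcomes)

# Demographic parity difference: gap in approval rates between groups.
gap = approval_rate("group_a") - approval_rate("group_b")
print(f"Approval gap: {gap:.2f}")  # 0.50 on this toy data

FLAG_THRESHOLD = 0.2  # hypothetical; no regulator has fixed such a number here
if abs(gap) > FLAG_THRESHOLD:
    print("Flag for review: possible discrimination in lending decisions")
```

Even in this toy form the remit question is visible: the metric is generic to algorithmic systems, while judging whether the gap is justified requires sector-specific knowledge of lending.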
While the question of creating a regulator, or some other governance structure capable of taking on the tasks of a regulator, remains open, such a body could be a useful mechanism for ensuring that any eventual regulation is enforced.
In fact, the possibility of enforcement is one of the main reasons for calls for regu-
lation. It has frequently been remarked that talk of ethics may be nothing but an
attempt to keep regulation at bay and thus render any intervention impotent (Nemitz
2018, Hagendorff 2019, Coeckelbergh 2019). It is by no means clear, however, that
legislative processes will deliver the mechanisms to successfully address the ethics
of AI (Clarke 2019a). It is therefore useful to understand other categories of mitiga-
tion measures, and that is why I now turn to the proposals that have been directed at
organisations.