
comment

Towards ethical and socio-legal governance in AI


Many high-level ethics guidelines for AI have been produced in the past few years. It is time to work towards
concrete policies within the context of existing moral, legal and cultural values, say Andreas Theodorou and
Virginia Dignum.


Artificial intelligence (AI) is increasingly believed to be the main transformative technology of our time. A pressing question is how to develop and deploy AI systems that are aligned with fundamental human principles and our legal system, and that serve the common good. As AI is now applied across many different domains, governments and policymakers are exploring how to shape the process of decision making and implementation of governmental policy in ways that ensure public safety, social stability and continued innovation. In this Comment, we discuss the challenges of producing such concrete policies, critique existing guidelines, and suggest practical steps towards establishing a socially beneficial policy for AI through the use of technical standards, legislation and education.

Governance is necessary to reduce the number of negative incidents, ensure trust and create long-term societal stability through the use of well-established tools and design practices. Well-designed regulations do not eliminate innovation but instead enhance it through the development and promotion of both socio-legal and technical means to enforce compliance1. Moreover, policy is needed to determine human responsibility in the development and deployment of intelligent systems, filling the gap that emerges from the increased automation of decision making. Furthermore, the ultimate aim of regulation is to ensure well-being for all in a sustainable world, so that it can guide responsible research, development and use of AI.

Ethical AI gives us more responsibility

The term AI refers to different issues, each of which has its own terminology, communities and expectations2. Depending on the focus and the context, AI can refer to:

1. A (computational) technology that is able to infer patterns and possibly draw conclusions from data; currently, AI technologies are often based on machine learning and/or neural-network-based paradigms.

2. A field of scientific research (this is the original reference and still the predominant one in academia); the field of AI includes the study of theories and methods for adaptability, interaction and autonomy of machines (virtual or physical).

3. An (autonomous) entity (for example, when one refers to 'an' AI); this is the most common reference in media and science fiction but also the most inaccurate one, as it often brings with it the (dystopic) view of malicious power.

Lack of understanding of these differences is one of the reasons why many AI guidelines and principles are so varied. Right now, this confusion leads to constant rewriting of similar high-level statements, creates exploitable loopholes and increases the public's misconceptions. Organizations can circumvent even their own AI ethics policies by claiming that a product is 'dumb' enough to avoid any need for compliance with any AI-specific policy, or 'smart' enough to claim 'autonomy' and pass any responsibility to the machine. For example, after the insolvency of Air Berlin, Lufthansa's prices for flights within Germany increased by up to 30%3. Lufthansa's response to an enquiry from the German consumer protection organization was that the algorithm acts autonomously and beyond the company's direct control. At the time of writing, Apple and Goldman Sachs are under investigation by New York's Department of Financial Services for discriminatory decisions made by the application system used in their newly launched credit card: male applicants for the card received significantly higher credit limits than their female spouses, even if the latter had higher credit scores4. When applicants tried to appeal these credit limits, the response was that the algorithm makes the final decision and that its actual decision-making system is a black box that cannot be challenged.

It is fundamental to recognize that technology, or the artefact that embeds that technology, cannot be separated from the socio-technical system of which it is a component. This system includes people and organizations in many different roles (for example, developer, manufacturer, user, bystander or policymaker), their interactions and the processes that organize these interactions. Guidelines, principles and strategies must direct these socio-technical systems. It is not the AI artefact or application that is ethical, trustworthy or responsible. Rather, it is the social component of the socio-technical system that can and should take responsibility and act in consideration of an ethical framework, such that the overall system can be trusted by society. Ethical AI is not, as some may claim, a way to give machines some kind of 'responsibility' for their actions and decisions, and in the process discharge people and organizations of their responsibility. On the contrary, ethical AI gives the people and organizations involved more responsibility and makes them more accountable.

Critique of high-level ethics guidelines

We need to move on from the high-level statements that AI ethics guidelines have produced. They often rely on context-specific keywords — for example, fairness — but do not address the cultural variety between the different societies affected by AI systems5. The multi-interpretability of such terms, including that of the concept of AI itself, may prove to be one of the greatest challenges of appropriately regulating intelligent systems6. Even if there is a common understanding of some of the terms, to avoid drawn-out semantic debates and to minimize the risk of adverse outcomes due to misunderstanding, there is a need for disclosure of the contextual definition given to each term. Without a precise framework, developers can easily claim that they comply with guidelines and develop ethical AI. At the same time, the opposite may happen: products could be branded unethical (which would destroy consumer confidence) because of a company's failure to follow some very loose guidelines. Either outcome could potentially lead to long-term harm to the public's trust.

Actionable policy to assess, develop, incentivize and support the use and development of AI should thus focus on the social aspects of AI. Recommendations, such as those put forward by the AI4People task force7 or the European High-Level Expert Group on AI8, as well as the standardization efforts of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems9, recognize this necessity. Moreover, governance is not only about ethics. AI systems need to be legal and reliable as well. Abidance by the law is the basis of any application, whereas ethics is the 'sky', the horizon we aim to attain7,10. However, as Mireille Hildebrandt11 points out: "whereas law and the Rule of Law introduce checks and balances and demand democratic participation (at least in constitutional democracies), ethics may be decided by tech developers or behind the closed doors of the board room of corporate business enterprise. It can thus obtain the force of technology." That is, law is transparent and explicit and can be audited by external parties. If the law is flawed or does not represent its society's best interests or moral values, it can be amended through the democratically elected legislative body that set the law. However, how an AI system handles ethical dilemmas is the result of possibly implicit implementation decisions made behind closed doors, with little input from its users, let alone society at large. One of us has described12, using the infamous trolley problem as an example, how different ethical theories can lead to different possible outcomes: taking a utilitarian approach, the decision would be to save the largest number of lives, whereas the application of a human rights approach would lead to a decision not to switch the lever, as no one should have to decide on the lives of others, given that each life is valuable in itself13. An algorithmic approach to ethical dilemmas would need not only to take these differences into account, but also to identify different orderings of the values and which ethical theory is most suitable in a given situation. Even when well-defined ethical theories are successfully applied, the resulting behaviour can be surprising, as the world is dynamic and complex. Simulators, testing procedures and other safety measures can only cover what the developers of a system thought of. The real-world behaviour of a system can lead to situations that may be impossible to predict14.

Moving beyond abstract guidelines

Aligning a system's goals with human values is a complex, multifaceted process; it requires both technical and socio-legal initiatives and solutions. Sensible governance mechanisms can ensure not only that any moral responsibility or legal accountability is properly appropriated by the relevant stakeholders, but also that the development and deployment of processes that support redressing, mitigation and evaluation of potential harm are enforced.

Standards and legislation. Even if a system is ethical and legal but not technically robust, it will not be very useful, as we cannot trust it to behave as expected. Building confidence in a system's efficacy in areas important to users, such as safety, security and reliability, requires us to move away from simply producing soft governance in the form of ethical guidelines and move into the creation of standards and legislation. Standards are consensus-based ways of doing things that provide the minimum universally acknowledged specifications. The release of technical standards can help in the establishment of a common vocabulary, good design methodologies and architectural frameworks. Once such practices are clear, they can be enforced through the 'threat' of ordinary legal-liability standards6,15. Most standards are considered soft governance and are hence not mandatory. Yet it is often in the best interest of companies to follow them to demonstrate due diligence and, therefore, limit their legal liability in case of an incident. Standards can also ensure user-friendly integration between products. In fact, various standards — for example, USB — were developed through the collaborative work of many large corporations. At the time of writing, there is increased activity in the development of AI-specific standards across both national and international standards committees, in addition to existing standards for software development, data management and robotic systems.

Ethics boards. To ensure that any guidelines, codes of conduct and non-mandatory standards are being followed, organizations should continue investing in establishing advisory panels and hiring ethics officers. Similar to how universities have ethics boards to approve projects and experiments, ethics officers and advisers in non-academic organizations should be able to veto any projects or deliverables that do not adhere to the ethical guidelines that their organization publicly states it follows. Responsibility should be one of the core stances underlying research, development, deployment and use of AI technologies. After all, ensuring socially beneficial outcomes of AI relies on resolving the tension between incorporating the benefits and mitigating the potential harms. Responsible AI also requires the informed participation of all stakeholders, which means that education plays an important role, both to ensure that knowledge of the potential impact of AI is widespread and to make people aware that they can participate in shaping societal development.

Training. Computer science practitioners need to be trained, and perhaps licensed, in the safety and societal implications of their designs and implementations, just like those of other disciplines. Often, computer science degrees address transparency and safety through courses such as software engineering, which require not only effective documentation, but also procedures for working in teams, with users, non-technical managers and so forth. We should extend such considerations to legal and moral accountability for the foreseeable consequences of design decisions by developing new courses through interdisciplinary cooperation, providing not only tailor-made courses as part of STEM degrees, but also new content and considerations for the humanities and social sciences. Already, in some countries (for example, Cyprus16), information technology engineers require certification similar to that of civil and mechanical engineers, at least to work in the public sector. However, training should also be provided, in addition to ethics classes and at least on an on-demand basis, to experienced researchers and developers, who could benefit from science communication courses17.

To ensure the long-term adherence of our socio-technical systems to good practices, all major stakeholders must stop considering ethics and upcoming AI-related standards as an afterthought or as obstacles to be dealt with in time. Instead, policy considerations and governance mechanisms should become integral parts of any project, implemented and evaluated throughout the AI system's lifecycle. Otherwise, there is a risk of AI becoming 'genetically modified food 2.0', where fear, due to misconceptions and disbelief in experts, damages the public's trust in the technology. This is not a simple task but a continuous work in progress. Several organizations have already been accused of 'ethics washing', and efforts from Google to set up, and as quickly dissolve, an ethics board do not help to build trust on these issues18. Rather than merely condemning these failures, we need to learn from them and try again, try better.

Conclusion

AI systems are artefacts that are decided, designed, implemented and used by us; we are responsible to try again when we fail, to observe and denounce when we see things
going wrong, to be informed and to inform, to rebuild and to improve. Ultimately, as the chain of responsibility includes all of us (from the researchers to the policymakers, developers and users), we all need to actively work towards the next steps for producing concrete ethical and socio-legal governance to ensure the socially beneficial development, deployment and usage of AI technologies. ❐

Andreas Theodorou and Virginia Dignum
Department of Computer Science, Umeå University, Umeå, Sweden.
e-mail: [email protected]; [email protected]

Published: xx xx xxxx
https://doi.org/10.1038/s42256-019-0136-y

References
1. Brundage, M. & Bryson, J. J. Preprint at http://arxiv.org/abs/1608.08196 (2016).
2. Monett, D. & Lewis, C. W. P. In Philosophy and Theory of Artificial Intelligence 2017 (ed. Müller, V.) 212–214 (Springer, 2018).
3. No Proceeding Against Lufthansa for Abusive Pricing (Bundeskartellamt, 2018).
4. Nasiripour, S. & Natarajan, S. Bloomberg https://www.bloomberg.com/news/articles/2019-11-10/apple-co-founder-says-goldman-s-apple-card-algo-discriminates (2019).
5. Tubella, A. A., Theodorou, A., Dignum, F. & Dignum, V. In Proc. Twenty-Eighth International Joint Conference on Artificial Intelligence 5787–5793 (IJCAI, 2019).
6. Bryson, J. J. & Theodorou, A. In Human-Centered Digitalization and Services (eds Toivonen-Noro, M. et al.) 305–323 (Springer, 2019).
7. Floridi, L. et al. Minds Mach. 28, 689–707 (2018).
8. Ethics Guidelines for Trustworthy AI (European Commission, 2019).
9. Ethically Aligned Design (IEEE, 2019).
10. Bryson, J. J. Ethics Inf. Technol. 20, 15–26 (2018).
11. Hildebrandt, M. Law for Computer Scientists https://lawforcomputerscientists.pubpub.org/pub/nx5zv2ux (2019).
12. Dignum, V. In Proc. Twenty-Sixth International Joint Conference on Artificial Intelligence 4698–4704 (IJCAI, 2017).
13. Sen, A. Philos. Public Aff. 32, 315–356 (2004).
14. Theodorou, A., Wortham, R. H. & Bryson, J. J. Connect. Sci. 29, 230–241 (2017).
15. Winfield, A. F. T. & Jirotka, M. Philos. Trans. R. Soc. A 376, 20180085 (2018).
16. Law of the Cyprus Scientific Technical Chamber (ETEK, 2012).
17. Hauert, S. Nature 521, 415–418 (2015).
18. Levin, S. The Guardian https://www.theguardian.com/technology/2019/apr/04/google-ai-ethics-council-backlash (2019).

Acknowledgements
This work has been partially funded by the European Union's Horizon 2020 research and innovation programme under grant agreement no. 825619 (AI4EU project). V.D. was also supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation. We are grateful to A. A. Tubella for her comments on earlier versions. We also thank J. Bryson, M. De Vos and S. Hauert for discussions they had with A.T. regarding governance mechanisms.

Competing interests
The authors declare no competing interests.

Nature Machine Intelligence | www.nature.com/natmachintell
