Nliu CLT - Acing The Ai
ACING THE AI
ARTIFICIAL INTELLIGENCE AND ITS
LEGAL IMPLICATIONS
JULY 2023
NATIONAL LAW INSTITUTE UNIVERSITY
KERWA DAM ROAD, BHOPAL – 462 044 (M.P.)
Published by National Law Institute University (NLIU),
Kerwa Dam Road, Bhopal - 462044, Madhya Pradesh, India
ISBN: 978-81-957807-9-2
Information contained in this work has been obtained by the Cell for Law and
Technology, National Law Institute University, Bhopal (India), from sources
believed to be reliable and authors believed to be bona fide. However, neither the
publishers nor the editors guarantee the accuracy or completeness of any
information published herein, and neither the publishers nor the editors shall be
responsible for any errors, omissions, or damages arising out of the use of this
information. This book is published with the understanding that the publisher,
editors and authors are supplying information but are not attempting to render legal
or other professional services. If such services are required, the assistance of an
appropriate professional should be sought. The views and opinions expressed in this
book are those of the authors only and do not reflect the official policy, position or
opinions of the publishers or the editors. The publishers or the editors shall not be
liable for any copyright infringement, plagiarism or the like committed by the
authors.
Cover Designer: Cell for Law and Technology, National Law Institute University,
Bhopal (India)
Acing the AI: Artificial Intelligence and its Legal Implications
PATRON
Prof. (Dr.) V. Vijayakumar
Vice Chancellor
National Law Institute University, Bhopal
FACULTY ADVISOR
Prof. (Dr.) Atul Kumar Pandey
Professor of Cyber Law,
Chairperson, Rajiv Gandhi National Cyber Law Centre and Head,
Department of Cyber Law, Bhopal
EDITOR-IN-CHIEF
Sharqa Tabrez
DEPUTY EDITOR-IN-CHIEF
Saloni Agrawal
MANAGING EDITORS
Nandini Chouhan and Vanshika Jaiswal
EDITORIAL BOARD
Third Year
Bushra Abid, Harshali Sulebhavikar, Ishika Srivastava and Karman Singh
Second Year
Amit S. Krishnan, Divyank Dewan, Sakshi Gour, Samrudhi Memane, Shreeji Patel and Swadha Chandra
First Year
Ali Asghar, Kshitij Gondal, Pheoli Manvid, Priyanshu Danu and Tejbeer Singh
MANAGERIAL BOARD
Third Year
Astha Jain
Second Year
Neha Kumari, Rishabh Dwivedi and Shivam Nishad
First Year
Muskan Khatri, Sarfraz Alam, Sharad Khemka and Utkarsh Maheshwari
FOREWORD
The legal industry has been increasingly using AI, which has led to a
range of intricate legal and ethical challenges. The book provides a detailed
and comprehensive overview of the current legal state of AI, including the
opportunities and challenges that come with it.
I believe that this book will serve as a valuable resource not only for legal
professionals, researchers, and students, but also for policymakers, business
leaders, and individuals interested in the impact of AI on the
legal system and society as a whole. I hope this book will spark further
research and innovation in this field, and help to shape a future where AI is
used ethically and responsibly to advance the legal profession and society.
EDITORIAL NOTE
We hope this book will serve as a valuable resource for anyone interested
in the fascinating and rapidly evolving world of AI and its impact on the
legal system.
Editorial Board
Abstract
This chapter investigates the scope of copyright protection for artistic
works created with artificial intelligence (AI), together with the degree
of creative participation. It also assesses the viability of the current
legal system for defending these works through the lens of
international copyright law. Indian copyright law is modelled on
international copyright conventions such as the Berne Convention and
mainly focuses on the notion of original work by a human author. The
notion of authorship attribution for AI-assisted artistic works that are
autonomously made without human participation throughout the
creative process remains to be established.
Prospect of Copyright Protection for AI-generated Works
This chapter delves into several key aspects, such as the
concept of AI and its position as a creator, a broad introduction to
copyright legislation and principles in India, the United Kingdom, the
United States, and Ireland, followed by an assessment of their
relevance to AI-generated works. Currently, AI-generated works are
not protected by copyright law. Nonetheless, this chapter discusses
potential future solutions such as the sui generis rule, the “work
created for hire doctrine” in the United States, and the “doctrine of
related rights” in the European Union to ascertain authorship.
I. Introduction
(a) Background
The domain of intellectual property law, and primarily copyright law, aims
to protect works of authorship and bestow exclusive, economic, and moral
rights. Copyright law has long assumed that creativity is attributable to
human beings only. This assumption, that technology remains largely
independent of the creative process, has allowed a relatively clear legal
analysis of issues like authorship and originality in the field of copyright
law. Due to the intricate nature of the various human inputs a work created
with AI may potentially contain, terms like “AI-generated” and “AI-assisted”
have arisen to denote the extent of original human contribution. Nevertheless,
these terms still lack precise definitions.
Because AI-assisted works are often produced in high volumes and with
considerably less manpower, the protection of such works becomes more
consequential from an economic standpoint. Additionally, by introducing
new perspectives and facilitating the use of challenging techniques that
would otherwise require significant time and a large workforce to master,
they have the potential to extend the reach of human creativity. Exactly
how this kind of optimised creativity will influence the creative sector
remains to be seen.
1
Maria Iglesias, Sharon Shamuilia and Amanda Anderberg, ‘Intellectual Property and
Artificial Intelligence – A literature review’ [2019] EUR 30017
<https://documents.pub/document/maria-iglesias-sharon-shamuilia-amanda-anderberg-2019-
12-17-maria-iglesias-sharon.html?page=1> accessed 10 January 2023.
2
Claudya den Kamp and Dan Hunter, ‘On How Novel Technologies and Objects have
Shaped the History of Intellectual Property Rights’ [2019] CUP 1, 1-8.
3
Matthias Griebel, Christoph Flath and Sascha Friesike, ‘Augmented Creativity: Leveraging
Artificial Intelligence for Idea Generation in Creative Sphere’ [2022] Research-in-Progress
Papers 77.
4
John McCarthy, ‘What is Artificial Intelligence’ [2007] Stanford University
<http://jmc.stanford.edu/articles/whatisai/whatisai.pdf> accessed 10 January 2023.
5
Josef Drexl et al, ‘Technical Aspects of Artificial Intelligence: An Understanding from an
Intellectual Property Law Perspective’ (2019) 19-13(3) Max Planck Institute for Innovation
& Competition <https://ssrn.com/abstract=3465577> accessed 10 January 2023.
6
Commission, ‘Communication from the Commission to the European Parliament, the
European Council, the Council, the European Economic and Social Committee and the
Committee of the Regions on Artificial Intelligence for Europe, Brussels’ (25 April 2018)
COM/2018/237 final.
7
‘Perhaps in the future possibilities of joint authorships between humans and AI systems
become a relevant consideration?’ [2022] WIPO 4.
8
Margaret A. Boden, The Creative Mind Myths and Mechanisms (2nd edn, Routledge 2004)
7.
9
Babak Saleh and Ahmed Elgammal, ‘Large-scale Amalgamation of Fine Art Paintings:
Learning the Right Metric on the Right Feature’ (2015) Department of Computer Science
Rutgers University, NJ, USA <https://arxiv.org/pdf/1505.00855.pdf?source=post_page>
accessed 10 January 2023.
10
Boden (n 8) 4.
11
Ed Lauder, ‘Aiva Is The First AI to Officially Be Recognised As A Composer’ (AI
Business, 10 March 2017) <https://aibusiness.com/verticals/aiva-is-the-first-ai-to-officially-
be-recognised-as-a-composer> accessed 20 December 2022; Samuel Karlsson, ‘Artificial
Intelligence and the Concept of Legal Entities in Copyright’ [2019] Lund University
<https://lup.lub.lu.se/luur/download?func=downloadFile&recordOId=9020263&fileOId=90
20290> accessed 21 December 2022.
12
Matt Burgess, ‘Google’s AI Has Written Some Amazingly Mournful Poetry’ (Wired, 16
May 2016) <https://www.wired.co.uk/article/google-artificial-intelligence-poetry> accessed
21 December 2022.
13
Mark Brown, ‘‘New Rembrandt’ to be Unveiled in Amsterdam’ (Guardian, 5 April 2016)
<https://www.theguardian.com/artanddesign/2016/apr/05/new-rembrandt-to-be-unveiled-in-
amsterdam> accessed 20 December 2022.
human mind cannot fully comprehend.14 One of the most talked-about AI-
produced works is the painting titled “Next Rembrandt”. Other, more recent
AI-generated art was created by the e-David system, a “painting machine
that simulates human painters and is able to draw a painting on a real
canvas.”15 These works of art would unquestionably qualify for copyright
protection had they been created by a human. Still, the question that looms
is whether AI-generated works of art created without human intervention
should be similarly protected by copyright laws. Questions remain about the
extent to which works created by AI should be secured under traditional
copyright law; given their growing popularity, it is more imperative than
ever to find a plausible solution.
14
Madeleine de Cock Buning, ‘Artificial Intelligence and the Creative Industry: New
Challenges for the EU paradigm for Art and Technology by Autonomous Creation’ in
Woodrow Barfield and Ugo Pagallo (eds), Research Handbook on the Law of Artificial
Intelligence (Edward Elgar 2018) 517.
15
Oliver Deussen and Marin Gulzow, ‘e-David, A painting process’ (University of
Konstanz, 17 June 2019) <http://graphics.uni-konstanz.de/eDavid/?page_id=2> accessed 21
December 2022.
16
ibid.
17
ibid.
can be registered with the Copyright Office in the United States.18 It is
indisputable that AI is capable of creative output. In its Draft Issues
Paper on intellectual property policy and AI, WIPO raised a number of
questions on the subject.19
In the Yoox case,20 the designers were actively involved in the
creative process, which allowed the company’s products to be considered
original. However, as the lines separating the AI’s input and the decisions
made by humans get increasingly blurred, it becomes more and more
difficult for businesses to be certain whether their works will be protected by
copyright or not.
18
‘Copyrightable Authorship’ (US Copyright Office) 4
<https://www.copyright.gov/comp3/chap300/ch300-copyrightable-authorship.pdf> accessed
23 December 2022; Feist Publication, Inc., v. Rural Telephone Service Company, 499 U.S.
340 (1991).
19
WIPO Secretariat, ‘WIPO Conversation on Intellectual Property (IP) And Artificial
Intelligence (AI): Draft Issues Paper on Intellectual Property Policy and Artificial
Intelligence’ (World Intellectual Property Organisation, 13 December 2019)
WIPO/IP/AI/2/GE/20/1.
20
Giulio Coraggio, ‘AI in the Fashion Industry Unveils New Unexpected Legal Issues’
(Gaming Tech Law, 26 November 2018) <https://www.gamingtechlaw.com/2018/11/ai-
fashion-legal-issues.html> accessed 27 December 2022.
21
Erika Hubert, ‘Artificial Intelligence and Copyright Law in a European Context’ [2022]
Lund University 24
<https://lup.lub.lu.se/luur/download?func=downloadFile&recordOId=9020263&fileOId=90
20290> accessed 28 December 2022.
22
Begoña González Otero and João Pedro Quintais, ‘Before The Singularity: Copyright and
the Challenges of Artificial Intelligence’ (Kluwer Copyright Blog, 25 September 2018)
<http://copyrightblog.kluweriplaw.com/2018/09/25/singularity-copyright-challenges-
artificial-intelligence/> accessed 29 December 2022.
23
Eva-Maria Painer v Standard Verlags GmbH and Others [2011] C-145/10.
24
ibid.
25
ibid.
In 2019, Dr. Thaler filed patent applications with the USPTO for two DABUS
innovations, designating DABUS as the inventor.26 The USPTO rejected both
applications for failing to identify a natural person as the inventor, and
for two further reasons. First, the USPTO determined that, under the
statutory requirements, inventors must be individuals. Second, the current
recognised system has never witnessed inventions working totally on their
own without human interference. Third, non-natural entities cannot hold
transferable property rights, and hence they cannot be given ownership
rights.
The USPTO reasoned that the word “whoever” in section 101, which reads,
“whoever invents or discovers any new and useful process, machine,
manufacture, or composition of matter … may obtain a patent,” implies that a
natural person must be the inventor in order to be issued a patent.27 A patent
application must identify the person(s) responsible for the invention being
claimed in accordance with section 115(a), and section 115(h)(1) further says
that the inventor who makes an oath or declaration must be a “person.”28
26
In re Application 16/524,350 [2022] WL 1970052 (22 April 2022).
27
ibid.
28
ibid.
According to the Supreme Court of the United States, “[a]s a general rule,
the author is then, the individual who actually creates the work, that is, the
person who turns an idea into a fixed, concrete manifestation entitled to
copyright protection.”31
29
In re Application 16/524,350 [2022] WL 1970052.
30
Burroughs Wellcome Co. v Barr Lab’ys, Inc. [1994] 40 F.3d 1223, 1227–28 (Fed. Cir.).
31
Cmty. for Creative Non-Violence v Reid [1989] 490 U.S. 730, 737; see also Burrow-Giles
Lithographic Co. v Sarony [1884] 111 U.S. 53, 61 (“[A]nd Lord Justice Bowen says that
photography is to be treated for the purposes of the act as an art, and the author is the man
who really represents, creates, or gives effect to the idea, fancy, or imagination.”).
32
‘EPO Refuses DABUS Patent Applications Designating a Machine Inventor’ (European
Patent Office, 20 December 2019) <https://www.epo.org/news-
events/news/2019/20191220.html> accessed 28 December 2022.
Two synthetic neural networks are utilized by DABUS: one to produce ideas, and
another to evaluate them for ‘authenticity’ and ‘utility’.33
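The generate-and-evaluate division of labour described above can be illustrated with a deliberately toy sketch. Everything in it is hypothetical: random strings stand in for the first component’s “ideas”, and a simple scoring function stands in for the evaluating component; this is an illustration of the general design, not DABUS’s actual architecture.

```python
import random

def generate_idea(rng, length=5):
    # Stand-in for the first component: produce a candidate "idea"
    # (here, just a random string of letters).
    return "".join(rng.choice("abcdefghij") for _ in range(length))

def evaluate(idea):
    # Stand-in for the second component: score the candidate for
    # "utility" (here, arbitrarily, the number of distinct letters).
    return len(set(idea))

def best_idea(trials=100, seed=0):
    # Generate many candidates and keep the one the evaluator rates highest.
    rng = random.Random(seed)
    candidates = [generate_idea(rng) for _ in range(trials)]
    return max(candidates, key=evaluate)

idea = best_idea()
```

The point of pairing a proposer with a filter is that the system can search a space of possible outputs on its own, without a human judging each candidate along the way.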
The EPO rejected the application on the grounds of failure to comply with
Article 81 of the EPC and Rule 19(1) of its Implementing Regulations,
which require the designation of an inventor.34 Rule 19(1) of the EPC
requires inventors to be natural or legal persons and thus rejected the
designation of DABUS as inventor.35 The European Patent Office (EPO)
states that “names granted to natural beings, whether consisting of a given
name and a family name or homonyms, serve not just the role of identifying
them, but enable them to exercise their rights and form part of their
personality.” Because machines lack legal personality, they are not entitled
to the rights vested in inventors under Article 62 of the EPC, such as the
ability to be listed on patent applications; for the same reason, DABUS
could not be considered an employee of Dr. Thaler.36
33
‘Can Artificial Intelligence Systems Patent their Inventions?’ (Dennemeyer & Associates,
22 November 2019) <https://www.dennemeyer.com/ip-blog/news/can-artificial-
intelligence-systems-patent-their-inventions/> accessed 28 December 2022.
34
Wensen An, ‘The Lay of the Land: Patent Law and AI Inventors’ (The Robotics Law
Journal, 30 October 2020) <https://roboticslawjournal.com/analysis/the-lay-of-the-land-
patent-law-and-ai-inventors-33535461> accessed 28 December 2022.
35
ibid.
36
Joel Smith, Rachel Montagnon and Laura Adde, ‘European Union: EPO Publishes
Reasons for Rejecting AI as Inventor on Patent Application’ (Mondaq, 7 February 2020)
<https://www.mondaq.com/uk/patent/891234/epo-publishes-reasons-for-rejecting-ai-as-
inventor-on-patent-
application#:~:text=by%20Joel%20Smith%2C%20Rachel%20Montagnon%20and%20Laur
a%20Adde,than%20a%20human%2C%20was%20named%20as%20the%20inventor>
accessed 29 December 2022.
or individuals” who are regarded to be the inventor.37 For this reason, even if
the AI machine were considered the inventor, the applicant would have a
hard time securing legal title to the invention. It is well established in law
that a machine cannot be an IPR holder. To quote the application, “the
applicant acknowledges that DABUS is an AI machine and not a human,
consequently, cannot be understood to be a ‘person’ as required by the
Act.”38
37
BL O/741/19, Decision, U.K. Intellectual Property Office.
38
ibid.
39
Goldstein v California [1973] 412 U.S. 546, 561.
40
Naruto v Slater [2018] 888 F.3d 418, 420 (9th Cir.).
41
ibid at 426.
42
European Parliament and of the Council, ‘Directive 2006/116 on the Term of Protection
of Copyright and Certain Related Rights’ [2006] O.J. (L 372) 12, 14 (EC).
43
European Parliament Communication on Legal Affairs, ‘Report on intellectual property
rights development of artificial intelligence technologies’ (European Parliament, 2 October
2022) 9.
44
Copyright, Designs and Patents Act 1988, s 178(b) (UK).
historically speaking, the emphasis on the need for ‘human interference’ has
been counterproductive to the growth of copyright law. 45
45
Madeleine de Cock Buning (n 14) 511, 533.
46
European Parliament Communication on Legal Affairs, “At a time when artistic creation
by AI is becoming more common [(citing the example of the ‘Next Rembrandt’)], we seem
to be moving towards an acknowledgement that an AI-generated creation could be deemed
to constitute a work of art on the basis of the creative result rather than the creative
process.”
47
Margot E. Kaminski, ‘Authorship, Disrupted: AI Authors in Copyright and First
Amendment Law’ (2017) 51 U.C. Davis L. Rev. 589.
48
Shlomit Yanisky-Ravid, ‘Generating Rembrandt: Artificial Intelligence, Copyright, and
Accountability in the 3A Era—The Human-Like Authors are Already Here—A New Model’
[2017] Michigan State Law Review 659, 701
<https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2957722> accessed 29 December
2022.
49
European Parliament Committee on Legal Affairs, ‘AI Act: a step closer to the first rules
on Artificial Intelligence’ (European Parliament, 11 May 2023)
<https://www.europarl.europa.eu/news/en/press-room/20230505IPR84904/ai-act-a-step-
closer-to-the-first-rules-on-artificial-intelligence> accessed 29 December 2022.
50
Otero and Quintais (n 22).
First, legislators should create a new legal protection for AI inventions made
with human input. Second, after deciding how they would ensure the
protection of the work itself and the moral rights of the owner, legislators
should approve making AI-generated works part of the public domain.
‘Sui generis’ means “of its own kind or class” in Latin.54 IP rights that fall
outside copyright, patent, trademark, and trade secret law are protected by
51
Paul T. Babie, ‘The “Monkey Selfies”: Reflections on Copyright in Photographs of
Animals’ (2018) 52 U.C. Davis Law Review Online 103, 116; See generally Shyamkrishna
Balganesh, ‘Causing Copyright’ (2017) 117 Columbia Law Review 1, 73–74.
52
Victor M. Palace, ‘What if Artificial Intelligence Wrote This? Artificial Intelligence and
Copyright Law’ (2019) 71 FLR 217, 226.
53
Copyright, Designs and Patent Act 1988, s 9(3); Copyright and Related Rights Act 2000, s
21(f); Copyright Act 1994, s 5(2)(a).
54
Henry Campbell Black, Black’s Law Dictionary (11th edn, St. Paul, Minn. West
Publishing Co. 2019).
the sui generis system.55 This sui generis right is distinct from copyright
and other IP rights: it safeguards databases regardless of whether copyright
protection exists for their content.
1) Over the past ten years, deep learning has made AI ubiquitous.56 AI
systems use algorithms to learn from feedback and improve. To enhance
performance, some AI systems even create and rewrite their own
algorithms.57 Because deep learning and neural networks are used
throughout AI systems, they are always learning and adapting. They can
mimic human intelligence and creativity to generate new original works,
such as news reports,58 poems,59 paintings,60 and music.61
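The “learning from feedback” loop mentioned above can be sketched in a few lines. This is a minimal, hypothetical example: a one-parameter model repeatedly adjusts itself in response to its own errors (gradient descent), the same basic principle that deep-learning systems apply at vastly larger scale.

```python
def learn(samples, steps=1000, lr=0.01):
    """Fit y = w * x by repeatedly adjusting w based on its errors."""
    w = 0.0
    for _ in range(steps):
        for x, y in samples:
            error = w * x - y      # feedback: how wrong is the model?
            w -= lr * error * x    # adjust w in the direction that reduces error
    return w

# The samples follow the rule y = 3x, so the model should learn w close to 3.
w = learn([(1, 3), (2, 6), (3, 9)])
```

No rule “y = 3x” is ever given to the program; it is recovered purely from error feedback, which is what distinguishes learning systems from conventionally programmed ones.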
55
ibid.
56
Kai-Fu Lee, AI Superpowers: China, Silicon Valley, and the New World Order (Houghton
Mifflin Harcourt Publishing 2018).
57
Will Knight, ‘The Dark Secret at the Heart of AI’ (MIT Technology Review, 11 September 2017)
<https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai> accessed
19 December 2022.
58
Jaclyn Peiser, ‘The Rise of the Robot Reporter’ (New York Times, 5 February 2019).
59
Matt Burgess, ‘Google’s AI Has Written Some Amazingly Mournful Poetry’ (Wired, 16
May 2016).
60
Nadja Sayej, ‘Vincent van Bot: The Robots Turning Their Hand to Art’ (The Guardian,
22 February 2018).
61
Dani Deahl, ‘This New Alexa Skill Will Play Music Generated by Artificial Intelligence’
(Verge, 14 March 2018).
These three considerations suggest the need for a sui generis mechanism to
protect AI-created works against infringement. The question remains, though:
who should have the exclusive rights to an AI-created piece of art? Who
should get the credit: the developer who set the AI system in motion, or the
AI system itself, which made the primary contribution to the creation of the
work? Several recent AI-related rulings have invalidated claims of
intellectual property ownership by AI systems.
62
17 U.S.C. 2018, s 101.
Unlike traditional copyright laws, a new sui generis rule for AI-generated
works need not contain an “originality criterion.”64 The originality
requirement remains a major obstacle when one seeks to attribute copyright
protection to a work autonomously generated by an AI. Eliminating this
stipulation from a new sui generis rule, rather than from a copyright act,
would make it far simpler to protect AI-generated works.
63
Council Directive 96/9/EC of 11 March 1996 on the legal protection of databases [1996]
OJ L 77/20.
64
ibid.
Additional aspects
The new sui generis rule should make it abundantly clear which works are
covered by the sui generis right and which are protected by standard
copyright law. Doing so would strengthen legal certainty and allow the
authorities to choose whether to exempt works from copyright protection.
Protection requirements for AI-generated works may be relaxed or altered,
and the periods of protection would have to keep pace with rapid technical
advancements in the field.65 Companies could reduce liability and
anti-competitive risks by meticulously regulating the use of their products
and AI systems.
VI. Conclusion
65
Council Directive 96/9/EC of 11 March 1996 on the legal protection of databases, art 10
[1996] OJ L 77/20.
Abstract
In the last decade, social media platforms like Facebook, Instagram, and
Twitter have had a very tumultuous journey. There are multiple examples
wherein social media has done wonders. The exposure of the mistreatment of
Iraqi prisoners at Abu Ghraib in Baghdad,66 and the significance of
WikiLeaks67 in the Arab Spring, are instances where social media played
a catalytic role.68 Then there are instances where social media has
been used to violate human rights. In Myanmar, social media had a
“determining role” in the suspected acts of genocide in the country.69
Social media has been used in the decade-long war in Syria,70 and online
hate speech has been used to provoke enmity in the Central African
66
Hersh Seymour, ‘Torture at Abu Ghraib’ (The New Yorker, 30 April 2004)
<http://www.newyorker.com/magazine/2004/05/10/torture-at-abu-ghraib> accessed 25
March 2022.
67
Raffi Khatchadourian, ‘What Does Julian Assange Want?’ (The New Yorker, 31 May
2010) <https://www.newyorker.com/magazine/2010/06/07/no-secrets> accessed 25 March
2022.
68
Robin Wright, ‘How the Arab Spring Became the Arab Cataclysm’ (The New Yorker, 15
December 2015) <http://www.newyorker.com/news/news-desk/arab-spring-became-arab-
cataclysm> accessed 25 March 2022.
69
Tom Miles, ‘U.N. Investigators Cite Facebook Role in Myanmar Crisis’ (Reuters, 12
March 2018) <https://www.reuters.com/article/us-myanmar-rohingya-facebook-
idUSKCN1GO2PN> accessed 31 March 2022.
70
Patrick Howell, ‘Why the Syrian Uprising Is the First Social Media War’ (The Daily Dot,
18 September 2013) <https://www.dailydot.com/debug/syria-civil-social-media-war-
youtube/> accessed 31 March 2022.
The Rabbit-Hole of Content Moderation by AI
I. Introduction
Around 27 years ago, Eugene Volokh wrote an article on the future of the
internet and the world around it.73 It spoke about the extinction of physical
formats for music, the prevalence of music videos, and free-flowing speech.
Here we are, 27 years later, and every prediction has come true. The
internet and social media have come a long way since their inception. The
kinds of problems that have now come into the picture were probably never
envisioned by the creators of the internet and the other platforms which
71
‘Hate Speech on Social Media Inflaming Divisions in CAR - Central African Republic’
(Relief Web, 2 June 2018) <https://reliefweb.int/report/central-african-republic/hate-speech-
social-media-inflaming-divisions-car> accessed 31 March 2022.
72
Max Fisher, ‘Sri Lanka Blocks Social Media, Fearing More Violence’ (The New York
Times, 21 April 2019) <https://www.nytimes.com/2019/04/21/world/asia/sri-lanka-social-
media.html> accessed 31 March 2022.
73
Eugene Volokh, ‘Cheap Speech and What It Will Do’ (1996) 1 The Communication
Review 261.
followed. There is now a consensus across society that the internet and
these platforms cause enormous societal problems, including loss of
privacy,74 harassment of women and minorities,75 systematic manipulation of
democracy,76 and incitement to genocide (as in Myanmar).77 The content on
social media that causes problems is sometimes user-generated and sometimes
the product of the platforms’ own activities, such as over-censoring or
acting late on sensitive content. Facebook has been in turmoil for quite
some time now, with lawsuits regularly brought against it around the world,
along with various inquiries before parliamentary committees and
commissions for its involvement in such activities.78
74
Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at
the New Frontier of Power: Barack Obama’s Books of 2019 (Profile Books 2019).
75
Danielle Keats Citron, Hate Crimes in Cyberspace (Harvard University Press 2014).
76
Alexis C Madrigal, ‘What Facebook Did to American Democracy’ (The Atlantic, 12
October 2017) <https://www.theatlantic.com/technology/archive/2017/10/what-facebook-
did/542502/> accessed 13 March 2022.
77
Alexandra Stevenson, ‘Facebook Admits It Was Used to Incite Violence in Myanmar’
(The New York Times, 6 November 2018)
<https://www.nytimes.com/2018/11/06/technology/myanmar-facebook.html> accessed 13
March 2022.
78
Kara Swisher, ‘Opinion | Zuckerberg Never Fails to Disappoint’ (The New York Times, 10
July 2020) <https://www.nytimes.com/2020/07/10/opinion/facebook-zuckerberg.html>
accessed 13 March 2022.
79
Kevin Granville, ‘Facebook and Cambridge Analytica: What You Need to Know as
Fallout Widens’ (The New York Times, 19 March 2018)
<https://www.nytimes.com/2018/03/19/technology/facebook-cambridge-analytica-
explained.html> accessed 13 March 2022.
address such changes, but the evolution of such crimes has not been
matched by legal and regulatory reforms.
80
Ira Steven Nathenson, ‘Super-Intermediaries, Code, Human Rights’ (2013) 8 Intercultural
Human Rights Law Review (St. Thomas University) 19, 39.
81
Yochai Benkler, The Wealth of Networks: How Social Production Transforms Markets
and Freedom (Yale University Press 2006).
82
K Sabeel Rahman, ‘Regulating Informational Infrastructure: Internet Platform as the New
Public Utilities’ (2017–2018) 2 Georgetown Law Technology Review 234.
the platforms mint money.83 Platforms collect individuals’ data and organise
it into a structured form.84 Along with this, they have scoring power
over the user, via which they exercise control over the entire ecosystem
using algorithms, recommendations, etc.85
The algorithm analyses a user’s data and predicts what the user may like
based on the kinds of posts the individual has liked. The accumulated data
can be organised into different categories that each reveal clues about
what a user likes to see. Activities like searches, the people whom the
user has contacted, etc. are used to customise the feed even further.86
Once the algorithm has created the pool of user content, the content is
ranked by how much the user engages with it. The user’s feed is then filled
with the highest-ranked content, which appears at the top.87
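The pipeline just described (categorise, score, rank) can be made concrete with a toy model. All field names and weights below are hypothetical; real platforms use far more signals, and their actual scoring functions are not public.

```python
def rank_feed(posts, user_interests):
    """Order posts so the highest predicted-engagement content is on top."""
    def score(post):
        # Affinity: does the post match a topic the user engages with?
        affinity = 1.0 if post["topic"] in user_interests else 0.1
        # Prior engagement: interactions the post has already attracted.
        engagement = post["likes"] + 2 * post["shares"]
        return affinity * engagement

    return sorted(posts, key=score, reverse=True)

posts = [
    {"topic": "sports",   "likes": 10, "shares": 1},
    {"topic": "politics", "likes": 50, "shares": 20},
    {"topic": "music",    "likes": 5,  "shares": 0},
]
# A user who engages with political content sees that content ranked first.
feed = rank_feed(posts, user_interests={"politics"})
```

Even this crude sketch exhibits the feedback dynamic discussed in this chapter: content that already attracts engagement is scored higher and shown more, which in turn attracts further engagement.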
83
Brian O’Connell, ‘How Does Facebook Make Money? Six Primary Revenue Streams’
(The Street, 23 October 2018) <https://www.thestreet.com/technology/how-does-facebook-
make-money-14754098> accessed 30 March 2022.
84
Sang Ah Kim, ‘Social Media Algorithms: Why You See What You See Technology
Explainers’ (2017–2018) 2 Georgetown Law Technology Review 147.
85
Danielle Citron and Frank Pasquale, ‘The Scored Society: Due Process for Automated
Predictions’ (2014) 89 Washington Law Review 1.
86
‘How the Facebook Algorithm Works and Ways to Outsmart It’ (Sprout Social, 3 August
2020) <https://sproutsocial.com/insights/facebook-algorithm/> accessed 30 March 2022.
87
Sang Ah Kim (n 84) 149.
88
‘Facebook Changes Algorithm to Promote Worthwhile & Close Friend Content’ (Tech
Crunch, 16 May 2019) <https://social.techcrunch.com/2019/05/16/facebook-algorithm-
links/> accessed 14 March 2022.
The Rabbit-Hole of Content Moderation by AI
exists between the reaction emojis on Facebook, which in effect favour the promotion of hateful content over ordinary content.89 Sites like Facebook also allow the creation of “groups,” where people with common interests can congregate and share posts reflecting their views.90 The algorithms of these sites also tailor suggestions of whom to follow, which groups to join, and so on. People and organisations pay for targeted advertising to seek out the persons most receptive to their purposes. All of this contributes to the creation of “echo chambers,” where the person targeted for incitement is repeatedly fed the same rhetoric, misinformation, and hateful content, creating the context and atmosphere required for a call to violence.
ISIS has made prodigious use of social media for its varied purposes,91 with specific strategies employed for recruitment, indoctrination, fundraising, incitement, and more. Research backs the claim that Facebook’s algorithm boosts posts that elicit strong negative emotions in order to increase engagement.92 In Germany, when Facebook servers went down, anti-refugee attacks dropped significantly, indicating just how much violently inclined outfits depended on the site to incite the attacks.93 However perhaps
89
Gina M Masullo and Jiwon Kim, ‘Exploring “Angry” and “Like” Reactions on Uncivil
Facebook Comments That Correct Misinformation in the News’ (2021) 9 Digital Journalism
1103.
90
Ashutosh Bhagwat, ‘The Law of Facebook’ (2020–2021) 54 UC Davis Law Review 2353.
91
Piotr Łubiński, ‘Social Media Incitement to Genocide’, in Marco Odello, Piotr Łubiński
(eds.), The Concept of Genocide in International Criminal Law Developments after Lemkin
(Routledge, 2020) 262.
92
Tobias Kraemer, ‘The Good, The Bad, And The Ugly – How Emotions Affect Online
Customer Engagement Behavior’ (2016) <https://iae-aix.univ-amu.fr/sites/iae-aix.univ-
amu.fr/files/42_kraemer-the_good_rev.pdf> accessed 14 November 2021.
93
Amanda Taub and Max Fisher, ‘Facebook Fueled Anti-Refugee Attacks in Germany,
New Research Suggests’ (The New York Times, 21 August 2018)
<https://www.nytimes.com/2018/08/21/world/europe/facebook-refugee-attacks-
germany.html> accessed 14 November 2021.
94
Steve Stecklow, ‘Why Facebook is losing the war on hate speech in Myanmar’ (Reuters
Investigates, 15 August 2018) <https://www.reuters.com/investigates/special-
report/myanmar-facebook-hate/> accessed 14 November 2021.
95
Paul Mozur, ‘A Genocide Incited on Facebook, with Posts from Myanmar’s Military’(The
New York Times, 15 October 2018)
<https://www.nytimes.com/2018/10/15/technology/myanmar-facebook-genocide.html> accessed 14 November 2021.
96
Tom Miles, ‘U.N. investigators cite Facebook role in Myanmar crisis’ (Reuters, 13 March
2018) <https://www.reuters.com/article/us-myanmar-rohingya-facebook/u-n-investigators-
cite-facebook-role-in-myanmar-crisis-idUSKCN1GO2PN> accessed 14 November 2021.
97
Neriah Yue, ‘The “Weaponization” of Facebook in Myanmar: A Case for Corporate Criminal Liability’ (2019–2020) 71 Hastings Law Journal 813.
98
Paul Mozur (n 95).
99
Adrian Chen, ‘The Laborers Who Keep Dick Pics and Beheadings Out of Your Facebook
Feed’ (Wired, 23 October 2014) <https://www.wired.com/2014/10/content-moderation/>
accessed 1 April 2022.
100
Kate Crawford and Tarleton Gillespie, ‘What Is a Flag for? Social Media Reporting
Tools and the Vocabulary of Complaint’ (2016) 18 New Media & Society 410.
101
Finn Brunton, Spam: A Shadow History of the Internet (MIT Press 2013).
102
Sebastian Felix Schwemer, ‘Trusted Notifiers and the Privatization of Online
Enforcement’ (2019) 35 Computer Law & Security Review 105339.
103
Kate O’Flaherty, ‘YouTube Keeps Deleting Evidence of Syrian Chemical Weapon
Attacks’ (Wired UK, 26 June 2018) <https://www.wired.co.uk/article/chemical-weapons-in-
syria-youtube-algorithm-delete-video> accessed 29 March 2022.
104
Ben Depoorter and Robert Walker, ‘Copyright False Positives’ (2013) 89 Notre Dame
Law Review 319.
terms of service, which they adjudicate using automated means.105 And since content moderation is done by AI, the decision whether to remove content ultimately turns on the private economic interests of the platform.106
105
Thomas Kadri and Kate Klonick, ‘Facebook v. Sullivan: Public Figures and
Newsworthiness in Online Speech’ (2019) 93 Southern California Law Review 37.
106
Maayan Perel, ‘Enjoining Non-Liable Platforms’ (2020–2021) 34 Harvard Journal of
Law & Technology (Harvard JOLT) 1, 28.
107
Sofia Grafanaki, ‘Drowning in Big Data: Abundance of Choice, Scarcity of Attention and
the Personalization Trap, a Case for Regulation’ (2017–2018) 24 Richmond Journal of Law
& Technology 1.
108
Katherine J Strandburg, ‘Free Fall: The Online Market’s Consumer Preference
Disconnect’ [2013] University of Chicago Legal Forum 95.
109
James Q Whitman, ‘The Two Western Cultures of Privacy: Dignity versus Liberty’
(2004) 113 The Yale Law Journal 1151, 1181.
These algorithms combine the power of setting the norm, enforcing the norm, and adjudicating under the norm. Content moderation by algorithms can err, marking something as illegal because of a lack of context.110 This leads the algorithm to censor legal content as well. Finally, this act of content moderation by platforms undermines the separation of powers within the platform system, as the same algorithm acts as both enforcer and adjudicator.
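The problem of context-free errors can be seen even in a minimal sketch of keyword-based moderation. The filter below is deliberately naive and entirely hypothetical; real systems are more sophisticated, but they face the same difficulty:

```python
# Naive keyword filter: flags any post containing a blocklisted term,
# regardless of context (hypothetical illustration of false positives).

BLOCKLIST = {"attack", "kill"}

def moderate(post: str) -> bool:
    """Return True if the post would be removed."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return not BLOCKLIST.isdisjoint(words)

# Genuinely violent content is caught...
print(moderate("We will attack them tomorrow"))                       # True
# ...but so is lawful reporting that happens to use the same words.
print(moderate("Survivors describe the attack to UN investigators"))  # True
```

Both posts are removed, although the second is lawful documentation: the same false-positive dynamic seen when evidentiary videos are taken down alongside the content they document.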
The community standards are the rules applicable across every jurisdiction and every user. The language of the document is very broad, which makes consistent application difficult because it lacks context specificity.111 This leaves considerable room for subjectivity on the part of content moderators. Facebook also specifies that it will assist law enforcement agencies depending on the severity of the violation; with “severity” undefined, this too invites subjectivity. Leaked documents revealed that Facebook maintains internal guidelines that are more specific than those available to the general public.112 Moderators have several sources of “truth” to weigh before making a decision, which leads to inconsistency.113 Moreover, the internal materials are changed on an ad hoc basis. Because hate speech is difficult to define without context, the
110
Evan Engstrom and Nick Feamster, ‘The Limits of Filtering’ (Engine, March 2017)
<https://www.engine.is/the-limits-of-filtering> accessed 19 April 2022.
111
Sarah Koslov, ‘Incitement and the Geopolitical Influence of Facebook Content
Moderation’ (2019–2020) 4 Georgetown Law Technology Review 183, 189.
112
Max Fisher, ‘Inside Facebook’s Secret Rulebook for Global Political Speech’ (The New
York Times, 27 December 2018) <https://www.nytimes.com/2018/12/27/world/facebook-
moderators.html> accessed 6 April 2022.
113
Casey Newton, ‘The Secret Lives of Facebook Moderators in America’ (The Verge, 25
February 2019) <https://www.theverge.com/2019/2/25/18229714/cognizant-facebook-
content-moderator-interviews-trauma-working-conditions-arizona> accessed 6 April 2022.
internal document can magnify power disparities.114 Facebook, though neither a sovereign nor a democratic institution, positions itself as a facilitator of free speech and a proponent of democratic values.
114
Julia Angwin and Grassegger Hannes, ‘Facebook’s Secret Censorship Rules Protect
White Men From Hate Speech But Not Black Children’ (Pro Publica, 28 June 2017)
<https://www.propublica.org/article/facebook-hate-speech-censorship-internal-documents-
algorithms?token=jshSORFCpk4rT10jAyZXIO0twvVlATYO> accessed 6 April 2022.
115
Sarah Koslov (n 111) 200.
116
Andrew D Selbst, Danah Boyd, Sorelle A. Friedler, Suresh Venkatasubramanian and
Janet Vertesi, ‘Fairness and Abstraction in Sociotechnical Systems’, in Proceedings of the
Conference on Fairness, Accountability, and Transparency (ACM 2019)
<https://dl.acm.org/doi/10.1145/3287560.3287598> accessed 6 April 2022.
117
Yuval Noah Harari, ‘Why Technology Favors Tyranny’ (The Atlantic, 30 August 2018)
<https://www.theatlantic.com/magazine/archive/2018/10/yuval-noah-harari-technology-
tyranny/568330/> accessed 6 April 2022.
people. This should also make the person reported accountable, and a record of such reports should be maintained. The biggest issue among all of these is language: the existence of many languages in India acts as a hurdle to a uniform standard, and hence a diversified panel is required.
122
Rebecca Cambron, ‘World War Web: Rethinking “Aiding and Abetting” in the Social
Media Age’ (2019) 51 Case Western Reserve Journal of International Law 293, 307.
123
Assaf Baciu, ‘Artificial Intelligence Is More Artificial Than Intelligent’ (Wired, 7
December 2016) <https://www.wired.com/2016/12/artificial-intelligence-artificial-
intelligent/> accessed 30 March 2022.
A.I.: Perpetuator of Racism and Colourism
Abstract
This article deals with the causes of the inherent bias within A.I. and its effect on the perception of beauty and on law enforcement, among other areas. It also discusses the negative and unforeseen impact of this bias on people’s everyday lives, the legal and other measures people have taken against it, and the steps taken towards resolving the issue, along with other possible solutions.
I. Introduction
124
Patrick Grother, ‘NIST Study Evaluates Effects of Race, Age, Sex on Face Recognition Software’ (NIST, 19 December 2019) <https://www.nist.gov/news-events/news/2019/12/nist-study-evaluates-effects-race-age-sex-face-recognition-software> accessed 13 January 2023; Alex Najibi, ‘Racial Discrimination in Face Recognition Technology’ (Harvard University, 24 October 2020) <https://sitn.hms.harvard.edu/flash/2020/racial-discrimination-in-face-recognition-technology/> accessed 13 January 2023.
125
Sheridan Wall and Hilke Schellmann, ‘We tested AI interview tools. Here’s what we
found.’ (MIT Technology Review, 7 July 2021)
Yet, police and prosecutors in most of the U.S. are not required to inform people arrested for crimes that facial recognition technologies played a role in the investigation that led to their arrest. They can hide the use of facial recognition technologies in warrants and affidavits behind phrases such as ‘investigative means’, which makes it difficult for an attorney to discover that the technology was used unless they know the tactics the police employ to shroud it.129 There is an urgent need for transparency
<https://www.technologyreview.com/2021/07/07/1027916/we-tested-ai-interview-tools/>
accessed 13 January 2023.
126
Tate Ryan-Mosley, ‘How digital beauty filters perpetuate colourism’ (MIT Technology
Review, 15 August 2021) <https://www.technologyreview.com/2021/08/15/1031804/digital-
beauty-filters-photoshop-photo-editing-colorism-racism/> accessed 14 January 2023.
127
ibid.
128
Catherine Kenny, ‘Artificial Intelligence: Can We Trust Machines to Make Fair
Decisions?’ (UC Davis, 13 April 2021) <https://www.ucdavis.edu/curiosity/news/ais-race-
and-gender-problem> accessed 14 January 2023.
129
T.J. Benedict, ‘The Computer Got it Wrong: Facial Recognition Technology and
Establishing Probable Cause to Arrest’ (2022) 79(2) Washington & Lee Law Review
<https://scholarlycommons.law.wlu.edu/cgi/viewcontent.cgi?article=4773&context=wlulr>
accessed 13 January 2023.
The second type of bias is societal AI bias, wherein societal norms and traditions cause people to have certain blind spots in their thinking. This type of bias heavily influences the aforementioned algorithmic bias; therefore, progress in addressing societal bias helps improve algorithmic bias as well.
III. Solutions
130
Kenny (n 128).
In June 2019, San Francisco became one of the first U.S. cities to ban the use of facial recognition technologies by the police and other departments, with the State of California also imposing a three-year moratorium on such use in January 2020.131 Other states should follow a similar strategy until the discriminatory biases in AI are resolved and the technology is thoroughly tested before being deployed in the field. Policies framed
131
ibid.
to monitor and direct its usage must be strictly enforced, along with punishments in the event of non-compliance. It is crucial to hold the police accountable for complete reliance on AI and for shoddy, sloppy detective work in investigations, so as to prevent more cases like those of Robert Williams, Nijeer Parks and Michael Oliver.
Google has helped reduce colour bias by unveiling a new ten-shade skin tone scale for testing its A.I. that is more representative of the world’s skin tones and can test the AI for bias more accurately.132 This new scale, called the Monk Skin Tone Scale, replaced the flawed Fitzpatrick Skin Type standard of six colours, which was shown to produce colour bias because it underrepresented people with darker skin. Microsoft and IBM have also pledged to reduce bias by improving their data collection methods.133 Other multinational companies that use AI should follow the same strategy to help reduce bias.
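Testing an AI against a skin tone scale, as described above, amounts to disaggregating its error rate by tone category instead of reporting a single overall number. A minimal sketch with invented data (the bucket labels merely stand in for a ten-shade scale):

```python
# Disaggregated evaluation: report accuracy per skin-tone bucket
# (hypothetical results; labels stand in for a 10-shade scale).

from collections import defaultdict

def accuracy_by_tone(results):
    """results: list of (tone_bucket, was_correct) pairs."""
    totals = defaultdict(int)
    correct = defaultdict(int)
    for tone, ok in results:
        totals[tone] += 1
        correct[tone] += ok
    return {t: correct[t] / totals[t] for t in totals}

results = [("tone_2", True), ("tone_2", True), ("tone_2", True),
           ("tone_9", True), ("tone_9", False)]
print(accuracy_by_tone(results))  # {'tone_2': 1.0, 'tone_9': 0.5}
```

An aggregate accuracy of 0.8 over these five samples would hide the gap between the two buckets, which is exactly the kind of bias a per-tone scale is meant to surface.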
132
‘Google unveils new 10 shade skin-tone scale to test AI for bias’ (The Economic Times,
12 May 2022) <https://economictimes.indiatimes.com/tech/technology/google-unveils-new-
10-shade-skin-tone-scale-to-test-ai-for-bias/articleshow/91506703.cms> accessed 14
January 2023.
133
Alex Najibi, ‘Racial Discrimination in Face Recognition Technology’ (Harvard
University, 24 October 2020) <https://sitn.hms.harvard.edu/flash/2020/racial-discrimination-
in-face-recognition-technology/> accessed 13 January 2023.
134
Ryan-Mosley (n 126).
in the caste system.135 Light skin is associated with beauty and nobility in China.136 People of many different races experience colourism in the US, because this prejudice is based primarily on appearance rather than race. Black Americans with lighter skin were historically assigned more domestic chores, while those with darker skin were far more likely to labour in the fields when they were enslaved.137 In current times, digital colourism has emerged through the widespread use of selfies and face filters.138 As per a report by Snapchat, about 200 million people use its ‘Snapchat Lenses’ feature every day, and some of them use it to lighten their skin tone. Instagram and TikTok have automatic image-enhancing features and filters that produce the same effect, in an almost imperceptible, subtle manner.139 As mentioned before, pre-set skin tone ranges in the imaging chips of personal cameras prevent the accurate portrayal of darker skin tones. As recently as 2020, Twitter came under fire for an image-cropping tool that preferred faces that were ‘lighter, thinner and younger’, thus reinforcing the prominence of people with lighter skin tones over those with darker ones.140 Digital technologies are thus continuing to narrow beauty standards.
135
‘Skin colour tied to caste system, says study’ (Times of India, 21 November 2016)
<https://timesofindia.indiatimes.com/india/skin-colour-tied-to-caste-system-says-
study/articleshow/55532665.cms> accessed 14 January 2023.
136
Zhang Xi, ‘Chinese consumers obsessed with white skin bring profits for cosmetics
companies’ (The Economic Times, 20 November 2011)
<https://economictimes.indiatimes.com/news/international/chinese-consumers-obsessed-
with-white-skin-bring-profits-for-cosmetics-companies/articleshow/10796591.cms>
accessed 14 January 2023.
137
Verna M. Keith and Cedric Herring, ‘Skin Tone and Stratification in the Black
Community’ (1991) 97(3) American Journal of Sociology
<http://www.jstor.org/stable/2781783.> accessed on 14 January 2023.
138
Ryan-Mosley (n 126).
139
ibid.
140
Alex Hern, ‘Student proves Twitter algorithm “bias” toward lighter, slimmer, younger faces’ (The Guardian, 10 August 2021) <https://www.theguardian.com/technology/2021/aug/10/twitters-image-cropping-algorithm-prefers-younger-slimmer-faces-with-lighter-skin-analysis> accessed 14 January 2023.
There are also flourishing AI-based ‘facial assessment tools’ on websites such as Quoves and on Face++, the world’s largest open facial recognition platform.141 These rate the attractiveness of a face, detect its ‘faults’, and recommend solutions involving injectable or surgical enhancements to rectify them. However, the detection of these ‘faults’ carries unfortunate racist biases.142 Economist Lauren Rhue found that the Face++ system consistently rated women with lighter skin tones higher on the attractiveness scale than those with darker skin tones. The same phenomenon was observed for women with Eurocentric features: smaller noses and lighter hair were rated higher than other features, regardless of skin tone. This reflects a Eurocentric bias in the people who graded the images used to train the A.I., a bias the system then codified and amplified.
The same bias is suspected to be reflected in dating services and social media platforms that build their recommendation algorithms on these prejudiced, discriminatory, colourist and racist beauty-scoring AIs.143 This further establishes how influential AI has become, shaping people’s likes and dislikes, be it on a social media app or an online dating service.
141
Tate Ryan-Mosley, ‘I asked an AI to tell me how beautiful I am’ (MIT Technology
Review, 5 March 2021) <https://www.technologyreview.com/2021/03/05/1020133/ai-
algorithm-rate-beauty-score-attractive-face/> accessed 14 January 2023.
142
ibid.
143
ibid.
144
Kenny (n 128).
145
Lynch v State [2018] 260 So.3d 1166; People v Knight [2020] 130 N.Y.S.3d 919.
146
ibid.
147
Khari Johnson, ‘How Wrongful Arrests Based on AI Derailed 3 Men’s Lives’ (Wired, 7
March 2022) <https://www.wired.com/story/wrongful-arrests-ai-derailed-3-mens-lives/>
accessed 13 January 2023.
148
ibid.
facial recognition software misidentified them. The three cases shared some common features.
149
Johnson (n 147).
received the photo from the fake ID and identified Nijeer as a ‘high-profile match’ through their facial recognition system. When he learnt that the police had approached his grandmother’s home about a case they believed concerned him, he went to the police station himself in an attempt to clear his name. He was, however, arrested.
After his first appearance in court, three days after his arrest, Nijeer was not released. This led him to wonder how long a sentence he faced, considering that the charges of assault, theft, and eluding arrest carry long terms. According to the complaint, his maximum sentence could have been up to twenty-five years. His previous drug-related charges also weighed heavily on him, and a plea deal, rather than an opportunity to prove his innocence at trial, began to seem the more attractive option, especially as the prosecutors kept pushing him towards one. However, when he got a new phone about six months later, he found among his old photos a receipt for a money transfer to his fiancée that conclusively placed him thirty miles away from the gift shop, proving his innocence. Even then, it was only after several more months that the charges against Nijeer were dropped.
150
ibid.
photograph in person. The lawsuit also asserts that Parks would not have been a suspect had the police not disregarded the DNA and fingerprint evidence the real suspect left at the crime scene. Parks sought damages for loss of wages and emotional harm; however, no trial date had been set.
Nijeer shared this ordeal only with close family and friends, partly because of his prior criminal record. According to him, the incident divided the people he knew into two factions: those who stood by him, and those who wondered whether he had actually committed the crime and hence kept their distance. His fiancée was part of the latter faction. Nijeer did not discuss the case with his ten-year-old son while he was fighting it, but he did so in May 2021, after they watched a sixty-minute segment on the use of facial recognition by the police. This led to a rite of passage that African American families sometimes refer to as ‘The Talk’, where they discussed the different ways in which a Black person is supposed to interact with the police for their safety.
All parties to the lawsuit denied the allegations against them, while Idemia
refused to comment on the matter.151
151
ibid.
Oliver filed a federal lawsuit153 against the city of Detroit and Detective Donald Bussa in Michigan in October 2020, seeking damages for economic loss and emotional distress. The lawsuit asserted that Bussa misrepresented information in the search warrant, including the teacher’s initial identification of a former pupil, and that the detective failed to speak with numerous witnesses or with the school where the altercation occurred. Oliver requested a court order prohibiting the Detroit police from using facial recognition software until issues with the software’s performance on people of various races, ethnicities, and skin tones are addressed. The lawsuit demanded that investigating officers be obliged to inform judges reviewing arrest warrants that the quality of an image can affect the performance of the software and the accuracy of its results. Oliver’s attorney also wanted the police to reveal the images returned by the facial recognition software
152
T.J. Benedict, ‘The Computer Got it Wrong: Facial Recognition Technology and
Establishing Probable Cause to Arrest’ (2022) 79(2) Washington & Lee Law Review
<https://scholarlycommons.law.wlu.edu/cgi/viewcontent.cgi?article=4773&context=wlulr>
accessed 13 January 2023.
153
Johnson (n 147).
besides Michael, and the accuracy of the software in an area where brown
and black people predominantly reside. All parties to the lawsuit denied
allegations against themselves.
154
ibid.
Craig later informed the Police Department that Detective Bussa had realised his mistake four days after Robert’s arrest, when he reviewed security camera footage and discovered that the thief was a different person. Craig had told the Police Commissioners in July 2019 that facial recognition technologies could never be used as the sole basis for arresting a person. Michael was arrested a week after Craig made this statement. In September 2019, the commissioners adopted a policy instructing officers to use facial recognition software only in cases of violent crimes such as homicides or home invasions, and made violation of the policy a fireable offence. Four months later, Robert Williams was arrested for shoplifting.
After learning about Robert Williams’s botched arrest, Craig stated that it was the result of sloppy investigative work and that nothing was wrong with the facial recognition software. He also stated that Robert and Michael were arrested through the software before the policy governing its use was enforced. While Craig has since admitted155 that this facial recognition software identifies the wrong person 90% of the time, this sudden change of opinion most probably had more to do with the fact that he was the Republican candidate running for Governor of Michigan than with an actual admission of the obvious flaws in facial recognition software.
Social media giants such as YouTube, Facebook, Twitter, and others are
counting on AI Technology to help reduce or control the spread of racist,
155
ibid.
Two recent research studies157 suggest, however, that AI trained to recognise hate speech may instead wind up reinforcing racial bias. In one study,158 researchers discovered that tweets written by African Americans were 1.5 times more likely to be flagged as hateful or offensive, and 2.2 times more likely to be flagged if written in African American English (commonly spoken by Black people in the US). Another study found similarly widespread evidence of racial bias in hate speech detection across five commonly used academic data sets for analysing hate speech, together comprising approximately 1,55,800 Twitter posts.159
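Figures like the 1.5x rate above come from comparing how often a classifier flags posts from different groups. A sketch of that computation with toy numbers (illustrative only, not the cited studies' data):

```python
# Compare flag rates across two groups of posts (toy data;
# 1 = flagged by the classifier, 0 = not flagged).

def flag_rate(flags):
    """Fraction of posts flagged."""
    return sum(flags) / len(flags)

group_a_flags = [1, 0, 0, 1]   # baseline group: 50% flagged
group_b_flags = [1, 1, 1, 0]   # comparison group: 75% flagged

disparity = flag_rate(group_b_flags) / flag_rate(group_a_flags)
print(disparity)  # 1.5: group B is flagged 1.5x as often
```

A disparity well above 1.0 on comparable content is the statistical signature of the bias the studies describe.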
A major reason160 for this is that social context determines what counts as offensive. For example, the words ‘queer’ and ‘n-word’ can be considered offensive and derogatory in certain contexts and entirely
156
Shirin Ghaffary, ‘The algorithms that detect hate speech online are biased against black
people’ (Vox, 15 August 2019) <https://www.vox.com/recode/2019/8/15/20806384/social-
media-hate-speech-bias-black-african-american-facebook-twitter> accessed 14 January
2023.
157
Maarten Sap, Dallas Card, Saadia Gabriel, Yejin Choi and Noah A. Smith, ‘The Risk of
Racial Bias in Hate Speech Detection’ (University of Washington, 2019)
<https://maartensap.com/pdfs/sap2019risk.pdf> accessed 14 January 2023; Thomas Davidson, Debasmita Bhattacharya and Ingmar Weber, ‘Racial Bias in Hate Speech and Abusive Language Detection Datasets’ (Association for Computational Linguistics, 2019) <https://aclanthology.org/W19-3504> accessed 14 January 2023.
158
Sap et al (n 157).
159
Davidson et al (n 157).
160
Ghaffary (n 156).
It is not known beyond doubt that the content moderation systems of Facebook, Twitter, and Google show the same biases as in these studies, since these businesses use proprietary technology to moderate content. But the big tech companies frequently consult academics for advice on how to enforce their hate speech policies effectively. If leading researchers are identifying weaknesses in widely used academic data sets, therefore, it poses a serious issue for the tech sector as a whole.
Interestingly enough, the effects of this bias can also be seen in other areas. Driverless cars have been found to be more likely to hit people of colour, because the object detection systems they use to identify pedestrians find it difficult to detect people with darker skin tones.161 As per the FDA, pulse oximeters used to measure oxygen saturation levels in COVID-19 patients
161
Kenny (n 128).
also do not work accurately with people who have darker skin
pigmentation.162
VIII. Conclusion
162
Jacqueline Howard, ‘FDA panel examines evidence that pulse oximeters may not work as
well on dark skin’ (CNN, 1 November 2022)
<https://edition.cnn.com/2022/11/01/health/pulse-oximeters-fda-meeting/index.html> accessed 14 January 2023.
Abstract
Artificial intelligence refers to computer systems capable of carrying out tasks that typically require human intelligence. Machine learning (ML), the driving force behind artificial intelligence (AI) systems, involves gathering knowledge and rules for using data. The law is a field highly amenable to the use of AI and machine learning in many respects. Machine learning and law rest on surprisingly similar ideas: both use previous instances to infer rules that will apply to new circumstances. This sort of reasoning is precisely the kind of endeavour to which artificial intelligence may be successfully applied.
AI in Law: Not a Gamble on Morality but a Facilitator of Precision
I. Introduction
163
Dr. Paul Marsden, ‘Artificial Intelligence Defined: Useful list of popular definitions from
business and science’ (Digital Wellbeing, 4 September 2017)
<https://digitalwellbeing.org/artificial-intelligence-defined-useful-list-of-popular-
definitions-from-business-and-science/> accessed 28 December 2022.
Much is being said about the use of artificial intelligence in law. In a country like India, where 5 crore cases are pending164 and the problem is not expected to be solved in the near future, or even within two decades, artificial intelligence has raised considerable expectations among judges, legal professionals, and the general public. While delivering his keynote address on ‘Navigating AI and Technology Disputes via Arbitration’ in Dubai in May 2022, Hon’ble Mr Justice DY Chandrachud said that “law and arbitration must keep up with technological advancements and the increasing use of artificial intelligence in daily life in order for the existing adjudicatory system to resolve new generational disputes,”165 which shows that the present judicial authorities are comfortable with the incorporation of artificial intelligence into the legal sector.
164
‘Pendency of 5 Crore Court Cases a Matter of Grave Concern: Kiren Rijiju’ (The Hindu,
6 December 2022) <https://www.thehindu.com/news/national/pendency-of-5-crore-court-
cases-a-matter-of-grave-concern-kiren-rijiju/article66231956.ece> accessed 28 December
2022.
165
Dhananjay Mahapatra, ‘Law must keep up with tech progress: Justice Chandrachud’ (The
Times of India, 21 March 2022) <https://timesofindia.indiatimes.com/india/law-must-keep-
up-with-tech-progress-justice-chandrachud/articleshow/90341606.cms> accessed 28
December 2022.
166
Pritam Bordoloi, ‘The Power & Pitfalls of AI in Indian Justice system’ (Analytics India,
25 July 2022) <https://analyticsindiamag.com/the-power-pitfalls-of-ai-in-indian-justice-
system/> accessed 28 December 2022.
167
ibid.
168
ibid.
According to the National Judicial Data Grid (“NJDG”), 3.93 crore cases are pending in the subordinate courts,169 49 lakh in the high courts, and 57,987 in the Supreme Court.170 A 2009 Law Commission Report stated that, with the current strength of judges, it would take 464 years to clear the backlog.171 In the High Courts the pendency is even higher: half of the 8 million cases there have been pending for more than three years.172 High backlogs, from the lower courts to the higher courts, are not new in India; the problem has persisted since independence.
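The scale of the problem can be made concrete with a back-of-the-envelope calculation; the figures below are illustrative placeholders, not the Law Commission's actual inputs:

```python
# Back-of-the-envelope: years to clear a backlog at a given net
# clearance rate (hypothetical figures, not official projections).

def years_to_clear(pending, disposed_per_year, filed_per_year):
    """Years needed to clear `pending` cases at the current net rate."""
    net = disposed_per_year - filed_per_year
    if net <= 0:
        return float("inf")  # the backlog never clears
    return pending / net

# e.g. 4 crore pending cases and a net clearance of 1 lakh cases a year
print(years_to_clear(4_00_00_000, 21_00_000, 20_00_000))  # 400.0
```

The point the calculation makes is that when net clearance (disposals minus new filings) is small relative to the backlog, the time to clear it stretches into centuries, which is how estimates on the order of 464 years arise.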
169
PTI, ‘Over 3.93 Crore Cases Pending in Lower, Subordinate Courts: Govt’ (News18, 4
August 2021) <https://www.news18.com/news/india/over-3-93-crore-cases-pending-in-
lower-subordinate-courts-govt-4044947.html> accessed 29 December 2022.
170
Roshni Sinha, ‘Examining pendency of cases in the Judiciary’ (PRS Legislative
Research, 8 August 2019) <https://prsindia.org/theprsblog/explainer-code-occupation-
safety-health-and-working-condition?page=45&per-page=1> accessed 29 December 2022.
171
‘Judicial vacancies in the Supreme Court must be filled soon to speed up justice delivery’
(Hindustan Times, 2 January 2018) <https://www.hindustantimes.com/editorials/judicial-
vacancies-in-the-supreme-court-must-be-filled-soon-to-speed-up-justice-delivery/story-
hfSW8pGam5QqGJzcrLY3hP.html> accessed 30 January 2022.
172
ibid.
The Indian judiciary can, however, benefit from significant reforms like “AI-
based smart courts” in order to reduce the large backlog of pending cases.
Such a court would be automated, with a database of all prior decisions
involving comparable facts and situations. It could be useful in resolving
cases involving family, marriage, drunk driving, land, and other issues. For
instance, if a person broke the speed limit while driving, going 100 km/hr in
an 80 km/hr zone, the judge would only enter the relevant keywords (80
km/hr, 100 km/hr) into the system, which would hold all cases with
comparable facts and circumstances in its database, and the system would
then render a verdict based on the laws and precedents dealing with similar
facts. Now suppose the driver was also under the influence of alcohol while
breaking the speed limit. The judge would then add “driver had consumed
alcohol” to the other two keywords, and the verdict would be delivered on
the basis of the laws and precedents governing similar situations in which the
accused broke the speed limit while intoxicated.
India can take inspiration from China in terms of automation and use of AI in
resolving the backlog of pending cases in their country. The emergence of
automated pattern analysis has transformed the way courts operate in China.
Since 2014, Chinese courts have uploaded more than 120 million documents
to a centralised website named “China Judgments Online”, 173 allowing the
public access to an unprecedented number of court judgements. In recent
times, courts all around the nation have been experimenting with integrating
AI into their proceedings.
173
Rachel E. Stern, Benjamin L. Liebman, Margaret E. Roberts and Alice Z. Wang,
‘Automating Fairness? Artificial Intelligence in the Chinese Courts’ (2021) 59 Columbia
Journal of Transnational Law 515
<https://scholarship.law.columbia.edu/cgi/viewcontent.cgi?article=3946&context=faculty_s
cholarship> accessed 30 January 2022.
India will need to work in this direction with the backing of the leaders of all
major political parties. Unlike China, India is a multi-party democracy, and
allegations about the motivations behind “smart courts” could reduce public
confidence in the judiciary and lead people to believe that the court has
become a puppet of the government, operating on algorithms fixed by the
government. There should therefore be minimal government interference in
the implementation of “smart courts” in India; the government will, however,
need to provide legal backing through statutes. There is currently no statute
in India that deals exclusively with automation and AI; the only statute that
even “touches” on the topic is the Information Technology Act, 2000.
Despite the modifications to Sections 43A and 72 of the Act,174 many issues
remain and there is no protection for AI systems.
The United Kingdom and the United States, which have given AI considerable
174
SS Rana and Co. Advocates, ‘Information Technology (Reasonable Security Practices
And Procedures And Sensitive Personal Data Or Information) Rules, 2011’ (Mondaq, 5
September 2017) <https://www.mondaq.com/india/data-protection/626190/information-
technology-reasonable-security-practices-and-procedures-and-sensitive-personal-data-or-
information-rules-2011> accessed 30 January 2022.
exposure by passing specific laws for the protection of AI as well as the
humans using it, offer examples of the thorough legislation that we also need.
AI-automated smart courts can be a boon for India. The majority of cases
pending in all types of courts could be resolved on the basis of algorithms,
but trials for grave offences that impact society as a whole, such as rape,
murder, dacoity, robbery and kidnapping, should follow the classical method,
because in these cases the judge’s discretionary powers and obiter dicta
matter a great deal.
the threats to privacy and democracy degrade human values.175 AI can have
(and likely already has) an adverse impact on democracy, in particular where
it comes to: (i) social and political discourse, access to information, and voter
influence; (ii) inequality and segregation; and (iii) systemic failure or
disruption.176
The use of AI in the legal system may have an impact on the principle of
judicial independence. Judicial independence refers to the autonomy of
judges to make decisions based on their own interpretation of the law,
without interference from other branches of government or external
pressures. AI systems, on the other hand, are designed to follow specific
rules and procedures, and they may be less likely to consider extenuating
circumstances or to exercise discretion in a way that is consistent with the
principles of justice.
175
Karl Manheim and Lyric Kaplan, ‘Artificial Intelligence: Risks to Privacy and
Democracy’ (2019) 21 Yale JL & Tech 106
<https://yjolt.org/sites/default/files/21_yale_j.l._tech._106_0.pdf> accessed 8 January 2023.
176
Catelijne Muller, ‘The Impact of Artificial Intelligence on Human Rights, Democracy
and the Rule of Law Draft D.D.’ (2020) ALLAI <https://allai.nl/wp-
content/uploads/2020/06/The-Impact-of-AI-on-Human-Rights-Democracy-and-the-Rule-of-
Law-draft.pdf> accessed 8 January 2023.
The cost of maintaining and updating these systems over time is also a
significant concern. AI systems require regular updates and maintenance to
ensure that they continue to function properly and keep pace with the latest
technological developments.
177
Paweł Marcin Nowotko, ‘AI in Judicial Application of Law and the Right to a Court’
[2021] Faculty of Law and Administration, University of Szczecin.
One reason is that AI systems are inherently opaque, and it is difficult for
citizens to understand how they make decisions. This lack of transparency
can lead to a lack of trust in the system, as citizens may question the fairness
and impartiality of the decisions made by AI. Most people have little in-depth
knowledge of legal technologies, and the perceived risk negatively affects
both trust in and the perceived usefulness of those technologies.178
Another reason is that AI systems may perpetuate biases that are present in
the data they are trained on, which could lead to unfair or discriminatory
decisions that disproportionately affect marginalized groups. The majority of
these biases result from either training the algorithms on biased data or
178
Dovilė Barysė, ‘People’s Attitudes towards Technologies in Courts’ (2022) 11 Laws 71
<https://doi.org/10.3390/laws11050071> accessed 10 January 2023.
because the AI focuses on correlation rather than causation.179 This can lead
to a lack of trust in the system among these groups, as they may feel that the
system is not working in their best interests.
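The correlation-versus-causation point can be made concrete with a toy sketch. The groups, outcomes, and figures below are entirely invented; the sketch only illustrates that a model which learns outcome rates from historical decisions reproduces whatever bias those decisions contain.

```python
# Toy illustration with invented data: a "model" that learns adverse-
# outcome rates from past decisions captures correlation, not causation,
# and so reproduces historical bias against a group.
from collections import defaultdict

def train(records):
    """Learn the historical rate of adverse outcomes for each group."""
    totals, adverse = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        adverse[group] += outcome == "adverse"
    return {g: adverse[g] / totals[g] for g in totals}

def predict(rates, group, threshold=0.5):
    """Predict 'adverse' whenever the group's historical rate exceeds the threshold."""
    return "adverse" if rates[group] > threshold else "favourable"

# Invented history in which group B was treated more harshly than
# group A for identical conduct.
history = ([("A", "favourable")] * 8 + [("A", "adverse")] * 2
           + [("B", "favourable")] * 3 + [("B", "adverse")] * 7)

rates = train(history)
# The learned rule now penalises group membership itself:
# predict(rates, "A") -> "favourable", predict(rates, "B") -> "adverse"
```

Nothing in the training step asks *why* group B fared worse; the model simply encodes the correlation, which is precisely the mechanism of perpetuated bias described above.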
179
Erlis Themeli and Stefan Philipsen, ‘AI as the Court: Assessing AI Deployment in Civil
Cases’, in K. Benyekhlef (ed), AI and Law: A Critical Overview (Thémis edn, 2021) 213-
233 <https://ssrn.com/abstract=3791553> accessed 8 January 2023.
180
Sir Henry Brooke, ‘Algorithms, Artificial Intelligence and the Law’ (Bailii, 12
November 2019) <https://www.bailii.org/bailii/lecture/06.pdf> accessed 8 January 2023.
Thus, while AI-powered smart courts have the potential to improve the
efficiency and accessibility of the legal system in India, the government’s
monopoly over the algorithms used in these systems could lead to abuse of
power, a lack of accountability, and a lack of transparency.
181
ibid 21.
IV. Conclusion
AI has the potential to improve the efficiency and speed of the legal system
by handling large volumes of data and assisting judges in legal research.
However, there are also concerns about the use of AI in the legal sector, such
as the question of whether AI can replace lawyers and the potential impact
on job opportunities in the legal field. It is important to note that while AI
can assist in tasks such as document drafting and contract analysis, it cannot
replace a lawyer in court.
Furthermore, the use of AI in the legal system also raises concerns about
transparency and accountability, as the decision-making process of AI
systems may be difficult for citizens to understand. Additionally, AI systems
may perpetuate biases present in the data, leading to unfair or discriminatory
decisions. Therefore, it is crucial to ensure that AI systems are implemented
with proper oversight and due process to protect human rights and civil
liberties. The Indian judiciary’s effort to incorporate AI into its operations is
a step in the right direction, but it is crucial to continue monitoring the
impact of AI on the legal system and make adjustments as necessary.
V. Suggestions
Monitoring and evaluating the impact of AI on the legal system, such as its
effect on the number of pending cases and the quality of justice delivered,
will help determine if further adjustments or improvements are needed.
Abstract
The destruction of property has been addressed in different
conventions across International Humanitarian Law. In light of
evolving warfare, however, certain aspects of the law demand
evolution. One such aspect is the protection of digital intangible assets
in several forms of armed conflict. The protection currently conferred
on intangible assets is questionable and has received very little
attention under international law. This chapter therefore seeks to
demonstrate the enforceability of existing principles over intangible
assets. In addition, the protection of these intangible cultural assets
depends explicitly on cyber security, since contemporary cyber
technology is abundantly capable of adversely affecting the social and
cultural assets of an opponent. Recognizing this paradigm shift, the
chapter details the comprehensive efforts that should be undertaken to
expand the applicability of international law.
I. Introduction
The Two-Way Protective Regime of Intangible Cultural Heritage in Armed
Conflict
The conventions were framed in the context of the two world wars and were
primarily attentive to saving the lives of individuals and cultural property
from the horrors of kinetic warfare.182 Although this foundation will not lose
its relevance in the foreseeable future, the military complexity of today’s age
makes it imperative to add a new dimension to this ever-developing legal
arena.
In this regard, the ICC charged Ahmad Al Faqi Al Mahdi, an Islamic scholar,
with the destruction of cultural property.183 He was the first person to be
charged with the crime of destroying cultural heritage and monuments of
historic importance in Mali. It is vital to trace the jurisprudence of this case,
as it could set a potential example for prosecuting offenders for the
destruction of intangible assets. Such destruction could be caused by
cyber-attacks, which pose an inescapable hazard to this new form of
heritage. This chapter principally suggests the active applicability of existing
humanitarian law to digitalised assets, which should be considered archives.
Countries in contemporary times are making active efforts to digitalise their
cultural heritage in order to protect it from terrorist attacks, natural disasters,
and other forms of aggression.184
In light of the malicious cyber activity of the last few years, the chapter seeks
to initiate a new deliberation centred on the existing norms governing
state-led cyber operations aimed at destroying the cultural traditions of the
opponent.
182
See, Convention for the Protection of Cultural Property in the Event of Armed Conflict
(adopted 14 May 1954, entered into force 7 August 1956) 249 UNTS 216.
183
The Prosecutor v Ahmad Al Faqi Al Mahdi (Judgement) ICC-01/12-01/15 (27 September
2016).
184
Alonzo C. Addison, ‘The Vanishing Virtual: Safeguarding Heritage’s Endangered Digital
Record’ in Yehuda Kalay, Thomas Kvan and Janice Affleck (eds), New Heritage: New
Media and Cultural Heritage (Routledge 2002) 27, 28–29.
The goal of this chapter is to frame the issue so as to serve as the starting
point for a more in-depth conversation among interested parties about
required clarifications of existing rules or the creation of new frameworks.
To this end, the chapter first maps the current cyber threat landscape by
presenting hypothetical scenarios in which state-led cyber operations carried
out during an armed conflict interfere with activities that are crucial to the
operation of contemporary interconnected societies. The section that follows
explores whether, and to what extent, the existing legal structures are
adequate to safeguard society against the repercussions of potential cyber
conflict. While International Humanitarian Law (“IHL”) is the main topic,
the chapter also looks at how International Criminal Law may apply and be
relevant in armed-conflict circumstances. The final section offers potential
future directions based on these findings and serves as a jumping-off point
for in-depth discussions with all pertinent stakeholders.
185
ibid 35-36; Graciela Gestoso Singer, ‘ISIS’s War on Cultural Heritage and Memory’
[2015] UK Blue Shield.
186
Jean-Marie Henckaerts and Louise Doswald-Beck, Customary International
Humanitarian Law Volume I: Rules (Cambridge University Press 2005) Rule 38–41.
The primary set of regulations that could govern the increasingly frequent
cyber-attacks on intangible cultural assets is IHL. It is guided by the maxim
of Article 35 of Additional Protocol I (“AP I”), which provides that the right
of the parties to a conflict to choose methods or means of warfare is not
unlimited, but rather must be limited and regulated.187 The prime focus of
this principle is to alleviate suffering, particularly of civilians. This is
achieved through several principles, namely (a) proportionality, (b)
distinction and (c) military necessity. Given the inspiration behind these
fundamental principles of IHL, it is not difficult to argue that
disproportionate cyber-attacks on intangible cultural property of public
concern are prohibited, regardless of the nature of the conflict.
187
Protocol Additional to the Geneva Conventions of 12 August 1949 (adopted 8 June 1977)
1125 UNTS 3, art 35.
188
See, A.F. Vrdoljak, ‘Minorities, Cultural Rights and the Protection of Intangible Cultural
Heritage’ [2005] ESIL.
their identity and maintain the continuity of their rituals and customs.189 A
close examination of this definition shows that the majority of intangible
cultural heritage (“ICH”) would fall within groupings already recognized
under the Geneva and Hague Conventions of 1949 and 1954 respectively, as
it includes musical instruments, sacred groves, forms of dance and other
spiritual assets. Besides, even though some assets are intangible, they satisfy
certain material elements that justify the applicability of the earlier
conventions, since the tangible and intangible components are interconnected
and depend on each other for their existence: for example, the traditional
dance in the Royal Palace of Vietnam or the procession of Lord Jagannatha
from the Shree Jagannatha Temple.
As far as the legal regulations of IHL are concerned, the initial question
arises whether the definition of ‘attack’ in Additional Protocol I could be
189
UNESCO Convention for the Safeguarding of the Intangible Cultural Heritage (adopted
17 October 2003), art 2(1).
appropriately applied to cyber-attacks that are meant to destroy ICH. The
threshold can be satisfied by showing that any systematic act that foresees
the obliteration of, or injury to, a person or an object qualifies as an ‘attack’
under Article 49(1).190 However, the difficulty persists as to what happens if
a cyber-attack merely affects the functionality of the targeted object rather
than destroying it completely. The majority of experts maintain that it will
amount to an attack if the affected system requires restoration in any way.191
Along the same lines, the ICRC supports a broader interpretation of the
definition of ‘attack’.192 It maintains that the object and purpose of IHL is to
ensure the protection of civilians and civilian objects in armed conflict. It
therefore invokes Article 31 of the Vienna Convention on the Law of
Treaties, which provides that a treaty shall be interpreted in light of its
object, purpose, and ordinary meaning.193 This definition of ‘attack’ indeed
has the potential to act as a default rule for establishing the liability of an
aggressor under other relevant conventions. The Nicaragua judgment offers
another relevant recourse for determining whether the act in question can be
said to be an ‘attack’ on ICH.194
190
See, Laurent Gisel, Tilman Rodenhäuser and Knut Dörmann, ‘Twenty Years on:
International Humanitarian Law and the Protection of Civilians against the Effects of
Cyber Operations during Armed Conflicts’ (2020) International Review of the Red Cross.
191
Michael N. Schmitt, Tallinn Manual 2.0 on the International Law applicable to Cyber
Operations (2nd edn, Cambridge University Press 2017) 417.
192
International Humanitarian Law and the Challenges of Contemporary Armed Conflicts
(International Committee of the Red Cross 2015) ICRC 32IC/15/11.
193
ibid.
194
Case Concerning Military and Paramilitary Activities in and against Nicaragua
(Nicaragua v United States of America) (Merits) [1986] ICJ Rep 14, [195].
195
The Brussels International Declaration concerning the Laws and Customs of War
1874, art 14(164).
196
The Hague Convention (V) Respecting the Rights and Duties of Neutral Powers and
Persons in Case of War on Land 1907 (entered into force 26 January 1910), art 56.
197
ibid art 27.
198
ibid art 46.
A similar protection is also provided in Articles 75 and 4(1) of Additional
Protocols I and II to the Geneva Conventions, respectively.199 These articles,
which shelter physical and intellectual possessions, should be read together
with the UNESCO Convention of 2003. The specific words ‘practices and
customs’ in the context of the UNESCO Convention relate to oral traditions
and knowledge.200 They implicitly refer to the custom of passing knowledge
from generation to generation for its continuation and preservation. Such
cultural practices are also entitled to protection under ILO Convention No.
169 and the United Nations Declaration on the Rights of Indigenous
Peoples.201 It is therefore necessary to underline the importance of
‘preservation’ and ‘transmission’ for the bearers of ICH and their succeeding
generations. The protection of ICH is not essential merely for a few
concerned individuals; it is vital for the existence of a ‘nation’ itself.
The Hague Convention of 1954 and its Protocols are the lex specialis that
aim to protect tangible cultural heritage in warfare. The Convention seeks to
protect cultural property of great importance to people, though there is no
common ground as to what the threshold of importance should be. In this
regard, the prevailing view among scholars is that it is the responsibility of
the state to determine the monuments of its national importance.
199
Protocol Additional to the Geneva Conventions of 12 August 1949 (Protocol I) (adopted
8 June 1977) 1125 UNTS 3, art 75; Protocol Additional to the Geneva Conventions of 12
August 1949, and relating to the Protection of Victims of Non-International Armed Conflicts
(Protocol II) (adopted on 8 June 1977) 1125 UNTS 609, art 4(1).
200
Christoph Antons and Willian Logan, ‘Intellectual Property, Cultural Property, and
Intangible Cultural Heritage’ in Christoph Antons and Willian Logan (eds.), Intellectual and
Cultural Property and the Safeguarding of Intangible Cultural Heritage (Routledge 2018).
201
UN General Assembly, United Nations Declaration on the Rights of Indigenous Peoples,
A/RES/61/295 (2 October 2007).
To this end, the ICRC has maintained that the rudimentary idea behind both
instruments is the same and of a similar nature. The 2003 UNESCO
Convention is closer to the 1954 Hague Convention, as appropriate
protection was given to cultural property of great importance to people while
nominations were being prepared for the 2003 UNESCO Representative List
of the Intangible Cultural Heritage of Humanity.205 That list was modelled on
the 1954 Hague Convention list and the 1972 UNESCO World Heritage List,
which serve as guides for state forces in the case of tangible cultural
heritage.206 The cultural sites on these two lists are protected by Article
85(4)(d) of AP I, which penalises violations of provisions related to cultural
heritage conservation, and there is a symbiotic relationship between tangible
and intangible heritage. Given the similarities between the Hague
Convention and the lists, it can be argued that similar protection should be
accorded to the cultural properties enshrined in the 2003 UNESCO
Representative List of Humanity’s Intangible Cultural
202
See, J. Blake, Introduction to the Draft Preliminary Study on the Advisability of
Developing a Standard-setting Instrument for the Protection of Intangible Cultural Heritage
(UNESCO International Round Table Conference 2001).
203
Protocol Additional to the Geneva Conventions of 12 August 1949 (adopted 8 June 1977)
1125 UNTS 3, arts 53, 85 (4)(d).
204
Hague Convention for the Protection of Cultural Property in the Event of Armed Conflict
(adopted May 14, 1954, entered into force 7 August 1956) 249 U.N.T.S. 216 (Hague
Convention).
205
Vrdoljak (n 188) 285.
206
ibid.
Heritage. Such a conclusion can be supported by the fact that both lists
contained the same historical monuments of public importance.
The conservation of ICH and the cyber-protection mechanisms of a state are
two sides of the same coin and are directly correlated. As in almost every
other field, the digital revolution has transformed the arena of ICH. States
can now, for their convenience, convert tangible or intangible heritage into
digitalised information such as 3D visuals, scanned texts and chronicles, and
audio recordings. Platforms such as YouTube act as the largest collection of
moving images that have the nature of ‘continuing value’. Similarly,
Wikipedia can be called a digital storehouse of information
207
UNESCO World Heritage Convention 2015.
208
Antons and Willian Logan (n 200) 6.
that has cultural importance. Similarly, CyArk is a digital archive that was
created after the destruction of the Bamiyan Buddhas of Afghanistan to
digitally store documents related to the world’s most spectacular cultural
places. Motion-capture technology has enabled the digitisation of traditional
Japanese dances, allowing master performers’ craft to be studied in a new
way and enabling the preservation of cultural artefacts. Iso Huvila has
developed participatory archives by using MediaWiki software to convert the
digital archives of two Finnish cultural heritage sites, namely the Saari
Manor in Mietoinen and Kajaani Castle, into participatory spaces for archive
users.209 The Huvila model is an apt example of how digital archives can
engage future generations with their cultural roots. Likewise, the digital
protection of China’s intangible cultural heritage has developed rapidly, with
great success and numerous accomplishments, such as the digital protection
of the Silk Road cultural heritage project in 2004, the research project for the
world’s intangible cultural heritage protection in cooperation with Samsung
Galaxy Co., Ltd. in 2004, and the “Memory of the World in Lijiang, China”
Project in 2005.
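One concrete form that the cyber protection of such digitised archives takes in archival practice is fixity checking: recording a cryptographic hash of each object at ingest and re-computing it later to detect tampering or corruption. The sketch below is a minimal in-memory illustration of that idea; the object names and contents are invented, not drawn from any of the archives named above.

```python
# A minimal sketch of fixity checking for a digitised heritage archive:
# hash every object at ingest, then audit later for silent changes.
# Object names and contents are hypothetical placeholders.
import hashlib

def fingerprint(data: bytes) -> str:
    """Return the SHA-256 digest of an archived object's bytes."""
    return hashlib.sha256(data).hexdigest()

def make_manifest(archive: dict) -> dict:
    """Record a digest for every object at ingest time."""
    return {name: fingerprint(data) for name, data in archive.items()}

def audit(manifest: dict, archive: dict) -> list:
    """Return the names of objects whose bytes no longer match the manifest."""
    return [name for name, digest in manifest.items()
            if fingerprint(archive[name]) != digest]

archive = {"temple_scan.obj": b"3d-mesh-bytes",
           "throat_singing.wav": b"audio-bytes"}
manifest = make_manifest(archive)

archive["throat_singing.wav"] = b"defaced-bytes"   # simulated cyber tampering
tampered = audit(manifest, archive)                 # -> ["throat_singing.wav"]
```

An attack that silently alters or deletes a digitised cultural object is thus detectable, provided the manifest itself is kept beyond the attacker's reach, which is why such checks sit at the heart of the dependency between ICH conservation and cyber security noted above.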
However, as these archives evolve, the threat to them is also increasing, with
cyber-attacks that aim to destroy these assets on account of the attackers’
religious and political beliefs. The law of armed conflict should therefore
apply in both non-international and international armed conflicts, and for
this purpose cyber-attacks should be brought within the definition of armed
conflict. Even in the absence of any concrete regulation, consideration
should be given to the Martens Clause of the Geneva Conventions and their
209
Kozaburo Hachimura, ‘Digital Archives of Intangible Cultural Properties’
(International Conference on Culture and Computing, 17 December 2017) 55.
Additional Protocols.210 The Martens Clause specifies that, even in the
absence of a complete code, belligerents and civilians remain under the
protection of the principles and rules recognised by civilised society and
driven by the public conscience.211 The Martens Clause thus reflects
customary international law and ensures that nothing takes place in a legal
vacuum.
Although digitalised cultural assets and cyber-attacks could fall within the
domain of IHL, the question of what consequences perpetrators will face
remains unanswered. The obligations become all the more important where
the cultural asset exists in its intangible form; YouTube, for example,
contains recordings of traditional Mongolian throat singing and traditional
American-Indian dance. The answer depends on whether the armed conflict
is of an international or non-international nature. For armed conflicts of an
international nature, there must be the involvement of two or more states as
opponents, as per common Article 2 of the Geneva Conventions.212 Besides,
there must be sophisticated and organised armed groups under the command
of one of the states engaged in hostilities. However, the question persists
whether the acts of non-state armed groups can be attributed to the state.
Concerning this, the ICTY in Tadic’s case formulated the ‘overall control’
test to ascertain whether the Bosnian Serbs were under the control of the
Federal Republic of Yugoslavia.213 The Tribunal concluded that there
210
See, F. Kalshoven, Constraints on the Waging of War (Martinus Nijhoff ed, Dordrecht
1987) 14.
211
Geneva Convention I, art 63; Geneva Convention II, art 62; Geneva Convention III, art
142; Geneva Convention IV, art 158.
212
Geneva Convention Related to the Protection of Civilian Persons in Time of War
(adopted on 12 August 1949) 75 UNTS 287, art 2.
213
Prosecutor v Dusko Tadic (Appeal Judgement) IT-94-1-A (15 July 1999) [131], [145],
[162].
existed sufficient influence from the state, which confirmed the existence of
an international armed conflict in that case.214 Applying a similar test to
cyber warfare, if state authorities exercised a comparable level of influence
over hackers who destroyed or caused significant damage to ICH, the
regulations pertaining to international armed conflict could be applied, as the
act could amount to an attack under Article 53 of Additional Protocol I and
the Hague Convention of 1954.
214
ibid [131], [140], [145].
215
Prosecutor v Dusko Tadic (Decision on the defence motion for interlocutory appeal) (2
October 1995), [67]-[70].
216
Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the
Protection of Victims of Non-International Armed Conflicts (Protocol II) (adopted on 8 June
1977) 1125 UNTS 609, art 1(2).
217
Kalshoven (n 210) [70].
218
Prosecution v Milosevic (Judgement) IT-02-54-R77.4 (13 May 2005), [16]-[17];
Prosecution v Furundzija (Judgement) IT-95-17/1-A (21 July 2000), [59].
219
Prosecution v Haradinaj (Judgement) IT-04-84-A (19 July 2010), [49].
220
Prosecution v Mile Mrksic (Judgement) IT-95-13/1-A (5 May 2009), [419]; Prosecution
v Fatmir Limaj (Judgement) IT-03-66-A (27 September 2007), [135].
that are employed.221 For cyber-attacks to be classified as a non-international
armed conflict (“NIAC”), the organisation must be well-armed and have a
command structure sophisticated enough to execute extended military
operations. Individuals who function “collectively” but not “cooperatively”
cannot be said to be under proper direction and organisation. Most of these
groups act digitally, with a degree of anonymity, rather than physically
executing the attack. It is therefore often impossible to determine whether the
group in question meets the NIAC standard, and the mere fact that it is
targeting the state is insufficient to trigger the application of international
humanitarian law.
221
ibid [39]-[40].
222
Protocol Additional to the Geneva Conventions of 12 August 1949 (adopted 8 June 1977)
1125 UNTS 3, art 1(1).
223
Hague Convention for the Protection of Cultural Property in the Event of Armed Conflict
(adopted 14 May 1954, entered into force 7 August 1956) 249 U.N.T.S. 216, art 19.
After World War II, efforts were made to hold criminally liable the
perpetrators engaged in the destruction of public and private property. The
Nuremberg trials marked the beginning of such efforts, when Nazis were
sentenced for plundering and destroying cultural property.224 Recourse to
International Criminal Law is essential because the existing conventions do
not enumerate specific offences that can hold a perpetrator criminally liable
in a proportionate manner.
224
Agreement for the Prosecution and Punishment of the Major War Criminals of the
European Axis 59 Stat. 1544, 82 U.N.T.S. 279, E.A.S. No. 472 (18 August 1945).
225
Statute of the International Tribunal for the Prosecution of Persons Responsible for
Serious Violations of International Humanitarian Law Committed in the Territory of the
Former Yugoslavia since 1991, SC Res 827, UN SCOR, 48th Sess, 3217th mtg, at 1-2, UN
Doc S/RES/827 (1993).
226
Rome Statute of the International Criminal Court (entered into force 1 July 2002) 2187
U.N.T.S. 90.
The Two-Way Protective Regime of Intangible Cultural Heritage in Armed
Conflict
borrowed the terms from the previous Geneva and Hague Conventions.227 In
accordance with Article 8(2)(b)(ix) of the Rome Statute, hospitals and
schools have been designated as protected property. This protected status
is not an upgraded level of protection, as hospitals and schools lose it
when their services are no longer required. In contrast, cultural property
must be safeguarded regardless of such external circumstances.
227 The Prosecutor v Strugar (Judgment, ICTY Trial Chamber) Case No IT-01-42-T (31 January 2005) [229].
228 Addison (n 2).
229 UN Committee on Economic, Social and Cultural Rights, General Comment No 21, 43rd Sess, UN Doc E/C.12/GC/21 (2009), s 15(b).
230 Addison (n 2) 79.
Blaskic231 and Kordic232 judgments too, where the perpetrator had the
requisite intent.
The then Prosecutor of the ICC, Ms Fatou Bensouda, even held that ‘what
is at stake here is not walls and bricks; those mausoleums were important
from a religious point of view and from an identity point of view too’.236 The
prosecution went further to highlight the destruction of intangible heritage in
Mali at the vital stage of the proceedings. This precedent is crucial since it is
the only instance in which a perpetrator was prosecuted for crimes against cultural
231 The Prosecutor v Blaskic (Judgment, ICTY Appeals Chamber) Case No IT-95-14-A (29 July 2004) [149]-[159].
232 The Prosecutor v Kordic and Cerkez (Judgment, ICTY Appeals Chamber) Case No IT-95-14/2-A (17 December 2004) [104]-[108].
233 Rome Statute (n 45) art 7(2)(g).
234 ibid art 7(1)(k).
235 ICC, Al Mahdi, Transcript of the Confirmation of Charges Hearing (1 March 2016) 39.
236 ibid 13.
property and not against a person. However, to meet the requirement under
Article 7(1)(k) for crimes against humanity, the prosecution has to prove that
the additional inhumane acts together inflicted considerable suffering on
mental or bodily health. The expression ‘other inhumane acts’ was
enshrined in the ICTR Statute237 and is part of customary international
law.
As per the ILC, and the ICTY in Tadic, the act must have an adverse
consequence to be classified as an inhumane act.238 An act committed to
inflict mental pain, which also includes moral agony, need not be rape or
murder; it could also be an act of apartheid or discriminatory
legislation.239 An act would qualify as an ‘inhumane act’ even if it caused
only temporary unhappiness or humiliation. Applying the same test to the
destruction of any sort of cultural heritage would certainly hold the
perpetrator liable, as can be inferred from the Al Mahdi case, where
witnesses wept on seeing the destruction of the holy gate, which caused
them mental suffering in the form of ‘temporary humiliation’.
237 UN Security Council, Statute of the International Criminal Tribunal for Rwanda (8 November 1994), art 3(i).
238 The Prosecutor v Katanga and Ngudjolo Chui, ICC-01/04-01/07-717 (30 September 2008) 450.
239 The Prosecutor v Delalic (Judgement) IT-96-21-I (21 March 1996) 511.
nature of the conflict. Individuals could also incur liability for cyber
operations, provided they possess the required mens rea under Article 30.240
Crimes committed with a stronger volitional element attract liability under
dolus directus of the first or second degree, whereas crimes committed with
recklessness or negligence, with a stronger cognitive element, attract
liability under dolus eventualis, enshrined in Article 30 of the Rome Statute.241
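The distinction drawn here between the volitional and cognitive elements of Article 30 can be pictured as a simple decision procedure. The Python sketch below is purely illustrative — the predicate names are ours, not terms of the Statute — and takes no position on the contested status of dolus eventualis under Article 30:

```python
from enum import Enum

class Dolus(Enum):
    DIRECTUS_I = "direct intent: the result is the actor's aim"
    DIRECTUS_II = "oblique intent: the result is a virtually certain side effect"
    EVENTUALIS = "the actor foresees the risk and reconciles themselves to it"
    NONE = "no qualifying mental element under Article 30"

def classify_mental_element(aim: bool, virtually_certain: bool,
                            foresaw_risk: bool, accepted_risk: bool) -> Dolus:
    """Toy mapping of volitional and cognitive elements to dolus categories."""
    if aim:
        return Dolus.DIRECTUS_I      # volitional element dominates
    if virtually_certain:
        return Dolus.DIRECTUS_II     # cognitive certainty of the outcome
    if foresaw_risk and accepted_risk:
        return Dolus.EVENTUALIS      # cognition of a risk, plus acceptance of it
    return Dolus.NONE                # mere negligence falls outside Article 30
```

On this reading, a commander who deploys an AWS without aiming at civilian harm but foreseeing and accepting the risk of it would fall under dolus eventualis — the category the chapter identifies for reckless cyber operations.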
Criminal responsibility can also arise in cases in which the perpetrator acts
under the command of a third person.245 In such cases, commanders and
superiors cannot escape liability merely because they did not themselves
commit an act that constitutes a war crime, by virtue of Article 28 of the
240 Rome Statute of the International Criminal Court (entered into force 1 July 2002) 2187 UNTS 90, art 30.
241 Sarah Finnin, ‘Mental Element under Article 30 of the Rome Statute of the International Criminal Court: A Comparative Analysis’ [2012] ICLQ 325.
242 Delalic (n 58) [345]-[354].
243 The Prosecutor v Thomas Lubanga Dyilo (Judgement) ICC-01/04-01/06 (14 March 2012) 326.
244 ibid 1005.
245 See also Prosecutor v Germain Katanga and Mathieu Ngudjolo Chui (Judgement) ICC-01/04-01/07 (30 September 2008) [495]-[499].
Rome Statute.246 In a cyber-war context, responsibility could be imposed
on the military commander or the head of a state’s cyber operations who
ordered the commission of an act amounting to the destruction of intangible
cultural property. Even a subordinate commander who conforms to the order
of a superior will not be absolved of responsibility in any manner.247 This
rule is in conformity with Articles 86 and 87 of Additional Protocol I,
which require superiors to take steps to investigate war crimes.
Additionally, it is not even mandatory that the individual be a
‘commander’ or hold military status.248 The rule is apt for cyber-attacks,
which are generally carried out by hackers without any military position.
246 ibid.
247 Rome Statute of the International Criminal Court (entered into force 1 July 2002) 2187 UNTS 90, art 33; Statute of the International Tribunal for the Prosecution of Persons Responsible for Serious Violations of International Humanitarian Law Committed in the Territory of the Former Yugoslavia since 1991, SC Res 827, UN SCOR, 48th Sess, 3217th mtg, at 1-2, UN Doc S/RES/827 (1993), art 7(4).
248 Rome Statute, art 28(b); Delalic (n 58) [239]-[254].
III. Conclusion
This ‘legal grey zone’ gained further prominence during the pandemic and
beyond, owing to this era of digital culture and data storage. This digital
emergence raised questions regarding the intersection of digital assets with
humanitarian regulations. As shown above, the regulations, particularly in
the sphere of NIAC, need the assistance of international criminal and
human rights law, which could bolster protection and prevent the
destruction of intangible assets of the kind witnessed in Iraq in 2003. The
questions over sovereignty, proportionality, and freedom of expression can
only be answered through the conjunction of IHL, the Rome Statute, and
the human rights treaties. Their scope and applicability could be a
promising subject for future research, owing to the ‘grey zones’ in NIAC
conflicts and doubts regarding the extraterritorial applicability of the human
rights treaties.
This chapter concludes that the rise of digital cultural property raises
additional intriguing issues about international human rights law, cultural
heritage, and cyberspace. Since access to the internet is a fundamental
human right, and because cultural life gives rise to a human right to cultural
heritage, protections for digital cultural property may also be derived from
international criminal law in addition to international humanitarian law.
Abstract
That technology is a double-edged sword has never been truer than it
is now. Law playing catch-up with technological progress is thus
inevitable. Meanwhile, the effective and eventual supplanting of
human decision-making by autonomous weapon systems (“AWS”)
has raised profound ethical concerns and legal obstacles. The
uncertain future of the use of technology in weapons goes beyond the
borders of International Humanitarian Law (“IHL”) compliance, as
the reliability of such weapons raises a deep sense of discomfort. The
biggest challenge for the technology in such weapons is to factor in a
situation of doubt in a conflict setting; furthermore, it needs to
maintain zero-error consistency in an algorithm that must assimilate
unique situations without on-field experience. These requirements
become necessary because a single error on the learning curve of this
adaptive Artificial Intelligence would meet the threshold of a broken
rule under public international law.
Lethal autonomous weapons have not yet met legal compliance
standards; there is, however, an uncanny haste among the States that
promote their use. This establishes a compelling case for believing
that, before long, the use of AWS will result in violations of several
human rights and a grey mist of non-compliance with IHL principles. Lack of
“Who Let The Dogs Out?” – Placing Accountability on Weaponized AI in
International Law
accountability in this regard compounds the adverse impact on the
victim’s right to a legal remedy. This chapter shall therefore explore
the forms of accountability and legal discourse available under public
international law. It will examine treaty law and customary
international law to address the challenges of establishing mens rea
within individual criminal responsibility, while also substantiating the
present obligations on States. The domain of ‘split responsibility’ and
its contemporary understanding in the context of AWS will be
addressed as well. This chapter builds upon the ICRC’s Commentary
on the 1977 Additional Protocol I, which prompted a sensitive outlook
on the development of weapon systems that minimize the role of
humans. Furthermore, this chapter will also analyze the role of
individuals associated with the production and development of such
technology and their accountability.
I. Introduction
In the last days of the fight against the Islamic State in Syria, as members of
the once-ferocious caliphate were besieged in a dirt field outside the town of
Baghuz, a United States (US) military drone buzzed high above, searching
for military targets. However, all it saw was a crowd of women and children
gathered against a riverbank. Unannounced, an American F-15E attack
fighter streaked across the drone’s high-definition field of view and dropped
a 500-pound bomb on the crowd, engulfing it in a shuddering blast.249 As the
smoke cleared, a few individuals fled for safety. Then the following jet
249 Dave Phillips and Eric Schmitt, ‘How the U.S. Hid an Airstrike That Killed Dozens of Civilians in Syria’ (New York Times, 13 November 2021) <https://www.nytimes.com/2021/11/13/us/us-airstrikes-civilian-deaths.html> accessed 24 December 2021.
250 ibid.
251 See International Covenant on Civil and Political Rights (adopted 16 December 1966, entered into force 23 March 1976) 999 UNTS 171 (ICCPR), art 6; Paul M Taylor, A Commentary on the International Covenant on Civil and Political Rights: The UN Human Rights Committee’s Monitoring of ICCPR Rights (Cambridge University Press 2020) 138-170; Stuart Casey-Maslen and C H Heyns, The Right to Life under International Law: An Interpretative Manual (Cambridge University Press 2021) 659-671; ‘Counter-Terrorism Module 8 Key Issues: Arbitrary Deprivation of Life’ (United Nations) <https://www.unodc.org/e4j/en/terrorism/module-8/key-issues/arbitrary-deprivation-of-life.html> accessed 24 December 2021; Geneva Academy, Use of Force in Law Enforcement and the Right to Life: The Role of the Human Rights Council (Geneva Academy of International Humanitarian Law and Human Rights 2016); Cóman Kenny, ‘Legislated Out of Existence: Mass Arbitrary Deprivation of Nationality Resulting in Statelessness as an International Crime’ (2020) 20 International Criminal Law Review 1026.
252 Robert Sparrow, ‘Killer Robots’ (2007) 24 Journal of Applied Philosophy 62, 62-66.
253 Vincent Boulanin and Maaike Verbruggen, ‘Mapping the Development of Autonomy in Weapon Systems’ (SIPRI, 2017) 20 <https://www.sipri.org/sites/default/files/2017-11/siprireport_mapping_the_development_of_autonomy_in_weapon_systems_1117_1.pdf> accessed 24 December 2021.
254 ibid.
to engage targets autonomously, many researchers predict they may
eventually replace drones in future battles.255
Additionally, although the technology may seem distant, the topic is already
gaining prominence among international law researchers and participants.
Major human rights groups and notable professors have expressed opposition
to AWS, and the United Nations has started considering a preemptive ban.256
Indeed, some commentators believe that within the next several years, the
world community will achieve an agreement on these weapons.257 As the
prospective ban indicates, the thought of people being removed from the
battle loop has alarmed certain human rights organizations.258 One key worry
they have voiced is the above-mentioned accountability issue.259 Without
human intervention, the weapons’ artificial intelligence will allow them to
assess data, identify courses of action, and execute reactions to a variety of
circumstances.260 Unlike human-operated drones, the acts of an AWS are
255 ‘Law and Ethics for Autonomous Weapon Systems: Why a Ban Won’t Work and How the Laws of War Can’ (Columbia University Scholarship Archive) 4-5 <https://scholarship.law.columbia.edu/faculty_scholarship/1803/> accessed 21 December 2022; Daniel Hammond, ‘Autonomous Weapons and the Problem of State Accountability’ (2015) 15 Chicago Journal of International Law 654, 655.
256 ‘States must address concerns raised by autonomous weapons’ (International Committee of the Red Cross, 2019) <https://www.icrc.org/en/document/states-autonomous-weapons> accessed 24 December 2022; ‘Autonomous weapons that kill must be banned, insists UN chief’ (UN News) <https://news.un.org/en/story/2019/03/1035381> accessed 24 December 2022.
257 Hammond (n 255).
258 UN Human Rights Council, Report of the Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, A/HRC/23/47 <https://digitallibrary.un.org/record/755741?ln=en> accessed 24 December 2021.
259 Ahan Gadkari, ‘Question of Ethical Responsibility on the Use of Unmanned Aerial Vehicles in Combat Zones’ (JSIA Bulletin) <https://www.thejsiabulletin.com/post/question-of-ethical-responsibility-on-the-use-of-unmanned-aerial-vehicles-in-combat-zones> accessed 24 December 2021; Boulanin and Verbruggen (n 253).
260 Gadkari (n 259).
difficult to attribute to a single individual.261 In light of this and other issues,
several experts have questioned whether the mere use of AWS breaches
international law by definition.262 Furthermore, even if such usage does not
violate international law in and of itself, an individual weapon might
undoubtedly result in a breach in a particular occurrence.263 In many
instances, it is difficult to determine who should be held accountable for an
AWS crime.
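The attribution problem stems from the structure of an autonomous engagement loop: once deployed, the system senses, decides, and acts without any human selecting each target. A deliberately minimal sketch — every function name and field here is hypothetical, not drawn from any real weapon system — makes the point:

```python
def sense(environment: list[dict]) -> dict:
    """Stand-in for on-board sensor fusion: pick the most salient object."""
    return max(environment, key=lambda obj: obj["salience"])

def decide(observation: dict) -> str:
    """Autonomous engagement decision -- no human reviews this branch."""
    return "engage" if observation["classified_as"] == "military" else "hold"

def engagement_cycle(environment: list[dict]) -> str:
    """One sense-decide-act iteration. No single person chooses the target:
    a designer wrote decide(), a commander chose the deployment, and the
    classification label came from training data none of them fully controls."""
    return decide(sense(environment))

scene = [
    {"classified_as": "civilian", "salience": 0.4},
    {"classified_as": "military", "salience": 0.9},  # misclassification is possible
]
print(engagement_cycle(scene))  # engage
```

Even in this toy form, the decision to engage emerges from contributions spread across several humans and a dataset — which is precisely why pinning the act on one individual is so difficult.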
Defenders of AWS have suggested that those sufficiently involved with the
weapon—military commanders, designers, or manufacturers—could be held
accountable for the weapon’s illegal actions, but opponents have identified
several flaws with each of these potential candidates for accountability.264
Few, on the other hand, have examined the practicality of holding the state
liable for crimes committed by its AWS.265 Indeed, few academics have
questioned whether this alternative is desirable in principle or practicable in
reality.266
261 Darren Stewart, ‘New Technology and the Law of Armed Conflict’ 87 International Law Studies 272, 278.
262 ibid.
263 Stephen White, ‘Brave New World: Neurowarfare and the Limits of International Humanitarian Law’ (2008) 41 Cornell International Law Journal 177, 177-210 <https://scholarship.law.cornell.edu/cilj/vol41/iss1/9> accessed 24 December 2021.
264 Autonomous Robotics Thrust Group of the Consortium on Emerging Technologies, Military Operations, and National Security, ‘International Governance of Autonomous Military Robots’ (Columbia University Academic Commons, 2011) <https://academiccommons.columbia.edu/doi/10.7916/D8TB1HDW> accessed 24 December 2022; Armin Krishnan, Killer Robots: Legality and Ethicality of Autonomous Weapons (Ashgate 2012) 103-5.
265 Gadkari (n 259).
266 ibid; Jack Beard, ‘Autonomous Weapons and Human Responsibilities’ (2014) 45 College of Law, Faculty Publications 617, 617-678 <https://digitalcommons.unl.edu/lawfacpub/196/> accessed 24 December 2021.
This chapter addresses these concerns by providing a conceptual account of
state responsibility for AWS crimes and examines the major procedures for
holding a state accountable. Normatively, it would be better to hold the state
liable for AWS crimes rather than commanders, designers, and
manufacturers, since the state is in the best position to guarantee that these
weapons consistently conform with international law. Furthermore, as the
principal buyers and users of these weapons, governments would bear the
most responsibility in the case of unforeseeable AWS war crimes. Finally,
the majority of nations that use this technology will almost certainly have
the financial means to compensate victims.
267 Graduate Institute of Geneva, Academy Briefing No. 8 – Autonomous Weapon Systems under International Law (2014) 9.
When a private firm acts under the State’s instructions, guidance, or control,
the company’s activity is traceable to the State.268 States, as indicated by
state practice and opinio juris, are required by customary law to assess the
legitimacy of novel means and techniques of combat.269 This evaluation must
take into consideration the weapon’s anticipated usage.270 Article 2(4) of the
United Nations Charter prohibits states from threatening or using force
against the territorial integrity of other states.271 The term “territorial
integrity” refers to a State’s exclusive sovereignty over its territory.272 A
threat of force may take the form of the prospect of cross-border weapon use
268 Armed Activities on the Territory of the Congo (the Democratic Republic of the Congo v Uganda) (Judgment) [2005] ICJ Rep 168, 66 [175]-[176].
269 Isabelle Daoust et al, ‘New wars, new weapons? The obligation of States to assess the legality of means and methods of warfare’ (2002) 84 Revue Internationale de la Croix-Rouge/International Review of the Red Cross 345, 354-7; A Guide to the Legal Review of New Weapons, Means and Methods of Warfare (International Committee of the Red Cross 2010) <https://www.icrc.org/en/publication/0902-guide-legal-review-new-weapons-means-and-methods-warfare-measures-implement-article> accessed 24 December 2021; Natalia Jevglevskaja, ‘Weapons Review Obligation under Customary International Law’ (2018) 94 International Law Studies 187, 933-4; Group of Governmental Experts of the High Contracting Parties to the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons, Weapons Review Mechanisms Submitted by the Netherlands and Switzerland, CCW/GGE.1/2017/WP.5, 4 [18].
270 William Boothby, Weapons and the Law of Armed Conflict (Oxford University Press 2009) 249.
271 Oil Platforms (the Islamic Republic of Iran v the United States of America) (Judgment) [2003] ICJ Rep 161, 51 [100]; Legal Consequences of the Construction of a Wall in the Occupied Palestinian Territory (Advisory Opinion of 9 July 2004) [2004] ICJ Rep 136, 31 [63].
272 Military and Paramilitary Activities in and against Nicaragua (Nicaragua v the United States of America) (Judgment) [1986] ICJ Rep 14, 99-100 [209].
and troop concentrations along borders, as Turkey, Yugoslavia, Pakistan,
Iraq, and the Soviet Union have shown through their state practice.273
States are obligated to respect the human rights of those who reside on their
territory and are subject to their authority.274 Extraterritorial jurisdiction
arises when a State’s acts have consequences that extend beyond its
borders.275 States are prohibited under Article 6(1) of the ICCPR from
engaging in activities that may arbitrarily deprive persons of life, even if
such conduct does not result in death.276 The use of force in law enforcement
circumstances is only lawful when a danger to life is imminent.277 Using AWS
273 ‘Action concerning Threats to the Peace, Breaches of the Peace, and Acts of Aggression, Article 51’, in The Charter of the United Nations: A Commentary, Volume 2 (3rd edn) 1410; JA Green, ‘The Threat of Force as an Action in Self-defense under International Law’ (2011) 44 Vanderbilt Journal of Transnational Law 239-285; Repertoire of the Practice of the Security Council, Suppl 1964-1965, XVI, 238 S (Sales No 1968.VII.1), Doc ST/PSCA/l/Add.4, 202 (1968); UNSC, Letter dated 1 February 1999 from the Chargé d’Affaires a.i. of the Permanent Mission of Yugoslavia to the United Nations addressed to the President of the Security Council, UN Doc S/1999/107 (2 February 1999); UNSC, Letter dated 5 February 1999 from the Chargé d’Affaires a.i. of the Permanent Mission of Yugoslavia to the United Nations addressed to the Secretary-General, UN Doc S/1999/118 (4 February 1999); UNSC, Cablegram dated 15 July 1951 from the Permanent Representative of Pakistan to the President of the Security Council and the Secretary-General, UN Doc S/2245 (15 July 1951); SC Res 949 (15 October 1994); Anthony De Luca, ‘Soviet-American Politics and the Turkish Straits’ (1977) 92 Political Science Quarterly 503, 516-20.
274 Application of the Convention on the Prevention and Punishment of the Crime of Genocide (Bosnia and Herzegovina v Serbia and Montenegro) (Judgment) [1996] ICJ Rep 595, 24-5 [31]; ICCPR, art 2(1).
275 Drozd and Janousek v France and Spain (Admissibility and Merits) App No 12747/87, A/240, [1992] ECHR 52, 22 [91].
276 ICCPR, art 6; UN Human Rights Committee, ‘General Comment No. 36 on Article 6: Right to Life’ (3 September 2019) CCPR/C/GC/36, 2 [7].
277 Dilek Kurban, ‘Forsaking Individual Justice: The Implications of the European Court of Human Rights’ Pilot Judgment Procedure for Victims of Gross and Systematic Violations’ (2016) 16 Human Rights Law Review 731; Benzer and others v Turkey, App No 23502/06, Judgment (2013) 33-4 [163]; Andreou v Turkey, App No 45653/99, Judgment of 27 October 2009, 12 [46]; Daragh Murray and Dapo Akande, Practitioners’ Guide to Human Rights Law in Armed Conflict (Oxford University Press 2016) 119-120; United Nations Congress on the Prevention of Crime and the
with algorithmic tagging to identify targets and authorize the use of force
violates this criterion, since threats are identified in advance of any
“imminent” emergency.278 Effective remedies require the State to pursue and
punish those responsible for human rights breaches.279 Individual
responsibility for arbitrary deprivation of life is not attainable in the case of
AWS, since the weapon itself cannot be penalized or deterred.280
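The objection can be stated concretely: under the law-enforcement standard, the lawfulness of force turns on an imminence assessment made at the moment of use, whereas algorithmic tagging authorizes force on a label assigned in advance. A hypothetical sketch — the field and function names are ours, chosen only to illustrate the gap — contrasts the two tests:

```python
from dataclasses import dataclass

@dataclass
class Target:
    tag: str               # label assigned in advance by a classifier
    threat_imminent: bool  # assessed at the moment force would be used

def lawful_under_ihrl(t: Target) -> bool:
    """Law-enforcement standard: only an imminent threat to life justifies force."""
    return t.threat_imminent

def authorized_by_tagging(t: Target) -> bool:
    """Algorithmic pre-authorization: force follows the advance label alone."""
    return t.tag == "hostile"

# The gap the chapter identifies: pre-tagged as hostile, but no imminent threat.
pre_tagged = Target(tag="hostile", threat_imminent=False)
print(authorized_by_tagging(pre_tagged), lawful_under_ihrl(pre_tagged))  # True False
```

The two predicates diverge exactly where the human rights objection bites: the tagging rule would authorize force that the imminence rule forbids.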
Treatment of Offenders, Basic Principles on the Use of Force and Firearms by Law Enforcement Officials (United Nations 1990) [9]; Landaeta Mejías Brothers et al v Venezuela (Preliminary Objections, Merits, Reparations and Costs, Judgment of 27 August 2014) 34-5 [131]; UN Human Rights Council, Report of the Special Rapporteur on extrajudicial, summary or arbitrary executions (1 April 2014) A/HRC/26/36, 10 [59].
278 Maya Brehm, ‘Defending the Boundary’ (Geneva Academy, 2017) 24 <https://www.geneva-academy.ch/joomlatools-files/docman-files/Briefing9_interactif.pdf> accessed 24 December 2022.
279 ICCPR, arts 1, 2(3); ‘Losing Humanity: The Case against Killer Robots’ (Human Rights Watch) <https://www.hrw.org/report/2012/11/19/losing-humanity/case-against-killer-robots> accessed 24 December 2021; Hammond (n 255).
280 UN Human Rights Council, ‘Report of the Special Rapporteur on extrajudicial, summary or arbitrary executions’ (9 April 2013) A/HRC/23/47, 14 [76]; Christof Heyns, ‘Human Rights and the use of Autonomous Weapons Systems (AWS) During Domestic Law Enforcement’ (2016) 38 Human Rights Quarterly 350.
281 Henckaerts and Doswald-Beck (n 186) 25; Legality of the Threat or Use of Nuclear Weapons (Advisory Opinion of 8 July 1996) [1996] ICJ Rep 226, 257 [78].
282 Kristina Benson, ‘“Kill ‘em and Sort it Out Later”: Signature Drone Strikes and International Humanitarian Law’ (2014) 27 Global Business & Development Law Journal 17, 49 <https://scholarlycommons.pacific.edu/globe/vol27/iss1/2> accessed 24 December 2022.
whole.283 This balance necessitates a subjective assessment of military
benefit versus humanitarian considerations.284
283 Daniel Thürer, International Humanitarian Law: Theory, Practice, Context (AIL Pocket 2011) 74; Henckaerts and Doswald-Beck (n 186) 173-5; Dieter Fleck and Michael Bothe, The Handbook of International Humanitarian Law (Oxford University Press 2021) 119-186; William Fenrick, Attacking the Enemy Civilian as a Punishable Offence (1997) <https://core.ac.uk/download/pdf/62547705.pdf> accessed 24 December 2021.
284 Prosecutor v Stanislav Galic (Trial Judgement and Opinion) IT-98-29-T (5 December 2003) [58]; Henckaerts and Doswald-Beck (n 186) 60.
285 ibid.
286 Brehm (n 278) 24.
287 ibid 40.
288 ibid.
289 M Pieth and R Ivory (eds), Corporate Criminal Liability: Emergence, Convergence and Risk (Springer 2011) 7-14.
290 Christian Tomuschat, ‘The Legacy of Nuremberg’ (2006) 4 Journal of International Criminal Justice 830, 840.
291 Marco Sassoli, ‘Humanitarian Law and International Criminal Law’, in Antonio Cassese (ed), The Oxford Companion to International Criminal Justice (2009) 112-113.
possibility of the essentials of criminal responsibility and whether the same
are attributable to an individual. The presence of autonomy is assessed
primarily to understand the circumstances in which an individual may
engage in unlawful conduct and to further narrow down the context in
which the conduct occurred. Scrutinizing the situation under which the act
transpires is crucial to establishing the intention behind it. Thus, analyzing
temporal and geographical circumstances as parameters of intent is
paramount to establishing individual criminal responsibility.
It is crucial to assert here that the setting in which AWS are used has no
bearing on the individual criminal responsibility associated with them. That
is, whether a crime is committed in a domestic feud, an internal conflict, or
an international armed conflict, if it results from the direct or indirect use of
AWS, accountability should exist.292 Following the similar object and
purpose of the above essentials, Article 25 of the Rome Statute demarcates
the boundaries within which individual criminal responsibility shall subsist.
Determination of the conditions under which an individual is considered to
have aided, instigated, or contributed directly or
292 The Prosecutor v Tadic, Case No IT-94-1-T (2 October 1995) (Decision on the Defence Motion for Interlocutory Appeal on Jurisdiction) 129.
Acts of a criminal nature generally turn on a guilty state of mind and an act
done in consonance with that state, with guilty intention.294 The foremost
requirements for a standing within the ICC are the aforesaid elements, in
addition to meeting the jurisdiction of the Court under Article 5 of the Rome
Statute. It is to be noted that both actus reus and mens rea need to be
established for individual criminal responsibility.
i. Class of Perpetrators
The nature of violations that AWS can cause points to the fact that there can
be different classes of perpetrators for the same crime. It is therefore
important to encompass these classes and identify the accountability
associated with each of these individuals. A person who instigates a plan
and a person who orders the commission of a crime bear equal
accountability.295 Such conduct as part of a common criminal purpose
applies only so long as the acts of the participating persons have a direct
and material impact on the commission of the crime. Such a class of
co-perpetrators is generally relatively easy to establish. With AWS, this
becomes a challenge, since weapons acting with full autonomy are
unpredictable; thus, whether their actions were within the knowledge and
293 Jack M Beard, ‘Autonomous Weapons and Human Responsibilities’ (2014) 45 Georgetown Journal of International Law 617, 646.
294 ‘Trial of Bruno Tesch et al (Zyklon B Case), UNWCC, Case No 9, British Military Court (1946)’, in Law Reports of Trials of War Criminals (1949) 93-104.
295 The Prosecutor v Delalic et al (Trial Judgement) Case No IT-96-21-T (16 November 1998) [328].
understanding of the deployer becomes a far-fetched question. Furthermore,
demonstrating a criminal state of mind to the level required for
responsibility is an even tougher task.
The class of perpetrators for the use of AWS can extend to politically
motivated individuals as well. That is, the commission of an act by an AWS
may be a direct consequence of an order given by an official. Although the
UN Security Council has on numerous occasions affirmed that the
responsibility of leaders and executants exists,296 which is also why the
ICTY and ICTR can prosecute leaders and members along with command
responsibility,297 there remains a conundrum as to whether the responsibility
of an ‘ordinary soldier’ executing the operation persists or not.
296 SC Res 1329, UN Doc S/RES/1329 (30 November 2000).
297 Statute of the International Criminal Tribunal for the former Yugoslavia 1993, art 7(1).
V. Split Responsibility
This chapter argues that there exist multiple sets of actors who contribute
to a single violation involving AWS. The concept of ‘split responsibility’
suggests that responsibility be shared among all the aforesaid actors,
ranging from the manufacturers to the programmers, and to the military
and political officials in command responsibility positions as well. The
rationale behind such an approach is to counter the lack of moral agency
in AWS that lack ‘effective human control’, by holding accountable every
human component behind their functioning.299 This increasingly popular
approach is not only misdirected but also unworkable, owing to legal
challenges. Primarily, responsibility is difficult to split, since the threshold
between each participant is unknown, as is the extent to which it would
apply. Tribunals
298 Ministry of Defence, Development, Concepts and Doctrine Centre, ‘The UK Approach to Unmanned Aircraft Systems’ [2011] JDN 2-11, 510.
299 ‘The Convention on Certain Conventional Weapons (CCW), Informal Meeting of Experts on Lethal Autonomous Weapons Systems, U.S. Delegate Closing Statement’ (United Nations, 2014) <http://www.unog.ch/80256EDD006B8954/%28httpAssets%29/6D6B35C716AD388CC1257CEE004871E3/$file/1019.MP3> accessed 23 December 2021.
that take up such violations are also more concerned with the bearer of the weapon than with its manufacturer, given the direct choice the user makes. Legally, manufacturers and programmers do not fall within the purview of IHL unless they are directly part of the armed conflict. In such an event, applying split responsibility to them simply conflates the overall responsibility with that of selected perpetrators.
300. Aaron Xavier Fellmeth and Maurice Horwitz, Guide to Latin in International Law (2009) 47.
301. Steven Ratner et al, Accountability for Human Rights Atrocities in International Law: Beyond the Nuremberg Legacy (3rd edn, 2009) 3.
302. Draft Articles on Responsibility of States for Internationally Wrongful Acts (ILC 2001) art 5; U.N. Human Rights Committee, 'General Comment No. 31 [80] on the Nature of the General Legal Obligation Imposed on States Parties to the Covenant' (adopted 29 March 2004, entered into force 26 May 2004) CCPR/C/21/Rev.1/Add.13 18.
weapon systems not just in terms of individual actions but also the command given and the general programming set-up, establishing accountability would remain a long-unachieved dream.
303. 'Report of the Expert Meeting on Autonomous Weapons' (International Committee of the Red Cross, 9 May 2014) 89-91 <https://www.icrc.org/eng/assets/files/2014/expert-meeting-autonomous-weapons-icrc-report2014-05-09.pdf> accessed 23 December 2021 [hereinafter ICRC Report].
304. Ken Obura, 'Duty to Prosecute International Crimes under International Law' in Chacha Murungu and Japhet Biegon (eds), Prosecuting International Crimes in Africa (Oxford University Press 2011) 11-31.
305. Social and Economic Rights Action Centre and Centre for Economic and Social Rights v Nigeria, Cmt. No. 155/96 (27 October 2001) [44]-[48] <http://www.achpr.org/communications/decision/155.96/> accessed 5 April 2023.
306. International Covenant on Civil and Political Rights (adopted 1966, entered into force 1976) art 2(2); Rome Statute of the International Criminal Court (adopted 17 July 1998) art 75.
done as a custom against certain offenses. On several occasions the United Nations Security Council has also affirmed the need for reparation.307 Several cases across various courts have emphasized the need for reparation and its role in holistically completing the right to a legal remedy.308 The nature of such reparation can vary, i.e., it may take the form of compensation, restitution, etc.; however, the possibility of a lack of accountability undermining this remedy as a whole is the crux of the matter. Where responsibility for the acts of such machines is not established, for all the possible reasons that this chapter addresses pertaining to the responsibility of the state and of the individual, whether any form of reparation can be granted is a question that shall remain unanswered. This further substantiates the illegality of the use of AWS, since their use is ethical purely on the premise that someone shall be accountable for illegal acts in war, and the use of AWS has no mechanism to help authorities reach that person.309
VII. Conclusion
The threats posed by AWS should be taken seriously, but even more so the accountability gaps they create. In a world where accountability holds the fort of international law steady, one loose end could prove fatal to the discourses available in other segments of law as well. At one extreme, this could mean the loss of ways to counter impunity and assist victims. The
307. Khmer Rouge Trials, 77th Plenary Meeting (18 December 2002) UN Doc A/RES/57/228 B (22 May 2003).
308. Factory at Chorzów (Indemnities) (Germany v Poland) (Judgment) [1927] PCIJ (ser. A) No. 17, 29.
309. Robert Sparrow, 'Killer Robots' (2007) 24 Journal of Applied Philosophy 62.
Artificial Intelligence and International Law: Analysing AI-Tools signifying
the Scope of Codification
Abstract
AI has proven valuable in national legal practice, contract drafting, and other processes closely related to law. International law, however, is yet to seize the opportunity. Unlike domestic law, which enjoys the luxury of definite statutes and codified provisions, international law is formed out of numerous treaties and state practices. There may well be difficulties in introducing artificial intelligence here. However, the transformations and advantages that AI might bring cannot be overlooked. One among them is the codification of international law. The first phase of our discussion analyses the potential tools of AI, bearing in mind the scope of codification and progressive development of international law as envisaged by Article 13(1)(a) of the UN Charter. Taking this as its premise, this chapter contends that AI could be a powerful mechanism for the codification and progressive development of international law, powerful enough even to surpass the cognitive ability of humans. To study the scope of codification using AI, this chapter analyses different advanced AI tools such as the information extraction tool, the similarity analysis tool, the authorship analysis tool, etc. It finds that AI can develop a codified law of nations, applicable in diverse fields of international law, by collecting primary data from numerous texts relating to state
I. Introduction
By the time this chapter sees daylight, the world's first AI-enabled robot lawyer will have assisted a party in a court of law. DoNotPay, a startup based in San Francisco, has agreed to use its robot lawyer to assist a defendant by telling him what to say via an earpiece.310 Evidently, the scope of AI is increasing day by day. Many professions have already started to use its advanced tools in operations previously handled by humans. As a solution to the limitations of human analytical capacity, AI has made data analysis and evidence-based interpretation very simple.
310. Tech Desk, 'World's first AI-enabled robot lawyer will tell defendant what to say in upcoming court case' (The Indian Express, 11 January 2023) <https://indianexpress.com/article/technology/worlds-first-robot-lawyer-will-tell-defendant-what-to-say-in-upcoming-court-case-8374910/> accessed 14 January 2023.
in international law. The goal of introducing AI in such ways includes the codification and progressive development of international law as stated in Article 13(1)(a) of the UN Charter.311
In the first phase of this investigation, we discuss the sources of data which can be utilized in the successful application of AI. The study then goes on to analyze the different AI tools with regard to their applicability in international law. Finally, the research concludes that those AI tools can effectively be used to codify international law and can thereby introduce some regulatory mechanisms as well.
311. United Nations Charter 1945, art 13(1)(a).
One of the main features of AI is its adaptability. AI is known for its ability to react to unprecedented circumstances and environments in the 'right way'. The tool here is 'machine learning', a method of AI that enables machines to "learn" from data and experience without being explicitly programmed.314 As for the debate over the right and wrong of the machine, Russell and Norvig have attempted to clarify that "a system is rational if it does the 'right thing', given what it knows."315 The initial commands given to a system work as raw data on the basis of which the system will interpret and decide a given problem. Sometimes the machine will use the general characteristics of similar problems, learned from the data pool, to decide in unprecedented situations. This act of learning lessons is truly an imitation of human cognitive abilities. Right and wrong will be based on, as Russell and Norvig put it, "given what it knows."316
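Samuel's definition above — a system deriving a rule from labelled examples rather than being told it — can be illustrated with a deliberately tiny sketch. The sentences, labels and word-counting 'model' below are invented purely for illustration and are not any real legal classifier:

```python
from collections import Counter

# Toy illustration of learning without explicit programming: the program
# is never given a rule; it derives one from labelled examples and then
# applies it to an unseen sentence.
training = [
    ("the parties hereby agree to the following articles", "treaty"),
    ("the states undertake obligations under this protocol", "treaty"),
    ("the court finds the defendant liable for damages", "judgment"),
    ("the tribunal holds that the claim is admissible", "judgment"),
]

def learn(examples):
    """Count how often each word appears under each label."""
    counts = {}
    for text, label in examples:
        for word in text.split():
            counts.setdefault(label, Counter())[word] += 1
    return counts

def classify(model, text):
    """Score each label by the words it has seen; pick the highest."""
    scores = {label: sum(c[w] for w in text.split()) for label, c in model.items()}
    return max(scores, key=scores.get)

model = learn(training)
print(classify(model, "the court holds the respondent liable"))  # judgment
```

The 'right answer' for the unseen sentence is, exactly as Russell and Norvig put it, based only on "given what it knows" — the four training examples.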
312. Stuart J. Russell and Peter Norvig, Artificial Intelligence: A Modern Approach (4th edn, Prentice Hall 2020) 4.
313. N. Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford University Press 2014) 22.
314. A. L. Samuel, 'Some Studies in Machine Learning Using the Game of Checkers' (1959) 3 IBM Journal of Research and Development 210.
315. Stuart J. Russell and Peter Norvig, Artificial Intelligence: A Modern Approach (3rd edn, Prentice Hall 2010) 1.
316. ibid.
This is, indeed, the era of information overflow. The major problem we now face is processing information from an enormous set of data. Although experts can tackle this constraint, AI proves more efficient here than human experts: compared to human specialists, it can produce results of higher quality with greater efficiency.317 This being the central premise, scholars have given much attention to discussing the future of the workforce, especially in an organizational structure.318 The field of application of AI is expanding day by day. The traditional fields that were once considered man's domain, including innovation, are no longer remote to AI.319
317. A. Agrawal, J. Gans and A. Goldfarb, Prediction Machines: The Simple Economics of Artificial Intelligence (Harvard Business Review Press 2018).
318. J. Bughin, E. Hazan, S. Lund, P. Dahlström, A. Wiesinger and A. Subramaniam, 'Skill Shift: Automation and the Future of the Workforce' (McKinsey Global Institute, 23 May 2018) <https://www.mckinsey.com/featured-insights/future-of-work/skill-shift-automation-and-the-future-of-the-workforce> accessed 12 November 2022.
319. Teresa M. Amabile, 'Creativity, Artificial Intelligence, and a World of Surprises' (2019) 6(3) Academy of Management Discoveries <https://doi.org/10.5465/amd.2019.0075> accessed 14 January 2023.
320. Elizabeth M. Bruch, 'Is International Law Really Law? Theorizing the Multi-Dimensionality of Law' [2011] Law Faculty Publications, Valparaiso University School of Law <https://scholar.valpo.edu/cgi/viewcontent.cgi?article=1134&context=law_fac_pubs>.
The United Nations and its subsidiary bodies have tried, and to an extent succeeded, in nurturing a 'binding' character for international law, which may be called the application, if not the enforcement, of international law. International bodies like the International Court of Justice, the United Nations, and the International Criminal Court carry out their duties by way of persuasion. Moreover, in the case of international law it is primarily up to the States to interpret the obligations in the agreements which they themselves have entered, since there is no internationally recognized sovereign.321
Nonetheless, owing to the new world order and the multitude of transnational treaties, the stakeholders of international law have started to take a different approach to it. For some, compliance with the law and treaties is a matter of international existence. This can be seen by analyzing the United Nations Security Council's actions and their global reflections in the aftermath of the 9/11 attacks of 2001. The Security Council constituted a 'Counter-Terrorism Committee' with the authority to list any entities that fund terrorism.322 The Security Council's legislative action against terrorist activities led many states to pass similar legislation in compliance with its rules. It shows that international law prompted a tremendous amount of legislation at the domestic level, proving that even if international law lacks complete legal enforceability, it is neither essentially unstable nor a set of unenforceable words. This brings us to the fundamental theme of the chapter: if the modes and means used in the practice and implementation of international law become more advanced and easier to interact with, then the participation of the
321. A. Roberts, 'Power and Persuasion in Investment Treaty Arbitration: The Dual Role of States' [2010] American Journal of International Law 179.
322. E. Alexandra Dosman, 'Designating "Listed Entities" for the Purposes of Terrorist Financing Offences at Canadian Law' [2004] University of Toronto Faculty of Law Review.
states and other international bodies will increase, making it a stronger, more stable and enforceable law of the globe.
Prof. Thomas Burri, in his work on international law and AI, sheds some light on the prospective power of machines to outperform human specialists. He opines that tasks that once required a lawyer's keen attention are increasingly being mechanized.323 Those tasks include legal assessment, due diligence scrutiny, contract drafting, appeals of grievances, etc. Even though he narrows the possibility of automation and application of AI down to municipal laws, owing to the structural difference between national and international laws,324 the changes to which the legal profession as a whole is being put point to an inevitable world of AI-enabled international law. Sometimes the application of AI may take place through machine learning, a tool of AI that enables a system to learn without being explicitly programmed.325
323. Thomas Burri, 'International Law and Artificial Intelligence' [2017] German Yearbook of International Law 91.
324. ibid 92.
325. Samuel (n 314).
and it also helps them detect legal hazards that the contract may pose if they are not addressed. A company called Blue J uses AI to forecast the results of court cases involving tax law. AI's power of natural language processing is widely used by many lawyers in the field of legal research, by way of analyzing legal records, judgments, and question-based investigations.326
The two most significant sources of international law are treaties and customs. The divergent appearance of international law results from these sources being formed out of manifold treaties and the diverse customary practices of states. Using AI, these diversities can be codified, to an extent. These sources
326. Manohar Samal, 'International Law, Litigation and Alternate Dispute Resolution' in Abhivardhan, Suman Kalani, Akash Manwani and Kshitij Naik (eds), Handbook on AI and International Law (Indian Society of Artificial Intelligence and Law 2020) 134.
327. Nico Krisch, 'International Law in Times of Hegemony: Unequal Power and the Shaping of the International Legal Order' [2005] European Journal of International Law 369.
can provide enough data for the purposes of prediction, analysis, information
extraction, etc. The three major potential ways of application are:
328. John Saee, 'Best Practice in Global Negotiation Strategies for Leaders and Managers in the 21st Century' [2008] Journal of Business Economics and Management 309.
329. Ajay Agrawal, Avi Goldfarb and Joshua S. Gans, 'What to Expect from Artificial Intelligence' [2017] MIT Sloan Management Review.
330. Ashley Deeks, 'High-Tech International Law' [2020] George Washington Law Review 576.
only in comparison to the data available in national laws. Certainly, the data for AI analysis can be collected from the following sources.
The UN Digital Library can be utilized as a source of data that contains almost all the relevant details concerning the Security Council, the Economic and Social Council, and the General Assembly. Moreover, the UN Treaty Collection website provides official data regarding the drafting and negotiation of specific treaties.333
Domestic laws will certainly have a constraining effect on the decisions of a state in its international discourses. This will apply to international treaties as
331. Wolfgang Alschner, Julia Seiermann and Dmitriy Skougarevskiy, 'Text-as-Data Analysis of Preferential Trade Agreements' [2018] Journal of Empirical Legal Studies 648.
332. Peter Reilly, 'Was Machiavelli Right? Lying in Negotiation and the Art of Defensive Self-Help' [2009] Ohio State Journal on Dispute Resolution 481.
333. 'International Law Documentation' (UN General Assembly) <https://treaties.un.org/>.
well. Hence, a sufficient set of data for AI must include the relevant codes which may influence the decision of the state.334
'Law of nations' is the old term used to denote customary international law. It is the set of state practices and opinio juris across the world that constitutes one of the elements of international law. Its function is to recognize common law causes of action for infringement of international standards recognized by the civilized world.335 There is considerable tension regarding the availability of documents pertaining to state practice; indeed, many of them are not even digitized.336 This does not essentially mean that there is a lack of primary data. Still, the availability of data may vary depending upon the willingness of states to publish their records and materials.
Despite this, the challenges are being addressed, and states are coming forward with their materials and records, which will gradually result in the enhanced performance of AI tools in the field. The potential AI tools for codification are:
i. Information Extraction
334. David Sloss, 'Domestic Application of Treaties' in Duncan B. Hollis (ed), The Oxford Guide to Treaties (OUP 2020).
335. Ryan M. Scoville, 'Finding Customary International Law' (2016) 101 Iowa Law Review <https://ilr.law.uiowa.edu/print/volume-101-issue-5/finding-customary-international-law/> accessed 9 November 2022.
336. Report of the Secretary-General, 'Ways and Means of Making the Evidence of Customary International Law More Readily Available' (1949) U.N. Doc. A/CN.4/6.
An international dispute may arise for several reasons. It may stem from the action of a state contravening a ratified treaty, from disputes regarding a contract, or from the meaning and interpretation of certain terms in an agreement. The bodies before which these issues would be
337. Xia Hu and Huan Liu, 'Text Analytics in Social Media' in Charu C. Aggarwal and ChengXiang Zhai (eds), Mining Text Data (Springer 2012).
338. ibid.
339. D.M. Blei, 'Probabilistic Topic Models' (2012) Communications of the ACM 77.
340. K.D. Ashley, Artificial Intelligence and Legal Analytics: New Tools for Law Practice in the Digital Age (Cambridge University Press 2017) 7.
addressed for adjudication may vary given the clauses and conditions of the contract. The adjudicatory bodies may be the International Court of Justice, tribunals, the Permanent Court of Arbitration, etc.
ii. Authorship Analysis
This AI tool analyzes a judgment or arbitral award to determine which judge or arbitrator wrote the core decision and which argument or reasoning he or she found more persuasive.343 In the context of a state in an international dispute, this tool helps identify key elements and characteristics of the adjudicating body and provides insight into selecting or avoiding a particular body of adjudication in the future.
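The attribution techniques referred to here typically compare an unsigned text's function-word frequencies against profiles built from each candidate's known writings. The following is a minimal sketch of that underlying idea only, not Li's actual method; the judges, texts and word list are invented:

```python
import math
from collections import Counter

# Hypothetical stylometric sketch: attribute an unsigned passage to the
# candidate author whose function-word frequency profile it most resembles.
FUNCTION_WORDS = ["the", "of", "and", "to", "that", "in", "it", "is"]

def profile(text):
    """Relative frequency of each function word in the text."""
    words = text.lower().split()
    counts = Counter(words)
    total = len(words) or 1
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine(a, b):
    """Cosine similarity between two frequency vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def attribute(known_writings, unsigned_text):
    """Return the author whose profile is closest to the unsigned text."""
    target = profile(unsigned_text)
    return max(known_writings,
               key=lambda a: cosine(profile(known_writings[a]), target))

known = {
    "Judge A": "it is clear that the claim fails and that the appeal is dismissed",
    "Judge B": "the court holds in favour of the respondent in the matter of costs",
}
print(attribute(known, "it is evident that the petition fails"))  # Judge A
```

Real systems use far larger feature sets and corpora, but the design choice is the same: style is inferred from unconscious word habits rather than from the legal content of the opinion.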
341. Lauri Donahue, 'A Primer on Using Artificial Intelligence in the Legal Profession' [2018] Harvard Journal of Law & Technology <https://jolt.law.harvard.edu/digest/a-primer-on-using-artificial-intelligence-in-the-legal-profession> accessed 7 December 2022.
342. ibid.
343. William Li, 'Using Algorithmic Attribution Techniques to Determine Authorship in Unsigned Judicial Opinions' [2013] Stanford Technology Law Review 503.
This AI tool can identify the level of influence a party to a dispute has over the final decision if it is given a set of data containing the judgment and the parties' submissions. Here, AI can identify the extent to which the person who wrote the decision was cognitively influenced by the language and words used in a party's brief.344 This might help states understand the probability of winning an award or decision on the same legal question in future disputes. AI can enable them to choose whether to go forward with judicial adjudication if the algorithm detects such 'persuasive' language, words, and terms in the brief of the opposite party.
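One crude way to quantify the influence described above, in the spirit of the shared-language measures used by Corley, Collins and Calvin, is to ask what share of the decision's word sequences already appear verbatim in a party's brief. A rough sketch with invented texts:

```python
# Hedged sketch of an 'influence' measure: the fraction of the decision's
# three-word sequences that also occur in a party's brief. The decision
# and briefs below are fabricated examples, not real filings.
def ngrams(text, n=3):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(brief, decision, n=3):
    """Fraction of the decision's n-grams also found in the brief."""
    decision_grams = ngrams(decision, n)
    if not decision_grams:
        return 0.0
    return len(decision_grams & ngrams(brief, n)) / len(decision_grams)

decision = "the measure violates the treaty obligations of the respondent state"
brief_a = "the measure violates the treaty obligations of the applicant"
brief_b = "the claim is inadmissible and must be dismissed"
print(overlap(brief_a, decision))  # 0.75 – heavy borrowing from brief A
print(overlap(brief_b, decision))  # 0.0  – no shared language with brief B
```

A high overlap score for one party's brief would be read, on this approach, as evidence that its language shaped the final opinion.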
344. Pamela Corley, Paul Collins and Bryan Calvin, 'Lower Court Influence on U.S. Supreme Court Opinion Content' [2011] Journal of Politics 31.
345. United Nations Charter 1945, art 13(1)(a).
346. 'Codification Division Publications' (Office of Legal Affairs) <http://legal.un.org/cod/> accessed 18 December 2022.
347. Jeremy Bentham, An Introduction to the Principles of Morals and Legislation (Dover Publications 2007).
The codification, as stated above, can be understood as enacting international laws on matters for which domestic legislation is already in place. AI has a role to play in this codification of international law: in the legislation, enforcement and implementation of international laws, treaties, agreements and other rules, these highly capable AI tools offer a considerable contribution. Since international law has different facets, this AI-aided codification may also take correspondingly different approaches.
In the case of international treaties, under the guidance of data scientists and experts in AI, machine learning and natural language processing, AI can put forward a 'listing mechanism' which it automatically updates from time to time. If any state contravenes any treaty or court order, irrespective of the position it holds, AI may list it separately, suspending it from entering a new treaty for a certain period. As a tool of AI, machine learning is appropriate for this purpose. Since the chances of vested interests are comparatively insignificant in the case of a machine run by AI,348 this mechanism stands a chance against its prevailing counterparts. It will increase the reliability of bilateral and multilateral treaties, because even before states enter into a treaty, AI will predict the possibility of partner states contravening the rules in the future.349
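The proposed listing mechanism can be sketched, in a purely hypothetical form, as a registry that records contraventions and answers eligibility queries. The class name, the one-year suspension period, and the state names below are all assumptions for illustration; the chapter envisages the list being updated automatically by a learning system rather than by hand:

```python
from datetime import date, timedelta

# Purely illustrative sketch of the 'listing mechanism' proposed above;
# this is not an existing system, and the suspension period is assumed.
SUSPENSION = timedelta(days=365)

class TreatyRegistry:
    def __init__(self):
        self.last_violation = {}  # state -> date of last recorded breach

    def record_violation(self, state, on):
        """List a state that has contravened a treaty or court order."""
        self.last_violation[state] = on

    def eligible(self, state, today):
        """A listed state may not enter new treaties until suspension lapses."""
        last = self.last_violation.get(state)
        return last is None or today - last > SUSPENSION

registry = TreatyRegistry()
registry.record_violation("State X", date(2023, 1, 10))
print(registry.eligible("State X", date(2023, 6, 1)))   # False – still listed
print(registry.eligible("State X", date(2024, 6, 1)))   # True – suspension over
```

The machine-learning component the chapter contemplates would sit in front of such a registry, detecting contraventions in treaty and judicial texts and predicting future compliance, rather than relying on manual `record_violation` calls.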
Using the information extraction tool and the topic model method, which are efficient enough to extract the underlying themes from different texts,350 a
348. Henry Adobor and Robert Yawson, 'The Promise of Artificial Intelligence in Combating Public Corruption in the Emerging Economies: A Conceptual Framework' [2022] Science and Public Policy.
349. Deeks (n 330).
350. Hu and Liu (n 337).
As far as the authorship and similarity analysis tools are concerned, apart from their utility to states in identifying relevant information about awards and legal points, these tools offer a stronger judicial/adjudicatory system at the international level. Instead of relying on academic accomplishments, a selection committee can learn directly from the AI's report how grounded the cognitive capacity of a judge or arbitrator is.352 The information provided by the authorship and similarity analysis tools will aid international authorities in choosing the least (cognitively) inclined candidate for positions in the adjudicatory bodies.
V. Conclusion
351. Ashley (n 340).
352. Corley, Collins and Calvin (n 344).
using AI tools will enable the states to comprehend and practice the law more efficiently. Moreover, this act of replacing human intelligence with its artificial counterpart will be beneficial in the long run, as the enormous set of interpreted data, reports, machine codifications, and various other information will aid and advise the concerned authorities in the expeditious disposal of disputes and negotiations.
Abstract
Autonomous cars minimize human intervention and rely on the software system installed within them to take decisions about their functioning. An unfortunate revelation is that a highly advanced Artificial Intelligence ("AI"), even after undergoing supervised training and testing, can have unaccounted-for bugs that can land a car in an accident. In this scenario, the foremost question pertains to product liability. The law has to decide who is liable: the manufacturer, software developer, service provider, testing authority, customer, or the AI itself. Furthermore, how do we ascertain liability when multiple parties have substantially contributed to the making of the product and all their functions are intricately tied up? More than that, how do we determine whether the flaw in the software was latent or patent?
Product Liability Dilemmas: Driverless Cars
I. Introduction
The focus of the author's research is on these AI-enabled cars and the kind of legal implications they attract. There was a time when AI-enabled cars were a far-fetched dream, or rather an improbable thought. But today they are seen quite often. What is an AI-enabled car?
(vi) Level 5: The Vehicle is fully automated and can carry out all the
driving functions without any human assistance or attention.”353
Returning to the issue, it is quite evident that the functioning of such cars depends not just on electrical, mechanical, and physical factors but also on the quality of the software. Additionally, the legal problem will vary at each level of automation.
This immediate shift in the kind of vehicles driven has led to a departure
from the traditional system where only the owner of the vehicle could be
held accountable for an accident unless there was a severe defect in the
manufacturing or design of the vehicle. The parties and stakeholders
involved in this new setting can be the user of the car, the owner of the car,
the person who coded the software, the manufacturer of the car, or any other
353. Rahul Kapoor, 'Autonomous Driving Cars' (The Financial Express, 9 December 2020) <https://www.financialexpress.com/auto/car-news/autonomous-driving-cars-all-six-levels-autonomous-vehicles-explained-level0-level-1-level-2-level3-level-4-level-5-volvo-chevrolet-audi-bmw-mercedes-benz-artificial-intelligent-ai-self-driving-cars/2146290/> accessed 11 January 2023.
party involved in the making of the vehicle. Therefore, the issue of product
liability has expanded in the context of AI-enabled cars and poses some
serious novel concerns. One of the questions that will also be the focus of
this chapter is: can AI-enabled cars be accommodated safely in our legal
system?
354. Product Liability Directive, Council Directive 85/374/EEC [1985].
355. ibid art 3(1).
A producer will be held liable only if they deliver a product that does not meet the standard of 'safety' expected by a 'reasonable' man. In order to establish product liability under Article 4 of the Product Liability Directive, the plaintiff has to prove the defect in the product and the damage caused by that defect. There must be a sufficient and proximate link between the defect and the damage caused.356
As per the exceptions, the producer will be exempted from liability if the technical knowledge at the time the product was put on the market was not sufficient to discover the defect (the development risks defence), if the defect pertains to a component of the product designed or manufactured by a third party, or if the defect arose after the product was put into circulation.358 Let us analyse the impact of directly applying these exceptions to defects in AI-enabled cars.
356. ibid art 4.
357. Jeff Hecht, 'Self-driving vehicles: Many challenges remain for autonomous navigation' (Laser Focus World, 14 April 2020) <https://www.laserfocusworld.com/test-measurement/article/14169619/selfdriving-vehicles-many-challenges-remain-for-autonomous-navigation> accessed 13 October 2022.
358. Product Liability Directive (n 354) art 7.
The second defence given to the producer can bind one producer yet release another from liability: if the defective component was designed by a different manufacturer, that manufacturer shall be responsible for it. Such a mechanism is suitable and necessary for autonomous cars, as multiple producers could be responsible for the same accident. While the producer who assembled the car may be freed from liability, another may be liable for having failed to install the updated software in the car.
incorrect software updates may be exempt from liability, as these arose after the product was put into circulation.
Since the AI learns on its own from its surroundings and from data that may be collected from third parties, the producer of the car may not have exclusive control over the software updates. The producer may have some degree of control, but because the AI keeps developing, it may not be possible for the producer to constantly monitor these updates. In such a muddy scenario, where the traditional role of the producer has decreased and the role of AI working on its own has increased, accountability for defects becomes a major issue. The producer may be liable for the defect provided the legislation is altered, but the question is: who is the real producer?
Every person owes a 'duty of care' and will be liable for negligent actions if he did not adhere to the standard of behaviour he should have adhered to. In Donoghue v. Stevenson, the court established that one must take 'reasonable care' to avoid any harm or injury to the plaintiff.359 On applying this principle of negligence to an AI-enabled car, it becomes difficult to ascertain who exactly owes this duty of care to the plaintiff and what 'reasonable care' implies. Deciding who owed a duty of care would depend entirely upon which party made the error in making the product. If the driver of the car, that is the human himself, was responsible for the accident that injured a pedestrian, then the answer would be straightforward, and it would be easier to decide whether he took reasonable care.
359. Donoghue v Stevenson [1932] SC (HL) 31.
Ryan Abbott says that in such cases the focus must be on the acts of the AI, as it has evolved and learned to such an extent that it can be treated as an independent person. We must analyse whether the acts of the AI were safer than those of a reasonable person, or whether the AI's actions were safer than those of other AIs in the market.360
On the other hand, comparing AI with other AIs on the market poses a
different set of problems. There can be a range of AI-enabled cars in the
market, ranging from low-end automated cars to high-end automated cars.
Within these ranges, there can be subcategories, and a number of companies
may have built these cars using totally different software, techniques,
and training methods. When two AI-enabled cars can differ so widely,
neither provides a reliable benchmark against which the other can be judged.
360 Ryan Abbott, The Reasonable Robot: Artificial Intelligence and the Law (Cambridge University Press 2020).
This segment of the chapter seeks to examine the transformations that will
have to be made to accommodate autonomous cars in the Indian space. A
range of Indian conglomerates and start-ups have engaged themselves in the
commercialization of this dream of ‘driverless’ cars. However, with this
dream of the impossible, there come not just technological and societal
barriers but also legal ones. The Vienna Convention on Road Traffic, 1968,
to which 70 countries, excluding India, are signatories, prescribed that cars
may be automated, but that the driver must remain in full control.361
361 Vienna Convention on Road Traffic [1968] Chapter XI on Transport and Communications, Road Traffic.
However, this was later amended, and something called ‘self-driving’ cars or
‘autonomous cars’ were allowed. But this amendment did not allow for
driverless cars.
In this part of the chapter, we shall analyse the risks and benefits of the
operation of autonomous vehicles in India and then, in detail, examine the
legal status of product liability clauses in India.
i. Risks
India has a wide variety of terrain, ranging from the Himalayas to plains and
from plateaus to beaches. Some of these areas do not even have proper roads.
In fact, India is still a developing nation, and smooth roads remain a dream in
many parts of the country.
On top of that, the challenges that the roads face are also very different. For
example, rural roads may very often be loaded with a herd of cows, which is
not commonly seen in the UK and USA. Additionally, there might be road
blockages due to a variety of reasons, such as Indian baraats, religious
ceremonies, festivals, or even legitimate protests. Autonomous cars will have
to be trained to deal with hurdles that are relevant to Indian society.
Car parking is also done in a very haphazard and scattered manner, except
for parking in malls. The dynamic parking arrangements in India would
require the AI to be heavily trained so that it can manoeuvre cars efficiently
out of these areas.
Even following road instructions can be risky for autonomous vehicles. For
example, expressways in India prescribe a driving speed of 60 km/h.
However, most drivers run their vehicles at a higher speed. In that case, if the
autonomous vehicle tries to abide by the speed regulation, an accident may
occur. Therefore, it is not just important for autonomous cars to be
acquainted with the Indian laws and their terrain but also with Indian society
and its mindset.
ii. Benefits
The advent of autonomous cars will also remove subjective and irrational
decision-making on roads. For example, many people are in a rush to
overtake and extract their cars from traffic by breaking lanes and road laws.
Such unpredictability will be erased. In fact, autonomous cars will be law-
abiding entities that will stop at signals, listen to the traffic police, and follow
all other minute instructions. Unnecessary horns that create noise pollution
would also be reduced. On top of that, autonomous cars may detect heavy
traffic and switch off their engines, thereby reducing air pollution as well.
The ability of AI-enabled cars to supervise themselves and take the right
decisions can also assist the authorities.
362 Consumer Protection Act 2019, s 10.
363 ibid s 22(2).
Furthermore, as per the Act, a manufacturer can be held liable only if there is
“a manufacturing defect, a design defect, a deviation from manufacturing
specifications, non-compliance with the express warranty, or inadequate
warning signals.”364 All these requirements may be inapplicable in the
context of autonomous cars, as all the features of the car are so intricately
connected that it may become impossible to determine if it is a
manufacturing defect, car design defect, or software design defect. Even
after providing all warning signals, information, specifications, and safety
guidelines, the software system may fail. The functioning of these cars is
highly dependent on the software installed within them, and nobody can be
held responsible for such failures as they are sudden, unpredictable, and out
of human control.
364 ibid s 84.
Therefore, a way out for the victim customer is to rely on Section 84(2) of
the Act which says that the manufacturer will be liable even if he was not
negligent in making the express warranty. 365 The benefit of this provision is
that it removes the need for an ‘examination of defect’ and the issue of
‘foreseeability or non-foreseeability’ of the defect. However, the customer
can rely on this only if the producer warranted the customer a certain level of
safety, which was later breached. Suppose the warranty indicates that the
car can distinguish between basic elements of its surroundings, such as the
sky, a tree, and a truck, but the car actually fails to do so; a claim may then
succeed against the producer.
365 ibid s 84(2).
Fortunately, the Act does not place huge reliance on the ‘foreseeability’
aspect, but it does say that a seller may be responsible if he “did not exercise
‘reasonable care’ in assembling, inspecting, and maintaining the product or
failed to give enough warnings.”367 Now, this ‘reasonable care’ leaves a grey
area in the law. It is almost impossible to decide if the manufacturer or the
seller exercised reasonable care in making or training the AI. It would
involve a deep study of various aspects of computer sciences, which may not
even lead us to a fulfilling conclusion. However, if it is clearly visible that
something substantially negligent was done, the Act clearly holds the person
in charge of the product responsible.
There are also exceptions to product liability, which complicate the question
even more. Under Section 87, “the producer shall not be liable if, at the time
366 ibid s 86.
367 ibid s 86(e).
of the offence, the product was altered or misused,” which may be relevant
in the context of hacking (third-party intervention). Other exceptions say that
if sufficient warnings were present, the producer will not be liable.368
Application of this principle might be tricky as the producer may provide
adequate warnings for the usage of the car, but it can still malfunction.
The last and most thought-provoking exception available under the Act is
that if the danger is obvious, then the product manufacturer shall not be
liable for the failure of the product.369 It is similar to the defence of volenti
non fit injuria. If such an exception is applied to autonomous cars, it may
create a situation where producers escape liability. They may claim that
the customer is aware that, even if the software acts rationally most of the
time, there is still a small chance that it may go erratic; if the customer drives
the car with this knowledge, he has consented to the anticipated injury, and
the manufacturer therefore should not be liable after such consent.
It is amply clear that, through the application of sections 84(2), 85, and 86,
parties can be brought under the purview of liability. However, certain
provisions also create confusion in the law, such as the exceptions
highlighted above, which will have to be modified with regard to
autonomous cars.
After circling around all these issues, we come to a singular problem, namely:
on whom does the concerned Act impose liability? It is clear that, in some
form or another, liability will have to be distributed among the parties involved.
368 ibid s 87.
369 ibid.
In the case of a level 0 automated car, there are only two parties who could
be held responsible: the car’s company and the car’s owner. However, in the
case of a self-driving car, the number of parties that can get involved will
depend on the level of automation. For instance, level 3 is a substantial jump
from level 2, as the self-driving car becomes capable of monitoring its
environment and taking decisions, while level 4 automation requires no
human intervention at all to function.370 In these two instances, multiple parties can be
held responsible: the car manufacturer company, the owner of the car, the AI
itself, or the entity that trained the AI.
Clearly, the whole system consists of so many parties and contributors that it
becomes unfair to make one party responsible in a scenario where liabilities
cannot be distinguished or separated. Therefore, where there is complexity in
attributing the accident or fault to one person, a fair solution could be
apportioning the risks.371 By opting for this, one party is not under a full-
fledged obligation to pay damages when the defect occurred due to the
shared contributions of multiple stakeholders in the product.
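The apportionment idea above can be sketched numerically. The following is a minimal illustration of splitting a damages award in proportion to each stakeholder’s contribution to the defect; the parties and fault percentages are hypothetical and do not come from the chapter or from any statute:

```python
# Hypothetical fault shares for a defect with multiple contributors;
# the parties and percentages below are illustrative only.
fault_shares = {
    "car_manufacturer": 0.40,
    "software_vendor": 0.35,
    "ai_training_firm": 0.15,
    "owner": 0.10,
}

def apportion(damages: float, shares: dict) -> dict:
    """Split a damages award in proportion to each party's share of fault."""
    total = sum(shares.values())
    return {party: damages * s / total for party, s in shares.items()}

# e.g. for a 1,000,000 award, the manufacturer bears roughly 40 per cent
liabilities = apportion(1_000_000, fault_shares)
```

Under such a scheme, no single party bears the full award unless it bears the full share of fault, which mirrors the concern about shared contributions among multiple stakeholders.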
370 Kapoor (n 353).
371 Bernhard A Koch, ‘Product Liability for Autonomous Vehicles’ (Polska Izba Ubezpieczeń, 1 April 2019) <https://piu.org.pl/wp-content/uploads/2020/03/WU_2019-04_01_BAK.pdf> accessed 3 January 2022.
VIII. Conclusion
Quite clearly, many parallel laws will have to be amended for the proper
adaptation of autonomous vehicles in India. These parallel laws would help
the government be proactive in encountering all the issues beforehand. The
product liability clauses, on the other hand, might require little amendment
for application to this new sector, but efforts must be made to ensure that
these clauses need to be invoked in only a handful of circumstances. More
efforts must be dedicated to strengthening laws that act as precautionary
measures, so that the dilemma of who must be held liable is lessened.
The Government and the public would also have to come to terms with the
fact that where earlier it was humans who made errors while driving, in the
future it could be software. Accepting this reality could be a huge leap towards
the prevalence of self-driving cars in the country. Needless to say, the laws
in India and the EU are not fully ripe to tackle the issues of product liability
arising out of autonomous vehicles. In order to frame better strategies,
governments will have to unlearn the old conventional laws and formulate
new ones for a better and safer penetration of autonomous vehicles.
Legal Forecasting and AI based Judges: A Jurisprudential Analysis using
Competition Law Case Studies
Abstract
AI-based judges are the “next big thing” in the sphere of the judiciary.
China has reportedly rolled out internet courts and has been using AI-
based robot judges to assist in adjudication for quite some time
now, which has saved the Chinese legal system billions of dollars and
reduced its workload by almost one-third.
There have been reports of Estonia working on a project using AI-
based judges to decide small claims. India, though it has not developed
any AI or robot judges, has launched initiatives to introduce AI into
the functioning of the judiciary through the recently launched
SUPACE Project to increase efficiency and fast-track the judicial
process. This chapter will analyze the fallacies of such a system and
demonstrate why AI-based judging can never fully replace “human
judges” using theories of jurisprudence, such as the ones dealing with
judicial discretion, morality and positivism, legal realism, and critical
legal studies. This will be done with the help of examples from the
domain of competition law.
While some legal realists have reached the conclusion that it is almost
impossible to predict how a judge will decide a particular case, some
like Moore have devised statistical theories like the “Institutional
Approach” to predict the judicial behaviour of a particular judge at a
particular time and place in an uncontrolled environment. This chapter
will try to address the concern of whether a statistical or mathematical
approach can be adopted to predict judicial behaviour, like the one
propounded by Moore, or whether the indeterminacy in judicial
behaviour cannot be conquered through mathematical tools.
I. Introduction
Artificial Intelligence (‘AI’) has pervaded almost all areas of human life
today. Judiciary and the legal field are no exceptions. China has reportedly
rolled out internet courts and has been using AI-based/robot judges to assist
in adjudication for quite some time now, which has saved the Chinese legal
system billions of dollars and reduced its workload by nearly one-third.372
There were reports of Estonia working on a project of using AI-based judges
to decide small claims,373 although the authenticity of this news has since
been disputed.374 India has not developed any AI or robot
judges. However, it has launched initiatives to introduce AI in the
functioning of the judiciary through the recently launched SUPACE Project
372 Ben Wodecki, ‘AI helps judges decide court cases in China’ (AI Business, 18 July 2022) <https://aibusiness.com/document.asp?doc_id=779080#:~:text=China%20claims%20to%20have%20used,can%20alter%20errors%20in%20verdicts.> accessed 5 April 2023.
373 ‘Estonia creating AI Powered Judge’ (Daily Mail, 26 March 2019) <https://www.dailymail.co.uk/sciencetech/article-6851525/Estonia-creating-AI-powered-JUDGE.html> accessed 5 April 2023.
374 ‘Estonia does not develop AI Judge’ (Ministry of Justice, Republic of Estonia, 16 February 2022) <https://www.just.ee/en/news/estonia-does-not-develop-ai-judge> accessed 5 April 2023.
to increase efficiency and fast-track the judicial process.375 AI-based judges
seem to be the next big thing in the sphere of the judiciary. This chapter will
try to analyse the fallacies of such a system and demonstrate why AI-based
judges can never fully replace ‘human judges’. The premise is substantiated
in this chapter via theories of jurisprudence, such as the ones dealing with
judicial discretion, morality and positivism, legal realism, and critical legal
studies. This will be done with the help of examples from the domain of
competition law.
The second important question that the chapter deals with is that of ‘Legal
Forecasting’. Attempts to predict judicial behaviour have been made in the
past by various theorists in the domain of jurisprudence. While some legal
realists reached the conclusion that it is almost impossible to predict how a
judge will decide a particular case, some like Moore devised statistical
theories like the ‘Institutional Approach’ to ‘predict the judicial behaviour of
a particular judge at a particular time and place in an uncontrolled
environment’.376 If a business group could predict in advance what direction
a case would take, it could mitigate impending losses or avoid venturing
into such an area from the beginning. Even investors could make more
informed choices with their investments. They could decide beforehand
whether they wish to invest their time and money in a certain venture, or buy
or sell their shares. This is keeping aside extraneous factors like
375 ‘CJI launches top court’s AI driven research portal’ (Indian Express, 7 April 2022) <https://indianexpress.com/article/india/cji-launches-top-courts-ai-driven-research-portal-7261821> accessed 5 April 2023.
376 U Moore and TS Hope, ‘An Institutional Approach to the Law of Commercial Banking’ (1929) 38(6) Yale Law Journal 703–719 <https://doi.org/10.2307/790071> accessed 5 April 2023.
colluding with the judges and getting information from the departments from
the back end.
Let us take the example of the recent case of Future Retail v Amazon
(CCI),377 which dealt a severe blow to the Amazon and Future Retail deal.
If the legal team could have foreseen the possibility of such a decision, then
losses worth crores, along with considerable time, could have been saved. Even from the
perspective of retail investors, the price of shares fluctuates massively with
such decisions, and if predictions can be made in advance, it would help
them make informed investments. Additionally, if someone is funding
litigation through Third-Party Litigation Funding (‘TPLF’), or champerty
and maintenance, they could also benefit from the same. This could be
subject to public policy concerns,378 but it is nevertheless a valid use of such
a prediction theory, as one could predict the outcome of the case beforehand
and make an informed bet or litigation funding decision.
377 ‘Amazon v Future Retail’ (Competition Commission of India, 2021) <https://www.cci.gov.in/combination/order/details/order/1148/1> accessed 5 April 2023.
378 ‘A Strategic Look at Champerty and Third-Party Litigation Financing’ (JDSupra, 23 January 2019) <https://www.jdsupra.com/legalnews/a-strategic-look-at-champerty-and-third-79997/> accessed 5 April 2023; Thomas J Salerno, ‘Third-Party Litigation Funding (TPLF) and Ethical Issues in Bankruptcy’ (Daily DAC, 26 September 2022) <https://www.dailydac.com/third-party-litigation-funding/> accessed 5 April 2023.
Judicial discretion is an integral element of judicial decision-making. It is
one of the reasons for the evolution of a diverse variety of thought in various
fields of law, and in its absence there would be stagnancy in legal thinking.
Hart, in his theory on the ‘Open Texture of Laws’,379 and Dworkin, in his
theory on ‘Hard Cases’,380 have spoken at length on the role of discretion in
judicial decision-making. Legal realists have identified it as a major
contributing factor for ‘indeterminacy in law’.
(a) Rule and its exception – Competition Law and Intellectual Property
Rights – Section 3(5) of the Competition Act
379 HLA Hart, The Concept of Law, Formalism and Rule Scepticism (see Section I, Open Texture of Laws).
380 Ronald Dworkin, Taking Rights Seriously (Duckworth 1978) ch 4.
While this exception looks simple at the outset, the interpretation of what
constitutes ‘unreasonable’ is highly controversial, and there are varying
considerations that need to be balanced: on the one hand, the individual
rights of IP holders; on the other, the collective interest of society in
maintaining workable competition and not letting anyone gain a position of
dominance that could prove detrimental to the intended object of the
legislation. The trade-off needs to be justified based on dynamic efficiency,
which promotes innovation in the economy, over static efficiency, which
deals with promoting competition.382
Interestingly, Dworkin, in his work, has also pointed out a controversial rule
and principle relating to the competition law domain in the United States
(‘US’) Anti-trust legislation.383 He pointed out that Section 1 of the Sherman
Act provided for “every contract in restraint of trade” to be void.384 The
Supreme Court of the US, however, held that only an ‘unreasonable’
restraint of trade is void. This principle of unreasonableness was therefore
the sphere in which courts were to exercise discretion when deciding
whether a contract in restraint of trade is void.
381 Competition Act 2002, s 3(5).
382 V Korah, ‘Competition Law and IPRs’ in V Dhall (ed), Competition Law Today: Concepts, Issues and Law in Practice (Oxford University Press 2007) 131.
383 Dworkin (n 380) ch 3.
384 Sherman Act (USA) s 1.
It would be pertinent to note how AI-based judges would deal with such
situations requiring the use of judicial discretion in cases involving
conflicting rules and principles. Some theorists have suggested classifying
cases on their facts and drawing similarities based on the extent of
deviation. However, how well such balancing of competing interests would
work as new fact patterns present themselves remains to be seen.
(c) Appeals
After analysing the appellate cases in the Indian competition law domain
over the ten months between January and October 2022 as a part of this
research, it was observed that none of the cases were substantially
overturned on appeal at the NCLAT, the appellate body for competition law
cases. Therefore, the competition law regime in India resembles the
‘sociological wing of Legal Realism’ more closely.385 A reason for this is
that the data set for competition law appeals in India is very small, at only
27 appeals in ten months. In the past, however, decisions have been
overturned at appellate stages in competition law in India as well.386
385 The ‘Sociological Wing of Realism’ (represented by writers like Oliphant, Moore, Llewellyn and Felix Cohen) propounded that “judicial decisions fell into predictable patterns (though not, of course, the patterns one would predict just by looking at the existing rules of law).” It can be inferred from this that “various social forces must operate upon judges to force them to respond to facts in similar, and predictable ways.” The ‘Idiosyncrasy Wing of Realism’, mainly propounded by Frank and Judge Hutcheson, asserted that the “personality of a judge is the pivotal factor in law administration.” [Extracted from Brian Leiter’s “American Legal Realism”]
386 India Trade Promotion Organisation v CCI (Appeal No. 36 of 2014).
at the lower stages. After all, if the whole purpose of an AI-based judge is to
bring efficiency and fast-track the court process, there can be no possible
justification for employing a superior algorithm at appellate stages when the
same can be employed at the lower stages as well.
Dworkin says that precedents which are not justifiable by the principles that
provide the best interpretation of past practice are to be set aside as
‘mistakes’. Another major question AI-based adjudication will need to
address is how such ‘mistakes’ will be identified and set aside. Again, human
intervention from the legislative angle will be required to qualify such
precedents as mistakes.
387 MM Sharma, ‘Hear the Monopolyphony’ (The Economic Times, 4 March 2020) <https://economictimes.indiatimes.com/blogs/et-commentary/hear-the-monopolyphony/> accessed 5 April 2023.
388 ibid.
389 Some of the factors which this approach takes into consideration include efficacy theory, consumer welfare theory and essential facilities theory (list not exhaustive).
390 Sharma (n 387).
391 ibid.
If the government develops its indigenous systems without taking any help
from private actors, that would essentially give the legislature and the
executive an upper hand (or uncontrolled powers) over the judiciary. This
would be violative of the doctrine of separation of powers, which is
considered to be a part of the basic structure of the Indian Constitution and
an essential component of jurisprudence worldwide. This is because an AI-
based judging system would lack the subjectivity and the discretion
exercised by human judges and, as such, would basically follow a similar
line of thinking to that held in previous judgments. New changes would be
brought into the law by the legislature, when it enacts a new law in
Parliament and the changes are fed into the AI-based judges. It would be
the will of the legislature that dominates all the organs of government if a
fully AI-based judging system is allowed to operate in the future. Needless
to say, the political parties in power would also get an opportunity to abuse
their position of dominance, which could manifest in arbitrary actions by the
government in power.
Another risk ensues when private actors power the technology for this AI-
based judging system. Consider a practical real-life example to understand
this. Recently, there was a decision by the Competition Commission of India
(‘CCI’) wherein a penalty of Rs 1,338 crore was imposed on Google for
indulging in anti-competitive practices.392 In a second order, which came
within just a week of the first, another penalty of Rs 936 crore ($113.04 mn)
392 Mr. Umar Javeed and Others v Google LLC and Anr (Case No. 39 of 2018, CCI); ‘CCI imposes Rs 1,338-crore fine on Google for anti-competitive practices’ (The Economic Times, 21 October 2022) <https://economictimes.indiatimes.com/tech/technology/cci-imposes-rs-1337-crore-fine-on-google-for-anti-competitive-practices/articleshow/94993416.cms> accessed 5 April 2023.
was imposed on the company for abusing its dominant position with
regard to its payment app and in-app payment system.393
Apart from this, there are also several other questions of balancing
competing interests, as discussed earlier. One such example would be
balancing the individual interest of IPR holders with the collective interest
of distributive justice in society. Another minor, non-legal yet important
point for a CLS analysis could be the loss of jobs for human beings due to
the introduction of an AI-based judiciary system.
393 XYZ (Confidential) v Alphabet Inc. and Others (14 of 2021); Match Group, Inc. v Alphabet Inc. and Others (35 of 2021); Alliance of Digital India Foundation v Alphabet Inc. and Others (Competition Commission of India); ‘Google fined Rs 936 crore in second antitrust penalty this month’ (The Economic Times, 25 October 2022) <https://economictimes.indiatimes.com/tech/technology/google-fined-113-million-in-second-antitrust-penalty-this-month/articleshow/95080594.cms> accessed 5 April 2023.
and the institution can be stated. After the method has been applied to large
numbers of cases in many fields it may be possible to state ‘laws’ for some
fields in terms of that correlation. 394
The author finds that this theory could be useful in creating AI-based
systems for judicial decision-making at ‘lower levels of adjudication’. It
could be coupled with correlation coefficients or similarity measures such
as the Jaccard distance. The Jaccard distance and related coefficients are
statistical tools used to locate the shared and distinct members of different
data sets and to measure the degree of similarity or dissimilarity between
them.
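As a concrete illustration of the statistical tools just mentioned, the Jaccard measure can be computed over sets of fact descriptors drawn from two cases. The sketch below is minimal; the fact tags and cases are hypothetical and are not drawn from any real data set:

```python
def jaccard_similarity(a: set, b: set) -> float:
    """|A ∩ B| / |A ∪ B|: the share of members common to both sets."""
    if not a and not b:
        return 1.0  # convention: two empty sets are identical
    return len(a & b) / len(a | b)

# Hypothetical fact descriptors extracted from two competition law cases.
case_x = {"dominant_position", "tying", "digital_market"}
case_y = {"dominant_position", "predatory_pricing", "digital_market"}

similarity = jaccard_similarity(case_x, case_y)  # 2 shared of 4 total = 0.5
distance = 1 - similarity  # Jaccard distance: degree of dissimilarity
```

A system of the kind envisaged here could then treat the precedents with the smallest Jaccard distance from a new case as its closest matches at the lower levels of adjudication.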
394 Moore and Hope (n 376).
V. Conclusion
Some important observations that were made in this chapter are as follows:
395 Snehanshu Shekhar, ‘Supreme Court embraces Artificial Intelligence, CJI Bobde says won’t let AI spill over to decision-making’ (India Today, 7 April 2021) <https://www.indiatoday.in/india/story/supreme-court-india-sc-ai-artificial-intellegence-portal-supace-launch-1788098-2021-04-07> accessed 5 April 2023; ‘Behind SUPACE: the Artificial Intelligence Portal of Supreme Court of India’ (AI Magazine, 29 May 2021) <https://analyticsindiamag.com/behind-supace-the-ai-portal-of-the-supreme-court-of-india/> accessed 5 April 2023.
As far as legal forecasting is concerned, the author is of the view that even
though attempts can be made to predict legal decisions using mathematical
and statistical tools, human intuition and the legal sense developed over
time with experience are perhaps a better meter by which to judge the
outcome. Again, this is with regard to hard cases, and not the technically
easy cases which can be decided in a mechanical manner.
So, in the end, this chapter concludes that ‘the human touch’ shall forever
remain essential for the legal profession and can never be entirely done away
with.
Abstract
In today’s techno-savvy world, artificial intelligence is revolutionizing
development and the field of law is no different. Artificial Intelligence
(AI) is based on the idea that if all characteristics of learning and
intelligence are thoroughly recorded, they may be replicated by a
computer programme. Deep learning models are a form of machine
learning that take their inspiration from the design of the human brain.
While the fundamental human element in dispute resolution can never
be substituted by technology, it is time for certain human elements in
arbitration to be gradually replaced by AI.
Exploring the Role of Artificial Intelligence in Arbitration: Legal vis-à-vis
Tribunal Secretaries
I. Introduction
The right to challenge the mandate of tribunal secretaries emanates from the
right of parties to challenge the Tribunal’s jurisdiction. Therefore, the
396 Constantine Partasides, ‘The Fourth Arbitrator? The Role of Secretaries to Tribunals in International Arbitration’ (2002) 18(2) Journal of International Arbitration 147 <https://doi.org/10.1023/A:1015787618880> accessed 10 January 2023.
397 Horst Eidenmüller and Faidon Varesis, ‘What is an Arbitration? Artificial Intelligence and the Vanishing Human Arbitrator’ (2020) <https://ssrn.com/abstract=3629145> accessed 13 January 2023.
398 Paul Bennett Marrow, Mansi Karol and Steven Kuyan, ‘Artificial Intelligence and Arbitration: The Computer as an Arbitrator – Are We There Yet?’ (2020) 74(4) Dispute Resolution Journal <https://www.marrowlaw.com/wp-content/uploads/2021/02/Marrow-et-al.-AI-and-Arbitration.pdf> accessed 14 January 2023.
foundations for challenging an arbitrator, such as a “lack of impartiality and
independence” or the performance of “impermissible duties,” apply by
analogy to the right to challenge a secretary. 399 The rationale behind this is
that secretaries must adhere to similar ethical standards as arbitrators, and
that the possibility of the involvement of a biased secretary could result in a
flawed award.400 Similar to how parties use arbitrator challenges as a tactic
to delay arbitration proceedings, increase costs, or render an award
unenforceable, it is typical for them to challenge tribunal secretaries for the
same purposes.
For instance, the most common ground for challenging tribunal secretaries is
“justifiable doubts” regarding their impartiality and independence. In Victor
399
J Ole Jensen, Tribunal Secretaries in International Arbitration (Oxford University Press
2019) 319.
400
Klaus Peter Berger, International Economic Arbitration (Kluwer 1993) 259.
401
Malcolm Langford, Daniel Behn and Runar Hilleren Lie, ‘The Revolving Door in
International Investment Arbitration’ (2017) 20(2) Journal of International Economic Law
318 <https://doi.org/10.1093/jiel/jgx018> accessed 13 January 2023.
Parties may also wait until the award has been rendered before attempting to
render it unenforceable on the basis of bias or on the grounds that the
secretary performed impermissible duties. 405 In some instances, the use of
402
Victor Pey Casado and President Allende Foundation v Republic of Chile [2014] ICSID
Case No ARB/98/2.
403
Langford, Behn and Lie (n 401).
404
P v Q & Ors [2017] EWHC 194 (Comm).
405
Chloe J Carswell and Lucy Winnington-Ingram, ‘Awards: Challenges Based on Misuse of Tribunal Secretaries’ (Global Arbitration Review, 8 June 2021) <https://globalarbitrationreview.com/guide/the-guide-challenging-and-enforcing-arbitration-awards/2nd-edition/article/awards-challenges-based-misuse-of-tribunal-secretaries> accessed 12 January 2023.
such strategies to render an arbitral ruling unenforceable was ineffective. For instance, in Sonatrach v Statoil,406 an ICC arbitration, Sonatrach attempted to invalidate the award on the ground that the Tribunal unfairly allowed the secretary to participate in its deliberations by allowing her to analyse and make notes on the dispute’s substantive issues. The court dismissed Sonatrach’s attempts and also denied Sonatrach access to the secretary’s notes, as their disclosure would undermine the confidentiality of the Tribunal’s deliberations.
406
Sonatrach v Statoil [2014] EWHC 875 (Comm).
407
Yukos Universal Limited (Isle of Man) v Russia (2015) UNCITRAL PCA Case No AA
227.
408
‘IBA Guidelines on Conflicts of Interest in International Arbitration’ (International Bar
Association, 23 October 2014) General Standard 5(b), s 4.1.1.
409
David E Sorkin, ‘Technical and Legal Approaches to Unsolicited Electronic Mail’ [2001]
Let D stand for the training set, which is the entire set of information utilised to train the system. As the size of D increases, the system will have more experience and be able to make more accurate predictions when applied to input that it has not yet seen. This ability is referred to as generalisation.
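The relationship between D and generalisation can be sketched in code. The following is purely illustrative (nothing here comes from the chapter): a toy model fitted to a small training set D by least squares, which is then asked to predict an unseen input. All names and data are invented.

```python
# Illustrative sketch: a model trained on a dataset D generalises to
# unseen input. Here D maps a numeric "feature" to an outcome via
# least-squares fitting of y = a*x + b.

def fit(D):
    """Fit y = a*x + b to the training set D of (x, y) pairs."""
    n = len(D)
    sx = sum(x for x, _ in D)
    sy = sum(y for _, y in D)
    sxx = sum(x * x for x, _ in D)
    sxy = sum(x * y for x, y in D)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def predict(model, x):
    a, b = model
    return a * x + b

# As D grows, the estimates of a and b stabilise and predictions on
# unseen x improve -- the generalisation ability described above.
D = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8)]
model = fit(D)
print(predict(model, 5))  # prediction for an input not in D
```

The point of the sketch is only that the fitted parameters are derived entirely from D; a larger, more representative D yields a model that tracks unseen cases more closely.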
There are many different types of digitised data, including pixels, sound bites, game scores, temperature records, and so on. In their raw form, such datasets are disjointed collections of individual facts. Machine learning models can be of different types; among the widely used are linear, nonlinear, monotonic and discontinuous models.410
410
AD Selbst and S Barocas, ‘The Intuitive Appeal of Explainable Machines’ (2018) 87(3)
Fordham Law Review <https://ir.lawnet.fordham.edu/flr/vol87/iss3/11/> accessed 13
January 2023.
411
Temitayo Bello, ‘Online Dispute Resolution Algorithm: The Artificial Intelligence
Model as a Pinnacle’ (2018) 84(2) Int’l J of Arb Med & Disp Man
<https://dx.doi.org/10.2139/ssrn.3072245> accessed 13 January 2023.
412
Harry Surden, ‘Machine Learning and the Law’ [2014] Washington Law Review
<https://scholar.law.colorado.edu/faculty-articles/81> accessed 15 January 2023.
The algorithm performs better and more efficiently over time as it receives more datasets.413
The computer quickly estimates the likely result if given a factual event and
is asked to compare it to similar cases that are identified and documented in a
dataset. More often than not, the tasks of tribunal secretaries are labour-intensive. It is feasible and more efficient to automate such tasks using a machine learning algorithm, which can be trained to recognise and perform a given secretarial task by providing it with a sufficient sample of datasets. Here, the data is the factual or legal information of the case that needs to be processed.
With the aid of AI tools, pertinent factual extracts, agreed-upon and disputed
view points of the parties, and procedural history can be incorporated to help
with formulating the award.414
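The training process described above can be illustrated with a deliberately simple sketch. Everything below is invented for illustration (the sample texts, labels and function names are hypothetical, not drawn from any arbitral practice): a bag-of-words classifier is trained on labelled case documents and then asked to label a new one, the kind of routine sorting a secretary might otherwise do by hand.

```python
# Hypothetical sketch: training a simple bag-of-words classifier to
# label incoming case documents from a small labelled sample.
from collections import Counter

def train(samples):
    """samples: list of (text, label) pairs. Builds one word-frequency
    profile per label from the training data."""
    profiles = {}
    for text, label in samples:
        profiles.setdefault(label, Counter()).update(text.lower().split())
    return profiles

def classify(profiles, text):
    """Return the label whose profile shares the most word occurrences
    with the new text."""
    words = Counter(text.lower().split())
    def overlap(label):
        return sum(min(n, profiles[label][w]) for w, n in words.items())
    return max(profiles, key=overlap)

samples = [
    ("statement of claim filed by claimant", "pleading"),
    ("witness statement and exhibit bundle", "evidence"),
    ("procedural order on hearing dates", "procedure"),
]
profiles = train(samples)
print(classify(profiles, "exhibit bundle from respondent witness"))  # evidence
```

Real systems use far richer features and models, but the principle is the one stated in the text: accuracy comes from the sufficiency of the labelled sample, not from any hand-written rule.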
413
ibid.
414
Falco Kreis and Markus Kaulartz, ‘Smart Contracts and Dispute Resolution – A Chance
to Raise Efficiency?’ (2019) 37(2) ASA Bulletin 337, 350
<https://doi.org/10.54648/asab2019031> accessed 11 January 2023.
415
Annie Chen, ‘The Doctrine of Manifest Disregard of the Law After Hall Street:
Implications for Judicial Review of International Arbitrations in US Courts’ (2009) 32(6)
Fordham International Law Journal <https://ir.lawnet.fordham.edu/ilj/vol32/iss6/3/>
accessed 13 January 2023.
particular case. Over time, as the sample size increases with increased input
of information, the AI becomes more efficient at detecting a pattern and
producing a result.
Artificial intelligence (AI) is based on the idea that all attributes of intellect
and learning can be mimicked by a computer programme if they are carefully
recorded. By assuming the responsibilities of secretaries, artificial intelligence can improve arbitration by organising huge amounts of documentation with superhuman speed and efficiency, and hence prevent procedural delays.
416
UNCITRAL Notes on Organizing Arbitral Proceedings 2016, s 36.
and arrange their activities. Using this application, parties can seamlessly
organise all meetings and hearings. It can integrate parties’ objectives and
take into consideration crucial factors such as time, place and people with
minimal human intervention. 417
417
Azael Socorro Marquez, ‘Can Artificial Intelligence be used to Appoint Arbitrators?’
(2020) 1 AVANI <https://avarbitraje.com/wp-content/uploads/2021/03/ANAVI-No1-A12-
pp-249-272.pdf> accessed 15 January 2023
418
P v Q & Ors [2017] EWHC 194 (Comm).
419
Pyrrho Investments Ltd v MWB Property Ltd [2016] EWHC 256 (Ch).
420
Matt Hervey and Matthew Lavy, The Law of Artificial Intelligence (Sweet & Maxwell
2020).
A well-organised case file is vital for the smooth and effective operation of the arbitration, even though its value may appear minimal. A key part of the arbitration between Croatia and Slovenia was the efficient handling of paperwork.422 In that dispute, an arbitrator had colluded to secretly introduce new evidence after the record had been closed. However, he was unsuccessful in convincing the registrar to slip the documents into the case file. Even if a tribunal secretary could be so convinced, an AI algorithm can be programmed to disallow the uploading of documents at a later stage. Another advantage of using AI is that, while a secretary may be required to digitise documents,423 this step becomes unnecessary because the data an AI handles is already digital, and therefore also more sustainable.
421
Marquez (n 417).
422
Arbitration between the Republic of Croatia and the Republic of Slovenia [2016] PCA
Case No 2012-04.
423
Jensen (n 399) 235.
424
Yukos Universal Limited (Isle of Man) v Russia [2015] UNCITRAL PCA Case No AA
227.
prohibited.425 This is a common reason to doubt the secretary’s mandate and
the credibility of the arbitral ruling due to concerns of bias. Tribunal
secretaries may also support the drafting of any procedural decision in a
purely administrative capacity, such as by typing a pre-decided verdict,
proofreading, including the correction of typographical, grammatical, or
mathematical errors, and checking citations, dates, and cross-references.426
With numerous AI tools such as Motionize, Spellbook and Kira Systems available to draft legal documents, similar technologies can be adapted to the arbitration context.
Once a procedural decision has been drafted, the parties must be notified.
This may require the secretary simply to send an e-mail to the parties, or to officially serve the decision upon them,427 which can also be accomplished using AI technology, as demonstrated by the success of SmartWriter and Lavender.
The arbitrator’s first logical step is to get deeply familiar with the dispute’s
facts, the parties’ statements, and the evidence. The sheer bulk of these
materials is a persistent constraint to their assessment. 428 Tribunals therefore
425
Compañía de Aguas del Aconquija SA and Vivendi Universal SA v Argentine Republic
[2010] ICSID Case No ARB/97/3.
426
Hong Kong International Arbitration Centre Domestic Arbitration Rules 2014, s 3.3(e);
Note to Parties and Arbitral Tribunals on the Conduct of the Arbitration under the ICC Rules
of Arbitration 2017, s 150.
427
Jensen (n 399).
428
‘Artificial Intelligence ‘AI’ in International Arbitration: Machine Arbitration’ (Nairobi Centre for International Arbitration, August 2021) <https://ncia.or.ke/wp-content/uploads/2021/08/ARTIFICIAL-INTELLIGENCE-AI-IN-INTERNATIONAL-ARBITRATION.pdf> accessed 13 January 2023.
429
Partasides (n 396).
430
Young ICCA Guide on Arbitral Secretaries 2014, art 3(2)(h).
431
Jensen (n 399) 254.
432
Marquez (n 417).
ii. Document Translation and Interpretation Services
criminal trials, the arbitral community has yet to widely adopt AI. In 2019,
the United Kingdom invested £61 million in law technology, making
Artificial Intelligence and technology one of the country’s most impressive
fields. Since 2021, there have been discussions in London regarding the
deployment of this technology to dispute resolution. 436 Hong Kong has
already implemented AI-based eBRAM Services in arbitration. Africa has
witnessed a lot of exciting advancements in recent years, but it has yet to
explore adopting AI into the arbitral process due to a variety of public policy
restrictions. 437 Similarly, while it may be common for arbitral institutions
worldwide to consider using AI, envisioning the same for countries like India
requires considerations of cost, necessity and viability.
It is a general belief that computers operate as a ’black box,’ and that they
are unable to explain the how and why of particular results. There has been a
common concern regarding the lack of transparency with regard to AI and its
functioning. Deep learning and artificial neural network-based applications are the ones most frequently affected by the black box issue. Artificial neural networks are made up of hidden layers of nodes. Each node processes its input and sends its output to the next layer of nodes. Deep learning stacks many such hidden layers into a huge neural network, and the complexity of this can be almost endless. What the nodes learn through training is not openly visible to people. Therefore, we only know the
436
Dr Paresh Kathrani, ‘The use of tech and AI in the future of dispute resolution in
London’ (CIArb, 17 June 2021) <https://www.ciarb.org/resources/features/the-use-of-tech-
and-ai-in-the-future-of-dispute-resolution-in-london/> accessed 7 January 2023.
437
Sadaff Habib, ‘The Use of Artificial Intelligence in Arbitration in Africa – Inevitable or
Unachievable?’ (IBA Net) <https://www.ibanet.org/article/E62B06F6-7772-458A-A6E7-
1474DB7136B5> accessed 12 January 2023.
final output given to us and not what made the algorithm reach that
conclusion. This situation is referred to as an AI black box. 438
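The black box point can be made concrete with a minimal sketch. The network and weights below are entirely invented: a single hidden layer of three nodes, each weighing its inputs and applying a nonlinearity, as described above. The hidden activations can be printed, but they carry no human-readable meaning; only the final output is surfaced.

```python
# Illustrative only (weights invented): a tiny neural network whose
# hidden-layer activations are numerically inspectable yet opaque --
# the "black box" described in the text.
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

# One hidden layer of three nodes over two inputs; each node weighs
# the inputs, applies a nonlinearity, and passes its result onward.
W_hidden = [[0.9, -0.4], [-0.7, 0.2], [0.3, 0.8]]
W_out = [1.2, -0.5, 0.7]

def forward(inputs):
    hidden = [sigmoid(sum(w * x for w, x in zip(row, inputs)))
              for row in W_hidden]
    output = sigmoid(sum(w * h for w, h in zip(W_out, hidden)))
    return hidden, output

hidden, output = forward([1.0, 0.5])
print(hidden)   # three numbers -- what, if anything, do they "mean"?
print(output)   # only this final value is given to the user
```

In a trained deep network the weights are set by the learning process rather than by hand, and there may be millions of such hidden values, which is why explaining how a particular conclusion was reached is so difficult.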
Training data comes from two different sources: it may be chosen by the algorithm’s creator or by outside parties. Training data tainted by the unintentional biases of those involved in the selection process is a significant danger. It is simpler to identify and address bias and inaccuracy in algorithm design than in training data sets. Such biases are likely to be self-perpetuating, since learning algorithms retrain and reinforce themselves by utilising their own prior outputs as new data.
438
Ronald Yu and Gabriele Spina Ali, ‘What’s inside the Black Box: AI Challenges for
Lawyers and Researchers’ (2019) 19(1) Legal Information Management
<https://doi.org/10.1017/S1472669619000021> accessed 8 January 2023.
439
Caryn Devins, ‘The Law and Big Data’ (2017) 27(2) Cornell Journal of Law & Public
Policy <https://scholarship.law.cornell.edu/cjlpp/vol27/iss2/3> accessed 9 January 2023.
VI. Conclusion
Artificial intelligence is a reality and will remain so in the modern day where
large data is the norm and computers are getting more powerful by the
day. 441 AI has been instrumental in increasing procedural efficacy and
revolutionising numerous processes, such as e-discovery. There is always a drive towards larger and better things; the creation of big data technology and quantum computers that process intricate algorithms and large amounts of data backs up this idea. There is a growing need to automate the legal system, notably its laborious and time-consuming procedures. There is currently no law that specifically provides for the use of AI in the arbitration procedure. Additional rules and restrictions would be required if AI were to win support from the legal field for implementation in arbitration. Arbitration is a widely preferred mode of dispute resolution for many people today. It is a recognised alternative method for achieving a prompt
and effective resolution of disputes. Therefore, it becomes increasingly
important for us to try to apply AI to it. The benefits of arbitration could be realised significantly more swiftly with AI-assisted arbitration. Questions around bias, as well as related expenditures, will be greatly reduced if the plan is
440
Maxi Scherer, ‘Artificial Intelligence and Legal Decision-Making: The Wide Open?
Study on the Example of International Arbitration’ (2019) Queen Mary School of Law Legal
Studies Research Paper No. 318/2019 <https://ssrn.com/abstract=3392669> accessed 9
January 2023.
441
Marrow, Karol and Kuyan (n 398).
implemented properly. Conflicts over scheduling will be lessened. There will
be less delay. At the end of the day, we need to remember that AI is only
enhanced statistics, not magic. It is important that we embrace the
advancements it has to offer while addressing the challenges it carries along.
Abstract
Artificial Intelligence (AI) functions as an engine for modern finance.
AI is an interdisciplinary field classified as a strategic sector by the
EU and a vital engine of economic progress that can solve many
societal problems. It is expected to deliver better service in less time and at lower cost. This chapter explores major AI
applications that are changing the financial ecosystem, transforming
the financial industry, and having the potential to improve many of its
functions. It examines AI’s risks and limitations in law, finance, and
society. Part I of this chapter maps AI use-cases in banking,
demonstrating why AI has progressed so quickly and will continue to
do so. Part II discusses potential challenges arising from AI’s
expansion in finance. Part III discusses AI’s regulatory problems in
financial services and the methods available to address them. Part IV
emphasises the need for human engagement. It examines the inherent
and structural risks and limitations of financial AI, discusses their
implications, and gives future recommendations. This chapter seeks to
inspire new thinking on compliance, technology, and modern finance.
The Role of Artificial Intelligence (AI) in Global Financial System:
Challenges and Governance
I. Introduction
Artificial intelligence (hereinafter referred to as AI) refers to intelligent computational agents442 that mimic human power and accuracy.443 AI systems have advanced considerably in the past decade. Natural and biological intelligence are modelled using algorithmic models.444 Although a machine
that can comprehend or perform any intellectual work a human undertakes is
not yet possible, today’s AI systems can perform well on activities that
require human intelligence. Many financial activities today rely on AI,
algorithmic programs, and supercomputers. 445 Fintech organisations have
increased the usage of AI in the financial sector. Recent financial sector
adoption of big datasets and cloud computing, combined with the
development of the information economy, makes AI systems conceivable.
77% of financial institutions surveyed expect AI to be important to their
business within two years. 446
442
David Poole and Alan Mackworth, Artificial Intelligence: Foundations of Computational
Agents (Cambridge University Press 2018).
443
Laurie Hughes, Yogesh K. Dwivedi and Tom Crick, ‘Artificial Intelligence (AI):
Multidisciplinary Perspectives on Emerging Challenges, Opportunities, and Agenda for
Research, Practice and Policy’ (2019) 57(7) International Journal of Information
Management <https://www.sciencedirect.com/science/article/pii/S026840121930917X>
accessed 13 December 2022.
444
Shivam Gupta, Vinayak A. Drave and Yogesh K. Dwived, ‘Achieving Superior
Organizational Performance via Big Data Predictive Analytics: A Dynamic Capability
View’ (2020) 90(3) Industrial Marketing Management 581
<https://doi.org/10.1016/j.indmarman.2019.11.009> accessed 13 December 2022.
445
MB Fox, ‘The New Stock Market: Sense and Non-Sense’ (2015) 65(2) Duke Law
Journal <http://scholarship.law.duke.edu/dlj/vol65/iss2/1> accessed 13 December 2022.
446
Madeleine Hillyer ‘AI Will Transform Financial Services Industry within Two Years,
Survey Finds’ (World Economic Forum, 4 February 2020)
<https://www.weforum.org/press/2020/02/ai-will-transform-financial-services-industry-
within-two-years-survey-finds> accessed 15 December 2022.
Banks already use AI to improve underwriting and fraud detection.447 Its progress could widen the
digital gap between developed and poor nations. Its deployment and benefits
are mostly in major nations and a few emerging economies. These
technologies could aid emerging economies by lowering the costs of credit
risk assessments.448 AI usage in the banking sector also brings new risks and challenges for financial stability. AI is seen as a driver of disruptive technology. In financial services, AI could drive change in different ways: it could increase the quality of products and services for clients by using a larger and deeper analytical base, and it can drive innovation.449
447
R Thomas, ‘AI-Bank of the Future: Can Banks Meet the AI Challenge?’ (McKinsey &
Company, 20 May 2021) <https://www.mckinsey.com/industries/financial-services/our-insights/ai-bank-of-the-future-can-banks-meet-the-ai-challenge> accessed 15 December 2022.
448
Cristian Alonso, Siddharth Kothari and Sidra Rehman, ‘How Artificial Intelligence
Could Widen the Gap between Rich and Poor Nations’ (IMF, 02 December 2020)
<https://www.imf.org/en/Blogs/Articles/2020/12/02/blog-how-artificial-intelligence-could-
widen-the-gap-between-rich-and-poor-nations> accessed 10 December 2022.
449
Stephen Bredt, ‘Artificial Intelligence AI in the Financial Sector Potential and Public
Strategies’ (Frontiers, 4 October 2019)
<https://www.researchgate.net/publication/336261157_Artificial_Intelligence_AI_in_the_Fi
nancial_SectorPotential_and_Public_Strategies/fulltext/5d973e15299bf1c363f7a30c/Artifici
al-Intelligence-AI-in-the-Financial-Sector-Potential-and-Public-Strategies.pdf> accessed 5
December 2022.
This chapter seeks to inspire new thinking on compliance, technology, and modern finance.
450
Sweety Gupta and Anshu Yadav, ‘The Impact of Electronic Banking and Information
Technology on the Employees of Banking Sector’ (2017) 42(4) Management and Labour
Studies 379 <https://doi.org/10.1177/2393957517736457> accessed 5 December 2022.
451
Dan Sarel and H Marmorstein, ‘Marketing Online Banking Services: The Voice of the
Customer’ (2003) 8 Journal of Financial Services Marketing
<http://dx.doi.org/10.1057/palgrave.fsm.4770111> accessed 15 December 2022.
452
Deborah R. Compeau and Christopher A. Higgins, ‘Computer Self-Efficacy:
Development of a Measure and Initial Test’ (1995) 19(2) MIS Quarterly
<http://dx.doi.org/10.2307/249688> accessed 6 December 2022.
453
A. Meharaj Banu, N. Shaik Mohamed and Satyanarayana Parayitam, ‘Online Banking
and Customer Satisfaction: Evidence from India’ (2019) 15(1) Asia-Pacific Journal of
Management Research and Innovation 68 <http://dx.doi.org/10.1177/2319510x19849730>
accessed 10 December 2022.
454
Carmel Herington, Scott Weaven, ‘Can Banks Improve Customer Relationships with
High Quality Online Services?’ (2007) 17(4) Managing Service Quality: An International
Journal 404 <http://dx.doi.org/10.1108/09604520710760544> accessed 14 December 2022.
455
T Chen, ‘Critical Success Factors for Various Strategies in the Banking Industry’ (1999)
17(2) International Journal of Bank Marketing 83
<http://dx.doi.org/10.1108/02652329910258943> accessed 14 December 2022.
456
Mohd. Al-Hattami, Abdulwahid Ahmed Hashed Abdullah and Afrah Abdullah Ali
Khamis, ‘Determinants of Intention to Continue Using Internet Banking: Indian Context’
(2021) 17(1) Innovative Marketing 40 <http://dx.doi.org/10.21511/im.17(1).2021.04>
accessed 10 December 2022.
Studies conducted in Finland,457 Thailand,458 Italy,459 Turkey,460 the UK,461 Singapore,462 and Korea463 found that internet banking boosted consumer satisfaction due to
its quick access and time-saving features. Electronic banking makes it easier
for consumers to compare bank services and products, encourages
competition, and helps banks grow into new markets. Many banks are even
offering mobile financial guidance.
457
Heikki Karjaluoto, Minna Mattila and T Pento, ‘Electronic Banking in Finland:
Consumer Beliefs and Reactions to a New Delivery Channel’ (2002) 6 Journal of Financial
Services Marketing 346 <http://dx.doi.org/10.1057/palgrave.fsm.4770064> accessed 10
December 2022.
458
S Prompattanapakdee, ‘The Adoption and Use of Personal Internet Banking Services in
Thailand’ (2009) 37(1) The Electronic Journal of Information Systems in Developing
Countries 1 <http://dx.doi.org/10.1002/j.1681-4835.2009.tb00261.x> accessed 11 December
2022.
459
Rocco Ciciretti, Iftekhar Hasan and Cristiano Zazzara, ‘Do Internet Activities Add
Value? Evidence from the Traditional Banks’ (2008) 35 Journal of Financial Services
Research 81 <http://dx.doi.org/10.1007/s10693-008-0039-2> accessed 11 December 2022.
460
Vichuda Nui Polatoglu and Serap Ekin, ‘An Empirical Investigation of the Turkish
Consumers’ Acceptance of Internet Banking Services’ (2001) 19(4) International Journal of
Bank Marketing 156 <http://dx.doi.org/10.1108/02652320110392527> accessed 12
December 2022.
461
Gary Boyes and Merlin Stone, ‘E-Business Opportunities in Financial Services’ (2003) 8
Journal of Financial Services Marketing 176
<http://dx.doi.org/10.1057/palgrave.fsm.4770117> accessed 12 December 2022.
462
Z Liao and MT Cheung, ‘Internet-Based e-Banking and Consumer Attitudes: An
Empirical Study’ (2002) 39(4) Information & Management 283
<http://dx.doi.org/10.1016/s0378-7206(01)00097-0> accessed 13 December 2022.
463
B Suh and I Han, ‘Effect of Trust on Customer Acceptance of Internet Banking’ (2002)
1(3) Electronic Commerce Research and Applications 247
<http://dx.doi.org/10.1016/s1567-4223(02)00017-0> accessed 14 December 2022.
464
R Alt, R Beck and MT Smits, ‘FinTech and the Transformation of the Financial Industry’
(2018) 28 Electronic Markets 235 <http://dx.doi.org/10.1007/s12525-018-0310-9> accessed
14 December 2022.
reputational issues. Many national authorities have adjusted their regulations
to guarantee the security and profitability of the domestic financial market,
promote market discipline, and preserve client rights and public faith in the
banking sector. Policymakers are becoming more cognizant of macro
policy’s impact on capital flows. 465
AI-driven solutions can help banks decide how much to lend a client, notify
traders regarding position risk, detect consumer and insider fraud, improve
compliance, and reduce model risk.466 The Global Financial Crisis (GFC) of 2008 underlined the importance of bank risk management.467 The GFC’s economic and financial devastation was largely caused by banks’ disregard for risk management in the period before 2008.468 Financial risk relates to uncertain
outcomes that affect a company’s earnings. 469 Banks transfer funds between
deficit and surplus units. Due to knowledge asymmetries, economic entities
prefer utilising an intermediary. The risk of the transaction is transferred to
the bank as a middleman instead of the surplus unit. Part (or even all) of the
bank’s investment is in danger. This risk is the chance that an investment’s
465
Saleh M. Nsouli and Andrea Schaechter, ‘Finance and Development’ (2002) 39(3)
Finance and Development
<https://www.imf.org/external/pubs/ft/fandd/2002/09/nsouli.htm> accessed 16 December 2022.
466
S Aziz and MM Dowling, ‘AI and Machine Learning for Risk Management’ (2018)
SSRN Electronic Journal <http://dx.doi.org/10.2139/ssrn.3201337> accessed 16 December
2022.
467
J Blažek, T Hejnová and H Rada, ‘The Impacts of the Global Economic Crisis and Its
Aftermath on the Banking Centres of Europe’ (2018) 27(1) European Urban and Regional
Studies 35 <http://dx.doi.org/10.1177/0969776418807240> accessed 16 December 2022.
468
Wyn Grant and Graham K. Wilson, ‘The Consequences of the Global Financial Crisis:
The Rhetoric of Reform and Regulation’ (2013) 50 Choice Reviews Online 50
<http://dx.doi.org/10.5860/choice.50-2771> accessed 17 December 2022.
469
H. David Sherman and S. David Young, ‘Where Financial Reporting Still Falls Short’
(Harvard Business Review, 1 July 2016) <https://hbr.org/2016/07/where-financial-reporting-
still-falls-short> accessed 14 December 2022.
real return will be lower than projected. Banks’ business models depend on
managing this risk. 470 Effective risk management identifies, measures, and
monitors a bank’s market, credit, and liquidity risks. Upside risk can boost a
bank’s worth.471
The bank’s mindset and risk appetite complement its objective of enhancing
shareholder value. The robust risk culture ensures excellent workplace
relationships for workers who are aware of the bank’s risk attitude and risk
parameters. It results from employee behaviour and beliefs, strategic
decisions and experiences, and underlying assumptions. 472 Regulation
protects customers, reduces crime, supports macroeconomic objectives, and
maintains investor trust. Credit, liquidity, reputational, and operational risks
are major financial hazards. Credit risk is the likelihood that a bank may lose
money if a customer does not meet their contractual obligations or repay a
loan. 473 Credit risk is measured by expected and unexpected loss. 474
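The “expected” component of that measurement follows a standard decomposition from credit-risk practice (a convention the chapter does not spell out): expected loss is the product of the probability of default (PD), the loss given default (LGD) and the exposure at default (EAD). A minimal sketch, with invented figures:

```python
# Standard expected-loss decomposition: EL = PD x LGD x EAD.
# All figures below are made up for illustration.

def expected_loss(pd_prob, lgd, ead):
    """pd_prob: probability of default (0..1),
    lgd: loss given default as a fraction of exposure (0..1),
    ead: exposure at default in currency units."""
    return pd_prob * lgd * ead

# A 1,000,000 exposure with a 2% default probability and a 45%
# loss given default carries an expected loss of 9,000.
loan_el = expected_loss(0.02, 0.45, 1_000_000)
print(loan_el)
```

Unexpected loss, by contrast, captures the volatility of losses around this expectation and is what capital buffers are typically sized against.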
470
Harry DeAngelo and René M. Stulz, ‘Liquid-Claim Production, Risk Management, and
Bank Capital Structure: Why High Leverage Is Optimal for Banks’ (2015) 116(2) Journal of
Financial Economics 219 <http://dx.doi.org/10.1016/j.jfineco.2014.11.011> accessed 14
December 2022.
471
Anthony Saunders, Marcia Cornett and Otgo Erhemjamts, Financial Institutions
Management: A Risk Management Approach (Mc Graw Hill 2021).
472
J Galbreath, ‘Drivers of Corporate Social Responsibility: The Role of Formal Strategic
Planning and Firm Culture’ (2009) 21(2) British Journal of Management 511
<http://dx.doi.org/10.1111/j.1467-8551.2009.00633.x> accessed 15 December 2022.
473
K Horcher, ‘Managing Treasury Risks in the Real World’ (2005) 17(1) Journal of
Corporate Accounting and Finance 23 <http://dx.doi.org/10.1002/jcaf.20163> accessed 15
December 2022.
474
E Angelini, G Tollo and A Roli, ‘A Neural Network Approach for Credit Risk
Evaluation’ (2008) 48(4) The Quarterly Review of Economics and Finance 733
<https://doi.org/10.1016/j.qref.2007.04.001> accessed 15 December 2022.
AI could improve a bank’s credit risk quantification process. It classifies
credit risk more accurately than traditional methods.475 Operational risk
relates to possible losses through internal management, functional and accounting system failures, failed procedures and processes, fraud, and human mistakes.476 AI uses clustering algorithms to uncover abnormal
spending trends and fraud rings. 477 Banks must have enough liquidity to
handle loan requests and depositor withdrawals. AI could save operating
costs and effort, increase automation, and improve liquidity management. 478
A bank’s reputation depends on its financial strength.479 Reputational risk is
the risk of bad headlines harming a company’s brand and economic well-
being.480 AI discovers patterns in otherwise inaccessible data, and algorithms encode this data to make predictions and judgments.481
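The fraud-detection idea above can be illustrated with a deliberately simplified stand-in (not the clustering systems banks actually deploy; all data and names here are invented): flagging transactions that deviate sharply from a customer’s historical spending pattern.

```python
# Greatly simplified stand-in for AI-based fraud detection: flag
# transactions far outside a customer's historical spending pattern.
# Data is invented for illustration.
import statistics

def flag_anomalies(history, new_txns, threshold=3.0):
    """Return transactions more than `threshold` standard deviations
    from the customer's historical mean spend."""
    mean = statistics.mean(history)
    sd = statistics.stdev(history)
    return [t for t in new_txns if abs(t - mean) > threshold * sd]

history = [42.0, 55.5, 38.2, 61.0, 47.3, 52.8]  # typical card spend
print(flag_anomalies(history, [49.9, 1250.0]))  # [1250.0]
```

Production systems replace this single statistic with clustering over many features (merchant, geography, timing), but the principle is the same: the model learns what “normal” looks like from data and surfaces departures from it.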
475
I-Cheng Yeh, Che-hui Lien, ‘The Comparisons of Data Mining Techniques for the
Predictive Accuracy of Probability of Default of Credit Card Clients’ (2009) 36(2) Expert
Systems with Applications 2473 <http://dx.doi.org/10.1016/j.eswa.2007.12.020> accessed
15 December 2022.
476
ibid.
477
GD Brown Swankie and D Broby, ‘Examining the Impact of Artificial Intelligence on the
Evaluation of Banking Risk’ (University of Strathclyde, 28 November 2019)
<https://pureportal.strath.ac.uk/en/publications/examining-the-impact-of-artificial-
intelligence-on-the-evaluation> accessed 16 December 2022.
478
M Tavana et al, ‘An Artificial Neural Network and Bayesian Network Model for
Liquidity Risk Assessment in Banking’ (2018) 275 Neurocomputing 2525
<http://dx.doi.org/10.1016/j.neucom.2017.11.034> accessed 17 December 2022.
479
C.S. Fernando, V.A. Gatchev, A.D. May and W.L. Megginson, ‘The Value of
Reputation: Evidence from Equity Underwriting’ (2015) 27(3) REPEC 96
<https://ideas.repec.org/a/bla/jacrfn/v27y2015i3p96-112.html> accessed 17 December 2022.
480
F Fiordelisi, M-G Soana and P Schwizer, ‘The Determinants of Reputational Risk in the
Banking Sector’ (2011) SSRN Electronic Journal <http://dx.doi.org/10.2139/ssrn.1895327>
accessed 18 December 2022.
481
H Gao, G Barbier and R Goolsby, ‘Harnessing the Crowdsourcing Power of Social
Media for Disaster Relief’ (2011) 26(3) IEEE Intelligent Systems 10
<http://dx.doi.org/10.1109/mis.2011.52> accessed 19 December 2022.
482
KN Johnson, ‘Cyber Risks: Emerging Risk Management Concerns for Financial
Institutions’ [2016] SSRN Electronic Journal
<https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2847191> accessed 17 December
2022.
483
A Ghandour, ‘Opportunities and Challenges of Artificial Intelligence in Banking:
Systematic Literature Review’ (2021) 10(4) TEM Journal 1581
<http://dx.doi.org/10.18421/tem104-12> accessed 17 December 2022.
484
J Ravetz, ‘Normal Accidents: Living with High-Risk Technologies’ (1985) 17(3) Futures
287 <http://dx.doi.org/10.1016/0016-3287(85)90044-8> accessed 18 December 2022.
485
SK Mohanty and S Mishra, ‘Regulatory Reform and Market Efficiency: The Case of
Indian Agricultural Commodity Futures Markets’ (2020) 52 Research in International
Business and Finance 101145 <http://dx.doi.org/10.1016/j.ribaf.2019.101145> accessed 18
December 2022.
486
ibid.
The Role of Artificial Intelligence (AI) in Global Financial System:
Challenges and Governance
speed.487 Beyond systemic issues, the modern capital industry, for all its
considerable scale, faces many challenges and weaknesses. Because of its
reliance on computerised systems, a financial firm is vulnerable to the risks
posed by technology.488
For many financial firms, computer code, proprietary products, private data
and other intellectual property are important assets.489 The introduction of
the internet into financial activities also brings with it an internet of financial
dangers.490 Rogue employees with adequate authorization are among the
most significant risks to finance organizations in this digital era; there are
few protections against someone who is properly verified and approved.491
Algorithms can also produce errors. AI faults cause algorithmic bias:
erroneous internal representations distort the reasoning used to arrive at a
decision.492 The algorithm may draw conclusions based on misleading or
shifting statistical trends in the data.493 Specifying AI’s objectives is difficult in complex social
487
HS Scott, ‘The Reduction of Systemic Risk in the United States Financial System’ [2010]
SSRN Electronic Journal <http://dx.doi.org/10.2139/ssrn.1602145> accessed 18 December
2022.
488
Tom C. W. Lin, ‘Financial Weapons of War’ [2016] SSRN Electronic Journal
<https://ssrn.com/abstract=2765010> accessed 17 December 2022.
489
David Barboza and Kevin Drew, ‘Security Firm Sees Global Cyberspying’ (Innovation
Toronto, 6 August 2011) <https://innovationtoronto.com/2011/08/security-firm-sees-global-
cyberspying/> accessed 17 December 2022.
490
DB Hollis, ‘Why States Need an International Law for Information Operations’ [2008]
SSRN Electronic Journal <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1083889>
accessed 18 December 2022.
491
SR Chabinsky and A Archives, ‘Cybersecurity Strategy: A Primer for Policy Makers and
Those on the Front Line’ (2010) 4(1) Journal of National Security Law & Policy
<https://jnslp.com/2010/08/13/cybersecurity-strategy-a-primer-for-policy-makers-and-those-
on-the-front-line/> accessed 10 January 2023.
492
A Klein, ‘Reducing Bias in AI-Based Financial Services’ (Brookings, 10 July 2020)
<https://www.brookings.edu/research/reducing-bias-in-ai-based-financial-services/>
accessed 17 November 2022.
493
S Mullainathan and Z Obermeyer, ‘Does Machine Learning Automate Moral Hazard and
Error?’ (2017) 107(5) American Economic Review 476
<http://dx.doi.org/10.1257/aer.p20171084> accessed 17 November 2022.
environments. The algorithm needs a precise objective function that weighs
the costs and benefits of possible actions given both the current state and the
future evolution of the environment. Mis-specifying the problem structure
causes suboptimal decisions.494
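To make the cost-benefit point concrete, consider a small, hypothetical sketch (the figures, function names and approval rule below are invented for illustration and do not come from any real scoring system): a lender whose objective counts only default losses, ignoring the profit forgone when a creditworthy applicant is rejected, reaches a different and worse decision than one whose cost function reflects both.

```python
# Hypothetical sketch of objective mis-specification in a credit decision.
# All figures are invented for illustration; this is not a real scoring model.

def expected_cost(approve: bool, p_default: float, loss_if_default: float,
                  profit_if_repaid: float, full_objective: bool) -> float:
    """Expected cost of a decision. The mis-specified objective counts only
    default losses; the full objective also counts profit and forgone profit."""
    if approve:
        cost = p_default * loss_if_default
        if full_objective:
            cost -= (1 - p_default) * profit_if_repaid  # expected profit offsets cost
        return cost
    # Rejection: the full objective counts the profit forgone on a likely repayer.
    return (1 - p_default) * profit_if_repaid if full_objective else 0.0

def decide(p_default, loss, profit, full_objective):
    cost_yes = expected_cost(True, p_default, loss, profit, full_objective)
    cost_no = expected_cost(False, p_default, loss, profit, full_objective)
    return "approve" if cost_yes < cost_no else "reject"

# The same applicant (10% default probability) under the two objectives:
narrow = decide(0.10, loss=1000, profit=200, full_objective=False)
full = decide(0.10, loss=1000, profit=200, full_objective=True)
print(narrow, full)  # -> reject approve
```

The point is not the arithmetic but the structure: both lenders see the same applicant and the same probabilities, and only the specification of the objective differs, yet the decisions diverge.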
(b) Cybersecurity
The use of AI increases cyber dangers and hazards. AI systems are
vulnerable to cyber threats that go beyond conventional human or software
flaws. Such attacks use data
494
S Athey, ‘Beyond Prediction: Using Big Data for Policy Problems’ (2017) 355(6324)
Science 483 <http://dx.doi.org/10.1126/science.aal4321> accessed 18 November 2022.
495
R Guidotti et al, ‘A Survey of Methods for Explaining Black Box Models’ (2018) 51(5)
ACM Computing Surveys 1 <http://dx.doi.org/10.1145/3236009> accessed 20 November
2022.
496
J Silberg and J Manyika, ‘Tackling Bias in Artificial Intelligence (and in Humans)’
(McKinsey & Company, 6 June 2019) <https://www.mckinsey.com/featured-
insights/artificial-intelligence/tackling-bias-in-artificial-intelligence-and-in-humans>
accessed 22 December 2022.
497
C Molnar, Interpretable Machine Learning (Lulu 2022).
collected during the AI cycle to exploit algorithmic weaknesses.498 Data
poisoning can cause AI to misclassify data or fail to recognise it. It can also be used to
develop Trojan models that mask dangerous behaviours. 499 Corrupted
systems could damage the financial sector’s ability to identify, price, and
manage risks, leading to unrecognised systemic problems. Attackers could
also obtain sensitive training datasets. Financial AI providers and users
should implement mitigating mechanisms as part of their cyber-security
strategy, which includes detection and reporting systems, rigorous training in
data feed protection, and model and data privacy measures.
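The poisoning mechanism can be illustrated with a deliberately toy sketch (the one-dimensional ‘risk scores’, labels and nearest-centroid model below are invented assumptions, far simpler than any production fraud model): flipping a few training labels shifts the model’s internal representation enough to change its decision on new data.

```python
# Toy, hypothetical illustration of label-flipping data poisoning.
# All data are invented; real attacks on financial AI are far subtler.

def centroid(points):
    return sum(points) / len(points)

def train(samples):
    """Nearest-centroid 'model': one centroid per class label."""
    legit = [s for s, lab in samples if lab == "legit"]
    fraud = [s for s, lab in samples if lab == "fraud"]
    return centroid(legit), centroid(fraud)

def classify(score, c_legit, c_fraud):
    return "legit" if abs(score - c_legit) <= abs(score - c_fraud) else "fraud"

# Clean training set: 1-D "transaction risk scores" with labels.
clean = [(0.1, "legit"), (0.2, "legit"), (0.3, "legit"),
         (0.8, "fraud"), (0.9, "fraud"), (1.0, "fraud")]

# The attacker flips two fraud labels, dragging the 'legit' centroid upward.
poisoned = [(s, "legit" if lab == "fraud" and s <= 0.9 else lab)
            for s, lab in clean]

print(classify(0.7, *train(clean)))     # 'fraud' on the clean model
print(classify(0.7, *train(poisoned)))  # 'legit' after poisoning
```

A transaction that the clean model flags as fraudulent slips through after poisoning, which is the kind of silent mispricing of risk the text describes.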
Big data privacy concerns predate AI’s mainstreaming. Data anonymity and
privacy tools have been developed, and legal data policies are being
implemented globally to address these problems. AI models’ limited ability
to prevent leaks of their training data presents additional privacy concerns:
AI may directly or indirectly disclose critical data.500 Intensive data
consumption also heightens the risk of cyber attacks.501 Organized crime
groups may aim for critical assets to
498
M Comiter and H Rong, ‘Attacking Artificial Intelligence: AI’s Security Vulnerability
and What Policymakers Can Do About It’ (Belfer Center for Science and International
Affairs, 1 August 2019) <https://www.belfercenter.org/publication/AttackingAI> accessed
24 December 2022.
499
K Liu, B Dolan-Gavitt and S Garg, ‘Fine-Pruning: Defending against Backdooring
Attacks on Deep Neural Networks’ (Semantic Scholar, 1 January 2019)
<https://www.semanticscholar.org/paper/Fine-Pruning%3A-Defending-Against-
Backdooring-Attacks-Liu-Dolan-Gavitt/790ec1befba47991e8fd50a24d13be6094253f93>
accessed 7 December 2022.
500
N Kshetri, ‘The Role of Artificial Intelligence in Promoting Financial Inclusion in
Developing Countries’ (2021) 24(1) Journal of Global Information Technology
Management 1 <http://dx.doi.org/10.1080/1097198x.2021.1871273> accessed 7 December
2022.
501
‘OECD Digital Economy Outlook 2020’ (OECD, 27 November 2020)
<https://www.oecd.org/digital/oecd-digital-economy-outlook-2020-bb167041-en.htm>
accessed 22 November 2022.
502
‘The Economics of Artificial Intelligence and Machine Learning’ (YouTube, 22 June
2021) <https://www.youtube.com/watch?v=esBgWGAvjQw> accessed 10 December 2022.
503
David Bholat, Mohammed Gharbawi and Oliver Thew, ‘The Impact of Covid on Machine
Learning and Data Science in UK Banking’ (Bank of England, 18 December 2020)
<https://www.bankofengland.co.uk/quarterly-bulletin/2020/2020-q4/the-impact-of-covid-
on-machine-learning-and-data-science-in-uk-banking> accessed 25 December 2022.
504
A Mirestean and others, ‘Powering the Digital Economy: Opportunities and Risks of
Artificial Intelligence in Finance’ (2021) 2021(024) IMF Departmental Papers 1
<http://dx.doi.org/10.5089/9781589063952.087> accessed 5 January 2023.
Third-party providers of AI algorithms could induce greater uniformity in
assessments and credit decisions across the finance sector, which, together
with expanding interconnectedness, could lead to systemic issues.
Concentrations of data, and the growing use of common data sources in
artificial intelligence, may contribute to herding hazards that can result in
systemic danger. In the event of a tail risk occurrence, an improper risk
assessment and reaction by AI algorithms may magnify and propagate
shocks across the financial system, making the response either more
complicated or less effective. Concerns have also been voiced that policies
or marketing tactics based on such designs will be hard for the relevant
counterparties to decipher or predict. This would introduce new asymmetric
information into the market, which could have unpredictable effects on
financial stability. Both the proliferation of new financial technology and the
expansion of the regulatory role will shape, in a variety of ways, the course
that the financial industry takes in the future.
505
Tom C. W. Lin, ‘Compliance, Technology, and Modern Finance’ [2017] SSRN
Electronic Journal <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2904664> accessed
5 January 2023.
The growing use of new technologies within the financial sector has ushered
in a new set of threats. Every sophisticated financial institution operating in
today’s environment is, at its core, a technology corporation. In addition to
the standard issues about the balance sheet, financial institutions today also
have to concentrate on the dangers and threats that are related to new
financial technologies.
Despite the tremendous progress that has been accomplished and the
enormous potential that has been made available by breakthroughs in
financial artificial intelligence, it still poses certain severe, interrelated
dangers and restrictions.506 Particularly noteworthy are the four types of
risks and restrictions that relate to programming codes, data bias, virtual
dangers, and systemic hazards. Individually and collectively, these four
hazardous domains stand out as inherent and structural
concerns that are connected to the development of artificial intelligence in
the financial sector.
Many in the financial business have been led to assume, incorrectly, that the
solutions to the issues that humans have caused in the financial sector can be
found within the great capabilities and implementations of financial artificial
intelligence. 507 While such praise and acclaim are well-deserved, it is
important to keep in mind that AI systems still have significant gaps in their
coding that prevent them from fully representing the intricacies of the
506
KN Johnson, ‘Cyber Risks: Emerging Risk Management Concerns for Financial
Institutions’ (2015) 50(1) Georgia Law Review
<https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2847191> accessed 5 January 2023.
507
Emanuel Derman, Models. Behaving. Badly: Why Confusing Illusion with Reality Can
Lead to Disaster, on Wall Street and in Life (2012).
modern financial market and the entire world. 508 The capacity of artificial
intelligence algorithms to accurately record all market activity is constrained
by the programmes’ underlying programming. Financial markets and our
unpredictable environment are filled with complicated, unfathomable human
and other aspects that cannot be adequately represented by artificial lines of
code, no matter how thorough or sophisticated they may be. Therefore, it is
important to keep in mind that computer codes and models commonly make
oversimplified assumptions about how the market actually functions, giving
a false impression that they are more predictive and
productive than they actually are. 509
508
Brian Christian, The Most Human Human (Anchor 2012).
509
RJ Shiller, Finance and the Good Society (Princeton University Press 2012) 132.
510
JO Weatherall, The Physics of Wall Street (Harper Business 2014).
511
A Saunders and L Allen, Credit Risk Management in and Out of the Financial Crisis
(Wiley 2010).
AI and machine learning are being used more and more in the financial
sector, which is highly regulated and depends on public trust. This has led to
discussions about the risk of bias being built into these systems. Friedman
and Nissenbaum (1996) describe embedded bias as a computer system that
systematically and unfairly treats some people or groups worse than others.
AI/ML processes for segmenting customers can introduce bias into the
financial sector by causing prices or service quality to vary across groups.514
Bias in AI decisions often originates in training data drawn from already
biased processes and data sets, which in turn teaches AI/ML models to be
biased.515 Data bias, such as incorrect or insufficient information, could
make it harder for people to obtain a loan and could increase distrust of
512
ibid.
513
MB Fox, ‘The New Stock Market: Sense and Non-Sense’ (2015) 65(2) Duke Law
Journal <http://scholarship.law.duke.edu/dlj/vol65/iss2/1> accessed 13 December 2022.
514
EL Lehmann, ‘Consistency and Unbiasedness of Certain Nonparametric Tests’ (1951)
22(2) The Annals of Mathematical Statistics 165
<http://dx.doi.org/10.1214/aoms/1177729639> accessed 9 January 2023.
515
P Wang, ‘On Defining Artificial Intelligence’ (2019) 10(2) Journal of Artificial General
Intelligence <https://sciendo.com/article/10.2478/jagi-2019-0002> accessed 10 January
2023.
technology, especially among the most vulnerable. 516 There are two ways
that collecting data could lead to bias:
First, the data used to train the system may be incomplete or inaccurate.
For example, predictive algorithms (such as those used to decide whether
to grant a loan) favour groups that are better represented in the training
data, since predictions for those groups carry less uncertainty (Goodman
and Flaxman 2016).
Second, the data may reinforce existing biases (Hao 2019). For example,
Amazon found that its internal hiring tool was rejecting women because it
had been trained on past hiring decisions in which men were hired more
often than women.
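The under-representation mechanism described by Goodman and Flaxman can be sketched numerically (the repayment rates, sample sizes and threshold below are invented for illustration): even when two groups have identical observed repayment rates, a decision rule that acts on statistical confidence penalises the group with less training data.

```python
import math

# Hypothetical sketch: two groups with the SAME observed repayment rate but
# very different representation in the training data. A risk-averse rule that
# lends only when the lower confidence bound clears a threshold ends up
# rejecting the under-represented group. All numbers are invented.

def lower_bound(repay_rate, n, z=1.96):
    """Lower end of an approximate 95% confidence interval for the rate."""
    se = math.sqrt(repay_rate * (1 - repay_rate) / n)
    return repay_rate - z * se

THRESHOLD = 0.85  # lend only if confident the repayment rate exceeds this

majority = lower_bound(0.90, n=10_000)  # ~0.894: narrow interval
minority = lower_bound(0.90, n=25)      # ~0.782: wide interval

print(majority > THRESHOLD)   # True  -> well-represented group is served
print(minority > THRESHOLD)   # False -> under-represented group is rejected
```

Nothing in this rule mentions group membership; the disparate outcome arises purely from how much data each group contributed, which is why such bias is easy to embed and hard to see.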
One of the major classes of risks and restrictions associated with the
development of financial AI is bias in data and algorithms. Due to the
increasing use of AI in the financial sector, it is imperative that
policymakers, legislators, and other important stakeholders be vigilant
against the possible damages that might result from data and algorithmic
516
Martin Cihak and Ratna Sahay, ‘Finance and Inequality’ (IMF, 17 January 2020)
<https://www.imf.org/en/Publications/Staff-Discussion-Notes/Issues/2020/01/16/Finance-
and-Inequality-45129> accessed 10 January 2023.
517
V Eubanks, Automating Inequality (St Martin’s Press 2018).
bias. Serious efforts have been made in recent years to reduce
algorithmic bias in the financial sector and beyond. 518 It is crucial that the
promise of creativity, impartiality, and objectivity not be used as a cover for
the perpetuation of long-standing biases in the context of today and
tomorrow.519
The proliferation of cyber threats and cyber conflicts in the financial sector is
another important class of hazards and constraints brought on by the
development of financial AI. The financial sector is becoming increasingly
susceptible to cyber attacks due to its increasing dependence on technology,
which is reflected in the rise of financial AI. IBM research published in 2019
indicated that the financial and insurance sectors were the most vulnerable to
cyber attacks.520 The financial sector is rapidly becoming a high-tech
business, making it susceptible to the same kinds of cyber threats as the rest
of the technology industry.521
There are both external and internal cyber threats to the financial industry.
First, when it comes to virtual threats from the outside, foreign nation-states,
competitors, terrorist groups, organised cyber criminals, and cyber
combatants must be watched closely by financial firms and regulatory
518
‘Algorithmic Justice League – Unmasking AI Harms and Biases’ (AJL)
<https://www.ajl.org/> accessed 10 January 2023.
519
DK Citron and FA Pasquale, ‘The Scored Society: Due Process for Automated
Predictions’ (2014) 89 Washington Law Review
<https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2376209> accessed 8 January 2023.
520
‘IBM Security X-Force Threat Intelligence Index’ (IBM, 2023)
<https://www.ibm.com/security/data-breach/threat-intelligence> accessed 5 January 2023.
521
DB Hollis, ‘Why States Need an International Law for Information Operations’ (2008)
11 Lewis & Clark Law Review 1023
<https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1083889> accessed 5 January 2023.
agencies. 522 In just the last ten years, the financial world has had to deal with
a wide range of external threats from both state and non-state actors. Some of
these actors want to make money, while others just want to create mayhem
by stealing billions of dollars, getting crucial data, and causing major
problems. 523
Second, in addition to the external dangers they face, financial institutions
and their regulators must keep an eye out for internal risks such as
disgruntled workers, corporate spies, and misdirected outside contractors.524 Although
such internal dangers have long been present in the financial sector, their
impact has been amplified due to the sector’s substantial dependence on
technology such as artificial intelligence. A rogue internal threat may be one
of the most significant threats to the financial industry in today’s
instantaneous marketplace.525 The financial sector faces growing and
substantial dangers related to virtual and other technology-oriented hazards
as it increasingly resembles the technology sector through increased usage of
artificial intelligence.
Systemic risk and big financial mishaps are more likely to occur as a result
of the proliferation of financial artificial intelligence and other similar forms
of financial technology.526 In the field of finance, an increasing dependence
522
M Bowden, Worm (Atlantic Monthly Press 2012).
523
‘APT28: A Window into Russia’s Cyber Espionage Operations Report’ (Fire Eye)
<https://www2.fireeye.com/CON-ACQ-RPT-APT28_LP.html> accessed 5 January 2023.
524
Chabinsky and Archives (n 491).
525
D Lawrence, ‘Companies Are Tracking Employees to Nab Traitors’ (Bloomberg, 12
March 2015) <https://www.bloomberg.com/news/articles/2015-03-12/companies-are-
tracking-employees-to-nab-traitors> accessed 5 January 2023.
526
‘Not so Social: High-Frequency Trading: Twitter Speaks, Markets Listen, and Fears
Rise’ (Indian Express, 30 April 2013) <http://archive.indianexpress.com/news/not-so-social-
highfrequency-trading-twitter-speaks-markets-listen-and-fears-rise/1109483/> accessed 8
January 2023.
527
‘Wall Street and the Financial Crisis: Anatomy of a Financial Collapse’ (US Senate,
2011).
on technology may increase the frequency of “normal financial accidents” as the use of financial AI becomes
more widespread.528
Both the New York Stock Exchange and the Nasdaq, the two largest stock
exchanges in the United States, have had major breakdowns in recent years,
causing the temporary suspension of hundreds of billions of dollars’ worth of
trade for many hours during otherwise typical trading sessions. 529
528
M Schneiberg and T Bartley, ‘Regulating or Redesigning Finance? Market Architectures,
Normal Accidents, and Dilemmas of Regulatory Reform’ (2010) 30 Markets on Trial: The
Economic Sociology of the U.S. Financial Crisis: Part A
<https://www.emerald.com/insight/content/doi/10.1108/S0733-
558X(2010)000030A013/full/html> accessed 7 January 2023.
529
ES Browning and Scott Patterson, ‘Market Size + Complex Systems = More
Glitches’ (WSJ, 22 August 2013)
<http://online.wsj.com/article/SB10001424127887323980604579029342001534148.html>
accessed 2 January 2023.
530
J Hasbrouck and G Saar, ‘Low-Latency Trading’ [2011] Johnson School Research Paper
<http://dx.doi.org/10.2139/ssrn.1695460> accessed 2 January 2023.
aspect in the future of finance are three important consequences that stand
out as highly significant.
One of the biggest problems facing the financial sector in the near future is
cyber security. The complexity of the financial system and the security
threats it faces are exacerbated by the fact that the underlying technology
infrastructure is mostly privately held and managed by a wide variety of
financial intermediaries.531 Private financial firms control a large portion of
the United States’ technological and cyber infrastructure, which could make
it difficult to take timely, coordinated, and security-enhancing actions,
especially if companies prioritize short-term profits and other factors, such as
secrecy, over financial cyber security. 532 While it may make sense for certain
financial institutions to delay investing in cyber security in the short term,
doing so might increase cyber security risks for the whole sector.533 Due to
the integrated and intermediated structure of contemporary finance, a
company’s vendors and counterparties also require robust financial cyber
security to protect against market volatility.
To better meet the challenge posed by cyber security in the financial sector,
the financial industry is anticipated to see increased investment in this area,
as well as a larger push for improved cooperation between private and
governmental players. To begin with,
531
K Eichensehr, ‘The Cyber-Law of Nations’ [2014] Georgetown Law Journal
<https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2447683> accessed 2 January 2023.
532
S Baker and S Waterman, ‘In the Crossfire: Critical Infrastructure in the Age of Cyber
War’ (Cyberwar Resources Guide, 1 January 2010) <https://www.projectcyw-
d.org/resources/items/show/158> accessed 1 January 2023.
533
DE Bambauer, ‘Ghost in the Network’ (2014) 162(5) University of Pennsylvania Law
Review 1011 <https://scholarship.law.upenn.edu/penn_law_review/vol162/iss5/1> accessed
17 December 2022.
it is expected that, in the next few years, financial institutions will increase
their spending on cyber security.
534
S Harris, @War: The Rise of the Military-Internet Complex (Houghton Mifflin Harcourt
2015).
535
FJ Garcia, ‘Globalization, Inequality & International Economic Law’ (MDPI, 26 April
2017) <https://www.mdpi.com/2077-1444/8/5/78> accessed 15 December 2023.
In any case, several factors should be considered: the new economic and
political dynamics shaping our increasingly globalized environment; the
diversity of the underlying cultures and related attributes; the disparities in
legal systems and approaches throughout the world; the sheer scale of the
changes occurring within Latin America, East Asia, Central and Eastern
Europe, and Southern Africa in particular; the ongoing need for feasible
financial sector law reforms; and the increasing significance of
globalisation. None of this lessens the legal significance and relevance of
developing new international “road rules” for financial institutions and
banking institutions (whether private, public, or intergovernmental in
character) in the context of the global 21st century.
536
B Alex and LR Pierre, ‘International Financial Centres, Global Finance and Financial
Development in the Southern Africa Development Community (SADC)’ (2017) 9(7) Journal
of Economics and International Finance 68 <http://dx.doi.org/10.5897/jeif2017.0849>
accessed 5 January 2023.
(c) The Implications of Compliance and Technology
Since regulation and new financial technology are both on the rise at the
same time, many financial institutions will merge their compliance and
technology departments to better serve the needs of today’s financial sector.
Many financial institutions have previously seen the benefits of leveraging
the capabilities of modern information technology in their trading, investing,
research, and other business-side activities, but they are now realising that
the same is true for their regulatory operations. As a result of increased
demand from regulators and managers, compliance will increasingly rely on
cutting-edge IT infrastructure. Financial institutions in the modern period
operate in an extremely fluid and intricate market and regulatory setting. 537 It
seems certain that, if not already the case, in the near future, a strong IT
infrastructure will be equated with a strong compliance system in the
financial sector. And the tech-savvy compliance officer will rise to
prominence as a crucial part of the 21st-century financial system’s web of
life.
537
Manuel A. Utset, ‘Complex Financial Institutions and Systemic Risk’ (2011) 45 Georgia
Law Review <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1810144> accessed 5
January 2023.
538
J Barrat, Our Final Invention (Saint Martin’s Griffin 2015).
When it comes to legal and compliance activities in the financial sector, the
real battles of the future will not be between people and AI, but rather
between humans and other humans. 542 Rather than wondering how smart
machines will replace lawyers and compliance officers, the future of security
and regulatory responsibilities in the financial industry should focus on how
lawyers and compliance officers can use smart machines to build more
lawful, compliant, and lucrative institutions.
539
GA Akerlof and RJ Shiller, Animal Spirits: How Human Psychology Drives the
Economy, and Why It Matters for Global Capitalism (Princeton University Press 2009).
540
A Saunders and L Allen, Credit Risk Management in and Out of the Financial Crisis
(2010).
541
S Baker, Final Jeopardy: Man vs. Machine and the Quest to Know Everything (Houghton
Mifflin Harcourt 2011).
542
Barry Ritholtz, ‘Trading under the Influence of Emotion’ (Bloomberg, 3 December 2015)
<https://www.bloomberg.com/opinion/articles/2015-12-03/trading-based-on-emotion-is-
disastrous-for-investors> accessed 2 January 2023.
VII. Conclusion
The financial industry is expected to see a further uptick in the use of AI and
ML technologies. Accelerating advancements in processing power, data
storage, and big data, as well as noteworthy gains in modelling and use-case
adaptations, are all major factors fuelling this trend. The COVID-19
pandemic hastened the transition to a cashless society and the
widespread use of digital financial services, both of which boost the allure of
AI/ML systems among financial service providers. AI/ML will deliver
benefits but pose financial policy problems. AI systems give financial
institutions cost savings, efficiency benefits, new markets, and improved risk
management; they deliver customers new experiences, products, and cheaper
costs; and offer strong tools for regulatory compliance and prudential
monitoring. These systems also raise ethical problems and new hazards to
the integrity and safety of the financial system, the full scope of which is not
yet known.
Financial sector officials face a difficult problem since these innovations are
continually developing as new technologies emerge. These advances require
better supervision, monitoring mechanisms, and active interaction with
stakeholders to detect hazards and implement regulatory measures.
The Virtues and Vices of Digital Cloning and its Legal Implications
Abstract
Within a short period, AI has established itself as a significant area
that is transforming every walk of life. Most sectors, including trade
and commerce, are impacted by its influence. Consumers are the key to
trade and commerce. An effective, efficient, and balanced
implementation of AI in commerce and trade is the sine qua non for
ensuring better promotion and protection of the rights of consumers.
However, the regulations necessary for the effective implementation of
AI in India seem to have been neglected by policymakers over time.
Furthermore, the vexing issue concerning AI is that it is very difficult
to characterise, at least comprehensively. In this scenario, when there
are no specific regulating norms related to AI, consumers are bound to
suffer. This chapter makes an effort to analyse the aforementioned
issues. The foremost issue deals with the concern of whether the
provisions of consumer protection need to be strengthened after the
implementation of AI. The author will try to establish that the current
consumer protection laws barely address issues related to AI and hence,
exhaustive provisions related to AI are the
need of the hour. Another important aspect upon which this chapter
focuses is the right of the consumer to compensation in the event that
their data is infringed. The other issue pertains to the liability that
arises out of the damages caused by the activities performed within the
I. Introduction
543
Robert Walters and Marko Novak, Cyber Security, Artificial Intelligence, Data
Protection and the law (1st edn, Springer Nature Singapore Pte Ltd. 2021).
544
Robert Mazzolin, ‘Artificial Intelligence and Keeping Humans “in the loop”’ (2020)
Modern Conflict and Artificial Intelligence <https://www.jstor.org/stable/resrep27510.10>
accessed 12 January 2023.
545
McCarthy (n 4).
546
Jim Goodnight, ‘Artificial Intelligence’ (SAS Insights)
<https://www.sas.com/en_in/insights/analytics/what-is-artificial-intelligence.html> accessed
10 January 2023.
547
Dom Galeon, ‘The World’s First Album Composed and Produced by an AI Has Been
Unveiled’ (Futurism, 21 August 2017) <https://futurism.com/the-worlds-first-album-
composed-and-produced-by-an-ai-has-been-unveiled> accessed 11 January 2023.
548
Consumer Protection Act 2019, s 2(7).
549
Sale of Goods Act 1930, s 41(2).
problems faster, error-free, and easy, it is also true that it comes with certain
complications. To what extent society must adapt to technological
innovations has to be based on the needs of that society, be they economic or
social. 550 Since the process is complicated, it is very likely that a
considerable group of people, especially those who are not tech-savvy,
would find themselves in a disadvantaged position. Companies can exploit
such unawareness to the detriment of consumers.
550
Robert van den Hoven van Genderen, ‘Do We Need Legal Personhood in the Age of
Robots and AI?’, in Marcelo Corrales and Mark Fenwick (eds) Robotics (Springer Nature
Singapore Pte Ltd. 2018).
551
Thomas H. Davenport and Randy Bean, ‘Companies Are Making Serious Money with
AI’ (MIT Solan Management Review, 17 February 2022)
<https://sloanreview.mit.edu/article/companies-are-making-serious-money-with-
ai/#:~:text=A%202021%20McKinsey%20global%20survey,AI's%20economic%20return%2
0is%20growing> accessed 13 January 2023.
552
Commission, ‘Directorate-General for Communications Networks, Content and
Technology, Regulation of the European Parliament and of the Council Laying Down
Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) And Amending
Certain Union Legislative Acts’ COM(2021) 206 final.
553
Bhavya Dilip Kumar and Himanshi Lohchab, ‘Low Smartphone Reach Coupled with
Lack of Digital Literacy Hit Rural India Covid Vaccine Drive’ (The Economic Times, 16
May 2021) <https://ecoti.in/DGnN1b53> accessed 12 January 2023.
Government schemes and the like reach the rural population much later; even a simple Act takes time to be implemented there. The application of AI is a far more complex undertaking, demanding greater technicality and specialisation, and it would not be justified to expect such expertise from the rural population immediately. Women consumers, in particular, are disadvantaged where they lack financial independence, education, and awareness. This does not mean, however, that implementing an AI regime in rural areas is impossible; it simply requires greater attention and deliberation. No scheme can be held back in perpetuity on account of the underdevelopment of rural areas. Progress should instead be gradual, accompanied by efforts to familiarise rural people with the technicalities of AI. The holistic development of rural India depends considerably on such measures.
VI. Implications of AI
To simulate human behaviour, AI extracts useful information from consumers’ past behaviour, and businesses use this information at the cost of the data privacy and security of the users. AI can be an important asset to businesses and to the economy of the country. However, it can also become a liability for legislators and companies when data privacy is breached or AI regulation falls short. Even though privacy and data protection are evolving areas of law and economic development, they have not matured as means of redressing economic and personal harm comparable to the protection of intellectual property, copyright, criminal procedure, and international trade law. 554 In a landmark decision of 2017, the Supreme Court held that the right to privacy is an inalienable and intrinsic part of the right to life enshrined under Article 21 of
554
Robert Walters and Marko Novak, Cyber Security, Artificial Intelligence, Data Protection and the Law (Springer 2021) 90-91.
the Constitution of India. 555 Although India does not have a dedicated data protection law, the personal information of consumers is protected under Sections 43A and 72 of the Information Technology Act, 2000. Still, these provisions are not sufficient to protect the privacy of consumers.
The challenge before the legislature is to pass privacy laws that can protect consumers from the adverse effects of AI without compromising its efficient implementation. The onus is on the government to enact privacy laws specifically addressed to consumer protection. For companies, the major challenge is to use AI as an asset while prioritising the data privacy and security of consumers. Companies must also work to build consumer confidence in AI, even though, where privacy is concerned, they already stand at a disadvantage in consumers’ eyes.
Privacy is a complex topic; it is not static but a dynamic concept. This can be inferred from a Kerala High Court judgment that extended the ambit of privacy by holding that the right to be forgotten is an intrinsic part of the right to privacy. 556 Hence, the right to be forgotten must inevitably be considered when deciding AI-related disputes. Data can be disguised through anonymisation or pseudonymisation techniques to avoid storing identifiable data, thereby sidestepping the right to be forgotten. However, there is no practical way to retrieve or erase data once it has been fed to a machine-learning system, and such systems often operate in ways we do not understand. As a result, the right to be forgotten is difficult to apply to AI.
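The limits of such disguising can be made concrete in a few lines of code. This is an illustrative sketch only; the function name, salting scheme, and e-mail identifier are assumptions for demonstration, not any system’s actual implementation:

```python
import hashlib
import secrets

def pseudonymise(identifier: str, salt: bytes) -> str:
    """Replace a direct identifier with a salted hash.

    The record no longer carries the identifier in the clear, but
    anyone holding the salt can re-link it to the person -- which is
    why pseudonymised data is still treated as personal data, and why
    this technique alone does not extinguish the right to be forgotten.
    """
    return hashlib.sha256(salt + identifier.encode("utf-8")).hexdigest()

salt = secrets.token_bytes(16)
record = {"user": pseudonymise("alice@example.com", salt), "purchase": "book"}

# Re-identification remains possible for whoever keeps the salt:
assert record["user"] == pseudonymise("alice@example.com", salt)
```

Destroying the salt moves the data closer to anonymisation, but, as noted above, once data has already been fed into a trained model there is no comparably simple way to erase its influence.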
555
Constitution of India 1950, art 21.
556
Justice K.S. Puttaswamy (Retd) v Union of India [2017] 10 SCC 1.
Usually, consumers do not know that product information, prices, and contract terms are designed specifically for them according to their profile. Sometimes they receive unfavourable terms without knowing how they were arrived at. This happens because of specific features of AI such as the “black box effect” (a metaphor for AI systems that produce outputs without revealing how they work internally), complexity, and
557
European Commission, ‘White Paper on Artificial Intelligence: A European Approach to Excellence and Trust’ COM(2020) 65 final, 19 February 2020.
the person who created it? The same is said in the European Union’s report
on AI –
“In general, EU legislation on product safety allocates the
responsibility to the producer of the product placed on the
market, including all components e.g., AI systems. But the rules
can for example become unclear if AI is added after the product
is placed on the market by a party that is not the producer. In
addition, EU Product Liability legislation provides for the
liability of producers and leaves national liability rules to
govern the liability of others in the supply chain.”558
The Product Liability Directive 559 makes the producer strictly liable in the event that a consumer incurs damage due to their product. The injured party needs to establish a causal link between the defect in the product and the injury caused to them, which is difficult to prove in AI transactions.
558
ibid.
559
Council Directive 85/374/EEC of 25 July 1985 on liability for defective products [1985] OJ L210/29.
An important issue arises when the producer plays no role in the cause of action. Artificial intelligence now creates creative content with minimal human assistance, yet society regards creativity and rationality as human attributes, and the legislation reflects the same, especially in matters relating to copyright. The laws are silent on who holds the copyright in a creation of AI – the AI itself, or the person who created it? This exposes the gaps in the law that stand in the way of a remedy for damage caused by AI, where even the ownership of copyright is uncertain.
Properly framed norms are required for businesses that use AI, so that the risk to consumers and society can be minimised without jeopardising the potential advantages. The law must be formed with its impact on consumers in mind: protection of their privacy, equality in information sharing, security from cyber attacks, and liability in the case of damage caused by AI. Since AI is a complex and technical field, enforcement of the law must likewise operate at a technical level, through a body of highly specialised technical experts. Consumers must be protected from the unfair discrimination and differentiation caused by AI. Transparency should be maintained about how a machine decides, with the intention of enhancing trust in AI transactions. Consumers must know what personal information of theirs is being shared through AI. Those at the forefront of the AI revolution hold great power, which in turn carries many responsibilities: they are required to make people understand AI’s technicalities in the simplest possible way, and to ensure that the information consumers receive through AI is reliable.
It would be good to have certain activities listed in the Act itself for which the use of AI is prohibited. Some AI systems are considered unacceptable, contrary to moral values, and in violation of fundamental rights; there must be provisions prohibiting such systems. Furthermore, some AI technologies could
The Virtues and Vices of Digital Cloning and its Legal Implications
Abstract
Digital cloning is an Artificial Intelligence (AI) technology that has
seen a prominent rise in use in recent years. The Massachusetts
Institute of Technology (MIT) dedicated a podcast episode to the new
technological advancement that allowed humans to talk with people
who had died. Thus, the promise and potential of this technology cannot be overstated. In this chapter, the author shall critically
examine the concept of digital cloning, its meaning, and its various
types, such as Audio and Visual cloning and Mind cloning. Apart from
the fundamentals, the chapter shall address the issue in two spectrums
– criminal and civil. For the criminal spectrum, the author shall give a
preferential focus to the tremendous rise of the use of deepfake
technologies for crimes such as theft and pornography. In the civil
spectrum, the focus shall be placed on several new digital cloning
technologies used in several sectors such as education, entertainment,
and healthcare. Finally, the author shall consider the legal aspects
surrounding digital cloning. With the concept of digital cloning being
considerably new, the lack of regulation can become a severe problem
and lead to several violations. Apart from analysing existing or newly-
founded criminal laws, the author shall consider concerns such as data
privacy and copyright issues. As a result of this comprehensive study,
the author shall determine whether digital cloning can be practically
applied successfully in its current state.
I. Introduction
561
Charlotte Jee, ‘Technology That Lets Us ‘Speak’ to Our Dead Relatives Has Arrived. Are
We Ready?’ (MIT Technology Review, 19 October 2022)
<https://www.technologyreview.com/2022/10/18/1061320/digital-clones-of-dead-people/>
accessed 18 November 2022.
562
Jon Truby and Rafael Brown, ‘Human Digital Thought Clones: The Holy Grail of
Artificial Intelligence for Big Data’ (2020) 30(2) Information & Communications
Technology Law 140 <https://doi.org/10.1080/13600834.2020.1850174> accessed 19
November 2022.
563
ibid.
Audio and visual cloning refers to the direct manipulation of audio and
visuals. Such cloning can help create fake images, videos, audio, and
avatars.565 AV cloning forms a crucial part of the unfortunate rise of “Deepfakes”, which the author shall discuss in detail later in the chapter. In audio cloning, Deepfakes do not even require an entire audio file: a sound clip of 3.7 seconds is enough for “Baidu” to clone a human voice.566
564
Shoshana Zuboff, ‘You Are Now Remotely Controlled’ (The New York Times, 24
January 2020) <https://www.nytimes.com/2020/01/24/opinion/sunday/surveillance-
capitalism.html> accessed 19 November 2022.
565
ibid.
When we consider visual cloning, the Mark Zuckerberg Deepfake will serve
as a perfect example.568 In the fake video, Zuckerberg states, “Imagine this
for a second: One man, with total control of billions of people’s stolen data,
all their secrets, their lives, their futures.” Zuckerberg had previously shown a hypocritical attitude towards Deepfake videos of other people. In May 2019, a Deepfake video of House Speaker Nancy Pelosi showed her slurring her speech.569 While YouTube removed the video, Facebook refused to do so. Only after a Deepfake of Zuckerberg himself appeared did Facebook change its policy ahead of the 2020 elections.570 The author believes that such issues arise from the lack of awareness of, and consequences for, AV cloning. Zuckerberg’s change of attitude came too late, as the damage could not be reversed.
566
Sercan O Arik, Jitong Chen, Kainan Peng, Wei Peng and Yanqi Zhou, ‘Neural Voice
Cloning with a Few Samples’ (32nd Conference on Neural Information Processing Systems,
October 2018).
567
Jesse Damiani, ‘A Voice Deepfake Was Used to Scam a CEO out of $243,000’ (Forbes,
3 September 2019) <https://www.forbes.com/sites/jessedamiani/2019/09/03/a-voice-
deepfake-was-used-to-scam-a-ceo-out-of-243000/?sh=76a2dfd22416> accessed 18
November 2022.
568
Rachel Metz and Donie O’Sullivan, ‘A Deepfake Video of Mark Zuckerberg Presents a
New Challenge for Facebook’ (CNN, 12 June 2019)
<https://edition.cnn.com/2019/06/11/tech/zuckerberg-deepfake/index.html> accessed 20
November 2022.
569
Donie O’Sullivan, ‘Doctored Videos Shared to Make Pelosi Sound Drunk Viewed
Millions of Times on Social Media’ (CNN, 24 May 2019)
<https://edition.cnn.com/2019/05/23/politics/doctored-video-pelosi/index.html> accessed 20
November 2022.
570
Hadas Gold, ‘Facebook Tries to Curb Deepfake Videos as 2020 Election Heats up’
(CNN, 7 January 2020) <https://edition.cnn.com/2020/01/07/tech/facebook-deepfake-video-
policy/index.html> accessed 18 November 2022.
571
Truby and Brown (n 562) 143.
572
ibid.
573
Natalie O’Neill, ‘Companies Want to Replicate Your Dead Loved Ones with Robot
Clones’ (VICE, 16 March 2016) <https://www.vice.com/en/article/pgkgby/companies-want-
to-replicate-your-dead-loved-ones-with-robot-clones> accessed 19 November 2022.
information about their family members. The movement has stated that it plans to start commercialisation within 10 to 20 years. 574
A critique of HereAfter AI
Besides Terasem, mind cloning aimed at resurrecting dead family members is also offered by HereAfter AI, a Los Angeles-based technology company. In her article, Charlotte Jee recounts her experience of using the services HereAfter AI provides. She notes that although her parents are alive and well, she could fall under the illusion that her digital parents were real while talking with their digital avatars.575 The company’s technology, then, is evidently quite effective.
The company states its objective as “preserving a person’s stories and voice forever.” Photo albums and life-story books are non-interactive; this technology intends to make memories interactive. The author’s concerns regarding HereAfter AI revolve around its privacy policy. The question of whether the company would sell user data to third parties appears on its FAQ page, but instead of discussing compliance with privacy norms such as the California Consumer Privacy Act (CCPA) of 2018 or the GDPR principles, the company answers only, “Nope, never”. The website contains no privacy policy that users can read before opting for its services. Despite such grave data privacy concerns, a lack of public awareness continues to attract many people to the products. As Charlotte Jee noted, if an individual’s loved ones could be
574
ibid.
575
Jee (n 561).
connected with them again after their death, would they think it was wrong
to try?576
When we examine the categories of questions the company asks of those using its product – ‘Advice,’ ‘Ancestry,’ ‘Celebrations,’ ‘Childhood,’ ‘Children,’ ‘College,’ ‘Dating,’ ‘Feelings,’ ‘Friends’ and many more – we can see how each category can contain sensitive information about a person. Users are asked to answer at least six questions of a highly personal nature. If data privacy problems persist even where a privacy policy exists, the lack of one creates a severe problem for the users of this product. We must therefore ask how the company can operate without a proper privacy policy page on its website.
576
ibid.
in which users can address any future issues even before purchasing a
service.
There are three types of user profiles within consumer behaviour cloning. Explicit User Profiles are created from surveys and ratings; this type is commonly seen on websites like Amazon and Flipkart. Implicit User Profiles are created by systems from users’ digital footprints; this method is more problematic, as it can lead to privacy and data protection violations. Hybrid User Profiles are simply a combination of Explicit and Implicit User Profiles.
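The three profile types described above can be sketched as follows. All names, categories, and weightings here are illustrative assumptions, not any platform’s actual implementation:

```python
# A minimal sketch of the three user profile types described above.

def explicit_profile(survey_ratings: dict) -> dict:
    """Built from what the user deliberately discloses (surveys, ratings)."""
    return dict(survey_ratings)

def implicit_profile(clickstream: list) -> dict:
    """Inferred from the digital footprint the user leaves behind --
    the type most likely to raise privacy and data protection concerns."""
    profile = {}
    for category in clickstream:
        profile[category] = profile.get(category, 0) + 1
    return profile

def hybrid_profile(explicit: dict, implicit: dict) -> dict:
    """Combine both sources: explicit signals are taken as-is and
    implicit counts are added on top."""
    combined = dict(explicit)
    for category, count in implicit.items():
        combined[category] = combined.get(category, 0) + count
    return combined

ratings = {"books": 5, "electronics": 2}     # what the user says
clicks = ["books", "books", "fashion"]       # what the user does
print(hybrid_profile(explicit_profile(ratings), implicit_profile(clicks)))
# {'books': 7, 'electronics': 2, 'fashion': 1}
```

Note that the user consents only to the first input; the second is harvested silently, which is precisely the legal difficulty with implicit profiling.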
Along with the types, there are three methods to create user profiles. The
Content-based method refers to creating user profiles using past behaviour in
577
Unfold Labs, ‘AI Driven Personalization’ (Medium, 11 June 2019)
<https://unfoldlabs.medium.com/ai-driven-personalization-6dc9c47c1418> accessed 18
November 2022.
578
ibid.
The information collected to this end could include the time taken to compare the price of a product, political content read over a period of time, location history from an individual’s phone, and other such sensitive data. Digital thought cloning not only allows the accumulation of general data but can also collect data such as likes, pages followed, and comments.583
579
ibid.
580
Jon and Rafeal (n 562).
581
‘Digital Thought Clones Manipulate Real-Time Online Behavior’ (Help Net Security, 7
December 2020) <https://www.helpnetsecurity.com/2020/12/07/digital-thought-clones/>
accessed 6 January 2023.
582
Truby and Brown (n 562) 145.
583
ibid.
The professors further share a table showing how easily users’ sensitive attributes can be predicted from their likes on Facebook. The likes reveal, among other things, whether people are single or in a relationship, whether they became parents at 21, whether they smoke cigarettes, drink alcohol, or use drugs, as well as their race, political affiliation, and sexual preferences. 584
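The inference mechanism the professors describe can be caricatured in a few lines. The pages, traits, and evidence weights below are invented for illustration; real systems fit such weights statistically over millions of users:

```python
# Toy illustration of trait inference from page likes.
# All likes, traits, and weights here are invented assumptions.

TRAIT_EVIDENCE = {
    "late-night gaming page": {"single": 0.6},
    "parenting tips page": {"parent": 0.8},
    "craft beer page": {"drinks_alcohol": 0.7},
}

def infer_traits(likes, threshold=0.5):
    """Sum the evidence each like contributes and report the traits
    whose accumulated score crosses the threshold."""
    scores = {}
    for like in likes:
        for trait, weight in TRAIT_EVIDENCE.get(like, {}).items():
            scores[trait] = scores.get(trait, 0.0) + weight
    return {trait for trait, score in scores.items() if score >= threshold}

print(sorted(infer_traits(["craft beer page", "late-night gaming page"])))
# ['drinks_alcohol', 'single']
```

The legal concern is visible even in this caricature: no single like is sensitive on its own, yet their aggregation yields inferences the user never disclosed.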
The critical problem with digital thought cloning is the lack of any wider literature on the concept. The professors note that the concept is novel and that the term is one they coined. Significantly, digital thought cloning remains a theoretical concept with no practical technology yet in use.585 Its true practical implications therefore cannot be fully ascertained at present. Nevertheless, the professors’ study makes clear that if the theoretical notion ever comes to fruition, the lack of norms governing it will lead to massive exploitation. The Facebook-Cambridge Analytica scandal stands as proof of what such technology can do if left unmonitored. Pre-emptive and prophylactic measures are therefore imperative if any digital thought cloning technology that may arise in the future is to be used safely.
584
ibid 146.
585
ibid 150.
586
J Kietzmann et al, ‘Deepfakes: Trick or Treat?’ (2020) 63(2) Business Horizons 135
<https://doi.org/10.1016/j.bushor.2019.11.006> accessed 7 January 2023.
587
Kelly M Sayler and Laurie A Harris, ‘Deep Fakes and National Security’ CRS Report
IF11333.
588
ibid.
589
Anne Pechenik Gieseke, ‘“The New Weapon of Choice”: Law’s Current Inability to
Properly Address Deepfake Pornography’ (2020) 73(5) Vanderbilt Law Review 1479
<https://scholarship.law.vanderbilt.edu/vlr/vol73/iss5/4> accessed 7 January 2023.
590
ibid.
591
Samantha Cole, ‘AI-Assisted Fake Porn Is Here and We're All Fucked’ (VICE, 11
December 2017) <https://www.vice.com/en/article/gydydm/gal-gadot-fake-ai-porn>
accessed 10 January 2023.
592
California Civil Code, s 1708.86.
593
Georgia Code, s 16-11-90.
594
Virginia Code, s 18.2-386.2.
595
New York Civil Rights Law, s 52-C.
Similar to California, the State of New York has a civil law concerning Deepfake pornography. Section 52-C of the New York Civil Rights Law provides a private right of action for the unlawful dissemination or publication of a sexually explicit depiction of an individual. Like the California Civil Code, Section 52-C addresses altered depictions and digitisation and imposes civil liability where there is malicious intent. The provision also imposes a limitation period of three years from the date of publication of the sexually explicit material, or one year from the date the affected party discovered it.
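The two alternative limitation periods in Section 52-C can be illustrated with simple date arithmetic. The dates below are invented, and which alternative governs in a given case is a question of statutory interpretation that this sketch does not attempt to answer:

```python
from datetime import date, timedelta

def limitation_deadlines(published: date, discovered: date) -> dict:
    """Compute the two candidate deadlines described in Section 52-C:
    three years from publication, or one year from discovery.
    (timedelta has no 'years' unit, so 365-day years are an
    approximation for illustration only.)
    """
    return {
        "three_years_from_publication": published + timedelta(days=3 * 365),
        "one_year_from_discovery": discovered + timedelta(days=365),
    }

deadlines = limitation_deadlines(date(2021, 1, 1), date(2023, 6, 1))
print(deadlines["three_years_from_publication"])  # 2024-01-01
print(deadlines["one_year_from_discovery"])       # 2024-05-31
```

Note how late discovery can extend the window well past three years from publication, which matters for Deepfakes that circulate privately before the victim learns of them.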
However, the States of Virginia and Georgia go a step further and impose criminal liability on offenders. The Virginia Code’s provision on the unlawful dissemination or sale of images of another makes the sale of videos or still images created through any means with malicious intent punishable as a Class 1 misdemeanour. Because the provision is worded broadly, Deepfake pornography also falls within its ambit. Similarly, the Georgia Code’s prohibition on nude or sexually explicit electronic transmissions provides that a person who knowingly transmits sexually explicit content of another person without their consent, through electronic or any other means
V. Conclusion
In the current scenario, the author believes that digital cloning cannot become practical in its present form and stage. While the technology offers certain benefits to society, they cannot at present outweigh its drawbacks. The number of risks and possible violations, without any legitimate accountability, makes it untrustworthy. If the Deepfakes Accountability Act is passed in the US, it might provide a foundation for the legal jurisprudence on this technology. In a country like India, where digital cloning and Deepfakes have not even been mentioned in current legislative proposals, the possibilities for exploitation are all the more alarming. Although certain companies, such as Soul Machines, use this technology with the aim of improving education, medicine, and even the entertainment industry, without proper laws the cons of digital cloning outweigh the pros. Until such safeguards exist, the technology would be better kept in research and development rather than put to commercial use.
596
M Conteh, ‘So What the Hell Was FN Meka, Anyway?’ (Rolling Stone, 31 August 2022)
<https://www.rollingstone.com/music/music-features/fn-meka-controversy-ai-
1234585293/> accessed 20 November 2022.