Acing the AI: Artificial Intelligence and its Legal Implications

ACING THE AI
ARTIFICIAL INTELLIGENCE AND ITS
LEGAL IMPLICATIONS

JULY 2023
NATIONAL LAW INSTITUTE UNIVERSITY
KERWA DAM ROAD, BHOPAL – 462 044 (M.P.)
Published by National Law Institute University (NLIU),
Kerwa Dam Road, Bhopal - 462044, Madhya Pradesh, India

Copyright © 2023 by National Law Institute University, Bhopal (India)

No part of this publication may be reproduced or distributed in any form or by any
means, electronic, mechanical, photocopying, recording or otherwise, or stored in a
database or retrieval system without the prior written permission of the publishers.
The program listings (if any) may be entered, stored and executed in a computer
system, but they may not be reproduced for publication.

This publication can be exported from India only by the publishers.


National Law Institute University, Bhopal (India)

ISBN: 978-81-957807-9-2

Information contained in this work has been obtained by the Cell for Law and
Technology, National Law Institute University, Bhopal (India), from sources
believed to be reliable and authors believed to be bona fide. However, neither the
publishers nor the editors guarantee the accuracy or completeness of any
information published therein, and neither the publishers nor the editors shall be
responsible for any errors, omissions, or damages arising out of use of this
information. This book is published with the understanding that the publisher,
editors and authors are supplying information but are not attempting to render legal
or other professional services. If such services are required, the assistance of an
appropriate professional should be sought. The views and opinions expressed in this
book are those of the authors only and do not reflect the official policy, position or
opinions of the publishers or the editors. The publishers or the editors shall not be
liable for any copyright infringement, plagiarism or the like committed by the
authors.

Cover Designer: Cell for Law and Technology, National Law Institute University,
Bhopal (India)

Visit us at: www.clt.nliu.ac.in
Write to us at: [email protected]


PATRON
Prof. (Dr.) V. Vijayakumar
Vice Chancellor
National Law Institute University, Bhopal

FACULTY ADVISOR
Prof. (Dr.) Atul Kumar Pandey
Professor of Cyber Law,
Chairperson, Rajiv Gandhi National Cyber Law Centre and Head,
Department of Cyber Law, Bhopal

GENERAL STUDENT BODY OF NLIU CELL FOR LAW AND TECHNOLOGY

EDITOR-IN-CHIEF
Sharqa Tabrez

DEPUTY EDITOR-IN-CHIEF
Saloni Agrawal

MANAGING EDITORS
Nandini Chouhan and Vanshika Jaiswal

EDITORIAL BOARD

Third Year
Bushra Abid Harshali Sulebhavikar Ishika Srivastava
Karman Singh

Second Year
Amit S. Krishnan Divyank Dewan Sakshi Gour
Samrudhi Memane Shreeji Patel Swadha Chandra

First Year
Ali Asghar Kshitij Gondal Pheoli Manvid
Priyanshu Danu Tejbeer Singh

MANAGERIAL BOARD
Third Year
Astha Jain

Second Year
Neha Kumari Rishabh Dwivedi Shivam Nishad

First Year
Muskan Khatri Sarfraz Alam Sharad Khemka
Utkarsh Maheshwari


TABLE OF CONTENTS

MESSAGE FROM THE PATRON ..................................................................... 1
MESSAGE FROM THE FACULTY ADVISOR ................................................ 2
EDITORIAL NOTE ............................................................................................. 3
PROSPECT OF COPYRIGHT PROTECTION FOR AI-GENERATED
WORKS ................................................................................................................ 5
I. INTRODUCTION .......................................................................................... 6
II. ARTIFICIAL INTELLIGENCE ......................................................................... 9
III. THE APPLICABILITY OF COPYRIGHT LAW TO AI-GENERATED WORKS ....... 13
IV. THE RECENT LANDMARK AI RULINGS ...................................................... 16
V. HOW SHOULD AI-GENERATED WORKS BE PROTECTED IN THE FUTURE - WAY FORWARD ......................................................... 22
VI. CONCLUSION ........................................................................................... 27
THE RABBIT-HOLE OF CONTENT MODERATION BY AI ....................... 29
I. INTRODUCTION ........................................................................................ 30
II. THE MODUS OPERANDI OF INTERMEDIARIES ............................................ 32
III. THE INVOLVEMENT OF AI ........................................................................ 35
IV. THE PROBABLE SOLUTION? ...................................................................... 39
A.I.: PERPETUATOR OF RACISM AND COLOURISM .............................. 42
I. INTRODUCTION ........................................................................................ 42
II. CAUSE OF THE BIAS .................................................................................. 44
III. SOLUTIONS .............................................................................................. 44
IV. AI BIAS IN BEAUTY STANDARDS ............................................................... 47
V. AI BIAS IN LAW ENFORCEMENT ................................................................ 49
VI. BIAS IN HATE SPEECH DETECTING ALGORITHMS ....................................... 56
VII. OTHER AREAS OF AI BIAS ......................................................................... 58
VIII. CONCLUSION ........................................................................................... 59
AI IN LAW: NOT A GAMBLE ON MORALITY BUT A FACILITATOR OF
PRECISION ....................................................................................................... 60
I. INTRODUCTION ........................................................................................ 61
II. SCOPE OF AI POWERED “SMART COURTS” IN INDIA ................................. 64
III. CRITICAL ANALYSIS OF THE USE OF AI IN COURTS ................................... 67
IV. CONCLUSION ........................................................................................... 73
V. SUGGESTIONS .......................................................................................... 74
THE TWO-WAY PROTECTIVE REGIME OF INTANGIBLE CULTURAL
HERITAGE IN ARMED CONFLICT .............................................................. 75
I. INTRODUCTION ........................................................................................ 75
II. THE APPLICABILITY OF EXISTING LAWS ON INTANGIBLE ASSETS ............. 78
III. CONCLUSION ........................................................................................... 95
“WHO LET THE DOGS OUT?” – PLACING ACCOUNTABILITY ON
WEAPONIZED AI IN INTERNATIONAL LAW ............................................ 97
I. INTRODUCTION: ....................................................................................... 98
II. OBLIGATIONS UNDER INTERNATIONAL LAW: ......................................... 103
III. CORPORATE RESPONSIBILITY ................................................................. 106
IV. INDIVIDUAL CRIMINAL RESPONSIBILITY ................................................ 107
V. SPLIT RESPONSIBILITY ........................................................................... 111
VI. ACCOUNTABILITY GAP AND VICTIM’S RIGHT TO LEGAL REMEDY ........... 112
VII. CONCLUSION ....................................................................................... 114
ARTIFICIAL INTELLIGENCE AND INTERNATIONAL LAW: ANALYSING AI-TOOLS SIGNIFYING THE SCOPE OF CODIFICATION ........................... 116
I. INTRODUCTION ...................................................................................... 117
II. INITIATING IN INTERNATIONAL LAW ...................................................... 118
III. ANALYZING AI-TOOLS........................................................................... 123
IV. THE SCOPE OF CODIFICATION ................................................................. 129
V. CONCLUSION ......................................................................................... 131
PRODUCT LIABILITY DILEMMAS: DRIVERLESS CARS ...................... 133
I. INTRODUCTION ...................................................................................... 134
II. PRODUCT LIABILITY IN THE CONTEXT OF EUROPEAN LAWS ................... 137
III. LIABILITY OWING TO NEGLIGENCE ........................................................ 140
IV. INDIAN BACKGROUND ........................................................................... 142
V. IMPACT OF AUTONOMOUS CARS IN INDIA .............................................. 143
VI. CONSUMER PROTECTION ACT, 2019....................................................... 145
VII. RESOLVING ‘WHO’ MUST BE LIABLE ......................................................... 149
VIII. AMENDMENTS TO PARALLEL LEGISLATIONS ....................................... 151
IX. CONCLUSION ......................................................................................... 152
LEGAL FORECASTING AND AI BASED JUDGES: A JURISPRUDENTIAL
ANALYSIS USING COMPETITION LAW CASE STUDIES ....................... 154
I. INTRODUCTION ...................................................................................... 155
II. AI-BASED JUDGING AND JUDICIAL DISCRETION ..................................... 157


III. CRITICAL LEGAL STUDIES ANALYSIS: HOW A SYSTEM WITH AI-BASED JUDGES COULD PERPETUATE BIAS AND CREATE HIERARCHIES HIDDEN BENEATH THE NEUTRAL EXTERIOR? ............................................................... 164
IV. MOORE’S INSTITUTIONAL APPROACH TO LAW AND LEGAL FORECASTING .................................................................................. 166
V. CONCLUSION ......................................................................................... 168
EXPLORING THE ROLE OF ARTIFICIAL INTELLIGENCE IN
ARBITRATION: LEGAL VIS-À-VIS TRIBUNAL SECRETARIES............ 171
I. INTRODUCTION ...................................................................................... 172
II. ISSUES IN ARBITRATION DUE TO TRIBUNAL SECRETARIES ....................... 173
III. UNDERSTAND AI AND HOW IT OPERATES ............................................... 177
IV. ARTIFICIAL INTELLIGENCE AND SECRETARIAL TASKS............................. 181
V. AI IN ARBITRATION: AN ASSESSMENT OF PRACTICALITY ....................... 186
VI. CONCLUSION ......................................................................................... 189
THE ROLE OF ARTIFICIAL INTELLIGENCE (AI) IN GLOBAL
FINANCIAL SYSTEM: CHALLENGES AND GOVERNANCE .................. 191
I. INTRODUCTION ...................................................................................... 191
II. MAJOR AI APPLICATIONS IN THE GLOBAL FINANCIAL ECOSYSTEM ......... 194
III. CHALLENGES ARISING FROM AI’S EXPANSION IN THE FINANCE SECTOR .. 198
IV. EXAMPLE OF CHALLENGES ..................................................................... 201
V. LIMITATIONS AND REGULATORY CONCERNS .......................................... 204
VI. IMPLICATION OF ARTIFICIAL INTELLIGENCE IN GLOBAL FINANCE .......... 212
VII. CONCLUSION ............................................................................ 218
AI AND CONSUMER PROTECTION: SAFETY AND LIABILITY
IMPLICATIONS VIS-À-VIS INFORMATION ASYMMETRY ................... 220
I. INTRODUCTION ...................................................................................... 221
II. ARTIFICIAL INTELLIGENCE: HUNT FOR A (MISSING) DEFINITION ............ 222
III. INSEPARABLE FOR CONSUMER: INESCAPABLE PROTECTION ................... 223
IV. AI REGULATION: A SINE QUA NON .......................................................... 224
V. AI IN RURAL AREAS: A STUDY FROM CONSUMER’S PERSPECTIVE .......... 226
VI. IMPLICATIONS OF AI .............................................................................. 227
VII. SOMETHING AT STAKE: CONSUMER PRIVACY? ......................................... 228
VIII. INFORMATION ASYMMETRY: ANOTHER IMPLICATION OF AI ..................... 230
IX. DAMAGES BY AI: WHO WOULD BE LIABLE? ........................................... 231
X. SUGGESTIONS: UNSCRAMBLING SCRAMBLED EGGS ............................... 233
XI. CONCLUDING REMARKS......................................................................... 235
THE VIRTUES AND VICES OF DIGITAL CLONING AND ITS LEGAL
IMPLICATIONS .............................................................................................. 236
I. INTRODUCTION ...................................................................................... 237
II. TYPES OF DIGITAL CLONING .................................................................. 238
III. DEEPFAKE TECHNOLOGY – A DANGEROUS VICE FOR DIGITAL CLONING .. 245
IV. ISSUES AND CONCERNS .......................................................................... 249
V. CONCLUSION ......................................................................................... 250


MESSAGE FROM THE PATRON


I am delighted to release the Book ‘Acing the AI: Artificial Intelligence
and its Legal Implications’. As a patron of this initiative, I am proud of the
hard work and dedication put in by the NLIU Cell for Law and Technology
team to bring this book to fruition.
The increasing use of artificial intelligence in various industries,
including the legal profession, has raised new and complex legal and ethical
questions. This book provides valuable insights and perspectives on the
intersection of AI and law and highlights the importance of continued
research and exploration in this field. It also addresses the legal and ethical
implications of AI, such as data protection, bias and discrimination, and the
role of AI in the judicial process.
I believe this book will serve as an essential resource for legal
professionals, academicians, researchers, and students interested in the field
of AI and law. It will provide them with a comprehensive understanding of
the opportunities and challenges presented by this technology and equip
them with the knowledge and skills needed to navigate the legal and ethical
implications of AI.
I would like to congratulate the team on their hard work and dedication in
bringing this book together. It is a significant contribution to the legal
profession and the broader field of artificial intelligence. I look forward to
seeing the impact this book will have on advancing the understanding of AI
and its relationship with the law. Finally, I would like to thank the authors
for their valuable contributions to this book. Their insights and expertise
have made this book a comprehensive and insightful guide to the intersection
of AI and law.
Prof. (Dr.) V. Vijayakumar


MESSAGE FROM THE FACULTY ADVISOR

I am pleased to present the Book ‘Acing the AI: Artificial Intelligence and its Legal Implications’. This book is the result of the hard work and
dedication of the NLIU Cell for Law and Technology team, including the
editors and the authors. I would like to express my sincere appreciation to
everyone involved in this project for their valuable contributions.

The legal industry has been increasingly using AI, which has led to a
range of intricate legal and ethical challenges. The book provides a detailed
and comprehensive overview of the current legal state of AI, including the
opportunities and challenges that come with it.

I would like to extend my sincere thanks to the Vice Chancellor of NLIU for his constant support and encouragement in making this book a reality. Furthermore, the editorial board deserves appreciation for its dedication and hard work in bringing the book to fruition.

I believe that this book will serve as a valuable resource not only for legal
professionals, researchers, and students, but also for policymakers, business-
leaders, and individuals interested in knowing about the impact of AI on the
legal system and society as a whole. I hope this book will spark further
research and innovation in this field, and help to shape a future where AI is
used ethically and responsibly to advance the legal profession and society.

In conclusion, I would like to thank the NLIU-CLT team, the Vice Chancellor, the editors and the authors for their hard work and dedication in bringing this book to its present form. It is a significant achievement and contribution to the field of AI and law, and I am proud to be associated with it.
Prof. (Dr.) Atul Kumar Pandey


EDITORIAL NOTE

As artificial intelligence technology continues to rapidly evolve, its impact on the legal system has become increasingly significant. From
streamlining contract review to predicting case outcomes, AI is transforming
how legal professionals work and make decisions. This book ‘Acing the AI:
Artificial Intelligence and its Legal Implications’ explores the intersection of
these two fields, providing a comprehensive overview of the latest
developments and applications of AI in the legal domain.

Whether you are a law student, practicing attorney, or tech professional interested in this exciting field, this book offers valuable insights and
perspectives on the challenges and opportunities presented by AI in the legal
system.

From the history and ethics of AI to the practical implications of using AI tools in law firms, this book covers a range of topics that are critical to
understanding how AI is shaping the future of the legal industry. With
contributions from leading experts in the field, this book provides a thorough
and engaging examination of the key issues at the intersection of AI and law.

We hope this book will serve as a valuable resource for anyone interested
in the fascinating and rapidly evolving world of AI and its impact on the
legal system.

Editorial Board


PROSPECT OF COPYRIGHT PROTECTION FOR AI-GENERATED WORKS
Satish Kumar and Akansha Yadav
(Teaching faculty, Jindal Global Law School)

Abstract
This chapter investigates the scope of copyright protection for artistic
works created with artificial intelligence (AI), together with the degree
of creative participation. It also assesses the viability of the current
legal system for defending these works through the lens of
international copyright law. Indian copyright law is modelled on
international copyright conventions such as the Berne Convention and
mainly focuses on the notion of original work by a human author. The
notion of authorship attribution for AI-assisted artistic works that are
autonomously made without human participation throughout the
creative process remains to be established.

Because modern AI applications can create works that appear indistinguishable from those created by human authors, even when the human originality exercised is inconsequential, vague, or convoluted, traditional copyright doctrines appear unfit. Therefore, the use of AI in the creative process should not preclude the availability of copyright protection.

The economic and social ramifications of legal protection for AI-generated works are significant. The emerging creative economy of low-cost AI-generated works deserves serious legislative attention. The default approach at present is to leave AI-generated content in the public domain. This chapter delves into several material questions, such as the
concept of AI and its position as a creator, a broad introduction to
copyright legislation and principles in India, the United Kingdom, the
United States, and Ireland, followed by an assessment of their
relevance to AI-generated works. Currently, AI-generated works are
not protected by copyright law. Nonetheless, this chapter discusses
potential future solutions such as the sui generis rule, the “work made for hire” doctrine in the United States, and the “doctrine of
related rights” in the European Union to ascertain authorship.

In concluding remarks, two questions are posed and answered: what is the scope of legal protection for works generated by an autonomous AI system under US, EU, and Indian copyright law? And, in the future, what options would there be to offer protection to these works, and in whom should or could these rights vest?

I. Introduction

(a) Background

The domain of intellectual property law, and primarily copyright law, aims
to protect works of authorship and bestow exclusive, economic, and moral
rights. Heretofore, it has been presumed that creativity is attributable to human beings alone. This assumption, that technology persists largely independently of the creative process, has permitted a clear and concise legal treatment of issues like authorship and originality in the field of copyright law. Due to the intricate nature of the various human inputs a work created with AI may potentially contain, terms like “AI-generated” and “AI-assisted” have arisen to denote the degree of original human contribution. Nevertheless,
these terms still lack precise definitions. Given the evolution of intellectual property law in today’s technology-driven society, AI has become one of its most contentious areas. 1

Conventional copyright laws safeguard works with originality or levels of creativity attributable to a human author alone. Still, in the current era, where
human creative contribution is largely based on or coupled with AI
assistance to bring out the original work, there is a requirement to better
understand the creative process. Furthermore, the pertinent question that looms over us is whether AI-generated works would be covered by copyright law if they did not meet the traditional requirements, and if they did not, what necessary protections should be made available to such works? There is a need for justifications or adaptations to the current regulatory framework as a result of new technologies that are challenging the IPR status quo and developing more rapidly than ever. 2

Because AI-assisted works are quite often supplied in high volumes and with far less manpower, the protection of such works becomes all the more consequential from an economic standpoint. Additionally, by introducing fresh perspectives and facilitating the use of challenging techniques that would otherwise require significant time and a large workforce to master, they have the potential to extend the heights of human creativity. Exactly how this kind of optimised creativity will influence the creative sector is still uncertain.3 Since the concept of originality is a critical determinant of eligibility for legal protection, sophisticated modern AI models with the ability to independently learn and refine their outputs have equally challenged the notion of authorship. The question of the minimum level of human contribution required in the creative process for an original work to be eligible for copyright protection is of equal importance. This chapter will concentrate on the laws, their practical application to AI-created works “as is,” and potential solutions for authorship and, consequently, the protection of these works.

1 Maria Iglesias, Sharon Shamuilia and Amanda Anderberg, ‘Intellectual Property and Artificial Intelligence – A Literature Review’ [2019] EUR 30017 <https://documents.pub/document/maria-iglesias-sharon-shamuilia-amanda-anderberg-2019-12-17-maria-iglesias-sharon.html?page=1> accessed 10 January 2023.
2 Claudya den Kamp and Dan Hunter, ‘On How Novel Technologies and Objects have Shaped the History of Intellectual Property Rights’ [2019] CUP 1, 1-8.

(b) Points discussed

This chapter explores copyright law to determine whether AI-created works are covered “as is” within it. Further, it examines potential workarounds for the problem of AI-created works that do not meet the requirements for copyright protection. Certain points are discussed in this chapter: first, do works involving the use of AI currently satisfy the criteria for copyright eligibility, and what unique contributions do they contain? Secondly, how effective is the current system at safeguarding works created by autonomous AI systems under copyright laws in distinct jurisdictions? Thirdly, what future remedies would be available to safeguard these works, and to whom should, or could, these rights belong?

3 Matthias Griebel, Christoph Flath and Sascha Friesike, ‘Augmented Creativity: Leveraging Artificial Intelligence for Idea Generation in Creative Sphere’ [2022] Research-in-Progress Papers 77.


II. Artificial Intelligence

(a) Defining artificial intelligence

The term Artificial Intelligence was originally coined and defined by Prof. John McCarthy (also known as the Father of A.I.) in the simplest terms as “the science and engineering of making intelligent machines, especially intelligent computer programs.” 4

It is crucial to comprehend artificial intelligence in the context of copyright law in order to interpret the issue at hand. Artificial Intelligence is not explicitly defined in law; rather, it is referred to as a broad category of technical computer-based systems intended to “simulate human behaviour”. 5

The European Commission’s definition states that “Artificial Intelligence” refers to systems that exhibit intelligent behaviour by analysing their environment and acting with some degree of autonomy to achieve specific goals. AI-based systems that function only in the virtual world include voice assistants, image analysis software, search engines, speech recognition systems, and Internet of Things applications. Advanced robots, autonomous vehicles, drones, and face recognition software are additional examples. 6

4 John McCarthy, ‘What is Artificial Intelligence’ [2007] Stanford University <http://jmc.stanford.edu/articles/whatisai/whatisai.pdf> accessed 10 January 2023.
5 Josef Drexl et al, ‘Technical Aspects of Artificial Intelligence: An Understanding from an Intellectual Property Law Perspective’ (2019) 19-13(3) Max Planck Institute for Innovation & Competition <https://ssrn.com/abstract=3465577> accessed 10 January 2023.
6 Commission, ‘Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions on Artificial Intelligence for Europe, Brussels’ (25 April 2018) COM/2018/237 final.


Another popular definition of artificial intelligence is credited to Ray Kurzweil, who stated in 1990 that “AI is the science of making computers do things that require intelligence when done by humans.” Artificial General Intelligence, or AGI for short, is a type of AI with the potential to independently learn and perform a wide range of behaviours and tasks as a human would. This type of AI remains a concept generally acknowledged to be decades beyond the current state of the art. Instead, contemporary applications of AI fall under the category of “Narrow AI”, where they are trained to perform a specific, controlled task. WIPO also suggests that the paramount dilemmas in the field of IPRs relate to works done autonomously by AI, where no human can be designated as an author under the definitions as they stand today. 7

(b) Artificial intelligence and creative contribution

Boden described creativity as the ability to devise ideas or artefacts that are new, surprising, and valuable. Many people are convinced that one of the profound qualities that make us human is our ability to be creative. However, the evolution of artistic and literary works produced by AI that are quite often indiscernible from those produced by humans has compelled a reappraisal of this conventional paradigm. Miller contends that, eventually, technology may not only be comparable to human creativity but may even transcend it in ways that are currently unfathomable. 8

7 ‘Perhaps in the future possibilities of joint authorships between humans and AI systems become a relevant consideration?’ [2022] WIPO 4.
8 Margaret A. Boden, The Creative Mind: Myths and Mechanisms (2nd edn, Routledge 2004) 7.


Modern AI algorithms may very well be able to produce artistic works that, at a minimum, seem creative in a manner comparable to what might be anticipated from a human author from an economic standpoint.9 According to Boden, creativity can be divided into three different types: exploratory creativity involves examining the boundaries of a specific style; combinational creativity brings together well-known ideas in novel ways; and transformational creativity transcends these constraints to create novel structures. 10 Additionally, according to Boden, AI is capable of exhibiting all three of these varieties of creativity, frequently in a manner similar to human creativity. Activities showcased by artificial machines that are taken to be creative by human perception are the foundation for the study of artificial creativity.

The dynamic that promotes the notion of AI as an autonomous agent may minimise the relative value of human creative contributions. Today, artificial intelligence (AI) can produce a wide variety of works, including music,11 poems,12 and works of art.13 Such systems already possess some expertise that the human mind cannot fully comprehend.14 One of the most talked-about AI-produced works is the painting titled “The Next Rembrandt”. Other, more recent AI-generated art was created by the e-David system, a “painting machine that simulates human painters and is able to draw a painting on a real canvas.”15 These works of art would unquestionably be subject to copyright protection had they been created by a human. Still, an inevitable question that looms is whether AI-generated works of art created without human intervention should be similarly protected by copyright laws. There are still unresolved questions about the extent to which works created by AI should be secured by traditional copyright law. But, given their growing popularity, it is more imperative than ever to find a plausible solution.

9 Babak Saleh and Ahmed Elgammal, ‘Large-scale Amalgamation of Fine Art Paintings: Learning the Right Metric on the Right Feature’ (2015) Department of Computer Science, Rutgers University, NJ, USA <https://arxiv.org/pdf/1505.00855.pdf?source=post_page> accessed 10 January 2023.
10 Boden (n 8) 4.
11 Ed Lauder, ‘Aiva Is The First AI to Officially Be Recognised As A Composer’ (AI Business, 10 March 2017) <https://aibusiness.com/verticals/aiva-is-the-first-ai-to-officially-be-recognised-as-a-composer> accessed 20 December 2022; Samuel Karlsson, ‘Artificial Intelligence and the Concept of Legal Entities in Copyright’ [2019] Lund University <https://lup.lub.lu.se/luur/download?func=downloadFile&recordOId=9020263&fileOId=9020290> accessed 21 December 2022.
12 Matt Burgess, ‘Google’s AI Has Written Some Amazingly Mournful Poetry’ (Wired, 16 May 2016) <https://www.wired.co.uk/article/google-artificial-intelligence-poetry> accessed 21 December 2022.
13 Mark Brown, ‘‘New Rembrandt’ to be Unveiled in Amsterdam’ (Guardian, 5 April 2016) <https://www.theguardian.com/artanddesign/2016/apr/05/new-rembrandt-to-be-unveiled-in-amsterdam> accessed 20 December 2022.

(c) Need for protection of AI-generated works

The ownership of the work produced, even computer-generated work, was
not customarily questioned because artificial intelligence (AI) was applied as
a tool rather than an autonomous creator of works. The need to address
issues with authorship and ownership in copyright protection for AI-
generated works is becoming more important than ever, as AI technologies
are developing and becoming more autonomous. Despite the fact that works
created by AIs may have significant commercial or scientific value, the
absence of copyright protection or an alternative solution doubtlessly
discourages the development of AI systems. The breakthroughs of AI

14 Madeleine de Cock Buning, ‘Artificial Intelligence and the Creative Industry: New Challenges for the EU paradigm for Art and Technology by Autonomous Creation’ in Woodrow Barfield and Ugo Pagallo (eds), Research Handbook on the Law of Artificial Intelligence (Edward Elgar 2018) 517.
15 Oliver Deussen and Marin Gulzow, ‘e-David, A painting process’ (University of Konstanz, 17 June 2019) <http://graphics.uni-konstanz.de/eDavid/?page_id=2> accessed 21 December 2022.


machines will unquestionably be hampered by placing these works in the
public domain without safeguarding the work itself and the creator’s moral
rights.

When deciding how to protect these works, a balance must be struck
between upholding the law and motivating AI developers to create. It would
also have to secure the efficient operation of the internal market. According
to Kalin Hristov, if these requirements are met, AI will develop fluidly and
sustain its long-term purpose as a catalyst for creativity and innovation.16

III. The Applicability of Copyright Law to AI-generated Works

The incorporation of autonomy into AI creations is ever-increasing. These
smart machines are “deciding for themselves” and even “creating” without
human oversight. Sometimes AI systems produce works that seem original.
The core tenets of copyright law are aimed at people: it is people who are
granted the right to possess and defend creative works. Furthermore, it is
widely accepted that works can only be considered original if they are the
product of human creativity. Three primary concerns have been voiced over
securing legal protection for AI-generated works: ownership, authorship,
and originality.

(a) Issues related to ownership and authorship

Many countries’ laws need to be more flexible to accommodate this
situation, since copyright only applies to works made by humans.17 Only “an
original work of authorship, provided that it was created by a human being”

16 ibid.
17 ibid.


can be registered with the Copyright Office in the United States.18 That AI
is capable of creative output is undeniable. In its Draft Issues Paper on
intellectual property policy and AI, WIPO raised a number of questions on
the subject.19

In the Yoox case,20 the designers were actively involved in the
creative process, which allowed the company’s products to be considered
original. However, as the lines separating the AI’s input and the decisions
made by humans get increasingly blurred, it becomes more and more
difficult for businesses to be certain whether their works will be protected by
copyright or not.

AI has progressed to the point where it can create works independently,
without any guidance from a human, for instance through a self-learning
process. AI has no need for the original computer program’s developer. In
addition, the AI will evolve in such a way that any new creations it makes, as
well as any new applications it advances, will be regarded as unoriginal
because of the “extremely inconsequential link” between them and the
original creator.21

18 ‘Copyrightable Authorship’ (US Copyright Office) 4 <https://www.copyright.gov/comp3/chap300/ch300-copyrightable-authorship.pdf> accessed 23 December 2022; Feist Publications, Inc. v Rural Telephone Service Company, 499 U.S. 340 (1991).
19 WIPO Secretariat, ‘WIPO Conversation on Intellectual Property (IP) And Artificial Intelligence (AI): Draft Issues Paper on Intellectual Property Policy and Artificial Intelligence’ (World Intellectual Property Organisation, 13 December 2019) WIPO/IP/AI/2/GE/20/1.
20 Giulio Coraggio, ‘AI in the Fashion Industry Unveils New Unexpected Legal Issues’ (Gaming Tech Law, 26 November 2018) <https://www.gamingtechlaw.com/2018/11/ai-fashion-legal-issues.html> accessed 27 December 2022.
21 Erika Hubert, ‘Artificial Intelligence and Copyright Law in a European Context’ [2022] Lund University 24


(b) Issues related to the originality condition

Only the original AI programme is eligible for such protection. In other
words, the central query is: where is the author’s “original intellectual
invention” in works created by AI,22 and are the works produced by AI
original? In light of the Infopaq case,23 it is highly improbable that an AI’s
output would be acknowledged as an author’s own work, because human
beings are required for the creative process. The work must also reflect the
author’s personality through “free and creative decisions”, as demonstrated
in the Painer case, for it to be regarded as original.24 Under EU law, the
connection between the product produced by AI and the “free and creative
choices” of human producers is thus too remote. The work cannot be
credited to the human programmer either.25 The CJEU also affirms the
notion that some form of human authorship is required for a work to qualify
for copyright protection. Consequently, the author’s “personal touch” must
be embedded in the work. Under European copyright law, works generated
wholly or partially autonomously by AI are therefore not original. It is also
impossible to attribute authorship under copyright law to either a human or
an AI. This means, essentially, that these kinds of works are not yet covered
by EU copyright law.

<https://lup.lub.lu.se/luur/download?func=downloadFile&recordOId=9020263&fileOId=9020290> accessed 28 December 2022.
22 Begoña González Otero and João Pedro Quintais, ‘Before The Singularity: Copyright and the Challenges of Artificial Intelligence’ (Kluwer Copyright Blog, 25 September 2018) <http://copyrightblog.kluweriplaw.com/2018/09/25/singularity-copyright-challenges-artificial-intelligence/> accessed 29 December 2022.
23 Eva-Maria Painer v Standard Verlags GmbH and Others [2011] C-145/10.
24 ibid.
25 ibid.


IV. Recent Landmark AI Rulings

(a) United States of America

Dr. Thaler applied to the USPTO in 2019 for patents on two DABUS
innovations, with DABUS designated as the inventor.26 The USPTO
rejected both applications for three reasons. First, the USPTO determined
that inventors must be individuals in accordance with the legislative
requirements. Second, the current recognised system has never witnessed
inventions working totally on their own without human interference. Third,
non-natural entities cannot have transferable rights of property and hence
cannot be given ownership rights.

The USPTO claimed that the keyword “whoever” in section 101, which
reads, “[w]hoever invents or discovers any new and useful process, machine,
manufacture, or composition of matter ... may obtain a patent therefor,”
implies that a natural person must be the inventor in order to be issued a
patent.27 A patent application must identify the person(s) responsible for the
invention being claimed in accordance with section 115(a), and section
115(h)(1) further says that the inventor who makes an oath or declaration
must be a “person.”28

Given this constant reference to persons and humans, the patent statute
would be in contradiction if “‘inventor’ were to be interpreted broadly to
embrace machines.” The USPTO further underlined that the act of
inventorship requires conception, a phenomenon that machines are

26 In re Application 16/524,350 [2022] WL 1970052 (22 April 2022).
27 ibid.
28 ibid.


unable to complete.29 The vital element in conception for the inventor is a
distinct and lasting notion of the invention, which is then put into effect.30
Since Dr. Thaler’s patent applications credited DABUS as the inventor, the
USPTO rejected them on the grounds that they did not meet the requirements
of 35 U.S.C. § 115(a).

According to the Supreme Court of the United States, “[a]s a general rule,
the author is the party who actually creates the work, that is, the person who
translates an idea into a fixed, tangible expression entitled to copyright
protection.”31

(b) European Union

An AI system cannot be listed as an inventor on a patent application, the
EPO concluded in the case of DABUS.32 Dr. Stephen Thaler “is in the trade
of engineering and using advanced [AI] systems that are capable of
generating patented product” with which no natural person is associated. He
submitted concurrent applications to the EPO and UKIPO in 2018 and 2019,
naming DABUS as the inventor since it allegedly “acknowledged the
originality of its own concept before a natural person did.” Two sets of

29 In re Application 16/524,350 [2022] WL 1970052.
30 Burroughs Wellcome Co. v Barr Lab’ys, Inc. [1994] 40 F.3d 1223, 1227–28 (Fed. Cir.).
31 Cmty. for Creative Non-Violence v Reid [1989] 490 U.S. 730, 737; see also Burrow-Giles Lithographic Co. v Sarony [1884] 111 U.S. 53, 61 (“[A]nd Lord Justice Bowen says that photography is to be treated for the purposes of the act as an art, and the author is the man who really represents, creates, or gives effect to the idea, fancy, or imagination.”).
32 ‘EPO Refuses DABUS Patent Applications Designating a Machine Inventor’ (European Patent Office, 20 December 2019) <https://www.epo.org/news-events/news/2019/20191220.html> accessed 28 December 2022.


synthetic neural networks are utilized by DABUS: one to produce ideas, and
another to evaluate them for ‘authenticity’ and ‘utility’. 33

The EPO rejected the application on the grounds of failure to comply with
Article 81 of the EPC and Rule 19(1) of its Implementing Regulations,
which require the designation of an inventor.34 Since Rule 19(1) requires the
inventor to be a natural person, the EPO rejected the designation of DABUS
as inventor.35 The European Patent Office (EPO) states that “names granted
to natural beings, whether consisting of a given name and a family name or
homonyms, serve not just the role of identifying them, but enable them to
exercise their rights and form part of their personality.” Because machines
lack legal personality, they cannot hold the rights vested in inventors under
Article 62 of the EPC, such as the ability to be listed on patent applications;
for the same reason, DABUS could not be considered an employee of Dr.
Thaler.36

In the UK, the application received a similar rejection. According to the
UKIPO’s analysis, the Act is limited to human inventors because section
13(2)(a) of the UK Patents Act requires the applicant to identify the “person
33 ‘Can Artificial Intelligence Systems Patent their Inventions?’ (Dennemeyer & Associates, 22 November 2019) <https://www.dennemeyer.com/ip-blog/news/can-artificial-intelligence-systems-patent-their-inventions/> accessed 28 December 2022.
34 Wensen An, ‘The Lay of the Land: Patent Law and AI Inventors’ (The Robotics Law Journal, 30 October 2020) <https://roboticslawjournal.com/analysis/the-lay-of-the-land-patent-law-and-ai-inventors-33535461> accessed 28 December 2022.
35 ibid.
36 Joel Smith, Rachel Montagnon and Laura Adde, ‘European Union: EPO Publishes Reasons for Rejecting AI as Inventor on Patent Application’ (Mondaq, 7 February 2020) <https://www.mondaq.com/uk/patent/891234/epo-publishes-reasons-for-rejecting-ai-as-inventor-on-patent-application> accessed 29 December 2022.


or individuals” who are regarded to be the inventor.37 For this reason, even if
the AI machine were considered the inventor, the applicant would have a
hard time securing legal title to the innovation. It is well established in law
that an IPR holder must be a person. To quote the application, “the
applicant acknowledges that DABUS is an AI machine and not a human,
consequently, cannot be understood to be a ‘person’ as required by the
Act.”38

(c) Arguments against and in favour of granting authorship/ownership rights to AI machines

Conforming to the Supreme Court’s ruling in Goldstein v California, “any
physical rendering of the fruits of creative intellectual or aesthetic labor”
counts as satisfying “the authorship requirement.”39 In Naruto v Slater, the
Ninth Circuit Court of Appeals heard arguments on whether animals can be
acknowledged as authors of works they ‘produce’.40 Seven-year-old
‘Naruto’ was the name given to the crested macaque in Indonesia that, when
left alone with wildlife photographer David Slater’s unattended camera,
captured multiple photos of itself. These photographs were included in a
book published by Slater. This prompted the People for the Ethical
Treatment of Animals to file a copyright infringement lawsuit against him on
behalf of Naruto. Using a strict interpretation of the Copyright Act, the Ninth
Circuit concluded that Naruto lacked the required standing.41

37 BL O/741/19, Decision, U.K. Intellectual Property Office.
38 ibid.
39 Goldstein v California [1973] 412 U.S. 546, 561.
40 Naruto v Slater [2018] 888 F.3d 418, 420 (9th Cir.).
41 ibid at 426.


The court continued by noting that “[t]he phrases ‘children,’ ‘grandchildren,’
‘legitimate,’ ‘widow,’ and ‘widower’ all imply humanity and necessarily
exclude animals that do not marry and do not have heirs entitled to property
by law.” Animals are thus excluded by the Copyright Act.

The human-centric notion of authorship is supported by EU legislation as
well. A photographic work “is to be considered original if it is the author’s
own intellectual production reflecting his personality, no other criteria such
as merit or purpose being taken into account,” as stated in the EU Copyright
Term Directive.42 State authorities have often avoided extending copyright
protection beyond works generated by humans, despite the growing
breakthroughs in AI creativity, by relying on a human-centric notion of
authorship. According to the 2022 EU study on IP and AI, “the copyright
principle of originality is linked to a natural person”, and “intellectual
creations reflect an individual’s identity”.43

Speaking of theories for assigning authorship and ownership rights to AI,
one theory claims that the law should declare the author of an AI-generated
work to be “the person by whom the arrangements necessary for the creation
of the work are undertaken”.44 Such laws could be used to designate AI
developers as the authors of AI-generated works.
However, proponents of copyright protection for AI works contend that,

42 European Parliament and Council Directive 2006/116 on the Term of Protection of Copyright and Certain Related Rights [2006] OJ (L 372) 12, 14 (EC).
43 European Parliament Committee on Legal Affairs, ‘Report on intellectual property rights for the development of artificial intelligence technologies’ (European Parliament, 2 October 2022) 9.
44 Copyright, Designs and Patents Act 1988, s 178(b) (UK).


historically speaking, the emphasis on the need for ‘human interference’ has
been counterproductive to the growth of copyright law.45

As with ‘conventional’ productions, artificial intelligence-generated works
have the potential to extend cultural legacy. As such, they are entitled to
some sort of copyright protection, which has already been accepted by
European legislators.46 Margot Kaminski, who accepts a comparable
potential, argues that the U.S. copyright system has evolved far enough away
from romantic authorship that algorithmic authorship would be, perhaps
surprisingly, not fundamentally disruptive.47
that the incentive needed to ensure the creation and transmission of AI-
generated works would be offered by copyright protection. It is further stated
that protecting the works derived by AI systems will encourage their owners
to maintain adequate control over them, as well as aid in the advancement of
AI systems.48

In line with the EU’s human-centred approach to copyright, some members
of parliament have advocated that the copyright in such works should be
automatically assigned to the copyright holder of the AI programme (such as

45 Madeleine de Cock Buning (n 14) 511, 533.
46 European Parliament Committee on Legal Affairs: “At a time when artistic creation by AI is becoming more common [citing the example of the ‘Next Rembrandt’], we seem to be moving towards an acknowledgement that an AI-generated creation could be deemed to constitute a work of art on the basis of the creative result rather than the creative process.”
47 Margot Kaminski, ‘Authorship, Disrupted: AI Authors in Copyright and First Amendment Law’ (2017) 51 U.C. Davis L. Rev. 589.
48 Shlomit Yanisky-Ravid, ‘Generating Rembrandt: Artificial Intelligence, Copyright, and Accountability in the 3A Era—The Human-Like Authors are Already Here—A New Model’ [2017] Michigan State Law Review 659, 701 <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2957722> accessed 29 December 2022.


the AI developer).49 From a utilitarian standpoint, assigning programmers a
copyright would stimulate them to continue exploring and creating AI and to
claim ownership of the works produced by the AI. Determining whether
there is a definitive link between the AI creator and the AI work is a decisive
factor.

Alternately, it has been suggested that the work-made-for-hire paradigm be
modified to include AI-created works. Experts in the field have drawn
parallels between the creative output of AI systems and that of a “work-
made-for-hire” service. Under the work-made-for-hire doctrine, the
copyright vests directly in the employer or commissioner of the work,
regardless of whether the employer or commissioner had any hand in the
creative process, with no prior vesting of rights in the true creator.

V. How should AI-generated Works be Protected in the Future – The Way Forward

According to Professor Ole-Andreas Rognstad, the suggested techniques to
attribute ownership to people or artificial intelligence (AI) systems “do not
fit cleanly into the EU legal structure.”50 Adopting a sui generis strategy
would make it more workable and avoid modifying existing international
copyright laws. Despite legislative and academic debate, there have been no
judicial or administrative decisions regarding the copyrightability of AI-

49 European Parliament Committee on Legal Affairs, ‘AI Act: a step closer to the first rules on Artificial Intelligence’ (European Parliament, 11 May 2023) <https://www.europarl.europa.eu/news/en/press-room/20230505IPR84904/ai-act-a-step-closer-to-the-first-rules-on-artificial-intelligence> accessed 29 December 2022.
50 Otero and Quintais (n 22).


generated works.51 The significant decision in Naruto v Slater establishes
that authorship requires human agency.

The dilemma of whether artificial intelligence or consciousness produces a
legal personality, and whether “this legal personality retains all of the related
rights and liabilities and/or duties of a natural person”, is left unanswered.52
AI systems cannot own their works or file lawsuits for copyright
infringement. Copyrightability depends largely on human effort. The legal
gap left by Naruto v Slater was filled by the Dreamwriter decision, which
illustrates how courts might apply the human contribution standard to
determine ownership of AI-generated works.53

(a) Two-tiered Mechanism for Protecting AI Creations

First, legislators should create a new legal protection for AI inventions made
with human input. Second, after deciding how they would ensure the
protection of the work itself and the moral rights of the owner, legislators
should approve making AI-generated works part of the public domain.

i. Sui Generis Protection of IP Rights

‘Sui generis’ means “of its own kind or class” in Latin.54 IP rights that do not
fit the usual moulds of copyright, patent, trademark, and trade secret are
protected by

51 Paul T. Babie, ‘The “Monkey Selfies”: Reflections on Copyright in Photographs of Animals’ (2018) 52 U.C. Davis Law Review Online 103, 116; see generally Shyamkrishna Balganesh, ‘Causing Copyright’ (2017) 117 Columbia Law Review 1, 73–74.
52 Victor M. Palace, ‘What if Artificial Intelligence Wrote This? Artificial Intelligence and Copyright Law’ (2019) 71 FLR 217, 226.
53 Copyright, Designs and Patents Act 1988, s 9(3); Copyright and Related Rights Act 2000, s 21(f); Copyright Act 1994, s 5(2)(a).
54 Henry Campbell Black, Black’s Law Dictionary (11th edn, St. Paul, Minn.: West Publishing Co. 2019).


the sui generis system.55 This sui generis right is separate from copyright
and other IP rights. The EU database right, for instance, safeguards databases
regardless of whether their contents enjoy copyright protection.

Some eminent experts believe that while current AI technology could be
used as a tool, creativity is something that comes from the people in charge
of it. However, works produced by AI are distinct from those produced by
computers, software, and cameras. There are three justifications for
establishing a new sui generis right to provide copyright protection to AI
works.

1) Over the past ten years, deep learning has made AI ubiquitous.56
Algorithms are used by AI systems to learn from feedback and advance.
In order to improve performance, some AI systems create and rewrite
algorithms.57 Because deep learning and neural networks are used
throughout AI systems, they are always learning and adapting. They can
mimic human intelligence and creativity to generate new original works,
such as news reports,58 poems,59 paintings,60 and music.61

55 ibid.
56 Kai-Fu Lee, AI Superpowers: China, Silicon Valley, and the New World Order (Houghton Mifflin Harcourt Publishing 2018).
57 Will Knight, ‘The Dark Secret at the Heart of AI’ (MIT Technology Review, 11 September 2017) <https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai> accessed 19 December 2022.
58 Jaclyn Peiser, ‘The Rise of the Robot Reporter’ (New York Times, 5 February 2019).
59 Matt Burgess, ‘Google’s AI Has Written Some Amazingly Mournful Poetry’ (Wired, 16 May 2016).
60 Nadja Sayej, ‘Vincent van Bot: The Robots Turning Their Hand to Art’ (The Guardian, 22 February 2018).
61 Dani Deahl, ‘This New Alexa Skill Will Play Music Generated by Artificial Intelligence’ (Verge, 14 March 2018).


2) AI has transformed human inventiveness. Some systems have even
developed to imitate human authors and establish new genres. Although
human creators can set parameters to control AI outputs, AI systems use
neural networks to make their own decisions and produce works in ways
that resemble human cognitive processes. Even human-AI partnerships
may produce AI-generated creations. Instead of relying on the
programmer’s inputs, the AI system creates and adds original expression
to the output.

3) The originality of AI-generated works, when fixed by AI systems in a
specific medium, resides with the AI and not with the human author.
Under U.S. copyright law, a work must be fixed by a human author in a
tangible medium.62 Works in tangible media reveal the owner of the
finished product. The rising autonomy of AI systems undercuts the
copyrightability assumption that human creators use AI merely as a tool
to enhance their works.

These three considerations suggest the need for a sui generis mechanism to
protect AI-created works against infringement. The question remains,
though: who should have the exclusive rights to an AI-created piece of art?
Who should get the credit: the developer who set the AI system in motion,
or the AI system that made the primary contribution to the construction of
the work? Several recent AI-related rulings have invalidated claims of
intellectual property ownership by AI systems.

ii. Creating a New Sui Generis Right

62 17 U.S.C. § 101 (2018).


Additionally, as they develop, these computers are taking on human
characteristics like independence, “intelligence,” personality, and social
skills. The following provisions of the new sui generis right may serve to
protect AI-generated works:

• The granting of rights and protection

The EU database directive credits “the natural person or group of natural
persons who established the base or, if the legislation of the Member States
so permits, the legal person designated as the right holder by that legislation”
as databases’ authors.63 A similar approach could create new rights for AI-
created works, and it addresses the authorship and ownership concerns raised
earlier in this chapter. The author could be a natural person (or persons) or a
legal person, such as the company that controls the AI, or conceivably the AI
system itself. This suggests that authorship may be kept open for member
states to examine independently or to standardise across the EU.

• The originality criterion

Unlike traditional copyright laws, the new sui generis rule for AI-generated
works need not have an “originality criterion.”64 The originality requirement
remains a major obstacle when one wants to attribute copyright protection to
a work autonomously generated by an AI. Eliminating this stipulation from a
new sui generis rule, as opposed to a copyright act, would make it far simpler
to protect AI-generated works.

63 Council Directive 96/9/EC of 11 March 1996 on the legal protection of databases [1996] OJ L 77/20.
64 ibid.


• Additional aspects

The new sui generis rule should make it abundantly obvious which works are
covered by the sui generis right and which are protected by standard
copyright law. This would strengthen legal certainty, and the authorities
might choose whether to exempt works from copyright protection.
Protection requirements for AI-generated works may be relaxed or altered,
and the periods of protection would have to keep pace with rapid technical
advancements in the field.65 Companies could reduce liability and anti-
competitive risks by meticulously regulating the use of their products and AI
systems.

Copyright law itself need not be affected by this protection. A sui generis
right would steer clear of most of the ethical, cultural, and economic debates
surrounding that contentious issue. Problems with copyright could thus be
resolved by sui generis protection.

VI. Conclusion

The Fourth Industrial Revolution will start as a result of innovative advances
made possible by AI technologies. But worries about AI’s possible negative
repercussions are growing. In light of this, IP protection must be constructed
to benefit humanity in the AI age. As this chapter has highlighted, recent AI
judgements aim to assure equitable access to AI-generated works by
maintaining current IP norms like authorship and inventorship. The verdicts
issue a warning against an unduly liberal standard of intellectual property

65 European Parliament and Council Directive 96/9/EC on the legal protection of databases, art 10 [1996] OJ L 77/20.


laws that would allow AI systems to hold copyright ownership. Still, they
leave room for the idea of allowing AI developers to exercise proprietary
control over AI creations.

This chapter has suggested a two-tiered legal system to protect AI-generated
works internationally, drawing on recent AI judgements. This forward-
looking idea accords human ingenuity the credit it deserves for creating AI
systems that can produce results advantageous to humankind. At the same
time, it places the innovations of autonomous AI systems in the public
domain, limiting the detrimental consequences of AI on the proliferation and
enjoyment of general knowledge for all people.


THE RABBIT-HOLE OF CONTENT MODERATION BY AI


Piyush Chakravarty
(Teaching Associate at CMR University)

Abstract
In the last decade, social media platforms like Facebook, Instagram,
Twitter, etc. have had a very tumultuous journey. There are multiple
examples wherein social media has done wonders. The exposure of the
mistreatment of Iraqi prisoners at Abu Ghraib in Baghdad,66 the
significance of WikiLeaks,67 and the Arab Spring are instances where
social media played a catalytic role.68 Then there are instances where
social media has been used to violate human rights. In Myanmar, social
media had a “determining role” in the suspected acts of genocide in the
country.69 Social media has been used in the decade-long war in
Syria,70 online hate speech has been used to provoke enmity in the
Central African

66 Seymour Hersh, ‘Torture at Abu Ghraib’ (The New Yorker, 30 April 2004) <http://www.newyorker.com/magazine/2004/05/10/torture-at-abu-ghraib> accessed 25 March 2022.
67 Raffi Khatchadourian, ‘What Does Julian Assange Want?’ (The New Yorker, 31 May 2010) <https://www.newyorker.com/magazine/2010/06/07/no-secrets> accessed 25 March 2022.
68 Robin Wright, ‘How the Arab Spring Became the Arab Cataclysm’ (The New Yorker, 15 December 2015) <http://www.newyorker.com/news/news-desk/arab-spring-became-arab-cataclysm> accessed 25 March 2022.
69 Tom Miles, ‘U.N. Investigators Cite Facebook Role in Myanmar Crisis’ (Reuters, 12 March 2018) <https://www.reuters.com/article/us-myanmar-rohingya-facebook-idUSKCN1GO2PN> accessed 31 March 2022.
70 Patrick Howell, ‘Why the Syrian Uprising Is the First Social Media War’ (The Daily Dot, 18 September 2013) <https://www.dailydot.com/debug/syria-civil-social-media-war-youtube/> accessed 31 March 2022.


Republic,71 and religious attacks were provoked by people in Sri
Lanka.72

The algorithms deployed by these platforms to moderate content and
keep the platforms engaging have been deliberately used to promote
content which would cause problems. The AI used by the platforms
exercises too much power as it controls free speech, and the contextual
understanding that speech requires in multiple cases gets overlooked
when it comes to AI. Therefore, more nuanced and in-depth training of
the tools has to be undertaken, and even after that, the problems may
persist. The author tries to address the issue by examining it with all
the stakeholders in view and suggesting the necessary reforms.

I. Introduction

Around 27 years ago, Eugene Volokh wrote an article on the future of the
internet and the world around it.73 It spoke of the extinction of physical
formats of music, the prevalence of music video, and free-flowing speech.
Here we are, 27 years later, and every prediction has come true. The
internet and social media have come a long way since their inception. The
kinds of problems which have now come into the picture were probably never
envisioned by the creators of the internet and the other platforms which

71
‘Hate Speech on Social Media Inflaming Divisions in CAR - Central African Republic’
(Relief Web, 2 June 2018) <https://reliefweb.int/report/central-african-republic/hate-speech-
social-media-inflaming-divisions-car> accessed 31 March 2022.
72
Max Fisher, ‘Sri Lanka Blocks Social Media, Fearing More Violence’ (The New York
Times, 21 April 2019) <https://www.nytimes.com/2019/04/21/world/asia/sri-lanka-social-
media.html> accessed 31 March 2022.
73
Eugene Volokh, ‘Cheap Speech and What It Will Do’ (1996) 1 The Communication
Review 261.


followed. There now exists a consensus across society that the internet and
these platforms cause enormous societal problems, including loss of
privacy,74 harassment of women and minorities,75 systematic manipulation of
democracy,76 and incitement to genocide (as in Myanmar).77 The content on
social media which causes problems is sometimes user-generated and
sometimes the product of the platforms’ own activities, such as
over-censoring or acting late on sensitive content. Facebook has been in
turmoil for quite some time now, facing regular lawsuits around the world,
along with inquiries before parliamentary committees and commissions, for
its involvement in such activities.78

The Cambridge Analytica scandal showed how Facebook user data was
harvested and used in attempts to manipulate the 2016 US presidential
election,79 alongside allegations of Russian interference. Every now and
then, such incidents reignite talk of further regulation of social media.
Certain countries have taken steps and enacted new laws to

74
Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at
the New Frontier of Power: Barack Obama’s Books of 2019 (Profile Books 2019).
75
Danielle Keats Citron, Hate Crimes in Cyberspace (Harvard University Press 2014).
76
Alexis C Madrigal, ‘What Facebook Did to American Democracy’ (The Atlantic, 12
October 2017) <https://www.theatlantic.com/technology/archive/2017/10/what-facebook-
did/542502/> accessed 13 March 2022.
77
Alexandra Stevenson, ‘Facebook Admits It Was Used to Incite Violence in Myanmar’
(The New York Times, 6 November 2018)
<https://www.nytimes.com/2018/11/06/technology/myanmar-facebook.html> accessed 13
March 2022.
78
Kara Swisher, ‘Opinion | Zuckerberg Never Fails to Disappoint’ (The New York Times, 10
July 2020) <https://www.nytimes.com/2020/07/10/opinion/facebook-zuckerberg.html>
accessed 13 March 2022.
79
Kevin Granville, ‘Facebook and Cambridge Analytica: What You Need to Know as
Fallout Widens’ (The New York Times, 19 March 2018)
<https://www.nytimes.com/2018/03/19/technology/facebook-cambridge-analytica-
explained.html> accessed 13 March 2022.


address such conduct, but the pace at which these harms have grown has not
been matched by legal and regulatory reforms.

II. The Modus Operandi of Intermediaries

An intermediary today differs greatly from an intermediary 20 years ago.
It can now be termed a “Super-Intermediary”,80 whose essence lies in the
exercise of power. A damaged reputation does not reduce a
super-intermediary’s user base for two reasons: first, the consolidation
that exists among intermediaries, with a few platforms dominating their
fields; and second, these super-intermediaries wield so much power that
new entities wish to attach themselves to one of the major players.81
Intermediaries like Google, Facebook, Amazon, and Twitter are not just
companies selling goods and services but infrastructural firms. These are
the foundations on which economic and social undertakings function. And
this infrastructure is privately controlled.82

Facebook, Twitter, YouTube, Instagram, Netflix, etc. are essentially
business platforms. Their entire business model is rooted in keeping users
“engaged”. The advertisements placed between posts are the
revenue-generating devices for these platforms. The longer a user stays on
a platform, the more they will engage, and the more “impressions” specific
ads will receive. These impressions help

80
Ira Steven Nathenson, ‘Super-Intermediaries, Code, Human Rights’ (2013) 8 Intercultural
Human Rights Law Review (St. Thomas University) 19, 39.
81
Yochai Benkler, The Wealth of Networks: How Social Production Transforms Markets
and Freedom (Yale University Press 2006).
82
K Sabeel Rahman, ‘Regulating Informational Infrastructure: Internet Platform as the New
Public Utilities’ (2017–2018) 2 Georgetown Law Technology Review 234.


the platforms mint money.83 Platforms collect individuals’ data and
organise it into a structured form.84 Along with this, they hold scoring
power over the user, via which they exercise control over the entire
ecosystem using algorithms, recommendations, etc.85

The algorithm analyses a user’s data and predicts what the user may like
based on the kinds of posts the user has previously liked. The accumulated
data can be organised into different categories that each reveal clues
about what a user likes to see. Activities such as searches and the people
the user has contacted are used to customise the feed even further.86 Once
the algorithm creates the pool of user content, the content is ranked by
how much the user engages with it. The user’s feed is then filled with the
highest-ranked content, which appears at the top of the feed.87
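The ranking process described above can be sketched, in highly simplified form, as follows. This is an illustrative toy, not any platform’s actual algorithm; the signal names and weights are hypothetical.

```python
# Toy sketch of engagement-based feed ranking (hypothetical signals and
# weights, not any platform's real algorithm).

def engagement_score(post, user_signals):
    """Score a candidate post for one user from past-engagement signals."""
    # Hypothetical weights: likes on the post's topic count most,
    # then comments, then time spent (dwell) on that topic.
    topic = post["topic"]
    return (3.0 * user_signals.get(("liked_topic", topic), 0)
            + 2.0 * user_signals.get(("commented_topic", topic), 0)
            + 1.0 * user_signals.get(("dwell_topic", topic), 0))

def build_feed(candidate_posts, user_signals, size=3):
    """Rank the candidate pool and keep only the top-ranked posts."""
    ranked = sorted(candidate_posts,
                    key=lambda p: engagement_score(p, user_signals),
                    reverse=True)
    return ranked[:size]

# A user who engaged heavily with political posts in the past.
signals = {("liked_topic", "politics"): 5,
           ("commented_topic", "politics"): 2,
           ("liked_topic", "sports"): 1}
posts = [{"id": 1, "topic": "sports"},
         {"id": 2, "topic": "politics"},
         {"id": 3, "topic": "cooking"}]

feed = build_feed(posts, signals, size=2)
# The politics post ranks first because the user engaged with that topic most.
```

The self-reinforcing loop the chapter criticises is visible even in this sketch: past engagement determines future exposure, so the feed keeps feeding the user more of what they already reacted to.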

The level of involvement of Facebook, Twitter, or Instagram is much higher
than that of a simple sharing platform or an amplifier: they curate
personalised feeds based on users’ inferred preferences via an
algorithm.88 Along with this, there is substantial research on the bias which

83
Brian O’Connell, ‘How Does Facebook Make Money? Six Primary Revenue Streams’
(The Street, 23 October 2018) <https://www.thestreet.com/technology/how-does-facebook-
make-money-14754098> accessed 30 March 2022.
84
Sang Ah Kim, ‘Social Media Algorithms: Why You See What You See Technology
Explainers’ (2017–2018) 2 Georgetown Law Technology Review 147.
85
Danielle Citron and Frank Pasquale, ‘The Scored Society: Due Process for Automated
Predictions’ (2014) 89 Washington Law Review 1.
86
‘How the Facebook Algorithm Works and Ways to Outsmart It’ (Sprout Social, 3 August
2020) <https://sproutsocial.com/insights/facebook-algorithm/> accessed 30 March 2022.
87
Sang Ah Kim (n 84) 149.
88
‘Facebook Changes Algorithm to Promote Worthwhile & Close Friend Content’ (Tech
Crunch, 16 May 2019) <https://social.techcrunch.com/2019/05/16/facebook-algorithm-
links/> accessed 14 March 2022.


exists in Facebook’s reaction emojis, which in effect favour the promotion
of hateful content over ordinary content.89 Sites like Facebook also allow
for the creation of “groups”, where people with common interests can
congregate and share posts reflecting their views.90 The algorithms of
these sites also tailor suggestions of whom to follow, which groups to
join, and so on. People and organisations pay for targeted advertising to
seek out the persons most receptive to their purposes. All of this
contributes to the creation of “echo chambers”, where the person targeted
for incitement is repeatedly fed the same rhetoric, misinformation, and
hateful content, creating the requisite context and atmosphere for a call
to violence.

ISIS has made prodigious use of social media for its varied purposes,91
with specific strategies employed for recruitment, indoctrination,
fundraising, incitement, etc. Research backs the claim that Facebook’s
algorithm boosts posts that elicit strong, negative emotions in order to
increase engagement.92 In Germany, when Facebook’s servers were down,
anti-refugee attacks dropped significantly, indicating just how much
violently inclined outfits depended on the site to incite the attacks.93
However, perhaps

89
Gina M Masullo and Jiwon Kim, ‘Exploring “Angry” and “Like” Reactions on Uncivil
Facebook Comments That Correct Misinformation in the News’ (2021) 9 Digital Journalism
1103.
90
Ashutosh Bhagwat, ‘The Law of Facebook’ (2020–2021) 54 UC Davis Law Review 2353.
91
Piotr Łubiński, ‘Social Media Incitement to Genocide’, in Marco Odello, Piotr Łubiński
(eds.), The Concept of Genocide in International Criminal Law Developments after Lemkin
(Routledge, 2020) 262.
92
Tobias Kraemer, ‘The Good, The Bad, And The Ugly – How Emotions Affect Online
Customer Engagement Behavior’ (2016) <https://iae-aix.univ-amu.fr/sites/iae-aix.univ-
amu.fr/files/42_kraemer-the_good_rev.pdf> accessed 14 November 2021.
93
Amanda Taub and Max Fisher, ‘Facebook Fueled Anti-Refugee Attacks in Germany,
New Research Suggests’ (The New York Times, 21 August 2018)
<https://www.nytimes.com/2018/08/21/world/europe/facebook-refugee-attacks-
germany.html> accessed 14 November 2021.


Facebook’s biggest failure to control the spread of hate speech and
incitement occurred during the Rohingya crisis, when Facebook repeatedly
ignored the warnings it received,94 and any steps taken came too little,
too late. Genocidal elements amongst Myanmar’s military elite made full
use of Facebook’s algorithm to spread their propaganda and calls to
violence.95 U.N. Myanmar investigator Yanghee Lee likened Facebook to a
“beast” while expanding on how the website, which pervades daily life in
Myanmar, had played a pivotal role in enabling the atrocities committed.96
Another major issue on Facebook is trolling,97 with fake accounts being
used to spread propaganda and false, inciting news.98

III. The Involvement of AI

Moderation is the central essence and value proposition of platforms. It
is carried out on a regular basis by a dedicated apparatus consisting of
company employees, temporary crowd-workers, outsourced review teams, legal
consultants, flaggers, administrators,

94
Steve Stecklow, ‘Why Facebook is losing the war on hate speech in Myanmar’ (Reuters
Investigates, 15 August 2018) <https://www.reuters.com/investigates/special-
report/myanmar-facebook-hate/> accessed 14 November 2021.
95
Paul Mozur, ‘A Genocide Incited on Facebook, with Posts from Myanmar’s Military’ (The
New York Times, 15 October 2018)
<https://www.nytimes.com/2018/10/15/technology/myanmar-facebook-
genocide.html?nl=top-stories&nlid=61026281ries&ref=cta> accessed 14 November 2021.
96
Tom Miles, ‘U.N. investigators cite Facebook role in Myanmar crisis’ (Reuters, 13 March
2018) <https://www.reuters.com/article/us-myanmar-rohingya-facebook/u-n-investigators-
cite-facebook-role-in-myanmar-crisis-idUSKCN1GO2PN> accessed 14 November 2021.
97
Neriah Yue, ‘The “Weaponization” of Facebook in Myanmar: A Case for Corporate
Criminal Liability Notes’ (2019–2020) 71 Hastings Law Journal 813.
98
Paul Mozur, ‘A Genocide Incited on Facebook, With Posts From Myanmar’s Military’
(The New York Times, 15 October 2018)
<https://www.nytimes.com/2018/10/15/technology/myanmar-facebook-genocide.html>
accessed 14 March 2022.


moderators, activist organisations, and the user population.99 By shifting
the first burden of moderation to its users, the platforms delegate to
them the powers of editors and police.100 Platforms offer content
moderation as a service because, without properly regulated content, no
platform would stay in business. A platform is defined by what it doesn’t
allow rather than what it allows.101 Platforms moderate via the removal
and allowance of content, and they recommend via the curation of
user-specific content.

Content moderation by humans as well as AI is common practice.102
Dependence on AI grows by the day, as the volume of removal requests
forces automation. Content moderation is a double-edged sword: sometimes
it helps, while at other times it can be over-protective.103 Automated
moderation is broad and generalised, and the context of the speech is
ignored; it produces many false negatives and false positives.104 As a
result, much content remains unchecked, so voluntary moderation is
insufficient. The private platforms have elaborate rules and systems to
resolve conflicts between freedom of expression and the regulation of
harmful speech. The platforms have their own internal

99
Adrian Chen, ‘The Laborers Who Keep Dick Pics and Beheadings Out of Your Facebook
Feed’ (Wired, 23 October 2014) <https://www.wired.com/2014/10/content-moderation/>
accessed 1 April 2022.
100
Kate Crawford and Tarleton Gillespie, ‘What Is a Flag for? Social Media Reporting
Tools and the Vocabulary of Complaint’ (2016) 18 New Media & Society 410.
101
Finn Brunton, Spam: A Shadow History of the Internet (MIT Press 2013).
102
Sebastian Felix Schwemer, ‘Trusted Notifiers and the Privatization of Online
Enforcement’ (2019) 35 Computer Law & Security Review 105339.
103
Kate O’Flaherty, ‘YouTube Keeps Deleting Evidence of Syrian Chemical Weapon
Attacks’ (Wired UK, 26 June 2018) <https://www.wired.co.uk/article/chemical-weapons-in-
syria-youtube-algorithm-delete-video> accessed 29 March 2022.
104
Ben Depoorter and Robert Walker, ‘Copyright False Positives’ (2013) 89 Notre Dame
Law Review 319.


terms of service, which they adjudicate using automated means.105 And
since content moderation is done by AI, the decision on removal or
non-removal depends on the private economic interests of the platform.106
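The trade-off between false positives and false negatives described above can be illustrated with a toy removal threshold. The scores, labels, and threshold values below are invented for illustration only; real moderation systems are far more complex than a single score.

```python
# Toy illustration of the moderation trade-off: one "harmfulness" score
# per post (invented numbers) compared against a removal threshold.
# Lowering the threshold removes more genuinely harmful posts but also
# censors more legal speech.

def moderate(posts, threshold):
    """Split posts into (removed, kept) by comparing score to threshold."""
    removed = [p for p in posts if p["score"] >= threshold]
    kept = [p for p in posts if p["score"] < threshold]
    return removed, kept

def error_counts(removed, kept):
    """Count over-censoring (false positives) and missed harm (false negatives)."""
    false_positives = sum(1 for p in removed if not p["harmful"])
    false_negatives = sum(1 for p in kept if p["harmful"])
    return false_positives, false_negatives

posts = [
    {"score": 0.95, "harmful": True},   # clear incitement
    {"score": 0.80, "harmful": False},  # war-crime documentation, flagged as graphic
    {"score": 0.60, "harmful": True},   # coded hate speech, context-dependent
    {"score": 0.20, "harmful": False},  # ordinary post
]

strict = error_counts(*moderate(posts, threshold=0.5))
lenient = error_counts(*moderate(posts, threshold=0.9))
# strict removes the legal documentation post; lenient misses the coded
# hate speech. No threshold eliminates both kinds of error, which is why
# context-blind automation over- and under-censors at the same time.
```

The point the sketch makes is the chapter’s: where a single score stands in for context, the platform’s choice of threshold, driven by its private interests, decides whose speech is wrongly removed and whose harm goes unchecked.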

Another aspect which is overlooked is the algorithmic personalisation of
the news feed on Facebook, Twitter, or any other platform, which is also
an exercise of free speech and expression with regard to access to
information. The argument regarding access to information cuts both ways:
one person may want more information and no personalised feed, while
another may prefer a customised feed over a general one.107 The standard
argument runs that the government cannot determine what one should or
should not read. But the “user preference” claimed by the platforms is not
the user’s preference per se. The data collected about the user is, most
of the time, not the product of an active choice.108 Furthermore, the
inferences the platform makes about a user are not necessarily the
expression of the user, as the “unpredictability of an individual” would
be absent from them, and that constitutes an essential characteristic of
an individual’s exercise of freedom and autonomy.109

105
Thomas Kadri and Kate Klonick, ‘Facebook v. Sullivan: Public Figures and
Newsworthiness in Online Speech’ (2019) 93 Southern California Law Review 37.
106
Maayan Perel, ‘Enjoining Non-Liable Platforms’ (2020–2021) 34 Harvard Journal of
Law & Technology (Harvard JOLT) 1, 28.
107
Sofia Grafanaki, ‘Drowning in Big Data: Abundance of Choice, Scarcity of Attention and
the Personalization Trap, a Case for Regulation’ (2017–2018) 24 Richmond Journal of Law
& Technology 1.
108
Katherine J Strandburg, ‘Free Fall: The Online Market’s Consumer Preference
Disconnect’ [2013] University of Chicago Legal Forum 95.
109
James Q Whitman, ‘The Two Western Cultures of Privacy: Dignity versus Liberty’
(2004) 113 The Yale Law Journal 1151, 1181.


These algorithms combine the power of deciding the norm, enforcing the
norm, and adjudicating based on the norm. Algorithms may make errors in
content moderation, marking something as illegal because of a lack of
context.110 This leads to the algorithm censoring legal content as well.
Finally, this act of content moderation by platforms undermines the
separation of powers within the platform system, as the same algorithm
acts as both enforcer and adjudicator.

The community standards are the rules applicable across every jurisdiction
and user. The language used in the document is very broad, which makes
consistent application difficult because of the lack of context
specificity.111 This paves the way for a great deal of subjectivity at the
hands of content moderators. Along with this, Facebook specifies that it
will assist law enforcement agencies based on the severity of the
violation, and absent any definition of severity, this too leads to
subjectivity. Leaked documents revealed that Facebook has internal
guidelines which are more specific than the ones available to the general
public.112 Moderators have several sources of “truth” which they must
weigh before making a decision, and that leads to inconsistency.113 The
internal materials are also changed on an ad hoc basis. Because hate
speech is difficult to define without context, the

110
Evan Engstrom and Nick Feamster, ‘The Limits of Filtering’ (Engine, March 2017)
<https://www.engine.is/the-limits-of-filtering> accessed 19 April 2022.
111
Sarah Koslov, ‘Incitement and the Geopolitical Influence of Facebook Content
Moderation’ (2019–2020) 4 Georgetown Law Technology Review 183, 189.
112
Max Fisher, ‘Inside Facebook’s Secret Rulebook for Global Political Speech’ (The New
York Times, 27 December 2018) <https://www.nytimes.com/2018/12/27/world/facebook-
moderators.html> accessed 6 April 2022.
113
Casey Newton, ‘The Secret Lives of Facebook Moderators in America’ (The Verge, 25
February 2019) <https://www.theverge.com/2019/2/25/18229714/cognizant-facebook-
content-moderator-interviews-trauma-working-conditions-arizona> accessed 6 April 2022.


internal document can magnify power disparities.114 Facebook, despite
being neither a sovereign nor a democratic institution, acts as a
facilitator of free speech and a proponent of democratic values.

IV. The Probable Solution?

Facebook uses a combination of AI and human oversight to moderate content.
Given the sheer number of posts, moderation would be humanly impossible
without automated systems.115 The efficiency of the AI depends on the
material it has been trained on. Bias can surface in many ways, as the
nuances of content moderation require a great deal of context. A
one-size-fits-all approach to content moderation is flawed, as thorny,
context-specific issues will not be addressed by it.116 Inciting speech
has qualities that depend on factors like history, setting, and
circumstances; it cannot be assessed in a vacuum, and universally applied
AI will produce erroneous results. Content management and moderation are
essential functions of Facebook’s and Twitter’s businesses, and with them
comes an enormous amount of data management.117

114
Julia Angwin and Grassegger Hannes, ‘Facebook’s Secret Censorship Rules Protect
White Men From Hate Speech But Not Black Children’ (Pro Publica, 28 June 2017)
<https://www.propublica.org/article/facebook-hate-speech-censorship-internal-documents-
algorithms?token=jshSORFCpk4rT10jAyZXIO0twvVlATYO> accessed 6 April 2022.
115
Sarah Koslov (n 111) 200.
116
Andrew D Selbst, Danah Boyd, Sorelle A. Friedler, Suresh Venkatasubramanian and
Janet Vertesi, ‘Fairness and Abstraction in Sociotechnical Systems’, in Proceedings of the
Conference on Fairness, Accountability, and Transparency (ACM 2019)
<https://dl.acm.org/doi/10.1145/3287560.3287598> accessed 6 April 2022.
117
Yuval Noah Harari, ‘Why Technology Favors Tyranny’ (The Atlantic, 30 August 2018)
<https://www.theatlantic.com/magazine/archive/2018/10/yuval-noah-harari-technology-
tyranny/568330/> accessed 6 April 2022.


Many moderation decisions require nuanced legal analysis: whether a
statement made is actionable under existing laws, whether a posted video
contains material relating to sexual assault, and so on.118 Context-based
decisions are becoming ever more relevant, and any wrong decision has
implications for the platform and for the rights of the people.119 With
more content, more false positives will enter the mix of what is detected.
Algorithms produce results based on correlation, not causation, and
produce decisions at the population level, not the individual level.120
The decisions follow a predetermined structure and leave no room for
discretion, which in a way undermines human dignity.121

Rather than regulation by law, which in a way hinders freedom of speech,
technology should be tackled using technology. Artificial Intelligence
(AI) and Machine Learning (ML) can be used to micro-target posts and
profiles which have been reported by users, which would be a better way of
regulating. The use of emoji reactions to customise feeds should not be
allowed, and the user’s data should be reset every seven days to diversify
the feed. Micro-level flagging of posts should also be allowed: for
example, messages or profiles which engage in such activities should be
scrutinised more quickly, and a fast-track reporting option should be
provided to the
118
Nina Brown, ‘Regulatory Goldilocks: Finding the Just and Right Fit for Content
Moderation on Social Platforms’ (2021) 8 Texas A&M Law Review 451, 475.
119
Robert Gorwa, Reuben Binns and Christian Katzenbach, ‘Algorithmic Content
Moderation: Technical and Political Challenges in the Automation of Platform Governance’
(2020) 7(1) Big Data & Society,
<https://journals.sagepub.com/doi/epub/10.1177/2053951719897945> accessed 6 April
2022.
120
Lorna McGregor, ‘Accountability for Governance Choices in Artificial Intelligence:
Afterword to Eyal Benvenisti’s Foreword’ (2018) 29 European Journal of International Law
1079.
121
Eirini Kikarea and Maayan Menashe, ‘The Global Governance of Cyberspace:
Reimagining Private Actors’ Accountability: Introduction’ (2019) 8 Cambridge
International Law Journal 153, 155.


people. This should also make the person reported accountable, and a
record of such reports should be maintained. The biggest issue amongst all
of this is language. The existence of numerous languages in India also
acts as a hurdle to a uniform standard; hence, a diversified panel is
required.

Relying entirely on AI would not be beneficial, because AI would fail to
identify contextual distinctions, and its effectiveness would be reduced
as the algorithm alone will not suffice.122 AI training needs years of
“human hand-holding” to reach an understanding of contextual
background.123 But a problem as big as this, created by technology, can
only be addressed using technology. Law can guide up to a certain extent,
but action has to be taken to incorporate technology, which is the only
way forward.

122
Rebecca Cambron, ‘World War Web: Rethinking “Aiding and Abetting” in the Social
Media Age’ (2019) 51 Case Western Reserve Journal of International Law 293, 307.
123
Assaf Baciu, ‘Artificial Intelligence Is More Artificial Than Intelligent’ (Wired, 7
December 2016) <https://www.wired.com/2016/12/artificial-intelligence-artificial-
intelligent/> accessed 30 March 2022.


A.I.: PERPETUATOR OF RACISM AND COLOURISM


Tejasvati Singh
(Student at National University of Advanced Legal Studies, Kochi)

Abstract
This article will deal with the causes of the inherent bias within A.I. as
well as its subsequent effect on the perception of beauty and law
enforcement, among other areas. It will also discuss its negative and
unforeseen impact on the everyday lives of people, the legal and other
measures people have taken against it, as well as steps taken towards
resolving this issue along with other possible solutions.

I. Introduction

In current times, we are exposed to and influenced by Artificial Intelligence


(AI) in almost every sphere of our lives – be it banking transactions, online
shopping, or social media apps. But what happens when AI is biased? A
growing body of research124 indicates that bias in AI can lead to
discriminatory outcomes, especially for minority populations and women.
This is especially relevant in an age where large corporations and banks
use AI to determine who is eligible for a job interview125 or a loan.

124
Patrick Grother, ‘NIST Study Evaluates Effects of Race, Age, Sex, Gender on Face
Recognition Software’ (NIST, 19 December 2019) <https://www.nist.gov/news-
events/news/2019/12/nist-study-evaluates-effects-race-age-sex-face-recognition-software>
accessed on 13 January 2023; Alex Najibi, ‘Racial Discrimination in Face Recognition
Technology’ (Harvard University, 24 October 2020)
<https://sitn.hms.harvard.edu/flash/2020/racial-discrimination-in-face-recognition-
technology/> accessed 13 January 2023.
125
Sheridan Wall and Hilke Schellmann, ‘We tested AI interview tools. Here’s what we
found.’ (MIT Technology Review, 7 July 2021)


Facial Recognition Technologies are at the centre of this controversy.
Personal cameras have imaging chips with pre-set ranges for skin tones,
making it technically impossible to accurately capture the full variety of
complexions.126 Moreover, digital technologies are narrowing beauty
standards, with the algorithms of apps such as Twitter and Instagram
recognising and promoting fairer skin tones, big eyes, and plump lips.127
An especially dangerous outcome of such biases is the number of false
arrests made by police in the U.S. on the basis of Facial Recognition
Technologies, which misidentify minority groups and women more often than
white males.128

Yet, police and prosecutors in most of the U.S. are not required to inform
people arrested for crimes that Facial Recognition Technologies played a
role in the investigation that led to their arrest. They can hide the use
of such technologies in their warrants and affidavits behind phrases such
as ‘investigative means’, which makes it difficult for an attorney to
discover their use unless s/he knows the tactics the police employ to
shroud it.129 There is an urgent need for transparency

<https://www.technologyreview.com/2021/07/07/1027916/we-tested-ai-interview-tools/>
accessed 13 January 2023.
126
Tate Ryan-Mosley, ‘How digital beauty filters perpetuate colourism’ (MIT Technology
Review, 15 August 2021) <https://www.technologyreview.com/2021/08/15/1031804/digital-
beauty-filters-photoshop-photo-editing-colorism-racism/> accessed 14 January 2023.
127
ibid.
128
Catherine Kenny, ‘Artificial Intelligence: Can We Trust Machines to Make Fair
Decisions?’ (UC Davis, 13 April 2021) <https://www.ucdavis.edu/curiosity/news/ais-race-
and-gender-problem> accessed 14 January 2023.
129
T.J. Benedict, ‘The Computer Got it Wrong: Facial Recognition Technology and
Establishing Probable Cause to Arrest’ (2022) 79(2) Washington & Lee Law Review
<https://scholarlycommons.law.wlu.edu/cgi/viewcontent.cgi?article=4773&context=wlulr>
accessed 13 January 2023.


regarding the use of AI, including Facial Recognition Technologies, in
order to ensure fairness and foster trust among people about the use of
such technology in their lives.

II. Cause of the Bias

It is important to state at this juncture that AI itself is not
intentionally biased.130 Data is the key factor upon which AI and machine
learning algorithms rely, and this data seldom represents minority
populations and women. This is because the decisions about which data to
use, and how to use it, still lie with people. AI has no moral compass;
its performance simply reflects that it has been trained mostly on white
faces and thus associates them with normality. This type of bias is known
as Algorithmic AI Bias, or data bias, wherein algorithms are trained using
biased data. It brings to light the fundamental issue of a lack of
diversity in the workplace, which gives rise to the bias and to its lack
of discussion in the first place.

The second type of bias is Societal AI Bias, wherein societal norms and
traditions leave people with certain blind spots in their thinking. This
type of bias heavily influences the aforementioned Algorithmic Bias;
therefore, progress against Societal Bias helps improve Algorithmic Bias
as well.
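The consequence of such data bias can be made concrete by computing a system’s error rate separately for each demographic group rather than in aggregate. The counts below are invented for illustration and merely echo the pattern of disparity reported in the studies cited above.

```python
# Toy audit of per-group false-match rates for a face-recognition system.
# All counts are hypothetical; they echo the pattern (higher error rates
# for groups under-represented in training data) found in the cited studies.

def false_match_rate(false_matches, comparisons):
    """Fraction of non-matching pairs the system wrongly declared a match."""
    return false_matches / comparisons

# (group, false matches, non-matching comparisons) -- invented numbers.
audit = [
    ("group_a", 10, 10_000),   # well represented in training data
    ("group_b", 120, 10_000),  # under-represented in training data
]

rates = {group: false_match_rate(fm, n) for group, fm, n in audit}
disparity = rates["group_b"] / rates["group_a"]
# group_a: 0.001, group_b: 0.012 -- a twelvefold disparity, exactly the
# kind of gap a single aggregate accuracy figure would conceal.
```

A disaggregated audit of this kind is the minimal form of the transparency the article calls for: a single headline accuracy number can look excellent while one group bears almost all of the errors.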

III. Solutions

130
Kenny (n 128).


In certain areas there is a pressing need for change, whereas in others
developments have already been made and merely need to be followed in the
same vein:
 There is a burgeoning necessity to educate and train people regarding
the biases that A.I. possesses, to prevent the further perpetuation of
racial and colourist biases and inequalities in society.

 The issue of bias in AI cannot be discussed without also addressing the
underlying issue of lack of diversity in the workplace. As per Mozilla’s
2020 Internet Health Report, 80% of the workforce at Amazon, Google,
Facebook, and Microsoft consists of males, and there has been negligible
growth in Native American, Latinx, and Black employees at these companies
since 2014. The fewer people of colour employed in a workplace, the lower
the chances that issues affecting them will be discussed and solved.
Therefore, there is an urgent need to diversify the workplace and employ
more women and people of colour. This will help employees tackle biases at
the stage of training the AI, as the images used to train it will be
varied instead of predominantly white.

 In June 2019, San Francisco became one of the first U.S. cities to ban
the use of facial recognition technologies by the police and other
departments, with the State of California also imposing a three-year
moratorium on such use from January 2020.131 A similar strategy must be
followed by other states until the discriminatory biases in AI are
resolved and the technology is well tested before being deployed in the
field. Policies framed

131
ibid.


to monitor and direct its use must be strictly followed, as must the
punishments for non-compliance. It is crucial to hold the police
accountable for complete reliance on AI and for shoddy detective work in
investigations, so as to prevent more cases like those of Robert Williams,
Nijeer Parks, and Michael Oliver.

 Google has helped reduce colour bias by unveiling a new ten-shade skin
tone scale that is more representative of the different skin tones in the
world and can more accurately test AI for bias.132 This new scale, called
the Monk Skin Tone Scale, replaced the flawed six-colour Fitzpatrick
Skin Type standard, which was proven to result in colour bias as it
underrepresented people with darker skin. Microsoft and IBM have also
pledged to reduce such bias by improving their data collection
methods.133 Other multi-national companies that make use of AI
should follow the same strategy to help reduce bias.
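The kind of stratified testing a skin tone scale enables can be sketched in a few lines. The snippet below is purely illustrative and is not Google's actual tooling: the function name, the toy data, and the bucket numbers are this example's own. It shows the underlying idea — compare a face classifier's error rate across annotated skin-tone buckets, which is what makes under-performance on darker tones measurable in the first place.

```python
# Illustrative bias audit: error rate of a face classifier, stratified by
# an annotated skin-tone bucket (e.g. 1-10 on a ten-shade scale).
from collections import defaultdict

def error_rate_by_tone(predictions):
    """predictions: iterable of (tone_bucket, predicted_label, true_label)."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for tone, predicted, actual in predictions:
        totals[tone] += 1
        if predicted != actual:
            errors[tone] += 1
    # Per-bucket error rate; a fair system would show similar rates everywhere.
    return {tone: errors[tone] / totals[tone] for tone in totals}

# Hypothetical test set: tone bucket 9 is misclassified far more often
# than tone bucket 2, the pattern the audits described above detect.
sample = [(2, "match", "match")] * 95 + [(2, "match", "no-match")] * 5 \
       + [(9, "match", "match")] * 70 + [(9, "match", "no-match")] * 30
rates = error_rate_by_tone(sample)
print(rates)  # {2: 0.05, 9: 0.3}
```

A real audit would use labelled photographs rather than tuples, but the disparity it reports reduces to exactly this per-bucket comparison.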

 AI is already being used in a myriad of fields, such as improving
healthcare, suggesting what movies to watch on a streaming service,
surveillance, etc., and this trend shows no sign of stopping any time soon.
Therefore, it is essential now more than ever that there is transparency in
AI. For us to trust the decisions that the AI systems used by companies
make, the basis for their decision-making should be accessible to all in
order to ensure fairness.

132 ‘Google unveils new 10 shade skin-tone scale to test AI for bias’ (The Economic Times, 12 May 2022) <https://economictimes.indiatimes.com/tech/technology/google-unveils-new-10-shade-skin-tone-scale-to-test-ai-for-bias/articleshow/91506703.cms> accessed 14 January 2023.
133 Alex Najibi, ‘Racial Discrimination in Face Recognition Technology’ (Harvard University, 24 October 2020) <https://sitn.hms.harvard.edu/flash/2020/racial-discrimination-in-face-recognition-technology/> accessed 13 January 2023.

Acing the AI: Artificial Intelligence and its Legal Implications

IV. AI Bias in Beauty Standards

Beauty standards are constantly evolving, nationally and internationally. The
cost of meeting such lofty standards can often be very high, but people
usually comply to avoid being labelled outliers and to maintain a sense of
belonging. The more you comply with a certain beauty trend, the more you
feel like an integral part of the fabric of society, maybe even superior to it.
Young people, especially teenagers, often fall prey to such unhealthy trends.
The teenage years are a tumultuous time for many, when a person is not yet
self-confident and is often impressionable. Statements made about a
person’s physical appearance in these vulnerable years can leave a
long-lasting impression.

Today, a person need not be physically present to make hurtful comments
about someone’s appearance – conveying them through social media is
enough. But what happens when the algorithms on which social media sites
are based also promote the same discriminatory ideals?

Colourism is one such prejudice perpetuated through social media.
Colourism in itself has existed for thousands of years, and although it has ties
to racism, the difference lies in the fact that it affects people regardless of
their race and can affect people of the same background differently.134
People with darker complexions in India have historically been placed lower

134 Ryan-Mosley (n 126).


in the caste system.135 Light skin is associated with beauty and nobility in
China.136 People of many different races experience colourism in the US
because this prejudice is primarily based on appearance rather than race.
During slavery, Black Americans with lighter skin were assigned more
domestic chores, while those with darker skin were far more likely to labour
in the fields.137 In current times, digital colourism has emerged due to the
widespread use of selfies and face filters.138 As per a report by Snapchat,
about 200 million people use its ‘Snapchat Lenses’ feature every day, and
some of them use it to lighten their skin tone. Instagram and TikTok have
automatic image-enhancing features and filters that bring about the same
effect, and this is done in an almost imperceptible, subtle manner.139 As
mentioned before, there are pre-set skin tone ranges in the imaging chips in
personal cameras that prevent the accurate portrayal of darker skin tones. As
recently as 2020, Twitter came under fire for its image cropping tool, which
preferred faces that were ‘lighter, thinner and younger’, thus reinforcing the
popularity of people with a lighter skin tone over those with a darker skin
tone.140 Digital technologies are thus continuing to narrow beauty standards.

135 ‘Skin colour tied to caste system, says study’ (Times of India, 21 November 2016) <https://timesofindia.indiatimes.com/india/skin-colour-tied-to-caste-system-says-study/articleshow/55532665.cms> accessed 14 January 2023.
136 Zhang Xi, ‘Chinese consumers obsessed with white skin bring profits for cosmetics companies’ (The Economic Times, 20 November 2011) <https://economictimes.indiatimes.com/news/international/chinese-consumers-obsessed-with-white-skin-bring-profits-for-cosmetics-companies/articleshow/10796591.cms> accessed 14 January 2023.
137 Verna M. Keith and Cedric Herring, ‘Skin Tone and Stratification in the Black Community’ (1991) 97(3) American Journal of Sociology <http://www.jstor.org/stable/2781783> accessed 14 January 2023.
138 Ryan-Mosley (n 126).
139 ibid.
140 Alex Hern, ‘Student proves Twitter algorithm “bias” toward lighter, slimmer, younger faces’ (The Guardian, 10 August 2021) <https://www.theguardian.com/technology/2021/aug/10/twitters-image-cropping-algorithm-prefers-younger-slimmer-faces-with-lighter-skin-analysis> accessed 14 January 2023.

There are also flourishing AI-based ‘facial assessment tools’ on websites
such as Quoves and on the world’s largest open facial recognition platform,
Face++.141 These rate the attractiveness of a face, detect faults with it, and
recommend solutions involving injectable or surgical enhancements to
rectify them. However, the detection of these ‘faults’ carries unfortunate
racist biases.142 Economist Lauren Rhue found that the system on Face++
consistently rated women with lighter skin tones higher on the attractiveness
scale than those with darker skin tones. The same phenomenon was observed
for women with Eurocentric features – smaller noses and lighter hair were
rated higher than other features, regardless of skin tone. This reflects a
Eurocentric bias in the people who graded the images used to train the AI, a
bias that was thus codified and amplified.

The same bias is suspected to be reflected in dating services and social
media platforms that build their recommendation algorithms on these
prejudiced, discriminatory, colourist and racist beauty-scoring AI systems.143
This further establishes how influential AI has become in shaping people’s
likes and dislikes, be it on a social media app or an online dating service.

V. AI Bias in Law Enforcement
141 Tate Ryan-Mosley, ‘I asked an AI to tell me how beautiful I am’ (MIT Technology Review, 5 March 2021) <https://www.technologyreview.com/2021/03/05/1020133/ai-algorithm-rate-beauty-score-attractive-face/> accessed 14 January 2023.
142 ibid.
143 ibid.


AI-based pattern-matching technologies such as facial recognition
technologies, video analytics and anomaly detection, among others, are
being used by law enforcement agencies such as the police out of public
view, in the shadows.144 As mentioned before, police and prosecutors in
most of the US are not required to inform arrested people that facial
recognition technologies played a role in their arrest, and this has been
established by court cases.145

As stated in a recent study by Davidson and PhD student Hongjing Zhang,
anomaly detection algorithms, if used for surveillance purposes, are more
likely to detect the faces of people of colour and thus more likely to predict
that they are anomalies.146 This amounts to computer-aided racial profiling.
Considering that AI-based anomaly detection is applied to people who are
then seen as performing unusual behaviours, fairness in making that
determination is essential, especially as it has become an increasingly
common tool used by the police.

(a) Wrongful Arrests

In January 2020, Robert Williams was arrested147 on suspicion of stealing
five watches from a Shinola shop in Detroit. He had been wrongfully
identified by facial recognition software; Robert was one of the first people
known to be a victim of wrongful accusation by such software. In 2019,
both Michael Oliver and Nijeer Parks were wrongfully arrested148 after
144 Kenny (n 128).
145 Lynch v State [2018] 260 So.3d 1166; People v Knight [2020] 130 N.Y.S.3d 919.
146 ibid.
147 Khari Johnson, ‘How Wrongful Arrests Based on AI Derailed 3 Men’s Lives’ (Wired, 7 March 2022) <https://www.wired.com/story/wrongful-arrests-ai-derailed-3-mens-lives/> accessed 13 January 2023.
148 ibid.


facial recognition software misidentified them. The three cases had some
common points.

 Michael and Nijeer had prior criminal records,
 Michael and Robert had been investigated by the same detective,
 All three men were married, as well as Black,
 The repercussions of their arrest extended beyond the time they were
in jail, and affected their bonds with their family, friends, neighbours,
co-workers, etc.,
 They possessed hardly any knowledge about facial recognition before
their arrest, but now wish to ban or suspend its use in criminal
investigations.
While all three cases were dropped, Nijeer’s case took about a year,
including ten days in jail.

Apart from the commonalities mentioned before, another common thread
running through these cases is the sloppy, or more accurately, utter lack of
investigative work by the officers in charge, and their complete reliance on a
technology based on an AI known to be biased against people of colour.

i. The Nijeer Parks Case


In 2019, Parks was accused of shoplifting candy and snacks from a Hampton
gift shop in New Jersey.149 According to police reports, the shoplifter almost
ran over a police officer with a car while escaping and left a fake Tennessee
driver’s license at the scene. A Real Time Crime Centre

149 Johnson (n 147).


received the photo from the fake ID and identified Nijeer as a ‘high profile
match’ through its facial recognition system. When he learnt that the police
had approached his grandmother’s home regarding the case, he went to the
police station himself in an attempt to clear his name. He was, however,
arrested.

After his first appearance in court, three days after his arrest, Nijeer was not
released. This led him to wonder how long a sentence he faced, considering
that the charges of assault, theft, and eluding arrest carry long terms;
according to the complaint, his maximum sentence could have been up to
twenty-five years. His previous drug-related charges also weighed heavily
on him, and a plea deal, instead of an opportunity to prove himself innocent
at trial, seemed the more attractive option, especially as the prosecutors kept
pushing him to accept one. However, when he got a new phone about six
months later, he found among his old photos a receipt for a money transfer
to his fiancée that conclusively placed him at a location thirty miles away
from the gift shop, thus proving him innocent. Even then, it was only after
several more months that the charges against Nijeer were dropped.

He thereupon filed a lawsuit150 in federal court in New Jersey alleging false
arrest, false imprisonment, a violation of his constitutional rights against
unlawful search and seizure, and cruel and unusual punishment, against the
Director of the Woodbridge Police Department, some local officials, and
Idemia, the company that made the facial recognition system. In the lawsuit,
he claimed that the police did not employ conventional investigative
methods, such as having potential witnesses look at Parks or his

150 ibid.


photograph in person. The lawsuit also asserted that the police disregarded
DNA and fingerprint evidence the actual suspect had left at the crime scene,
which indicated that Parks was not the suspect. Parks sought damages for
loss of wages and emotional harm; no date had been set for the trial.

Nijeer shared this ordeal only with close family and friends, in part due to his
prior criminal record. According to him, the incident divided the people he
knew into two factions: those who stood by him, and those who wondered if
he had actually committed the crime and hence kept their distance. His
fiancée was part of the latter faction. Nijeer did not discuss his case with his
ten-year-old son while he was fighting it, but he did so in May 2021 after
they watched a sixty-minute segment regarding the use of facial recognition
by the police. This led to a rite of passage that African American families
sometimes refer to as ‘The Talk’, where they discussed the different ways in
which a Black person is supposed to interact with the police for their own safety.

All parties to the lawsuit denied the allegations against them, while Idemia
refused to comment on the matter.151

ii. The Michael Oliver Case


Michael had a home and a family, including his wife and son, and was the
economic backbone of the household. When he was arrested, he lost his job,
and the stability of his former life and family was thrown into disarray. He
had been arrested for allegedly snatching a smartphone from a teacher who
was recording a fight outside a school and throwing it to the ground,
breaking it.

151 ibid.


Facial recognition software identified Michael through a screenshot of the
video the teacher had recorded. According to Michael’s public defender,
there were several irregularities in how the case was handled. For starters,
the detective investigating the case failed to question Michael. He also did
not review the recording of the incident, which is probably why he failed to
notice that the assailant in the video had no visible tattoos, while Michael
had several. The detective seems to have used facial recognition software as
a shortcut, relying on it heavily and not performing any basic investigative
work.152

Oliver filed a federal lawsuit153 against the city of Detroit and Detective
Donald Bussa in Michigan in October 2020, seeking damages for economic
loss and emotional distress. The lawsuit asserted that Bussa misrepresented
information in the search warrant, including the teacher’s initial
identification of a former pupil, and that the detective failed to speak with
numerous witnesses or with the school where the altercation occurred. Oliver
requested a court order prohibiting the Detroit police from using facial
recognition software until issues with the software’s performance on people
of various races, ethnicities, and skin tones are addressed. The lawsuit
demanded that investigating officers be obligated to inform judges reviewing
arrest warrants that the quality of an image can affect the performance of
the software and the accuracy of its results. Oliver’s attorney also wanted the
police to reveal the other images returned by the facial recognition software

152 T.J. Benedict, ‘The Computer Got it Wrong: Facial Recognition Technology and Establishing Probable Cause to Arrest’ (2022) 79(2) Washington & Lee Law Review <https://scholarlycommons.law.wlu.edu/cgi/viewcontent.cgi?article=4773&context=wlulr> accessed 13 January 2023.
153 Johnson (n 147).


besides Michael’s, as well as the software’s accuracy in an area where
predominantly Black and brown people reside. All parties to the lawsuit
denied the allegations against them.

iii. The Robert Williams Case


Williams was arrested on his front lawn, in the presence of his wife and
children, and held by police for thirty hours. His four-year-old daughter was
heavily impacted after witnessing the arrest and still fears that her father
might be taken away again. As mentioned before, Robert was wrongfully
identified by facial recognition technology and accused of stealing watches.
His case was eventually dropped, but the arrest had long-term consequences,
including multiple strokes he has suffered since, and his family members are
still suffering because of it. He also filed a lawsuit154 against former Police
Chief James Craig and Detective Bussa in the federal court of Michigan. It
alleged that they did not take Robert’s and Michael Oliver’s alibis into
account and relied only on facial recognition software for the arrests. The
lawsuit also claimed that Detective Bussa could not question any Shinola
employee present at the time of the crime, as the company does not like its
employees to appear in court. Detective Bussa instead showed six
photographs to a security guard who was not present at the time of the
crime, and the guard ‘identified’ Williams from the line-up. The suit also
mentioned that the reliance on facial recognition software was so heavy that
nobody from the police department asked Robert Williams where he was on
the day the crime occurred.

154 ibid.


Craig later informed the police department that Detective Bussa had realised
his mistake four days after Robert’s arrest, when he reviewed security
camera footage and discovered that the thief was a different person. Craig
had told police commissioners in July 2019 that facial recognition
technology could never be used as the sole basis for arresting a person;
Michael had been arrested a week after Craig made this statement. In
September 2019, the commissioners adopted a policy instructing officers to
use facial recognition software only in the event of violent crimes like
homicides or home invasions, and made violation of it a fireable offence.
Four months later, Robert Williams was arrested for shoplifting.

After learning about Robert Williams’s botched arrest, Craig stated that it
was the result of sloppy investigative work and that nothing was wrong with
the facial recognition software. He also stated that Robert and Michael had
been arrested through the software before the policy governing it was
enforced. While Craig has since admitted155 that this facial recognition
software identifies the wrong person 90% of the time, this sudden change of
opinion most probably had more to do with his being the Republican
candidate for Governor of Michigan than with an actual admission of the
obvious flaws in facial recognition software.

VI. Bias in Hate Speech Detecting Algorithms

Social media giants such as YouTube, Facebook, Twitter, and others are
counting on AI technology to help reduce or control the spread of racist,

155 ibid.


violent hate speech on their platforms,156 as violence influenced by hate
speech, such as mass shootings, grows in number. These companies are
banking on the notion that algorithms using natural language processing can
flag such dangerous content faster than humans can.

Two recent research studies,157 however, suggest that AI taught to recognise
hate speech may instead wind up reinforcing racial bias. In one study,158
researchers discovered that tweets published by African Americans were 1.5
times more likely to be flagged as hateful or offensive, and 2.2 times more
likely to be flagged if they were written in African American English (which
is commonly spoken by Black people in the US). Similar widespread
evidence of racial bias was discovered by another study in five commonly
used academic data sets for analysing hate speech, which together included
approximately 155,800 Twitter posts.159
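The disparity these studies measure is, at its core, a ratio of per-group flagging rates. The sketch below is a hedged illustration only: the classifier is a toy stand-in keyed on surface vocabulary (mimicking the context-blindness described in the text), and the posts and blocklisted word are hypothetical, not drawn from the cited datasets or models.

```python
# Toy audit: compare how often a classifier flags posts from two groups.
def flag_rate(posts, classifier):
    """Fraction of posts the classifier flags as offensive."""
    return sum(1 for post in posts if classifier(post)) / len(posts)

def disparity_ratio(group_a, group_b, classifier):
    """Ratio > 1 means group A's posts are flagged more often than group B's."""
    return flag_rate(group_a, classifier) / flag_rate(group_b, classifier)

# Hypothetical context-blind classifier: it flags any post containing a
# blocklisted term, regardless of who wrote it or how the word is used.
BLOCKLIST = {"slurword"}
def toy_classifier(post):
    return any(word in BLOCKLIST for word in post.lower().split())

group_a = ["slurword reclaimed greeting", "ordinary post",
           "slurword in-group joke", "news link"]       # flagged 2 of 4
group_b = ["ordinary post", "slurword used as a slur",
           "news link", "sports talk"]                  # flagged 1 of 4
print(disparity_ratio(group_a, group_b, toy_classifier))  # 2.0
```

A dialect that uses a reclaimed term benignly but frequently is flagged at double the rate here, even though only one post in each group is genuinely abusive — the same mechanism the 1.5x and 2.2x figures above point to.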

A major reason160 for this is that social context determines what is
considered offensive. For example, the words ‘queer’ and the ‘n-word’ can
be considered offensive and derogatory in certain contexts and entirely

156 Shirin Ghaffary, ‘The algorithms that detect hate speech online are biased against black people’ (Vox, 15 August 2019) <https://www.vox.com/recode/2019/8/15/20806384/social-media-hate-speech-bias-black-african-american-facebook-twitter> accessed 14 January 2023.
157 Maarten Sap, Dallas Card, Saadia Gabriel, Yejin Choi and Noah A. Smith, ‘The Risk of Racial Bias in Hate Speech Detection’ (University of Washington, 2019) <https://maartensap.com/pdfs/sap2019risk.pdf> accessed 14 January 2023; Thomas Davidson, Debasmita Bhattacharya and Ingmar Weber, ‘Racial Bias in Hate Speech and Abusive Language Detection Datasets’ (Association for Computational Linguistics, 2019) <https://aclanthology.org/W19-3504> accessed 14 January 2023.
158 Sap et al (n 157).
159 Davidson et al (n 157).
160 Ghaffary (n 156).


harmless or even empowering in others. However, neither the algorithms nor
the content moderators who label the data used to train them are necessarily
aware of the context of the comments being reviewed.

These studies exposed flaws in the natural language processing algorithms
behind such AI, which had been portrayed as an objective tool for
identifying offensive language. The papers, presented at a prestigious annual
conference on computational linguistics, brought to light the fact that not
only can this AI amplify biases that humans already hold, but the training
data itself can leave the AI with a baked-in bias.

It is not known beyond doubt that the content moderation systems of
Facebook, Twitter, and Google show the same biases found in these studies,
since these businesses use proprietary technology to moderate content. But
the big tech companies frequently consult academics on how to effectively
enforce their rules against hate speech. Therefore, if leading researchers are
identifying weaknesses in widely used academic data sets, it poses a serious
issue for the tech sector as a whole.

VII. Other Areas of AI Bias

Interestingly enough, the effects of this bias can also be seen in other areas.
Driverless cars have been found to be more likely to hit people of colour, as
the object detection systems they use to identify pedestrians find it difficult
to recognise people with darker skin tones.161 As per the FDA, pulse
oximeters used to measure oxygen saturation levels in COVID-19 patients

161 Kenny (n 128).


also do not work accurately with people who have darker skin
pigmentation.162

VIII. Conclusion

Artificial intelligence is undoubtedly a tool with great potential and is
already being applied in a myriad of streams, including but not limited to
surveillance, driving, healthcare, and limiting hate speech. The inherent
biases in AI, however, have life-changing impacts on the people it touches.
They can result in people being unfairly accused of crimes of which they are
innocent, or being unreasonably discriminated against simply because of
their skin tone. Now that we know the biases in AI are a reflection of the
people who trained and codified it, untangling these biases and creating
truly objective AI is a goal that can be worked towards steadily. The data
collection methods that form the basis of the data used to train AI must be
made more inclusive of people of different races and genders, with the
ultimate aim of creating an AI that can aid humans in achieving their true
potential, regardless of their sex or skin colour.

162 Jacqueline Howard, ‘FDA panel examines evidence that pulse oximeters may not work as well on dark skin’ (CNN, 1 November 2022) <https://edition.cnn.com/2022/11/01/health/pulse-oximeters-fda-meeting/index.html> accessed 14 January 2023.


AI IN LAW: NOT A GAMBLE ON MORALITY BUT A FACILITATOR OF PRECISION
Aryan Raj and Siddhi Rupa
(Students at Chanakya National Law University, Patna)

Abstract
Artificial intelligence refers to a computer system capable of carrying
out tasks that typically require human intelligence. Machine learning
(ML), which involves deriving knowledge and rules from data, is the
driving force behind artificial intelligence (AI) systems. The law is
highly amenable to the use of AI and machine learning in many
respects. Machine learning and law are based on surprisingly similar
ideas: both use previous instances to infer rules that will apply to new
circumstances. This sort of reasoning is precisely the kind of
endeavour to which artificial intelligence may be successfully applied.

The number of cases still pending in Indian courts is enormous. The
paper discusses “How can AI benefit the legal system in India?” It is
debatable whether AI is equivalent to human knowledge and how far
it can perceive morality. However, it can help create “smart courts”,
primarily by assisting in addressing the issue of pendency. An AI
system can recommend laws and regulations to judges, prepare legal
documents, and correct what it perceives to be human errors in a
verdict. Through the use of AI, the precision of legal research,
litigation prediction, contract analytics, speed, and other aspects could
also be significantly improved.


Aside from case hearings, courts also need to handle various
administrative tasks, like planning and organising different trials and
managing official communications. By automating such tasks through
AI, the judicial system could concentrate on its primary duty of
providing prompt justice. AI has already been used extensively to
support judges during bail and parole hearings. For instance, in US
courts, the AI-based “Public Safety Assessment” (PSA) tool assists
judges in such hearings by generating a risk score after accounting for
various previously established parameters. In addition, the paper
explains how enabling hybrid courtrooms will facilitate quicker access
to justice and prevent delays in lawyers’ appearances caused by
conflicting court matters.

I. Introduction

Artificial intelligence is the subject of extensive research and policy
implementation; big businesses and governments around the world are
evaluating the possibility and viability of incorporating AI into various
sectors for better, quicker, and more reliable outcomes. The legal sector is
one of those where the viability of AI integration is being studied and
critically analysed. John McCarthy,163 who coined the term “artificial
intelligence”, defined it as “making a machine behave in ways that would be
called intelligent if a human were so behaving.”

163 Dr. Paul Marsden, ‘Artificial Intelligence Defined: Useful list of popular definitions from business and science’ (Digital Wellbeing, 4 September 2017) <https://digitalwellbeing.org/artificial-intelligence-defined-useful-list-of-popular-definitions-from-business-and-science/> accessed 28 December 2022.


Much is being said about the use of artificial intelligence in law. In a
country like India, where five crore cases are pending164 and the backlog is
not expected to clear in the near future, or even within two decades,
artificial intelligence has raised great expectations among judges, legal
professionals, and the general public. While delivering his keynote address
on ‘Navigating AI and Technology Disputes via Arbitration’ in May 2022 in
Dubai, Hon’ble Mr Justice DY Chandrachud said that “law and arbitration
must keep up with technological advancements and the increasing use of
artificial intelligence in daily life in order for the existing adjudicatory
system to resolve new generational disputes”,165 which shows that present
judicial authorities are comfortable with the incorporation of artificial
intelligence in the legal sector.

AI is predominantly expected to address the backlog of pending cases in
India and provide relief to a legal system struggling under their weight. The
efficiency of Indian courts is hampered by a massive backlog of unresolved
cases, which ultimately delays access to justice; some cases have been
pending for close to three decades. Further, AI is anticipated to improve the
efficiency, speed, and features of legal research, which can elevate the
quality of justice.

164 ‘Pendency of 5 Crore Court Cases a Matter of Grave Concern: Kiren Rijiju’ (The Hindu, 6 December 2022) <https://www.thehindu.com/news/national/pendency-of-5-crore-court-cases-a-matter-of-grave-concern-kiren-rijiju/article66231956.ece> accessed 28 December 2022.
165 Dhananjay Mahapatra, ‘Law must keep up with tech progress: Justice Chandrachud’ (The Times of India, 21 March 2022) <https://timesofindia.indiatimes.com/india/law-must-keep-up-with-tech-progress-justice-chandrachud/articleshow/90341606.cms> accessed 28 December 2022.


The Indian judiciary has taken some significant steps to incorporate AI into
its operations in order to meet present demands. The Supreme Court of India
established an Artificial Intelligence Committee166 in 2019 to investigate
how AI might be used in the judicial system. In 2021, the Supreme Court
launched its first AI platform, “SUPACE”,167 the Supreme Court Portal for
Assistance in Court Efficiency. SUPACE uses machine learning to handle
enormous volumes of case data and assist judges in legal research for
efficient justice delivery. Using the SUPACE program, a judge can access
relevant information, as it compiles pertinent facts and legal rulings.

In 2022, the Supreme Court fully introduced the Supreme Court Vidhik
Anuvaad Software (SUVAS),168 an artificial intelligence (AI) software that
can translate judgements and orders into nine vernacular languages
(Assamese, Bengali, Hindi, Kannada, Marathi, Odia, Tamil, Telugu, and
Urdu). The declared purpose of this launch was to give the
non-English-speaking public access to judgements and orders and to
promote a better understanding of judicial proceedings.

However, not everyone is supportive of the use of AI in the legal system; a
number of concerns have been expressed, including the question of whether
an AI algorithm can take on the role of a lawyer, and whether AI will have a
detrimental impact on job opportunities in the legal sector. Practically
speaking, AI cannot take the position of a lawyer in court, but it is capable
of drafting and

166 Pritam Bordoloi, ‘The Power & Pitfalls of AI in Indian Justice system’ (Analytics India, 25 July 2022) <https://analyticsindiamag.com/the-power-pitfalls-of-ai-in-indian-justice-system/> accessed 28 December 2022.
167 ibid.
168 ibid.


creating documents. As a result, the secretarial role of lawyers may be
significantly reduced. AI works more efficiently and tends to make fewer
mistakes, which fuels these concerns. In several areas, including contract
analysis, trademark search software, and legal research software, the legal
industry has seen the development of numerous new solutions that have
increased the productivity of lawyers. However, none of this AI-based
software intends to displace attorneys; rather, by concentrating on results, it
attempts to increase the objectivity and accuracy of research and analysis.

II. Scope of AI-Powered “Smart Courts” in India

According to the National Judicial Data Grid (“NJDG”), 3.93 crore cases are pending in the subordinate courts,169 49 lakh in the high courts, and 57,987 in the Supreme Court.170 A Law Commission Report from 2009 stated that, with the current strength of judges, it would take 464 years to clear the cases.171 In the High Courts, the pendency is acute: half of all the 8 million cases there have been pending for more than three years.172 High backlogs of cases from the lower to the higher courts are not new in India; the problem has persisted since independence.

169 PTI, ‘Over 3.93 Crore Cases Pending in Lower, Subordinate Courts: Govt’ (News18, 4 August 2021) <https://www.news18.com/news/india/over-3-93-crore-cases-pending-in-lower-subordinate-courts-govt-4044947.html> accessed 29 December 2022.
170 Roshni Sinha, ‘Examining pendency of cases in the Judiciary’ (PRS Legislative Research, 8 August 2019) <https://prsindia.org/theprsblog/explainer-code-occupation-safety-health-and-working-condition?page=45&per-page=1> accessed 29 December 2022.
171 ‘Judicial vacancies in the Supreme Court must be filled soon to speed up justice delivery’ (Hindustan Times, 2 January 2018) <https://www.hindustantimes.com/editorials/judicial-vacancies-in-the-supreme-court-must-be-filled-soon-to-speed-up-justice-delivery/story-hfSW8pGam5QqGJzcrLY3hP.html> accessed 30 January 2022.
172 ibid.

AI in Law: Not a Gamble on Morality but a Facilitator of Precision

The Indian judiciary can, however, benefit from significant reforms like “AI-based smart courts” in order to reduce the large backlog of pending cases. Such a court would be automated, drawing on a database of all prior decisions involving comparable facts and situations. It could be useful in resolving cases involving family, marriage, drunk driving, land, and other such issues. For instance, if a person broke the speed limit by driving at 100 km/hr in an 80 km/hr zone, the judge would only enter the keywords (80 km/hr, 100 km/hr) into the system, which would hold all cases with comparable facts and circumstances in its database, and the system would then render a verdict based on the laws and precedents dealing with similar facts and circumstances. Now suppose the driver was also under the influence of alcohol while breaking the speed limit; the judge would then add “driver had consumed alcohol” to the other two keywords, and the verdict would be delivered based on the laws and precedents governing similar situations where the accused broke the speed limit while intoxicated.
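The keyword-based matching described above is, at its core, a retrieval problem. A minimal sketch of the idea follows; it is purely illustrative, and the case records, field names, and subset-matching rule are all hypothetical rather than drawn from any real court system:

```python
# Illustrative sketch of keyword-based precedent retrieval for a
# hypothetical "smart court" database. All records are invented.

# Each past case is tagged with the keywords describing its facts.
CASE_DB = [
    {"id": "C-101", "keywords": {"80 km/hr", "100 km/hr"},
     "outcome": "fine under speeding provisions"},
    {"id": "C-102",
     "keywords": {"80 km/hr", "100 km/hr", "driver had consumed alcohol"},
     "outcome": "fine plus licence suspension for drunk driving"},
    {"id": "C-103", "keywords": {"land dispute"},
     "outcome": "decree for partition"},
]

def find_precedents(query_keywords):
    """Return cases whose fact keywords contain every keyword entered."""
    query = set(query_keywords)
    return [case for case in CASE_DB if query <= case["keywords"]]

# Speeding keywords alone match both speeding precedents...
speeding = find_precedents(["80 km/hr", "100 km/hr"])
# ...while adding the drunk-driving keyword narrows the match to one case.
drunk = find_precedents(["80 km/hr", "100 km/hr",
                         "driver had consumed alcohol"])
```

A real system would rank candidates by textual similarity rather than exact keyword subsets, which is one reason trust in the algorithm, discussed later in this chapter, becomes central.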

India can take inspiration from China’s automation and use of AI in resolving its own backlog of pending cases. The emergence of automated pattern analysis has transformed the way courts operate in China. Since 2014, Chinese courts have uploaded more than 120 million documents to a centralised website named “China Judgments Online”,173 giving the public access to an unprecedented number of court judgements. In recent times, courts across the country have been experimenting with integrating

173 Rachel E. Stern, Benjamin L. Liebman, Margaret E. Roberts and Alice Z. Wang, ‘Automating Fairness? Artificial Intelligence in the Chinese Courts’ (2021) 59 Columbia Journal of Transnational Law 515 <https://scholarship.law.columbia.edu/cgi/viewcontent.cgi?article=3946&context=faculty_scholarship> accessed 30 January 2022.


AI into adjudication by introducing software that evaluates the evidence, offers outcomes, examines the consistency of verdicts, and gives recommendations on how to decide cases. However, Chinese court automation is aimed not only at clearing pending cases at a greater pace but also at monitoring judges and their patterns of delivering verdicts, so that the Chinese Communist Party can keep an eye on those judges and judicial officers who work against the ideology of the party.

India will need to work in this direction with the backing of the leaders of all major political parties, who should support the implementation of “smart courts”. Unlike China, India is a multi-party democracy, and allegations about the motivations behind “smart courts” could reduce public confidence in the judiciary and lead people to believe that the court has turned into a puppet of the government, operating on algorithms fixed by the government. Government interference in the implementation of “smart courts” should therefore be kept to a minimum, although the government will need to provide legal backing through statutes. There is currently no statute in India that deals exclusively with automation and AI; the only statute that even “touches” on this topic is the Information Technology Act, 2000. Despite the modifications to Sections 43A and 72 of the Act,174 many issues remain and there is no protection for AI systems. The United Kingdom and the United States, which have given AI considerable legislative attention by passing specific laws for the protection of AI as well as the humans using it, offer examples of the thorough legislation that India also needs.

174 SS Rana and Co. Advocates, ‘Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011’ (Mondaq, 5 September 2017) <https://www.mondaq.com/india/data-protection/626190/information-technology-reasonable-security-practices-and-procedures-and-sensitive-personal-data-or-information-rules-2011> accessed 30 January 2022.

AI-automated smart courts can be a boon for India. The majority of cases pending in all types of courts can be resolved on the basis of algorithms, but trials for grave offences that impact society as a whole, such as rape, murder, dacoity, robbery, and kidnapping, should follow the classical method, because in these cases the judge’s discretionary powers and obiter dicta matter a great deal.

III. Critical Analysis of the Use of AI in Courts

Artificial Intelligence (AI)-powered smart courts have garnered attention as a means to enhance the effectiveness and accessibility of the legal system in India. However, the sustainability of these courts is a complex issue that must be evaluated in terms of their impact on democratic principles, the financial feasibility of the investment, data security, trust in the algorithm, and the government’s monopoly over the technology, among other things.

(a) Impact on Democratic Principles

Transparency is a fundamental aspect of the rule of law, and the opaque nature of AI systems makes it challenging for citizens to comprehend the decision-making process. This lack of transparency can undermine the principle of accountability, as citizens are unable to hold the government accountable for its actions. Furthermore, the use of AI in the legal system can also lead to the erosion of human rights and civil liberties if decisions are made without proper oversight or due process. Collectively and individually,


the threats to privacy and democracy degrade human values.175 AI can have (and likely already has) an adverse impact on democracy, in particular when it comes to: (i) social and political discourse, access to information, and voter influence; (ii) inequality and segregation; and (iii) systemic failure or disruption.176

Additionally, AI systems are often trained on large datasets and may perpetuate the biases present in the data. This could lead to unfair or discriminatory decisions that disproportionately affect marginalized groups, thus undermining the principle of equal protection under the law.
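How a model trained on skewed historical data reproduces that skew can be seen even in a toy example. The figures below are invented, and the “model” is deliberately simplistic, merely predicting the majority outcome observed for each group:

```python
# Toy illustration of bias perpetuation: invented historical outcomes
# in which one group was denied bail far more often than another.
from collections import Counter

history = (
    [("group_a", "bail granted")] * 80 + [("group_a", "bail denied")] * 20 +
    [("group_b", "bail granted")] * 30 + [("group_b", "bail denied")] * 70
)

def train_majority_model(records):
    """Learn, per group, the most common historical outcome."""
    by_group = {}
    for group, outcome in records:
        by_group.setdefault(group, Counter())[outcome] += 1
    return {g: counts.most_common(1)[0][0] for g, counts in by_group.items()}

model = train_majority_model(history)
# The trained model simply encodes the historical disparity: identical
# future applicants from the two groups receive different predictions.
```

Real systems are far more sophisticated, but the underlying mechanism is the same: whatever disparity exists in the training data tends to resurface in the predictions unless it is deliberately corrected for.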

(b) Judicial Independence

The use of AI in the legal system may have an impact on the principle of
judicial independence. Judicial independence refers to the autonomy of
judges to make decisions based on their own interpretation of the law,
without interference from other branches of government or external
pressures. AI systems, on the other hand, are designed to follow specific
rules and procedures, and they may be less likely to consider extenuating
circumstances or to exercise discretion in a way that is consistent with the
principles of justice.

There is a possibility that AI systems could be used to standardise legal decision-making and limit the discretion of judges. This could lead to a

175 Karl Manheim and Lyric Kaplan, ‘Artificial Intelligence: Risks to Privacy and Democracy’ (2019) 21 Yale J.L. & Tech 106 <https://yjolt.org/sites/default/files/21_yale_j.l._tech._106_0.pdf> accessed 8 January 2023.
176 Catelijne Muller, ‘The Impact of Artificial Intelligence on Human Rights, Democracy and the Rule of Law’ (draft, 2020) ALLAI <https://allai.nl/wp-content/uploads/2020/06/The-Impact-of-AI-on-Human-Rights-Democracy-and-the-Rule-of-Law-draft.pdf> accessed 8 January 2023.


reduction in the diversity of legal outcomes and an increased likelihood of biased or unfair decisions. The possibility of influencing judicial decisions by interfering with the system’s algorithm also calls into question whether courts can satisfy another essential attribute, namely general independence: the judiciary should be independent of the other branches of power.177 Furthermore, if judges rely too heavily on AI systems, it could lead to a lack of accountability for their decisions, as they may be seen as having been made by the AI rather than by the judge.

(c) Huge Financial Investment

The implementation of AI-powered smart courts in India would require significant financial investment. The cost of developing and deploying these systems can be substantial, and it may not be financially sustainable for the government to bear the burden alone. Additionally, the maintenance and updating of these systems over time can also be costly, which could make the investment less viable in the long term. The cost of developing AI-powered smart court systems includes expenses such as software development, data collection and management, and the training of AI models. In addition, the cost of deploying these systems includes expenses such as hardware and infrastructure, as well as the cost of training court staff to use them.

The cost of maintaining and updating these systems over time is also a significant concern. AI systems require regular updates and maintenance to ensure that they continue to function properly and remain in line with the latest technological developments. This includes expenses such as

177 Paweł Marcin Nowotko, ‘AI in Judicial Application of Law and the Right to a Court’ [2021] Faculty of Law and Administration, University of Szczecin.


software updates, data management, and the retraining of AI models. Furthermore, implementing AI systems in the legal system could also increase litigation costs, as citizens may challenge the decisions made by AI-powered smart courts, leading to a rise in legal fees.

In summary, while AI-powered smart courts have the potential to improve the efficiency and accessibility of the legal system in India, the cost of implementing and maintaining these systems over time could be financially draining for the government.

(d) Lack of Trust

One reason is that AI systems are inherently opaque, and it is difficult for citizens to understand how they make decisions. This lack of transparency can lead to a lack of trust in the system, as citizens may question the fairness and impartiality of the decisions made by AI. Most people are likely to have little in-depth knowledge of legal technologies, and perceived risk negatively affects both trust in and the perceived usefulness of such technologies.178

Another reason is that AI systems may perpetuate biases that are present in
the data they are trained on, which could lead to unfair or discriminatory
decisions that disproportionately affect marginalized groups. The majority of
these biases result from either training the algorithms on biased data or

178 Baryse and Dovil, ‘People’s Attitudes towards Technologies in Courts’ Laws 11:71, Institute of Psychology, Vilnius University <https://doi.org/10.3390/laws11050071> accessed 10 January 2023.


because the AI focuses on correlation rather than causation.179 This can lead to a lack of trust in the system among these groups, as they may feel that the system is not working in their best interests.

Additionally, if AI-powered smart courts are not adequately designed and implemented, they may produce inconsistent or unreliable results, which could further erode trust in the system.

(e) Government Monopoly

The government’s monopoly over the algorithm used in AI-powered smart courts could lead to an abuse of power. When the government has a monopoly over the algorithm, it controls how the algorithm is designed, trained, and deployed, and it could use that control to further its own interests rather than serve the people, for example by targeting specific groups or individuals or by making decisions that are not in the public’s best interest. Philip Alston writes in his report of the “grave risk of stumbling zombie-like into a digital welfare dystopia” in Western countries.180

Moreover, a government monopoly over the algorithm could also lead to a lack of accountability for the decisions made by the AI-powered smart courts, as the government would be the only entity with access to the algorithm and the data it uses. This could make it difficult for citizens to

179 Erlis Themeli and Stefan Philipsen, ‘AI as the Court: Assessing AI Deployment in Civil Cases’ in K. Benyekhlef (ed), AI and Law: A Critical Overview (Thémis edn, 2021) 213–233 <https://ssrn.com/abstract=3791553> accessed 8 January 2023.
180 Sir Henry Brooke, ‘Algorithms, Artificial Intelligence and the Law’ (Bailii, 12 November 2019) <https://www.bailii.org/bailii/lecture/06.pdf> accessed 8 January 2023.


challenge or appeal the decisions made by the AI-powered smart courts. There is already concern about the totalitarian possibilities of state control, illustrated by China’s social credit system, in which computers monitor the social behaviour of citizens in minute detail and reward or withhold benefits according to how they are marked by the state.181

Additionally, a government monopoly over the algorithm could also lead to a lack of transparency, as the government would be the only entity with control over the algorithm, making it difficult for citizens to understand how it makes decisions.

Thus, while AI-powered smart courts have the potential to improve the
efficiency and accessibility of the legal system in India, the government’s
monopoly over the algorithms used in these systems could lead to abuse of
power, a lack of accountability, and a lack of transparency.

In conclusion, while AI-powered smart courts have the potential to improve the efficiency and accessibility of the legal system in India, their sustainability must be critically evaluated in terms of their impact on democratic principles, financial feasibility, data security, trust in the algorithm, and the government’s monopoly over the technology. It is imperative to weigh the pros and cons of implementing these systems and ensure that they are implemented in a manner that respects the rule of law, human rights, and civil liberties.

181 ibid 21.


IV. Conclusion

Artificial intelligence is a rapidly growing field with the potential to revolutionize various sectors, including the legal industry. The legal sector in India, in particular, is facing significant challenges, such as a high number of pending cases, which has led to the exploration of the viability of incorporating AI into the system. The Indian judiciary has taken steps towards incorporating AI, with the establishment of an AI committee in 2019 and the launch of platforms such as SUPACE and SUVAS to assist in legal research and the translation of judgements.

AI has the potential to improve the efficiency and speed of the legal system
by handling large volumes of data and assisting judges in legal research.
However, there are also concerns about the use of AI in the legal sector, such
as the question of whether AI can replace lawyers and the potential impact
on job opportunities in the legal field. It is important to note that while AI
can assist in tasks such as document drafting and contract analysis, it cannot
replace a lawyer in court.

Furthermore, the use of AI in the legal system also raises concerns about
transparency and accountability, as the decision-making process of AI
systems may be difficult for citizens to understand. Additionally, AI systems
may perpetuate biases present in the data, leading to unfair or discriminatory
decisions. Therefore, it is crucial to ensure that AI systems are implemented
with proper oversight and due process to protect human rights and civil
liberties. The Indian judiciary’s effort to incorporate AI into its operations is
a step in the right direction, but it is crucial to continue monitoring the
impact of AI on the legal system and make adjustments as necessary.


V. Suggestions

Based on the information provided in this chapter, some potential future courses of action for the incorporation of AI in the legal sector in India could include:

 Continual research and development of AI platforms for the legal system, such as “SUPACE” and “SUVAS,” to improve efficiency and speed in handling pending cases and assisting judges in legal research.
 Addressing concerns about the impact of AI on job opportunities in the legal sector by providing training and education for lawyers and legal professionals on how to integrate and use AI tools in their work.
 Implementation of regulations and guidelines for the use of AI in the legal system to ensure that it is used ethically and in compliance with laws and regulations.
 Collaboration between the legal sector and the technology industry to develop AI solutions that can assist lawyers in their work while also protecting the rights and interests of clients and the public.

Monitoring and evaluating the impact of AI on the legal system, such as its
effect on the number of pending cases and the quality of justice delivered,
will help determine if further adjustments or improvements are needed.


THE TWO-WAY PROTECTIVE REGIME OF INTANGIBLE CULTURAL HERITAGE IN ARMED CONFLICT

Shivesh Saini
(Student at University School of Law and Legal Studies, GGSIPU)

Abstract
The destruction of property has been dealt with in different conventions across International Humanitarian Law. In the light of evolving warfare, however, certain aspects of the law demand evolution. One such aspect is the protection of digital intangible assets in several forms of armed conflict. The existing protection conferred on intangible assets is questionable and has received little attention in international law. The chapter therefore seeks to demonstrate the enforceability of existing principles over intangible assets. In addition, the protection of these intangible cultural assets depends heavily on cyber security, as contemporary cyber technology is abundantly capable of adversely affecting the social and cultural assets of an opponent. Recognizing this paradigm shift, the chapter outlines the comprehensive efforts that should be undertaken to expand the applicability of international law.

I. Introduction

The emergence of intangible property, along with the evolution of warfare, underscores the need for regulation. Although humanitarian regulations were conceived and drafted long before the advent of offensive cyber warfare, there is considerable uncertainty about their applicability.

The conventions were framed in the context of the two world wars and were primarily attentive to saving the lives of individuals and protecting cultural property from the horrors of kinetic warfare.182 Although this foundation will not lose its relevance in the coming future, the military complexity of today’s age makes it imperative to add a new dimension to this ever-developing legal arena.

In this regard, the ICC charged Ahmad Al Faqi Al Mahdi, an Islamic scholar, with the destruction of cultural property.183 He was the first person to be charged with the crime of destroying cultural heritage and monuments of historic importance in Mali. It is vital to trace the jurisprudence of this case, as it could set a potential example for prosecuting offenders for the destruction of intangible assets. Such destruction could be caused by cyber-attacks, which pose an inescapable hazard to this new form of heritage. This chapter principally suggests the active applicability of existing humanitarian laws to digitalised assets that should be considered archives. Countries in contemporary times are making active efforts to digitalise their cultural heritage to confer protection from terrorist attacks, natural disasters, and other aggression.184

In light of the malicious cyber intent in the last few years, the chapter seeks
to initiate a new deliberation centred around the existing norms on state-led
cyber occurrences aimed at destroying the cultural traditions of the

182 See Convention for the Protection of Cultural Property in the Event of Armed Conflict (adopted 14 May 1954, entered into force 7 August 1956) 249 UNTS 216.
183 The Prosecutor v Ahmad Al Faqi Al Mahdi (Judgement) ICC-01/12-01/15 (27 September 2016).
184 Alonzo C. Addison, ‘The Vanishing Virtual: Safeguarding Heritage’s Endangered Digital Record’ in Yehuda Kalay, Thomas Kvan and Janice Affleck (eds), New Heritage: New Media and Cultural Heritage (Routledge, 2002) 27, 28–29.


adversary.185 The chapter further submits that the regulations of international armed conflict (“IAC”) and non-international armed conflict (“NIAC”) do not vary significantly (in the case of non-state actors) as to constituting liability; therefore, the suggestions could be functional in both situations of armed conflict.186 Against this background, it intends to convey the difficulties involved in accommodating modern-day warfare within age-old regulations that were framed in an altogether different context. To this end, possible case scenarios and interpretations are entertained in the chapter to bridge the gap between the two for better regulation of modern-day warfare.

The goal of this chapter is to frame the issue so as to serve as the starting point for a more in-depth conversation among interested parties about required clarifications of existing rules or the creation of new frameworks. To achieve this, the chapter first maps the current cyber threat landscape by presenting some hypothetical scenarios in which state-led cyber operations carried out during an armed conflict interfere with activities that are crucial to the operation of contemporary interconnected societies. The section that follows explores whether and to what extent the existing legal structures are adequate to safeguard society against the repercussions of potential cyber conflict. While International Humanitarian Law (“IHL”) will be the main topic, the chapter also looks at how International Criminal Law may apply and be relevant in armed conflict circumstances. The final section offers potential future directions based on these results and serves as a jumping-off point for in-depth talks with all pertinent stakeholders.
185 ibid 35–36; Graciela Gestoso Singer, ‘ISIS’s War on Cultural Heritage and Memory’ [2015] UK Blue Shield.
186 Jean-Marie Henckaerts and Louise Doswald-Beck, Customary International Humanitarian Law Volume I: Rules (Cambridge University Press 2005) Rules 38–41.


II. The Applicability of Existing Laws on Intangible Assets

The prime set of regulations that could govern the increasingly frequent cyber-attacks on intangible cultural assets is IHL. It is guided by the maxim of Article 35 of Additional Protocol I (“AP I”), which provides that the right of the parties to choose methods or means of warfare is not unlimited, but rather should be limited and regulated.187 The prime focus of such a principle is to alleviate suffering, particularly of civilians. This is achieved through several principles: (a) proportionality, (b) distinction, and (c) military necessity. Given the inspiration behind these fundamental principles of IHL, it is not difficult to argue that disproportionate cyber-attacks on intangible cultural property of public concern are prohibited, regardless of the nature of the conflict.

It should be understood that the international protection regime concerning Intangible Cultural Heritage (“ICH”) will be more complex and challenging compared to the traditional notion, which was confined to cultural property only; international law has mainly dealt with eminent monuments and movable objects with a distinctly religious character.188 The protection of ICH has been undertaken under UNESCO’s Convention on Safeguarding Intangible Cultural Heritage of 2003. Article 2(1) of the Convention defines it as the practices, representations, expressions, knowledge, skills, and instruments that might be related to any community or person. This heritage generally passes through generations to consolidate

187 Protocol Additional to the Geneva Conventions of 12 August 1949 (adopted 8 June 1977) 1125 UNTS 3, art 35.
188 See A.F. Vrdoljak, ‘Minorities, Cultural Rights and the Protection of Intangible Cultural Heritage’ [2005] ESIL.


their identity and maintain the continuity of their rituals and customs.189 Examining this definition thoroughly, one can conclude that the majority of ICH falls within categories already recognized under the Geneva and Hague Conventions of 1949 and 1954 respectively, as it includes musical instruments, sacred groves, forms of dance, and other spiritual assets. Besides, even though some assets are intangible, they satisfy certain material elements that justify the applicability of the earlier conventions, as the tangible and intangible are interconnected and dependent on each other for their existence: for example, the traditional dance in the Royal Palace of Vietnam or the procession of Lord Jagannatha from the Shree Jagannatha Temple.

In conclusion, the fortification of buildings of cultural heritage will ultimately, if indirectly, lead to the protection of such heritage. The chapter emphasizes that an integrated approach must be taken to their protection and administration. The Istanbul Declaration of 2002 even highlighted that there exists a dynamic link and close interaction between tangible and intangible cultural assets. Traditional forms of cultural property, such as monuments, holy sites, and archaeological digs, as well as digitalised forms of cultural property, have been revolutionised by the emergence of the internet and the expansion of digital technologies. This development also gave rise to a new form of digital cultural property, also known as ‘born-digital cultural heritage’.

As far as the legal regulations of IHL are concerned, the initial question that arises is whether the definition of ‘attack’ in Additional Protocol I could be
189 UNESCO’s Convention on Safeguarding Intangible Cultural Heritage (adopted 17 October 2003), art 2(1).

appropriately applied to cyber-attacks that are meant to destroy ICH. The threshold can be satisfied by indicating that any systematic act that foresees the obliteration of, or injury to, a person or an object qualifies as an ‘attack’ under Article 49(1).190 A difficulty persists, however, where a cyber-attack merely affects the functionality of the attacked object rather than destroying it completely. The majority of experts maintain that this will still amount to an attack if the affected system requires restoration in any way.191 Along the same lines, the ICRC supports a broader interpretation of the definition of ‘attack’.192 It maintains that the object and purpose of IHL is to assure the protection of civilians and their objects in armed conflict. It therefore invokes Article 31 of the Vienna Convention on the Law of Treaties, which provides that a treaty shall be interpreted in the light of its object, purpose, and ordinary meaning.193 This definition of ‘attack’ indeed has the potential to act as a default rule for constituting the liability of an aggressor under other relevant conventions. Another relevant recourse for determining whether the act in question can be said to be an ‘attack’ on ICH is the Nicaragua judgment.194

Therefore, it can be concluded that if the cyber-attack is successful in instituting the required scale and effect in comparison to non-cyber

190 See Laurent Gisel, Tilman Rodenhäuser and Knut Dörmann, Twenty Years On: International Humanitarian Law and the Protection of Civilians Against the Effects of Cyber Operations During Armed Conflicts (International Review of the Red Cross 2020).
191 Michael N. Schmitt, Tallinn Manual 2.0 on the International Law Applicable to Cyber Operations (2nd edn, Cambridge University Press 2017) 417.
192 International Humanitarian Law and the Challenges of Contemporary Armed Conflicts (International Committee of the Red Cross 2015) ICRC 32IC/15/11.
193 ibid.
194 Case Concerning Military and Paramilitary Activities In and Against Nicaragua (Nicaragua v United States of America) Merits, Judgment [1986] ICJ Rep 14, [195].


operations, then it should qualify as an attack, and there would be no rational basis to exclude such cyber operations from the scope of an attack. Accordingly, if a state or any non-state entity provides formal training to a group of hackers against any state, it shall amount to an attack within the meaning of Article 49(1).

As the definition of attack is the initial phase in constituting liability, it will be easier to comprehend liability in the context of the other conventions of IHL. The Brussels Declaration of 1874 asserts that the annihilation of traditional works of science, art, and history must give rise to accountability before the competent legal tribunal.195 Similarly, Article 56 of the Hague Convention 1907 affirms that the property of municipalities, religious establishments, works of science, and state property shall be treated as private property that shall not be subjected to the opponent’s aggression.196 Article 27 of the same convention stipulates that all necessary measures must be taken to spare buildings with religious and scientific characteristics, and that these protected buildings should be demarcated.197 Article 46, in an identical way, holds that family honours and rights, individual lives, and private property, as well as religious convictions and practices, must be respected at all costs, and that private property cannot be confiscated in any manner.198

195. The Brussels International Declaration concerning the Laws and Customs of War 1874, art 14(164).
196. The Hague Convention (IV) Respecting the Laws and Customs of War on Land and its Annexed Regulations 1907 (entered into force 26 January 1910), art 56.
197. ibid art 27.
198. ibid art 46.

The Two-Way Protective Regime of Intangible Cultural Heritage in Armed
Conflict
The same declaration has also been given in Articles 75 and 4(1) of
Additional Protocols I and II of the Geneva Conventions, respectively.199
These articles, which shelter physical and intellectual possessions, should be
read collectively with the UNESCO Convention of 2003. The specific words
‘practices and customs’ in the context of the UNESCO Convention relate to
oral traditions and knowledge.200 They implicitly relate to the custom of
passing knowledge from generation to generation for the continuation and
preservation of knowledge. Such cultural practices are also entitled to
protection under ILO Convention No. 169 and the United Nations
Declaration on the Rights of Indigenous Peoples.201 Thus, it is mandatory to
deconstruct the importance of ‘preservation’ and ‘transmission’ for the
heritage’s bearers and their generations. The protection of ICH is not just
essential for a few concerned individuals; it is obligatory for the existence of
a ‘nation’ itself.

The Hague Convention of 1954 and its additional protocols are lex specialis
that aim to protect tangible cultural heritage in warfare. The Convention
seeks to protect cultural property that is of great importance to people, yet
there is no common ground as to the threshold of such importance.
Regarding this, the prevailing view of scholars is that it is the responsibility
of the state to determine which monuments enjoy national status.202 The
Additional Protocols of 1977 strengthened the protection mechanism for
cultural property. However, some inconsistency persists, as the Additional
Protocols refer to the protection of ‘cultural and spiritual heritage’,203 which
is disparate from the notion of the 1954 Hague Convention, which is
concerned with ‘the object of great importance’.204

199. Protocol Additional to the Geneva Conventions of 12 August 1949 (Protocol I) (adopted 8 June 1977) 1125 UNTS 3, art 75; Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of Non-International Armed Conflicts (Protocol II) (adopted 8 June 1977) 1125 UNTS 609, art 4(1).
200. Christoph Antons and William Logan, ‘Intellectual Property, Cultural Property, and Intangible Cultural Heritage’ in Christoph Antons and William Logan (eds), Intellectual and Cultural Property and the Safeguarding of Intangible Cultural Heritage (Routledge 2018).
201. UN General Assembly, United Nations Declaration on the Rights of Indigenous Peoples, A/RES/61/295 (2 October 2007).

To this end, the ICRC has maintained that the underlying idea is the same
and of a similar nature. The 2003 UNESCO Convention is closer to the 1954
Hague Convention in that appropriate protection was given to cultural
property of great importance to the people while preparing nominations for
the 2003 UNESCO Representative List of the Intangible Cultural Heritage
of Humanity.205 The list was identical to the 1954 Hague Convention list and
the 1972 UNESCO World Heritage List, which serve as a guide for state
forces to follow in the case of tangible cultural heritage.206 The cultural sites
on these two lists are protected by Article 85(4)(d) of AP I, which prohibits
violations of provisions related to cultural heritage conservation, because
there is a symbiotic relationship between tangible and intangible heritage.
Due to the similarities between the Hague Convention and the lists, it can be
argued that similar protection should be accorded to cultural properties
enshrined in the 2003 UNESCO Representative List of the Intangible
Cultural Heritage of Humanity. Such a conclusion is supported by the fact
that both lists contain the same historical monuments of public importance.

202. See, J. Blake, Introduction to the Draft Preliminary Study on the Advisability of Developing a Standard-setting Instrument for the Protection of Intangible Cultural Heritage (UNESCO International Round Table Conference 2001).
203. Protocol Additional to the Geneva Conventions of 12 August 1949 (Protocol I) (adopted 8 June 1977) 1125 UNTS 3, arts 53, 85(4)(d).
204. Hague Convention for the Protection of Cultural Property in the Event of Armed Conflict (adopted 14 May 1954, entered into force 7 August 1956) 249 UNTS 216 (Hague Convention).
205. Vrdoljak (n 188) 285.
206. ibid.

Similarly, the World Heritage Committee introduced a new list of cultural
landscapes that incorporated a few forms of intangible cultural heritage
within its protection.207 The list responded to criticism from indigenous
societies, which held that distinguishing natural from cultural heritage is an
inappropriate construct in the context of non-western societies.208 In light of
the fact that tangible and intangible assets are typically coterminous, the
Hague Convention of 1954 shall apply to ICH. This would strengthen the
legal regime over the same set of cultural properties, irrespective of whether
they are tangible or intangible. As argued, the destruction of a particular
material object will also harm its intangible side, which could impede the
community’s cultural and customary practices. This eradication of culture
will be termed ‘cultural cleansing’, and culpability should be enforced
correspondingly.

(a) The Interrelation of Intangible Heritage and Cyber Attacks

The conservation of ICH and the cyber protection mechanism of a state are
two sides of the same coin and are directly correlated with each other. Like
almost everything else, the digital revolution has transformed the arena of
ICH. Now, states, for their convenience, can convert tangible or intangible
heritage into digitalized information such as 3D visuals, scanned texts and
chronicles, and audio recordings. Platforms such as YouTube act as the
largest collection of moving images that have the nature of ‘continuing
value’. Similarly, Wikipedia can be called a digital storehouse of information
that has cultural importance. CyArk, likewise, is a digital archive that was
created after the destruction of the Bamiyan Buddhas of Afghanistan to
digitally store documents related to the world’s most spectacular cultural
places. Motion capture technology has enabled the digitization of traditional
Japanese dances, allowing the craft of master performers to be studied in a
new way and enabling the preservation of cultural artefacts. Iso Huvila has
developed participatory archives by using MediaWiki software to convert
the digital archives of two Finnish cultural heritage sites, namely the Saari
Manor in Mietoinen and Kajaani Castle, into participatory spaces for archive
users.209 The Huvila model is an appropriate example of how digital archives
can engage future generations with their cultural roots. Likewise, the digital
protection of China’s intangible cultural heritage has developed rapidly, with
great success and numerous accomplishments, such as the digital protection
of the Silk Road cultural heritage project in 2004, the research project for the
world’s intangible cultural heritage protection in cooperation with Samsung
Galaxy Co., Ltd. in 2004, and the “Memory of the World in Lijiang, China”
Project in 2005.

207. UNESCO World Heritage Convention 2015.
208. Antons and William Logan (n 200) 6.

However, as technology evolves, the threat to these heritages is also
increasing, with cyber-attacks that aim to destroy these assets for religious
and political reasons. Therefore, the law of armed conflict shall necessarily
be applicable in both non-international and international armed conflicts. For
this purpose, cyber-attacks shall be incorporated under the definition of
armed conflict. Even in the absence of any concrete regulation, consideration
shall be given to the Martens Clause of the Geneva Conventions and their
Additional Protocols.210 The Martens Clause specifies that, even without any
complete code, belligerents and civilians remain under the protection of the
principles and regulations recognized by civilised society and driven by
public conscience.211 Consequently, the Martens Clause reflects customary
international law and ensures that nothing takes place in a legal vacuum.

209. Kozaburo Hachimura, ‘Digital Archives of Intangible Cultural Properties’ (International Conference on Culture and Computing, 17 December 2017) 55.

Although digitalized cultural assets and cyber-attacks could fall within the
domain of IHL, the question of what consequences perpetrators will face
remains unanswered. The obligations become all the more important where
the cultural asset exists in its intangible form; for example, YouTube
contains traditional Mongolian throat singing and traditional American-
Indian dance. The answer depends on the nature of the armed conflict,
whether it is of an international or non-international character. As far as
armed conflicts of an international nature are concerned, there should be the
involvement of two or more states as opponents as per common Article 2 of
the Geneva Conventions.212 Besides, there shall be the presence of
sophisticated and organised armed groups under the command of one of the
states engaged in hostility. However, uncertainty persists as to whether the
acts of non-state armed groups can be attributed to the state. Concerning this,
the ICTY in Tadic’s case articulated the ‘overall control’ test to ascertain
whether the Bosnian Serbs were under the control of the Federal Republic of
Yugoslavia.213 The Tribunal concluded that there existed sufficient influence
from the state, which confirmed the existence of an international armed
conflict in that case.214 Applying a similar test to cyber warfare, if it could be
ascertained that state authorities exercised a certain level of influence over
hackers who destroyed or caused significant damage to ICH, then the
regulations pertaining to international armed conflict could apply, as the act
could amount to an attack under Article 53 of Additional Protocol I and the
Hague Convention of 1954.

210. See, F. Kalshoven, Constraints on the Waging of War (Martinus Nijhoff ed, Dordrecht 1987) 14.
211. Geneva Convention I, art 63; Geneva Convention II, art 62; Geneva Convention III, art 142; Geneva Convention IV, art 158.
212. Geneva Convention Relative to the Protection of Civilian Persons in Time of War (adopted 12 August 1949) 75 UNTS 287, art 2.
213. Prosecutor v Dusko Tadic (Appeal Judgement) IT-94-1-A (15 July 1999) [131], [145], [162].

As regards armed conflict of a non-international nature (“NIAC”), there
must be hostility between government forces and non-governmental
organised armed groups that are not affiliated with the state.215 However,
mere situations of disturbances, riots and tensions will not create a situation
of non-international armed conflict.216 On this view, intermittent and erratic
cyber-attacks would not give rise to a NIAC. On the question of threshold,
the ICTY in the Tadic case held that there should be a protracted conflict
between organised insurgent groups.217 Therefore, the standards regarding a
NIAC involve two elements: (a) intensity, and (b) organised armed
groups.218 Regarding the threshold of intensity as a criterion, the ICTY has
in the past considered factors such as the displacement of people,219
recurrence and gravity,220 and the types of weapons employed.221 For cyber-
attacks to be classified as a NIAC, the organisation must be well-armed and
have a command structure sophisticated enough to execute extended
military operations. On this standard, even individuals who function
“collectively” but not “cooperatively” cannot be said to be under proper
direction and organisation. The majority of these groups act digitally with a
degree of anonymity rather than physically executing the attack. Therefore,
it is impossible to determine whether the group in issue meets the NIAC
standard, as the mere fact that they are targeting the state is insufficient to
trigger the application of international humanitarian law.

214. ibid [131], [140], [145].
215. Prosecutor v Dusko Tadic (Decision on the Defence Motion for Interlocutory Appeal) (2 October 1995), [67]-[70].
216. Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of Non-International Armed Conflicts (Protocol II) (adopted 8 June 1977) 1125 UNTS 609, art 1(2).
217. Tadic (n 215) [70].
218. Prosecutor v Milosevic (Judgement) IT-02-54-R77.4 (13 May 2005), [16]-[17]; Prosecutor v Furundzija (Judgement) IT-95-17/1-A (21 July 2000), [59].
219. Prosecutor v Haradinaj (Judgement) IT-04-84-A (19 July 2010), [49].
220. Prosecutor v Mile Mrksic (Judgement) IT-95-13/1-A (5 May 2009), [419]; Prosecutor v Fatmir Limaj (Judgement) IT-03-66-A (27 September 2007), [135].

Additional issues, such as the reluctance of states to acknowledge the
occurrence of a non-international armed conflict and the anonymity of
rebels, will make it difficult to apply the law. Besides, Additional Protocol II
cannot be applied to conflicts between two non-state actors, and it mandates
the control of some territory.222 The control of cyber activities alone cannot
equate to territorial control. To date, only one provision of the Hague
Convention requires non-state entities to respect cultural heritage during
conflict.223 In addition to these limitations, the military necessity exception
further restricts the application of humanitarian law within a NIAC. These
deficiencies can be remedied using International Criminal Law (“ICL”).
Despite the fact that ICL applies to armed conflict and non-state armed
actors, its applicability to cyber operations that target intangible assets
remains contested. Nonetheless, it is the best solution for imposing
accountability on rebels in non-international armed conflicts.

221. ibid [39]-[40].
222. Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of Non-International Armed Conflicts (Protocol II) (adopted 8 June 1977) 1125 UNTS 609, art 1(1).
223. Hague Convention for the Protection of Cultural Property in the Event of Armed Conflict (adopted 14 May 1954, entered into force 7 August 1956) 249 UNTS 216, art 19.

(b) Protection of Intangible Cultural Assets under International Criminal Law

After World War II, efforts were made to hold criminally liable the
perpetrators engaged in the destruction of public and private property. The
Nuremberg trials marked the beginning of such efforts, when Nazis were
sentenced for plundering and destroying cultural property.224 Resort to
International Criminal Law is essential, as the existing conventions do not
enumerate special offences that can hold the perpetrator criminally liable in
a proportionate manner.

In pursuance of establishing adequate accountability, Article 3(d) of the
ICTY Statute criminalizes the destruction of or damage to institutions of
religion, charity, art and science.225 The provision was added in view of the
bombardment of the Old Town of Dubrovnik, a famous UNESCO heritage
site. To this end, the Rome Statute defines the war crimes that could assist
in imposing liability in cases related to cultural property.226 But further
examination suggests that the current regime under the Rome Statute is still
unsatisfactory; even the term ‘cultural property’ is not defined, and the
Statute simply borrowed the terms from the previous Geneva and Hague
Conventions.227 In accordance with Article 8(2)(b)(ix) of the Rome Statute,
hospitals and schools have been designated as protected property. This
protected status is not an upgraded level of protection, as hospitals and
schools lose it when their services are no longer required. In contrast,
cultural property must be safeguarded despite these external circumstances.

224. Agreement for the Prosecution and Punishment of the Major War Criminals of the European Axis 59 Stat. 1544, 82 UNTS 279, E.A.S. No. 472 (18 August 1945).
225. Statute of the International Tribunal for the Prosecution of Persons Responsible for Serious Violations of International Humanitarian Law Committed in the Territory of the Former Yugoslavia since 1991, S.C. Res., U.N. SCOR, 48th Sess., 3217th mtg, at 1-2, U.N. Doc. S/RES/827 (1993).
226. Rome Statute of the International Criminal Court (entered into force 1 July 2002) 2187 UNTS 90.

Similarly, the UNESCO Convention did not hold perpetrators accountable
for their actions and lacked enforcement tools. The appropriate response
under these circumstances is to prosecute the perpetrator for crimes against
humanity. Recent precedents of the ICTY have held that the destruction of
cultural heritage can amount to persecution on religious grounds.228
Therefore, there is no justification for excluding intangible cultural assets
from the definition of ‘cultural legacy’ if they have been destroyed for
religious or political reasons. Article 15(1) of the International Covenant on
Economic, Social, and Cultural Rights states that everyone has the right to
participate in cultural life and be associated with cultural ‘goods’.229 To this
end, even the trial chamber in the famous Al Mahdi trial held that
satisfactory attention should be paid to the symbolic value of an asset that
has been destroyed in the conflict.230 The trial chamber determined the
severity of the offence committed based on the emotional distress caused to
Timbuktu’s residents. A similar nexus between cultural property (whether
tangible or intangible) and persecution was established in the Blaskic231 and
Kordic232 judgments too, where the perpetrator had the requisite intent.

227. The Prosecutor v Strugar (Judgment, ICTY Trial Chamber) Case IT-01-42-T (31 January 2005), [229].
228. Addison (n 2).
229. UN Committee on Economic, Social and Cultural Rights, General Comment No. 21, ESC 43rd, UN Doc E/C12/GC/21 (2009), s 15(b).
230. Addison (n 2) 79.

Regarding the term ‘persecution’, it has been defined as the deprivation of
one’s rights because of one’s identity.233 However, the sole crime of
persecution cannot be prosecuted in the ICC, as it has to be charged in
conjunction with other offences. For this purpose, it can be paired with the
crime of “other inhuman acts causing great suffering” enshrined under
Article 7(1)(k).234 This residual provision would hold the perpetrator liable
for wilfully destroying intangible heritage. Another such legal innovation
can be found in the Al Mahdi case, where the perpetrator argued that the
destruction of cultural property does not satisfy the gravity threshold that is
necessary for the admissibility of a dispute in the ICC. Nevertheless, the
court found the gravity threshold satisfied and held the perpetrator liable
under Article 8(2)(e)(iv) of the Rome Statute.235

The then prosecutor of the ICC, Ms. Fatou Bensouda, even held that ‘what
is at stake here is not walls and bricks; those mausoleums were important
from a religious point of view and from an identity point of view too’.236
The prosecution went further to highlight the destruction of intangible
heritage in Mali at the vital stage of the proceedings. This precedent is
crucial since it is the only instance in which a culprit has been prosecuted
for crimes against cultural property and not against a person. However, to
meet the requirement under Article 7(1)(k) for crimes against humanity, the
prosecution has to prove that there were other inhuman acts that together
inflicted considerable suffering on mental or bodily health. The expression
‘other inhuman acts’ was enshrined in the ICTR Statute237 and is a part of
customary international law.

231. The Prosecutor v Blaskic (Judgment, ICTY Appeals Chamber) Case No. IT-95-14-A (29 July 2004), [149]-[159].
232. The Prosecutor v Kordic and Cerkez (Judgment, ICTY Appeals Chamber) Case No. IT-95-14/2-A (17 December 2004), [104]-[108].
233. Rome Statute (n 45) art 7(2)(g).
234. ibid art 7(1)(k).
235. ICC, Al Mahdi Transcript of the Confirmation of Charges Hearing (1 March 2016), 39.
236. ibid 13.

As per the ILC and the ICTY in Tadic, the act must have an adverse
consequence to be classified as an inhuman act.238 An act intended to inflict
mental pain, which also includes moral agony, need not be rape or murder;
acts of apartheid or discriminatory legislation also fall within its domain.239
The act would be said to be an ‘inhuman act’ even if it caused temporary
unhappiness or humiliation. Applying the same test to the destruction of any
sort of cultural heritage would certainly hold the perpetrator liable, as can be
inferred from the Al Mahdi case, where witnesses cried upon seeing the
destruction of the holy gate, which caused them mental suffering in the form
of ‘temporary humiliation’.

An additional method of conferring criminal liability can be traced to
Article 25 of the Rome Statute, which entails individual criminal
responsibility for wrongful acts. War crimes in violation of customary
international law entail individual criminal responsibility. As mentioned
above, acts committed online through cyber-attacks could constitute
liability regardless of the nature of the conflict. Individuals could also incur
liability for cyber operations provided they possess the required mens rea
under Article 30.240 Crimes committed with a stronger volitional element
attract liability under dolus directus of the first or second degree, whereas
crimes committed with recklessness or negligence, with a stronger cognitive
element, attract liability under dolus eventualis, enshrined under Article 30
of the Rome Statute.241

237. UN Security Council, Statute of the International Criminal Tribunal for Rwanda (8 November 1994), art 3(i).
238. The Prosecutor v Katanga and Ngudjolo Chui, ICC-01/04-01/07-717 (30 September 2008) 450.
239. The Prosecutor v Delalic (Judgement) IT-96-21-I (21 March 1996) 511.

In the case of organised destruction of intangible assets through cyber-
operations, the group of hackers shall be accountable for acting under a joint
or common plan. The ICTY242 and the ICC243 have already evolved their
jurisprudence to cover crimes of a joint or cooperative nature. Moreover, for
accountability purposes, the required contribution would also include
planning and preparation, and the perpetrator does not need to be present
during the crime as long as he has control over it.244 This means that the
planting of malware and DDoS attacks to destroy intangible assets would
entail criminal responsibility.

Criminal responsibility can also arise in cases in which the perpetrator acts
under the command of a third person.245 In such cases, commanders and
superiors too cannot escape their liability merely because they did not
themselves commit any act that constitutes a war crime, by virtue of Article
28 of the Rome Statute.246 In a cyber-war context, responsibility could be
imposed on the military commander or cyber operations head of the state
who ordered the commission of an act amounting to the destruction of
intangible cultural property. Even a subordinate commander who conforms
to the order of the commander will not be absolved of responsibility in any
manner.247 This regulation is in conformity with Articles 86 and 87 of
Additional Protocol I, which ensure that superiors shall take steps to
investigate war crimes. Additionally, it is not even mandatory that the
individual be a ‘commander’ or have military status.248 The said rule is
appropriate for cyber-attacks, which are generally administered by hackers
without any military position.

240. Rome Statute of the International Criminal Court (entered into force 1 July 2002) 2187 UNTS 90, art 30.
241. Sarah Finnin, ‘Mental Elements under Article 30 of the Rome Statute of the International Criminal Court: A Comparative Analysis’ [2012] ICLQ 325.
242. Delalic (n 58) [345]-[354].
243. The Prosecutor v Thomas Lubanga Dyilo (Judgement) ICC-01/04-01/06 (14 March 2012) 326.
244. ibid 1005.
245. See also Prosecutor v Germain Katanga and Mathieu Ngudjolo Chui (Judgement) ICC-01/04-01/07 (30 September 2008), [495]-[499].

These regulations act as default rules that need to be used in conjunction
with other articles. Article 8(2)(e)(iv) can be implemented in conjunction
with Articles 25 or 28 of the Rome Statute, as was done in the Al Mahdi
case. It must be ensured, however, that the agony and suffering of the
human population are considered relevant for calculating liability for the
destruction of intangible assets. Such destruction shall be treated as a
separate offence with no additional requirement to meet the severity
standard. To accomplish this, the definitions of the offences must be
reconsidered and their reach expanded with immediate effect.

246. ibid.
247. Rome Statute of the International Criminal Court (entered into force 1 July 2002) 2187 UNTS 90, art 33; Statute of the International Tribunal for the Prosecution of Persons Responsible for Serious Violations of International Humanitarian Law Committed in the Territory of the Former Yugoslavia since 1991, S.C. Res., U.N. SCOR, 48th Sess., 3217th mtg, at 1-2, U.N. Doc. S/RES/827 (1993), art 7(4).
248. Rome Statute, art 28(b); Delalic (n 58) [239]-[254].


III. Conclusion

The contemporary debates centring on digital intangible cultural heritage
and its protective regime, acknowledged by the 2003 UNESCO Convention,
pose substantive questions for International Humanitarian Law, which has
focused on cultural property in its material aspect. The present humanitarian
regulations, as they stand, are unable to confer adequate protection alone;
they have to act in conjunction with the Rome Statute and the Tallinn
Manual.

This ‘legal grey zone’ gained further prominence during the pandemic and
beyond in this era of digital culture and data storage. This digital emergence
raised questions regarding the intersection of digital assets with
humanitarian regulations. As shown above, the regulations, particularly in
the sphere of NIAC, need the assistance of International Criminal Law and
Human Rights Law, which could bolster protection and prevent the
destruction of intangible assets such as occurred in Iraq in 2003. The
questions over sovereignty, proportionality, and freedom of expression can
only be answered through the combined application of IHL, the Rome
Statute, and the human rights treaties. Their scope and applicability could be
a promising subject for future research owing to the ‘grey zones’ in NIAC
conflicts and doubts regarding the extra-territorial applicability of the
human rights treaties.

This chapter concludes that the rise of digital cultural property brings up
additional intriguing issues about international human rights law, cultural
heritage, and cyberspace. Since access to the internet is a fundamental
human right, and because cultural life gives rise to a human right to cultural
heritage, protections for digital cultural property may also be derived from
international criminal law in addition to international humanitarian law.


More so than international humanitarian law, the use of a criminal
regulatory framework could strengthen safeguards for digital cultural
resources in times of peace, posing intriguing issues about personal privacy,
national security, ownership of intellectual property, freedom of expression,
and the application of human rights.


“WHO LET THE DOGS OUT?” – PLACING ACCOUNTABILITY ON WEAPONIZED AI IN INTERNATIONAL LAW
Ahan Gadkari
(Student at Jindal Global Law School)

Abstract
That technology is a double-edged sword has never been truer than it
is now. It is thus inevitable that law plays catch-up with technological
progress. On the other hand, the prospect of human decision-making
being effectively and eventually supplanted by autonomous weapon
systems (“AWS”) has raised profound ethical concerns and legal
obstacles. The uncertain future of the use of technology in weapons
goes beyond the borders of International Humanitarian Law (“IHL”)
compliance, wherein the reliability of such weapons raises a deep
sense of discomfort. The biggest challenge for the technology in such
weapons is to factor in situations of doubt in a conflict setting;
furthermore, it needs to maintain zero-error consistency in an
algorithm that must assimilate unique situations without on-field
experience. These requirements become necessary since a single error
on the learning curve of this adapting Artificial Intelligence would
meet the threshold of a broken rule under public international law.

Lethal Autonomous Weapons have not yet met legal compliance
standards; however, there is an uncanny haste among the States that
promote their use. This establishes a compelling case for believing
that, shortly, the use of AWS will result in violations of several human
rights and a grey mist of non-compliance with IHL principles. The
lack of accountability in this regard compounds the adverse impact on
the victim’s right to a legal remedy. This chapter shall therefore
explore the forms of accountability and legal discourse available
under public international law. It will examine treaty law and
customary international law to address the challenges of establishing
mens rea within individual criminal responsibility while also
substantiating the present obligations on States. A domain of ‘Split-
Responsibility’ and its contemporary understanding within the context
of AWS will be addressed as well. This chapter builds upon the
ICRC’s Commentary on the 1977 Additional Protocol I, which
prompted a sensitive outlook on the development of weapon systems
that minimize the role of humans. Furthermore, this chapter will also
analyze the role of individuals associated with the production and
development of such technology and their accountability.

I. Introduction

In the last days of the fight against the Islamic State in Syria, as members of
the once-ferocious caliphate were besieged in a dirt field outside the town of
Baghuz, a United States (US) military drone buzzed high above, searching
for military targets. However, all it saw was a crowd of women and children
huddled against a riverbank. Without warning, an American F-15E attack
jet streaked across the drone’s high-definition field of view and dropped a
500-pound bomb on the crowd, engulfing it in a shuddering blast. 249 As the
smoke cleared, a few individuals fled for safety. Then the following jet

249
Dave Phillips and Eric Schmitt, ‘How the U.S. Hid an Airstrike That Killed Dozens of
Civilians in Syria’ (New York Times, 13 November
2021) <https://www.nytimes.com/2021/11/13/us/us-airstrikes-civilian-deaths.html>
accessed 24 December 2021.


dropped a 2,000-pound bomb on them, followed by another, killing the
majority of the survivors.250 Without a solid justification, the strike
very probably breached international law, and the pilots may face
prosecution. 251 Indeed, if the pilot fired the drone’s missiles with an unlawful
purpose (for example, to avenge his slain friends), he would unquestionably
be guilty of a war crime. 252 But what happens if the pilot is not included in
the equation? What if the drone functioned entirely autonomously (i.e.,
without human supervision) owing to highly developed artificial
intelligence? Who should be held accountable for the crime if this
completely autonomous weapon killed Syrian civilians without legal
justification? This scenario may seem implausible, considering that no
government presently has drones capable of operating with this degree of
autonomy. 253 However, the technology necessary to deploy such fully
autonomous systems may become available in the near future. 254 Because these systems will be able

250
ibid.
251
See International Covenant on Civil and Political Rights (adopted 16 December
1966, entered into force 23 March 1976) 999 U.N.T.S. 171 (ICCPR), art 6; Paul M Taylor, A Commentary on the International
Covenant on Civil and Political Rights: The UN Human Rights Committee’s Monitoring of
ICCPR Rights (Cambridge University Press 2020) 138-170; Stuart Casey-Maslen and C H
Heyns, The Right to Life under International Law: An Interpretative Manual (Cambridge
University Press 2021) 659-671; ‘Counter-Terrorism Module 8 Key Issues: Arbitrary
Deprivation of Life’ (United Nations) <https://www.unodc.org/e4j/en/terrorism/module-
8/key-issues/arbitrary-deprivation-of-life.html> accessed 24 December 2021; Académie De,
Use of Force in Law Enforcement and the Right to Life: The Role of the Human Rights
Council (Geneva Academy Of International Humanitarian Law And Human Rights 2016);
Cóman Kenny, ‘Legislated Out of Existence: Mass Arbitrary Deprivation of Nationality
Resulting in Statelessness as an International Crime’ (2020) 20 International Criminal Law
Review 1026.
252
Robert Sparrow, ‘Killer Robots’ (2007) 24 Journal of Applied Philosophy 62, 62-66.
253
Vincent Boulanin and Maaike Verbruggen, ‘Mapping the Development of Autonomy in
Weapon Systems’ (Sipri, 2017) 20 <https://www.sipri.org/sites/default/files/2017-
11/siprireport_mapping_the_development_of_autonomy_in_weapon_systems_1117_1.pdf>
accessed 24 December 2021.
254
ibid.

to engage targets autonomously, many researchers predict they may
eventually replace drones in future battles. 255

Additionally, although the technology may seem distant, the topic is already
gaining prominence among international law researchers and participants.
Major human rights groups and notable professors have expressed opposition
to AWS, and the United Nations has started considering a preemptive ban. 256
Indeed, some commentators believe that within the next several years, the
world community will achieve an agreement on these weapons. 257 As the
prospective ban indicates, the thought of people being removed from the
battle loop has alarmed certain human rights organizations. 258 One key worry
they have voiced is the above-mentioned accountability issue.259 Without
human intervention, the weapons’ artificial intelligence will allow them to
assess data, identify courses of action, and execute reactions to a variety of
circumstances. 260 Unlike human-operated drones, the acts of an AWS are

255
‘Law and Ethics for Autonomous Weapon Systems: Why a Ban Won’t Work and How
the Laws of War Can’ (Columbia University Scholarship Archive) 4-5
<https://scholarship.law.columbia.edu/faculty_scholarship/1803/> accessed 21 December
2022; Daniel Hammond, ‘Autonomous Weapons and the Problem of State Accountability’
(2015) 15 Chicago Journal of International Law 654, 655.
256
‘States must address concerns raised by autonomous weapons’ (International Committee
of the Red Cross, 2019) <https://www.icrc.org/en/document/states-autonomous-weapons>
accessed 24 December 2022; ‘Autonomous weapons that kill must be banned, insists UN
chief’ (UN News) <https://news.un.org/en/story/2019/03/1035381> accessed 24 December
2022.
257
Hammond (n 255).
258
UN Human Rights Council, Report of the Special Rapporteur on Extrajudicial, Summary
or Arbitrary Executions, A/HRC/23/47,
<https://digitallibrary.un.org/record/755741?ln=en> accessed 24 December 2021.
259
Ahan Gadkari, ‘Question on the Use of Ethical Responsibility on the Use of Unmanned
Aerial Vehicles in Combat Zones’ (JSIA
Bulletin) <https://www.thejsiabulletin.com/post/question-of-ethical-responsibility-on-the-
use-of-unmanned-aerial-vehicles-in-combat-zones> accessed 24 December 2021; Boulanin
and Verbruggen (n 253).
260
Gadkari (n 259).


difficult to attribute to a single individual. 261 In light of this and other issues,
several experts have questioned whether the mere use of AWS breaches
international law by definition. 262 Furthermore, even if such usage does not
violate international law in and of itself, an individual weapon could
nonetheless result in a breach in a particular instance. 263 In many
instances, it is difficult to determine who should be held accountable for an
AWS crime.

Defenders of AWS have suggested that those sufficiently involved with the
weapon—military commanders, designers, or manufacturers—could be held
accountable for the weapon’s illegal actions, but opponents have identified
several flaws with each of these potential candidates for accountability. 264
Few, on the other hand, have examined the practicality of holding the state
liable for crimes committed by its AWS. 265 Indeed, few academics have
questioned whether this alternative is desirable in principle or practicable in
reality. 266

261
Darren Stewart, ‘New Technology and the Law of Armed Conflict’ 87 International Law
Studies 272, 278.
262
ibid.
263
Stephen White, ‘Brave New World: Neurowarfare and the Limits of International
Humanitarian Law’ (2008) 41 Cornell International Law Journal 177, 177–
210 <https://scholarship.law.cornell.edu/cilj/vol41/iss1/9> accessed 24 December 2021.
264
‘Autonomous Robotics thrust group of the Consortium on Emerging Technologies,
Military Operations, and National Security, International Governance of Autonomous
Military Robots’ (Columbia University Academic Commons,
2011) <https://academiccommons.columbia.edu/doi/10.7916/D8TB1HDW> accessed 24
December 2022; Armin Krishnan, Killer Robots: Legality and Ethicality of Autonomous
Weapons (Ashgate 2012) 103-5.
265
Gadkari (n 259).
266
ibid; Jack Beard, ‘Autonomous Weapons and Human Responsibilities’ (2014) 45 College
of Law, Faculty Publications 617, 617–678
<https://digitalcommons.unl.edu/lawfacpub/196/> accessed 24 December 2021.

This chapter addresses these concerns by providing a conceptual
framework for state responsibility for AWS crimes and examining the
major procedures for holding a state accountable. Normatively, it would be
better to hold the state liable for AWS crimes than commanders,
designers, or manufacturers, since the state is in the best position to
guarantee that these weapons consistently conform with international law.
Furthermore, as the principal buyers and users of these weapons,
governments are best placed to bear responsibility in the case of
unforeseeable AWS war crimes. Finally, the majority of nations that use
this technology will almost certainly have the financial means to
compensate victims.

State responsibility therefore seems preferable to individual
accountability in the abstract, but it is unclear how the available venues
for holding governments accountable for AWS crimes would operate in
practice. The first option is for governments whose people were harmed
by the crime to bring a case before the International Court of Justice
(ICJ). 267 The ICJ has extensive subject-matter jurisdiction and hence may
offer a platform for the victim state to seek a remedy on behalf of its
people. Regrettably, the severe restrictions on its jurisdiction will very
likely preclude it from hearing AWS claims. Additionally, it lacks an
enforcement mechanism to address any potential violations. These
circumstances jeopardize the ICJ’s capacity to hold nations accountable
for AWS crimes. This necessitates the establishment of wholly new
international legal accountability systems that will hold the state
deploying AWS accountable for AWS crimes.

267
Graduate Institute of Geneva, Academy Briefing No. 8 - Autonomous Weapon Systems
under International Law (2014) 9.


This chapter is divided into seven sections, each addressing a distinct
facet of accountability and the various factors on which it depends.

II. Obligations under International Law

When a private firm acts under the State’s instructions, guidance, or control,
the firm’s conduct is attributable to the State.268 States, as indicated by
state practice and opinio juris, are required by customary law to assess the
legitimacy of novel means and techniques of combat.269 This evaluation must
take into consideration the weapon’s anticipated usage.270 Article 2(4) of the
United Nations Charter prohibits states from threatening or using force
against the territorial integrity of other states.271 The term “territorial
integrity” refers to a State’s exclusive sovereignty over its territory. 272 A
threat of force may take the form of the prospect of cross–border weapon use

268
Armed Activities on the Territory of the Congo (the Democratic Republic of the Congo v
Uganda) (Judgment) [2005] ICJ Rep 168, 66 [175]-[176].
269
Isabelle Daoust et al, ‘New wars, new weapons? The obligation of States to assess the
legality of means and methods of warfare’ (2002) 84 Revue Internationale de la Croix-
Rouge/International Review of the Red Cross 345, 354-7; A Guide to the Legal Review of
New Weapons, Means and Methods of Warfare (International Committee of the Red Cross
2010) <https://www.icrc.org/en/publication/0902-guide-legal-review-new-weapons-means-
and-methods-warfare-measures-implement-article> accessed 24 December 2021; Natalia
Jevglevskaja, ‘Weapons Review Obligation under Customary International Law’ (2018)
94 International Law Studies 187, 933-4; Group of Governmental Experts of the High
Contracting Parties to the Convention on Prohibitions or Restrictions on the Use of Certain
Conventional Weapons, Weapons Review Mechanisms Submitted by the Netherlands and
Switzerland, CCW/GGE.1/2017/WP.5, 4 [18].
270
William Boothby, Weapons and the Law of Armed Conflict (Oxford University Press
2009) 249.
271
Oil Platforms (the Islamic Republic of Iran v the United States of America) (Judgment),
[2003] ICJ Rep 161, 51 [100]; Legal Consequences of the Construction of a Wall in the
Occupied Palestinian Territory (Advisory Opinion of 9 July 2004) [2004] ICJ Rep 136, 31
[63].
272
Military and Paramilitary Activities in and against Nicaragua (Nicaragua v the United
States of America) (Judgment) [1986] ICJ Rep 14, 99-100 [209].

and troop concentrations along borders, as Turkey, Yugoslavia, Pakistan,
Iraq, and the Soviet Union have shown via their state practice. 273

(a) International Human Rights Law (“IHRL”)

States are obligated to respect the human rights of those who reside on their
territory and are subject to their authority. 274 Extraterritorial jurisdiction
arises when a State’s acts have consequences that extend beyond its
borders.275 Article 6(1) of the ICCPR prohibits States from engaging in
activities that may arbitrarily deprive persons of life, even if such conduct
does not result in death.276 The use of force in law enforcement
circumstances is only legal when a danger to life is imminent. 277 Using AWS

273
‘Action concerning Threats to the Peace, Breaches of the Peace, and Acts of Aggression,
Article 51’, in The Charter of the United Nations: A Commentary, Volume 2 (3rd edn) 1410;
JA Green, The Threat of Force as an Action in Self-defense under International Law (2011)
44 Vanderbilt Journal of Transnational Law 239–285; Repertoire of the Practice of the
Security Council, Suppl. 1964-1965, XVI, 238 S. (Sales No. 1968. VII. 1), Doc.
ST/PSCA/l/Add. 4., 202 (1968); U.N.S.C., Letter dated 1 February 1999 from the Chargé
D’Affaires A.I. of the Permanent Mission of Yugoslavia to the United Nations addressed to
the President of the Security Council, U.N Doc. S/1999/107 (2 February 1999); U.N.S.C.,
Letter dated 5 February 1999 from the Chargé D’Affaires A.I. of the Permanent Mission of
Yugoslavia to the United Nations addressed to the Secretary-General, U.N. Doc.
S/1999/118 (4 February 1999); U.N.S.C., Cablegram dated 15 July 1951 from the
Permanent Representative of Pakistan to the President of the Security Council and the
Secretary-General, U.N. Doc. S/2245 (15 July 1951); S.C. Res. 949 (15 October 1994);
Anthony De Luca, ‘Political Science Quarterly: Fall 1977: Soviet-American Politics and the
Turkish Straits’ (1977) 92 Political Science Quarterly 503, 516-20.
274
Application of the Convention on the Prevention and Punishment of the Crime of
Genocide (Bosnia and Herzegovina v Serbia and Montenegro) (Judgment) [1996] ICJ Rep.
595, 24-5 [31]; ICCPR, art 2(1).
275
Drozd and Janousek v France and Spain (Admissibility and Merits) App. No. 12747/87,
A/240, [1992] ECHR 52, 22 [91].
276
ICCPR, art 6; UN Human Rights Committee (HRC), ‘General Comment No. 36 on art 6
Right to Life’ 3 September 2019, CCPR/C/GC/35, 2 [7].
277
Dilek Kurban, ‘Forsaking Individual Justice: The Implications of the European Court of
Human Rights’ Pilot Judgment Procedure for Victims of Gross and Systematic Violations’
(2016) 16 Human Rights Law Review 731; Benzer and others v Turkey, Application no.
23502/06, Judgement (2013), p. 33-4 (para. 163); Andreou v. Turkey, Application no.
45653/99, Judgement of 27 Oct. 2009, 12 [46]; Daragh Murray and Dapo
Akande, Practitioners’ Guide to Human Rights Law in Armed Conflict (Oxford University
Press 2016) 119-120; United Nations Congress on the Prevention of Crime and the


with algorithmic tagging to identify targets and approve the use of force
violates this criterion, since threats are identified well before any
“imminent” emergency arises. 278 Effective remedies require the State to
pursue and punish those responsible for human rights breaches. 279 Individual
responsibility for arbitrary deprivation of life is not attainable in the case of
AWS, since the weapon itself cannot be penalized or deterred.280

According to the principle of distinction, attacks may be directed solely at
military objectives.281 Weapons designed to target based on
observable, behavioral, or other “signatures” violate this principle, since
such signatures do not precisely correspond to the criteria for persons or
objects that may lawfully be made the target of attack under IHL.282 Additionally, under the
principle of proportionality, civilian casualties must not be excessive in
relation to the concrete and direct military advantage anticipated from the attack as a

Treatment of Offenders, Basic Principles on the Use of Force and Firearms by Law
Enforcement Officials (United Nations 1990) [9]; Landaeta Mejías Brothers et al v
Venezuela (Preliminary Objections, Merits, Reparations and Costs, Judgment, Judgement of
27 August 2014) 34-5 [131]; UN Human Rights Council, Report of the Special Rapporteur
on extrajudicial, summary or arbitrary executions (1 April 2014) A/HRC/26/36, 10 [59].
278
Maya Brehm, ‘Defending the Boundary’ (Geneva Academy, 2017) 24
<https://www.geneva-academy.ch/joomlatools-files/docman-files/Briefing9_interactif.pdf>
accessed 24 December 2022.
279
ICCPR, art 1, 2(3); ‘Losing Humanity | The Case against Killer Robots’ (Human Rights
Watch) <https://www.hrw.org/report/2012/11/19/losing-humanity/case-against-killer-
robots> accessed 24 December 2021; Hammond (n 255).
280
UN Human Rights Council, ‘Report of the Special Rapporteur on extrajudicial, summary
or arbitrary executions’ (9 April 2013) A/HRC/23/47, 14 [76]; Christof Heyns, ‘Human
Rights and the use of Autonomous Weapons Systems (AWS) During Domestic Law
Enforcement’ (2016) 38 Human Rights Quarterly 350.
281
Henckaerts and Doswald-Beck (n 186) 25; Legality of the Threat or Use of Nuclear
Weapons (Advisory Opinion of 8 July 1996) [1996] ICJ Rep 226, 257 [78].
282
Kristina Benson, ‘“Kill ‘em and Sort it Out Later:” Signature Drone Strikes and
International Humanitarian Law’ (2014) 27 Global Business & Development Law
Journal 17, 49 <https://scholarlycommons.pacific.edu/globe/vol27/iss1/2> accessed 24
December 2022.

whole. 283 This balance necessitates a subjective assessment of military
benefit versus humanitarian considerations.284

Finally, the principle of precaution requires States to take all feasible
measures to suspend or cancel an attack if it becomes apparent that the
objective is not military. 285 This requires human agents to maintain
adequate control to detect and respond to changing situations in a timely
way. 286 AWS is incapable of discriminating between civilian and military
targets, since it is designed to target whatever its algorithm, based on
tagged “signatures”, perceives as an armed threat.287 Additionally, AWS
lacks meaningful human oversight over the subjective weighing of
harms. 288 More crucially, during armed conflict, human control over AWS
must transition to a law enforcement approach as circumstances dictate.
AWS is capable of detecting individual and isolated threats, but not of
analyzing the social or political factors that determine whether the State
is involved in an armed conflict triggering the application of IHL.

III. Corporate Responsibility

Establishing state responsibility sits at the apex of accountability and is
the gateway to subsequent legal action. It is, however, largely dependent upon

283
Daniel Thürer, International Humanitarian Law: Theory, Practice, Context (Ail Pocket,
Cop 2011) 74; Henckaerts and Doswald-Beck (n 186) 173-5; Dieter Fleck and Michael
Bothe, The Handbook of International Humanitarian Law (Oxford University Press 2021)
119-186; William Fenrick, Attacking the Enemy Civilian as a Punishable Offence (Core
1997) <https://core.ac.uk/download/pdf/62547705.pdf> accessed 24 December 2021.
284
Prosecutor v Stanilav Galic (Trial Judgement and Opinion) IT-98-29-T (5 December
2003) [58]; Henckaerts and Doswald-Beck (n 186) 60.
285
ibid.
286
Maya Brehm (n 278) 24.
287
ibid 40.
288
ibid.


direct derogation from conventional or customary obligations. A further
downside of this approach is that it is consequential rather than
preventative in nature; that is, such accountability can be imposed on
states only after a violation, since it concerns the action rather than
the object of the violation per se. On the other hand, sanctions on corporate
entities that develop such autonomous technologies give authorities a
different avenue for instilling accountability. Countries such as the USA, the
UK, France, and Israel have domestic laws imposing criminal sanctions on
corporate entities, extending to corporate criminal liability for
companies that manufacture, sell, or distribute systems that cause direct
harm. 289

IV. Individual Criminal Responsibility

As the cornerstone of the edifice of criminal law, the international criminal
liability of an individual290 plays a key role in promulgating the principles of
criminal law while also providing a mechanism to enforce international
humanitarian law. Individual responsibility for the acts of
lethal autonomous weapon systems has attracted considerably more attention
than state responsibility in this regard. The recognition of
individual accountability has a wide arc, ranging from domestic law to
IHL and international human rights law. 291 The discussion of this
responsibility turns fundamentally on the question of
whether, and to what extent, the presence of autonomy undermines the

289
Corporate Criminal Liability: Emergence, Convergence and Risk (M. Pieth and R. Ivory
(eds.), Springer 2011) 7–14.
290
Christian Tomuschat, ‘The Legacy of Nuremberg’ (2006) 4 J. Int’l. Crim. Just. 830, 840.
291
Marco Sassoli, ‘Humanitarian Law and International Criminal Law’, in The Oxford
Companion to International Criminal Justice (Antonio Cassese ed., 2009) 112-113.

essential elements of criminal responsibility and whether those elements
remain attributable to an individual. The presence of autonomy is assessed
primarily to understand how it affects the circumstances in which an
individual may engage in unlawful conduct and to narrow down the context
in which the conduct occurred. Scrutinizing the situation under which the
act transpires is crucial to establishing the intention behind it. Thus,
analyzing temporal and geographical circumstances as parameters of intent
is paramount to establishing individual criminal responsibility.

The chief elements for establishing individual criminal responsibility
attributable to an individual are as follows:
a) a serious violation of international humanitarian law;
b) a material element of conduct (actus reus);
c) mens rea (the mental element); and
d) conduct carried out through established modes of responsibility
under international criminal law.

It is crucial to assert here that the setting in which AWS are used has
no bearing on the individual criminal responsibility associated with them.
That is, whether a crime is committed in a domestic feud, an internal conflict,
or an international armed conflict, if it results from the direct or
indirect use of AWS, then accountability should exist. 292 Pursuing the
same object and purpose as the essentials listed above, Article 25 of the
Rome Statute demarcates the boundaries within which individual criminal
responsibility subsists. Determining the conditions under which an
individual is considered to have aided, instigated, or contributed directly or

292
The Prosecutor v Tadic, Case No. IT-94-1-T (2 October 1995) (Decision on the Defence
Motion for Interlocutory Appeal on Jurisdiction) 129.


indirectly to a material violation resulting from an AWS is a
prerequisite to determining individual criminal responsibility. 293

Mens Rea and Actus Reus for Individual Criminal Responsibility

Crimes are generally pivoted on a guilty state of mind and an act done in
consonance with that state, carried out with guilty intention.294 The
foremost requirement for standing before the ICC is the aforesaid state of
mind, in addition to meeting the jurisdiction of the court under
Article 5 of the Rome Statute. It is to be noted that both actus reus and mens
rea must be established for individual criminal responsibility.

i. Class of Perpetrators

The nature of violations that AWS can cause points to the fact
that there can be different classes of perpetrators for the same crime. It is
therefore important to delineate these classes and identify the
accountability associated with each of them. A person who
instigates a plan and a person who orders the commission of a crime bear
equal accountability. 295 Such conduct in furtherance of a common
criminal purpose applies only so long as the acts of the persons participating
have a direct and material impact on the commission of the crime. Such a class
of co-perpetrators is generally relatively easy to establish. With AWS, this
becomes a challenge: since weapons acting with full autonomy are
unpredictable, whether their actions were within the knowledge and

293
Jack M. Beard, ‘Autonomous Weapons and Human Responsibilities’ (2014) 45 Geo. J.
Int'l L. 617, 646.
294
‘Trial of Bruno Tesch et al (Zyklon B Case), UNWCC, Case Number 9, British Military
Court (1946)’, in Law Reports of Trials of War Criminals (1949) 93-104.
295
The Prosecutor v Delalic et al (Trial Judgement) Case No. IT-96-21-T (16 November
1998) [328].

understanding of the deployer becomes a far-fetched question. Furthermore,
demonstrating a criminal state of mind sufficient to ground responsibility
is an even tougher task.

The class of perpetrators for the use of AWS can extend to politically
motivated individuals as well, that is, where the commission
of an act by AWS was a direct consequence of an order given by an official.
Although the UN Security Council has on numerous occasions affirmed that
the responsibility of leaders and executants exists,296 which is also why the
ICTY and ICTR can prosecute leaders and members alongside command
responsibility, 297 a conundrum remains as to whether the responsibility of
an ‘ordinary soldier’ conducting the operation persists.

Furthermore, this class of perpetrators can also extend to manufacturers,
who may be prosecuted under Article 25(3) of the Rome Statute. It is,
however, extremely difficult to show collusion between manufacturers
and individuals in positions of command responsibility, nor is it easy to
prove that either or both acted under a common concerted plan. Treating
production itself as an act carried out with criminal mens rea would place
weapons manufacturers at an untenable disadvantage.

ii. Issues in establishing individual criminal responsibility

The biggest hurdle, and one that affords a strong defense to multiple
classes of perpetrators, is the complexity of these systems. The evolving
nature of AWS means that, most of the time, apart from the initial
manufacturers and programmers, the actual deployers do not have a complete picture of what the

296
S.C. Res. 1329, U.N. Doc. S/RES/1329 (30 November 2000).
297
Statute of the International Criminal Tribunal for the former Yugoslavia 1993, art 7(1).


AWS is capable of. In such a situation, culpability under individual
criminal responsibility diminishes to a very large extent. Training programs
for deployers will be of importance; however, to the extent that some
individuals are never exposed to such expertise, a defense would lie for
them. 298 Furthermore, a failure of conscience while developing the
technology also gravely affects the situation. Deterrence against the use of
force on a battlefield will drastically diminish when soldiers are replaced
by machines specifically designed to commit crimes.

V. Split Responsibility

This chapter has argued that multiple sets of actors contribute to a
single violation involving AWS. The concept of ‘split responsibility’
suggests that responsibility be shared among all these actors, from the
manufacturers and programmers to the military and political officials in
positions of command responsibility. The rationale behind this approach
is to counter the lack of moral agency in AWS that lack ‘effective human
control’ by holding accountable every human component of the activity
behind their functioning.299 This increasingly popular approach is not only
misdirected but also unworkable owing to legal challenges. Primarily,
responsibility is difficult to split, since the threshold between each participant
is unknown, as is the extent to which responsibility should apply to each. Tribunals

298
Ministry of Defence, Development, Concepts, and Doctrines Centre, ‘The UK Approach
to Unmanned Aircraft Systems’ [2011] JDN 2-11, 510.
299
‘The Convention on Certain Conventional Weapons (CCW), Informal Meeting of
Experts on Lethal Autonomous Weapons Systems, U.S. Delegate Closing Statement’
(United Nations, 2014)
<http://www.unog.ch/80256EDD006B8954/%28httpAssets%29/6D6B35C716AD388CC12
57CEE00487 1E3/$file/1019.MP3> accessed 23 December 2021.

that take up such violations are also concerned more with the bearer of the
weapon than the manufacturer, given the direct choice the user makes.
Legally, manufacturers and programmers do not fall within the scope of
IHL unless they are directly part of the armed conflict. In the event that
they are, applying split responsibility to them simply conflates the overall
responsibility of selected perpetrators.

VI. Accountability Gap and Victim’s Right to Legal Remedy

Without accountability, international law is nothing but an empty
revolver.300 The identity of international law comes not only from setting
legal standards for states, organizations, and non-state actors, but also from
providing recourse for the harm caused. The assurance that international
law meets these qualities is the primary reason why
violations of a greater threshold are prevented. 301 It is therefore to maintain the
sanctity of international law and its application that states should strive to
provide victims with remedies, independently of whether the state is directly
accountable for the act.302 The absence of ‘meaningful human control’ creates
a grey area amidst the entire functioning of such weapon systems. It not only
makes their conduct unpredictable but also blurs the lines of accountability
to the point where the legally responsible entity is left unscathed. Such
ambiguity has an extreme impact on the victim’s legal right to remedy. As
long as there remains the possibility of an unpredictable character in these

300
Aaron Xavier Fellmeth and Maurice Horwitz, Guide to Latin in International Law (2009)
47.
301
Steven Ratner et al, Accountability for Human Rights Atrocities in International Law:
Beyond the Nuremberg Legacy (3rd edn, 2009) 3.
302
Draft Articles on Responsibility of States for Internationally Wrongful Acts (ILC 2001)
art 5; U.N. Human Rights Committee, ‘General Comment No. 31 [80] on the Nature of the
General Legal Obligation Imposed on States Parties to the Covenant’ (adopted 29 March
2004, entered into force 26 May 2004) CCPR/C/21/Rev.1/Add.13 18.


weapon systems, not just in terms of individual actions but also in the
commands given and the general programming, establishing accountability
will remain a distant dream.

For state responsibility, technological progress poses a more genuine threat.
It gives rise to possibilities wherein states will be able to deploy lethal
autonomous weapon systems in non-attributable ways.303 The bottom line
is that the only way to ensure a set degree of accountability across all
aspects of the use of autonomous weapons is to ensure ‘meaningful
human control’ over their direct use, that is, that the critical
attributes of the weapons and their use remain in human hands.

Access to justice and reparation are two primary components of a victim's legal right to remedy, and both are closely intertwined with the state's responsibility to prosecute the accountable.304 Furthermore, the state bears the onus not just to curb the extent of a violation but also to investigate the root causes of the offenders' conduct.305 Where justice is enshrined as the edifice of the legal system, reparation is advanced through treaty law,306 as is

303 'Report of the Expert Meeting on Autonomous Weapons' (International Committee of the Red Cross, 9 May 2014) 89-91 <https://www.icrc.org/eng/assets/files/2014/expert-meeting-autonomous-weapons-icrc-report2014-05-09.pdf> accessed 23 December 2021 [hereinafter ICRC Report].
304 Ken Obura, 'Duty to Prosecute International Crimes Under International Law' in Chacha Murungu and Japhet Biegon (eds), Prosecuting International Crimes in Africa (Oxford University Press 2011) 11-31.
305 Social and Economic Rights Action Centre and Centre for Economic and Social Rights v Nigeria, Cmt. No. 155/96 (27 October 2001) [44]-[48] <http://www.achpr.org/communications/decision/155.96/> accessed 5 April 2023.
306 International Covenant on Civil and Political Rights (adopted 1966, entered into force 1976) art 2(2); Rome Statute of the International Criminal Court (adopted 17 July 1998) art 75.

“Who Let The Dogs Out?” – Placing Accountability on Weaponized AI in
International Law
done through custom with respect to certain offenses. On several occasions the United Nations Security Council has also affirmed the need for reparation.307

Cases across various courts have emphasized the need for reparation and its role in holistically completing the right to a legal remedy.308 The nature of such reparation can vary, taking the form of compensation, restitution, and so on; the crux of the matter, however, is the possibility that a lack of accountability undermines this remedy as a whole. Where responsibility for the acts of such machines is not established, for all the reasons this chapter addresses concerning the responsibility of the state and of the individual, the question of whether any form of reparation can be granted remains unanswered. This further substantiates the illegality of the use of AWS: their use could be ethical only if someone were accountable for the illegal acts in war, and the use of AWS offers no mechanism to help authorities reach that person.309

VII. Conclusion

The threats posed by AWS should be taken seriously, but even more so the accountability gaps that they create. In a world where accountability holds the fort of international law steady, one loose end could prove fatal to the discourses available in other segments of law as well, one extreme being the loss of ways to counter impunity and assist victims. The

307 Khmer Rouge Trials, 77th Plenary Meeting (18 December 2002) UN Doc A/RES/57/228 B (22 May 2003).
308 Factory at Chorzów (Indemnities) (Germany v Poland) (Judgment) [1927] PCIJ (ser. A) No. 17, 29.
309 Robert Sparrow, 'Killer Robots' (2007) 24 Journal of Applied Philosophy 62.


complementary concepts of State Responsibility and Individual Criminal Responsibility are not mutually exclusive in the use of AWS. This chapter has therefore not only laid down the essential components of accountability in a technological setup but also attempted to refute emerging ideas that build upon a flawed premise. The lack of moral agency in AWS can be countered only by effective human control; only then will the indispensable element of accountability have a chance to survive the era of autonomous weapons.


ARTIFICIAL INTELLIGENCE AND INTERNATIONAL LAW:


ANALYSING AI-TOOLS SIGNIFYING THE SCOPE OF
CODIFICATION
Muhammed Shafeeq M K
(Student at Markaz Law College, University of Calicut)

Abstract
AI has proven valuable in national legal practice, contract drafting, and other processes closely related to law. International law, however, is yet to seize the opportunity. Unlike domestic laws, which have the luxury of definite statutes and codified provisions, international law is formed out of numerous treaties and state practices. There may therefore be difficulties in introducing artificial intelligence. However, the transformations and advantages that AI might bring cannot be overlooked. One among them is the codification of international law. The first phase of our discussion analyses the potential tools of AI, bearing in mind the scope of codification and progressive development of international law as suggested by Article 13(1)(a) of the UN Charter. Taking this as the premise, this chapter contends that AI could be a powerful mechanism for the codification and progressive development of international law, powerful enough even to surpass the cognitive ability of humans. To study the scope of codification using AI, this chapter analyses advanced AI tools such as the information extraction tool, the similarity analysis tool, and the authorship analysis tool. It finds that AI can develop a codified law of nations to be applied in diverse fields of international law by collecting primary data from numerous texts relating to state


practices, records of negotiations, documents of treaties, and judgements of adjudicatory authorities.

I. Introduction

By the time this chapter sees the light of day, the world's first AI-enabled robot lawyer will have assisted a party in a court of law. DoNotPay, a startup based in San Francisco, has agreed to use its robot lawyer to assist a defendant by telling him what to say via an earpiece.310 Evidently, the scope of AI is increasing day by day. Many professions have already begun to use its advanced tools in operations previously handled by humans. As a solution to the limitations of human analytical capacity, AI has made data analysis and evidence-based interpretation remarkably simple.

International law is the instrument of international coexistence. However, international law is not considered as powerful as domestic law. Compared to municipal laws, it lacks many traits, one among them being enforceability. Since different states hold different positions in controlling international law and order, the law sometimes seems less powerful towards some of them. This fact indicates the need for a unique and impartial mechanism that can increase participation in, and the enforceability of, international law. Further, national laws offer sufficient data for AI; international law, by contrast, does not have a vast number of codified laws and statutes to be taken as data. Nevertheless, there are some possible ways of initiating AI

310 Tech Desk, 'World's first AI-enabled robot lawyer will tell defendant what to say in upcoming court case' (The Indian Express, 11 January 2023) <https://indianexpress.com/article/technology/worlds-first-robot-lawyer-will-tell-defendant-what-to-say-in-upcoming-court-case-8374910/> accessed 14 January 2023.

117
Artificial Intelligence and International Law: Analysing AI-Tools signifying
the Scope of Codification
in international law. The goals of doing so include the codification and progressive development of international law as stated in Article 13(1)(a) of the UN Charter.311

In the first phase of this investigation, we discuss the sources of data which can be utilized in the successful application of AI. The study then goes on to analyze different AI tools with regard to their applicability in international law. Finally, the research concludes that those AI tools can effectively be used to codify international law and thereby to introduce some regulatory mechanisms as well.

II. Initiating AI in International Law

(a) Artificial Intelligence

The rapid development of Information and Communication Technologies (ICT) has resulted in the creation of 'artificial intelligence', which is capable of performing tasks that were once expected to be done only by human beings. This is rather an understatement in the current context, where AI even poses a real threat to the employability of naturally intelligent humans. The goal of AI research is to comprehend and create intelligent beings that fall under the parameters of thinking and acting rationally and humanly. The birth of AI can be traced back to the 1950s, when the founding fathers of the field, Minsky and McCarthy, defined it as "any kind of activity or task performed by a specific machine or robot that required human intelligence, previously." The definition of AI given by Russell and Norvig reads as follows: "Artificial Intelligence is a field of research to

311 United Nations Charter 1945, art 13(1)(a).


understand and build intelligent entities that pertain to the categories of thinking and acting humanly and rationally."312 The trajectory of development in the field of AI, and the pace at which it takes place, confidently point to a world of Artificial Super Intelligence (ASI), defined as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest."313 AI has clearly advanced far enough to challenge the analytical and cognitive capacity of human beings.

One of the main features of AI is its adaptability. AI is known for its ability to react to unprecedented circumstances and environments in the 'right way'. The tool here is 'machine learning', a method of AI that enables machines to "learn" from data and experience without having been explicitly programmed.314 In the debate over the right and wrong of the machine, Russell and Norvig have attempted to clarify that "a system is rational if it does the 'right thing', given what it knows."315 The initial commands given to a system work as raw data on the basis of which the system will interpret and decide a given problem. Sometimes the machine will use the general characteristics of similar problems, learned from the data pool, to decide in unprecedented situations. This act of learning lessons is truly an imitation of human cognitive abilities. Right and wrong will be judged, as Russell and Norvig put it, by "given what it knows."316
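The notion of "learning from data without being explicitly programmed" can be illustrated with a deliberately tiny sketch. The data, labels, and implied threshold below are invented for illustration: a nearest-neighbour classifier is never told the rule separating the classes, yet it recovers that rule from labelled examples alone.

```python
def nearest_neighbour(train, query):
    """Label `query` with the label of the closest training point.
    The decision rule is never written down; it is implied by the data."""
    def sq_dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    return min(train, key=lambda example: sq_dist(example[0], query))[1]

# Invented toy data: the unstated 'rule' is that values above 5 are "high".
examples = [((1,), "low"), ((2,), "low"), ((8,), "high"), ((9,), "high")]
print(nearest_neighbour(examples, (7,)))  # prints "high"
```

No branch in the code ever tests "greater than 5"; the boundary emerges from the examples, which is the sense in which the system "learns" rather than executes a pre-programmed rule.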

312 Stuart J. Russell and Peter Norvig, Artificial Intelligence: A Modern Approach (4th edn, Prentice Hall 2020) 4.
313 N. Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford University Press 2014) 22.
314 A. L. Samuel, 'Some Studies in Machine Learning Using the Game of Checkers' (1959) 3 IBM Journal of Research and Development 210.
315 Peter Norvig and Stuart J. Russell, Artificial Intelligence: A Modern Approach (3rd edn, Prentice Hall 2010) 1.
316 ibid.

This is, indeed, the era of information overflow. The major problem we now face is processing information from an enormous set of data. Although human experts can tackle this constraint, AI proves more efficient here: compared to human specialists, it can produce results of higher quality with greater efficiency.317 With this as the central premise, scholars have given much attention to discussing the future of the workforce, especially within organizational structures.318 The field of application of AI is expanding day by day, and the traditional fields that were considered man's domain, including innovation, are no longer remote to AI.319

(b) International Law

International relations and inter-state relationships are guided, to an extent, by international laws and treaties. International law is known for its identity crisis regarding the element of 'legal enforceability' that it lacks. To put it simply, the question of whether international law is really a law has been the subject of profound scholarly discussion. Because international law is a consent-based system, the most frequently cited argument against its status as law is that there is no international judiciary, police force, or even military to enforce it.320 However, international bodies and agencies like the

317 A. Agrawal, J. Gans and A. Goldfarb, Prediction Machines: The Simple Economics of Artificial Intelligence (Harvard Business Review Press 2018).
318 J. Bughin, E. Hazan, S. Lund, P. Dahlström, A. Wiesinger and A. Subramaniam, 'Skill Shift: Automation and the Future of the Workforce' (McKinsey Global Institute, 23 May 2018) <https://www.mckinsey.com/featured-insights/future-of-work/skill-shift-automation-and-the-future-of-the-workforce> accessed 12 November 2022.
319 Teresa M. Amabile, 'Creativity, Artificial Intelligence, and a World of Surprises' (2019) 6(3) Academy of Management Discoveries <https://doi.org/10.5465/amd.2019.0075> accessed 14 January 2023.
320 Elizabeth M. Bruch, 'Is International Law Really Law? Theorizing the Multi-Dimensionality of Law' [2011] Law Faculty Publications, Valparaiso University School of Law <https://scholar.valpo.edu/cgi/viewcontent.cgi?article=1134&context=law_fac_pubs>


United Nations and its subsidiary bodies have tried, and to an extent succeeded, in nurturing a 'binding' character for international law, which may be called the application, if not the enforcement, of international law. International bodies like the International Court of Justice, the United Nations, and the International Criminal Court carry out their duties by way of persuasion. Moreover, in the case of international law it is primarily up to the states to interpret their obligations under the agreements they themselves have entered, since there is no internationally recognized sovereign.321

Nonetheless, owing to the new world order and the multitude of transnational treaties, the stakeholders of international law have started to take a different approach to it. For some, compliance with the law and with treaties is a matter of international existence. This can be seen in the United Nations Security Council's actions, and their global reflections, in the aftermath of the 9/11 attacks of 2001. The Security Council constituted a 'Counter-Terrorism Committee' with the authority to list any entities that fund terrorism.322 The Security Council's legislative action against terrorist activities led many states to pass similar legislation in compliance with its rules, showing that international law prompted a tremendous amount of legislation at the domestic level. This proves that even if international law lacks complete legal enforceability, it is neither essentially unstable nor a set of unenforceable words. This brings us to the fundamental theme of this chapter: if the modes and means used in the practice and implementation of international law become more advanced and easier to interact with, then the participation of the
321 A Roberts, 'Power and Persuasion in Investment Treaty Arbitration: The Dual Role of States' [2010] American Journal of International Law 179.
322 E. Alexandra Dosman, 'Designating "Listed Entities" for the Purposes of Terrorist Financing Offenses at Canadian Law' [2004] University of Toronto Faculty of Law Review.

states and other international bodies will increase, making international law a stronger, more stable, and more enforceable law of the globe.

(c) AI in National Laws

Prof. Thomas Burri, in his work on international law and AI, sheds light on the prospective power of machines to outperform human specialists. He observes that tasks which once required lawyers' keen attention are increasingly being mechanized.323 Those tasks include legal assessment, due-diligence scrutiny, contract drafting, and grievance appeals. Even though he narrows the possibility of automation and the application of AI down to municipal laws, owing to the structural difference between national and international law,324 the changes the legal profession is undergoing as a whole point to the inevitable world of AI-enabled international law. The application of AI may take place through machine learning, a tool of AI that enables a system to learn without being explicitly programmed.325

The best example for understanding the application of machine learning is tax law at the national level. Tax laws are definite, and the obligations and liabilities are more often than not expressed explicitly in numerical data. Likewise, the rulings of the courts and the legislation of the national authorities are clear, and the application of these laws is uniform throughout the territory. That makes them perfect for machine learning. Many professions and companies have already begun the fruitful application of AI in their sectors. For example, AI is used by a company named LawGeex to examine contracts,

323 Thomas Burri, 'International Law and Artificial Intelligence' [2017] German Yearbook of International Law 91.
324 ibid 92.
325 Samuel (n 314).


and it also helps them detect legal hazards that a contract may pose if left unaddressed. A company called Blue J uses AI to forecast the results of court cases involving tax law. AI's power of natural language processing is widely used by lawyers in legal research, for analyzing legal records and judgments and for question-based investigation.326

(d) Limitations of International Law

Unlike national laws, international law lacks many of the characteristics needed for AI to be applied. In municipal law, the rulings delivered by the courts and the enactments made by the legislature are so numerous that there is ample raw data for AI to learn from. The case is quite different in international law: there are only a few such rulings and treaties compared to those of the states. One of the major reasons is that in international law, changes and new rulings typically require broad consensus and happen slowly and gradually.327 With this as a prime concern, we investigate here how the requisite data pool for AI can be assembled and in what ways, using that data, AI tools can be successfully applied to the codification of international law.

III. Analyzing AI-Tools

The two most significant sources of international law are treaties and customs. The diverging appearance of international law is the result of these sources, formed out of manifold treaties and the diverse customary practices of states. Using AI, these diversities can, to an extent, be codified. These sources
326 Manohar Samal, 'International Law, Litigation and Alternate Dispute Resolution' in Abhivardhan, Suman Kalani, Akash Manwani and Kshitij Naik (eds), Handbook on AI and International Law (Indian Society of Artificial Intelligence and Law 2020) 134.
327 Nico Krisch, 'International Law in Times of Hegemony: Unequal Power and the Shaping of the International Legal Order' [2005] European Journal of International Law 369.

can provide enough data for the purposes of prediction, analysis, information extraction, etc. The three major potential ways of application are:

- Prospects in International Treaties
- Connecting the Customary International Law Texts
- Intervention in International Dispute Resolution

(a) Prospects in International Treaties

An international treaty is, at its core, a consensual agreement on certain rules and duties. The signatory states are expected to abide by the provisions laid down in the agreement, and it imposes a binding responsibility on the ratifying state not to contravene the mutually created accord. For an upcoming treaty, it would be advantageous to understand what the partner state will concentrate on and how its past conduct in international legal and political agreements will influence the treaty. This is a key element in turning negotiations to our benefit.328 Machine learning can play a pivotal role here: given a set of data pertaining to the party to the upcoming treaty, AI can interpret it into productive information,329 delivering insight into the probable behaviour of the partnering states in the treaty.330 Bearing in mind the abovementioned insufficiency of data in international law, a question might arise as to what would make up such a huge data pool for AI in respect of international law. The truth is that the lack of data is a fact

328 John Saee, 'Best Practice in Global Negotiation Strategies for Leaders and Managers in the 21st Century' [2008] Journal of Business Economics and Management 309.
329 Ajay Agrawal, Avi Goldfarb and Joshua S. Gans, 'What to Expect from Artificial Intelligence' [2017] MIT Sloan Management Review.
330 Ashley Deeks, 'High-Tech International Law' [2020] George Washington Law Review 576.


only in comparison to the data available in national laws. Certainly, data for AI analysis can be collected from:

i. Past International Treaties and Negotiations of States331

Whether or not AI tools are applied, analyzing the parties' past treaties, negotiations, and other bilateral agreements is an important step before entering into a new one,332 because they may contain relevant information regarding a state's stance on different international matters. The question, then, is which method to use. With machine learning, the amount of data that can be engaged with and processed at once is relatively huge, a quality that makes it preferable to other methods.

ii. The UN Databases

The UN Digital Library can be utilized as a source of data; it contains almost all the relevant details concerning the Security Council, the Economic and Social Council, and the General Assembly. Moreover, the UN Treaty Collection website provides official data regarding the drafting and negotiation of specific treaties.333

iii. Relevant Domestic Statutes

Domestic laws will certainly have a constraining effect on the decision of the
state in its international discourses. This will apply to international treaties as

331 Wolfgang Alschner, Julia Seiermann and Dmitriy Skougarevskiy, 'Text-as-Data Analysis of Preferential Trade Agreements' [2018] Journal of Empirical Legal Studies 648.
332 Peter Reilly, 'Was Machiavelli Right? Lying in Negotiation and the Art of Defensive Self-Help' [2009] Ohio State Journal on Dispute Resolution 481.
333 'International Law Documentation' (UN General Assembly) <https://treaties.un.org/>.

well. Hence, a sufficient set of data for AI must include the relevant codes
which will possibly influence the decision of the state.334

(b) Connecting the Texts of Customary International Law

'Law of nations' is the old term used to denote customary international law: the set of state practices and opinio juris across the world that constitutes one of the elements of international law. Its function is to recognize common-law causes of action for infringements of international standards recognized by the civilized world.335 There is considerable tension regarding the availability of documents pertaining to state practice, and it is true that many of them are not even digitized.336 This does not mean, however, that primary data is lacking; rather, its availability may vary with the willingness of states to publish their records and materials.

Despite this, the challenges are being addressed, and states are coming forward with their materials and records, which will gradually enhance the performance of AI tools in the field. The potential AI tools for codification are:

i. Information Extraction

The meaning of some terminology in different legal texts may differ from state to state, not to mention what a term might be used for in the

334 David Sloss, 'Domestic Application of Treaties' in Duncan B. Hollis (ed), The Oxford Guide to Treaties (OUP 2020).
335 Ryan M. Scoville, 'Finding Customary International Law' (2016) 101 Iowa Law Review <https://ilr.law.uiowa.edu/print/volume-101-issue-5/finding-customary-international-law/> accessed 9 November 2022.
336 Report of the Secretary-General, 'Ways and Means of Making the Evidence of Customary International Law More Readily Available' (1949) UN Doc A/CN.4/6.


practical sense. To extract the exact contextual meaning and to reach a mutually, if not internationally, agreeable one, it is necessary to examine the whole record or text. AI can use the tool of 'information extraction' (text mining) for this purpose: the technique of discovering new information from text archives.337 The operation is carried out by thoroughly analyzing the texts and thereby finding the precise context in which such terms are used across different texts.338
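The context-finding step described above can be sketched minimally in Python. The corpus excerpts and the search term below are invented for illustration; the sketch simply pulls every occurrence of a term out of a set of texts together with its surrounding words, which is the raw material from which contextual meanings can be compared across states.

```python
import re

def extract_contexts(texts, term, window=4):
    """Return (document, snippet) pairs showing every occurrence of
    `term` with up to `window` words of context on each side."""
    snippets = []
    for doc_id, text in texts.items():
        words = text.split()
        for i, word in enumerate(words):
            # Compare ignoring case and trailing punctuation.
            if re.sub(r"\W", "", word).lower() == term.lower():
                left = " ".join(words[max(0, i - window):i])
                right = " ".join(words[i + 1:i + 1 + window])
                snippets.append((doc_id, f"{left} [{word}] {right}"))
    return snippets

# Invented excerpts in which the same term carries different shades of meaning.
corpus = {
    "state_A": "The delegation held that intervention requires Security Council authorisation.",
    "state_B": "Humanitarian intervention was justified, the memorandum argued, by necessity.",
}
for doc_id, snippet in extract_contexts(corpus, "intervention"):
    print(doc_id, "->", snippet)
```

Real text-mining pipelines add tokenisation, lemmatisation, and statistical scoring, but the keyword-in-context listing above is the basic unit on which such comparisons rest.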

ii. Probabilistic Method (Topic Model)

In this method, AI uses an algorithm that finds the underlying key themes in a large, unstructured collection of documents and then arranges the collection in line with the themes discovered.339 Beyond finding themes, this type of system can also be used to locate terms and ideas that are conceptually similar, which in turn helps uncover previously unidentified relationships between concepts.340 Since this is a topic-centric analysis, it is also known as the topic model method.
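Genuine probabilistic topic models (such as latent Dirichlet allocation) discover themes statistically from word co-occurrence. The sketch below is only a keyword-scoring stand-in for that machinery, with themes and documents invented for illustration; it shows the arranging step, in which each document is filed under the theme whose vocabulary dominates it.

```python
from collections import Counter

def dominant_theme(text, themes):
    """Score each candidate theme by how often its keywords occur in
    the text; return the highest-scoring theme."""
    words = Counter(w.strip(".,;").lower() for w in text.split())
    scores = {name: sum(words[k] for k in keywords)
              for name, keywords in themes.items()}
    return max(scores, key=scores.get)

def group_by_theme(docs, themes):
    """Arrange a collection of documents according to their dominant theme."""
    groups = {name: [] for name in themes}
    for doc_id, text in docs.items():
        groups[dominant_theme(text, themes)].append(doc_id)
    return groups

# Invented themes and state documents for illustration.
themes = {
    "law_of_the_sea": {"vessel", "maritime", "seabed"},
    "use_of_force": {"attack", "self-defence", "force"},
}
docs = {
    "note_1": "The coastal state seized the vessel inside its maritime zone.",
    "note_2": "The memorandum invoked self-defence after the armed attack.",
}
print(group_by_theme(docs, themes))
```

The crucial difference from a real topic model is that here the themes are supplied in advance, whereas a probabilistic model infers both the themes and the groupings from the corpus itself.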

(c) Prospects in International Dispute Resolution

An international dispute may arise for several reasons: from a state's action contravening a ratified treaty, from disputes regarding a contract, or from the meaning and interpretation of certain terms in an agreement. The bodies before which these issues would be

337 Xia Hu and Huan Liu, 'Text Analytics in Social Media' in Charu C. Aggarwal and ChengXiang Zhai (eds), Mining Text Data (Springer 2012).
338 ibid.
339 D.M. Blei, 'Probabilistic Topic Models' (2012) Communications of the ACM 77.
340 K. D. Ashley, Artificial Intelligence and Legal Analytics: New Tools for Law Practice in the Digital Age (Cambridge University Press 2017) 7.

addressed for adjudication may vary, given the clauses and conditions of the contract: the International Court of Justice, arbitral tribunals, the Permanent Court of Arbitration, and so on.

When a dispute is before an adjudicatory authority, a state may be interested in knowing its likely outcome. Even for a statistical prediction made with human intelligence, the voluminous data, comprising hundreds of past decisions and judgments of the court or body, that one must go through to predict the possible outcome may exceed human analytical capabilities. Any such analytical operation in the legal arena that analyzes the historical data of courts to predict outcomes falls squarely within the domain of AI.341 It is said that "no human lawyers stand a chance in it."342 The potential AI tools in international dispute resolution (IDR) are:

i. Authorship Analysis

This AI tool analyzes a judgment or arbitral award to determine which judge or arbitrator wrote the core decision and by which argument or reason he or she was most persuaded.343 For a state in an international dispute, this tool helps identify key elements and characteristics of the adjudicatory body as a whole, and provides insight into selecting or avoiding a particular body in the future.
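One classical technique behind such attribution is stylometry: comparing the frequencies of common function words, which an author uses habitually rather than deliberately. The sketch below is only an illustration of that idea (the judges, opinion texts, and marker-word list are all invented); it attributes an unsigned text to whichever known author has the closest function-word profile, whereas production systems use far richer features.

```python
from collections import Counter

# A tiny, invented set of stylometric marker words.
FUNCTION_WORDS = ["the", "of", "and", "that", "which", "shall", "may", "however"]

def style_profile(text):
    """Relative frequency of each marker word: a crude stylistic fingerprint."""
    words = [w.strip(".,;").lower() for w in text.split()]
    counts = Counter(words)
    total = len(words) or 1
    return [counts[w] / total for w in FUNCTION_WORDS]

def nearest_author(unsigned_text, known_texts):
    """Attribute an unsigned opinion to the author whose profile is closest
    (smallest L1 distance between fingerprints)."""
    target = style_profile(unsigned_text)
    def distance(author):
        profile = style_profile(known_texts[author])
        return sum(abs(a - b) for a, b in zip(target, profile))
    return min(known_texts, key=distance)

known = {
    "judge_A": "The tribunal shall, however, consider the claim. The state shall comply. However, costs follow.",
    "judge_B": "The tribunal may consider the claim, which the state may contest.",
}
unsigned = "The respondent shall, however, bear the costs. The order shall issue."
print(nearest_author(unsigned, known))  # prints "judge_A"
```

The attribution works because the unsigned text leans on "shall" and "however", the invented habit of judge_A, rather than on judge_B's "may" and "which".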

ii. Textual Similarity Checker

341 Lauri Donahue, 'A Primer on Using Artificial Intelligence in the Legal Profession' [2018] Harvard Journal of Law & Technology <https://jolt.law.harvard.edu/digest/a-primer-on-using-artificial-intelligence-in-the-legal-profession> accessed 7 December 2022.
342 ibid.
343 William Li, 'Using Algorithmic Attribution Techniques to Determine Authorship in Unsigned Judicial Opinions' [2013] Stanford Technology Law Review 503.


Given a set of data containing the judgment and the parties' submissions, this AI tool can identify the level of influence a party to the dispute had over the final decision. AI can identify the extent to which the person who wrote the decision was cognitively swayed by the language and words used in a party's brief.344 This may help states gauge the probability of winning an award or decision on the same legal question in future disputes: AI can enable them to decide whether to proceed with judicial adjudication if the algorithm detects those kinds of 'persuasive' phrases, words, and terms in the opposing party's brief.
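A common way to quantify such textual influence is to measure lexical overlap between a brief and the final decision, for instance with TF-IDF weighted cosine similarity. The sketch below is a minimal illustration under invented data (the briefs and decision text are made up): the IDF weighting discounts words common to every document, so that shared distinctive wording is what drives the score.

```python
import math
from collections import Counter

def tf_idf_vectors(docs):
    """Build a sparse TF-IDF weight map for each document in `docs`."""
    tokenised = [doc.lower().split() for doc in docs]
    n = len(tokenised)
    # Document frequency: in how many documents each word appears.
    doc_freq = Counter(word for doc in tokenised for word in set(doc))
    vectors = []
    for doc in tokenised:
        tf = Counter(doc)
        vectors.append({w: tf[w] * math.log(n / doc_freq[w]) for w in tf})
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse weight maps."""
    dot = sum(weight * v.get(word, 0.0) for word, weight in u.items())
    norm_u = math.sqrt(sum(w * w for w in u.values()))
    norm_v = math.sqrt(sum(w * w for w in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

# Invented briefs and decision: the decision echoes brief_a's wording.
brief_a = "the tribunal should find the measure disproportionate and discriminatory"
brief_b = "the measure was a legitimate exercise of regulatory sovereignty"
decision = "the tribunal finds the measure disproportionate and discriminatory in effect"
vec_a, vec_b, vec_dec = tf_idf_vectors([brief_a, brief_b, decision])
print(cosine(vec_dec, vec_a), cosine(vec_dec, vec_b))
```

Because "the" and "measure" occur in every document, their IDF weight is zero; the decision's similarity to brief_a therefore comes entirely from the distinctive shared terms, which is exactly the signal an influence analysis looks for.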

IV. The Scope of Codification

Article 13(1)(a) of the UN Charter calls for the codification of international law.345 The more exact articulation and implementation of international law principles, in areas that have already received considerable coverage through state practice and doctrine, is what is referred to as the codification of international law.346 The call for codification was first raised by Jeremy Bentham, who also coined the term 'international law', in 1789.347 This codification by way of human legislation and conventions is in progress even today; the advanced capabilities of the AI tools discussed above, however, point to a possible AI-based codification of international law.

344 Pamela Corley, Paul Collins and Bryan Calvin, 'Lower Court Influence on U.S. Supreme Court Opinion Content' [2011] Journal of Politics 31.
345 United Nations Charter 1945, art 13(1)(a).
346 'Codification Division Publications' (Office of Legal Affairs) <http://legal.un.org/cod/> accessed 18 December 2022.
347 Jeremy Bentham, An Introduction to the Principles of Morals and Legislation (Dover Publications 2007).

Codification, as stated above, can be understood as enacting international laws on matters for which domestic legislation is already in place. AI has a role in this codification of international law: in the legislation, enforcement, and implementation of international laws, treaties, agreements, and other rules, these highly capable AI tools can offer a considerable contribution. Since international law has different facets, AI-aided codification may likewise take different approaches.

In the case of international treaties, under the guidance of data scientists
and experts in the fields of AI, machine learning and natural language
processing, AI can put forward a 'listing mechanism' that it updates
automatically from time to time. If any state contravenes a treaty or court
order, irrespective of the position it holds, the AI may list it separately,
suspending it from entering a new treaty for a certain period. As a tool of
AI, machine learning is appropriate for this purpose. Since the chances of
vested interests influencing a machine run by AI are comparatively
insignificant,348 this mechanism stands a chance against its prevailing
counterparts. It will also increase the reliability of bilateral and
multilateral treaties because, even before states enter into a treaty, AI
will predict the possibility of partnering states contravening its rules in
the future.349

Using information extraction tools and the topic model method, which are
efficient enough to extract the underlying themes from different texts,350 a

348 Henry Adobor and Robert Yawson, 'The Promise of Artificial Intelligence in Combating Public Corruption in The Emerging Economies: A Conceptual Framework' [2022] Science and Public Policy.
349 Deeks (n 330).
350 Hu, Xia and Liu (n 337).

Acing the AI: Artificial Intelligence and its Legal Implications

synthesis of the different texts of states pertaining to state practice can be
formed. Such a codified text can be relied on during court proceedings to
avoid confusion over the meaning of particular terminologies and usages.351
Moreover, this method will help to avoid misconceptions or misinterpretations
of any term.
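The theme-extraction idea can be sketched with nothing more than word counts. The snippet below is a deliberately naive stand-in for the topic-model method the text cites: real systems would use a proper topic model (for example LDA) rather than raw frequencies, and every sample text and stopword here is invented for illustration.

```python
import re
from collections import Counter

STOPWORDS = {"the", "of", "and", "to", "a", "in", "it", "its", "over", "shall"}

def extract_themes(texts: list[str], top_n: int = 3) -> list[str]:
    """Pool the documents and rank the most frequent non-stopword
    terms as crude 'underlying themes'."""
    counts = Counter()
    for text in texts:
        words = re.findall(r"[a-z]+", text.lower())
        counts.update(w for w in words if w not in STOPWORDS)
    return [term for term, _ in counts.most_common(top_n)]

state_practice = [
    "The coastal state exercises sovereignty over its territorial sea.",
    "Sovereignty over the territorial sea extends to the airspace above it.",
    "A state shall respect the territorial sea of a neighbouring state.",
]
print(extract_themes(state_practice))
```

On these three invented sentences the pooled counts surface 'state', 'territorial' and 'sea' as the shared themes, which is the kind of cross-document synthesis the text describes.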

As far as authorship and similarity analysis tools are concerned, apart from
their utility to states in identifying relevant information about awards and
legal points, these tools offer a stronger judicial and adjudicatory system at
the international level. Instead of relying on academic accomplishments alone,
a selection committee can learn directly from the AI's report how entrenched
the cognitive inclinations of a judge or arbitrator are.352 The information
provided by authorship and similarity analysis tools will aid the international
authorities in choosing the least (cognitively) inclined candidate for
positions in the adjudicatory bodies.
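A minimal sketch of the similarity-analysis idea is bag-of-words cosine similarity between two texts. This is a generic technique, not the specific tooling the cited literature describes, and the two sample 'opinions' are invented for illustration.

```python
import math
import re
from collections import Counter

def vectorise(text: str) -> Counter:
    # Bag-of-words term counts.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine_similarity(a: str, b: str) -> float:
    """Cosine of the angle between two term-count vectors:
    1.0 for identical wording, 0.0 for no shared vocabulary."""
    va, vb = vectorise(a), vectorise(b)
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

opinion_1 = "The tribunal finds the respondent state liable for breach."
opinion_2 = "The tribunal finds the respondent state not liable."
print(round(cosine_similarity(opinion_1, opinion_2), 2))
```

Scoring a judge's past opinions against one another in this way is the simplest version of the 'similarity analysis' the text mentions; production tools would add weighting (for example TF-IDF) and stylometric features.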

V. Conclusion

International law is formed out of the mutual agreement of sovereign and
independent states to keep their commitments. A well-maintained international
legal order can be a solution to abundant legal, political, economic, and
sociological issues around the world. AI has proven to be a system that
predicts accurate information and generates practical instructions after
learning from a set of data. It holds a potential role in empowering
international law, since its novel tools can increase participation in, and
the trustworthiness of, international law. The codification of international law

351 Ashley (n 340).
352 Corley, Collins and Calvin (n 344).

using AI tools will enable states to comprehend and practise the law more
efficiently. Moreover, replacing human intelligence with its artificial
counterpart will be beneficial in the long run, as the enormous body of
interpreted data, reports, machine codifications, and other information will
aid and advise the concerned authorities in the expeditious disposal of
disputes and negotiations.

If a proper mechanism is introduced in which no state could escape the ambit
of accountability, many international disputes can be avoided. However, human
intelligence may prove inadequate for this purpose due to its many
limitations. AI has undoubtedly proved to be an expedient alternative for
various previously human-handled operations. Advanced tools of Artificial
Intelligence such as information extraction tools, authorship analysis tools
and similarity analysis tools are capable of being employed across the
various requirements of international legal discourse. Taking primary data
from different texts related to state practice, records of negotiations,
treaty documents, and decisions of adjudicatory bodies, AI can produce a
codified law of nations to be applied in different segments of international
law. In addition, these tools will operate as a regulatory mechanism under
which any international entity contravening the established rules will have
to go through the unbiased machine-probe of AI. To sum up, the novel
advancements in artificial intelligence research and its newly invented tools
have the potential to bring about an improved, codified, and nuanced law of
nations.


PRODUCT LIABILITY DILEMMAS: DRIVERLESS CARS


Kopal Kesarwani
(Student at Jindal Global Law School)

Abstract
Autonomous cars minimize human intervention and rely on the
software system installed within them to take decisions about their
functioning. An unfortunate revelation is that even a highly advanced
Artificial Intelligence ("AI"), after undergoing supervised training
and testing, can have unaccounted-for bugs that can land a car in an
accident. In this scenario, the foremost question pertains to product
liability. The law has to decide who is liable: the manufacturer,
software developer, service provider, testing authority, customer or
the AI itself. Furthermore, how do we ascertain liability when multiple
parties have substantially contributed to the making of the product and
all their functions are intricately tied up? More than that, how do we
determine whether the flaw in the software was latent or patent?

In order to resolve these complicated questions, the author will
highlight laws concerning product liability as prescribed by the
European Union ("EU") and the Indian legislature. The reason for
focusing on these two legislations will become conspicuous by the end
of the chapter. Under the current EU framework, the producer's
liability for a defect is based on elements of 'foreseeability'.
However, AI-enabled technologies contradict this 'foreseeability': an
accident in these cars, despite being avoidable, may always occur due
to unforeseeable defects. Therefore, the first

half of the chapter analyses whether the EU's Product Liability Directive
tackles this concern by focusing on 'defect'.

Moving forward, the author carefully examines the product liability
clauses under the Indian Consumer Protection Act, 2019, which are not
entirely dependent upon the concepts of foreseeability, negligence and
defect. Indian legislation alternatively provides for holding the
producer liable for an accident even if he was not negligent, provided
he exercised substantial control over the system that defaulted.
Therefore, Indian law concentrates more on 'compensating for damages'
than on 'finding the defect'. Europe and India thus have differing
outlooks on resolving this issue. The author's aim is to assess whether
the laws in India are ripe enough to handle the product liability
issues attached to driverless cars.

I. Introduction

Dramatically increasing innovation and automation have called for a
modification in legislation, and it has become extremely important to align
the laws with this ever-changing modern industry so as to reduce the
potential risks arising from technologies. Artificial intelligence ("AI") is
one such technological advancement that has heavily altered the manner in
which we interact with our surroundings. While this has made certain human
tasks easier, it has also brought an equal number of challenges on the legal
front. AI's intervention in our lives has modified and expanded the
definition of certain basic terminologies, such as 'vehicle', which may not
just include a traditional vehicle but also an AI-enabled self-driving car,
and a 'product',


which may consist of both tangible property and intangible property such as
software.

The focus of the author's research is on these AI-enabled cars and the kind
of legal implications they attract. There was a time when AI-enabled cars
were a far-fetched dream, or rather an improbable thought; today, they are
seen quite often. What is an AI-enabled car?

An AI-enabled vehicle is a self-driving car capable of functioning
autonomously, without any human intervention, by sensing and learning from
its environment.

There are several kinds of autonomous vehicles. An overview of the different
levels of automated driving is provided below.

(i) "Level 0: Involves no automation and can be controlled manually.
(ii) Level 1: A minute automated feature such as adaptive cruise control to assist the driver.
(iii) Level 2: Partial automation such as automatic acceleration and braking systems. Humans still monitor the driving environment and have control over the vehicle.
(iv) Level 3: Vehicle has ability to detect and monitor environment through sensors and conduct driving. Humans may take their hands off the car, but they must be ready to take control.
(v) Level 4: Vehicle is highly automated at this stage and can perform all the driving tasks solely; however, humans can manually override the automated system also.
(vi) Level 5: The Vehicle is fully automated and can carry out all the driving functions without any human assistance or attention."353
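The six levels quoted above can be condensed into two flags: who monitors the environment, and whether a human fallback is required. The encoding below is a simplified, assumed reading of that list, intended only to make concrete the chapter's point that the liability question hardens once the system, not the human, monitors the road.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AutomationLevel:
    level: int
    human_monitors_environment: bool  # does a human watch the road?
    human_fallback_required: bool     # must a human be ready to take over?

# Simplified encoding of the six levels described above.
LEVELS = {
    0: AutomationLevel(0, True, True),
    1: AutomationLevel(1, True, True),
    2: AutomationLevel(2, True, True),
    3: AutomationLevel(3, False, True),   # system monitors; human is fallback
    4: AutomationLevel(4, False, False),  # human override optional
    5: AutomationLevel(5, False, False),  # no human assistance at all
}

def liability_is_contested(level: int) -> bool:
    """The chapter's observation: liability is straightforward while a
    human monitors the environment, and contested once the system does."""
    return not LEVELS[level].human_monitors_environment

print([lvl for lvl in LEVELS if liability_is_contested(lvl)])  # [3, 4, 5]
```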

Returning to the issue, it is quite evident that the functioning of such cars
depends not just on electrical, mechanical, and physical factors but also on
the quality of the software. Additionally, the legal problems will vary at
each level of automation.

Consider, for instance, a Level 3 automated car, which requires driver
assistance at certain times. Suppose the software informs the driver 10
seconds in advance of such a requirement, but before handing over control,
the software fails and a dangerous accident occurs. Who is responsible in
such a situation? Take another instance where the AI-enabled car is confused
about whether it must speed up or not. Such questions are not rare for this
technology. Ascertaining liability might be extremely easy for, say, a Level
0 or Level 1 automated car. However, the question of liability can be tricky
for other levels of automation.

This immediate shift in the kind of vehicles driven has led to a departure
from the traditional system where only the owner of the vehicle could be
held accountable for an accident unless there was a severe defect in the
manufacturing or design of the vehicle. The parties and stakeholders
involved in this new setting can be the user of the car, the owner of the car,
the person who coded the software, the manufacturer of the car, or any other

353 Rahul Kapoor, 'Autonomous Driving Cars' (The Financial Express, 9 December 2020) <https://www.financialexpress.com/auto/car-news/autonomous-driving-cars-all-six-levels-autonomous-vehicles-explained-level0-level-1-level-2-level3-level-4-level-5-volvo-chevrolet-audi-bmw-mercedes-benz-artificial-intelligent-ai-self-driving-cars/2146290/> accessed 11 January 2023.


party involved in the making of the vehicle. Therefore, the issue of product
liability has expanded in the context of AI-enabled cars and poses some
serious novel concerns. One of the questions that will also be the focus of
this chapter is: can AI-enabled cars be accommodated safely in our legal
system?

This chapter analyses the above-mentioned research question in depth. It
highlights the product liability laws devised by the European Commission and
the Indian legislature, and analyses the gaps in these laws when applied to
autonomous-car scenarios. Further, the author elaborates on the most
contested question with respect to self-driving vehicles: who must be held
liable? The last segment is the concluding statement.

II. Product Liability in the Context of European Laws

The European Union has devised a Product Liability Directive354 to
adjudicate liabilities arising out of defective products. Under this
legislation, a producer is liable for defects in its product. A producer is
defined as "the person who has manufactured the good or whose name is
mentioned on the product as the 'producer'."355 This definition itself is not
in consonance with the type of challenges the AI world brings to us. As
discussed earlier, a number of parties can be involved in the making of
driverless cars. Therefore, who shall be the real 'producer'?

354 Product Liability Directive 85/374/EEC of 2017 [2017].
355 ibid art 3(1).


A producer will be held liable only if they deliver a product that does not
meet the standards of 'safety' expected by a 'reasonable' person. In order to
establish product liability under Article 4 of the Product Liability
Directive, the plaintiff has to prove the defect in the product and the
damage caused by that defect. There must be a sufficient and proximate link
between the defect and the damage caused.356

An AI may make a completely unforeseeable mistake, like mistaking a
white-painted truck for the bright sky. Although a reasonable person would
expect, as a 'basic safety standard', that the car can distinguish between a
'sky' and a 'truck', it is still not possible to ascertain whether
non-compliance with such a basic feature makes the software 'defective'.357
Would these unpredictable mistakes be treated as a 'defect', or will they
fall under an exception and exempt the producer from liability?

As per the exceptions, the producer will be exempted from liability if the
technical know-how at the time the product was put on the market was not
sufficient to discover the defect (the development risks defence), if the
defect pertains to a component of the product that was designed or
manufactured by a third party, or if the defect arose after the product was
put into circulation.358 Let us analyse the impact of the direct application
of these exceptions to defects in AI-enabled cars.

356 ibid art 4.
357 Jeff Hecht, 'Self-driving vehicles: Many challenges remain for autonomous navigation' (Laser Focus World, 14 April 2020) <https://www.laserfocusworld.com/test-measurement/article/14169619/selfdriving-vehicles-many-challenges-remain-for-autonomous-navigation> accessed 13 October 2022.
358 Product Liability Directive (n 354) art 7.


Consider an instance where an accident occurs because the automated car
failed to make a quick decision due to a bug that could not have been
discovered initially, owing to the limited technical knowledge in the market.
Will the 'development risks defence' apply? According to the Product
Liability Directive, the defence may be available to the producer. This is
because, when the software was installed and updated, there was no way the
bug could have been detected. In fact, in the case of software, even after
thousands of corrections and verifications, the probability of an error
remains high.

The second defence given to the producer is such that it can bind one
producer but release another producer from liability. This is because if the
defective component is designed by a different manufacturer, he shall be
responsible for it. Such a mechanism is suitable and required for autonomous
cars, as there are multiple producers who could be responsible for the same
accident. While one producer who assembled the car may be freed from
liability, the other may be liable as he had failed to install the updated
software into the car.

Through guided supervision, AI-enabled cars learn from their environment and
their mistakes. This means the installed software keeps getting better and
more up to date. Now suppose that, due to an update, a mistake is made by the
car in the process of learning. The defect essentially arose after the
product (the car) was put into circulation; but given the ever-changing
nature of the product's software, lawmakers must alter the third exception so
that a defect in software updated after the car is put into circulation is
not exempt from liability. If this alteration is not made, then,


incorrect software updates may be exempt from liability as they arose after
the product was put into circulation.

Since the AI learns on its own from its surroundings and from data that may
be collected from third parties, the producer of the car may also not have
exclusive control over the software updates. The producer may have some
degree of control, but because the AI keeps developing, it may not be
possible for the producer to constantly monitor these updates. In such a
muddy scenario where the traditional role of the producer has decreased and
the role of AI working on its own has increased, accountability for defects
becomes a major issue. The producer may be liable for the defect provided
the legislation is altered, but the question is, who is the real producer?
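The three Article 7 defences discussed above can be reduced to a simple rule check. This is a deliberately literal, assumed encoding of the chapter's reading of the Directive, not legal advice; in particular, it reproduces the worry that a post-sale software update would always satisfy the 'after circulation' defence unless the law is amended.

```python
from dataclasses import dataclass

@dataclass
class DefectFacts:
    """Facts about a defect, simplified from the chapter's discussion
    of the Product Liability Directive's Article 7 defences."""
    undiscoverable_at_release: bool   # development risks defence
    in_third_party_component: bool    # component made by another producer
    arose_after_circulation: bool     # e.g. introduced by a later update

def producer_exempt(facts: DefectFacts) -> bool:
    """Literal reading: any one defence, if made out, exempts the
    producer under the current Directive."""
    return (facts.undiscoverable_at_release
            or facts.in_third_party_component
            or facts.arose_after_circulation)

# The chapter's worry: a buggy post-sale software update would, on this
# literal reading, always trigger the third defence.
buggy_update = DefectFacts(False, False, arose_after_circulation=True)
print(producer_exempt(buggy_update))  # True: hence the call to amend
```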

III. Liability Owing to Negligence

Every person owes a 'duty of care' and will be liable for his negligent
actions if he did not adhere to the standard of behaviour he should have
adhered to. In Donoghue v Stevenson, the court established that one must take
'reasonable care' to avoid any harm or injury to the plaintiff.359 On
applying this principle of negligence to an AI-enabled car, it becomes
difficult to ascertain who exactly owes this duty of care to the plaintiff
and what 'reasonable care' implies. Deciding who owed the duty of care would
depend entirely upon which party made the error in making the product. If the
driver of the car, that is, the human himself, was responsible for the
accident that injured a pedestrian, then the answer would be pretty
straightforward, and it would be easier to decide whether he took reasonable
care.

359 Donoghue v Stevenson [1932] SC (HL) 31.


However, if the AI behaved in an erratic manner, the answer would be very
complex.

Ryan Abbott argues that in such cases the focus must be on the acts of the
AI, as it has evolved and learned to such an extent that it can be treated as
an independent person. We must analyse whether the acts of the AI were safer
than those of a reasonable person, or whether the AI's actions were safer
than those of other AIs in the market.360

The comparison of the AI's actions to human actions is flawed. This is
because an AI makes mistakes for a wider range of reasons than a human. For
example, a human can be held liable for hitting a pedestrian or a truck; for
an AI to make such a mistake, however, the reasons could be numerous: the AI
may not have been trained well, the AI software may have failed due to a
hack, the user may not have updated the AI software, and so on. Therefore,
determining whether the AI acted in a reasonable manner at that point is
extremely difficult. Additionally, the expectation from self-driving cars is
that they would reduce car accidents because of the very limited human
involvement. Comparing the AI's standard with a reasonable man's standard
therefore distorts that purpose entirely.

On the other hand, comparing an AI with other AIs on the market poses a
different set of problems. The market can contain a range of AI-enabled cars,
from low-end to high-end automated cars. Within these ranges, there can be
subcategories and a number of companies that may have built these cars using
totally different software, techniques, and training methods. When there can
be such differences between two AI-enabled cars, it becomes hard to determine
which set of cars can be compared with each other to ascertain whether the AI
was negligent.

360 Ryan Abbott, The Reasonable Robot: Artificial Intelligence and the Law (Cambridge University Press 2020).

This was an analysis of the situation where the AI is treated as an
independent entity. If the AI is not considered an independent party, then
the producers or manufacturers of the product become responsible for the
negligent actions of the car, provided they did not exercise 'reasonable
care'. But again, exercising 'reasonable care' in the manufacturing of the
car is not as easy to interpret and apply as it sounds. The manufacturer
might have taken more than extraordinary care to eliminate any risk; however,
the AI may still end up making an extremely silly mistake. Negligence may
therefore not be established, as the producer or manufacturer fulfilled the
duty of care owed to the consumer and exercised due and reasonable care.
Hence, the traditional concept of negligence itself seems to have become
outdated in the context of this modern technology.

IV. Indian Background

This segment of the chapter examines the transformations that will have to be
made to accommodate autonomous cars in the Indian space. A range of Indian
conglomerates and start-ups have engaged in the commercialization of this
dream of 'driverless' cars. However, with this dream come not just
technological and societal barriers but also legal ones. The Vienna
Convention on Road Traffic, 1968, to which 70 countries excluding India are
signatories, prescribed that cars may be automated but must remain in the
full control of the driver.361

361 Vienna Convention on Road Traffic [1968] Chapter XI on Transport and Communications, Road Traffic.


However, this was later amended, and 'self-driving' or 'autonomous' cars were
allowed. This amendment, however, did not allow for driverless cars.

While the international conventions and the European Commission have clearly
gone through a range of changes to set up a safe atmosphere for autonomous
cars, India has not signalled any such amendment. The Motor Vehicles Act of
1988 makes it compulsory to have a human operator while driving, which makes
it clear that autonomous vehicles are illegal in India. But, as discussed
previously, there are several levels of autonomous vehicles. AI intervention
up to the point where humans monitor the environment and control the car is
legal in India. From Level 3 onwards, by contrast, the car monitors the
environment, and humans can take their eyes and hands off the road. This kind
of independence is not yet given to a car in the Indian environment. However,
we shall also delve into the implementation of Level 3, Level 4, and Level 5
autonomous cars.

In this part of the chapter, we shall analyse the risks and benefits of the
operation of autonomous vehicles in India and then, in detail, examine the
legal status of product liability clauses in India.

V. Impact of Autonomous Cars in India

i. Risks

India has a wide variety of terrain, ranging from the Himalayas to plains and
from plateaus to beaches. Some of these areas do not even have proper roads.
In fact, India is still a developing nation, and smooth roads remain a dream
in many parts. In countries like the UK and the USA, the implementation of
autonomous cars is easier due to uniform roads and terrain.

On top of that, the challenges the roads present are also very different. For
example, rural roads may often be occupied by a herd of cows, which is not
commonly seen in the UK or the USA. Additionally, there might be road
blockages for a variety of reasons, such as Indian baraats, religious
ceremonies, festivals, or even legitimate protests. Autonomous cars will have
to be trained to deal with hurdles that are peculiar to Indian society.

Car parking is also done in a very haphazard and scattered manner, except
for the parking in malls. The dynamic parking arrangements in India would
require AI to be heavily trained so that it can remove cars efficiently from
these areas.

Even following road instructions can be risky for autonomous vehicles. For
example, expressways in India prescribe a driving speed of 60 km/h; however,
most drivers run their vehicles at a higher speed. In that case, if the
autonomous vehicle tries to abide by the speed regulation, an accident may
occur. Therefore, it is important for autonomous cars to be acquainted not
just with Indian laws and terrain but also with Indian society and its
mindset.

ii. Benefits

Autonomous cars may prove to be a boon for children, the elderly, the
handicapped, and mentally disabled persons who are not eligible to drive
cars. Even intoxicated persons may be allowed to travel alone in fully
autonomous cars, as human control there is negligible.


The advent of autonomous cars will also remove subjective and irrational
decision-making on the roads. For example, many people rush to overtake and
extract their cars from traffic by breaking lane discipline and road laws.
Such unpredictability will be erased. In fact, autonomous cars will be
law-abiding entities that stop at signals, listen to the traffic police, and
follow all other minute instructions. Unnecessary horns that create noise
pollution would also be reduced. On top of that, autonomous cars may detect
high traffic and shut down, thereby reducing air pollution as well. The
ability of AI-enabled cars to supervise themselves and take correct decisions
can help the authorities.

VI. Consumer Protection Act, 2019

The introduction of autonomous cars will involve the transformation of many
pieces of legislation in India. The focus of this chapter is the 'product
liability clauses' given under the Consumer Protection Act of 2019, which we
will analyse in detail.

As per the Act, the central government appoints a Consumer Protection
Authority, which looks into matters of consumer rights violations and takes
action against the producer or seller.362 In order to determine liability,
the authority may seize evidence as per the Criminal Procedure Code of
India.363 However, this evidence may differ from normal evidence in the case
of autonomous cars: it may involve the AI-enabled system or the system
containing the software. Such investigations can be similar to air crash

362 Consumer Protection Act 2019, s 10.
363 ibid s 22(2).


investigations, where the team looks at complex and highly technical evidence
to come to a conclusion. Therefore, the primary question is whether the team
or the authority would be equipped enough to identify the problem in the
software or the issue that led to the accident.

As discussed previously in the context of European legislation, a 'defect' in
software may not really be a 'defect'. It may not be easily identifiable, and
therefore a similar question arises as to how the team will determine whether
the defect was actually a latent one. Guidelines will have to be framed to
examine and investigate cases relating to autonomous car accidents, which are
not akin to normal car accidents.

Furthermore, as per the Act, a manufacturer can be held liable only if there
is "a manufacturing defect, a design defect, a deviation from manufacturing
specifications, non-compliance with the express warranty, or inadequate
warning signals."364 All these requirements may be inapplicable in the
context of autonomous cars, as all the features of the car are so intricately
connected that it may become impossible to determine whether a failure is a
manufacturing defect, a car design defect, or a software design defect. Even
after all warning signals, information, specifications, and safety guidelines
have been provided, the software system may fail. The functioning of these
cars is highly dependent on the software installed within them, and nobody
can be held responsible for such failures, as they are sudden, unpredictable,
and out of human control.

364 ibid s 84.


Therefore, a way out for the victim customer is to rely on Section 84(2) of
the Act, which says that the manufacturer will be liable even if he was not
negligent in making the express warranty.365 The benefit of this provision is
that it removes the need for an 'examination of defect' and the issue of the
'foreseeability or non-foreseeability' of the defect. However, the customer
can rely on this only if the producer warranted the customer a certain level
of safety which was later breached. Suppose the warranty indicates that the
car can distinguish between basic elements of nature like the sky, a tree and
a truck, but the car actually fails to do so; a claim may then be successful
against the producer.

Furthermore, as per Section 85 of the Act, a service provider may also be
held liable if the service was faulty, imperfect, or deficient in quality,
nature or performance, or if he was negligent, committed fraud or an
omission, did not provide instructions, or performed the service in such a
manner that the product did not comply with the express warranty.

This section can be applied to the continuous software updates required in
autonomous cars. The person providing software updates is rendering a service
to the customer, and if his service is faulty, inadequate, or performed
incorrectly, then the service provider may be held liable instead of the
manufacturer. Quite visibly, a whole host of parties is involved in the
making and operation of autonomous cars: the driver, the AI-enabled software,
the manufacturer, the person who trained the system, the service provider,
the testing authorities, and the person who constantly monitors or observes
the actions of the AI.

365 ibid s 84(2).


Indian legislation is quite proactive in the sense that, under Section 86, it
imposes a penalty on the seller who exercises 'substantial control' over the
designing, testing, manufacturing, packaging, or labelling of the product.366
Understanding who had 'substantial control' when the accident occurred may
ease the question of liability. Since it is an AI-enabled system, there will
always be a supervising authority constantly monitoring the AI's learning and
development. The person who was in substantial control of the system or its
functioning when the offence took place may also be held liable. For
instance, if the part of the software that defaulted had recently been
updated by the service providers but the manufacturers had very little
knowledge of the updated version, the service provider could be held liable.

Fortunately, the Act does not place heavy reliance on the 'foreseeability'
aspect, but it does say that a seller may be responsible if he "did not
exercise 'reasonable care' in assembling, inspecting, and maintaining the
product or failed to give enough warnings."367 This 'reasonable care',
however, leaves a grey area in the law. It is almost impossible to decide
whether the manufacturer or the seller exercised reasonable care in making or
training the AI. It would involve a deep study of various aspects of computer
science, which may not even lead us to a satisfying conclusion. However, if
it is clearly visible that something substantially negligent was done, the
Act clearly holds the person in charge of the product responsible.

There are also exceptions to product liability, which complicate the question
even more. Under Section 87, "the producer shall not be liable if, at the time

366
ibid s 86.
367
ibid s 86(e).

148
Acing the AI: Artificial Intelligence and its Legal Implications

of the offence, the product was altered or misused,” which may be relevant
in the context of hacking (third party intervention). Other exceptions say that
if sufficient warnings were present, the producer will not be liable. 368
Application of this principle might be tricky as the producer may provide
adequate warnings for the usage of the car, but it can still malfunction.

The last and most thought-provoking exception available under the Act is
that if the danger is obvious, then the product manufacturer shall not be
liable for the failure of the product.369 It is similar to the defence of volenti
non fit injuria. If such an exception is applied to autonomous cars, it may
create a situation where producers escape liability. They may claim that the
customer is aware that, even though the software mostly acts rationally, there
is still a small chance that it may go erratic; if the customer drives the car
with this knowledge, he has consented to the anticipated injury, and
therefore, the manufacturer should not be liable after such consent.

It is amply clear that, through the application of sections 84(2), 85, and 86,
parties can be brought under the purview of liability. However, certain
provisions also create confusion in the law, such as the exceptions
highlighted above, which will have to be modified with regard to
autonomous cars.

VI. Resolving ‘who’ must be Liable

After circling around all the issues, we come to a singular problem, namely,
on whom does the concerned Act impose liability? It is clear that, in one
manner or another, a producer, seller, or manufacturer may be held liable.

368 ibid s 87.
369 ibid.


However, the real concern is how liability can be ascertained easily. Even
after trying their best to hold someone responsible, laws may not be very
successful in curbing accidents, so there is also a need for all the involved
parties to be careful and clear about their roles.

In the case of a Level 0 automated car, there are only two parties who could
be held responsible: the car's company and the car's owner. However, in the
case of a self-driving car, the number of parties that can get involved will
depend on the level of automation. For instance, Level 3 is a substantial jump
from Level 2, as the self-driving car is capable of monitoring its environment
and making decisions. Level 4 automation, however, requires no human
intervention for functioning.370 In these two instances, multiple parties can be
held responsible: the car manufacturer company, the owner of the car, the AI
itself, or the entity that trained the AI.

Clearly, the whole system consists of so many parties and contributors that it
becomes unfair to make one party responsible in a scenario where liabilities
cannot be distinguished or separated. Therefore, where there is complexity in
attributing the accident or fault to one person, a fair solution could be
apportioning the risks.371 By opting for this, one party is not under a full-
fledged obligation to pay damages when the defect occurred due to the
shared contributions of multiple stakeholders in the product.
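The apportionment idea above is, at bottom, simple proportional arithmetic. Purely as an illustrative sketch (the party names, fault weights, and award figure below are hypothetical; in practice the shares would be determined by a court or by contract), the split might be computed as:

```python
def apportion_damages(total_damages: float, fault_shares: dict) -> dict:
    """Split a damages award across stakeholders in proportion to their
    fault shares. Weights need not sum to 1; they are normalised here."""
    total_weight = sum(fault_shares.values())
    if total_weight <= 0:
        raise ValueError("fault shares must be positive")
    return {
        party: round(total_damages * share / total_weight, 2)
        for party, share in fault_shares.items()
    }

# Hypothetical accident with three contributors to the defect:
award = apportion_damages(
    1_000_000,
    {"manufacturer": 2, "software vendor": 2, "service provider": 1},
)
print(award)
# {'manufacturer': 400000.0, 'software vendor': 400000.0, 'service provider': 200000.0}
```

On such a scheme, no single party bears the full award when the defect arose from shared contributions; each pays in proportion to its determined share.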

370 Kapoor (n 353).
371 Bernhard A. Koch, 'Product liability for autonomous vehicles' (Polska Izba Ubezpieczeń, 1 April 2019) <https://piu.org.pl/wp-content/uploads/2020/03/WU_2019-04_01_BAK.pdf> accessed 3 January 2022.

Another solution could be that the parties involved in making autonomous
cars enter into a predetermined contract and clearly delineate their roles and
functions. This is possible as AI is a collection of small modules where each
team could be held liable for the parts they have created. Within the contract,
each team may also lay down the point of responsibility or liability for a
particular defect. In the event of an accident due to a potential defect, the
contract can be used as a guide to determine which party is responsible.
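Since the chapter treats the AI as a collection of small modules with a contractually designated team behind each, such a contract could be operationalised as a simple lookup from the defective module to the responsible party. This is only a sketch; every module and party name below is invented for illustration:

```python
# Hypothetical liability matrix, as a predetermined contract might delineate it.
LIABILITY_MATRIX = {
    "perception": "Sensor Systems Ltd",
    "path_planning": "Navigation Software Inc",
    "braking_control": "Vehicle Manufacturer",
    "ota_updates": "Service Provider Co",
}

def responsible_party(defective_module: str) -> str:
    """Return the contractually responsible party for a defective module,
    falling back to the system integrator for untraceable defects."""
    return LIABILITY_MATRIX.get(defective_module, "System Integrator (default)")

print(responsible_party("ota_updates"))    # Service Provider Co
print(responsible_party("unknown_fault"))  # System Integrator (default)
```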

Through the above-discussed solutions, at least one party would be held
responsible, and the victim customer would not suffer. However, even if he
could not get compensation through the product liability laws, he could rely
on third-party insurance. The court can play a role in not holding the
customer responsible for the erratic behaviour of AI, which will
automatically make the customer eligible for the insurance.

VII. Amendments to Parallel Legislations

It is extremely important to strengthen road laws, the Motor Vehicles Act,
insurance laws, testing laws, and many other related Acts in order to prevent
road accidents, which will reduce the need for product liability clauses.
Despite product liability laws being equally necessary, the parallel laws
stated above can be more helpful to the nation: they can prevent many
accidents, whereas product liability clauses come into force only after an
accident has occurred. For example, if testing standards are tightened,
accidents would naturally decrease. Additionally, if, under the
Motor Vehicles Act, autonomous cars are given a different colour license
plate, it will be easier for traffic police or authorities to provide assistance to
these cars and would also help other passengers on the road to be alert if they


are around an autonomous vehicle. Guidelines may also be issued to the
public, stating that they must act rationally around an autonomous vehicle.

Third-party insurance can be made compulsory when buying an autonomous
car, as a third party must not suffer due to the fault of an autonomous
vehicle. Additionally, the laws for procuring a driving license would also
have to be changed in India. Since the functioning of these cars is entirely
different, the driving tests would also change. The sellers will have to
acquaint various driving schools with the change in driving mechanism.
Therefore, the responsibilities of the seller would greatly increase in terms of
giving all the necessary information to the driving schools about the
functioning of the vehicle. It will be interesting to see the law undergo this
huge shift due to the presence of autonomous vehicles.

VIII. Conclusion

Quite clearly, many parallel laws will have to be amended for the proper
adoption of autonomous vehicles in India. These parallel laws would help
the government be proactive in encountering all the issues beforehand. On
the other hand, the product liability clauses might require only minor
amendments for application to this new sector, but efforts must be made to
ensure that these clauses are utilized in only a handful of circumstances.
More efforts must be dedicated to strengthening laws that act as
precautionary measures, so that the dilemma of who must be held liable is
lessened.

In a situation where the consumer and society have suffered an
immeasurable loss due to the actions of the vehicle, the court may invoke
product liability clauses. However, in an accident where not much harm is
suffered, the guidelines to the seller could be: (i) to correct the flaw in its
software immediately; (ii) on the occurrence of a similar accident involving
any of its car models, the car company will have to pay a penalty as decided
by the court, which may also result in the shutting down of the company; and
(iii) to compensate the third party in case the insurance amount is deficient.
Issuing such guidelines would provide the customer with clarity and
certainty about the remedies available to him and would also give him a
chance to ask the seller to correct the flaw in his car.

In order to formulate apt laws, a jurisdiction must first ascertain whether it
wants to protect its customers more or its producers more. The Indian
population consists of many underprivileged and middle-class families, for
whom legal recourse can be challenging; this may even discourage them
from buying autonomous cars, which would in turn affect producers'
profits. Therefore, it would be a better option to first secure the customers in
an Indian setup and only then introduce autonomous cars.

The Government and the public would also have to come to terms with the
fact that, where earlier humans made errors while driving, in the future it
could be the software that errs. Accepting this reality could be a huge leap
towards the prevalence of self-driving cars in the country. Needless to say, the laws
in India and the EU are not fully ripe to tackle the issues of product liability
arising out of autonomous vehicles. In order to frame better strategies,
governments will have to unlearn the old conventional laws and formulate
new ones for a better and safer penetration of autonomous vehicles.


LEGAL FORECASTING AND AI BASED JUDGES: A JURISPRUDENTIAL ANALYSIS USING COMPETITION LAW CASE STUDIES

Nandini Goel
(Student at National Law University, Delhi)

Abstract
AI-based judges are the "next big thing" in the sphere of the judiciary.
China has reportedly rolled out Internet courts and has been using AI-based
Robot Judges to assist in adjudication for quite some time now, which has
saved the Chinese legal system billions of dollars and reduced its workload
by almost one-third.
There have been reports of Estonia working on a project using AI-
based judges to decide small claims. India, though it has not developed
any AI or robot judges, has launched initiatives to introduce AI into
the functioning of the judiciary through the recently launched
SUPACE Project to increase efficiency and fast-track the judicial
process. This chapter will analyze the fallacies of such a system and
demonstrate why AI-based judging can never fully replace “human
judges” using theories of jurisprudence, such as the ones dealing with
judicial discretion, morality and positivism, legal realism, and critical
legal studies. This will be done with the help of examples from the
domain of competition law.

The second important question in the chapter deals with "Legal
Forecasting". Attempts to predict judicial behaviour have been made in the
past by various theorists in the domain of jurisprudence.
While some legal realists have reached the conclusion that it is almost
impossible to predict how a judge will decide a particular case, some
like Moore have devised statistical theories like the “Institutional
Approach” to predict the judicial behaviour of a particular judge at a
particular time and place in an uncontrolled environment. This chapter
will try to address the concern of whether a statistical or mathematical
approach can be adopted to predict judicial behaviour, like the one
propounded by Moore, or whether the indeterminacy in judicial
behaviour cannot be conquered through mathematical tools.

I. Introduction

Artificial Intelligence (‘AI’) has pervaded almost all areas of human life
today. Judiciary and the legal field are no exceptions. China has reportedly
rolled out internet courts and has been using AI-based Robot Judges to assist
in adjudication for quite some time now, which has saved the Chinese legal
system billions of dollars and reduced its workload by nearly one-third.372
There were reports of Estonia working on a project using AI-based judges to
decide small claims.373 However, the authenticity of these reports has been
disputed.374 India has not developed any AI or robot judges. However, it has
launched initiatives to introduce AI in the
functioning of the judiciary through the recently launched SUPACE Project

372 Ben Wodecki, 'AI helps judges decide court cases in China' (AI Business, 18 July 2022) <https://aibusiness.com/document.asp?doc_id=779080#:~:text=China%20claims%20to%20have%20used,can%20alter%20errors%20in%20verdicts.> accessed 5 April 2023.
373 'Estonia creating AI Powered Judge' (Daily Mail, 26 March 2019) <https://www.dailymail.co.uk/sciencetech/article-6851525/Estonia-creating-AI-powered-JUDGE.html> accessed 5 April 2023.
374 'Estonia does not develop AI Judge' (Ministry of Justice, Republic of Estonia, 16 February 2022) <https://www.just.ee/en/news/estonia-does-not-develop-ai-judge> accessed 5 April 2023.

to increase efficiency and fast-track the judicial process.375 AI-based judges
seem to be the next big thing in the sphere of judiciary. This chapter will try
to analyse the fallacies of such a system and demonstrate why AI-based
judges can never fully replace ‘human judges’. The premise has been
substantiated, in this chapter, via theories of jurisprudence such as the ones
dealing with judicial discretion, morality and positivism, legal realism, and
critical legal studies. This will be done with the help of examples from the
domain of Competition law.

The second important question that the chapter deals with is that of ‘Legal
Forecasting’. Attempts to predict judicial behaviour have been made in the
past by various theorists in the domain of jurisprudence. While some legal
realists reached the conclusion that it is almost impossible to predict how a
judge will decide a particular case, some like Moore devised statistical
theories like the ‘Institutional Approach’ to ‘predict the judicial behaviour of
a particular judge at a particular time and place in an uncontrolled
environment’.376 If a business group could predict in advance what direction
a case would take, they could mitigate the impending losses or avoid
venturing into such an area, from the beginning. Even the investors can make
more informed choices with their investments. They could decide beforehand
whether they wish to invest their time and money in a certain venture, or buy
or sell their shares. This is keeping aside extraneous factors like colluding
with the judges or obtaining information from the departments through the
back end.

375 'CJI launches top court's AI driven research portal' (Indian Express, 7 April 2022) <https://indianexpress.com/article/india/cji-launches-top-courts-ai-driven-research-portal-7261821> accessed 5 April 2023.
376 U. Moore and T.S. Hope, 'An Institutional Approach to the Law of Commercial Banking' (1929) 38(6) Yale Law Journal 703–719 <https://doi.org/10.2307/790071> accessed 5 April 2023.

Let us take the example of the recent case of Future Retail v Amazon
(CCI),377 which caused a severe blow to the Amazon and Future Retail deal.
If the legal team could have foreseen the possibility of such a decision, then
crores of rupees, and much time, could have been saved. Even from the
perspective of retail investors, share prices fluctuate massively on such
decisions, and if predictions can be made in advance, it would help them
make informed investments. Additionally, if someone is funding litigation
through Third-Party Litigation Funding ('TPLF'), or champerty and
maintenance, they could also benefit from the same. This could be subject to
public policy concerns,378 but it is nevertheless a valid use of such a
prediction theory, as one could predict the outcome of a case beforehand
and make an informed bet or litigation-funding decision.

This chapter will aim to address the concern of whether a statistical or
mathematical approach can be adopted to predict judicial behaviour, like the
one propounded by Moore, or whether the indeterminacy in judicial
behaviour cannot be conquered through mathematical tools.

II. AI-based Judging and Judicial Discretion

377 'Amazon v Future Retail' (Competition Commission of India, 2021) <https://www.cci.gov.in/combination/order/details/order/1148/1> accessed 5 April 2023.
378 'A Strategic Look at Champerty and Third-Party Litigation Financing' (JDSupra, 23 January 2019) <https://www.jdsupra.com/legalnews/a-strategic-look-at-champerty-and-third-79997/> accessed 5 April 2023; Thomas J. Salerno, 'Third-Party Litigation Funding (TPLF) and Ethical Issues In Bankruptcy' (Daily DAC, 26 September 2022) <https://www.dailydac.com/third-party-litigation-funding/> accessed 5 April 2023.

Judicial discretion is an integral element of judicial decision making. It is
one of the reasons for the evolution of a diverse variety of thought in various
fields of law; in its absence, there would be stagnancy in legal
thinking. Hart, in his theory on the 'Open Texture of Laws',379 and Dworkin, in
his theory on ‘Hard Cases’, 380 have spoken at length on the role of discretion
in judicial decision making. Legal realists have identified it as the major
contributing factor for ‘indeterminacy in law.’

In this section, an analysis will be presented as to how AI-based judicial
decision making would prove detrimental to, or impede, judicial discretion.
The following are some areas which have been identified as controversial:

(a) Rule and its exception – Competition Law and Intellectual Property
Rights – Section 3(5) of the Competition Act

A controversial area of interpretation in judicial decision making is where
there are conflicting rules and principles, or there is a rule and a certain
exception to that rule. For instance, consider competition law scenarios. It is
well established that the general basis of competition laws is to prevent
monopolization and promote ‘distributive justice’ so as to further objects as
enshrined in Articles 38, 39(b) and 39(c) of the Indian Constitution.
However, there are certain exceptions to this general rule. The most
prominent of them is captured in Section 3(5) of the Competition Act, which
deals with Intellectual Property (‘IP’) Rights.

379 HLA Hart, The Concept of Law, Formalism and Rule Scepticism (see Section I, Open Texture of Laws).
380 Ronald Dworkin, Taking Rights Seriously (Duckworth 1978) ch 4.


Section 3(5) purports to check the imposition of unreasonable conditions in
exercise of IP-protected rights. It carves out an exception by permitting an
Intellectual Property right holder to “impose reasonable conditions necessary
for protecting any of his rights”.381

While this exception looks simple at the outset, the interpretation of what
constitutes 'unreasonable' is highly controversial, and there are varying
considerations that need to be balanced: on the one hand, the individual
rights of IP holders; on the other, the collective interest of society in
maintaining workable competition and not letting anyone gain a position of
dominance which could prove detrimental to the intended object of the
legislation. The trade-off needs to be justified based on dynamic efficiency,
which promotes innovation in the economy, over static efficiency, which
deals with promoting competition.382

Interestingly, Dworkin, in his work, has also pointed out a controversial rule
and principle relating to the competition law domain in the United States
(‘US’) Anti-trust legislation. 383 He pointed out in the Sherman Act, Section 1
provided for “every contract in restraint of trade to be void”.384 The Supreme
Court of the US had held that only ‘unreasonable’ restraint of trade is void.
Therefore, this principle of unreasonableness was the sphere where courts
were to exercise discretion when deciding whether a contract in restraint of
trade is void.

381 Competition Act 2002, s 3(5).
382 V Korah, 'Competition Law and IPRs' in V Dhall (ed), Competition Law Today: Concepts, Issues and Law in Practice (Oxford University Press 2007) 131.
383 Dworkin (n 380) ch 3.
384 Sherman Act (USA) s 1.

It would be pertinent to note how AI-based judges would deal with such
situations requiring judicial discretion in cases involving conflicting rules
and principles. Some theorists have suggested classifying cases based on
their facts and drawing similarities based on the extent of deviation.
However, how far the balancing of competing interests would work out as
new facts present themselves would be worth observing.

(b) Divided bench or jury on a particular case

It is often observed, in judgments involving a bench of judges (at
appellate stages of adjudication) or even in jurisdictions having juries, that
the bench or jury is divided on a particular question involved in the case.
Judges give dissenting opinions and the contradictory views in the juries are
deliberated upon. This is the beauty of human judges – it allows for the
subjectivity of opinions and deliberations of various factors, which becomes
especially important at the appellate stages of adjudication. Such a conflict
of opinion on the very same issue between different judges on the bench
also shows that there can be diverse opinions on a question, and as such
there is no 'right' or 'wrong', especially in complex questions of larger
public interest dealing with the competing interests of various stakeholders
of society. AI-based judges will adopt a 'singular line of thinking', which
will prove detrimental to the subjectivity in law.

(c) Appeals

It is often observed that a lower court's decision is overturned at the
appellate stage of review. This is one example of the subjective nature of
the exercise of judicial discretion. Now, there can be varying degrees of
subjectivity.385

After analysing the appellate cases in the Indian competition law domain
over the ten months between January and October 2022 as a part of this
research, it was observed that none of the cases were substantially
overturned on appeal at the NCLAT, the appellate body for competition law
cases. Therefore, the competition law regime in India resembles the
'sociological wing of Legal Realism' more. A reason for this is that the data
set for competition law appeals in India is very small, only 27 appeals in 10
months. In the past, however, decisions have been overturned at appellate
stages in competition law in India as well.386

However, the important point to note here in relation to AI-based
adjudication is that, if an AI-based judge is allowed to adjudicate a certain
matter at a lower stage of adjudication and an AI-powered judge is also
employed to decide the matter on appeal, then there would be no overturning
of the lower court's decision, as the same program or algorithm is employed
in both judges. Therefore, an AI-based judge can never be used at appellate
stages, as it would render the whole process of appeals futile, since there
would be no difference in the judgment delivered at either stage.

It may be contended that two different AI-based programs or algorithms can
be employed at different stages of adjudication to sustain an AI-based
appellate system. However, there would then be no justification for
employing a superior algorithm for decision making at the appellate stage
than at the lower stages. After all, if the whole purpose of an AI-based judge
is to bring efficiency and fast-track the court process, there can be no
possible justification for employing a superior algorithm at appellate stages
when the same can be employed at lower stages as well.

385 The 'Sociological Wing of Realism' (represented by writers like Oliphant, Moore, Llewellyn and Felix Cohen) propounded that "judicial decisions fell into predictable patterns (though not, of course, the patterns one would predict just by looking at the existing rules of law)." It can be inferred from this that "various social forces must operate upon judges to force them to respond to facts in similar, and predictable ways." The 'Idiosyncrasy wing of Realism', mainly propounded by Frank and Judge Hutcheson, asserted that "personality of a judge is the pivotal factor in law administration." [Extracted from Brian Leiter's "American Legal Realism"]
386 India Trade Promotion Organisation v CCI (Appeal No. 36 of 2014).

(d) Morality and following the letter of law

The conflict between morality and strict interpretation, or the positivist
understanding of law, is perhaps as old as the subject of jurisprudence.
AI-based judges would have to deal with this conflict. In this sense, it would
also be essential to determine what would constitute the domain of 'law' for
AI-based judges – whether it would include legislations, judicial precedents
and customs, or whether it would also incorporate moral principles and legal
history. This ethical dilemma of balancing different interests would be an
important concern in implementing an AI-powered judicial system.

(e) Theory of Mistake

Dworkin terms the precedents that are not justifiable by the principles that
provide the best interpretation of past practice 'mistakes', and says that such
precedents are to be set aside. Another major question AI-based
adjudication will need to address is how such 'mistakes' will be set aside.
Again, human intervention from the legislative angle will be required to
qualify such precedents as mistakes.

(f) How different approaches will be dealt with – Form-Based Approach versus Effects-Based Approach

A form-based approach relates to a formalist approach to competition law
where there is a "strict interpretation of the legal provisions without
considering the effects resulting from the conduct on competition and
consumers."387

In contrast, the effects-based approach to competition law calls for a
case-by-case analysis that aims at "weighing the anti-competitive effects
against pro-competitive effects of the conduct to arrive at a conclusion on its
legality."388 This approach takes into account 'an overall economic analysis'
before reaching a final decision.

As opposed to the form-based approach, this approach uses judicial
discretion, taking into account the distinct scenario surrounding the facts of
the case and basing the decision on policy considerations as well.389 The US
anti-trust law makes more use of the effects-based approach, while European
Union law, though based more on the form-based approach, is slowly
transitioning towards an effects-based approach.

One of the manifestations of these two approaches can be seen in Section 4
of the Indian Competition Act ('the Act'), which contains the abuse of
dominance provision. A major concern with the use of the form-based
approach is that it requires the presence of dominance of an enterprise as a
prerequisite to the application of the provision. The effects-based approach
would, instead, imply that due importance is given to the consequences of
the conduct of the enterprise.390

387 MM Sharma, 'Hear the Monopolyphony' (The Economic Times, 4 March 2020) <https://economictimes.indiatimes.com/blogs/et-commentary/hear-the-monopolyphony/> accessed 5 April 2023.
388 ibid.
389 Some of the factors which this approach takes into consideration include efficacy theory, consumer welfare theory and essential facilities theory (list not exhaustive).
390 Sharma (n 387).


Therefore, as per the effects-based approach, even a non-dominant enterprise
can be held liable under this section, if the conduct results in an
"anti-competitive practice".391

Another manifestation of this form-based and effects-based approach can be
seen in the 'per se rule' and the 'rule of reason'. They are related to 'tying-in'
under Section 3(4) of the Act and Section 3 of the US Clayton Act. While
the 'per se rule' again adopts a more formalistic interpretation, the 'rule of
reason' adopts an interpretation more along the lines of the effects-based
approach.

For an AI-based judge, it would be easier to adopt the form-based approach,
as it does not require much judicial discretion as opposed to the
effects-based approach. Nevertheless, in doing so, it would compromise on
the larger interests of justice. This can also be seen as one of the shortfalls
of AI-based decision-making.

III. Critical Legal Studies Analysis: How a System with AI-based Judges could Perpetuate Bias and Create Hierarchies hidden beneath the Neutral Exterior?

It is established that the technology for AI-powered judges will be supplied
by someone – say, a private tech giant – or the government may also develop
its own AI-based systems.

391 ibid.


In case the government develops its indigenous systems without taking any
help from private actors, that would essentially give the legislature and the
executive an upper hand (or uncontrolled powers) over the judiciary. This
would be violative of the doctrine of separation of powers, which is
considered to be a part of the basic structure of the Indian Constitution and
an essential component of jurisprudence worldwide. This is because an
AI-based judging system would lack the subjectivity and the discretion
which are exercised by human judges and, as such, would basically follow a
similar line of thinking as held by previous judgments. The new changes
brought into the law would come from the legislature, when it enacts a new
law in Parliament and the changes are fed into the AI-based judges. It would
be the will of the legislature which would dominate all the organs of the
government, in case a fully AI-based judging system is allowed to operate in
the future. Needless to say, the political parties in power would also get an
opportunity to abuse their position of dominance, which could manifest in
arbitrary actions by the government in power.

Another risk ensues when private actors power the technology for this
AI-based judging system. Consider a practical real-life example to
understand this. Recently, there was a decision by the Competition
Commission of India ('CCI') wherein a penalty of Rs 1,338 crore was
imposed on Google for indulging in anti-competitive practices.392 In a
second order, which came within a week of the first, another penalty of
Rs 936 crore ($113.04 mn) was imposed on the company for abusing its
dominant position with regard to its payment app and in-app payment
system.393

In case the technology for AI judges is powered by a company like Google
(or, for that matter, any other company), there is a very low possibility of
such a judgment being passed against that company and, in time, this would
lead to creating a monopoly for such companies, as judges operating on bias
fed by the programmers will not pass judgments contrary to their interests.
Clearly, this would lead to a violation of the Principles of Natural Justice
(PNJ).

392 Mr. Umar Javeed and Others v Google LLC and Anr (Case No. 39 of 2018, CCI); 'CCI imposes Rs 1,338-crore fine on Google for "anti-competitive practices"' (The Economic Times, 21 October 2022) <https://economictimes.indiatimes.com/tech/technology/cci-imposes-rs-1337-crore-fine-on-google-for-anti-competitive-practices/articleshow/94993416.cms> accessed 5 April 2023.

Apart from this, several other questions of balancing competing interests, as
discussed earlier, also exist. One such example would be the balancing of
the individual interest of IPR holders with the collective interest of
distributive justice in society. Another minor, non-legal yet important point
for a CLS analysis could be the loss of jobs for human beings due to the
introduction of an AI-based judiciary system.

IV. Moore’s Institutional Approach to Law and Legal Forecasting

Underhill Moore’s ‘Institutional Approach to Law’ was an attempt at predicting judicial behaviour. The theory proposed applying methods of comparison and measurement to the facts of a decided case, so that the correlation between the decision and the measured degree of deviation between the facts

393
XYZ (Confidential) v Alphabet Inc. and Others (14 of 2021); Match Group, Inc. v
Alphabet Inc. and Others (35 of 2021); Alliance of Digital India Foundation v Alphabet Inc.
and Others (Competition Commission of India); ‘Google fined Rs 936 crore in second
antitrust penalty this month’ (The Economic Times, 25 October 2022)
<https://economictimes.indiatimes.com/tech/technology/google-fined-113-million-in-
second-antitrust-penalty-this-month/articleshow/95080594.cms> accessed 05 April 2023.

Acing the AI: Artificial Intelligence and its Legal Implications

and the institution can be stated. After the method has been applied to large numbers of cases in many fields, it may be possible to state ‘laws’ for some fields in terms of that correlation.394

The author finds that this theory could be useful in creating AI-based systems for judicial decision-making at ‘lower levels of adjudication’. It could be coupled with coefficients of correlation or similarity coefficients like the Jaccard distance. The Jaccard distance and other such coefficients are statistical tools used to locate the shared and distinct members of different data sets and measure the degree of similarity or dissimilarity between them.
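For illustration, the Jaccard measures can be computed over sets of fact descriptors drawn from two cases. The following is a minimal sketch in Python; the fact descriptors are hypothetical and chosen only to echo the competition law examples above:

```python
# Illustrative sketch: Jaccard similarity/distance between the fact-sets of
# two cases, each represented as a set of (hypothetical) fact descriptors.

def jaccard_similarity(a: set, b: set) -> float:
    """|A intersect B| / |A union B|: 1.0 for identical sets, 0.0 for disjoint."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def jaccard_distance(a: set, b: set) -> float:
    """Degree of dissimilarity: 1 minus the Jaccard similarity."""
    return 1.0 - jaccard_similarity(a, b)

# Hypothetical fact descriptors for a decided case and a new dispute.
decided_case = {"dominant position", "tying", "app store", "search services"}
new_dispute = {"dominant position", "tying", "in-app payments"}

print(jaccard_similarity(decided_case, new_dispute))  # 2 shared / 5 total = 0.4
print(jaccard_distance(decided_case, new_dispute))    # 0.6
```

The closer the distance is to zero, the more the new dispute resembles the decided case, which is the comparison Moore’s method contemplates.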

As far as legal forecasting is concerned, the author believes that while mathematical and statistical methods can be developed, the above discussion on judicial discretion and the legal realists’ theory of the ‘indeterminacy of law’ suggests that an analysis by an experienced and trained human lawyer would be much more reliable than one based on these theorems. Various factors, such as the inclination of judges, the political environment, existing notions of morality and back-door pressures, among other things, go into deciding a case.

As a disclaimer, this analysis of legal forecasting applies to human judging. Obviously, if AI-based judging comes into play, then the question of legal forecasting becomes ‘redundant’, as a singular right answer will be delivered by the algorithm and subjectivity will be erased.

394
Moore and Hope (n 376).

V. Conclusion

While it is a given that introducing Artificial Intelligence into the domain of the judiciary would increase efficiency, fast-track court proceedings, reduce workload and save a good deal of resources, it is important to analyse what is at stake.

Justice SA Bobde, while launching the AI system SUPACE for the Indian judiciary, recalled the first game in which the AI-based ‘Deep Blue’ system beat Kasparov in 1997,395 and how nobody at that time believed it. This chapter nowhere questions the ability of AI to develop to a stage where it becomes capable of judicial decision-making in the future. What this chapter tries to highlight is the aspects that would be compromised by such a system, and why AI could (or should) never fully replace human judges.

Some important observations that were made in this chapter are as follows:

a) Judicial discretion would be at peril in case AI-based decision-making is adopted. This would lead to a ‘singular line of thinking’ in judicial decisions and to stagnancy in legal thinking. AI in judicial decision-making means the ‘death of legal realism’.
b) AI-based decision-making can never be adopted at ‘appellate stages
of review’. This is because if the same algorithm is adopted at both

395
Snehanshu Shekhar, ‘Supreme Court embraces Artificial Intelligence, CJI Bobde says
won’t let AI spill over to decision-making’ (India Today, 7 April 2021)
<https://www.indiatoday.in/india/story/supreme-court-india-sc-ai-artificial-intellegence-
portal-supace-launch-1788098-2021-04-07> accessed 05 April 2023; ‘Behind SUPACE: the
Artificial Intelligence Portal of Supreme Court of India’ (AI Magazine, 29 May 2021)
<https://analyticsindiamag.com/behind-supace-the-ai-portal-of-the-supreme-court-of-india/>
accessed 05 April 2023.


the stages, it would give the same output. There can be no justification for adopting a superior algorithm at the stage of appellate
review when the same can be adopted at the lower stage to save time.
Therefore, human judges at appellate stages will always be required.
c) The competing interests of various stakeholders and the conflict between morality and the black letter of the law are other contentious areas for AI-based adjudication.
d) A fully AI-based judiciary would give the legislature an upper hand
over the judiciary and lead to the violation of the doctrine of
‘separation of powers’. Also, if private players are allowed to operate
the technology for these systems, there is a chance of them gaining
undue advantage. In any case, the programmer’s bias is an inherent
disadvantage of these AI Based systems.
e) This research mainly confined itself to case studies from competition
law domain. It is important to note that in areas where questions
require greater deliberation such as constitutional law and areas
where there is a greater chance of injustice being meted out like
criminal law, the use of AI Based judges is even more problematic.
f) The author also suggests the use of Moore’s Institutional Approach theory and coefficients of deviation and correlation, such as the Jaccard distance, for developing an AI-based decision-making system at the lower levels of adjudication.

As far as legal forecasting is concerned, the author is of the view that even though attempts can be made to predict legal decisions using mathematical and statistical tools, the human intuition and legal sense developed over time with experience is perhaps a better meter to judge the outcome. Again,

this is with regard to hard cases and not the technically easy cases which can be decided in a mechanical manner.

So, in the end, this chapter concludes that ‘the human touch’ shall forever
remain essential for the legal profession and can never be entirely done away
with.


EXPLORING THE ROLE OF ARTIFICIAL INTELLIGENCE IN
ARBITRATION: LEGAL VIS-À-VIS TRIBUNAL SECRETARIES
Vinita Singh and Jeeri Sanjana Reddy
(Students at Damodaram Sanjivayya National Law University,
Visakhapatnam)

Abstract
In today’s techno-savvy world, artificial intelligence is revolutionizing
development and the field of law is no different. Artificial Intelligence
(AI) is based on the idea that if all characteristics of learning and
intelligence are thoroughly recorded, they may be replicated by a
computer programme. Deep learning models are a form of machine
learning that take their inspiration from the design of the human brain.
While the fundamental human element in dispute resolution can never
be substituted by technology, it is time for certain human elements in
arbitration to be gradually replaced by AI.

Parties’ apprehension of bias by tribunal secretaries often causes procedural delays in arbitration. Parties even challenge arbitral
awards after the completion of arbitration on such grounds to render
the award unenforceable. This chapter proposes employing AI to
perform administrative tasks of tribunal secretaries, thereby freeing up
arbitrators and lawyers to focus on those parts of the arbitration
process that require the most human judgment. The intended outcome
is a streamlined arbitration system that is effective, quick, and free of
biases which frequently afflict in-person arbitrations. The chapter
elaborately discusses the working of AI and the programs that can be
used to execute the proposed idea.


Additionally, the chapter conducts a cross-jurisdictional study to assess the usage and efficacy of AI in arbitration procedure for administrative tasks. Further, to critically examine the implementation of the proposed idea, an assessment of practicalities is done. This includes, but is not limited to, technical and legal problems such as the black box system and the determination of nationality. The chapter thus delves into a comprehensive discussion on incorporating AI into arbitration, along with addressing the challenges which may arise from the implementation of the proposed idea.

I. Introduction

In today’s techno-savvy world, artificial intelligence is revolutionizing development and the field of law is no different. While the fundamental human element in dispute resolution can never be substituted by technology, Artificial Intelligence’s (AI’s) capacity to mimic human cognitive abilities, automate time-consuming tasks and process enormous amounts of data makes it a powerful tool to eliminate inefficiencies in the arbitration process.
Tribunal secretaries are chosen to help arbitrators with administrative work,
procedural responsibilities, and their duty to decide the dispute. Organization
of meetings and gathering of records or evidence for the case file are
administrative duties. Case management and the drafting of procedural
sections of the award are examples of procedural responsibilities. By
translating papers and summarising facts, parties’ written submissions, and
evidence, secretaries also aid the tribunal in carrying out its responsibility to
decide the case.


Meanwhile, concerns have surfaced in the sphere of arbitration regarding what is perceived as the undue influence of tribunal secretaries.396 To avoid delays caused by such apprehension, AI can be an effective substitute for tribunal secretaries; it can assist arbitrators in a number of ways, including analysing and condensing the facts of the case or written submissions, and scheduling hearings and meetings in the arbitral procedure.397 NDA, Property Contract Tools and Opus 2 are among the automation technologies for such applications. Deep learning models are a subset of machine learning based on the structure of the human brain. They utilise enormous volumes of historical data to learn without human interaction.398 AI is becoming more widely used in the legal sector for a variety of functions due to the applicability of deep learning techniques in the legal-tech sector. AI’s application in arbitration thus makes it a valuable tool for drafting procedural portions of the award, creating compiled parts of court documents and arbitral rulings, and translating, transcribing and summarising evidence.

II. Issues in Arbitration due to Tribunal Secretaries

The right to challenge the mandate of tribunal secretaries emanates from the
right of parties to challenge the Tribunal’s jurisdiction. Therefore, the

396
Constantine Partasides, ‘The Fourth Arbitrator? The Role of Secretaries to Tribunals in
International Arbitration’ (2002) 18(2) Journal of International Arbitration 147
<https://doi.org/10.1023/A:1015787618880> accessed 10 January 2023.
397
Horst Eidenmüller and FaidonVaresis, ‘What is an Arbitration? Artificial Intelligence and
the Vanishing Human Arbitrator’ (2020) <https://ssrn.com/abstract=3629145> accessed 13
January 2023.
398
Paul Bennett Marrow, Mansi Karol and Steven Kuyan, ‘Artificial Intelligence and
Arbitration: The Computer as an Arbitrator – Are We There Yet?’ (2020) 74(4) Dispute
Resolution Journal
<https://www.marrowlaw.com/wp-content/uploads/2021/02/Marrow-et-al.-AI-and-
Arbitration.pdf> accessed 14 January 2023.

foundations for challenging an arbitrator, such as a “lack of impartiality and
independence” or the performance of “impermissible duties,” apply by
analogy to the right to challenge a secretary. 399 The rationale behind this is
that secretaries must adhere to similar ethical standards as arbitrators, and
that the possibility of the involvement of a biased secretary could result in a
flawed award.400 Similar to how parties use arbitrator challenges as a tactic
to delay arbitration proceedings, increase costs, or render an award
unenforceable, it is typical for them to challenge tribunal secretaries for the
same purposes.

In general practice, this is usually done by framing a procedural issue regarding the secretary’s role, to be decided by the tribunal. General issues framed by tribunals include whether the secretary has exceeded his mandate and, if yes, whether his role needs to be reduced or he needs to be replaced. As a result, arbitration proceedings are prolonged, sometimes intentionally, by parties. It is also common for a losing party to wait for an
arbitral award to be passed, and then challenge its validity on the grounds of
it being influenced by secretarial bias. Consequently, the effectiveness of
arbitration is frequently hindered by procedural delays resulting from
challenges to the secretary’s role during arbitration.401

For instance, the most common ground for challenging tribunal secretaries is
“justifiable doubts” regarding their impartiality and independence. In Victor

399
J Ole Jensen, Tribunal Secretaries in International Arbitration (Oxford University Press
2019) 319.
400
Klaus Peter Berger, International Economic Arbitration (Kluwer 1993) 259.
401
Malcolm Langford, Daniel Behn, and RunarHilleren Lie, ‘The Revolving Door in
International Investment Arbitration’ (2017) 20(2) Journal of International Economic Law
318 <https://doi.org/10.1093/jiel/jgx018> accessed 13 January 2023.


Pey Casado and President Allende Foundation v Republic of Chile,402 an International Centre for Settlement of Investment Disputes (ICSID) tribunal secretary had previously interned for two months at the investor’s law firm, years before the dispute emerged, after which he took up his job at ICSID.
After considering the parties’ arguments, the Tribunal replaced the secretary.
While similar instances illustrate that legitimate claims of secretarial bias
should be upheld even if it prolongs the course of the arbitration, it is a
common tactic for parties to raise such claims in order to delay arbitral
proceedings. Typically, secretaries “wear double hats,” serving as an
assistant in one matter and a lawyer in another dispute involving similar
issues. 403 In another case, P v Q,404 the court was requested to dismiss
arbitrators chosen by the London Court of International Arbitration (LCIA).
This was the result of an email from the Chairman that was meant to be sent
to the tribunal secretary but was misdirected to a paralegal at the Claimant’s
law firm. It contained the words, “Your response to [Claimant’s] most recent
submission?” The Court dismissed the application holding that as long as the
use of a tribunal secretary does not affect the personal decision-making
capacity of an arbitrator, the secretary cannot be claimed to have served as a
‘fourth arbitrator’.

Parties may also wait until the award has been rendered before attempting to
render it unenforceable on the basis of bias or on the grounds that the
secretary performed impermissible duties. 405 In some instances, the use of

402
Victor Pey Casado and President Allende Foundation v Republic of Chile [2014] ICSID
Case No ARB/98/2.
403
Langford, Behn and Lie (n 401).
404
P v Q & Ors [2017] EWHC 194 (Comm).
405
Chloe J Carswell and Lucy Winnington-Ingram, ‘Awards: Challenges Based on Misuse of Tribunal Secretaries’ (Global Arbitration Review, 8 June 2021) <https://globalarbitrationreview.com/guide/the-guide-challenging-and-enforcing-arbitration-awards/2nd-edition/article/awards-challenges-based-misuse-of-tribunal-secretaries> accessed 12 January 2023.

such strategies to render an arbitral ruling unenforceable has been ineffective. For instance, in Sonatrach v Statoil,406 Sonatrach attempted to invalidate the award on the pretext that the Tribunal unfairly allowed the secretary to participate in its deliberations by allowing her to analyse and make notes on the dispute’s substantive issues. The court dismissed Sonatrach’s attempts and also denied Sonatrach access to the secretary’s notes, as their disclosure would undermine the confidentiality of the Tribunal’s deliberations.

Similarly, in the case of Yukos Universal Limited (Isle of Man) v Russia,407 the secretary published an article on the case’s central issues. This publication was later cited as a reason to challenge the enforcement of the judgement, on the grounds that the secretary “stated distinctive opinions on questions squarely before the Tribunal.” The secretary had drafted some procedural aspects of the arbitral ruling, which raised suspicion. Noting that, according to the IBA Conflicts Guidelines 2014,408 circumstances in which a secretary has previously expressed a legal opinion regarding an issue that also arises in the arbitration fall under the “Green List”, the Tribunal ruled that in the absence of a direct connection between the comment and the ongoing case, there may not be ‘justifiable’ doubts.

Regardless of the success or failure of such challenges to the secretarial mandate, the aforementioned arbitrations are prime illustrations of how

406
Sonatrach v Statoil [2014] EWHC 875 (Comm).
407
Yukos Universal Limited (Isle of Man) v Russia (2015) UNCITRAL PCA Case No AA
227.
408
‘IBA Guidelines on Conflicts of Interest in International Arbitration’ (International Bar
Association, 23 October 2014) GSt 5(b), s 4.1.1.


issues of impartiality of tribunal secretaries can be raised, thereby defeating the purpose of arbitration to resolve disputes efficiently. They also demonstrate the disadvantage of human secretaries as opposed to AI: the apprehension that a secretary’s views may be introduced, perhaps even unintentionally, into drafts of the award or summaries of case documents and affect the Tribunal’s decision would not arise with AI in the place of tribunal secretaries.

III. Understanding AI and How it Operates

Despite its recent evolution, the concept of artificial intelligence may be traced back to Greek mythology. Talos, the protector of the island of Crete, was a bronze automaton which would automatically follow a sequence of operations and respond to a predetermined set of instructions. This is similar in function to today’s AI robots.

In 1957, Frank Rosenblatt invented the Perceptron at Cornell Aeronautical Laboratory, driven by the biological concept of the neuron. This formed the very basis of the artificial neural network. The model simulated the operation of a single neuron that uses standard learning algorithms to resolve linear classification tasks. It was not until the late twentieth century that AI saw wide-scale usage. AI is a branch of computer science, and it has established a link between computation and human intelligence. The capacity of AI to learn and develop with each application sets it apart from other automation and legal-tech tools. The two main categories of AI operations are machine learning and rule-based learning. Most AI tools today use machine learning, in which the AI observes patterns and updates its algorithm based on pre-existing data and user feedback; rule-based learning, by contrast, is ideal for static and gradual settings.
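Rosenblatt’s learning rule can be illustrated in a few lines: a single artificial neuron adjusts its weights on every error until it separates a linearly separable task. The task chosen here (the logical AND function), the learning rate and the epoch count are illustrative assumptions:

```python
# A minimal sketch of the perceptron learning rule on a toy, linearly
# separable task (logical AND). Weights, learning rate and data are illustrative.

def train_perceptron(samples, epochs=10, lr=0.1):
    w = [0.0, 0.0]   # one weight per input
    b = 0.0          # bias term
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - out           # 0 when correct; +/-1 otherwise
            w[0] += lr * err * x[0]      # nudge weights toward the target
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x) for x, _ in data])  # learns AND: [0, 0, 0, 1]
```

This is exactly the single-neuron linear classification described above; anything not linearly separable (famously, XOR) defeats a lone perceptron, which is why layered networks came later.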


Machine Learning applies statistical techniques to find patterns in a provided dataset. It has been instrumental in addressing some complex issues; computer vision and natural language processing are part of the solutions it offers. Machine Learning identifies a set of patterns from the large bulk of information given to it. As opposed to being pre-programmed with responses to a given set of conditions, it develops its own response based on the pattern it recognizes each time. With the advent of Machine Learning, there has been widespread use of AI across all spheres of life.

Inferential analytics signifies a significant shift away from the predominant model of logic-based programming, and Machine Learning operates on the basis of it. The learning experience in the case of computers includes millions of examples of a given object, from which a pattern may be drawn and incorporated. Machine Learning approaches presumptively find patterns and regularities behind the random variations of data stored in a database. These datasets can be organised in multiple ways depending on the purpose. Machine Learning is classified into two categories: supervised and unsupervised learning.

There is a third type as well, referred to as reinforcement learning; it, however, broadly falls within the category of unsupervised Machine Learning. Supervised learning aids in the accurate prediction of outcomes.409 A known value is used to forecast an unknown value: there is a mapping from an input (x) to an output (y). Training prepares the way for this mapping, and it occurs on a collection of input-output pairs that have already been labelled.

409
David E Sorkin, ‘Technical and Legal Approaches to Unsolicited Electronic Mail’ [2001] USFL Rev <https://ssrn.com/abstract=265768> accessed 14 January 2023.


Let D stand for the training set, which is the entire set of information utilised to train the system. As the size of D increases, the system gains more experience and can make more accurate predictions on input it has not yet seen. This property is referred to as generalisation ability.
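The mapping from input (x) to output (y) learned from a labelled training set D can be sketched with a deliberately simple rule, one-nearest-neighbour, which labels an unseen input with the label of its closest training example. The claim sizes and forum labels below are invented purely for illustration:

```python
# Supervised learning in miniature: D is a list of labelled (x, y) pairs,
# and an unseen x is mapped to the label of its nearest training example.

def predict_1nn(D, x):
    """Return the label of the training pair whose input is closest to x."""
    _, nearest_label = min(D, key=lambda pair: abs(pair[0] - x))
    return nearest_label

# Hypothetical training set: claim sizes (in lakhs) labelled by forum.
D = [(2, "small-claims"), (5, "small-claims"),
     (40, "high-court"), (90, "high-court")]

print(predict_1nn(D, 4))   # nearest labelled example is (5, "small-claims")
print(predict_1nn(D, 60))  # nearest labelled example is (40, "high-court")
```

With only four pairs the rule is crude; as D grows, the nearest neighbour of an unseen input tends to be genuinely similar to it, which is the generalisation ability described above.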

There are many different types of digitised data, including pixels, sound bites, game scores, temperature records, etc. The datasets are a disjointed collection of random facts. Machine Learning models can be of different types; a few widely used ones are linear, nonlinear, monotonic and discontinuous.410

Deep learning incorporates layered neural networks to replicate the human brain and its working, constructing neural connections through training.411 Additionally, it lets the system come up with the algorithm needed to make predictions.412 Any AI algorithm relies on good data as its foundation, and the algorithm’s performance is directly tied to the quality of the dataset it uses. The basic purpose of artificial intelligence is to unearth knowledge and information hidden in data. Data fed to artificial intelligence systems can only be helpful if it has been cleansed, categorised, annotated, and processed for pertinent input and analysis. The Machine Learning

410
AD Selbst and S Barocas, ‘The Intuitive Appeal of Explainable Machines’ (2018) 87(3)
Fordham Law Review <https://ir.lawnet.fordham.edu/flr/vol87/iss3/11/> accessed 13
January 2023.
411
Temitayo Bello, ‘Online Dispute Resolution Algorithm: The Artificial Intelligence
Model as a Pinnacle’ (2018) 84(2) Int’l J of Arb Med & Disp Man
<https://dx.doi.org/10.2139/ssrn.3072245> accessed 13 January 2023.
412
Harry Surden, ‘Machine Learning and the Law’ [2014] Washington Law Review
<https://scholar.law.colorado.edu/faculty-articles/81> accessed 15 January 2023.

algorithm performs better and more efficiently with time as it receives more
datasets.413

The computer quickly estimates the likely result if given a factual event and asked to compare it to similar cases that are identified and documented in a dataset. More often than not, the tasks of tribunal secretaries are labour-intensive, and it is feasible and more efficient to automate them using a Machine Learning algorithm. The algorithm can be trained to perform a given secretarial task by providing it with a sufficient sample of datasets. Here, the data is the factual or legal information of the case that needs to be processed. With the aid of AI tools, pertinent factual extracts, agreed-upon and disputed viewpoints of the parties, and procedural history can be incorporated to help with formulating the award.414

Artificial intelligence can mostly be used to review the massive amount of arbitral micro-data that the parties and their lawyers maintain. AI can analyse and figure out what information is pertinent to the argument and then communicate it more persuasively.415 The Machine Learning algorithm needs to draw a mapping between the given information and a corresponding outcome. It automatically detects the input received and starts working on it accordingly. For example, it might be given certain documents to analyse for a

413
ibid.
414
Falco Kreis and Markus Kaulartz, ‘Smart Contracts and Dispute Resolution – A Chance
to Raise Efficiency?’ (2019) 37(2) ASA Bulletin 337, 350
<https://doi.org/10.54648/asab2019031> accessed 11 January 2023.
415
Annie Chen, ‘The Doctrine of Manifest Disregard of the Law After Hall Street:
Implications for Judicial Review of International Arbitrations in US Courts’ (2009) 32(6)
Fordham International Law Journal <https://ir.lawnet.fordham.edu/ilj/vol32/iss6/3/>
accessed 13 January 2023.


particular case. Over time, as the sample size increases with increased input
of information, the AI becomes more efficient at detecting a pattern and
producing a result.

IV. Artificial Intelligence and Secretarial Tasks

Artificial intelligence (AI) is based on the idea that all attributes of intellect and learning can be mimicked by a computer programme if they are carefully recorded. By assuming the responsibilities of secretaries, artificial intelligence can improve arbitration by organising huge amounts of documentation with inhuman speed and efficiency, and hence prevent procedural delays.

(a) Administrative tasks

i. Scheduling Meetings and Oral Hearings

A tribunal secretary’s administrative duties include providing fundamental logistical support for arbitration. Frequently, tribunal secretaries are responsible for case administration, including the organisation of meetings with or without the parties, as well as oral hearings.416 Scheduling can be challenging, particularly in the context of international arbitration, which can involve multiple parties in different time zones. There are now several meeting scheduling software programmes available, such as Instabot, that enable AI to arrange appointments with ease. Instant Meeting Scheduling (X.AI) is another AI application that helps parties and arbitrators schedule

416
UNCITRAL Notes on Organizing Arbitral Proceedings 2016, s 36.

and arrange their activities. Using this application, parties can seamlessly
organise all meetings and hearings. It can integrate parties’ objectives and
take into consideration crucial factors such as time, place and people with
minimal human intervention. 417
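The tools named above are proprietary, but the underlying computation can be sketched: finding the hours of a day that fall within every participant’s working day across time zones. The participants, cities, date and working hours below are illustrative assumptions, not a description of any particular product:

```python
# Illustrative sketch: find UTC hours on a given day that land within each
# participant's local working hours (here assumed to be 08:00-18:00).
from datetime import datetime
from zoneinfo import ZoneInfo

# Hypothetical participants and their (assumed) time zones.
participants = {
    "arbitrator": ZoneInfo("Europe/London"),
    "claimant": ZoneInfo("Asia/Kolkata"),
    "respondent": ZoneInfo("America/New_York"),
}

def common_slots(day_utc, tzs, start_h=8, end_h=18):
    """Return the UTC hours on day_utc inside everyone's working day."""
    slots = []
    for hour in range(24):
        t = day_utc.replace(hour=hour)
        # Convert the candidate hour into each local time zone and check it.
        if all(start_h <= t.astimezone(tz).hour < end_h for tz in tzs):
            slots.append(t)
    return slots

day = datetime(2023, 7, 3, tzinfo=ZoneInfo("UTC"))
for t in common_slots(day, participants.values()):
    print(t.strftime("%H:%M UTC"))
```

On this illustrative date the only workable hour is 12:00 UTC (13:00 in London, 17:30 in Kolkata, 08:00 in New York), which shows why cross-time-zone scheduling is tedious by hand and an easy win for automation.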

ii. Organization of Case Documents

Another essential administrative duty of a secretary is to organise and manage the case file for the tribunal.418 The secretary may receive communication, written arguments, evidence and other materials that must be reviewed, organised, and filed in the record. This can be accomplished by AI through “predictive coding.” In Pyrrho Investments Ltd v MWB Property Ltd,419 the court examined the use of predictive coding as a method of review that would mostly be performed by software rather than people.

Arbitration can be very efficient if predictive coding is used to organise the case file after receiving documents. First, people assess and categorise a sample of documents taken from a large collection; second, the machine learning algorithm observes and “understands” the criteria employed by the humans; and third, the algorithm applies its “understanding” to the remaining documents in the huge data set. Initially, humans will be required to correct categorization errors so that the software may learn and gradually reduce the number of errors.420 In this way, technology powered by AI can organise vast numbers of documents in arbitration, which is an
AI can organise vast numbers of documents in arbitration, which is an

417
Azael Socorro Marquez, ‘Can Artificial Intelligence be used to Appoint Arbitrators?’
(2020) 1 AVANI <https://avarbitraje.com/wp-content/uploads/2021/03/ANAVI-No1-A12-
pp-249-272.pdf> accessed 15 January 2023
418
P v Q & Ors [2017] EWHC 194 (Comm).
419
Pyrrho Investments Ltd v MWB Property Ltd [2016] EWHC 256 (Ch).
420
Matt Hervey and Matthew Lavy, The Law of Artificial Intelligence (Sweet & Maxwell
2020).


essential component of the daily duties of secretaries. DISCO is an alternative AI model for organising evidence, reviewing witness videos and transcripts, and locating documents.421
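The predictive-coding workflow described above can be sketched in miniature: humans label a sample, the model learns from that sample, then categorises the rest. The toy documents, the two categories and the word-count scoring rule below are illustrative assumptions; commercial review tools use far richer models:

```python
# Toy predictive-coding sketch: learn category word frequencies from a
# human-labelled sample, then categorise unlabelled documents by score.
from collections import Counter

def train(labelled):
    """Count word frequencies per category from human-labelled samples."""
    freq = {}
    for text, label in labelled:
        freq.setdefault(label, Counter()).update(text.lower().split())
    return freq

def classify(freq, text):
    """Score each category by how often its training words appear in text."""
    words = text.lower().split()
    return max(freq, key=lambda label: sum(freq[label][w] for w in words))

# Step one: humans review and label a small sample of documents.
sample = [
    ("invoice for services rendered payment due", "financial"),
    ("payment schedule and bank transfer details", "financial"),
    ("witness statement describing the meeting", "evidence"),
    ("signed statement of the site engineer witness", "evidence"),
]
model = train(sample)

# Step three: the model applies its "understanding" to remaining documents.
print(classify(model, "bank invoice payment"))       # -> financial
print(classify(model, "engineer witness statement")) # -> evidence
```

The human-correction loop mentioned in the text corresponds to appending corrected pairs to the sample and retraining, so the error rate falls as review proceeds.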

A well-organized case file is vital for the smooth and effective operation of the arbitration, even though its value may appear to be minimal. The efficient handling of paperwork was a key part of the arbitration between Croatia and Slovenia.422 In this dispute, an arbitrator had colluded to secretly introduce new evidence after the record had been closed. However, he was unsuccessful in convincing the registrar to sneak the documents into the case file. Even if a tribunal secretary could be so convinced, an AI algorithm can be programmed not to allow such uploading of documents at a later stage. Another advantage of using AI is that while a secretary may be required to digitise documents,423 this activity is completed automatically because AI works on digitised data.

(b) Procedural Duties

i. Drafting of Certain Procedural Decisions

Tribunal secretaries often draft tribunal decisions (e.g., procedural orders, directions, procedural sections of the award or timetables). This is a contentious secretarial task, with some authorities deeming it legitimate for tribunal secretaries,424 and others considering it non-delegable and

421
Marquez (n 417).
422
Arbitration between the Republic of Croatia and the Republic of Slovenia [2016] PCA
Case No 2012-04.
423
Jensen (n 399) 235.
424
Yukos Universal Limited (Isle of Man) v Russia [2015] UNCITRAL PCA Case No AA
227.

prohibited.425 This task is a common reason to doubt the secretary’s mandate and the credibility of the arbitral ruling owing to concerns of bias. Tribunal secretaries may also support the drafting of a procedural decision in a purely administrative capacity, such as by typing a pre-decided verdict, proofreading (including the correction of typographical, grammatical, or mathematical errors), and checking citations, dates, and cross-references.426 With numerous AI tools such as Motionize, Spellbook and Kira Systems available to draft legal documents, similar technologies can be adapted to the arbitration context.

ii. Communication to Parties

Once a procedural decision has been drafted, the parties must be notified. This may require the secretary merely to send an e-mail to the parties, or to officially serve the decision upon them,427 a task which can also be accomplished using AI technology, as demonstrated by the success of tools such as Swart Writer and Lavender.

(c) Assistance in the discharge of the Arbitrator’s duty to decide the dispute

i. Analysing Submissions, Facts and Evidence

The arbitrator’s first logical step is to become deeply familiar with the dispute’s facts, the parties’ statements, and the evidence. The sheer bulk of these materials is a persistent constraint on their assessment.428 Tribunals therefore

425 Compañía de Aguas del Aconquija SA and Vivendi Universal SA v Argentine Republic [2010] ICSID Case No ARB/97/3.
426 Hong Kong International Arbitration Centre Domestic Arbitration Rules 2014, s 3.3(e); Note to Parties and Arbitral Tribunals on the Conduct of the Arbitration under the ICC Rules of Arbitration 2017, s 150.
427 Jensen (n 399).
428 ‘Artificial Intelligence ‘AI’ in International Arbitration: Machine Arbitration’ (Nairobi Centre for International Arbitration, August 2021) <https://ncia.or.ke/wp-

Acing the AI: Artificial Intelligence and its Legal Implications

frequently task secretaries with condensing the parties’ written submissions in the form of notes, memos, or an assessment of the parties’ stances.429 These materials are meant to assist the tribunal in ‘digesting’ the parties’ arguments. As a corollary to their responsibility to outline the parties’ arguments, tribunal secretaries are frequently required to summarise the case’s evidence and factual circumstances.430 The majority of current guidelines on tribunal secretaries permit this duty. Owing to the possibility of pre-judgment by the tribunal secretary, however, some do not consider it permissible.

A summary requires the secretary to determine what information is relevant and what is not. The outcome thus reflects the secretary’s interpretation, or ‘spin’, of the document’s contents and captures his opinions in the summary. Summaries also run the risk of error: beyond the reduction and manipulation of material, they may contain incorrect information, as tribunal secretaries may misunderstand arguments, overlook aspects of them, or make clerical errors when drafting.431 Given these technicalities, AI can effectively replace secretaries in such analytical tasks. Kira Systems is one of the most commonly used artificial intelligence applications for quickly recognising, extracting, and analysing document text. Arbitrators may find it useful for studying pertinent information regarding a dispute’s facts, the parties’ arguments, and the evidence.432
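The flavour of such extraction and summarisation tools can be conveyed with a deliberately tiny sketch. The following is a hypothetical word-frequency summariser written for illustration only; it is not how Kira Systems or any other vendor product actually works, and the sample text is invented:

```python
import re
from collections import Counter

# Hypothetical, minimal extractive summariser: a toy stand-in for the
# document-analysis tools mentioned above, not any vendor's actual method.
def summarise(text: str, keep: int = 1) -> list[str]:
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    # Score each sentence by the corpus frequency of the words it contains;
    # this scoring rule is itself a subjective choice, as the text notes.
    scored = sorted(
        sentences,
        key=lambda s: sum(freq[w] for w in re.findall(r"[a-z']+", s.lower())),
        reverse=True,
    )
    return scored[:keep]

doc = ("The claimant seeks damages. The respondent denies the claim. "
       "The tribunal must weigh the claim for damages against the denial.")
print(summarise(doc))  # keeps the highest-scoring sentence
```

Even this toy exhibits the ‘spin’ problem described above: changing the scoring rule changes which sentence the summary keeps.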

content/uploads/2021/08/ARTIFICIAL-INTELLIGENCE-AI-IN-INTERNATIONAL-ARBITRATION.pdf> accessed 13 January 2023.
429 Partasides (n 396).
430 Young ICCA Guide on Arbitral Secretaries 2014, art 3(2)(h).
431 Jensen (n 399) 254.
432 Marquez (n 417).

ii. Document Translation and Interpretation Services

A tribunal secretary is asked to assist arbitrators with translation only when he possesses a particular level of linguistic proficiency.433 In such instances, the risk of secretarial bias is greater than with a neutral translator. By contrast, an AI secretary would be able to perform translation tasks quickly and impartially. One of the most recent developments in the sphere of arbitration is the introduction of e-Arbitration services run by Hong Kong’s ‘Electronic Business-Related Arbitration and Mediation Platform’ (eBRAM).434 The Platform’s objective is to provide AI-based services such as document translation and the interpretation of online hearings in arbitration proceedings.

V. AI in Arbitration: An Assessment of Practicality

(a) Lessons from across jurisdictions

For years, lawyers have conducted legal research by employing AI-enhanced tools like LexisNexis and Google. To increase effectiveness and cut costs, it is time to consider using AI in arbitration. The AI legal solutions presently available in the United States are mostly geared toward assisting arbitrators with case management and administrative activities, such as scheduling, transcription and translation services, and document analysis.435 While courts in the United States of America have utilised AI techniques in
433 Alabama Claims of the United States of America against Great Britain (1872) 29 RIAA 125.
434 Gülüm Bayraktaroğlu-Özçelik and Ş Barış Özçelik, ‘Use of AI-Based Technologies in International Commercial Arbitration’ (2021) 12(1) European Journal of Law and Technology <http://hdl.handle.net/11693/78118> accessed 8 January 2023.
435 Bakst et al, ‘Artificial Intelligence and Arbitration: A US Perspective’ (2022) 16(1) Dispute Resolution Journal <https://www.cov.com/-/media/files/corporate/publications/2022/05/artificial-intelligence-and-arbitration-a-us-perspective_bakst-harden-jankauskas-mcmurrough-morril.pdf> accessed 12 January 2023.


criminal trials, the arbitral community has yet to widely adopt AI. In 2019, the United Kingdom invested £61 million in law technology, making artificial intelligence and legal technology one of the country’s most prominent fields. Since 2021, there have been discussions in London regarding the deployment of this technology in dispute resolution.436 Hong Kong has already implemented AI-based eBRAM services in arbitration. Africa has witnessed many exciting advancements in recent years, but it has yet to explore adopting AI into the arbitral process due to a variety of public policy restrictions.437 Similarly, while arbitral institutions worldwide may well consider using AI, envisioning the same for countries like India requires considerations of cost, necessity and viability.

(b) Challenges for implementation

It is a general belief that computers operate as a ‘black box’, unable to explain the how and why of particular results, and there has been common concern regarding this lack of transparency in AI and its functioning. Applications based on deep learning and/or artificial neural networks are the ones most frequently affected by the black box issue. Artificial neural networks are made up of layers of nodes, including hidden layers; each node processes its input and sends its output to the next layer of nodes. Deep learning stacks many such hidden layers into a huge neural network, and the complexity of this can grow almost without limit. What the nodes learn through training is not openly visible to people. Therefore, we only know the

436 Dr Paresh Kathrani, ‘The use of tech and AI in the future of dispute resolution in London’ (CIArb, 17 June 2021) <https://www.ciarb.org/resources/features/the-use-of-tech-and-ai-in-the-future-of-dispute-resolution-in-london/> accessed 7 January 2021.
437 Sadaff Habib, ‘The Use of Artificial Intelligence in Arbitration in Africa – Inevitable or Unachievable?’ (IBA Net) <https://www.ibanet.org/article/E62B06F6-7772-458A-A6E7-1474DB7136B5> accessed 12 January 2023.

final output given to us, and not what made the algorithm reach that conclusion. This situation is referred to as an AI black box.438
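The structure just described can be made concrete with a toy network. In the sketch below, the weights and inputs are arbitrary numbers chosen purely for illustration; the point is that the hidden activations, while perfectly inspectable as numbers, carry no human-readable meaning, which is precisely the opacity the black-box critique targets:

```python
import math

# Toy feed-forward network: weights are arbitrary, chosen only to
# illustrate the hidden-layer structure described above.
W_HIDDEN = [[0.5, -1.2], [0.8, 0.3], [-0.6, 0.9]]  # three hidden nodes, two inputs
W_OUT = [1.0, -0.7, 0.4]                           # output weights

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs: list[float]) -> float:
    # Each hidden node processes the inputs and passes its output onward.
    hidden = [sigmoid(sum(w * x for w, x in zip(row, inputs)))
              for row in W_HIDDEN]
    # These activations are plain numbers with no obvious meaning:
    # inspecting them does not explain why the output is what it is.
    return sigmoid(sum(w * h for w, h in zip(W_OUT, hidden)))

score = forward([1.0, 0.5])
print(0.0 < score < 1.0)  # True: only the final output is observable
```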

Rule-based systems have an edge over machine learning in this context. A rule-based AI system is designed to generate artificial intelligence largely through a model based on pre-determined rules. Such a system consists of a number of human-programmed rules, with an explicit if-then structure that takes the system to a predetermined output.
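By way of contrast, a rule-based system’s reasoning is fully legible. The sketch below uses hypothetical filing rules (the cut-off date, page limit, and outcome labels are invented for illustration) to show the explicit if-then structure:

```python
from datetime import date

# Hypothetical filing rules: the cut-off date, page limit and outcome
# labels are invented for illustration.
RECORD_CLOSED = date(2023, 6, 1)

def admit_filing(filing_date: date, pages: int) -> str:
    # Rule 1: nothing enters the file after the record is closed.
    if filing_date > RECORD_CLOSED:
        return "rejected: record closed"
    # Rule 2: oversized submissions need leave of the tribunal.
    if pages > 50:
        return "referred: leave required"
    # Default rule: everything else is admitted.
    return "admitted"

print(admit_filing(date(2023, 5, 20), 30))  # admitted
print(admit_filing(date(2023, 7, 1), 10))   # rejected: record closed
```

Every outcome can be traced to a specific, human-written rule, which is exactly the transparency that deep learning systems lack.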

With the advancement of machine learning techniques, it is now possible to teach a computer a great deal about litigation and arbitration. Some, however, have argued that data sets might have ingrained biases that could make artificial intelligence (AI) less objective.439 Unconscious biases might subtly influence, or even determine, the algorithm design process, and the same holds true for training data. An algorithm needs a structure, and choosing that structure can amount to adopting a subjective standard that is invisible to the creator and to those who will be using the output.

Training data comes from two places: it may be chosen by the algorithm’s creator or by outside parties. Training data tainted by the unintentional biases of those involved in the selection process is a significant danger. It is simpler to identify and address bias and inaccuracy in algorithm design than in training data sets. The biases are likely to be self-perpetuating, since learning algorithms retrain and reinforce themselves by utilising

438 Ronald Yu and Gabriele Spina Ali, ‘What’s inside the Black Box: AI Challenges for Lawyers and Researchers’ (2019) 19(1) Legal Information Management <https://doi.org/10.1017/S1472669619000021> accessed 8 January 2023.
439 Caryn Devins, ‘The Law and Big Data’ (2017) 27(2) Cornell Journal of Law & Public Policy <https://scholarship.law.cornell.edu/cjlpp/vol27/iss2/3> accessed 9 January 2023.


previous findings generated with corrupted data, lowering the model’s usefulness.440 This poses a serious risk that, over time, undiscovered and inaccurate biases could become so well ingrained that they ultimately change the algorithm without being noticed by humans.
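This self-reinforcing dynamic can be illustrated with a deliberately simplified simulation. In the hypothetical lending scenario below (all numbers are invented), a model retrains only on the applicants it previously approved, so its approval threshold drifts ever higher without any single step being large enough to attract human attention:

```python
import random

random.seed(0)  # deterministic for the example

# Hypothetical lending simulation: the model only ever observes outcomes
# for applicants it approved, so each retraining round inherits and
# amplifies the skew of the previous one.
def simulate(initial_threshold: float, rounds: int) -> float:
    threshold = initial_threshold
    for _ in range(rounds):
        applicants = [random.random() for _ in range(1000)]
        approved = [a for a in applicants if a >= threshold]
        # "Retrain" on the approved subset alone: the new threshold is the
        # mean score of that already-filtered sample.
        threshold = sum(approved) / len(approved)
    return threshold

# Starting from 0.5, five rounds of self-training push the bar near 1.0.
print(round(simulate(0.5, 5), 2))
```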

VI. Conclusion

Artificial intelligence is a reality and will remain so in a modern day where big data is the norm and computers grow more powerful by the day.441 AI has been instrumental in increasing procedural efficacy and revolutionising numerous processes, such as e-discovery. There is always a need for bigger and better things, an idea backed up by the creation of big data technology and quantum computers that process intricate algorithms and large amounts of data. There is a growing need to automate the legal system, notably its laborious and time-consuming procedures. There is currently no law that specifically provides for the use of AI in the arbitration procedure, and additional rules and restrictions would be required if AI were to win support from the legal field for implementation in arbitration. Arbitration is a widely preferred method of alternative dispute resolution today, recognised as a means of achieving a prompt and effective resolution of disputes. It therefore becomes increasingly important to try to apply AI to it. The benefits of arbitration could advance significantly more swiftly with AI-assisted arbitration. Questions around bias, as well as related expenditures, will be greatly reduced if the plan is

440 Maxi Scherer, ‘Artificial Intelligence and Legal Decision-Making: The Wide Open? Study on the Example of International Arbitration’ (2019) Queen Mary School of Law Legal Studies Research Paper No 318/2019 <https://ssrn.com/abstract=3392669> accessed 9 January 2023.
441 Marrow, Karol and Kuyan (n 398).

implemented properly. Conflicts over scheduling will be lessened, and there will be less delay. At the end of the day, we must remember that AI is only enhanced statistics, not magic. It is important that we embrace the advancements it offers while addressing the challenges it brings with it.


THE ROLE OF ARTIFICIAL INTELLIGENCE (AI) IN GLOBAL FINANCIAL SYSTEM: CHALLENGES AND GOVERNANCE
Aayush Pandey and Vanisha Singh
(Students at Gujarat National Law University, Gandhinagar and Institute of Law, Nirma University respectively)

Abstract
Artificial Intelligence (AI) functions as an engine for modern finance. AI is an interdisciplinary field, classified as a strategic sector by the EU, and a vital engine of economic progress that can solve many societal problems. It is expected to deliver better service in less time and at a lower cost. This chapter explores major AI applications that are changing the financial ecosystem, transforming the financial industry, and having the potential to improve many of its functions. It examines AI’s risks and limitations in law, finance, and society. Part I of this chapter maps AI use-cases in banking, demonstrating why AI has progressed so quickly and will continue to do so. Part II discusses potential challenges arising from AI’s expansion in finance. Part III discusses AI’s regulatory problems in financial services and the methods available to address them. Part IV emphasises the need for human engagement, examines the inherent and structural risks and limitations of financial AI, discusses their implications, and gives recommendations for the future. This chapter seeks to inspire new thinking on compliance, technology, and modern finance.

I. Introduction

Artificial intelligence (hereinafter referred to as AI) refers to intelligent computational agents442 that mimic human power and accuracy.443 AI systems have advanced considerably in the past decade, modelling natural and biological intelligence through algorithmic models.444 Although a machine that can comprehend or perform any intellectual task a human undertakes is not yet possible, today’s AI systems can perform well on activities that require human intelligence. Many financial activities today rely on AI, algorithmic programs, and supercomputers.445 Fintech organisations have increased the usage of AI in the financial sector. The recent adoption of big datasets and cloud computing in the financial sector, combined with the development of the information economy, has made AI systems practicable. In one survey, 77% of financial institutions expected AI to be important to their business within two years.446

AI will be used for income generation, automation systems, risk assessment, customer support, and client recruitment. The COVID-19 pandemic boosted AI use in finance. Banks are researching methods to use their AI experience

442 David Poole and Alan Mackworth, Artificial Intelligence: Foundations of Computational Agents (Cambridge University Press 2018).
443 Laurie Hughes, Yogesh K Dwivedi and Tom Crick, ‘Artificial Intelligence (AI): Multidisciplinary Perspectives on Emerging Challenges, Opportunities, and Agenda for Research, Practice and Policy’ (2019) 57(7) International Journal of Information Management <https://www.sciencedirect.com/science/article/pii/S026840121930917X> accessed 13 December 2022.
444 Shivam Gupta, Vinayak A Drave and Yogesh K Dwivedi, ‘Achieving Superior Organizational Performance via Big Data Predictive Analytics: A Dynamic Capability View’ (2020) 90(3) Industrial Marketing Management 581 <https://doi.org/10.1016/j.indmarman.2019.11.009> accessed 13 December 2022.
445 MB Fox, ‘The New Stock Market: Sense and Non-Sense’ (2015) 65(2) Duke Law Journal <http://scholarship.law.duke.edu/dlj/vol65/iss2/1> accessed 13 December 2022.
446 Madeleine Hillyer, ‘AI Will Transform Financial Services Industry within Two Years, Survey Finds’ (World Economic Forum, 4 February 2020) <https://www.weforum.org/press/2020/02/ai-will-transform-financial-services-industry-within-two-years-survey-finds> accessed 15 December 2022.


to improve underwriting and fraud detection.447 AI’s progress could widen the digital gap between developed and poor nations, as its deployment and benefits are concentrated mostly in major nations and a few emerging economies. Nevertheless, these technologies could aid emerging economies by lowering the costs of credit risk assessments.448 AI usage in the banking sector brings new risks and problems for ensuring financial stability. AI is seen as a driver of disruptive technology. In financial services, AI could effect change in different ways: it could increase the quality of products and services for clients by drawing on a larger and deeper analytical base, and it can drive innovation.449

Certain challenges also arise from the use of AI in financial institutions. AI-based choices may not be easily explainable or free from prejudice, and AI’s use increases cyber and privacy hazards. Financial stability difficulties could arise concerning the resilience of AI algorithms, owing to structural transformations and the greater interconnection that comes from relying on a few AI service providers. This chapter explores major AI applications that are changing the financial ecosystem, transforming the financial industry, and having the potential to improve many of its functions. It examines AI’s risks and limitations in law, finance, and society. It emphasises the need for human engagement and

447 R Thomas, ‘AI-Bank of the Future: Can Banks Meet the AI Challenge?’ (McKinsey & Company, 20 May 2021) <https://www.mckinsey.com/industries/financial-services/our-insights/ai-bank-of-the-future-can-banks-meet-the-ai-challenge> accessed 15 December 2022.
448 Cristian Alonso, Siddharth Kothari and Sidra Rehman, ‘How Artificial Intelligence Could Widen the Gap between Rich and Poor Nations’ (IMF, 2 December 2020) <https://www.imf.org/en/Blogs/Articles/2020/12/02/blog-how-artificial-intelligence-could-widen-the-gap-between-rich-and-poor-nations> accessed 10 December 2022.
449 Stephen Bredt, ‘Artificial Intelligence AI in the Financial Sector Potential and Public Strategies’ (Frontiers, 4 October 2019) <https://www.researchgate.net/publication/336261157_Artificial_Intelligence_AI_in_the_Financial_SectorPotential_and_Public_Strategies/fulltext/5d973e15299bf1c363f7a30c/Artificial-Intelligence-AI-in-the-Financial-Sector-Potential-and-Public-Strategies.pdf> accessed 5 December 2022.

seeks to inspire new thinking on compliance, technology, and modern
finance.

II. Major AI Applications in the Global Financial Ecosystem

(a) Advancement towards personalised banking

E-banking has become part of the international financial landscape, suiting customers’ needs.450 Banks have invested heavily in technology to boost financial customer service, offering digital banking channels like ATMs, online banking, mobile banking, and electronic payments to increase profitability and reduce operating costs.451 Online banking is a relatively new e-banking service that allows customers to complete financial operations digitally without visiting a branch.452 Due to internet banking, the global banking climate has changed drastically.453 Studies in Australia,454 Taiwan,455 Malaysia,456

450 Sweety Gupta and Anshu Yadav, ‘The Impact of Electronic Banking and Information Technology on the Employees of Banking Sector’ (2017) 42(4) Management and Labour Studies 379 <https://doi.org/10.1177/2393957517736457> accessed 5 December 2022.
451 Dan Sarel and H Marmorstein, ‘Marketing Online Banking Services: The Voice of the Customer’ (2003) 8 Journal of Financial Services Marketing <http://dx.doi.org/10.1057/palgrave.fsm.4770111> accessed 15 December 2022.
452 Deborah R Compeau and Christopher A Higgins, ‘Computer Self-Efficacy: Development of a Measure and Initial Test’ (1995) 19(2) MIS Quarterly <http://dx.doi.org/10.2307/249688> accessed 6 December 2022.
453 A Meharaj Banu, N Shaik Mohamed and Satyanarayana Parayitam, ‘Online Banking and Customer Satisfaction: Evidence from India’ (2019) 15(1) Asia-Pacific Journal of Management Research and Innovation 68 <http://dx.doi.org/10.1177/2319510x19849730> accessed 10 December 2022.
454 Carmel Herington and Scott Weaven, ‘Can Banks Improve Customer Relationships with High Quality Online Services?’ (2007) 17(4) Managing Service Quality: An International Journal 404 <http://dx.doi.org/10.1108/09604520710760544> accessed 14 December 2022.
455 T Chen, ‘Critical Success Factors for Various Strategies in the Banking Industry’ (1999) 17(2) International Journal of Bank Marketing 83 <http://dx.doi.org/10.1108/02652329910258943> accessed 14 December 2022.
456 Mohd Al-Hattami, Abdulwahid Ahmed Hashed Abdullah and Afrah Abdullah Ali Khamis, ‘Determinants of Intention to Continue Using Internet Banking: Indian Context’ (2021) 17(1) Innovative Marketing 40 <http://dx.doi.org/10.21511/im.17(1).2021.04> accessed 10 December 2022.


Finland,457 Thailand,458 Italy,459 Turkey,460 the UK,461 Singapore,462 and Korea463 found that internet banking boosted consumer satisfaction due to its quick access and time-saving features. Electronic banking makes it easier for consumers to compare bank services and products, encourages competition, and helps banks grow into new markets. Many banks even offer mobile financial guidance.

These AI-powered solutions track income, recurring expenses, and spending behaviours, and offer financial strategies and ideas.464 Mobile banking apps help customers pay bills, complete transactions, and engage with the bank more efficiently. Electronic banking is prone to the same kinds of risks as traditional banking, especially regulatory, legislative, technical, and
457 Heikki Karjaluoto, Minna Mattila and T Pento, ‘Electronic Banking in Finland: Consumer Beliefs and Reactions to a New Delivery Channel’ (2002) 6 Journal of Financial Services Marketing 346 <http://dx.doi.org/10.1057/palgrave.fsm.4770064> accessed 10 December 2022.
458 S Prompattanapakdee, ‘The Adoption and Use of Personal Internet Banking Services in Thailand’ (2009) 37(1) The Electronic Journal of Information Systems in Developing Countries 1 <http://dx.doi.org/10.1002/j.1681-4835.2009.tb00261.x> accessed 11 December 2022.
459 Rocco Ciciretti, Iftekhar Hasan and Cristiano Zazzara, ‘Do Internet Activities Add Value? Evidence from the Traditional Banks’ (2008) 35 Journal of Financial Services Research 81 <http://dx.doi.org/10.1007/s10693-008-0039-2> accessed 11 December 2022.
460 Vichuda Nui Polatoglu and Serap Ekin, ‘An Empirical Investigation of the Turkish Consumers’ Acceptance of Internet Banking Services’ (2001) 19(4) International Journal of Bank Marketing 156 <http://dx.doi.org/10.1108/02652320110392527> accessed 12 December 2022.
461 Gary Boyes and Merlin Stone, ‘E-Business Opportunities in Financial Services’ (2003) 8 Journal of Financial Services Marketing 176 <http://dx.doi.org/10.1057/palgrave.fsm.4770117> accessed 12 December 2022.
462 Z Liao and MT Cheung, ‘Internet-Based e-Banking and Consumer Attitudes: An Empirical Study’ (2002) 39(4) Information & Management 283 <http://dx.doi.org/10.1016/s0378-7206(01)00097-0> accessed 13 December 2022.
463 B Suh and I Han, ‘Effect of Trust on Customer Acceptance of Internet Banking’ (2002) 1(3) Electronic Commerce Research and Applications 247 <http://dx.doi.org/10.1016/s1567-4223(02)00017-0> accessed 14 December 2022.
464 R Alt, R Beck and MT Smits, ‘FinTech and the Transformation of the Financial Industry’ (2018) 28 Electronic Markets 235 <http://dx.doi.org/10.1007/s12525-018-0310-9> accessed 14 December 2022.

reputational issues. Many national authorities have adjusted their regulations
to guarantee the security and profitability of the domestic financial market,
promote market discipline, and preserve client rights and public faith in the
banking sector. Policymakers are becoming more cognizant of macro
policy’s impact on capital flows.465

(b) Heavy reliance on risk management

AI-driven solutions can help banks decide how much to lend a client, notify traders of position risk, detect consumer and insider fraud, improve compliance, and reduce model risk.466 The Global Financial Crisis (GFC) of 2008 underlined the importance of bank risk management.467 The GFC’s economic and financial tragedy was largely caused by banks’ disregard for risk management in the period before 2008.468 Financial risk relates to uncertain outcomes that affect a company’s earnings.469 Banks transfer funds between deficit and surplus units. Due to knowledge asymmetries, economic entities prefer utilising an intermediary, and the risk of the transaction is transferred to the bank as middleman instead of to the surplus unit. Part (or even all) of the bank’s investment is at risk. This risk is the chance that an investment’s

465 Saleh M Nsouli and Andrea Schaechter, ‘Finance and Development’ (2002) 39(3) Finance and Development <https://www.imf.org/external/pubs/ft/fandd/2002/09/nsouli.htm> accessed 16 December 2022.
466 S Aziz and MM Dowling, ‘AI and Machine Learning for Risk Management’ (2018) SSRN Electronic Journal <http://dx.doi.org/10.2139/ssrn.3201337> accessed 16 December 2022.
467 J Blažek, T Hejnová and H Rada, ‘The Impacts of the Global Economic Crisis and Its Aftermath on the Banking Centres of Europe’ (2018) 27(1) European Urban and Regional Studies 35 <http://dx.doi.org/10.1177/0969776418807240> accessed 16 December 2022.
468 Wyn Grant and Graham K Wilson, ‘The Consequences of the Global Financial Crisis: The Rhetoric of Reform and Regulation’ (2013) 50 Choice Reviews Online <http://dx.doi.org/10.5860/choice.50-2771> accessed 17 December 2022.
469 H David Sherman and S David Young, ‘Where Financial Reporting Still Falls Short’ (Harvard Business Review, 1 July 2016) <https://hbr.org/2016/07/where-financial-reporting-still-falls-short> accessed 14 December 2022.


real return will be lower than projected. Banks’ business models depend on managing this risk.470 Effective risk management identifies, measures, and monitors a bank’s market, credit, and liquidity risks. Upside risk can boost a bank’s worth.471

The bank’s mindset and risk appetite complement its objective of enhancing shareholder value. A robust risk culture ensures excellent workplace relationships for workers who are aware of the bank’s risk attitude and risk parameters; it results from employee behaviour and beliefs, strategic decisions and experiences, and underlying assumptions.472 Regulation protects customers, reduces crime, supports macroeconomic objectives, and maintains investor trust. Credit, liquidity, reputational, and operational risks are the major financial hazards. Credit risk is the likelihood that a bank may lose money if a customer does not meet their contractual obligations or repay a loan.473 Credit risk is measured in terms of expected and unexpected loss.474
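The conventional quantification behind this measurement can be sketched briefly. Under the standard decomposition used in credit-risk practice, expected loss is the product of the probability of default (PD), the loss given default (LGD), and the exposure at default (EAD); the figures below are invented for illustration:

```python
# Standard credit-risk decomposition: expected loss is the product of the
# probability of default (PD), loss given default (LGD) and exposure at
# default (EAD). The figures below are invented for illustration.
def expected_loss(pd_: float, lgd: float, ead: float) -> float:
    """EL = PD x LGD x EAD."""
    return pd_ * lgd * ead

# A 2% default probability, 40% loss given default, 1,000,000 exposure:
print(round(expected_loss(0.02, 0.40, 1_000_000), 2))  # 8000.0
```

Unexpected loss is then the variability around this expectation, which is where AI-based classifiers are said to add accuracy.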

470 Harry DeAngelo and René M Stulz, ‘Liquid-Claim Production, Risk Management, and Bank Capital Structure: Why High Leverage Is Optimal for Banks’ (2015) 116(2) Journal of Financial Economics 219 <http://dx.doi.org/10.1016/j.jfineco.2014.11.011> accessed 14 December 2022.
471 Anthony Saunders, Marcia Cornett and Otgo Erhemjamts, Financial Institutions Management: A Risk Management Approach (McGraw Hill 2021).
472 J Galbreath, ‘Drivers of Corporate Social Responsibility: The Role of Formal Strategic Planning and Firm Culture’ (2009) 21(2) British Journal of Management 511 <http://dx.doi.org/10.1111/j.1467-8551.2009.00633.x> accessed 15 December 2022.
473 K Horcher, ‘Managing Treasury Risks in the Real World’ (2005) 17(1) Journal of Corporate Accounting and Finance 23 <http://dx.doi.org/10.1002/jcaf.20163> accessed 15 December 2022.
474 E Angelini, G di Tollo and A Roli, ‘A Neural Network Approach for Credit Risk Evaluation’ (2008) 48(4) The Quarterly Review of Economics and Finance 733 <https://doi.org/10.1016/j.qref.2007.04.001> accessed 15 December 2022.

AI could improve a bank’s credit risk quantification process, classifying credit risk more accurately than traditional methods.475 Operational risk relates to possible losses through internal management, functional and accounting system failures, failed procedures and processes, fraud, and human mistakes.476 AI uses clustering algorithms to uncover abnormal spending trends and fraud rings.477 Banks must have enough liquidity to handle loan requests and depositor withdrawals; AI could save operating costs and effort, increase automation, and improve liquidity management.478 A bank’s reputation depends on its financial strength.479 Reputational risk is the risk of bad headlines harming a company’s brand and economic well-being.480 AI discovers patterns in otherwise inaccessible data, and algorithms encode this data to make predictions and judgments.481
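A minimal flavour of such anomaly detection can be given in a few lines. Production systems cluster transactions across many features; the hypothetical sketch below (the amounts and the cut-off are invented) simply flags one-dimensional outliers far from the typical spending pattern:

```python
from statistics import mean, stdev

# Hypothetical one-dimensional anomaly flagging: real fraud systems
# cluster transactions over many features; the z-score cut-off here is
# an arbitrary illustrative choice.
def flag_abnormal(amounts: list[float], z_cutoff: float = 2.0) -> list[float]:
    m, s = mean(amounts), stdev(amounts)
    return [a for a in amounts if abs(a - m) > z_cutoff * s]

history = [42.0, 55.0, 48.0, 61.0, 45.0, 52.0, 58.0, 47.0, 5000.0]
print(flag_abnormal(history))  # [5000.0]
```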

III. Challenges arising from AI’s Expansion in the Finance Sector

475 I-Cheng Yeh and Che-hui Lien, ‘The Comparisons of Data Mining Techniques for the Predictive Accuracy of Probability of Default of Credit Card Clients’ (2009) 36(2) Expert Systems with Applications 2473 <http://dx.doi.org/10.1016/j.eswa.2007.12.020> accessed 15 December 2022.
476 ibid.
477 GD Brown Swankie and D Broby, ‘Examining the Impact of Artificial Intelligence on the Evaluation of Banking Risk’ (University of Strathclyde, 28 November 2019) <https://pureportal.strath.ac.uk/en/publications/examining-the-impact-of-artificial-intelligence-on-the-evaluation> accessed 16 December 2022.
478 M Tavana et al, ‘An Artificial Neural Network and Bayesian Network Model for Liquidity Risk Assessment in Banking’ (2018) 275 Neurocomputing 2525 <http://dx.doi.org/10.1016/j.neucom.2017.11.034> accessed 17 December 2022.
479 CS Fernando, VA Gatchev, AD May and WL Megginson, ‘The Value of Reputation: Evidence from Equity Underwriting’ (2015) 27(3) Journal of Applied Corporate Finance 96 <https://ideas.repec.org/a/bla/jacrfn/v27y2015i3p96-112.html> accessed 17 December 2022.
480 F Fiordelisi, M-G Soana and P Schwizer, ‘The Determinants of Reputational Risk in the Banking Sector’ (2011) SSRN Electronic Journal <http://dx.doi.org/10.2139/ssrn.1895327> accessed 18 December 2022.
481 H Gao, G Barbier and R Goolsby, ‘Harnessing the Crowdsourcing Power of Social Media for Disaster Relief’ (2011) 26(3) IEEE Intelligent Systems 10 <http://dx.doi.org/10.1109/mis.2011.52> accessed 19 December 2022.


The financial industry’s technological revolution has produced new risks. Modern financial companies are primarily tech companies, and the financial industry’s reliance on modern technology could cause systemic hazards.482 The expansion of AI in the financial field brings with it the loss of jobs, acceptance-testing concerns, privacy violations, failures of creativity and adaptability, restrictive application and operating procedures, unequal access to technology, problems with the accessibility of vast, accurate information, difficulties in AI-business planning, and the loss of the emotional human touch.483 Complex digital systems often fail to deliver and have faults.484

At the heart of modern finance’s high-tech infrastructure, financial mishaps are unavoidable and could strain the entire system.485 A technical glitch or error in a programme or a bank’s computer networks could produce volatile and compounding effects as AI technologies instantly and negatively react to the glitch. As the financial sector gets more high-tech and faster, turbulent market events will undoubtedly increase in the coming years.486 Systemic risk challenges financial authorities: as today’s finance becomes more technology-driven, authorities will have to broaden their control and focus on the high-tech systemic risks linked with interconnection and

482
KN Johnson, ‘Cyber Risks: Emerging Risk Management Concerns for Financial
Institutions’ [2016] SSRN Electronic Journal
<https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2847191> accessed 17 December
2022.
483
A Ghandour, ‘Opportunities and Challenges of Artificial Intelligence in Banking:
Systematic Literature Review’ (2021) 10(4) TEM Journal 1581
<http://dx.doi.org/10.18421/tem104-12> accessed 17 December 2022.
484
J Ravetz, ‘Normal Accidents: Living with High-Risk Technologies’ (1985) 17(3) Futures
287 <http://dx.doi.org/10.1016/0016-3287(85)90044-8> accessed 18 December 2022.
485
SK Mohanty and S Mishra, ‘Regulatory Reform and Market Efficiency: The Case of
Indian Agricultural Commodity Futures Markets’ (2020) 52 Research in International
Business and Finance 101145 <http://dx.doi.org/10.1016/j.ribaf.2019.101145> accessed 18
December 2022.
486
ibid.

The Role of Artificial Intelligence (AI) in Global Financial System:
Challenges and Governance
speed. 487 Beyond systemic issues, the modern financial industry faces many challenges and weaknesses. Because of their reliance on computerised systems, financial firms are vulnerable to the risks posed by technology. 488

For many financial firms, computer codes, proprietary products, private data and other intellectual property are important assets. 489 The introduction of the internet into financial activities also brings with it a host of financial dangers. 490 Rogue employees with adequate authorisation are among the most significant risks to finance organisations in this digital era; there are few protections against someone who is properly verified and approved. 491 Algorithms can produce errors. AI faults cause algorithmic bias: erroneous internal representations distort the information used to arrive at a decision. 492 The algorithm may draw conclusions based on misleading or changing statistical trends in data. 493 Specifying AI’s objectives is difficult in complex social

487
HS Scott, ‘The Reduction of Systemic Risk in the United States Financial System’ [2010]
SSRN Electronic Journal <http://dx.doi.org/10.2139/ssrn.1602145> accessed 18 December
2022.
488
Tom C. W. Lin, ‘Financial Weapons of War’ [2016] SSRN Electronic Journal
<https://ssrn.com/abstract=2765010> accessed 17 December 2022.
489
David Barboza and Kevin Drew, ‘Security Firm Sees Global Cyberspying’ (Innovation
Toronto, 6 August 2011) <https://innovationtoronto.com/2011/08/security-firm-sees-global-
cyberspying/> accessed 17 December 2022.
490
DB Hollis, ‘Why States Need an International Law for Information Operations’ [2008]
SSRN Electronic Journal <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1083889>
accessed 18 December 2022.
491
SR Chabinsky and A Archives, ‘Cybersecurity Strategy: A Primer for Policy Makers and
Those on the Front Line’ (2010) 4(1) Journal of National Security Law & Policy
<https://jnslp.com/2010/08/13/cybersecurity-strategy-a-primer-for-policy-makers-and-those-
on-the-front-line/> accessed 10 January 2023.
492
A Klein, ‘Reducing Bias in AI-Based Financial Services’ (Brookings, 10 July 2020)
<https://www.brookings.edu/research/reducing-bias-in-ai-based-financial-services/>
accessed 17 November 2022.
493
S Mullainathan and Z Obermeyer, ‘Does Machine Learning Automate Moral Hazard and
Error?’ (2017) 107(5) American Economic Review 476
<http://dx.doi.org/10.1257/aer.p20171084> accessed 17 November 2022.


environments. The algorithm needs a precise function that weighs the costs and benefits of possible actions given both the current state and the future evolution of the environment. Mis-specifying the problem structure leads to suboptimal decisions.494
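The cost of a mis-specified objective can be sketched with a toy decision problem. Everything below, the action names, returns, and the risk weight, is invented for illustration: an objective that omits the tail-risk penalty ranks the riskiest action highest, even though the true cost-benefit function ranks it lowest.

```python
# Hypothetical portfolio choices: expected annual return and exposure
# to a rare tail loss (all numbers invented for illustration).
ACTIONS = {
    "conservative": {"expected_return": 0.04, "tail_loss": 0.01},
    "balanced":     {"expected_return": 0.07, "tail_loss": 0.05},
    "aggressive":   {"expected_return": 0.11, "tail_loss": 0.30},
}

def true_objective(a, risk_weight=0.5):
    """Cost-benefit function that penalises tail-loss exposure."""
    return a["expected_return"] - risk_weight * a["tail_loss"]

def misspecified_objective(a):
    """Mis-specified variant: the tail-risk term is simply omitted."""
    return a["expected_return"]

def best_action(objective):
    """Pick the action that maximises the given objective."""
    return max(ACTIONS, key=lambda name: objective(ACTIONS[name]))

# The mis-specified objective chooses "aggressive", which the true
# objective scores below every other option.
```

Under the true objective the algorithm picks "balanced"; stripped of the risk term, it picks "aggressive", the action the true objective ranks last.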

IV. Examples of Challenges

(a) Explainability and Complexity

The explainability of AI results is a vital challenge, especially in finance. AI models are called black boxes because they are not explainable to users. 495 They are sophisticated; their input signals may not be understood; and they are often ensembles of models rather than a single model. This can make verifying the correctness of AI conclusions challenging and expose businesses to bias, inadequate modelling methodologies, or inaccurate decision-making, undermining their robustness. 496 At the same time, stronger explainability could let outsiders manipulate the algorithm and generate financial hazards. 497
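One common way of probing a black-box scorer, sketched below purely for illustration, is local sensitivity analysis: perturb each input slightly and observe how the output moves. The stand-in model and its coefficients are hypothetical, not any real credit-scoring system.

```python
import math

def black_box_score(income, debt_ratio):
    """Opaque stand-in for a scoring model (coefficients are invented)."""
    return 1.0 / (1.0 + math.exp(-(0.00005 * income - 4.0 * debt_ratio)))

def local_sensitivities(model, point, eps=1e-4):
    """Finite-difference estimate of how each feature moves the score."""
    base = model(*point)
    grads = []
    for i in range(len(point)):
        bumped = list(point)
        bumped[i] += eps
        grads.append((model(*bumped) - base) / eps)
    return grads

applicant = (60_000, 0.45)  # (income, debt-to-income ratio)
d_income, d_debt = local_sensitivities(black_box_score, applicant)
# d_income is positive: more income raises the score near this applicant;
# d_debt is negative: more indebtedness lowers it.
```

Such local probes explain one decision at a time without opening the model itself, which is why the text can note both their usefulness and the manipulation risk that fuller transparency would bring.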

(b) Cybersecurity

The use of AI increases cyber dangers and hazards. AI systems are vulnerable to cyber threats that go beyond human or software flaws. Such attacks use data

494
S Athey, ‘Beyond Prediction: Using Big Data for Policy Problems’ (2017) 355(6324)
Science 483 <http://dx.doi.org/10.1126/science.aal4321> accessed 18 November 2022.
495
R Guidotti et al, ‘A Survey of Methods for Explaining Black Box Models’ (2018) 51(56)
ACM Computing Surveys 1 <http://dx.doi.org/10.1145/3236009> accessed 20 November
2022.
496
J Silberg and J Manyika, ‘Tackling Bias in Artificial Intelligence (and in Humans)’
(McKinsey & Company, 6 June 2019) <https://www.mckinsey.com/featured-
insights/artificial-intelligence/tackling-bias-in-artificial-intelligence-and-in-humans>
accessed 22 December 2022.
497
C Molnar, Interpretable Machine Learning (Lulu 2022).

collected during the AI life cycle to exploit algorithmic weaknesses. 498 Data poisoning causes AI to misclassify or fail to recognise data. It can also be used to develop Trojan models that mask dangerous behaviours. 499 Corrupted systems could damage the financial sector’s ability to identify, price, and manage risks, leading to unrecognised systemic problems. Attackers could also obtain sensitive training datasets. Financial AI providers and users should implement mitigating mechanisms as part of their cyber-security strategy, including detection and reporting systems, rigorous training in data-feed protection, and model and data privacy measures.
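A minimal sketch of the poisoning idea, using entirely synthetic data and a deliberately simple nearest-centroid "fraud detector": injecting mislabelled outliers into the training set drags a class centroid away from the genuine class, so real fraud is waved through as legitimate.

```python
def centroid(values):
    return sum(values) / len(values)

def train(samples):
    """samples: list of (feature, label); returns one centroid per class."""
    by_label = {}
    for x, y in samples:
        by_label.setdefault(y, []).append(x)
    return {y: centroid(xs) for y, xs in by_label.items()}

def predict(model, x):
    """Assign x to the class with the nearest centroid."""
    return min(model, key=lambda y: abs(x - model[y]))

def accuracy(model, data):
    return sum(predict(model, x) == y for x, y in data) / len(data)

# Synthetic transactions: small values are legitimate, large are fraud.
clean = [(v, "legit") for v in (1.0, 1.2, 0.8, 1.1)] + \
        [(v, "fraud") for v in (5.0, 5.3, 4.8, 5.1)]
test_set = [(1.05, "legit"), (5.05, "fraud")]

# Poisoning: the attacker injects absurd outliers labelled "fraud",
# dragging the fraud centroid far away from real fraudulent activity.
poisoned = clean + [(20.0, "fraud")] * 10
# On the clean model both test points are classified correctly; on the
# poisoned model the genuine fraud at 5.05 is closer to the "legit"
# centroid and slips through.
```

Real attacks target far more complex models, but the mechanism is the same: corrupted training data silently shifts what the model treats as normal.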

(c) Data privacy

Big data privacy concerns predate AI’s mainstreaming, and tools for data anonymity and privacy have been developed. To address these problems, legal data policies are being implemented globally. The ability of AI models to prevent training-data leaks presents additional privacy concerns: AI may directly or indirectly disclose critical data. 500 Intensive data consumption heightens the risk of cyber attacks. 501 Organised crime groups may target critical assets to

498
M Comiter and H Rong, ‘Attacking Artificial Intelligence: AI’s Security Vulnerability
and What Policymakers Can Do About It’ (Belfer Center for Science and International
Affairs, 1 August 2019) <https://www.belfercenter.org/publication/AttackingAI> accessed
24 December 2022.
499
K Liu, B Dolan-Gavitt and S Garg, ‘Fine-Pruning: Defending against Backdooring Attacks on Deep Neural Networks’ (Semantic Scholar, 1 January 2019)
<https://www.semanticscholar.org/paper/Fine-Pruning%3A-Defending-Against-
Backdooring-Attacks-Liu-Dolan-Gavitt/790ec1befba47991e8fd50a24d13be6094253f93>
accessed 7 December 2022.
500
N Kshetri, ‘The Role of Artificial Intelligence in Promoting Financial Inclusion in
Developing Countries’ (2021) 24(1) Journal of Global Information Technology
Management 1 <http://dx.doi.org/10.1080/1097198x.2021.1871273> accessed 7 December
2022.
501
‘OECD Digital Economy Outlook 2020’ (OECD, 27 November 2020)
<https://www.oecd.org/digital/oecd-digital-economy-outlook-2020-bb167041-en.htm>
accessed 22 November 2022.


sell illegally. Digital innovation is likely to increase industrial digital espionage.

(d) Lack of robustness

Robust AI algorithms will establish confidence in an AI-driven finance sector and protect stability and integrity in the financial field. 502 When a once-reliable signal becomes faulty, these algorithms face a far more difficult task. During COVID-19, for example, AI models had not been trained for crisis conditions, and their performance suffered accordingly. 503
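This robustness failure can be sketched with a toy threshold rule fitted on synthetic "pre-crisis" data and then applied to a shifted regime; all numbers below are invented for illustration.

```python
import statistics

def fit_threshold(normal_vals, stressed_vals):
    """Midpoint rule: flag a reading as 'stressed' above the threshold."""
    return (statistics.mean(normal_vals) + statistics.mean(stressed_vals)) / 2

def classify(threshold, value):
    return "stressed" if value > threshold else "normal"

def accuracy(threshold, labelled):
    return sum(classify(threshold, v) == y for v, y in labelled) / len(labelled)

# Pre-crisis regime: synthetic daily volatility readings.
pre_normal   = [0.8, 1.0, 1.1, 0.9]
pre_stressed = [2.0, 2.2, 1.9, 2.1]
thr = fit_threshold(pre_normal, pre_stressed)  # midpoint of 0.95 and 2.05

in_regime = [(0.95, "normal"), (2.05, "stressed")]  # the rule works here

# Crisis regime: the volatility baseline shifts upward, so even calm
# days sit above the old threshold and everything is flagged stressed.
crisis = [(3.0, "normal"), (6.2, "stressed")]
```

The rule is perfectly accurate on data from the regime it was fitted on and degrades sharply once the baseline shifts, which is essentially what the COVID-19 episode exposed on a larger scale.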

(e) Impact on Financial stability

AI systems may deliver improved efficiency, better risk analysis, management and pricing, improved compliance, and instruments for prudential implementation and supervision, provided the algorithms they use are carefully built and verified and are used in conjunction with other safeguards to limit risks and performance difficulties. Yet because of their opaque nature, susceptibility to manipulation, limited robustness, and the privacy issues surrounding them, AI systems also present new dangers. These have the potential to erode public faith in financial systems driven by AI. 504

502
‘The Economics of Artificial Intelligence and Machine Learning’ (YouTube, 22 June
2021) <https://www.youtube.com/watch?v=esBgWGAvjQw> accessed 10 December 2022.
503
David Bholat, Mohammed Gharbawi and Oliver Thew, ‘The Impact of Covid on Machine
Learning and Data Science in UK Banking’ (Bank of England, 18 December 2020)
<https://www.bankofengland.co.uk/quarterly-bulletin/2020/2020-q4/the-impact-of-covid-
on-machine-learning-and-data-science-in-uk-banking> accessed 25 December 2022.
504
A Mirestean and others, ‘Powering the Digital Economy: Opportunities and Risks of
Artificial Intelligence in Finance’ (2021) 2021(024) IMF Departmental Papers 1
<http://dx.doi.org/10.5089/9781589063952.087> accessed 5 January 2023.

Third-party AI algorithm providers could induce greater uniformity in assessments and credit decisions across the finance sector, which, together with expanding interconnectedness, could lead to systemic issues. Data concentration and the growing use of common data sources in artificial intelligence may contribute to herding behaviour, which may in turn produce systemic danger. In a tail-risk event, an improper risk assessment and reaction by AI algorithms may magnify and propagate shocks across the financial system, making the response either more complicated or less effective. Concerns have also been voiced that policies or trading strategies based on such designs will be hard for relevant counterparties to decipher or predict. This would introduce new information asymmetries into the market, with unpredictable effects on financial stability. Both the proliferation of new financial technology and the expansion of the regulatory role will shape the course the financial industry takes in the future.

In a nutshell, the contemporary financial sector is a technology sector. As such, it is susceptible to the same kinds of external and internal technical threats as the information technology industry. In the years to come, as financial technology becomes more widespread and advanced, the technical issues faced by financial organisations will also increase in complexity. 505

V. Limitations and Regulatory Concerns

505
Tom C. W. Lin, ‘Compliance, Technology, and Modern Finance’ [2017] SSRN
Electronic Journal <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2904664> accessed
5 January 2023.


The growing use of new technologies within the financial sector has ushered
in a new set of threats. Every sophisticated financial institution operating in
today’s environment is, at its core, a technology corporation. In addition to
the standard issues about the balance sheet, financial institutions today also
have to concentrate on the dangers and threats that are related to new
financial technologies.

Despite the tremendous progress that has been accomplished and the enormous potential opened up by breakthroughs in financial artificial intelligence, it still poses certain severe, interrelated dangers and restrictions. 506 Particularly notable are four types of risks and restrictions, relating to programming codes, data bias, virtual dangers, and systemic hazards. Both in isolation and together, these four potentially hazardous domains stand out as inherent, structural concerns connected to the development of artificial intelligence in the financial sector.

Many in the financial business have been led to assume, incorrectly, that the
solutions to the issues that humans have caused in the financial sector can be
found within the great capabilities and implementations of financial artificial
intelligence. 507 While such praise and acclaim are well-deserved, it is
important to keep in mind that AI systems still have significant gaps in their
coding that prevent them from fully representing the intricacies of the

506
KN Johnson, ‘Cyber Risks: Emerging Risk Management Concerns for Financial
Institutions’ (2015) 50(1) Georgia Law Review
<https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2847191> accessed 5 January 2023.
507
Emanuel Derman, Models. Behaving. Badly: Why Confusing Illusion with Reality Can
Lead to Disaster, on Wall Street and in Life (2012).

modern financial market and the entire world. 508 The capacity of artificial
intelligence algorithms to accurately record all market activity is constrained
by the programmes’ underlying programming. Financial markets and our
unpredictable environment are filled with complicated, unfathomable human
and other aspects that cannot be adequately represented by artificial lines of
code, no matter how thorough or sophisticated they may be. Therefore, it is
important to keep in mind that computer codes and models commonly make
simple and oversimplifying assumptions about how the market actually
functions, giving a false impression that they are more predictive and
productive than they actually are. 509

Financial artificial intelligence tools can make robust predictions and generate immense value that helps move and grow markets as a consequence of these simplifications, but their limited understanding of how the market actually functions can also leave them with highly hazardous blind spots. 510 The financial crisis of 2008 was in part precipitated and aggravated by too many people in the financial industry placing excessive reliance on intelligent computers to adequately account for the dangers and implications of a burgeoning and then exploding real estate market. 511

Volatility, danger, consequences, and animal spirits in finance cannot be precisely codified, modelled, reduced, or removed due to human

508
Brian Christian, The Most Human Human (Anchor 2012).
509
RJ Shiller, Finance and the Good Society (Princeton University Press 2012) 132.
510
JO Weatherall, The Physics of Wall Street (Harper Business 2014).
511
A Saunders and L Allen, Credit Risk Management in and Out of the Financial Crisis
(Wiley 2010).


unpredictability. 512 In addition, the majority of crucial financial operations, such as transaction negotiations, board presentations, regulatory actions, legal interpretations, and many others, are carried out by humans talking with one another through verbal and nonverbal language, something that intelligent machines have not yet been able to accomplish consistently. Thus, even as our confidence and optimism in financial AI grow, we must remain cognizant of its limited capacity to grasp the unfathomable complexity of a market that is still mostly human-driven. 513

(a) Embedded bias

AI and machine learning are being used more and more in the financial sector, which is highly regulated and depends on public trust. This has led to discussions about the risk of bias being built into these systems. Friedman and Nissenbaum (1996) define embedded bias as a computer system that systematically and unfairly treats some people or groups of people worse than others. AI/ML processes for sorting customers into groups can introduce bias into the financial sector by making prices or service quality vary. 514 Bias in AI decisions is often caused by training data that is itself biased because it comes from already biased processes and datasets; such data will teach AI/ML models to be biased as well. 515 Data bias, such as wrong or insufficient information, could make it harder for people to get a loan and increase distrust of
512
ibid.
513
MB Fox, ‘The New Stock Market: Sense and Non-Sense’ (2015) 65(2) Duke Law
Journal <http://scholarship.law.duke.edu/dlj/vol65/iss2/1> accessed 13 December 2022.
514
EL Lehmann, ‘Consistency and Unbiasedness of Certain Nonparametric Tests’ (1951)
22(2) The Annals of Mathematical Statistics 165
<http://dx.doi.org/10.1214/aoms/1177729639> accessed 9 January 2023.
515
P Wang, ‘On Defining Artificial Intelligence’ (2019) 10(2) Journal of Artificial General
Intelligence <https://sciendo.com/article/10.2478/jagi-2019-0002> accessed 10 January
2023.

technology, especially among the most vulnerable. 516 There are two ways
that collecting data could lead to bias:
 The data that was used to teach the system may not be complete or
accurate. For example, predictive algorithms (like those used to
decide whether to give a loan) favour groups that are better
represented in the training data, since those predictions will be less
uncertain (Goodman and Flaxman 2016).
 The data may back up existing biases (Hao 2019). For example,
Amazon found that its internal tool for hiring was rejecting women
because it was trained on past hiring decisions that gave men more
jobs than women.
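The mechanism behind both bullet points can be sketched in a few lines of Python. The per-group rule and the skewed history below are invented for illustration; they show how a model fitted to biased past decisions reproduces that bias.

```python
from collections import defaultdict

def fit_approval_rates(history):
    """history: (group, approved) pairs drawn from past human decisions."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in history:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def model_approves(rates, group):
    """Approve when the group's historical approval rate clears 0.5."""
    return rates[group] >= 0.5

# Synthetic history: applicants of identical merit, skewed outcomes
# against group "B" (8/10 approvals for "A" versus 3/10 for "B").
history = [("A", 1)] * 8 + [("A", 0)] * 2 + [("B", 1)] * 3 + [("B", 0)] * 7
rates = fit_approval_rates(history)
# The fitted model now approves every "A" applicant and rejects every
# "B" applicant, regardless of individual merit.
```

Nothing in the code mentions merit or intent; the discrimination enters entirely through the training data, which is precisely the Amazon-hiring pattern described above.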

Financial AI has tremendous promise, but we must be wary of systems constructed using data that may reflect negative historical prejudices against the poor and the disadvantaged, which we would do well not to reproduce in the present or prolong into the future. Considering that the underlying data contexts and applications are chosen and coded by fallible people, we need to be extra cautious. 517

One of the major classes of risks and restrictions associated with the
development of financial AI is bias in data and algorithms. Due to the
increasing use of AI in the financial sector, it is imperative that
policymakers, legislators, and other important stakeholders be vigilant
against the possible damages that might result from data and algorithmic

516
Martin Cihak and Ratna Sahay, ‘Finance and Inequality’ (IMF, 17 January 2020)
<https://www.imf.org/en/Publications/Staff-Discussion-Notes/Issues/2020/01/16/Finance-
and-Inequality-45129> accessed 10 January 2023.
517
V Eubanks, Automating Inequality (St Martin’s Press 2018).


bias. Major and serious efforts have been made in recent years to reduce
algorithmic bias in the financial sector and beyond. 518 It is crucial that the
promise of creativity, impartiality, and objectivity not be used as a cover for
the perpetuation of long-standing biases in the context of today and
tomorrow.519

(b) Cyber threats limitation

The proliferation of cyber threats and cyber conflicts in the financial sector is
another important class of hazards and constraints brought on by the
development of financial AI. The financial sector is becoming increasingly
susceptible to cyber attacks due to its increasing dependence on technology,
which is reflected in the rise of financial AI. IBM research published in 2019
indicated that the financial and insurance sectors were the most vulnerable to
cyber attacks. 520 The financial sector is rapidly becoming a high-tech business, making it susceptible to the same kinds of cyber threats as the rest of the IT industry. 521

There are both external and internal cyber threats to the financial industry.
First, when it comes to virtual threats from the outside, foreign nation-states,
competitors, terrorist groups, organised cyber criminals, and cyber
combatants must be watched closely by financial firms and regulatory

518
‘Algorithmic Justice League – Unmasking AI Harms and Biases’ (AJL) <https://www.ajl.org/> accessed 10 January 2023.
519
DK Citron and FA Pasquale, ‘The Scored Society: Due Process for Automated
Predictions’ (2014) 89 Washington Law Review
<https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2376209> accessed 8 January 2023.
520
‘IBM Security X-Force Threat Intelligence Index’ (IBM, 2023)
<https://www.ibm.com/security/data-breach/threat-intelligence> accessed 05 January 2023.
521
DB Hollis, ‘Why States Need an International Law for Information Operations’ (2008)
11 Lewis & Clark Law Review 1023
<https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1083889> accessed 5 January 2023.

agencies. 522 In just the last ten years, the financial world has had to deal with a wide range of external threats from both state and non-state actors: some seek profit, while others simply want to create mayhem by stealing billions of dollars, exfiltrating crucial data, and causing major disruption. 523

Second, in addition to the external dangers they face, financial institutions and their regulators must keep an eye out for internal risks such as disgruntled workers, corporate spies, and misdirected outside contractors. 524 Although such internal dangers have long been present in the financial sector, their impact has been amplified by the sector’s substantial dependence on technology such as artificial intelligence. A rogue insider may be one of the most significant threats to the financial industry in today’s instantaneous marketplace. 525 As the financial sector increasingly resembles the technology sector through its growing use of artificial intelligence, it faces growing and substantial dangers from virtual and other technology-oriented hazards.

(c) Systemic risk

Systemic risk and big financial mishaps are more likely to occur as a result
of the proliferation of financial artificial intelligence and other similar forms
of financial technology.526 In the field of finance, an increasing dependence

522
M Bowden, Worm (Atlantic Monthly Press 2012).
523
‘APT28: A Window into Russia’s Cyber Espionage Operations Report’ (Fire Eye)
<https://www2.fireeye.com/CON-ACQ-RPT-APT28_LP.html> accessed 5 January 2023.
524
Chabinsky and Archives (n 491).
525
D Lawrence, ‘Companies Are Tracking Employees to Nab Traitors’ (Bloomberg, 12
March 2015) <https://www.bloomberg.com/news/articles/2015-03-12/companies-are-
tracking-employees-to-nab-traitors> accessed 5 January 2023.
526
‘Not so Social: High-Frequency Trading: Twitter Speaks, Markets Listen, and Fears Rise’ (Indian Express, 30 April 2013) <http://archive.indianexpress.com/news/not-so-social-highfrequency-trading-twitter-speaks-markets-listen-and-fears-rise/1109483/> accessed 8 January 2023.


on artificial intelligence and other kinds of technology might increase interconnected systemic concerns relating to scale, speed, and connectedness. In addition, the ever-increasing complexity of technology raises the stakes for potentially catastrophic financial mishaps.

The widespread use of artificial intelligence in finance has the potential to exacerbate certain systemic risks for the global financial system, particularly those associated with scale, speed, and linkages. To begin, there is the well-known systemic risk of “too big to fail,” in which large financial institutions are deemed too large and important to the welfare of the system to be allowed to falter or fail. 527 As financial artificial intelligence gains traction in the industry, institutions that hold massive amounts of data for AI purposes may likewise become too crucial for the system to lose. In light of this, a financial institution’s database size may in future become as important as its balance-sheet size for assessing its systemic risk.

The development of financial AI has the potential to increase systemic risk and cause accidental financial losses. In his seminal work on technological dangers, Normal Accidents: Living with High-Risk Technologies, Charles Perrow postulated that complex technical systems, such as the artificial intelligence-driven ones at the core of our financial systems, are intrinsically subject to malfunctions and accidents. There will undoubtedly be a rise in the

527
‘Wall Street and the Financial Crisis: Anatomy of a Financial Collapse’ (US Senate, 2011).

frequency of “normal financial accidents” as the use of financial AI becomes more widespread. 528

Both the New York Stock Exchange and the Nasdaq, the two largest stock
exchanges in the United States, have had major breakdowns in recent years,
causing the temporary suspension of hundreds of billions of dollars’ worth of
trade for many hours during otherwise typical trading sessions. 529

In conclusion, the widespread adoption of AI in the financial sector raises the stakes for potential systemic risks and catastrophic financial mishaps. While the numerous new benefits of financial AI for some organizations and institutions are certainly worth acknowledging, it is also important to be aware of the risks and difficulties that financial AI may pose in the future. 530

VI. Implication of Artificial Intelligence in Global Finance

The emergence of various financial innovations and the compliance function, together with the risks and opportunities that accompany them, will have a
significant impact on the future of the financial sector in a variety of crucial
ways. The increasing relevance of financial cybersecurity, the greater
integration of regulatory and technological tasks, and the role of the human

528
M Schneiberg and T Bartley, ‘Regulating or Redesigning Finance? Market Architectures,
Normal Accidents, and Dilemmas of Regulatory Reform’ (2010) 30 Markets on Trial: The
Economic Sociology of the U.S. Financial Crisis: Part A
<https://www.emerald.com/insight/content/doi/10.1108/S0733-
558X(2010)000030A013/full/html> accessed 7 January2023.
529
ES Browning and Scott Patterson, ‘Market Size + Complex Systems = More Glitches’ (WSJ, 22 August 2013)
<http://online.wsj.com/article/SB10001424127887323980604579029342001534148.html>
accessed 2 January 2023.
530
J Hasbrouck and G Saar, ‘Low-Latency Trading’ [2011] Johnson School Research Paper
<http://dx.doi.org/10.2139/ssrn.1695460> accessed 2 January 2023.


aspect in the future of finance are three important consequences that stand
out as highly significant.

(a) Cyber security in Global Finance

One of the biggest problems facing the financial sector in the near future is
cyber security. The complexity of the financial system and the security
threats it faces are exacerbated by the fact that the underlying technology
infrastructure is mostly privately held and managed by a wide variety of
financial intermediaries.531 Private financial firms control a large portion of
the United States’ technological and cyber infrastructure, which could make
it difficult to take timely, coordinated, and security-enhancing actions,
especially if companies prioritize short-term profits and other factors, such as
secrecy, over financial cyber security. 532 While it may make sense for certain
financial institutions to delay investing in cyber security in the short term,
doing so might increase cyber security risks for the whole sector.533 Due to
the integrated and intermediated structure of contemporary finance, a
company’s vendors and counterparties also require robust financial cyber
security to protect against market volatility.

It is anticipated that the financial industry will see an increase in the number
of investments made in this area, as well as a larger push for improved
cooperation between private and governmental players, in order to better
meet the issue posed by cyber security in the financial sector. To begin with,

531
K Eichensehr, ‘The Cyber-Law of Nations’ [2014] Georgetown Law Journal
<https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2447683> accessed 2 January 2023.
532
S Baker and S Waterman, ‘In the Crossfire: Critical Infrastructure in the Age of Cyber
War’ (Cyberwar Resources Guide, 1 January 2010) <https://www.projectcyw-
d.org/resources/items/show/158> accessed 1 January 2023.
533
DE Bambauer, ‘Ghost in the Network’ (2014) 162(5) University of Pennsylvania
Law Review 1011 <https://scholarship.law.upenn.edu/penn_law_review/vol162/iss5/1> accessed
17 December 2022.

it is expected that, in the next few years, financial institutions will increase
their spending on cyber security.

The second reason is that, because most of the technology infrastructure of the financial markets is privately controlled and run, policymakers, lawmakers, and industry partners will likely encourage individual enterprises to collaborate more concertedly with public and private players to increase financial cyber security. 534

(b) International Law and Global Finance

International Economic Law (IEL) is the part of international law that comprises trade rules, investment law, finance law, global banking, development loans and crisis lending, as well as international commercial law. 535 Zhao (2010) identified three theories to explain the emergence of financial centres. First, the geography of finance theory focuses on the location of financial transactions (information hubs) rather than economic output (the economic hinterland). The second theory explains the formation of financial centres through the Anglo-American and Continental European legal traditions. The third, the time zone theory, divides global marketplaces according to time zones.

According to the International Financial Centres Development (IFCD) Index for 2014, technology, particularly the growth of real-time communication
534
S Harris, @War:The Rise of the Military- Internet Complex (Houghton Mifflin Harcourt
2015).
535
FJ Garcia, ‘Globalization, Inequality & International Economic Law’ (MDPI, 26 April
2017) <https://www.mdpi.com/2077-1444/8/5/78> accessed 15 December 2023.


systems, is posing a threat to the traditional financial system. 536 Throughout history, the vast majority of nations and the authorities in charge of financial markets have operated on the presumption that maintaining the stability of financial markets requires limiting market competition and maintaining a segmented market structure that keeps international banks, securities businesses, and insurance companies separate from one another. In particular, the perception that the banking and securities industries are separate and distinct aided the oversight of both businesses, largely because of the well-defined and well-understood legal and commercial differences between the various types of financial organisations.

In any case, many factors must be considered: the new economic and political dynamics shaping our increasingly globalised environment; the diversity of the underlying cultures and related attributes; the disparities in legal systems and approaches throughout the world; the sheer scale of the changes occurring within Latin America, East Asia, Central and Eastern Europe, and Southern Africa in particular; the ongoing need for feasible financial sector law reforms; and the increasing significance of globalisation. None of this lessens the legal significance and relevance of developing new international “road rules” for financial and banking institutions (whether private, public, or intergovernmental in character) in the context of the global 21st century.

536 B Alex and LR Pierre, ‘International Financial Centres, Global Finance and Financial Development in the Southern Africa Development Community (SADC)’ (2017) 9(7) Journal of Economics and International Finance 68 <http://dx.doi.org/10.5897/jeif2017.0849> accessed 5 January 2023.

The Role of Artificial Intelligence (AI) in Global Financial System:
Challenges and Governance
(c) The Implication of Compliance and Technology

Since regulation and new financial technology are both on the rise at the
same time, many financial institutions will merge their compliance and
technology departments to better serve the needs of today’s financial sector.
Many financial institutions have previously seen the benefits of leveraging
the capabilities of modern information technology in their trading, investing,
research, and other business-side activities, but they are now realising that
the same is true for their regulatory operations. As a result of increased
demand from regulators and managers, compliance will increasingly rely on
cutting-edge IT infrastructure. Financial institutions in the modern period
operate in an extremely fluid and intricate market and regulatory setting. 537 It
seems certain that, if not already the case, in the near future, a strong IT
infrastructure will be equated with a strong compliance system in the
financial sector. And the tech-savvy compliance officer will rise to
prominence as a crucial part of the 21st-century financial system’s web of
life.

(d) Implication of AI on Individuals of the International Finance Sector

The rise of AI in finance and financial regulation inevitably raises existential
issues about the role of people in the future of the financial sector, just as
comparable questions are being generated by technological advances in other
areas throughout the economy.538

537 Manuel A. Utset, ‘Complex Financial Institutions and Systemic Risk’ (2011) 45 Georgia Law Review <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1810144> accessed 5 January 2023.
538 J Barrat, Our Final Invention (Saint Martin’s Griffin 2015).


However, closer inspection suggests that people will still be essential to
efficient financial and compliance operations in the not-too-distant future.
On a range of metrics, machines driven by artificial intelligence undeniably
outperform humans. AI devices are not influenced by the same kinds of
emotional responses that drive an individual.539 There is much to appreciate
in AI and its data-driven models, but those who do so should also be aware
of the restrictions to which they are subjected. When the housing market
bubble burst in 2008, advanced computer simulations failed to predict its
pace, repercussions and impact.540

In a world full of fallible, irrational, and unpredictable human actors, no
amount of new financial technology or artificial intelligence will ever be able
to produce a system that can perfectly predict financial futures and economic
hazards.541

When it comes to legal and compliance activities in the financial sector, the
real battles of the future will not be between people and AI, but rather
between humans and other humans.542 Rather than wondering how smart
machines will replace lawyers and compliance officers, the future of security
and regulatory responsibilities in the financial industry should focus on how
lawyers and compliance officers can use smart machines to build more
lawful, compliant, and lucrative institutions.

539 GA Akerlof and RJ Shiller, Animal Spirits: How Human Psychology Drives the Economy, and Why It Matters for Global Capitalism (Princeton University Press 2009).
540 A Saunders and L Allen, Credit Risk Management in and Out of the Financial Crisis (2010).
541 S Baker, Final Jeopardy: Man vs. Machine and the Quest to Know Everything (Houghton Mifflin Harcourt 2011).
542 Barry Ritholtz, ‘Trading under the Influence of Emotion’ (Bloomberg, 3 December 2015) <https://www.bloomberg.com/opinion/articles/2015-12-03/trading-based-on-emotion-is-disastrous-for-investors> accessed 2 January 2023.


VII. Conclusion

The financial industry is expected to see a further uptick in the use of AI and
ML technologies. Accelerating advancements in processing power, data
storage, and big data, as well as noteworthy gains in modelling and use-case
adaptations, are all major factors fuelling this pattern. The COVID-19
pandemic hastened the transition to a cashless society and the
widespread use of digital financial services, both of which boost the allure of
AI/ML systems among financial service providers. AI/ML will deliver
benefits but pose financial policy problems. AI systems give financial
institutions cost savings, efficiency benefits, new markets, and improved risk
management; they deliver customers new experiences, products, and cheaper
costs; and offer strong tools for regulatory compliance and prudential
monitoring. These systems raise ethical problems and new hazards to the
integrity and safety of the financial system, whose full scope is unknown.
Financial sector officials face a difficult problem since these innovations are
continually developing as new technologies emerge. These advances require
better supervision, monitoring mechanisms, and active interaction with
stakeholders to detect hazards and implement regulatory measures.

The field of finance is in the midst of a significant upheaval. New financial
technologies have revolutionized financial businesses’ operations. This
chapter has charted this tidal shift. It studied the
contemporaneous and overlapping rises of financial technology and
compliance, as well as associated hazards. It also emphasized the broader
financial ramifications of new technologies and compliance. It focused on
cyber security, technology, compliance, and the role of individuals in


contemporary finance. This chapter seeks to inspire new thinking on
regulation, technologies, and contemporary finance.


AI AND CONSUMER PROTECTION: SAFETY AND LIABILITY
IMPLICATIONS VIS-À-VIS INFORMATION ASYMMETRY
Aman Upadhyay and Nitesh Ranjan
(Students at National University of Study and Research in Law, Ranchi)

Abstract
Within a short period, AI has established itself as a significant area
that is transforming every walk of life. Most sectors, including trade
and commerce, are impacted by its influence. Consumers are the key to
trade and commerce. An effective, efficient, and balanced
implementation of AI in commerce and trade is the sine qua non for
ensuring better promotion and protection of the rights of consumers.
However, the regulations necessary for the effective implementation of
AI in India seem to have been neglected by policymakers over time.
Furthermore, the vexing issue concerning AI is that it is very difficult
to characterise, at least comprehensively. In this scenario, when there
are no specific regulating norms related to AI, consumers are bound to
suffer. This chapter makes an effort to analyse the aforementioned
issues. The foremost issue is whether the provisions of consumer
protection need to be strengthened after the implementation of AI. The
authors seek to establish that the current consumer protection laws are
largely silent on issues related to AI and hence, exhaustive provisions
related to AI are the need of the hour. Another important aspect upon
which this chapter focuses is the right of the consumer to compensation
in the event that their data is infringed. The final issue pertains to the
liability that arises out of the damages caused by the activities
performed within the


ambit of artificial intelligence. Here, the authors analyse who will be
held liable for damages, given that even intellectual property rights
remain undecided in matters related to AI. Answering this question is
important to protect consumers from the conundrum. This chapter
proposes a specific and multidimensional policy to achieve the safety
and security rights of consumers in matters related to AI.

I. Introduction

With the advancement in technology and especially the revolution of AI,
consumer life is transforming in a way that was not imagined before.
Artificial intelligence is no longer confined to the imaginary landscapes of
science fiction novels or fantasy movies in which robots take over humans
but has become our everyday reality. It is not locked in the labs and factories
but very much present around us, increasingly affecting our practical lives.543
AI is seen as a replacement for or an alternative to human intelligence.
However, it is a matter of debate as to what extent it should be considered as
an alternative, as there are certain roles for which it is impossible to find an
alternative to human intelligence anytime soon. Over the last few years, the
digital economy in India has seen a tremendous increase in consumer
activity. A wide bouquet of services is going to be delivered by AI in the
future. Although AI may make many services more easily accessible and
convenient for users, the possibility of different kinds of harm and risk also
comes with it. Consumers are the most vulnerable, for their rights may be at
stake due to the use of AI in the industrial

543 Robert Walters and Marko Novak, Cyber Security, Artificial Intelligence, Data Protection and the Law (1st edn, Springer Nature Singapore Pte Ltd. 2021).


setup. These harms could be in the form of a breach of privacy for
consumers and disrupting the fundamental objective of intellectual property
rights, which is to promote creativity, non-obviousness, recognition of
creators for their efforts, and protection from harm. In this regard, it is
imperative to prioritise the protection of the rights of consumers.

II. Artificial Intelligence: Hunt for a (Missing) Definition

Before we start dealing with the topic, it is of utmost importance to
understand what AI means, as there is no certain definition of it. Artificial
intelligence (AI) technology has evolved through several developmental
phases, from its beginnings in the 1950s to modern machine learning, expert
systems and “neural networks” that mimic the structure of biological
brains.544 The term refers to an interdisciplinary science concerned with a
paradigm shift in the tech industry and its various sectors. Artificial
intelligence (AI) is the ability of computers or other machines to display or
mimic intelligent behaviour. In this system, the performance of tasks
associated with human intelligence is done by any computer or robot. It aims
to perform activities that are difficult to perform without human intelligence.

Professor John McCarthy, who is regarded as the father of artificial
intelligence, first used the term and defined it in the simplest way as, “the
science and engineering of making intelligent machines, especially
intelligent computer programs.”545 Whenever we think of intelligence,
humans come to mind. It seems implausible to imagine independent

544 Robert Mazzolin, ‘Artificial Intelligence and Keeping Humans “in the loop”’ (2020) Modern Conflict and Artificial Intelligence <https://www.jstor.org/stable/resrep27510.10> accessed 12 January 2023.
545 McCarthy (n 4).


intelligence without the contribution of humans. Now, however, with the
development of science, it is possible to design machines that can perform
complex tasks independently without human intervention. Artificial
intelligence makes it possible for software to automatically learn from
patterns in the data by fusing large amounts of data with quick, iterative
processing and clever algorithms, thus making it able to perform human-like
tasks.546 AI now performs tasks independently, without human intervention.
Recently, the first music album composed by an AI was released.547
Although the bare idea of AI can be understood from the above views, it is
unfortunate that even after almost 70 years of the usage of the term AI, there
is no universally accepted definition of it.
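The claim that software can learn patterns from data through quick, iterative processing, rather than following hand-written rules, can be illustrated with a minimal sketch. The data points, learning rate, and iteration count below are hypothetical choices for illustration only:

```python
# Minimal illustration of "learning from patterns in data":
# fit a line y = w*x + b to example points by gradient descent,
# instead of programming the hidden rule y = 2x + 1 explicitly.
examples = [(1, 3), (2, 5), (3, 7), (4, 9)]  # hidden pattern: y = 2x + 1

w, b = 0.0, 0.0
for _ in range(2000):                 # iterative processing over the data
    for x, y in examples:
        error = (w * x + b) - y       # how far the current guess is off
        w -= 0.01 * error * x         # nudge parameters to reduce error
        b -= 0.01 * error

print(round(w, 2), round(b, 2))       # parameters approach 2.0 and 1.0
print(round(w * 10 + b))              # predicts about 21 for unseen x = 10
```

No rule was programmed; the program recovered the relationship from examples alone, which is the essence of machine learning described in the text.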

III. Inseparable for Consumer: Inescapable Protection

Consumer protection is necessary to safeguard the interests of consumers. A
better understanding can be sought from the Consumer Protection Act, 2019
which defines a consumer as –
“[a]ny person who buys any goods for a consideration which
has been paid or promised or partly paid and partly promised,
or under any system of deferred payment and includes any user
of such goods other than the person who buys such goods for
consideration paid or promised or partly paid or partly
promised, or under any system of deferred payment, when such
use is made with the approval of such person but does not
include a person who obtains such goods for resale or any
commercial purpose.”548

546 Jim Goodnight, ‘Artificial Intelligence’ (SAS Insights) <https://www.sas.com/en_in/insights/analytics/what-is-artificial-intelligence.html> accessed 10 January 2023.
547 Dom Galeon, ‘The World’s First Album Composed and Produced by an AI Has Been Unveiled’ (Futurism, 21 August 2017) <https://futurism.com/the-worlds-first-album-composed-and-produced-by-an-ai-has-been-unveiled> accessed 11 January 2023.
548 Consumer Protection Act 2019, s 2(7).


In many instances, courts have observed that, to avoid consumer exploitation
and reduce business malpractices, consumer protection is indispensable.
Consumer protection safeguards the well-being and interests of consumers by
educating and mobilising them.

Measures for consumer protection are frequently mandated by legislation.
Such regulations are meant to prevent firms from indulging in fraud or
unjustified acts to obtain a competitive advantage or mislead consumers, and
to encourage consumers to make better purchasing decisions and file
grievances against firms. To file complaints, Consumer Dispute Redressal
Forums are established in every district as provided under the Consumer
Protection Act of 2019. Aside from that, the Indian Contract Act of 1872
contains provisions that bind the promisor to fulfil his promise as agreed
upon. As such, it is the responsibility of the seller to sell products of standard
quality and not dupe the consumers. Also, the Sale of Goods Act 1930
provides – “Unless otherwise agreed, when the seller tenders delivery of
goods to the buyer, he is bound, on request, to afford the buyer a reasonable
opportunity of examining the goods to ascertain whether they conform with
the contract.”549 These provisions along with the Consumer Protection Act
pave the way forward for protection of the interests of the consumer.

IV. AI Regulation: A sine qua non

With the advancement of technology, AI has started playing a vital role in


the lives of consumers. While AI is useful for us in resolving complex

549 Sale of Goods Act 1930, s 41(2).


problems faster, more accurately, and more easily, it is also true that it comes
with certain complications. To what extent society must adapt to
technological innovations has to be based on the needs of that society, be
they economic or social.550 Since the process is complicated, it is very likely
that a considerable group of people, especially those who are not tech-savvy,
would find themselves in a disadvantaged position. As such, this unawareness
can be used by companies as an opportunity to exploit consumers.

Research shows that 56 percent of companies are using AI in at least one
function, and a recent McKinsey report suggests that the degree of
automation of all work activities could soon reach 45 percent.551 Automated
decisions about individuals tend to have an ultimate impact on human life
and raise concerns about issues of discrimination and accuracy. For ensuring
the fair, safe and sustainable development of society, regulating AI is
crucial. There is a lack of specific regulations in this regard, and hence, the
effective implementation of the AI regime does not seem like a cakewalk.
This is a dangerous situation where the rights of consumers are at stake.
Consumers are sceptical about the proper functioning of AI, as it is complex
and intricate. As a result, AI is becoming a source of distrust. The legislators
have to come up with specific laws and regulations to gain the trust of
consumers. The imposition of laws on AI would help prevent the
infringement of consumer rights by technology. In this regard, the EU has

550 Robert van den Hoven van Genderen, ‘Do We Need Legal Personhood in the Age of Robots and AI?’ in Marcelo Corrales and Mark Fenwick (eds), Robotics (Springer Nature Singapore Pte Ltd. 2018).
551 Thomas H. Davenport and Randy Bean, ‘Companies Are Making Serious Money with AI’ (MIT Sloan Management Review, 17 February 2022) <https://sloanreview.mit.edu/article/companies-are-making-serious-money-with-ai/> accessed 13 January 2023.


suggested implementing an AI act so that the positive effects of the
technology could be ensured.552 The OECD has articulated five basic
principles for regulating AI. Indian decision-makers can take a cue from
these while drafting laws regarding AI in India. Furthermore, the
decision-making process should be fair and transparent. The regulation
should clearly define which activities it covers; without such a defined
scope, the transparency requirement is difficult to implement.

V. AI in Rural Areas: A Study from Consumer’s Perspective

A significant part of our economy is contributed by transactions held in
rural areas. Producers and consumers in rural areas play a vital role in the
economy. The adoption of technology, especially AI, can bring a significant
increase in this contribution, but the implementation of the AI regime
requires various infrastructural and technological establishments. Besides
these establishments, basic amenities like uninterrupted power and data
supplies are also needed. A large portion of rural India lags on this front.
Nearly 60 percent of rural India lacks internet connectivity, 553 and high-
speed internet is a long way off. Furthermore, the literacy level is not
encouraging in rural India, and as such, it is a herculean task to implement
AI in rural India. The implementation should fulfil the very purpose of AI,
i.e., to make the daily life of human beings easier; at present, on the
contrary, it seems likely to affect the rural lifestyle negatively.

552 Commission, ‘Directorate-General for Communications Networks, Content and Technology, Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts’ [2021] COM 2021/206.
553 Bhavya Dilip Kumar and Himanshi Lohchab, ‘Low Smartphone Reach Coupled with Lack of Digital Literacy Hit Rural India Covid Vaccine Drive’ (The Economic Times, 16 May 2021) <https://ecoti.in/DGnN1b53> accessed 12 January 2023.


AI would be burdensome for the majority of rural populations. There are
high chances of it being misused unless people are made aware of its
applications. However, spreading awareness is not an easy task owing to the
social and educational backwardness of the rural areas of the country. It
becomes more difficult for a country like India, where AI is not even
properly regulated and the existing procedures and rules for consumer
protection are complex, making them arduous for rural consumers to
understand.

Any government schemes and the like become available to the rural
population much later. It takes time to reach them. This is the case when a
simple Act is to be implemented. The application of AI is a rather complex
setup, which requires much more technicality and specialisation. It would not
be justified to expect such a level of expertise from the rural population
immediately. Women consumers, in particular, are disadvantaged because
they are not independent, are less educated, and have less awareness.
However, it does not mean that the implementation of the AI regime is
impossible in rural areas. It would require much more attention and
consciousness. We cannot hold back any scheme in perpetuity owing to the
backwardness of the rural areas. We should make gradual developments
along with making rural people aware of the technicalities of AI. The holistic
development of rural India depends considerably on these significant
developments.

VI. Implications of AI


There are certain limitations when it comes to the implementation of the AI
regime. The issues range from liability to the lack of specific laws for
effective implementation of AI, and they all have an impact on the right to
privacy of an individual. The need to balance AI’s requirement for huge
amounts of organised or standardised data with the right to privacy is
probably the most difficult challenge facing the AI sector. Data access and
sharing can be reconciled with privacy, but only with regulatory
adjustments. Also, there is an asymmetry in the amount of information that a
businessman possesses in comparison to the customer, which puts consumers
in a highly prejudiced situation. The issues, including those discussed above,
are likely to cause hindrance to the successful execution of the AI regime.

VII. Something at Stake: Consumer Privacy?

To simulate human behaviour, AI extracts useful information from
consumers’ past behaviour, and businesses use this information, at the cost
of the data privacy and security of the users. AI can be an important asset
for business and the economy of the country. However, it can also be a
liability for legislators and companies when there is a breach of data privacy
or issues with AI regulation.
though privacy and data protection are evolving areas of law and economic
development, they have not matured as a measure of redressing economic
and personal harm comparable to the protection of intellectual property,
copyright, criminal procedure, and international trade law. 554 In a landmark
decision of 2017, the Supreme Court held that the right to privacy is an
inalienable and intrinsic part of the right to life enshrined under Article 21 of

554 Robert Walters and Marko Novak, Cyber Security, Artificial Intelligence, Data Protection and the Law (1st edn, Springer Nature Singapore Pte Ltd. 2021) 90-91.


the Constitution of India. 555 Although India does not have any specific law
for Data Protection, the personal information of consumers is protected
under Sections 43A and 72 of the Information Technology Act, 2000. Still,
the laws are not sufficient to protect the privacy of consumers.

The challenge in front of the legislature is to pass privacy laws that can
protect consumers from the adverse effects of AI without compromising its
efficient implementation. The onus is on the government to enact specific
privacy laws in relation to consumer protection. Further, the major challenge
for companies is to use AI as an asset while prioritising the data privacy and
security of consumers. Furthermore, companies struggle to build consumer
confidence while working with AI, since they already stand in a
disadvantageous position in consumers’ eyes on matters of privacy.

Privacy is a complex topic; it is not static but rather a dynamic concept. The
same can be inferred from a Kerala High Court judgment that extended the
ambit of privacy by laying down that the right to be forgotten is an intrinsic
part of the right to privacy.556 Hence, it is inevitable to consider the right to
be forgotten while deciding AI-related disputes. Data can be disguised via
anonymisation techniques or pseudonymisation to avoid storing identifiable
data, thus sidestepping the right to be forgotten. However, there is no
practical way to retract data once it has been fed to the machines, and the
system operates in ways that we do not often understand. As a result, the
concept of the right to be forgotten is difficult to apply to AI.
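The pseudonymisation mentioned above can be sketched in a few lines. This is a simplified illustration only; the record fields, the salt, and the truncation length are hypothetical choices, not a prescribed method:

```python
import hashlib

def pseudonymise(record, salt, identifying_fields=("name", "email")):
    """Replace directly identifying fields with salted hashes, so the
    stored record no longer points to a person without the salt."""
    out = dict(record)
    for field in identifying_fields:
        if field in out:
            digest = hashlib.sha256((salt + str(out[field])).encode()).hexdigest()
            out[field] = digest[:16]  # shortened pseudonym
    return out

# Hypothetical consumer record: behavioural data survives, identity does not.
consumer = {"name": "A. Sharma", "email": "a@example.com", "purchases": 7}
stored = pseudonymise(consumer, salt="s3cret")
print(stored["purchases"])                  # analytics value retained: 7
print(stored["name"] != consumer["name"])   # identity no longer stored: True
```

Note that pseudonymised data generally remains personal data so long as the mapping (here, the salt) is retained; only destroying that mapping approaches true anonymisation, which is precisely why the right to be forgotten is hard to honour once data has been absorbed into a model.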

555 Constitution of India 1950, art 21.
556 Justice K.S. Puttaswamy (Retd) v Union of India [2017] 10 SCC 1.


VIII. Information Asymmetry: Another Implication of AI

Information is exchanged between the consumer and the businessman during
the course of a transaction. When the transaction happens through
self-learning algorithms in big data analysis, the businessman also gains
access to previously saved data. As such, there is a difference in the amount
of information
held by them. Thus, AI creates a situation of information asymmetry, where
consumers and businessmen are not on the same pedestal in terms of the
information held by them. This influences the decisions of consumers, since
human decision-making is not static; it is affected by the information
available on the particular matter. The dominant entity uses information
asymmetry to sway the decision of the subservient entity in its favour, and
this manipulation takes the form of detrimental decisions made by
consumers under the dominant entity’s influence. Based on this information,
the company targets a specific type of customer and, accordingly, varies the
prices of products and the conditions of
the contract. The information obtained by AI is also used by companies to
determine the status of consumers and whether they can afford certain
products or not, which jeopardises ethical values and leads to monopoly and
consumer exploitation.
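The profile-based price targeting described above can be caricatured in a few lines. The profile fields and pricing rule below are entirely hypothetical; they serve only to show how data held on one side of a transaction can silently shape its terms:

```python
def quoted_price(base_price, profile):
    """Adjust a quote using data the seller holds but the buyer cannot see,
    illustrating information asymmetry in AI-driven pricing."""
    price = base_price
    if profile.get("past_purchases", 0) > 5:          # inferred willingness to pay
        price *= 1.10
    if profile.get("price_comparison_sites", False):  # likely to shop around
        price *= 0.95
    return round(price, 2)

# Two consumers see different prices for the same product and never know why.
print(quoted_price(100.0, {"past_purchases": 8}))                    # 110.0
print(quoted_price(100.0, {"past_purchases": 2,
                           "price_comparison_sites": True}))         # 95.0
```

The asymmetry lies in the inputs: the seller observes the full profile, while each buyer observes only the single price quoted to them.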

Usually, consumers do not know that product information, prices, and the
terms of the contract are specifically designed for them according to their
profile. Sometimes they receive unfavourable terms without knowing how
these were arrived at. This happens because of specific features of AI such
as the “black box effect” (a metaphor for AI technology providing relevant
outputs without revealing how it internally works), complexity, and


uncertainty. These characteristics make the proper implementation of
consumer protection legislation difficult; the same concern appears in the
European Commission’s report on AI, which states that –

“the key characteristics of AI create challenges for ensuring the
proper application and enforcement of EU and national
legislation. The lack of transparency (opaqueness of AI) makes
it difficult to identify and prove possible breaches of laws,
including legal provisions that protect fundamental rights,
attribute liability and meet the conditions to claim
compensation.”557

Therefore, it becomes a challenge to ensure the effective application and
enforcement of consumer protection-related legislation in matters of AI. It
may be necessary to adjust or clarify existing legislation in certain areas.
This does not mean that AI is wholly unfavourable: latent information that
is helpful for humanity, and that human agents could not extract and
analyse, can sometimes be made easily accessible through the use of AI
techniques.
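The “black box effect” discussed in this section can be illustrated with a toy model whose internal weights are hidden from the caller (all values hypothetical): users receive outcomes but no reasons, which is what makes possible breaches so hard to identify and prove.

```python
import random

def train_black_box(seed=42):
    """Return an opaque decision function: callers get outcomes,
    not reasons — the 'black box effect' in miniature."""
    rng = random.Random(seed)
    weights = [rng.uniform(-1, 1) for _ in range(4)]  # hidden internals

    def decide(features):
        score = sum(w * f for w, f in zip(weights, features))
        return "approve" if score > 0 else "decline"  # no explanation given
    return decide

model = train_black_box()
print(model([0.2, 0.9, 0.1, 0.5]))  # an outcome, with no visible rationale
```

From the outside, a consumer (or a regulator) can only probe inputs and observe outputs; the weights that actually determined the decision are never exposed.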

IX. Damages by AI: Who would be liable?

The use of AI signifies an enhancement in the interaction between humans
and robots. Inevitably, these automated machines will sometimes cause harm
to humans. There is a lack of specific legislation for determining liability
for the harms caused by AI systems to consumers. The law does not provide
anything specific on the matter of ascertaining the burden of proof in such
matters. Generally, product liability is attributed to the producer of the
product placed on the market. However, what if the AI system is not used by

557 European Commission, ‘EU White Paper Report on AI and Law’ (19 February 2020).


the person who created it? The same is said in the European Union’s report
on AI –
“In general, EU legislation on product safety allocates the
responsibility to the producer of the product placed on the
market, including all components e.g., AI systems. But the rules
can for example become unclear if AI is added after the product
is placed on the market by a party that is not the producer. In
addition, EU Product Liability legislation provides for the
liability of producers and leaves national liability rules to
govern the liability of others in the supply chain.”558

The Product Liability Directives 559 make the producer strictly liable in the
event that a consumer incurs damages due to their product. The injured party
needs to establish a causal link between the defect in the product and the
injury caused to him, which is difficult to prove in AI transactions.

However, the liability of the producer is reduced in cases of contributory
negligence on the part of the consumers. For instance, if the consumer does
not adhere to the updates concerning safety, it would amount to contributory
negligence, reducing the liability of the producer. Understanding the data and
the algorithm used by AI requires logical and scientific proficiency, which
consumers may find exorbitantly expensive. Furthermore, accessing the data
and algorithm is not easy without the support of a potentially liable party. As
a result, consumers may find it difficult to claim liability. Also, it is difficult
to prove the fault of an autonomous AI or the fault of a person who relies on
AI.

558 ibid.
559 Council of the European Communities, ‘Product Liability Directives (1985)’ (30 July 1985).


An important issue arises when the producer has no role in the cause of
action. Nowadays, artificial intelligence is creating creative content with
minimal human assistance, but creativity and rationality are considered
human attributes by society, and the same reflects in the legislation too,
especially in matters related to copyright. The laws are silent on the question
of who will get the copyright in the creation of AI – the AI itself, or the
person who created it? This shows the loopholes involved in the laws in the
path of getting remedy for the damages caused by the AI, for which even
copyright is not certain.

X. Suggestions: Unscrambling Scrambled Eggs

The increase of AI in transactions has ethical, legal, and economic
consequences. The ethical consequences need to be considered when framing
legislation, in such a way that the economic benefits are not hampered. With
the advancement of technology, the use of AI in transactions is also
increasing, and consumer protection law must adapt to these changes.
Firstly, people need to be well informed about both faces of AI, so that the
very purpose of AI can be achieved. Every aspect of AI must be made clear,
the ambiguity regarding its capabilities resolved, and consumers made
aware. The Consumer Product Safety
Commission, in its report, states that there must be some special programme
that facilitates an essential capability to screen for AI and also finds out
whether AI contributes to hazards that could potentially hamper the safety
and security of the consumers.560

560 Consumer Product Safety Commission, ‘Artificial Intelligence and Machine Learning in Consumer Products’ (19 May 2021).

The Virtues and Vices of Digital Cloning and its Legal Implications

Properly regulated norms are required for business persons who use AI so
that the risk to consumers and society can be minimised without jeopardising
the potential advantages. The law must be framed considering the impact on
consumers, such as the protection of their privacy, equality in information
sharing, security from cyber attacks, and liability in the case of damage
caused by AI. Since AI is a complex and technical field, the enforcement of
these laws must operate at a technical level, through a body of highly
specialised technical experts. Consumers must be protected from the unfair
discrimination and differentiation caused by AI. Transparency should be
maintained about how a machine decides, with the intention of enhancing
trust in AI transactions. Consumers must know what personal information of
theirs is being shared through AI. Those who are at the forefront of the AI
revolution hold much power, which in turn carries many responsibilities.
They are required to make people understand its technicalities in the simplest
possible way. It must be ensured that the information consumers receive
through AI is reliable.

An Artificial Intelligence Act should be separately enacted by the Indian
Parliament, drawing on the proposal given by the European Commission. It
is imperative to provide a legal and unified regulatory framework for the
growing area of Artificial Intelligence so that consumer protection can be
attained. The laws should be human-centric to achieve the ultimate goal of
enhancing human well-being.

The Act itself should list certain activities for which the use of AI is
prohibited. Some AI systems are considered unacceptable, contrary to moral
values, and in violation of fundamental rights; there must be provisions
prohibiting such systems. Furthermore, some AI technologies could be used
to manipulate specific vulnerable groups of consumers, such as minors or
people with disabilities, in such a way that their decisions could harm them
or others. Hence, these AI technologies must be used with the necessary
precautions if they cannot be prohibited completely. In this technological era,
the contribution of AI to consumer welfare cannot be denied; in customer
services, for instance, AI provides immediate, on-demand service to
consumers.

XI. Concluding Remarks

Now is a time when AI is becoming an intrinsic part of human life, but as
discussed above, there are several hurdles in the path of its proper
implementation. AI brings negative effects along with its positive impact,
which is why it becomes important to implement AI carefully. The need of
the hour is to implement AI without neglecting its negative impact on
consumers. In this economically driven world, consumers are the main
stakeholders. It must be ensured that consumers are not harmed, or else the
growing trend of AI in the tech industry will reverse.


THE VIRTUES AND VICES OF DIGITAL CLONING AND ITS LEGAL IMPLICATIONS
Siva Mahadevan
(Student at VIT School of Law, Chennai)

Abstract
Digital cloning is an Artificial Intelligence (AI) technology that has
seen a prominent rise in use in recent years. The Massachusetts
Institute of Technology (MIT) dedicated a podcast episode to the new
technological advancement that allowed humans to talk with people
who had died. Thus, the promise and potential of this technology
cannot be overstated. In this chapter, the author shall critically
examine the concept of digital cloning, its meaning, and its various
types, such as Audio and Visual cloning and Mind cloning. Apart from
the fundamentals, the chapter shall address the issue in two spheres
– criminal and civil. In the criminal sphere, the author shall give
preferential focus to the tremendous rise in the use of deepfake
technologies for crimes such as theft and pornography. In the civil
sphere, the focus shall be placed on several new digital cloning
technologies used in several sectors such as education, entertainment,
and healthcare. Finally, the author shall consider the legal aspects
surrounding digital cloning. With the concept of digital cloning being
considerably new, the lack of regulation can become a severe problem
and lead to several violations. Apart from analysing existing or newly-
founded criminal laws, the author shall consider concerns such as data
privacy and copyright issues. As a result of this comprehensive study,
the author shall determine whether digital cloning can be practically
applied successfully in its current state.


I. Introduction

Can we imagine speaking to people close to us who have already passed
away?561 Can we imagine communicating with our digital identical twin?
Can we imagine a doctor who can give us medical treatment, except that this
doctor is digital? These are the kinds of seemingly impossible scenarios that
digital cloning makes a reality.

‘Digital cloning’ is a term that generally entails several types of AI
algorithms, process replication, and related data types.562 It also
encompasses digitally manipulated images, audio, videos, data, and other
elements. As noted by Jon and Rafael, a digital clone need not have a
person’s appearance, voice, and other physical characteristics; a person’s
actions, preferences, and behaviours on the internet can also constitute a
digital clone, or more precisely, a digital thought clone.563 The primary
concern regarding digital cloning is its comprehensive scope and the lack of
laws governing it. According to Robert O’Brien, the former US National
Security Advisor, the collection of information that creates a digital clone
leads to the accumulation of incredible power that can potentially be used to
exploit the hopes and fears of people.

561 Charlotte Jee, ‘Technology That Lets Us ‘Speak’ to Our Dead Relatives Has Arrived. Are We Ready?’ (MIT Technology Review, 19 October 2022) <https://www.technologyreview.com/2022/10/18/1061320/digital-clones-of-dead-people/> accessed 18 November 2022.
562 Truby Jon and Brown Rafael, ‘Human Digital Thought Clones: The Holy Grail of Artificial Intelligence for Big Data’ (2020) 30(2) Information & Communications Technology Law 140 <https://doi.org/10.1080/13600834.2020.1850174> accessed 19 November 2022.
563 ibid.


It is noteworthy that we, the common citizens, consistently share personal
data with several providers in exchange for convenience and free services.
Author Shoshana Zuboff terms this phenomenon “surveillance capitalism.”564
Digital cloning thus has a significant negative potential for exploitation.
While this fact alone does not discredit the use of digital cloning, it certainly
raises concerns about the technology. Therefore, digital cloning warrants
further study to understand and address the issues it raises.

II. Types of Digital Cloning

There are four types of digital cloning currently recognised, namely:


 Audio and visual (AV) cloning;
 Memory and personality (mind cloning);
 Consumer behaviour cloning;
 Digital thought cloning.

i. Audio and visual (AV) cloning

Audio and visual cloning refers to the direct manipulation of audio and
visuals. Such cloning can help create fake images, videos, audio, and
avatars.565 AV cloning forms a crucial part of the unfortunate rise of
“Deepfakes”, which the author shall discuss in detail later in the chapter.
When it comes to audio cloning, Deepfakes do not even require an entire
audio file: a sound clip of 3.7 seconds is enough for Baidu’s system to clone human

564 Shoshana Zuboff, ‘You Are Now Remotely Controlled’ (The New York Times, 24 January 2020) <https://www.nytimes.com/2020/01/24/opinion/sunday/surveillance-capitalism.html> accessed 19 November 2022.
565 ibid.


voices.566 In 2019, a UK-based energy company’s CEO was scammed out of
$243,000 using an AI Deepfake voice.567 The CEO thought the caller was his
boss, as the voice carried a German accent and had his ‘melody.’

When we consider visual cloning, the Mark Zuckerberg Deepfake serves
as a perfect example.568 In the fake video, Zuckerberg states, “Imagine this
for a second: One man, with total control of billions of people’s stolen data,
all their secrets, their lives, their futures.” Mark Zuckerberg had previously
shown a hypocritical attitude towards Deepfake videos of other people. In
May 2019, a doctored video of House Speaker Nancy Pelosi made her appear
to slur her speech.569 While YouTube removed the video, Facebook refused
to do so. Once Zuckerberg got his own Deepfake video, he changed
Facebook’s policy during the 2020 elections.570 The author believes that the
lack of awareness of, and consequences for, AV cloning is creating such
issues. Zuckerberg’s attitude shift came too late, as it would not be possible
to reverse the damage.

566 Sercan O Arik, Jitong Chen, Kainan Peng, Wei Ping and Yanqi Zhou, ‘Neural Voice Cloning with a Few Samples’ (32nd Conference on Neural Information Processing Systems, October 2018).
567 Jesse Damiani, ‘A Voice Deepfake Was Used to Scam a CEO out of $243,000’ (Forbes, 3 September 2019) <https://www.forbes.com/sites/jessedamiani/2019/09/03/a-voice-deepfake-was-used-to-scam-a-ceo-out-of-243000/?sh=76a2dfd22416> accessed 18 November 2022.
568 Rachel Metz and Donie O’Sullivan, ‘A Deepfake Video of Mark Zuckerberg Presents a New Challenge for Facebook’ (CNN, 12 June 2019) <https://edition.cnn.com/2019/06/11/tech/zuckerberg-deepfake/index.html> accessed 20 November 2022.
569 Donie O’Sullivan, ‘Doctored Videos Shared to Make Pelosi Sound Drunk Viewed Millions of Times on Social Media’ (CNN, 24 May 2019) <https://edition.cnn.com/2019/05/23/politics/doctored-video-pelosi/index.html> accessed 20 November 2022.
570 Hadas Gold, ‘Facebook Tries to Curb Deepfake Videos as 2020 Election Heats up’ (CNN, 7 January 2020) <https://edition.cnn.com/2020/01/07/tech/facebook-deepfake-video-policy/index.html> accessed 18 November 2022.


ii. Mind cloning

‘Mind cloning’ or memory and personality cloning refers to digitally making
a copy of a person’s mind.571 The technique is prominent among companies
that collect digital data and the behavioural and decision-making patterns of
individuals. These companies create a digital clone through the concept of
‘mindfiles.’ Mindfiles comprise the digitalised version of an individual’s
recollections, feelings, thoughts, attitudes, preferences, and values, which
software known as ‘mindware’ processes.572 Mind cloning is the most
problematic stream of digital cloning. Firstly, the current research does not
aim to help people with mental health issues; instead, it primarily focuses on
luxury applications that contribute little towards social benefit on a large
scale.

We can first consider the Terasem Movement. The Terasem Movement is a
group of three organisations focusing on creating “a conscious analogue of a
person.” Their objective further extends to downloading it into either a
biological or nanotechnological body. In other words, the organisation is
trying to achieve digital immortality. Martine Rothblatt, the founder of
Terasem, created a mindclone of her deceased husband and embedded it into
a robotic replica.573 Due to this success, the Terasem Movement has started
to create thousands of clones for the 56,000 people who willingly shared

571 Jon and Rafael (n 562) 143.
572 ibid.
573 Natalie O’Neill, ‘Companies Want to Replicate Your Dead Loved Ones with Robot Clones’ (VICE, 16 March 2016) <https://www.vice.com/en/article/pgkgby/companies-want-to-replicate-your-dead-loved-ones-with-robot-clones> accessed 19 November 2022.


information about their family members. The movement has stated that it
plans to start commercialisation within 10-20 years.574

 A critique of HereAfter AI

Besides Terasem, mind cloning to resurrect dead family members is also the
business of a Los Angeles-based technology company, HereAfter AI. In her
article, Charlotte Jee recounts her experience using the services HereAfter AI
provides. She notes that although her parents are alive and well, she was able
to fall under the illusion that her digital parents were real while talking with
their digital avatars.575 We can therefore see that the company’s technology
is quite effective.

The company states its objective as “preserving a person’s stories and voice
forever.” Photo albums and life story books are non-interactive and thus, this
technology intends to make memories interactive. The author’s concerns
regarding HereAfter AI revolve around its privacy policy. The question of
whether user data would be sold to third parties is posted on the company’s
FAQ page; yet instead of discussing compliance with US privacy norms such
as the California Consumer Privacy Act (CCPA) of 2018 or GDPR
principles, the company simply answers, “Nope, never”.
contain any privacy policy that users can go through before opting for their
services. Despite such grave data privacy concerns, the lack of awareness
amongst the common public is attracting many people to invest in the
products. As Charlotte Jee noted, if an individual’s loved ones could be

574 ibid.
575 Jee (n 561).


connected with them again after their death, would they think it was wrong
to try?576

When we examine the categories of questions the company asks of those
using its product, such as ‘Advice,’ ‘Ancestry,’ ‘Celebrations,’ ‘Childhood,’
‘Children,’ ‘College,’ ‘Dating,’ ‘Feelings,’ ‘Friends,’ and more, we can see
how each of these categories can contain sensitive information about a
person. Users must answer at least six questions of a highly personal nature.
When data privacy problems persist even where a privacy policy exists, the
lack of one creates a severe problem for the users of this product. Thus, we
must ask ourselves how this company can operate without a proper privacy
policy page on its website.

While HereAfter AI’s lack of a transparent privacy policy may raise an
array of data violation concerns, the same is not true of another company
that works on similar AI technologies. StoryFile, a five-year-old start-up,
does work similar to HereAfter AI’s, recording videos of its customers
instead of audio alone. StoryFile has an explicit privacy policy that all users
can access before purchasing its services. The policy addresses the data the
company collects and points out that users must check the separate privacy
policies of other company websites that work with StoryFile. It addresses the
collection, use, sharing, and transfer of information, and allows data subjects
to access their data whenever they deem fit. While this does not solve all the
potential privacy issues that may arise in the future, the company does
provide a basic format

576 ibid.


in which users can address any future issues even before purchasing a
service.

iii. Consumer behaviour cloning

In consumer behaviour cloning, the primary focus is on clustering customers
and personalising their experience. This method of cloning is a critical
process in several sectors. In one survey, 41% of US consumers said they
would abandon a brand that lacked personalisation.577 As a result, not only
do people want personalisation, but businesses benefit from it as well.

There are three types of user profiles within consumer behaviour cloning,
namely:

a. Explicit User Profiles;
b. Implicit User Profiles;
c. Hybrid User Profiles.578

Explicit User Profiles are created using surveys and ratings. This profile type
is commonly seen on websites like Amazon and Flipkart. Systems create
Implicit User Profiles through digital footprints. This method is slightly
problematic as it could lead to privacy and data protection violations. Hybrid
User Profiles are a simple combination of both Explicit and Implicit User
Profiles.

Along with the types, there are three methods to create user profiles. The
Content-based method refers to creating user profiles using past behaviour in

577 Unfold Labs, ‘AI Driven Personalization’ (Medium, 11 June 2019) <https://unfoldlabs.medium.com/ai-driven-personalization-6dc9c47c1418> accessed 18 November 2022.
578 ibid.


cyberspace; the Collaborative method groups people into categories based on
age, gender, sexuality, social class, and other attributes; finally, the Hybrid
method attempts to combine both while parrying the weaknesses each may
possess.579 The author believes that consumer behaviour cloning is the lesser
evil compared with the two types discussed earlier, although it raises many
privacy concerns in the absence of appropriate legislative regulation.
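The Content-based and Collaborative methods described above can be sketched with a toy example. All of the items, tags, users, and helper names below are invented purely for illustration and assume nothing about any real recommender system:

```python
# Toy sketch of consumer profiling: content-based vs collaborative methods.
# Items, tags, and users are invented for illustration only.

ITEM_TAGS = {
    "news_app":     {"news", "politics"},
    "sports_app":   {"sports", "live"},
    "politics_pod": {"politics", "audio"},
    "music_app":    {"audio", "music"},
}

LIKES = {
    "alice": {"news_app"},
    "bob":   {"news_app", "politics_pod"},
    "carol": {"sports_app"},
}

def content_based(user):
    """Recommend the unseen item whose tags best overlap the user's profile."""
    profile = set().union(*(ITEM_TAGS[i] for i in LIKES[user]))
    candidates = [i for i in ITEM_TAGS if i not in LIKES[user]]
    return max(candidates, key=lambda i: len(ITEM_TAGS[i] & profile))

def collaborative(user):
    """Recommend items liked by the most similar other user (Jaccard similarity)."""
    def jaccard(a, b):
        return len(a & b) / len(a | b)
    others = [u for u in LIKES if u != user]
    nearest = max(others, key=lambda u: jaccard(LIKES[u], LIKES[user]))
    return LIKES[nearest] - LIKES[user]

print(content_based("alice"))   # alice's tag profile {"news", "politics"} -> "politics_pod"
print(collaborative("alice"))   # bob is the nearest neighbour -> {"politics_pod"}
```

A Hybrid method, as the text notes, would combine the two, for instance by averaging the content-overlap score with the neighbour-vote score, so that each compensates for the other’s blind spots.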

iv. Digital thought cloning

The concept of digital thought cloning is extremely novel and has rarely
been addressed in legal discussion. Professors Jon Truby and Rafael Brown
were the first to propose this idea.580 Their idea was also discussed in the
Netflix documentary ‘The Social Dilemma’.581 The professors state that a
digital thought clone comprises user activities from apps, social media
accounts, GPS tracking, and other such sources. The purpose of creating a
digital thought clone is to further increase the accuracy of predictions about
a consumer’s choices, thereby allowing a more detailed prediction regarding
their preferences and subsequently manipulating those preferences.582

The information collected to attain this goal could be the time taken to
compare the price of a product, political views read over a period of time,
location history from an individual’s phone, and other such sensitive data.
Not only does digital thought cloning allow for accumulation of general data,
but it can also collect data such as likes, pages followed, and comments.583

579 ibid.
580 Jon and Rafael (n 562).
581 ‘Digital Thought Clones Manipulate Real-Time Online Behavior’ (Help Net Security, 7 December 2020) <https://www.helpnetsecurity.com/2020/12/07/digital-thought-clones/> accessed 6 January 2023.
582 Jon and Rafael (n 562) 145.
583 ibid.


The professors further share a table showing how easily one can predict
users’ sensitive data from their likes on Facebook. Such likes can reveal
whether users are single or in a relationship, became parents at 21, smoke
cigarettes, drink alcohol, or use drugs, as well as their race, political
affiliation, and sexual preferences.584

The critical problem regarding digital thought cloning is the scarcity of
further knowledge about the concept. The professors note that the concept is
novel and the term is one they coined.
thought cloning is a theoretical concept that does not have a practical
technology in use yet.585 Thus, its true practical implications cannot be
entirely ascertained at present. However, from the professors’ study, it is
clear that if the theoretical notion ever comes to fruition, the lack of norms
governing the concept will lead to massive exploitation. The Facebook-
Cambridge Analytica scandal stands as proof of what such technology can
do if left unmonitored. Thus, pre-emptive and prophylactic action is
imperative before any digital thought cloning technology that may arise in
the future is put to use.

III. Deepfake Technology – A Dangerous Vice for Digital Cloning

Deepfakes are a form of synthetic media in which software replaces an
existing image with a fake one, using machine learning and artificial
intelligence to make the output appear authentic and difficult to detect.586
According to a Congressional Research Service (CRS) report, deepfakes are
created using

584 ibid 146.
585 ibid 150.
586 J Kietzmann et al, ‘Deepfakes: Trick or Treat?’ (2020) 63(2) Business Horizons 135 <https://doi.org/10.1016/j.bushor.2019.11.006> accessed 7 January 2023.


a technique called ‘Machine Learning’ (ML). Within machine learning, there
is a specific process known as ‘Generative Adversarial Networks’ (GANs).587
A GAN pits two networks against each other: a generator and a
discriminator. The generator creates fake data such as photos, audio
recordings, and video footage, which the discriminator then examines for
discrepancies. The cycle continues until the discriminator can no longer
differentiate between the real and the fake data.588 From this explanation,
we can see how deepfake technology can be both beneficial and concerning.
However, it is unfortunate that the concerns significantly overlap with the
benefits of deepfake technology.

Deepfake technology has led to a significant rise in newer forms of crime.
One of them is deepfake pornography, a dangerous and malicious weapon
that exclusively targets women. It is one of the most severe wrongs
constantly committed against women, yet the act itself is not considered a
crime in several jurisdictions. Creating deepfake pornography involves the
use of artificial intelligence to replace one person’s face with another’s in a
highly realistic manner.589 It is notable that when a man’s image is presented
to a deepfake software known as “DeepNude”, it converts male genitalia into
female genitalia.590 Thus, the entire purpose of deepfake pornography is to
attack and demean women. A more significant concern is the ease with
which deepfake pornography is created. Nearly five years ago, a photo of
actress Gal Gadot

587 Kelly M Sayler and Laurie A Harris, ‘Deep Fakes and National Security’ CRS Report IF11333.
588 ibid.
589 Anne Pechenik Gieseke, ‘“The New Weapon of Choice”: Law’s Current Inability to Properly Address Deepfake Pornography’ (2020) 73(5) Vanderbilt Law Review 1479 <https://scholarship.law.vanderbilt.edu/vlr/vol73/iss5/4> accessed 7 January 2023.
590 ibid.


was used to create a deepfake pornography video.591 The chapter previously
noted the novelty of digital thought cloning and its lack of legal
jurisprudence. It is evident how swiftly technology grows when compared
with the laws that govern it.

One of the critical concerns when discussing deepfake technology is the
inability to take legal action in several jurisdictions. Consider, for example,
deepfake pornography and non-consensual pornography. In the United
States, nearly every state (apart from South Carolina and Massachusetts) has
laws regulating non-consensual pornographic depiction. For deepfake
pornography, however, only four states have proposed specific laws
punishing it, namely California,592 Georgia,593 Virginia,594 and New York.595

In California, Section 1708.86 of the California Civil Code discusses ‘altered
depiction’. As stated in Section 1708.86(a)(1), the ‘altered depiction’ refers
to a performance that the depicted individual performed; however, it was
later changed without prior consent. Such a depiction can be categorised as
‘despicable conduct’ under Section 1708.86(a)(5). Section 1708.86(a)(6)
discusses ‘digitisation’, where the nude body parts of one individual are
depicted as the ones of another. Sub-section (b) points out the computer-
generated aspect. This particular provision is interpreted as deepfake
pornography. Section 1708.86(b) allows an affected person to raise a cause

591 Samantha Cole, ‘AI-Assisted Fake Porn Is Here and We’re All Fucked’ (VICE, 11 December 2017) <https://www.vice.com/en/article/gydydm/gal-gadot-fake-ai-porn> accessed 10 January 2023.
592 California Civil Code, s 1708.86.
593 Georgia Code, s 16-11-90.
594 Virginia Code, s 18.2-386.2.
595 New York Civil Rights Law, s 52-C.


of action against the injuring party, thereby granting some relief to an
affected party. Section 1708.86 also provides for damages of at least $1,000
but not more than $30,000. The author would note that while the provision
does not address preventive action against deepfake pornography or even
criminal liability, it at least establishes civil liability that can make malicious
individuals hesitate before committing this grave act.

Similar to California, the State of New York also has a civil law concerning
deepfake pornography. The New York Civil Rights Law, in Section 52-C,
talks about the private right of action for unlawful dissemination or
publication of a sexually explicit depiction of an individual. Like the
California Civil Code, Section 52-C addresses altered depiction and
digitisation and imposes civil liability in cases of malicious intent. The
provision additionally prescribes a limitation period of either three years
from the date of publication of the sexually explicit material or one year
from the date the affected party discovered it.

However, the States of Virginia and Georgia go a step further and impose
criminal liability upon offenders. The Virginia Code for Crimes and
Offenses, in its provision on the unlawful dissemination or sale of images of
another, states that the offence of selling videos or still images created
through any means with malicious intent attracts a Class 1 misdemeanour
charge. Due to its broad wording, deepfake pornography also falls within its
scope. Similarly, the Georgia Code for Crimes and Offenses, in its
prohibition on nude or sexually explicit electronic transmissions, states that a
person who knowingly transmits sexually explicit content of another person
without their consent, through electronic or any other means, shall be guilty
of a misdemeanour or higher charge and shall be punished with not less than
five years of prison, a fine of a maximum of $100,000.00, or both. Thus,
currently, only Virginia and Georgia have criminal liability laws for
deepfake pornography, even when viewed at an international level.

IV. Issues and Concerns

As previously established, several data protection and privacy violations are
possible with digital cloning. Whether AI falls within GDPR principles and
structures is still debatable. Consider the HereAfter AI example: a data
subject’s right to access data is questionable. This right is undeniable under
Article 15 of the GDPR; however, with the company based in the United
States and without a comprehensive international privacy policy, the rights
of international users become questionable. Even US privacy law norms,
such as the California Consumer Privacy Act, are not completely clear about
the status of digital cloning. In 2019, a bill titled the ‘Deepfakes
Accountability Act’ was introduced in the US Congress, but it is still
pending and has not received formal authorisation. Another critical concern
is that there is no informed consent with digital cloning. As we saw with
HereAfter AI and Terasem, people rush into using these services without
understanding the consequences, simply due to their understandable
desperation. Thus, companies can exploit their weaknesses for personal
profit in a grey area of the law.


Discrimination is another significant issue with digital cloning. The AI
rapper FN Meka generated much positive buzz but was recently found to be
discriminatory towards the African-American community.596 Copyright
issues regarding ownership of a product as between the AI and its
programmer are also a critical concern. Finally, cultural and religious
sentiments against digital cloning may also hinder the technology from
becoming universal in its application.

V. Conclusion

In the current scenario, the author believes that digital cloning cannot be
practically applied in its present form and stage. While there are certain
benefits for society, those benefits cannot currently outweigh the cons that
come with the technology. The number of risks and possible violations
without any legitimate accountability makes it untrustworthy. Perhaps if the
Deepfakes Accountability Act gets passed in the US, we might get a
foundation for the legal jurisprudence for this technology. When we consider
the scenario in a country like India, where the concept of digital cloning or
deepfake has not even been mentioned in the current legislative proposals,
the dangerous possibilities for exploitation are extremely alarming. Although
certain companies, such as Soul Machines, use this technology with the aim
of improving education, medicine, and even the entertainment industry, the
cons of digital cloning without proper laws outweigh the pros. Until such
laws arrive, it would be better for the technology to remain in a research and
development stage rather than enter commercial use.

596 M Conteh, ‘So What the Hell Was FN Meka, Anyway?’ (Rolling Stone, 31 August 2022) <https://www.rollingstone.com/music/music-features/fn-meka-controversy-ai-1234585293/> accessed 20 November 2022.

