The Use of Artificial Intelligence in Corporate Decision-Making at Board Level: A Preliminary Legal Analysis WP 2023-01
Floris Mertens
Abstract
Widely praised applications of artificial intelligence (AI) such as ChatGPT are merely a first demonstration of AI's potential in the business world. AI is on the verge of assuming a common role in the management of companies, given its steady emergence as a support tool for the administrative and judgement work of directors and managers. While only a handful of companies worldwide have attempted to appoint a robo-director, the general use of AI in corporate governance has proven to rationalize board decision-making, challenge groupthink and strengthen the independence of directors. By contrast, company law frameworks around the world remain rooted in exclusively human decision-making and disregard the role of technology in corporate governance, resulting in inefficient regulatory strategies with regard to AI systems bestowed with governance powers. As a result, uncertainty exists about the legal permissibility and the legal consequences of implementing AI in the corporate realm, which could discourage corporations from adopting the technology, even though it is likely to enhance the business judgement of directors.
Therefore, this paper attempts to highlight the growing importance of AI in corporate governance by classifying its gradual levels of autonomy vis-à-vis the board of directors. It then proceeds to a preliminary legal analysis of the potential roles of AI in the management of memberless entities, leaderless entities and traditional corporations. The paper focuses primarily on fundamental questions of corporate law pertaining to the delegation of decision rights to AI, the full replacement of human directors by AI, the required human supervision of AI and the attribution of liability for algorithmic failure.
© Financial Law Institute, Ghent University, 2023
The use of artificial intelligence in corporate decision-making at board
level:
A preliminary legal analysis
Floris Mertens1
I. INTRODUCTION
Emergence of AI. In times when conversational chatbots such as ChatGPT are hailed
as game-changers for the era of artificial intelligence (hereafter: AI),2 many maintain that AI is
bound to be the central engine of a fourth industrial revolution, which will have a considerable
impact on the lives of individuals and organizations in our society.3 As powerful new foundation models4 make their entrance into the real world, palpable excitement about the corporate use of AI is emerging: it is no longer inconceivable that AI will become indispensable to many aspects of the corporate realm.5 In fact, AI is on the verge of playing a crucial
role in the management of companies. There is a growing recognition in the business world
that AI systems can usefully assist human directors in their decision-making at management
level, while only a handful of corporations worldwide have already attempted to grant AI true
decision-making powers akin to those of human directors. However, the use of so-called “(ar-
tificial) governance intelligence”6, even as a mere support tool to directors, generates unprec-
edented issues of corporate law, which call for a thorough legal analysis.
Use cases of AI in the corporate realm. Governance bodies such as the board of directors increasingly deploy AI to assist decision-making on corporate strategy, personnel selection, procurement, sales, marketing and even movie greenlighting.7 In fact, assisting
1 Doctoral researcher at the Financial Law Institute, Ghent University; Fellow of the Flemish Research Foundation (FWO).
The author would like to express his sincere gratitude to Hans De Wulf for his valuable comments and contributions.
Thank you to Julie Goetghebuer, Sergio Gramitto Ricci, Julie Kerckaert, Sinan Vanden Eynde and Francis Wyffels for the
helpful remarks and suggestions. My appreciation also goes to John Armour, Florian Möslein and Eva Lievens for the
valuable remarks on the general research project.
2 OpenAI’s dialogue-based AI chatbot ChatGPT has gained a great deal of attention, as it is capable of understanding
natural human language and generating impressively detailed human-like written text; see S. LOCK, “What is AI chatbot
phenomenon ChatGPT and could it replace humans?”, The Guardian 2022, www.theguardian.com/technol-
ogy/2022/dec/05/what-is-ai-chatbot-phenomenon-chatgpt-and-could-it-replace-humans; S. MURPHY KELLY, “This AI
chatbot is dominating social media with its frighteningly good essays”, CNN Business 2022, https://edi-
tion.cnn.com/2022/12/05/tech/chatgpt-trnd/index.html.
3 R. GIRASA and G.J. SCALABRINI, Regulation of Innovative Technologies – Blockchain, Artificial Intelligence and Quantum
4 Foundation models are models trained on broad data that can be adapted (e.g. finetuned) to various downstream cognitive tasks; R. BOMMASANI et al., “On the Opportunities and Risks of Foun-
dation Models”, 2021, https://arxiv.org/abs/2108.07258, 3.
5 X, “Artificial intelligence is permeating business at last”, The Economist 2022, www.economist.com/busi-
ness/2022/12/06/artificial-intelligence-is-permeating-business-at-last.
6 The term “artificial governance intelligence” or “corporate AI” is already used in the literature. See inter alia M. HILB,
“Toward artificial governance? The role of artificial intelligence in shaping the future of corporate governance”, Journal of
Management and Governance 2020, vol. 24 (4), 851-870; E. HICKMAN and M. PETRIN, “Trustworthy AI and Corporate
Governance: The EU’s Ethics Guidelines for Trustworthy Artificial Intelligence from a Company Law Perspective”, EBOR
2021, vol. 22, 593-625; M.A. TOKMAKOV, “Artificial Intelligence in Corporate Governance” in S.I. ASHMARINA and
V.V. MANTULENKO (eds.), Digital Economy and the New Labor Market: Jobs, Competences and Innovative HR Technologies,
Cham, Springer, 2021, 667-674.
7 For example, Warner Bros. deploys an algorithm from Cinelytic to predict the box office results of movie projects before
being greenlit; CINELYTIC, “Data Driver Cinelytic Engages Warner Bros. Pictures International to Utilize Their Revolu-
tionary AI-Driven Content and Talent Valuation System”, Business Wire 2020, www.business-
wire.com/news/home/20200108005856/en/Data-Driver-Cinelytic-Engages-Warner-Bros.-Pictures-International-to-Uti-
lize-Their-Revolutionary-AI-Driven-Content-and-Talent-Valuation-System.
algorithms are already used in the management models proposed by McKinsey, Bain and BCG
as strategic advisors for investments.8 Relatedly, one of the most popular applications of gov-
ernance intelligence today is its support for the discovery and due diligence process of mergers
and acquisitions.9 Both processes are essential steps towards the eventual board decision of the acquirer and involve a highly coordinated effort among experts such as company personnel, accountants, lawyers and investment bankers.10 When these processes are assisted by AI, there
is an increased likelihood that the board will be able to negotiate an accurate price and appro-
priately tailored deal structure.11 In addition, AI systems are deployed by corporate directors
to profile investors, audit annual reports,12 review the risk of financial instruments and deter-
mine optimal market supply and demand.13
Back in 2014, the Hong Kong-based venture capital group Deep Knowledge Ventures took this
further by allegedly appointing an algorithm named “VITAL” to its board of directors.14 This
AI system was purportedly given the right to vote on whether or not the firm should invest in a specific company, just like the other – human – directors of the corporation.15 Consequently, VITAL has been widely acknowledged as the world’s first robo-director.16 After successes stemming from VITAL’s decisions (such as investments in the biotech start-ups Insilico Medicine and Pathway Pharmaceuticals17), other companies also de facto implemented AI sys-
tems as board members (such as Tietoevry and Salesforce).18 Yet, most legal systems do not
allow the appointment of a robo-director (see no. 20). While only a handful of companies have
chosen this untrodden path of robo-directors, many have already assigned a supportive role to AI in their corporate decision-making processes.19
Prospects. One can therefore ascertain that AI is steadily emerging in the boardrooms of innovative companies. This is supported by a recent EY study commissioned by the European Commission, which found that 13% of the responding EU companies already use governance
8 M. SCHRAGE, “4 Models for Using AI to Make Decisions”, Harvard Business Review 2017, https://hbr.org/2017/01/4-
models-for-using-ai-to-make-decisions.
9 M.R. SIEBECKER, “Making Corporations More Humane Through Artificial Intelligence”, The Journal of Corporation Law
16.
13 J.B.S. HIJINK, “Robots in de boardroom”, Ondernemingsrecht 2019, 11.
14 R. WILE, "A Venture Capital Firm Just Named An Algorithm To Its Board Of Directors — Here's What It Actually Does",
tum Light Capital and NN Investment Partners strongly rely on their AI platforms for core financial decision-making.
intelligence, and an additional 26% plan to do so in the future.20 In respect of M&A, a 2022
study suggested that over 69% of its respondents (executives from large US public corpora-
tions and private equity funds) are utilizing AI tools for the due diligence process.21 On top of
that, in a survey report, the World Economic Forum claimed that by 2026 corporate governance will have undergone a massive robotization process, with the result that human directors sharing their decision-making powers with artificial directors will become the new normal.22 Even though this claim was made back in 2015, the corporate sector of today
shows an increasingly notable interest in AI.23 Growing computational power, breakthroughs in AI technology and advancing digitalisation will therefore inevitably lead to more established AI support of corporate directors, if not their full replacement by autonomous systems.24
Corporate law. The rise of AI in corporate governance stands in contrast to static company law, which has not kept pace with governance-relevant advances on the technological front. VITAL is a good illustration: while it has been widely acknowledged as the world’s first robo-director,25 Hong Kong corporate law did not recognize the AI system as such.26 To circumvent the law, VITAL was treated as a member of the board with “observer status”.27 On a more general note, corporate law is not adapted to governance
intelligence, since it is rooted in human decision-making. Therefore, corporate law will have to
cope with novel legal questions, once the use of AI as a support tool or replacement of human
directors becomes more common.
It is the purpose of this paper to create awareness of the legal uncertainty arising from gov-
ernance intelligence, and to signal its legal issues from a company law perspective. To attain
this, Part II articulates the reasons for introducing AI in the boardroom, along with a taxonomy
20 ERNST & YOUNG, “Study on the relevance and impact of artificial intelligence for company law and corporate gov-
ernance – Final report”, 2021, https://op.europa.eu/en/publication-detail/-/publication/13e6a212-6181-11ec-9c6c-
01aa75ed71a1/language-en, 13-14.
21 DELOITTE, “2022 M&A Trends Survey: The future of M&A – Dealmaking trends to help you pivot on M&A’s fast-
discarded for this paper. It is a fact that AI is already being used in corporate boardrooms today, which prompts an inquiry
into its legal implications regardless of its desirability.
25 R. WILE, "A Venture Capital Firm Just Named An Algorithm To Its Board Of Directors — Here's What It Actually Does",
ness/Artificial-intelligence-gets-a-seat-in-the-boardroom.
of governance intelligence on the basis of its autonomy level, which could serve as a bench-
mark for future differentiated rules. Then, Part III maps the current state of the legal art, iden-
tifies corporate law issues arising from AI under current legal frameworks and makes sugges-
tions on where the law should be headed to tackle or at least alleviate these legal issues.
Another useful distinction can be made on the basis of the system’s goals. Most AI systems
existing today are narrow, as they model intelligent behaviour for narrowly defined specific
28 S. LUCCI and D. KOPEC, Artificial Intelligence in the 21st Century: A Living Introduction, Dulles, Mercury Learning and
Information, 2016, 4; S. SAMOILI, M. LÓPEZ COBO, E. GÓMEZ, G. DE PRATO, F. MARTÍNEZ-PLUMED, and B.
DELIPETREV, “AI Watch. Defining Artificial Intelligence. Towards an operational definition and taxonomy of artificial
intelligence”, 2020, https://publications.jrc.ec.europa.eu/repository/handle/JRC118163, 7.
29 P. WANG, “On the Working Definition of Intelligence”, 1995, www.researchgate.net/publica-
tion/2339604_On_the_Working_Definition_of_Intelligence, 3.
30 L.S. GOTTFREDSON, “Mainstream Science on Intelligence: An Editorial With 52 Signatories, History, and Bibliog-
ciplines. Definition developed for the purpose of the AI HLEG’s deliverables”, 2019, https://digital-strategy.ec.eu-
ropa.eu/en/library/definition-artificial-intelligence-main-capabilities-and-scientific-disciplines, 6.
33 E. ALPAYDIN, Introduction to Machine Learning, Cambridge, Massachusetts Institute of Technology, 2014, 1-4.
34 See inter alia T.M. MITCHELL, Machine learning, New York, McGraw-Hill, 1997, 414 p; K.P. MURPHY, Machine Learning:
A Probabilistic Perspective, Cambridge – London, MIT Press, 2012, 3; D. LEHR and P. OHM, “Playing with the Data: What
Legal Scholars Should Learn About Machine Learning”, UC Davis Law Review 2017, vol. 53, 673.
35 S. RUSSELL and P. NORVIG, Artificial Intelligence – A Modern Approach, Harlow, Pearson, 2022, 671.
tasks and fail to operate outside of their programmed domain of use cases. In spite of being
designed to fulfil limited tasks, even narrow systems can display autonomous capabilities by
operating under limited human supervision within the boundaries of their application field.36
A popular narrow AI application is ChatGPT; in the business world, too, the majority of AI systems are narrow. AI systems such as autonomous vehicles are considered broad, as they are designed to handle a variety of tasks.37 Finally, some scholars contend that it will not be long before AI displays human-like intelligence with an unlimited operational domain, thereby achieving general intelligence.38
Critical evaluation of the technological state of the art. In reality, artificial general
intelligence might still be a decade or even a century away.39 In addition, critics highlight that
AI has been handicapped by an incomplete understanding of “intelligence”, as it is only able
to detect hidden correlations in large datasets and does not comprehend causal relationships.40
Furthermore, the use of foundation models and artificial neural networks for “deep learning” (a subset of machine learning) may pose transparency challenges, as these models may embody “black box” characteristics that cause the underlying reasons and logic of their decisions to be hard to comprehend, even for the developers of the system.41 The opacity issue is addressed by the explainable AI (“XAI”) movement, which strives to make the decisions and predictions of AI understandable in the eyes of human beings.42 Finally, while AI
36 G. LUSARDI and A. ANGILLETTA, “The interplay between the new Machinery Regulation and Artificial Intelligence,
IoT, cybersecurity and the human-machine relationship”, 2022, www.technologyslegaledge.com/2022/04/the-interplay-
between-the-new-machinery-regulation-and-artificial-intelligence-iot-cybersecurity-and-the-human-machine-relation-
ship/#page=1.
37 Both narrow and broad AI belong to the category of weak AI, i.e. intelligent systems with limited goals. Computer
science has not managed to develop an AI system of which the intelligence goes beyond the “weak” attribute yet; H.
SHEVLIN, K. VOLD, M. CROSBY and M. HALINA, “The limits of machine intelligence – Despite progress in machine
intelligence, artificial general intelligence is still a major challenge”, EMBO reports 2019, vol. 20 (49177), 1-5.
38 For an overview of current standpoints in the literature, see inter alia K. GRACE, J. SALVATIER, A. DAFOE, B. ZHANG,
and O. EVANS, “When Will AI Exceed Human Performance? Evidence from AI Experts”, Journal of Artificial Intelligence
Research 2018, vol. 62, 729-754; J. VINCENT, “This is when AI’s top researchers think artificial general intelligence will be
achieved”, The Verge 2018, www.theverge.com/2018/11/27/18114362/ai-artificial-general-intelligence-when-achieved-
martin-ford-book; J. NAVEEN, “How Far Are We from Achieving Artificial General Intelligence?”, Forbes 2019,
www.forbes.com/sites/cognitiveworld/2019/06/10/how-far-are-we-from-achieving-artificial-general-intelli-
gence/#6aa0eda26dc4; H. HIRSCH‑KREINSEN, “Artificial intelligence: a “promising technology””, AI & SOCIETY 2023,
https://doi.org/10.1007/s00146-023-01629-w.
39 This claim is made by Geoffrey HINTON with regard to AI passing the Turing test in M. FORD, Architects of Intelligence
– the truth about AI from the people building it, Birmingham, Packt Publishing, 2018, 89. As of now, computer engineering is
far away from the dystopian scenario that philosopher Nick BOSTROM envisaged, in which general AI (“superintelli-
gence”) assigned with producing paperclips would sacrifice all of the planet’s resources to achieve its final goal and ulti-
mately convert the entire universe into paperclips; N. BOSTROM, Superintelligence: Paths, dangers, strategies, Oxford, Ox-
ford University Press, 2014, 122-125.
40 J. PEARL and D. MACKENZIE, The Book of Why: The New Science of Cause and Effect, New York, Basic Books, 2018, 418 p.
41 R.T. KREUTZER and M. SIRRENBERG, Understanding Artificial Intelligence – Fundamentals, Use Cases and Methods for a
Journal of Law & Technology 2018, vol. 31, 889-938; A. PREECE, “Asking ‘Why’ in AI: Explainability of intelligent systems
– perspectives and challenges”, Intelligent Systems in Accounting, Finance and Management 2018, vol. 25, 63–72; P. HACKER,
R. KRESTEL, S. GRUNDMANN and F. NAUMANN, “Explainable AI under contract and tort law: legal incentives and
technical challenges”, Artificial Intelligence and Law 2020, vol. 28, 415-439; S. LU, “Algorithmic Opacity, Private Accounta-
bility, and Corporate Social Disclosure in the Age of Artificial Intelligence”, Vanderbilt Journal of Entertainment & Technology
Law 2020, vol. 23, 99-159; A. BIBAL, M. LOGNOUL, A. DE STREEL and B. FRÉNAY, “Legal requirements on explainability
in machine learning”, Artificial Intelligence and Law 2021, vol. 29, 149-169; G. VILONE and L. LONGO, “Notions of explain-
ability and evaluation approaches for explainable artificial intelligence”, Information Fusion 2021, vol. 76, 89-106; G. DEL
GAMBA, “Machine Learning Decision-Making: When Algorithms Can Make Decisions According to the GDPR” in G.
BORGES and C. SORGE (eds.), Law and Technology in a Global Digital Society – Autonomous Systems, Big Data, IT Security
and Legal Tech, Cham, Springer, 2022, 75-87.
can help to reduce direct human biases, there is a risk that its models are trained and tested on biased data, sometimes originating from an external developer.
b. Reasons for the use of artificial governance intelligence
Growing popularity of AI in the corporate realm. In spite of the aforementioned criticism of the current capabilities of AI, this study observes that a growing number of companies is adopting AI for corporate decision-making. The concept of corporate decision-making refers to
the decision-making processes relating to the core functions of the board of directors and top
management (i.e. monitoring, strategy formulation and daily management). Of course, com-
puter systems have been deployed in support of corporate decision-making for decades now,
ranging from early decision support systems to executive support systems.43 But today, the
techniques and functionalities of AI enable useful and advanced applications for corporate
governance, such as retrieving relevant information, coordinating real-time data delivery, an-
alysing data trends, providing financial and other forecasts, monitoring financial transactions,
optimizing logistics flows and making useful predictions and scenario analyses for potential
courses of action in the decision-making process.44
Advantages of AI for corporate decision-making. On a more general level, AI support
rationalizes board decisions, some of which typically call for large amounts of data. The more
complex a decision is, the greater the amount of data that the board needs to consider in order
for it to make a rational and well-informed decision.45 Processing a plethora of factors to reach an optimal market-based decision is difficult for human directors, who are often unfamiliar with analytics. As a result, board decisions are frequently made with little data analysis and an emphasis on sheer gut feeling.46 Here, the main advantage of AI lies in the rapid analysis of large data arrays. Today’s best AI programmes are at heart statistical (analytical) models,47 which can detect hidden correlations and patterns in large datasets.48
Thus, AI can complement the broad capabilities and knowledge of the human board members by providing them with a clear analysis of otherwise intangible mountains of data, thereby increasing the pace at which difficult decisions are taken. Furthermore, boards could use AI simulation tools (e.g. to generate Monte Carlo simulations) to design and test scenarios. This enables board decisions to be based on a rational and objective analysis of corporate patterns
43 R.H. SPRAGUE and E.D. CARLSON, Building Effective Decision Support Systems, Englewood Cliffs, Prentice-Hall, 1982,
xx + 329 p; D.L. OLSON and J.F. COURTNEY, Decision Support Models and Expert Systems, New York, Macmillan, 1992, xiii
+ 418 p; D.L. OLSON and G. LAUHOFF, Descriptive Data Mining, Singapore, Springer, 2019, 2.
44 P. BHATTACHARYA, “Artificial Intelligence in the Boardroom: Enabling ‘Machines’ to ‘Learn’ to Make Strategic Busi-
ness Decisions” in Fifth HCT Information Technology Trends (ITT), Dubai, IEEE Computer Society, 2018, 170-171; H.
DRUKARCH and E. FOSCH-VILLARONGA, “The Role and Legal Implications of Autonomy in AI-Driven Boardrooms”
in B. CUSTERS and E. FOSCH-VILLARONGA (eds.), Law and Artificial Intelligence – Regulating AI and Applying AI in Legal
Practice, Den Haag, Asser Press, 2022, 352.
45 T.A. LIEDONG, T. RAJWANI and T.C. LAWTON, “Information and nonmarket strategy: Conceptualizing the interre-
lationship between big data and corporate political activity”, Technological Forecasting & Social Change 2020, vol. 157, 1-12;
Z. LIPAI, X. XIQIANG and L. MENGYUAN, “Corporate governance reform in the era of artificial intelligence: research
overview and prospects based on knowledge graph”, Annals of Operations Research 2021, separate online issue, 12.
46 M.R. SIEBECKER, “Making Corporations More Humane Through Artificial Intelligence”, The Journal of Corporation Law
New Science of Cause and Effect, New York, Basic Books, 2018, 418p.
and industry trends instead of gut feelings.49
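The scenario-testing idea can be made concrete with a minimal sketch. The following Monte Carlo simulation of a hypothetical investment decision is purely illustrative: all figures, the five-year horizon and the distributional assumptions are invented for the example and are not drawn from this paper or any board practice it describes.

```python
import random
import statistics

def simulate_project_npv(n_runs=10_000, seed=42):
    """Illustrative Monte Carlo simulation of a project's net present value.

    Every parameter here (upfront cost, cash flow, growth distribution,
    discount rate) is a hypothetical assumption chosen for the sketch.
    """
    rng = random.Random(seed)
    outcomes = []
    for _ in range(n_runs):
        growth = rng.gauss(0.05, 0.03)   # uncertain annual revenue growth
        discount_rate = 0.08             # fixed hurdle rate
        cash_flow = 100.0                # year-0 cash flow baseline
        npv = -400.0                     # upfront investment
        for year in range(1, 6):         # five-year horizon
            cash_flow *= 1 + growth
            npv += cash_flow / (1 + discount_rate) ** year
        outcomes.append(npv)
    return outcomes

results = simulate_project_npv()
mean_npv = statistics.mean(results)
loss_probability = sum(1 for npv in results if npv < 0) / len(results)
print(f"mean NPV: {mean_npv:.1f}, P(loss): {loss_probability:.1%}")
```

Rather than a single point estimate, the board receives a distribution of outcomes and a downside probability, which is the kind of rational, objective scenario analysis the text refers to.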
Additionally, some authors insist that governance intelligence, in addition to traditional solu-
tions such as a required form of diversity and independence of board members, is useful to
counteract groupthink.50 The latter is a psychological mode of thinking in highly cohesive
groups such as boards of directors, where the desire to reach consensus (or majority) by the
group members overrides critical thinking and correct judgment.51 In a scenario where the
board of directors has failed to consider alternate courses of action (either because there was
no time to process all relevant information or because it was hesitant to challenge the manage-
ment), the board will have to evaluate the output of the implemented AI system, which is
uninfluenced by groupthink as this cognitive tendency is inherently human. Thus, the board
will be able to consider aspects of a situation or courses of action that might have been missed
because of blind spots caused by groupthink.52
The technological neutrality53 of AI will also strengthen the independence of the board, for two reasons. First, AI support gives independent board members more leverage to challenge each other’s opinions in board meetings.54 Since AI machines are fundamentally impartial and free of conflicts of interest, their output is not influenced by any friendship and bolsters the independence of decision-making,55 as long as the human directors do not feel pressured to put blind faith in the AI output.56 The neutral outcomes of the machine can challenge the strong interpersonal relationships that may have grown between directors over the years and held them back from contradicting their friends on the board.57 In other words, the board dynamics, including interpersonal relationships, will shift considerably when AI is granted a dominant or even
49 A. HAMDANI, N. HASHAI, E. KANDEL and Y. YAFEH, “Technological progress and the future of the corporation”,
Journal of the British Academy 2018, vol. 6, 219-220.
50 A. KAMALNATH, “The Perennial Quest for Board Independence - Artificial Intelligence to the Rescue?”, Albany Law
Review 2019-20, vol. 83, 52; M.A. TOKMAKOV, “Artificial Intelligence in Corporate Governance” in S.I. ASHMARINA
and V.V. MANTULENKO (eds.), Digital Economy and the New Labor Market: Jobs, Competences and Innovative HR Technolo-
gies, Cham, Springer, 2021, 669.
51 I.L. JANIS, “Groupthink”, Psychology Today 1971, vol. 5, 43-46. The phenomenon of groupthink is often blamed in the
literature for the failures at Enron, Worldcom and the financial crisis of 2007-2008, see inter alia M.A. O’CONNOR, “The
Enron Board: The Perils of Groupthink”, Corporate Law Symposium 2003, vol. 71, 1233-1320; S. ALLEN, “The Death of
Groupthink”, Bloomberg 2008, https://www.bloomberg.com/news/articles/2008-02-05/the-death-of-groupthinkbusi-
nessweek-business-news-stock-market-and-financial-advice; M. SKAPINKER, “Diversity fails to end boardroom group-
think”, Financial Times 2009, www.ft.com/content/433ed210-4954-11de-9e19-00144feabdc0; P. SCHRANK, “A better
black-swan repellent”, The Economist 2010, www.economist.com/leaders/2010/02/11/a-better-black-swan-repellent.
52 R.J. THOMAS, R. FUCHS and Y. SILVERSTONE, “A machine in the C-suite”, 2016, https://ecgi.global/sites/de-
overview and prospects based on knowledge graph”, Annals of Operations Research 2021, separate online issue, 12; M.
EROĞLU and M.K. KAYA, “Impact of Artificial Intelligence on Corporate Board Diversity Policies and Regulations”,
EBOR 2022, vol. 23, forthcoming.
56 S.A GRAMITTO RICCI, “Artificial Agents in Corporate Boardrooms”, Cornell Law Review 2020, vol. 105, 899 (arguing
that human directors may feel overly compelled to conform to AI output. Should board members disagree with the sys-
tem, they might feel compelled to explain why they chose to disregard entirely, or deviate from, the output of the system.
As a result, the alleged pressure for human directors to explain why they disagree with AI could ultimately affect the
directors’ ability to exercise independent judgment when making a decision).
57 A. KAMALNATH, “The Perennial Quest for Board Independence - Artificial Intelligence to the Rescue?”, Albany Law
a mere assisted role in the board. Second, AI augmentation will help directors process data in a shorter period of time. Independent directors are known to hold positions on multiple boards, where decisions sometimes need to be taken on short notice. Being outsiders to the company, they are unable to digest all decision-relevant data in such a brief timespan. Governance intelligence can aid them in quickly distilling the crucial information,58 which may lead to increased board activity.59
c. Classification of artificial governance intelligence in terms of the level of auton-
omy
Importance of classification. There are, to this day, no universal standards defining
different kinds of AI systems used in the corporate realm. However, for the purpose of a legal
analysis of governance intelligence, it is imperative to first discover the potential use cases of
AI for corporate decision-making. A taxonomy offers a clear overview of how boards of direc-
tors and top managements may use AI to their benefit, irrespective of its lawfulness or legal
impact. In a second step, a taxonomy could also allow the legislator, should it be so inclined, to use its categories as a benchmark for issuing AI-specific, differentiated corporate rules. Such a method would diminish legal uncertainty, as it enables companies to determine precisely which rules apply to a certain type of governance intelligence they would like to implement.
The level of autonomy as demarcation criterion. Within the corporate realm, the various agency conflicts and corresponding corporate rules are closely connected to the decision-
making power of agents in the corporation. Therefore, it makes sense to develop a governance
intelligence continuum that distinguishes categories on the basis of the allocation of decision
rights between man and machine. In other words, the level of autonomy of the AI system
serves as a benchmark here. Autonomy, not to be confused with automation,60 refers to the
ability to perform specific (narrow) tasks (based on the system’s utility functions) inde-
pendently from human guidance or intervention.61 Besides the allocation of decision rights,
the classification of governance intelligence in this study also takes into account the decision
type of the board, the task that is assigned to the AI system62 and the scope of the goals of both
the implemented AI system and the company as a whole (narrow or broad), which all have an
impact on the autonomy level of the AI system – and likewise the required oversight by human
directors.63 It should be stressed that, while a taxonomy of AI systems will follow, similar
58 A. KAMALNATH, “The Perennial Quest for Board Independence - Artificial Intelligence to the Rescue?”, Albany Law
Review 2019-20, vol. 83, 49-51.
59 ERNST & YOUNG, “Study on the relevance and impact of artificial intelligence for company law and corporate gov-
programmed supervised tasks on behalf of the user, most often executed in a repeating pattern; F. GALDON, A. HALL,
and S.J. WANG, “Designing trust in highly automated virtual assistants: A taxonomy of levels of autonomy” in A.
DINGLI, F. HADDOD and C. KLÜVER (eds.), Artificial Intelligence in Industry 4.0 – A Collection of Innovative Research Case-
studies that are Reworking the Way We Look at Industry 4.0 Thanks to Artificial Intelligence, Cham, Springer, 2021, 200.
61 W. XU, “From Automation to Autonomy and Autonomous Vehicles – Challenges and Opportunities for Human-Computer Interaction”.
62 The distinction between assisted and augmented intelligence does not depend on the capabilities of the technology. Instead, the demarcation between both categories depends on the enhancing nature of the output of the AI system, as well as the role definition of the system, i.e. the task that is ad hoc entrusted to it, cf. K. WALCH, “Is There A Difference Between Assisted Intelligence Vs. Augmented Intelligence?”, Forbes 2020, www.forbes.com/sites/cognitiveworld/2020/01/12/is-there-a-difference-between-assisted-intelligence-vs-augmented-intelligence/?sh=2322431526ab.
63 The general “AAAI”-classification of RAO and its equivalents, as well as the SAE standard J3016 for autonomous vehi-
cles, are two AI classifications built on a similar notion. See A. RAO, “AI: Everywhere and Nowhere (Part 3)”, 2016,
9
© Financial Law Institute, Ghent University, 2023
autonomy levels may be achieved with “non-intelligent” systems running on pre-pro-
grammed rules. However, the number of rules that a human developer can devise beforehand
is limited. Consequently, the need for AI increases as a task becomes more complex.
Levels of autonomy for artificial governance intelligence. As a starting point for this
classification, one should consider traditional board practices where human directors are the
sole decision-makers within the board. They are potentially supported by simple technology
applications without AI capabilities such as calculators and spreadsheets. These systems are
purely practical, task-specific and lack any form of autonomy in the corporate governance
process. However, once AI is deployed in the corporate realm, traditional board practices may gradually fade. According to the respective level
of autonomy and corresponding decision rights granted to the AI system, a distinction can be
made between assisted, augmented and autonomous governance intelligence.64
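Purely by way of illustration, the three-level classification described above can be expressed as a small data structure. All names and attributes in the sketch below are hypothetical and are not drawn from the paper or from any existing framework:

```python
from dataclasses import dataclass
from enum import Enum, auto

class AutonomyLevel(Enum):
    """The three levels of artificial governance intelligence."""
    ASSISTED = auto()    # practical/administrative support only
    AUGMENTED = auto()   # decision preparation, human-in-the-loop
    AUTONOMOUS = auto()  # standalone decision rights

@dataclass
class GovernanceAI:
    """Hypothetical descriptor of an AI system deployed at board level."""
    level: AutonomyLevel
    oversight: str  # e.g. "human-in-the-loop", "human-in-command", "human-out-of-the-loop"

    def human_directors_decide(self) -> bool:
        # Only at the autonomous level does the system hold decision rights itself.
        return self.level is not AutonomyLevel.AUTONOMOUS

advisor = GovernanceAI(AutonomyLevel.AUGMENTED, "human-in-the-loop")
print(advisor.human_directors_decide())  # humans remain the final decision-makers
```

The point of the sketch is only that the level of autonomy, not the sophistication of the underlying technology, determines where the final decision right sits.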
Assisted Governance Intelligence. In the assisted form of AI, human directors are still the sole
decision-makers within the board of directors, but they rely on selective support from narrow
AI systems for mostly practical and administrative tasks. These AI applications are task-specific, and their output remains merely assistive: it neither enhances human decision-making nor touches upon the core judgement work of directors. Assisted governance intel-
ligence is occasionally assimilated with the intelligent (i.e. AI-driven) “automation” of business
processes in management literature.65 Examples include business virtual assistants,66 intelli-
gent document and material processing,67 in addition to accounting and reporting robotics.68
Augmented Governance Intelligence. At the augmented level, human directors are still the final
decision-makers within the board of directors who rely on the sustained support from AI sys-
tems for certain specific decisions, in a manner that enhances or improves human intelligence
or decision-making. The human directors use AI output to improve the informative basis of
governance decisions for which they have a certain amount of policy freedom (decisions
ancethoughtleadership.com/ai-machine-learning/ai-everywhere-and-nowhere-part-3.
65 E.g. J. NALDER, “Future-U A3 Model: How to understand the impact of tech on work, society & education”, 2017,
https://static1.squarespace.com/static/52946d89e4b0f601b40f39a4/t/58ec53789de4bb1ee3bf3bc5/1491882887958/FU-
TURE-U+A3+MODEL+v2.pdf.
66 E.g. the (now discontinued) application Amy Ingram of the now-acquired company X.ai, which was developed to sched-
ule meetings by reading and writing e-mails, coordinating with participants and managing calendar invites.
67 Specialised AI platforms such as Automation Anywhere IQ Bot, docBrain, Kofax TotalAgility, Metamaze, Super.ai and
UiPath Document Understanding are using this process to automate complex document-based workflows of enterprises
in general.
68 As part of the so-called Robotic Process Automation (RPA) within businesses, see INSTITUTE FOR ROBOTIC PROCESS
pertaining to their “business judgement”). AI augmentation allows for the analysis of large amounts of data and, through predictive analyses, for the reduction of uncertainties essential to the decision-making.69 In this way, the human board members and AI systems perform decision-
making tasks jointly, but the AI system itself does not enjoy standalone decision rights as it is
exclusively entrusted with the preparation of decisions (there is human-in-the-loop70 over-
sight). Augmented governance intelligence can serve multiple purposes, such as searching for
information (i.e. intelligent search or enterprise search), classifying information (a form of su-
pervised learning), clustering information (a form of unsupervised learning) and rendering
precise recommendations and/or predictions (such as Monte Carlo scenario analyses). Cur-
rently, AI systems of this autonomy level are used to support M&A transactions and strategic
decision-making of the management. The predictive and prescriptive analytics of this level
also allow for cutting-edge forecasting applications in the field of finance and creative fields
such as the film industry (by predicting box office results for movie projects).71
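The kind of Monte Carlo scenario analysis mentioned above can be sketched in a few lines. The sketch below simulates a hypothetical project decision under uncertain revenue and cost assumptions; every figure and distribution is invented for illustration and has no connection to any real engagement:

```python
import random
import statistics

def monte_carlo_npv(n_scenarios: int = 10_000, seed: int = 42) -> tuple[float, float]:
    """Simulate the net value of a hypothetical project under uncertainty."""
    rng = random.Random(seed)
    outcomes = []
    for _ in range(n_scenarios):
        revenue = rng.gauss(120.0, 30.0)  # uncertain revenue (in millions, invented)
        cost = rng.gauss(90.0, 10.0)      # uncertain cost (in millions, invented)
        outcomes.append(revenue - cost)
    expected = statistics.mean(outcomes)
    loss_probability = sum(1 for o in outcomes if o < 0) / n_scenarios
    return expected, loss_probability

expected, loss_probability = monte_carlo_npv()
print(f"expected outcome: {expected:.1f}m; probability of a loss: {loss_probability:.1%}")
```

Output of this kind, an expected value together with a downside probability, is what a board member would weigh: the AI system prepares the decision, while the human takes it (human-in-the-loop).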
Autonomous Governance Intelligence. At the final autonomous stage, AI systems are bestowed
with their own standalone decision rights for governance decisions, as they operate inde-
pendently from the guidance and control of human directors – if there are any. Such an auton-
omous level can be achieved through the delegation of core corporate governance powers to
the AI system or through the de facto appointment of an AI system as a member of the board.
The AI systems of this category vary in the scope of the operational domain for which they are
designed and consequently implemented. In case of a core power delegation to AI or an ap-
pointment of AI among other human directors (a hybrid board), the AI system operates within
a limited domain of the decision-making process. Human directors are still present to monitor
the overall operations and decisions of the AI system where desirable (called human-in-com-
mand oversight). For instance, a human director may delegate part of their monitoring duty to an AI system, or a robo-director may be appointed as an additional board member. Another pos-
sibility entails the situation where the de facto board of a company exclusively consists of one
AI system (a fused board) or multiple artificial directors (an artificial board). Then, human direc-
tors are completely absent (human-out-of-the-loop), as the AI system operates independently
from any human intervention. A scenario of fused or artificial boards is currently only possible
(from a mere technological perspective) when the goals of the AI system and the company are
restricted to a narrow operational domain, in addition to being closely connected and inter-
twined. Examples include algorithmic trading, robo-taxi and vending machine companies,
where a limited purpose naturally facilitates the technological conceivability of the autono-
mous system. Self-driving subsidiaries with narrow goals, as defined by leading scholars,72
69 M.H. JARRAHI, “Artificial intelligence and the future of work: Human-AI symbiosis in organizational decision mak-
ing”, Business Horizons 2018, vol. 61, 580; S. FRIEDRICH, G. ANTES, S. BEHR, H. BINDER, W. BRANNATH, F. DUMPERT,
K. ICKSTADT, H.A. KESTLER, J. LEDERER, H. LEITGÖB, M. PAULY, A. STELAND, A. WILHELM and T. FRIEDE, “Is
there a role for statistics in artificial intelligence?”, Advances in Data Analysis and Classification 2021,
https://doi.org/10.1007/s11634-021-00455-6.
70 Cf. the AI oversight models defined in HIGH-LEVEL EXPERT GROUP ON ARTIFICIAL INTELLIGENCE, “Ethics Guidelines for Trustworthy AI”, 2019.
also fall in this category. In order for AI systems to be able to govern corporations with broad
goals (i.e. with an unlimited number of operational domains), the achievement of strong or
general intelligence is required, which remains science fiction at this point in time. If ever ac-
complished, the upward potential of AI decision-making in corporate governance rises to a
superhuman level for all domains.
73 M. HILB, “Toward artificial governance? The role of artificial intelligence in shaping the future of corporate govern-
ance”, Journal of Management and Governance 2020, vol. 24 (4), 862-863.
74 For an account on the three agency problems within the corporation, see J. ARMOUR, H. HANSMANN and R. KRAAKMAN, “Agency Problems and Legal Strategies” in R. KRAAKMAN et al. (eds.), The Anatomy of Corporate Law: A Comparative and Functional Approach, Oxford, Oxford University Press, 2017.
and Uncertainty Quantification”, Journal of Uncertainty Analysis and Applications 2016, vol. 4, 1-21; B.D. MITTELSTADT, P.
ALLO, M. TADDEO, S. WACHTER and L. FLORIDI, “The ethics of algorithms: Mapping the debate”, Big Data & Society
2016, vol. 3, 1-21.
77 F. MÖSLEIN, “Robots in the Boardroom: Artificial Intelligence and Corporate Law” in W. BARFIELD and U. PAGALLO
(eds.), Research Handbook on the Law of Artificial Intelligence, Cheltenham, Edward Elgar, 2018, 666; J. ARMOUR and H.
EIDENMÜLLER, “Self-Driving Corporations?” in H. EIDENMÜLLER and G. WAGNER (eds.), Law by Algorithm, Tü-
bingen, Mohr Siebeck, 2021, 177; C. PICCIAU, “The (Un)Predictable Impact of Technology on Corporate Governance”,
Hastings Business Law Journal 2021, vol. 17, 119.
“intentions”, if any, should be attributed to its human coding, training data and learning pro-
cess.78 AI is thus claimed to be unbiased as opposed to humans, albeit in the limited sense that
the technology does not follow its own agenda.79 However, human biases may be reflected in
the algorithm and/or the data, and algorithmic biases may emerge from the learning process.80
The ex post remedies of corporate law are ineffective at countering these biases or encouraging rule-compliant behaviour from governance intelligence,81 as these dissuasive methods are not suited to systems whose learning process is pre-programmed. Finally, the broad
standards of conduct that directors should adhere to when fulfilling their core functions, such
as the fiduciary duty of loyalty and care, are hardly intelligible for AI and cannot be coded
into the algorithm,82 since they are subject to interpretation and often ambiguously shaped by
case law.
Generally, the absence of specific corporate rules for artificial governance intelligence creates
legal uncertainty about whether it is lawful at all to “enhance” human governance decisions
with the help of AI, and how liability should be attributed when a decision based on AI output
causes harm to third parties. As mentioned earlier, Hong Kong corporate law did not recog-
nize VITAL as a director, since the legal status of a corporate director is reserved for natural
persons in Hong Kong.83 In this respect, its “appointment” was overstated, as no true legal decision rights could be granted to the system. This raises the question of the extent to which a director is legally allowed to hand over (i.e. delegate) core decision-making powers to AI from a
corporate governance perspective, and if allowed, which level of supervision by humans is
required.84 A full replacement of corporate bodies by AI encounters even greater legal issues,
although prominent scholars acknowledge the possibility of creating algorithmic entities in
the US and the EU.85 Interestingly, the previously mentioned EY study commissioned by the European Commission asserts that the use of AI as a support tool for decision-making is regarded as permissible under the current corporate frameworks, in the absence of specific statutory
78 In respect of the possibility of AI forming intent, see e.g. L.B. ELIOT, “On The Beguiling Question Of Whether AI Can
Form Intent, Including The Case Of Self-Driving Cars”, Forbes 2020,
https://www.forbes.com/sites/lanceeliot/2020/06/06/on-the-beguiling-question-of-whether-ai-can-form-intent-in-
cluding-the-case-of-self-driving-cars/?sh=2f28a81c448d.
79 S. DHANRAJANI, “Board Rooms Strategies Redefined By Algorithms: AI For CXO Decision Making”, Forbes 2019,
https://www.forbes.com/sites/cognitiveworld/2019/03/31/board-rooms-strategies-redefined-by-algorithms-ai-for-
cxo-decision-making/?sh=4861ddee3154; S.A GRAMITTO RICCI, “Artificial Agents in Corporate Boardrooms”, Cornell
Law Review 2020, vol. 105, 901 and 903; M. PETRIN, “Corporate Management in the Age of AI”, Columbia Business Law
Review 2019, vol. 3, 1006.
80 Human biases may be embedded in the algorithm, test data and training data, reflecting the financial interests of human
actors involved. In addition, algorithmic biases may emerge in the learning process. See e.g. S. BAROCAS and A.D.
SELBST, “Big Data’s Disparate Impact”, California Law Review 2016, vol. 104, 692-693 regarding the risk of human decision-
makers masking their intentions by using biased data; A.H. RAYMOND, E. ARRINGTON STONE YOUNG and S.J.
SHACKELFORD, “Building a Better Hal 9000: Algorithms, the Market, and the Need to Prevent the Engraining of Bias”,
Northwestern Journal of Technology and Intellectual Property 2018, vol. 15, 222-232 regarding algorithmic biases.
81 F. MÖSLEIN, “Robots in the Boardroom: Artificial Intelligence and Corporate Law” in W. BARFIELD and U. PAGALLO
(eds.), Research Handbook on the Law of Artificial Intelligence, Cheltenham, Edward Elgar, 2018, 666-667; M. PETRIN, “Cor-
porate Management in the Age of AI”, Columbia Business Law Review 2019, vol. 3, 1013-1018.
82 M. PETRIN, “Corporate Management in the Age of AI”, Columbia Business Law Review 2019, vol. 3, 1013; A. KAMAL-
NATH, “The Perennial Quest for Board Independence - Artificial Intelligence to the Rescue?”, Albany Law Review 2019-20,
vol. 83, 55; C. PICCIAU, “The (Un)Predictable Impact of Technology on Corporate Governance”, Hastings Business Law
Journal 2021, vol. 17, 119.
83 See no. 4.
84 See no. 18 on corporate power delegation and no. 21 on the required supervision of the delegated powers.
85 See no. 14.
provisions or case law.86 The latter is debatable to say the least, considering that the study itself
signals the complex legal questions arising from the use of assisted and augmented intelli-
gence.87 In addition, leading scholars in the field have expressed the opinion that existing cor-
porate law frameworks as a whole are currently unfit for the adoption of governance intelli-
gence with higher autonomy levels.88 Beyond corporate law, it is also unclear whether the worldwide initiatives to regulate AI may impose legal obligations on companies adopting governance intelligence, besides the many specific rules concerning data and financial transactions.89 As a result of this legal uncertainty from a company law and technology law perspective, companies could be discouraged from adopting AI at governance level, even though its implementation would likely improve the quality of decision-making.
Memberless and leaderless entities. Before legal literature started paying attention to
the emerging potential of AI entering the boardroom of existing corporations, some scholars
envisioned the creation of entirely new businesses without any ongoing human involvement. This idea was first put forward by BAYERN, who asserted that one can de facto confer legal personhood on an autonomous computer by putting it in control of a US limited liability company (LLC), thus creating a memberless or algorithmic entity exclusively governed by an algorithm.90 According to BAYERN, the default corporate governance rules do not prevent so-called algorithmic management, and state law allegedly allows LLCs to continue their operations after becoming shareholderless over time under the exclusive control of an AI system.91 Even if state law did not allow this, circular ownership and “vetogates” could achieve a comparable result.92 Other scholars contend that a similar result can be reached with other corporate forms and in different jurisdictions, such as EU Member States.93 The latter could be achieved by establishing algorithmic entities in countries with flexible regulatory standards, and then invoking the principle of Freedom of Establishment in order to conduct
86 ERNST & YOUNG, “Study on the relevance and impact of artificial intelligence for company law and corporate gov-
ernance”, 2021, 48-51.
87 While the study views AI support tools for corporate decision-making as permissible under the current laws, it para-
doxically attaches two broad and ambiguous conditions to this permissibility: “(i) that duties and decisions laying at the
heart of the management function (e.g. definition and supervision of corporate strategy) remain with human directors,
and (ii) that directors oversee the selection and activities of AI tools, which in turn requires them to have at least some
basic understanding of how the specific AI tools operate”. In respect of autonomous intelligence, the study concludes that
AI cannot legally replace corporate bodies under the existing frameworks; see ERNST & YOUNG, “Study on the relevance
and impact of artificial intelligence for company law and corporate governance”, 2021, 48-51.
88 F. MÖSLEIN, “Robots in the Boardroom: Artificial Intelligence and Corporate Law” in W. BARFIELD and U. PAGALLO
(eds.), Research Handbook on the Law of Artificial Intelligence, Cheltenham, Edward Elgar, 2018, 666-667; M. PETRIN, “Cor-
porate Management in the Age of AI”, Columbia Business Law Review 2019, vol. 3, 1015-1022; J. ARMOUR and H. EI-
DENMÜLLER, “Self-Driving Corporations?” in H. EIDENMÜLLER and G. WAGNER (eds.), Law by Algorithm, Tübingen,
Mohr Siebeck, 2021, 175-182.
89 See no. 21 for an illustration on specific rules of data protection law and financial law.
90 S. BAYERN, “Of Bitcoins, Independently Wealthy Software, and the Zero-Member LLC”, Northwestern University Law
Review 2014, vol. 108, 1495-1500; S. BAYERN, “The Implications of Modern Business-Entity Law for the Regulation of
Autonomous Systems”, Stanford Technology Law Review 2015, vol. 19, 93-112; S. BAYERN, “Are Autonomous Entities Pos-
sible?”, Northwestern University Law Review 2019, vol. 114, 23-47.
91 S. BAYERN, “Of Bitcoins, Independently Wealthy Software, and the Zero-Member LLC”, Northwestern University Law
Review 2014, vol. 108, 1496-1497; S. BAYERN, “The Implications of Modern Business-Entity Law for the Regulation of
Autonomous Systems”, Stanford Technology Law Review 2015, vol. 19, 101-104.
92 For these techniques, see S. BAYERN, “Are Autonomous Entities Possible?”, Northwestern University Law Review 2019,
vol. 114, 28-33; L.M. LOPUCKI, “Algorithmic Entities”, Washington University Law Review 2018, vol. 95, 919-924.
93 S. BAYERN, T. BURRI, T.D. GRANT, D.M. HÄUSERMANN, F. MÖSLEIN, and R. WILLIAMS, “Company Law and
Autonomous Systems: A Blueprint for Lawyers, Entrepreneurs, and Regulators”, Hastings Science and Technology Law Jour-
nal 2017, vol. 9, 139-153; L.M. LOPUCKI, “Algorithmic Entities”, Washington University Law Review 2018, vol. 95, 907-912.
business in other EU Member States.94 However, the absence of any type of human control
creates a risk of undesirable activities and liability attribution problems. In addition, the technological conceivability and utility of memberless entities are rightly criticized,95 as such entities have no legitimate or serious raison d'être, let alone a for-profit purpose.96 More generally, the ne-
cessity of granting legal personhood to AI, whether or not under the corporate veil or via spe-
cial types of citizenship, is hotly debated.97
On the other end of the spectrum, one should be wary of AI-driven leaderless entities, namely
Decentralized Autonomous Organizations (DAOs). DAOs belong to the broader family of decentralized organizations. The latter are computer programs without a distinct governance body, running on a peer-to-peer network, which involves a set of users interacting with each other in accordance with a coded protocol enforced on a blockchain.98 DAOs are designed to run autonomously on this blockchain since they are solely controlled by code,99 whereas traditional decentralized organizations require heavy involvement from humans on each end of various transactions.100 Put another way, the DAO has a structure that entails no directors or managers, since it is directly controlled by its members through an autonomous and decentralized system.101 The utility functions of DAOs extend further than those of memberless entities, as they originated in the world of Decentralized Finance (DeFi). However, the DAO is faced with numerous legal hurdles given its lack of legal recognition, which may result in courts qualifying it as an unincorporated partnership and imposing personal liability on
94 T. BURRI, “Free Movement of Algorithms: Artificially Intelligent Persons Conquer the European Union's Internal Mar-
ket” in W. BARFIELD and U. PAGALLO (eds.), Research Handbook on the Law of Artificial Intelligence, Cheltenham, Edward
Elgar, 2018, 543-549; L.M. LOPUCKI, “Algorithmic Entities”, Washington University Law Review 2018, vol. 95, 927-928.
95 See inter alia D.M. HÄUSERMANN, “Memberless Legal Entities Operated by Autonomous Systems – Some Thoughts
on Shawn Bayern’s Article “The Implications of Modern Business-Entity Law for the Regulation of Autonomous Systems”
from a Swiss Law Perspective”, https://ssrn.com/abstract=2827504, 10-12; M.U. SCHERER, “Of Wild Beasts and Digital
Analogues: The Legal Status of Autonomous Systems”, Nevada Law Journal 2019, vol. 19, 264-279.
96 Contra L.M. LOPUCKI, “Algorithmic Entities”, Washington University Law Review 2018, vol. 95, 900-901.
97 E.g. L. SOLUM, “Legal Personhood for Artificial Intelligences”, North Carolina Law Review 1992, vol. 70, 1231-1287; F.P.
HUBBARD, “Do Androids Dream? Personhood and Intelligent Artifacts”, Temple Law Review 2010, vol. 83, 405-474; R.
DOWELL, “Fundamental Protections for Non-Biological Intelligences or: How We Learn to Stop Worrying and Love Our
Robot Brethren”, Minnesota Journal of Law Science & Technology 2018, vol. 19, 305-336; G. TEUBNER, “Digitale
Rechtssubjekte? Zum privatrechtlichen Status autonomer Softwareagenten”, Ancilla Iuris 2018, 36-78; V. A.J. KURKI, A
Theory of Legal Personhood, Oxford, Oxford University Press, 2019, 175-189; T.L. JAYNES, “Legal personhood for artificial
intelligence: citizenship as the exception to the rule”, AI & Society 2020, vol. 35, 343-354; N. BANTEKA, “Artificially Intel-
ligent Persons”, Houston Law Review 2021, vol. 58, 537-596; A. LAI, “Artificial Intelligence, LLC: Corporate Personhood As
Tort Reform”, Michigan State Law Review 2021, 597-653; E. MIK, “AI as a Legal Person?” in J.-A. LEE, R.M. HILTY, K.-C.
LIU (eds.), Artificial Intelligence and Intellectual Property, Oxford, Oxford University Press, 2021, 419-439.
98 This definition is, with minor alterations, adopted from L. METJAHIC, “Deconstructing The DAO: The Need for Legal
Recognition and the Application of Securities Laws to Decentralized Organizations”, Cardozo Law Review 2018, vol. 39,
1541-1542.
99 Blockchain networks (or other Distributed Ledger Technologies) serve as an interoperable layer for AI to interact and
potentially coordinate themselves with other code-based systems through a set of smart contracts; P. DE FILIPPI and A.
WRIGHT, Blockchain and the Law: The Rule of Code, Cambridge – London, Harvard University Press, 2018, 147-150.
100 Ibid, 148.
101 V. BUTERIN, “DAOs, DACs, DAs and More: An Incomplete Terminology Guide”, Ethereum Foundation Blog 2014,
https://blog.ethereum.org/2014/05/06/daos-dacs-das-and-more-an-incomplete-terminology-guide.
its members.102 Other legal risks include the unclear attribution of corporate fiduciary duties,103
in addition to the debatable security law qualification of initial offerings of tokens and coins.104
Aside from the foregoing legal aspects, the spectacular failure of the Ethereum-based “The DAO”105 underscores the general risks and shortcomings of a business entity without centralized management.
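Production DAOs implement this member-controlled, code-enforced decision-making in smart contracts on a blockchain (typically written in Solidity). Purely for illustration, the core rule of token-weighted voting without any director can be sketched in Python as follows; all names and numbers are invented:

```python
from dataclasses import dataclass, field

@dataclass
class ToyDAO:
    """Token-weighted proposal voting, enforced purely by code:
    no director or manager decides; the coded rule itself does."""
    balances: dict[str, int]  # member -> governance tokens held
    votes: dict[str, dict[str, int]] = field(default_factory=dict)

    def vote(self, proposal: str, member: str, in_favour: bool) -> None:
        # A member's voting weight equals their token balance.
        weight = self.balances.get(member, 0)
        tally = self.votes.setdefault(proposal, {"yes": 0, "no": 0})
        tally["yes" if in_favour else "no"] += weight

    def passes(self, proposal: str) -> bool:
        # The decision rule is a simple token majority, applied mechanically.
        tally = self.votes.get(proposal, {"yes": 0, "no": 0})
        return tally["yes"] > tally["no"]

dao = ToyDAO(balances={"alice": 60, "bob": 40})
dao.vote("fund-project-x", "alice", in_favour=True)
dao.vote("fund-project-x", "bob", in_favour=False)
print(dao.passes("fund-project-x"))  # alice's 60 tokens outweigh bob's 40
```

The sketch omits everything that makes real DAOs legally contentious, such as token transfers, quorum requirements and on-chain execution of the outcome, but it shows why no natural person occupies a director-like position: the protocol itself tallies and "decides".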
Artificial management of traditional corporations – legal state of the art. The far-
fetched ideas of both memberless and leaderless corporations eventually drew attention to the
more realistic hypothesis of AI playing a role in the boardroom of traditional and already es-
tablished corporations. Over the past few years, a vast number of research papers have been
written about the legal status of governance intelligence in various jurisdictions.106 This re-
search wave was sparked by the original work of MÖSLEIN,107 in addition to the paper of
ARMOUR and EIDENMÜLLER.108 Prompted by the growing attention paid to governance
intelligence in the literature, the European Commission requested a study report from EY on its
Corporate Governance 2021, vol. 5, 26; B. MIENERT, Dezentrale autonome Organisationen (DAOs) und Gesellschaftsrecht – Zum
Spannungsverhältnis Blockchain-basierter und juristischer Regeln, Tübingen, Mohr Siebeck, 2022, 189-192.
104 The US SEC published a report in 2017 about the qualification of DAO tokens as securities, which is a fact-based anal-
ysis; US SECURITIES AND EXCHANGE COMMISSION, “SEC Issues Investigative Report Concluding DAO Tokens, a
Digital Asset, Were Securities”, 2017, www.sec.gov/news/press-release/2017-131.
105 E.g. U. RODRIGUES, “Law and the Blockchain”, Iowa Law Review 2019, vol. 104, 697-706.
106 See inter alia A. KAMALNATH, “The Perennial Quest for Board Independence - Artificial Intelligence to the Rescue?”,
Albany Law Review 2019-20, vol. 83, 43-60; M. PETRIN, “Corporate Management in the Age of AI”, Columbia Business Law
Review 2019, vol. 3, 965-1030; Y.R. SHRESTHA, S.M. BEN-MENAHEM and G. VON KROGH, “Organizational Decision-
Making Structures in the Age of AI”, California Management Review 2019, vol. 61, 66-83; M.R. SIEBECKER, “Making Cor-
porations More Humane Through Artificial Intelligence”, The Journal of Corporation Law 2019, vol. 45, 95-149; M.E. DIA-
MANTIS, “The Extended Corporate Mind: When Corporations Use AI to Break the Law”, North Carolina Law Review 2020,
vol. 98, 893-931; L. ENRIQUES and D.A. ZETZSCHE, “Corporate Technologies and the Tech Nirvana Fallacy”, Hastings
Law Journal 2020, vol. 72, 55-98; S.A GRAMITTO RICCI, “Artificial Agents in Corporate Boardrooms”, Cornell Law Review
2020, vol. 105, 869-908; N. LOCKE and H. BIRD, “Perspectives on the current and imagined role of artificial intelligence
and technology in corporate governance practice and regulation”, 2020, https://ssrn.com/abstract=3534898; G.D.
MOSCO, “AI and the Board Within Italian Corporate Law: Preliminary Notes”, European Company Law Journal 2020, vol.
17, 87-96; U. NOACK, “Künstliche Intelligenz und die Unternehmensleitung” in G. BACHMANN, S. GRUNDMANN, A.
MENGEL and K. KROLOP (eds.), Festschrift für Christine Windbichler zum 70. Geburtstag am 8. Dezember 2020, Berlin, De
Gruyter, 2020, 947-962; C.M. BRUNER, “Artificially Intelligent Boards and the Future of Delaware Corporate Law”, 2021,
https://ssrn.com/abstract=3928237; J. LEE and P. UNDERWOOD, “AI in the boardroom: let the law be in the driving seat”, 2021, https://ssrn.com/abstract=3874588; C. PICCIAU, “The (Un)Predictable Impact of Technology on Corporate Gov-
ernance”, Hastings Business Law Journal 2021, vol. 17, 67-136; M.A. TOKMAKOV, “Artificial Intelligence in Cor-
porate Governance” in S.I. ASHMARINA and V.V. MANTULENKO (eds.), Digital Economy and the New Labor Market: Jobs,
Competences and Innovative HR Technologies, Cham, Springer, 2021, 667-674; S. WEI, “When FinTech meets corporate gov-
ernance: opportunities and challenges of using blockchain and artificial intelligence in corporate optimisation”, Journal of
International Banking Law and Regulation 2021, vol. 36, 53-68; H. DRUKARCH and E. FOSCH-VILLARONGA, “The Role
and Legal Implications of Autonomy in AI-Driven Boardrooms” in B. CUSTERS and E. FOSCH-VILLARONGA (eds.),
Law and Artificial Intelligence – Regulating AI and Applying AI in Legal Practice, The Hague, Asser Press, 2022, 345-364; A.
KAMALNATH and U. VAROTTIL, “A Disclosure-Based Approach to Regulating AI in Corporate Governance”, 2022,
https://ssrn.com/abstract=4002876; M.M. RAHIM and P. DEY, “Directors in Artificial Intelligence-Based Corporate Gov-
ernance – an Australian Perspective” in I. DUBE (ed.), Corporate Governance and Artificial Intelligence – a Conflicting or Com-
plementary Approach, Cheltenham, Edward Elgar, forthcoming.
107 F. MÖSLEIN, “Robots in the Boardroom: Artificial Intelligence and Corporate Law” in W. BARFIELD and U. PA-
GALLO (eds.), Research Handbook on the Law of Artificial Intelligence, Cheltenham, Edward Elgar, 2018, 649-670.
108 J. ARMOUR and H. EIDENMÜLLER, “Self-Driving Corporations?” in H. EIDENMÜLLER and G. WAGNER (eds.), Law by Algorithm, Tübingen, Mohr Siebeck, 2021.
impact on corporate law and corporate governance. However, as mentioned earlier, this EY study claimed categorically that AI support for directors is permissible in the absence of specific case law and statutory provisions (even though great legal problems and uncertainty exist), while AI replacing corporate bodies is legally impossible in the EU.109 In 2021 and 2022, the Commission
proposed a heavily criticized AI Act,110 AI Liability Directive111 and revised Product Liability
Directive112 without any rules tailored to governance intelligence, as the EY-study advised to
await further developments in the field. It goes without saying that this is a missed opportunity, as the Commission had the chance to anticipate potential AI developments and reflect on their impact on the legal rules of corporate governance today, in order to prevent developments on the technological front from simply dictating the evolution of legal rules in the future. Rule-makers should make up their minds today about how to regulate future AI developments in the corporate realm, so that they are ready to react on the basis of calm analysis when technology progresses, instead of improvising rule changes on the hoof.113 As long as no
adapted rules are introduced, the literature (and the judicial system) ought to bring clarity to the legal questions arising from governance intelligence, of which the most prominent are identified in Part III.
A legal research agenda. The increasing use of governance intelligence, in conjunction
with its speedy development, has led researchers to the consensus that AI will soon enter
stages of autonomous intelligence where it is bestowed with core board powers without any
role for humans. As controversy exists about the legal status of higher autonomy levels of
governance intelligence, the purpose of this study is twofold. First, the legal issues raised by
the several autonomy levels of governance intelligence should be identified from a corporate
law perspective (de lege lata). Second, this study aims to suggest changes to current corporate law frameworks, in order to solve or at least alleviate the identified problems (de lege ferenda).
b. Current legal problems arising from artificial governance intelligence
General. The continuum of artificial governance intelligence, ranging from assisted in-
telligence to autonomous intelligence,114 raises three research questions in the field of corporate law.115 In case of assisted intelligence, no true decision rights are granted to the machine. From a corporate governance perspective, the use of AI in an assisted form therefore seems permissible, but may still raise liability questions if the system contributes to negligence on the part of the company. From augmented intelligence onwards, however, AI plays a more crucial
role in the judgment work of the board, where greater legal uncertainty comes into play. Here,
109 ERNST & YOUNG, “Study on the relevance and impact of artificial intelligence for company law and corporate gov-
ernance”, 2021, 45-51.
110 Proposal for a Regulation Laying Down Harmonised Rules On Artificial Intelligence and Amending Certain Union
Legislative Acts, 21 April 2021, COM(2021)206 final – 2021/0106 (COD) (hereafter: Draft AI Act).
111 Proposal for a Directive on Adapting Non-Contractual Civil Liability Rules to Artificial Intelligence, 28 September 2022,
the market, see e.g. S. STEIN SMITH, “Crypto Regulation Needs Clarity, But Rushing It Is A Bad Idea”, Forbes 2021,
www.forbes.com/sites/seansteinsmith/2021/08/03/crypto-regulation-needs-clarity-but-rushing-it-is-a-bad-idea/.
114 See no. 11.
115 As first identified by F. MÖSLEIN, “Robots in the Boardroom: Artificial Intelligence and Corporate Law” in W. BAR-
FIELD and U. PAGALLO (eds.), Research Handbook on the Law of Artificial Intelligence, Cheltenham, Edward Elgar, 2018,
657-666.
17
© Financial Law Institute, Ghent University, 2023
the question arises whether directors have the legal right to rely on AI output and/or delegate governance powers to AI. This question may also be inverted, by asking whether directors could, for some decisions, have the duty to rely on the narrow but superhuman analytical capabilities of
AI. Finally, autonomous AI systems may in the future also be able to fully replace human
directors and obtain all governance powers,116 which introduces more existential challenges
for company law as we know it today. The goal of these research questions is to diminish legal
uncertainty and ease the legal concerns of companies eager to adopt governance intelligence.
The right of a director to rely on the output of AI and/or delegate decision rights to
AI (core power delegation). As mentioned earlier, company directors already use AI output
to improve the informative basis of their decisions. The latter is most often the case when pure
data analysis is at the heart of the decision, which occurs in the asset management industry for
instance.117 At a higher autonomy level, AI could also be bestowed with certain tasks, decision
rights or core powers, such as monitoring the management and the overall performance of the
company. However, no clarity exists about the legality of such delegations in many corporate
frameworks. For example, UK company law allows directors to delegate their powers to a person or committee if the articles of incorporation so provide,118 but it is doubtful whether AI can be considered as either.119 Italian law, on the other hand, completely forbids delegation to agents other than board members,120 while Belgian law contains no statutory rules on board power delegation. Even if such a delegation were legally permitted, restrictions on the directors' delegation authority still need to be taken into account. To illustrate, Delaware courts insist that the "heart of the management" remains with the board of
directors.121 In fact, most corporate laws do not allow the delegation of core management decisions,122 although it is usually unclear which decisions these include. The ambiguity of the existing legal framework could be reduced by drawing on the established literature concerning a director's right to seek help from an (independent) auditor or expert, in addition to the literature on outsourcing board tasks.123
The duty of a director to rely on the output of AI and/or delegate decision rights to
116 According to MÖSLEIN, this may happen either because humans increasingly trust the machines’ abilities to decide or
because decisions have to be taken so quickly or require so many data that humans are simply unable to decide; F.
MÖSLEIN, “Robots in the Boardroom: Artificial Intelligence and Corporate Law” in W. BARFIELD and U. PAGALLO
(eds.), Research Handbook on the Law of Artificial Intelligence, Cheltenham, Edward Elgar, 2018, 657.
117 Investment firms whose sole goal is to maximize financial returns, have already to a large extent handed over share
GALLO (eds.), Research Handbook on the Law of Artificial Intelligence, Cheltenham, Edward Elgar, 2018, 658.
120 Art. 2381 Codice Civile (IT); G.D. MOSCO, “AI and the Board Within Italian Corporate Law: Preliminary Notes”, Eu-
“Artificially Intelligent Boards and the Future of Delaware Corporate Law”, 2021, https://ssrn.com/abstract=3928237, 9.
122 Such as Swiss and Spanish company law; see art. 716b, par. 1 Obligationenrecht (CH) and art. 249bis and 529ter Ley de
Sociedades de Capital (ES). Dutch company law, on the other hand, does not impose limitations on a director’s delegation
authority; K.H.M. DE ROO, “Delegatie van bestuursbevoegdheden”, WPNR 2019, vol. 150, 473.
123 See inter alia for an account on internal corporate governance structures in Europe: H. DE WULF, Taak en loyauteitsplicht
van het bestuur in de naamloze vennootschap, Antwerpen, Intersentia, 2002, 235-361; K.J. HOPT and P.C. LEYENS, “Board
Models in Europe. Recent Developments of Internal Corporate Governance Structures in Germany, the United Kingdom,
France, and Italy”, European Company and Financial Law Review 2004, vol. 1, 135-168; S. DE GEYTER, Organisatieaanspra-
kelijkheid: bestuurdersaansprakelijkheid, corporate governance en risicomanagement, Antwerpen, Intersentia, 2012, 289-478. See
also S.M BAINBRIDGE and M.T. HENDERSON, Outsourcing the board: how board service providers can improve corporate
governance, Cambridge – New York, Cambridge University Press, 2018, xiii + 234 p.
AI (core power delegation). Most corporate laws expect the board to make governance decisions on a well-informed basis.124 Some systems even impose minimum requirements for the gathering of information.125 Considering that the analytical capabilities of AI may be superior to those of humans for a number of specific tasks, the ubiquitous expectation that directors act on a well-informed basis may very well evolve into a duty to rely on the output of AI.126 A director's duty to rely on AI is most likely to emerge when data analysis lies at the basis of a decision, as merely following gut feelings may then be considered careless. With regard to the board's oversight function, Delaware case law already facilitates a potential duty of AI delegation, as the reasonable use of formal monitoring systems in corporate governance has been interpreted to follow from a director's duty of loyalty.127 As of now, however, the costs of data governance and of operating an AI system do not justify the establishment of any obligation to use AI. This may change in the near future, as AI technology advances rapidly.
The replacement of one or more human directors by AI (hybrid or artificial board).
A more distant prospect is the potential replacement of human directors by AI-driven robo-
directors, resulting in autonomous governance intelligence. It is possible that human directors
will soon share seats in the boardroom with one or more computers (a hybrid board128), that a single algorithm will replace all human directors (a fused board129), or that the board will be composed of multiple robo-directors originating from different manufacturers (an artificial board). To discover whether AI can ever replace human directors in conformity with existing corporate laws, two preliminary questions must first be answered. First, from a technological point of view, the type of tasks suited for AI replacement must be clarified. In this respect, management literature acknowledges that administrative work, on the one hand, could increasingly be placed in the hands of AI as time goes on.130 Judgement work, on the other hand, requires creative, analytical and strategic skills131 of which it is debated whether
124 F. MÖSLEIN, Grenzen unternehmerischer Leitungsmacht im marktoffenen Verband: Aktien- und Übernahmerecht, Rechtsver-
gleich und europäischer Rahmen, Berlin, De Gruyter, 2007, 131-134.
125 Such as US, UK, French and Italian law, see F. MÖSLEIN, “Robots in the Boardroom: Artificial Intelligence and Corpo-
rate Law” in W. BARFIELD and U. PAGALLO (eds.), Research Handbook on the Law of Artificial Intelligence, Cheltenham,
Edward Elgar, 2018, 661-663.
126 Ibid, 660-662; U. NOACK, “Künstliche Intelligenz und die Unternehmensleitung” in G. BACHMANN, S. GRUND-
MANN, A. MENGEL and K. KROLOP (eds.), Festschrift für Christine Windbichler zum 70. Geburtstag am 8. Dezember 2020,
Berlin, De Gruyter, 2020, 953-955.
127 See Delaware Supreme Court (US) September 27, 2018, Marchand v. Barnhill, 2018 Del. Ch. Lexis 316; as an evolution
with regard to Delaware Court of Chancery (US), September 25, 1996, In re Caremark International Inc. Derivative Litiga-
tion, 1996 Del. Ch. 698 A.2d, 959. See for an account on the evolving case law on the board’s discretion regarding the design
of compliance monitoring systems: C.M. BRUNER, “Artificially Intelligent Boards and the Future of Delaware Corporate
Law”, 2021, https://ssrn.com/abstract=3928237, 11-19.
128 S.A GRAMITTO RICCI, “Artificial Agents in Corporate Boardrooms”, Cornell Law Review 2020, vol. 105, 900-903.
129 M. PETRIN, “Corporate Management in the Age of AI”, Columbia Business Law Review 2019, vol. 3, 1002-1003.
130 See inter alia T.H. DAVENPORT and R. RONANKI, “Artificial Intelligence for the Real World - Don’t start with moon
with individuals and shareholders; see A. AGRAWAL, J. GANS and A. GOLDFARB, “What to Expect From Artificial
Intelligence”, MIT Sloan Management Review 2017, vol. 58, 24.
AI will ever achieve them.132 It is also uncertain whether AI will be able to balance stakeholder interests.133 Second, most corporate laws presuppose that only natural and/or legal persons may be appointed as directors,134 while AI is neither. In spite of the apparent impossibility of appointing AI as a director, prominent scholars believe that algorithmic entities can be created in countries with flexible standards.135 Moreover, it is interesting to explore what the legal implications would be if human directors could hypothetically be replaced by AI machines.
In this respect, it seems that current corporate frameworks are unfit for the adoption of autonomous AI. As mentioned above, existing corporate governance best practices are predominantly based on agency conflicts between human directors and shareholders, which will not necessarily occur when the goals of the AI system are set in favour of the shareholders.136 Moreover, robo-directors neither earn money nor work towards that objective, with the consequence that pay-for-performance regimes will be of no use in making AI pursue the corporate interest.137 Fiduciary duties such as the duties of loyalty and care are hardly intelligible to algorithms,138 while the business judgement rule (e.g. in the US, UK, Italy and Germany) seems impossible to apply to AI for various reasons.139 In addition, some authors argue that the potential black box character of inter alia neural networks hinders the collegiality of the board,140 even though the thought processes of human directors may be equally opaque.141 All of the foregoing demonstrates that the introduction of robo-directors would prompt fundamental – if not existential – challenges for traditional corporate law.
The required human supervision and control of governance AI. As a first control question complementing the foregoing research questions, one must verify the extent to which human
132 For an overview of the predominant views in the literature, see S. MAKRIDAKIS, “The forthcoming Artificial Intelli-
gence (AI) revolution: Its impact on society and firms”, Futures 2017, vol. 90, 50-53; M. PETRIN, “Corporate Management
in the Age of AI”, Columbia Business Law Review 2019, vol. 3, 986-993. See also IBM, “The quest for AI creativity”,
https://www.ibm.com/watson/advantage-reports/future-of-artificial-intelligence/ai-creativity.html (retrieved on 31
October 2022).
133 See no. 13.
134 For example, corporations whose registered office is located in Belgium and the Netherlands are permitted to appoint
both natural and legal persons as a director; cf. art. 7:85, par. 1 WVV (BE) and art. 2:11 BW (NL). Contrastingly, UK com-
pany law requires at least one director to be a natural person, while German law totally forbids a legal person from attain-
ing the status of board member; cf. S. 155 (1) Companies Act (UK), S. 6 (2) GmbHG (DE) and S. 76 (3) AktG (DE).
135 S. BAYERN et al., “Company Law and Autonomous Systems: A Blueprint for Lawyers, Entrepreneurs, and Regulators”,
GALLO (eds.), Research Handbook on the Law of Artificial Intelligence, Cheltenham, Edward Elgar, 2018, 666-667.
138 See no. 13.
139 The business judgement rule essentially comes down to the rebuttable presumption that directors pursue the corpora-
tion’s interests in good faith when their actions are being challenged. As a prerequisite for this rule, most legal systems
require that the director must have had a certain amount of policy freedom, which seems to be absent when a decision is
delegated to AI, since AI reasons linear in pursuance of its set goals. Moreover, the contribution of AI to the decision-
making process is sometimes not exactly traceable and ascertainable, whilst this is also a common requirement for the
protection of the business judgement rule. See in this respect A. KAMALNATH, “The Perennial Quest for Board Inde-
pendence - Artificial Intelligence to the Rescue?”, Albany Law Review 2019-20, vol. 83, 55-56; G.D. MOSCO, “AI and the
Board Within Italian Corporate Law: Preliminary Notes”, European Company Law Journal 2020, vol. 17, 95; U. NOACK,
“Künstliche Intelligenz und die Unternehmensleitung” in G. BACHMANN, S. GRUNDMANN, A. MENGEL and K.
KROLOP (eds.), Festschrift für Christine Windbichler zum 70. Geburtstag am 8. Dezember 2020, Berlin, De Gruyter, 2020, 955.
140 A.-G. KLECZEWSKI, “L’intelligence artificielle au service des administrateurs: une mise à l’épreuve de la collégialité?”,
Rationalizing Biased Judicial Decisions: Evidence from Experiments with Real Judges”, Journal of Empirical Legal Studies
2019, vol. 16, 630–670.
directors should supervise the various levels of governance intelligence in accordance with existing laws. Naturally, in most jurisdictions, the power of delegation does not relieve a director of the duty to supervise the exercise of the delegated rights,142 but this supervision duty differs from jurisdiction to jurisdiction and is often vaguely defined. Some authors argue that directors must at least generally oversee the selection and activities of governance intelligence, which requires the board members to have a basic understanding of how these devices operate.143 An oversight model where humans are expected to intervene in each decision cycle is undesirable, as it would eliminate all efficiency gains.144
Besides the case law on corporate delegation, other rules may require supervision of governance intelligence. For example, it is in principle prohibited in the EU to entrust AI with a decision that produces legal or similarly significant effects for a natural person, when the decision is based on personal data of that person (e.g. the selection of a CEO).145 Surprisingly, the EU Draft AI Act does not impose special oversight duties for governance intelligence, since it does not qualify as high-risk in the current draft. However, existing regimes on directors' conflicts of interest may be invoked by shareholders to challenge an AI's decision when its training data, test data or algorithm reflects the financial interests of its human "controllers" – the directors. For financial decisions, specific rules may come into play as well, such as the German financial regulation that requires substantial oversight of algorithmic trading on securities markets.146 Even if the law does not require human directors to supervise governance AI, they may be financially inclined to do so if they or the company could be held liable for damage caused by the system.
Liability for algorithmic failure. The second control question pertains to the attribution of liability for (un)lawful decisions made by governance intelligence that harm business partners of the company and/or third parties.147 It remains an open question to what extent current statutory and case law on product liability, general tort law and, specifically, director's liability can be applied to AI failures in a governance context. Clearly, the AI system itself cannot be held liable for its faulty predictions or decisions, as it cannot pay damages or make amends.148 Because of this liability gap, the law is required to turn to legal entities or natural persons in order to secure compensation for the damage. In this respect, general tort law in most jurisdictions seems
142 E.g. A.N. MOHD-SULAIMAN, "Directors' Oversight Responsibility and the Impact of Specialist Skill", 2010, https://ssrn.com/abstract=1635154.
143 F. MÖSLEIN, “Robots in the Boardroom: Artificial Intelligence and Corporate Law” in W. BARFIELD and U. PA-
GALLO (eds.), Research Handbook on the Law of Artificial Intelligence, Cheltenham, Edward Elgar, 2018, 660.
144 E. HICKMAN and M. PETRIN, “Trustworthy AI and Corporate Governance: The EU’s Ethics Guidelines for Trustwor-
thy Artificial Intelligence from a Company Law Perspective”, EBOR 2021, vol. 22, 602.
145 The right of the individual ex art. 22 GDPR to oppose automated decision-making is widely interpreted as a general
prohibition for the data processor; see ARTICLE 29 DATA PROTECTION WORKING PARTY, “Guidelines on Automated
individual decision-making and Profiling for the purposes of Regulation 2016/679”, https://edpb.europa.eu/our-work-
tools/our-documents/guidelines/automated-decision-making-and-profiling_en, 19-20; J. GOETGHEBUER, “De invloed
van artikel 22 AVG op het gebruik van robo-advies binnen de beleggingssector. Met de rug tegen de muur?”, TBH-RDC
2020, 146-147, no. 21.
146 S. 80, par. 2 Wertpapierhandelsgesetz (DE); A. FLECKNER, “Regulating Trading Practices” in N. MOLONEY, E. FER-
RAN, and J. PAYNE (eds.), The Oxford Handbook of Financial Regulation, Oxford, Oxford University Press, 2015, 619-623.
147 In my opinion, governance intelligence may be able to cause harm at any given autonomy level. Therefore, the current
ambiguity of liability regimes applies to assisted, augmented and autonomous governance intelligence.
148 AI systems have “no soul to be damned, and no body to be kicked”, as they do not own assets or bear liabilities. Neither
do they have a social reputation or professional persona to protect. See S.A GRAMITTO RICCI, “Artificial Agents in Cor-
porate Boardrooms”, Cornell Law Review 2020, vol. 105, 886; C. PICCIAU, “The (Un)Predictable Impact of Technology on
Corporate Governance”, Hastings Business Law Journal 2021, vol. 17, 120.
to hold the owner of the AI system liable for algorithmic failure, which is mostly the company using it. The same result is achieved by applying agency theory, whereby the AI system is considered an agent of the company (as directors usually are), making its actions attributable to the company.149 Therefore, one can assume prima facie that judges will be inclined to rely on fault liability of the company in case of governance AI failures. This does not exclude the personal liability of (human) directors who have contributed to the failure. Current fault liability regimes, however, place a great burden of proof on the shoulders of victims, as they are required to prove faulty supervision of the AI by the company or its directors. Of course, special liability regimes, such as the ones imposed by the EU's GDPR, may also be invoked when personal data is processed by governance intelligence without human intervention,150 or when the potential black box embedded in inter alia neural networks makes its reasoning opaque in the case of decisions with legal or significant effects.151
c. Potential solutions to the legal problems arising from artificial governance intelligence
General. The identified corporate law problems should be solved, or at least alleviated, for those instances in which it is shown that AI support can lead to better-informed board decisions, or in which the decision quality of AI is clearly superior to that of human intelligence for a certain type of board decision. The latter is in my view the case when a board decision is based on large amounts of data, considering that AI is able to translate incomprehensible mountains of data into consolidated chunks of information that are easily managed and understood by human directors. AI is also superior to human intelligence if it can be shown to make decisions faster than humans, provided that the speed of decision-making is crucial for the specific board decision. That being said, limitations on AI power transfers may be needed as well, especially if biases, a black box or other technological flaws hinder the transparency or independence of the system. In addition, if high risks to the fundamental rights of individuals are involved (for example, in the case of a sensitive decision on the company's personal data policy), the allowed degree of AI autonomy should be reduced or the use of AI even prohibited.
Shifting foundations of corporate law. It is clear that rule-makers should provide legal frameworks that enable AI board appointments and AI task delegations on a decision-specific basis, i.e. only with regard to decisions for which the normative assumption explained above is fulfilled. However, in all cases where AI delegation or replacement would be legally permitted, due regard must be paid to the regulatory concern of maintaining human control over AI. In this respect, a form of obligatory oversight for each decision cycle of the system must be rejected, or should at least remain a last resort for decision-making that entails
149 On the agency or “organ” theory, see e.g. A. AVIRAM, “Officers’ Fiduciary Duties and The Nature of Corporate Or-
gans”, University of Illinois Law Review 2013, 763-784.
150 See no. 21.
151 The required transparency or “explainability” of AI under art. 22 GDPR remains controversial. See inter alia B. GOOD-
MAN and S. FLAXMAN, “EU regulations on algorithmic decision-making and a right to explanation”, AI Magazine 2017,
50-57; P. HACKER, R. KRESTEL, S. GRUNDMANN and F. NAUMANN, “Explainable AI under contract and tort law:
legal incentives and technical challenges”, Artificial Intelligence and Law 2020, vol. 28, 415-439; A. BIBAL, M. LOGNOUL,
A. DE STREEL and B. FRÉNAY, “Legal requirements on explainability in machine learning”, Artificial Intelligence and Law
2021, vol. 29, 149-169. Art 52 (1) Draft AI Act does not require a form of transparency of low-risk AI systems towards
individuals subject to its decisions. The provision solely requires natural persons interacting with the system to be aware
of its artificial nature, while these persons are the directors in this case.
certain risks for the rights of individuals, as efficiency gains and incentives for innovation are
stifled otherwise.
In respect of autonomous governance intelligence specifically, its legal recognition will require rule-makers to introduce profound changes to the fundamentals of corporate law. Existing ex post remedies in corporate law, such as the fiduciary duties and liability of directors, designed to control human agency conflicts and directors' behaviour, must be reimagined for the hypothesis of AI entering the boardroom. As elaborated on before, an AI system cannot be held liable and has no interests of its own, although the inherent biases of its controllers may be reflected in it, as AI is only as good as its inputs and programming.152 For this reason, robo-directors will be less inclined to abuse corporate assets for personal gain. On the other hand, whilst the system can be programmed to pursue the interests of its principals, there is no guarantee that it will follow all applicable legal rules and maintain a reasonable aversion to risks and losses. When AI is entrusted with a crucial role in board decision-making, rule-compliant behaviour will therefore need to be embedded in the algorithm's code. This calls for cutting-edge ex ante regulatory strategies,153 such as abstract coding requirements for appointed robo-directors, which will entail far-reaching "surgeries" to the anatomy of corporate law. Once the algorithm is positioned in the corporate structure, its abstract oversight requires technological know-how, which can be expected from neither shareholders, human directors nor specialized courts. Therefore, some contemplate the need for direct governmental control of governance intelligence,154 which might collide with fundamental principles such as private autonomy and entrepreneurial freedom.
One particular ex ante strategy, proposed by prominent scholars, is the regulation or calibration of corporate objectives.155 As elaborated on before, algorithms pursue set goals. These goals reflect the exact content of the interests AI should pursue, which are, assumedly, aligned with the (best) interests of the company. It may therefore be more efficient for the legislator to regulate corporate goals, i.e. the AI system's goals. As a result, human biases embedded in the algorithm and data, in addition to algorithmic biases generated by the learning process, may be reduced to a minimum. However, it will require a difficult balancing exercise to regulate corporate goals or purposes in general while also leaving ample scope for firm-specific goals, as the aforementioned fundamental principles must not be endangered.
Overall, however, if the implementation of autonomous governance intelligence were permitted, AI could allow the company to benefit from the separation of ownership and control, while protecting the shareholders with a smart decision-maker that is loyal and careful towards them.156 To achieve this, the option of shareholder "say-on-manufacturer" rights in respect of
152 E.g. A.H. RAYMOND, E. ARRINGTON STONE YOUNG and S.J. SHACKELFORD, “Building a Better Hal 9000: Algo-
rithms, the Market, and the Need to Prevent the Engraining of Bias”, Northwestern Journal of Technology and Intellectual
Property 2018, vol. 15, 222-232.
153 F. MÖSLEIN, “Robots in the Boardroom: Artificial Intelligence and Corporate Law” in W. BARFIELD and U. PA-
GALLO (eds.), Research Handbook on the Law of Artificial Intelligence, Cheltenham, Edward Elgar, 2018, 666-667.
154 A form of control by state agencies on governance intelligence could endanger the fundamental principles of private
autonomy and entrepreneurial flexibility; see F. MÖSLEIN, “Robots in the Boardroom: Artificial Intelligence and Corpo-
rate Law” in W. BARFIELD and U. PAGALLO (eds.), Research Handbook on the Law of Artificial Intelligence, Cheltenham,
Edward Elgar, 2018, 667.
155 J. ARMOUR and H. EIDENMÜLLER, “Self-Driving Corporations?” in H. EIDENMÜLLER and G. WAGNER (eds.),
robo-directors could be considered by lawmakers, as a complement to the traditional say-on-
pay rights regarding human executives.
Liability for algorithmic failure de lege ferenda. Whilst current regimes point towards the liability of the company deploying the AI system for harm caused by the system's decisions, I do not believe that the primary result of these regimes should be altered. The company using governance intelligence exclusively decides on the design and deployment of the system, as opposed to the third-party developer, vendor, provider or operator of the platform. In this respect, the company is the "least-cost avoider" of algorithmic failure and its liability is thus most justifiable.157 What matters, and should be up for debate, is the type of liability that the company should face, i.e. fault liability or strict liability. On the one hand, fault liability places the difficult task of proving negligence or faulty supervision in the hands of victims, often resulting in no compensation.158 If one were to prefer fault liability, then rebuttable presumptions of fault and/or causal link are necessary to alleviate the burden of proof for injured parties, as the company itself has the best access to information on its AI system. On the other hand, a strict liability regime would stifle the company's incentive to innovate and require it to have sufficient funds to compensate all victims.159 The latter could be countered by imposing a liability cap,160 or a mandatory liability insurance with a minimum amount of coverage, as has been suggested in the literature on self-driving cars.161 Some authors contend that companies should be liable for the harms of their "employed" algorithms just like they currently are for the harms of their human employees,162 but this idea is not appropriate for jurisdictions where directors – and likewise robo-directors – are not considered employees.
In the literature, a plethora of alternative liability approaches can be found for artificial governance intelligence. Regarding self-driving subsidiaries, for instance, a general liability of the controlling shareholder for corporate debts (i.e. piercing of the corporate veil) seems well founded vis-à-vis tort creditors.163 Others argue that actions against the AI system itself should be made possible, which purportedly necessitates bestowing it with legal personality.164 The latter condition ought to be discarded, however, as legal personhood is not a prerequisite for states to grant rights or duties to an entity. On the contrary, legal personhood is merely a linguistic symbol or heuristic formula used to conveniently label a set of legal capacities designated
157 J. ARMOUR and H. EIDENMÜLLER, “Self-Driving Corporations?” in H. EIDENMÜLLER and G. WAGNER (eds.),
Law by Algorithm, Tübingen, Mohr Siebeck, 2021, 180-182.
158 For a general account on fault liability versus strict liability, see S. SHAVELL, “Strict Liability versus Negligence”, The
potential, a liability cap may not be justified; see e.g. M. CHATZIPANAGIOTIS and G. LELOUDAS, “Automated Vehicles
and Third-Party Liability: A European Perspective”, University of Illinois Journal of Law, Technology & Policy 2020, 192-193.
161 H. EIDENMÜLLER, “The Rise of Robots and the Law of Humans”, in H. EIDENMÜLLER and G. WAGNER (eds.), Law
72, 844-848.
163 J. ARMOUR and H. EIDENMÜLLER, “Self-Driving Corporations?” in H. EIDENMÜLLER and G. WAGNER (eds.),
Law by Algorithm, Tübingen, Mohr Siebeck, 2021, 182. See more generally on piercing of the corporate veil: H. HANS-
MANN and R. KRAAKMAN, “Toward Unlimited Shareholder Liability for Corporate Torts”, The Yale Law Journal 1991,
vol. 100, 1879-1934.
164 M. PETRIN, “Corporate Management in the Age of AI”, Columbia Business Law Review 2019, vol. 3, 1015-1016.
by the state,165 whose suitability for AI is disputed.166 It seems more appropriate to give AI systems limited rights and duties that enable actions against them, instead of granting these systems the full package that legal personhood entails. This hypothesis prompts new questions pertaining to such entities' lack of financial resources and the applicable standards of behaviour.167
Recently, the European Commission proposed a differentiated approach of fault and strict liability for damage caused by AI. For material damage (including data losses) suffered by natural persons, a strict liability regime for defective AI is suggested, which results in liability of the manufacturer – instead of the company using AI – for defects in the algorithm.168 For all other damage in a non-contractual setting, the Commission did not opt for a strict liability regime. Instead, a (limited and conditional) rebuttable presumption of a causal link between fault and AI output was put on the table, whilst maintaining the Member States' own regimes of fault liability.169 These proposals came after years of discussion on the scope of application of the 1985 Product Liability Directive.170 The result, however, has little significance for governance intelligence. Moreover, harm caused by AI in a contractual and commercial context (B2B contracts) will be of a greater magnitude, but is not covered by any of the proposed regulatory measures in the EU as of now.171 Therefore, Member States should opt for a broader scope of application of their transposition laws, which the ECJ has allowed in other contexts.172
IV. CONCLUSION
This research paper has argued that corporate law is about to embark on a new era:
the era of artificial governance intelligence. This paper has shown that AI is increasingly de-
ployed as a support tool for the core functions of the board of directors and corporate man-
agement, such as monitoring, corporate strategy setting and daily management. Since AI is
said to rationalize decision-making, reduce the risk of groupthink and strengthen the inde-
pendence of board members, the assistive role of AI in corporate governance is expected to
soon transform into a leading one, as more corporations attempt to appoint algorithms as
directors. Consequently, the World Economic Forum made
165 J. ARMOUR, H. HANSMANN and R. KRAAKMAN, “What Is Corporate Law?” in R. KRAAKMAN et al. (eds.), The
Anatomy of Corporate Law: A Comparative and Functional Approach, Oxford, Oxford University Press, 2017, 8; S.A.
GRAMITTO RICCI, "Artificial Agents in Corporate Boardrooms", Cornell Law Review 2020, vol. 105, 892-893.
166 See no. 14.
167 M. PETRIN, “Corporate Management in the Age of AI”, Columbia Business Law Review 2019, vol. 3, 1016.
168 See the Draft Revised Product Liability Directive.
169 Art. 4 Draft AI Liability Directive.
170 E.g. G. WAGNER, “Robot Liability”, 2018, https://ssrn.com/abstract=3198764; M. CHATZIPANAGIOTIS and G.
LELOUDAS, “Automated Vehicles and Third-Party Liability: A European Perspective”, University of Illinois Journal of Law,
Technology & Policy 2020, 118-131; J. DE BRUYNE, E. VAN GOOL and T. GILS, “Tort Law and Damage Caused by AI
Systems” in J. DE BRUYNE, and C. VANLEENHOVE (eds.), Artificial Intelligence and the Law, Antwerpen, Intersentia,
2021, 359-403.
171 See for a critical account on both the Draft AI Liability Directive and the Draft Revised Product Liability Directive: P.
HACKER, “The European AI Liability Directives – Critique of a Half-Hearted Approach and Lessons for the Future”,
2022, https://ssrn.com/abstract=4279796.
172 E.g. ECJ July 12, 2012, C-602/10, SC Volksbank România SA v. Autoritatea Naţională pentru Protecţia Consumatorilor
the claim that by 2026, human directors sharing their decision-making powers with AI will
become the new normal.173
Systems of artificial governance intelligence can be classified into several categories on the
basis of their level of autonomy, which determines the allocation of decision rights between
the AI system and the board of directors. In the case of assisted intelligence, human directors
selectively rely on AI for administrative tasks. At the augmented stage, human directors use
AI output to enhance the informational basis of their decisions. Here, AI contributes to the
core decision-making or judgement work of the board, but it does not enjoy standalone deci-
sion rights. In its final, autonomous stage, AI is bestowed with independent decision rights
through a delegation of core governance powers or through its appointment as director. Here,
a hybrid board of humans and machines is possible, or all human members could be replaced
by one or more AI systems, resulting in a fused or artificial boardroom. Unless the goals of
the corporation are very narrow, the latter remains science fiction today.
The emergence of AI in the corporate realm raises many questions of corporate law, which is
tailored to human decision-makers. While assisted intelligence seems permissible under cur-
rent corporate frameworks, human directors do not have the right to delegate core governance
tasks to AI. It is uncertain, however, which decisions belong to this category. Even where del-
egation is permitted, humans will likely still have to supervise artificial governance intelli-
gence, to an extent that does not diminish efficiency gains. As boards are generally expected
to make decisions on an informed basis, a director's right to delegate limited decision rights
to AI could evolve into a duty, considering the analytical capabilities of AI. An outright ap-
pointment of AI as director seems impossible under current regimes, but via a detour, algo-
rithmic entities seem plausible. The latter poses major challenges to the foundations of
corporate law, which is focused on controlling human agency conflicts through directors' fi-
duciary duties and liability. AI, by contrast, pursues set goals, has no self-interest and does
not act in good or bad faith, but may still reflect the biases of its human controllers. Therefore,
regulatory strategies should shift to ex ante remedies, such as regulating corporate purposes
and requiring rule-compliant behaviour to be embedded in the code of robo-directors. Liabil-
ity for algorithmic failure will likely be attributed to the corporation itself, but policy debates
on the burden of proof should determine the very nature of this liability regime.
New phenomena such as entities without leaders (DAOs) or without members (algorithmic
entities) will undoubtedly challenge corporate systems even more than robo-directors do.
Further research on the shifting anatomy of corporate law is clearly needed, to ensure that
novel corporate rules are not dictated by quickly evolving AI technology, but are based on
calm reasoning instead.
173 WORLD ECONOMIC FORUM, "Survey Report: Deep Shift - Technology Tipping Points and Societal Impact", Global
Agenda Council on the Future of Software & Society 2015, https://www3.weforum.org/docs/WEF_GAC15_Technologi-
cal_Tipping_Points_report_2015.pdf, 21.
The Financial Law Institute is a research and teaching unit in the Faculty of Law and Criminology of Ghent Uni-
versity, Belgium. The research activities undertaken within the Institute focus on various issues of company and
financial law, including private and public law of banking, capital markets regulation, company law and corporate
governance.
The Working Paper Series, launched in 1999, aims at promoting the dissemination of the research output of the
Financial Law Institute’s researchers to the broader academic community. The use and further distribution of the
Working Papers is allowed for scientific purposes only. The working papers are provisional.