
Harvard Law School Forum on Corporate Governance
AI and the Role of the Board of Directors
Posted by Holly J. Gregory, Sidley Austin LLP, on Saturday, October 7, 2023

Tags: Artificial intelligence, Board of Directors, Corporate governance, R&D



Editor's Note: Holly J. Gregory is a Partner and co-chair of the Global Corporate Governance practice at Sidley Austin LLP. This post is based on the author's The Governance Counselor piece.

Artificial intelligence (AI) has the capacity to disrupt entire industries, with implications for corporate strategy and risk,
stakeholder relationships, and compliance that require the attention of the board of directors.

In 1950, Alan Turing, the father of computer science, famously asked, “Can machines think?” Since then, the
application of computer science and algorithms to collect and analyze data, identify patterns, make predictions, and
solve problems has advanced significantly. Today’s AI has become much better at mimicking aspects of human
intelligence, such as “understanding” language, “perceiving” images, or “generating” new, albeit derivative, content
(generative AI). AI is also advancing in its ability to improve its own performance (machine learning). (For more
on AI and machine learning, see Artificial Intelligence and Machine Learning: Overview on Practical Law; for more
on key terms used in descriptions of AI, machine learning, and generative AI, see Artificial Intelligence Glossary on
Practical Law.)

As a transformative technology, AI has the capacity to disrupt entire industries, creating new business opportunities
and presenting new risks for companies. Companies also play a key role in AI-related research and development
(R&D) and deployment, with the potential for considerable societal impact. Boards of directors and their advisors need
to consider:

How AI is currently used by the company and its competitors.
How AI may disrupt the company’s business and industry.
The strategic implications and risks associated with AI products and services.
The impact of AI applications on the workforce and other stakeholders.
The implications for compliance with legal, regulatory, and ethical obligations.
The governance implications of the use of AI and related policies and controls.

Board responsibility for managing and directing the company’s affairs requires oversight of the exercise of authority
delegated to management, including oversight of legal compliance and ethics and enterprise risk management. The
board’s oversight obligations extend to the company’s use of AI, and the same fiduciary mindset and attention to
internal controls and policies are necessary. Directors must understand how AI impacts the company and its
obligations, opportunities, and risks, and apply the same general oversight approach as they apply to other
management, compliance, risk, and disclosure topics.

This article sets out:

A three-part approach to aid corporate boards in their oversight of the company’s AI-related activities, including:

understanding AI as a matter of corporate strategy and risk;
considering the impact of AI activities on employees, customers, other key stakeholders, and the
environment; and
overseeing the company’s compliance with laws and regulations that are relevant to AI and the
development of related policies, information systems, and internal controls.
Practice pointers and issues that boards and their advisors should consider in connection with AI.

Understanding AI as a Matter of Corporate Strategy and Risk


As with any emerging technology, AI presents opportunities for competitive advantage and innovation while also
presenting significant risks. Boards and corporate managers need to assess the impact of AI on corporate
strategy and risk, and specifically consider:

How AI is currently used, for example, to support innovation and R&D, enhance customer experiences,
manage the supply chain, reduce risk, or otherwise improve efficiency and reduce costs.
How AI may change the industry and the business.
Strategic opportunities AI presents that have not yet been captured.
Associated operational, financial, compliance, and reputational risks.

Boards should explore with management how best to use AI to help achieve business objectives and identify further
opportunities to capitalize on AI. Consideration should be given to how AI is likely to disrupt the industry in the future,
implications for the current business model, and any changes needed for the company to capture opportunities to use
AI for competitive advantage through innovation and the creation of new business models or revenue streams.

In addition to providing opportunities for competitive advantage, AI has the potential to both create and manage risk.
On the risk creation side, AI systems are highly dependent on data, and there is a risk of bias and other errors with the
data, the algorithm, or both that could lead to erroneous and unintended outcomes. For example, the data may be biased
by the way the information is obtained or used, and the algorithms may be biased due to erroneous assumptions in
the machine learning process. High data dependency also gives rise to the risk of disputes over data rights, privacy
violations, and cybersecurity breaches.

There is also considerable concern about how AI is being developed and deployed and whether it will harm society. At
a recent Yale summit of CEOs, 42% indicated that they were concerned about the potentially catastrophic impact of AI
(see Chief Executive, At Yale CEO Event Honoring Steven Spielberg, Little Consensus on AI Future). Boards
need to understand the potential for AI-related risks and the systems in place to help manage and mitigate those risks,
including systems to safeguard data, and ensure both that the AI systems are secure and that the company is in
compliance with AI-related rules and regulations. (For more on AI risks, see What’s Market: Artificial Intelligence
Risk Factors on Practical Law.)

On the risk management side, AI applications can help identify and mitigate risks. For example, AI has proven useful
in the financial services industry for detecting credit card and other financial fraud, in cybersecurity defense by further
automating vulnerability management and intrusion detection, and in compliance applications to identify a variety of
policy violations.
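By way of illustration only, the minimal Python sketch below uses an off-the-shelf unsupervised anomaly detector to flag outlier transactions for review. The transaction features, synthetic data, and assumed fraud rate are all hypothetical; a production fraud-detection program would rely on the company’s own validated models, data, and controls.

```python
# Minimal sketch: unsupervised anomaly detection as a stand-in for fraud screening.
# All data and parameters below are hypothetical assumptions for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical features per transaction: [amount_usd, merchant_risk_score, txns_last_hour]
routine = rng.normal(loc=[50.0, 0.2, 1.0], scale=[30.0, 0.1, 1.0], size=(1000, 3))
suspicious = rng.normal(loc=[2000.0, 0.9, 8.0], scale=[500.0, 0.05, 2.0], size=(10, 3))
transactions = np.vstack([routine, suspicious])

# "contamination" encodes an assumed prior fraud rate of roughly 1%.
model = IsolationForest(contamination=0.01, random_state=0).fit(transactions)
flags = model.predict(transactions)  # -1 = flagged as anomalous, 1 = routine

print(f"Flagged {int((flags == -1).sum())} of {len(transactions)} transactions for review")
```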

Significant work is underway to help companies address AI-related risks. In January 2023, the US Department of Commerce’s National Institute of Standards and Technology (NIST) issued the Artificial Intelligence Risk Management Framework (AI RMF 1.0), which was developed with input from both the private and public sectors as a voluntary, flexible risk framework to “promote trustworthy and responsible development and use of AI systems.” The framework identifies characteristics of trustworthy AI systems, such as being “valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair with harmful bias managed.” Notably, NIST has called on companies to establish policies that define AI risk management roles and responsibilities, including for the board (see Box, NIST AI Risk Management Framework: Governance, below).

Box: NIST AI Risk Management Framework: Governance

In addition to categorizing various types of AI-related risks, the NIST AI Risk Management Framework describes four specific functions — govern, map, measure, and manage — to help organizations address AI risks. The “govern” function relates to cultivating and implementing a risk management culture and applies to all stages of an organization’s risk management. This function targets certain categories of outcomes, set forth in abbreviated form below:

Policies, processes, procedures, and practices related to the mapping, measuring, and managing of AI risks
are in place, transparent, and implemented effectively.
Accountability structures ensure appropriate teams and individuals are empowered, responsible, and trained for
mapping, measuring, and managing AI risks.
Workforce diversity, equity, inclusion, and accessibility processes are prioritized in the mapping, measuring, and
managing of AI risks.
Organizational teams are committed to a culture that considers and communicates AI risk.
Processes are in place for robust engagement with relevant AI actors.
Policies and procedures are in place to address AI risks and benefits arising from third-party software and data and other supply chain issues.

(For more on the NIST framework, including its govern function, see NIST Releases Artificial Intelligence Risk Management Framework on Practical Law.)
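Purely as an illustrative sketch (the framework does not prescribe any particular tooling), the four functions can be treated as tags on entries in a company’s AI risk register. The field names, owner, and example entry in the following Python sketch are assumptions for illustration only:

```python
# Illustrative sketch only: the NIST AI RMF does not prescribe tooling.
# Field names and the example entry are assumptions for illustration.
from dataclasses import dataclass, field
from enum import Enum

class RMFFunction(Enum):
    GOVERN = "govern"    # culture, policies, accountability structures
    MAP = "map"          # establish context, identify risks
    MEASURE = "measure"  # analyze, assess, and track identified risks
    MANAGE = "manage"    # prioritize and act on risks

@dataclass
class AIRiskEntry:
    description: str
    owner: str                    # accountable team or individual
    functions: list[RMFFunction]  # which AI RMF functions the entry supports
    controls: list[str] = field(default_factory=list)

register = [
    AIRiskEntry(
        description="Training data may encode historical hiring bias",
        owner="HR analytics, with legal and compliance",
        functions=[RMFFunction.MAP, RMFFunction.MEASURE, RMFFunction.MANAGE],
        controls=["periodic bias audit", "documented data provenance"],
    ),
]

for entry in register:
    print(entry.description, "->", [f.value for f in entry.functions])
```

A structure of this kind also gives the board a natural artifact to request: a register mapping each identified AI risk to an owner, the relevant functions, and the controls in place.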


Considering the Impact of AI Activities


AI, and in particular generative AI, raises considerable concerns about the potential for misuse and unintended
consequences. Boards and management teams need to consider the responsible corporate use of AI, and the
potential impact on employees, customers, other key stakeholders, and the environment. Consideration should be
given to avoiding and mitigating unintended consequences, including through the use of policies and internal controls
overseen by the board or an appropriate board committee.

AI can provide efficiency with routine tasks and is improving in its ability to aggregate data, recognize patterns, and
create content, with the prospect of freeing up employees to increase their focus on tasks that require judgment and
creativity. AI’s growing potential to automate skilled tasks has implications for workforce training, management, and
productivity. Boards should consider how the company’s use of AI is impacting employees and the talent pipeline,
whether employees are being trained to use AI in a manner that leverages their skills and mitigates AI risks, and what
types of policies should be implemented to encourage appropriate use of AI by employees for approved use cases —
particularly in highly regulated or otherwise high-risk contexts, such as health care, financial services, or hiring and
promotion decisions.

AI’s potential for perpetuating bias in the data sets on which it relies raises concerns about the use of AI in
employment and promotion decisions. The board should understand whether and, if so, how the company uses AI for
these purposes and what policies are in place regarding these uses. The use of AI in employment-related decisions
may be subject to regulation, and regulation in this area is likely to expand. For example, New York City Local Law
144 of 2021 prohibits employers and employment agencies from using an “automated employment decision tool”
unless the tool has been subject to a bias audit within the prior year, information about the bias audit is publicly
available, and certain notices have been provided to employees or job candidates (N.Y.C. Admin. Code §§ 20-870 to
20-874; 6 RCNY §§ 5-300 to 5-304 (rules implementing Local Law 144)). Enforcement of this law and related rules
began on July 5, 2023.
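For illustration only, the arithmetic at the heart of such a bias audit is often a selection-rate comparison across groups. The Python sketch below computes impact ratios from hypothetical counts and applies the EEOC’s four-fifths rule of thumb; an actual Local Law 144 audit must follow the rules’ prescribed categories and be conducted by an independent auditor.

```python
# Simplified illustration of the selection-rate math behind a bias audit.
# The applicant and selection counts are hypothetical assumptions.
applicants = {"group_a": 200, "group_b": 150}  # hypothetical applicant counts
selected = {"group_a": 60, "group_b": 27}      # hypothetical selections

rates = {g: selected[g] / applicants[g] for g in applicants}
highest = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest
    # The EEOC's four-fifths rule of thumb flags ratios below 0.8.
    status = "review" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f} ({status})")
```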

In 2021, the Equal Employment Opportunity Commission (EEOC) announced an initiative aimed at ensuring AI tools
used in employment decisions comply with the federal civil rights laws. The initiative includes gathering information about the adoption, design, and impact of AI technologies and issuing guidance for employers on AI use and
algorithmic fairness. The EEOC also issued, in May 2023, technical assistance regarding AI and algorithmic decision-
making tools and the potential for those tools to result in illegal discrimination under Title VII. This guidance focuses on
disparate impact in employment selection processes. Additionally, state privacy laws (such as the California Consumer
Privacy Act) and the EU General Data Protection Regulation apply to employee data and can add regulatory
obligations around profiling and automated decision-making. (For more on using AI in employment decisions, see
Artificial Intelligence (AI) in the Workplace on Practical Law.)

Boards should also understand how AI is used by customers and suppliers, as well as its impact on the environment.
The use of AI is already well-embedded in various industries, such as the automotive, e-commerce, entertainment,
financial services, health care, hospitality, insurance, logistics, manufacturing, marketing, retail, and transportation
industries. Boards may not understand the ways in which their companies or others in their industries are using AI to
interface with customers and suppliers, and the extent to which these uses involve data gathering, with privacy
implications and related concerns about bias. Additionally, boards may not appreciate the environmental impact of, for
example, training an algorithm to identify reliable patterns, which can require heavy energy use to analyze millions of
data sets.
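A rough, back-of-envelope calculation illustrates the scale involved. Every figure in the sketch below is a hypothetical assumption for illustration, not a measurement of any actual training run:

```python
# Back-of-envelope sketch of training energy use. All inputs are
# hypothetical assumptions, not measurements of any real system.
gpus = 512                 # assumed accelerator count
power_kw_per_gpu = 0.4     # assumed average draw per GPU, in kW
hours = 30 * 24            # assumed 30-day training run
grid_kg_co2_per_kwh = 0.4  # assumed grid carbon intensity

energy_kwh = gpus * power_kw_per_gpu * hours
co2_tonnes = energy_kwh * grid_kg_co2_per_kwh / 1000
print(f"~{energy_kwh:,.0f} kWh, ~{co2_tonnes:,.0f} tonnes CO2")
```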

(For more on the potential legal issues surrounding AI, see Artificial Intelligence: Key Legal Issues in the January
2023 issue of Practical Law The Journal and see Artificial Intelligence Toolkit on Practical Law.)

Overseeing AI-Related Compliance and Controls


AI raises compliance issues that require board consideration. AI systems can incorporate bias and lack transparency,
which may lead to concerns about equity and accountability. Additionally, AI systems are heavily reliant on data and
often implicate privacy and data protection regulation. The use of AI — and generative AI in particular — may have
implications for intellectual property (IP) protections.

Not surprisingly, AI is the focus of legislative and regulatory initiatives in the US and abroad. Boards need to
understand and stay apprised of these developments and oversee the company’s compliance, as well as the
development of relevant policies, information systems, and internal controls, to ensure that AI use is consistent with
legal, regulatory, and ethical obligations, with appropriate safeguards to protect against potential risks (as discussed
above). In doing so, they should be mindful of the variety of ways in which the company may face exposure to AI-
related compliance and other risks, including through AI technology that is internally developed, licensed from others,
or acquired through M&A activity.

AI-related regulation seeks to balance the interest in encouraging innovation with concerns about human rights and
civil liberties, including privacy rights, anti-discrimination interests, consumer safety and protection, IP protection,
information integrity, security, and fair business practices. Select regulatory developments in the US, EU, and UK are
outlined below. (For more on key developments in AI regulation, see Trends in AI Regulation on Practical Law.)

US
A variety of legislative and regulatory activities at both the federal and state levels are in play. While there currently is
no comprehensive AI regulation, US regulators have made clear that this is not an unregulated space. Significant
developments include:

Enactment of the National Artificial Intelligence Initiative Act of 2020 (NAIIA). In the NAIIA, which was
passed on January 1, 2021, Congress defined AI as a “machine-based system that can, for a given set of
human-defined objectives, make predictions, recommendations or decisions influencing real or virtual
environments” (National Defense Authorization Act for Fiscal Year 2021, Div. E, § 5002(3)). The NAIIA provides
for coordination across federal agencies to help accelerate AI research in promotion of economic prosperity
and national security, and amends the NIST Act to include a mission to advance and support the development of AI standards. Overall, the approach appears to be one of encouraging investment and prudent risk
management. (For more information, see New AI Initiative Act Sets Up National AI Initiative Office and
Amends NIST Act on Practical Law.)
Federal agencies’ guidance and enforcement activities. Agencies including the Consumer Financial
Protection Bureau (CFPB), Department of Justice (DOJ), Equal Employment Opportunity Commission (EEOC),
Food and Drug Administration (FDA), Federal Trade Commission (FTC), and Securities and Exchange
Commission (SEC) have issued guidance or otherwise indicated through enforcement activity that they view AI
as within the purview of their current regulatory and enforcement authorities. For example:

in April 2023, the CFPB, DOJ, EEOC, and FTC issued a joint statement emphasizing their view that
responsible innovation in the area of AI is compatible with established law and regulation;
in May 2023, the EEOC released a technical assistance document, entitled Assessing Adverse Impact
in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures
under Title VII of the Civil Rights Act of 1964, which is intended to help employers prevent the use of
AI from leading to discrimination in the workplace; and
the SEC and the Financial Industry Regulatory Authority (FINRA) have expressed expectations
regarding the “model governance” of AI for investment managers through SEC enforcement actions and
FINRA’s 2020 Artificial Intelligence (AI) in the Securities Industry Report.
The Blueprint for an AI Bill of Rights. This white paper, issued by the White House Office of Science and
Technology Policy in October 2022, expresses a set of five principles and associated practices to help guide
the design, use, and deployment of AI systems in a manner that protects the interests of individuals and
communities.
Various proposed congressional bills and policy objectives. These include:
the Algorithmic Accountability Act of 2022 (H.R. 6580; S. 3572), which would require entities that use
generative AI systems in making critical decisions pertaining to housing, health care, education, and
employment, to assess potential bias and other impacts;
the DEEP FAKES Accountability Act of 2021 (H.R. 2395), which would require transparency about the
creation and public release of false personations;
the Digital Services Oversight and Safety Act of 2022 (H.R. 6796), which would require transparency
about misinformation created by generative AI; and

the SAFE Innovation Framework, proposed in June 2023 by Senator Schumer, which is intended to
guide Senate efforts toward AI regulation and focuses on: (1) safeguarding national security and
economic security for workers; (2) supporting the deployment of responsible systems to address
concerns around misinformation and bias, IP concerns, and liability; (3) protecting elections and
promoting societal benefits while avoiding potential harms; (4) determining what information the federal
government and the public need to know about an AI system, data, or content; and (5) supporting US-
led innovation in security, transparency, and accountability in a manner that unlocks the potential of AI
and maintains US leadership in the technology.
State laws. State consumer protection laws, including privacy laws and prohibitions of unfair and deceptive
business practices, are widely viewed as applying to AI. Additionally, a number of states (and local
governments) have enacted or are considering specific legislation or regulation of AI.

EU
In June 2023, the European Parliament passed the EU Artificial Intelligence Act (EU AI Act) to hold AI developers,
providers, and users accountable for safe implementation. While additional steps are required before it becomes law,
it would require, among other things, transparency regarding copyrighted material used in training an AI system and safeguards to prevent use of illegal content.

The EU AI Act builds on the European Commission proposal of a risk-based regulatory scheme that would define four
levels of risk: unacceptable, high, limited, and minimal. Unacceptable risks are defined as AI uses that present a clear
threat to the safety, livelihoods, and rights of people and will be banned. An example is the use of certain remote
biometric identification systems in publicly accessible spaces for law enforcement purposes, except in limited cases.
High-risk uses include AI uses that could put safety or health at risk, such as in some transportation or surgery
applications, as well as uses that involve access to employment or education. High-risk AI will be subject to strict
regulation to ensure adequate risk assessment and mitigation systems, high quality datasets, traceability of results,
informed use and compliance, human oversight, and a high level of security and accuracy. (European Parliament,
Briefing: EU Legislation in Progress: Artificial Intelligence Act (June 2023).)
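Purely as an illustration of this taxonomy, the four tiers can be sketched as a simple lookup. The tier names come from the Act; the example use-case mappings below are simplified assumptions drawn from public summaries, not legal advice or the Act’s official text.

```python
# Illustrative sketch of the EU AI Act's four risk tiers as a lookup table.
# The use-case mappings are simplified assumptions for illustration only.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned"
    HIGH = "strict obligations (risk management, data quality, human oversight)"
    LIMITED = "transparency obligations"
    MINIMAL = "largely unregulated"

EXAMPLE_USES = {
    "real-time remote biometric ID in public spaces (law enforcement)": RiskTier.UNACCEPTABLE,
    "CV-screening tool used in hiring": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for use, tier in EXAMPLE_USES.items():
    print(f"{use} -> {tier.name}: {tier.value}")
```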

UK
In March 2023, the UK Department for Science, Innovation, and Technology published a white paper, AI Regulation:
A Pro-Innovation Approach, with a proposal intended to encourage responsible innovation and reduce regulatory
uncertainty through a principles-based approach that focuses on: (1) safety, security, and robustness; (2) transparency
and “explainability”; (3) fairness; (4) accountability and governance; and (5) contestability and redress. UK regulators
have been asked to consider how to implement these five principles within their specific sectors, and a public
consultation closed June 21, 2023.

In March 2023, the UK Information Commissioner’s Office updated its Guidance on AI and Data Protection,
including guidance on how to apply the principles of the UK’s General Data Protection Regulation to the use of
information in AI systems.

Practice Pointers
As corporate use of AI expands, the board needs to understand the role of AI in the business and how it is integrated
into decision-making processes. The board should also consider how AI can be used to support its own efforts, for
example, its ability to collect and analyze data and identify trends that may improve financial projections and capital
allocation decisions. AI may also help anticipate and identify potential risks, inform efforts to mitigate risks, and predict
outcomes. AI may be used in information and reporting systems to identify matters for board attention in a more timely
manner or otherwise used to improve the information available to the board.

Board oversight is key to ensuring that AI is used in a responsible and ethical manner in alignment with the company’s
strategic objectives and values. This requires that board members understand how AI is used in the company and
what potential opportunities and risks AI presents, including risks from algorithm and data bias. Mitigating these risks
requires ensuring that the data used to train AI algorithms is diverse and representative, and that the algorithms
themselves are transparent and explainable.

To effectively oversee the use of AI, the board should consider establishing clear reporting lines and metrics for
measuring the effectiveness of AI, as well as ensuring that it receives regular updates on the company’s use of AI and
any associated opportunities and risks. From a practical perspective, in approaching oversight of AI, directors can rely
on the same fiduciary mindset and attention to internal controls and policies as they apply to other matters.

Together with its advisors, the board should consider a variety of AI-related issues. The board should ensure it has an
adequate understanding of:

How the business and industry could be disrupted by AI, and what strategic opportunities and risks AI presents.
How AI is used in company processes and third-party products used by the company.
How the company is positioned to leverage its data assets, the risks that may stem from the use of data for the
AI use cases, and management’s existing processes for tracking and protecting the data used to train or be fed into AI tools.
The AI governance system management has put in place, including whether the system has appropriate input
from relevant business functions, IT, human resources, legal, risk, and compliance.
The company’s goals and why AI is the right tool for achieving them. In evaluating this issue, the board should
seek management’s input on:
whether the company has the expertise and resources to pursue a strategy that relies on AI in a
responsible way;
how resilient the company’s use of AI is in terms of cybersecurity, operations, and data access and
management;
how success will be measured;
what proof of concept will look like, and how to test for efficacy and compliance as an AI initiative
launches and as AI use develops over time;
what the key risks are and what risk mitigation tools are available; and
whether there are material disclosure considerations relevant to any audience (such as users,
regulators, business partners, or shareholders).

The board should also evaluate:

The need for or outcome of discussions with management about the NIST AI Risk Management Framework
and its application to the company.
Whether it has appropriate access to information, advice, and expertise on AI matters to be able to understand
strategic opportunities and risks and consider related controls.
What additional steps should be taken to ensure that the board is kept appropriately informed on AI matters.
Whether management has identified:
risks to the business from AI; and
any mission-critical compliance or safety risks related to the company’s use of AI and, if so, discussed
them with the board (these risks should be mapped to a board committee for more frequent and in-depth
attention, and reflected in the committee charter, agenda, and minutes).
Whether the board (or the responsible board committee):
appropriately reflects AI issues on its agenda;
is regularly updated about rapidly emerging legislative and regulatory developments related to the use of
AI;
has reviewed the company’s policies and procedures concerning the use of AI;
has considered, together with management, the implications of AI for the company’s cybersecurity,
privacy, and other compliance policies and controls programs;
has discussed with management the potential for misuse and unintended consequences from the
company’s use of AI with respect to employees, customers, other key stakeholders, and the
environment, and how to avoid or mitigate those risks;
understands the extent to which AI is used in tracking and assessing employee performance, and has
ensured that controls are in place to foster compliance with any relevant regulation;
understands who in the company is responsible for monitoring AI use, and AI-specific compliance and
risk management, and how the company ensures compliance with AI-specific requirements, such as
“secure by design” requirements; and
oversees the company’s policies and procedures related to the use of generative AI, including whether
those policies and procedures consider the potential for bias, inaccuracy, breach of privacy, and related
issues of consumer protection, cyber and data security, IP protection, and quality control.
