
Adoption of Artificial Intelligence by Electric Utilities

2024, Energy Law Journal


FINAL 5/13/24 © COPYRIGHT 2024 BY THE ENERGY BAR ASSOCIATION

ADOPTION OF ARTIFICIAL INTELLIGENCE BY ELECTRIC UTILITIES

Daniel D. Slate, Alexandre Parisot, Liang Min, Patrick Panciatici & Pascal Van Hentenryck*

Synopsis: Adopting Artificial Intelligence (AI) in electric utilities signifies vast, yet largely untapped potential for accelerating a clean energy transition. This requires tackling complex challenges such as trustworthiness, explainability, privacy, cybersecurity, and governance, balancing these against AI's benefits. This article aims to facilitate dialogue among regulators, policymakers, utilities, and other stakeholders on navigating these complex issues, fostering a shared understanding and approach to leveraging AI's transformative power responsibly. The complex interplay of state and federal regulations necessitates careful coordination, particularly as AI impacts energy markets and national security. Promoting data sharing with privacy and cybersecurity in mind is critical. The article advocates for 'realistic open benchmarks' to foster innovation without compromising confidentiality. Trustworthiness (the system's ability to ensure reliability and performance, and to inspire confidence and transparency) and explainability (ensuring that AI decisions are understandable and accessible to a large diversity of participants) are fundamental for AI acceptance, necessitating transparent, accountable, and reliable systems. AI must be deployed in a way that helps keep the lights on. As AI becomes more involved in decision-making, we need to think about who's responsible and what's ethical. With the current state of the art, using generative AI for critical, near real-time decision-making should be approached carefully.
While AI is advancing rapidly both in terms of technology and regulation, within and beyond the scope of energy-specific applications, this article aims to provide timely insights and a common understanding of AI, its opportunities and challenges for electric utility use cases, and ultimately to help advance its adoption in the power system sector, to accelerate the equitable clean energy transition.

* Daniel D. Slate (J.D., Stanford Law School; Ph.D. Candidate, Stanford Political Science Department) is co-author of The Architecture of Privacy: On Engineering Technologies That Can Deliver Trustworthy Safeguards. Alexandre Parisot is the director of ecosystem, AI and energy systems, at Linux Foundation Energy. Liang Min is Managing Director of the Bits & Watts Initiative at Stanford University. Patrick Panciatici is Senior Scientific Advisor at RTE: Réseau de Transport d'Electricité (the French TSO). Pascal Van Hentenryck is the A. Russell Chandler III Chair and Professor at the Georgia Institute of Technology. He is also the director of the NSF AI Institute for Advances in Optimization. The authors wish to thank Gary Ackerman, Regina DeAngelis, Diane Fellman, Sasha Goldberg, David Hochschild, Travis Kavulla, Cheryl LaFleur, Elliot Mainzer, Jan Pepper, John Reynolds, Ken Rider, and Aram Shumavon, many of whom agreed to interviews or conversations in December 2023 and January 2024 to discuss how utilities, regulators, and system operators are assessing and implementing artificial intelligence. The views expressed in this article are those of the authors and do not necessarily represent the views of their companies, clients, or affiliated institutions. This article does not contain or constitute legal advice.

I. Introduction
II. Current Landscape of AI Adoption By Utilities
   A. Grid-Interactive Smart Communities
   B. Energy System Resiliency
   C. Environmental Impacts
   D. Intelligent & Autonomous Operations and Maintenance
III. Navigating The Regulatory Landscape
   A. Traditional Energy Regulators and AI
   B. Privacy Regulators as Energy Regulators
IV. AI Data Privacy and Cybersecurity Challenges
   A. Critical requirements on utility data and need for clarity
   B. Privacy preserving techniques
   C. Cybersecurity and critical infrastructure protection
   D. A framework to balance confidentiality, cybersecurity and innovation for AI applications
V. Trustworthiness: How to Ensure Explainability, Transparency, Reliability and Liability
VI. Ethical Considerations
   A. Energy Requirements & Environmental Impacts of Artificial Intelligence
   B. Ethics and AI Governance
VII. Key Takeaways and Recommendations

I. INTRODUCTION

In the rapidly evolving landscape of energy systems, the integration of artificial intelligence (AI) stands at the forefront of technological innovation and efficiency. As the world increasingly prioritizes decarbonization and digitalization, AI emerges not just as a tool of convenience, but as a pivotal enabler of these critical transformations.
This article delves into the current state of AI adoption within the electric utility sector, exploring a range of practical use cases where AI is already reshaping critical operations, and touches on issues associated with further deployment of AI in the sector. While we focus on electric utilities, much of our discussion applies to other utilities more generally.

The journey towards widespread AI integration is not without its challenges. While technical complexities and substantial investment requirements are hurdles that utilities face, some of the most significant barriers are regulatory in nature. AI, as a field, continues to evolve rapidly, with regulations being debated and enacted in domains extending beyond energy systems. Although this broader context is pertinent to AI applications in energy systems, our discussion will focus on aspects specific to the energy sector. The technical, operational, economic, and political intricacies of energy systems are often complex, interwoven, and unique in nature. Constructive and informed dialogue among various stakeholders – including regulators, policymakers, operators, and solution providers – is essential for devising relevant solutions.

Our objective with this article is to provide regulators and policymakers with a better and broader understanding of the present issues and challenges related to AI adoption in the energy sector. We begin with a technical perspective, examining potential applications under investigation and the current state of adoption in utilities. From there, we delve into the common legal and regulatory aspects emerging in discussions among experts.
Through this analysis, we aim to stimulate an informed and forward-looking policy debate, one that can pave the way for AI to fully realize its potential in revolutionizing utility operations, supporting decarbonization efforts, and leading the charge towards a more efficient, sustainable, and resilient energy future.

II. CURRENT LANDSCAPE OF AI ADOPTION BY UTILITIES

John McCarthy, considered a founding father of AI, defines it as "the science and engineering of making intelligent machines, especially intelligent computer programs."1 Loosely speaking, AI refers to the ways machines can imitate human intelligence, such as by learning from experience. Machine learning (ML), a term often used interchangeably with AI, is the study of data-driven computer algorithms that improve automatically through experience. Deep learning, natural language processing, and neural networks are among the many approaches to machine learning. Definitions of machine learning focus more narrowly on data, learning, and making predictions and decisions. In this section, we discuss AI/ML as a toolbox of approaches and algorithms that use data to solve electric utility problems.

The utility sector is undergoing a swift digital transformation, leveraging advanced sensors and deploying advanced computing technologies. While AI techniques, widely successful in various industries, are undergoing pilot programs in the utility sector, the industry's high-reliability standards and rigorous regulations contribute to a conservative and deliberate approach to adopting AI/ML technologies. EPRI and Stanford University co-hosted a series of meetings in 2021, bringing together over 100 different utilities, universities, national labs, and AI organizations to connect the two industries and identify opportunities.
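To make the "toolbox" framing concrete, the following minimal sketch (our own illustration with made-up numbers, not an example from the article or from any utility) shows machine learning in its simplest form: a model whose predictions improve as it is fit to historical utility data.

```python
# Minimal illustration of machine learning as "improving through experience":
# fit a straight-line model of daily peak load versus afternoon temperature.
# All data and numbers below are made up for illustration only.

def fit_line(xs, ys):
    """Ordinary least squares for y = a * x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = mean_y - a * mean_x
    return a, b

# Hypothetical history: afternoon temperature (deg F) and observed peak load (MW)
temps = [70, 75, 80, 85, 90, 95]
loads = [900, 950, 1010, 1080, 1160, 1250]

a, b = fit_line(temps, loads)
forecast = a * 88 + b  # predicted peak load (MW) for an 88 deg F day
```

Fit to this toy history, the model predicts a peak near 1,135 MW for an 88-degree day. Production load forecasters apply the same learn-from-data principle with far richer models and many more features.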
Through these public events, common themes across challenges and opportunities were identified and distilled into a set of grand challenges for the AI and electric power industries,2 which will be discussed in the following sections.

A. Grid-Interactive Smart Communities

Recent research from the Brattle Group estimates the potential of "load flexibility" from many distributed technologies in smart communities – including electric heat pumps, electric vehicle managed charging, and demand response – to provide additional services beyond peak capacity reductions, which could total approximately 200 GW by 2030.3 In communities, customers rarely think about when and how they use energy and are unlikely to take advantage of optimization opportunities unless those opportunities are simple and easy to use. AI/ML algorithms can assist with many of the complex optimizations required to implement and simplify these tasks seamlessly, increasing the likelihood that such initiatives succeed. In addition, AI technologies can support networking communities that interact with the power grid to optimize energy efficiency, load shifting, and usage of low- or zero-carbon generation sources for economy-wide decarbonization in a way that is equitable for the entire community. The energy systems of the future will connect and coordinate the operators of communities (homes, buildings, or whole communities) and power grids, sharing the benefits of advances in AI that could improve community-to-grid-operator communication, optimize cost, and improve energy utilization and energy equity for both producers and consumers.

1. John McCarthy, What is AI? / Basic Questions, STAN. UNIV., http://jmc.stanford.edu/artificial-intelligence/what-is-ai/ (last visited Apr. 5, 2024).
2. ELEC. POWER RSCH. INST., FIVE ARTIFICIAL INTELLIGENCE GRAND CHALLENGES FOR THE ELECTRIC POWER INDUSTRY (Sept. 2021), https://www.epri.com/research/products/000000003002022804.
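One of the complex optimizations described above can be made concrete with a toy example. The sketch below is our own deliberately simplified formulation (the prices and parameters are hypothetical, and real demand-response optimizers are far more sophisticated): it shifts a flexible load, such as overnight EV charging, into the cheapest hours of a price forecast.

```python
# Toy demand-response sketch (our own simplified formulation, not an actual
# utility algorithm): shift a flexible load -- e.g., 12 kWh of overnight EV
# charging at up to 4 kW -- into the cheapest hours of a day-ahead price forecast.

def schedule_flexible_load(prices, energy_kwh, max_kw):
    """Greedily fill the cheapest hours first; returns kW drawn in each hour."""
    schedule = [0.0] * len(prices)
    remaining = energy_kwh
    for hour in sorted(range(len(prices)), key=lambda h: prices[h]):
        if remaining <= 0:
            break
        draw = min(max_kw, remaining)  # 1-hour slots, so kW and kWh coincide
        schedule[hour] = draw
        remaining -= draw
    return schedule

# Hypothetical hourly prices ($/kWh) for an 8-hour overnight window
prices = [0.30, 0.22, 0.12, 0.10, 0.09, 0.11, 0.18, 0.28]
plan = schedule_flexible_load(prices, energy_kwh=12, max_kw=4)
cost = sum(p * kw for p, kw in zip(prices, plan))
```

Here the 12 kWh land in the three cheapest hours for a total cost of $1.20; charging at full power from the first hour instead would cost more than twice as much. Real programs add constraints (deadlines, grid limits, customer comfort) and learned price and load forecasts.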
For example, NextEra Energy's ControlComm, powered by Autogrid's intelligent demand response optimization system, provides business customers with "opportunities to lower their energy bills by adjusting their energy consumption" "with an automated solution, during times of peak energy demand or high wholesale electricity prices."4

B. Energy System Resiliency

Catastrophic events such as the 2021 Texas winter storm severely disrupt the normal functioning of critical electrical grid infrastructure for significant durations. In 2022, the National Oceanic and Atmospheric Administration identified eighteen separate billion-dollar weather-related disasters in the United States; see Figure 1.

Figure 1. 2022 U.S. Billion-Dollar Weather-Related Disasters5

"From enhancing accuracy in weather forecasts to reducing disaster risks, AI is already helping," according to the World Meteorological Organization (WMO), which operates a disaster risk reduction program and multi-hazard early warning system that serves countries, communities, and humanitarian agencies.6 Extreme events can have substantial impacts on the operation of the electrical grid. AI algorithms can help predict anomalies, equipment failures, and potentially damaging events, such as wildfires, before they occur.7 This would maximize the lifetime of critical generation, transmission, and distribution assets, boosting efficiency, reducing costs, and increasing public safety and customer satisfaction.

3. Ryan Hledik et al., The National Potential for Load Flexibility, BRATTLE GRP. 1, 2, 13 (June 2019), https://www.brattle.com/wp-content/uploads/2021/05/16639_national_potential_for_load_flexibility_-_final.pdf.
4. AUTOGRID, NEXTERA ENERGY SERVICES TEAMS UP WITH AUTOGRID TO OFFER NEW DEMAND RESPONSE PROGRAMS IN PJM (June 21, 2016), https://www.auto-grid.com/news/nextera-energy-services-teamsup-with-autogrid-to-offer-new-demand-response-programs-in-pjm/.
Another potential application for AI is predicting outages in underrepresented communities by integrating grid, climate, calamity, and social science data. For example, Buzz Solutions' PowerAI, deployed at several New York utilities, automates the process of electricity infrastructure inspection, using data collected by autonomous drones as well as fault detection through a software platform with predictive analytics.8

5. NAT'L CTR. ENV'T INFO., U.S. 2022 BILLION-DOLLAR WEATHER AND CLIMATE DISASTERS (Jan. 2023), https://www.climate.gov/sites/default/files/2023-01/2022-billion-dollar-disaster-map.png.
6. UNITED NATIONS, EXPLAINER: HOW AI HELPS COMBAT CLIMATE CHANGE (Nov. 3, 2023), https://news.un.org/en/story/2023/11/1143187#:~:text=As%20extreme%20weather%20events%20unfold,local%20and%20national%20response%20plans.
7. Jonah Feigleson, AI's Role in the Fight Against Wildfires, CTR. FOR GROWTH & OPPORTUNITY AT UTAH STATE UNIV. 2 (May 23, 2023), https://www.thecgo.org/benchmark/ais-role-in-the-fight-against-wildfires/.
8. BUZZ SOLUTIONS, POWER AI SOFTWARE PLATFORM 5, https://buzzsolutions.co/powerai/ (last visited Apr. 4, 2024).

C. Environmental Impacts

While AI holds the potential to contribute to a more sustainable world, it also raises concerns about emissions that could contribute to global warming. The training process alone for OpenAI's GPT-3 LLM is estimated to have consumed 1.3 gigawatt-hours of energy, equivalent to the yearly consumption of 120 average U.S. households, and to have resulted in 552 tons of carbon emissions, matching the annual emissions of 120 U.S. cars.9 OpenAI's latest model, GPT-4, could be ten times larger. Leading IT companies are actively procuring renewable energy sources to power their data centers, with Google, for instance, committing to 100% renewable energy for all its cloud regions.10 However, a significant portion of these data centers remains connected to the grid.
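The household and car equivalences above are easy to sanity-check. The sketch below uses approximate U.S. averages (roughly 10,800 kWh of electricity per household per year and 4.6 metric tons of CO2 per passenger car per year); these averages are our assumptions, not figures taken from the cited study.

```python
# Back-of-the-envelope check of the equivalences cited above. The per-household
# and per-car averages are approximate U.S. figures (assumptions of ours,
# not numbers taken from the cited study).

TRAINING_ENERGY_KWH = 1.3e6      # ~1.3 GWh estimated for GPT-3 training
TRAINING_CO2_TONS = 552          # estimated training emissions (metric tons)

KWH_PER_HOUSEHOLD_YEAR = 10_800  # assumed average U.S. household electricity use
CO2_TONS_PER_CAR_YEAR = 4.6      # assumed average U.S. passenger-car emissions

households = TRAINING_ENERGY_KWH / KWH_PER_HOUSEHOLD_YEAR
cars = TRAINING_CO2_TONS / CO2_TONS_PER_CAR_YEAR
```

Both ratios come out to roughly 120, consistent with the household and car figures quoted in the text.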
The current grid infrastructure faces challenges, evidenced by a prolonged interconnection queue in the United States. The increasing demand for data centers may force utilities to defer the retirement of fossil fuel generation.11 Simultaneously, efforts are underway to enhance the energy efficiency of AI tools. In April 2020, MIT introduced a system designed to reduce the energy required for training and running neural networks.12 Additionally, in July 2020, researchers from Stanford University unveiled the 'experiment impact tracker' and provided recommendations for developers aiming to minimize their carbon footprint.13 These initiatives reflect a growing awareness of the environmental impact of AI and a commitment to finding sustainable solutions.

9. Alex de Vries, The growing energy footprint of artificial intelligence, JOULE (Oct. 10, 2023), https://www.cell.com/joule/abstract/S2542-4351(23)00365-3.
10. GOOGLE DATA CTRS., 24/7 CARBON-FREE ENERGY BY 2030 1, https://www.google.com/about/datacenters/cleanenergy/ (last visited Apr. 4, 2024).
11. Daniel Geiger & Ellen Thomas, Data Centers are booming. Their need for power is causing utilities to retreat on green energy., BUS. INSIDER 4-5 (Oct. 9, 2023), https://www.businessinsider.com/data-centers-energy-demand-utilities-green-renewable-2023-10.
12. Rob Matheson, Reducing the carbon footprint of artificial intelligence, MIT NEWS 2 (Apr. 23, 2020), https://news.mit.edu/2020/artificial-intelligence-ai-carbon-footprint-0423.
13. Edmund L. Andrews, AI's Carbon Footprint Problem, STAN. UNIV. 2 (Jul. 2, 2020), https://hai.stanford.edu/news/ais-carbon-footprint-problem.

D. Intelligent & Autonomous Operations and Maintenance

Automating tasks enables plant operational and grid integration efficiency improvements. It also preserves energy system assets and equipment while enabling energy system operators to focus on the most valuable maintenance, asset management, and integration tasks. AI applications such as digital twins, machine learning/reinforcement learning, machine vision, and automatic diagnostics optimize inspection, monitoring, and utilization. For example, research shows that the cumulative failure rate of wind turbine gearboxes over twenty years of operation is in the range of 30% (best case scenario) to 70% (worst case scenario). When a component like a gearbox prematurely fails, operation and maintenance (O&M) costs increase and production revenue is lost. A full gearbox replacement may cost more than $350,000. Researchers are testing a physics-based machine-learning hybrid model that can identify gearbox damage in its early stages and extend the gearbox's life. If a damaged bearing within a gearbox is identified early, the repair may cost only around $45,000, a savings of nearly 90%.14 Another example is Palantir Foundry's predictive maintenance and prognostics application, which allows operators to make informed, proactive, and cost-effective maintenance decisions, reducing downtime, improving availability, and optimizing maintenance scheduling. Pacific Gas and Electric uses it to model transformer health and conduct predictive maintenance across 25,000 miles of grid wire.15

III. NAVIGATING THE REGULATORY LANDSCAPE

We now turn to regulatory and legal issues associated with the deployment of AI in the electric utility sector. The introduction of AI in power system operations involves profound transformations. Although some of the issues we will mention are not new, they are intensified by AI applications. This in turn requires that we revisit some old questions in light of the value AI may bring and the new ways it offers to approach and optimize utility processes. Because current regulatory frameworks and associated legal precedents reflect tradeoffs and compromises struck before this new age of AI, it is only natural that changes and evolutions will accompany this transformation, sometimes on profoundly fundamental aspects.
Later sections will delve into three such issues that warrant revisiting in light of AI applications: data sharing and access, trustworthiness, and ethical considerations. Before discussing these issues, however, it is useful to survey the landscape of regulations and regulatory responsibility in this area; the discussion below will focus on the United States and briefly touch on Europe as well.

A. Traditional Energy Regulators and AI

Several factors drive developments in energy law, including changes in the operation and structure of energy markets, entanglement with other areas of law such as environmental law, and the development and diffusion of new technology.16 The latter concerns us here, as artificial intelligence systems are already changing interactions between utilities and regulators and will likely change what regulators decide is reasonable to demand from utilities. At the same time, AI may stretch the capacity of current administrative law to accommodate its features and thus may drive innovations in the law itself.17

In the United States, regulators split responsibility for utility regulation between the federal and state levels. The Federal Power Act (FPA) of 1935 is the authorizing statute that defines the jurisdiction of the Federal Energy Regulatory Commission (FERC).18 Under the law, the states retain jurisdiction over their internal energy markets (typically regulated by state utility commissions), while the U.S. Congress empowered FERC to regulate interstate energy commerce – in particular, FERC is authorized to regulate the transmission and wholesale selling of electric energy in interstate commerce.19 FERC assesses whether "any rule, regulation, practice, or contract" that is "affecting" a "rate, charge, or classification" in use by a public utility subject to its jurisdiction is "unjust, unreasonable, unduly discriminatory or preferential."20 The U.S. Supreme Court has explained that while "FERC has the authority—and indeed the duty—to ensure that rules or practices 'affecting' wholesale rates are just and reasonable,"21 FERC's jurisdiction is limited to those "rules and practices that directly affect the [wholesale] rate."22 The Court stated this legal rule when it found that demand response programs were a practice meant to reduce wholesale rates, reduce pressure on the grid, and avoid service problems.23 It is readily foreseeable that deploying AI systems could be adjudged a similar "practice" that will "directly affect" wholesale electricity prices. Depending on their effect on the wholesale market and whether they may be deemed to increase wholesale competition, even utilities' AI applications that are local or state-situated may also be subject to federal regulation.24

For example, FERC issued Order No. 881 on December 16, 2021.25 With this rule, FERC required that utility transmission providers implement ambient-adjusted ratings when calculating the maximum transfer capability of their transmission lines, and also make dynamic line ratings possible, giving the transmission providers three years to effect the change.

14. Raja V. Pulikollu & Jeremy Renshaw, EPRI Develops AI Model to Reduce Wind Turbine Operations Cost, T&DWORLD 5 (Sept. 8, 2021), https://www.tdworld.com/renewables/article/21174662/epri-develops-aimodel-to-reduce-wind-turbine-operations-costs-utilities-see-significant-benefits.
15. PALANTIR, PALANTIR FOR UTILITIES 2, https://www.palantir.com/offerings/utilities/ (last visited Apr. 5, 2024).
16. Joseph T. Kelliher & Maria Farinella, The Changing Landscape of Federal Energy Law, 61 ADMIN. L. REV. 611, 613–24 (2009).
17. See Alicia Solow-Niederman, Administering Artificial Intelligence, 93 S. CAL. L. REV. 633, 694-95 (2020).
FERC justified this rule in part as fulfilling its own legal requirement under the Federal Power Act to ensure customers are paying rates that are "just and reasonable."26 For energy transmission, AI promises to enable adjustments to line ratings in real time, whether ambient-adjusted ratings or dynamic line ratings, through automated processing of temperature and weather data and changing conditions on the grid.

18. Federal Power Act, 41 Stat. 1063 (codified as amended at 16 U.S.C. § 791a et seq.).
19. 16 U.S.C. § 824(b)(1).
20. 16 U.S.C. § 824e(a).
21. FERC v. Elec. Power Supply Ass'n, 577 U.S. 260, 277 (2016).
22. Id. at 278 (citing Cal. Indep. Sys. Operator Corp. v. FERC, 372 F.3d 395, 403 (2004)) (internal quotation marks omitted); 16 U.S.C. §§ 824d–824f (2018); see FERC Rule to Improve Transmission Line Ratings Will Help Lower Transmission Costs, Docket No. RM20-16 (Dec. 16, 2021), https://www.ferc.gov/newsevents/news/ferc-rule-improve-transmission-line-ratings-will-help-lower-transmission-costs. For a detailed discussion of the meaning of "just and reasonable" as glossed by the courts, see Steve Isser, Just and Reasonable: The Cornerstone of Energy Regulation (Energy Law and Economics Working Paper 2015-1, 2015); see also Sotheby Shedeck, Note, A Clarification on FERC's Discretion in Finding Just and Reasonable Rates in the Electricity Market: Public Citizen, Inc. v. FERC, 44 ENERGY L.J. 119 (2023).
23. Elec. Power Supply Ass'n, 577 U.S. at 279.
24. See Nat'l Ass'n of Regul. Util. Comm'rs v. FERC, 964 F.3d 1177, 1186 (D.C. Cir. 2020).
25. Managing Transmission Line Ratings, 177 FERC ¶ 61,179 (2021).
26. 16 U.S.C. §§ 824d–824f (just and reasonable rates); see FERC Rule to Improve Transmission Line Ratings, supra note 22. For a detailed discussion of the meaning of "just and reasonable" as glossed by the courts, see Isser, supra note 22; Shedeck, supra note 22.
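To illustrate why ambient conditions change a line's transfer capability, the following deliberately simplified sketch keeps only the convective term of the conductor heat balance, so ampacity scales with the square root of the allowable conductor temperature rise. Real ambient-adjusted and dynamic ratings follow the full IEEE Std 738 heat-balance model with wind, solar, and conductor data; all numbers here are illustrative, not from any actual line rating.

```python
import math

# Simplified ambient-adjusted rating (AAR) illustration. Only the convective
# heat-dissipation term is modeled, so allowable current scales with the
# square root of the allowable conductor temperature rise. Illustrative only.

def ambient_adjusted_rating(static_rating_amps, t_conductor_max_c,
                            t_ambient_actual_c, t_ambient_assumed_c):
    """Scale a static rating for the actual vs. assumed ambient temperature."""
    rise_actual = t_conductor_max_c - t_ambient_actual_c
    rise_assumed = t_conductor_max_c - t_ambient_assumed_c
    return static_rating_amps * math.sqrt(rise_actual / rise_assumed)

# A line statically rated 1000 A assuming 40 C ambient, 90 C max conductor temp:
cool_night = ambient_adjusted_rating(1000, 90, 10, 40)  # cooler air, more headroom
hot_day = ambient_adjusted_rating(1000, 90, 45, 40)     # hotter air, less headroom
```

Under this toy model, a 10 °C night raises the usable rating to roughly 1,265 A while a 45 °C afternoon cuts it to roughly 949 A, which is the intuition behind both ambient-adjusted and dynamic ratings: a single static assumption is conservative most hours and occasionally optimistic.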
Transmission providers can thus use AI systems to better meet their regulatory requirements; at the same time, these AI uses will likely themselves be subject to further scrutiny from regulators. Additionally, to ensure the applicability of the Mobile-Sierra doctrine's presumption that contracted rates are "just and reasonable" under the Federal Power Act, regulators should monitor whether AI is ever being used to engage in illegal market manipulation. As the Supreme Court has said, "Like fraud and duress, unlawful market activity that directly affects contract negotiations eliminates the premise on which the Mobile–Sierra presumption rests: that the contract rates are the product of fair, arms-length negotiations."27 If any given entity in the energy industry gained market power through its adoption of AI, this presumption would no longer hold true.

Under the Energy Policy Act of 2005, FERC also certifies and reviews Electric Reliability Organizations "to establish and enforce reliability standards for the bulk-power system," giving regulators broad jurisdiction over electricity reliability standards.28 These include requirements for electricity system stability and cybersecurity protection.29 National security and intelligence professionals have opined that AI can help detect and respond in real time to cybersecurity threats.30 Cyber-attacks and electronic disruptions have been consistently on the rise for years, and longstanding suggestions to disconnect electrical infrastructure from the open internet appear even less feasible than before; to the degree that utilities increasingly deploy AI, they typically become more dependent on broader network connections for access to relevant data sources.31

Additionally, in an executive order issued October 30, 2023, on "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," the White House directed the Department of Energy, alongside many other executive agencies, to assess AI threats to critical infrastructure.32 The president also directed the Department of Energy to produce and publish a report on "the potential for AI to improve planning, permitting, investment, and operations for electric grid infrastructure and to enable the provision of clean, affordable, reliable, resilient, and secure electric power to all Americans."33 While its implications and implementing rulemaking are still evolving, the executive order sets rapid deadlines for many of its instructions and can be expected to shape public and private decisions about AI development and deployment in the near future, including in the energy sector.

27. Morgan Stanley Cap. Grp. Inc. v. Pub. Util. Dist. No. 1 of Snohomish Cnty., Wash., 554 U.S. 527, 554 (2008).
28. 16 U.S.C. § 824o(a)(2), (b).
29. Id. § 824o(a)(3)-(8).
30. Diego Laje, Securing Critical Infrastructure in the Age of Artificial Intelligence, AFCEA SIGNAL (Nov. 17, 2023), https://www.afcea.org/signal-media/securing-critical-infrastructure-age-artificial-intelligence.
31. See Amy L. Stein, Regulating Reliability, 54 HOUSTON L. REV. 1191, 1229–31 (2017), https://houstonlawreview.org/article/3936-regulating-reliability.
32. Exec. Order No. 14,110, 88 Fed. Reg. 75191, 75196, 75199 (§ 4.1(b), § 4.3) (Nov. 1, 2023), https://www.govinfo.gov/content/pkg/FR-2023-11-01/pdf/2023-24283.pdf.
33. Id. at 75208 (§ 5.2(g)(i)).

The judiciary has emphasized, however, that even after the Supreme Court's broad reading of federal jurisdiction, "States retain their authority to impose safety and reliability requirements" without federal interference.34 State utility commissions are already relying on that authority to surface and regulate utilities' use of AI.
Here, public utility commissions are engaged in a familiar regulatory role in which they have long assessed, approved, or denied utilities’ adoption of new technology.35 Utilities report their AI use cases to the commissions, such as intelligent image processing and machine learning to handle millions of images collected by drones deployed to inspect energy systems for issues.36 AI deployments can help utilities and providers assess the ground truth of their systems and prepare regulatory compliance documents, such as natural disaster mitigation plans.37 Commissions already assess and adjudicate proposed AI use cases, whether initiated by the utility or suggested by an intervenor prior to the conclusions of an administrative law judge or the final decision of the commission.38 State utility commissions also already assess whether utilities’ algorithmic systems conform to the relevant legal standards in their state.39 Additionally, regulators and the courts may also find reason to mandate the deployment of artificial intelligence if the safety benefits are such as to create an affirmative duty of care.40 State energy planning commissions and independent system operators also expect to use new AI systems to improve interconnection queues and to support energy efficiency and demand forecasting efforts, for which they have historically used previous generations of advanced data analysis technologies including neural networks.41 At the same time, regulators and legislators around the world face important choices about how exactly to categorize and type artificial intelligence systems: which concepts they apply “for constructing the meaning of AI systems in the law” 34. Nat’l Ass’n of Regul. Util. Comm’rs, 964 F.3d at 1188. 35. See Jonas J. Monast & Sarah K. Adair, Completing the Energy Innovation Cycle: The View from the Public Utility Commission, 65 HASTINGS L. J. 1345, 1347 (2014). 36. 
See, e.g., Application of San Diego Gas & Electric Company (U 902 M) to Submit its 2021 Risk Assessment and Mitigation Phase Report SDG&E 1-55–56 (Cal. P.U.C. 2021); see also Catherine J. K. Sandoval, Net Neutrality Powers Energy and Forestalls Climate Change, 9 SAN DIEGO J. CLIMATE & ENERGY L. 1, 19 (2018) (“Paired with software analytics and artificial intelligence, live video can be a powerful tool to detect grid threats or conditions”) (anticipating a similar use case). 37. See, e.g., CAL. PUB. UTIL. CODE § 8386 (wildfire mitigation). 38. See, e.g., Application of Pacific Gas & Electric Company for Approval of Its Mobile Application and Supporting Systems Pilot. (U39e)., 2020 Cal. P.U.C. Decision 20-10-003 (Cal. P.U.C. Oct. 8, 2020) 39. See, e.g., Pennsylvania Public Utility Commission Bureau of Investigation and Enforcement Office of Consumer Advocate Office of Small Business Advocate Philadelphia Industrial and Commercial Gas User Group Grays Ferry Cogeneration Partnership and Vicinity Energy Philadelphia, Inc. James M. Williford v. Philadelphia Gas Works Grays Ferry Cogeneration Partnership and Vicinity Energy Philadelphia, Inc., 2023 WL 8714853 (Pa. P.U.C. 2023) (ratemaking case governed by the just and reasonable standard announced in state law, 66 Pa. C.S. § 1301(a) and §2212(e) and 52 Pa. Code §§ 69.2701–2703, and defined by relevant U.S. Supreme Court precedent; claim against PGW for delegating customer payment plans to an algorithm with claimants arguing this violated 52 Pa. Code § 56). 40. See Amy L. Stein, Assuming the Risks of Artificial Intelligence, 102 B.U. L. REV. 979, 1028 n.275 (2022). 41. Video Interviews with Anonymous Planning and System Operating Officials (Jan. 10, 2024, and January 12, 2024). 
carry significant and divergent consequences.42 For instance, commissioners and administrative law judges will need to assess whether expenditures for AI technologies are capital investments or operating expenses, decisions with substantial implications for rate-making cases.43 Such decisions will send important signals, given how, under the traditional rate-making formula, utilities’ ability to recover capital expenses can be a powerful incentive. Along similar lines, while AI software plausibly fits regulators’ past experience with utilities’ efforts to modernize and deploy new technology, regulators should take note if utilities are proposing to build their own in-house AI-specific data centers – clearly a capital investment, but a questionable one in light of the traditional least-cost standard, given that prevailing business best practices argue for contracting out for commercially available cloud and data center services. Regulatory commissions may nonetheless also need to attend to the advantages new technologies offer to early adopters (when costs are typically higher), especially in what appears at present to be the early stages of an artificial intelligence boom.44 Regulators themselves often use AI systems to conduct their statutorily required oversight and we expect this trend to grow.
In California, as one scholar has observed, “The regulatory body tasked with ensuring that private utilities meet renewable generation, grid reliability, and emissions reduction goals relies on a mathematical model to identify gaps in energy generation buildout.”45 An additional regulatory AI use case is to detect where compliance may be difficult under the current regulatory scheme and identify or suggest where and when a new rulemaking may be needed.46 Public utility commissioners, administrative law judges, and case intervenors have expressed interest in increasing their use of artificial intelligence to support their oversight work, from drawing on past regulatory decisions to scrutinizing utilities’ proposals and models as well as the underlying data that must support them.47 Regulators can expect to receive more filings from industry that have been written or at least co-authored by generative AI. Tech companies seeking to encourage new nuclear power generation have been exploring training large language models to make the regulatory process cheaper and more efficient.48 However, with market participants and their attorneys deploying generative AI as they compose and submit regulatory filings, regulators can also expect many of the same challenges the courts are facing. There have already been many documented cases of hapless lawyers relying on AI in their court filings and litigation, only to have the courts discover that many of the sources relied upon as binding or persuasive authority are nonexistent, invented by the AI’s “hallucinations.”49 Regulators can also expect the courts to engage in complementary adjudication of AI-related cases. Recent scholarship suggests that early AI cases seeking both statutory and common law remedies are already helping set incentives and expectations in a regulatory-like function.50 It will be important for regulators to watch how this case law develops, both to identify what principles and harms the judiciary is announcing and recognizing, and to assess where courts are silent about harms, externalities, or other market failures that regulators will need to address. 42. Video Interviews with Anonymous Former and Current Regulatory Officials (Dec. 7, 2023, Jan. 10, 2024); see also Margot Kaminski, Regulating the Risks of AI, 103 B.U. L. REV. 1347, 1347 (2023). 43. This point emerged in conversations with several current and former regulators we interviewed. See Monast & Adair, supra note 35, at 1356–57. 44. See id. at 1359–60. At the same time, a point that emerged in several of our interviews with former and current regulators is a sense of the history of data technologies, whether denominated under the names of predictive analytics, big data, advanced analytics, machine learning, or artificial intelligence. Officials are wary of hype, fads, and over-promising while also interested in leveraging the real-world gains new technology is capable of delivering. 45. Sonya Ziaja, How Algorithm-Assisted Decision Making Is Influencing Environmental Law and Climate Adaptation, 48 ECOLOGY L.Q. 899, 924–25 (2021). 46. FED. ENERGY REGUL. COMM’N, STRATEGIC PLAN FY 2018–2022 6 (2018) (describing “algorithmic screens” of market data); see also Cary Coglianese & Lavi M. Ben Dor, AI in Adjudication and Administration, 86 BROOK. L. REV. 791 (2021). 47. Video Interviews with Anonymous Regulatory Officials (Jan. 10, 2024). B. Privacy Regulators as Energy Regulators By adopting AI systems, utilities may also find themselves subject to additional regulation due to privacy laws. Supranational jurisdictions like the European Union are already implementing comprehensive AI regulation.
As of this writing, the European Parliament has just passed the Artificial Intelligence Act, a new regulation soon to come into force that will have substantial implications for both providers and organizations that deploy AI systems.51 In the absence of a federal data privacy statute in the U.S., many states have gone ahead with their own legislation, a process we expect to continue. Much of this state legislation is new and the implications have not yet been tested in the courts. For example, in the state of Washington, to the degree that data about a person’s health could be inferred from energy use data, utilities deploying AI to data collected in that state may need to confirm they fall within a defined exception of the new My Health My Data Act, coming into full force in 2024.52 At one extreme, the Indiana Data Privacy Law, set to come into effect January 1, 2026, 48. Jennifer Hiller, Microsoft Targets Nuclear to Power AI Operations, WALL ST. J. (Dec. 12, 2023), https://www.wsj.com/tech/ai/microsoft-targets-nuclear-to-power-ai-operations-e10ff798. 49. See, e.g., Eugene Volokh, Another Example of a Lawyer-Filed Brief that Apparently Includes Citations Hallucinated by AI, VOLOKH CONSPIRACY (Nov. 17, 2023, 3:23 PM), https://reason.com/volokh/2023/11/17/another-example-of-a-lawyer-filed-brief-that-apparently-includes-citations-hallucinated-by-ai/. 50. See Alicia Solow-Niederman, Do Cases Generate Bad AI Law?, 25 COLUM. SCI. & TECH. L. REV. (forthcoming 2024), https://ssrn.com/abstract=4680641. 51. EUR. COMM’N, PROPOSAL FOR A REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL LAYING DOWN HARMONISED RULES ON ARTIFICIAL INTELLIGENCE (ARTIFICIAL INTELLIGENCE ACT) AND AMENDING CERTAIN UNION LEGISLATIVE ACTS (Apr. 21, 2021), https://eur-lex.europa.eu/resource.html?uri=cellar:e0649735-a372-11eb-9585-01aa75ed71a1.0001.02/DOC_1&format=PDF, adopted March 13, 2024, https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138_EN.html. 
A corrigendum (errata) was published in mid-April: https://www.europarl.europa.eu/doceo/document/TA-9-20240138_EN.html. 52. WASH. REV. CODE § 19.373.010(19) (2023). explicitly exempts public utilities.53 Some older statutes already regulated utilities’ use of customer data, imposing a requirement of anonymity on aggregated data.54 The variation across states and between countries means that, in some jurisdictions, privacy regulators may find themselves becoming part of the energy regulatory system. In other jurisdictions, energy regulators may be the only ones in a position to provide meaningful oversight and accountability to an industry that holds tremendous data on the private details of millions of customers.55 IV. AI DATA PRIVACY AND CYBERSECURITY CHALLENGES Harnessing the transformative potential of AI and ML hinges critically on data access and sharing, entangled with concerns of privacy, cybersecurity, and regulatory compliance. AI and ML technologies thrive on large datasets, offering insights and predictions with precision far surpassing traditional statistical analyses. The efficacy of AI and ML in utilities is contingent upon the availability of high-quality, relevant data. A. Critical requirements on utility data and need for clarity In the utility sector, such data encompasses various sensitive aspects: 1. Privacy and confidentiality: Individual meter readings can unveil personal lifestyle choices or expose confidential economic information about companies. Such individual data is protected by general regulations like the GDPR56 and California consumer privacy laws,57 as well as explicit energy-specific privacy laws and regulations. This limits the collection and use of meter data to specific entities and to specific needs and uses. Any other use may require explicit consent.
Some AI and ML applications rely on mining large amounts of granular data without a priori identification of relevant features, and are structurally limited by such privacy and confidentiality restrictions; 2. Intellectual property: Operational data can reveal proprietary techniques and processes through reverse engineering. AI and ML prove very effective at identifying underlying patterns and information from available data, and therefore reinforce this risk; 3. Critical Infrastructure Protection: Data security is paramount to prevent malicious attacks on essential utility services. Restricting data access has long been a common practice to limit exposure to this risk. However, in practice, this limits innovation and interoperability, while much of this information (like the physical location of the infrastructure or equipment parameters) can in any case be obtained or estimated through other means. Reliance on data access restriction can then give a false sense of security and divert efforts from securing highly critical data and from ensuring physical security and cybersecurity through more robust techniques and systems. 53. IND. CODE § 24-15-1-1(b)(6) (2023). 54. See, e.g., WASH. REV. CODE § 19.29A.100(8) (2023). 55. See Kevin Frazier, Updating the Legal Profession for the Age of AI, YALE J. ON REG. (Dec. 6, 2023), https://www.yalejreg.com/nc/updating-the-legal-profession-for-the-age-of-ai-by-kevin-frazier. 56. The General Data Protection Regulation, first enacted in 2016, defines data protection rules in the European Union and when data collection impacts EU citizens. Commission Regulation 2016/679, 2016 O.J. (L 119) 1–88. 57. CAL. CIV. CODE § 1798.100 (2020). The discussion around AI, privacy, and data sharing in energy systems is not new but is becoming increasingly critical with AI’s growth. Ensuring open and equal access to utility data while meeting the requirements above is vital for fostering transparency and innovation.
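One family of techniques for reconciling these requirements is differential privacy, under which calibrated noise masks any single customer's contribution to a released statistic. The short sketch below applies the Laplace mechanism to an aggregate consumption total; the meter readings, contribution bound, and privacy budget epsilon are all fabricated assumptions for illustration, not a production design.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Fabricated stand-in for confidential smart-meter data:
# hourly consumption (kWh) for 1,000 households.
readings = rng.uniform(0.0, 5.0, size=1000)

def dp_total(values, upper_bound, epsilon):
    """Release a sum with epsilon-differential privacy via the Laplace
    mechanism. Clipping caps each household's contribution at
    `upper_bound` kWh, so the L1 sensitivity of the sum is `upper_bound`."""
    clipped = np.clip(values, 0.0, upper_bound)
    noise = rng.laplace(loc=0.0, scale=upper_bound / epsilon)
    return clipped.sum() + noise

true_total = readings.sum()
private_total = dp_total(readings, upper_bound=5.0, epsilon=1.0)
print(f"true total: {true_total:.1f} kWh, private release: {private_total:.1f} kWh")
```

The same pattern extends to histograms and other aggregate queries; the epsilon parameter trades accuracy against the strength of the privacy guarantee, which is why released aggregates can remain useful while individual households stay protected.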
Stakeholders expect regulatory bodies to provide clear guidelines that facilitate both the protection of sensitive data and the advancement of AI technologies. Absent clear guidelines, utility data holders will tend to resist sharing data, fearing litigation and exposure. B. Privacy preserving techniques Issues of privacy and confidentiality are not specific to the energy sector, and techniques and frameworks have been developed58 to exploit the wealth of information in granular data while preserving confidentiality. Much of this is readily applicable to energy data. In applying these techniques, it is essential to guarantee that private and confidential information has been effectively obfuscated. Techniques like differential privacy offer a rigorous approach to this question, with several available frameworks to ensure the desired level of protection, balancing privacy requirements against the utility of the openly available data for specific use cases. Open-source frameworks can be useful in this context, providing transparent standards and tools to help certify whether data is sufficiently anonymized for sharing. Whether or not such techniques are used, it is essential to clarify the issue of consent, especially regarding household-level data, where a large population of individuals is involved. But even for data involving commercial or industrial customers, obtaining consent for large-scale studies and AI applications could be intractable. Legal clarity on disclosure to third parties and acceptable use is key and needs to be provided by dedicated privacy regulations.59 C. Cybersecurity and critical infrastructure protection Section 215A of the Federal Power Act, added by the FAST Act of 2015, mandates regulations to prohibit the disclosure of Critical Electric Infrastructure Information (CEII), as designated by FERC or DOE.
The Commission further defined60 CEII as: [S]pecific engineering, vulnerability, or detailed design information about proposed or existing critical infrastructure that: (i) Relates details about the production, generation, transportation, transmission, or distribution of energy; (ii) Could be useful to a person in planning an attack on critical infrastructure; [...] (iv) Does not simply give the general location of the critical infrastructure. 58. See, e.g., Georgina Evans et al., Statistically Valid Inferences from Privacy-Protected Data, 117 AM. POL. SCI. REV. 1275 (2023). 59. Decision Adopting Rules to Protect the Privacy and Security of the Electricity Usage Data of the Customers of Pacific Gas and Electric Company, Southern California Edison Company, and San Diego Gas & Electric Company, 2011 Cal. P.U.C. Decision 11-07-056 2-3 (Cal. P.U.C. July 28, 2011). 60. 18 C.F.R. § 388.113(c)(2) (2024). In the context of data access for AI applications, this broad definition can be interpreted to apply to most, if not all, detailed data about electric infrastructure and systems. Regulations allow and facilitate voluntary sharing of data between utilities, operators, and government entities, taking into account the Electric Reliability Organization’s (ERO) Critical Infrastructure Protection (CIP) standards. However, beyond these authorized entities, information sharing is severely restricted. Cybersecurity and protection of critical electric infrastructure is a paramount concern given its strategic nature and importance for national security. While stringent measures are well justified given clear and present threats, in practice, current CEII regulations create a black-and-white approach to data access that can be detrimental to AI innovation and adoption. They make it very difficult for non-utility partners to access realistic data to support R&D and innovation efforts.
Here again, striking the right balance between advancement of AI adoption and national security is a key challenge facing regulators and policymakers. D. A framework to balance confidentiality, cybersecurity and innovation for AI applications To address these challenges, establishing legal and regulatory frameworks is crucial. First, one should ensure the collection and secure handling of granular, high-quality energy data. This requirement will primarily fall onto electric utilities and system operators, especially regarding meter data, making them the focal point of grid and grid-facing AI applications. For customer-facing services, meter data should be accessible to third parties with the customer’s consent, under equal conditions but with equally stringent requirements regarding privacy and cybersecurity. The foundations for third-party access to granular energy data were developed in the 2010s with the deployment of smart meters and energy efficiency or benchmarking programs.61 AI and ML have the potential to improve such services greatly, which makes such programs even more critical. In addition, regulatory provisions can mandate that a neutral public entity have access to this data for transparency and audit purposes. Utilities and such entities can be mandated to release aggregate data through open data transparency programs, again allowing open and equal access to such data to support the development and provision of AI- and ML-based services. Facilitating partnerships under confidentiality agreements with research institutions can enable innovation and comprehensive analysis. These institutions should have secure data handling capabilities or access to such facilities. Data owners can also provide dedicated private spaces on their own infrastructure for third parties to deploy their algorithms and test them on confidential and sensitive data. In both cases, however, confidentiality agreements prevent peer-reviewed, published research: other parties cannot reproduce, benchmark, and build upon the developed techniques. 61. See, e.g., PAC. GAS AND ELEC. CO., ELECTRIC RULE NO. 25: RELEASE OF CUSTOMER DATA TO THIRD PARTIES (2018), https://www.pge.com/tariffs/assets/pdf/tariffbook/ELEC_RULES_25.pdf; PAC. GAS AND ELEC. CO., ELECTRIC RULE NO. 27: PRIVACY AND SECURITY PROTECTIONS FOR ENERGY USAGE DATA (2012), https://www.pge.com/tariffs/assets/pdf/tariffbook/ELEC_RULES_27.pdf. Promoting open benchmark initiatives that encourage the use of both synthetic and real data is therefore essential to support public research and collaborative innovation. Synthetic data alone often lacks the realism necessary for effective AI training. Real data remains indispensable, both for direct use and as a foundation for generating high-quality synthetic datasets. The key is maintaining consistency and realism. One approach is training synthetic data generators on real datasets while ensuring consistency, and maintaining confidential equivalent datasets for verification by authorized entities. Even when private and sensitive information has been obfuscated, open benchmarks should strive to be realistic. They should aspire to reflect reality sufficiently that techniques developed and tested on them stand a fair chance of transferring well to real test datasets. Data holders have a key role both in the elaboration of such open benchmarks in support of research and innovation, and in the evaluation and validation of techniques and algorithms on associated real test datasets. Regulations and policy should not hinder such initiatives and could even support them, especially when research and innovation efforts are otherwise supported by public funds. Irrespective of the chosen specific policies, ensuring data quality, reliable collection and access, and meeting cybersecurity requirements imply investment in infrastructure and skilled professionals.
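The train-then-verify pattern for open benchmarks, fitting a synthetic-data generator on confidential data and then checking that headline statistics carry over before release, can be sketched as follows. Everything here is illustrative: the "real" load profiles are simulated, and a plain multivariate-normal generator stands in for whatever generative model a data holder would actually use.

```python
import numpy as np

rng = np.random.default_rng(seed=7)

# Stand-in for a confidential dataset: 500 daily load profiles,
# 24 hourly values each, with a morning and an evening peak.
hours = np.arange(24)
base = 1.0 + 0.6 * np.exp(-(hours - 8) ** 2 / 8) + 0.9 * np.exp(-(hours - 19) ** 2 / 8)
real = base + rng.normal(0.0, 0.15, size=(500, 24))

# Fit a simple multivariate-normal generator to the real data ...
mean = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# ... and sample an openly shareable synthetic benchmark from it.
synthetic = rng.multivariate_normal(mean, cov, size=500)

# Validation step: an authorized holder of the real data checks that
# headline statistics carry over before the synthetic set is released.
mean_err = np.abs(real.mean(axis=0) - synthetic.mean(axis=0)).max()
peak_err = abs(real.max(axis=1).mean() - synthetic.max(axis=1).mean())
print(f"mean profile error: {mean_err:.3f}, avg daily peak error: {peak_err:.3f}")
```

In practice the verification would run on held-back real data inside the data holder's secure environment, with richer fidelity metrics than the two shown here, and the release would also be screened for the privacy and CEII concerns discussed above.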
Cost recovery for the mandates and actions mentioned above is therefore an important aspect of the discussion, especially since the return on investment may not be direct and immediate. This comprehensive approach to data governance in AI deployment in the utility sector aims to balance the need for data access with the imperative of data protection, fostering an environment conducive to innovation while maintaining privacy and security. It can serve as a basis for discussions among utilities, regulators, policymakers, and other stakeholders. V. TRUSTWORTHINESS: HOW TO ENSURE EXPLAINABILITY, TRANSPARENCY, RELIABILITY AND LIABILITY Definition of Trustworthiness: Trustworthiness in the context of AI adoption by electrical power utilities refers to the system’s ability to ensure reliability and performance, and to inspire confidence and transparency among stakeholders, in the context of an energy transition marked by greater volatility and evolving system complexity. It encompasses a combination of factors, including the reliability of the technology, the accountability of AI-generated outcomes, and the clarity of decision-making processes. Achieving trustworthiness is essential for fostering acceptance, mitigating risks, and ensuring responsible use of AI in critical decision-making within the energy sector. Reliability for critical system operation: A conceptual framework of operational processes for electrical power utilities can be organized as three interacting layers: optimize, control, and protect. AI can be deployed in each of these layers, but with an increasing need for reliability certification. The protection layer ensures the ultimate integrity of the equipment, operators, and the population at large. In this layer, AI solutions must be certified to the highest standards of reliability, standards which themselves may need to be clearly specified.
The control layer ensures service continuity in line with regulatory procedures (e.g., Loss of Load Expectation (LOLE) criteria). This layer is also fully automated, and AI has significant potential to address its increasing complexity, but there is a strong need for establishing and enforcing rigorous validation standards. It is understood that this layer can never account for all possible system conditions, which justifies the existence of the protection layer. The optimize layer implements the market design, maximizing welfare under the physical and engineering constraints managed by the control and protect layers. Here, AI could be transformative by increasing modeling fidelity and improving risk and uncertainty management. But any AI-assisted system must be compliant with market regulations and provide the explainability and transparency essential for human operators. Explainability for multiple stakeholders: Tailoring explanations to a wide range of stakeholders is integral to building trust. Explainability ensures that AI decisions are understandable and accessible to a large diversity of participants, from technical experts to market designers, regulators, and policymakers. AI systems should have the ability to articulate the rationale behind their decisions or recommendations using the proper ontology for the relevant stakeholders. The AI should generate narratives using the existing and elucidated ontology to capture the essential features driving the recommended decisions. Translating an AI system’s complex inferences into clear, interpretable explanations that fit the cognitive models of humans is a fundamental open research issue. The understanding of human-AI interaction is still in its infancy. Human-Centered AI,62 which promotes a partnership between humans and AI based on an extension of “Thinking Fast and Slow” (Kahneman 2011), is certainly a step in the right direction. Transparency as a Pillar of Trust: Transparency is foundational to trust in AI systems.
Any AI system should be auditable by independent experts with respect to the reliability criteria mentioned previously. Transparency also involves communicating the methodologies, data sources, and decision pathways employed by the AI. The benefits of open source and open data should be carefully examined. As a result, these transparent processes will ensure that the fairness, integrity, and reliability of the AI system can be independently verified. This transparency not only builds trust among end-users and regulatory bodies but also fosters a culture of accountability within the organization. Liability and Accountability in AI Decisions: Defining liability and accountability procedures has always been fundamental in ensuring the reliability of power system operation. This challenge will be exacerbated by the deployment of AI and increasing system complexity. Most utilities accept responsibility for damages caused by their negligence, but make exceptions for events “outside” of their control. Power shutoffs pose risks beyond economic or property damage. Individuals who depend on powered medical equipment, whether at home or in a medical facility, are especially vulnerable. 62. Adam Dahlgren Lindström et al., Thinking Fast And Slow In Human-Centered AI, HAL OPEN SCI. (Feb. 17, 2023), https://inria.hal.science/hal-03991946. Defining ‘negligence’ in cases where decisions are made based on the advice of an AI assistant is an unresolved issue. As an illustrative example, in the case of PG&E63 (wildfires caused by power lines), AI could be used to assist in vegetation management and to perform preventive load shedding to avoid wildfires.64 But what if the AI assistant’s advice is wrong, but highly plausible, and causes deaths related to uncontrolled wildfires started by power lines?
Very similar problems arise with self-driving cars. One difficult issue is the need to perform a counterfactual analysis of what the operator would have done without the advice of the AI assistant. In this context, preserving traces of the interaction between the operator and the AI assistant seems essential. The ability to replay the process for post-mortem analysis is also very important to detect possible malfunctions of the AI technology, in which case the companies responsible for developing the software may also be liable. New questions are being asked about generative AI and large language models. There are specific risks associated with these general-purpose AI assistants that are not dedicated to specific use cases and technical domains. The paper “Taxonomy of Risks posed by Language Models”65 identifies twenty-one types of risk. The risk of “disseminating false or misleading information” (also called hallucination) is certainly very serious in technical utility-related activities. For the time being, it should be recommended that this type of AI assistant not be used for critical, near real-time decision-making, where verification is nearly impossible. Trustworthiness requires a clear delineation of responsibilities and mechanisms for addressing decision-making in AI-assisted systems. VI. ETHICAL CONSIDERATIONS The rapid and extensive spread of artificial intelligence has motivated a growing scholarly literature on AI ethics.66 AI systems have also been simultaneously hailed as necessary for addressing climate change and marked as ethically problematic—both for implicitly embedding particular moral judgments, values, and 63.
Order Instituting Investigation on the Commission’s Own Motion into the Maintenance, Operations and Practices of Pacific Gas and Electric Company (U39e) with Respect to its Electric Facilities; and Order to Show Cause Why the Commission Should Not Impose Penalties and/or Other Remedies for the Role PG&E’s Electric Facilities Had in Igniting Fires in Its Service Territory in 2017, at 2 (Cal. P.U.C. 2020). 64. See John McCormick, California Utilities Hope Drones, AI Will Lower Risk of Future Wildfires, WALL ST. J. (Sept. 11, 2020), https://www.wsj.com/articles/california-utilities-hope-drones-ai-will-lower-risk-of-futurewildfires-11599816601. 65. Laura Weidinger et al., Taxonomy of Risks posed by Language Models, FAccT ’22 ACM CONF. ON FAIRNESS, ACCOUNTABILITY, & TRANSPARENCY 214, 215 (2022). 66. See, e.g., Changwu Huang et al., An Overview of Artificial Intelligence Ethics, 4 IEEE TRANSACTIONS ON A.I. 799, 799-800 (2023); see also Thilo Hagendorff, The Ethics of AI Ethics: An Evaluation of Guidelines, 30 MINDS AND MACHINES 99, 99 (2020) (discussing twenty-two of the ethical principles that have circulated in AI ethics).
biases and for cloaking the real-world uncertainty behind the mathematical approximations that make such models work.67 The machine learning models that power many contemporary AI implementations have been extolled for the welfare improvements they were or still are expected to deliver.68 They have also been heavily criticized for the harms they have been observed to cause or are anticipated to bring about, for the exact same reasons arising from how machine learning techniques work in theory and practice.69 A frequent anxiety about inscrutable AI models is their opacity, which naturally invites calls for greater transparency in both their development and operation as deployed production systems.70 Utilities and regulators may be able to address some of the ethical concerns (and in Europe, legal requirements) that motivate demands for model transparency and explainability by inviting greater participation and co-development of their AI models.71 Additionally, although the public is becoming more aware of the threat of AI-driven discrimination, scholars have also suggested that AI can assist in detecting discrimination harms in ways not previously possible.72 In other words, AI generates both ethical dilemmas and opportunities. A. Energy Requirements & Environmental Impacts of Artificial Intelligence Training and operating large machine learning and AI models often takes considerable electricity and thus typically produces substantial emissions.73 The extent of this issue has led to calls for sustainable approaches to the development and deployment of AI74 and the integration of environmental ethics into discussions about AI ethics more generally.75 At the same time, while language models are large, not every AI model is, and many such systems will be relatively small, tailored to a particular, energy-specific use case.76 Jurisdictions that have sustainability or carbon-reduction commitments will need to consider how market participants’ use of AI impacts those environmental goals, as AI systems vary, with some promising to advance while others would undermine climate policy objectives.77 AI may also change the costs associated with sourcing different types of energy; it is not yet clear whether deploying AI might make fossil fuels or renewables more competitive vis-a-vis the other.78 67. See Sonya Ziaja, How Algorithm-Assisted Decision Making Is Influencing Environmental Law and Climate Adaptation, 48 ECOLOGY L.Q. 899, 902, 912–18, 920 (2021); see also Amy L. Stein, Artificial Intelligence and Climate Change, 37 YALE J. ON REG. 890 (2020) (examining AI’s promise in the energy sector). 68. See, e.g., Jon Kleinberg et al., Prediction Policy Problems, 105 AM. ECON. REV. PAPERS & PROC. 491, 494 (2015). 69. See, e.g., CATHY O’NEIL, WEAPONS OF MATH DESTRUCTION: HOW BIG DATA INCREASES INEQUALITY AND THREATENS DEMOCRACY (2016) (other AI challenges include what have been termed algorithmic harms, where software models “exploit consumers’ imperfect information or behavioral biases.”); see also Oren Bar-Gill et al., Algorithmic Harm in Consumer Markets, 15 J. LEG. ANALYSIS 1, 3 (2023). 70. For skepticism about the efficacy of increasing transparency in algorithmic systems to address their potential harms, see Deven R. Desai & Joshua A. Kroll, Trust But Verify: A Guide to Algorithms and the Law, 31 HARV. J.L. & TECH. 1, 4-5 (2017); see generally Joshua A. Kroll et al., Accountable Algorithms, 165 U. PA. L. REV. 633 (2017); see also Ziaja, supra note 67, at 3. 71. See Ziaja, supra note 67. For the value of considering legal and ethical imperatives at every stage of software development, including from the very beginning, see COURTNEY BOWMAN ET AL., THE ARCHITECTURE OF PRIVACY: ON ENG’G TECHS. THAT CAN DELIVER TRUSTWORTHY SAFEGUARDS (O’Reilly Media, Inc. 2015). 72. Jon Kleinberg et al., Discrimination in the Age of Algorithms, 10 J. LEG. ANALYSIS 113 (2018). 73. See Stein, supra note 67, at 917–18. 74. See, e.g., Aimee van Wynsberghe, Sustainable AI: AI for Sustainability and The Sustainability of AI, 1 AI & ETHICS 213, 213-14 (2021). 75. Seth D. Baum & Andrea Owe, A.I. Needs Env’t Ethics, 26 ETHICS, POL’Y & ENV’T 139 (2022). 76. See Stein, supra note 67, at 918. B. Ethics and AI Governance A traditional criterion for government legitimacy is justice, or at least a commitment to it in some form, such as the fair application of the procedures that constitute the rules of everyday life. One of the more troubling (although perhaps philosophically unsurprising) findings among computer science scholars in recent years is that alternative conceptions of fairness are mutually incompatible and cannot be simultaneously implemented in an algorithm’s code.79 Many organizations have taken to establishing AI ethics committees to work to align their AI development efforts with institutional and societal values.80 Additionally, those charged with managing electricity grids face the challenge of preventing a range of system disruptions, up to and including energy emergencies, which continue to draw scholarly attention.81 At the same time, professionals in the military, law enforcement, public health, and disaster and emergency management have been promoting the advantages of artificial intelligence for effective responses during periods of crisis.82 Yet there are definite risks to the potential junction of AI and emergency response. AI systems are often inscrutable—even to those who program them—and lack traditional accountability 77. Id.; see also Anders Nordgren, A.I. and Climate Change: Ethical Issues, 21 J. OF INFO., COMMC’N & ETHICS IN SOC. 1 (2023) (considering ethical issues arising from how AI could mitigate and/or contribute to climate change). 78. Stein, supra note 67, at 919. 79.
Jon Kleinberg et al., Inherent Trade-Offs in the Fair Determination of Risk Scores, INNOVATIONS IN THEORETICAL COMPUT. SCI. CONF. 8, https://arxiv.org/pdf/1609.05807.pdf (2017). 80. See, e.g., Steven Tiell, Create an Ethics Comm. to Keep Your AI Initiative in Check, HARV. BUS. REV. (Nov. 15, 2019), https://hbr.org/2019/11/create-an-ethics-committee-to-keep-your-ai-initiative-in-check; see also Jianlong Zhou & Fang Chen, AI Ethics: From Principles to Practice, 38 AI & SOCIETY 1, 4, 5 (2023) (listing setting up AI committees as stage two of a three-stage process proposed to operationalize AI ethics). 81. See, e.g., Amy L. Stein, Energy Emergencies, 115 NW. U. L. REV. 799 (2020). 82. Wenjuan Sun et al., Applications of artificial intelligence for disaster management, 103 NATURAL HAZARDS 2631 (2020); Ferda Ofli et al., Using Artificial Intelligence and Social Media for Disaster Response and Management: An Overview, in AI AND ROBOTICS IN DISASTER STUDIES (T. V. Vijay Kumar & Keshav Sud, eds., 2020); Nathaniel O’Grady, Automating security infrastructures: Practices, imaginaries, politics, 52 SECURITY DIALOGUE 231 (2021); Minho Lee et al., AI advisor platform for disaster response based on big data, 35 CONCURRENCY & COMPUTATION PRACT. & EXPERIENCE e6215 (2021); Ania Syrowatka et al., Leveraging artificial intelligence for pandemic preparedness and response: a scoping review to identify key use cases, 4 NPJ DIGIT. MED. 96 (2021). 2024] ADOPTION OF AI BY ELECTRIC UTILITIES 21 checks,83 while also frequently encoding biases and leading users to make prejudiced choices.84 AI may yet offer many advantages when responding to emergencies. Its potential—and in many cases, demonstrated ability—to fuse, process, and respond to vast and disparate data flows have generated considerable optimism about improving or augmenting human decision making under challenging informational conditions. 
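The incompatibility result cited in note 79 can be illustrated with a small numeric sketch. All figures below are hypothetical (not drawn from the cited work): when two groups have different base rates of the predicted outcome, a risk score that is calibrated within each group cannot also equalize false-positive rates across the groups.

```python
# Each person is a (group, risk_score, true_label) triple. Scores are
# calibrated within each group: among people scored 0.8, 80% are truly
# high-risk; among people scored 0.2, 20% are. Base rates are hypothetical:
# 0.5 for group A, 0.2 for group B.
population = (
    # Group A: 50 people scored 0.8 (40 positive), 50 scored 0.2 (10 positive)
    [("A", 0.8, 1)] * 40 + [("A", 0.8, 0)] * 10 +
    [("A", 0.2, 1)] * 10 + [("A", 0.2, 0)] * 40 +
    # Group B: all 100 people scored 0.2 (20 positive)
    [("B", 0.2, 1)] * 20 + [("B", 0.2, 0)] * 80
)

def calibration(group, score):
    """Fraction of truly positive people among those with a given score."""
    bucket = [label for g, s, label in population if g == group and s == score]
    return sum(bucket) / len(bucket)

def false_positive_rate(group, threshold=0.5):
    """Share of truly negative people flagged as high risk at the threshold."""
    flags = [(s >= threshold) for g, s, label in population if g == group and label == 0]
    return sum(flags) / len(flags)

# Calibration holds in every group/score bucket...
assert calibration("A", 0.8) == 0.8
assert calibration("A", 0.2) == 0.2
assert calibration("B", 0.2) == 0.2
# ...yet false-positive rates diverge between the groups.
print(false_positive_rate("A"), false_positive_rate("B"))  # 0.2 vs. 0.0
```

The sketch shows only one direction of the trade-off (calibration forcing unequal error rates under unequal base rates); the cited paper proves the general impossibility.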
Some of that optimism may prove to be well-founded, but there are also latent risks in delegating high-stakes decisions at critical moments to algorithms.85 A government that relies on AI can easily claim it does a better job meeting all the legitimating principles detailed in the classic theories of emergency powers: in particular, it would enjoy, to an even greater degree than a human crisis leader, the advantages of better information and speedier response in the face of imminent threats.86 Yet relying on AI in times of emergency or crisis would fragment what has long been thought a unitary, sovereign decision made by a legitimate human leader—elected by the citizenry or appointed by elected officials—and instead delegate it, at least partially, to a nonhuman, unelected, and likely inexplicable automated entity. Regulators and policymakers face the task of finding the right balance between efficacy, caution, and the participatory practices that legitimize democratic polities as they strive to ensure that participants in the energy market operate ethically.

VII. KEY TAKEAWAYS AND RECOMMENDATIONS

Adoption of AI by utilities is advancing, yet a vast potential remains untapped. The scope of AI applications in this field is extensive, signaling a transformative era ahead. Policymakers and legal professionals at the federal and state levels, including members of public utility commissions and state energy offices, are at the forefront of navigating the complex legal and regulatory landscapes that shape this emerging technology. In doing so, they need to engage in constructive and informed dialogue with other stakeholders to best navigate the complex issues of privacy, cybersecurity, explainability, transparency, liability, and AI governance, and to balance them with the transformative benefits of AI.

In the United States, the intersection of state and federal regulations adds layers of complexity. AI applications in utilities may fall under federal regulation, depending on their impact on the wholesale market, national security, or broader AI concerns, while states will add their own regulations addressing safety, privacy, and reliability. This reinforces the need for effective dialogue at all levels. The shift from model-based simulations to data-driven analysis with AI exacerbates existing concerns around data access, privacy, and cybersecurity.

83. See Danielle Keats Citron & Frank Pasquale, The Scored Society: Due Process for Automated Predictions, 89 WASH. L. REV. 1 (2014); see also FRANK PASQUALE, THE BLACK BOX SOCIETY: THE SECRET ALGORITHMS THAT CONTROL MONEY AND INFO. (2016); see also Stein, supra note 73, at 937–38.
84. Hammaad Adam et al., Mitigating the Impact of Biased A.I. in Emergency Decision-Making, 2 COMMC'NS MED. 149 (2022).
85. Cf. Asaf Tzachor et al., A.I. in a Crisis Needs Ethics with Urgency, 2 NATURE MACH. INTEL. 365 (2020).
86. JOHN LOCKE, TWO TREATISES OF GOVERNMENT 374–75 (Peter Laslett ed., Cambridge Univ. Press 1988) (1690) (on prerogative); CLINTON ROSSITER, CONSTITUTIONAL DICTATORSHIP: CRISIS GOVERNMENT IN THE MODERN DEMOCRACIES (1963); MICHAEL WALZER, JUST AND UNJUST WARS 251–68 (1977) ("supreme emergency"); John Ferejohn & Pasquale Pasquino, The Law of the Exception: A Typology of Emergency Powers, 2 INT'L J. CON. L. 210 (2004); OREN GROSS & FIONNUALA NÍ AOLÁIN, LAW IN TIMES OF CRISIS: EMERGENCY POWERS IN THEORY AND PRACTICE (1st ed. 2006); Daniel Statman, Supreme Emergencies Revisited, 117 ETHICS 58 (2006); CLEMENT FATOVIC, OUTSIDE THE LAW: EMERGENCY AND EXECUTIVE POWER (2009); EXTRA-LEGAL POWER AND LEGITIMACY: PERSPECTIVES ON PREROGATIVE (Clement Fatovic & Benjamin Kleinerman eds., 2013); Daniel D. Slate, Crisis Government's Legitimacy Paradox: Foreseeability and Unobservable Success, in INTERSECTIONS, REINFORCEMENTS, & CASCADES 248 (Daniel Zimmer, Trond Undheim & Paul N. Edwards eds., 2023).
Data availability is a pivotal factor in enabling AI and machine learning adoption, and regulations should not unduly hinder access to relevant data. Where existing regulations already permit it, data sharing should be encouraged and facilitated between utilities, system operators, and the public entities in charge of oversight and energy policy. Collaboration with non-utility stakeholders is complicated under current regulatory frameworks, yet it is critical to mobilize the innovation ecosystem around AI and machine learning, which leads the research and development of these new technologies. To create a level playing field and spur collaborative innovation, producing "realistic open benchmarks" (datasets that are closely aligned with real-world data but modified for privacy and security) is recommended as a critical enabler. These benchmarks would allow for world-leading innovation and research in AI without compromising confidentiality and security. They would also allow promising beneficial techniques to be validated against real data in partnership with the original data owners, provided the realistic open benchmarks remain sufficiently similar to those data. Policymakers and regulators should support such initiatives as the best available compromise between privacy and security concerns and the rapid development of, and global leadership in, this new technology.

Achieving trustworthiness in AI is fundamental for its acceptance and responsible use in critical decision-making within the energy sector. Trustworthiness by design is non-negotiable, especially for the most critical applications. A conceptual framework of operational processes for electric power utilities can be organized as three interacting layers that optimize, control, and protect the system. AI can be deployed in each of these layers, but with an increasing need for reliability certification. Tailoring AI explanations to various stakeholders and investing in human-AI interaction research are crucial steps in building this trust.
Transparency forms the foundation of trust in AI systems. Ensuring that AI systems are auditable by independent experts and communicating their methodologies and decision-making processes are also key steps. In taking these steps, the potential benefits of open-source and open-data approaches warrant careful consideration.

Regarding liability and accountability, a critical aspect involves analyzing what decisions would have been made without AI assistance. Preserving interaction traces between operators and AI assistants is vital. Large language models are probabilistic engines and should be coupled with other models before they are used in critical, near real-time decision-making. Reliance on AI during emergencies or crises would also represent a paradigm shift in decision-making, as it would move away from solely human-led processes toward a collaborative approach with automated systems. This change raises important questions about accountability and governance.

To tackle these issues and develop effective AI regulations, constructive dialogue among stakeholders, including regulators, policymakers, operators, and solution providers, is essential. This dialogue should balance privacy, cybersecurity, and trustworthiness requirements with AI's benefits. Exchanging best practices and innovative solutions with other industries and countries can enhance this process. Communication and outreach regarding AI deployments are also crucial. As regulators and policymakers engage with industry and academia, it is vital to also communicate with the public, gather stakeholder feedback, and conduct community outreach and transparency initiatives. AI is a new technology, and its use and regulation are still very much a work in progress; yet it is already clear that it will be instrumental in addressing climate change and the transformation of energy systems.