Computational Antitrust: An Introduction and Research Agenda


ARTICLE

Computational Antitrust:
An Introduction and Research Agenda

Dr. Thibault Schrepel, LL.M.*

Abstract. Computational antitrust is a new domain of legal informatics which seeks
to develop computational methods for the automation of antitrust procedures and
the improvement of antitrust analysis. The present article first introduces it, then
explores how agencies, policymakers, and market participants can benefit from it.
Against this background, it sets out a research agenda for the years ahead with a
view to addressing the challenges created by computational antitrust and better
understanding its limits.

* Faculty Affiliate at Stanford University CodeX Center (creator of the project on Computational
Antitrust), Assistant Professor at Utrecht University School of Law, Associate Researcher at University
of Paris 1 Panthéon-Sorbonne, and Invited Professor at Sciences Po Paris. I express my deepest gratitude
to my friend Nicolas Petit for his comments on the draft version of this article. I would also like to thank
Catalina Goanta, Kirill Ryabtsev, and Martin James Sicilian for their suggestions, and Roland Vogl for
encouraging me to start a new project at the CodeX Center. This article serves as an introduction to it.
Lastly, I am grateful that many professors and antitrust agencies have agreed to take part in this project.
No outside funding was received or relied upon for this paper.

Introduction

Antitrust 1.0 appeared in 1890 with the Sherman Act and was introduced in Europe
with the Treaty of Rome in 1957. It has been shaped by several schools of thought
(antitrust 1.1 for the Brandeis School, antitrust 1.2 for the Roosevelt School...), but
always within the framework of a textual interpretation. Antitrust 2.0 then came
along with the Chicago School in the early 1960s (antitrust 2.1 being the Harvard
School, antitrust 2.2 the post-Chicago School...). Antitrust law became more
economic in nature to fit the dynamic sectors falling within its scope. The method
matched the subject matter.

Antitrust 3.0 is emerging but remains incomplete. It appeared in the early 2010s,
when antitrust agencies shifted their focus to issues related to the digital
economy. But while there are passionate discussions about the practices
implemented by digital players, the use of technological tools to address them
remains little debated. This disconnection between diagnosis and treatment is
becoming problematic. Antitrust agencies struggle to remedy anticompetitive
practices in increasingly complex, fast-paced, and evolving markets. Soon, firms
will also suffer from this struggle, which will lead to fewer decisions and less
well-informed guidelines. Legal certainty will decrease while the number of
judicial errors rises. Against this background, one must augment antitrust law with
new technologies to make antitrust 3.0 complete. Enter “computational antitrust.”1
The present article first explains what it is (I) before discussing its potential (II),
and the challenges ahead (III).

I. What is Computational Antitrust?

First, this article discusses the core idea and concepts behind computational law
(A), after which it introduces computational antitrust (B). As shall become clear,
the challenges encountered by jurists, philosophers, and mathematicians in
computational law are also found in computational antitrust.

A – Computational Law

Computational law is a “branch of legal informatics concerned with the
mechanization of legal analysis (whether done by humans or machines).”2
Computational law is today a subject of growing enthusiasm,3 although the idea of
computing the law is centuries old. German philosopher Gottfried Wilhelm Leibniz
(1646–1716), known for his defense of rationalism, argued in the 17th century that
each legal question has a single answer.4 From then on, “if controversies were to
arise, there would be no more need of disputation between two philosophers than
1 See https://law.stanford.edu/computationalantitrust for more information about the Computational

Antitrust project at Stanford University, CodeX Center (The Stanford Center for Legal Informatics).
2 Michael Genesereth, Computational Law: The Cop in the Backseat (2015). See also Nathaniel Love &

Michael Genesereth, Computational Law (2005) (“The techniques of computational logic, applied to the
semantic rules as well as the data, form the basis of a computational law system”).
3 Google Books, Ngram Viewer, archived at
https://perma.cc/9HPD-QXEU#t1%3B%2Ccomputational%20law%3B%2Cc0.
4 Hanina Ben-Menahem, Leibniz on Hard Cases, 79 ARCH. RECHTS SOZIALPHILOS 198, 209 (1993).

between two accountants. For it would suffice for them to take their pencils in their
hands and to sit down at the abacus and say to each other (with a friend if they
wish): Let us calculate.”5

Other philosophers like Jeremy Bentham also argued that codifying the law
would help make it more practical and accessible6—which Emperor Napoleon did
in France.7 Even so, Leibniz and his successors always faced the difficulty of
codifying the entire law, which, being the product of natural languages, could not
be fully consolidated. Today, digital technologies give new life to these ambitions
of mechanizing the rule of law in its entirety (enforcement included).8 Of course,
technologies are subject to combinatorial evolution, making it very difficult to
forecast which form they will take.9

One can nonetheless imagine a world in which artificial intelligence (“AI”)10 and
blockchain, combined with quantum computing, will soon provide valuable support
by enabling a better understanding of the world’s complexity and, eventually,
capturing part of it. Multiple computational tools are already being deployed in
legal fields, such as data mining, machine learning, deep learning simulations,
natural language techniques, social epidemiology, document management, legal
text analytics, computational game theory, network analysis, and information
visualization.11 These tools capture rich and detailed data about the external world,
make them computable,12 and process them to reach a broader and more granular
level of analysis.13

5 GOTTFRIED WILHELM LEIBNIZ, DISSERTATIO DE ARTE COMBINATORIA (1666).


6 Dean Alfange Jr., Jeremy Bentham and the Codification of Law, 55 CORNELL L. REV. 58 (1969). In that regard,

common law might be more complex to compute than civil law, see Sarah B. Lawsky, Formalizing the
Code, 70 TAX L. REV. 377, 379 (2017) (stressing that formalization “could help move the law closer to
legibility by a computer”). Also, see Mark A. Lemley, The Law and Economics of Internet Norms, 73 CHI.-
KENT L. REV. 1257, 1294 (1998) (arguing that common law is “doing a pretty good job of adapting existing
law to the new and uncertain circumstances of the Net”).
7 Ross Levine, Law, Endowments and Property Rights, 19 J. ECON. PERSP. 61, 63 (2005).
8 Michael A. Livermore, Rule by Rules, in COMPUTATIONAL LEGAL STUDIES: THE PROMISE AND

CHALLENGE OF DATA-DRIVEN RESEARCH 238, 261 (Ryan Whalen ed., 2020).


9 W. BRIAN ARTHUR, THE NATURE OF TECHNOLOGY: WHAT IT IS AND HOW IT EVOLVES 18 (2010).
10 Catalina Goanta, Gijs van Dijck & Gerasimos Spanakis, Back to the Future: Waves of Legal Scholarship on

Artificial Intelligence, in TIME, LAW, AND CHANGE AN INTERDISCIPLINARY STUDY 329, 335 (Sofia
Ranchordás & Yaniv Roznai eds., 2020) (creating a timeline of what the authors call “A Brief History of
Artificial Intelligence” and showing an increase in the popularity of artificial intelligence in the
academic literature, especially after 2016).
11 See Nicola Lettieri, et al., Ex Machina: Analytical platforms, Law and the Challenges of Computational Legal

Science, 10 FUTURE INTERNET 37, 39 (2018) (listing several of these tools). For a larger database (also
including non-computational tools), see Legaltechlist, https://techindex.law.stanford.edu/, archived at
https://perma.cc/9WT8-PJNY. On the subject of deep learning, see Blagoj Delipetrev, et al., Historical
Evolution of Artificial Intelligence, EUR 30221EN, Publications Office of the European Union 12 (2020) (explaining
that deep learning involves artificial neural networks and the use of multiple layers). On legal text
analytics, see GitHub, Legal Text Analytics https://github.com/Liquid-Legal-Institute/Legal-Text-
Analytics, archived at https://perma.cc/7VY9-8F6E. Explaining how these tools, such as machine
learning, will increase trust in our institutions, see Balázs Bodó, Mediated trust: A Theoretical Framework
to Address the Trustworthiness of Technological Trust Mediators, NEW MEDIA & SOCIETY 13 (2020) (arguing
that “[m]achine learning-based systems produce trust from insight” by providing more data and
transparency in the way they are analyzed).
12 On this subject, see How AI could help market intelligence gathering, Commission tender

COMP/2017/017 (Consultancy “Artificial Intelligence Applied to Competition Enforcement”), October


2017, available on Commission, Contracts > Ex-ante Publicity on Low and Middle Value Contracts, European
Commission: Competition, https://ec.europa.eu/competition/calls/exante_en.html, archived at
https://perma.cc/M6Z5-8YT3.
13 See European Commission, supra note 12. Also, J.B. Ruhl & Daniel Martin Katz, Measuring, Monitoring,

and Managing Legal Complexity, 101 IOWA L. REV. 191 (2015) (identifying potentially useful metrics and
methods for measuring or monitoring legal complexity).

In the end, all indications are that computational methods will first
supplement the functioning of our legal system and will end up taking over a large
part of it.14 This substitution process will trigger critical questions. Getting ready for
it—and, eventually, shaping it—requires discussing which principles ought to
be preserved and developed. The study of computational law as a complement,
which it currently is (i.e., a way to automate processes and improve existing
analyses), might be our best shot at it.

B – Computational Antitrust

Markets are becoming increasingly complex and dynamic in today’s economy.15
This complicates the task of antitrust agencies a little more each day. Against this
background, the implementation of computational methods is becoming necessary
to maintain and improve antitrust agencies’ ability to detect, analyze, and remedy
anticompetitive practices.16

These tools and methods are rarely used in antitrust law today; in fact, most
antitrust agencies are just beginning to acquire the technical expertise to develop
and use them. Eventually, computational tools should be widely adopted, allowing
the integration of more variables in anticompetitive cases, whether from economic
theory, business and management science, computer science, statistics, or
behavioral insights.17 These tools will also simplify merger control, freeing up some
of the teams within each antitrust agency. Accordingly, one must explore
where and how to develop computational antitrust—a specialist field of
computational law that purports to improve antitrust analysis and procedures with
the assistance of legal informatics.18

14 Commission White Paper On Artificial Intelligence – A European Approach to Excellence and Trust

(Brussels, 19.2.2020) (COM(2020) 65 final) 8 (2020) (underlining the need for public services to make use
of artificial intelligence).
15 David J. Teece, Gary Pisano & Amy Shuen, Dynamic Capabilities and Strategic Management, 18 STRATEG.

MANAG. J. 509, 515 (1997). Also, see Friedrich August von Hayek, The Theory of Complex Phenomena, in
READINGS IN THE PHILOSOPHY OF SOCIAL SCIENCES 55, 56 (1994) (defining complexity as the number of
elements determining a pattern). For empirical studies, see Statista, Digital Economy Compass 6 (2019)
(showing that the total amount of data generated across numerous digital layers was 33 zettabytes in
2018. It may grow to 175 in 2025, over 600 in 2030, and over 2,100 in 2035). Documenting the exponential
growth of numerous digital industries such as eCommerce, FinTech, digital advertising, cloud hosting,
and smart home, see Statista, Digital Economy Compass 133-243 (2020). Lastly, documenting markets’
dynamism at the national level, see the OECD Insights on Productivity and Business Dynamics
https://www.oecd.org/sti/ind/oecd-insights-on-productivity-and-business-dynamics.htm, archived at
https://perma.cc/A4CJ-MGMG.
16 Richard A. Posner, The Decline of Law as an Autonomous Discipline: 1962-1987, 100 HARV. L. REV. 761,

777 (1987) (explaining that “despite all the false starts and silly fads that have marred its reaching out
to other fields, the growth of interdisciplinary legal analysis has been a good thing, which ought to
(and will) continue”).
17 They enable “the man of the future” as described by Oliver Wendell Holmes, The Path of the Law, 10

HARV. L. REV. 457, 468 (1897) (explaining that “[f]or the rational study of the law, the black-letter man may
be the man of the present but the man of the future is the man of statistics and the master of economics”).
18 As I shall explain in this article, “better” (mainly) means faster and more accurate, in short, closer to

the analog world.



II. The Potential of Computational Antitrust

The development of computational antitrust benefits enforcers, policymakers,
and companies in all areas of antitrust law. That applies to anticompetitive
practices (A), merger control (B), and the design or monitoring of antitrust
policies (C).

A – Anticompetitive Practices

First, computational tools benefit agencies by increasing the availability of data
about markets. In doing so, they help create new forensic capabilities by
increasing the flow of information available to agencies (thereby reducing
Hayekian informational asymmetries) and, as a result, improving their ability to
detect antitrust infringements.19

These tools are most welcome considering that antitrust agencies (to this day)
mostly rely on reactive methods (such as leniency applications) to detect
collusion,20 even as the effectiveness of these methods declines.21 Considering that
technologies—such as powerful AI systems and blockchain—help market players
implement and sustain their anticompetitive practices, the use of computational
tools (as a proactive response) is becoming necessary.22

Against this background, the development of new market screening tools could
help to identify anticompetitive patterns and behaviors.23 Machine learning will

19 As reported by Matthew Newman, Online Pricing Algorithms Prompt EU Antitrust Regulator to Boost

Detection Tools, MLEX (Sept. 24, 2018), Margrethe Vestager underlined that the European Commission
could “make more use of algorithms in order to be able to police or supervise what’s going on in the
marketplace.” She shared the same desire a few months earlier in Brussels according to Foo Yun Chee,
EU Considers Using Algorithms to Detect Anti-Competitive Acts, REUTERS (May 4, 2018) (declaring, “we would
like to have our own algorithms to be out there”) [emphasis added]. As reported by Maxwell Fillion &
Rodrigo Russo, Alexandre Barreto de Souza (The Administrative Council for Economic Defense’s
President) said during the GCR Live 8th Annual Antitrust Law Leaders Forum in February 2019 that he
was pushing the agency toward “continuous investments on tools and investigative techniques (…)
capable of detecting cartels and other anticompetitive conduct,” CADE Investing in New Tools, ‘Brain
Project’ to Combat Bid-Rigging, MLEX (Feb. 1, 2019). As reported by Matthew Holehouse, the CMA is
currently developing tools to monitor digital platforms’ compliance with competition law, Competition
Regulators Need AI and Behavior Experts, MLEX (Jun. 24, 2019). Discussing how AI could help the detection of
anticompetitive practices, see Lance B. Eliot, Antitrust and Artificial Intelligence (AAI): The Antitrust Vigilance
Lifecycle and AI Legal Reasoning Autonomy (2020). In short, computational tools will give antitrust its
Billy Beane moment, see MICHAEL LEWIS, MONEYBALL: THE ART OF WINNING AN UNFAIR GAME (2003).
20 OECD, Ex Officio Cartel Investigations and the Use of Screens to Detect Cartels 9, 108 (2013).
21 See Johan Ysewyn & Siobhan Kahmann, The Decline and Fall of the Leniency Programme in Europe, 1

COMP. L. REV. 44, 45 (2018) (“In 2014 there were 46 leniency applications, which dropped to 32
applications in 2015, and finally only 24 applications have been registered in 2016”); Charles McConnell,
Type A Leniency Applications Down, US DOJ Official Says, GLOBAL COMPETITION REV. (Jun. 15, 2018),
https://globalcompetitionreview.com/article/1170614/type-a-leniency-applications-down-us-doj-official-says,
archived at https://perma.cc/ZT6J-7NER.
22 See Thibault Schrepel, Collusion by Blockchain and Smart Contracts, 33 HARV. J.L. & TECH. 117, 159 (2019)

(explaining how blockchain could increase cartels’ stability).


23 The Competition and Markets Authority, for example, has developed a monitoring tool to track and

monitor RPM in the musical instruments sector, see CMA, Restricting resale prices: how we're using data to
protect customers (Jun. 29, 2020) https://competitionandmarkets.blog.gov.uk/2020/06/29/restricting-
resale-prices-how-were-using-data-to-protect-customers/, archived at https://perma.cc/JV89-LXTB.
Generally, on the use of computational tools for detecting cartels, see Andreas von Bonin & Sharon
Malhi, The Use of Artificial Intelligence in the Future of Competition Law Enforcement, 11 J. EUR. COMP. L. &
PRAC. 468, 470 (2020); Melissa S. Baucus & Janet P Near, Can Illegal Corporate Behavior Be Predicted? An
Event History Analysis, 34 ACAD. MANAGE. J. 9, 11 (1991) (discerning industry patterns in which antitrust
violations are more likely); see Michal Gal, Algorithms as Illegal Agreements, 34 BERKELEY TECH. L.J. 67, 115
(2019) (stressing that “enforcement is likely to become an up-hill battle”); Lilian Petit, The Economic

prove helpful in that regard.24 Techniques of natural language understanding could
also automate the identification of illegal intentions when analyzing companies’
internal documents.25 The more complex (and dynamic) the practices, the more
useful these tools will be.26 In the long term, one can imagine that application
programming interfaces (“APIs”) will facilitate the transformation of data into
information and create new channels for the automatic transmission of certain data
from companies to agencies, and vice versa.27
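
To make this concrete, below is a minimal sketch (not a production tool) of a machine-learning screen over procurement data. It assumes a hypothetical file bids.csv with columns tender_id, bidder, and bid_amount; the screening variables follow the bid-rigging screens discussed by Huber and Imhof (cited in note 23); and flagged tenders are only candidates for human review.

import pandas as pd
from sklearn.ensemble import IsolationForest

bids = pd.read_csv("bids.csv")  # hypothetical input: tender_id, bidder, bid_amount

# Per-tender screening variables: low bid dispersion (cv) and a wide gap
# between the two lowest bids (rd) are classic bid-rigging markers.
def tender_features(group):
    prices = group["bid_amount"].sort_values().to_numpy()
    if len(prices) < 2:
        return pd.Series({"cv": 0.0, "rd": 0.0, "n_bids": float(len(prices))})
    cv = prices.std() / prices.mean()
    rd = (prices[1] - prices[0]) / prices[0]
    return pd.Series({"cv": cv, "rd": rd, "n_bids": float(len(prices))})

features = bids.groupby("tender_id").apply(tender_features)

# Unsupervised anomaly detection: no labeled cartel cases are required.
model = IsolationForest(contamination=0.05, random_state=0)
features["flag"] = model.fit_predict(features[["cv", "rd", "n_bids"]])
print(features[features["flag"] == -1])  # tenders to escalate for human review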

Second, computational tools enable agencies to process data more efficiently
and understand practices better.28 They are indeed improving the speed at which

Detection Instrument of the Netherlands Competition Authority (2012); Ai Deng, Cartel Detection and
Monitoring: A Look Forward, 5 J. ANTITRUST ENFORC. 488, 494 (2017) (discussing how statistical test-based
screens could help dynamic and real-time cartel detection); and Rosa M. Abrantes-Metz & Albert Metz,
Can Machine Learning Aid in Cartel Detection, ANTITRUST CHRON. COMPETITION POL’Y INT’L Dec. 2018, at
1, 3 (calling “cartel detection” a problem of classification). More specifically, on bid rigging, see Martin
Huber & David Imhof, Machine Learning with Screens for Detecting Bid-Rigging Cartels, 65 INT. J. IND. ORGAN.
277 (2019); Albert Sanchez-Graells, “Screening for Cartels” in Public Procurement: Cheating at Solitaire to Sell Fool’s
Gold?, 10 J. EUR. COMP. L. & PRAC. 199, 199 (2019); Autorité de la Concurrence & Bundeskartellamt, Algorithms
and Competition 65 (2019) (“competition authorities could develop their own machine-learning algorithms
to detect algorithmic collusion (…) competition authorities of Brazil, Germany, Mexico, Portugal, Russia,
South Korea, Spain, Switzerland and the United Kingdom have used data screening techniques to assist in
detecting cartels”). For more details at the national level, see OECD, supra note 20, at 139 (Korea); id. at 117
(India); id. at 129 (Italy); id. at 152 (Lithuania); id. at 192-193 (Chinese Taipei). For Switzerland, see Summary
of the Workshop on Cartel Screening in the Digital Era 6 (2018); for Brazil, see Non-price Effects of Mergers -
Note by Brazil (2018); and for Mexico, see David Imhof, et al., Screening for Bid-Rigging: Does It Work? 5 (2017).
24 Whether they adopt a technique of supervised learning, unsupervised learning, semi-supervised

learning, or reinforcement learning. For insights about their differences, see R. Sathya & Annamma
Abraham, Comparison of Supervised and Unsupervised Learning Algorithms for Pattern Classification, 2 INT. J.
ADV. RES. ARTIFICIAL INTELL. 34 (2013). See also Delipetrev, supra note 11, at 6. Stefan Hunt, From Maps to
Apps: the Power of Machine Learning and Artificial Intelligence for Regulators, Beesley Lecture Series on
Regulatory Economics 6 (Oct. 19, 2017) (explaining that “much of what regulators do is ultimately about
recognising patterns in the data,” and that supervised machine learning, sometimes better than
econometrics, can help find these patterns efficiently). Discussing one of the limits of machine learning,
see Battista Biggio & Fabio Roli, Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning, 84
PATTERN RECOGNIT. 317 (2018) (explaining how “adversarial input perturbations carefully crafted either
at training or at test time can easily subvert” learning-based pattern classifiers such as deep networks).
In future years, one should expect to find some companies using these adversarial techniques to mislead
antitrust agencies’ machine learning investigative tools.
25 Explaining Natural Language Processing, see Robert Dale, Law and Word Order: NLP in Legal Tech,
25 NAT. LANG. ENG. 211 (2018). In the context of antitrust, see Suzanne Rab, Artificial Intelligence,
Algorithms and Antitrust 18 COMPETITION L.J. 141, 142 (2019). Explaining Natural Language Understanding
in the context of antitrust, see Jonathan Grudin, From Tool to Partner: The Evolution of Human-Computer
Interaction § 7.5 (2017). In practice, the Federal Trade Commission is using a computational tool called
Relativity, thanks to which it can search automatically for the substitutes of specific keywords. For more
information, see RelativityOne, Analytics Overview, RelativityOne,
https://help.relativity.com/RelativityOne/Content/Relativity/Analytics/Analytics.htm, archived at
https://perma.cc/RP5U-Y38H.
26 Marcela Mattiuzzo, Algorithms and Big Data: Considerations on Algorithmic Governance and Its

Consequences For Antitrust Analysis, REV. DE ECON. CONTEMP., Jan. 2019, at 1, 14 (explaining that algorithms
used by companies could be audited by agencies using sophisticated tools). Also, Wolfgang Alschner,
Sense and Similarity: Automating Legal Text Comparison, in COMPUTATIONAL LEGAL STUDIES: THE PROMISE
AND CHALLENGE OF DATA-DRIVEN RESEARCH 9 (Ryan Whalen ed., 2020) (explaining how pattern
recognition works with large legal corpora). Also, Rob Nicholls, Regtech as an Antitrust Enforcement Tool, J.
ANTITRUST ENFORC. 12 (2020) (suggesting that antitrust agencies could create a database of expected price
distributions for certain products and use machine learning to detect deviations). Furthermore, see Schrepel,
supra note 22, at 164 (explaining the need for antitrust agencies to analyze the coding of digital products). Lastly,
see OECD, supra note 20, at 130 (underlining that market screenings are, for now, expensive).
27 Discussing the role of APIs in the context of data sharing, see Oscar Borgogno & Giuseppe Colangelo,

Data Sharing and Interoperability: Fostering Innovation and Competition Through APIs, 35 COMPUT. LAW
SECUR. REV. 1, 4 (2019). More generally, one could think of a system in which companies and agencies
will share databases featuring a continuous and automatic addition of new (specified) data.
28 Computational tools may help generate data about companies’ behaviors (as living organisms),

therefore allowing one to study them in action and increasing one’s understanding of the ecology in
which they evolve. The help of data scientists will prove crucial in the field. On the subject, see Makan
Delrahim, “Here I Go Again”: New Developments for the Future of the Antitrust Division 7 (2020) (“We’ve
trained thousands of criminal investigators, certified fraud examiners, auditors, data scientists, and

agencies analyze documents. For example, these tools have allowed the European
Commission to study 1.7 billion search queries for its investigation in the Google
Shopping case.29 In this respect, computational tools are bringing the “law time”
closer to “market time.”30

Besides, computational tools increase agencies’ analytic capacities. They do so by
allowing the comparison of large data sets across different periods and industries to
detect anomalies.31 These tools also enable agencies to integrate data from other
agencies.32 Much can be done to improve the cross-institutional use of data residing
within different agencies of the same country. Similarly, the international dialogue
between antitrust agencies, which is currently ensured by various networks such as
the ICN, the OECD, and the ECN+, could be further automated.
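
As a toy illustration of such a comparison, with invented numbers and only one of many possible tests, a two-sample Kolmogorov-Smirnov test can flag a period whose price distribution deviates from a stored baseline, in the spirit of the expected-price-distribution approach referenced in note 26.

from scipy.stats import ks_2samp

baseline_prices = [9.8, 10.1, 10.4, 9.9, 10.0, 10.2, 9.7, 10.3]    # reference period
current_prices = [11.0, 10.9, 11.1, 11.0, 10.9, 11.1, 11.0, 10.9]  # unusually tight, higher

stat, p_value = ks_2samp(baseline_prices, current_prices)
if p_value < 0.01:
    # A statistical anomaly, not proof of collusion: escalate for human review.
    print(f"Distribution shift detected (KS={stat:.2f}, p={p_value:.4f})")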

Simultaneously, computational tools enable market players to conduct more
thorough internal audits. In the future, one could imagine the design of new tools
for assessing compliance with antitrust laws (almost instantaneously). It would
require companies to compute the known parameters of any practice and assess the
associated legal liability risk using algorithms trained on antitrust laws. One
could imagine that antitrust agencies will provide companies with their own
computational tools to evaluate the risk even more accurately. These tools could
improve over time using deep reinforcement learning models.33
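
By way of illustration only, the sketch below trains a toy risk scorer on hypothetical, hand-coded past decisions; the features, labels, and numbers are all invented, and a real tool would be trained on systematically coded case law.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical coded decisions: market share, exclusivity clause (0/1),
# contract duration in years; label 1 = infringement found.
X = np.array([[0.55, 1, 5], [0.10, 0, 1], [0.70, 1, 8],
              [0.25, 0, 2], [0.60, 0, 6], [0.15, 1, 1]])
y = np.array([1, 0, 1, 0, 1, 0])

scorer = LogisticRegression().fit(X, y)
risk = scorer.predict_proba([[0.48, 1, 4]])[0, 1]
print(f"Estimated liability risk: {risk:.0%}")  # a prompt for legal review, not a verdict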

B – Merger Control

Merger control differs from investigations of anticompetitive practices. As it
turns out, these differences have implications for computational antitrust.

First, antitrust agencies must make a decision on all the concentrations notified
to them. And they have limited time to do so. As a result, the probability that
agencies are making decisions under uncertainty is greater in merger control than
in anticompetitive cases, where they pick investigations that may go on for long

procurement officials from nearly 500 federal, state, and local government agencies on recognizing
collusion risks in the procurement process”). Also discussing the “Data Analytics Project” (using data for
detecting wrongdoing) and “Collusion Analytics models that proactively identify red flags of antitrust
crimes and related fraud schemes in bid and award data,” see Id.
29 Commission, Antitrust: Commission Fines Google € 2.42 Billion for Abusing Dominance as Search Engine by

Giving Illegal Advantage to Own Comparison Shopping Service, European Commission (Jun. 27, 2017),
https://ec.europa.eu/commission/presscorner/detail/en/IP_17_1784, archived at https://perma.cc/49UU-UTBU.
30 See Richard Posner, Antitrust in the New Economy, 68 ANTITRUST L.J. 925, 939 (2001) (stressing, as early
as 2001, the “troubling” “mismatch between law time and new-economy real time”).
31 The CMA calls that “predictive coding”, see Simon Nichols, Predictive Coding: How Technology Could
Help to Streamline Cases, Gov.UK (July 24, 2020),
https://competitionandmarkets.blog.gov.uk/2020/07/24/predictive-coding-how-technology-could-help-to-streamline-cases/,
archived at https://perma.cc/GHL8-ZQLK (explaining that “predictive coding is a
computer assisted process, programmed to search a company’s electronic documents to find those that
are relevant to the case”).
32 See Nicolas Petit, Big Tech and the Digital Economy: The Moligopoly Scenario 36-37 (2020) (showing how

data in the hand of the U.S. Securities and Exchange Commission might enlighten antitrust law).
33 See Abby Norman, Researchers Develop a New Algorithm to Teach AI to Learn — and How to Adapt,

FUTURISM (Jul. 25, 2017), https://futurism.com/researchers-develop-a-new-algorithm-to-teach-ai-to-


learn-and-how-to-adapt, archived at https://perma.cc/AP4J-PFZW (explaining that deep reinforcement
algorithms work on their own by “trial and error” to achieve certain rewards); also Dom Galeon, New
Algorithm Lets AI Learn From Mistakes, Become a Little More Human, FUTURISM (Mar. 2, 2018)
https://futurism.com/ai-learn-mistakes-openai, archived at https://perma.cc/S2FS-KPBA (explaining
the different techniques for training reinforcement algorithms, for example, giving them a cookie only
when they reach the correct outcomes, or pushing them to look at their mistakes as potential successes).

periods. The more documents there are, the greater the uncertainty, considering
that agencies may face great complexity during the analytical process.34 For
example, the European Commission examined over 2.7 million documents in
the merger between Bayer and Monsanto.35 The Department of Justice has been
facing similar issues.36 These difficulties in processing all the data (in the allotted
time) are problematic considering that data are the backbone of merger analysis.37
Computational antitrust could then prove helpful by providing agencies with the
tools to analyze extensive data sets within the time constraint.38
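
A minimal sketch of what such triage could look like, assuming a hypothetical set of already-reviewed “seed” documents and an unreviewed corpus, ranks documents by textual similarity so that reviewers start with the likeliest relevant material rather than reading sequentially.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

seed_docs = ["pricing strategy for overlapping seed traits portfolio",
             "R&D pipeline overlap with the merging party"]
corpus = ["cafeteria menu for March",
          "memo: post-merger pricing of overlapping traits",
          "IT ticket: password reset"]

vec = TfidfVectorizer().fit(seed_docs + corpus)
scores = cosine_similarity(vec.transform(corpus), vec.transform(seed_docs)).max(axis=1)

# Review the corpus in order of predicted relevance instead of sequentially.
for score, doc in sorted(zip(scores, corpus), reverse=True):
    print(f"{score:.2f}  {doc}")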

Second, companies are very much in charge of the data being sent—as there are
no injunctions to produce specific records, no dawn raids, and no discovery
procedures (where applicable). This creates a first asymmetry between companies
and agencies. For example, the European Commission underlined in Dow/DuPont that
“the Parties did not mention their internal databases on crop protection patents and
did not provide their competitive intelligence reports on competitors’ crop
protection patents in their responses to several initial Commission’s requests for
information.”39 This made the analysis more “difficult” than it should have been.40
At times, this asymmetry even leads to questioning the integrity of the data. In the
WhatsApp case, for instance, the Commission imposed a €110 million sanction on
Facebook for providing misleading information.41

Once the agency has received the data, it processes it without sending it back to
the companies.42 That triggers a second asymmetry, making merger
procedures more obscure than they could be. Computational antitrust could fix
these asymmetries by introducing a systematized communication tool between
companies and antitrust agencies. It could ensure that companies send agencies
(in real time) all information in specified databases and that firms get access to it

34 Rupprecht Podszun & Sarah Langenstein, Data as an Input in Competition Law Cases: Standards,

Difficulties and Biases in EU Merger Control, in LEGAL CHALLENGES OF BIG DATA (Cannataci, Falce &
Pollicino eds., 2020) 174, 182 (2020). Also, Daniel A. Crane, Rethinking Merger Efficiencies, 110 MICH. L. REV.
347, 352 (2011) (explaining that “modern merger review requires predictions about the likely
consequences of an event that has not yet occurred and that cannot be sampled, studied, or tested”).
35 Commission, Mergers: Commission Clears Bayer’s Acquisition of Monsanto, Subject to Conditions, European

Commission (March 21, 2018), https://ec.europa.eu/commission/presscorner/detail/en/IP_18_2282,


archived at https://perma.cc/KAY5-A5T6?type=image.
36 Delrahim, supra note 28, at 4 (“Increasingly, many transactions involve the acquisition of large troves

of data. Understanding how that data may be used—and the potential competitive implications—is
becoming more and more important in merger analysis”).
37 Albert A. Foer, Prediction and Antitrust, 56 ANTITRUST BULL. 505, 505 (2011) (underlining that “almost

everything of importance about antitrust relates to the future”). Also, Thomas B. Leary, The Inevitability
of Uncertainty, 3 COMPETITION L. INT’L 27 (2007) (stressing that “virtually all antitrust analysis involves
predictions”). More generally, for a review of the literature encompassing legal judgment prediction, see
Lance B. Eliot, Legal Judgment Prediction (LJP): Amid the Advent of Autonomous AI Legal Reasoning (2020).
38 Also, OECD, Algorithms and Collusion: Competition Policy in the Digital Age 43 (2017) (underlining that

“automated computer systems” may help “to organise and select relevant information”).
39 Commission Decision of 27.3.2017 Declaring a concentration to be compatible with the internal market

and the EEA Agreement (Case M.7932 – Dow/DuPont) (C(2017) 1946) §97 (2017).
40 Id. at §133.
41 Commission, Mergers: Commission fines Facebook €110 million for providing misleading information about

WhatsApp takeover, European Commission (May 18, 2017),


https://ec.europa.eu/commission/presscorner/detail/en/IP_17_1369, archived at
https://perma.cc/6XXC-SQ3S?type=image.
42 Matthew Jennejohn, Innovation and the Institutional Design of Merger Control, 41 J. CORP. L. 167, 207 (2015).

once it has been processed.43 Besides, one could use blockchain to create
immutable databases and ensure their integrity.44
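
Below is a minimal sketch of that integrity property, independent of any particular blockchain platform and using invented records: each submission is chained to the hash of the previous one, so any later alteration of the database becomes detectable.

import hashlib, json

def append(chain, record):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(record, sort_keys=True) + prev_hash
    chain.append({"record": record, "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify(chain):
    prev_hash = "0" * 64
    for block in chain:
        payload = json.dumps(block["record"], sort_keys=True) + prev_hash
        if hashlib.sha256(payload.encode()).hexdigest() != block["hash"]:
            return False
        prev_hash = block["hash"]
    return True

chain = []
append(chain, {"doc_id": 1, "content": "internal patent database extract"})
append(chain, {"doc_id": 2, "content": "competitive intelligence report"})
print(verify(chain))                      # True
chain[0]["record"]["content"] = "edited"  # tampering...
print(verify(chain))                      # ...is detected: False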

Finally, one can imagine that computational tools will ultimately lead to more
dynamic merger analyses.45 Automated processing of big data is already allowing
agencies to understand market power better. The first advances in computational
antitrust were made in that field starting in the mid-1990s, thanks to the
implementation of simulation models.46 They are used, for example, to measure
product substitutability or efficiency claims.47 Over time, computational methods
will open new possibilities. One can expect them to allow companies and agencies
to understand the competitive pressure between non-substitutable products, to
quantify dynamic capabilities, and to model pro-innovation policies.48 Static
variables will slowly make room for dynamic ones, if so desired.
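
To give a sense of the static variables in question, the worked sketch below computes the Gross Upward Pricing Pressure Index (“GUPPI”), a standard unilateral-effects screen, from invented inputs; its very simplicity shows how much room remains for richer, more dynamic measures.

# GUPPI for product 1: the diversion ratio to product 2, times product 2's
# relative margin, times the price ratio.
def guppi(diversion_12, margin_2, price_2, price_1):
    return diversion_12 * margin_2 * (price_2 / price_1)

# Invented inputs: 30% of product 1's lost sales divert to product 2,
# which earns a 40% margin and sells at 10 against product 1's price of 8.
print(f"GUPPI: {guppi(0.30, 0.40, 10.0, 8.0):.1%}")  # 15.0%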

C – Antitrust Policies

Computational methods will benefit the design, monitoring, and evaluation of
antitrust policies. This will be achieved thanks to a combination of retrospective
and predictive analyses.

First, computational techniques will improve retrospectives of antitrust
investigations, merger control decisions, and public policies. These retrospectives
are notoriously challenging, and costly, to conduct. To be sure, antitrust agencies
carry out high-level retrospective analyses in their annual activity reports, but these
studies are mostly qualitative, and the level of aggregation is high.

43 The Assistant Attorney General Makan Delrahim called the use of new technologies “pivotal” for this

reason, see Delrahim, supra note 28, at 5 (“the Division’s new training initiative helps ensure that we are
well-equipped to assess the competitive implications of the next transaction or course of conduct where
these cutting-edge business technologies may be pivotal.”). Also, see David Colarusso & Erika J. Rickard,
Speaking the Same Language: Data Standards and Disruptive Technologies in the Administration of Justice, 50
SUFFOLK U. L. REV. 387, 404 (2017) (explaining that “the revolution will be standardized” thanks to new
“tools to share information between court users and the courts”).
44 Schrepel, supra note 22, at 121-122 (explaining that all modifications to blockchain ledgers are visible,

therefore making their manipulation or corruption knowable).


45 See, for example, Ben Mermelstein, et al., Internal versus External Growth in Industries with Scale

Economies: A Computational Model of Optimal Merger Policy, 128 J. POLIT. ECON. 301 (2019) (developing a
computational model for merger policy in a dynamic setting in which investment plays a central role).
Also, W. Brian Arthur, Complexity Economics: A Different Framework for Economic Thought, in COMPLEXITY
AND THE ECONOMY 2, 9-11 (W. Brian Arthur ed., 2015) (explaining that computation could be used to
obtain general insights about how the world functions).
46 Jonathan B. Baker, Merger Simulation in an Administrative Context, 77 ANTITRUST L.J. 451, 452 (2011)

(arguing that merger simulation should be used for creating screening techniques or establishing
presumptions). Also, Oliver Budzinski & Isabel Ruhmer, Merger Simulation in Competition Policy: A
Survey, 6 J. COMPETITION LAW ECON. 277 (2009) (underlining that antitrust agencies have been using
merger simulation models in horizontal mergers since the mid-1990s). Lastly, see Thomas Buettner,
Giulio Federico & Szabolcs Lorincz, The Use of Quantitative Economic Techniques in EU Merger Control, 31
ANTITRUST 1 (2016) (explaining that, on top of simulation models, the European Commission has been
using direct estimation methods).
47 Computational tools could (partially) reconcile accuracy and predictability. Discussing the matter, see

Jan Broulík, Preventing Anticompetitive Conduct Directly and Indirectly: Accuracy Versus Predictability, 64
ANTITRUST BULL. 115 (2019).
48 See David J. Teece, Gary Pisano & Amy Shuen, Dynamic Capabilities and Strategic Management, 18

STRATEG. MANAG. J. 509 (1997). Also, see Sandro Claudio Lera, Alex Pentland & Didier Sornette,
Prediction and Prevention of Disproportionally Dominant Agents in Complex Networks, 117 PROC. NATL. ACAD.
SCI. U.S.A. 27090, 27094 (2020) (developing a holistic model to address an industry’s dynamism). Lastly,
see Erik Brynjolfsson & Lorin M. Hitt, Beyond Computation: Information Technology, Organizational
Transformation and Business Performance, 14 J. ECON. PERSPECT 23, 25 (2000) (explaining that the value of
information technology is not “well captured by traditional macroeconomic measurement approaches,”
and stressing the need for developing a new methodology in response).

Recently, several agencies—including the Federal Trade Commission and the
French antitrust agency49—have expressed their intention to conduct more
targeted empirical studies analyzing past merger decisions involving large
digital firms.50 Using a computational approach, agencies could carry out similar
studies regarding the case law in which anticompetitive practices have been
punished. After sanctions have been imposed, the (automatic) collection of market
data could, for example, provide valuable information on their effectiveness,
whether they are strictly monetary or also include structural and behavioral
remedies.51 Agencies could also better estimate the consumer savings generated by
their decisions and orient them accordingly.52 Furthermore, antitrust agencies could
systematically audit their processes to ensure they stay effective in a fast-changing
technological environment.53 Finally, they could carry out empirical studies of
specific industries,54 for example, to understand what conditions have allowed the
emergence of new players when the market was deemed to have tipped.55
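
As an illustration of the kind of automated retrospective this could enable, again with invented numbers, a difference-in-differences comparison contrasts the remedied market with a comparable, unaffected market before and after the decision.

# Average prices (hypothetical) before and after a remedy takes effect.
treated_before, treated_after = 12.0, 10.5   # market subject to the remedy
control_before, control_after = 11.0, 11.2   # comparable unaffected market

# Difference-in-differences: the price change net of the market-wide trend.
did = (treated_after - treated_before) - (control_after - control_before)
print(f"Estimated remedy effect on prices: {did:+.2f}")  # -1.70 here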

Second, computational antitrust will be predictive. Most of the laws passed in a
majority of countries undergo a cost-benefit evaluation. The development of
computational methods will help simulate the effects of new public policies and
legislation, thus making the assessment more accurate while complying with what
Jean Tirole called the requirement of “information-light” policies.56 Although it is
illusory to believe that one will soon attain a perfect simulation of reality, these
methods will allow policymakers and regulators to be better informed.57 Here
again, both companies and agencies will benefit from computational antitrust
deployment as long as one considers its limits (see below).

49 FTC, Overview of the Merger Retrospective Program in the Bureau of Economics, Federal Trade

Commission, https://perma.cc/Q7BP-C2TP; OCDE, Start-ups, Acquisitions Anticoncurrentielles et Seuils de


Contrôle des Fusions – Contribution de la France 4 (2020).
50 These retroactive studies could lead to the creation of tools that are useful to the consumer. Discussing

the subject, see Fabiana Di Porto & Mariateresa Maggiolino, Algorithmic Information Disclosure by
Regulators and Competition Authorities, 19 GLOBAL JURIST 1, 2 (2019).
51 William E. Kovacic, Roads Not Taken: The Federal Trade Commission and Google, CONCURRENTIALISTE

(Mar. 9, 2020) www.leconcurrentialiste.com/william-kovacic-ftc-google/, archived at


https://perma.cc/JK44-L2HC (stressing the need for the Federal Trade Commission to conduct
retrospective studies regarding its investigation against Google). Also, Cary Coglianese & David Lehr,
Regulating by Robot: Administrative Decision Making in the Machine-Learning Era, 105 GEO. L.J. 1147, 1218
(2017) (explaining that computational regulation will require agencies to engage in the quantitative
coding of their decisions).
52 Federal Trade Commission, Agency Financial Report 17 (2020) (estimating how many dollars are saved

per consumer for every dollar spent by the agency in law enforcement).
53 Alex ‘Sandy’ Pentland, A Perspective on Legal Algorithms, MIT COMPUTATIONAL LAW REPORT

(December 6, 2019), https://law.mit.edu/pub/aperspectiveonlegalalgorithms, archived at


https://perma.cc/2AVJ-LLGA.
54 The Competition and Markets Authority, for example, has “conducted a detailed analysis of all of the

[3-4 billion] search events seen by Google and Bing in a one-week period in the UK, in order to
understand better the differences in the query data seen by these search engines,” see CMA, Online
platforms and digital advertising 93 (2020). This underscores the need to give antitrust agencies the power
to require companies to send them data.
55 Pinar Akman, Competition Policy in a Globalized, Digitalized Economy 10 (2019) (underlining the need for

empirical research regarding digital markets). Also, discussing the “essential role of antitrust in tipped
markets,” see NICOLAS PETIT, BIG TECH AND THE DIGITAL ECONOMY: THE MOLIGOPOLY SCENARIO, 190
(2020); and see Mark A. Lemley & David McGowan, Legal Implications of Network Economic Effects, 86
CALIF. L. REV. 479, 497 (1998) (discussing tipping in the context of antitrust enforcement).
56 Foer, supra note 37, at 522 (discussing the possibility of using predictive tools to anticipate the effects of

a public decision over a period ranging from two to five years). Jean Tirole, Market Failures and Public
Policy, 105 AM. ECON. REV. 1665, 1666 (2015) (defining information-light policies as “policies that do not
require information that is unlikely to be held by regulators”).
57 For a fictional take on it, see Alex Garland, Devs (2020) https://www.imdb.com/title/tt8134186, archived

at https://perma.cc/A9HW-F4N4.

III. The Challenges of Computational Antitrust

Computational antitrust is coming. This simple observation does not mean that
one should adopt a passive (some would say defeatist) attitude. On the contrary,
several challenges deserve one’s full attention. Some are common to (the use of)
computational methods in all legal fields (A), while others are specific to
computational antitrust (B). Eventually, they come down to human issues rather
than technological ones (C).

A – General Challenges to Computational Law

In the coming years, computational law will be the subject of a growing number
of fantasies and criticisms. The further computational law infiltrates our
institutions, the greater those fantasies and criticisms will become.58

Often, the careful observer will find the concept of control at the heart of these
reactions. Those who believe computers can do better than humans in all matters
have been utterly enthusiastic about transferring control from humans to
machines. The opposite is also true; algorithms and AI have been described as a
“black box” to denounce their supposedly obscure decision-making processes.59

Transparency will be the referee—showing the nuts and bolts, stressing the
possibilities, the limits, and the actual functioning of computational antitrust. Of
course, not all the data used and generated by computational tools should be
publicly available, but perhaps processes and mechanisms should be. Contrary to
human decision-making, one can reveal the workings of computational decision-
making.60 In other words, the process of computation does not have to be a black
box. Human reasoning is “noisy,” but computation does not have to be;61 after all,
computational tools are what we decide to build. If transparency is an important
value, one can design computational tools that emphasize it.
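
A trivial sketch of what designing for transparency can mean in practice: the hypothetical screening function below returns not only a decision but a machine-readable record of the inputs, rule, and threshold it applied, so that the “why” can be audited afterwards.

def screen(market_share, threshold=0.40):
    # Toy screen that explains itself; the rule and threshold are invented.
    flagged = market_share >= threshold
    audit_trail = {
        "input": {"market_share": market_share},
        "rule": f"flag if market_share >= {threshold}",
        "decision": "flag for review" if flagged else "no action",
    }
    return flagged, audit_trail

flagged, audit_trail = screen(0.52)
print(audit_trail)  # every decision ships with its own explanation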

58 For example, artificial intelligence will allow the automation of many tasks, see Harry Surden, Machine

Learning and Law, 89 WASH. L. REV. 87, 88 (2014) (“there may be a limited, but not insignificant, subset of
legal tasks that are capable of being partially automated using current AI techniques despite their
limitations relative to human cognition”). One could also seek to simplify search, grant easier document
access, provide better document analysis, send automatic court or agencies’ updates, and give API access
to court filings, see Michelle Hook Dewey & G. Patrick Flanagan, Litigation Analytics: Bringing Experience
to New Tools, 23 AALL SPECTRUM 41, 42-43 (2019).
59 Leo Breiman, Statistical Modeling: The Two Cultures, 16 STAT. SCI. 199, 199 (2001) (describing algorithmic
modeling as a “black box” for the first time). On the contrary, see Cary Coglianese & David Lehr,
modeling as a “black box” for the first-time). On the contrary, see Cary Coglianese & David Lehr,
Transparency and Algorithmic Governance, 71 ADMIN. L. REV. 1, 21 (2019) (explaining that the use of
algorithms does not necessarily engender a “black-boxed” government, but could also increase “public’s
ability to peer inside government and acquire information about what officials are doing”).
60 See Coglianese & Lehr, supra note 51, at 1214 (explaining that machine-learning algorithms are directed

toward objectives decided by humans). Even when the machines will define the objectives themselves,
their decision-making process could be made transparent—if designed to this end.
61 Daniel Kahneman, Artificial Intelligence and Behavioural Economics: Comment, in THE ECONOMICS OF

ARTIFICIAL INTELLIGENCE 587, 610 (Ajay Agrawal, et al. 2019) (explaining that, because computers are not
noisy, they are “better at statistical reasoning” than humans. According to the author, the consequence
should be to “replace humans by algorithms whenever possible”). Also, see Mary-Anne Williams, Robot
Social Intelligence 48 (2012) (underlining that “[r]obots and people need cognitive skills to explain and
predict each others’ behaviour”). Furthermore, Mark A. Lemley & Bryan Casey, Remedies for Robots, 86
U. CHI. L. REV. 1311, 1365 (2019) (explaining that although “[t]ransparency is a desirable goal in the
abstract. (…) We may be able to find out what an AI system did. But, increasingly, we may not be able to
understand why it did what it did”). Interestingly, understanding the “why” behind AI decision-making
might be possible thanks to AI. For example, GPT-3 can be used for translating code into plain English,
see Jason Morris, Computational Law Diary: What does GPT-3 Mean for Rules as Code?, Round Table Law
MEDIUM (July 17, 2020), archived at https://perma.cc/4UX6-JUFM.

In fact, transparency could ensure that computational approaches maximize
two objectives: (i) maintaining and (ii) improving our legal systems. The first is
about guaranteeing the many human rights currently recognized and enforced.62
Computational tools must not put them at risk. For example, procedural fairness
ought to be maintained despite the use of sophisticated tools by one party only.63
One will have to define a new framework for the admissibility of the evidence
produced by these tools. Indeed, computational tools should not lead to the
production of evidence whose creation process cannot be verified. Karl Popper’s
work will be valuable in this regard.64

The second item relates to using computational methods for ensuring the
effectiveness of different rights that are still being disregarded (in practice) or
denied. For example, legal scholars have been documenting discriminatory court
decisions and legislation.65 If the decision-making becomes transparent, and if the
data is duly secured using an architecture designed for the purpose,
computational tools will be positioned to reduce these unfortunate outcomes.
Indeed, one will be able to identify when and how human rights have been
infringed66 and assign liability on that basis.67 For these reasons, transparency
seems to be a prerequisite for the broad adoption of computational tools.68

B – Challenges Specific to Computational Antitrust

Computational antitrust also faces specific challenges. The first relates to
developing the right tools. It implies coding antitrust laws and rulings to create
efficient methods for (consistently) assessing compliance with them. It also implies
helping agencies to automate enforcement and merger control procedures.
Subsequently, one will be required to engage in the actual development of these
tools and test which ones are most adapted to automating different parts of
antitrust law. This testing is one of computational antitrust’s top priorities—it should
lead to rapid results.
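
What coding antitrust laws can mean at its simplest is sketched below: a single doctrine (a rebuttable dominance presumption for very high market shares, loosely inspired by EU case law) expressed as an executable, testable rule. Actual doctrine is far richer, and the threshold here is a deliberate simplification.

from dataclasses import dataclass

@dataclass
class Firm:
    name: str
    market_share: float  # as a fraction of the relevant market

def presumed_dominant(firm: Firm, threshold: float = 0.50) -> bool:
    # Rebuttable presumption only; the remaining analysis stays human.
    return firm.market_share >= threshold

print(presumed_dominant(Firm("Acme", 0.62)))  # True -> triggers fuller review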

The second is tied to data, and, in the end, to their scope. To identify which data
(and data structures) companies and agencies will need to feed the above tools, one

62 See Delipetrev, supra note 11, at 21 (identifying four ethical principles: (i) respect for human autonomy,

(ii) prevention of harm, (iii) fairness, (iv) explainability).


63 Bonin & Malhi, supra note 23, at 471 (underlining that legal procedures in European law must comply

with the Charter of Fundamental Rights of the European Union and the principle of “equality of arms”).
64 See, in particular, KARL POPPER, CONJECTURES AND REFUTATIONS: THE GROWTH OF SCIENTIFIC

KNOWLEDGE (2002).
65 See Jonathan P. Kastellec, Racial Diversity and Judicial Influence on Appellate Courts, 57 AM. J. POL. SC. 167, 169

(2012) (exploring different empirical studies documenting the differences in judicial voting across groups).
Also, see Christina L. Boyd, Representation on the Courts?: The Effects of Trial Judges' Sex and Race, 69 POL. RES.
Q. 788, 788 (2016) (showing that “trial judge’s sex and race have very large effects on his or her decision
making”), and David S. Abrams, Marianne Bertrand & Sendhil Mullainathan, Do Judges Vary in Their
Treatment of Race, 41 J. LEGAL STUD. 347, 350 (2012). Outside of the United States, see Nienke Grossman, Sex
on the Bench: Do Women Judges Matter to the Legitimacy of International Courts, 12 CHI. J. INT'L L. 647 (2012).
66 Giovanni De Gregorio & Sofia Ranchordás, Breaking Down Information Silos with Big Data: A Legal

Analysis of Data Sharing 214, in LEGAL CHALLENGES OF BIG DATA (Cannataci, Falce & Pollicino eds., 2020)
(underlining that “the creation of a centralized form of data storage and sharing could put at stake the
privacy of citizens”). Also, discussing why (big) data controllers could be held liable, see id. 224.
67 Coglianese & Lehr, supra note 51, at 1223 (2017).
68 This goes further than providing access to the information that results from these mechanisms. On

the subject, the First and Fourth Amendments already oblige the U.S. government to provide open
access to the information it is creating (using computational tools or not). This implies making the
processing transparent. Making it obscure would be a missed opportunity.
2021 “Computational Antitrust” 13

will be required to address the objective(s) of computational antitrust. It appears
that the use of computational tools for retrospective analysis and the making of
counterfactuals is a logical application,69 while predictive analysis is more debatable
in the field of antitrust.

To be sure, our ability to measure variables has been increasing over the decades.
Patterns are discernible, and regularities can be deduced from real-world
phenomena, making it increasingly possible to use predictive tools to measure the
effects of antitrust laws, decisions, and related policies over specific variables
(translating the objective(s) of antitrust agencies).70

That being said, computational tools (like others) are poorly suited to predicting
all the forces in the economy.71 They do not lead to a “theory of everything.”72
Quantum computing will help to integrate more variables. Combined with AI
systems, quantum computers will better identify the reasons for economic growth.
And yet, one should not be under the impression that it will be possible to capture
the entire world around us in a computer program—at least, not anytime soon.73
On that basis, one should doubt whether computational antitrust could justify ex-
ante regulations and other market design objectives.74 In fact, one could even argue
that, by allowing a faster and more accurate enforcement of antitrust laws,
computational tools will reduce the need for ex-ante regulations. In this sense,
perhaps the curious task of computational antitrust will be “to demonstrate to men
how little they really know about what they imagine they can design.”75

The third is tied to the role one should give computational tools in decision-
making processes (that is, where they are used). One will want to discuss the extent
to which they could justify decisions on their own, including their probative force
in anticompetitive investigations and mergers.76 It will require hiring personnel
to design and operate these tools, as they will not get to the right answers by
themselves.77 It will also require discussing the (comparative) importance of non-

69 See William M. Landes & Richard A. Posner, Market Power in Antitrust Cases, 94 HARV. L. REV. 937, 982

(1981) (developing a price prediction model for counterfactuals).


70 Di Porto & Maggiolino, supra note 50, at 7 (defending the idea of testing remedies before implementing them).
71 If only because unexpected elements cannot be computed in advance, see NASSIM NICHOLAS TALEB,
THE BLACK SWAN: THE IMPACT OF THE HIGHLY IMPROBABLE 1-10 (2d ed. 2010) (explaining that inductive
reasoning does not allow one to tackle never-before-seen events—described as “Black Swans”).
72 Arthur, supra note 45, at 9 (showing how computer experiments can be used for isolating economic

phenomena). Discussing the limits, see JOSEPH A. SCHUMPETER, HISTORY OF ECONOMIC ANALYSIS 241
(1954, ed. 2006) (explaining that one can establish “certain theorems” around the economic life of our
societies, “but we can never observe them all”). Similarly, Hayek, supra note 15, at 66 (stressing that one
cannot grasp the whole complex phenomenon, but some patterns only).
73 For this reason, computational analyses are complemented with other ones curated by the European

Commission, see Buettner, et al., supra note 46, at 20. It will be interesting to analyze the extent to which
computational tools will also improve in qualitative terms. For a long-term perspective, see Kahneman,
supra note 61, at 610 (“I do not think that there is very much that we can do that computers will not
eventually be programmed to do”).
74 See Landes & Posner, supra note 69, at 982 (developing a price prediction model for counterfactuals).
75 Hayek assigned this “curious task” to economics, see FRIEDRICH AUGUST VON HAYEK, ET AL., THE

COLLECTED WORKS OF F. A. HAYEK 76 (1989).


76 Daniel A. Crane, Rules versus Standards in Antitrust Adjudication, 64 WASH. & LEE L. REV. 49, 86 (2007)

(arguing that predictive analyses are not sufficient to justify antitrust rules). Foer, supra note 37, at 522
(underlining that “[i]n addition to predicting, the tools of prediction might be aimed at better
understanding the present”).
77 Some of these tools will need to be fed with adequate data. Others will need to be set up within the
agency’s own ecosystem. When agencies do not design them (but, for example, borrow them from another
agency), issues will arise regarding the integration of these tools into their existing networks.

computable elements.78 In reaction to antitrust 2.0, part of the scholarship rightly
underlined the need to consider more than quantifiable factors.79 We must not fall
into the same trap while constructing antitrust 3.0.80

C – In the End: A Human Question

Eventually, technical questions will get technical answers. The most critical
challenge for developing computational antitrust is related to the interaction
between our legal systems and technical tools. This is a human challenge. As one
author put it in 1962, “we must bear in mind that it is not machines that have
changed the lives of men, but the adaptations that men themselves have adopted
in response to machines. It is not the invention of tools, however subtle, complex,
or powerful, that constitutes man’s greatest achievement, but the ability to use the
tools that man has developed within himself.”81

Providing a satisfying answer to this challenge will require West Coast Code and
East Coast Code to cooperate.82 This collaboration requires coders (computer
scientists, data scientists, developers) and the antitrust community (companies,
policymakers, regulators) to prepare their respective fields so computational
antitrust can thrive. Coders will be required to devote their time to developing the
recipes—the programming of sequences of specific instructions to achieve a
specific result. Companies and agencies will provide the ingredients—which
requires identifying the ones needed and how to get them (i.e., how to transform
data into information). In the end, these recipes and ingredients will complement
each other; the nature of one will change the nature of the other.

Institutional changes will facilitate the cooperation of these two communities.
Several actions are required in this regard. First, one will be required to create
proper incentives for computational antitrust development. On the side of coders,
computer scientists, and data scientists, this implies the creation of different reward
systems to ensure they get (and stay) involved in the field. On the side of companies
and agencies, this requires enlarging the teams dedicated to these subjects and
giving them appropriate means.83 Second, one will have to establish the conditions

78 Salil K. Mehra, Antitrust and the Robo-Seller: Competition in the Time of Algorithms, 100 MINN. L. REV. 1323,

1373 (2015) (underlining that we ought to “avoid making the perfect the enemy of the good in an area
undergoing such rapid and uncertain change,” indeed, “cooperatively generating norms and best
practices” may help enforcing antitrust in digital markets).
79 See Eleanor M. Fox, Modernization of Antitrust: A New Equilibrium, 66 CORNELL L. REV. 1140, 1140
(1980-1981) (proposing “a formulation for achieving a new equilibrium designed to advance the efficiency
goals and harmonize the non-efficiency goals”).
80 George A. Akerlof, Sins of Omission and the Practice of Economics, 58 J. ECON. LIT. 405, 406 (2020)
(explaining that economic research ignores “important topics and problems that are difficult to
approach in a hard way,” which, by analogy, could be transposed to antitrust 3.0).
81 Lee Loevinger, Jurimetrics: Science and Prediction in the Field of Law, 3 M.U.L.L. MOD. USES LOG. L. 187,

205 (1962).
82 Lawrence Lessig, The Code Is the Law, TECH INSIDER (April 9, 1999),
https://tech-insider.org/berkman-center/research/1999/0409.html, archived at https://perma.cc/P9KV-DDX8
(calling West Coast Code the one coming from the Valley, and East Coast Code the one coming from
Congress); also, LAWRENCE LESSIG, CODE: VERSION 2.0 72 (2006) (explaining that, over time, “[t]he power
of East Coast Code over West Coast Code has increased”).
83 According to Matthew Newman, Margrethe Vestager confirmed the European Commission is

increasing its computing power to process large amounts of data in merger and antitrust cases, see
Matthew Newman, Online Pricing Algorithms Prompt EU Antitrust Regulator to Boost Detection Tools, MLEX
(Sept. 24, 2018). On top of financial means, developing computational antitrust also requires giving these

for sustained collaboration between companies and agencies. Computational
antitrust should not become a zero-sum game in which the gains made by
companies or agencies systematically penalize the other. One must consider the
creation of safe harbors, new procedural rules, and, overall, a less confrontational
approach between the different stakeholders. They will cooperate if they have a
vested interest in doing so.84 Of course, achieving such cooperation will be difficult;
Justice Holmes’s “bad man” will not disappear.85 But it is possible.

teams access to the entire decision-making process, rather than simply involving them at the beginning
and the end of it.
84 Discussing such a system, see Anthony J. Casey & Anthony Niblett, The Death of Rules and Standards,

92 IND. L.J. 1401, 1411 (2017) (underlining the possibility to create “microdirectives” for regulating
individuals and companies by sending them appropriate legal information). One could also imagine that
different legal obligations could be put on companies, see Di Porto & Maggiolino, supra note 50, at 7
(arguing for the creation of tailored regulation, for example, when it comes to disclosure). Lastly, see
Thorsten Kaeseberg, The Code-ification of Law and Its Potential Effects, 20 COMPUT. L. REV. INT. 107, 238
(2019) (underlining that “[w]hether we call it algorithmic, adaptive, personalized, granular, or simply
digital law—there will be an increasing trend towards granularization, which will affect the legislature,
executive branch and judiciary”).
85 Holmes, supra note 17.
