Foreign Affairs
the technology could add more than $4 trillion annually to the global economy. This would
be on top of the $11 trillion that nongenerative AI and other forms of automation could contribute.
These are enormous numbers: by comparison, the entire German economy--the world's fourth
largest--is worth about $4 trillion. According to the study, produced by the McKinsey Global
Institute, this astonishing impact will come largely from gains in productivity.
At least in the near term, such exuberant projections will likely outstrip reality. Numerous
technological, process-related, and organizational hurdles, as well as industry dynamics, stand in
the way of an
AI-driven global economy. But just because the transformation may not be immediate does not
mean the eventual effect will be small.
By the beginning of the next decade, the shift to AI could become a leading driver of global
prosperity. The prospective gains to the world economy derive from the rapid advances in AI--now
further expanded by generative AI, or AI that can create new content, and its potential applications
in just about every aspect of human and economic activity. If these innovations can be harnessed,
AI could reverse the long-term declines in productivity growth that many advanced economies now
face.
This economic revolution will not happen on its own. Much recent debate has focused on the
dangers that AI poses and the need for international regulations to prevent catastrophic harm. As
important, however, will be the introduction of positive policies that foster AI's most productive
uses. These policies must promote technologies that augment human capabilities rather than
simply replace them; encourage AI's widest possible implementation, both within and across
different sectors, especially in areas that tend to have lower productivity; and ensure that firms and
sectors undergo necessary process and organizational changes and innovations to effectively
capitalize on AI's potential. To unleash the full force of an AI-powered economy, then, will require
not only a new policy framework but also a new mindset toward artificial intelligence. Ultimately, AI
technologies must be embraced as tools that can enhance, rather than undermine, human
potential and ingenuity.
Other factors have also created supply-side constraints in the global economy. In countries that
account for over 75 percent of global economic output, aging populations have limited the growth
of the labor supply, increasing dependency ratios--the number of nonworkers relative to the
working-age population in a given country--and creating fiscal stress. Many large employment
sectors, including government, health care, traditional retail, hospitality, and construction, have
critical shortages of workers. And in some countries, such as China, Italy, Japan, and South
Korea, overall labor forces are shrinking. Labor markets have also been transformed by the
preferences of job seekers in advanced economies, who are choosing employment sectors--and
frequently shifting between them--based on flexibility, safety, level of stress, and income.
Meanwhile, geopolitical tensions, combined with the shocks of climate change and the pandemic,
have led many companies and countries to "de-risk" and diversify their supply chains at great
expense for reasons that have nothing to do with reducing costs. The era of building global supply
chains entirely on the basis of efficiency and comparative advantage has clearly come to a close.
In short, without a powerful new productivity-enhancing force, the global economy will continue to
be held back by slow growth and reduced labor supply, the persistent threat of inflation, higher
interest rates, shrinking public investments, and elevated costs of capital for the foreseeable
future. Against these headwinds, the costly clean energy transition--which will require an additional
$3 trillion in capital spending each year for several decades, according to projections by the
International Energy Agency--will be close to impossible to engineer.
These long-term global pressures are a key reason why the AI revolution is so important. It holds
the potential for a digitally enabled surge in productivity that could restore growth momentum by
easing the supply-side constraints--especially the shrinking labor pool in many countries--that have
been holding the global economy back. But for this transformation to occur, the surge will need to
have the right characteristics. It must be driven primarily by value-added growth, in which firms
and sectors expand value-added output, thereby contributing to a rise in GDP, rather than simply
by reducing inputs, such as labor, while keeping the growth in output weak or flat.
In the areas that it touched, the digital revolution was dramatic. Tasks long performed by humans
were suddenly taken over by machines. Activities such as bookkeeping, filing, and accounting,
much of consumer banking, and the control systems for entire supply chains were partially and
sometimes completely automated. In parallel, most information came to be stored and transmitted
in digital form, making it cheaper and easier to access and use. An abundance of free and low-
cost web-based services also transformed the consumer economy and social interaction.
But the economic impact of these changes, although substantial, was limited in scope. In the
sectors where the technologies were widely implemented, productivity increased, much as it did
after the first Industrial Revolution, when humans stopped digging trenches and turned instead to
steam shovels. In certain areas, jobs declined along with the incomes of some middle-class
earners in a phenomenon that has come to be known as "job and income polarization."
Nonetheless, there were many kinds of tasks that could not be automated, and the extent of digital
takeover was limited. Above all, the technologies had little effect on knowledge industries and
creative industries, such as medicine, law, advertising, and consulting, in which much of the value
comes from specific expertise and the performance of nonroutine tasks.
Now, the AI revolution has shattered those constraints. Through advances in machine learning
and pattern recognition over the past 15 years, AI researchers have shown that digital machines
can do much more. Many human activities that do not lend themselves easily to
codification involve pattern recognition: finding and assembling facts and insights, detecting logical
and conceptual structures embedded in language, synthesizing and reprocessing information, and
drawing on experience, expertise, and tacit knowledge to provide answers to complex and
nuanced questions. By using deep learning--multi-layered neural networks that simulate the way
neurons send and receive signals in the human brain--researchers have made swift advances in
machine learning. And with enough data and computing power, this approach has been
remarkably effective at replicating many of these pattern-recognition, predictive, and now also
generative tasks. The result has been a stunning series of breakthroughs.
Even before the advent of generative AI, machine learning had produced a number of major
innovations. A short list of these includes handwriting recognition, speech recognition, and image
and object recognition. Many of these tools have been used in smartphones and numerous
business and consumer applications. Consider Google Translate, which employs deep learning
and is used by more than one billion people; it can already handle more than 100 languages, a
number that AI researchers aim to soon expand to more than 1,000. AI has also assisted
breakthroughs in a number of scientific fields. For example, AlphaFold, an AI system developed by
Google's AI lab, DeepMind, has been able to predict the protein structures of all 200 million
proteins known to science. Researchers around the world are now using these structures to
accelerate and assist their investigation of diseases and develop new treatments for them.
Perhaps the most striking development, however, has been the rise of large language models, or
LLMs, which provide the basis for generative AI. What underlies LLMs is the Transformer, a
deep-learning architecture that was introduced in a now famous paper by Google researchers in 2017.
Transformers make use of a mechanism of self-attention to understand the connections and
relationships between different words. Along with so-called embeddings--which map the
relationships between words and use a unique neural architecture--the Transformer makes it
possible for the model to learn in a self-supervised way. Once trained, the model can generate
human-like outputs by simply predicting the next word or sequence of words in response to a
prompt.
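The core of the self-attention mechanism described above can be sketched in a few lines of code. The toy version below is purely illustrative (random weights, made-up dimensions; a real Transformer stacks many such layers with learned weights, positional information, and multiple attention heads), but it shows the essential computation: each token's vector is rebuilt as a weighted mix of all the tokens' vectors, with the weights reflecting how strongly the tokens relate to one another.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of token vectors X."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv          # queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # pairwise relatedness of tokens
    scores -= scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: rows sum to 1
    return weights @ V                        # each token: a mix of all tokens

rng = np.random.default_rng(0)
d = 8                                    # toy embedding dimension
X = rng.normal(size=(5, d))              # a "sentence" of 5 token vectors
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)                         # one updated vector per token
```

Trained at scale, this simple operation is what lets the model detect the "connections and relationships between different words" and then predict the next word in a sequence.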
Trained on billions, and now trillions, of words over long periods, these new LLMs can
generate increasingly sophisticated human-like responses when prompted. More important,
their capabilities are not confined to any one sector or area of knowledge. Unlike many previous AI
innovations, which were tailored to specific functions, the LLMs that underlie generative AI have a
strong claim to be a truly general-purpose technology.
QUICK STUDIES
Generative AI has several features that suggest its potential economic impact could be unusually
large. One is exceptional versatility: LLMs now have the capacity to respond to prompts in many
different domains, from poetry to science to law, and to detect different domains and shift from one
to another, without needing explicit instructions. Moreover, LLMs can work not only with words but
also with software code, audio, images, video, and other kinds of inputs, as well as generated
outputs--what is often referred to as "multimodality." Their ability to operate flexibly among multiple
disciplines and modes means that these models can provide a broad platform on which to build
applications for almost any specific use. Many developers of LLMs, including OpenAI, have
created APIs--application programming interfaces--that allow others to build their own proprietary
AI solutions on the LLM base. The race to create applications for a huge diversity of sectors,
professional disciplines, and use cases has already begun.
LLMs are also noteworthy for their accessibility. Because they are designed to respond to ordinary
language and other ubiquitous inputs, LLMs can be readily used by nonspecialists who lack
technical skills. All that is needed is a little practice in creating prompts that elicit effective
responses. At the same time, the models' use of the vast material on the Internet or any other
corpus for training means that they can acquire expertise in almost any field of knowledge. These
two features give LLMs far more extensive potential uses than previous digital technologies, even
those involving AI. In June 2023 alone, the ChatGPT website was visited by 1.6 billion users, a
convincing signal of the low barrier to entry and the breadth of interest in the technology.
It is hard to make detailed predictions about potential future uses for LLMs. But given their
unusual attributes, combined with continuing rapid technical innovations by researchers and the
huge amounts of venture capital pouring into AI research, their capabilities will almost certainly
grow. Within the next five years, AI developers will introduce thousands of applications built on
LLMs and other generative AI models aimed at highly disparate sectors, activities, and jobs. At the
same time, generative AI models will soon be used alongside other AI systems, in part to address
the current limitations of those systems, but also to expand their capabilities. Examples include
adapting LLMs to help with other productivity applications, such as spreadsheets and email, and
pairing LLMs with robotic systems to improve and expand their operation. If these
various applications are implemented effectively across the economy, a large and extended surge
in productivity and other measures of economic performance seems almost certain to follow.
Among the most promising uses of generative AI in the broader economy are in digital assistant
systems for the workplace. Consider an April 2023 study by Erik Brynjolfsson, Danielle Li, and
Lindsey Raymond on the impact of an AI digital assistant for customer service representatives in
the tech sector. The AI assistant had been trained on a large collection of audio recordings of
interactions between agents and customers, along with performance metrics for these interactions:
Was the problem solved? How long did it take to solve it? Was the customer happy with the result?
The AI assistant was then made available to some agents and not others.
The authors of the study identified two important results. The first was that productivity for the
group with the AI assistants was on average 14 percent higher. The second, and even more
significant, was that, although everyone in the group with the AI assistant had productivity gains,
the effect was much higher for relatively inexperienced agents.
In other words, the AI assistant was able to markedly close the gap in performance between new
and seasoned agents, suggesting generative AI's potential to accelerate on-the-job training.
Digital mapping tools have had a similar effect on London taxi drivers. London is an incredibly
complex city to drive in. In the past, drivers took months and even years to learn the streets well
enough to pass the city's notoriously difficult taxi driver exam, known as "the Knowledge." Then
came Google Maps and Waze. These apps did not eliminate the differential between the veterans
and the newcomers, but they certainly reduced it. This leveling-up effect on employee
performance seems likely to become a general consequence of the advent of powerful AI digital
assistants in many parts of the economy.
Given their demonstrable value, AI digital assistants will soon be performing a great assortment of
tasks. For example, they will produce first drafts in media and marketing applications and generate
much of the basic code needed for a variety of programming tasks, thus dramatically speeding up
the work of advanced software developers. In many professions, an AI system's ability to absorb and
process vast amounts of literature at superhuman speed will also accelerate both the pace and the
dissemination of research and innovation.
Another area in which nascent LLM applications could have a large impact is in ambient
intelligence systems. In these, AI technologies are used in conjunction with visual or audio sensors
to monitor and enhance human performance. Take the health-care sector. As a 2020 study in
Nature discussed, an ambient intelligence system could use a number of signals and inputs--say,
recorded discussions between doctors and interns as they make their hospital rounds, combined
with a given patient's charts and the updates to them--to identify missing actions or overlooked
questions. The AI component could then produce a summary of its findings for review by the
medical staff. According to some estimates, doctors currently spend about a third of their time
writing up reports and documenting decisions; such a system could reduce that time by up to 80
percent.
In the foreseeable future, ambient intelligence and digital assistants could improve efficiency and
transparency in supply-chain management as well as help with complex human tasks. According
to the McKinsey Global Institute's June 2023 report, generative AI has the potential to automate
activities that currently take up 60 to 70 percent of workers' time. Not only would this provide a
spur to productivity; it would also free up more human labor for the most advanced tasks and allow
for more rapid innovation.
CREATIVE INSTRUCTION
Despite the promise of AI, much of the public debate about it has focused on its controversial
aspects and its potential to do harm. To begin with, LLMs are not 100 percent reliable. Their
outputs can sometimes reflect the bias of their training sets, produce erroneous material, or
include so-called hallucinations--assertions that sound plausible but do not reflect the reality of the
physical world. Researchers are trying hard to address these issues, including by using human
feedback and other means to guide the generated outputs, but more work is needed.
Another concern is that AI could achieve wholesale automation of many sectors, triggering large-
scale job losses. These concerns are real, but they overlook the barriers to full automation in many
workplaces, as well as the compensatory job gains--some from growing demand for existing
occupations, others from the rise of new occupations--that AI, including generative AI, will create. For
example, research suggests that over the next couple of decades, some occupations--roughly 10
percent of all occupations according to some estimates--whose constituent tasks can almost all be
automated, will likely decline. Other occupations, both existing and new, will grow. But the largest
effect of AI on the economy overall, involving about two-thirds of occupations, will be to change the
way that work is performed, as some constituent tasks--on average about a third--are augmented
by AI. Occupations in these fields will not go away, but they will require new skills as people do
their jobs in collaboration with capable machines.
Many commentators have also noted the dangers of giving AI systems too much control. As
numerous examples have shown, generative AI platforms occasionally get things wrong or
hallucinate--that is, make things up. For example, an LLM given a prompt to write an article on
inflation not only produced the article but concluded with a list of additional reading that included
five articles and books that do not exist. Obviously, in applications that require factual accuracy,
made-up answers pose a major concern. Even when not hallucinating, LLMs can produce bad,
seriously biased, silly, or obnoxious predictions that require human review. Thus, the careless or
overly expansive implementation of generative AI could lead to the perpetuation of flawed
information or even to malpractice.
Access to better training data may lower the risks of faulty outputs, but the problem is really a
function of how LLMs work: even if trained on perfectly accurate data, the models can yield
different and even contradictory answers to the same prompt simply because they are prediction
machines operating in a probabilistic world. The mistake in all this is to think of LLMs as
databases that simply store information. In fact, because of the probabilistic mechanism by which
they learn and generate outputs from the material they are trained on, and their ability to associate
ideas and concepts that may not have been associated before, their output cannot be wholly
determined, even with perfect training data. For many companies and economic sectors, prudence
will dictate that humans cannot be entirely written out of the script, at least not any time soon.
Moreover, in some areas of the economy, facts and accuracy are not as important as new ideas or
creativity. Fashion designers have started to ask AIs to generate new clothing prototypes. AIs can
generate music, write poems, make art, and draft the outlines of novels. As a source of inspiration,
generative AI could become a useful tool. The concern for some is that AI could eventually replace
the artist. It is too soon to know whether AI-generated content will find a serious following in the
creative and performing arts. Our best guess is that it will be used more for assisting and providing
inspiration than for producing finished works of art.
Given its remarkable capabilities and range, where will the main economic impact of generative AI
occur? When Sundar Pichai, the CEO of Alphabet, Google's parent company, was asked a version
of this question, he responded that it would come in the "knowledge economy." This seems exactly
right. One could substitute the term "information economy," but across fields from scientific
research to software development and a host of service functions, the potential economic benefits
of LLM-based applications seem extremely large.
For the moment, preventing harm and damage has received the lion's share of attention. In May,
more than 350 AI industry leaders signed an open letter warning that "mitigating the risk of
extinction" from AI should be a global priority alongside preventing pandemics and nuclear war;
many, including one of us (Manyika), signed the letter to highlight the precautionary principle that
should always be applied to powerful technology. Others have warned of the risks of misuse by
bad actors with various motivations, as well as unconstrained military applications of AI in the
absence of international regulations. These issues are important and should be addressed. But it
is wrong to assume that simply limiting the misuse and harmful side effects of AI will ensure that its
economic dividends will be delivered in a broadly inclusive way. Active policies and regulations
aimed at unleashing those benefits will play a major role in determining whether AI realizes its full
economic potential.
First, policies will need to be developed to ensure that AI complements rather than replaces
human labor. In current practice, AI tools are often developed and benchmarked against human
performance, leading to an industry bias toward automation. That bias has been referred to as "the
Turing trap," a term coined by Brynjolfsson, after the mathematician Alan Turing's argument that
the most important test of machine intelligence is whether it can equal or surpass human
performance. To get around this trap, public and private research funding for AI research should
avoid an overly narrow focus on creating human-like AI. For example, in a growing number of
specific tasks, AI systems can outperform humans by substantial margins, but they also require
human collaborators, whose own capabilities can be further extended by the machines. More
research on augmenting technologies and their uses, as well as the reorganization of workflow in
many jobs, would help support innovations that use AI to enhance human productivity.
Another crucial priority will be to encourage the widest possible spread of AI technologies across
the economy. In the case of the earlier digital revolution, a large body of research has documented
highly uneven adoption across sectors and firms. Many large employment sectors lagged, leading
to a drag on productivity. This pattern could easily be repeated. In the case of generative AI, small
and medium-sized firms deserve special attention, since they may not have the resources to
conduct the experiments and develop use cases. It is possible that reductions in the current high
costs of AI development and research, as well as competition among the major developers, will
lead to affordable AI applications that can be widely implemented, spurring broad entrepreneurial
activity. But policymakers must be diligent in creating rules that ensure
that such competition results in broad diffusion and use of the technologies.
A related issue is how to accelerate the use of AI by the industries that stand to benefit from it
most. In many cases, some stakeholders, including employees, will understandably focus on the
risks and resist adopting AI systems. To counter this tendency, policymakers and companies will
need to consult with all parties involved and ensure that their interests are taken into account. At a
macro level, the employment and wage effects of AI adoption--including the disappearance of
some jobs even as others grow--should also be addressed. Partnerships involving government
and industry and educational institutions will be needed to help people adapt to the different skill
requirements needed for working in an AI-assisted environment. Income support during the
transition to an AI-augmented economy may be another key ingredient, particularly in occupations
such as call centers and other customer operations in which AI could put downward pressure on
wages and even cause net job loss.
But despite fears to the contrary, the prospect of large-scale AI-induced unemployment does not
seem likely, especially given current labor shortages in a number of sectors. Those anxieties are
based on the incorrect assumption that demand is fixed, or inelastic, and hence insensitive to price
and cost changes. In such a world, productivity gains automatically produce employment
reductions. In fact, although there are likely to be lots of changes in the characteristics of many
jobs, as well as some job displacement, overall employment levels in the economy are unlikely to
change much, assuming the economy continues to grow. Research suggests that under most
scenarios, more jobs will be gained than lost over the next decade or more.
A larger challenge will be addressing the uneven effects of the new technologies, both within and
between countries. Within countries, productivity growth is likely to be concentrated in white-collar
jobs rather than blue-collar jobs because of generative AI's particular impact on the knowledge
economy. To achieve a similar productivity surge in the industrial economy, however, will require
additional major advances in robotics. Despite good progress on that front, technological
challenges remain, with the result that automation and augmentation in manufacturing, logistics,
and autonomous vehicles are proceeding more slowly. Such a divergence in productivity growth
between the knowledge economy, the wide service sector, and industrial sectors could further
contribute to unequal distribution of AI gains.
Countries will also need to confront the uneven adoption of advanced digital technologies both
among firms within the same sector and among sectors. For example, within sectors, so-called
frontier firms, which are often the most nimble, have outstripped other firms in using digital
technologies. Similarly, the high-tech and financial services sectors have been faster to adopt new
technologies than has health care, creating unevenness that can become a barrier to economy-
wide productivity gains.
Internationally, the recent breakthroughs and innovations in AI have clearly been led by the United
States, with China in second place. These two countries are also home to the AI platform
companies with enough computing power to train advanced LLMs. By contrast, the European
Union has fallen behind the United States and China in AI, cloud computing, and other related
areas. The question, then, is how quickly advanced AI applications can be implemented
throughout the global economy. Under the open model that prevailed for several decades after
World War II, technology could spread quite rapidly across borders. But that world is no more. The
complex and increasingly restrictive constraints on flows of technology and capital--whether from
the war in Ukraine, sanctions, or rising tensions between China and the United States--have
created new barriers to international diffusion.
Because of its digital nature, AI technology will spread; in fact, it would be very hard to stop it from
doing so. But ensuring that it does so in the right way will require new forms of international
economic governance. Thus, even if it lags in AI research, the EU will adopt the technology and
use it. Many emerging economies could also benefit from this technology, but for them, access
may be slow and uneven. The extent to which AI can be developed and used in an equitable way
worldwide will determine the magnitude of its effect on the global economy.
With its broad scope and its ease of use, generative AI could do much to counter these forces.
Moreover, the AI revolution has unleashed an intense period of experimentation and innovation
that could add much more value to the economy. But to fully realize this potential will require
equally intense attention to policy. Governments, companies, and researchers will need to
prioritize augmenting human skills rather than replacing them. They will need to promote the use
of the technology across the whole of the economy. And they will need to build an economy in
which the use of AI systems is sensitive to the needs of workers themselves and in which shocks
are minimized and the widespread fears of excessive automation are addressed--or they will likely
encounter unnecessary resistance.
The development of AI has reached a crucial juncture. The technology's fraught potential, to bring
enormous human and economic gains but also to cause very real harms, is coming sharply into
focus. But harnessing the power of AI for good will require more than simply focusing on existential
threats and potential damage. It will demand a positive vision of what AI can do and effective
measures to turn that vision into reality. For the most likely risk that AI poses to the world today is
not that it will produce some kind of civilizational catastrophe or a huge negative shock to
employment. Rather, it is that without effective guidance, AI innovations could be developed and
implemented in ways that simply magnify current economic disparities rather than bring about a
strengthened global economy for generations to come.
~~~~~~~~
By JAMES MANYIKA and MICHAEL SPENCE
JAMES MANYIKA is Senior Vice President and President of Research, Technology, and Society at
Google-Alphabet, a Distinguished Fellow at Stanford University's Human-Centered Artificial
Intelligence Institute, and Chairman Emeritus at McKinsey Global Institute.
MICHAEL SPENCE, winner of the 2001 Nobel Prize in Economics, is a Senior Fellow at the
Hoover Institution at Stanford University.
The contents of Foreign Affairs are protected by copyright. © 2004 Council on Foreign Relations,
Inc., all rights reserved. To request permission to reproduce additional copies of the article(s) you
will retrieve, please contact the Permissions and Licensing office of Foreign Affairs.