Lie Machines Howard Chapter 1
How is it possible to make convincing arguments,
without evidence, about the causes of or solutions
to important social problems? Lie machines do this
work, and they seem to be more and more active
in public life. These social and technical systems generate and
spread political lies by subverting the people, organizations,
and institutions in which we have most confidence. We tend
to trust our instincts for evaluating the truth, and we tend to
trust friends and family as sources of good information; as
sources of news, many of us prefer social media to professional
print media.
Lie machines vary in size and scope. Some are large and
permanent, others more liminal—temporary teams formed for
a campaign or an election. But the inventory is growing, and
in authoritarian regimes lie machines are often partnered with
overt political surveillance initiatives. China employs two mil-
lion people to write 448 million messages a year. The previous
Mexican president had seventy-five thousand highly automated
accounts cheering for him and his policies. Vietnam has trained
income level rarely vote, or if they always vote for your
opponent, it may not be worth spending time in their neighbor-
hood trying to convince them to vote for you. If a city has
consistently voted for one party for decades and there’s no
evidence that public sentiment is changing, why put cam-
paign resources into advertising there? Of course, this is a
rhetorical question, because it is healthy for civic life to have
consistent, high-quality information and debate running in
every neighborhood.
Bad Prospects
For the most part, when authoritarian governments, industry
lobbyists, and shady politicians get hold of personal data, it is
because they want to figure out how to build an effective cam-
paign for winning support and legitimacy. Sometimes such
data helps a politician or lobbyist better understand voters.
When a crisis emerges or a major decision is needed, this data
can allow political leaders to take the pulse of the community
they serve. But usually personal data isn’t used for substantive
engagement; instead, it is applied for strategic reasons, to mo-
bilize supporters and deepen existing commitments. Social
media platforms are the delivery mechanism for the misinfor-
mation, for maneuvering or manipulating the individuals in
the data set. In the chapters ahead, we’ll see how many kinds of
strategies, purposes, and outcomes there can be.
Most citizens don’t know how much information about
them is being used: it is bought and sold by campaigns, adver-
tising firms, political consultants, and social media platforms.
Sometimes political campaigns have clear sponsors and de-
clare their interests, but at other times such motives are hid-
den. Sometimes the public can trace who is spending what
resources on which ads during a campaign season, but at other
Democracy Encoded
New technologies always inspire hot debate about the nature
of politics and public life. When activists used Facebook to
organize protests across North Africa, journalist Malcolm
Gladwell argued that only face-to-face interaction could make
a true social movement and a real revolution. Technology
writer Clay Shirky countered that it no longer made sense
to distinguish street politics from internet politics. Evgeny
Morozov of the New Republic agreed that the internet was
useful in politics, but mostly for dictators. Internet activist Eli
Pariser argued that too many of us were using technology to
immerse ourselves in information bubbles that protect us from
exposure to challenging new ideas. I argued that everyone
was focusing on the wrong internet—that we should be con-
cerned about data from our own devices shaping politics, not
content from our browsers. This was the means by which
political elites would “manage citizens” in the years ahead.
Unsurprisingly, research has disproved some of these
claims and confirmed others. Gladwell was wrong in that so-
cial media proved to be a sufficient cause of several contempo-
rary political uprisings and revolutions. Shirky was prescient
in identifying the ways in which the internet supported new
modes of organizing, modes that could catch dictators off
guard. Morozov was correct in suspecting that the bad guys
would quickly learn to use new technologies in their organiza-
tional efforts and would work to catch democracy advocates
and put them in jail, though his fatalism muddled his think-
ing. But as we’ll see in the chapters ahead, it turns out that the
spread junk news; at the right volume level, junk news can
permeate deeply into networks of human users.
Just a few months after the 2016 US elections, we dem-
onstrated that disinformation about national security issues,
including from Russian sources, was being targeted at US mil-
itary personnel and veterans and their families. During the
president’s State of the Union address, we learned that junk
news is particularly appetizing for far-right white suprema-
cists and President Trump’s supporters (though not “small c”
conservatives). Some of this junk content originates in ac-
counts managed by foreign governments.
Foreign intervention in politics, using social media, is
ongoing, and it doesn’t just involve the Russian government.
As protests started rocking Hong Kong in 2019, Chinese gov-
ernment propagandists activated their social media networks
to convince the world that the activists were violent radicals
with no popular appeal. By 2020, seven countries were run-
ning misinformation campaigns targeting citizens in other
countries: along with Russia and China, there were similar
operations in India, Iran, Pakistan, Saudi Arabia, and Vene-
zuela. Whether built to target neighboring countries or do-
mestic voters, these mechanisms do the same thing—they
produce, disseminate, and market lies.
components, the next task is to figure out how the parts fit together
and affect public life. Understanding the impact and influence
of a lie machine means tracing out the flow of information
from lie producers through social media distribution systems
and marketing plans by professional political consultants. We
could call it a case study, archival research, process tracing, or
a study of political economy, but it is simpler to say that we
need to follow the money.
Lastly, if we can break a lie machine into component parts
and then see how the lies and money flow, can we anticipate
how such mechanisms might evolve in the years ahead? More
importantly, can we catch and disassemble them permanently?
Unfortunately, bots are just early forms of automation
that have filled our inboxes and social media feeds with junk.
Political chatbots backed by sophisticated machine learning
tools are now in development, and these automated accounts
provide a much more humanlike, interactive experience. They
are not simply scripted bots that can talk about politics. Are
you sure we have evolved from primates? Does smoking cause
all cancers or just some, and might this connection vary by
gender and cigarette brand? There’s a chatbot that will make
you reconsider such things. Is climate change real? Should we
inoculate our kids?
Chatbots have become the hot tool for industry lobbyists
seeking to promote junk science. And the next step is to put
artificial intelligence (AI) algorithms, which simulate learning,
reasoning, and classification, behind these bots. Several mili-
taries are looking at ways of using the latest nascent AI person-
alities and machine learning algorithms for political engage-
ment. Some of the best stories are coming out of China, where
closed media platforms allow for large-scale experimentation
on public opinion. What can we say now about the political