
The Liar’s Dividend: Can Politicians Use Deepfakes and Fake News to Evade Accountability?

Kaylyn Jackson Schiff∗, Daniel Schiff†, and Natália S. Bueno‡

May 10, 2022

Abstract

This study addresses the phenomenon of misinformation about misinformation, or politicians “crying wolf” over fake news. Strategic and false allegations that stories are fake news or deepfakes may benefit politicians by helping them maintain support in the face of information damaging to their reputation. We posit that this concept, known as the “liar’s dividend,” works through two theoretical channels: by invoking informational uncertainty or by encouraging oppositional rallying of core supporters. To evaluate the implications of the liar’s dividend, we use three survey experiments detailing hypothetical politician responses to video or text news stories depicting real politician scandals. We find that allegations of misinformation raise politician support while potentially undermining trust in media. Moreover, these false claims produce greater dividends for politicians than longstanding alternative responses to scandal, such as remaining silent or apologizing. Finally, false allegations of misinformation pay off less for videos (“deepfakes”) than for text stories (“fake news”).

Word count: 11,619

Keywords: misinformation, survey experiment, deepfakes, fake news, trust, media


∗Ph.D. Candidate, Department of Political Science, Emory University, [email protected]
†Ph.D. Candidate, School of Public Policy, Georgia Institute of Technology, [email protected]
‡Assistant Professor, Department of Political Science, Emory University, [email protected]

“The result of a consistent and total substitution of lies for factual truth is not that the lie will now be accepted as truth and truth be defamed as a lie, but that the sense by which we take our bearings in the real world—and the category of truth versus falsehood is among the mental means to this end—is being destroyed.” —Hannah Arendt in The Origins of Totalitarianism (1973)

Misinformation in political discourse can negatively impact political accountability, trust, and social cohesion (Jerit and Zhao, 2020; Vaccari and Chadwick, 2020a). Concerns about misinformation are only deepening with the emergence of new methods to generate and disseminate falsified media, methods that are transforming and extending traditional strategies of promoting misinformation. While scholars have debated the direct effects of misinformation in terms of its ability to deceive and persuade, misinformation can serve a variety of purposes beyond direct persuasion, working through emotional and symbolic means and shifting the foundations of the broader informational environment itself. This study devotes attention to these indirect effects and provides novel experimental evidence related to one such subtle and concerning consequence of misinformation: the liar’s dividend.

In particular, we seek to understand whether politicians and other public figures can leverage an environment of misinformation and distrust to their benefit by falsely claiming that damaging true information about themselves (e.g., a scandal) is fake. That is, we explore whether politicians can maintain support by spreading misinformation about misinformation—falsely claiming that true events and stories are merely “fake news” or “deepfakes.” If such lies are used successfully, they provide a benefit, or a “liar’s dividend,” increasing the liar’s authority, reelection prospects, or reputation (Chesney and Citron, 2019). However, they do so through deception and risk further undermining political discourse, social cohesion, and public trust in the media and the larger informational environment.

We investigate the liar’s dividend through three experimental studies using text and video from four real politician scandals in the United States. We follow the politician scandals with rebuttals from the politicians alleging that the stories are mere misinformation. The politician allegations make use of two strategies. First, politicians may seek to undermine confidence in the informational environment, a channel we term “informational uncertainty.” Alternatively, they may exploit affective polarization and partisan animus to draw supporters to their defense, which we term “oppositional rallying.” We evaluate the extent to which these strategies bolster support for the politicians and undermine trust in the media environment generally. We also assess whether lying pays off more for video (“deepfakes”) versus text (“fake news”)[1] stories and whether allegations of misinformation are more effective than alternative politician responses: apologizing or simply denying (without alleging misinformation).

We find that false allegations of misinformation do pay a liar’s dividend. Allegations invoking both informational uncertainty and oppositional rallying lead to increases in politician support, and these effects may be concentrated among political moderates and co-partisans of the politician, respectively. Allegations of misinformation also generate larger support gains for politicians than simply ignoring the scandal (non-response) or apologizing, arguably a normatively preferable strategy. Yet these gains come at the cost of deceiving the public and undermining trust in media. Finally, the results provide some reassurance: politician attempts to discredit scandals caught on video are much less successful, suggesting that politicians are still likely to be held accountable when audio-visual evidence is available.

In what follows, we provide context on the direct and indirect harms of misinformation, highlighting new challenges and features of the political and informational environment. Next, we present a theory of the liar’s dividend and further define the informational uncertainty and oppositional rallying strategies. After describing our pre-registered experimental designs,[2] we present results and review the implications of our findings for current public and scholarly conversations regarding misinformation.

[1] We define fake news as stories “which have no factual basis” but are published so as to create an impression of legitimacy, generally with the “intention of misinforming” (Tandoc Jr, Lim and Ling, 2018). While our study was motivated by the diffusion of allegations of misinformation specifically using the terminology “fake news,” it is important to clarify that colloquial usage of this term is imprecise, politically loaded, and not limited to text-based media. While we use the term “fake news” at points to contextualize the study, we discuss our approach to treating this nuance more carefully in the section on research design.
[2] Our pre-analysis plan is available at: https://osf.io/qpxr8/.

1 Direct and Indirect Harms of Misinformation

Policymakers, scholars, and members of the public have raised fears about the impacts of misinformation on social and political cohesion, on institutional trust, and on maintaining a basis of shared truth. These fears are fueled by more frequent, everyday encounters with misinformation: 89% of Americans report encountering made-up news at least sometimes, and Americans are more likely to identify made-up news as a critical problem than climate change, racism, or illegal immigration (Mitchell et al., 2019). The political implications of misinformation are particularly troubling, as both foreign and domestic actors have seized upon vulnerabilities in the current informational environment to perpetuate falsehoods. Notably, 25% of tweets spread during the 2016 US presidential election were fake or misleading (Bovet and Makse, 2019), and subsequent politically oriented misinformation culminated in a violent insurrection after the 2020 election (Election Integrity Partnership, 2021). These recent events highlight the potential for misinformation to deepen social and political fractures by exacerbating polarization, undermining accountability and rational deliberation, and decreasing trust in institutions and media as part of a vicious cycle (Anderson, Rainie and Vogels, 2021).

While the use of misinformation for political ends is as old as politics itself (Arendt, 1973; O’Shaughnessy, 2004), new trends are upending the informational environment. One such transformative development is the emergence of sophisticated new methods to produce digitally altered or fabricated audio, images, or videos, known as “deepfakes,” which result from advances in artificial intelligence techniques such as Generative Adversarial Networks (GANs). Deepfakes are produced using approaches such as facial swapping, facial animation, and the creation of entirely synthetic images or audio; notably, less sophisticated techniques (so-called “cheapfakes” or “shallowfakes”) involving basic splicing, editing, or decontextualizing of media present similar risks (Reuters, 2019; Tandoc Jr, Lim and Ling, 2018). While advanced media creation and manipulation capabilities were previously restricted to professional artists and studios through time-consuming and expensive efforts, it is increasingly possible for non-sophisticated actors to generate highly convincing fake video, images, and audio rapidly and at low cost (Karnouskos, 2020; Ovadya, 2021). Concerns surrounding deepfakes have now permeated society: 90% of the public say altered video and images cause confusion (Mitchell et al., 2019), news media and technical experts report severe challenges in determining the authenticity of content (Ker, 2019; Toews, 2020), and politicians have raised alarm as well.

While some experts previously suggested that the impact of deepfakes would be limited, examples of problematic uses are now accumulating. For example, deepfakes have been used to discourage supporters of opposing parties from voting in an Indian election (Christopher, 2020) and to create fake accounts on Twitter and YouTube in support of foreign propaganda efforts (Kan, 2020). Of particular relevance for this study, deepfakes have been used to depict specific politicians engaging in controversial acts or making offensive statements. For example, videos have allegedly exposed sex scandals involving Malaysian deputy minister Shamsul Iskander and Brazilian governor João Doria, though—critically—there remains an unsettled debate in both cases regarding whether the videos are deepfakes or authentic (Toews, 2020). Still other deepfakes have featured U.S. presidents Donald Trump and Barack Obama and French president Emmanuel Macron, and a cheapfake of Joe Biden was created and shared by a prominent Republican, Representative Steve Scalise (New York Times Editorial Board, 2020). In perhaps the most concerning political case to date, an allegation that a video depicting Gabon’s president as healthy was a deepfake helped to spur an unsuccessful military coup (Cahlan, 2020). Given these examples, one striking implication is that the mere existence or allegation of deepfakes may lead to significant social and political harms, even when the authenticity of the content is disputed or disproved.

Notwithstanding increased attention and the seemingly consequential examples above, there is scholarly disagreement over the direct effectiveness of misinformation, conceived of primarily in terms of the ability of the information to persuade. Analogous to the minimal effects hypothesis in the context of political campaigns (Kalla and Broockman, 2018), some scholars have argued that the impact of fake news may be modest. According to this perspective, consumption of misinformation may be limited depending on individuals’ media diets, restricted to those with strong partisan preferences, and moderated by individuals’ ability to adjust for bias in news sources (Little, 2018). Moreover, individual fake news messages may not be especially persuasive on their own, given the multitude of informational signals people receive and because fake news consumption is only a small portion of overall news and information diets (Guess, Nyhan and Reifler, 2020; Watts, Rothschild and Mobius, 2021). However, while the direct persuasive effects of fake news may be less than feared, much remains unknown about the multiple possible direct and indirect impacts of misinformation, especially in the medium to long term (Lazer et al., 2018).

Indeed, it has long been understood that misinformation can serve a variety of purposes beyond direct persuasion about the truth of particular claims. For example, in the context of authoritarian regimes, misinformation has been used as a means to signal the power of regimes or to encourage performances of loyalty (Huang, 2015; O’Shaughnessy, 2004; Wedeen, 2015). Misinformation can also promote confusion and skepticism. Deepfakes in particular seem especially likely to drive such indirect harms to the informational environment, as individuals may feel they are no longer able to trust their eyes and ears, engendering broader distrust in all content—whether authentic or falsified (Ternovski, Kalla and Aronow, 2022). In this sense, while deepfakes “might not always fool viewers into believing in something false,” they may exacerbate uncertainty and distrust, “further eroding our ability to meaningfully discuss public affairs” and discern truth from fiction (Vaccari and Chadwick, 2020b).

Appreciating that these effects may be intentional rather than merely incidental is essential if we are to understand the full implications of misinformation. As such, this paper examines the indirect effects of misinformation and additionally considers whether new tools to promote misinformation (deepfakes) exacerbate or otherwise alter extant challenges.

2 A Theory of the Liar’s Dividend

In light of the importance of indirect effects, this paper is concerned with a form of misinformation that owes its existence (in part) to fake news. That is, widespread awareness of fake news has opened the door to false allegations of fake news, whereby politicians or other public figures can—potentially credibly—claim that real news stories are merely fake news or deepfakes, leading to what Chesney and Citron (2019) term the “liar’s dividend.” While this tactic of denial and deflection has been made prominent by U.S. President Donald Trump, cries of “fake news” have now been echoed by politicians in Russia, Brazil, China, Turkey, Libya, Poland, Hungary, Thailand, Somalia, Myanmar, Syria, and Malaysia (Erlanger, 2017). This form of misinformation has been used to target political opponents and to deny critical media coverage, even when objective observers and experts find the reporting to be credible. As a few notable examples, former Spanish Foreign Minister Alfonso Dastis claimed that images of police violence in Catalonia in 2017 were “fake photos” (Oppenheim, 2017), and American Mayor Jim Fouts called audio tapes of him making derogatory comments toward women and Black people “phony, engineered tapes” (Wang, 2017), despite expert confirmation of their authenticity.

That the systematic usage of misinformation about misinformation—alleging “fake news” or “deepfakes” in response to real stories—has grown in recent years suggests that public figures find this strategy effective or beneficial, against the expectations of a minimal effects hypothesis. In particular, politicians may believe such a strategy can be employed to avoid accountability for political abuses or scandals. We therefore hypothesize that this strategy pays off by safeguarding politician reputations: members of the public are less likely to penalize politicians for scandals when they “cry wolf” over fake news and deepfakes. Specifically, we expect this strategy to be more effective than three alternative politician communication strategies: 1) non-response, representing an attempt to ignore a scandal and let it blow over, 2) apologizing, arguably a normatively preferable response, or 3) simply denying a scandal without invoking misinformation.

Liar’s Dividend Hypothesis: In the face of scandal, allegations of misinformation (fake news or deepfakes) will increase average support for politicians.[3]

[3] In the pre-registered pre-analysis plan (PAP), this corresponds to H1.

We propose that an allegation of a deepfake or fake news might improve politician support through two potential pathways. First, the public may find allegations of “fake news” credible due to uncertainty regarding the truth of signals in what many members of the public may perceive as a distorted media environment—a channel we term “informational uncertainty.” Here, the payoffs of allegations of fake news result from misinformation’s truth-undermining effects, or “the principle that any information could be fake” (Ciancaglini et al., 2020). The concern is not that “people will be deceived, but that they will come to regard everything as deception” (Schwartz, 2018), particularly because it is easy to challenge the veracity of evidence in a fractured political environment (Hao, 2019) and harder to disprove these kinds of allegations (Galston, 2020). If consumers of information believe they have no credible signals about the truth or falsity of political claims, the result may be increased uncertainty, as individuals lack sufficient information to establish a basic ground truth or make informed choices (Vaccari and Chadwick, 2020a). Thus, even when individuals are motivated to hold accurate beliefs, informational uncertainty undermines their capacity to do so.

To illustrate how informational uncertainty operates in the case of the liar’s dividend, consider the statement of Spanish Minister Alfonso Dastis, who attempted to discredit photos of violence in Catalonia: “I’m sure you have seen what you have seen, but I have seen fake photos that date back to 2012. So, I think we have got to be patient, and look at the situation” (Oppenheim, 2017). The uncertainty induced by a statement like this (perhaps intentionally) may leave citizens unclear about how to update their evaluation of the politician or scandal. More generally, after learning of an embarrassing moment or political scandal, a member of the public will be more likely to downgrade their evaluation of the politician. However, if the politician then issues a statement disclaiming the story as a deepfake or fake news, some members of the public may become more uncertain about what is true, decreasing belief in the scandalous story and increasing average support for the politician. Furthermore, we expect these effects to be concentrated among individuals in the middle of the political spectrum, who are less likely to be strong supporters or opponents of partisan politicians.

The proposed channel of informational uncertainty is perhaps most active when individuals are motivated to hold true beliefs and engage in rational updating of beliefs and subsequent evaluations of politicians. Yet scholars note that individuals may also be motivated by partisan “directional” goals and engage in motivated rather than accuracy-driven reasoning (Taber and Lodge, 2006). In an environment of heightened polarization and low social trust, without credible and shared informational signals to support rational processing, individuals may be especially prone to abandoning accuracy motivations in favor of directional ones (Druckman, 2012; Pennycook et al., 2021).

These elements of the political environment are highlighted by the second causal channel, which we term oppositional rallying. To avoid cognitive dissonance in the face of identity-incongruent information (a damaging news story about a preferred politician or party), core supporters or strong co-partisans may be receptive to congenial information and may employ motivated reasoning (Bullock et al., 2015) to maintain support. The allegation of a deepfake or fake news can provide just this sort of cover—an excuse or reason for supporters to rally around the politician, disregarding the negative coverage and preserving their positive evaluations of the politician. Such a response may reflect genuine changes in belief or, instead, expressive responding and partisan cheerleading (Peterson and Iyengar, 2021). Further, this channel often explicitly invokes references to political opponents, as allegations of misinformation may strategically make use of a “devil shift” (Sabatier, Hunter and McLaughlin, 1987) whereby politicians signal not only their own innocence but also the guilt of political opponents and media, allowing supporters to rally against the opposition. As such, we expect this channel to be particularly strong when individuals feel that their preferred politician or party is the target of unfair and hostile treatment by the opposition.

As an example of this strategy, American Mayor Jim Fouts alleged that his opponents were attempting to “hijack [the annual MLK Day] ceremony by releasing more vile, vitriolic, phony tapes against me” and that such an “effort...is designed to distract from my efforts of inclusion for all” (Wang, 2017). A politician who employs the strategy of oppositional rallying may thus signal explicitly to supporters in order to prime partisan directional motives. We therefore expect this mechanism to be most influential when individuals have strong positive associations with a specific politician (Flynn, Nyhan and Reifler, 2017), though strong party identification alone may be sufficient to drive these effects, given increasing affective polarization and a heightened connection between partisanship and identity (West and Iyengar, 2020). Thus, we expect effects to be stronger for strong co-partisans, who are more likely to reward allegations that employ the oppositional rallying channel with greater support for their preferred politician (Craig and Cossette, 2020).[4]

[4] Subsidiary predictions related to Informational Uncertainty and Oppositional Rallying are discussed in the PAP under H1.1 and H1.2, respectively.

2.1 Mediating Factors and Further Consequences

Given the expanded use and awareness of manipulated or synthetic video, including deepfakes, our study also considers whether the dynamics surrounding misinformation and the liar’s dividend differ for text-based versus video-based content. Much remains unknown about the extent to which deepfakes constitute a major societal risk as compared to text-based fake news, and our study aims to provide helpful evidence to answer this question. On the one hand, given a “psychological predisposition to believe in audio-visual content and a truth-default tendency,” individuals are more likely to find this information credible (Ciancaglini et al., 2020). A “realism heuristic” implies that audio-visual content more closely resembles real-world experience and thus may be more naturally assimilated than text-based content (Vaccari and Chadwick, 2020a). This pattern extends to recent experimental work on misinformation comparing video against text and audio formats (Sundar, Molina and Cho, 2021).

On the other hand, other recent studies call into question the extent to which deepfakes are more credible and persuasive than misinformation conveyed through text (Barari, Lucas and Munger, 2021), though video-based misinformation may be more effective for changing beliefs (Wittenberg et al., 2021). Relatedly, there is a long-standing debate about the extent to which “vivid” content—referring to the ability of information to provoke emotions and interest—is actually more persuasive (Taylor and Thompson, 1982). Notably, however, a recent meta-analysis of vividness which incorporates pictorial and video representations suggests that vivid information impacts both attitudes and behavior (Blondé and Girandola, 2016). In the context of the liar’s dividend, on balance, we hypothesize that respondents will believe that video depicting politician scandals is harder to fake than text, such that allegations that these videos are deepfakes will be perceived as less credible, translating into a smaller payoff for politicians.

Deepfakes Hypothesis: Allegations of misinformation will lead to smaller improvements in average support for politicians when the underlying stories are video (“deepfakes”) as compared to text (“fake news”).[5]

[5] Corresponds to H2 in the PAP.
An environment of distrust in media and institutions, partly created by misinformation, constitutes fertile ground for the liar’s dividend. In light of this, the consequences of alleging misinformation may extend beyond the immediate gains in politician support. In particular, allegations of misinformation may denigrate or otherwise undermine news media, reducing trust. These allegations can create the conditions for avoiding accountability not only for today’s scandal or bad news story but for tomorrow’s as well. To investigate this possibility, we examine whether allegations of misinformation decrease average trust in media. We expect that this reduction in trust might be driven by both pathways behind the liar’s dividend. For the informational uncertainty channel, politicians explicitly invoke distrust and confusion in the informational environment, likely driving individuals to increase their uncertainty over the accuracy of news coverage (Lee, 2010). For the oppositional rallying channel, individuals might be prompted to view the media as a biased, hostile actor itself or as simply a tool for transmitting opinion-laden attacks by political opponents.

Trust in Media Hypothesis: Allegations of misinformation will lead to decreased trust in media.[6]

[6] Corresponds to H3 in the PAP.

3 Experimental Design

To address the hypotheses presented above, we conducted three online survey experiments based on pre-registered designs[7] with a total of 8,017 respondents.[8] All three studies consider how Americans react to politicians’ allegations of misinformation in response to scandalous news stories. We randomly assigned participants, irrespective of political party, to a real scandal involving one of four politicians—two Democrat and two Republican—making statements that are arguably insensitive, embarrassing, or otherwise counter to their message, identity, or agenda.[9]

[7] As a note, this paper incorporates some language previously included in our pre-analysis plans and amendments.
[8] We chose the sample size for the first study based on minimum detectable effect (MDE) calculations using results from a pilot study in August 2020 (N = 916); for the second and third studies, we performed power analyses using results from the prior studies. MDE calculations are available in SI Section A.4.

Of note, Thompson (2000) defines scandals as “actions or events involving certain kinds of transgressions which become known to others and are sufficiently serious to elicit a public response.” While the events studied here—surrounding offensive comments—differ from other types of scandals, such as financial corruption or sex scandals, a number of news sources indeed characterize these events as scandals. In particular, all four scandals relate to identity politics: three pertain to race and ethnicity, while one centers on gender and abortion. It is thus important to recognize that characteristics of the scandal are likely to influence evaluative effects for politicians (Sikorski, 2018) and have some bearing on the generalizability of our findings.

In Studies 1 and 2, after viewing the politician scandal, participants were then randomly assigned to one of three politician responses: no response (control), an allegation of misinformation priming informational uncertainty, or an allegation of misinformation priming oppositional rallying.[10] Thus, in Studies 1 and 2, the control non-response represents a politician strategy of ignoring the scandal in hopes that it will blow over. In Study 3, we consider how allegations of misinformation invoking informational uncertainty compare to two other politician response strategies: an apology and a simple denial without an allegation of misinformation. We included the simple denial and apology treatments in Study 3 to assess whether the current informational environment makes claims of misinformation even more effective than these longstanding politician responses to scandal.

For the video, text, and allegation treatments that we describe next, we aimed to reduce media source cues (for example, by cropping news banners from videos) to 1) maintain symmetry across treatments and improve internal validity, and 2) focus on respondent identification with the politician and party rather than the particular media source. While we recognize that such a strategy removes an element of realism in normal news consumption, we thought this approach best balanced concerns about internal versus external validity. Reassuringly, experimental designs with sparser details, albeit less naturalistic, tend to enable researchers to identify the existence of an effect and do not necessarily imply less generalizability (Brutger et al., forthcoming).[11] Further, the use of multiple stories and averaging of results across politicians helps to ensure that our findings are not limited to a single media source, politician, scandal, or political party.

[9] The stories are of former politicians in order to ensure minimal impacts on current officials. Based on pilot results presented in SI Section A.8, we identified four stories that respondents viewed as similarly embarrassing and plausibly digitally faked. We also selected clips that were as consistent as possible, given available options, in terms of length, content, and context.
[10] While we situated the study in the context of recent discourse surrounding “fake news,” this term is both problematic and polarizing. Indeed, in our pilot study (see SI Section A.8), we confirmed that this terminology has a strong partisan connotation. As such, in place of “fake news,” our treatments employ the more conservative terminology of “false and misleading,” a phrase also commonly used by political actors when disclaiming misinformation.

The specific allegations are inspired by real politician statements and are designed to invoke considerations related to informational uncertainty and oppositional rallying; they are not strictly derived from statements made by the depicted politicians themselves. The informational uncertainty allegation draws from comments such as those made by Foreign Minister Dastis and by Syrian President Bashar al-Assad, who, in an attempt to discredit an Amnesty International report, said: “You can forge anything these days... We are living in a fake news era” (Erlanger, 2017). Along these lines, participants in the informational uncertainty treatment group saw the following allegation:

[Politician Name] Responds That Story is False and Misleading, People Should Be Skeptical.

In response to the recent allegations, [Republican | Democrat] [Politician Name] asserted that the story is false and misleading. He claimed that [the video is a deepfake, a computer-edited video that uses fake audio and images | the story is not based on true information]. When asked about the incident, he said that it’s well known that there’s a lot of misleading information, so people should be skeptical about what they hear. [Last Name] stated that “You can’t know what’s true these days with so much misinformation out there.”

[11] We believe that studies including media cues would be relevant to examining the interaction between media source effects and politician allegations of misinformation. Since this is one of the first empirical tests of the liar’s dividend, we leave these additional explorations for further research.

For the oppositional rallying allegation, we drew inspiration from statements like that of Mayor Fouts, along with comments by then-President Donald Trump on Twitter in response to growing criticism over his handling of the pandemic: “The Fake News Media and their partner, the Democrat Party, is doing everything within its semi-considerable power (it used to be greater!) to inflame the CoronaVirus situation.” Participants in the oppositional rallying treatment group saw:

[Politician Name] Responds That Story is False and Misleading, Attack by Opponent.

In response to the recent allegations, [Republican | Democrat] [Politician Name] asserted that the story is false and misleading. He claimed that [the video is a deepfake, a computer-edited video that uses fake audio and images | the story is not based on true information]. When asked about the incident, he said that the story is an attack by the opposition, and that people should not pay attention to it. [Last Name] stated that, “My opponent would say anything to hurt me, but my supporters know who’s really on their side.”

An important aspect of our study is that it addresses sensitive social and political issues in the context of misinformation, an already fraught topic, given that interaction with misinformation can harm participants. As such, we carefully considered ethics in the design and administration of our surveys. Foremost, our study unavoidably involved deception, given the focus of our research questions on political misinformation and the liar’s dividend. Our approach to minimizing deception as much as possible was to use real videos and stories of politicians, rather than, for example, generating a new deepfake or false story. To enable the comparison of different politician communication strategies in the context of misinformation, the research team did attribute various responses to the politicians (e.g., a denial or apology) that they did not actually make. Given this deception, we debriefed all participants at the end of the study. Second, to avoid exacerbating participant feelings of distrust and uncertainty, our debrief included links to resources on media literacy and digital literacy, such as knowing how to spot false news stories.[12] Third, we wanted to avoid the risk that participation in the study would influence real-life political behavior such as voting. We therefore chose to use stories about inactive politicians, i.e., individuals who are not currently in office or running for office. We also expected that a lower degree of attachment to these less prominent individuals would be less likely to stoke partisan animosity or otherwise influence real-world behavior. Fourth, all participants gave consent prior to participation and were compensated through Lucid Theorem’s survey partners. During the consent process, participants were warned that some of the information was offensive, that some information would be withheld, and that additional information about the goals of the study would be provided at the end. Two separate Institutional Review Boards ultimately approved the study’s approach to research ethics and deception.

[12] The full instrument, including the participant debrief, is available in SI Section A.1.

We use a set of four outcome measures to assess whether respondents supported the politician (“I would support the politician,” “I would defend the politician against critics,” “I would vote for the politician,” and “I would donate to the politician”). We measure respondents’ belief in the underlying story using two outcome questions (“I believe the story about the politician” and “I think that the story about the politician is true”) and respondents’ trust in media using two outcome questions (“I trust the media” and “I believe that the media reports the news fairly”).[13] All outcome questions use a bipolar 5-point Likert scale, with respondents indicating their agreement from “Strongly disagree” to “Strongly agree.” With the goal of reducing variance and improving content validity, we use multiple questions and create pre-registered indices for each outcome.[14]

[13] Our measures of general media trust resemble those used by, for example, the ANES (NES, 2022), Gallup (Brenan, 2021), and the Reuters Institute for the Study of Journalism at Oxford (Newman et al., 2019), and psychometric studies conceptually and empirically support the use of measures of generalized media trust (Prochazka and Schweiger, 2019).
[14] The indices are constructed following a pre-registered procedure used by Kling, Liebman and Katz (2007), which involves averaging z-scores for the component outcome questions.
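
To make the index construction concrete, the sketch below implements the z-score averaging procedure that footnote [14] attributes to Kling, Liebman and Katz (2007). The column and treatment labels are our own illustrative assumptions, not the authors’ replication code.

```python
# Illustrative sketch of the pre-registered index construction: standardize
# each component Likert item against the control group, then average the
# z-scores. Column names ("support", ..., "treatment") are assumptions.
import pandas as pd

SUPPORT_ITEMS = ["support", "defend", "vote", "donate"]  # 5-point Likert items

def klk_index(df: pd.DataFrame, items: list) -> pd.Series:
    """Average of component z-scores, standardized to the control group."""
    control = df["treatment"].eq("no_response")  # hypothetical control label
    z = pd.DataFrame({
        item: (df[item] - df.loc[control, item].mean()) / df.loc[control, item].std()
        for item in items
    })
    return z.mean(axis=1)

# Usage: df["support_index"] = klk_index(df, SUPPORT_ITEMS)
```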

To test our hypotheses, we regress the appropriate outcome measure (e.g., the politician support index or trust in media index) on treatment (the reference group comprises participants who did not receive a response message from the politician) and a set of covariates (partisanship, gender, race/ethnicity, age, education, household income, region, media literacy, and digital literacy).[15] Our hypotheses and regression specifications, including covariates, are pre-registered and available at https://bit.ly/3EidCi6. For the primary regressions used to test our hypotheses, we report standard nominal 2-sided p-values based on robust standard errors. We also engage in exploratory analysis, and within hypothesis families with multiple additional exploratory tests, we use the Benjamini-Hochberg method to correct for multiple testing and present corrected p-values (using a false discovery rate of 0.05), following the approach of Bohlken, Iakwad and Nellis (2018). Results from the exploratory analyses with nominal and adjusted p-values are presented in SI Section A.7.

[15] Our estimates pool the four politicians and reflect a weighted average across them. Results with politician fixed effects are very similar, as shown in SI Section A.6. Considerations related to the construction of the news media literacy and digital literacy measures are presented in SI Section A.3. Covariate-unadjusted main results are included in SI Section A.6.
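
As a rough illustration of this estimation strategy, the following sketch fits a covariate-adjusted OLS specification with heteroskedasticity-robust standard errors and applies the Benjamini-Hochberg correction to a family of exploratory p-values. The variable names, the HC2 variance estimator, and the data frame are assumptions for illustration, not the paper’s actual code.

```python
# Sketch of the main specification: regress the outcome index on treatment
# (reference: non-response) plus pre-registered covariates, with robust SEs.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.multitest import multipletests

FORMULA = (
    "support_index ~ C(treatment, Treatment('no_response'))"
    " + C(party) + C(gender) + C(race) + age + C(education)"
    " + C(income) + C(region) + media_literacy + digital_literacy"
)

def fit_main_spec(df: pd.DataFrame):
    """OLS of the outcome index on treatment and covariates, robust SEs
    (HC2 is our assumption; the paper says only 'robust')."""
    return smf.ols(FORMULA, data=df).fit(cov_type="HC2")

def bh_correct(pvalues, fdr=0.05):
    """Benjamini-Hochberg adjusted p-values for an exploratory family."""
    reject, p_adj, _, _ = multipletests(pvalues, alpha=fdr, method="fdr_bh")
    return reject, p_adj
```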

The samples for all studies were recruited using the Lucid Theorem platform and are demographically proportionate to the U.S. adult population in terms of gender, race/ethnicity, age, and region.[16] We also find no evidence of covariate imbalance in our samples.[17] We include two attention screener questions to allow for the analysis of results stratified by respondents’ level of attentiveness (shown in SI Section A.3); our main analyses do not exclude inattentive respondents (Berinsky, Margolis and Sances, 2014; Berinsky et al., 2019). Overall, our findings are stronger among more attentive participants (see SI Section A.7).

[16] Research supports the use of Lucid for social science research, as it has been used successfully to replicate prior experimental results, and because its survey takers more closely match U.S. political and psychological profiles than those of alternative platforms such as MTurk (Coppock and McClellan, 2019).
[17] See SI Section A.5 for covariate balance information and results of F-tests evaluating whether the covariates jointly predict treatment assignment.

4 Study 1

We conducted our first study in February 2021 with 2,503 respondents. Study 1 includes the elements of the research design presented in Figure 1 and incorporates differences in the media format through which the scandalous news stories are presented. This creates a 2x3 factorial design with variation in both the presentation of the politician scandal (video[18] or text) and the subsequent politician response (no response, an allegation invoking informational uncertainty, or an allegation invoking oppositional rallying). Thus, the design of Study 1 allows for examination of the Deepfakes Hypothesis as well as the Liar’s Dividend Hypothesis through exploring differences in responses to politician allegations after video versus text-based treatments. In order to ensure consistency across the text and video treatments, we create transcripts of the video clips to produce the text-based treatments. We present results pooling across video and text treatments and then separately by media format.

[18] We used a pilot study to assess whether participants could successfully see and hear the videos, and 98% of subjects reported no difficulty. Nevertheless, we also added subtitles to the videos and used a timer to encourage participants to engage with the treatments.
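
A minimal sketch of the resulting randomization is below: one of four scandals crossed with the 2x3 factorial of media format and politician response. The cell labels are our shorthand, not the survey platform’s actual implementation.

```python
# Sketch of Study 1's randomization: each respondent independently draws one
# of four scandals, one media format, and one politician response.
import random

SCANDALS = ["politician_1", "politician_2", "politician_3", "politician_4"]
FORMATS = ["video", "text"]                      # factor 1 (2 levels)
RESPONSES = ["no_response",                      # factor 2 (3 levels)
             "informational_uncertainty",
             "oppositional_rallying"]

def assign(respondent_id: int, seed: int = 2021) -> dict:
    rng = random.Random(seed * 1_000_003 + respondent_id)  # reproducible draw
    return {
        "scandal": rng.choice(SCANDALS),
        "format": rng.choice(FORMATS),
        "response": rng.choice(RESPONSES),
    }
```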

4.1 Results

Figure 1: Study 1 Design

Figure 2 presents standardized treatment effects from Study 1 in order to assess the impact of allegations of misinformation on politician support. Overall, the results provide strong support for the Liar’s Dividend Hypothesis. Figure 2 shows that allegations of misinformation (either invoking informational uncertainty or oppositional rallying) increase politician support by 0.07 standard deviations for text and video combined and by 0.16 standard deviations for text only. For allegations priming informational uncertainty in particular, politician support increases by 0.09 (text and video) and 0.18 (text) standard deviations; for oppositional rallying, politician support increases by 0.06 (text and video) and 0.14 (text) standard deviations. All of these effects are statistically significant at the conventional 0.05 alpha level, except for oppositional rallying, which has significant effects only for text. Notably, these effects are even larger for three of the support measures used to create the index—willingness to support, defend, and vote for the politician—while reticence to donate to the politician attenuates combined support as measured by the index (see Figure A5 in SI Section A.6).

Figure 2: Liar’s Dividend Results for Study 1

Notes: All figures display 95% confidence intervals based on robust standard errors. “Allegation” refers to a pooled treatment group with either Informational Uncertainty or Oppositional Rallying allegations, and the reference group is composed of respondents who received a non-response from the politician. A full table of results with covariates is available as SI Table A4.

These effects are meaningful, with effect sizes of 0.1 considered ‘small’ and 0.2 ‘medium’ in the political psychology literature (Funder and Ozer, 2019). For context, the largest standardized treatment effects for a single component outcome measure—e.g., “I would support the politician”—correspond to an unstandardized 0.25-point increase in support along the 5-point Likert scale. Another way of making sense of these effects is to examine the impact of the allegations of misinformation on critics of the politicians. In the control group, around 44% of respondents were critics, measured as the percentage of respondents who disagreed or strongly disagreed that they would “support the politician.” In contrast, in the text-scandal treatment groups, allegations of misinformation substantially decreased the percentage of critics to around 32-34%, a 10-12 percentage point reduction in the share of critics.
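
The “critics” calculation above is straightforward to reproduce; a sketch, assuming a data frame holding the raw Likert responses with our own column and response labels:

```python
import pandas as pd

def critic_share(df: pd.DataFrame) -> pd.Series:
    """Share of respondents per treatment group who disagreed or strongly
    disagreed with "I would support the politician" (labels assumed)."""
    critic = df["support"].isin(["Disagree", "Strongly disagree"])
    return critic.groupby(df["treatment"]).mean()
```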

Across all types of allegation (pooled or considering the distinct channels separately), video evidence of scandals reduces the effectiveness of politician allegations for generating support gains. We find that when politicians allege “deepfake,” they do not receive a liar’s dividend. Individuals may find video sufficiently persuasive, or allegations insufficiently persuasive, such that a politician’s reputation does not recover from a scandalous video story. In contrast, as discussed above, there is a payoff for allegations of misinformation in response to text-based stories (addressing the Deepfakes Hypothesis, pre-registered as H2). Indeed, the treatment effects for politician support are substantially larger for text stories than for video, and this difference is statistically significant for both informational uncertainty (p = 0.048) and oppositional rallying (p = 0.042).[19] These results are somewhat reassuring. While scholars and the public are justifiably concerned about misinformation perpetuated through the use of ultra-realistic deepfakes, an interesting irony is that video content may be so believable that politicians gain little ground when trying to pretend that real video content is faked. Yet, to the extent that public figures find themselves increasingly needing to rebut real deepfakes, they may find there is no truth-teller’s dividend either.

[19] Arguably, the treatment wording for the video condition is slightly stronger than the treatment for text-based misinformation, meaning that our finding of null effects for allegations of deepfakes is conservative, if anything.
5 Study 2

In Study 2, we replicate key elements of Study 1 to increase our confidence in the robustness of the original findings. Study 2 differs from the prior study by focusing on text exclusively (no video treatment) because the effects of allegations of misinformation on support appear to operate primarily through text. Additionally, by pooling across text conditions from Study 1 and Study 2, we increase our statistical power and ability to explore relevant heterogeneity by partisan identity.

For Study 2, we recruited 2,518 additional participants in April 2021 via Lucid. Participants were randomly assigned to one of the four text-based politician scandals, followed by: no response from the politician (control) or the informational uncertainty allegation.[20] After seeing one of the two responses to the politician scandal, participants answered the same outcome questions as in Study 1 to preserve the integrity of the replication for the informational uncertainty treatment. We followed up this component of Study 2 with a secondary experiment embedded in the same survey and used to separately assess the oppositional rallying treatment. In particular, Study 2 participants were also assigned to a second politician scandal followed by either the control condition or the oppositional rallying allegation, accompanied by the same outcome questions.[21]

Similar to Study 1, we test our hypotheses by regressing the outcomes of interest on treatment and the same set of pre-registered covariates. For ease of comparability, we report estimates from Study 2 along with text-only estimates from Study 1 and pooled estimates. The pooled estimates are precision-weighted averages of treatment effects from each study using fixed effects specifications and allow us to make use of the larger sample of respondents across both studies.[22]

[20] We also included another treatment: the informational uncertainty allegation rebutted by a subsequent fact-checking statement. The results indicate that fact-checking may eliminate the liar’s dividend. However, we also found that most subjects were uninterested in clicking to access resources about media and digital literacy, potentially raising a concern about individuals’ willingness to seek out and consume fact-checking information. We move the discussion of this pre-registered hypothesis (PAP Amendment 1, fact-checking hypothesis) to the SI due to space constraints. Results are available in SI Section A.7.
[21] The wording introducing the oppositional rallying treatment was slightly modified to avoid raising survey-taker suspicion that the study was manufactured by researchers. Moreover, because the secondary experiment comes after another experiment, priming effects are a concern, and this component of the study should not be considered a perfect replication.
[22] Results for specifications using random effects are nearly identical. Results are also largely the same if we combine samples and perform a single regression as opposed to combining separate studies’ estimates through precision weighting.
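
For intuition, the sketch below computes a fixed-effects (inverse-variance) pooled estimate from study-level coefficients and standard errors. The numbers in the usage comment are illustrative, not the paper’s estimates.

```python
# Sketch of precision-weighted pooling: weight each study's estimate by the
# inverse of its sampling variance, as in a fixed-effects meta-analysis.
import numpy as np

def pool_fixed_effects(estimates, std_errors):
    """Inverse-variance weighted average and its standard error."""
    est = np.asarray(estimates, dtype=float)
    se = np.asarray(std_errors, dtype=float)
    w = 1.0 / se**2                       # precision weights
    pooled = np.sum(w * est) / np.sum(w)  # weighted mean of estimates
    pooled_se = np.sqrt(1.0 / np.sum(w))
    return pooled, pooled_se

# Usage (illustrative numbers): pool_fixed_effects([0.18, 0.08], [0.05, 0.04])
```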

5.1 Results

Figure 3 shows the results from both studies and the pooled estimates. Across both studies, there is again strong support for the Liar’s Dividend Hypothesis. Participants who were exposed to allegations of misinformation reported higher average levels of willingness to support the politicians. While there is some variation in effect sizes across studies (the effect of informational uncertainty in Study 2 is smaller in magnitude, with a p-value of 0.08), estimates are all in the same direction and are statistically indistinguishable from each other. Moreover, we find clear evidence of impacts on support through the oppositional rallying channel, ranging from 0.14 to 0.16 standard deviations across studies. Overall, given that these dividends are produced through a single politician allegation, the gains in politician support are substantial.[23]

[23] When we incorporate the additional 1,254 respondents who received the video scandals in the first wave, results are generally consistent, though smaller in magnitude. SI Section A.6 presents the findings from the combined sample of 5,021 respondents from Studies 1 and 2.

Figure 3: Liar’s Dividend Results for Study 2

Notes: We use only text treatments for comparability across Studies 1 and 2. A full table of results with covariates is available as SI Table A5.

Misinformation about misinformation does seem to produce a liar’s dividend in terms of gains in politician support. Yet, how are these gains produced? In the case of oppositional rallying, we hypothesized that allegations of misinformation invoking friends and foes would prime partisan political identity and stir up negative sentiments towards perceived political opponents. As a result, we expected allegations invoking oppositional rallying to produce the strongest effects for sympathetic co-partisans of the politician. Alternatively, in the case of informational uncertainty, we expected allegations invoking this strategy to produce stronger effects on moderates, whose lack of partisan attachments may make them more susceptible to feelings of uncertainty.[24]

Figure 4 displays heterogeneous effects of both theoretical channels of the liar’s dividend compared to control. Effects are disaggregated by the co-partisanship[25] of respondents with the politician in their respective treatment and are produced by pooling respondents across studies and focusing on text scandals for comparability (the coding is sketched below).

[24] We include a set of exploratory analyses from Study 2 in SI Section A.7 to assess how belief operates in the context of the liar’s dividend. In short, the relationship between belief and politician support is complex and begs further research.
[25] The way in which we code co-partisanship deviates from our pre-analysis plan. Rather than comparing strong co-partisans (respondents who are co-partisans with their treatment politicians, excluding leaners) to all other respondents, we instead compare co-partisans, anti-partisans, and moderates in order to better identify distinct subgroups of respondents.
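
The sketch below mirrors the coding described in footnote [25] and the Figure 4 notes; the party strings are assumptions about the survey’s response labels.

```python
def classify_partisanship(respondent_party: str, politician_party: str) -> str:
    """Return 'co-partisan', 'anti-partisan', or 'moderate' for a respondent
    relative to the politician in their assigned treatment."""
    if respondent_party == "Independent":
        return "moderate"  # independents are coded as moderates
    # Leaners and strong identifiers count toward their party, per the paper.
    base_party = respondent_party.replace("Strong ", "").replace("Lean ", "")
    return "co-partisan" if base_party == politician_party else "anti-partisan"
```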

Figure 4: Heterogeneous Effects of Oppositional Rallying and Informational Uncertainty

Notes: Co-partisans are respondents whose self-reported partisanship matches that of the politician whose scandal they saw/read. Anti-partisan respondents are from the opposing political party to treatment politicians. Moderates are independents. For example, self-identified Strong Democrats, Democrats, and Lean Democrats are identified as co-partisans with the Democrat politicians and anti-partisans with the Republican politicians depicted in the treatments. Those who identify as independents are classified as moderates regardless of the politician’s party. A full table of results with covariates is available as SI Table A6.

opponents trigger more support among individuals who harbor stronger co-partisan feelings.
That is, oppositional rallying produces effects that are larger in magnitude for co-partisans
of politicians, on the order of 0.25 standard deviations and borderline statistically different
(p = 0.081) from the effects for anti-partisans. Notably, there are also apparent increases
in support for moderates, and even borderline statistically significant (p = 0.087) gains for
anti-partisans, suggesting that oppositional rallying has even more widespread appeal than
anticipated, for example, with no evident backlash effect for anti-partisans. Meanwhile,
results from Figure 4 suggest that moderates are more greatly impacted by informational
uncertainty than their more partisan peers. While the general pattern for informational
uncertainty aligns with our expectations, the differences between moderates and other sub-

24
groups are not statistically significant.

Furthermore, we expected informational uncertainty to also influence belief in the story about the politician (pre-registered H1.1), lowering belief in the scandal and resulting in higher politician support. Oddly, however, our experimental results do not indicate that allegations priming informational uncertainty influence belief in the story. As shown in SI Section A.6, we find in both Studies 1 and 2 that politician allegations have largely insignificant impacts on belief in the scandal. To make sense of these counterintuitive results, SI Section A.7 provides further exploratory evidence assessing the complex relationship between belief and support. In particular, drawing on additional survey questions, we find descriptive evidence that self-reported belief in the politicians’ allegations correlates with the feeling that “it’s hard to know what’s true these days” and that respondents who believed the allegations agreed that this affected their support for politicians. Yet, we also find that belief in the politician allegation is not correlated with belief in the underlying scandal. This may be evidence of a belief-support disconnect, expressive reporting, or something else. Overall, there appear to be substantial inconsistencies in the ways in which individuals process their beliefs, presenting a challenge for understanding the mechanism behind allegations invoking informational uncertainty.

In sum, both hypothesized channels do produce a liar’s dividend, with preliminary evidence of differences between political subgroups. Yet further research is needed to better understand how subgroups are affected by different politician messaging strategies, as well as how politicians may strategically design their messages to target particular audiences. Additionally, our findings from both studies suggest some evidence of a belief-support disconnect, which merits further investigation.

5.2 Trust in Media

Beyond the immediate dividends to politicians, does misinformation about misinformation produce additional, more indirect consequences for society as a whole? To answer this question, we examine whether a politician allegation changes participants’ trust in media. Table 1 presents results that address the Trust in Media Hypothesis. In Study 1, we observe that politician allegations using either strategy in response to text-based scandals (Allegation) lead to small reductions in trust in news media on average, though this result is not statistically significant. For Study 2, we observe slightly greater reductions in trust in news media when allegations of informational uncertainty (Info. Uncertain) are used, with effects on the order of 0.07 standard deviations (p = 0.075).

Table 1: Impacts on Trust in Media Index

                          Dependent variable: Trust in Media
                          (1)                     (2)
Allegation                −0.044
                          (0.052)
Info. Uncertain                                   −0.072∗
                                                  (0.041)
Observations              1,249                   2,518
R2                        0.239                   0.265
Study                     Study 1 (Text-only)     Study 2

∗p<0.1; ∗∗p<0.05; ∗∗∗p<0.01

Notes: With robust SEs and including covariates. A full table with covariates is available as SI Table A7.

While we find small, borderline significant impacts on trust in media in the context of a survey experiment, we cannot evaluate the extent to which these effects persist, compound, or decay over time in real-world contexts. Nevertheless, it is somewhat disconcerting that even a single instance of misinformation about misinformation might lead to decreased overall trust in news media, a concern further highlighted by our findings in Study 3 below.

6 Study 3

Results from Studies 1 and 2 indicate that politicians gain a liar’s dividend when alleging misinformation rather than remaining silent after a scandal. Yet remaining silent in the face of scandal and attempting to allow a controversy to blow over is only one among several possible politician messaging strategies. It is possible that any active reply by a politician would confer benefits compared to a non-response. To address this possibility, Study 3 compares allegations of misinformation with two additional politician responses: a simple denial and an apology. These latter strategies have been studied previously as prominent types of politician reactions to transgressions (Gonzales et al., 1995) and have been found to mitigate reputational damage (Brenton, 2011). Compared to these tried-and-true approaches, then, Study 3 allows us to assess whether allegations of misinformation are effective in boosting support specifically because they invoke an environment saturated with informational uncertainty. That is, is there something novel about today’s informational environment, due to technological, social, or political conditions, that renders allegations of misinformation especially effective?

For this study, we recruited 2,996 new participants in October 2021 via Lucid. Participants
were again randomly assigned to one of the four text-based politician scandals that we used
in the previous studies, followed, via random assignment, by one of three responses: the
informational uncertainty allegation from the prior two studies, a simple denial that does
not invoke misinformation, or an apology. We chose to use the informational uncertainty
allegation (as opposed to the one invoking oppositional rallying) because it most directly
references the indirect harms to the informational environment due to misinformation and
thus is most salient for answering the question articulated above.

We structured the denial and apology statements to be as similar as possible to the
allegation of misinformation, to preserve symmetry across treatments and thus isolate the
unique aspects of each politician communication strategy.26 The denial statement included
below is based on the form of flat-out denials common, for example, in politician sex scandals,
such as Bill Clinton’s infamous denial:

[Politician Name] Denies that Events in Story Occurred.

In response to the recent allegations, [Republican | Democrat] [Politician Name]
firmly denied the story. When asked about the incident, he said that it never
occurred. [Last Name] stated that “That did not happen. I never said that.”

The apology statement provides an alternative in which the politician acknowledges truth
to the story and accepts responsibility. This allows us to assess whether members of the
public are more receptive to politicians who accept responsibility, arguably a normatively
preferable response. The wording for this treatment is based on the real reaction by
John Murtha to the scandal used in our experiment. After critical news coverage, Murtha
released a statement saying “I apologize for making the comment that Western Pennsylvania
is a racist area.” The apology treatment is as follows:

[Politician Name] Acknowledges Story and Offers Apology.

In response to the recent allegations, [Republican | Democrat] [Politician Name]
acknowledged the story and apologized. When asked about the incident, he said
that it did occur. [Last Name] stated that “Yes, I did say that, and I apologize
for making those comments.”

26
All three treatments are structured similarly, though the simple denial and apology treatments are
slightly shorter than the informational uncertainty treatment. In balancing internal and external validity
goals, we determined that increasing the length of the alternative treatments to achieve further symmetry
risked introducing other sources of variation and would be perceived as less believable. Additionally, across
all three studies, we used timers to require participants to spend at least ten seconds viewing each scandal
as well as each politician allegation before moving on. Notwithstanding slight differences in video or text
length, this ensured that participants spent both a sufficient and similar amount of time engaging with all
treatments.

6.1 Results

Figure 5 displays the support gains that accrue to politicians from alleging misinformation
relative to other communication strategies in the face of scandal. Unlike in the prior two studies,
the control group is not politician non-response. Instead, allegations of misinformation are
compared to an apology or a simple denial.

Figure 5: Liar’s Dividend Results for Study 3: Allegations of Misinformation Compared to
Alternative Politician Communication Strategies

Notes: Full table of results with covariates available as SI Table A8.

We find some evidence that allegations of misinformation invoking informational uncertainty
are more effective than alternative politician responses. In particular, allegations of misinfor-
mation are more effective than apologizing (0.10 standard deviations, p = 0.012). However,
while support gains from allegations of misinformation are larger in magnitude than support
gains due to simple denials, this difference is not significant at conventional levels (0.06
standard deviations, p = 0.116). This finding could imply that a liar’s dividend in today’s in-
formational environment is not meaningfully larger than dividends that would have accrued
to liars in the past. Yet another possibility is that even simple denials are more effective
in today’s informational ecosystem, a possibility that this study cannot directly address.
Additionally, while both apologies and simple denials are common politician responses to
scandal, an open question is how allegations of misinformation compare to still other types
of responses or variations of the treatments used here.

Indeed, the results are most concerning when comparing allegations of misinformation to
apologies, as they suggest that politicians can benefit by falsely alleging misinforma-
tion rather than taking responsibility for a scandal. This extends recent scholarship finding
that politicians benefit more from denying scandals than from conceding and offering to take
corrective action (Johnson, 2018). Relatedly, while our results indicate that simple denials
are not statistically more effective than apologies (p = 0.344), denials that employ the ex-
tra step of alleging misinformation are indeed more beneficial to politicians than apologies.
In combination, these findings caution that political accountability in today’s informational
environment is especially difficult. Public figures may be incentivized to cry wolf over mis-
information even when doing so undermines principles of political accountability.

To shed more light on this challenge, Table 2 compares the effects of allegations
of misinformation versus apologies on politician support, belief in the underlying scandal,
and trust in media. As indicated in the coefficient plot, allegations of misinformation drive
support gains for politicians on the order of 0.1 standard deviations. Yet these benefits
to politicians are socially costly: they require deceiving the public and undermine trust in
media, creating the conditions for more uncertainty and less accountability. Unlike in
Studies 1 and 2, belief in the story declines by a sizable 0.31 standard deviations when the
public is misled in this way, likely because the apology response explicitly acknowledges the
truth of the scandal. Finally, trust in media declines by 0.12 standard deviations.27 The
liar’s dividend therefore comes at a substantial social cost, involving a zero-sum trade-off
in which politicians benefit while social welfare declines.

                              Support Index   Belief Index   Trust Index
                                   (1)             (2)            (3)
Info. Uncertain                  0.103∗∗        −0.311∗∗∗      −0.120∗∗∗
                                (0.040)          (0.041)        (0.038)
N                                 1,994           1,994          1,994
R2                                0.082           0.081          0.223
F Statistic (df = 26; 1967)     6.728∗∗∗        6.650∗∗∗      21.688∗∗∗
∗p < .1; ∗∗p < .05; ∗∗∗p < .01

Table 2: Allegations of Misinformation Versus Apologies

Notes: With robust SEs, including covariates, and using Study 3 data. Full table with covariates available
as SI Table A9.

7 Conclusion

This study is the first to provide experimental evidence of the liar’s dividend. Using real
politician scandals presented to participants, we find that unscrupulous politicians willing
to falsely allege misinformation may be rewarded with a reputational boost in the face of
an otherwise damaging story. Alleging misinformation bolsters politician support more than
remaining silent and allowing a scandal to blow over. It is also significantly more effective
than apologizing—a preferable behavior for promoting trust and political accountability—
and it is at least as effective as a simple denial. Indeed, these strategies and their effects
may be enabled by developments in technological tools for generating and disseminating
misinformation (e.g., deepfakes, social media platforms) and by the broader sociopolitical
environment in which frequent instances of real misinformation render even false allegations
credible.

27
Compared to simple denials, allegations of misinformation marginally decrease belief (standardized effect
= −0.07, p = 0.08) and have no effect on trust in media (standardized effect = −0.04, p = 0.34).

Interestingly, while the liar’s dividend concept was originally developed in the context of
concerns over the implications of deepfakes, we find that crying wolf about fake news is
far more likely to pay off. Scholars have debated the extent to which deepfakes are more
believable and persuasive than text-based misinformation (Barari, Lucas and Munger, 2021;
Wittenberg et al., 2021). Our results provide compelling evidence that video and text do
operate differently in the context of allegations of misinformation. In particular, attempts
to discredit video appear to be much less persuasive. Deepfakes may not (yet) pose the
particular indirect threat suggested by the liar’s dividend, but they remain highly novel
additions to the informational environment, and warrant much more research to unpack
their direct and indirect effects.

We also sought to better understand the strategies employed in allegations of misinformation
and the channels through which they affect individuals’ attitudes. Drawing on real-world
attempts to allege misinformation by public figures and relevant scholarly literature, this
study proposes and evaluates two such strategies, which we term informational uncertainty
and oppositional rallying. We find that both strategies are effective in raising politician
support, and preliminary evidence suggests they may work in distinct ways. Politicians
employing the oppositional rallying strategy receive more support from co-partisans, likely
the intended targets of these messages in the first place, as they seek to exploit polarization
and foment political animus. Meanwhile, informational uncertainty’s effects may appear
concentrated on political moderates and, under some circumstances, influence belief in the
scandal. Yet, the evidence on both counts is mixed: the differences between partisan groups
are not statistically distinguishable and the effects on belief are unstable across studies.
Further research is needed to unveil the underpinnings of the informational uncertainty
effect we document in our three studies.

Also importantly, our study examines a certain type of political scandal, surrounding of-
fensive comments largely related to race, ethnicity, gender, and identity. While some may
consider these to be relatively minor gaffes, there are reasons to think that both making and
responding to these kinds of comments are becoming more prevalent, such that this constitutes
an important feature of modern political discourse worth understanding. An open question
is whether other types of scandals, including potentially more severe ones, make the payout
of the liar’s dividend even greater. Additionally, for ethical reasons, this study is centered
on inactive politicians and scandals that are not especially politically salient today. The
liar’s dividend may pay out even more for current high-profile political leaders, reinforced
when political actors, organizations, and certain media sources act in concert to amplify
misinformation and undermine trust. Yet, as members of the public may have stronger at-
tachments and more information about currently active politicians, it is also possible that
public attitudes may be more polarized and less malleable.

Overall, how concerned should we be about these findings? On the one hand, we find that
false allegations of misinformation are not effective in the case of video. Further, even if
based on real scandals and videos, our results established in an experimental context may
overstate the effects that occur in real-world settings, especially if effects decay over time or
are countered by fact-checking. On the other hand, many of our analyses are deliberately
designed to be conservative. For example, we include inattentive respondents in our main
results but find substantially larger effects for attentive participants. We also find larger
effects when we omit more demanding support outcomes from our index, such as “I would donate
to the politician,” an outcome we included while recognizing that large effects were unlikely. Even
when using our conservative estimation approach, we witness harmful impacts for political
accountability including a decline in trust in media when allegations of misinformation are
employed rather than apologizing. Might the indirect effects of misinformation be even more
consequential for political accountability and trust than the direct effects? Those political
figures attempting to reap the liar’s dividend certainly hope so.

References

Anderson, Janna, Lee Rainie and Emily A. Vogels. 2021. Experts Say the ‘New Normal’ in
2025 Will Be Far More Tech-Driven, Presenting More Big Challenges. Technical report
Pew Research Center.
URL: https://www.pewresearch.org/internet/2021/02/18/experts-say-the-new-normal-in-
2025-will-be-far-more-tech-driven-presenting-more-big-challenges/

Arendt, Hannah. 1973. The Origins of Totalitarianism. First edition ed. New York: Harcourt,
Brace, Jovanovich.

Barari, Soubhik, Christopher Lucas and Kevin Munger. 2021. Political Deepfakes Are As
Credible As Other Fake Media And (Sometimes) Real Media.
URL: https://osf.io/cdfh3

Berinsky, Adam J., Michele F. Margolis and Michael W. Sances. 2014. “Separating the
Shirkers from the Workers? Making Sure Respondents Pay Attention on Self-Administered
Surveys.” American Journal of Political Science 58(3):739–753.

Berinsky, Adam J., Michele F. Margolis, Michael W. Sances and Christopher Warshaw. 2019.
“Using Screeners to Measure Respondent Attention on Self-Administered Surveys: Which
Items and How Many?” Political Science Research and Methods pp. 1–8.

Blondé, Jérôme and Fabien Girandola. 2016. “Revealing the elusive effects of vividness: a
meta-analysis of empirical evidences assessing the effect of vividness on persuasion.” Social
Influence 11(2):111–129.

Bohlken, Anjali Thomas, Nikhar Gaikwad and Gareth Nellis. 2018. “The Politics of Public
Service Formalization in Urban India.” p. 51.

Bovet, Alexandre and Hernán A. Makse. 2019. “Influence of Fake News in Twitter during
the 2016 US Presidential Election.” Nature Communications 10(1):7.

Brenan, Megan. 2021. “Americans’ Trust in Media Dips to Second Lowest on Record.”.
URL: https://news.gallup.com/poll/355526/americans-trust-media-dips-second-lowest-
record.aspx

Brenton, Scott. 2011. “When the personal becomes political: Mitigating damage following
scandals.” Current Research in Social Psychology 18.

Brutger, Ryan, Joshua D Kertzer, Jonathan Renshon, Dustin Tingley and Chagai M Weiss.
forthcoming. “Abstraction and detail in experimental design.” American Journal of Polit-
ical Science .

Bullock, John G., Alan S. Gerber, Seth J. Hill and Gregory A. Huber. 2015. “Partisan Bias
in Factual Beliefs about Politics.” Quarterly Journal of Political Science 10(4):519–578.

Cahlan, Sarah. 2020. “How Misinformation Helped Spark an Attempted Coup in Gabon.”
Washington Post .
URL: https://www.washingtonpost.com/politics/2020/02/13/how-sick-president-suspect-
video-helped-sparked-an-attempted-coup-gabon/

Chesney, Bobby and Danielle Citron. 2019. “Deep Fakes: A Looming Challenge for Privacy,
Democracy, and National Security.” California Law Review 107(6):1753–1820.
URL: https://heinonline.org/HOL/P?h=hein.journals/calr107i=1789

Christopher, Nilesh. 2020. “We’ve Just Seen the First Use of Deepfakes in an Indian Election
Campaign.”.
URL: https://www.vice.com/en_in/article/jgedjb/the-first-use-of-deepfakes-in-indian-election-by-bjp

Ciancaglini, Vincenzo, Craig Gibson, David Sancho, Odhran McCarthy, Maria Eira, Philipp
Amann and Aglika Klayn. 2020. Malicious Uses and Abuses of Artificial Intelligence.
Technical report Trend Micro Research, United Nations Interregional Crime and Justice

Research Institute (UNICRI), Europol’s European Cybercrime Centre (EC3).
URL: https://www.europol.europa.eu/sites/default/files/documents/malicious_uses_and_abuses_of_a

Coppock, Alexander and Oliver A. McClellan. 2019. “Validating the Demographic, Political,
Psychological, and Experimental Results Obtained from a New Source of Online Survey
Respondents.” Research & Politics 6(1):2053168018822174.

Craig, Stephen C. and Paulina S. Cossette. 2020. “Eye of the Beholder: Partisanship, Identity,
and the Politics of Sexual Harassment.” Political Behavior .
URL: https://doi.org/10.1007/s11109-020-09631-4

Druckman, James N. 2012. “The Politics of Motivation.” Critical Review 24(2):199–216.

Election Integrity Partnership. 2021. The Long Fuse: Misinformation and the 2020 Election.
Technical Report v1.2.0 Election Integrity Partnership.
URL: https://purl.stanford.edu/tr171zs0069

Erlanger, Steven. 2017. “ ‘Fake News,’ Trump’s Obsession, Is Now a Cudgel for Strongmen.”
The New York Times .
URL: https://www.nytimes.com/2017/12/12/world/europe/trump-fake-news-
dictators.html

Flynn, D. J., Brendan Nyhan and Jason Reifler. 2017. “The Nature and Origins of Mispercep-
tions: Understanding False and Unsupported Beliefs About Politics.” Political Psychology
38(S1):127–150.

Funder, David C. and Daniel J. Ozer. 2019. “Evaluating Effect Size in Psychological Re-
search: Sense and Nonsense.” Advances in Methods and Practices in Psychological Science
2(2):156–168.

Galston, William A. 2020. “Is Seeing Still Believing? The Deepfake Challenge to Truth in
Politics.”.

URL: https://www.brookings.edu/research/is-seeing-still-believing-the-deepfake-challenge-
to-truth-in-politics/

Gonzales, Marti Hope, Margaret Bull Kovera, John L. Sullivan and Virginia Chanley. 1995.
“Private Reactions to Public Transgressions: Predictors of Evaluative Responses to Allega-
tions of Political Misconduct.” Personality and Social Psychology Bulletin 21(2):136–148.

Guess, Andrew M., Brendan Nyhan and Jason Reifler. 2020. “Exposure to Untrustworthy
Websites in the 2016 US Election.” Nature Human Behaviour 4(5):472–480.

Hao, Karen. 2019. “The Biggest Threat of Deepfakes Isn’t the Deepfakes Themselves.” MIT
Technology Review .
URL: https://www.technologyreview.com/2019/10/10/132667/the-biggest-threat-of-
deepfakes-isnt-the-deepfakes-themselves/

Huang, Haifeng. 2015. “Propaganda as Signaling.” Comparative Politics 47(4):419–444.

Jerit, Jennifer and Yangzi Zhao. 2020. “Political Misinformation.” Annual Review of Political
Science 23(1):77–94.

Johnson, Tyler. 2018. “Deny and Attack or Concede and Correct? Image Repair and the
Politically Scandalized.” Journal of Political Marketing 17(3):213–234.

Kalla, Joshua L. and David E. Broockman. 2018. “The Minimal Persuasive Effects of Cam-
paign Contact in General Elections: Evidence from 49 Field Experiments.” American
Political Science Review 112(1):148–166.

Kan, Michael. 2020. “Pro-China Propaganda Act Used Fake Followers Made With AI-
Generated Images.” PC Magazine .
URL: https://www.pcmag.com/news/pro-china-propaganda-act-used-fake-followers-
made-with-ai-generated-images

Karnouskos, Stamatis. 2020. “Artificial Intelligence in Digital Media: The Era of Deepfakes.”
IEEE Transactions on Technology and Society 1(3):138–147.

Ker, Nic. 2019. “Is the Political Aide Viral Sex Video Confession Real or a Deepfake?”
Malay Mail .
URL: https://www.malaymail.com/news/malaysia/2019/06/12/is-the-political-aide-
viral-sex-video-confession-real-or-a-deepfake/1761422

Kling, Jeffrey R., Jeffrey B. Liebman and Lawrence F. Katz. 2007. “Experimental Analysis
of Neighborhood Effects.” Econometrica 75(1):83–119.

Lazer, David M. J., Matthew A. Baum, Yochai Benkler, Adam J. Berinsky, Kelly M. Green-
hill, Filippo Menczer, Miriam J. Metzger, Brendan Nyhan, Gordon Pennycook, David
Rothschild, Michael Schudson, Steven A. Sloman, Cass R. Sunstein, Emily A. Thorson,
Duncan J. Watts and Jonathan L. Zittrain. 2018. “The Science of Fake News.” Science
359(6380):1094–1096.

Lee, Tien-Tsung. 2010. “Why They Don’t Trust the Media: An Examination of Factors
Predicting Trust.” American Behavioral Scientist 54(1):8–21.

Little, Andrew T. 2018. “Fake News, Propaganda, and Lies Can Be Pervasive Even If They
Aren’t Persuasive.” Critique 11(1):21–34.

Mitchell, Amy, Jeffrey Gottfried, Galen Stocking, Mason Walker and Sophia Fedeli. 2019.
“Many Americans Say Made-Up News Is a Critical Problem That Needs To Be Fixed.”.
URL: https://www.journalism.org/2019/06/05/many-americans-say-made-up-news-is-a-
critical-problem-that-needs-to-be-fixed/

NES. 2022. “ANES Continuity Guide.”.
URL: https://electionstudies.org/resources/anes-continuity-guide/

New York Times Editorial Board. 2020. “Congress Must Be Clear: No Doctored Videos.”
The New York Times .
URL: https://www.nytimes.com/2020/09/03/opinion/steve-scalise-ady-barkan-
video.html

Newman, Nic, Richard Fletcher, Antonis Kalogeropoulos and Rasmus Kleis Nielsen. 2019.
Reuters Institute Digital News Report 2019. Technical report, Reuters Institute and
University of Oxford, Oxford, UK.
URL: https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2019-
06/DNR_2019_FINAL_0.pdf

Oppenheim, Maya. 2017. “Spanish Foreign Minister Claims Photos of Police Brutality Are
’Fake’.” The Independent .
URL: https://www.independent.co.uk/news/world/europe/catalan-independence-
referendum-photos-police-violence-fake-a7978876.html

O’Shaughnessy, Nicholas Jackson. 2004. Politics and Propaganda. Manchester: Manchester
University Press.

Ovadya, Aviv. 2021. “The Path to Deepfake Harm.”.
URL: https://aviv.medium.com/the-path-to-deepfake-harm-da4effb541bd

Pennycook, Gordon, Ziv Epstein, Mohsen Mosleh, Antonio A. Arechar, Dean Eckles and
David G. Rand. 2021. “Shifting Attention to Accuracy Can Reduce Misinformation On-
line.” Nature 592(7855):590–595.

Peterson, Erik and Shanto Iyengar. 2021. “Partisan Gaps in Political Information and
Information-Seeking Behavior: Motivated Reasoning or Cheerleading?” American Journal
of Political Science 65(1):133–147.

Prochazka, Fabian and Wolfgang Schweiger. 2019. “How to Measure Generalized Trust in

News Media? An Adaptation and Test of Scales.” Communication Methods and Measures
13(1):26–42.

Reuters. 2019. “Identifying and Tackling Manipulated Media.”.
URL: https://www.reuters.com/manipulatedmedia

Sabatier, Paul, Susan Hunter and Susan McLaughlin. 1987. “The Devil Shift: Perceptions
and Misperceptions of Opponents.” Western Political Quarterly 40(3):449–476.

Schwartz, Oscar. 2018. “You Thought Fake News Was Bad? Deep Fakes Are Where Truth
Goes to Die.” The Guardian .
URL: https://www.theguardian.com/technology/2018/nov/12/deep-fakes-fake-news-truth

Sikorski, Christian von. 2018. “Political Scandals as a Democratic Challenge| The After-
math of Political Scandals: A Meta-Analysis.” International Journal of Communication
12(00):25.

Sundar, S Shyam, Maria D Molina and Eugene Cho. 2021. “Seeing Is Believing: Is Video
Modality More Powerful in Spreading Fake News via Online Messaging Apps?” Journal
of Computer-Mediated Communication 26(6):301–319.

Taber, Charles S. and Milton Lodge. 2006. “Motivated Skepticism in the Evaluation of
Political Beliefs.” American Journal of Political Science 50(3):755–769.

Tandoc Jr., Edson C., Zheng Wei Lim and Richard Ling. 2018. “Defining “Fake
News”.” Digital Journalism 6(2):137–153.

Taylor, Shelley E. and Suzanne C. Thompson. 1982. “Stalking the elusive “vividness” effect.”
Psychological Review 89(2):155–181.

Ternovski, John, Joshua Kalla and Peter Aronow. 2022. “The Negative Consequences of
Informing Voters about Deepfakes: Evidence from Two Survey Experiments.” Journal of

Online Trust and Safety 1(22).
URL: https://tsjournal.org/index.php/jots/article/view/28

Thompson, John B. 2000. Political scandal: power and visibility in the media age. Polity
Press; Blackwell.

Toews, Rob. 2020. “Deepfakes Are Going To Wreak Havoc On Society. We Are Not Pre-
pared.”.
URL: https://www.forbes.com/sites/robtoews/2020/05/25/deepfakes-are-going-to-wreak-
havoc-on-society-we-are-not-prepared/

Vaccari, Cristian and Andrew Chadwick. 2020a. “Deepfakes and Disinformation: Exploring
the Impact of Synthetic Political Video on Deception, Uncertainty, and Trust in News.”
Social Media + Society 6(1):2056305120903408.

Vaccari, Cristian and Andrew Chadwick. 2020b. “ ‘Deepfakes’ Are Here. These Deceptive
Videos Erode Trust in All News Media.” Washington Post .
URL: https://www.washingtonpost.com/politics/2020/05/28/deepfakes-are-here-these-
deceptive-videos-erode-trust-all-news-media/

Wang, Amy B. 2017. “A Mayor Denies It Is His Voice on Lewd, Racist Tapes. His Colleagues
Say ‘Resign.’.” Washington Post .
URL: https://www.washingtonpost.com/news/post-nation/wp/2017/01/17/a-mayor-
denies-its-his-voice-on-lewd-racist-tapes-his-colleagues-say-resign/

Watts, Duncan J., David M. Rothschild and Markus Mobius. 2021. “Measuring the News
and Its Impact on Democracy.” Proceedings of the National Academy of Sciences 118(15).

Wedeen, Lisa. 2015. Ambiguities of Domination: Politics, Rhetoric, and Symbols in Con-
temporary Syria: With a New Preface. Chicago: The University of Chicago Press.

West, Emily A. and Shanto Iyengar. 2020. “Partisanship as a Social Identity: Implications
for Polarization.” Political Behavior .

Wittenberg, Chloe, Ben M. Tappin, Adam J. Berinsky and David G. Rand. 2021. “The
(minimal) persuasive advantage of political video over text.” Proceedings of the National
Academy of Sciences 118(47).
URL: https://www.pnas.org/content/118/47/e2114388118

A Supporting Information for:

“The Liar’s Dividend: Can Politicians Use Deepfakes and Fake

News to Evade Accountability?”

Contents
A.1 Study 1 Survey
    A.1.1 Treatments
    A.1.2 Outcome Measures
    A.1.3 Debrief
A.2 Study 2 Survey
    A.2.1 Fact-Checking Treatment
    A.2.2 Additional Exploratory Outcome and Covariate Questions
A.3 Survey Screeners and Covariate Questions
    A.3.1 Screener Questions
    A.3.2 Demographic Questions
    A.3.3 News Media and Digital Literacy
A.4 Ethical Considerations
A.5 MDE Calculations and Multiple Testing
A.6 Covariates and Balance
A.7 Regression Tables for Figures and Tables in Paper
A.8 Alternative Specifications of Main Analyses
A.9 Exploratory Pre-Registered Analyses
    A.9.1 Study 1
    A.9.2 Study 2
A.10 Design Choices Based on a Pilot Study

A.1 Study 1 Survey
This section describes the treatments and outcomes used in the Study 1 survey. Covariate and
screener questions are presented separately in SI Section A.3.

A.1.1 Treatments
Respondents randomly received information about a scandal with 1) one of four politicians (two
Republican and two Democrat) and 2) either a video clip or text transcript:

A news report has come out [showing | with] the following [video clip | story excerpt] about [Re-
publican | Democrat] politician [Tim James | Todd Akin | John Murtha | Jesse Jackson].

Please [watch the following video clip | read the excerpt below].

[Republican | Democrat] [Tim James | Todd Akin | John Murtha | Jesse Jackson]
Accused of Making Offensive Remarks

Tim James: https://youtu.be/Onvy6nzsa1s or text transcript: “Politicians make us give driver’s
license exams in 12 languages. This is Alabama. We speak English. If you want to live here, learn
it! We’re only giving that test in English.”
Todd Akin: https://youtu.be/WIwu04J6lsc or text transcript: “What about in the case of rape?
Should it be legal or not? It seems to me first of all from what I understand from doctors, that’s
really rare. If it’s a legitimate rape, the female body has ways to try to shut that whole thing down.”
John Murtha: https://youtu.be/3z_ZHHZI-Jg or text transcript: “There is no question that Western
Pennsylvania is a racist area and when I say racist area I mean older people are hesitant. They are
slow in seeing change, real change.”
Jesse Jackson: https://youtu.be/YkhAAZVza5k or text transcript: “See, Barack been, um, talking
down to black people on this faith based... I wanna cut his n*ts off.”

Next, respondents randomly received one of the following politician response messages:

• Control: No politician response message

• Informational Uncertainty: [Politician Name] Responds That Story is False and Mis-
leading, People Should Be Skeptical. In response to the recent allegations, [Republican
| Democrat] [Politician Name] asserted that the story is false and misleading. He claimed
that [the video is a deepfake, a computer-edited video that uses fake audio and images | the
story is not based on true information]. When asked about the incident, he said that it’s well
known that there’s a lot of misleading information, so people should be skeptical about what
they hear. [Last Name] stated that “You can’t know what’s true these days with so much
misinformation out there.”

• Oppositional Rallying: [Politician Name] Responds That Story is False and Mislead-
ing, Attack by Opponent. In response to the recent allegations, [Republican | Democrat]
[Politician Name] asserted that the story is false and misleading. He claimed that [the video

is a deepfake, a computer-edited video that uses fake audio and images | the story is not based
on true information]. When asked about the incident, he said that the story is an attack by
the opposition, and that people should not pay attention to it. [Last Name] stated that, “My
opponent would say anything to hurt me, but my supporters know who’s really on their side.”

A.1.2 Outcome Measures


Next, respondents were presented with a set of 8 outcome questions for which they were asked to
rate the extent of their agreement or disagreement. All outcome questions use a 5-point Likert scale
from “Strongly Disagree” to “Strongly Agree.”

1. I believe the story about the politician.
2. I think that the story about the politician is true.
3. I would support the politician.
4. I would defend the politician against critics.
5. I would vote for the politician.
6. I would donate to the politician.
7. I trust the media.
8. I believe that the media reports the news fairly.

Outcome questions 1 and 2 were combined to create an index for belief in the story about the
politician. Outcome questions 3-6 were combined to create an index for politician support. Outcome
questions 7 and 8 were combined to create an index for trust in media. To create the indices, we
followed the procedure used by Kling, Liebman and Katz (2007) by averaging z-scores for the
component outcome questions.
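
As an illustration of this index construction, the following is a minimal Python sketch of the Kling, Liebman and Katz (2007) procedure described above, assuming hypothetical column names for the component outcomes; it is not the authors' code.

import pandas as pd

def kling_index(df: pd.DataFrame, components: list) -> pd.Series:
    # z-score each component outcome, then average the z-scores row-wise
    z = (df[components] - df[components].mean()) / df[components].std()
    return z.mean(axis=1)

# e.g., the politician support index from outcome questions 3-6
# (hypothetical column names):
# df["support_index"] = kling_index(df, ["support", "defend", "vote", "donate"])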

A.1.3 Debrief
Finally, all respondents were shown a debrief paragraph providing information about the survey
and clarifying any deception/misinformation:

The information provided to you about the politician is part of a study on false/fake news and
“deepfakes,” or digitally altered video, and the impacts that they have on trust in politics and
the media. While the video or story presented to you about the politician is real, the reply by the
politician was created by a team of researchers and therefore does not represent an actual statement
made by the politician. To learn more about how to identify fake news stories and fake videos, see
the following resources from the International Federation of Library Associations and Institutions
and the MIT Media Lab Detect DeepFakes Project: https://www.ifla.org/publications/node/11174
and https://www.media.mit.edu/projects/detect-fakes/overview.

A.2 Study 2 Survey


This section describes the new fact-checking treatment and additional exploratory outcome and
covariate questions used in the Study 2 survey.

A.2.1 Fact-Checking Treatment


A non-partisan fact-checking organization has weighed in on the story and the politician’s response
to it. Please read the statement by the fact-checking organization carefully before moving on.

Fact Check: [Politician Name] was recently accused of making offensive comments but disputes
the truthfulness of the story. We find evidence that [Politician Name] did make the comments as
originally reported.

A.2.2 Additional Exploratory Outcome and Covariate Questions


• Experiment 1: Control vs. IU vs. IU + fact-checking.
– I think that the politician’s remarks, as reported in the news story, were offensive.
[Strongly disagree to Strongly agree]
– It’s hard to know what’s true these days. [Strongly disagree to Strongly agree]
– I believe the politician’s response that the news story is false. [Strongly disagree to
Strongly agree] [IU group and fact-checking group]
– To what extent did the politician’s response that the story was false affect your support
for the politician? [Strongly decreased my support to Strongly increased my support]
[IU group and fact-checking group]
– I would share the politician’s response with family and friends. [No - because I don’t
support it, No - although I support it, Yes - but not because I support it, Yes - because
I support it] [IU group and fact-checking group]
– I believe the fact-checking organization’s statement that the news story is true. [Strongly
disagree to Strongly agree] [Fact-checking group only]
– To what extent did the fact-checking organization’s statement that the story was true af-
fect your support for the politician? [Strongly disagree to Strongly agree] [Fact-checking
group only]
• Experiment 2: Control vs. OR.
– I think that the politician’s remarks, as reported in the news story, were offensive.
[Strongly disagree to Strongly agree]
• Additional Questions to Help with Assessing the IU Mechanism.
– How concerned are you, if at all, about made-up news and information? [Not at all
concerned, Slightly concerned, Concerned, Very concerned]
– How confident are you in your own ability to recognize news that is made up? [Not at
all confident, Slightly confident, Confident, Very confident]
– How big of a problem do you think “cancel culture” is in the U.S. today? [Not a problem,
A minor problem, A problem, A major problem]
– Do you agree or disagree that people should be more careful with language to avoid
offending people? [Strongly disagree to Strongly agree]
– Which most closely matches your view? When a public figure makes an offensive state-
ment, they should be: [Given a second chance, Held accountable]
• Additional Exploratory Outcome Questions.
– How often can you trust the government to do what is right? [Almost none of the time
to Almost always]
– How often can you trust the information you get from political leaders and public offi-
cials? [Almost none of the time to Almost always]
– How often can you trust other people? [Almost none of the time to Almost always]
– How often can you trust [opposing political party]? [Almost none of the time to Almost
always] [Randomized for political moderates]
– We measure whether participants click on a link provided in the survey debrief to learn
more about how to identify fake news stories.

A.3 Survey Screeners and Covariate Questions
A.3.1 Screener Questions
We include two screener questions near the beginning of the survey to allow for analysis of results
stratified by level of attentiveness of respondents. In particular, we use two screening questions
employed by Berinsky, Margolis and Sances (2014) that test respondents’ attention by asking them
to select specific answer choices regardless of how they would answer those questions normally.
We create an attentiveness index which ranges from 0-2, corresponding to the number of correct
answers. Note that this may provide slightly different results compared to the item response theory
model used by Berinsky et al. (2019), but should be highly correlated.

These screener questions function much in the same way as manipulation or attention checks, and are
important because inattentive survey takers may fail to receive the treatment and answer questions
accurately, likely increasing noise and diluting the strength of effects (Oppenheimer, Meyvis and
Davidenko, 2009). We follow the advice of Berinsky et al. (2019) to employ multiple screener
questions targeted at identifying both high attention and low attention respondents. However, we
do not discard results from respondents who fail the screeners, because doing so could threaten
internal and external validity if characteristics that predict attention (such as levels of education)
may also predict responses on our outcome questions. Results stratified by level of attentiveness
are available in SI Section A.7.
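
For illustration, constructing the 0-2 attentiveness index and stratifying results by attention might look like the Python sketch below; the question names and required answer codes are assumptions, not the actual survey codes.

import pandas as pd

# Required answer choice for each screener (answer codes are assumed).
CORRECT = {"screener_1": 4, "screener_2": 2}

def attentiveness_index(df: pd.DataFrame) -> pd.Series:
    # 0-2 index: number of screeners answered with the requested choice
    return sum((df[q] == ans).astype(int) for q, ans in CORRECT.items())

# df["attentive"] = attentiveness_index(df)
# Stratified analysis: rerun the main regressions within each level,
# e.g., df[df["attentive"] == 2] for fully attentive respondents.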

A.3.2 Demographic Questions


We received the following demographic information directly from respondents or through Lucid,
coded as described below. All demographic questions that were directly asked were included at the
end of the survey, other than race and education, which were used as filler questions between the
screeners and which we considered unlikely to introduce priming effects.

• Partisanship is coded as a factor variable with seven levels: strong Democrat, Democrat, lean
Democrat, Independent, lean Republican, Republican, and strong Republican.1
• Gender is coded as a factor variable with male and female.
• Age is coded as a factor variable with five levels based on the Pew Research Center generation
age ranges. (https://www.pewresearch.org/topics/generations-and-age/)
• Race/ethnicity is coded as a factor variable with White, Black or African American, and
Other as the three race/ethnicity categories.
• Education is coded as a factor variable with four levels: high school graduate or less, some
college or technical or Associate degree, Bachelor’s degree, and graduate degree.
• Income is coded as a factor variable with three levels from low to high income: <$30,000,
$30,000–$74,999, and $75,000+.
• Region is coded as a factor variable with four levels: Northeast, South, Midwest, and West.
1
We ask this question post-treatment as we are more concerned about priming effects than post-treatment
bias, and because our pilot study indicated that respondents’ answers to the partisanship question were not
affected by treatment (p-value from F-statistic = .962).

A.3.3 News Media and Digital Literacy


To assess potential protective factors against misinformation, we include items for news media
literacy and digital literacy. The media literacy questions used to create a news media literacy
index each have a single correct response from a set of four possible answers. The questions come
from the Reuters Institute for the Study of Journalism at Oxford University (Newman et al., 2019).
The three questions measure: respondents’ factual knowledge of how news sources are funded, how
press releases are produced, and how news on social media is curated. Correct responses are summed
to place respondents on a 0-3 scale for news literacy. Two of the three questions were adapted from a
measure of news media literacy by Maksl, Ashley and Craft (2015) in the Journal of Media Literacy
Education. The Reuters Institute has shown that higher news literacy on the 0-3 scale is correlated
with measures important for assessing external validity, such as higher consumption of news stories
from newspaper sources, discernment when selecting news stories, and consumption of unbiased
credible news sources.

We assess digital literacy through a single measure of self-reported participant familiarity with
deepfakes, as participant responses to allegations implicating deepfakes are likely moderated by their
prior technical familiarity. As shown by Hargittai (2005), self-reported measures of digital literacy
are valid indicators of people’s factual knowledge and digital literacy skills for a wide variety of
digital literacy domains.2

2
We have opted to include these demographic questions post-treatment because we are concerned about
potential priming effects, and the results of our pilot study suggest that the media literacy and digital
literacy questions are not significantly impacted by treatment (p-values from global F-tests: .66 and .18,
respectively).

The three media literacy questions and one digital literacy question are as follows:

1. Which of the following news outlets does NOT primarily depend on advertising for financial
support? [Fox News, PBS (correct response), New York Times, USA Today, Don’t know]
2. Which of the following is typically responsible for writing a press release? [A reporter for a
news organization, A producer for a news organization, A lawyer for a news aggregator, A
spokesperson for an organization (correct response), Don’t know]
3. How are most of the individual decisions about what news stories to show people on Face-
book made? [At random, By computer analysis of what stories might interest you (correct
response), By editors and journalists that work for news outlets, By editors and journalists
that work for Facebook, Don’t know]
4. Computer algorithms can now be used to create ultra-realistic fake video content. How much
had you heard about this before today? [“Not at all” to “A great deal.”]

A.4 Ethical Considerations


Misinformation is a fraught topic, one that spells potential harms for individuals and society. It is for
precisely these reasons that scholars need to study misinformation. In the context of our research,
we sought to understand if certain types of political misinformation—as well as mitigations—were
effective in influencing individuals. As described in the main text, while we chose to use real politi-
cian scandals rather than fabricated stories in order to minimize deception, we did generate a set of
politician responses and attributed all such responses to the four politicians under study. This was
essential for allowing for valid comparisons of public responses to different political communication
strategies, something that is not otherwise feasible given the infrequent and ad-hoc occurrence of
liar’s dividend type claims in natural settings. In turn, we did debrief participants and provide
media and digital literacy resources, as well as warned participants about this debrief during the
consent process. All participants consented; we used approved consent language from our institu-
tions’ Institutional Review Boards to make the study’s design and risks accessible, and we did not
target any vulnerable groups. We also received feedback from other misinformation researchers on
the ethical dimensions of the study prior to administration.

Also importantly, all participants were compensated for participation in the three studies, regard-
less of overall survey completion or satisfaction of attention check questions. Per Lucid Theorem
policies, the platform is paid one dollar per participant, and this rate is fixed. As our survey
only took between three and five minutes for the large majority of participants, the effective pay
rate for the participants—all American adults—is likely above minimum wage rates and standard
market rates for online survey participants. However, as Lucid uses a variety of suppliers to re-
cruit individuals, and does not have access to the specifics of compensation for each supplier, we
do not know how much each participant was ultimately compensated. See: https://luc.id/wp-
content/uploads/2019/10/Lucid-IRB-Methodology.pdf for a discussion of Lucid’s fixed rate ap-
proach and use of suppliers. Of note, two separate Institutional Review Boards approved this
study, cognizant of our approach to deception and mitigation of any resulting harms.

A.5 MDE Calculations and Multiple Testing


We use simulations based on a pilot study of 916 respondents on MTurk to calculate minimum
detectable effects (MDE) along a range of possible sample sizes for our main outcome of interest:
support. As suggested by DeclareDesign (2019), the calculation of MDEs using pilot results is an
improvement on power calculations because the latter are based on noisy effect estimates.

With sample sizes of 2,500-3,000 for each study, our study has sufficient power to detect standardized
effects as small as 0.15 to 0.17 for our main hypothesis for each study. With a combined 5,021
respondents across Studies 1 and 2 (the research design allows these studies to be pooled), we have
sufficient power to detect standardized effects for our main hypothesis as small as 0.12. However,
our study may lack power to definitively evaluate all hypotheses of interest. These considerations
informed our decision to perform a replication of key findings from the Study 1 survey and increase
our total sample size.
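
A simulation-based MDE calculation of the kind described above can be sketched as follows; the design parameters (number of simulations, effect grid, two-arm t-test) are illustrative assumptions rather than the authors' pilot-based procedure.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def power_at(effect, n, sims=1000, alpha=0.05):
    # Simulate `sims` two-arm experiments with a standardized outcome and
    # record how often a two-sided t-test rejects at `alpha`.
    rejections = 0
    for _ in range(sims):
        treat = rng.integers(0, 2, size=n)
        y = effect * treat + rng.standard_normal(n)
        _, p = stats.ttest_ind(y[treat == 1], y[treat == 0])
        rejections += p < alpha
    return rejections / sims

# Smallest standardized effect detectable with 80% power at n = 2,500.
grid = np.arange(0.05, 0.31, 0.01)
mde = next(e for e in grid if power_at(e, n=2500) >= 0.80)
print(round(mde, 2))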

For our primary regressions used to test our hypotheses, we will report standard nominal p-values
based on robust standard errors. However, within hypothesis families with multiple additional ex-
ploratory tests, we use the Benjamini-Hochberg method to correct for multiple testing and present
corrected p-values, following the approach of Bohlken, Gaikwad and Nellis (2018). We use a false
discovery rate of 0.05. As defined in our pre-analysis plan, the Liar’s Dividend Hypothesis has two
exploratory tests (each with two p-values of interest), the Informational Uncertainty Hypothesis
has three exploratory tests (one p-value each), and the Oppositional Rallying Hypothesis has two
exploratory tests (one p-value each) for which we will perform corrections. Results from the ex-
ploratory analyses, including nominal and adjusted p-values, are presented in SI Section A.7.
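
For reference, the Benjamini-Hochberg adjustment can be sketched as follows; the p-values in the example are placeholders, not results from the paper.

import numpy as np

def benjamini_hochberg(pvals):
    # Step-up BH procedure: sort p-values, scale each by m/rank, then
    # enforce monotonicity from the largest p-value downward.
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    scaled = p[order] * m / np.arange(1, m + 1)
    adjusted = np.minimum.accumulate(scaled[::-1])[::-1]
    out = np.empty(m)
    out[order] = np.clip(adjusted, 0, 1)
    return out

print(benjamini_hochberg([0.01, 0.04, 0.03, 0.20]))
# statsmodels provides the same adjustment via
# statsmodels.stats.multitest.multipletests(pvals, method="fdr_bh").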

A.6 Covariates and Balance


Table A1, Table A2, and Table A3 help to evaluate covariate balance for Studies 1, 2, and 3,
respectively. Within treatment groups, proportions in each covariate category are reported, along
with mean scores for media literacy and digital literacy.

Variable Level Control Text Control Video IU Text IU Video OR Text OR Video
Strong Democrat 0.18 0.14 0.18 0.15 0.15 0.16
Democrat 0.18 0.14 0.17 0.17 0.17 0.17
Lean Democrat 0.10 0.10 0.08 0.09 0.11 0.08
Independent 0.24 0.30 0.24 0.29 0.27 0.28
Lean Republican 0.12 0.11 0.13 0.10 0.10 0.11
Republican 0.09 0.08 0.09 0.08 0.09 0.11
Strong Republican 0.09 0.13 0.11 0.11 0.10 0.09
Male 0.49 0.47 0.48 0.48 0.46 0.54
Female 0.51 0.53 0.52 0.52 0.54 0.46
White 0.74 0.70 0.72 0.72 0.73 0.68
Black 0.11 0.11 0.11 0.10 0.11 0.10
Hispanic 0.07 0.08 0.08 0.09 0.07 0.10
Asian 0.06 0.07 0.05 0.06 0.07 0.09
Other race/ethnicity 0.02 0.03 0.03 0.02 0.02 0.03
Gen Z 0.13 0.10 0.14 0.13 0.11 0.14
Millennials 0.29 0.30 0.26 0.33 0.34 0.34
Gen X 0.25 0.29 0.28 0.28 0.27 0.24
Boomers 0.29 0.26 0.28 0.24 0.24 0.24
Silent 0.04 0.04 0.04 0.03 0.03 0.04
High school graduate or less 0.27 0.28 0.25 0.26 0.24 0.26
Some college 0.34 0.30 0.32 0.34 0.34 0.31
Bachelor’s degree 0.23 0.28 0.26 0.24 0.26 0.25
Graduate degree 0.16 0.14 0.17 0.16 0.16 0.17
Low income 0.32 0.30 0.33 0.32 0.29 0.30
Middle income 0.40 0.42 0.38 0.36 0.39 0.41
High income 0.27 0.28 0.29 0.31 0.33 0.28
Northeast 0.21 0.22 0.22 0.21 0.22 0.19
Midwest 0.19 0.19 0.20 0.18 0.17 0.24
South 0.38 0.37 0.37 0.41 0.41 0.32
West 0.22 0.23 0.22 0.19 0.21 0.25
Media literacy 1.03 1.04 1.09 1.09 1.02 1.06
Digital literacy 2.73 2.79 2.80 2.87 2.69 2.88

Table A1: Covariate Balance for Study 1

Variable Level Cont. + Cont. Cont. + OR FC + Cont. FC + OR IU + Cont. IU + OR


Strong Democrat 0.18 0.16 0.17 0.18 0.15 0.18
Democrat 0.14 0.18 0.15 0.16 0.18 0.18
Lean Democrat 0.12 0.09 0.10 0.12 0.10 0.10
Independent 0.26 0.27 0.29 0.20 0.27 0.26
Lean Republican 0.11 0.12 0.11 0.10 0.11 0.08
Republican 0.11 0.09 0.08 0.14 0.11 0.09
Strong Republican 0.08 0.09 0.11 0.11 0.08 0.11
Male 0.48 0.48 0.47 0.53 0.50 0.49
Female 0.52 0.52 0.53 0.47 0.50 0.51
White 0.70 0.75 0.75 0.73 0.71 0.74
Black 0.14 0.10 0.10 0.12 0.13 0.10
Hispanic 0.06 0.06 0.06 0.09 0.07 0.07
Asian 0.07 0.05 0.06 0.04 0.05 0.07
Other race/ethnicity 0.03 0.03 0.03 0.03 0.02 0.03
Gen Z 0.12 0.10 0.15 0.13 0.14 0.11
Millennials 0.32 0.32 0.30 0.27 0.29 0.30
Gen X 0.27 0.23 0.26 0.29 0.24 0.25
Boomers 0.25 0.31 0.25 0.26 0.28 0.29
Silent 0.04 0.04 0.03 0.05 0.05 0.05
High school graduate or less 0.25 0.28 0.27 0.22 0.26 0.27
Some college 0.29 0.29 0.32 0.35 0.32 0.33
Bachelor’s degree 0.27 0.20 0.20 0.25 0.22 0.22
Graduate degree 0.19 0.22 0.21 0.17 0.19 0.18
Low income 0.28 0.30 0.30 0.26 0.34 0.32
Middle income 0.37 0.40 0.37 0.42 0.37 0.35
High income 0.34 0.31 0.32 0.32 0.30 0.33
Northeast 0.22 0.20 0.21 0.18 0.24 0.22
Midwest 0.21 0.18 0.20 0.20 0.16 0.18
South 0.35 0.37 0.40 0.40 0.40 0.36
West 0.22 0.25 0.20 0.21 0.21 0.24
Media literacy 0.94 0.98 0.86 0.97 0.92 0.93
Digital literacy 2.78 2.84 2.82 2.97 2.91 2.81

Table A2: Covariate Balance for Study 2

Variable Level Info. Uncertain Simple Denial Apology
Independent 0.27 0.27 0.28
Strong Democrat 0.14 0.18 0.16
Democrat 0.15 0.14 0.16
Lean Democrat 0.12 0.11 0.11
Lean Republican 0.11 0.10 0.11
Republican 0.09 0.10 0.08
Strong Republican 0.11 0.10 0.09
Male 0.49 0.49 0.49
Female 0.51 0.51 0.51
White 0.76 0.70 0.71
Black 0.11 0.13 0.13
Hispanic 0.08 0.09 0.09
Asian 0.04 0.05 0.05
Other 0.01 0.02 0.02
Gen Z 0.10 0.11 0.10
Millennials 0.30 0.31 0.32
Gen X 0.28 0.27 0.24
Boomers 0.28 0.28 0.30
Silent 0.04 0.04 0.03
High school graduate or less 0.26 0.30 0.24
Some college 0.35 0.35 0.36
Bachelor’s degree 0.27 0.24 0.26
Graduate degree 0.12 0.12 0.13
Middle income 0.38 0.37 0.36
Low income 0.31 0.37 0.34
High income 0.30 0.27 0.30
Northeast 0.23 0.20 0.19
Midwest 0.17 0.19 0.20
South 0.38 0.38 0.36
West 0.22 0.23 0.25
Media literacy 1.05 1.07 1.06
Digital literacy 2.84 2.91 2.85

Table A3: Covariate Balance for Study 3

To evaluate the success of randomization and to further assess covariate balance for each study,
we perform F-tests of global significance by regressing an indicator for each treatment group on
the covariates. For Study 1, the p-values of the F-tests for assignment to control, informational
uncertainty, oppositional rallying, and video (versus text) treatments are 0.97, 0.91, 0.92, and 0.69,
respectively. Thus, we fail to reject the null hypothesis that the covariates jointly do not predict
treatment assignment for Study 1. For Study 2, the p-values of the F-tests for assignment to
control, informational uncertainty, fact-checking, and oppositional rallying (versus control in Study
2 experiment two) treatments are 0.77, 0.77, 0.36, and 0.86, respectively. Finally, for Study 3, the
p-values of the F-tests for assignment to apology, simple denial, and allegation of misinformation
(IU) are 0.70, 0.26, and 0.13, respectively. Thus, again for Studies 2 and 3, we fail to reject the null
that covariates do not predict treatment, suggesting randomization was successful and balance was
achieved. Also note that the respondents are generally representative of the US population along
gender, race, age, and region, as per Lucid’s recruitment approach.
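
A minimal sketch of this randomization check, under hypothetical column names, is below: it regresses a treatment-group indicator on the covariates and reads off the p-value of the global F-test.

import statsmodels.formula.api as smf

def balance_f_pvalue(df, group):
    # Indicator for assignment to `group`, regressed on the covariates;
    # return the p-value of the joint F-test that all coefficients are zero.
    df = df.assign(in_group=(df["treatment"] == group).astype(int))
    fit = smf.ols(
        "in_group ~ C(partisanship) + C(gender) + C(race) + C(age_group)"
        " + C(education) + C(income) + C(region) + media_literacy + digital_literacy",
        data=df,
    ).fit()
    return fit.f_pvalue

# A large p-value (e.g., 0.97 for the Study 1 control group) is consistent
# with covariates jointly failing to predict treatment assignment.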

A.7 Regression Tables for Figures and Tables in Paper


SI A.7 presents regression tables corresponding to Figures 2, 3, 4, and 5, as well as Tables 1 and 2, in
the main paper. The tables display the relevant (standardized) average treatment effects associated
with our treatments, as well as the (unstandardized) coefficients associated with covariates in the
models. We use two-sided p-values and robust standard errors and report results for a variety of
models (e.g., text only sample, video only sample, etc.). Notes regarding the sample for each model
specification are included in the last row of each table. The general covariate profile of the reference
group is Moderate, Male, White, Gen Z, High School Education or Less, Medium Income, and
Northeast.

Politician Support Index
(1) (2) (3) (4) (5) (6)
Allegation 0.074∗∗ 0.159∗∗∗ −0.009
(0.037) (0.052) (0.053)
Info. Uncertain 0.087∗∗ 0.178∗∗∗ 0.012
(0.042) (0.059) (0.060)
Opp. Rally 0.061 0.142∗∗ −0.032
(0.043) (0.059) (0.063)
Strong Democrat 0.022 0.021 −0.012 −0.013 0.041 0.043
(0.059) (0.059) (0.080) (0.080) (0.089) (0.088)
Democrat 0.050 0.050 −0.036 −0.037 0.136∗ 0.136∗
(0.052) (0.052) (0.075) (0.075) (0.074) (0.074)
Lean Democrat −0.096 −0.096 −0.169∗∗ −0.168∗∗ −0.024 −0.024
(0.060) (0.060) (0.079) (0.079) (0.091) (0.091)
Lean Republican 0.093 0.093 0.036 0.034 0.129 0.131
(0.057) (0.057) (0.076) (0.076) (0.086) (0.086)
Republican 0.270∗∗∗ 0.271∗∗∗ 0.093 0.093 0.444∗∗∗ 0.447∗∗∗
(0.062) (0.062) (0.084) (0.084) (0.090) (0.090)
Strong Republican 0.246∗∗∗ 0.245∗∗∗ 0.167∗ 0.166 0.312∗∗∗ 0.311∗∗∗
(0.069) (0.069) (0.101) (0.101) (0.095) (0.095)
Female −0.098∗∗∗ −0.098∗∗∗ −0.106∗∗ −0.106∗∗ −0.103∗∗ −0.105∗∗
(0.036) (0.036) (0.049) (0.049) (0.052) (0.052)
Black 0.155∗∗ 0.155∗∗ 0.065 0.065 0.236∗∗∗ 0.237∗∗∗
(0.061) (0.061) (0.085) (0.086) (0.088) (0.088)
Hispanic 0.061 0.061 0.029 0.028 0.099 0.099
(0.065) (0.065) (0.089) (0.089) (0.092) (0.092)
Asian −0.126∗ −0.124∗ −0.127 −0.127 −0.135 −0.133
(0.067) (0.067) (0.100) (0.100) (0.092) (0.092)
Other Race 0.176∗ 0.175∗ 0.099 0.094 0.249∗ 0.251∗
(0.095) (0.095) (0.132) (0.132) (0.139) (0.140)
Millennial 0.217∗∗∗ 0.218∗∗∗ 0.182∗∗ 0.184∗∗ 0.242∗∗∗ 0.241∗∗∗
(0.059) (0.059) (0.084) (0.085) (0.084) (0.085)
Gen X 0.195∗∗∗ 0.195∗∗∗ 0.097 0.098 0.290∗∗∗ 0.289∗∗∗
(0.062) (0.062) (0.088) (0.088) (0.089) (0.089)
Boomer 0.054 0.054 −0.021 −0.021 0.127 0.127
(0.064) (0.065) (0.091) (0.091) (0.094) (0.094)
Silent Gen. 0.093 0.094 −0.084 −0.083 0.305∗ 0.307∗
(0.114) (0.114) (0.154) (0.154) (0.170) (0.170)
Some College −0.078∗ −0.078∗ −0.059 −0.058 −0.115∗ −0.116∗
(0.046) (0.046) (0.063) (0.063) (0.067) (0.067)
Bachelor’s Degree −0.050 −0.050 −0.070 −0.070 −0.050 −0.049
(0.054) (0.054) (0.076) (0.076) (0.076) (0.077)
Graduate Degree 0.230∗∗∗ 0.230∗∗∗ 0.235∗∗∗ 0.234∗∗∗ 0.212∗∗ 0.213∗∗
(0.063) (0.064) (0.086) (0.086) (0.092) (0.092)
Low Income 0.044 0.043 −0.045 −0.046 0.146∗∗ 0.145∗∗
(0.043) (0.043) (0.060) (0.060) (0.061) (0.061)
High Income 0.058 0.058 −0.013 −0.012 0.131∗∗ 0.128∗∗
(0.045) (0.045) (0.061) (0.061) (0.065) (0.065)
Midwest 0.008 0.009 0.026 0.025 0.005 0.009
(0.055) (0.055) (0.077) (0.077) (0.078) (0.078)
South −0.026 −0.027 0.008 0.008 −0.070 −0.071
(0.048) (0.048) (0.066) (0.066) (0.069) (0.070)
West −0.022 −0.022 −0.077 −0.077 0.036 0.038
(0.053) (0.053) (0.076) (0.076) (0.075) (0.075)
Media Literacy −0.154∗∗∗ −0.154∗∗∗ −0.157∗∗∗ −0.157∗∗∗ −0.148∗∗∗ −0.148∗∗∗
(0.019) (0.019) (0.025) (0.025) (0.028) (0.028)
Digital Literacy 0.077∗∗∗ 0.077∗∗∗ 0.075∗∗∗ 0.075∗∗∗ 0.090∗∗∗ 0.090∗∗∗
(0.015) (0.015) (0.021) (0.021) (0.021) (0.021)
Constant −0.285∗∗∗ −0.284∗∗∗ −0.109 −0.108 −0.471∗∗∗ −0.470∗∗∗
(0.089) (0.089) (0.129) (0.129) (0.125) (0.125)
N 2,503 2,503 1,249 1,249 1,254 1,254
R2 0.096 0.097 0.106 0.106 0.110 0.111
Sample Full Full Text Only Text Only Video Only Video Only
∗ p < .1; ∗∗ p < .05; ∗∗∗ p < .01

Notes: With robust SEs

Table A4: Figure 2 Regression Results


The pooled estimates for the treatment effects in Table A5 are precision-weighted averages of sep-
arate treatment effects from each study using fixed effects specifications. Results for specifications
using random effects are nearly identical. Results are also largely the same if we combine sam-
ples and perform a single regression as opposed to combining separate studies’ estimates through
precision weighting.
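
The precision-weighting step can be illustrated with a short sketch: each study's estimate is weighted by the inverse of its squared standard error. The example inputs are placeholders, not the paper's estimates.

import numpy as np

def pool_fixed_effects(estimates, std_errors):
    # Inverse-variance (precision) weights: w_i = 1 / se_i^2
    b = np.asarray(estimates, dtype=float)
    w = 1.0 / np.asarray(std_errors, dtype=float) ** 2
    pooled = np.sum(w * b) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    return pooled, pooled_se

# Placeholder per-study estimates and SEs:
print(pool_fixed_effects([0.18, 0.07], [0.06, 0.04]))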

Table A8 shows results for Denial vs. IU and Apology vs. IU, whereas Figure 5 presents
the results of IU vs. Denial and IU vs. Apology to help depict the treatment effects
associated with IU. The coefficients at the top of Table A8 are therefore flipped in sign.

A.8 Alternative Specifications of Main Analyses


Figures A1 and A2 reproduce the main figures of results, Figure 2 and Figure 3, using covariate-unadjusted regressions. Figures A3 and A4 reproduce the main results with the inclusion of politician fixed effects. Results for these alternative specifications are largely consistent with the main results in the paper. Note that these figures also include additional information about the belief outcome measure not reported in the main paper. Figure A5 presents results from Study 1 with the support outcome measure disaggregated into its four constituent outcome measures.
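For readers who wish to replicate these alternative specifications, the following is a minimal sketch using statsmodels on synthetic stand-in data; all column names are hypothetical, and robust (HC2) standard errors are one reasonable choice rather than a statement of the paper's exact estimator:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data; the real analysis uses the survey responses.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "support": rng.normal(size=400),                  # standardized support index
    "treat": rng.integers(0, 2, size=400),            # allegation indicator
    "politician": rng.choice(list("ABCD"), size=400), # vignette politician
})

# Covariate-unadjusted specification with robust standard errors.
unadjusted = smf.ols("support ~ treat", data=df).fit(cov_type="HC2")

# Politician fixed effects absorb level differences across vignettes.
fe = smf.ols("support ~ treat + C(politician)", data=df).fit(cov_type="HC2")
print(fe.params["treat"], fe.bse["treat"])
```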

Figure A1: Study 1: Without Covariate Adjustment

A.9 Exploratory Pre-Registered Analyses


In this section, we present results based on our pre-registered exploratory hypotheses.

Politician Support Index
(1) (2) (3)
Info. Uncertain 0.178∗∗∗
(0.059)
Opp. Rally 0.142∗∗
(0.059)
Info. Uncertain 0.072∗
(0.042)
Opp. Rally 0.164∗∗∗
(0.034)
Strong Democrat −0.013 0.258∗∗∗ 0.252∗∗∗
(0.080) (0.058) (0.059)
Democrat −0.037 0.134∗∗ 0.087∗
(0.075) (0.052) (0.052)
Lean Democrat −0.168∗∗ −0.065 −0.067
(0.079) (0.053) (0.057)
Lean Republican 0.034 0.153∗∗∗ 0.089
(0.076) (0.056) (0.056)
Republican 0.093 0.271∗∗∗ 0.115∗
(0.084) (0.058) (0.060)
Strong Republican 0.166 0.270∗∗∗ 0.222∗∗∗
(0.101) (0.072) (0.072)
Female −0.106∗∗ −0.129∗∗∗ −0.147∗∗∗
(0.049) (0.035) (0.035)
Black 0.065 0.071 0.150∗∗
(0.086) (0.058) (0.059)
Hispanic 0.028 −0.037 −0.183∗∗∗
(0.089) (0.072) (0.071)
Asian −0.127 0.008 0.012
(0.100) (0.070) (0.073)
Other Race 0.094 −0.160 0.077
(0.132) (0.112) (0.106)
Millennial 0.184∗∗ 0.099∗ 0.061
(0.085) (0.057) (0.058)
Gen X 0.098 0.099∗ 0.089
(0.088) (0.060) (0.061)
Boomer −0.021 −0.052 −0.072
(0.091) (0.060) (0.061)
Silent Gen. −0.083 −0.188∗ −0.047
(0.154) (0.098) (0.103)
Some College −0.058 −0.083∗ 0.029
(0.063) (0.045) (0.045)
Bachelor’s Degree −0.070 −0.072 −0.021
(0.076) (0.051) (0.052)
Graduate Degree 0.234∗∗∗ 0.247∗∗∗ 0.246∗∗∗
(0.086) (0.058) (0.059)
Low Income −0.046 0.062 0.007
(0.060) (0.043) (0.043)
High Income −0.012 0.105∗∗ 0.070
(0.061) (0.044) (0.045)
Midwest 0.025 −0.030 0.070
(0.077) (0.054) (0.055)
South 0.008 0.014 0.023
(0.066) (0.046) (0.046)
West −0.077 −0.001 0.020
(0.076) (0.052) (0.052)
Media Literacy −0.157∗∗∗ −0.186∗∗∗ −0.183∗∗∗
(0.025) (0.019) (0.019)
Digital Literacy 0.075∗∗∗ 0.071∗∗∗ 0.096∗∗∗
(0.021) (0.015) (0.015)
Constant −0.108 −0.204∗∗ −0.333∗∗∗
(0.129) (0.089) (0.089)
N 1,249 2,518 2,518
R2 0.106 0.137 0.135
Sample Study 1 Text Only Study 2 Study 2
∗ p < .1; ∗∗ p < .05; ∗∗∗ p < .01

Notes: Robust standard errors in parentheses.

Table A5: Figure 3 Regression Results


Politician Support Index
(1) (2)
Info. Uncertain 0.132∗∗∗
(0.044)
Opp. Rally 0.141∗∗∗
(0.038)
Anti-partisan −0.108∗ −0.112∗∗
(0.062) (0.054)
Co-partisan 0.434∗∗∗ 0.344∗∗∗
(0.059) (0.053)
Wave −0.041 −0.054
(0.036) (0.033)
Female −0.112∗∗∗ −0.139∗∗∗
(0.034) (0.030)
Black −0.030 0.140∗∗∗
(0.057) (0.050)
Hispanic −0.033 −0.159∗∗∗
(0.071) (0.059)
Asian −0.025 −0.037
(0.065) (0.061)
Other Race −0.058 0.055
(0.105) (0.092)
Millennial 0.084 0.117∗∗
(0.058) (0.050)
Gen X 0.052 0.115∗∗
(0.060) (0.052)
Boomer −0.047 −0.019
(0.060) (0.053)
Silent Gen. −0.154∗ −0.040
(0.094) (0.089)
Some College −0.064 −0.011
(0.044) (0.038)
Bachelor’s Degree −0.106∗∗ −0.074∗
(0.049) (0.044)
Graduate Degree 0.233∗∗∗ 0.215∗∗∗
(0.058) (0.051)
Low Income −0.024 −0.004
(0.042) (0.036)
High Income 0.094∗∗ 0.051
(0.042) (0.037)
Midwest 0.017 0.067
(0.053) (0.046)
South 0.025 0.028
(0.045) (0.039)
West −0.021 0.019
(0.050) (0.044)
Media Literacy −0.193∗∗∗ −0.176∗∗∗
(0.018) (0.016)
Digital Literacy 0.075∗∗∗ 0.087∗∗∗
(0.014) (0.013)
Info. Uncertain x Anti-partisan −0.070
(0.084)
Info. Uncertain x Co-partisan −0.036
(0.079)
Opp. Rally x Anti-partisan −0.035
(0.073)
Opp. Rally x Co-partisan 0.113
(0.069)
Constant −0.058 −0.175∗
(0.103) (0.094)
N 2,497 3,364
R2 0.162 0.162
Sample Studies 1 & 2 Studies 1 & 2
∗ p < .1; ∗∗ p < .05; ∗∗∗ p < .01

Notes: Robust standard errors in parentheses.

Table A6: Figure 4 Regression Results


Politician Support Index
(1) (2)
Allegation −0.044
(0.052)
Info. Uncertain −0.072∗
(0.041)
Strong Democrat 0.718∗∗∗ 0.855∗∗∗
(0.079) (0.055)
Democrat 0.492∗∗∗ 0.512∗∗∗
(0.079) (0.053)
Lean Democrat 0.234∗∗∗ 0.397∗∗∗
(0.089) (0.058)
Lean Republican −0.239∗∗∗ −0.092
(0.085) (0.062)
Republican −0.374∗∗∗ −0.105
(0.095) (0.065)
Strong Republican −0.525∗∗∗ −0.411∗∗∗
(0.100) (0.067)
Female −0.055 −0.112∗∗∗
(0.050) (0.034)
Black 0.137∗ 0.096
(0.081) (0.059)
Hispanic 0.035 −0.070
(0.091) (0.074)
Asian 0.137 0.106
(0.096) (0.077)
Other Race −0.238 −0.121
(0.188) (0.099)
Millennial −0.034 0.168∗∗∗
(0.084) (0.058)
Gen X 0.050 0.173∗∗∗
(0.086) (0.061)
Boomer 0.017 0.121∗
(0.093) (0.062)
Silent Gen. 0.042 0.043
(0.164) (0.095)
Some College 0.025 −0.099∗∗
(0.064) (0.044)
Bachelor’s Degree 0.096 0.010
(0.076) (0.051)
Graduate Degree 0.311∗∗∗ 0.349∗∗∗
(0.087) (0.059)
Low Income 0.104∗ 0.066
(0.061) (0.042)
High Income 0.111∗ 0.071∗
(0.061) (0.043)
Midwest 0.079 0.014
(0.077) (0.053)
South −0.018 −0.044
(0.067) (0.046)
West −0.019 −0.044
(0.075) (0.052)
Media Literacy −0.047∗ −0.057∗∗∗
(0.026) (0.019)
Digital Literacy 0.036∗ 0.036∗∗
(0.021) (0.014)
Constant −0.266∗∗ −0.368∗∗∗
(0.128) (0.092)
N 1,249 2,518
R2 0.239 0.265
Sample Study 1 Text Only Study 2
∗ p < .1; ∗∗ p < .05; ∗∗∗ p < .01

Notes: Robust standard errors in parentheses.

Table A7: Table 1 Full Regression Results

Politician Support Index
Denial vs. IU −0.062
(0.039)
Apology vs. IU −0.101∗∗
(0.040)
Strong Democrat 0.155∗∗∗
(0.057)
Democrat 0.125∗∗
(0.051)
Lean Democrat −0.069
(0.051)
Lean Republican 0.105∗
(0.055)
Republican 0.275∗∗∗
(0.059)
Strong Republican 0.237∗∗∗
(0.067)
Female −0.057∗
(0.034)
Black 0.145∗∗∗
(0.056)
Hispanic −0.046
(0.061)
Asian 0.023
(0.072)
Other Race −0.078
(0.114)
Millennial 0.286∗∗∗
(0.058)
Gen X 0.165∗∗∗
(0.061)
Boomer 0.110∗
(0.062)
Silent Gen. 0.148
(0.098)
Some College 0.011
(0.043)
Bachelor’s Degree 0.022
(0.050)
Graduate Degree 0.097
(0.063)
Low Income 0.076∗
(0.040)
High Income 0.116∗∗∗
(0.043)
Midwest −0.035
(0.053)
South −0.009
(0.046)
West −0.011
(0.050)
Media Literacy −0.164∗∗∗
(0.018)
Digital Literacy 0.080∗∗∗
(0.014)
Constant −0.317∗∗∗
(0.086)
N 2,994
R2 0.084
Sample Study 3
∗ p < .1; ∗∗ p < .05; ∗∗∗ p < .01

Notes: Robust standard errors in parentheses.

Table A8: Figure 5 Regression Results

Support Index Belief Index Trust Index
(1) (2) (3)
Info. Uncertain 0.103∗∗ −0.311∗∗∗ −0.120∗∗∗
(0.040) (0.041) (0.038)
Strong Democrat 0.112 0.184∗∗ 0.687∗∗∗
(0.071) (0.076) (0.066)
Democrat 0.124∗∗ 0.326∗∗∗ 0.495∗∗∗
(0.061) (0.059) (0.060)
Lean Democrat −0.084 0.108 0.282∗∗∗
(0.061) (0.068) (0.064)
Lean Republican 0.119∗ 0.057 −0.246∗∗∗
(0.067) (0.070) (0.066)
Republican 0.232∗∗∗ 0.114 −0.220∗∗∗
(0.076) (0.075) (0.076)
Strong Republican 0.192∗∗ 0.031 −0.408∗∗∗
(0.084) (0.079) (0.078)
Female −0.049 −0.086∗∗ −0.061
(0.041) (0.042) (0.039)
Black 0.135∗∗ −0.129∗ −0.023
(0.068) (0.070) (0.064)
Hispanic 0.033 −0.254∗∗∗ −0.063
(0.079) (0.088) (0.077)
Asian 0.036 −0.038 0.032
(0.092) (0.086) (0.092)
Other Race −0.125 −0.312∗ −0.226
(0.149) (0.171) (0.153)
Millennial 0.284∗∗∗ 0.048 0.200∗∗∗
(0.070) (0.071) (0.071)
Gen X 0.117 −0.100 0.122∗
(0.074) (0.075) (0.074)
Boomer 0.103 −0.106 0.048
(0.074) (0.076) (0.076)
Silent Gen. 0.112 −0.202∗ 0.026
(0.123) (0.120) (0.111)
Some College −0.006 0.057 −0.087∗
(0.054) (0.054) (0.051)
Bachelor’s Degree 0.023 0.059 0.150∗∗
(0.063) (0.065) (0.061)
Graduate Degree 0.141∗ 0.080 0.179∗∗
(0.077) (0.080) (0.075)
Low Income 0.085∗ −0.027 0.144∗∗∗
(0.050) (0.051) (0.048)
High Income 0.106∗∗ 0.004 0.062
(0.052) (0.053) (0.048)
Midwest −0.040 −0.007 −0.127∗∗
(0.064) (0.065) (0.060)
South 0.024 −0.028 −0.108∗∗
(0.057) (0.058) (0.054)
West −0.029 −0.109∗ −0.093
(0.061) (0.063) (0.059)
Media Literacy −0.161∗∗∗ 0.015 −0.075∗∗∗
(0.022) (0.023) (0.020)
Digital Literacy 0.069∗∗∗ 0.075∗∗∗ 0.068∗∗∗
(0.017) (0.018) (0.017)
Constant −0.374∗∗∗ −0.022 −0.250∗∗
(0.107) (0.105) (0.105)
N 1,994 1,994 1,994
R2 0.082 0.081 0.223
F Statistic (df = 26; 1967) 6.728∗∗∗ 6.650∗∗∗ 21.688∗∗∗
Sample Study 3 Study 3 Study 3
∗ p < .1; ∗∗ p < .05; ∗∗∗ p < .01

Notes: Robust standard errors in parentheses.

Table A9: Table 2 Full Regression Results

Figure A2: Study 2: Without Covariate Adjustment

A.9.1 Study 1
We first interact each treatment allegation with attentiveness and media literacy, separately, to assess heterogeneous treatment effects in the Study 1 survey. We use the interaction with attentiveness to explore whether treatment effects are stronger among respondents who are more engaged, and as a robustness check given the possibility of satisficing behavior. Figure A6 shows results for both theoretical channels, stratified by participants' level of attentiveness (0-2). In line with expectations, the magnitude of effect sizes is larger for attentive survey participants, though the coefficient on the interaction term in the associated regression model is not significant (nominal p-values are 0.23 and 0.16, and adjusted p-values are 0.25 and 0.25, for informational uncertainty and oppositional rallying, respectively). Our main results are thus likely conservative, given that they do not exclude inattentive participants. Indeed, Figure A7 reproduces our main results for Study 1, subset to participants who passed both screeners, and the impacts of informational uncertainty and oppositional rallying on support are both larger in magnitude, reaching as high as 0.25 standard deviations.
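The interaction specification behind these comparisons can be sketched as follows, again on synthetic stand-in data with hypothetical column names:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "support": rng.normal(size=400),
    "treat": rng.integers(0, 2, size=400),
    "attentive": rng.integers(0, 3, size=400),  # screeners passed, 0-2
})

# The interaction term asks whether the treatment effect grows with attentiveness.
m = smf.ols("support ~ treat * attentive", data=df).fit(cov_type="HC2")
print(m.params["treat:attentive"], m.pvalues["treat:attentive"])
```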

Similarly, Figure A8 shows results for both theoretical channels, stratified by participant media literacy (0-3). Against expectations, the magnitude of effect sizes is larger for survey participants with higher levels of media literacy, though the coefficient on the interaction term in the associated regression model is not significant (nominal p-values are 0.25 and 0.20, and adjusted p-values are 0.25 and 0.25, for informational uncertainty and oppositional rallying, respectively). This result suggests that more media-literate individuals are actually more susceptible to the liar's dividend, whereas we had hypothesized that media literacy would be a mitigating factor, in line with a substantial literature and policy discourse urging media literacy education. Note, however, that these results cannot be interpreted causally and may reflect heterogeneous effects associated with demographic characteristics, such as education and partisanship, that may be correlated with media literacy.

Figure A3: Study 1: With Politician Fixed Effects

Next, Table A10 presents nominal and BH-adjusted p-values for three exploratory analyses related
to informational uncertainty and the belief outcome, using Study 1 results. First, we consider
whether informational uncertainty has stronger effects on belief for moderates; the effects are larger
but are not statistically distinguishable. Second, we evaluate whether the informational uncertainty
treatment increased the overall variance of the belief measure, as a reflection of uncertainty, com-
pared to control. Contrary to our expectations, it does not. Indeed, there is some evidence to
suggest that partisans with strong prior views actually moderated those views, leading to less vari-
ance for informational uncertainty (var = .79) compared to control (var = .95). That is, increased
individual uncertainty may have translated to decreased population-level variance, such that our
original hypotheses committed a compositional fallacy. Finally, we evaluate whether the coefficients
on the informational uncertainty and oppositional rallying treatments are statistically distinct for
belief. We find that they are not.
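The variance comparison can be illustrated with a standard equality-of-variances test; Levene's test is one common choice, though the paper does not specify the exact test statistic used, and the draws below are synthetic stand-ins calibrated only to the reported variances:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
belief_control = rng.normal(0.0, np.sqrt(0.95), size=600)  # var ~ .95 (reported)
belief_iu = rng.normal(0.0, np.sqrt(0.79), size=600)       # var ~ .79 (reported)

# Levene's test is robust to non-normality when comparing group variances.
stat, p = stats.levene(belief_control, belief_iu)
print(np.var(belief_control, ddof=1), np.var(belief_iu, ddof=1), p)
```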

Figure A4: Study 2: With Politician Fixed Effects

Nominal p-value Corrected p-value
IU*Moderates (ATE for Belief) 0.19 0.42
IU vs. Control (Belief Distributions) 0.39 0.42
IU vs. OR (ATE for Belief) 0.42 0.42

Table A10: Exploratory Analyses for Informational Uncertainty
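The corrected p-values follow the Benjamini-Hochberg procedure; a minimal sketch using the nominal values from Table A10 reproduces the corrected column:

```python
from statsmodels.stats.multitest import multipletests

nominal = [0.19, 0.39, 0.42]  # nominal p-values from Table A10
reject, adjusted, _, _ = multipletests(nominal, alpha=0.05, method="fdr_bh")
print(adjusted)  # all adjust to 0.42, matching the corrected column
```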

Table A11 presents nominal and BH-adjusted p-values for two exploratory analyses related to oppositional rallying and the support outcome, again using Study 1 results only. First, we consider whether oppositional rallying has stronger effects on support for (strong) co-partisans; the effects are larger but are not statistically distinguishable. Note that this analysis collapses moderates and anti-partisans into a single category, whereas the corresponding figure in the main paper separates them. The former analysis fails to support our hypothesis, though the latter analysis may be more illuminating. Second, we evaluate whether the coefficients on the informational uncertainty and oppositional rallying treatments are statistically distinct for support. We find that they are not.

A.9.2 Study 2
In Study 2, to consider factors that may mitigate the harms of the liar’s dividend, we also introduced
a new experimental component: fact-checking statements designed to counteract the politicians’
false allegations of misinformation. We designed the fact-checking treatment based on practices
considered to be most impactful according to a recent meta-analysis by Walter et al. (2020), which finds that complex statements and graphical elements are less effective, while length is not important. Our statements are inspired by typical language used by two prominent fact-checking organizations, FactCheck.org and PolitiFact; they are not overly complex or long, and they omit graphics or visual elements. The fact-checking statement, reportedly from a non-partisan fact-checking organization, informs participants that “[Politician Name] was recently accused of making offensive comments but disputes the truthfulness of the story. We find evidence that [Politician Name] did make the comments as originally reported.”

Figure A5: Study 1: Support Outcome Disaggregated

Figure A6: Heterogeneous Treatment Effects by Attentiveness

Figure A7: The Liar’s Dividend: Study 1 Results for Attentive Respondents

Figure A8: Heterogeneous Treatment Effects by Media Literacy

Following the analyses in Studies 1 and 2, we regress politician support on the informational uncertainty allegation and the allegation followed by fact-checking (the reference group received no allegation), and a set of pre-registered covariates. Fortunately, Table A12 suggests that fact-checking can eliminate the liar’s dividend. While the informational uncertainty treatment increased politician support, a statement rebutting the politician’s allegation and confirming the original scandal wipes away any gains in politician support.

Nominal p-value Corrected p-value
OR*Co-Partisans (ATE for Support) 0.27 0.53
IU vs. OR (ATE for Support) 0.53 0.53

Table A11: Exploratory Analyses for Oppositional Rallying

Dependent variable: Support
Info. Uncertain 0.072∗
(0.042)
IU + Fact Check −0.011
(0.042)
Observations 2,518
R2 0.137
∗ p < .1; ∗∗ p < .05; ∗∗∗ p < .01

Table A12: The Impact of Fact-Checking on the Liar’s Dividend

It is reassuring that even a single fact check might counteract misinformation about misinformation, particularly as the literature on fact-checking cautions that individuals may be reluctant to accept fact checks that run counter to their political identity and beliefs. Yet the fact-checking statements in our study are presented in the context of issues that may not be highly salient to individuals’ current political priorities, with statements concerning politicians who are no longer prominent. Furthermore, in practice, fact-checking organizations may not always get the last word, and politicians are likely to counter-argue and drown out fact-checkers. This may be especially problematic, as individuals may have a low propensity to seek out fact-checking information. Indeed, in this study, participants were uninterested in learning more about fact-checking: fewer than 2% of respondents clicked on the additional resources for spotting fake news and deepfakes provided in the debrief. We believe that analyzing the dynamics between politicians who falsely allege misinformation and the organizations that attempt to fact-check them is a fruitful area for further research.

Also in Study 2, we incorporated new exploratory outcome and covariate questions, including questions related to informational uncertainty. These questions were designed to assess whether informational uncertainty indeed works through inducing uncertainty or changing belief, as originally hypothesized, or through other mechanisms. For example, we explicitly asked respondents exposed to the informational uncertainty treatment whether they believed the politician’s allegation that the original story was false. Note that this differs from our key outcome measuring respondent belief in the scandalous story. Based on our theory of informational uncertainty, we expected that individuals who reported believing the politician’s allegation would also be more likely to agree with the statement that “it’s hard to know what’s true these days,” a measure of uncertainty that directly mirrors the language invoked by the politician’s allegation. Logically, we expected this uncertainty to then translate into relative gains in politician support via the liar’s dividend, through affecting belief in the underlying scandal.

Believe Allegation    Hard to Know What’s True    Alleg. Affects Support
No    74%    8%
Yes    81%    42%
p-value of difference    0.02    0.00

Table A13: Exploring Informational Uncertainty

Table A13 shows that, among those exposed to the informational uncertainty prime, individuals who believe the allegation are also more likely to agree with the statement that “it’s hard to know what’s true these days.”³ The difference is statistically significant (p = 0.02) and suggests that the informational uncertainty channel works as intended for some individuals, at least in terms of its most immediate effects. Moreover, believing the politician’s allegation and agreeing with the statement that “it’s hard to know what’s true these days” are both correlated with increased politician support (r = 0.38 and r = 0.22, respectively). Consistent with this finding, when we ask participants explicitly whether the politician’s allegation affected their support, 42% of those who believed the allegation responded affirmatively, compared to only 8% of those who did not believe the allegation (p-value of difference = 0.00).
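The reported differences in shares can be checked with a two-sample proportions test; the cell counts below are hypothetical values chosen to imply roughly the reported 81% vs. 74% agreement, not the actual group sizes from the survey:

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical counts: believers vs. non-believers of the allegation who
# agree that "it's hard to know what's true these days."
agree = [405, 296]  # 405/500 = 81%, 296/400 = 74%
nobs = [500, 400]
stat, p = proportions_ztest(agree, nobs)
print(round(p, 3))
```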

However, for members of this treatment group, believing the politician’s allegation is, oddly, not correlated with belief in the scandal itself (r = −0.003). Overall, these results are puzzling. Despite some descriptive evidence that informational uncertainty works as intended by elevating considerations of uncertainty, in combination with the experimental evidence there appear to be substantial inconsistencies in the ways individuals process their beliefs. This may be evidence of a belief-support disconnect, expressive reporting, or something else. Differences within the informational uncertainty treatment group also point to heterogeneous responses to politician allegations, which are washed out when we consider the treatment group as a whole.

We also considered whether attitudes toward forgiveness, accountability, and cancel culture might influence the proclivity of participants to accept politician allegations of misinformation and to support or punish politicians as a result. We asked participants directly if hearing the politician’s allegation increased their support. Table A14 displays the results of an analysis that divides respondents into those who said allegations of misinformation increased their support of politicians and those who did not report increased support. For each covariate, we present average values for each group and indicate whether the differences are statistically significant.
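A sketch of one such group comparison appears below; the means mirror the first row of Table A14, but the dispersions, group sizes, and the choice of Welch's t-test are assumptions for illustration, as the paper does not state the exact test used:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Stand-in values for one covariate ('Prefer Accountability'); the group
# means match Table A14, but spread and sample sizes are assumed.
no_increase = rng.normal(0.64, 0.3, size=800)
increase = rng.normal(0.43, 0.3, size=300)

# Welch's t-test allows unequal variances across the two groups.
t, p = stats.ttest_ind(increase, no_increase, equal_var=False)
print(round(t, 2), p)
```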

Those who did increase their support are statistically significantly more likely to favor second chances over accountability, to be more concerned about fake news, to feel confident in their ability to detect fake news (perhaps an indicator of gullibility), to be co-partisans of the politician in the story, to be Democrats, and to favor political correctness. While some of these differences may be informative for understanding how informational uncertainty and the liar’s dividend operate, not all of them are clear or point in the same direction. As such, further work is needed to understand how individuals update their evaluations of politicians in light of allegations that invoke uncertainty.
³ We classify respondents as believing the politician’s allegation if they strongly agreed or agreed with the politician’s allegation that the news story is false.

Covariate No Support Increase Support Increase p-value of Diff.
Prefer Accountability 0.64 0.43 0.00
Cancel Culture is Problem 2.89 3.00 0.16
Concerned about Fake News 2.95 3.29 0.00
Can Detect Fake News 2.51 2.93 0.00
Find Story Offensive 3.46 3.60 0.19
Co-partisan -0.04 0.10 0.04
Republican 3.71 3.23 0.01
Favor Political Correctness 3.50 3.78 0.00

Table A14: Factors Related to Susceptibility to Informational Uncertainty

A.10 Design Choices Based on a Pilot Study


We administered a pilot study in August 2020 to 916 American adult Amazon Mechanical Turk
workers. The purpose of the study was to test a set of candidate videos for inclusion in the main
study, to evaluate potential wordings of the politician response treatments, and to perform basic
manipulation checks (i.e., whether respondents could see and hear the videos and whether they
could correctly recall the stated political party of the politician). Table A15 summarizes how the
results of the pilot study informed the design of our main study.

Question: Are informational uncertainty and oppositional rallying mechanisms distinct enough to use as separate treatments?
Pilot Result: IU and OR appear to have distinct impacts on outcome measures: IU has a large, negative impact on the belief measure; OR has a large, positive impact on the support measure.
Design Decision: We will use IU and OR as distinct politician response treatments.

Question: What is the best way to measure informational uncertainty?
Pilot Result: Use of a bi-directional uncertainty measure was confusing and did not give us additional information beyond the distribution of the belief measure.
Design Decision: We will use the distribution of the belief measure to evaluate uncertainty. The belief measure scale and all outcome measures will be unidirectional to be clearer for participants.

Question: Does use of the term “fake news” carry partisan connotation?
Pilot Result: Yes, “fake news” has a statistically significant association with the Republican party and is visibly a polarizing term in open-ended responses.
Design Decision: We will use the alternative term “false and misleading” to describe stories in the politician response treatments.

Question: Which video treatments from the candidate set of 6 are best to use?
Pilot Result: All videos were generally perceived as moderately embarrassing and plausibly faked, which makes them comparable and usable in a study of the liar’s dividend. We found respondents were more familiar with two of the politicians/events depicted.
Design Decision: We will use four videos (2 Democrat, 2 Republican). Two are more familiar to respondents and thus serve as a harder test for our theory, given that we expect respondents’ beliefs and support to change. The second two videos are less familiar.

Question: Are respondents able to see/hear videos?
Pilot Result: 98% reported no difficulties.
Design Decision: We include subtitles in videos in case some respondents have trouble hearing them.

Question: Can respondents correctly identify the politician party that was provided to them from the video description?
Pilot Result: Between 77% and 88% correctly identify the politician’s party.
Design Decision: We will mention the politician’s party multiple times in the video/text description and title.

Question: What sample size is necessary for the main study?
Pilot Result: An MDE of 0.16 is possible with a sample size of 2,500.
Design Decision: Our study should have at least 2,500 respondents.

Table A15: Pilot Study Results that Inform Study Design
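The pilot's MDE calculation can be approximated with a standard power analysis. The even four-arm split assumed below is illustrative rather than a statement of the actual design, but it recovers a minimum detectable effect near 0.16 standard deviations for 2,500 total respondents:

```python
from statsmodels.stats.power import TTestIndPower

# MDE in SD units at 80% power and two-sided alpha = .05. Splitting 2,500
# respondents evenly across four arms (~625 each, an assumption) yields an
# MDE of roughly 0.16 SD, consistent with the pilot's calculation.
mde = TTestIndPower().solve_power(effect_size=None, nobs1=625,
                                  alpha=0.05, power=0.8, ratio=1.0)
print(round(mde, 2))  # ~0.16
```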

References
Berinsky, Adam J., Michele F. Margolis and Michael W. Sances. 2014. “Separating the Shirkers from
the Workers? Making Sure Respondents Pay Attention on Self-Administered Surveys.” American
Journal of Political Science 58(3):739–753.

Berinsky, Adam J., Michele F. Margolis, Michael W. Sances and Christopher Warshaw. 2019. “Using
Screeners to Measure Respondent Attention on Self-Administered Surveys: Which Items and How
Many?” Political Science Research and Methods pp. 1–8.

Bohlken, Anjali Thomas, Nikhar Gaikwad and Gareth Nellis. 2018. “The Politics of Public Service Formalization in Urban India.” Working paper, 51 pp.

DeclareDesign. 2019. “Should a Pilot Study Change Your Study Design Decisions?”.
URL: https://declaredesign.org/blog/2019-01-23-pilot-studies.html

Hargittai, Eszter. 2005. “Survey Measures of Web-Oriented Digital Literacy.” Social Science Com-
puter Review 23(3):371–379.

Kling, Jeffrey R., Jeffrey B. Liebman and Lawrence F. Katz. 2007. “Experimental Analysis of
Neighborhood Effects.” Econometrica 75(1):83–119.

Maksl, Adam, Seth Ashley and Stephanie Craft. 2015. “Measuring News Media Literacy.” Journal
of Media Literacy Education 6(3):29–45.
URL: https://digitalcommons.uri.edu/jmle/vol6/iss3/3

Newman, Nic, Richard Fletcher, Antonis Kalogeropoulos and Rasmus Kleis Nielsen. 2019. Reuters Institute Digital News Report 2019. Technical report, Reuters Institute and University of Oxford, Oxford, UK.
URL: https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2019-06/DNR_2019_FINAL_0.pdf

Oppenheimer, Daniel M., Tom Meyvis and Nicolas Davidenko. 2009. “Instructional Manipulation
Checks: Detecting Satisficing to Increase Statistical Power.” Journal of Experimental Social Psy-
chology 45(4):867–872.

Walter, Nathan, Jonathan Cohen, R. Lance Holbert and Yasmin Morag. 2020. “Fact-Checking: A
Meta-Analysis of What Works and for Whom.” Political Communication 37(3):350–375.
