Increasing Threats of Deepfake Identities
When people think of a deepfake face swap, they commonly think of a video like the one of Bill Hader transforming into Tom Cruise, Arnold Schwarzenegger, Al Pacino, and Seth Rogen on David Letterman's show in 2019. This is an example of deepfake use that is not harmful. But as we've seen, there is a dark side to AI/ML, and it has nothing to do with the technology itself, but with the people using it.
The world needs to be aware of the inherent risk posed by malign actors' use of deepfakes.
One major harmful use of face swap technology is deepfake pornography. Face swap technology was used to put actors Kristen Bell and Scarlett Johansson in several pornographic videos. One of the fake videos, labeled as "leaked" footage, generated over 1.5 million views.13 Women have no way of preventing a malign actor from creating deepfake pornography. The use of the technology to harass or harm private individuals who do not command public attention and cannot command the resources necessary to refute falsehoods should be concerning.14 The ramifications of deepfake pornography have only begun to be seen.
Another deepfake technique is "lip syncing." Lip syncing involves "mapping [a] voice recording from one or multiple contexts to a video recording in another, to make the subject of the video appear to say something authentic."15 Lip syncing technology allows the user to make their target say anything they want through the use of recurrent neural networks (RNNs). In November 2017 Stanford researchers published a paper and model for Face2Face, an RNN-based video production model that allows third parties the ability to put words in the mouths of public figures in real time.16
Deepfake technology has grown rapidly since then and has become more available to the general population. As these techniques become more accessible, the risk of harm to private figures will increase, especially for those who are politically, socially, or economically vulnerable.17
A new AI/ML technology called Wav2Lip has enabled the creation of lip sync deepfakes.18 19 Wav2Lip produces extremely realistic audio and is the first speaker-independent model to generate videos with lip-sync accuracy that matches real synced videos.20 Human evaluations indicate that the generated videos of Wav2Lip are preferred over existing methods and unsynced versions more than 90% of the time.
A final deepfake technique represented on the timeline is referred to as the "puppet" technique. As the name implies, the puppet technique allows the user to make the targeted individual move in ways they did not actually move. This can include facial movements or whole-body movements. Puppet deepfakes use Generative Adversarial Network (GAN) technology that consists of computer-based graphics. Refer to the text box below for more about GANs.
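To complement the description of GANs in the text box, the following is a purely conceptual sketch of the adversarial idea behind them: a generator learns to produce samples that a discriminator cannot distinguish from real data. This toy PyTorch example works on a one-dimensional stand-in distribution rather than faces; the network sizes, learning rates, and data are illustrative assumptions, not any particular deepfake model.

    # Minimal GAN sketch (PyTorch): a generator and discriminator trained adversarially.
    # Toy example on a 1-D Gaussian "real" distribution, not an actual deepfake model.
    import torch
    import torch.nn as nn

    latent_dim = 8

    generator = nn.Sequential(          # maps random noise to a fake "sample"
        nn.Linear(latent_dim, 32), nn.ReLU(),
        nn.Linear(32, 1),
    )
    discriminator = nn.Sequential(      # scores how "real" a sample looks
        nn.Linear(1, 32), nn.ReLU(),
        nn.Linear(32, 1), nn.Sigmoid(),
    )

    loss_fn = nn.BCELoss()
    opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

    for step in range(2000):
        real = torch.randn(64, 1) * 0.5 + 3.0        # "real" data drawn from N(3, 0.5)
        noise = torch.randn(64, latent_dim)
        fake = generator(noise)

        # 1) Train the discriminator to separate real from fake.
        opt_d.zero_grad()
        d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
                 loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
        d_loss.backward()
        opt_d.step()

        # 2) Train the generator to fool the discriminator.
        opt_g.zero_grad()
        g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
        g_loss.backward()
        opt_g.step()

At scale, the same adversarial dynamic, applied to high-resolution images rather than single numbers, is what produces the photorealistic synthetic faces discussed throughout this paper.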
In 2015, the Defense Advanced Research Projects Agency (DARPA) launched the Media Forensics, or "MediFor," program. MediFor aimed to automatically detect manipulations in visual media (images and videos), provide detailed information about the manipulation or manipulations detected, and reason about the overall integrity of the media to help end users authenticate any questionable image or video.
To place MediFor developments within the broader context of synthetic media and techniques for content manipulation, the MediFor program mapped out the state of the art in synthetic media and visual manipulation technologies. The chart below is designed to map different technologies against the level of skill (horizontal axis) and level of resources (i.e., time, money, and compute power; vertical axis) needed to utilize the different techniques or technologies.
As noted in the text box above, Generative Adversarial Networks (GANs) represent a key
evolution in the creation of realistic synthetic content. There is not a single GAN, which is
why the chart above shows multiple GAN entries, including the possibility of new ones. The
chart shows that deepfakes today require a low level of skill and resources.
Any form of synthetic media manipulation can be weaponized to cause damage. The rapid
advancement of deepfakes, however, increases the threat. Deepfakes are more realistic than cheapfakes and harder to detect. Dr. Matthew Wright, Director of Research for the Global Cybersecurity Institute and a Professor of Computing Security at the Rochester Institute of Technology, told the AEP Team that '…cheapfakes must be easier to do because we see so much more of them.' However, deepfakes are the emerging, weightier issue. It is the difference between a calculator and a computer: the computer is exponentially more powerful and more capable. Deepfakes are more visually compelling, more accurate, and higher resolution; anything you can do with cheapfakes you can do with deepfakes, only more believably.
-------------------------------------------
THREAT ENVIRONMENT
Deepfakes and the misuse of synthetic content pose a clear, present, and
evolving threat to the public across national security, law enforcement,
financial, and societal domains.
To adequately characterize the current threat landscape posed by deepfakes, we must
consider the context of the varied threats, malign actors, victims, and technology. Experts
across industry, academia, and government hold diverse perspectives on the threat of synthetic media. Some assert that the threat is overblown and overstated, that creating a convincing deepfake is technically infeasible, or that deepfakes will simply be ineffective against a skeptical public.
However, ask someone victimized by a malicious non-consensual synthetic pornography
attack and they will express that the threat is very real and harmful.
The truth is that deepfake attacks are here and appear to be proliferating in domains like
non-consensual pornography and in limited influence campaigns on social media platforms.
Since 2019 malign actors associated with nation-states, including Russia and China, have
conducted influence operations leveraging GAN-generated images on their social media
profiles. They used these synthetic personas to build credibility and believability to promote
a localized or regional issue. These are not isolated incidents; the technique now appears common in the age of influence campaigns. Social media platforms, such as Facebook, and AI/ML research companies, such as Graphika, were able to detect these profiles and assess that the images were AI-based and synthetically generated. Detecting these synthetic personas was a notable success, yet detection was not always timely, and it is nearly impossible to say how many other social media profiles using GAN-generated images have gone undetected. Below are some specific examples of synthetic content used as part of influence campaigns:
• From 2020 to 2021, social media personas with GAN-generated images criticized Belgium's posture on 5G restrictions, in an apparent effort to support Chinese firms attempting to sell 5G infrastructure.24
• In 2021, FireEye reported that cyber actors used GAN-generated images on social media platforms to promote Lebanese political parties.25
• Multiple influence campaigns conducted by cyber actors associated with nation-states used GAN-generated images targeting localized and regional issues.26, 27, 28
• In 2021, AI Dungeon, which uses synthetically generated text, was found to generate text depicting the sexual exploitation of children. This behavior was driven by user input and the model's vast training data; nevertheless, it demonstrated the unintended consequences of AI/ML-based content generation with few constraints. AI Dungeon relied on OpenAI's GPT-3, an auto-regressive language model. Models of this kind can be applied to a number of different capabilities, including natural language generation, text-to-image generation, translation, and other text-based tools.32 (A minimal text-generation sketch follows this list.)
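To illustrate how easily synthetic text of this kind can be produced with openly available tools, the sketch below uses the Hugging Face transformers library with the small, publicly released GPT-2 model as a stand-in (the incidents above involved GPT-3, which is accessed through OpenAI's API). The prompt and generation settings are assumptions for the example.

    # Sketch: generating plausible-sounding synthetic text with an openly available
    # auto-regressive language model (GPT-2). Requires: pip install transformers torch
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    prompt = "Local residents say the new 5G towers"
    outputs = generator(prompt, max_new_tokens=40, do_sample=True, num_return_sequences=3)

    for out in outputs:
        print(out["generated_text"])   # three different continuations of the same prompt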
A variety of factors affect the threat landscape, including the advancing capabilities of
AI/ML-generated synthetic content; legal frameworks; norms and thresholds agreed to by
nation-states; the opportunity to use synthetic content to carry out fraud schemes; and the
susceptibility of the public to believe what they see. Not all malign actors will be affected by
every one of these factors. Cyber criminal actors involved in financial fraud schemes will not
care about norms governing the use of synthetic content by nation-states, for example.
Nation-states may view a dramatic deepfake attack as an escalation, and thus while they
may be among the most capable of actors, they may also be the least likely to deploy a
deepfake attack. Comparatively, cyber criminals and other individuals will likely be
undeterred from creating synthetic media. The “high-impact, low probability” attacks will
likely be outnumbered by the “low-impact, high probability” attacks, but by no means is
being victimized by non-consensual pornography, for example, low-impact to that individual.
The general public may be much more resilient against deepfake content after they’ve
experienced an initial attack, and therefore malign actors might prefer to wait for a ‘big
score’ before they attempt a high-impact deepfake. These actors may assess that the
window of opportunity is closing as the public gradually builds resilience against synthetic
media and may be incentivized to go for that 'big score' sooner rather than later.
An example scenario involving a deepfake video potentially fueling unrest and violence was
suggested by Professor Danielle Citron in testimony to the House Permanent Select
Committee on Intelligence in 2019. In this scenario a malign actor produces a deepfake
video of the Commissioner of the Baltimore Police Department (BPD) endorsing the
mistreatment of Freddie Gray, an African American who died in police custody in 2015.34
The scenario described by Professor Citron could unfold in the following way. The malign
actor, having decided to incite violence by means of publishing a deepfake, first performs
research on the Baltimore Police Department to gather still images, video, and audio of the
Commissioner to use as training data. This could come from past press conferences and/or
news stories.
Next, the malign actor uses the gathered information to train an AI/ML model to mimic the
likeness and voice of the Commissioner. With a trained model, the malign actor creates a
deepfake video of the police Commissioner endorsing the mistreatment of Mr. Gray, staged
as a private discussion to lend credibility to the statements, which might also include
inflammatory remarks. The malign actor could then anonymously post the video to social media sites and draw attention to it using fake social media accounts.
National Security & Law Enforcement Scenario 2: Producing False Evidence About Climate
Change
In this scenario, Chinese satellites would capture images of Antarctica and its surrounding ice sheet. Practitioners would use AI/ML models to generate features that make it appear that ice growth has increased or stayed steady instead of decreasing. China
then uses this synthetically generated satellite imagery to convince the United Nations and
others to delay or cancel implementation of stricter climate agreements that would be
unfavorable to China's economic development. If unsuccessful, China could use this fake data to credibly dispute the need for these agreements and ignore environmental restrictions such as limits on greenhouse gas emissions. Although US government and NGO satellites may have other data to contest China's submission to the UN, this could cause delays and confusion and undermine global agreements.
National Security & Law Enforcement Scenario 3: Deepfake Kidnapping
In this scenario, a criminal gang operating in a tourist location in Mexico conducts targeted
and opportunistic fraud schemes against victims using synthetic images and video to depict
someone in a situation of captivity. The malign actors wouldn’t actually kidnap someone but
would use images and information they find either online or from a stolen device to conduct
the fraud scheme. The malign actors would then contact the family of the target and
demand ransom. They would show fabricated "proof of life" images of the victim in a hotel room, possibly bound and blindfolded. The malign actors could also send follow-up images of the victim with indications of injury to place more pressure on the victim's family to pay the ransom. The purported victim would not be in any direct danger and may be completely unaware that any of this is taking place.
National Security & Law Enforcement Scenario 4: Producing False Evidence in a Criminal
Case
The goal in this case is to weaken the biometric evidence by offering contradictory proof that the defendant has a clear alibi. The deepfake content here is the submitted video itself. It is not only the timestamp that can be modified to provide an alibi; unique circumstances can also be created in the video so that it looks genuine. The deepfake video could also, indirectly, render otherwise strong evidence (such as biometrics) circumstantial, due to the specifics of the building and the scene.
Commerce Scenario 1: Corporate Sabotage
In this scenario we consider the use of deepfake technology to spread misinformation about
a company’s product, place in the market, executives, overall brand, etc. This approach is
designed to negatively affect a company’s place in the market, manipulate the market,
unfairly diminish competition, negatively affect a competitor's stock price, or target a company's prospective mergers and acquisitions (M&A).
Commerce Scenario 2: Corporate Enhanced Social Engineering Attacks
In this scenario we consider the use of deepfake technology to more convincingly execute
social engineering attacks.
First, a malign actor would conduct research on the company's line of business, executives, and employees. He identifies the Chief Executive Officer (CEO) and the Finance Director of the company. The malign actor researches a new joint venture that the company recently announced. He uses TED Talks and online videos of the CEO to train a model and create deepfake audio of the CEO. The malign actor also researches the Finance Director's social media profiles and sees that the Finance Director posted a picture of a baby along with a message about how hard it is to return to work. Next, the malign actor places a call to the Finance Director with the goal of fraudulently obtaining funds. He asks the Finance Director how he is adjusting to being back at work and about the baby. The Finance Director answers the phone and recognizes his boss's voice. The malign actor directs him to wire $250K to an account for the joint venture. The funds are wired, and the malign actor then transfers them to several different accounts.
Commerce Scenario 3: Financial Institution Social Engineering Attack
In this scenario, the malign actor decides to employ deepfake audio to attack a financial institution for financial gain. First, she conducts research on the dark web and obtains names, addresses, social security numbers, and bank account numbers of several individuals. The malign actor identifies the individuals' TikTok and Instagram social media profiles. She uses the videos posted on those platforms to train a model and creates deepfake audio of the targets. The malign actor researches the financial institution's verification policy and determines that it uses a voice authentication system. Next, she calls the financial institution and passes voice authentication. She is routed to a representative and then uses the customer proprietary information obtained via the dark web. The malign actor tells the representative that she is unable to access the account online and needs to reset the password. She is provided a temporary password to access the online account. The malign actor gains access to the target's financial accounts and wires funds from the target's account to overseas accounts.
Commerce Scenario 4: Corporate Liability Concerns
If deepfakes become convincing enough and ubiquitous enough, companies may be at increased legal risk as affected consumers seek damages and compensation for financial losses from ensuing breaches, identity theft, etc. Additionally, consumers or patrons could fabricate an incident (e.g., a slip and fall in a grocery store) to defraud a company.
The first scenario would not be an attack, per se, but more of a consequence put into motion
by previous deepfake attacks.
In the second scenario, we consider a deepfake video designed to make a company seem liable for a
product that malfunctioned and caused an injury to defraud the company. The malign actor would
conduct research and identify a product with a history of causing a physical injury. Specific details
are extracted from previous incidents to include in the deepfake video. Next, the state tort laws
would be reviewed to determine the likelihood of a quick settlement. The malign actor finds videos
on YouTube and other social media platforms of the same product malfunctioning and injuring
people. The face swap model is trained and the deepfake video is created. The malign actor
releasing the video on social media and receives an outpouring of support. The company’s social
media team sees the post and reaches out to the malign actor. The pressure form social media is
mounting for the company to take accountability. The request for compensation for the injuries is
made to the company. Will the company have the ability to determine if the video is a deepfake
before they pay the malign actor? Does the company have the time to investigate the video? Is it
worth paying the malign actor and reducing the damage to the company’s brand?
In this scenario we consider a deepfake used to spread disinformation around the time of an
election. In the run-up to the election, a group of tech-savvy supporters of Candidate A could
launch a disinformation campaign against Candidate B. In the scenario, malign actors may
take advantage of audio, video, and text deepfakes to achieve their objectives. While
individual audio and video deepfakes are likely to create more sensational headlines and
grab people’s attention, the threat of text deepfakes lies in their ability to permeate the
information environment without necessarily raising alarms.
Another key use of text deepfakes is controlling the narrative on social media platforms. This
approach could heighten societal tensions, damage the reputation of an opponent, incite a
political base, or undermine trust in the election process.
2. Researching the Target: In this step, the malign actor performs research on the target
to gather still images, video, and/or audio. This could come from a variety of sources
including search engines, social media sites, video sharing sites, podcasts, news
media sites, etc.
3. Creating the Deepfake, Part 1 - Training the Model: The malign actor uses the
gathered information to train an AI/ML model to mimic the likeness and/or voice of
the target. Depending on the resources and technical sophistication of the actor, this
could be done using custom models or commercially available applications.
4. Creating the Deepfake, Part 2 – Creating the Media: The malign actor creates a
deepfake of the target doing or saying something that they never did. This could be
done using the actor’s hardware, commercial cloud infrastructure, or a third-party
application.
5. Disseminating the Deepfake: The malign actor releases the deepfake. This could be done in a targeted fashion to an individual or more generally to a wide audience, for example by sending it via email or posting it to a social media site.
6. Viewer(s) Respond: Viewers react and respond to the contents of the deepfake.
7. Victim(s) Respond: The victim of the deepfake reacts and responds, often in the form
of “damage control.” Although the victim is also a “viewer,” their response could be
very different due to their unique role in the attack.
Civil Litigation
Recently, several bills addressing deepfakes have been introduced in the US Congress and
across state legislatures. The state of Virginia extended its revenge porn law to include
criminalizing nonconsensual deepfake pornography; Texas enacted laws criminalizing
deepfakes that interfere with elections; California passed two laws that allow victims of nonconsensual deepfake pornography to sue for damages and give candidates for public office the ability to sue individuals or organizations that create or share election-related deepfakes within 60 days of an election; and Maryland, New York, and Massachusetts are considering their own specific approaches to legislating deepfakes.51 52 Nonetheless, it is argued that state laws are not the best way to solve the problem of deepfakes, since each state's legislation would target different aspects of deepfakes and apply only within that state.53 However, rushing to enact deepfake-specific laws might not be necessary.
Law enforcement officials can rely on existing criminal laws until deepfake-specific federal
laws are passed. In some cases, victims can pursue a civil suit by claiming extortion,
harassment, copyright infringement (i.e. using images protected by copyright law without
permission), intentional infliction of emotional distress, defamation, false advertisement,
and false light (i.e. a form of invasion of privacy). For example, if a deepfake is used in an
extortion scam, extortion laws could apply, and in an event where deepfakes were used to
harass someone, harassment laws could apply. Also, if a malign actor publishes offensive false information about a person and implies that it is true (e.g., photo manipulation and embellishment), a victim can pursue a false light invasion of privacy civil claim.54
Research (and other Pre-Dissemination) Phase Mitigations
- Organizational planning: Establishing and nurturing effective communications structures and channels to mitigate deepfake incidents, when they inevitably occur, will be key. Just as most organizations have PR plans in place well ahead of any potential reputational crisis, so should they have plans in place to monitor and control their own narrative and avert potential disaster caused by misinformation in its various guises. Specific tactics in such a plan could include
developing a disinformation response policy as part of an overall information security
policy and having dedicated monitoring and reporting of information, as well as
disinformation, on social media and other outlets.
o Organizations and individuals who may be targets – political and commercial – can also be proactive in monitoring and curating their multimedia output, especially any broadcast or public content. The reasons for close curation are twofold: (1) When an existing piece of content is repurposed in a deepfake, it may be possible to quickly identify the original source content and offer that as the "authentic" media; and (2) Some of the best deepfake detection models55 today use a process whereby a speaker's facial movements in a video can be measured to determine if they are consistent with prior instances of authentic speech (a simplified sketch of this idea follows this list). When a mouth replacement or puppet technique is used to create a deepfake, the overall facial movements in the altered video will differ enough from the true face that it can be detected as a deepfake.
- Training and awareness: Using the example of phishing prevention and mitigation,
employers should consider investing resources in equipping employees with the
knowledge and skills to serve as on-the-ground “first responders” to deflect and/or
report on disinformation and related threats in the workspace, including deepfakes.
Other training measures, specific to law enforcement and other officials, could focus on training them to help victims mitigate the impact of deepfake attacks on their reputation, health, and welfare. See below for more.
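As a rough illustration of the behavioral-consistency approach referenced in the curation item above, the sketch below compares summary statistics of facial-movement features from a questioned video against a baseline built from known-authentic footage. The feature extraction itself (for example, facial landmarks per frame) is assumed to have been done upstream, and the numbers and threshold are placeholders, not a real detector.

    # Sketch: flag a video whose facial-movement statistics deviate from a speaker's
    # known-authentic baseline. Assumes per-frame movement features were extracted upstream.
    import numpy as np

    def baseline_stats(authentic_feature_sets):
        """Mean and standard deviation of movement features across known-authentic videos."""
        stacked = np.vstack(authentic_feature_sets)       # shape: (total_frames, n_features)
        return stacked.mean(axis=0), stacked.std(axis=0) + 1e-8

    def consistency_score(questioned_features, mean, std):
        """Average absolute z-score of the questioned video against the baseline."""
        z = np.abs((questioned_features - mean) / std)
        return z.mean()

    # Placeholder data: three authentic clips and one questioned clip (random stand-ins).
    rng = np.random.default_rng(0)
    authentic = [rng.normal(0.0, 1.0, size=(300, 16)) for _ in range(3)]
    questioned = rng.normal(0.8, 1.0, size=(300, 16))     # shifted, as a manipulated clip might be

    mean, std = baseline_stats(authentic)
    score = consistency_score(questioned, mean, std)
    print("inconsistency score:", round(float(score), 2))
    print("flag as possible deepfake" if score > 1.0 else "consistent with prior speech")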
Creation Phase Mitigations
- Organizations and individuals who develop models utilized in deepfake creation
should also consider their responsibilities when it comes to mitigation. Individuals
and organizations who are concerned that their developments not be used in an
irresponsible or criminal manner could take steps that would make it easier to detect
when their model was used. For example, the developers may be aware of a
weakness or signature of their model which makes it easy to detect. Rather than
hide that fact, the developers could release the signature with their code as a sign of
their interest in being a responsible member of a technologically advanced society.
- Creators of deepfake content could also mark their creation as a “deepfake.”
Dissemination Phase Mitigations
- Partnership development: Partnerships among industry, academia, and law
enforcement, among other entities, would hopefully speed up the process of
detecting, labeling, and removing non-consensual images and other defamatory
synthetic media, when they occur.
- Detection and other technological innovation: Since the technology may be used for entertainment, education, and protected speech purposes, deepfake detection alone cannot constitute an entire mitigation protocol. Nevertheless, the overall importance and impact of such technological tools cannot be discounted. Successful detection allows for early intervention and mitigation. Social media platforms, Internet service providers, and other communication systems providers – those who provide the pipes through which multimedia flows – are best positioned to identify the nature of the content they are transmitting. Technologies such as Microsoft's PhotoDNA – which identifies copies of previously identified digital images – have been deployed by some of these providers to stem the flow of child sexual abuse materials (CSAM), so there is precedent (a simplified matching sketch follows this list).
- As detection tools are developed, they can be shared with social media companies,
Internet Service Providers, and communication system providers, and made available
as open source tools.
o The creation and distribution of deepfake content which can be used to train
both humans and models in detection could also assist.
- On the flip side of detecting synthetic content, there is also the opportunity to
promote authentication measures. More specifically, individuals and organizations
can take steps to demonstrate and verify the authenticity of the media they create
and consume.
o For example, in 2019, the Content Authenticity Initiative (CAI) was created56.
The CAI describes itself as “…a community of media and tech companies,
NGOs, academics and others working to promote adoption of an open industry
standard for content authenticity and provenance.” These standards would
allow users to demonstrate the provenance and attribution of their media
content in a manner accessible to all who use the standards. In this way,
consumers could check media for a “seal of authenticity,” allowing for greater
trust in the content.
o While the CAI would offer open standards available for adoption by any user, commercial service providers can also offer similar capabilities; Truepic is one such company. (A minimal signing sketch illustrating the underlying idea follows this list.)
- Finally, success on this front can also be achieved through secure communication
channels where users control all of the content (i.e., closed networks).
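As a simplified illustration of the PhotoDNA-style matching mentioned in the detection item above (PhotoDNA itself is proprietary and far more robust), the sketch below uses the open-source imagehash library to test whether an uploaded image is a near-duplicate of previously identified content. The file names and match threshold are assumptions for the example.

    # Sketch: near-duplicate matching against a list of previously identified images,
    # in the spirit of (but much simpler than) PhotoDNA. Requires: pip install imagehash pillow
    from PIL import Image
    import imagehash

    # Perceptual hashes of previously identified content (hypothetical file names).
    known_hashes = [imagehash.phash(Image.open(p)) for p in ["known_item_1.png", "known_item_2.png"]]

    def is_known_content(path, max_distance=8):
        """Return True if the image's perceptual hash is close to any known hash."""
        candidate = imagehash.phash(Image.open(path))
        return any(candidate - known <= max_distance for known in known_hashes)

    if is_known_content("upload.png"):
        print("match: block upload and report")
    else:
        print("no match with previously identified content")

Unlike an exact checksum, a perceptual hash changes only slightly when an image is resized or re-encoded, which is what allows copies to be matched even after routine alterations.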
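Along the same lines as the provenance and attribution standards promoted by the CAI (whose actual specification is considerably richer), the sketch below shows the basic cryptographic idea: a publisher signs a hash of the media file, and anyone holding the publisher's public key can later verify that the file is unchanged. The library choice, file name, and key handling here are illustrative assumptions.

    # Sketch: sign and verify a media file's hash to demonstrate provenance.
    # Requires: pip install cryptography
    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    def file_digest(path):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).digest()

    # Publisher side: sign the digest of the original media at publication time.
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()
    signature = private_key.sign(file_digest("press_briefing.mp4"))   # hypothetical file

    # Consumer side: verify that the file received matches what was signed.
    try:
        public_key.verify(signature, file_digest("press_briefing.mp4"))
        print("signature valid: file matches the published original")
    except InvalidSignature:
        print("signature invalid: file was altered or is not the published original")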
__________________________________________________________
What can we do to increase trust in real-time interactions or media?
Increasing the public's trust in real-time interactions and media is a long-term prospect, but nonetheless a critical step to protect society and institutions from disinformation. Renewed adherence to security protocols such as two-factor authentication and device-based authentication constitutes an elemental first step in this process. Additionally, it would be advantageous to pursue a strategy of investment in strengthening our democratic and media institutions and in newer and emerging technology. Blockchain authentication is one standout possibility, as it holds great potential to standardize and promote verification and authenticity, thereby creating a trusted space and inspiring consumers' confidence in what is seen and heard.
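As a minimal sketch of the kind of ledger-style verification blockchain authentication could provide, the toy example below hashes each published media item and chains it to the previous entry, so later tampering with any registered item is detectable. This is an in-memory illustration of the tamper-evidence idea only; a real blockchain adds distribution and consensus, and the media bytes here are placeholders.

    # Toy hash-chain "registry" sketch: each entry commits to a media hash and the prior entry.
    import hashlib
    import json

    chain = []  # list of entries: {"media_sha256": ..., "prev": ..., "entry_hash": ...}

    def register(media_bytes):
        media_hash = hashlib.sha256(media_bytes).hexdigest()
        prev = chain[-1]["entry_hash"] if chain else "0" * 64
        body = json.dumps({"media_sha256": media_hash, "prev": prev}, sort_keys=True)
        entry_hash = hashlib.sha256(body.encode()).hexdigest()
        chain.append({"media_sha256": media_hash, "prev": prev, "entry_hash": entry_hash})

    def verify(media_bytes, index):
        """Check the media against its registered hash and that the chain links are intact."""
        ok = hashlib.sha256(media_bytes).hexdigest() == chain[index]["media_sha256"]
        for i in range(1, len(chain)):
            ok = ok and chain[i]["prev"] == chain[i - 1]["entry_hash"]
        return ok

    register(b"original interview footage")
    print(verify(b"original interview footage", 0))   # True
    print(verify(b"edited interview footage", 0))     # False: content no longer matches registration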
How can we determine what is real and what has been manipulated? What
can we do to improve detection? Can we educate the public to detect a
deepfake? Should we?
As technology advances, it will become increasingly difficult to identify manipulated media. There are commercial tools that can help detect fake media, but these tools will need to be constantly retrained and updated to keep pace with new manipulation techniques. Tools may also vary in what they classify as a deepfake, which affects the types of media manipulation they flag. Depending on how sophisticated the deepfake is, the public may be able to detect it with the naked eye, or forensic experts can analyze the content more closely. It would be more efficient to have AI/ML tools do that work up front instead of humans doing it manually, but improving deepfake detection will be an all-hands-on-deck effort. As with any AI/ML model, improving detection will involve trial and error: continually running fake media through the tool to see what it misses, then adapting the model to lessen the possibility that something slips through the cracks. From a societal perspective, individuals should check the credibility of media before passing it on so that misinformation does not spread.
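The retrain-and-retest loop described above can be made concrete with a small sketch: a detector is evaluated on newly collected fakes, the items it misses are folded back into the training set, and the model is refit. The classifier, features, and data here are placeholders (random numbers standing in for whatever features a real pipeline would extract from media), not an actual detector.

    # Sketch: evaluate a detector on new fake media, then retrain on what it missed.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)

    # Initial training data: label 1 = fake, 0 = real (placeholder features).
    X_train = rng.normal(size=(500, 20))
    y_train = rng.integers(0, 2, size=500)
    detector = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # A new batch of known-fake media arrives; see what the current detector misses.
    X_new_fakes = rng.normal(size=(100, 20))
    missed = X_new_fakes[detector.predict(X_new_fakes) == 0]
    print(f"missed {len(missed)} of {len(X_new_fakes)} new fakes")

    # Fold the misses back into the training set and refit the model.
    X_train = np.vstack([X_train, missed])
    y_train = np.concatenate([y_train, np.ones(len(missed), dtype=int)])
    detector = LogisticRegression(max_iter=1000).fit(X_train, y_train)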
Experts in the field of AI/ML, including the Partnership on AI (PAI), have suggested that to
improve detection, a sort of paradigm shift should occur from focusing on detecting what is
fake to bolstering what is true about the media and adding context to the media. This will
empower individuals to explore the authenticity of media by using context clues or metadata
about where the media originated to help determine if it is real. PAI has conducted interviews suggesting that individuals do not want to be told what to believe or taught in a condescending way what is real or not; they want to figure it out for themselves. This could also improve trust in institutions if individuals can legitimize the media that is being shared.
An individual should look for the following signs when trying to determine if an image or video is fake (a simple sketch for quantifying the first sign follows this list):
• Blurring evident in the face but not elsewhere in the image or video (or vice-versa)
• A change of skin tone near the edge of the face
• Double chins, double eyebrows, or double edges to the face
• Whether the face gets blurry when it is partially obscured by a hand or another object
• Lower-quality sections throughout the same video
• Box-like shapes and cropped effects around the mouth, eyes, and neck
• Blinking (or lack thereof), movements that are not natural
• Changes in the background and/or lighting
• Contextual clues – Is the background scene consistent with the foreground and subject?
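As one way to quantify the first sign in the list above (blur in the face that does not match the rest of the frame), the sketch below compares image sharpness inside a detected face region with sharpness elsewhere, using OpenCV's bundled face detector. The input file and threshold are assumptions, and a strong mismatch is only a hint that warrants closer inspection, not proof of manipulation.

    # Sketch: compare sharpness (variance of the Laplacian) inside the face region with the
    # rest of the frame; a strong mismatch is one heuristic sign of a face-swapped image.
    # Requires: pip install opencv-python
    import cv2

    def sharpness(gray_region):
        return cv2.Laplacian(gray_region, cv2.CV_64F).var()

    img = cv2.imread("frame.png")                        # hypothetical input frame
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    detector = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    for (x, y, w, h) in faces:
        face = gray[y:y + h, x:x + w]
        rest = gray.copy()
        rest[y:y + h, x:x + w] = 0                       # crude mask of the face region
        ratio = sharpness(face) / (sharpness(rest) + 1e-8)
        print(f"face/background sharpness ratio: {ratio:.2f}")
        if ratio < 0.5 or ratio > 2.0:                   # assumed threshold for "mismatch"
            print("sharpness mismatch: inspect this frame more closely")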
An individual should look for the following signs when trying to determine if an audio is fake:
• Choppy sentences
• Varying tone inflection in speech
• Phrasing – would the speaker say it that way?
• Context of message – Is it relevant to a recent discussion or can they answer related questions?
• Contextual clues
An individual should look for the following signs when trying to determine if text is fake:
• Misspellings
• Lack of flow in sentences
• Is the sender from a known number or email address?
• Phrasing – would the legitimate sender speak that way?
• Context of message – Is it relevant to a recent discussion?
Reporting Deepfakes
There are ways victims can report deepfake attacks. Victims of deepfakes could:
• contact law enforcement officials who could possibly help victims by conducting forensic
investigations using police reports and evidence gathered from victims;
• contact the Federal Bureau of Investigation (FBI) and report incidents to local FBI
offices or the FBI’s 24/7 Cyber Watch at [email protected];58
• utilize the Securities and Exchange Commission’s services to investigate financial
crimes;
• report inappropriate content and abuse on social media platforms (i.e. Facebook,
Twitter, Instagram, etc.) using the platforms’ reporting procedures; and
• if a victim is under 18 years of age, incidents can be reported to the National Center for
Missing and Exploited Children via their cyber tip line at https://report.cybertip.org.
Resources Available for Victims
According to Bobby Chesney and Danielle Citron, victims may find it challenging taking the
civil liability route if convincing evidence is unavailable, and even if a malign actor is
identified, it may be impossible to use civil remedies if the malign actor is outside of the
United States or in a jurisdiction where local legal action is unsuccessful.59 However, there
are several resources available for victims of online abuse that could possibly support them
in other ways. Organizations who are dedicated to helping victims include but are not
limited to:
• Cyber Civil Rights Initiative, an organization that provides a 24-hour crisis helpline,
attorney referrals, and guides for removing images from social media platforms and
other websites;60
• EndTab, an organization that provides victims, as well as universities, law enforcement,
nonprofits, judicial systems, and healthcare networks, with resources for education and
reporting abuse;61
• The National Suicide Prevention Lifeline, a national network of local crisis centers that
provides free and confidential emotional support for people in distress;62
• Cybersmile, a non-profit anti-bullying organization that provides expert support for
victims of cyberbullying and online hate campaigns;63
• Identitytheft.gov, the federal government’s one-stop resource for identity theft victims;64
• Withoutmyconsent.org, a non-profit organization that provides guides for preserving
evidence that could be used in a civil suit;65
• Google’s Help Center, a resource available via Google that enables victims to remove
fake pornography from Google searches;66 and
• Imatag, a company that offers tracking and image/video monitoring services.67
FINAL THOUGHTS
The Liar’s Dividend and the Spectre of Weaponized Distrust
This paper has considered the possible consequences resulting from the malign use of a
deepfake, but another threat exists that may be counter-intuitive to many readers. In this
concluding section, we consider the possible consequences not from the use of a deepfake,
but from the mere possibility of its being used. The mere existence of deepfakes could undermine the credibility and authority of traditional social institutions, such as the press, government, and academia.
We explore how 'The Liar's Dividend,' first introduced by academics Danielle K. Citron and Robert Chesney in Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security, could manifest from an earnest attempt to counter the threats of deepfakes.
The public may well prove able to endure the threat of malign actors creating indiscernible deepfakes that attempt to cause widespread panic or harm. Media reporting on the phenomenon of deepfakes continues to highlight this issue, and in so doing may instill awareness and resilience in the public. If, however, the pendulum swings too far the other way, so that the public views things not only with scrutiny but with a default posture of doubt and disbelief, then this too could be exploited by malign actors hoping to muddle reality. This persistent threat could evoke a sense of skepticism that undermines trustworthy institutions and calls into question the legitimacy and authenticity of true content and media. Malign actors could intentionally sow distrust and doubt about legitimate media, suggesting that authentic content is actually an elaborate deepfake.
In a climate where the political spectrum is polarized and adversarial, with 24-hour media cycles, a politician facing a critical scandal could simply proclaim, "That event never happened; it is clearly a deepfake created by my political enemies," evading timely accountability and preserving an unbesmirched reputation.
Sophisticated actors could synthetically reproduce an already recorded event using AI/ML technologies and insert some sort of detectable signature, in an effort to trigger a response from authentication and detection tools and call into question whether the real content was legitimate in the first place. This could enable malign actors to point at their reproduced media and claim the event never occurred. Recording both the good and the bad of history is an important tool for evolving the decency of society. There are those today who claim the Holocaust never happened. Deepfakes could become a nefarious tool to undermine the credibility of history.
What next?
Deepfakes, synthetic media, and disinformation in general pose challenges to our society.
They can impact individuals and institutions from small businesses to nation states. All may
be impacted by them. As discussed above, there are some approaches which may help
mitigate these challenges, and there are undoubtedly other approaches we have yet to
identify. Regardless of the approach, however, success will require collaboration across all affected parties. It is time for a coordinated,
collaborative approach. Our team hopes to participate in that collaboration in the years
ahead.
ACKNOWLEDGEMENTS
The AEP “Increasing Threats from Deepfake Identities” Team gratefully acknowledges the
following individuals for providing their time and expertise in the course of our research:
Jon Bateman, Fellow, Carnegie Cyber Policy Initiative of the Technology and International
Affairs
Paul Benda, Senior Vice President Operational Risk and Cybersecurity, American Bankers
Association
Bobby Chesney, Associate Dean for Academic Affairs, University of Texas School of Law
Kathleen Darroch, Senior Vice President and Security Business Partner Manager, PNC Bank
Daniel Elliot, Information Security Architect, Network Security at Johnson Controls
Jordan Fuhr, Senior Vice President, Information Security Government and Public Policy
Candice G., Applied Research Mathematician, U.S. Department of Defense
Sam Gregory, Program Director, WITNESS
Karen Hao, Senior AI Editor, MIT Technology Review
Kathryn Harrison, Founder and CEO, DeepTrust Alliance & MAGPIE
Tia Hutchinson, Policy Analyst, U.S. Department of the Treasury Office of Cybersecurity and
Critical Infrastructure
Tim Hwang, Research Fellow, Georgetown’s Center for Security and Emerging Technology
Ashish Jaiman, Director of Product Management, Bing Multimedia, Microsoft
Dr. Neil Johnson, Cyber & Forensic Scientist, Pacific Northwest National Laboratory
Claire Leibowicz, Head of AI and Media Integrity, Partnership on AI
Dr. Baoxin Li, Professor, Chair of Computer Science and Engineering Program, Arizona State
University
Dr. Siwei Lyu, SUNY Empire Innovation Professor, Department of Computer Science and
Engineering at University at Buffalo, SUNY
Dr. Sebastien Marcel, Senior Researcher and Head of Biometrics Security and Privacy, Idiap
Research Institute
Noelle Martin, Activist and Survivor of Deepfake Attack
Mike Price, Chief Technology Officer, ZeroFox
Kelley Sayler, Analyst, Congressional Research Service
Thao T., Visual Information Specialist, Federal Bureau of Investigation
Dr. Matt Turek, Program Manager, Defense Advanced Research Projects Agency
Dr. Matthew Wright, Professor of Computing Security and Director of Research for the Global
Cybersecurity Institute at the Rochester Institute of Technology
END NOTES
1
(U) | Samantha Cole | Vice | https://www.vice.com/en/article/gydydm/gal-gadot-fake-ai-porn/| 11 Dec. 2017 |
AI-Assisted Fake Porn Is Here and We’re All Fucked.
2
(U) | Samantha Cole | Vice | https://www.vice.com/en/article/gydydm/gal-gadot-fake-ai-porn/| 11 Dec. 2017 |
AI-Assisted Fake Porn Is Here and We’re All Fucked.
3
(U) | Juergen Schmidhuber |Neural Networks | DOI 10.1016/j.neunet.2014.09.003 |Jan. 2015 | Deep Learning in
Neural Networks: An Overview | p 85–117.
4
(U) | Dave Gershgorn |OneZero | https://onezero.medium.com/deepfake-music-is-so-good-it-might-be-illegal-
c11f9618d1f9 | 01 May 2020 |Deepfake Music Is So Good It Might Be Illegal.
5
(U) | Nick Statt | The Verge| https://www.theverge.com/2020/4/28/21240488/jay-z-deepfakes-roc-nation-
youtube-removed-ai-copyright-impersonation/ | 28 Apr. 2020 | JayZ tries to use copyright strikes to remove
deepfaked audio of himself from YouTube.
6
(U) | Renee Diresta | Wired | https://www.wired.com/story/ai-generated-text-is-the-scariest-deepfake-of-all/ |
31 Jul. 2020 | AI-Generated Text Is the Scariest Deepfake of All.
7
(U) | Steven Rosenbaum | MediaPost| https://www.mediapost.com/publications/article/341074/what-is-
synthetic-media.html | 23 Sep. 2019 | What Is Synthetic Media?.
8
(U) | Sudharshan Chandra Babu |Paperspace Blog| https://blog.paperspace.com/2020-guide-to-synthetic-media
|2019 | A 2020 Guide to Synthetic Media.
9
(U) | Candice Gerstner, Emily Philips, Larry Lin | NSA| The Next Wave | ISSN 2640-1797| 2021 | Deepfakes: Is a
Picture Worth a Thousand Lies? | p 41- 52.
10
(U) | Candice Gerstner, Emily Philips, Larry Lin | NSA| The Next Wave | ISSN 2640-1797| 2021 | Deepfakes: Is a
Picture Worth a Thousand Lies? | p 41- 52.
11
(U) | Britt Paris, Joan Donovan | Data & Society| https://datasociety.net/library/deepfakes-and-cheap-fakes/ |
18 Sep. 2019 | DEEPFAKES AND CHEAP FAKES.
12
(U) | Alan Zucconi | https://www.alanzucconi.com/2018/03/14/understanding-the-technology-behind-
deepfakes/ | 14 Mar. 2018 | Understanding the Technology Behind DeepFakes.
13
(U) | Claudia Willen |Insider| https://www.insider.com/kristen-bell-face-pornographic-deepfake-video-
response-2020-6 | 11 Jun. 2020 | Kristen Bell says she was ‘shocked’ by a pornographic deepfake video.
14
(U) | Britt Paris, Joan Donovan | Data & Society| https://datasociety.net/library/deepfakes-and-cheap-fakes/ |
18 Sep. 2019 | DEEPFAKES AND CHEAP FAKES.
15
(U) | Raina Davis | HARVARD Kennedy School | Belfer Center|
https://www.belfercenter.org/publication/technology-factsheet-deepfakes | Spring 2020 | Technology Factsheet:
Deepfakes.
16
(U) | Britt Paris, Joan Donovan | Data & Society| https://datasociety.net/library/deepfakes-and-cheap-fakes/ |
18 Sep. 2019 | DEEPFAKES AND CHEAP FAKES.
17
(U) | Britt Paris, Joan Donovan | Data & Society| https://datasociety.net/library/deepfakes-and-cheap-fakes/ |
18 Sep. 2019 | DEEPFAKES AND CHEAP FAKES.
18
(U) | Chintan Trivedi |Medium| DG AI Research Lab| https://medium.com/deepgamingai/deepfakes-ai-
improved-lip-sync-animations-with-wav2lip-b5d4f590dcf | 31 Aug. 2020 | DeepFakes AI- Improved Lip Sync
Animations With Wav2Lip.
19
(U) | K R Prajwal, Rudrabha Mukhopadhyay, Vinay P. Namboodiri, C V Jawahar| ACM International Conference
on Multimedia| https://doi.org/10.1145/3394171.3413532 | Oct. 2020 | A Lip Sync Expert Is All You Need for
Speech to Lip Generation In The Wild | p 484-492.
20
(U) | K R Prajwal, Rudrabha Mukhopadhyay, Vinay P. Namboodiri, C V Jawahar| ACM International Conference
on Multimedia| https://doi.org/10.1145/3394171.3413532 | Oct. 2020 | A Lip Sync Expert Is All You Need for
Speech to Lip Generation In The Wild | p 484-492.
21
(U) | Shruti Agarwal, Tarek El-Gaaly, Hany Farid, Ser-Nam Lim |IEEE International Workshop on Information
Forensics and Security| https://arxiv.org/abs/2004.14491 | 29 Apr. 2020| Detecting Deep-Fake Videos from
Appearance and Behavior.
22
(U) | Hannah Smith, Katherine Mansted |Australian Strategic Policy Institute|
https://www.aspi.org.au/report/weaponised-deep-fakes | 29 Apr. 2020 | Weaponised deep fakes.
23
(U) | Raina Davis | HARVARD Kennedy School | Belfer Center|
https://www.belfercenter.org/publication/technology-factsheet-deepfakes | Spring 2020 | Technology Factsheet:
Deepfakes.
24
(U) | Graphika Team |Graphika| https://graphika.com/reports/fake-cluster-boosts-huawei | 29 Jan. 2021 | Fake
Cluster Boosts Huawei.
25
(U)| FireEye | 20 Jul. 2021 | Dual Information Operations Campaigns Promote Lebanese Political Parties Kataeb
and Free Patriotic Movement Amid Economic, Political Crisis | A reliable US cyber security company which also
releases threat intelligence reports.
26
(U) | Graphika Team |Graphika| https://graphika.com/reports/step-into-my-parler/ | 01 Oct. 2020 | Step into
My Parler.
27
(U) | Ben Nimmo, C. Shawn Eib, Léa Ronzaud |Graphika| https://graphika.com/reports/operation-naval-gazing/
| 22 Sep. 2020 |Operation Naval Gazing.
28
(U) | Eto Buziashvili |Medium| DFR Lab| https://medium.com/dfrlab/inauthentic-instagram-accounts-with-
synthetic-faces-target-navalny-protests-a6a516395e25 | 28 Jan. 2021 | Inauthentic Instagram accounts with
synthetic faces target Navalny protests.
29
(U) | Henry Ajder, Giorgio Patrini, Francesco Cavalli |Sensity| https://www.medianama.com/wp-
content/uploads/Sensity-AutomatingImageAbuse.pdf | Oct. 2020 | Automating Image Abuse: Deepfake bots on
Telegram.
30
(U) | Siladitya Ray | Forbes| https://www.forbes.com/sites/siladityaray/2020/10/20/bot-generated-fake-nudes-
of-over-100000-women-without-their-knowledge-says-report/?sh=428694037f6b | 20 Oct. 2020 | Bot Generated
Fake Nudes of Over 100,000 Women Without Their Knowledge, Says Report.
31
(U) | Karen Hao | MIT Technology Review|
https://www.technologyreview.com/2021/02/12/1018222/deepfake-revenge-porn-coming-ban/ | 12 Feb. 2021 |
Deepfake porn is ruining women’s lives. Now the law may finally ban it.
32
(U) | Tom Simonite |Wired| https://www.wired.com/story/ai-fueled-dungeon-game-got-much-darker/ | 05 May
2021| It Began as an AI-Fueled Dungeon Game. It Got Much Darker.
33
(U) | Evan Jacoby |Vice| https://www.vice.com/en/article/vb55p8/i-paid-dollar30-to-create-a-deepfake-porn-
of-myself | 09 Dec. 2019 | I Paid $30 to Create a Deepfake Porn of Myself.
34
(U) | Danielle Keats Citron | Prepared Written Testimony and Statement for the Record before the House
Permanent Select Committee on Intelligence | Hearing on “The National Security Challenge of Artificial
Intelligence, Manipulated Media, and Deep Fakes” | 13 Jun. 2019.
35
(U) | Jana Benscoter |PennLive| https://www.pennlive.com/news/2021/03/pa-woman-created-deepfake-
videos-to-force-rivals-off-daughters-cheerleading-squad-police.html | 12 Mar. 2021 | Pa. woman created
‘deepfake’ videos to force rivals off daughter’s cheerleading squad: police.
36
(U) | Interview with Karen Hao | 14 May 2021.
37
(U) | Interview with Karen Hao | 14 May 2021.
38
(U) | Anne Pechenik Gieseke | Vanderbilt Law Review| https://scholarship.law.vanderbilt.edu/vlr/vol73/iss5/4/
| 05 Oct. 2020 | “The New Weapon of Choice”: Law’s Current Inability to Properly Address Deepfake Pornography
| p 1479-1515.
39 (U) | Britt Paris, Joan Donovan | Data & Society| https://datasociety.net/library/deepfakes-and-cheap-fakes/ |