Analysis Paper Sample A
In a world where reality can be twisted at a click of a button, the rise of deepfakes has
ushered in a new era of digital manipulation. A deepfake is a product of artificial intelligence (AI)
that superimposes content from one video onto another, allowing users to create a
fake video of a person saying or doing almost anything (Ray). From the playful antics of
Snapchat's face swap filters to the jaw-dropping spectacle of Rodrigo Duterte's authoritative
voice effortlessly emanating from the lips of Hollywood heartthrob Brad Pitt, and even the most
unlikely of collaborations coming to life with Kanye West belting out a Taylor Swift ballad on
TikTok, the possibilities are both captivating and unsettling. As deepfake technology continues to
advance, however, it poses a growing threat to elections and democratic processes. With its ability to generate media that is visually indistinguishable from
genuine sources, deepfake technology has the potential to be exploited as a tool for interfering
with electoral processes, casting a shadow of doubt over the very foundations of our democratic
systems.
The political realm first witnessed significant attention surrounding deepfake technology in
2018, when a comedian impersonating Barack Obama delivered a PSA video highlighting the
dangers of the technology. According to Giorgio Patrini, now that political leaders are becoming targets of deepfakes, this technology is
becoming a serious political concern (Agence France-Presse). Furthermore, Patrini points out the emergence of new cases involving the use of synthetic voice
audio and images depicting fictitious individuals, which are employed to carry out social
engineering schemes. These developments raise concerns about the potential exploitation of deepfakes to manipulate voters and undermine electoral processes.
Deepfakes hold the potential to manipulate political content, paving the way for the
dissemination of misinformation and disinformation that can taint the essence of democracy
itself (Klein). In the United States, deepfake videos of President Joe Biden's speeches have
become popular, featuring the president discussing topics ranging from hip hop to drugs and
video games. However, some individuals are also utilizing deepfake technology to spread
hateful content; one such deepfake attacked transgender women (Klein). The video, shared on Instagram on February 25 of this year, featured
a fabricated Biden stating, "You will never be a real woman," followed by a torrent of criticism
targeting transgender women, including a mention of suicide. The voice in the video was
remarkably similar to Biden's, creating an illusion of authenticity. A day after the release of this
deepfake video, PolitiFact, a fact-checking nonprofit operated by the Poynter Institute, debunked
the video, revealing that no evidence could be found of Biden making such remarks in
White House transcripts. Instead, evidence was found of the president's efforts to support
transgender individuals, contradicting the fabricated words in the deepfake video (Cercone and
Czopek).
Deepfake videos that aim to manipulate political content and propagate misinformation
and disinformation have also emerged in the Philippines. During last year's national election
campaign, a deepfake video surfaced, featuring the late dictator Ferdinand Marcos Sr., allegedly
questioning the moral values of individuals who would elect a tax evader as president (Vera
Files). Vera Files, a fact-checking news organization in the country, debunked the deepfake video
by exposing that Marcos Sr.'s mouth had been edited and "replaced with someone else's and
made to move to deliver the remark” (Samson). The manipulated video animated an Associated
Press file photo of Marcos Sr. taken in 1986, creating the illusion that he was uttering the words:
“What is unity if your people have no moral values? Halimbawa, ‘yung kapitbahay mo ay tax
evader, tumakbong presidente, binoto mo. Anong klaseng tao ka?” (“For example, your neighbor is a tax evader, he ran for president, and you voted for him. What kind of person are you?”) One netizen commented,
“THE STRONGMAN HAS RISEN FROM HIS GRAVE!!! LISTEN!!! …” Reactions like this show how readily viewers accepted the fabricated footage as genuine. Beyond spreading falsehoods, deepfakes can also be weaponized to attack the
stance and credibility of politicians or political parties, eroding public trust and sowing seeds of
doubt (Ray). In 2019, a deepfake video featuring US House Speaker Nancy Pelosi, in which she
appears impaired, circulated widely on social media, amassing over 2.5 million views on
Facebook alone (CBS News). Another popular Facebook post, with 91,000 shares, carried a
caption exclaiming, "This is unbelievable, she is blowed out of her mind, I bet this gets taken
down!" However, in 2020, the Reuters Fact Check team revealed that the video had been
significantly slowed down to create the false impression of Pelosi appearing drunk and
incoherent (Reuters Staff). Although the "Drunk" Nancy Pelosi video was later exposed as a
fake, its intent was clearly to cast doubt on Pelosi's abilities and undermine her credibility as a
political leader.
The danger extends even further, as deepfakes can be meticulously tailored to target
specific voter groups, deftly swaying their opinions and shaping their voting decisions (Klepper
and Swenson). On February 7, 2020, just a day before the Legislative Assembly elections in Delhi, two
deepfake videos featuring Manoj Tiwari, the President of the Bharatiya Janata Party (BJP), were
widely circulated on WhatsApp. These videos, which appeared to show Tiwari criticizing the
incumbent Delhi government led by Arvind Kejriwal, gained significant attention. One video
featured Tiwari speaking in English, while the other showcased him speaking in the Hindi dialect
of Haryanvi. In both versions, Tiwari urged viewers to support the BJP and vote for a change in
the form of a Modi-led government (Christopher). These deepfake videos featuring Tiwari had a
clear objective: to undermine the credibility of the incumbent government and sway voters
towards supporting the BJP. By fabricating these videos and presenting them as genuine,
deepfakes can effectively exploit the vulnerabilities and biases of specific voter groups, ultimately shaping their opinions and voting decisions.
With each new instance of a political deepfake, the trust in politicians and politics at large
further dwindles, leaving a lingering sense of skepticism and suspicion (Ray). However, despite
the widely recognized negative impacts of deepfakes on the electoral process, there are
viewpoints that seek to downplay their significance. Some argue that deepfakes have not
prominently emerged in politics and therefore pose little real danger. Others
emphasize that deepfakes can be effectively regulated and mitigated through advancements in
technology alone, placing faith in detection tools that can combat their harmful effects. On the
other hand, there are those who point out that the public has already become increasingly
skeptical and cautious regarding the authenticity of online content. A ConsumerLab report
reveals a growing trend of skepticism, with fewer than one in five individuals trusting the
information they encounter on social media platforms (Shyrokykh). This implies that there is
already a level of wariness and critical thinking among the public, potentially acting as a natural
defense against the influence of deepfakes. These opposing viewpoints, however, do not
overshadow the significance of recognizing the tangible risks posed by political deepfakes.
While some argue that deepfakes have not prominently emerged in politics, the
technology has already made conspicuous appearances in the political arena,
exemplified by the 2018 video featuring Barack Obama speaking words he never uttered, serving
as a stark reminder of their ability to distort reality and sow confusion (Mak and Temple-Raston).
Furthermore, deepfakes played a role in the 2020 US Presidential election, where they were used
to target then-candidate Joe Biden and spread false narratives (Ray). Amidst these instances, experts
and researchers have warned about the looming threat of deepfakes in upcoming elections. Data
scientist Dominic Ligot expressed concerns about the potential impact of deepfakes on elections
in an interview with #FactsFirst, stating that the Philippines is just one election away from facing
the challenges posed by these manipulated videos (Esguerra). This sentiment reflects the
growing awareness of the potential harm that deepfakes can inflict on political systems and
societies at large.
Furthermore, despite claims that technological solutions alone are adequate to effectively
regulate and mitigate the risks linked to political deepfakes, the intricate and rapidly
evolving nature of the technology suggests otherwise. Hany Farid, a digital
forensics expert at the University of California, describes the rapid progression of deepfakes,
noting how quickly they have evolved in a matter of months (Galston). This rapid advancement
creates a significant disparity between the generation and the detection of
deepfakes. Even if reliable detection methods or technologies exist, their
deployment will lag considerably behind the creation of these deceptive videos
(Galston). Consequently, false representations can dominate the media landscape for prolonged
periods, ranging from days to weeks. David Doermann, the director of the Artificial Intelligence
Institute at the University at Buffalo, emphasizes the speed at which misinformation can spread,
using the analogy that "a lie can go halfway around the world before the truth can get its shoes
on" (Galston). Additionally, Farid highlights the resource imbalance, with the number of
individuals working on video synthesis far outnumbering those developing detection techniques
by a ratio of 100 to 1. This underscores the lack of attention, funding, and institutional support
dedicated to identifying fake media compared to the creation of deepfakes (Galston).
Ultimately, despite arguments suggesting that the public has grown more skeptical and
cautious about the authenticity of online content, disinformation experts have warned that deepfake
technology exacerbates people’s difficulty in distinguishing between genuine and forged online
content (Mozur and Satariano). A study conducted by Köbis et al. (2021) has revealed two
important insights regarding the reliability of people in detecting deepfakes. First, it shows that
the difficulty in detecting deepfakes is not due to a lack of motivation but rather a lack of ability.
Despite participants' genuine effort to identify deepfakes, they were unable to do so consistently.
This highlights the complexity and sophistication of deepfake technology, making it increasingly
challenging for individuals to distinguish between real and manipulated videos. Second, the
study reveals a systematic bias among participants toward assuming that videos are authentic,
coupled with an overconfidence in their own ability to detect deepfakes. This suggests that
people tend to rely on a "seeing-is-believing" heuristic, assuming that what they see in a video is
true. This cognitive bias puts individuals at risk of being influenced by deepfakes, as they may accept fabricated content at face value while overestimating their ability to detect it.
In the face of advancing deepfake technology, it is evident that this technology can be
exploited to interfere with electoral processes and undermine democratic systems. Recognizing
the potential harm and the ever-evolving nature of deepfake technology is crucial in safeguarding
the integrity of electoral processes. To effectively address the risks associated with political
deepfakes, a multifaceted approach is needed: continued investment in detection technology, media
literacy campaigns to educate the public about the dangers of misinformation and disinformation,
and regulatory measures to establish guidelines and accountability. Only by addressing the
challenges posed by political deepfakes head-on can we protect the foundations of our
democratic societies and secure a future where truth triumphs over deception.
Works Cited
Agence France-Presse. “Porn, politics are key targets in 'deepfakes' – study.” Rappler, 8
October 2019,
https://www.rappler.com/technology/242016-deepfake-pornography-politics-deeptrace-study-october-2019/.
CBS News. “Doctored Nancy Pelosi video highlights threat of "deepfake" tech.” CBS News, 25 May 2019,
https://www.cbsnews.com/news/doctored-nancy-pelosi-video-highlights-threat-of-deepfake-tech-2019-05-25/.
Cercone, Jeff, and Madison Czopek. “Joe Biden has offered support for transgender Americans.” PolitiFact, 6 February 2023,
https://www.politifact.com/factchecks/2023/feb/06/instagram-posts/joe-biden-has-offered-support-transgender-american/.
Christopher, Nilesh. “We’ve Just Seen the First Use of Deepfakes in an Indian Election Campaign.” Vice, February 2020,
https://www.vice.com/en/article/jgedjb/the-first-use-of-deepfakes-in-indian-election-by-bjp.
Galston, William A. “Is seeing still believing? The deepfake challenge to truth in politics.” Brookings,
https://www.brookings.edu/research/is-seeing-still-believing-the-deepfake-challenge-to-truth-in-politics/.
Klein, Charlotte. “‘This Will Be Dangerous in Elections’: Political Media’s Next Big Challenge.” Vanity Fair, March 2023,
https://www.vanityfair.com/news/2023/03/ai-2024-deepfake.
Klepper, David, and Ali Swenson. “AI presents political peril for 2024 with threat to mislead voters.” AP News, 2023,
https://apnews.com/article/artificial-intelligence-misinformation-deepfakes-2024-election-trump-59fb51002661ac5290089060b3ae39a0.
Köbis, Nils C., et al. “Fooled twice: People cannot detect deepfakes but think they can.” iScience, vol. 24, no. 11, 2021.
Mak, Tim, and Dina Temple-Raston. “Where Are The Deepfakes In This Presidential Election?” NPR, 1 October 2020,
https://www.npr.org/2020/10/01/918223033/where-are-the-deepfakes-in-this-presidential-election.
Mozur, Paul, and Adam Satariano. “The People Onscreen Are Fake. The Disinformation Is Real.” The New York Times, 7 February 2023,
https://www.nytimes.com/2023/02/07/technology/artificial-intelligence-training-deepfake.html.
Ray, Andrew. “Disinformation, Deepfakes and Democracies: The Need for Legislative Reform.” UNSW Law Journal,
https://www.unswlawjournal.unsw.edu.au/article/disinformation-deepfakes-and-democracies-the-need-for-legislative-reform.
Reuters Staff. “Fact check: ‘Drunk’ Nancy Pelosi video is manipulated.” Reuters, 3 August 2020,
https://www.reuters.com/article/uk-factcheck-nancypelosi-manipulated-idUSKCN24Z2BI.
Samson, Celine. “VERA FILES FACT CHECK YEARENDER: As video is king, misinformation takes its throne in TikTok era.” VERA Files, 20 December 2022,
https://verafiles.org/articles/vera-files-fact-check-yearender-as-video-is-king-misinformation-takes-its-throne-in-tiktok-era.
Shyrokykh, Karina. “Fake news on social media: Whose responsibility is it?” Ericsson, 5
November 2018,
https://www.ericsson.com/en/blog/2018/11/fake-news-on-social-media-whose-responsibility-is-it.
Vera Files. “VERA FILES FACT CHECK: Video of Marcos Sr. taking a jab at voters altered.” VERA Files,
https://verafiles.org/articles/vera-files-fact-check-video-marcos-sr-taking-jab-voters-supp.