
FactCheckBureau: Build Your Own Fact-Check Analysis Pipeline
Oana Balalau, Pablo Bertaud-Velten, Younes El-Fraihi, Garima Gaur, Oana
Goga, Samuel Guimaraes, Ioana Manolescu, Brahim Saadi

To cite this version:


Oana Balalau, Pablo Bertaud-Velten, Younes El-Fraihi, Garima Gaur, Oana Goga, et al. FactCheckBureau: Build Your Own Fact-Check Analysis Pipeline. CIKM 2024 - 33rd ACM International Conference on Information and Knowledge Management, ACM, Oct 2024, Boise, Idaho, United States. DOI: 10.1145/3627673.3679220. hal-04684068.

HAL Id: hal-04684068


https://inria.hal.science/hal-04684068v1
Submitted on 2 Sep 2024


Distributed under a Creative Commons Attribution - NonCommercial 4.0 International License


FactCheckBureau: Build Your Own Fact-Check Analysis Pipeline

Oana Balalau (INRIA, Institut Polytechnique de Paris, Palaiseau, France)
Pablo Bertaud-Velten (INRIA, Institut Polytechnique de Paris, Palaiseau, France)
Younes El-Fraihi (CNRS, Institut Polytechnique de Paris, Palaiseau, France)
Garima Gaur (INRIA, Institut Polytechnique de Paris, Palaiseau, France)
Oana Goga (INRIA, CNRS, Institut Polytechnique de Paris, Palaiseau, France)
Samuel Guimaraes (Federal University of Minas Gerais, Belo Horizonte, Brazil; CNRS, Institut Polytechnique de Paris, Palaiseau, France)
Ioana Manolescu (INRIA, Institut Polytechnique de Paris, Palaiseau, France)
Brahim Saadi (CNRS, Institut Polytechnique de Paris, Palaiseau, France)
Abstract
Fact-checkers are overwhelmed by the volume of claims they need to pay attention to in order to fight misinformation. Even once debunked, a claim may still be spread by people unaware that it is false, or it may be recycled as a source of inspiration by malicious users. Hence the importance of fact-check (FC) retrieval as a research problem: given a claim and a database of previous checks, find the checks relevant to the claim. Existing solutions addressing this problem rely on the strategy of retrieving and re-ranking relevant documents. We have built FactCheckBureau, an end-to-end solution that enables researchers to easily and interactively design and evaluate FC retrieval pipelines. We also present a corpus we have built (https://huggingface.co/datasets/NaughtyConstrictor/fact-check-bureau), which can be used in further research to test fact-check retrieval tools. The source code of our tool is available at https://gitlab.inria.fr/cedar/factcheckbureau.

CCS Concepts
• Information systems → Learning to rank.

Keywords
Fact Check Retrieval, Claim Review Datasets, Retrieve-and-rerank

ACM Reference Format:
Oana Balalau, Pablo Bertaud-Velten, Younes El-Fraihi, Garima Gaur, Oana Goga, Samuel Guimaraes, Ioana Manolescu, and Brahim Saadi. 2024. FactCheckBureau: Build Your Own Fact-Check Analysis Pipeline. In Proceedings of the 33rd ACM International Conference on Information and Knowledge Management (CIKM '24), October 21–25, 2024, Boise, ID, USA. ACM, New York, NY, USA, 5 pages. https://doi.org/10.1145/3627673.3679220

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License. CIKM '24, October 21–25, 2024, Boise, ID, USA. © 2024 Copyright held by the owner/author(s). ACM ISBN 979-8-4007-0436-9/24/10. https://doi.org/10.1145/3627673.3679220

1 Introduction
Social media and the Web are the central communication channels through which public figures disseminate information to many people. This way of communicating influential and critical information without veracity evaluation has a far-reaching and adverse impact (see https://www.weforum.org/press/2024/01/global-risks-report-2024-press-release/ and https://commission.europa.eu/news/boosting-awareness-raising-risks-disinformation-and-information-manipulation-2024-05-08_en). To sanitize the information space, fact-checkers assess a given claim for its correctness by evaluating evidence available in the public domain. The result of this investigation is a fact-checking (FC, in short) article that includes information about the claim, the evidence, and the truthfulness of the claim. A recent, comprehensive report on the challenges raised by online fact-checking has been published by FullFact, a notable FC agency [5, 10]. The report highlights areas where automation can play a revolutionary role: finding check-worthy claims, detecting previously checked claims, retrieving evidence, and automated verification.

This work focuses on the task of automatically detecting previously checked claims. Often, false claims resurface on the Web in the same form or under linguistic disguises such as paraphrasing or translation into other languages. Not only does false news resurface, it also spreads six times faster than correct news [21]. Therefore, it becomes crucial to quickly identify whether a claim has already been fact-checked, leading to the FC retrieval (FCR, in short) problem: given a corpus of FCs C and a query document q (a social media post, an image specifying a claim, or a textual description of a claim), retrieve a list of FCs from the corpus, in decreasing order of relevance (likelihood of having checked this claim).

Given the nature of claims and FCs, Machine Learning (ML) methods working on text and/or images have been studied for the FCR problem. They are typically harnessed in a retrieve-and-rerank paradigm, where a model called the retriever is used to filter candidate FCs that are further investigated by a second model, the re-ranker, which computes a more precise relevance score [2, 4, 15, 17, 20].

We have developed the interactive system FactCheckBureau, which supports researchers in efficiently designing, deploying, and analyzing FCR pipelines. A user can specify, evaluate, and compare pipelines interactively. Further, we equipped FactCheckBureau with a default FCR pipeline and a dataset that we built to help non-technical users, mainly journalists. Using the querying interface of FactCheckBureau, non-technical users can fetch relevant FCs from our corpus for an input claim. Our system can be used to comprehensively test information retrieval techniques proposed in the literature, on a variety of datasets. Our dataset also contributes to the FCR task, in particular in its coverage of French.
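To make the retrieve-and-rerank paradigm concrete, here is a minimal, illustrative sketch in Python; the type aliases, function name, and default cut-offs are hypothetical and are not part of FactCheckBureau.

```python
from typing import Callable, List, Sequence, Tuple

# Hypothetical type aliases: a retriever scores every document cheaply,
# a re-ranker scores only the retained candidates more precisely.
Retriever = Callable[[str, Sequence[str]], List[float]]   # (query, corpus) -> one score per document
Reranker = Callable[[str, Sequence[str]], List[float]]    # (query, candidates) -> one score per candidate


def retrieve_and_rerank(query: str, corpus: Sequence[str],
                        retriever: Retriever, reranker: Reranker,
                        n_candidates: int = 50, top_k: int = 5) -> List[Tuple[int, float]]:
    """Return (document index, score) pairs for the top_k fact-checks, in decreasing relevance."""
    # Stage 1: cheap retrieval keeps only n_candidates documents.
    coarse = retriever(query, corpus)
    shortlist = sorted(range(len(corpus)), key=lambda i: coarse[i], reverse=True)[:n_candidates]
    # Stage 2: the more expensive re-ranker rescores the shortlist.
    fine = reranker(query, [corpus[i] for i in shortlist])
    reranked = sorted(zip(shortlist, fine), key=lambda p: p[1], reverse=True)
    return reranked[:top_k]
```

The two-stage split is what makes the paradigm tractable: the retriever bounds the work handed to the re-ranker, whose cost typically grows with the candidate list size.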
2 Related Work
The need to reuse FCs when analyzing new statements has been recognized early on [3]. Accordingly, techniques emerged for retrieving the fact-checked claims most relevant to a given query (tweet or claim) [2, 4, 15, 17, 20]. All of these adopt the standard (retriever, re-ranker) architecture proposed for information retrieval tasks. A retriever is used to efficiently retrieve, for a given input, from a potentially very large set of FCs, a subset considered closest to it; then, a second, potentially more expensive method is used to re-rank the retrieved results. For both stages, there are term-based (probabilistic) models and neural methods. Among the probabilistic retrievers, BM25 [16] is the most frequent and has performed well in many studies, e.g., [4, 17]. Neural retrievers and re-rankers are often based on Transformer networks, which capture matching signals between words in the FC and the query. In Table 1, we present the datasets and ranking methods used in prior works. We note that there are slight discrepancies in the best-performing models. This could be due to the different evaluation settings, hence the importance of a unified evaluation and the relevance of our system.

Table 1: Partial snapshot of the front-runners in the area, adapted from [13]

Dataset | Type | Languages | Source | # pairs | Evaluation | Best Model
[17] | Claim-Tweet Pairs | en | X, Snopes, PolitiFact, ClaimsKG | 1,768 | MRR, MAP@k, Precision@k | BM25 (Top 50) + Sentence BERT
[7] | Claim Pairs | en, hi, bn, ml, ta | WhatsApp | 2,343; 398 (en) | Accuracy, Precision, Recall, F1 score | XLM-R
[8] | FC-Tweet Pairs | en, hi, es, pt | X, GFC Tools | 6,533; 4,850 (en) | Accuracy, F1 score, MRR, MAP@k | Full Length BM25
[12] | Claim-Thread Pairs | 41 (en, fr, ...) | X, GFC Tools | 26,048; 42.88% (en), 3.46% (fr) | F1 score | Adapted GraphSAGE model
[11] | Claim-Tweet Pairs | ar, en | X, Snopes, AraFacts, ClaimsKG | 2,518; 1,610 (en) | MRR, MAP@k, Hit@k | Sentence T5 + GPT Neo
[15] | FC-Post Pairs | 27 (en, fr, ...) | X, Meta, GFC Tools | 31,305; 7,307 (en), 2,146 (fr) | Hit@k | GTR-T5 (en), MPNet
[18] | FC-Tweet Pairs | en, hi, es, pt | X, Boomlive, AFP, EFE, PolitiFact | 1,600; 400 (en) | MRR, MAP@k | BM25 (Top 200) + LaBSE (BERT)
[4] | Claim-Tweet Pairs | en | [17] (Snopes) | 1,000 | MRR, MAP@k, Hit@k | BM25 (Top 100) + BERT

Claim normalization is used to improve the linguistic quality of claims and to help retrieve FCs [19]. Other related lines of work focus on identifying check-worthy claims [1]. ClaimsKG [6] and CimpleKG [14] are corpora of fact-checks together with associated claims; the recent [15] is multilingual.

In this context, the interest of FactCheckBureau is to enable researchers and technicians working in fact-checking organizations to build and personalize their pipelines, and to experiment with and analyze different modules.

3 FactCheckBureau at a glance
We describe the data sources we work on (Sec. 3.1) and the user interaction modes with our tool (Sec. 3.2).

3.1 Inputs: fact-checks and claims
Conceptually, there are two main entities: fact-checks and claims. In principle, a claim can be a social media post, an image specifying a claim, or a simple text phrase. In the corpus we built for our demo (detailed in Section 4), claims are tweets. Therefore, a claim is characterized by its accountHandle (the Twitter account having published the tweet), text, date, language, hashTags, and URLtoEmbeddedMediaContent. The attributes of a fact-check are derived from the ClaimReview schema that many FC agencies adopt in their FC articles (https://schema.org/ClaimReview); ClaimReview was promoted by Google, which used to show related FCs next to search results (this feature has since been discontinued, among other reasons because some of the shown FCs were not semantically close enough to the respective search results [9], which highlights the importance of the FC retrieval problem). Specifically, the attributes of an FC are: title, claimant, publisher, dateOfPublication, URLtoArticle, claimText, language, and rating. The relationship between these two key entities is captured by a many-to-many relation claimAboutFC(claim-id, FC-id): a claim and an FC are paired in this way if (according to a specific automated or manual decision method) the FC is about the claim. We also say the claim and FC are aligned.

3.2 Users' interactions with FactCheckBureau
The core task to be solved in FC retrieval pipelines is: given a claim (also called query) and an FC corpus, find the FCs most relevant to the claim. Specifically, each FC retrieval pipeline contains a candidate FC retrieval module and a candidate ranking module. If the claim is text, it can be used as such, but other formats may require some pre-processing, based on the query type, before being entered into the text-based retrieval pipeline. For instance, if the claim is an image, the text needs to be extracted by OCR, or the image can be captioned; if the claim is a tweet, pre-processing may remove or split hashtags into individual words, normalize numerical data, transcribe emojis to text, etc. To be comprehensive, FactCheckBureau models FC retrieval pipelines as consisting of three stages: pre-processing, retrieval, and ranking.

FactCheckBureau has two main operation modes: deployment and development, shown in Figure 1, where dark navy modules are used in deployment, whereas in development all the modules (both navy and light blue) may be involved. In development mode, it supports designing, inspecting, and comparing FC retrieval pipelines; in deployment mode, a retrieval pipeline can be deployed and used to query the FC corpus. As explained below, our demonstration showcases four use cases: design, inspect, compare, and deploy.

Inspect. A user builds a retrieval pipeline by choosing or loading: pre-processing modules; a retrieval module; and a ranking module. The user also supplies aligned pairs and chooses the metric(s) used to evaluate the quality of the pipeline. Since relevant FC retrieval is a ranked-list search problem, we support the familiar Mean Average Precision (MAP@k), Mean Reciprocal Rank (MRR@k), Normalized Discounted Cumulative Gain (NDCG@k), and Hits@k. The user triggers the evaluation of the pipeline (top snippet in Figure 2); for each query, this leads to a list of FCs, ordered by their relevance as computed by the pipeline.
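As a point of reference, here is a minimal sketch of how these ranked-list metrics can be computed from a pipeline's ranked output and the gold aligned pairs (NDCG@k is analogous and omitted for brevity); the function names and data layout are illustrative, not FactCheckBureau's actual API.

```python
from typing import Dict, List, Sequence, Set


def hit_at_k(ranked: Sequence[str], relevant: Set[str], k: int) -> float:
    """1.0 if any relevant FC appears in the top-k results, else 0.0."""
    return 1.0 if any(fc in relevant for fc in ranked[:k]) else 0.0


def reciprocal_rank_at_k(ranked: Sequence[str], relevant: Set[str], k: int) -> float:
    """1/rank of the first relevant FC within the top-k, 0.0 if none appears."""
    for rank, fc in enumerate(ranked[:k], start=1):
        if fc in relevant:
            return 1.0 / rank
    return 0.0


def average_precision_at_k(ranked: Sequence[str], relevant: Set[str], k: int) -> float:
    """Precision averaged over the ranks at which relevant FCs occur in the top-k."""
    hits, score = 0, 0.0
    for rank, fc in enumerate(ranked[:k], start=1):
        if fc in relevant:
            hits += 1
            score += hits / rank
    return score / min(len(relevant), k) if relevant else 0.0


def evaluate(run: Dict[str, List[str]], gold: Dict[str, Set[str]], k: int) -> Dict[str, float]:
    """Macro-average Hit@k, MRR@k and MAP@k over all queries that have gold alignments."""
    queries = [q for q in run if q in gold]
    if not queries:
        raise ValueError("no queries with gold alignments")
    return {
        f"Hit@{k}": sum(hit_at_k(run[q], gold[q], k) for q in queries) / len(queries),
        f"MRR@{k}": sum(reciprocal_rank_at_k(run[q], gold[q], k) for q in queries) / len(queries),
        f"MAP@{k}": sum(average_precision_at_k(run[q], gold[q], k) for q in queries) / len(queries),
    }
```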
Figure 1: FactCheckBureau architecture in the two use modes (Development and Deployment)
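The inputs described in Section 3.1 map naturally onto two record types plus an alignment relation. A minimal sketch using Python dataclasses and the attribute names from the paper; the class layout itself is illustrative and not FactCheckBureau's internal schema.

```python
from dataclasses import dataclass, field
from typing import List, Set, Tuple


@dataclass
class Claim:
    """A claim as collected for the demo corpus: a tweet and its metadata."""
    claim_id: str
    accountHandle: str                  # Twitter account that published the tweet
    text: str
    date: str
    language: str
    hashTags: List[str] = field(default_factory=list)
    URLtoEmbeddedMediaContent: str = ""


@dataclass
class FactCheck:
    """A fact-check article, with attributes derived from schema.org/ClaimReview."""
    fc_id: str
    title: str
    claimant: str
    publisher: str
    dateOfPublication: str
    URLtoArticle: str
    claimText: str
    language: str
    rating: str


# Many-to-many alignment relation claimAboutFC(claim-id, FC-id).
claimAboutFC: Set[Tuple[str, str]] = set()
```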
FactCheckBureau also presents the values of the chosen metric(s) for different cut-off values k.

For further inspection, a user can choose the deep-dive option, where FactCheckBureau lets the user inspect test samples on which the specified pipeline performed poorly. It also reports the performance of each model in the pipeline in isolation.

• Input: Choose models, aligned pairs, metric(s);
• Output: Computed metrics (plots and tables), the performance of the candidate selector alone, and the top 5 worst-performing examples.

Compare. While developing a retrieval solution, evaluating and comparing different models is essential to find the best possible combination of models for candidate retrieval and ranking. The compare mode (middle in Figure 2) enables a user to compare previously saved pipelines and/or newly specified ones. The user obtains a consolidated performance report comparing all the specified pipelines under a set of chosen metrics. It also provides a deep-dive option, to compare the models' performance on selected test samples.

• Input: Choose pipelines from a list of pipelines;
• Output: A single plot of overall performance, a plot of candidate identifier performance, and the 5 worst-performing instances.

Design. This option is for users who do not intend to develop a pipeline but need to use one. The user can supply an FC corpus, or a default one (the one we prepare for the demo, Section 4) can be used. The user specifies the claim language (or we can auto-detect it) and the claim type (post, image, or text).

• Input: Specify query type (post, image, text) and dataset language;
• Output: A recommended pipeline based on (i) the most frequently used components for these inputs, or for comparable inputs (same language, same query type) if there is no history of running on the same inputs; and (ii) simple rules to choose the necessary pre-processing models based on the input type.

Deploy. FactCheckBureau can be used as a search interface (bottom in Figure 2) for finding the relevant FCs for an input query or, alternatively, for a specific topic, specified as a short phrase, e.g., "Covid". The user chooses a previously specified retrieval pipeline already present in the system and configures the number of relevant documents she wants to retrieve. Then, FactCheckBureau returns a list of the FCs relevant to the claim or, respectively, of the FCs about the given topic.

• Input: Choose a pipeline, or select "auto-design"; specify a query or topic;
• Output: Querying interface, supporting queries through a post, an image, text, or a topic (using the available FC tags).

4 Dataset and FC pipeline

Dataset. We built a corpus of 218K claim reviews in 14 languages published by 83 fact-checking agencies recognized as verified signatories by the IFCN (International Fact-Checking Network). Further, we have collected 9.1K tweets mentioned in various FC articles and 8K recent tweets from prominent Members of the European Parliament (MEPs).

We used the Google Fact Check API (https://toolbox.google.com/factcheck/apis) for collecting FCs. The returned data follows the ClaimReview schema, but with some fields omitted as described in Google's documentation (https://tinyurl.com/25t28phf). The returned data also includes the URL of the FC article; we collected the FC article text from these URLs to enrich our dataset. Some reputed FC agencies, like Le Monde, do not publish their FCs via the Google FC Explorer, or stopped doing so at some point; therefore, we crawled their web pages to collect their FCs. For social media post collection, we used a paid subscription to X to gather the 8K tweets from 402 MEPs. We focused on social media posts in English and French FC articles for the aligned pair collection. We collected the tweets mentioned in 4.7K English FCs, 1.2K French FCs, and 3.2K FCs in other languages. This resulted in 9.1K aligned pairs of social media posts and relevant FC articles, as many FCs have two tweets related to them. For some of our FC retrieval experiments described next, we consider these pairs to be the ground truth, allowing the search for an FC given its paired tweet. For others, the text of the claim described in the ClaimReview schema of the FC is used as its ground-truth pair.

Pipeline. We preload FactCheckBureau with our proposed FC retrieval pipeline, which supports the default query interface for non-technical users. In our FC retrieval setting, the collection of FC articles serves as the document corpus and tweets serve as the input queries. We experiment with around 41K articles in English and French. Our pipeline starts by pre-processing a tweet: removing links, emojis, and escape, control, and special characters; standardizing Unicode characters that have more than one representation; normalizing numbers and dates using the num2words (https://pypi.org/project/num2words/) and dateparser (https://pypi.org/project/dateparser/) libraries; and tokenizing text using the spaCy tokenizer (https://spacy.io/api/tokenizer). We employ the well-established BM25 [16] as our retriever model, and use all-mpnet-base-v2 and paraphrase-multilingual-mpnet-base-v2 for re-ranking documents in English and French, respectively.
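For illustration, a hedged sketch of pulling ClaimReview records from the claim-search endpoint of the Google Fact Check Tools API mentioned in the Dataset paragraph above; the endpoint path and field names reflect Google's public documentation at the time of writing and should be double-checked, and the query terms and key are placeholders.

```python
import requests

API_URL = "https://factchecktools.googleapis.com/v1alpha1/claims:search"


def fetch_fact_checks(query: str, api_key: str, language: str = "en", page_size: int = 20):
    """Yield (claim text, claimant, review URL, textual rating) tuples for one search query."""
    params = {"query": query, "languageCode": language, "pageSize": page_size, "key": api_key}
    while True:
        resp = requests.get(API_URL, params=params, timeout=30)
        resp.raise_for_status()
        payload = resp.json()
        for claim in payload.get("claims", []):
            for review in claim.get("claimReview", []):
                yield (claim.get("text", ""), claim.get("claimant", ""),
                       review.get("url", ""), review.get("textualRating", ""))
        token = payload.get("nextPageToken")
        if not token:
            break
        params["pageToken"] = token


# Example (requires a valid API key):
# for row in fetch_fact_checks("climate change", api_key="YOUR_KEY"):
#     print(row)
```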


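Below is a minimal sketch of the pre-processing, BM25 retrieval, and MPNet re-ranking steps described in the Pipeline paragraph, assuming the rank_bm25 and sentence-transformers packages; the paper names the models and the num2words, dateparser, and spaCy libraries, but not these two packages, and details such as the regular expressions and the blank English tokenizer are illustrative choices.

```python
import re

import spacy
from dateparser.search import search_dates
from num2words import num2words
from rank_bm25 import BM25Okapi
from sentence_transformers import SentenceTransformer, util

nlp = spacy.blank("en")  # tokenizer only; using the blank English pipeline is an assumption


def preprocess(text: str, lang: str = "en") -> list[str]:
    """Crude tweet cleaning: drop links/special characters, spell out dates and numbers, tokenize."""
    text = re.sub(r"https?://\S+", " ", text)                      # remove links
    text = re.sub(r"[^\w\s@#,.:%'-]", " ", text)                   # remove emojis / exotic characters
    for fragment, when in (search_dates(text, languages=[lang]) or []):
        text = text.replace(fragment, when.strftime("%d %B %Y"))   # normalize dates
    text = re.sub(r"\b\d+\b", lambda m: num2words(int(m.group()), lang=lang), text)  # normalize numbers
    return [tok.text.lower() for tok in nlp(text) if not tok.is_space]


def build_search(fc_articles: list[str], lang: str = "en"):
    """Index FC articles with BM25 and re-rank candidates with an MPNet sentence encoder."""
    bm25 = BM25Okapi([preprocess(doc, lang) for doc in fc_articles])
    model_name = ("sentence-transformers/all-mpnet-base-v2" if lang == "en"
                  else "sentence-transformers/paraphrase-multilingual-mpnet-base-v2")
    encoder = SentenceTransformer(model_name)
    doc_emb = encoder.encode(fc_articles, convert_to_tensor=True)

    def search(tweet: str, n_candidates: int = 10, top_k: int = 5):
        # Stage 1: BM25 shortlist of n_candidates FC articles.
        scores = bm25.get_scores(preprocess(tweet, lang))
        shortlist = sorted(range(len(fc_articles)), key=lambda i: scores[i], reverse=True)[:n_candidates]
        # Stage 2: cosine similarity between the tweet and the shortlisted article embeddings.
        sims = util.cos_sim(encoder.encode(tweet, convert_to_tensor=True), doc_emb[shortlist])[0]
        order = sims.argsort(descending=True)[:top_k].tolist()
        return [(fc_articles[shortlist[i]], float(sims[i])) for i in order]

    return search
```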
Table 2: Comparison of the two tasks P2C (post to claim) and P2A (post to article) on the top-5 results of our pipeline, for different candidate list sizes.

Dataset | Task | BM25-10 | BM25-20 | BM25-50 | BM25-100
MultiClaim | P2C | 67.94 | 71.14 | 73.53 | 74.10
MultiClaim | P2A | 86.67 | 86.02 | 85.69 | 84.94
Our Dataset | P2C | 63.95 | 67.30 | 70.19 | 71.23
Our Dataset | P2A | 85.75 | 85.27 | 84.21 | 83.41

Table 3: Evaluation of the impact of different data augmentation features, OCR and image captioning (IC), on both datasets. The Hit@k metric is computed over the empirically best FC candidate list size of 10, i.e., BM25-10.

Dataset | Features | Hit@1 | Hit@3 | Hit@5 | Hit@10
Our Dataset (EN) | T | 73.67 | 83.65 | 86.35 | 88.67
Our Dataset (EN) | T + OCR | 76.00 | 86.33 | 88.69 | 90.82
Our Dataset (EN) | T + IC | 76.55 | 85.92 | 88.42 | 90.90
Our Dataset (EN) | T + OCR + IC | 73.40 | 82.79 | 85.94 | 88.52
Our Dataset (FR) | T | 72.41 | 85.22 | 88.57 | 92.33
Our Dataset (FR) | T + OCR | 74.04 | 86.53 | 90.29 | 93.71
MultiClaim (EN) | T | 75.05 | 84.43 | 86.67 | 88.53
MultiClaim (EN) | T + OCR | 85.93 | 94.41 | 96.21 | 97.28
MultiClaim (FR) | T | 74.82 | 86.84 | 90.09 | 93.18
MultiClaim (FR) | T + OCR | 78.93 | 91.00 | 94.08 | 97.04
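Table 3 compares tweet text (T) with versions augmented by OCR text and image captions extracted from embedded images (see insight I2 below). A minimal sketch of building such augmented queries, assuming pytesseract for OCR and a BLIP captioning pipeline from transformers; the paper does not name the specific OCR or captioning tools, so these are stand-ins.

```python
from typing import Optional

from PIL import Image
import pytesseract                      # needs the Tesseract OCR engine installed on the system
from transformers import pipeline

# Assumed captioning model; the paper does not say which captioner was used.
captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")


def augment_query(tweet_text: str, image_path: Optional[str],
                  use_ocr: bool = True, use_caption: bool = False) -> str:
    """Return the tweet text, optionally extended with OCR text (T+OCR) and/or a caption (T+IC)."""
    parts = [tweet_text]
    if image_path:
        image = Image.open(image_path)
        if use_ocr:
            parts.append(pytesseract.image_to_string(image).strip())
        if use_caption:
            parts.append(captioner(image)[0]["generated_text"])
    return " ".join(p for p in parts if p)
```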
Figure 2: FactCheckBureau in Development mode

Discussion. We discuss some insights we gained while developing our pipeline, highlighting the non-trivial nature of the FC retrieval problem:

I1 Retrieving FC articles works better than retrieving the claims specified in ClaimReview: Using our pipeline, we evaluated the performance of two retrieval tasks: first, given a post, retrieve the claims specified in ClaimReview (P2C) and, second, for an input post, retrieve the corresponding FC article (P2A), on our dataset and on a recent multilingual FC dataset, MultiClaim [15]. We computed the Hit@k metric, which indicates the percentage of samples for which the correct answer is present in the top-k answers of the re-ranker model. For a given k, the Hit@k metric is computed on different sizes of the candidate FC list retrieved by our retriever model, BM25. A candidate list of size J is referred to as BM25-J. We report Hit@5 at different BM25-J, for J = 10, 20, 50, 100, on both datasets in Table 2. The performance of the P2C retrieval task improves as more candidate FCs are considered for re-ranking, whereas this trend is inverted for the P2A task. This implies that retrieving FC articles is not only better performing (an improvement of 12 points), it is also more efficient, as the computationally expensive re-ranker model needs to re-rank a significantly smaller (by a factor of 10) candidate list.

I2 Careful data augmentation helps retrieval: We experimented with augmenting the tweets with features extracted from the images embedded within the tweets, namely OCR text and an image caption. We evaluated the impact of the different augmentations over the English- and French-aligned pairs of our dataset and of MultiClaim [15]. The results are reported in Table 3, where the different setups are indicated as tweet text (T), text appended with OCR text (T+OCR), tweet text appended with the image caption (T+IC), and a combination of both features (T+OCR+IC). For space considerations, we report only the best augmentation feature for the French subset of our dataset and for the MultiClaim dataset. Our main takeaway from this experiment is that tweets need to be carefully augmented: more features do not necessarily ensure better performance.

5 Conclusion
In this work, we present FactCheckBureau, an interactive tool for designing and deploying configurable FC retrieval pipelines. The design task supports researchers in specifying and comparing the performance of different pipelines following the retrieve-and-rerank paradigm. Along with the tool, we propose an FC retrieval pipeline and construct an FC dataset that backs the search interface for non-technical users.

Acknowledgments. This work is supported in part by the EU project ELIAS (101120237) grant, EU grant 101041223, the French AI Chair SourcesSay (ANR-20-CHIA-0015), and the Hi!PARIS Center.
References
[1] Fatma Arslan, Naeemul Hassan, Chengkai Li, and Mark Tremayne. 2020. A Benchmark Dataset of Check-worthy Factual Claims. In 14th International AAAI Conference on Web and Social Media. AAAI.
[2] Alberto Barrón-Cedeño, Tamer Elsayed, Preslav Nakov, Giovanni Da San Martino, Maram Hasanain, Reem Suwaileh, Fatima Haouari, Nikolay Babulkov, Bayan Hamdan, Alex Nikolov, Shaden Shaar, and Zien Sheikh Ali. 2020. Overview of CheckThat! 2020: Automatic Identification and Verification of Claims in Social Media. In CLEF. Springer. https://doi.org/10.1007/978-3-030-58219-7_17
[3] Sylvie Cazalens, Philippe Lamarre, Julien Leblay, Ioana Manolescu, and Xavier Tannier. 2018. A Content Management Perspective on Fact-Checking. In The Web Conference. ACM. https://doi.org/10.1145/3184558.3188727
[4] Tanmoy Chakraborty, Valerio La Gatta, Vincenzo Moscato, and Giancarlo Sperlì. 2023. Information retrieval algorithms and neural ranking models to detect previously fact-checked information. Neurocomputing 557 (2023), 126680. https://doi.org/10.1016/j.neucom.2023.126680
[5] FullFact. 2020. The challenges of online fact checking. https://fullfact.org/media/uploads/coof-2020.pdf
[6] Susmita Gangopadhyay, Katarina Boland, Danilo Dessì, Stefan Dietze, Pavlos Fafalios, Andon Tchechmedjiev, Konstantin Todorov, and Hajira Jabeen. 2023. Truth or Dare: Investigating Claims Truthfulness with ClaimsKG. In Linked Data-driven Resilience Research, Vol. 3401. https://ceur-ws.org/Vol-3401/paper7.pdf
[7] Ashkan Kazemi, Kiran Garimella, Devin Gaffney, and Scott Hale. 2021. Claim Matching Beyond English to Scale Global Fact-Checking. In IJCNLP. Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.acl-long.347
[8] Ashkan Kazemi, Zehua Li, Verónica Pérez-Rosas, Scott A. Hale, and Rada Mihalcea. 2022. Matching tweets with applicable fact-checks across languages. In De-Factify: Workshop on Multimodal Fact Checking and Hate Speech Detection (with AAAI).
[9] Emma Lurie and Eni Mustafaraj. 2020. Highly Partisan and Blatantly Wrong: Analyzing News Publishers' Critiques of Google's Reviewed Claims. In Truth and Trust Online Conference. Hacks/Hackers. https://truthandtrustonline.com/wp-content/uploads/2020/10/TTO07.pdf
[10] Preslav Nakov, David Corney, Maram Hasanain, Firoj Alam, Tamer Elsayed, Alberto Barrón-Cedeño, Paolo Papotti, Shaden Shaar, and Giovanni Da San Martino. 2021. Automated Fact-Checking for Assisting Human Fact-Checkers. In IJCAI. https://doi.org/10.24963/ijcai.2021/619
[11] Preslav Nakov, Giovanni Da San Martino, Firoj Alam, Shaden Shaar, Hamdy Mubarak, and Nikolay Babulkov. 2022. Overview of the CLEF-2022 CheckThat! lab task 2 on detecting previously fact-checked claims. (2022).
[12] Dan S. Nielsen and Ryan McConville. 2022. MuMiN: A large-scale multilingual multimodal fact-checked misinformation social network dataset. In SIGIR.
[13] Rrubaa Panchendrarajan and Arkaitz Zubiaga. 2024. Claim detection for automated fact-checking: A survey on monolingual, multilingual and cross-lingual research. Natural Language Processing Journal 7 (2024), 100066. https://doi.org/10.1016/j.nlp.2024.100066
[14] Youri Peskine, Raphaël Troncy, and Paolo Papotti. 2024. CimpleKG: a Continuously Updated Knowledge Graph of Fact-Checks and Related Misinformation. In Infox sur Seine workshop.
[15] Matúš Pikuliak, Ivan Srba, Róbert Móro, Timo Hromadka, Timotej Smolen, Martin Melisek, Ivan Vykopal, Jakub Simko, Juraj Podrouzek, and Mária Bieliková. 2023. Multilingual Previously Fact-Checked Claim Retrieval. In EMNLP. https://doi.org/10.18653/V1/2023.EMNLP-MAIN.1027
[16] Stephen Robertson and Hugo Zaragoza. 2009. The Probabilistic Relevance Framework: BM25 and Beyond. Foundations and Trends in Information Retrieval 3, 4 (2009). https://doi.org/10.1561/1500000019
[17] Shaden Shaar, Nikolay Babulkov, Giovanni Da San Martino, and Preslav Nakov. 2020. That is a Known Lie: Detecting Previously Fact-Checked Claims. In ACL. https://doi.org/10.18653/V1/2020.ACL-MAIN.332
[18] Iknoor Singh, Carolina Scarton, Xingyi Song, and Kalina Bontcheva. 2023. Finding Already Debunked Narratives via Multistage Retrieval: Enabling Cross-Lingual, Cross-Dataset and Zero-Shot Learning. arXiv:2308.05680 (2023).
[19] Megha Sundriyal, Tanmoy Chakraborty, and Preslav Nakov. 2023. From Chaos to Clarity: Claim Normalization to Empower Fact-Checking. In EMNLP Findings. https://doi.org/10.18653/V1/2023.FINDINGS-EMNLP.439
[20] Nguyen Vo and Kyumin Lee. 2020. Where Are the Facts? Searching for Fact-checked Information to Alleviate the Spread of Fake News. In EMNLP. https://doi.org/10.18653/V1/2020.EMNLP-MAIN.621
[21] Soroush Vosoughi, Deb Roy, and Sinan Aral. 2018. The spread of true and false news online. Science 359 (2018). https://doi.org/10.1126/science.aap9559