2022, Companion Proceedings of the Web Conference 2022
Misinformation has always been part of humankind's information ecosystem. The development of tools and methods for automatically assessing the reliability of information has received a great deal of attention in recent years, for example for estimating the authenticity of images, the likelihood of claims, and the credibility of sources. Unfortunately, there is little evidence that the presence of these advanced technologies, or the constant effort of fact-checkers worldwide, can help stop the spread of misinformation. I will try to convince you that you, too, hold various false beliefs, and argue for the need for technologies and processes that assess the information shared by ourselves or by others over a longer period of time, in order to improve our understanding of our own credibility and vulnerability, as well as those of the people we listen to. I will also describe the benefits, challenges, and risks of automated corrective actions on information, both for the target recipients and for their wider audience.
arXiv (Cornell University), 2023
Automated fact-checking is often presented as an epistemic tool that fact-checkers, social media consumers, and other stakeholders can use to fight misinformation. Nevertheless, few papers thoroughly discuss how. We document this by analysing 100 highly-cited papers, and annotating epistemic elements related to intended use, i.e., means, ends, and stakeholders. We find that narratives leaving out some of these aspects are common, that many papers propose inconsistent means and ends, and that the feasibility of suggested strategies rarely has empirical backing. We argue that this vagueness actively hinders the technology from reaching its goals, as it encourages overclaiming, limits criticism, and prevents stakeholder feedback. Accordingly, we provide several recommendations for thinking and writing about the use of fact-checking artefacts.
Findings of the Association for Computational Linguistics: NAACL 2022
Recent years have seen the proliferation of disinformation and misinformation online, thanks to the freedom of expression on the Internet and to the rise of social media. Two solutions were proposed to address the problem: (i) manual fact-checking, which is accurate and credible, but slow and non-scalable, and (ii) automatic fact-checking, which is fast and scalable, but lacks explainability and credibility. With the accumulation of enough manually fact-checked claims, a middle-ground approach has emerged: checking whether a given claim has previously been fact-checked. This can be done automatically, and thus fast, while also offering credibility and explainability, thanks to the human fact-checking and the explanations in the associated fact-checking article. This is a relatively new and understudied research direction, and here we focus on claims made in a political debate, where context really matters. Thus, we study the impact of modeling the context of the claim: both on the source side, i.e., in the debate, and on the target side, i.e., in the fact-checking explanation document. We do this by modeling the local context, the global context, as well as by means of co-reference resolution, and reasoning over the target text using Transformer-XH. The experimental results show that each of these represents a valuable information source, but that modeling the source-side context is more important, and can yield 10+ points of absolute improvement.
1 Introduction
The fight against the spread of dis/mis-information in social media has become an urgent social and political issue. Social media have been widely used not only for social good but also to mislead entire communities. Many fact-checking organizations, such as FactCheck.org, Snopes, PolitiFact, and FullFact, along with many others, and also along with some broader international initiatives such as the Credibility Coalition and Eufactcheck, have emerged in the past few years to address the issue (Stencel, 2019). It has also become of great concern for government entities, companies, as well as national and international agencies. At the same time, there have been efforts to develop automatic systems to detect and to flag...
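The middle-ground approach above — checking whether a claim has already been fact-checked — is at its core a retrieval problem. The following is a minimal sketch of that retrieval step, assuming a hypothetical archive of fact-checked claims; it uses a plain bag-of-words cosine similarity as a stand-in baseline, not the paper's Transformer-XH model.

```python
# Hypothetical baseline for retrieving previously fact-checked claims:
# rank an archive of checked claims by cosine similarity to a new claim.
# (Illustrative only; real systems use learned semantic representations.)
from collections import Counter
import math

def vectorize(text: str) -> Counter:
    # Sparse term-frequency vector over lowercased whitespace tokens.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_previous_checks(claim: str, archive: list[str]) -> list[tuple[float, str]]:
    # Score every archived fact-check against the new claim, best first.
    q = vectorize(claim)
    scored = [(cosine(q, vectorize(doc)), doc) for doc in archive]
    return sorted(scored, reverse=True)

archive = [
    "the moon landing was staged in a studio",
    "vaccines cause autism in children",
]
best = rank_previous_checks("do vaccines cause autism", archive)[0]
```

A match above some similarity threshold would let the system reuse the existing fact-checking article, which is what gives this approach its explainability.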
2024
Artificial intelligence is changing the way our world works, and the journalism and communication field is no exception. The development of technologies such as NLP and machine learning has modified professional routines, work profiles, and business models. Fact-checking initiatives, which have long battled disinformation, now face a complex context where misleading content grows faster than ever. In this situation, artificial intelligence, or AI, can be a double-edged sword. On the one hand, AI-generated content can be created faster than regular content; therefore, there is a huge volume of data to be analysed by fact-checkers. Furthermore, NLP software is not always as reliable as might be expected: it tends to 'hallucinate', creating more misleading content and hoaxes. On the other hand, AI can be a helpful tool in fighting disinformation. This paper analyses 10 independent international fact-checking initiatives through case analysis and questionnaires with fact-checkers. Results show that these sites use AI during different stages of their routines, accelerating processes, simplifying tasks and improving the accuracy of fact-checking results. AI integration shows some risks related to economic restrictions, platform limitations, media distrust, and inequity between countries. To conclude, this research also demonstrates that journalists are still in the loop at fact-checking sites, but more tech profiles and better skills are required.
An Interdisciplinary, Searchable, and Linkable Resource, 2015
The increasing prevalence of misinformation in society may adversely affect democratic decision-making, which depends on a well-informed public. False information can originate from a number of sources including rumors, literary fiction, mainstream media, corporate vested interests, governments, and nongovernmental organizations. The rise of the Internet and user-driven content has provided a venue for quick and broad dissemination of information, not all of which is accurate. Consequently, a large body of research spanning a number of disciplines has sought to understand misinformation and determine which interventions are most effective in reducing its influence. This article summarizes research into misinformation, bringing together studies from psychology, political science, education, and computer science. Cognitive psychology investigates why individuals struggle with correcting misinformation and inaccurate beliefs, and why myths are so difficult to dislodge. Two important findings involve (a) various "backfire effects," which arise when refutations ironically reinforce misconceptions, and (b) the role of worldviews in accentuating the persistence of misinformation. Computer scientists simulate the spread of misinformation through social networks and develop algorithms to automatically detect or neutralize myths. We draw together various research threads to provide guidelines on how to effectively refute misconceptions without risking backfire effects.
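The network simulations mentioned above are often variants of cascade models. The following is an illustrative sketch, assuming an independent-cascade model on a made-up social graph; the graph, seed set, and transmission probability are hypothetical, not drawn from the article.

```python
# Illustrative independent-cascade simulation of myth spread on a small
# social graph. Each newly reached node gets one chance to pass the myth
# to each neighbour with probability p. (Toy example, not from the article.)
import random

def independent_cascade(graph: dict[int, list[int]], seeds: set[int],
                        p: float, rng: random.Random) -> set[int]:
    infected = set(seeds)        # nodes that have accepted the myth
    frontier = list(seeds)       # nodes whose neighbours are still untried
    while frontier:
        node = frontier.pop()
        for nb in graph.get(node, []):
            if nb not in infected and rng.random() < p:
                infected.add(nb)
                frontier.append(nb)
    return infected

# Hypothetical follower graph: node -> list of followers it can reach.
graph = {0: [1, 2], 1: [3], 2: [3], 3: [4]}
reached = independent_cascade(graph, {0}, p=1.0, rng=random.Random(0))
```

Running many such simulations with different seed nodes is one way researchers estimate which accounts most amplify a myth, and where a correction would be most effective.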
Science and Society: Journal of Political and Moral Theory, 2024
Given the evolution and growth of fact-checking around the globe, practitioners and academics have been gathering with increasing frequency to discuss the state of the enterprise. The continued interest in fact-checking suggests that periodic updates on how the practice is evolving have merit. The present article is one such effort: it briefly addresses the origins of fact-checking, followed by an examination of some of the challenges and opportunities facing the enterprise.
Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, 2021
The reporting and the analysis of current events around the globe have expanded from professional, editor-led journalism all the way to citizen journalism. Nowadays, politicians and other key players enjoy direct access to their audiences through social media, bypassing the filters of official cables or traditional media. However, the multiple advantages of free speech and direct communication are dimmed by the misuse of media to spread inaccurate or misleading claims. These phenomena have led to the modern incarnation of the fact-checker --- a professional whose main aim is to examine claims using available evidence and to assess their veracity. Here, we survey the available intelligent technologies that can support the human expert in the different steps of her fact-checking endeavor. These include identifying claims worth fact-checking, detecting relevant previously fact-checked claims, retrieving relevant evidence to fact-check a claim, and actually verifying a claim. In each ca...
2020
For all the recent advancements in Natural Language Processing and deep learning, current systems for misinformation detection are still woefully inaccurate on real-world data. Automated misinformation detection systems that are available to the general public and produce explainable ratings therefore remain an open problem, and the involvement of domain experts, journalists, or fact-checkers is necessary to correct the mistakes such systems currently make. Reliance on such expert feedback imposes a bottleneck and prevents scalability of current approaches. In this paper, we propose a method, based on a recent semantic approach for misinformation detection called Credibility Reviews (CR), to (i) identify real-world errors of the automatic analysis; (ii) use the semantic links in the CR graphs to identify steps in the misinformation analysis which may have caused the errors; and (iii) derive crowdsourcing tasks to pinpoint the source of errors. As a bonus, our approach generates real-world t...
arXiv (Cornell University), 2023
Social media and user-generated content (UGC) have become increasingly important features of journalistic work in a number of different ways. However, the growth of misinformation means that news organisations have had to devote more and more resources to determining its veracity and to publishing corrections if it is found to be misleading. In this work, we present the results of interviews with eight members of fact-checking teams from two organisations. Team members described their fact-checking processes and the challenges they currently face in completing a fact-check in a robust and timely way. The former reveals, inter alia, significant differences in fact-checking practices and the role played by collaboration between team members. We conclude with a discussion of the implications for the development and application of computational tools, including where computational tool support is currently lacking and the importance of being able to accommodate different fact-checking practices.
2021
The spread of online misinformation is a ubiquitous problem, especially in the context of social media. In addition to the impact on global health caused by the current COVID-19 pandemic, the spread of related misinformation poses an additional health threat. Detecting and controlling the spread of misinformation using algorithmic methods is a challenging task. Relying on human fact-checking experts is the most reliable approach; however, it does not scale with the volume and speed with which digital misinformation is being produced and disseminated. In this paper, we present the SAMS Human-in-the-loop (SAMS-HITL) approach to detect and combat the spread of digital misinformation. SAMS-HITL leverages the fact-checking skills of humans by providing feedback on news stories about the source, author, message, and spelling. The SAMS features are jointly integrated into a machine learning pipeline for detecting misinformation. First results indicate that SAMS features have a marked impact on...
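The SAMS idea — human feedback on Source, Author, Message, and Spelling feeding a machine learning pipeline — can be illustrated with a minimal sketch. The feature names, rating scale, and threshold classifier below are assumptions for illustration, not the paper's implementation.

```python
# Hedged sketch of SAMS-style features: turn human ratings on the four
# SAMS dimensions into a numeric feature vector for a classifier.
# (Scoring scheme and threshold are made up for the example.)
def sams_features(feedback: dict[str, float]) -> list[float]:
    # feedback maps each SAMS dimension to a human rating in [0, 1];
    # missing dimensions default to a neutral 0.5.
    return [
        feedback.get("source", 0.5),    # trust in the outlet
        feedback.get("author", 0.5),    # credibility of the byline
        feedback.get("message", 0.5),   # plausibility of the claim
        feedback.get("spelling", 0.5),  # writing quality as a proxy signal
    ]

def predict(features: list[float]) -> str:
    # Stub classifier: average the SAMS ratings and threshold. A real
    # pipeline would feed these features into a trained model instead.
    score = sum(features) / len(features)
    return "credible" if score >= 0.5 else "suspect"

feats = sams_features({"source": 0.9, "author": 0.8,
                       "message": 0.7, "spelling": 0.9})
```

Keeping the human ratings as explicit features is what makes this a human-in-the-loop design: the model's inputs remain interpretable, and expert corrections flow directly into retraining.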