2022, Companion Proceedings of the Web Conference 2022
Misinformation has always been part of humankind's information ecosystem. The development of tools and methods for automatically assessing the reliability of information has received a great deal of attention in recent years, for instance estimating the authenticity of images, the likelihood of claims, and the credibility of sources. Unfortunately, there is little evidence that the presence of these advanced technologies, or the constant effort of fact-checkers worldwide, can help stop the spread of misinformation. I will try to convince you that you, too, hold various false beliefs, and argue for the need for technologies and processes that assess the information shared by ourselves or by others over a longer period of time, in order to improve our knowledge of our own information credibility and vulnerability, as well as those of the people we listen to. I will also describe the benefits, challenges, and risks of automated corrective actions on information, both for the target recipients and for their wider audience.
arXiv (Cornell University), 2023
Automated fact-checking is often presented as an epistemic tool that fact-checkers, social media consumers, and other stakeholders can use to fight misinformation. Nevertheless, few papers thoroughly discuss how. We document this by analysing 100 highly-cited papers, and annotating epistemic elements related to intended use, i.e., means, ends, and stakeholders. We find that narratives leaving out some of these aspects are common, that many papers propose inconsistent means and ends, and that the feasibility of suggested strategies rarely has empirical backing. We argue that this vagueness actively hinders the technology from reaching its goals, as it encourages overclaiming, limits criticism, and prevents stakeholder feedback. Accordingly, we provide several recommendations for thinking and writing about the use of fact-checking artefacts.
2024
Artificial intelligence is changing the way our world works, and the journalism and communication field is no exception. The development of technologies such as natural language processing (NLP) and machine learning has modified professional routines, work profiles, and business models. Fact-checking initiatives, which have long battled disinformation, now face a complex context in which misleading content grows faster than ever. In this situation, artificial intelligence, or AI, can be a double-edged sword. On the one hand, AI-generated content can be created faster than regular content, so fact-checkers face a huge volume of data to analyse. Furthermore, NLP software is not always as reliable as might be expected: it tends to ‘hallucinate’, creating more misleading content and hoaxes. On the other hand, AI can be a helpful tool in fighting disinformation. This paper analyses 10 independent international fact-checking initiatives through case analysis and questionnaires with fact-checkers. Results show that these sites use AI during different stages of their routines, accelerating processes, simplifying tasks, and improving the accuracy of fact-checking results. AI integration also carries risks related to economic restrictions, platform limitations, media distrust, and inequity between countries. Finally, this research demonstrates that journalists are still in the loop at fact-checking sites, but that more tech profiles and better skills are required.
An Interdisciplinary, Searchable, and Linkable Resource, 2015
The increasing prevalence of misinformation in society may adversely affect democratic decision-making, which depends on a well-informed public. False information can originate from a number of sources including rumors, literary fiction, mainstream media, corporate vested interests, governments, and nongovernmental organizations. The rise of the Internet and user-driven content has provided a venue for quick and broad dissemination of information, not all of which is accurate. Consequently, a large body of research spanning a number of disciplines has sought to understand misinformation and determine which interventions are most effective in reducing its influence. This article summarizes research into misinformation, bringing together studies from psychology, political science, education, and computer science. Cognitive psychology investigates why individuals struggle with correcting misinformation and inaccurate beliefs, and why myths are so difficult to dislodge. Two important findings involve (a) various "backfire effects," which arise when refutations ironically reinforce misconceptions, and (b) the role of worldviews in accentuating the persistence of misinformation. Computer scientists simulate the spread of misinformation through social networks and develop algorithms to automatically detect or neutralize myths. We draw together various research threads to provide guidelines on how to effectively refute misconceptions without risking backfire effects.
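To make the simulation work mentioned above concrete, here is a minimal sketch of one common model of spread, the independent cascade, in Python. It is an illustration only, not a model from the cited work: the toy graph, the seed set, and the transmission probability `p` are all hypothetical choices, and real studies use empirical social networks and fitted parameters.

```python
import random

def simulate_cascade(graph, seeds, p=0.2, rng=None):
    """Independent cascade: each newly misinformed node gets one chance to
    pass the myth to each still-susceptible neighbour with probability p."""
    rng = rng or random.Random(42)
    infected = set(seeds)
    frontier = list(seeds)
    while frontier:
        next_frontier = []
        for node in frontier:
            for neighbour in graph.get(node, []):
                if neighbour not in infected and rng.random() < p:
                    infected.add(neighbour)
                    next_frontier.append(neighbour)
        frontier = next_frontier
    return infected

# Toy social network as an adjacency list (hypothetical).
toy_graph = {
    "a": ["b", "c"], "b": ["a", "d"], "c": ["a", "d", "e"],
    "d": ["b", "c", "f"], "e": ["c", "f"], "f": ["d", "e"],
}

reached = simulate_cascade(toy_graph, seeds=["a"], p=0.3)
print(f"Myth reached {len(reached)} of {len(toy_graph)} users: {sorted(reached)}")
```

Running the cascade many times with different seeds and probabilities is how such simulations estimate how far a myth spreads, and where an intervention would cut it off.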
Science and Society: Journal of Political and Moral Theory, 2024
Given the evolution and growth of fact-checking around the globe, practitioners and academics have been gathering with increasing frequency to discuss the state of the enterprise. The continued interest in fact-checking suggests that periodic updates on how the practice is evolving have merit. The present article is one such effort: it briefly addresses the origins of fact-checking, followed by an examination of some of the challenges and opportunities facing the enterprise.
2020
For all the recent advancements in Natural Language Processing and deep learning, current systems for misinformation detection are still woefully inaccurate on real-world data. Automated misinformation detection systems that are available to the general public and produce explainable ratings therefore remain an open problem, and the involvement of domain experts, journalists, or fact-checkers is necessary to correct the mistakes such systems currently make. Reliance on such expert feedback imposes a bottleneck and prevents current approaches from scaling. In this paper, we propose a method, based on Credibility Reviews (CR), a recent semantic approach to misinformation detection, to (i) identify real-world errors in the automatic analysis; (ii) use the semantic links in the CR graphs to identify the steps in the misinformation analysis that may have caused the errors; and (iii) derive crowdsourcing tasks to pinpoint the source of the errors. As a bonus, our approach generates real-world t...
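The abstract does not spell out the CR data model, but the shape of steps (ii) and (iii) can be sketched. The fragment below assumes a simplified review graph, a hypothetical `ReviewStep` with `rating`, `confidence`, and `sub_steps` fields (the paper's actual linked-data model is richer): it walks the graph from an erroneous top-level rating down to low-confidence analysis steps, then emits one crowdsourcing task per suspect step.

```python
from dataclasses import dataclass, field

@dataclass
class ReviewStep:
    """Hypothetical stand-in for one node in a Credibility Review graph."""
    name: str          # e.g. "claim extraction", "stance detection"
    rating: float      # automatic credibility rating in [-1, 1]
    confidence: float  # the system's confidence in this step
    sub_steps: list = field(default_factory=list)

def suspect_steps(step, threshold=0.6):
    """Step (ii): collect low-confidence steps that may have caused the error."""
    found = [step] if step.confidence < threshold else []
    for sub in step.sub_steps:
        found.extend(suspect_steps(sub, threshold))
    return found

def to_crowd_task(step):
    """Step (iii): turn a suspect analysis step into a human verification task."""
    return {
        "question": f"Was the automatic '{step.name}' step correct?",
        "system_rating": step.rating,
        "answers": ["correct", "incorrect", "cannot tell"],
    }

# Toy review graph: an overall rating built from two analysis sub-steps.
review = ReviewStep("overall credibility", -0.8, 0.9, sub_steps=[
    ReviewStep("claim extraction", 0.0, 0.4),
    ReviewStep("stance detection", -0.7, 0.8),
])

for task in map(to_crowd_task, suspect_steps(review)):
    print(task["question"])
```

The design point the abstract relies on is that the review graph preserves provenance: because each rating links to the sub-analyses it was derived from, an error at the top can be traced to, and checked by humans at, the specific step that produced it.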
Elgar Encyclopedia of Political Communication, 2025
This entry defines the contemporary practice of external fact-checking and offers a brief primer on its origins and global spread. It then provides an overview of how fact-checking has affected individuals, opinion leaders, and the journalism industry itself. Major challenges facing fact-checking are considered, including perceived biases and the weaponization of fact-checks, as well as how funding from corporate sources such as Meta may influence the practice. Several promising opportunities are also addressed, such as the use of artificial intelligence and machine learning to automate fact-checking, collaboration among fact-checkers on global issues, and leveraging fact-checking as part of a constellation of tools intended to create systematic change. The entry concludes by acknowledging the limitations of fact-checking in a society whose media systems are largely structured to encourage mindless, shallow consumption for the benefit of commercial rather than democratic interests.
arXiv (Cornell University), 2024
…recent phenomenon, occurring significantly after heavy press coverage. We also show that "simple" methods dominated historically, particularly context manipulations, and continued to hold a majority as of the end of data collection in November 2023. The dataset, Annotated Misinformation, Media-Based (AMMEBA), is publicly available, and we hope that these data will serve both as a means of evaluating mitigation methods in a realistic setting and as a first-of-its-kind census of the types and modalities of online misinformation.
arXiv (Cornell University), 2023
Social media and user-generated content (UGC) have become increasingly important features of journalistic work in a number of different ways. However, the growth of misinformation means that news organisations have had to devote more and more resources to determining its veracity and to publishing corrections when it is found to be misleading. In this work, we present the results of interviews with eight members of fact-checking teams from two organisations. Team members described their fact-checking processes and the challenges they currently face in completing a fact-check in a robust and timely way. The former reveals, inter alia, significant differences in fact-checking practices and the role played by collaboration between team members. We conclude with a discussion of the implications for the development and application of computational tools, including where computational tool support is currently lacking and the importance of being able to accommodate different fact-checking practices.