2021
We propose a novel method for online misinformation analysis based on a Wittgensteinian approach. We found no previous work that uses Wittgenstein's early philosophy for misinformation analysis. The works of Ludwig Wittgenstein (1889-1951) are usually divided into two periods: the early and the later philosophy. Wittgenstein's book Tractatus Logico-Philosophicus (TL-P) is regarded as his early masterpiece. The TL-P is concerned with the role facts play in the world. According to the TL-P, the world is composed of facts, and we connect with facts through our thoughts. Our thoughts picture the world and are expressed through propositions. The system for online content analysis described here is a descriptive tool to clarify the thoughts and propositions found within the online content analysed. Web-based written non-graphical information (articles, commentary, etc.) is analysed and then scored against criteria designed to evaluate the information quality of the content. Our hypothesis is that when applied...
The Web in the past decade and a half has revolutionized the way information is made available to users. There is, unfortunately, another side to the story, and that is misinformation on the web, which has the potential of undermining the veracity of web content. This paper reviews the topic of misinformation on the Web by giving insights
The spread of misinformation online is specifically amplified by the use of social media, yet the tools that allow online users to authenticate text and images, while available, are not easily accessible. The authors challenge this state of affairs, suggesting that the corporations responsible for the development of browsers and social media websites need to incorporate such tools to combat the spread of misinformation. As a stepping stone towards developing a formula for simulating the spread of misinformation, the authors ran theoretical simulations which demonstrate the unchallenged spread of misinformation when users are left to authenticate such material on their own, as opposed to being provided the means to authenticate it. The team simulates five scenarios that grow gradually more complicated as variables are identified and added to the model. The results demonstrate a simulation of the process as a proof of concept, as well as identification of the key variables that influence the spread and combating of misinformation online.
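A minimal sketch of the kind of spread simulation this abstract describes, assuming a simple geometric share model with an illustrative "tool adoption" parameter; the function name, parameters and numbers are invented for illustration and are not taken from the paper:

```python
# Hypothetical sketch: misinformation spreads through shares each round;
# a fraction of users with access to an authentication tool verify the
# item and stop resharing it. All parameters are illustrative.
def simulate(rounds: int, share_rate: float, tool_adoption: float,
             seed_believers: int = 10) -> int:
    """Return roughly how many users the item has reached after `rounds`."""
    reached = float(seed_believers)
    spreaders = float(seed_believers)
    for _ in range(rounds):
        # Each spreader reaches new users; tool users verify and drop out.
        new = spreaders * share_rate * (1.0 - tool_adoption)
        reached += new
        spreaders = new
    return round(reached)

print(simulate(5, share_rate=2.0, tool_adoption=0.0))  # unchallenged spread
print(simulate(5, share_rate=2.0, tool_adoption=0.5))  # half of users verify
```

Even this toy model reproduces the abstract's qualitative point: with no authentication tools the reach grows geometrically, while widespread tool adoption flattens the curve.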
Investigación Bibliotecológica: archivonomía, bibliotecología e información, 2020
The article presents information reliability criteria to identify misinformation and its representations (fake news, post-truth, alternative facts and deepfakes) in the current scenario, characterized by the digital environment. It also contextualizes the concepts of critical reading and critical thinking, essential in the conceptual formulation of informational reliability. From there, the paper elaborates its criteria in order to verify the reliability of information disseminated on the web. For this purpose, it uses the criteria for evaluating information sources developed by Tomaél, Alcará and Silva (2008), the critical analysis of arguments by Carraher (1983), and Floridi's (2011, 2010) concept of informativeness, together with his informational concept map. The article concludes that these criteria help to combat misinformation and stimulate critical reading and thinking processes in the individual, even though they are not a final solution for this purpose.
Proceedings of the 2003 InSITE Conference
Participants in interactions (particularly in interactive network media such as the WWW) aim to change their own intentions, goals, decisions and actions, as well as those of other participants in communication. The integrity of information is therefore disturbed; changes are manifested as technical, semantic and social errors and lead to the creation of information, misinformation and disinformation. These become legitimate points of interest in Information Science. Criteria for differentiation among information, misinformation and disinformation are suggested, based on their value for different participants in interaction. The notion of relevance therefore needs to be redefined, and the efficiency of information is analyzed from the point of view of participants in the communication process. Usability of information is a measure of the efficiency of the information process as judged by the user. Relevance is an accepted term for measuring information usability by the user. Usefulness of information is a mea...
2022
The spread of misinformation online and its impact on society have become a pressing issue for both the technology industry and democracies around the world. Developing new tools and methods to mitigate this threat is critical. In recent years, "fake news" has received increasing attention because the term covers intentionally false, deceptive stories as well as factual errors, satire, and stories a person simply does not like. Few users possess the digital media literacy skills necessary to navigate these challenges. In this workshop paper, we propose developing a mixed-initiative, crowd-powered system to (i) semi-automatically explore definitions of fake news online, (ii) curate datasets of news stories for machine learning applications, (iii) power browser-based tools that identify misinformation content in situ, and (iv) deploy digital media literacy interventions and news recommendations that increase a user's hardiness against misinformation threats on online news media pla...
Companion of the The Web Conference 2018 on The Web Conference 2018 - WWW '18
The proliferation of misinformation in online news and its amplification by platforms are a growing concern, leading to numerous efforts to improve the detection of and response to misinformation. Given the variety of approaches, collective agreement on the indicators that signify credible content could allow for greater collaboration and data-sharing across initiatives. In this paper, we present an initial set of indicators for article credibility defined by a diverse coalition of experts. These indicators originate both from within an article's text and from external sources or article metadata. As a proof of concept, we present a dataset of 40 articles of varying credibility, annotated with our indicators by 6 trained annotators using specialized platforms. We discuss future steps, including expanding annotation, broadening the set of indicators, and considering their use by platforms and the public, towards the development of interoperable standards for content credibility.
2019
The aim of this study is to find key areas of research that can be useful in the fight against disinformation on Wikipedia. To address this problem we perform a literature review, trying to answer three main questions: (i) What is disinformation? (ii) What are the most popular mechanisms for spreading online disinformation? and (iii) Which mechanisms are currently being used to fight disinformation? For all three questions we first take a general approach, considering studies from different areas such as journalism and communications, sociology, philosophy, and information and political sciences, and then compare those studies with the current situation in the Wikipedia ecosystem. We found that disinformation can be defined as non-accidentally misleading information that is likely to create false beliefs. While the exact definition of misinformation varies across authors, they tend to agree that disinformation is different from other types of misinformation because it requires the intention of deceiving the receiver. A more actionable way to scope disinformation is to define it as a problem of information quality; in Wikipedia, information quality is mainly controlled by the policies of neutral point of view and verifiability. The mechanisms used to spread online disinformation include the coordinated action of online brigades, the usage of bots, and other techniques to create fake content. Under-resourced topics and communities are especially vulnerable to such attacks, and the usage of sock-puppets is one of the most important problems for Wikipedia. The techniques used to fight disinformation on the internet include manual fact-checking done by agencies and communities, as well as automatic techniques to assess the quality and credibility of a given piece of information. Machine learning approaches can be fully automatic or can be used as tools by human fact-checkers.
Wikipedia, and especially Wikidata, play a double role here: they are used by automatic methods as ground truth to determine the credibility of information, and at the same time (and for that reason) they are the target of many attacks. Currently, the main defence of Wikimedia projects against fake news is the work done by community members, especially patrollers, who use mixed techniques to detect and control disinformation campaigns on Wikipedia. We conclude that in order to keep Wikipedia as free as possible from disinformation, it is necessary to help patrollers detect disinformation early and assess the credibility of external sources. More research is needed to develop tools that use state-of-the-art machine learning techniques to detect potentially dangerous content, empowering patrollers to deal with attacks that are becoming more complex and sophisticated.
2018
Online media are ubiquitous and consumed by billions of people globally. Recently, however, several phenomena regarding online media have emerged that pose a severe threat to media consumption and reception and carry the potential of manipulating opinions and, thus, (re)actions, on a large scale. Lumped together under the label "fake news", these phenomena comprise, among others, maliciously manipulated content, bad journalism, parodies, satire, propaganda and several other types of false news; related phenomena are the often-cited filter bubble (echo chamber) effect and the amount of abusive language used online. In an earlier paper we described an architectural and technological approach to empower users to handle these online media phenomena. In this article we provide a first approach to a metadata scheme that will eventually enable the standardised annotation of these phenomena in online media. We also show an initial version of a tool that enables the creation, visualisation a...
2015
The rise of the internet has led to it becoming the biggest information-sharing platform in the world today. Its ease of access and users' ability to upload anything they see fit mean that it is a benefit as well as a disadvantage: the proportion of quality information disseminated has been reduced through the two phenomena of misinformation and disinformation. These phenomena may have detrimental effects on society and compromise the credibility and trustworthiness of a vast number of online information sources. It is therefore important for internet users to evaluate any sources that may influence them or their knowledge of the world. This paper aims to provide internet users with a framework/taxonomy that may be used in an effort to access quality information. It achieves this through the provision of a scoring system that evaluates each source against a number of criteria, giving the evaluator a total score which places the source in one of the three categories of information, misinformation or disinformation. These categories indicate the possible nature of the source and whether it should be trusted.
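The criteria-based scoring idea in this abstract can be sketched as follows; the criteria names, weights, and thresholds here are invented for illustration and are not the paper's actual rubric:

```python
# Hypothetical sketch of a criteria-based source-scoring taxonomy:
# each satisfied criterion contributes weighted points, and the total
# maps the source into one of three categories. All names, weights,
# and thresholds are illustrative, not taken from the paper.
CRITERIA = {
    "identifiable_author": 2,
    "cites_sources": 3,
    "recently_updated": 1,
    "no_sensational_language": 2,
    "corroborated_elsewhere": 3,
}

def score_source(observed: dict) -> int:
    """Sum the weights of the criteria the source satisfies."""
    return sum(w for name, w in CRITERIA.items() if observed.get(name))

def categorise(score: int) -> str:
    """Map a total score to one of the paper's three categories."""
    if score >= 8:
        return "information"
    if score >= 4:
        return "misinformation"
    return "disinformation"

source = {"identifiable_author": True, "cites_sources": True,
          "corroborated_elsewhere": True}
print(categorise(score_source(source)))  # score 2+3+3 = 8 -> "information"
```

The design choice worth noting is that the output is a coarse three-way label rather than a raw score, which matches the abstract's framing of information, misinformation and disinformation as categories a reader can act on.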
International Journal of Advanced Computer Science and Applications, 2022
The expeditious flow of information over the web and its convenience have increased the fear of the rampant spread of misinformation. This poses a health threat and an unprecedented issue to the world, impacting people's lives. To cater to this problem, there is a need to detect misinformation. Recent techniques in this area focus on static models based on feature extraction and classification. However, data may change at different time intervals, and the veracity of data needs to be checked as it gets updated. There is a lack of models in the literature that can handle incremental data, check the veracity of data and detect misinformation. To fill this gap, the authors have proposed a novel Veracity Scanning Model (VSM) to detect misinformation in the healthcare domain by iteratively fact-checking contents that evolve over time. In this approach, healthcare web URLs are classified as legitimate or non-legitimate using sentiment analysis as a feature, document similarity measures to perform fact-checking of URLs, and incremental learning to handle the arrival of incremental data. The experimental results show that the Jaccard distance measure outperformed other techniques, with an accuracy of 79.2% with a Random Forest classifier, while the cosine similarity measure showed a lower accuracy of 60.4% with a Support Vector Machine classifier. Also, when implemented as an algorithm, Euclidean distance showed accuracies of 97.14% and 98.33% for the train and test data, respectively.
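As an illustration of the two similarity measures this abstract compares, here is a minimal, self-contained sketch; the whitespace tokenisation and example strings are assumptions, not the VSM's actual pipeline:

```python
# Hypothetical sketch of the document-similarity step: compare a claim
# against a reference text with Jaccard distance (over token sets) and
# cosine similarity (over term-frequency vectors).
from collections import Counter
import math

def jaccard_distance(a: str, b: str) -> float:
    """1 - |intersection| / |union| over lowercase token sets."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    if not sa and not sb:
        return 0.0
    return 1.0 - len(sa & sb) / len(sa | sb)

def cosine_similarity(a: str, b: str) -> float:
    """Cosine of the angle between term-frequency vectors."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

claim = "vitamin c cures the common cold"
reference = "vitamin c does not cure the common cold"
print(jaccard_distance(claim, reference))
print(cosine_similarity(claim, reference))
```

Note that Jaccard works on token *sets* (presence only), while cosine weights repeated terms, which is one plausible reason the two measures can rank the same URL pair differently, as the reported accuracy gap suggests.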