Social Sciences: Sociology and Political Science

Misinformation and Its Impacts

Description

This cluster of papers focuses on the spread of misinformation, fake news, and conspiracy theories through social media platforms. It explores topics such as infodemics, fact-checking, rumor detection, online credibility, and the impact of misinformation on public health and political polarization.

Keywords

Misinformation; Fake News; Social Media; Infodemic; Conspiracy Theories; Fact-Checking; Rumor Detection; Online Credibility; Polarization; Health Misinformation

This edition of the Handbook follows the first edition by 10 years. The earlier edition was a promissory note, presaging the directions in which the then-emerging field of social cognition was likely to move. The field was then in its infancy and the areas of research and theory that came to dominate the field during the next decade were only beginning to surface. The concepts and methods used had frequently been borrowed from cognitive psychology and had been applied to phenomena in a very limited number of areas. Nevertheless, social cognition promised to develop rapidly into an important area of psychological inquiry that would ultimately have an impact on not only several areas of psychology but other fields as well. The promises made by the earlier edition have generally been fulfilled. Since its publication, social cognition has become one of the most active areas of research in the entire field of psychology; its influence has extended to health and clinical psychology, and personality, as well as to political science, organizational behavior, and marketing and consumer behavior. The impact of social cognition theory and research within a very short period of time is incontrovertible. The present volumes provide a comprehensive and detailed review of the theoretical and empirical work that has been performed during these years, and of its implications for information processing in a wide variety of domains. The handbook is divided into two volumes. The first provides an overview of basic research and theory in social information processing, covering the automatic and controlled processing of information and its implications for how information is encoded and stored in memory, the mental representation of persons -- including oneself -- and events, the role of procedural knowledge in information processing, inference processes, and response processes. Special attention is given to the cognitive determinants and consequences of affect and emotion. The second volume provides detailed discussions of the role of information processing in specific areas such as stereotyping; communication and persuasion; political judgment; close relationships; organizational, clinical and health psychology; and consumer behavior. The contributors are theorists and researchers who have themselves carried out important studies in the areas to which their chapters pertain. In combination, the contents of this two-volume set provide a sophisticated and in-depth treatment of both theory and research in this major area of psychological inquiry and the directions in which it is likely to proceed in the future.
The primary purpose of this paper is to develop a reasoned classification of illocutionary acts into certain basic categories or types. It is to answer the question: How many kinds of illocutionary acts are there?
Resistance and Persuasion is the first book to analyze the nature of resistance and demonstrate how it can be reduced, overcome, or used to promote persuasion. By examining resistance, and providing strategies for overcoming it, this new book generates insight into new facets of influence and persuasion. With contributions from the leaders in the field, this book presents original ideas and research that demonstrate how understanding resistance can improve persuasion, compliance, and social influence. Many of the authors present their research for the first time. Four faces of resistance are identified: reactance, distrust, scrutiny, and inertia. The concluding chapter summarizes the book's theoretical contributions and establishes a resistance-based research agenda for persuasion and attitude change. This new book helps to establish resistance as a legitimate sub-field of persuasion that is equal in force to influence. Resistance and Persuasion offers many new revelations about persuasion: *Acknowledging resistance helps to reduce it. *Raising reactance makes a strong message more persuasive. *Putting arguments into a narrative increases their influence. *Identifying illegitimate sources of information strengthens the influence of legitimate sources. *Looking ahead reduces resistance to persuasive attempts. This volume will appeal to researchers and students from a variety of disciplines including social, cognitive, and health psychology, communication, marketing, political science, journalism, and education.
Belief systems have never surrendered easily to empirical study or quantification. Whole belief systems may also be compared in a rough way with respect to the range of objects that are referents for the ideas and attitudes in the system. A realistic picture of political belief systems in the mass public, then, is neither one that omits issues and policy demands completely nor one that presumes widespread ideological coherence; it is rather one that captures with some fidelity the fragmentation, narrowness, and diversity of these demands. Perhaps the simplest and most obvious consequences are those that depend on the fact that reduced constraint with reduced information means in turn that ideologically constrained belief systems are necessarily more common in upper than in lower social strata. The controversy over internal communism provides a classic example of a mortal struggle among elites that passed almost unwitnessed by an astonishing portion of the mass public.
The current studies investigated the potential impact of anti-vaccine conspiracy beliefs, and exposure to anti-vaccine conspiracy theories, on vaccination intentions. In Study 1, British parents completed a questionnaire measuring beliefs in anti-vaccine conspiracy theories and the likelihood that they would have a fictitious child vaccinated. Results revealed a significant negative relationship between anti-vaccine conspiracy beliefs and vaccination intentions. This effect was mediated by the perceived dangers of vaccines, and feelings of powerlessness, disillusionment and mistrust in authorities. In Study 2, participants were exposed to information that either supported or refuted anti-vaccine conspiracy theories, or a control condition. Results revealed that participants who had been exposed to material supporting anti-vaccine conspiracy theories showed less intention to vaccinate than those in the anti-conspiracy condition or controls. This effect was mediated by the same variables as in Study 1. These findings point to the potentially detrimental consequences of anti-vaccine conspiracy theories, and highlight their potential role in shaping health-related behaviors.
Surveys are popular methods to measure public perceptions in emergencies but can be costly and time consuming. We suggest and evaluate a complementary "infoveillance" approach using Twitter during the 2009 H1N1 pandemic. Our study aimed to: 1) monitor the use of the terms "H1N1" versus "swine flu" over time; 2) conduct a content analysis of "tweets"; and 3) validate Twitter as a real-time content, sentiment, and public attention trend-tracking tool. Between May 1 and December 31, 2009, we archived over 2 million Twitter posts containing the keywords "swine flu," "swineflu," and/or "H1N1" using Infovigil, an infoveillance system. Tweets using "H1N1" increased from 8.8% to 40.5% (R(2) = .788; p < .001), indicating a gradual adoption of World Health Organization-recommended terminology. 5,395 tweets were randomly selected from 9 days, 4 weeks apart, and coded using a tri-axial coding scheme. To track tweet content and to test the feasibility of automated coding, we created database queries for keywords and correlated these results with manual coding. Content analysis indicated resource-related posts were most commonly shared (52.6%). 4.5% of cases were identified as misinformation. News websites were the most popular sources (23.2%), while government and health agencies were linked only 1.5% of the time. 7 of 10 automated queries correlated with manual coding. Several Twitter activity peaks coincided with major news stories. Our results correlated well with H1N1 incidence data. This study illustrates the potential of using social media to conduct "infodemiology" studies for public health. 2009 H1N1-related tweets were primarily used to disseminate information from credible sources, but were also a source of opinions and experiences. Tweets can be used for real-time content analysis and knowledge translation research, allowing health authorities to respond to public concerns.
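To make the validation step concrete, here is a minimal sketch of keyword-query coding of the kind the study checks against manual coding. The categories, regular expressions, tweets, and gold labels below are invented for illustration; the paper's tri-axial coding scheme and actual queries are not reproduced.

```python
import re
import numpy as np

# Hypothetical keyword queries standing in for the study's database queries.
QUERIES = {
    "resource":       re.compile(r"\b(link|update|cases|who|cdc)\b", re.I),
    "personal":       re.compile(r"\b(i|my|me)\b.*\b(flu|sick|fever)\b", re.I),
    "misinformation": re.compile(r"\b(hoax|conspiracy|bioweapon)\b", re.I),
}

def auto_code(tweet):
    """Flag each category whose keyword query matches the tweet."""
    return [int(bool(rx.search(tweet))) for rx in QUERIES.values()]

tweets = [
    "WHO update: new H1N1 cases confirmed, link inside",
    "I think my fever means I caught the flu",
    "swine flu is a hoax engineered as a bioweapon",
]
manual = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]])  # invented manual codes
auto = np.array([auto_code(t) for t in tweets])

# Per-category agreement between automated and manual coding.
for j, cat in enumerate(QUERIES):
    print(f"{cat}: agreement = {(auto[:, j] == manual[:, j]).mean():.2f}")
```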
This article reviews selected literature related to the credibility of information, including (1) the general markers of credibility, and how different source, message and receiver characteristics affect people's perceptions of information; (2) the impact of information medium on the assessment of credibility; and (3) the assessment of credibility in the context of information presented on the Internet. The objective of the literature review is to synthesize the current state of knowledge in this area, develop new ways to think about how people interact with information presented via the Internet, and suggest next steps for research and practical applications. The review examines empirical evidence, key reviews, and descriptive material related to credibility in general, and in terms of on-line media. A general discussion of credibility and persuasion and a description of recent work on the credibility and persuasiveness of computer-based applications is presented. Finally, the article synthesizes what we have learned from various fields, and proposes a model as a framework for much-needed future research in this area.
The widespread prevalence and persistence of misinformation in contemporary societies, such as the false belief that there is a link between childhood vaccinations and autism, is a matter of public concern. For example, the myths surrounding vaccinations, which prompted some parents to withhold immunization from their children, have led to a marked increase in vaccine-preventable disease, as well as unnecessary public expenditure on research and public-information campaigns aimed at rectifying the situation. We first examine the mechanisms by which such misinformation is disseminated in society, both inadvertently and purposely. Misinformation can originate from rumors but also from works of fiction, governments and politicians, and vested interests. Moreover, changes in the media landscape, including the arrival of the Internet, have fundamentally influenced the ways in which information is communicated and misinformation is spread. We next move to misinformation at the level of the individual, and review the cognitive factors that often render misinformation resistant to correction. We consider how people assess the truth of statements and what makes people believe certain things but not others. We look at people's memory for misinformation and answer the questions of why retractions of misinformation are so ineffective in memory updating and why efforts to retract misinformation can even backfire and, ironically, increase misbelief. Though ideology and personal worldviews can be major obstacles for debiasing, there nonetheless are a number of effective techniques for reducing the impact of misinformation, and we pay special attention to these factors that aid in debiasing. We conclude by providing specific recommendations for the debunking of misinformation. These recommendations pertain to the ways in which corrections should be designed, structured, and applied in order to maximize their impact. Grounded in cognitive psychological theory, these recommendations may help practitioners (including journalists, health professionals, educators, and science communicators) design effective misinformation retractions, educational tools, and public-information campaigns.
In a new test of the process of forgetting, the authors found that subjects, at the time of exposure, discounted material from "untrustworthy" sources. In time, however, the subjects tended to disassociate the content and the source with the result that the original scepticism faded and the "untrustworthy" material was accepted. Lies, in fact, seemed to be remembered better than truths.
This article reviews the peer-reviewed literature addressing the healthcare information available on YouTube. Inclusion and exclusion criteria were determined, and the online databases PubMed and Web of Knowledge were searched using the search phrases: (1) YouTube* AND Health* and (2) YouTube* AND Healthcare*. In all, 18 articles were reviewed, with the results suggesting that (1) YouTube is increasingly being used as a platform for disseminating health information; (2) content and frame analysis were the primary techniques employed by researchers to analyze the characteristics of this information; (3) YouTube contains misleading information, primarily anecdotal, that contradicts the reference standards, and the probability of a lay user finding such content is relatively high; (4) the retrieval of relevant videos is dependent on the search term used; and (5) videos from government organizations and professional associations contained trustworthy and high-quality information. YouTube is used as a medium for promoting unscientific therapies and drugs that are yet to be approved by the appropriate agencies and has the potential to change the beliefs of patients concerning controversial topics such as vaccinations. This review recognizes the need to design interventions to enable consumers to critically assimilate the information posted on YouTube with more authoritative information sources to make effective healthcare decisions.
Metzger, M. J., Flanagin, A. J., & Medders, R. B. (2010). Social and heuristic approaches to credibility evaluation online. Journal of Communication, 60(3), 413–439. https://doi.org/10.1111/j.1460-2466.2010.01488.x
The wide availability of user-provided content in online social media facilitates the aggregation of people around common interests, worldviews, and narratives. However, the World Wide Web is a fruitful environment for the massive diffusion of unverified rumors. In this work, using a massive quantitative analysis of Facebook, we show that information related to distinct narratives (conspiracy theories and scientific news) generates homogeneous and polarized communities (i.e., echo chambers) having similar information consumption patterns. Then, we derive a data-driven percolation model of rumor spreading that demonstrates that homogeneity and polarization are the main determinants for predicting cascades' size.
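As a rough intuition for the percolation-style result, the toy simulation below spreads a rumor over a random graph in which transmission is far more likely between like-minded nodes, so cascade size depends strongly on the homophily parameter. The graph model, parameters, and transmission rule are illustrative assumptions, not the authors' fitted model.

```python
import random

# Toy percolation-style cascade: a rumor passes mostly between
# like-minded neighbors, so homophily shapes cascade size.
def cascade_size(n=2000, avg_deg=8, homophily=0.9, seed=0):
    rng = random.Random(seed)
    opinion = [rng.random() < 0.5 for _ in range(n)]   # two "narratives"
    nbrs = [[] for _ in range(n)]
    for _ in range(n * avg_deg // 2):                  # random edges
        a, b = rng.randrange(n), rng.randrange(n)
        if a != b:
            nbrs[a].append(b); nbrs[b].append(a)
    active, seen = [0], {0}
    while active:
        u = active.pop()
        for v in nbrs[u]:
            if v in seen:
                continue
            # like-minded neighbors adopt often; cross-community rarely
            p = homophily if opinion[u] == opinion[v] else 1 - homophily
            if rng.random() < p:
                seen.add(v); active.append(v)
    return len(seen)

for h in (0.5, 0.7, 0.9):
    print(f"homophily={h}: cascade size = {cascade_size(homophily=h)}")
```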
A survey of 348 residents of southwestern New Jersey showed that most believed that several of a list of 10 conspiracy theories were at least probably true. People who believed in one conspiracy were more likely to also believe in others. Belief in conspiracies was correlated with anomia, lack of interpersonal trust, and insecurity about employment. Black and Hispanic respondents were more likely to believe in conspiracy theories than were white respondents. Young people were slightly more likely to believe in conspiracy theories, but there were few significant correlations with gender, educational level, or occupational category.
What psychological factors drive the popularity of conspiracy theories, which explain important events as secret plots by powerful and malevolent groups? What are the psychological consequences of adopting these theories? We review the current research and find that it answers the first of these questions more thoroughly than the second. Belief in conspiracy theories appears to be driven by motives that can be characterized as epistemic (understanding one's environment), existential (being safe and in control of one's environment), and social (maintaining a positive image of the self and the social group). However, little research has investigated the consequences of conspiracy belief, and to date, this research does not indicate that conspiracy belief fulfills people's motivations. Instead, for many people, conspiracy belief may be more appealing than satisfying. Further research is needed to determine for whom, and under what conditions, conspiracy theories may satisfy key psychological motives.
Social media for news consumption is a double-edged sword. On the one hand, its low cost, easy access, and rapid dissemination of information lead people to seek out and consume news from social media. On the other hand, it enables the wide spread of "fake news", i.e., low-quality news with intentionally false information. The extensive spread of fake news has the potential for extremely negative impacts on individuals and society. Therefore, fake news detection on social media has recently become an emerging research area that is attracting tremendous attention. Fake news detection on social media presents unique characteristics and challenges that make existing detection algorithms from traditional news media ineffective or not applicable. First, fake news is intentionally written to mislead readers into believing false information, which makes it difficult and nontrivial to detect based on news content alone; therefore, we need to include auxiliary information, such as user social engagements on social media, to help make a determination. Second, exploiting this auxiliary information is challenging in and of itself, as users' social engagements with fake news produce data that is big, incomplete, unstructured, and noisy. Because the issue of fake news detection on social media is both challenging and relevant, we conducted this survey to further facilitate research on the problem. In this survey, we present a comprehensive review of detecting fake news on social media, including fake news characterizations grounded in psychology and social theories, existing algorithms from a data mining perspective, evaluation metrics, and representative datasets. We also discuss related research areas, open problems, and future research directions for fake news detection on social media.
This paper is based on a review of how previous studies have defined and operationalized the term "fake news." An examination of 34 academic articles that used the term "fake news" between 2003 and 2017 resulted in a typology of types of fake news: news satire, news parody, fabrication, manipulation, advertising, and propaganda. These definitions are based on two dimensions: levels of facticity and deception. Such a typology is offered to clarify what we mean by fake news and to guide future studies.
We investigated the differential diffusion of all of the verified true and false news stories distributed on Twitter from 2006 to 2017. The data comprise ~126,000 stories tweeted by ~3 million people more than 4.5 million times. We classified news as true or false using information from six independent fact-checking organizations that exhibited 95 to 98% agreement on the classifications. Falsehood diffused significantly farther, faster, deeper, and more broadly than the truth in all categories of information, and the effects were more pronounced for false political news than for false news about terrorism, natural disasters, science, urban legends, or financial information. We found that false news was more novel than true news, which suggests that people were more likely to share novel information. Whereas false stories inspired fear, disgust, and surprise in replies, true stories inspired anticipation, sadness, joy, and trust. Contrary to conventional wisdom, robots accelerated the spread of true and false news at the same rate, implying that false news spreads more than the truth because humans, not robots, are more likely to spread it.
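The diffusion comparison rests on simple cascade metrics. A minimal sketch, assuming a hypothetical retweet tree given as parent-child edges (the paper reconstructs such cascades from Twitter data at scale):

```python
from collections import defaultdict

# Hypothetical retweet cascade as (parent, child) edges.
edges = [("root", "a"), ("root", "b"), ("a", "c"), ("a", "d"), ("d", "e")]

children = defaultdict(list)
for parent, child in edges:
    children[parent].append(child)

def cascade_metrics(root):
    """Return (size, depth, max breadth) of the cascade tree."""
    size, depth, breadth = 0, 0, defaultdict(int)
    frontier = [(root, 0)]
    while frontier:
        node, d = frontier.pop()
        size += 1
        depth = max(depth, d)
        breadth[d] += 1
        frontier += [(c, d + 1) for c in children[node]]
    return size, depth, max(breadth.values())

print(cascade_metrics("root"))  # (6, 3, 2): 6 nodes, depth 3, max breadth 2
```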
Fake news emerged as an apparent global problem during the 2016 U.S. Presidential election. Addressing it requires a multidisciplinary effort to define the nature and extent of the problem, detect fake news in real time, and mitigate its potentially harmful effects. This will require a better understanding of how the Internet spreads content, how people process news, and how the two interact. We review the state of knowledge in these areas and discuss two broad potential mitigation strategies: better enabling individuals to identify fake news, and intervention within the platforms to reduce the attention given to fake news. The cooperation of Internet platforms (especially Facebook, Google, and Twitter) with researchers will be critical to understanding the scale of the issue and the effectiveness of possible interventions.
Many democratic nations are experiencing increased levels of false information circulating through social media and political websites that mimic journalism formats. In many cases, this disinformation is associated with the efforts of movements and parties on the radical right to mobilize supporters against centre parties and the mainstream press that carries their messages. The spread of disinformation can be traced to growing legitimacy problems in many democracies. Declining citizen confidence in institutions undermines the credibility of official information in the news and opens publics to alternative information sources. Those sources are often associated with both nationalist (primarily radical right) and foreign (commonly Russian) strategies to undermine institutional legitimacy and destabilize centre parties, governments and elections. The Brexit campaign in the United Kingdom and the election of Donald Trump in the United States are among the most prominent examples of disinformation campaigns intended to disrupt normal democratic order, but many other nations display signs of disinformation and democratic disruption. The origins of these problems and their implications for political communication research are explored.
The following report is intended to provide an overview of the current state of the literature on the relationship between social media; political polarization; and political "disinformation," a term used to encompass a wide range of types of information about politics found online, including "fake news," rumors, deliberately factually incorrect information, inadvertently factually incorrect information, politically slanted information, and "hyperpartisan" news. The review of the literature is provided in six separate sections, each of which can be read individually but that cumulatively are intended to provide an overview of what is known, and unknown, about the relationship between social media, political polarization, and disinformation. The report concludes by identifying key gaps in our understanding of these phenomena and the data that are needed to address them.
To understand how Twitter bots and trolls ("bots") promote online health content, we compared bots' rates of vaccine-relevant messages to average users', collected online from July 2014 through September 2017. We estimated the likelihood that users were bots, comparing proportions of polarized and antivaccine tweets across user types. We conducted a content analysis of a Twitter hashtag associated with Russian troll activity. Compared with average users, Russian trolls (χ2(1) = 102.0; P < .001), sophisticated bots (χ2(1) = 28.6; P < .001), and "content polluters" (χ2(1) = 7.0; P < .001) tweeted about vaccination at higher rates. Whereas content polluters posted more antivaccine content (χ2(1) = 11.18; P < .001), Russian trolls amplified both sides. Unidentifiable accounts were more polarized (χ2(1) = 12.1; P < .001) and antivaccine (χ2(1) = 35.9; P < .001). Analysis of the Russian troll hashtag showed that its messages were more political and divisive. Whereas bots that spread malware and unsolicited content disseminated antivaccine messages, Russian trolls promoted discord. Accounts masquerading as legitimate users create false equivalency, eroding public consensus on vaccination. Public health implications: directly confronting vaccine skeptics enables bots to legitimize the vaccine debate. More research is needed to determine how best to combat bot-driven content.
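The reported tests are chi-square comparisons of tweet-rate proportions across user types. A sketch with invented counts, just to show the computation behind figures like χ2(1) = 102.0:

```python
from scipy.stats import chi2_contingency

# Hypothetical counts comparing antivaccine tweet rates across user types;
# the real counts are in the paper.
#            [antivaccine, other]
table = [
    [120, 880],   # content polluters
    [ 60, 940],   # average users
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.1f}, p = {p:.4f}")
```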
Scholarly efforts to understand conspiracy theories have grown significantly in recent years, and there is now a broad and interdisciplinary literature. In reviewing this body of work, we ask three specific questions. First, what factors are associated with conspiracy beliefs? Our review of the literature shows that conspiracy beliefs result from a range of psychological, political, and social factors. Second, how are conspiracy theories communicated? Here, we explain how conspiracy theories are shared among individuals and spread through traditional and social media platforms. Third, what are the societal risks and rewards associated with conspiracy theories? By focusing on politics and science, we argue that conspiracy theories do more harm than good. We conclude by suggesting several promising avenues for future research.
Fake news sharing in 2016 was rare but significantly more common among older Americans.
The spread of fake news on social media became a public concern in the United States after the 2016 presidential election. We examined exposure to and sharing of fake news by registered voters on Twitter and found that engagement with fake news sources was extremely concentrated. Only 1% of individuals accounted for 80% of fake news source exposures, and 0.1% accounted for nearly 80% of fake news sources shared. Individuals most likely to engage with fake news sources were conservative leaning, older, and highly engaged with political news. A cluster of fake news sources shared overlapping audiences on the extreme right, but for people across the political spectrum, most political news exposure still came from mainstream media outlets.
Automatic fake news detection is a challenging problem in deception detection, and it has tremendous real-world political and social impacts. However, statistical approaches to combating fake news have been dramatically limited by the lack of labeled benchmark datasets. In this paper, we present LIAR: a new, publicly available dataset for fake news detection. We collected 12.8K manually labeled short statements, spanning a decade and various contexts, from PolitiFact.com, which provides a detailed analysis report and links to source documents for each case. This dataset can be used for fact-checking research as well. Notably, this new dataset is an order of magnitude larger than previous public fake news datasets of similar type. Empirically, we investigate automatic fake news detection based on surface-level linguistic patterns. We have designed a novel, hybrid convolutional neural network to integrate metadata with text. We show that this hybrid approach can improve a text-only deep learning model.
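A minimal sketch of a hybrid text-plus-metadata classifier in the spirit of the paper's model: a small CNN over statement tokens fused with an embedding of speaker metadata, producing logits over LIAR's six truthfulness labels. Layer sizes, vocabularies, and the fusion scheme are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class HybridFakeNewsNet(nn.Module):
    def __init__(self, vocab=5000, meta_vocab=50, emb=64, n_classes=6):
        super().__init__()
        self.text_emb = nn.Embedding(vocab, emb)
        self.conv = nn.Conv1d(emb, 128, kernel_size=3, padding=1)
        self.meta_emb = nn.Embedding(meta_vocab, 16)
        self.out = nn.Linear(128 + 16, n_classes)  # 6 truthfulness labels

    def forward(self, tokens, meta):
        # tokens: (batch, seq_len) word ids; meta: (batch,) metadata ids
        x = self.text_emb(tokens).transpose(1, 2)        # (batch, emb, seq)
        x = torch.relu(self.conv(x)).max(dim=2).values   # max-pool over time
        m = self.meta_emb(meta)                          # metadata embedding
        return self.out(torch.cat([x, m], dim=1))        # fused logits

model = HybridFakeNewsNet()
logits = model(torch.randint(0, 5000, (2, 30)), torch.randint(0, 50, (2,)))
print(logits.shape)  # torch.Size([2, 6])
```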
Contemporary commentators describe the current period as "an era of fake news" in which misinformation, generated intentionally or unintentionally, spreads rapidly. Although affecting all areas of life, it poses particular problems in the health arena, where it can delay or prevent effective care, in some cases threatening the lives of individuals. While examples of the rapid spread of misinformation date back to the earliest days of scientific medicine, the internet, by allowing instantaneous communication and powerful amplification, has brought about a quantum change. In democracies where ideas compete in the marketplace for attention, accurate scientific information, which may be difficult to comprehend and even dull, is easily crowded out by sensationalized news. In order to uncover the current evidence and better understand the mechanism of misinformation spread, we report a systematic review of the nature and potential drivers of health-related misinformation. We searched PubMed, Cochrane, Web of Science, Scopus and Google databases to identify relevant methodological and empirical articles published between 2012 and 2018. A total of 57 articles were included for full-text analysis. Overall, we observe an increasing trend in published articles on health-related misinformation and the role of social media in its propagation. The most extensively studied topics involving misinformation relate to vaccination, Ebola and Zika virus, although others, such as nutrition, cancer, fluoridation of water and smoking, also featured. Studies adopted theoretical frameworks from psychology and network science, while co-citation analysis revealed potential for greater collaboration across fields. Most studies employed content analysis, social network analysis or experiments, drawing on disparate disciplinary paradigms. Future research should examine the susceptibility of different sociodemographic groups to misinformation and understand the role of belief systems in the intention to spread misinformation. Further interdisciplinary research is also warranted to identify effective and tailored interventions to counter the spread of health-related misinformation online.
… An abundance of research has shown that human beings are conservative processors of fallible information. Such experiments compare human behavior with the outputs of Bayes's theorem, the formally optimal rule about how opinions (that is, probabilities) should be revised on the basis of new information. It turns out that opinion change is very orderly, and usually proportional to numbers calculated from Bayes's theorem – but it is insufficient in amount. A convenient first approximation to the data would say that it takes anywhere from two to five observations to do one observation's worth of work in inducing a subject to change his opinions. A number of experiments have been aimed at an explanation for this phenomenon. They show that a major, probably the major, cause of conservatism is human misaggregation of the data. That is, men perceive each datum accurately and are well aware of its individual diagnostic meaning, but are unable to combine its diagnostic meaning well with the diagnostic meaning of other data when revising their opinions. …
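A worked instance of the Bayesian yardstick used in these experiments, with invented numbers: two hypotheses about an urn's composition, equal priors, and red draws as evidence. Human subjects typically revise less than the computed posteriors.

```python
# Two urns: H1 is 70% red, H2 is 30% red, equal priors. Each red draw
# multiplies the odds for H1 by the likelihood ratio 0.7/0.3.
def posterior_after(n_red, likelihood_ratio=0.7 / 0.3, prior_odds=1.0):
    odds = prior_odds * likelihood_ratio ** n_red
    return odds / (1 + odds)

print(f"Bayes, 1 red draw:  P(H1) = {posterior_after(1):.3f}")  # 0.700
print(f"Bayes, 3 red draws: P(H1) = {posterior_after(3):.3f}")  # 0.927
# "Conservatism": subjects who see 3 draws often report something closer
# to the single-draw posterior; it takes several observations to do one
# observation's worth of opinion change.
```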
WHO's newly launched platform aims to combat misinformation around COVID-19. John Zarocostas reports from Geneva. WHO is leading the effort to slow the spread of the 2019 coronavirus disease (COVID-19) outbreak. But a global epidemic of misinformation, spreading rapidly through social media platforms and other outlets, poses a serious problem for public health. "We're not just fighting an epidemic; we're fighting an infodemic", said WHO Director-General Tedros Adhanom Ghebreyesus at the Munich Security Conference on Feb 15. Immediately after COVID-19 was declared a Public Health Emergency of International Concern, WHO's risk communication team launched a new information platform called WHO Information Network for Epidemics (EPI-WIN), with the aim of using a series of amplifiers to share tailored information with specific target groups. Sylvie Briand, director of Infectious Hazards Management at WHO's Health Emergencies Programme and architect of WHO's strategy to counter the infodemic risk, told The Lancet, "We know that every outbreak will be accompanied by a kind of tsunami of information, but also within this information you always have misinformation, rumours, etc. We know that even in the Middle Ages there was this phenomenon". "But the difference now with social media is that this phenomenon is amplified, it goes faster and further, like the viruses that travel with people and go faster and further. So it is a new challenge, and the challenge is the [timing] because you need to be faster if you want to fill the void…What is at stake during an outbreak is making sure people will do the right thing to control the disease or to mitigate its impact. So it is not only information to make sure people are informed; it is also making sure people are informed to act appropriately." About 20 staff and some consultants are involved in WHO's communications teams globally, at any given time. This includes social media personnel at each of WHO's six regional offices, risk communications consultants, and WHO communications officers. Aleksandra Kuzmanovic, social media manager with WHO's department of communications, told The Lancet that "fighting infodemics and misinformation is a joint effort between our technical risk communications [team] and colleagues who are working on the EPI-WIN platform, where they communicate with different…professionals providing them with advice and guidelines and also receiving information". Kuzmanovic said, "In my role, I am in touch with Facebook, Twitter, Tencent, Pinterest, TikTok, and also my colleagues in the China office who are working closely with Chinese social media platforms…So when we see some questions or rumours spreading, we write it down, we go back to our risk communications colleagues and then they help us find evidence-based answers". "Another thing we are doing with social media platforms, and that is something we are putting our strongest efforts in, is to ensure no matter where people live…when they're on Facebook, Twitter, or Google, when they search for 'coronavirus' or 'COVID-19' or [a] related term, they have a box that…directs them to a reliable source: either to [the] WHO website, to their ministry of health, or to a public health institute or centre for disease control", she said.
Google, Kuzmanovic noted, has created an SOS Alert on COVID-19 for the six official UN languages, and is also expanding in some other languages. The idea is to make the first information that the public receive be from the WHO website and the social media accounts of WHO and Dr Tedros. WHO also uses social media for real-time updates. WHO is also working closely with UNICEF and other international agencies that have extensive experience in risk communications, such as the International Federation of Red Cross and Red Crescent Societies. Carlos Navarro, head of Public Health Emergencies at UNICEF, the children's agency, told The Lancet that while a lot of incorrect information is spreading through social media, a lot is also coming from traditional mass media. “Often, they pick the most extreme pictures they can find…There is overkill on the use of [personal protective equipment] and that tends to be the photos that are published everywhere, in all major newspapers and TV…that is, in fact, sending the wrong message”, Navarro said. David Heymann, professor of infectious disease epidemiology at the London School of Hygiene & Tropical Medicine, told The Lancet that the traditional media has a key role in providing evidence-based information to the general public, which will then hopefully be picked up on social media. He also observed that for both social and conventional media, it is important that the public health community help the media to “better understand what they should be looking for, because the media sometimes gets ahead of the evidence”.
We need to rapidly detect and respond to public rumours, perceptions, attitudes and behaviours around COVID-19 and control measures. The creation of an interactive platform and dashboard to provide real-time alerts of rumours and concerns about coronavirus spreading globally would enable public health officials and relevant stakeholders to respond rapidly with a proactive and engaging narrative that can mitigate misinformation.
Across two studies with more than 1,700 U.S. adults recruited online, we present evidence that people share false claims about COVID-19 partly because they simply fail to think sufficiently about whether or not the content is accurate when deciding what to share. In Study 1, participants were far worse at discerning between true and false content when deciding what they would share on social media relative to when they were asked directly about accuracy. Furthermore, greater cognitive reflection and science knowledge were associated with stronger discernment. In Study 2, we found that a simple accuracy reminder at the beginning of the study (i.e., judging the accuracy of a non-COVID-19-related headline) nearly tripled the level of truth discernment in participants' subsequent sharing intentions. Our results, which mirror those found previously for political fake news, suggest that nudging people to think about accuracy is a simple way to improve choices about what to share on social media.
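Truth discernment in this literature is the gap between engagement with true versus false content. A sketch of the computation with invented sharing intentions (the actual item-level data are in the paper):

```python
import numpy as np

# Sharing discernment: mean sharing of true headlines minus mean sharing
# of false headlines. All numbers below are invented for illustration.
def discernment(share_true, share_false):
    return np.mean(share_true) - np.mean(share_false)

control = discernment(share_true=[0.41, 0.38, 0.44],
                      share_false=[0.33, 0.36, 0.30])
nudged = discernment(share_true=[0.40, 0.42, 0.39],
                     share_false=[0.18, 0.22, 0.20])
print(f"control: {control:.3f}, after accuracy nudge: {nudged:.3f}")
```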
Infodemics, often including rumors, stigma, and conspiracy theories, have been common during the COVID-19 pandemic. Monitoring social media data has been identified as the best method for tracking rumors in real time and as a possible way to dispel misinformation and reduce stigma. However, the detection, assessment, and response to rumors, stigma, and conspiracy theories in real time are a challenge. Therefore, we followed and examined COVID-19-related rumors, stigma, and conspiracy theories circulating on online platforms, including fact-checking agency websites, Facebook, Twitter, and online newspapers, and their impacts on public health. Information was extracted between December 31, 2019 and April 5, 2020, and descriptively analyzed. We performed a content analysis of the news articles to compare and contrast data collected from other sources. We identified 2,311 reports of rumors, stigma, and conspiracy theories in 25 languages from 87 countries. Claims were related to illness, transmission and mortality (24%), control measures (21%), treatment and cure (19%), cause of disease including the origin (15%), violence (1%), and miscellaneous (20%). Of the 2,276 reports for which text ratings were available, 1,856 claims were false (82%). Misinformation fueled by rumors, stigma, and conspiracy theories can have potentially serious implications for the individual and community if prioritized over evidence-based guidelines. Health agencies must track misinformation associated with COVID-19 in real time, and engage local communities and government stakeholders to debunk misinformation.
The COVID-19 pandemic poses extraordinary challenges to public health. Because the novel coronavirus is highly contagious, the widespread use of preventive measures such as masking, physical distancing, and eventually vaccination is needed to bring it under control. We hypothesized that accepting conspiracy theories that were circulating in mainstream and social media early in the COVID-19 pandemic in the US would be negatively related to the uptake of preventive behaviors and also of vaccination when a vaccine becomes available. A national probability survey of US adults (N = 1050) was conducted in the latter half of March 2020 and a follow-up with 840 of the same individuals in July 2020. The surveys assessed adoption of preventive measures recommended by public health authorities, vaccination intentions, conspiracy beliefs, perceptions of threat, belief about the safety of vaccines, political ideology, and media exposure patterns. Belief in three COVID-19-related conspiracy theories was highly stable across the two periods and inversely related to the (a) perceived threat of the pandemic, (b) taking of preventive actions, including wearing a face mask, (c) perceived safety of vaccination, and (d) intention to be vaccinated against COVID-19. Conspiracy beliefs in March predicted subsequent mask-wearing and vaccination intentions in July even after controlling for action taken and intentions in March. Although adopting preventive behaviors was predicted by political ideology and conservative media reliance, vaccination intentions were less related to political ideology. Mainstream television news use predicted adopting both preventive actions and vaccination. Because belief in COVID-related conspiracy theories predicts resistance to both preventive behaviors and future vaccination for the virus, it will be critical to confront both conspiracy theories and vaccination misinformation to prevent further spread of the virus in the US. Reducing those barriers will require continued messaging by public health authorities on mainstream media and in particular on politically conservative outlets that have supported COVID-related conspiracy theories.
Misinformation about COVID-19 is a major threat to public health. Using five national samples from the UK (n = 1050 and n = 1150), Ireland (n = 700), the USA (n = 700), Spain (n = 700) and Mexico (n = 700), we examine predictors of belief in the most common statements about the virus that contain misinformation. We also investigate the prevalence of belief in COVID-19 misinformation across different countries and the role of belief in such misinformation in predicting relevant health behaviours. We find that while public belief in misinformation about COVID-19 is not particularly common, a substantial proportion views this type of misinformation as highly reliable in each country surveyed. In addition, a small group of participants find common factual information about the virus highly unreliable. We also find that increased susceptibility to misinformation negatively affects people's self-reported compliance with public health guidance about COVID-19, as well as people's willingness to get vaccinated against the virus and to recommend the vaccine to vulnerable friends and family. Across all countries surveyed, we find that higher trust in scientists and having higher numeracy skills were associated with lower susceptibility to coronavirus-related misinformation. Taken together, these results demonstrate a clear link between susceptibility to misinformation and both vaccine hesitancy and a reduced likelihood to comply with health guidance measures, and suggest that interventions which aim to improve critical thinking and trust in science may be a promising avenue for future research.
We address the diffusion of information about COVID-19 with a massive data analysis on Twitter, Instagram, YouTube, Reddit and Gab. We analyze engagement and interest in the COVID-19 topic and provide a differential assessment of the evolution of the discourse on a global scale for each platform and its users. We fit information spreading with epidemic models, characterizing the basic reproduction number R_0 for each social media platform. Moreover, we characterize information spreading from questionable sources, finding different volumes of misinformation on each platform. However, information from reliable and questionable sources does not present different spreading patterns. Finally, we provide platform-dependent numerical estimates of rumor amplification.
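Illustrating the fitting step: the sketch below fits exponential early growth to a hypothetical daily post count and converts the growth rate to a rough reproduction number under an assumed one-day serial interval. The functional form, data, and serial interval are assumptions for the sketch, not the paper's estimates.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical daily volume of posts on a topic during early spread.
days = np.arange(10)
posts = np.array([12, 18, 30, 44, 70, 108, 160, 250, 380, 600])

def exp_growth(t, n0, r):
    return n0 * np.exp(r * t)

(n0, r), _ = curve_fit(exp_growth, days, posts, p0=(10, 0.3))
serial_interval = 1.0  # assume users "infect" others about a day later
R0 = 1 + r * serial_interval
print(f"growth rate r = {r:.2f}/day, implied R0 = {R0:.2f}")
```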
The 2016 U.S. presidential election brought considerable attention to the phenomenon of "fake news": entirely fabricated and often partisan content that is presented as factual. Here we demonstrate one mechanism that contributes to the believability of fake news: fluency via prior exposure. Using actual fake-news headlines presented as they were seen on Facebook, we show that even a single exposure increases subsequent perceptions of accuracy, both within the same session and after a week. Moreover, this "illusory truth effect" for fake-news headlines occurs despite a low level of overall believability and even when the stories are labeled as contested by fact checkers or are inconsistent with the reader's political ideology. These results suggest that social media platforms help to incubate belief in blatantly false news stories and that tagging such stories as disputed is not an effective solution to this problem. It is interesting, however, that we also found that prior exposure does not impact entirely implausible statements (e.g., "The earth is a perfect square"). These observations indicate that although extreme implausibility is a boundary condition of the illusory truth effect, only a small degree of potential plausibility is sufficient for repetition to increase perceived accuracy. As a consequence, the scope and impact of repetition on beliefs is greater than has been previously assumed.
We explore the key differences between the main social media platforms and how they are likely to influence information spreading and the formation of echo chambers. To assess the different dynamics, we perform a comparative analysis on more than 100 million pieces of content concerning controversial topics (e.g., gun control, vaccination, abortion) from Gab, Facebook, Reddit, and Twitter. The analysis focuses on two main dimensions: 1) homophily in the interaction networks and 2) bias in the information diffusion toward like-minded peers. Our results show that the aggregation of users in homophilic clusters dominates online dynamics. However, a direct comparison of news consumption on Facebook and Reddit shows higher segregation on Facebook.
Researchers have been addressing social judgment from a cognitive perspective for more than 15 years. Within recent years, however, it has become increasingly clear that many of the models and assumptions initially adopted are in need of revision. The chapters in this volume point out where the original models and assumptions have fallen short, and suggest directions for future research and theorizing. The contributors address issues related to judgment, memory, affect, attitudes, and self-perception. In addition, many present theoretical frameworks within which these different issues can be integrated. As such, this volume represents the transition from one era of social cognition research to the next.
Humans massively depend on communication with others, but this leaves them open to the risk of being accidentally or intentionally misinformed. To ensure that, despite this risk, communication remains advantageous, humans have, we claim, a suite of cognitive mechanisms for epistemic vigilance. Here we outline this claim and consider some of the ways in which epistemic vigilance works in mental and social life by surveying issues, research and theories in different domains of philosophy, linguistics, cognitive psychology and the social sciences.
This book examines the shape, composition, and practices of the United States political media landscape. It explores the roots of the current epistemic crisis in political communication with a focus on the remarkable 2016 U.S. presidential election culminating in the victory of Donald Trump and the first year of his presidency. The authors present a detailed map of the American political media landscape based on the analysis of millions of stories and social media posts, revealing a highly polarized and asymmetric media ecosystem. Detailed case studies track the emergence and propagation of disinformation in the American public sphere that took advantage of structural weaknesses in the media institutions across the political spectrum. This book describes how the conservative faction led by Steve Bannon and funded by Robert Mercer was able to inject opposition research into the mainstream media agenda that left an unsubstantiated but indelible stain of corruption on the Clinton campaign. The authors also document how Fox News deflects negative coverage of President Trump and has promoted a series of exaggerated and fabricated counter-narratives to defend the president against the damaging news coming out of the Mueller investigation. Based on an analysis of the actors that sought to influence political public discourse, this book argues that the current problems of media and democracy are not the result of Russian interference, behavioral microtargeting and algorithms on social media, political clickbait, hackers, sockpuppets, or trolls, but of asymmetric media structures decades in the making. The crisis is political, not technological.
The difficulties in determining the quality of information on the Internet, in particular the implications of wide access and questionable credibility for youth and learning. Today we have access to an almost inconceivably vast amount of information, from sources that are increasingly portable, accessible, and interactive. The Internet and the explosion of digital media content have made more information available from more sources to more people than at any other time in human history. This brings an infinite number of opportunities for learning, social connection, and entertainment. But at the same time, the origin of information, its quality, and its veracity are often difficult to assess. This volume addresses the issue of credibility, the objective and subjective components that make information believable, in the contemporary media environment. The contributors look particularly at youth audiences and experiences, considering the implications of wide access and the questionable credibility of information for youth and learning. They discuss such topics as the credibility of health information online, how to teach credibility assessment, and public policy solutions. Much research has been done on credibility and new media, but little of it focuses on users younger than college students. Digital Media, Youth, and Credibility fills this gap in the literature. Contributors: Matthew S. Eastin, Gunther Eysenbach, Brian Hilligoss, Frances Jacobson Harris, R. David Lankes, Soo Young Rieh, S. Shyam Sundar, Fred W. Weingarten.
In recent years, trust in US public health and science institutions has faced unprecedented declines, particularly among Republicans/conservatives. To what extent might institutional criticism on social media be responsible for such politically polarized declines in institutional trust? Two online survey experiments (total N = 6,800), using samples roughly reflective of the US adult population, examined the effects of key types of criticism against the Agency for Healthcare Research and Quality (AHRQ) and the Centers for Disease Control and Prevention (CDC). The results suggest that just a single exposure to any of the key types of criticism was sufficient to undermine institutional trust. While an institutional rebuttal was partially able to reverse these effects, residual declines in trust were substantial enough to cause decreased intentions to adhere to the AHRQ/CDC health recommendation featured in the experiments. While institutions should, therefore, be concerned about all types of social media criticism, those featuring morally charged trust-undermining narratives attacking the integrity of the AHRQ/CDC generated dramatically more anger (i.e., moral outrage), which in turn attracted social media engagement preferences likely to promote viral spread and exacerbate preexisting institutional politicization and issue polarization. These results suggest that efforts to bolster institutional trust should pay special attention to criticisms featuring integrity-based trust-undermining narratives.
Susan Schneider | Social Epistemology
Alexander Sabar, Adrianus Meliala | International Journal of Social Science and Human Research
Terrorists have utilized cyberspace to further their agendas, particularly in the expansion and dissemination of propaganda. Within this digital realm, terror actors orchestrate various processes of reconstruction and manipulate facts and reality. Applying Erving Goffman’s dramaturgical theory, terrorism is conceptualized as a stage play in which social interactions are roles performed in a drama. This research examines how terrorist groups strategically design and present their propaganda as "performances" on social media and online forums, employing dramaturgical elements such as stages, roles, and audiences to enhance the effectiveness of their messaging. The study highlights the dramaturgical techniques employed in normalizing violence, leveraging symbolism, and fostering communities with shared ideologies. Insights from this research contribute to more effective counter-propaganda strategies that address the manipulative narratives propagated by terrorist groups in the digital era.
In addition to its catastrophic effects on the environment, infrastructure, and the population, the 2024 flood in Poland demonstrated additional means of pressure characteristic of hybrid warfare, unprecedented in this type of threat: false information spread through social media and private messaging platforms, triggering social unrest and division. Social media platforms, with their structure for instantaneous sharing of news, provide an exceptionally fertile environment for the spread of disinformation. The content was intended to undermine trust in public institutions, divide local communities, and tangibly affect the rescue efforts under way. The aim of the research presented in the article was to identify the socio-political effects of disinformation during the flood in question. The article attempts to establish how false information and data manipulation influenced citizens’ attitudes, the actions of public institutions, and political decision-making during the crisis. To effectively comprehend the social and political dynamics of this type of emergency, it was necessary to examine the role played by disinformation in shaping attitudes and decisions during difficult times. The findings highlight the role of emotional triggers in amplifying disinformation and point to the need to develop media education, content-verification tools, and regulation to mitigate its effects during crises.
Background Social media is widely used by the general public as a source of health information because of its convenience. However, the increasing prevalence of health misinformation on social media is becoming a serious concern, and it remains unclear how the general public identifies and responds to it. Objective This study aims to explore the approaches used by the general public for identifying and responding to health misinformation on social media. Methods Semistructured interviews were conducted with 22 respondents from the Malaysian general public. The theory of motivated information management was used as a guiding framework for conducting the interviews. Audio-taped interviews were transcribed verbatim and imported into ATLAS.ti software for analysis. Themes were identified from the qualitative data using a thematic analysis method. Results The 3 main themes identified were emotional responses and impacts of health misinformation, approaches used to identify health misinformation, and responses to health misinformation. The spread of health misinformation through social media platforms has caused uncertainty and triggered a range of emotional responses, including anxiety and feelings of vulnerability, among respondents who encountered it. The approaches to identifying health misinformation on social media included examining message characteristics and sources. Messages were deemed to be misinformation if they contradicted credible sources or exhibited illogical and exaggerated content. Respondents described multiple response approaches to health misinformation based on the situation. Verification was chosen if the information was deemed important, while misinformation was often ignored to avoid conflict. Respondents were compelled to take action if misinformation affected their family members, had been corrected by others, or if they were knowledgeable about the topic. Taking action involved correcting the misinformation and reporting the misinformation to relevant social media, enforcement authorities, and government bodies. Conclusions This study highlights the factors and motivations influencing the general public’s identification and response to health misinformation on social media. Addressing the challenges of health misinformation identified in this study requires collaborative efforts from all stakeholders to reduce the spread of health misinformation and reduce the general public’s belief in it.
ABSTRACT An important goal of science education is to equip students with the scientific knowledge and evaluation skills necessary to identify misinformation. However, the specific role of science education and knowledge, and the evaluation strategies most commonly relied on in this process, remain unclear. The two studies presented here leverage educational diversity to explore these issues: Study I focused on a representative sample of the general population (n = 500), where science education is compulsory up to 10th grade. Study II focused on a representative sample from an ultra-Orthodox community (n = 800), where science education in school is not compulsory. Respondents in both studies were asked to share misinformation they heard during the COVID-19 pandemic and explain why they did not believe it. Using content analysis of participants’ open-ended answers, we found that about half of the general population and only a third of the ultra-Orthodox sample were able to identify misinformation when confronted with it. Science knowledge was correlated with accurately identifying misinformation in both studies, but science education was correlated with performance only among the general public, who widely receive mandatory science education. Participants from the general public who justified their suspicion using content evaluation were more likely to identify misinformation correctly, while ultra-Orthodox participants were more likely to perform well when they justified their suspicion based on evaluation of the source. These findings highlight the difficulty of recognizing misinformation in real life. Having scientific knowledge and awareness of sources does not guarantee protection from being misled, but it helps.
Purpose In tandem with the omnipresence of falsehoods on social media, scepticism has emerged as a research topic, mainly in mitigating falsehood-related behaviours. However, the potential trajectories social media users navigate to develop scepticism are under-researched. The current study proposes a conceivable personality-cognitive-driven pathway grounded in media literacy theory (MLT), with the heuristic-systematic model (HSM) as a supplementary framework. Specifically, the hypothesis proposed a direct effect of social media locus of control on social media scepticism and further examined this relationship through the mediation effect of mindful thought processing. Design/methodology/approach Employing a cross-sectional design, the data was collected from 423 Malaysian young adult social media users. SEM-PLS was utilised to test the direct and indirect effects of the constructs. Findings The study found that social media locus of control and mindful thought processing positively predicted social media scepticism. Mindful thought processing plays a positive mediating role between social media locus of control and social media scepticism. Research limitations/implications Media literacy advocates can benefit from the findings to design intervention programmes, investing in personality- and cognitive-related factors to train social media users to develop scepticism to battle the negative ramifications of falsehoods. Originality/value This study yields multiple salient contributions: First, it brings to the fore the conscious and deliberate efforts social media users undertake to develop scepticism; second, it integrates MLT with HSM in a structured framework, allowing its canons to systematically cultivate scepticism toward social media; third, it contributes to literature on social media scepticism, introducing avenues to explore the underlying drives of scepticism.
Previous research has highlighted that encounters with deepfakes induce uncertainty, skepticism, and mistrust among audiences. In this study, we relate perceived deepfake exposure to media cynicism. Deepfakes shake users’ sense of reality, increasing a need to rely on epistemic authorities, such as journalistic media, while raising fears of manipulation. Based on uncertainty management theory, we propose that two “epistemic virtues” moderate the relationship between deepfake exposure and media cynicism: self-efficacy and intellectual humility. In a survey of 1421 German internet users, we find that perceived deepfake exposure positively relates to media cynicism. Intellectual humility does not dampen this relationship. Deepfake detection self-efficacy may be more harmful than helpful in preventing media cynicism. We discuss these findings in the context of research indicating that users tend to overestimate their ability to detect deepfakes and the challenges the novel deepfake technology poses to audience trust in a digital information ecosystem.
Large language models (LLMs) offer substantial promise for improving health care; however, some risks warrant evaluation and discussion. This study assessed the effectiveness of safeguards in foundational LLMs against malicious instruction into health disinformation chatbots. Five foundational LLMs (OpenAI's GPT-4o, Google's Gemini 1.5 Pro, Anthropic's Claude 3.5 Sonnet, Meta's Llama 3.2-90B Vision, and xAI's Grok Beta) were evaluated via their application programming interfaces (APIs). Each API received system-level instructions to produce incorrect responses to health queries, delivered in a formal, authoritative, convincing, and scientific tone. Ten health questions were posed to each customized chatbot in duplicate. Exploratory analyses assessed the feasibility of creating a customized generative pretrained transformer (GPT) within the OpenAI GPT Store and searched to identify whether any publicly accessible GPTs in the store seemed to respond with disinformation. Of the 100 health queries posed across the 5 customized LLM API chatbots, 88 (88%) responses were health disinformation. Four of the 5 chatbots (GPT-4o, Gemini 1.5 Pro, Llama 3.2-90B Vision, and Grok Beta) generated disinformation in 100% (20 of 20) of their responses, whereas Claude 3.5 Sonnet responded with disinformation in 40% (8 of 20). The disinformation included claimed vaccine-autism links, HIV being airborne, cancer-curing diets, sunscreen risks, genetically modified organism conspiracies, attention deficit-hyperactivity disorder and depression myths, garlic replacing antibiotics, and 5G causing infertility. Exploratory analyses further showed that the OpenAI GPT Store could currently be instructed to generate similar disinformation. Overall, LLM APIs and the OpenAI GPT Store were shown to be vulnerable to malicious system-level instructions to covertly create health disinformation chatbots. These findings highlight the urgent need for robust output screening safeguards to ensure public health safety in an era of rapidly evolving technologies.
Purpose Using a multidisciplinary approach, this study aims to trace the path of disinformation campaigns from their detection through linguistic cues of credibility, to their furtherance through dissemination mechanisms, and lastly to an assessment of their impact on the socio-political context. Design/methodology/approach This study provides an in-depth overview of four fundamental aspects of disinformation: the linguistic features that distinguish content designed to deceive and manipulate public opinion, the media mechanisms that facilitate its dissemination by exploiting the cognitive processes of its audience, the threats posed by the increasing use of generative artificial intelligence to spread disinformation, and the broader consequences these disinformation dynamics have on public opinion and, consequently, on political decision-making processes. Findings As a result, the paper provides an interdisciplinary and holistic examination of the phenomenon, drawing on its plural elements to highlight the importance of platform responsibility, media literacy campaigns among citizens, and interactive cooperation between the private and public sectors as measures to enhance resilience against the threat of disinformation. Originality/value The study highlights the need to increase platform accountability, promote media literacy among individuals, and develop cooperation between the public and private sectors. Strengthening resilience to disinformation and ensuring the EU’s adaptability in the face of changing digital threats are the goals of this integrated strategy. Ultimately, the paper advocates a fair and open strategy that protects freedom of expression and strengthens democratic institutions at a time when digital disinformation is on the rise.
Resul YÜKSEL | Türkiye İletişim Araştırmaları Dergisi
Digital epistemology emerged as a result of the digitalization of information. Following digitalization, a digital perspective on the universe was developed, which brought the digital-analog distinction to the fore. This study theoretically examines the digital objects produced by digitalization and their epistemological processes through the transformation of the conceptual schemas of traditional epistemology. In this context, the aim of the study is to analyze digital epistemology, as distinct from traditional epistemologies, on the basis of digital media, by unpacking the information-processing operations brought about by digitalization within the framework of the concepts of data and information. To this end, digital objects are addressed in four ontological layers: the basic level, constitutive digital objects, generative digital objects, and digital objects as products. The basic level refers to digital things that have not yet become objects, while constitutive digital objects are unprocessed raw data. Generative digital objects, namely algorithms and computer programs, shape this raw data and transform it into information. Digital objects as products, in turn, express this information. The ultimate goal of digital epistemology is thus the production of such information. Within this framework, digital objects can be viewed in the context of extending human epistemic faculties. The storage of knowledge and the increase in processing speed have given rise to the concept of big data. Storing an unlimited amount of information in the big-data pool has made it difficult to establish connections between, and to organize, these pieces of information. This situation has created various problems in accessing epistemic knowledge and has led to an information-based model of life.
P. Karthick | INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT
In recent years, one of the world's biggest concerns has been the proliferation of fake news through social networks. Because it is designed to sway the beliefs of large crowds, fake news has had a substantial effect on the world. Researchers have paid much attention to this area, as manual confirmation of news authenticity is practically impossible and very costly. Explorations into the detection of false news have targeted content-based approaches, social-context-based approaches, image-based approaches, sentiment-based approaches, and hybrid context-based classification systems. Following the content-based classification approach, this work proposes a model for fake news classification using news headlines. The model is a BERT model whose output layer is connected to an LSTM layer. The FakeNewsNet dataset was used in both training and evaluation; it consists of two sub-datasets, PolitiFact and GossipCop. The proposed model was compared against a basic classification model: a vanilla BERT model trained on the dataset under the same terms and conditions. The findings showed a 2 percent increase in accuracy; for PolitiFact, the recall was 45%. The performance boost achieved on the GossipCop dataset is about 11% compared with the vanilla pre-trained BERT model. Key Words: Fake news, Social networks, Detection approaches, BERT model, FakeNewsNet dataset, Classification accuracy
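To make the described architecture concrete, here is a minimal sketch of a BERT encoder with an LSTM layer over its token outputs feeding a binary classifier, in the spirit of the model above. It is not the authors' implementation; the hidden sizes, bidirectional pooling, and the choice of `bert-base-uncased` are assumptions for illustration.

```python
# Sketch of a BERT + LSTM headline classifier (hyperparameters assumed).
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

class BertLstmClassifier(nn.Module):
    def __init__(self, model_name="bert-base-uncased", lstm_hidden=128, num_classes=2):
        super().__init__()
        self.bert = AutoModel.from_pretrained(model_name)          # pretrained encoder
        self.lstm = nn.LSTM(self.bert.config.hidden_size, lstm_hidden,
                            batch_first=True, bidirectional=True)  # sequence head
        self.classifier = nn.Linear(2 * lstm_hidden, num_classes)  # fake vs. real

    def forward(self, input_ids, attention_mask):
        token_states = self.bert(input_ids=input_ids,
                                 attention_mask=attention_mask).last_hidden_state
        _, (h_n, _) = self.lstm(token_states)         # final hidden state per direction
        pooled = torch.cat([h_n[0], h_n[1]], dim=-1)  # concatenate both directions
        return self.classifier(pooled)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = BertLstmClassifier()
batch = tokenizer(["Scientists baffled by miracle cure"], padding=True,
                  truncation=True, return_tensors="pt")
logits = model(batch["input_ids"], batch["attention_mask"])  # shape: (1, 2)
```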
Noam Shomron | Journal of Molecular Neuroscience
The present research examined the mnemonic consequences of social endorsement in the form of followers and likes. In four studies, participants were presented with simulated social media posts associated with high and low levels of social endorsement. In Studies 1 and 2, participants read tweets about COVID-19 (Study 1; N = 199) and Facebook status updates about positive and negative personal events (Study 2, N = 159) posted by users with large or small numbers of followers. In Studies 3 and 4, participants read the posts (tweets in Study 3, N = 158; Facebook status updates in Study 4, N = 177) that received large or small numbers of likes. Across all studies, regardless of cultural background and social conformity tendency, social endorsement did not affect memory performance for posted information: Although participants rated profiles with greater social endorsement as more popular, trustworthy, likable, and attractive, they remembered the posted information associated with high and low levels of social endorsement similarly. Participants better remembered negative information (Studies 2 and 4) and information posted by more likable users (Studies 1 and 3). The findings suggest that social endorsement alone, while influencing the perception of profile owners, does not enhance the memorability of the associated information.
The COVID-19 pandemic brought new challenges, and five years later it is essential to understand its implications for society. Digital communication is no longer a trend but a certainty in a society facing uncertainty, flexibility, and fragmentation. Massive investment in this area, which grew remarkably, was the solution for facing the 'new normal' era. The purpose of this paper is to analyze the impact of the pandemic on digital communication in Portugal. We use several national and international databases and business studies to develop our research and analyze this phenomenon. The pandemic created conditions not only for developing health-related communication but also for putting it at the service of citizens through a multitude of channels. This has also had an impact on healthcare, improving literacy in this area, as it highlighted the role of media in health awareness, disease prevention, and health promotion. In Portugal, the COVID-19 pandemic accelerated digital transformation by 2 to 3 years. Consequently, internet access, online shopping, and virtual business presence increased. Today, organizational digital communications are more regular, transparent, empathetic, and effective. These strategies allowed companies to remain active in the market and maintain relationships with customers. Online communication increased significantly, particularly through digital platforms such as blogs, Instagram, and YouTube channels. In Portugal, the COVID-19 pandemic had a negative impact on companies that were not able to adapt their business to an online setting.
In both digital and mainstream media, clickbait headlines are a rampant means of capturing and distributing attention, designed to maximize click-through rates; in turn they delegitimize content and help spread misinformation. In this paper we propose an efficient and interpretable machine learning framework for the binary classification of clickbait vs. non-clickbait headlines using traditional models. Our pipeline involves a preprocessing step, followed by n-gram feature extraction via CountVectorizer and classification using Multinomial Naive Bayes, Logistic Regression, and XGBoost. The models were trained and evaluated on accuracy, precision, recall, F1 score, and ROC AUC, using a publicly available dataset. Our results indicate that the Naive Bayes and Logistic Regression models performed best, with an accuracy of 95.88%, an F1-score of 95.88%, and an AUC of 0.99, outperforming the more complex XGBoost classifier. Confirming the suitability of lightweight models for real-time clickbait detection, we further show that traditional machine learning is also interpretable and scalable.
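A minimal version of such a pipeline can be written in a few lines of scikit-learn. This sketch follows the stated design (CountVectorizer n-grams feeding Multinomial Naive Bayes), but the n-gram range, file name, and column names are assumptions for illustration, not details from the paper.

```python
# Illustrative clickbait classifier: n-gram counts + Multinomial Naive Bayes.
# Dataset path and column names are hypothetical placeholders.
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report, roc_auc_score

df = pd.read_csv("headlines.csv")          # columns: "headline", "is_clickbait"
X_train, X_test, y_train, y_test = train_test_split(
    df["headline"], df["is_clickbait"], test_size=0.2, random_state=42)

clf = make_pipeline(
    CountVectorizer(ngram_range=(1, 2), lowercase=True),  # unigram + bigram counts
    MultinomialNB())
clf.fit(X_train, y_train)

print(classification_report(y_test, clf.predict(X_test)))
print("ROC AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```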
Abstract Recently, fake news detection on social media (SM) has attracted a lot of attention. With fake news emerging at a breakneck pace, its massive spread has had a serious impact on our society. The authenticity of news is questionable, and there is a need for an automated detection tool. However, most fake news detection methods are supervised, requiring huge amounts of annotated data, which is time-consuming, expensive, and almost impossible given the vast volume of new SM content. To deal with this problem, we propose a novel unsupervised fake news detection framework based on structural contrastive learning, combining the propagation structure of news with contrastive learning to achieve unsupervised training. To validate the influence of parameters and our method’s performance, we design experiments on public Twitter and Weibo datasets, which show that our approach outperforms current baselines and is suitably robust.
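The paper's exact objective is not reproduced here, but the core of contrastive learning over graph representations is typically an InfoNCE-style loss that pulls two views of the same propagation graph together and pushes different graphs apart. Below is a generic sketch of such a loss, assuming the two views' embeddings are already produced by some graph encoder; the temperature value and encoder design are assumptions, not the authors' specification.

```python
# Generic InfoNCE (NT-Xent) loss over two views of the same items,
# as used in many contrastive frameworks; illustrative, not the paper's code.
import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5):
    """z1, z2: (batch, dim) embeddings of two views of the same graphs."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature    # pairwise cosine similarities
    targets = torch.arange(z1.size(0))    # view i of graph i matches view i
    # Symmetrized cross-entropy: each view must identify its counterpart.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

# Example: 8 propagation graphs encoded into 64-dim embeddings by two augmentations.
loss = info_nce(torch.randn(8, 64), torch.randn(8, 64))
```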
Abstract Within the social sciences, various types of inaccurate or epistemically risky information – including disinformation, fake news, and conspiracy theories – are frequently referred to as fiction. In the present article, I argue that this comparison conflicts with how fiction as a broad category of texts is typically defined in philosophical aesthetics and literary studies. First, I define disinformation, misinformation, fake news, and conspiracy theories, focusing on the extent to which they can be considered as inaccurate information. This is followed by a definition of fiction from the perspective of philosophical aesthetics and literary studies, which can be summarized as ‘intentionally signaled invention’. I then examine whether the various forms of previously mentioned inaccurate information fulfill this definition of fiction. In summary, the types of inaccurate information under investigation are not intentionally signaled inventions, as they either claim to be accurate, or do not make a claim about their truth content. The article closes with implications for psychologists, a discussion of limitations and further considerations.
This research analyses how the Ministry of National Health Services Regulation and Coordination (MoNHSRC) Pakistan utilized Facebook as a channel for crisis communication during the COVID-19 pandemic, with an emphasis on vaccine-related communications. Using a quantitative content analysis method, 375 Facebook posts published between February 1, 2021, and September 23, 2022, were reviewed to identify the most prevalent themes and to determine how frequently specific target audiences were mentioned in vaccine-related call-to-action messages. The data reveal that the most common theme was "Up to Date Information," followed by "Call for Vaccine," "Vaccine Supportive," and "Addressing public concerns and side effects," underlining MoNHSRC's emphasis on transparency and public health promotion. However, "Vaccine Availability" was the least highlighted theme, revealing a potential gap in logistical communication during a period of widespread concern and limited supply. With regard to target audience, the general public was addressed in most posts, while specific segments such as older citizens, healthcare workers, pregnant and breastfeeding women, and children received very few direct messages. These results imply that while MoNHSRC maintained a constant online presence and offered vital information during the COVID-19 crisis, its communication strategy lacked personalized engagement with high-risk or hesitant populations. The study underlines the necessity of integrating focused, accessible, and inclusive messaging into future public health initiatives, especially in developing nations where social media plays a crucial role in crisis response.
Indu Arora | International Journal of Advanced Research in Computer Science
It is very easy to propagate fake news in the information age, as people have become dependent on the Internet, the web, social media platforms, and blogging sites. Fake news has far-reaching implications for politics, society, and media credibility. Media literacy campaigns, fact-checking initiatives, and the use of Artificial Intelligence (AI) tools and techniques are some of the means for detecting false content. This research paper deliberates on the origins of fake news, the factors responsible for its spread, and its implications for society and public opinion. The paper compares the tools and techniques used to detect fake news. It further presents a case study on detecting fake news with Machine Learning (ML) models developed on different datasets, and on measuring their accuracy. It is also observed that, due to the constantly evolving nature of fake news and its spread across languages and regions, it has become difficult to regulate information flow and control the propagation of fake news.
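The case study's datasets and models are not specified in the abstract, so the following is only a generic sketch of how several ML models can be trained on a labeled fake-news dataset and compared by accuracy; the TF-IDF features, model choices, and file layout are assumptions for illustration.

```python
# Comparing the accuracy of several classifiers on a labeled fake-news dataset.
# File name and columns are hypothetical placeholders.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

df = pd.read_csv("fake_news.csv")  # columns: "text", "label" (1 = fake, 0 = real)

models = {
    "naive_bayes": MultinomialNB(),
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200),
}
for name, model in models.items():
    pipe = make_pipeline(TfidfVectorizer(stop_words="english"), model)
    scores = cross_val_score(pipe, df["text"], df["label"], cv=5, scoring="accuracy")
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```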
Abstract The spread of narratives online has been extensively studied. However, prior research typically relies on metadata and often overlooks the dynamics of post stances. In this paper, we introduce a novel stance-based epidemiological model, which explicitly incorporates the stance of posts, a critical element often ignored by models that focus solely on isolated narratives. Our model, SEI_A I_D Z (S = susceptible, E = exposed, I_A = agree with the narrative, I_D = posting disagreement or an alternative narrative, Z = skeptic), captures the dynamic interactions among competing narratives, including the introduction of countermeasures or opposing viewpoints, thereby offering a more comprehensive framework for analyzing online narrative dissemination. Comparative evaluations demonstrate that our model outperforms the baseline SEIZ model, achieving lower predictive error rates. Furthermore, we identify two key parameters: β (the transmission rate) and ψ (the rate at which exposed individuals transition to the Agreed and Skeptic compartments). Both parameters significantly influence the basic reproduction number R_0, a measure of transmission potential. Our findings indicate that β plays a significant role in driving propagation across all platforms, underscoring the need to control it in order to mitigate the spread of misinformation. To ensure the generalization of our model, we validate our approach using three distinct platforms and scenarios: health-related conspiracy theories about COVID-19 and 5G on the text-based platform X (formerly Twitter), geopolitical narratives related to the Russia-Ukraine conflict on the messaging platform Telegram, and election misinformation campaigns during Taiwan's 2024 elections on the multimedia-based platform TikTok. Our research provides critical insights into the role of stance and content dynamics in online narrative dissemination, paving the way for more effective strategies to combat harmful narratives and inform policymaking.
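The abstract does not give the model's full equations, so the sketch below only illustrates how a compartmental model of this shape can be integrated numerically. The flow structure (S to E on contact with posters; E splitting into I_A and Z at rate ψ; a separate assumed route into I_D) and all rate constants are illustrative guesses, not the paper's specification.

```python
# Numerical integration of an assumed SEI_A I_D Z - style compartmental model.
# Flow structure and parameter values are illustrative, not the paper's equations.
import numpy as np
from scipy.integrate import odeint

def deriv(y, t, beta, psi, p, delta):
    S, E, I_A, I_D, Z = y
    N = S + E + I_A + I_D + Z
    new_exposed = beta * S * (I_A + I_D) / N   # contact with any poster exposes users
    dS = -new_exposed
    dE = new_exposed - psi * E - delta * E     # E leaves to (I_A or Z) and to I_D
    dI_A = psi * p * E                         # fraction p of leavers agree and post
    dZ = psi * (1 - p) * E                     # the rest become skeptics
    dI_D = delta * E                           # assumed route to disagreeing posters
    return dS, dE, dI_A, dI_D, dZ

t = np.linspace(0, 100, 500)
y0 = (9990, 10, 0, 0, 0)                       # almost everyone initially susceptible
sol = odeint(deriv, y0, t, args=(0.4, 0.2, 0.6, 0.05))
print("final compartment sizes:", sol[-1].round(1))
```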
This study investigates Spanish-language public discourse on YouTube following the unprecedented Iberian Peninsula blackout of 28 April 2025. Leveraging comments extracted via the YouTube Data API and analyzed with the OpenAI GPT-4o-mini model, it systematically examined 76,398 comments from 360 of the most relevant videos posted on the day of the event. The analysis explored emotional responses, sentiment trends, misinformation prevalence, civic engagement, and attributions of blame within the immediate aftermath of the blackout. The results reveal a discourse dominated by negativity and anger, with 43% of comments classified as angry and an overall negative sentiment trend. Misinformation was pervasive, present in 46% of comments, with most falsehoods going unchallenged. The majority of users attributed the blackout to government or political failures rather than technical causes, reflecting a profound distrust in institutions. Notably, while one in five comments included a call to action, only a minority offered constructive solutions, focusing mainly on infrastructure and energy reform. These findings highlight the crucial role of multilingual, real-time crisis communication and the unique information needs of Spanish-speaking populations during emergencies. By illuminating how rumors, emotions, and calls for accountability manifest in digital spaces, this study contributes to the literature on crisis informatics, digital resilience, and inclusive sustainability policy.
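As a rough illustration of this kind of pipeline, the sketch below pulls top-level comments for one video with the YouTube Data API and asks a GPT-4o-mini chat completion to label each one. The prompt wording, label set, and video ID are invented for the example; the study's actual extraction and classification procedure may differ.

```python
# Illustrative comment-labeling pipeline: YouTube Data API + OpenAI chat model.
# API keys, video ID, and the label taxonomy are hypothetical placeholders.
from googleapiclient.discovery import build
from openai import OpenAI

youtube = build("youtube", "v3", developerKey="YOUR_YOUTUBE_API_KEY")
resp = youtube.commentThreads().list(
    part="snippet", videoId="VIDEO_ID", maxResults=50, textFormat="plainText"
).execute()
comments = [item["snippet"]["topLevelComment"]["snippet"]["textDisplay"]
            for item in resp["items"]]

client = OpenAI()  # reads OPENAI_API_KEY from the environment
for comment in comments:
    result = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content":
             "Label the comment's emotion (anger/fear/neutral/other), its sentiment "
             "(positive/negative/neutral), and whether it contains misinformation."},
            {"role": "user", "content": comment},
        ],
    )
    print(result.choices[0].message.content)
```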
This chapter investigates students' perceptions, knowledge, and attitudes toward deepfake technology, including its potential misuse and related protection and reporting mechanisms. The research employs a focus group discussion involving 10 students from five faculties at the University “St. Kliment Ohridski” – Bitola: Faculty of Law, Faculty of Education, Faculty of Technical Sciences, Faculty of Information and Communication Technologies, and Faculty of Biotechnical Sciences. Preliminary findings indicate that while students are generally familiar with the concept of deepfakes, they lack sufficient skills to accurately identify deepfake content. To the best of our knowledge, this is the first study in North Macedonia to examine students' awareness and understanding of deepfake technology. Based on these results, the chapter offers recommendations aimed at enhancing public awareness and improving students' ability to recognize and respond to deepfake materials effectively.
This chapter examines how education serves as a key strategy to combat cybercrime, specifically analysing misinformation and disinformation issues in Australia. The chapter analyses the complex elements of the subject area to provide valuable insights that can shape upcoming educational approaches and inform policymaking decisions. By prioritizing education, the chapter supports an adaptive and practical approach to cybercrime that respects ethical standards and protects individual rights. This research aims to build an essential basis for enhancing educational methods and policy systems across the Asia-Pacific region. The chapter points out that digital literacy and critical thinking skills must be developed in individuals, because these abilities are essential for navigating and fighting misinformation and disinformation in the digital environment. Such educational programs are essential because they teach citizens to identify trustworthy information, which helps build societal defences against cybercrime in the region.
The software and data in this repository are a snapshot of the software and data that were used in the research reported on in the paper Combating Fake News on Social Media: An Early Detection Approach using Multimodal Adversarial Transfer Learning by Cong Wang, Chuchun Zhang, and Runyu Chen.