
Artificial Intelligence in News Media: Current Perceptions and Future Outlook


1. Introduction

  • RQ1. How is news media positioning itself in the subfields of artificial intelligence?
  • RQ2. To what extent is AI being deployed in the news industry?
  • RQ3. What are the future avenues for AI in news media?

2. Theoretical Framework

2.1. Artificial Intelligence in Its Current Manifestation

2.2. Artificial Intelligence in the News Industry

4. Findings

4.1. An Overview of AI in News Media

4.2. Machine Learning and Its Applications in the Journalistic Field

4.3. Computer Vision to Investigative Reporting

4.4. Planning, Scheduling, and Optimization in News Media

5. Discussion and Conclusions

Author Contributions

Institutional Review Board Statement

Informed Consent Statement

Data Availability Statement

Conflicts of Interest



Share and Cite

de-Lima-Santos, Mathias-Felipe, and Wilson Ceron. 2022. “Artificial Intelligence in News Media: Current Perceptions and Future Outlook.” Journalism and Media 3(1): 13–26. https://doi.org/10.3390/journalmedia3010002


Public perspectives on trust in news


Almost all news reporting implicitly asks the public to trust it. At a basic level, it asks people to trust that ‘we really did talk to the sources we mention, they really said what we have quoted them on, and the data we cite is reliable’. And in a more expansive sense, ‘our editorial judgement on what to cover, who to talk to, and what data to rely on is sound, so is our presentation of what we found, and our motivations’. 

But across the world, much of the public does not trust most news most of the time. While there is significant variation from country to country and from brand to brand, in this year’s report, just 40% of our respondents across all 47 markets say they trust most news. 

Public trust is not the same as trustworthiness. Sometimes people trust individuals and institutions that are not, in fact, trustworthy. Sometimes they do not trust – or even distrust – those that they might, on closer inspection, see are trustworthy (or that journalists or others think they ought to see as trustworthy). 

But whether well-founded or not, trust in news is, from the perspective of journalists and news media who face an often sceptical public, what sociologists call a ‘social fact’, famously defined as ‘manners of acting, thinking and feeling external to the individual, which are invested with a coercive power by virtue of which they exercise control’ (Durkheim 1982: 52). 

This means that trust, both at the brand level and at the general level, influences the role news can and does play in society. Journalists and media organisations have both pragmatic reasons to care – ‘trust can be a key to unlocking user revenue’, as Agnes Stenbom, the head of IN/LAB at Schibsted, puts it 1 – and more principled reasons to care, as years of research have documented how people who trust the news less are less likely to believe the information it presents and learn from it (see Altay et al. 2023 for an overview).

Individual reporters and editors will not necessarily agree with – let alone like – how they, their colleagues, and their competitors are seen by members of the public. And trust is not, in itself, a measure of the value of what journalists do, just as earning it is not always the most important thing journalists can or should aspire to. But public perceptions of trust are important in themselves. In people’s relations with journalism and the news media, as in their relations with politics and much else, perception is a consequential part of reality. 

Much of the public has a similar view on trust in news media 

Because we know that many journalists and editors care whether people trust the news or not, we have long tracked this at a general level by asking people whether they feel they can trust most news most of the time. While there is significant variation by country – and in some countries significant variation by, for example, political orientation – overall trust in news in many cases varies less by gender, age, income, and education (as well as by political orientation, as we will discuss in more detail in the last part of this chapter) than one might assume.  

Generally, younger people, people with low income, and people with lower levels of formal education tend to trust the news less. These are also groups that are often less well served by the news media, and generally less likely to think that the media cover people like them fairly, as we showed in our Digital News Report 2021. 

But looking across respondents who identify as being politically on the left, in the centre, or on the right, at the aggregate, there is little difference when looking at data from all our respondents (though there are partisan differences in some individual countries). 

Much of the public highlights similar factors underpinning trust in news 

In this year’s Digital News Report, we add further nuance to the work we have done over the years on trust in news by exploring what factors different members of the public say matter the most to them when it comes to deciding which news outlets to trust.

This is another step that builds on years of research documenting how trust in news is often highly dependent on political context, correlated with interpersonal trust and trust in other institutions in society. In some countries and at the level of individual brands, trust is often intertwined with political partisanship. It also sometimes in part reflects the volume of media criticism people see, often strategically targeted at independent news media and individual journalists by political actors who use social media and other channels to try to undermine those they see as challenges to their agenda. 2

As part of our survey, we ask all respondents about eight different possible factors that we have derived from qualitative research we have done in the past, from existing academic work, and from input from journalists keen to better understand the drivers of trust in news. (They are not exhaustive, but cover several different factors known to influence people’s relationship with news.) 

Trust factors included in our survey

How important or unimportant are the following to you when it comes to deciding which news outlets to trust? Whether:

  • they have a long history
  • they have high journalistic standards
  • they are too negative
  • they are biased
  • they exaggerate or sensationalise
  • they are transparent about how the news is made
  • their values are the same as mine
  • they represent people like me fairly

The eight factors include some that many journalists associate with trustworthiness – such as high journalistic standards, transparency, freedom from bias, avoiding exaggeration and sensationalism, and representing people fairly.  

They also include factors that are not necessarily associated with trustworthiness from an editorial point of view, but that previous research suggests nonetheless are important in influencing whether people trust news – including whether news outlets have a long history, are seen as too negative, or have the same values as the respondent.  

All these factors are in the eye of the beholder, often necessarily so (there are limits to what people can realistically learn about, e.g., the journalistic standards of specific outlets). What matters when it comes to trust is whether people perceive someone as trustworthy. The ‘coercive power’ these beliefs exercise over journalists – as per the sociological notion of ‘social facts’ – rests on people’s perceptions having real-world consequences, including for which news media they give credence to, engage with, and rely on. 

While there is important variation from country to country, two things stand out looking at our data across all markets. First, while all these factors are important for many respondents (underlining the complexity of what engenders trust), several of those that are most frequently highlighted by respondents as important for how they think about trust are also central to how many journalists think about trustworthiness – in particular transparency, high journalistic standards, and a freedom from bias. Fairness, also often identified as central to trustworthy news reporting, is, in our survey, specifically concerned with whether respondents believe that people like themselves are being represented fairly, and this too is among the factors most frequently underlined as important.  

With data from 47 markets, there is necessarily a lot of important detail and variation, but it is worth highlighting that there is less cross-country variation when it comes to the emphasis on transparency, high standards, and representing people like me fairly than there is on the other factors. And while other factors are also important, they rarely rival these core values. Take the question of whether a news outlet’s values are ‘the same as mine’ – in none of the markets we cover do significantly more respondents identify this as an important factor in deciding which outlets to trust than identify transparency, high standards, and representing people fairly. 

Second, while it is sometimes assumed that different generations and different parts of the political spectrum think very differently about news, our data suggest that this is not actually the case when it comes to factors related to trust. 

If, for example, we compare younger respondents (aged under 35) with older ones (35 and over), the differences are quite small, and not always as one might expect – journalists and editors may associate concerns over social justice and perceived unfairness with younger people, but actually older people are more likely to say this is important for how they think about trust in news.  

Looking more closely at smaller subgroups, people who are more affluent, more highly educated, older, and more on the right politically are more likely to insist on the importance of people like them being represented fairly – our data thus provide quite a different picture from the impression some seem to have of discontent driven by younger, aggrieved lefties. 

With some minor differences, the pattern we see when looking at different generations also holds for education, income, and, as shown in the chart, for gender.  

This relative lack of variation is in itself a striking finding. Almost everything about how people use and think about news is deeply shaped by basic socio-economic factors such as age, income, and education, and people’s relations with media are often influenced by political orientation. But this is not the case for how people think about trust in news overall.  

Thus, our research suggests that much of the public has much in common in terms of what they want from news, and what they want is at least somewhat aligned with what many journalists and media would like to offer them. What varies is not so much which factors people highlight. They are strikingly similar. What varies are the conclusions they come to, reflecting often very different experiences with the news. 

When trust in news is low, the issue is thus generally not that people do not know what to look for. It is that many do not feel they are getting it. If they are right, news has a product problem. If they are wrong, news has a communications problem. 

‘The other divide’ – how political orientation and interest in politics intersect with trust in news

While our data challenge the idea that younger people think very differently about trust in news from how older people think about it, and suggest education, income, and gender matter less than they do in some other respects, they do underline the importance of people’s relationship with politics – but not in the way that is often assumed. 

Many journalists operate in polarised political environments. Given that many of the most engaged news users – and of the most aggressively expressive voices on social media – are highly partisan, and given that some prominent politicians on the right (e.g., Donald Trump) and sometimes on the left (e.g., Andrés Manuel López Obrador) routinely attack the media, it is often presumed that people on the right think very differently about trust in news from those on the left or in the political centre.  

Certainly they often do when it comes to individual news media brands, and in some countries when it comes to trust in news overall. But they do not when it comes to what factors matter for them in deciding which news outlets to trust. 

Differences between often highly engaged partisans on the right and on the left, or for that matter those with more centrist political orientations, are very small in our data. Instead, the most important political divide in how people think about what factors shape their trust in news is what political scientists call ‘the other divide’, the far less immediately obvious divide between those people who make politics a central part of their lives and those who do not (Krupnikov and Ryan 2022). 

One way to capture this is to break down our respondents by political orientation. Across all markets covered, 15% of our respondents identify as very or fairly left-wing, 14% as very or fairly right-wing, and 50% centre or slightly to the left- or right-of-centre. The remaining 20% answer ‘don’t know’ when asked about their political orientation.  

In discussions often focused on partisan division, this latter, large group is sometimes overlooked. Younger people, people with limited formal education, and people with lower incomes are more likely to be part of it. (Just as they are likely to trust the news less than the public at large.) It is also a group that is over-represented among consistent news avoiders and casual users, so often these are people who have a tenuous connection not only with conventional party politics, but also with the news. 

Just 28% of the respondents who answer ‘don’t know’ when asked about their political orientation say they think they can trust most news most of the time – compared to 43% of those on the left, 42% in the centre, and 45% on the right. And, as the next chart shows, they are far less likely to name any of the eight factors included in our survey as important for how they decide which (if any) news outlets to trust. This often overlooked large minority not only trusts the news less, they are also less sure about how to make up their minds about whom to trust. 

Further illustrating this point, we can shift from political position to political interest. If we compare, across 47 markets, those who say they are interested in politics (27% of the sample) with those who say they are not interested in politics (35%) we find very different levels of trust. Around half (50%) of those interested in politics say they trust most news most of the time compared to 32% of those not interested.  

The gaps in terms of which factors, if any, people identify as important are aligned with those outlined earlier in this chapter. Our qualitative research suggests that those who are not interested in politics are also much less sure about how even to begin to make up their minds about news media, which many see as completely intertwined with, even indistinguishable from, political institutions from which they often feel distant or even alienated.

Securing trust in news calls for different approaches for different parts of the public 

Across the world, our data thus capture two important things. First, most people think in broadly similar terms about what are the most important factors when it comes to deciding which news outlets to trust – transparency, high standards, freedom from bias, and treating people fairly. These are things many journalists aspire to live up to, and for these journalists, it is encouraging to see that there is such an overlap between how many reporters and much of the public think about what makes news worth trusting. The challenge for news media when it comes to winning and maintaining trust is to show that they live up to these expectations. 

In some countries, trust in news is heavily influenced by politics, and people’s trust in individual news brands is often influenced by whether they perceive the outlet in question as editorially aligned with their own political values (or at least not antithetical to them). 

But generally, across differences in age, gender, and to a large extent across differences in education, income, and political orientation in terms of left, centre, and right, most people think in very similar terms about what matters for trust in news – even though they sometimes come to different conclusions both about news in general and particular news outlets. Many might appreciate that some outlets have values that are the same as their own. But when it comes to what people say is decisive for which outlets they trust, this factor is far less frequently mentioned than core issues around transparency, standards, bias, and fairness.  

Second, however, for a large minority of the public with a distant relation to politics – a fifth of our respondents don’t know where they stand in conventional political terms – trust in news is much lower, many of them are less clear about what might help engender trust, and their connection with news is generally more precarious. The same goes for the overlapping group of respondents who are not interested in politics – more than a third. 

The challenge for news media with this part of the public is to overcome the distance and convince them that news is engaging, interesting, and valuable enough to spend time with – and on that basis perhaps over time earn their trust as well. 

1 See Schibsted.

2 Our own work includes the three-year Trust in News project, with extensive research across Brazil, India, the UK, and the US, and last year’s Digital News Report data on media criticism and the relationship between press freedom and public trust in news.




Fake news, disinformation and misinformation in social media: a review

Esma Aïmeur, Sabrine Amri, and Gilles Brassard

Department of Computer Science and Operations Research (DIRO), University of Montreal, Montreal, Canada

Associated Data

All the data and material are available in the papers cited in the references.

Abstract

Online social networks (OSNs) are rapidly growing and have become a huge source of all kinds of global and local news for millions of users. However, OSNs are a double-edged sword: despite the great advantages they offer, such as easy, unlimited communication and instant access to news and information, they also have many disadvantages and issues, one of the most challenging being the spread of fake news. Fake news identification is still a complex, unresolved issue. Furthermore, fake news detection on OSNs presents unique characteristics and challenges that make finding a solution anything but trivial. Meanwhile, artificial intelligence (AI) approaches are still incapable of overcoming this challenging problem. To make matters worse, AI techniques such as machine learning and deep learning are leveraged to deceive people by creating and disseminating fake content. Consequently, automatic fake news detection remains a huge challenge, primarily because the content is designed to closely resemble the truth, and it is often hard to determine its veracity by AI alone without additional information from third parties. This work aims to provide a comprehensive and systematic review of fake news research as well as a fundamental review of existing approaches used to detect and prevent fake news from spreading via OSNs. We present the research problem and the existing challenges, discuss the state of the art in existing approaches for fake news detection, and point out future research directions in tackling the challenges.

Introduction

Context and motivation

Fake news, disinformation and misinformation have become such a scourge that Marcia McNutt, president of the National Academy of Sciences of the United States, is quoted as saying (in implicit reference to the COVID-19 pandemic) that “Misinformation is worse than an epidemic: It spreads at the speed of light throughout the globe and can prove deadly when it reinforces misplaced personal bias against all trustworthy evidence” in a joint statement of the National Academies 1 posted on July 15, 2021. Indeed, although online social networks (OSNs), also called social media, have improved the ease with which real-time information is broadcast, their popularity and massive use have expanded the spread of fake news by increasing the speed and scope at which it can spread. Fake news may refer to the manipulation of information, carried out through the production of false information or the distortion of true information. However, this problem was not created by social media: long ago, there were rumors in the traditional media that Elvis was not dead, 2 that the Earth was flat, 3 that aliens had invaded us, 4 and so on.

Social media has therefore become a powerful vehicle for fake news dissemination (Sharma et al. 2019; Shu et al. 2017). According to Pew Research Center’s analysis of news use across social media platforms, in 2020 about half of American adults got news on social media at least sometimes, 5 while in 2018 only one-fifth said they often got news via social media. 6

Hence, fake news can have a significant impact on society, as manipulated and false content is easier to generate and harder to detect (Kumar and Shah 2018) and as disinformation actors change their tactics (Kumar and Shah 2018; Micallef et al. 2020). In 2017, Snow predicted in the MIT Technology Review (Snow 2017) that most individuals in mature economies would consume more false than valid information by 2022.

Recent news on the COVID-19 pandemic, which flooded the web and created panic in many countries, has been reported as fake. 7 For example, holding your breath for ten seconds to one minute is not a self-test for COVID-19 8 (see Fig. 1). Similarly, online posts claiming to reveal various “cures” for COVID-19, such as eating boiled garlic or drinking chlorine dioxide (an industrial bleach), were verified 9 as fake and in some cases as dangerous; they will never cure the infection.

Fig. 1 Fake news example about a self-test for COVID-19. Source: https://cdn.factcheck.org/UploadedFiles/Screenshot031120_false.jpg (last accessed 26 December 2022)

Social media has outperformed television as the major news source for young people in the UK and the USA. 10 Moreover, as it is easier to generate and disseminate news online than with traditional media or face to face, large volumes of fake news are produced online for many reasons (Shu et al. 2017). Furthermore, a previous study of the spread of online news on Twitter (Vosoughi et al. 2018) reported that false news spreads online six times faster than truthful content and that 70% of users could not distinguish real from fake news, owing to the attraction of the novelty of the latter (Bovet and Makse 2019). It was determined that falsehood spreads significantly farther, faster, deeper and more broadly than the truth in all categories of information, and that the effects are more pronounced for false political news than for false news about terrorism, natural disasters, science, urban legends, or financial information (Vosoughi et al. 2018).

Over 1 million tweets were estimated to be related to fake news by the end of the 2016 US presidential election. 11 In 2017, a German government spokesman affirmed: “We are dealing with a phenomenon of a dimension that we have not seen before,” referring to an unprecedented spread of fake news on social networks. 12 Given the strength of this new phenomenon, fake news was chosen as the word of the year by the Macquarie Dictionary in both 2016 13 and 2018, 14 as well as by the Collins Dictionary in 2017. 15, 16 In 2020, the new term “infodemic” was coined, reflecting widespread concern among researchers (Gupta et al. 2022; Apuke and Omar 2021; Sharma et al. 2020; Hartley and Vu 2020; Micallef et al. 2020) about the proliferation of misinformation linked to the COVID-19 pandemic.

The Gartner Group’s top strategic predictions for 2018 and beyond included the need for IT leaders to quickly develop artificial intelligence (AI) algorithms to address counterfeit reality and fake news. 17 However, fake news identification is a complex issue: Snow (2017) questioned the ability of AI to win the war against fake news, and other researchers have concurred that even the best AI for spotting fake news is still ineffective. 18 Besides, recent studies have shown that the power of AI algorithms for identifying fake news is lower than their power to create it (Paschen 2019). Consequently, automatic fake news detection remains a huge challenge, primarily because the content is designed to closely resemble the truth in order to deceive users, and as a result, it is often hard to determine its veracity by AI alone. Therefore, it is crucial to consider more effective approaches to solve the problem of fake news in social media.

Contribution

The fake news problem has been addressed by researchers from various perspectives related to different topics. These topics include, but are not restricted to, the following:

  • Social science studies, which investigate why and who falls for fake news (Altay et al. 2022; Batailler et al. 2022; Sterret et al. 2018; Badawy et al. 2019; Pennycook and Rand 2020; Weiss et al. 2020; Guadagno and Guttieri 2021), whom to trust and how perceptions of misinformation and disinformation relate to media trust and media consumption patterns (Hameleers et al. 2022), how fake news differs from personal lies (Chiu and Oh 2021; Escolà-Gascón 2021), how the law can regulate digital disinformation and how governments can regulate the values of social media companies that themselves regulate disinformation spread on their platforms (Marsden et al. 2020; Schuyler 2019; Vasu et al. 2018; Burshtein 2017; Waldman 2017; Alemanno 2018; Verstraete et al. 2017), and the challenges fake news poses to democracy (Jungherr and Schroeder 2021).
  • Behavioral intervention studies, which examine what literacy means in the age of dis-, mis- and malinformation (Carmi et al. 2020), investigate whether media literacy helps the identification of fake news (Jones-Jang et al. 2021), and attempt to improve people’s news literacy (Apuke et al. 2022; Dame Adjin-Tettey 2022; Hameleers 2022; Nagel 2022; Jones-Jang et al. 2021; Mihailidis and Viotty 2017; García et al. 2020) by encouraging people to pause to assess the credibility of headlines (Fazio 2020), promoting civic online reasoning (McGrew 2020; McGrew et al. 2018) and critical thinking (Lutzke et al. 2019), together with evaluations of credibility indicators (Bhuiyan et al. 2020; Nygren et al. 2019; Shao et al. 2018a; Pennycook et al. 2020a, b; Clayton et al. 2020; Ozturk et al. 2015; Metzger et al. 2020; Sherman et al. 2020; Nekmat 2020; Brashier et al. 2021; Chung and Kim 2021; Lanius et al. 2021).
  • Social media-driven studies, which investigate the effect of signals (e.g., sources) on detecting and recognizing fake news (Vraga and Bode 2017; Jakesch et al. 2019; Shen et al. 2019; Avram et al. 2020; Hameleers et al. 2020; Dias et al. 2020; Nyhan et al. 2020; Bode and Vraga 2015; Tsang 2020; Vishwakarma et al. 2019; Yavary et al. 2020) and investigate fake and reliable news sources using complex network analysis based on search engine optimization metrics (Mazzeo and Rapisarda 2022).

The impacts of fake news have reached various areas and disciplines beyond online social networks and society (García et al. 2020 ) such as economics (Clarke et al. 2020 ; Kogan et al. 2019 ; Goldstein and Yang 2019 ), psychology (Roozenbeek et al. 2020a ; Van der Linden and Roozenbeek 2020 ; Roozenbeek and van der Linden 2019 ), political science (Valenzuela et al. 2022 ; Bringula et al. 2022 ; Ricard and Medeiros 2020 ; Van der Linden et al. 2020 ; Allcott and Gentzkow 2017 ; Grinberg et al. 2019 ; Guess et al. 2019 ; Baptista and Gradim 2020 ), health science (Alonso-Galbán and Alemañy-Castilla 2022 ; Desai et al. 2022 ; Apuke and Omar 2021 ; Escolà-Gascón 2021 ; Wang et al. 2019c ; Hartley and Vu 2020 ; Micallef et al. 2020 ; Pennycook et al. 2020b ; Sharma et al. 2020 ; Roozenbeek et al. 2020b ), environmental science (e.g., climate change) (Treen et al. 2020 ; Lutzke et al. 2019 ; Lewandowsky 2020 ; Maertens et al. 2020 ), etc.

Interesting research has been carried out to review and study the fake news issue in online social networks. Some works focus not only on fake news but also distinguish between fake news and rumor (Bondielli and Marcelloni 2019; Meel and Vishwakarma 2020), while others tackle the whole problem, from characterization to processing techniques (Shu et al. 2017; Guo et al. 2020; Zhou and Zafarani 2020). However, they mostly study approaches from a machine learning perspective (Bondielli and Marcelloni 2019), a data mining perspective (Shu et al. 2017), a crowd intelligence perspective (Guo et al. 2020), or a knowledge-based perspective (Zhou and Zafarani 2020). Moreover, most of these studies ignore at least one of these perspectives, and in many cases they do not cover other existing detection approaches, such as those based on blockchain and fact-checking, or analyses of the metrics used for search engine optimization (Mazzeo and Rapisarda 2022). In our work, by contrast, and to the best of our knowledge, we cover all the approaches used for fake news detection. Indeed, we investigate the proposed solutions from broader perspectives (i.e., the detection techniques that are used, as well as the different aspects and types of information used).

Therefore, in this paper, we are motivated by the following facts. First, fake news detection on social media is still in an early stage of development, and many challenging issues remain that require deeper investigation; hence, it is necessary to discuss potential research directions that can improve fake news detection and mitigation tasks. Second, the dynamic nature of fake news propagation through social networks further complicates matters (Sharma et al. 2019): false information can easily reach and impact a large number of users in a short time (Friggeri et al. 2014; Qian et al. 2018). Moreover, fact-checking organizations cannot keep up with the dynamics of propagation, as they rely on human verification, which can hold back a timely and cost-effective response (Kim et al. 2018; Ruchansky et al. 2017; Shu et al. 2018a).

Our work focuses primarily on understanding the “fake news” problem, its related challenges and root causes, and reviewing automatic fake news detection and mitigation methods in online social networks as addressed by researchers. The main contributions that differentiate us from other works are summarized below:

  • We present the general context from which the fake news problem emerged (i.e., online deception).
  • We review existing definitions of fake news, identify the terms and features most commonly used to define fake news, and categorize related works accordingly.
  • We propose a fake news typology classification based on the various categorizations of fake news reported in the literature.
  • We point out the most challenging factors preventing researchers from proposing highly effective solutions for automatic fake news detection in social media.
  • We highlight and classify representative studies in the domain of automatic fake news detection and mitigation on online social networks including the key methods and techniques used to generate detection models.
  • We discuss the key shortcomings that may inhibit the effectiveness of the proposed fake news detection methods in online social networks.
  • We provide recommendations that can help address these shortcomings and improve the quality of research in this domain.

The rest of this article is organized as follows. We explain the methodology with which the studied references are collected and selected in Sect.  2 . We introduce the online deception problem in Sect.  3 . We highlight the modern-day problem of fake news in Sect.  4 , followed by challenges facing fake news detection and mitigation tasks in Sect.  5 . We provide a comprehensive literature review of the most relevant scholarly works on fake news detection in Sect.  6 . We provide a critical discussion and recommendations that may fill some of the gaps we have identified, as well as a classification of the reviewed automatic fake news detection approaches, in Sect.  7 . Finally, we provide a conclusion and propose some future directions in Sect.  8 .

Review methodology

This section introduces the systematic review methodology on which we relied to perform our study. We start with the formulation of the research questions, which guided the selection of the relevant research literature. Then, we describe the sources of information together with the search and inclusion/exclusion criteria we used to select the final set of papers.

Research questions formulation

The research scope, research questions, and inclusion/exclusion criteria were established following an initial evaluation of the literature, and the following research questions were formulated and addressed.

  • RQ1: What is fake news in social media, how is it defined in the literature, what are its related concepts, and what are its different types?
  • RQ2: What are the existing challenges and issues related to fake news?
  • RQ3: What are the available techniques used to perform fake news detection in social media?

Sources of information

We broadly searched journal and conference research articles, books, and magazines as sources from which to extract relevant articles. Our search covered the main scientific databases and digital libraries, such as Google Scholar, 19 IEEE Xplore, 20 Springer Link, 21 ScienceDirect, 22 Scopus 23 and the ACM Digital Library. 24 We also screened the most relevant high-profile conferences, such as WWW, SIGKDD, VLDB and ICDE, to find recent work.

Search criteria

We focused our research on a period of ten years, while ensuring that about two-thirds of the research papers we considered were published in or after 2019. Additionally, we defined a set of keywords with which to search the above-mentioned scientific databases, since we concentrated on reviewing the current state of the art in addition to the challenges and future directions. The set of keywords includes the following terms: fake news, disinformation, misinformation, information disorder, social media, detection techniques, detection methods, survey, literature review.

Study selection, exclusion and inclusion criteria

To retrieve relevant research articles, based on our sources of information and search criteria, a systematic keyword-based search was carried out by posing different search queries, as shown in Table  1 .

List of keywords for searching relevant articles

Fake news + social media
Fake news + disinformation
Fake news + misinformation
Fake news + information disorder
Fake news + survey
Fake news + detection methods
Fake news + literature review
Fake news + detection techniques
Fake news + detection + social media
Disinformation + misinformation + social media
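To make the query-composition step concrete, here is a minimal sketch, in Python, of how the queries in Table 1 could be generated programmatically. It is purely illustrative: the paper does not describe any tooling, and the quoting and AND syntax are assumptions that would need adapting to each database’s own query language.

```python
from itertools import product

# Terms from Table 1: the primary term "fake news" is paired with each secondary term.
PRIMARY = ["fake news"]
SECONDARY = [
    "social media", "disinformation", "misinformation", "information disorder",
    "survey", "detection methods", "literature review", "detection techniques",
]

def compose_queries(primary, secondary):
    """Build one quoted AND-query per (primary, secondary) pair."""
    return [f'"{p}" AND "{s}"' for p, s in product(primary, secondary)]

queries = compose_queries(PRIMARY, SECONDARY)
# Two queries in Table 1 combine three terms, so they are added explicitly.
queries += [
    '"fake news" AND "detection" AND "social media"',
    '"disinformation" AND "misinformation" AND "social media"',
]

for q in queries:
    print(q)
```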

This search produced a primary list of articles. To this initial list of studies, we applied the set of inclusion/exclusion criteria presented in Table 2, which determine whether a study should be included or not, to select the appropriate research papers.

Inclusion and exclusion criteria

Inclusion criteria:
  • Peer-reviewed and written in the English language
  • Clearly describes the fake news, misinformation or disinformation problem in social networks
  • Written by academic or industrial researchers
  • Has a high number of citations
  • Recent articles only (last ten years)
  • In the case of equivalent studies, the one published in the highest-rated journal or conference is selected, to sustain a high-quality set of articles on which the review is conducted
  • Proposes methodologies, methods, or approaches for fake news detection in online social networks

Exclusion criteria:
  • Articles in a language other than English
  • Does not focus on the fake news, misinformation, or disinformation problem in social networks
  • Short papers, posters, or similar
  • Articles not following the inclusion criteria
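The screening that follows can be pictured as a predicate applied to each candidate record. The sketch below is a hypothetical encoding of the Table 2 criteria; the record fields and the citation threshold are illustrative assumptions (the paper requires a “high number of citations” without quantifying it), and the rule that keeps only the highest-rated venue among equivalent studies is omitted because it compares records rather than testing them one by one.

```python
from dataclasses import dataclass

@dataclass
class Record:
    # Illustrative metadata fields; the actual screening read titles and abstracts.
    language: str
    peer_reviewed: bool
    year: int
    citations: int
    short_paper_or_poster: bool
    addresses_fake_news_in_osns: bool

def passes_screening(record: Record, current_year: int = 2022, min_citations: int = 10) -> bool:
    """Apply the Table 2 inclusion/exclusion criteria to a single record."""
    return (
        record.language == "English"
        and record.peer_reviewed
        and not record.short_paper_or_poster
        and current_year - record.year <= 10        # recent articles only (last ten years)
        and record.citations >= min_citations       # "high number of citations" (threshold assumed)
        and record.addresses_fake_news_in_osns      # focuses on fake news/mis-/disinformation in OSNs
    )

# Example: a 2020 peer-reviewed OSN fake news paper with 42 citations passes.
assert passes_screening(Record("English", True, 2020, 42, False, True))
```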

After reading the abstracts, we excluded articles that did not meet our criteria and kept the research most important for understanding the field. We reviewed the remaining articles in full and found only 61 research papers that discuss the definition of the term fake news and its related concepts (see Table 4). We used the other papers to understand the field, reveal the challenges, review the detection techniques, and discuss future directions.

Classification of fake news definitions based on the used term and features

Intent and authenticity
  • Fake news: Shu et al. ( ), Sharma et al. ( ), Mustafaraj and Metaxas ( ), Klein and Wueller ( ), Potthast et al. ( ), Allcott and Gentzkow ( ), Zhou and Zafarani ( ), Zhang and Ghorbani ( ), Conroy et al. ( ), Celliers and Hattingh ( ), Nakov ( ), Shu et al. ( ), Tandoc Jr et al. ( ), Abu Arqoub et al. ( ), Molina et al. ( ), de Cock Buning ( ), Meel and Vishwakarma ( )
  • Misinformation: Wu et al. ( ), Shu et al. ( ), Islam et al. ( ), Hameleers et al. ( )
  • Disinformation: Kapantai et al. ( ), Shu et al. ( ), Shu et al. ( ), Kumar et al. ( ), Jungherr and Schroeder ( ), Starbird et al. ( ), de Cock Buning ( ), Bastick ( ), Bringula et al. ( ), Tsang ( ), Hameleers et al. ( ), Wu et al. ( )
  • Malinformation: Shu et al. ( ), Di Domenico et al. ( ), Dame Adjin-Tettey ( )
  • Information disorder: Wardle and Derakhshan ( ), Wardle ( ), Derakhshan and Wardle ( ), Shu et al. ( )

Intent or authenticity
  • Fake news: Jin et al. ( ), Rubin et al. ( ), Balmas ( ), Brewer et al. ( ), Egelhofer and Lecheler ( ), Lazer et al. ( ), Allen et al. ( ), Guadagno and Guttieri ( ), Van der Linden et al. ( ), ERGA ( )
  • Misinformation: Pennycook and Rand ( ), Shao et al. ( ), Shao et al. ( ), Micallef et al. ( ), Ha et al. ( ), Singh et al. ( ), Wu et al. ( )
  • Disinformation: Marsden et al. ( ), Ireton and Posetti ( ), ERGA ( ), Baptista and Gradim ( )
  • False information: Habib et al. ( )
  • Malinformation: Carmi et al. ( )

Intent and knowledge
  • Fake news: Weiss et al. ( )
  • Disinformation: Bhattacharjee et al. ( ), Khan et al. ( )
  • False information: Kumar and Shah ( ), Guo et al. ( )

A brief introduction to online deception

The Cambridge Online Dictionary defines deception as “the act of hiding the truth, especially to get an advantage.” Deception relies on people’s trust, doubt and strong emotions, which may prevent them from thinking and acting clearly (Aïmeur et al. 2018). In previous work (Aïmeur et al. 2018), we also defined it as the process that undermines the ability to consciously make decisions and take convenient actions, following personal values and boundaries. In other words, deception gets people to do things they would not otherwise do. In the context of online deception, several factors need to be considered: the deceiver, the purpose or aim of the deception, the social media service, the deception technique and the potential target (Aïmeur et al. 2018; Hage et al. 2021).

Researchers are working on developing new ways to protect users and prevent online deception (Aïmeur et al. 2018 ). Due to the sophistication of attacks, this is a complex task: malicious attackers are using increasingly sophisticated tools and strategies to deceive users. Furthermore, the way information is organized and exchanged in social media may expose OSN users to many risks (Aïmeur et al. 2013 ).

In fact, this field is one of the recent research areas that needs the collaborative efforts of multiple disciplines such as psychology, sociology, journalism and computer science, as well as cyber-security and digital marketing (the latter two are not yet well explored in the field of dis/mis/malinformation but are relevant for future research). Moreover, Ismailov et al. ( 2020 ) analyzed the main causes of the efficiency gap between laboratory results and real-world implementations.

Reviewing the state of the art of online deception is beyond the scope of this paper. However, we think it is crucial to note that fake news, misinformation and disinformation are indeed parts of the larger landscape of online deception (Hage et al. 2021 ).

Fake news, the modern-day problem

Fake news has existed for a very long time, long before its wide circulation was facilitated by the invention of the printing press. 25 For instance, Socrates was condemned to death more than twenty-five hundred years ago under the fake news that he was guilty of impiety against the pantheon of Athens and of corrupting the youth. 26 A Google Trends analysis of the term “fake news” reveals an explosion in popularity around the time of the 2016 US presidential election. 27 Fake news detection is a problem that has recently been addressed by numerous organizations, including the European Union 28 and NATO. 29

In this section, we first review the fake news definitions provided in the literature. We identify the terms and features used in these definitions and classify the definitions accordingly. Then, we provide a fake news typology based on distinct categorizations that we propose, and we define and compare the most cited forms of one specific fake news category (i.e., the intent-based fake news category).

Definitions of fake news

“Fake news” is defined in the Collins English Dictionary as false and often sensational information disseminated under the guise of news reporting, 30 yet the term has evolved over time and has become synonymous with the spread of false information (Cooke 2017 ).

The first definition of the term fake news was provided by Allcott and Gentzkow ( 2017 ) as news articles that are intentionally and verifiably false and could mislead readers. Other definitions were later provided in the literature, and they all agree that fake news is, by definition, false (i.e., non-factual). However, they disagree on whether related concepts such as satire , rumors , conspiracy theories , misinformation and hoaxes should be included in or excluded from the definition. More recently, Nakov ( 2020 ) reported that the term fake news has started to mean different things to different people, and for some politicians, it even means “news that I do not like.”

Hence, there is still no agreed definition of the term “fake news.” Moreover, the literature uses many terms and concepts to refer to fake news (Van der Linden et al. 2020; Molina et al. 2021): fake news (Abu Arqoub et al. 2022; Allen et al. 2020; Allcott and Gentzkow 2017; Shu et al. 2017; Sharma et al. 2019; Zhou and Zafarani 2020; Zhang and Ghorbani 2020; Conroy et al. 2015; Celliers and Hattingh 2020; Nakov 2020; Shu et al. 2020c; Jin et al. 2016; Rubin et al. 2016; Balmas 2014; Brewer et al. 2013; Egelhofer and Lecheler 2019; Mustafaraj and Metaxas 2017; Klein and Wueller 2017; Potthast et al. 2017; Lazer et al. 2018; Weiss et al. 2020; Tandoc Jr et al. 2021; Guadagno and Guttieri 2021), disinformation (Kapantai et al. 2021; Shu et al. 2020a, c; Kumar et al. 2016; Bhattacharjee et al. 2020; Marsden et al. 2020; Jungherr and Schroeder 2021; Starbird et al. 2019; Ireton and Posetti 2018), misinformation (Wu et al. 2019; Shu et al. 2020c; Shao et al. 2016, 2018b; Pennycook and Rand 2019; Micallef et al. 2020), malinformation (Dame Adjin-Tettey 2022; Carmi et al. 2020; Shu et al. 2020c), false information (Kumar and Shah 2018; Guo et al. 2020; Habib et al. 2019), information disorder (Shu et al. 2020c; Wardle and Derakhshan 2017; Wardle 2018; Derakhshan and Wardle 2017), information warfare (Guadagno and Guttieri 2021) and information pollution (Meel and Vishwakarma 2020).

There is also a remarkable amount of disagreement over the classification of the term fake news in the research literature, as well as in policy (de Cock Buning 2018; ERGA 2018, 2021). Some consider fake news as a type of misinformation (Allen et al. 2020; Singh et al. 2021; Ha et al. 2021; Pennycook and Rand 2019; Shao et al. 2018b; Di Domenico et al. 2021; Sharma et al. 2019; Celliers and Hattingh 2020; Klein and Wueller 2017; Potthast et al. 2017; Islam et al. 2020), others consider it as a type of disinformation (de Cock Buning 2018; Bringula et al. 2022; Baptista and Gradim 2022; Tsang 2020; Tandoc Jr et al. 2021; Bastick 2021; Khan et al. 2019; Shu et al. 2017; Nakov 2020; Shu et al. 2020c; Egelhofer and Lecheler 2019), while others associate the term with both disinformation and misinformation (Wu et al. 2022; Dame Adjin-Tettey 2022; Hameleers et al. 2022; Carmi et al. 2020; Allcott and Gentzkow 2017; Zhang and Ghorbani 2020; Potthast et al. 2017; Weiss et al. 2020; Tandoc Jr et al. 2021; Guadagno and Guttieri 2021). On the other hand, some prefer to differentiate fake news from both terms (ERGA 2018, 2021; Molina et al. 2021; Zhou and Zafarani 2020; Jin et al. 2016; Rubin et al. 2016; Balmas 2014; Brewer et al. 2013).

The existing terms can be separated into two groups. The first group represents the general terms, which are information disorder , false information and fake news , each of which includes a subset of terms from the second group. The second group represents the elementary terms, which are misinformation , disinformation and malinformation . The literature agrees on the definitions of the latter group, but there is still no agreed-upon definition of the first group. In Fig.  2 , we model the relationship between the most used terms in the literature.

Fig. 2 Modeling of the relationship between terms related to fake news

The terms most used in the literature to refer to, categorize and classify fake news can be summarized and defined as shown in Table  3 , in which we capture the similarities and differences between the terms based on two common key features: the intent and the authenticity of the news content. The intent feature refers to the intention behind the term that is used (i.e., whether or not the purpose is to mislead or cause harm), whereas the authenticity feature refers to its factual aspect (i.e., whether the content is verifiably false or not; we label the content genuine in the latter case). Some of these terms are explicitly used to refer to fake news (i.e., disinformation, misinformation and false information), while others are not (i.e., malinformation). In the comparison table, the empty dash (–) cell denotes that the classification does not apply.

Table 3. A comparison between used terms based on intent and authenticity

  • False information: verifiably false information (intent: –; authenticity: false)
  • Misinformation: false information that is shared without the intention to mislead or to cause harm (intent: not to mislead; authenticity: false)
  • Disinformation: false information that is shared to intentionally mislead (intent: to mislead; authenticity: false)
  • Malinformation: genuine information that is shared with an intent to cause harm (intent: to cause harm; authenticity: genuine)
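
To make the distinctions in Table 3 concrete for implementers, the taxonomy can be encoded as a small lookup structure. The following Python sketch simply mirrors the table; the dictionary layout and the function name are our own illustrative choices, not part of any cited work:

```python
# Illustrative encoding of Table 3: each term is mapped to its intent
# (None when the classification does not apply) and its authenticity.
TERM_TAXONOMY = {
    "false information": {"intent": None, "authenticity": "false"},
    "misinformation": {"intent": "not to mislead", "authenticity": "false"},
    "disinformation": {"intent": "to mislead", "authenticity": "false"},
    "malinformation": {"intent": "to cause harm", "authenticity": "genuine"},
}

def classify_term(intends_harm_or_deceit: bool, is_false: bool) -> str:
    """Map the two key features (intent, authenticity) back to a term."""
    if is_false:
        return "disinformation" if intends_harm_or_deceit else "misinformation"
    return "malinformation" if intends_harm_or_deceit else "genuine information"

# Genuine content shared with harmful intent is malinformation.
print(classify_term(intends_harm_or_deceit=True, is_false=False))
```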

In Fig.  3 , we identify the different features used in the literature to define fake news (i.e., intent, authenticity and knowledge). Some definitions are based on two key features, authenticity and intent (i.e., news articles that are intentionally and verifiably false and could mislead readers), while others are based on either authenticity or intent alone. Other researchers categorize false information on the web and social media based on its intent and knowledge (i.e., when there is a single ground truth). In Table  4 , we classify the existing fake news definitions based on the used term and the used features . In the classification, the references in the cells refer to the research study in which a fake news definition was provided, while the empty dash (–) cells denote that the classification does not apply.

Fig. 3 The features used for fake news definition

Fake news typology

Various categorizations of fake news have been provided in the literature. We can distinguish two major categories of fake news based on the studied perspective (i.e., intention or content), as shown in Fig.  4 . Note that our proposed fake news typology does not concern detection methods, and its categories are not mutually exclusive: a given category of fake news can be described from both perspectives (i.e., intention and content) at the same time. For instance, satire (i.e., intent-based fake news) can contain text and/or multimedia content types of data (e.g., headline, body, image, video) (i.e., content-based fake news), and so on.

Fig. 4 Fake news typology

Most researchers classify fake news based on the intent (Collins et al. 2020 ; Bondielli and Marcelloni 2019 ; Zannettou et al. 2019 ; Kumar et al. 2016 ; Wardle 2017 ; Shu et al. 2017 ; Kumar and Shah 2018 ) (see Sect.  4.2.2 ). However, other researchers (Parikh and Atrey 2018 ; Fraga-Lamas and Fernández-Caramés 2020 ; Hasan and Salah 2019 ; Masciari et al. 2020 ; Bakdash et al. 2018 ; Elhadad et al. 2019 ; Yang et al. 2019b ) focus on the content to categorize types of fake news through distinguishing the different formats and content types of data in the news (e.g., text and/or multimedia).

Recently, another classification was proposed by Zhang and Ghorbani ( 2020 ), based on the combination of content and intent to categorize fake news. They distinguish between the physical and non-physical content of fake news: physical content consists of the carriers and format of the news, whereas non-physical content consists of the opinions, emotions, attitudes and sentiments that the news creators want to express.

Content-based fake news category

According to researchers of this category (Parikh and Atrey 2018 ; Fraga-Lamas and Fernández-Caramés 2020 ; Hasan and Salah 2019 ; Masciari et al. 2020 ; Bakdash et al. 2018 ; Elhadad et al. 2019 ; Yang et al. 2019b ), forms of fake news may include false text such as hyperlinks or embedded content, and multimedia such as false videos (Demuyakor and Opata 2022 ), images (Masciari et al. 2020 ; Shen et al. 2019 ), audio (Demuyakor and Opata 2022 ) and so on. Moreover, we can also find multimodal content (Shu et al. 2020a ), that is, fake news articles and posts composed of multiple types of data combined together, for example, a fabricated image along with a related text (Shu et al. 2020a ). Examples of fake news forms in this category include deepfake videos (Yang et al. 2019b ) and GAN-generated fake images (Zhang et al. 2019b ), which are machine-generated fake content, based on artificial intelligence, that unsophisticated social network users find hard to identify.

The effects of these forms of fake news content vary with respect to credibility assessment as well as sharing intentions, both of which influence the spread of fake news on OSNs. For instance, people with little knowledge about an issue are easier to convince that misleading or fake news is real than those who are strongly concerned about it, especially when the content is shared via a video modality as compared to a text or audio modality (Demuyakor and Opata 2022 ).

Intent-based Fake News Category

The most often mentioned and discussed forms of fake news according to researchers in this category include but are not restricted to clickbait , hoax , rumor , satire , propaganda , framing , conspiracy theories and others. In the following subsections, we explain these types of fake news as they were defined in the literature and undertake a brief comparison between them as depicted in Table  5 . The following are the most cited forms of intent-based types of fake news, and their comparison is based on what we suspect are the most common criteria mentioned by researchers.

Table 5. A comparison between the different types of intent-based fake news

  • Clickbait: intent to deceive: high; propagation: slow; negative impact: low; goal: popularity, profit
  • Hoax: intent to deceive: high; propagation: fast; negative impact: low; goal: other
  • Rumor: intent to deceive: high; propagation: fast; negative impact: high; goal: other
  • Satire: intent to deceive: low; propagation: slow; negative impact: low; goal: popularity, other
  • Propaganda: intent to deceive: high; propagation: fast; negative impact: high; goal: popularity
  • Framing: intent to deceive: high; propagation: fast; negative impact: low; goal: other
  • Conspiracy theory: intent to deceive: high; propagation: fast; negative impact: high; goal: other

Clickbait refers to misleading headlines and thumbnails of content on the web (Zannettou et al. 2019 ) that tend to be fake stories with catchy headlines aimed at enticing the reader to click on a link (Collins et al. 2020 ). This type of fake news is considered the least severe type of false information because, if a user reads/views the whole content, it is possible to tell whether the headline and/or the thumbnail was misleading (Zannettou et al. 2019 ). The typical goal behind using clickbait is to increase traffic to a website (Zannettou et al. 2019 ).

A hoax is a false (Zubiaga et al. 2018 ) or inaccurate (Zannettou et al. 2019 ), intentionally fabricated (Collins et al. 2020 ) news story used to masquerade the truth (Zubiaga et al. 2018 ) and presented as factual (Zannettou et al. 2019 ) to deceive the public or audiences (Collins et al. 2020 ). This category is also known as half-truth or factoid stories (Zannettou et al. 2019 ). Popular examples of hoaxes are stories that report the false death of celebrities (Zannettou et al. 2019 ) and public figures (Collins et al. 2020 ). Recently, hoaxes about COVID-19 have been circulating through social media.

The term rumor refers to ambiguous or never confirmed claims (Zannettou et al. 2019 ) that are disseminated with a lack of evidence to support them (Sharma et al. 2019 ). This kind of information is widely propagated on OSNs (Zannettou et al. 2019 ). However, rumors originate from unverified sources and are not necessarily false: they may turn out to be true, or remain unresolved (Zubiaga et al. 2018 ).

Satire refers to stories that contain a lot of irony and humor (Zannettou et al. 2019 ). It presents stories as news that might be factually incorrect, but the intent is not to deceive but rather to call out, ridicule, or expose behavior that is shameful, corrupt, or otherwise “bad” (Golbeck et al. 2018 ). This is done with a fabricated story or by exaggerating the truth reported in mainstream media in the form of comedy (Collins et al. 2020 ). The intent behind satire seems somewhat legitimate, and many authors (such as Wardle ( 2017 )) do include satire as a type of fake news: although there is no intention to cause harm, it has the potential to mislead or fool people.

Also, Golbeck et al. ( 2018 ) mention that there is a spectrum from fake to satirical news that they found to be exploited by many fake news sites. These sites used disclaimers at the bottom of their webpages to suggest they were “satirical” even when there was nothing satirical about their articles, to protect themselves from accusations of being fake. The difference with the satirical form of fake news is that the authors or the host present themselves as comedians or entertainers rather than journalists informing the public (Collins et al. 2020 ). However, most audiences believe the information conveyed in this satirical form because the comedian usually takes news from mainstream media and frames it to suit their program (Collins et al. 2020 ).

Propaganda refers to news stories created by political entities to mislead people. It is a special instance of fabricated stories that aim to harm the interests of a particular party and that typically have a political context (Zannettou et al. 2019 ). Propaganda was widely used during both World Wars (Collins et al. 2020 ) and during the Cold War (Zannettou et al. 2019 ). It is a consequential type of false information as it can change the course of human history (e.g., by changing the outcome of an election) (Zannettou et al. 2019 ). States are the main actors of propaganda. Recently, propaganda has also been used by politicians and media organizations to support a certain position or view (Collins et al. 2020 ). Online astroturfing is one example of a tool used for the dissemination of propaganda. It is a covert manipulation of public opinion (Peng et al. 2017 ) that aims to make it seem that many people share the same opinion about something. Astroturfing can affect different domains of interest, and online astroturfing can accordingly be divided into political astroturfing, corporate astroturfing and astroturfing in e-commerce or online services (Mahbub et al. 2019 ). Propaganda-type fake news can be debunked with manual fact-based detection models, such as expert-based fact-checking (Collins et al. 2020 ).

Framing refers to employing some aspect of reality to make content more visible while the truth is concealed (Collins et al. 2020 ), so as to deceive and misguide readers. People understand certain concepts based on the way they are framed and presented. An example of framing was provided by Collins et al. ( 2020 ): suppose a leader X says “I will neutralize my opponent,” simply meaning he will beat his opponent in a given election. Such a statement can be framed as “leader X threatens to kill Y,” and this framed statement provides a total misrepresentation of the original meaning.

Conspiracy Theories

Conspiracy theories refer to the belief that an event is the result of secret plots generated by powerful conspirators. Conspiracy belief refers to people’s adoption and belief of conspiracy theories, and it is associated with psychological, political and social factors (Douglas et al. 2019 ). Conspiracy theories are widespread in contemporary democracies (Sutton and Douglas 2020 ), and they have major consequences. For instance, lately and during the COVID-19 pandemic, conspiracy theories have been discussed from a public health perspective (Meese et al. 2020 ; Allington et al. 2020 ; Freeman et al. 2020 ).

Comparison Between Most Popular Intent-based Types of Fake News

Following a review of the most popular intent-based types of fake news, we compare them as shown in Table  5 based on the most common criteria mentioned by researchers in their definitions as listed below.

  • the intent behind the news, which refers to whether a given news type was mainly created to intentionally deceive people or not (e.g., humor, irony, entertainment, etc.);
  • the way that the news propagates through OSN, which determines the nature of the propagation of each type of fake news and this can be either fast or slow propagation;
  • the severity of the impact of the news on OSN users, which refers to whether the public has been highly impacted by the given type of fake news; the mentioned impact of each fake news type is mainly the proportion of the negative impact;
  • and the goal behind disseminating the news, which can be to gain popularity for a particular entity (e.g., political party), for profit (e.g., lucrative business), or other reasons such as humor and irony in the case of satire, spreading panic or anger, and manipulating the public in the case of hoaxes, made-up stories about a particular person or entity in the case of rumors, and misguiding readers in the case of framing.

Note, however, that the comparison provided in Table  5 is deduced from the studied research papers; it reflects our own reading rather than empirical data.

We suspect that the most dangerous types of fake news are the ones with high intention to deceive the public, fast propagation through social media, high negative impact on OSN users, and complicated hidden goals and agendas. However, while the other types of fake news are less dangerous, they should not be ignored.

Moreover, it is important to highlight that overlaps between the types of fake news mentioned above have been demonstrated, so a given piece of false information may fall within multiple categories (Zannettou et al. 2019 ). Here, we provide two examples from Zannettou et al. ( 2019 ) to illustrate possible overlaps: (1) a rumor may also use clickbait techniques to increase the audience that will read the story; and (2) a propaganda story may be a special instance of a framing story.

Challenges related to fake news detection and mitigation

To alleviate fake news and its threats, it is crucial to first identify and understand the factors involved that continue to challenge researchers. Thus, the main question is to explore and investigate the factors that make it easier to fall for manipulated information. Despite the tremendous progress made in alleviating some of the challenges in fake news detection (Sharma et al. 2019 ; Zhou and Zafarani 2020 ; Zhang and Ghorbani 2020 ; Shu et al. 2020a ), much more work needs to be accomplished to address the problem effectively.

In this section, we discuss several open issues that make fake news detection in social media a challenging problem. These issues can be summarized as follows: content-based issues (i.e., deceptive content that resembles the truth very closely), contextual issues (i.e., lack of user awareness, social bots as spreaders of fake content, and the dynamic nature of OSNs, which leads to fast propagation), as well as dataset issues (i.e., there is still no one-size-fits-all benchmark dataset for fake news detection). These various aspects have been shown (Shu et al. 2017 ) to have a great impact on the accuracy of fake news detection approaches.

Content-based issue, deceptive content

Automatic fake news detection remains a huge challenge, primarily because the content is designed to closely resemble the truth. Besides, most deceivers choose their words carefully and use their language strategically to avoid being caught. Therefore, it is often hard for AI to determine the veracity of such content without relying on additional information from third parties such as fact-checkers.

Abdullah-All-Tanvir et al. ( 2020 ) reported that fake news tends to have more complicated stories and hardly ever makes any references. It is also more likely to contain a greater number of words that express negative emotions. This makes it nearly impossible for a human to manually assess the credibility of such content, which is why detecting fake news on social media is quite challenging. Moreover, fake news appears in multiple types and forms, which makes it hard to define a single global solution able to capture and deal with the disseminated content. Consequently, detecting false information is not a straightforward task due to its various types and forms (Zannettou et al. 2019 ).

Contextual issues

Contextual issues are challenges that, we suspect, are not related to the content of the news but rather are inferred from the context of the online news post (i.e., humans are the weakest factor due to lack of user awareness, social bot spreaders, and the dynamic nature of online social platforms with the resulting fast propagation of fake news).

Humans are the weakest factor due to the lack of awareness

Recent statistics 31 show that the percentage of unintentional fake news spreaders (people who share fake news without the intention to mislead) on social media is five times higher than that of intentional spreaders. Moreover, other recent statistics 32 show that the percentage of people who are confident in their ability to discern fact from fiction is ten times higher than the percentage of those who are not confident in the truthfulness of what they share. From these figures, we can deduce a lack of human awareness about the rise of fake news.

Public susceptibility and lack of user awareness (Sharma et al. 2019 ) have always been the most challenging problem when dealing with fake news and misinformation. This is a complex issue because many people believe almost everything on the Internet and the ones who are new to digital technology or have less expertise may be easily fooled (Edgerly et al. 2020 ).

Moreover, it has been widely proven (Metzger et al. 2020 ; Edgerly et al. 2020 ) that people are often motivated to support and accept information that matches their preexisting viewpoints and beliefs, and to reject information that does not fit. In this regard, Shu et al. ( 2017 ) illustrate an interesting correlation between fake news spread and psychological and cognitive theories. They further suggest that humans are more likely to believe information that confirms their existing views and ideological beliefs. Consequently, they deduce that humans are naturally not very good at differentiating real information from fake information.

Recent research by Giachanou et al. ( 2020 ) studies the role of personality and linguistic patterns in discriminating between fake news spreaders and fact-checkers. They classify a user as a potential fact-checker or a potential fake news spreader based on features that represent users’ personality traits and linguistic patterns used in their tweets. They show that leveraging personality traits and linguistic patterns can improve the performance in differentiating between checkers and spreaders.

Furthermore, several researchers studied the prevalence of fake news on social networks during (Allcott and Gentzkow 2017 ; Grinberg et al. 2019 ; Guess et al. 2019 ; Baptista and Gradim 2020 ) and after (Garrett and Bond 2021 ) the 2016 US presidential election and found that individuals most likely to engage with fake news sources were generally conservative-leaning, older, and highly engaged with political news.

Metzger et al. ( 2020 ) examine how individuals evaluate the credibility of biased news sources and stories. They investigate the role of both cognitive dissonance and credibility perceptions in selective exposure to attitude-consistent news information. They found that online news consumers tend to perceive attitude-consistent news stories as more accurate and more credible than attitude-inconsistent stories.

Similarly, Edgerly et al. ( 2020 ) explore the impact of news headlines on the audience’s intent to verify whether given news is true or false. They concluded that participants exhibit higher intent to verify the news only when they believe the headline to be true, which is predicted by perceived congruence with preexisting ideological tendencies.

Luo et al. ( 2022 ) evaluate the effects of endorsement cues in social media on message credibility and detection accuracy. Results showed that headlines associated with a high number of likes increased credibility, thereby enhancing detection accuracy for real news but undermining accuracy for fake news. Consequently, they highlight the urgency of empowering individuals to assess both news veracity and endorsement cues appropriately on social media.

Moreover, misinformed people are a greater problem than uninformed people (Kuklinski et al. 2000 ), because the former hold inaccurate opinions (which may concern politics, climate change or medicine) that are harder to correct. Indeed, people find it difficult to update their misinformation-based beliefs even after these have been proven false (Flynn et al. 2017 ). Moreover, even when a person has accepted the corrected information, the original belief may still affect their opinion (Nyhan and Reifler 2015 ).

Falling for disinformation may also be explained by a lack of critical thinking and of the need for evidence to support information (Vilmer et al. 2018 ; Badawy et al. 2019 ). However, it is also possible that people choose misinformation because they engage in directionally motivated reasoning (Badawy et al. 2019 ; Flynn et al. 2017 ). Online users are generally vulnerable and tend to perceive social media as reliable, as reported by Abdullah-All-Tanvir et al. ( 2019 ), who propose to automate fake news detection.

It is worth noting that, in addition to bots causing the outpouring of the majority of misrepresentations, specific individuals also contribute a large share of this issue (Abdullah-All-Tanvir et al. 2019 ). Furthermore, Vosoughi et al. ( 2018 ) found that, contrary to conventional wisdom, robots have accelerated the spread of real and fake news at the same rate, implying that fake news spreads more than the truth because humans, not robots, are more likely to spread it.

In this case, verified users and those with numerous followers were not necessarily the ones responsible for spreading the corrupted posts (Abdullah-All-Tanvir et al. 2019 ).

Viral fake news can cause much havoc to our society. Therefore, to mitigate the negative impact of fake news, it is important to analyze the factors that lead people to fall for misinformation and to further understand why people spread fake news (Cheng et al. 2020 ). Measuring the accuracy, credibility, veracity and validity of news contents can also be a key countermeasure to consider.

Social bots spreaders

Several authors (Shu et al. 2018b , 2017 ; Shi et al. 2019 ; Bessi and Ferrara 2016 ; Shao et al. 2018a ) have also shown that fake news is likely to be created and spread by non-human accounts with similar attributes and structure in the network, such as social bots (Ferrara et al. 2016 ). Bots (short for software robots) have existed since the early days of computing. A social bot is a computer algorithm that automatically produces content and interacts with humans on social media, trying to emulate and possibly alter their behavior (Ferrara et al. 2016 ). Although they are designed to provide useful services, they can be harmful, for example when they contribute to the spread of unverified information or rumors (Ferrara et al. 2016 ). However, it is important to note that bots are simply tools created and maintained by humans for specific hidden agendas.

Social bots tend to connect with legitimate users instead of other bots. They try to act like a human with fewer words and fewer followers on social media. This contributes to the forwarding of fake news (Jiang et al. 2019 ). Moreover, there is a difference between bot-generated and human-written clickbait (Le et al. 2019 ).

Many researchers have addressed ways of identifying and analyzing possible sources of fake news spread in social media. Recent research by Shu et al. ( 2020a ) describes how social bots use two strategies to spread low-credibility content. First, they amplify interactions with content as soon as it is created to make it look legitimate and to facilitate its spread across social networks. Next, they try to increase public exposure to the created content, and thus boost its perceived credibility, by targeting influential users who are more likely to believe disinformation, in the hope of getting them to “repost” the fabricated content. They further discuss the social bot detection systems taxonomy proposed by Ferrara et al. ( 2016 ), which divides bot detection methods into three classes: (1) graph-based, (2) crowdsourcing and (3) feature-based social bot detection methods.

Similarly, Shao et al. ( 2018a ) examine social bots and how they promote the spread of misinformation through millions of Twitter posts during and following the 2016 US presidential campaign. They found that social bots played a disproportionate role in spreading articles from low-credibility sources by amplifying such content in the early spreading moments and targeting users with many followers through replies and mentions to expose them to this content and induce them to share it.

Ismailov et al. ( 2020 ) assert that the techniques used to detect bots depend on the social platform and the objective. They note that a malicious bot designed to make friends with as many accounts as possible will require a different detection approach than a bot designed to repeatedly post links to malicious websites. Therefore, they identify two models for detecting malicious accounts, each using a different set of features. Social context models achieve detection by examining features related to an account’s social presence including features such as relationships to other accounts, similarities to other users’ behaviors, and a variety of graph-based features. User behavior models primarily focus on features related to an individual user’s behavior, such as frequency of activities (e.g., number of tweets or posts per time interval), patterns of activity and clickstream sequences.
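
As a rough illustration of the user-behavior features mentioned above (e.g., frequency of activities per time interval), the following pandas sketch computes posting-rate features from a hypothetical log of timestamped posts; the column names and the toy data are our own assumptions, not those of Ismailov et al. ( 2020 ):

```python
import pandas as pd

# Hypothetical log of posts: one row per post, with its author and timestamp.
posts = pd.DataFrame({
    "user_id": ["u1", "u1", "u1", "u2", "u2"],
    "created_at": pd.to_datetime([
        "2022-01-01 10:00", "2022-01-01 10:01", "2022-01-01 10:02",
        "2022-01-01 09:00", "2022-01-03 18:30",
    ]),
})

# Behavior features per user: volume, active time span, and posting rate.
grouped = posts.groupby("user_id")["created_at"]
features = pd.DataFrame({
    "n_posts": grouped.count(),
    "span_hours": (grouped.max() - grouped.min()).dt.total_seconds() / 3600,
})
# Bot-like accounts often show an abnormally high posting rate.
features["posts_per_hour"] = features["n_posts"] / features["span_hours"].clip(lower=1)
print(features)
```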

Therefore, it is crucial to consider bot detection techniques to distinguish bots from normal users to better leverage user profile features to detect fake news.

However, there is also another “bot-like” strategy that aims to massively promote disinformation and fake content on social platforms: bot farms, also called troll farms. These involve not social bots but groups of organized individuals engaging in trolling or bot-like promotion of narratives in a coordinated fashion (Wardle 2018 ), hired to massively spread fake news or other harmful content. A prominent troll farm example is the Russia-based Internet Research Agency (IRA), which disseminated inflammatory content online to influence the outcome of the 2016 U.S. presidential election. 33 As a result, Twitter suspended accounts connected to the IRA and deleted 200,000 tweets from Russian trolls (Jamieson 2020 ). Another example in this category is review bombing (Moro and Birt 2022 ), in which coordinated groups of people massively perform the same negative action online (e.g., dislike, negative review/comment) on an online video, game, post, product, etc., in order to reduce its aggregate review score. Review bombers can be both humans and bots coordinated to cause harm and mislead people by falsifying facts.

Dynamic nature of online social platforms and fast propagation of fake news

Sharma et al. ( 2019 ) affirm that the fast proliferation of fake news through social networks makes it hard to assess information credibility on social media. Similarly, Qian et al. ( 2018 ) assert that fake news and fabricated content propagate exponentially in the early stage of their creation and can cause significant losses in a short amount of time (Friggeri et al. 2014 ), including manipulating the outcome of political events (Liu and Wu 2018 ; Bessi and Ferrara 2016 ).

Moreover, while analyzing the way source and promoters of fake news operate over the web through multiple online platforms, Zannettou et al. ( 2019 ) discovered that false information is more likely to spread across platforms (18% appearing on multiple platforms) compared to real information (11%).

Furthermore, Shu et al. ( 2020c ) recently attempted to understand the propagation of disinformation and fake news in social media and found that such content is produced and disseminated faster and more easily through social media because of the low barriers that would otherwise prevent this. Similarly, Shu et al. ( 2020b ) studied hierarchical propagation networks for fake news detection. They performed a comparative analysis between fake and real news from structural, temporal and linguistic perspectives, and demonstrated both the potential and the effectiveness of these features for fake news detection.

Lastly, Abdullah-All-Tanvir et al. ( 2020 ) note that it is almost impossible to manually detect the sources and authenticity of fake news effectively and efficiently, due to its fast circulation within such a short amount of time. Therefore, it is crucial to note that the dynamic nature of the various online social platforms, which results in the continued rapid and exponential propagation of such fake content, remains a major challenge that requires further investigation while defining innovative solutions for fake news detection.

Datasets issue

Existing approaches lack an inclusive dataset with derived multidimensional information for capturing fake news characteristics, which limits the achievable accuracy of machine learning classification models (Nyow and Chua 2019 ). Such datasets are primarily dedicated to validating the machine learning model and are the ultimate frame of reference for training the model and analyzing its performance. Therefore, if researchers evaluate their model on an unrepresentative dataset, the validity and efficiency of the model become questionable when the fake news detection approach is applied in a real-world scenario.

Moreover, several researchers (Shu et al. 2020d ; Wang et al. 2020 ; Pathak and Srihari 2019 ; Przybyla 2020 ) believe that fake news is diverse and dynamic in terms of content, topics, publishing methods and media platforms, and sophisticated linguistic styles geared to emulate true news. Consequently, training machine learning models on such sophisticated content requires large-scale annotated fake news data that are difficult to obtain (Shu et al. 2020d ).

Therefore, improving datasets is another important direction for enhancing data quality and obtaining better results when defining solutions. Adversarial learning techniques (e.g., GAN, SeqGAN) can be used to provide machine-generated data for training deeper models and building robust systems that distinguish fake examples from real ones. This approach can help counter the lack of datasets and the scarcity of data available to train models.
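
To sketch this adversarial-augmentation idea (without reproducing any cited system), the following PyTorch snippet trains a tiny GAN whose generator emits synthetic feature vectors that mimic real article representations; all dimensions, architectures and hyperparameters are arbitrary placeholders, and random noise stands in for real article embeddings:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
REAL_DIM, NOISE_DIM = 64, 16  # placeholder sizes for article feature vectors

G = nn.Sequential(nn.Linear(NOISE_DIM, 64), nn.ReLU(), nn.Linear(64, REAL_DIM))
D = nn.Sequential(nn.Linear(REAL_DIM, 64), nn.ReLU(), nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

real_data = torch.randn(256, REAL_DIM)  # stand-in for real article embeddings

for step in range(200):
    real = real_data[torch.randint(0, 256, (32,))]
    fake = G(torch.randn(32, NOISE_DIM))

    # Discriminator step: separate real from generated feature vectors.
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
    loss_d.backward()
    opt_d.step()

    # Generator step: try to fool the discriminator.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(32, 1))
    loss_g.backward()
    opt_g.step()

# The trained generator can now emit synthetic samples to augment training data.
synthetic = G(torch.randn(100, NOISE_DIM)).detach()
```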

Fake news detection literature review

Fake news detection in social networks is still at an early stage of development, and there remain challenging issues that need further investigation. This has become an emerging research area that is attracting huge attention.

There are various research studies on fake news detection in online social networks. Few of them have focused on the automatic detection of fake news using artificial intelligence techniques. In this section, we review the existing approaches used in automatic fake news detection, as well as the techniques that have been adopted. Then, a critical discussion built on a primary classification scheme based on a specific set of criteria is also emphasized.

Categories of fake news detection

In this section, we give an overview of most of the existing automatic fake news detection solutions adopted in the literature. A recent classification by Sharma et al. ( 2019 ) uses three categories of fake news identification methods, each further divided based on the type of existing methods (i.e., content-based, feedback-based and intervention-based methods). However, a review of the literature on fake news detection in online social networks shows that the existing studies can be classified into broader categories based on two major aspects that most authors inspect and use to define an adequate solution. These aspects can be considered as the major sources of the information used for fake news detection and can be summarized as follows: the content-based aspect (i.e., related to the content of the news post) and the contextual aspect (i.e., related to the context of the news post).

Consequently, the studies we reviewed can be classified into three different categories based on the two aspects mentioned above (the third category being hybrid). As depicted in Fig.  5 , fake news detection solutions can be categorized as news content-based approaches, social context-based approaches (which can be divided into network-based and user-based approaches), and hybrid approaches. The latter combine both content-based and contextual approaches to define the solution.

Fig. 5 Classification of fake news detection approaches

News Content-based Category

News content-based approaches are fake news detection approaches that use content information (i.e., information extracted from the content of the news post) and that focus on studying and exploiting the news content in their proposed solutions. Content refers to the body of the news, including source, headline, text and image-video, which can reflect subtle differences.

Researchers of this category rely on content-based detection cues (i.e., text- and multimedia-based cues), which are features extracted from the content of the news post. Text-based cues are features extracted from the text of the news, whereas multimedia-based cues are features extracted from the images and videos attached to the news. Figure  6 summarizes the most widely used news content representations (i.e., text and multimedia/images) and detection techniques (i.e., machine learning (ML), deep learning (DL), natural language processing (NLP), fact-checking, crowdsourcing (CDS) and blockchain (BKC)) in the news content-based category of fake news detection approaches. Most of the reviewed research works in this category rely on text-based cues (Kapusta et al. 2019 ; Kaur et al. 2020 ; Vereshchaka et al. 2020 ; Ozbay and Alatas 2020 ; Wang 2017 ; Nyow and Chua 2019 ; Hosseinimotlagh and Papalexakis 2018 ; Abdullah-All-Tanvir et al. 2019 , 2020 ; Mahabub 2020 ; Bahad et al. 2019 ; Hiriyannaiah et al. 2020 ) extracted from the text of the news content, including the body of the news and its headline. However, a few researchers, such as Vishwakarma et al. ( 2019 ) and Amri et al. ( 2022 ), try to recognize text from the associated image.

Fig. 6 News content-based category: news content representation and detection techniques

Most researchers of this category rely on artificial intelligence (AI) techniques (such as ML, DL and NLP models) to improve performance in terms of prediction accuracy. Others use different techniques such as fact-checking, crowdsourcing and blockchain. Specifically, the AI- and ML-based approaches in this category are trying to extract features from the news content, which they use later for content analysis and training tasks. In this particular case, the extracted features are the different types of information considered to be relevant for the analysis. Feature extraction is considered as one of the best techniques to reduce data size in automatic fake news detection. This technique aims to choose a subset of features from the original set to improve classification performance (Yazdi et al. 2020 ).
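
As a minimal illustration of this extract-features-then-train pipeline, the following scikit-learn sketch turns article text into TF-IDF features and fits a logistic regression classifier; the toy articles and labels are invented for the example and do not come from any cited dataset:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy corpus: 1 = fake, 0 = real (placeholder labels for illustration only).
texts = [
    "SHOCKING: celebrity secretly replaced by clone, insiders say",
    "Parliament passed the revised budget bill on Tuesday",
    "Miracle cure the government does not want you to know about",
    "The central bank kept interest rates unchanged this quarter",
]
labels = [1, 0, 1, 0]

# TF-IDF turns each article into a sparse feature vector (feature extraction);
# the classifier is then trained on those vectors.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["You will not believe this one weird trick"]))
```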

Table  6 lists the distinct features and metadata, as well as the used datasets in the news content-based category of fake news detection approaches.

Table 6. The features and datasets used in the news content-based approaches

  • Kapusta et al. ( ). Features and metadata: the average number of words in sentences, the number of stop words, and the sentiment rate of the news, measured as the difference between the number of positive and negative words in the article. Datasets: Getting real about fake news (a), Gathering mediabiasfactcheck (b), KaiDMML FakeNewsNet (c), Real news for Oct-Dec 2016 (d)
  • Kaur et al. ( ). Features and metadata: the length distribution of the title, body and label of the article. Datasets: News trends, Kaggle, Reuters
  • Vereshchaka et al. ( ). Features and metadata: sociolinguistic, historical, cultural, ideological and syntactical features attached to particular words, phrases and syntactical constructions. Datasets: FakeNewsNet
  • Ozbay and Alatas ( ). Features and metadata: term frequency. Datasets: BuzzFeed political news, Random political news, ISOT fake news
  • Wang ( ). Features and metadata: the statement, speaker, context, label, justification. Datasets: POLITIFACT, LIAR (e)
  • Hosseinimotlagh and Papalexakis ( ). Features and metadata: the spatial vicinity of each word, spatial/contextual relations between terms, and latent relations between terms and articles. Datasets: Kaggle fake news dataset (f)
  • Abdullah-All-Tanvir et al. ( ). Features and metadata: word length, the count of words in a tweeted statement. Datasets: Twitter dataset, Chile earthquake 2010 datasets
  • Abdullah-All-Tanvir et al. ( ). Features and metadata: the number of words that express negative emotions. Datasets: Twitter dataset
  • Mahabub ( ). Features and metadata: labeled data. Datasets: BuzzFeed (g), PolitiFact (h)
  • Bahad et al. ( ). Features and metadata: the relationship between the news article headline and article body; the biases of a written news article. Datasets: Kaggle: real_or_fake (i), Fake news detection (j)
  • Del Vicario et al. ( ). Features and metadata: historical data; the topic and sentiment associated with the textual content; the subject and context of the text; semantic knowledge of the content. Datasets: Facebook dataset
  • Vishwakarma et al. ( ). Features and metadata: the veracity of image text; the credibility of the top 15 Google search results related to the image text. Datasets: Google images, the Onion, Kaggle
  • Amri et al. ( ). Features and metadata: topic modeling of the text and the associated image of the online news. Datasets: Twitter dataset (k), Weibo (l)

a https://www.kaggle.com/anthonyc1/gathering-real-news-for-oct-dec-2016 , last access date: 26-12-2022

b https://mediabiasfactcheck.com/ , last access date: 26-12-2022

c https://github.com/KaiDMML/FakeNewsNet , last access date: 26-12-2022

d https://www.kaggle.com/anthonyc1/gathering-real-news-for-oct-dec-2016 , last access date: 26-12-2022

e https://www.cs.ucsb.edu/~william/data/liar_dataset.zip , last access date: 26-12-2022

f https://www.kaggle.com/mrisdal/fake-news , last access date: 26-12-2022

g https://github.com/BuzzFeedNews/2016-10-facebook-fact-check , last access date: 26-12-2022

h https://www.politifact.com/subjects/fake-news/ , last access date: 26-12-2022

i https://www.kaggle.com/rchitic17/real-or-fake , last access date: 26-12-2022

j https://www.kaggle.com/jruvika/fake-news-detection , last access date: 26-12-2022

k https://github.com/MKLab-ITI/image-verification-corpus , last access date: 26-12-2022

l https://drive.google.com/file/d/14VQ7EWPiFeGzxp3XC2DeEHi-BEisDINn/view , last access date: 26-12-2022
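
To make the hand-crafted text features listed in Table  6 concrete (e.g., the average sentence length, stop-word count and positive-minus-negative sentiment rate used by Kapusta et al.), the following sketch computes such features in plain Python; the tiny word lists are placeholders rather than the lexicons used in the cited studies:

```python
import re

STOP_WORDS = {"the", "a", "an", "of", "to", "and", "is", "in"}  # placeholder list
POSITIVE = {"good", "great", "win", "success"}                  # placeholder lexicon
NEGATIVE = {"bad", "terrible", "fraud", "disaster", "fake"}     # placeholder lexicon

def text_features(article: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", article) if s.strip()]
    words = re.findall(r"[a-z']+", article.lower())
    return {
        "avg_words_per_sentence": len(words) / max(len(sentences), 1),
        "n_stop_words": sum(w in STOP_WORDS for w in words),
        # Sentiment rate: difference between positive and negative word counts.
        "sentiment_rate": sum(w in POSITIVE for w in words)
                          - sum(w in NEGATIVE for w in words),
    }

print(text_features("The election was a terrible fraud! Officials deny it."))
```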

Social Context-based Category

Unlike news content-based solutions, social context-based approaches capture the skeptical social context of the online news (Zhang and Ghorbani 2020 ) rather than focusing on the news content. This category contains fake news detection approaches that use contextual aspects (i.e., information related to the context of the news post). These aspects are based on the social context, offer additional information to help detect fake news, and represent the surrounding data outside the fake news article itself, which can be an essential part of automatic fake news detection. Some useful examples of contextual information may include checking whether the news itself and the source that published it are credible, checking the date of the news and the supporting resources, and checking whether any other online news platforms are reporting the same or similar stories (Zhang and Ghorbani 2020 ).

Social context-based aspects can be classified into two subcategories, user-based and network-based, and they can be used for context analysis and training tasks in the case of AI- and ML-based approaches. User-based aspects refer to information captured from OSN users such as user profile information (Shu et al. 2019b ; Wang et al. 2019c ; Hamdi et al. 2020 ; Nyow and Chua 2019 ; Jiang et al. 2019 ) and user behavior (Cardaioli et al. 2020 ) such as user engagement (Uppada et al. 2022 ; Jiang et al. 2019 ; Shu et al. 2018b ; Nyow and Chua 2019 ) and response (Zhang et al. 2019a ; Qian et al. 2018 ). Meanwhile, network-based aspects refer to information captured from the properties of the social network where the fake content is shared and disseminated such as news propagation path (Liu and Wu 2018 ; Wu and Liu 2018 ) (e.g., propagation times and temporal characteristics of propagation), diffusion patterns (Shu et al. 2019a ) (e.g., number of retweets, shares), as well as user relationships (Mishra 2020 ; Hamdi et al. 2020 ; Jiang et al. 2019 ) (e.g., friendship status among users).
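
A minimal sketch of how such user-based and network-based aspects could be assembled into a single feature vector is shown below; the attribute names are our own illustrative choices, not fields of any cited dataset:

```python
from dataclasses import dataclass

@dataclass
class UserProfile:          # user-based aspects
    account_age_days: int
    followers: int
    followees: int
    is_verified: bool

@dataclass
class PropagationStats:     # network-based aspects
    retweets: int
    shares: int
    depth: int              # longest observed propagation path
    minutes_to_first_share: float

def context_features(user: UserProfile, prop: PropagationStats) -> list[float]:
    """Concatenate user- and network-based signals into one vector."""
    follower_ratio = user.followers / max(user.followees, 1)
    return [
        user.account_age_days, follower_ratio, float(user.is_verified),
        prop.retweets, prop.shares, prop.depth, prop.minutes_to_first_share,
    ]

vec = context_features(UserProfile(40, 12, 900, False),
                       PropagationStats(500, 320, 9, 2.5))
print(vec)
```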

Figure  7 summarizes some of the most widely adopted social context representations, as well as the most used detection techniques (i.e., AI, ML, DL, fact-checking and blockchain), in the social context-based category of approaches.

Fig. 7 Social context-based category: social context representation and detection techniques

Table  7 lists the distinct features and metadata, the adopted detection cues, as well as the used datasets, in the context-based category of fake news detection approaches.

Table 7. The features, detection cues and datasets used in the social context-based approaches

  • Shu et al. ( ). Features and metadata: users’ sharing behaviors, explicit and implicit profile features. Detection cues: user-based (user profile information). Datasets: FakeNewsNet
  • Shu et al. ( ). Features and metadata: users’ trust level, explicit and implicit profile features of “experienced” users who can recognize fake news items as false and “naive” users who are more likely to believe fake news. Detection cues: user-based (user engagement). Datasets: FakeNewsNet, BuzzFeed, PolitiFact
  • Zhang et al. ( ). Features and metadata: users’ replies on fake content, the reply stances. Detection cues: user-based (user response). Datasets: RumourEval, PHEME
  • Qian et al. ( ). Features and metadata: historical user responses to previous articles. Detection cues: user-based (user response). Datasets: Weibo, Twitter dataset
  • Wang et al. ( ). Features and metadata: speaker name, job title, political party affiliation, etc. Detection cues: user-based (user profile information). Datasets: LIAR
  • Mishra ( ). Features and metadata: latent relationships among users, the influence of users with high prestige on other users. Detection cues: network-based (user relationships). Datasets: Twitter15 and Twitter16 (a)
  • Shu et al. ( ). Features and metadata: the inherent tri-relationships among publishers, news items and users (i.e., publisher-news relations and user-news interactions). Detection cues: network-based (diffusion patterns). Datasets: FakeNewsNet
  • Liu and Wu ( ). Features and metadata: propagation paths of news stories constructed from the retweets of source tweets. Detection cues: network-based (news propagation path). Datasets: Weibo, Twitter15, Twitter16
  • Wu and Liu ( ). Features and metadata: the propagation of messages in a social network. Detection cues: network-based (news propagation path). Datasets: Twitter dataset
  • Nyow and Chua ( ). Features and metadata: spatiotemporal information (i.e., location, timestamps of user engagements), the user’s Twitter profile, the user engagement with both fake and real news. Detection cues: user-based (user engagement). Datasets: FakeNewsNet, PolitiFact, GossipCop, Twitter
  • Hamdi et al. ( ). Features and metadata: the credibility of information sources, characteristics of the user, and their social graph. Detection cues: user- and network-based (user profile information and user relationships). Datasets: Ego-Twitter (b)
  • Jiang et al. ( ). Features and metadata: the number of follows and followers on social media (user followee/follower, the friendship network), users’ similarities. Detection cues: user- and network-based (user profile information, user engagement and user relationships). Datasets: FakeNewsNet

a https://www.dropbox.com/s/7ewzdrbelpmrnxu/rumdetect2017.zip , last access date: 26-12-2022
b https://snap.stanford.edu/data/ego-Twitter.html , last access date: 26-12-2022

Hybrid approaches

Most researchers focus on employing a specific method rather than a combination of both content- and context-based methods, because some of them (Wu and Rao 2020 ) believe that there are still challenging limitations in traditional fusion strategies due to existing feature correlations and semantic conflicts. For this reason, some researchers focus on extracting content-based information, while others capture social context-based information for their proposed approaches.

However, it has proven challenging to successfully automate fake news detection based on just a single type of feature (Ruchansky et al. 2017 ). Therefore, recent directions tend to do a mixture by using both news content-based and social context-based approaches for fake news detection.
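
A simple way to realize such a mixture is early fusion, i.e., concatenating content features with context features before classification. The sketch below is a hypothetical setup, not a reimplementation of any cited system; the context columns and toy values are our own assumptions:

```python
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["shocking miracle cure revealed", "council approves new bus routes"]
labels = [1, 0]  # 1 = fake, 0 = real (toy labels)
# Per-post context features, e.g., [retweets, account_age_days, is_verified].
context = np.array([[900, 3, 0], [12, 2400, 1]], dtype=float)

content_features = TfidfVectorizer().fit_transform(texts)   # sparse text features
X = hstack([content_features, csr_matrix(context)])         # early fusion

clf = LogisticRegression().fit(X, labels)
print(clf.predict(X))
```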

Table  8 lists the distinct features and metadata, as well as the used datasets, in the hybrid category of fake news detection approaches.

Table 8. The features and datasets used in the hybrid approaches

  • Elhadad et al. ( ). Features and metadata: features and textual metadata of the news content (title, content, date, source, location). Datasets: SOT fake news dataset, LIAR dataset and FA-KES dataset
  • Nyow and Chua ( ). Features and metadata: spatiotemporal information (i.e., location, timestamps of user engagements), the user’s Twitter profile, the user engagement with both fake and real news. Datasets: FakeNewsNet, PolitiFact, GossipCop, Twitter
  • Xu et al. ( ). Features and metadata: the domains and reputations of the news publishers; the important terms of each news item and their word embeddings and topics; shares, reactions and comments. Datasets: BuzzFeed
  • Aswani et al. ( ). Features and metadata: shares and the propagation path of the tweeted content; a set of metrics describing the created discussions, such as the increase in authors, attention level, burstiness level, contribution sparseness, author interaction, author count and the average length of discussions. Datasets: Twitter dataset
  • Previti et al. ( ). Features and metadata: features extracted from the evolution of news and from the users involved in the news spreading: the news veracity, the credibility of news spreaders, and the frequency of exposure to the same piece of news. Datasets: Twitter dataset
  • Wu and Rao ( ). Features and metadata: similar semantics and conflicting semantics between posts and comments. Datasets: RumourEval, PHEME
  • Guo et al. ( ). Features and metadata: information from the publisher, including semantic and emotional information in news content; semantic and emotional information from users; the resultant latent representations from news content and user comments. Datasets: Weibo
  • Zhang et al. ( ). Features and metadata: relationships between news articles, creators and subjects. Datasets: PolitiFact
  • Deepak and Chitturi ( ). Features and metadata: source domains of the news article, author names. Datasets: George McIntire fake news dataset
  • Shu et al. ( ). Features and metadata: the news content, social context and spatiotemporal information; synthetic user engagements generated from historical temporal user engagement patterns. Datasets: FakeNewsNet
  • Wang et al. ( ). Features and metadata: the news content, social reactions, statements, the content and language of posts, the sharing and dissemination among users, content similarity, stance, sentiment score, headline, named entity, news sharing, credibility history, tweet comments. Datasets: SHPT, PolitiFact
  • Raza and Ding ( ). Features and metadata: the source of the news, its headline, its author, its publication time, the adherence of a news source to a particular party, likes, shares, replies, followers-followees and their activities. Datasets: NELA-GT-2019, Fakeddit

Fake news detection techniques

Another vision for classifying automatic fake news detection is to look at techniques used in the literature. Hence, we classify the detection methods based on the techniques into three groups:

  • Human-based techniques: This category mainly includes the use of crowdsourcing and fact-checking techniques, which rely on human knowledge to check and validate the veracity of news content.
  • Artificial intelligence-based techniques: This category includes the most used AI approaches for fake news detection in the literature, namely approaches in which researchers use classical ML, deep learning techniques such as convolutional neural networks (CNN) and recurrent neural networks (RNN), as well as natural language processing (NLP); a minimal sketch follows this list.
  • Blockchain-based techniques: This category includes solutions using blockchain technology to detect and mitigate fake news in social media by checking source reliability and establishing the traceability of the news content.
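
To give a flavor of the AI-based techniques (as announced in the list above), here is a deliberately small PyTorch text classifier built on an embedding layer with mean pooling; a real CNN- or RNN-based detector would replace the pooling with convolutional or recurrent layers, and the toy vocabulary and data are our own assumptions:

```python
import torch
import torch.nn as nn

vocab = {"<unk>": 0, "shocking": 1, "miracle": 2, "budget": 3, "council": 4, "cure": 5}

def encode(text: str) -> torch.Tensor:
    return torch.tensor([vocab.get(w, 0) for w in text.lower().split()])

class TinyDetector(nn.Module):
    def __init__(self, vocab_size: int, dim: int = 16):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, dim)   # mean-pools word embeddings
        self.out = nn.Linear(dim, 1)                    # fake (1) vs real (0) logit

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        return self.out(self.embed(tokens.unsqueeze(0)))

model = TinyDetector(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.BCEWithLogitsLoss()

data = [("shocking miracle cure", 1.0), ("council budget", 0.0)]
for _ in range(50):  # tiny training loop on toy data
    for text, y in data:
        opt.zero_grad()
        loss = loss_fn(model(encode(text)), torch.tensor([[y]]))
        loss.backward()
        opt.step()

print(torch.sigmoid(model(encode("miracle cure"))))  # high value => predicted fake
```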

Human-based Techniques

One specific research direction for fake news detection consists of using human-based techniques such as crowdsourcing (Pennycook and Rand 2019 ; Micallef et al. 2020 ) and fact-checking (Vlachos and Riedel 2014 ; Chung and Kim 2021 ; Nyhan et al. 2020 ) techniques.

These approaches can be considered low-computational-requirement techniques, since both rely on human knowledge and expertise for fake news detection. However, fake news identification cannot be addressed solely through human effort, since it demands a lot of time and cost and is ineffective at preventing the fast spread of fake content.

Crowdsourcing. Crowdsourcing approaches (Kim et al. 2018 ) are based on the “wisdom of the crowds” (Collins et al. 2020 ) for fake content detection. These approaches rely on the collective contributions and crowd signals (Tschiatschek et al. 2018 ) of a group of people for the aggregation of crowd intelligence to detect fake news (Tchakounté et al. 2020 ) and to reduce the spread of misinformation on social media (Pennycook and Rand 2019 ; Micallef et al. 2020 ).

Micallef et al. (2020) highlight the role of the crowd in countering misinformation. They argue that concerned citizens (i.e., the crowd), who use the platforms where disinformation appears, can play a crucial role in spreading fact-checking information and in combating the spread of misinformation.

Recently, Tchakounté et al. (2020) proposed a voting system as a new method for the binary aggregation of crowd opinions with the knowledge of a third-party expert. The aggregator is based on majority voting on the crowd side and weighted averaging on the third-party side.
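To make this aggregation scheme concrete, here is a minimal Python sketch; it is not Tchakounté et al.'s actual implementation, and the expert weight and the 0.5 decision threshold are illustrative assumptions:

```python
def aggregate_verdict(crowd_votes, expert_score, expert_weight=0.4):
    """Combine binary crowd votes (1 = fake, 0 = real) with a third-party
    expert's confidence in [0, 1] that the item is fake.

    Majority voting on the crowd side, weighted averaging overall.
    The 0.4 expert weight and 0.5 threshold are illustrative choices.
    """
    crowd_score = sum(crowd_votes) / len(crowd_votes)  # fraction voting "fake"
    combined = (1 - expert_weight) * crowd_score + expert_weight * expert_score
    return "fake" if combined >= 0.5 else "real"

# Example: 7 of 10 workers flag the post; the expert is 80% confident it is fake
print(aggregate_verdict([1, 1, 1, 0, 1, 1, 0, 1, 0, 1], expert_score=0.8))  # fake
```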

Similarly, Huffaker et al. (2020) propose crowdsourced detection of emotionally manipulative language. They introduce an approach that transforms the classification problem into a comparison task and leverages anchor comparisons, allowing the crowd to distinguish intrinsically emotional content from text that uses manipulative emotional language to sway users toward positions or actions.

La Barbera et al. ( 2020 ) try to understand how people perceive the truthfulness of information presented to them. They collect data from US-based crowd workers, build a dataset of crowdsourced truthfulness judgments for political statements, and compare it with expert annotation data generated by fact-checkers such as PolitiFact.

Coscia and Rossi (2020) introduce a crowdsourced system for flagging online news. Their bipolar model of news flagging attempts to capture the main ingredients they observe in empirical research on fake news and disinformation.

Unlike the previously mentioned researchers who focus on news content in their approaches, Pennycook and Rand ( 2019 ) focus on using crowdsourced judgments of the quality of news sources to combat social media disinformation.

Fact-Checking. The fact-checking task is traditionally performed manually by journalists to verify the truthfulness of a given claim. Fact-checking features have been adopted by multiple online social network platforms: Facebook 34 started addressing false information through independent fact-checkers in 2017, followed by Google 35 the same year, and Instagram 36 two years later. However, the usefulness of fact-checking initiatives is questioned by journalists 37 as well as by researchers such as Andersen and Søe (2020). On the other hand, work is being conducted to boost the effectiveness of these initiatives in reducing misinformation (Chung and Kim 2021; Clayton et al. 2020; Nyhan et al. 2020).

Most researchers use fact-checking websites (e.g., politifact.com 38 , snopes.com 39 , Reuters 40 , etc.) as data sources to build their datasets and train their models. In the following, we therefore review examples of solutions that use fact-checking (Vlachos and Riedel 2014) to help build datasets that can be further used for the automatic detection of fake content.
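As an illustration of how such websites are typically turned into labeled training data, the following sketch maps PolitiFact-style verdicts to binary labels; the verdict strings and the cut-off between the two classes are assumptions that vary from one study to another:

```python
# Hypothetical mapping from PolitiFact-style verdicts to binary labels.
# Which ratings count as "fake" is a design choice that differs between papers.
VERDICT_TO_LABEL = {
    "true": 0, "mostly-true": 0, "half-true": 0,     # treated as real
    "mostly-false": 1, "false": 1, "pants-fire": 1,  # treated as fake
}

def label_claims(claims):
    """Keep only claims whose verdict is covered by the mapping."""
    return [(c["text"], VERDICT_TO_LABEL[c["verdict"]])
            for c in claims if c["verdict"] in VERDICT_TO_LABEL]

claims = [{"text": "Claim A", "verdict": "pants-fire"},
          {"text": "Claim B", "verdict": "half-true"}]
print(label_claims(claims))  # [('Claim A', 1), ('Claim B', 0)]
```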

Yang et al. (2019a) use the PolitiFact fact-checking website as a data source to train, tune, and evaluate their model, named XFake, on political data. XFake is an explainable fake news detector that assists end users in assessing news credibility. The fakeness of news items is detected and interpreted using both content (e.g., statements) and contextual (e.g., speaker) information.

Based on the idea that fact-checkers cannot clean all data and must instead select what "matters the most" when checking a claim, Sintos et al. (2019) propose a solution to help fact-checkers combat problems related to data quality (where inaccurate data lead to incorrect conclusions) and data phishing. The proposed solution combines data cleaning and perturbation analysis to avoid uncertainties and errors in the data and the possibility that the data have been phished.

Tchechmedjiev et al. (2019) propose "ClaimsKG", a knowledge graph of fact-checked claims that facilitates structured queries about their truth values, authors, dates, journalistic reviews and other kinds of metadata. "ClaimsKG" models the relationships between vocabularies, which are gathered by a semi-automated pipeline that periodically harvests data from popular fact-checking websites.

AI-based Techniques

Previous work by Yaqub et al. (2020) has shown that people lack trust in automated solutions for fake news detection. However, work is already being undertaken to increase this trust, for instance by von der Weth et al. (2020).

Most researchers consider fake news detection a classification problem and use artificial intelligence techniques, as shown in Fig. 8. The adopted AI techniques may include machine learning (ML) (e.g., Naïve Bayes, logistic regression, support vector machines (SVM)), deep learning (DL) (e.g., convolutional neural networks (CNN), recurrent neural networks (RNN), long short-term memory (LSTM)) and natural language processing (NLP) (e.g., count vectorizer, TF-IDF vectorizer). Most of them combine several AI techniques in their solutions rather than relying on one specific approach.

Fig. 8 Examples of the most widely used AI techniques for fake news detection
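As a concrete baseline of the kind summarized in Fig. 8, the following sketch chains a TF-IDF vectorizer (the NLP step) with a logistic regression classifier (the ML step) using scikit-learn; the toy corpus is invented for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy corpus: 1 = fake, 0 = real (real studies use datasets such as PolitiFact)
texts = ["miracle cure discovered, doctors hate it",
         "parliament passes the annual budget bill",
         "celebrity secretly replaced by a clone",
         "central bank raises interest rates by 0.25%"]
labels = [1, 0, 1, 0]

# TF-IDF features (NLP step) feeding a logistic regression (ML step)
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["shocking miracle cure the government hides"]))  # likely [1]
```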

Many researchers develop machine learning models in their fake news detection solutions. Recently, deep neural network techniques have also been employed, as they generate promising results (Islam et al. 2020). A neural network is a massively parallel distributed processor made of simple units that can store important information and make it available for use (Hiriyannaiah et al. 2020). Moreover, it has been shown (Cardoso Durier da Silva et al. 2019) that the most widely used method for the automatic detection of fake news is not a single classical machine learning technique, but rather a fusion of classical techniques coordinated by a neural network.

Some researchers use purely machine learning models (Del Vicario et al. 2019; Elhadad et al. 2019; Aswani et al. 2017; Hakak et al. 2021; Singh et al. 2021) in their fake news detection approaches. The machine learning algorithms most commonly used (Abdullah-All-Tanvir et al. 2019) for classification problems are Naïve Bayes, logistic regression and SVM.

Other researchers (Wang et al. 2019c; Wang 2017; Liu and Wu 2018; Mishra 2020; Qian et al. 2018; Zhang et al. 2020; Goldani et al. 2021) prefer to mix different deep learning models without combining them with classical machine learning techniques. Some even show that deep learning techniques outperform traditional machine learning techniques (Mishra et al. 2022). Deep learning is one of the most popular research topics in machine learning. Unlike traditional machine learning approaches, which are based on manually crafted features, deep learning approaches can learn hidden representations from simpler inputs, both in context and content variations (Bondielli and Marcelloni 2019). Moreover, traditional machine learning algorithms almost always require structured data and are designed to "learn" from labeled data, requiring human intervention to "teach" them when a result is incorrect (Parrish 2018). Deep learning networks, by contrast, rely on layers of artificial neural networks (ANN) and do not require such intervention: the multilevel layers of a neural network place data in a hierarchy of different concepts and ultimately learn from their own mistakes (Parrish 2018). The two most widely implemented paradigms in deep neural networks are recurrent neural networks (RNN) and convolutional neural networks (CNN).
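As a minimal sketch of the deep learning route, the following Keras model classifies padded word-index sequences with a bidirectional LSTM; the vocabulary size, sequence length, and layer sizes are arbitrary illustrative choices, not values taken from any reviewed paper:

```python
import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE, MAX_LEN = 20_000, 200  # illustrative values

model = tf.keras.Sequential([
    layers.Input(shape=(MAX_LEN,)),         # padded sequences of word indices
    layers.Embedding(VOCAB_SIZE, 128),      # learned word embeddings
    layers.Bidirectional(layers.LSTM(64)),  # sequence encoder
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # P(article is fake)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(x_train, y_train, ...) once the texts are tokenized and padded
```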

Still other researchers (Abdullah-All-Tanvir et al. 2019; Kaliyar et al. 2020; Zhang et al. 2019a; Deepak and Chitturi 2020; Shu et al. 2018a; Wang et al. 2019c) prefer to combine traditional machine learning and deep learning classification models. Others combine machine learning and natural language processing techniques. A few combine deep learning models with natural language processing (Vereshchaka et al. 2020). Some other researchers (Kapusta et al. 2019; Ozbay and Alatas 2020; Ahmed et al. 2020) combine natural language processing with machine learning models. Furthermore, others (Abdullah-All-Tanvir et al. 2019; Kaur et al. 2020; Kaliyar 2018; Abdullah-All-Tanvir et al. 2020; Bahad et al. 2019) prefer to combine all of the previously mentioned techniques (i.e., ML, DL and NLP) in their approaches.

Table  11 , which is relegated to the Appendix (after the bibliography) because of its size, shows a comparison of the fake news detection solutions that we have reviewed based on their main approaches, the methodology that was used and the models.

Comparison of AI-based fake news detection techniques

Reference | Approach | Method | Model
Del Vicario et al. ( ) | An approach to analyze the sentiment associated with the textual content of the data and add semantic knowledge to it | ML | Linear Regression (LIN), Logistic Regression (LOG), Support Vector Machine (SVM) with linear kernel, K-Nearest Neighbors (KNN), Neural Network models (NN), Decision Trees (DT)
Elhadad et al. ( ) | An approach to select hybrid features from the textual content of the news, which is treated as blocks, without segmenting the text into parts (title, content, date, source, etc.) | ML | Decision Tree, KNN, Logistic Regression, SVM, Naïve Bayes with n-gram, LSVM, Perceptron
Aswani et al. ( ) | A hybrid artificial bee colony approach to identify and segregate buzz in Twitter and analyze user-generated content (UGC) to mine useful information (content buzz/popularity) | ML | KNN with artificial bee colony optimization
Hakak et al. ( ) | An ensemble of machine learning approaches for effective feature extraction to classify fake news | ML | Decision Tree, Random Forest and Extra Tree Classifier
Singh et al. ( ) | A multimodal approach combining text and visual analysis of online news stories to automatically detect fake news, using predictive analysis to detect the features most strongly associated with fake news | ML | Logistic Regression, Linear Discriminant Analysis, Quadratic Discriminant Analysis, K-Nearest Neighbors, Naïve Bayes, Support Vector Machine, Classification and Regression Tree, and Random Forest analysis
Amri et al. ( ) | An explainable multimodal content-based fake news detection system | ML | Vision-and-Language BERT (VilBERT), Local Interpretable Model-Agnostic Explanations (LIME), Latent Dirichlet Allocation (LDA) topic modeling
Wang et al. ( ) | A hybrid deep neural network model to learn useful features from contextual information and to capture the dependencies between sequences of contextual information | DL | Recurrent and Convolutional Neural Networks (RNN and CNN)
Wang ( ) | A hybrid convolutional neural network approach for automatic fake news detection | DL | Recurrent and Convolutional Neural Networks (RNN and CNN)
Liu and Wu ( ) | An early detection approach that classifies the propagation path to mine the global and local changes of user characteristics along the diffusion path | DL | Recurrent and Convolutional Neural Networks (RNN and CNN)
Mishra ( ) | Unsupervised network representation learning methods to learn user (node) embeddings from both the follower network and the retweet network and to encode the propagation path sequence | DL | RNN (long short-term memory unit (LSTM))
Qian et al. ( ) | A Two-Level Convolutional Neural Network with User Response Generator (TCNN-URG), where the TCNN captures semantic information from the article text by representing it at the sentence and word level, and the URG learns a generative model of user responses to article text from historical user responses, which it can use to generate responses to new articles to assist fake news detection | DL | Convolutional Neural Network (CNN)
Zhang et al. ( ) | Based on a set of explicit features extracted from the textual information, a deep diffusive network model is built to infer the credibility of news articles, creators and subjects simultaneously | DL | Deep diffusive network model learning
Goldani et al. ( ) | A capsule network (CapsNet) approach for fake news detection using two architectures for different lengths of news statements; the authors note that capsule neural networks have been successful in computer vision and are receiving attention for use in natural language processing (NLP) | DL | Capsule Networks (CapsNet)
Wang et al. ( ) | An automated approach to distinguish different cases of fake news (i.e., hoaxes, irony and propaganda) while assessing and classifying news articles and claims, including linguistic cues as well as user credibility and news dissemination in social media | DL, ML | Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), logistic regression
Abdullah-All-Tanvir et al. ( ) | A model to recognize forged news messages in Twitter posts by learning to predict accuracy assessments, aimed at automating forged news detection on a Twitter dataset; a combination of traditional machine learning and deep learning classification models is tested to enhance the accuracy of prediction | DL, ML | Naïve Bayes, Logistic Regression, Support Vector Machine, Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM)
Kaliyar et al. ( ) | An approach named FNDNet, based on the combination of the unsupervised learning algorithm GloVe and a deep convolutional neural network for fake news detection | DL, ML | Deep Convolutional Neural Network (CNN), Global Vectors (GloVe)
Zhang et al. ( ) | A hybrid approach to encode auxiliary information coming from people's replies in temporal order; this auxiliary information is then used to update an a priori belief, generating an a posteriori belief | DL, ML | Deep learning model, Long Short-Term Memory neural network (LSTM)
Deepak and Chitturi ( ) | A system that consists of live data mining in addition to the deep learning model | DL, ML | Feedforward Neural Network (FNN) and LSTM word vector model
Shu et al. ( ) | A multidimensional fake news data repository, "FakeNewsNet", together with an exploratory analysis of the datasets to evaluate them | DL, ML | Convolutional Neural Network (CNN), Support Vector Machines (SVM), Logistic Regression (LR), Naïve Bayes (NB)
Vereshchaka et al. ( ) | A sociocultural textual analysis, computational linguistics analysis, and textual classification using NLP, as well as deep learning models, to distinguish fake from real news and mitigate the problem of disinformation | DL, NLP | Long Short-Term Memory (LSTM), Recurrent Neural Network (RNN) and Gated Recurrent Unit (GRU)
Kapusta et al. ( ) | A sentiment and frequency analysis using both machine learning and NLP (i.e., text mining) to process news content, performing sentiment analysis and frequency analysis to compare basic text characteristics of fake and real news articles | ML, NLP | The Natural Language Toolkit (NLTK), TF-IDF
Ozbay and Alatas ( ) | A hybrid approach based on text analysis and supervised artificial intelligence for fake news detection | ML, NLP | Supervised algorithms: BayesNet, JRip, OneR, Decision Stump, ZeroR, Stochastic Gradient Descent (SGD), CV Parameter Selection (CVPS), Randomizable Filtered Classifier (RFC), Logistic Model Tree (LMT); NLP: TF weighting
Ahmed et al. ( ) | A machine learning and NLP text-processing approach to identify fake news: various features of the text are extracted through text processing and then incorporated into classification | ML, NLP | Machine learning classifiers (i.e., Passive-Aggressive, Naïve Bayes and Support Vector Machine)
Abdullah-All-Tanvir et al. ( ) | A hybrid neural network approach to identify authentic news in popular Twitter threads that would outperform the traditional neural network architecture's performance; three traditional supervised algorithms and two deep neural networks are combined to train the model, and some NLP concepts are used to implement the traditional supervised machine learning algorithms over the dataset | ML, DL, NLP | Traditional supervised algorithms (i.e., Logistic Regression, Bayesian classifier and Support Vector Machine); deep neural networks (i.e., Recurrent Neural Network, Long Short-Term Memory (LSTM)); NLP concepts such as count vectorizer and TF-IDF vectorizer
Kaur et al. ( ) | A hybrid method to identify news articles as fake or real by finding out which classification model identifies false features most accurately | ML, DL, NLP | Neural Networks (NN) and ensemble models; supervised machine learning classifiers such as Naïve Bayes (NB), Decision Tree (DT), Support Vector Machine (SVM) and linear models; Term Frequency-Inverse Document Frequency (TF-IDF), Count-Vectorizer (CV), Hashing-Vectorizer (HV)
Kaliyar ( ) | A fake news detection approach to classify news articles or other documents as credible or not; natural language processing, machine learning and deep learning techniques are used to implement the models and to compare the accuracy of different models and classifiers | ML, DL, NLP | Machine learning models: Naïve Bayes, K-Nearest Neighbors, Decision Tree, Random Forest; deep learning networks: shallow Convolutional Neural Networks (CNN), Very Deep Convolutional Neural Network (VDCNN), Long Short-Term Memory network (LSTM), Gated Recurrent Unit network (GRU); combinations of a Convolutional Neural Network with Long Short-Term Memory (CNN-LSTM) and with a Gated Recurrent Unit (CNN-GRU)
Mahabub ( ) | An intelligent detection system to manage the classification of news as being either real or fake | ML, DL, NLP | Machine learning: Naïve Bayes, KNN, SVM, Random Forest, Artificial Neural Network, Logistic Regression, Gradient Boosting, AdaBoost
Bahad et al. ( ) | A method based on a bi-directional LSTM recurrent neural network to analyze the relationship between the news article headline and the article body | ML, DL, NLP | Unsupervised learning algorithm: Global Vectors (GloVe); bi-directional LSTM recurrent neural network

Blockchain-based Techniques for Source Reliability and Traceability

Another research direction for detecting and mitigating fake news in social media focuses on blockchain solutions. Blockchain technology has recently been attracting researchers' attention due to the interesting features it offers. Immutability, decentralization, tamper resistance, consensus, record keeping and non-repudiation of transactions are some of the key features that make blockchain technology exploitable, not just for cryptocurrencies, but also for proving the authenticity and integrity of digital assets.
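To illustrate the tamper-evidence property that these approaches exploit, here is a minimal hash-chain sketch in plain Python; it is a stand-in for a real blockchain, with no consensus protocol, networking, or smart contracts:

```python
import hashlib, json, time

def add_block(chain, news_item):
    """Append a block whose hash commits to the news item and the previous block."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64  # genesis sentinel
    block = {"timestamp": time.time(), "content": news_item, "prev_hash": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    chain.append(block)
    return chain

def is_untampered(chain):
    """Recompute every hash; any edit to the archived content breaks the chain."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != block["hash"]:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = add_block([], "Article v1 published by source X")
add_block(chain, "Article v1 shared by account Y")
print(is_untampered(chain))      # True
chain[0]["content"] = "edited"   # tampering with the archive ...
print(is_untampered(chain))      # ... is detected: False
```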

However, the proposed blockchain approaches are few in number and remain fundamental and theoretical. Specifically, the solutions currently available are still at the research, prototype, and beta-testing stages (DiCicco and Agarwal 2020; Tchechmedjiev et al. 2019). Furthermore, most researchers (Ochoa et al. 2019; Song et al. 2019; Shang et al. 2018; Qayyum et al. 2019; Jing and Murugesan 2018; Buccafurri et al. 2017; Chen et al. 2018) do not specify which type of fake news they are mitigating in their studies; they mention news content in general, which is not precise enough for innovative solutions. Serious implementations are therefore needed to prove the usefulness and feasibility of this newly developing research vision.

Table  9 shows a classification of the reviewed blockchain-based approaches. In the classification, we listed the following:

  • The type of fake news that authors are trying to mitigate, which can be multimedia-based or text-based fake news.
  • The techniques used for fake news mitigation, which can be either blockchain only, or blockchain combined with other techniques such as AI, Data mining, Truth-discovery, Preservation metadata, Semantic similarity, Crowdsourcing, Graph theory and SIR model (Susceptible, Infected, Recovered).
  • The feature that is offered as an advantage of the given solution (e.g., Reliability, Authenticity and Traceability). Reliability is the credibility and truthfulness of the news content, which consists of proving the trustworthiness of the content. Traceability aims to trace and archive the contents. Authenticity consists of checking whether the content is real and authentic.

In Table 9, "Blockchain only" denotes that no additional technique is combined with the blockchain; as noted above, most works do not explicitly specify the type of fake news (multimedia-based or text-based) that they target.

A classification of popular blockchain-based approaches for fake news detection in social media

Reference | Technique combined with blockchain | Feature
Shae and Tsai ( ) | AI | Reliability
Ochoa et al. ( ) | Data Mining, Truth-Discovery | Reliability
Huckle and White ( ) | Preservation Metadata | Reliability
Song et al. ( ) | Blockchain only | Traceability
Shang et al. ( ) | Blockchain only | Traceability
Qayyum et al. ( ) | Semantic Similarity | Reliability
Jing and Murugesan ( ) | AI | Reliability
Buccafurri et al. ( ) | Crowd-Sourcing | Reliability
Chen et al. ( ) | SIR Model | Reliability
Hasan and Salah ( ) | Blockchain only | Authenticity
Tchechmedjiev et al. ( ) | Graph theory | Reliability

After reviewing the most relevant state of the art in automatic fake news detection, we classify the approaches as shown in Table 10, based on the detection aspects (i.e., content-based, contextual, or hybrid aspects) and the techniques used (i.e., AI, crowdsourcing, fact-checking, blockchain, or hybrid techniques). Hybrid techniques refer to solutions that combine different techniques from the previously mentioned categories (i.e., inter-hybrid methods), as well as techniques within the same class of methods (i.e., intra-hybrid methods), in order to define innovative solutions for fake news detection. A hybrid method should bring the best of both worlds. We then provide a discussion along different axes.

Fake news detection approaches classification

Content-based approaches:
  • AI (ML): Del Vicario et al. ( ), Hosseinimotlagh and Papalexakis ( ), Hakak et al. ( ), Singh et al. ( ), Amri et al. ( )
  • AI (DL): Wang ( ), Hiriyannaiah et al. ( )
  • AI (NLP): Zellers et al. ( )
  • Crowdsourcing (CDS): Kim et al. ( ), Tschiatschek et al. ( ), Tchakounté et al. ( ), Huffaker et al. ( ), La Barbera et al. ( ), Coscia and Rossi ( ), Micallef et al. ( )
  • Blockchain (BKC): Song et al. ( )
  • Fact-checking: Sintos et al. ( )
  • Hybrid: ML, DL, NLP: Abdullah-All-Tanvir et al. ( ), Kaur et al. ( ), Mahabub ( ), Bahad et al. ( ), Kaliyar ( ); ML, DL: Abdullah-All-Tanvir et al. ( ), Kaliyar et al. ( ), Deepak and Chitturi ( ); DL, NLP: Vereshchaka et al. ( ); ML, NLP: Kapusta et al. ( ), Ozbay and Alatas ( ), Ahmed et al. ( ); BKC, CDS: Buccafurri et al. ( )

Context-based approaches:
  • AI: Qian et al. ( ), Liu and Wu ( ), Hamdi et al. ( ), Wang et al. ( ), Mishra ( )
  • Crowdsourcing (CDS): Pennycook and Rand ( )
  • Blockchain (BKC): Huckle and White ( ), Shang et al. ( )
  • Fact-checking: Tchechmedjiev et al. ( )
  • Hybrid: ML, DL: Zhang et al. ( ), Shu et al. ( ), Shu et al. ( ), Wu and Liu ( ); BKC, AI: Ochoa et al. ( ); BKC, SIR: Chen et al. ( )

Hybrid (content- and context-based) approaches:
  • AI (ML): Aswani et al. ( ), Previti et al. ( ), Elhadad et al. ( ), Nyow and Chua ( )
  • AI (DL): Ruchansky et al. ( ), Wu and Rao ( ), Guo et al. ( ), Zhang et al. ( )
  • AI (NLP): Xu et al. ( )
  • Blockchain (BKC): Qayyum et al. ( ), Hasan and Salah ( ), Tchechmedjiev et al. ( )
  • Fact-checking: Yang et al. ( )
  • Hybrid: ML, DL: Shu et al. ( ), Wang et al. ( ); BKC, AI: Shae and Tsai ( ), Jing and Murugesan ( )

News content-based methods

Most news content-based approaches consider fake news detection a classification problem and use AI techniques such as classical machine learning (e.g., regression, Bayesian models) as well as deep learning (i.e., neural methods such as CNN and RNN). More specifically, the classification of social media content is a fundamental task in social media mining, and most existing methods treat it as a text categorization problem, focusing mainly on content features such as words and hashtags (Wu and Liu 2018). The main challenge facing these approaches is how to extract features in a way that reduces the data needed to train the models, and which features are the most suitable for accurate results.

Researchers using such approaches are motivated by the fact that the news content is the main entity in the deception process, and it is a straightforward factor to analyze and use when looking for predictive clues of deception. However, detecting fake news from the content alone is not enough, because fake news is created in a strategic, intentional way to mimic the truth (i.e., the content can be deliberately manipulated by the spreader to make it look like real news). It is therefore considered challenging, if not impossible, to identify useful features (Wu and Liu 2018) and consequently determine the nature of such news solely from its content.

Moreover, works that use only the news content for fake news detection ignore the rich information and latent user intelligence (Qian et al. 2018) stored in user responses to previously disseminated articles. Such auxiliary information is therefore deemed crucial for an effective fake news detection approach.

Social context-based methods

Context-based approaches explore the data surrounding the news content, which can be an effective direction with advantages in areas where content-based approaches relying on text classification run into issues. However, most existing studies implementing contextual methods focus mainly on additional information coming from users and network diffusion patterns. Moreover, from a technical perspective, they are limited to the use of sophisticated machine learning techniques for feature extraction, and they ignore the usefulness of results coming from techniques such as web search and crowdsourcing, which may save much time and help in the early detection and identification of fake content.
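As a small illustration of the kind of contextual features these methods compute, the following sketch derives per-item spreader statistics from a hypothetical engagement log; the log schema and the chosen features are invented for the example:

```python
import pandas as pd

# Hypothetical engagement log: one row per share of a news item
log = pd.DataFrame({
    "news_id":   ["n1", "n1", "n1", "n2", "n2"],
    "user_id":   ["u1", "u2", "u3", "u1", "u4"],
    "followers": [120, 15, 8, 120, 50_000],
    "account_age_days": [2000, 12, 30, 2000, 3500],
    "minutes_since_publication": [5, 7, 9, 300, 2000],
})

# Per-item contextual features: spreader credibility proxies and burstiness
features = log.groupby("news_id").agg(
    n_spreaders=("user_id", "nunique"),
    mean_followers=("followers", "mean"),
    mean_account_age=("account_age_days", "mean"),
    median_delay=("minutes_since_publication", "median"),
)
print(features)  # feed these columns to any classifier alongside content features
```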

Hybrid approaches

Hybrid approaches can simultaneously model different aspects of fake news, such as the content-based aspect, as well as the contextual aspect based on both the OSN user and the OSN network patterns. However, these approaches are deemed more complex in terms of models (Bondielli and Marcelloni 2019), data availability, and the number of features. Furthermore, it remains difficult to decide which information from each category (i.e., content-based and context-based information) is the most suitable and appropriate for achieving accurate and precise results. Therefore, there are still very few studies in this category of hybrid approaches.
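A minimal sketch of such a combination, reusing the content texts and the contextual features of the two previous examples, concatenates sparse TF-IDF content features with dense contextual features before classification; the numbers are illustrative:

```python
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["miracle cure discovered", "budget bill passes"]  # content side
context = np.array([[3, 47.7], [2, 25060.0]])              # n_spreaders, mean_followers
labels = [1, 0]

X_content = TfidfVectorizer().fit_transform(texts)
# In practice, scale the dense context features (e.g., with StandardScaler)
# before mixing them with TF-IDF values.
X = hstack([X_content, csr_matrix(context)])               # hybrid feature matrix

clf = LogisticRegression().fit(X, labels)
print(clf.predict(X))
```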

Early detection

As fake news usually evolves and spreads very fast on social media, it is critical and urgent to consider early detection. Yet this is a challenging task, especially on highly dynamic platforms such as social networks. Both news content-based and social context-based approaches suffer from the difficulty of detecting fake news early.

Although approaches that detect fake news through content analysis face this issue less, they are still limited by the lack of information available for verification when the news is at an early stage of its spread. Approaches based on contextual analysis are the most likely to suffer from the lack of early detection, since most of them rely on information that becomes available only after the fake content has spread, such as social engagement, user responses, and propagation patterns. It is therefore crucial to combine trusted human verification with historical data in an attempt to detect fake content during the early stage of its propagation.

Conclusion and future directions

In this paper, we introduced the general context of the fake news problem as one of the major issues of online deception in online social networks. Based on a review of the most relevant state of the art, we summarized and classified existing definitions of fake news and its related terms. We also listed various typologies and existing categorizations of fake news, such as intent-based fake news (including clickbait, hoaxes, rumors, satire, propaganda, conspiracy theories and framing) and content-based fake news (including text-based and multimedia-based fake news, the latter covering deepfake videos and GAN-generated fake images). We discussed the major challenges of fake news detection and mitigation in social media, including the deceptive nature of fabricated content, the lack of human awareness in the field of fake news, the issue of non-human spreaders (e.g., social bots), the dynamicity of such online platforms, which results in the fast propagation of fake content, and the quality of existing datasets, which still limits the efficiency of the proposed solutions. We reviewed existing researchers' visions of the automatic detection of fake news based on the adopted approaches (i.e., news content-based, social context-based, or hybrid approaches) and the techniques used (i.e., artificial intelligence-based methods; crowdsourcing, fact-checking, and blockchain-based methods; and hybrid methods), and then presented a comparative study of the reviewed works. We also provided a critical discussion of the reviewed approaches along different axes, such as the adopted aspect of fake news detection (i.e., content-based, contextual, and hybrid aspects) and the early detection perspective.

To conclude, we present the main issues in combating the fake news problem that need to be further investigated when proposing new detection approaches. We believe that an efficient fake news detection approach needs to consider the following:

  • Our choice of sources of information and search criteria may have introduced biases in our research. If so, it would be desirable to identify those biases and mitigate them.
  • News content is the fundamental source to find clues to distinguish fake from real content. However, contextual information derived from social media users and from the network can provide useful auxiliary information to increase detection accuracy. Specifically, capturing users’ characteristics and users’ behavior toward shared content can be a key task for fake news detection.
  • Moreover, capturing users’ historical behavior, including their emotions and/or opinions toward news content, can help in the early detection and mitigation of fake news.
  • Furthermore, adversarial learning techniques (e.g., GAN, SeqGAN) can be considered a promising direction for mitigating the scarcity of available datasets by providing machine-generated data that can be used to train and build robust systems that tell fake examples from real ones.
  • Lastly, analyzing how sources and promoters of fake news operate over the web through multiple online platforms is crucial; Zannettou et al. ( 2019 ) discovered that false information is more likely to spread across platforms (18% appearing on multiple platforms) compared to valid information (11%).

Appendix: A Comparison of AI-based fake news detection techniques

This Appendix consists only of the rather long Table 11. It compares the fake news detection solutions based on artificial intelligence that we have reviewed, according to their main approaches, the methodology used, and the models, as explained in Sect. 6.2.2.

Author Contributions

The order of authors is alphabetic as is customary in the third author’s field. The lead author was Sabrine Amri, who collected and analyzed the data and wrote a first draft of the paper, all along under the supervision and tight guidance of Esma Aïmeur. Gilles Brassard reviewed, criticized and polished the work into its final form.

This work is supported in part by Canada’s Natural Sciences and Engineering Research Council.

Availability of data and material

Declarations

On behalf of all authors, the corresponding author states that there is no conflict of interest.

1 https://www.nationalacademies.org/news/2021/07/as-surgeon-general-urges-whole-of-society-effort-to-fight-health-misinformation-the-work-of-the-national-academies-helps-foster-an-evidence-based-information-environment , last access date: 26-12-2022.

2 https://time.com/4897819/elvis-presley-alive-conspiracy-theories/ , last access date: 26-12-2022.

3 https://www.therichest.com/shocking/the-evidence-15-reasons-people-think-the-earth-is-flat/ , last access date: 26-12-2022.

4 https://www.grunge.com/657584/the-truth-about-1952s-alien-invasion-of-washington-dc/ , last access date: 26-12-2022.

5 https://www.journalism.org/2021/01/12/news-use-across-social-media-platforms-in-2020/ , last access date: 26-12-2022.

6 https://www.pewresearch.org/fact-tank/2018/12/10/social-media-outpaces-print-newspapers-in-the-u-s-as-a-news-source/ , last access date: 26-12-2022.

7 https://www.buzzfeednews.com/article/janelytvynenko/coronavirus-fake-news-disinformation-rumors-hoaxes , last access date: 26-12-2022.

8 https://www.factcheck.org/2020/03/viral-social-media-posts-offer-false-coronavirus-tips/ , last access date: 26-12-2022.

9 https://www.factcheck.org/2020/02/fake-coronavirus-cures-part-2-garlic-isnt-a-cure/ , last access date: 26-12-2022.

10 https://www.bbc.com/news/uk-36528256 , last access date: 26-12-2022.

11 https://en.wikipedia.org/wiki/Pizzagate_conspiracy_theory , last access date: 26-12-2022.

12 https://www.theguardian.com/world/2017/jan/09/germany-investigating-spread-fake-news-online-russia-election , last access date: 26-12-2022.

13 https://www.macquariedictionary.com.au/resources/view/word/of/the/year/2016 , last access date: 26-12-2022.

14 https://www.macquariedictionary.com.au/resources/view/word/of/the/year/2018 , last access date: 26-12-2022.

15 https://apnews.com/article/47466c5e260149b1a23641b9e319fda6 , last access date: 26-12-2022.

16 https://blog.collinsdictionary.com/language-lovers/collins-2017-word-of-the-year-shortlist/ , last access date: 26-12-2022.

17 https://www.gartner.com/smarterwithgartner/gartner-top-strategic-predictions-for-2018-and-beyond/ , last access date: 26-12-2022.

18 https://www.technologyreview.com/s/612236/even-the-best-ai-for-spotting-fake-news-is-still-terrible/ , last access date: 26-12-2022.

19 https://scholar.google.ca/ , last access date: 26-12-2022.

20 https://ieeexplore.ieee.org/ , last access date: 26-12-2022.

21 https://link.springer.com/ , last access date: 26-12-2022.

22 https://www.sciencedirect.com/ , last access date: 26-12-2022.

23 https://www.scopus.com/ , last access date: 26-12-2022.

24 https://www.acm.org/digital-library , last access date: 26-12-2022.

25 https://www.politico.com/magazine/story/2016/12/fake-news-history-long-violent-214535 , last access date: 26-12-2022.

26 https://en.wikipedia.org/wiki/Trial_of_Socrates , last access date: 26-12-2022.

27 https://trends.google.com/trends/explore?hl=en-US &tz=-180 &date=2013-12-06+2018-01-06 &geo=US &q=fake+news &sni=3 , last access date: 26-12-2022.

28 https://ec.europa.eu/digital-single-market/en/tackling-online-disinformation , last access date: 26-12-2022.

29 https://www.nato.int/cps/en/natohq/177273.htm , last access date: 26-12-2022.

30 https://www.collinsdictionary.com/dictionary/english/fake-news , last access date: 26-12-2022.

31 https://www.statista.com/statistics/657111/fake-news-sharing-online/ , last access date: 26-12-2022.

32 https://www.statista.com/statistics/657090/fake-news-recogition-confidence/ , last access date: 26-12-2022.

33 https://www.nbcnews.com/tech/social-media/now-available-more-200-000-deleted-russian-troll-tweets-n844731 , last access date: 26-12-2022.

34 https://www.theguardian.com/technology/2017/mar/22/facebook-fact-checking-tool-fake-news , last access date: 26-12-2022.

35 https://www.theguardian.com/technology/2017/apr/07/google-to-display-fact-checking-labels-to-show-if-news-is-true-or-false , last access date: 26-12-2022.

36 https://about.instagram.com/blog/announcements/combatting-misinformation-on-instagram , last access date: 26-12-2022.

37 https://www.wired.com/story/instagram-fact-checks-who-will-do-checking/ , last access date: 26-12-2022.

38 https://www.politifact.com/ , last access date: 26-12-2022.

39 https://www.snopes.com/ , last access date: 26-12-2022.

40 https://www.reutersagency.com/en/ , last access date: 26-12-2022.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Contributor Information

Esma Aïmeur, Email: aimeur@iro.umontreal.ca.

Sabrine Amri, Email: [email protected] .

Gilles Brassard, Email: brassard@iro.umontreal.ca.

  • Abdullah-All-Tanvir, Mahir EM, Akhter S, Huq MR (2019) Detecting fake news using machine learning and deep learning algorithms. In: 7th international conference on smart computing and communications (ICSCC), IEEE, pp 1–5 10.1109/ICSCC.2019.8843612
  • Abdullah-All-Tanvir, Mahir EM, Huda SMA, Barua S (2020) A hybrid approach for identifying authentic news using deep learning methods on popular Twitter threads. In: International conference on artificial intelligence and signal processing (AISP), IEEE, pp 1–6 10.1109/AISP48273.2020.9073583
  • Abu Arqoub O, Abdulateef Elega A, Efe Özad B, Dwikat H, Adedamola Oloyede F. Mapping the scholarship of fake news research: a systematic review. J Pract. 2022; 16 (1):56–86. doi: 10.1080/17512786.2020.1805791. [ CrossRef ] [ Google Scholar ]
  • Ahmed S, Hinkelmann K, Corradini F. Development of fake news model using machine learning through natural language processing. Int J Comput Inf Eng. 2020; 14 (12):454–460. [ Google Scholar ]
  • Aïmeur E, Brassard G, Rioux J. Data privacy: an end-user perspective. Int J Comput Netw Commun Secur. 2013; 1 (6):237–250. [ Google Scholar ]
  • Aïmeur E, Hage H, Amri S (2018) The scourge of online deception in social networks. In: 2018 international conference on computational science and computational intelligence (CSCI), IEEE, pp 1266–1271 10.1109/CSCI46756.2018.00244
  • Alemanno A. How to counter fake news? A taxonomy of anti-fake news approaches. Eur J Risk Regul. 2018; 9 (1):1–5. doi: 10.1017/err.2018.12. [ CrossRef ] [ Google Scholar ]
  • Allcott H, Gentzkow M. Social media and fake news in the 2016 election. J Econ Perspect. 2017; 31 (2):211–36. doi: 10.1257/jep.31.2.211. [ CrossRef ] [ Google Scholar ]
  • Allen J, Howland B, Mobius M, Rothschild D, Watts DJ. Evaluating the fake news problem at the scale of the information ecosystem. Sci Adv. 2020 doi: 10.1126/sciadv.aay3539. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Allington D, Duffy B, Wessely S, Dhavan N, Rubin J. Health-protective behaviour, social media usage and conspiracy belief during the Covid-19 public health emergency. Psychol Med. 2020 doi: 10.1017/S003329172000224X. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Alonso-Galbán P, Alemañy-Castilla C (2022) Curbing misinformation and disinformation in the Covid-19 era: a view from Cuba. MEDICC Rev 22:45–46 10.37757/MR2020.V22.N2.12 [ PubMed ] [ CrossRef ]
  • Altay S, Hacquin AS, Mercier H. Why do so few people share fake news? It hurts their reputation. New Media Soc. 2022; 24 (6):1303–1324. doi: 10.1177/1461444820969893. [ CrossRef ] [ Google Scholar ]
  • Amri S, Sallami D, Aïmeur E (2022) Exmulf: an explainable multimodal content-based fake news detection system. In: International symposium on foundations and practice of security. Springer, Berlin, pp 177–187. 10.1109/IJCNN48605.2020.9206973
  • Andersen J, Søe SO. Communicative actions we live by: the problem with fact-checking, tagging or flagging fake news-the case of Facebook. Eur J Commun. 2020; 35 (2):126–139. doi: 10.1177/0267323119894489. [ CrossRef ] [ Google Scholar ]
  • Apuke OD, Omar B. Fake news and Covid-19: modelling the predictors of fake news sharing among social media users. Telematics Inform. 2021; 56 :101475. doi: 10.1016/j.tele.2020.101475. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Apuke OD, Omar B, Tunca EA, Gever CV. The effect of visual multimedia instructions against fake news spread: a quasi-experimental study with Nigerian students. J Librariansh Inf Sci. 2022 doi: 10.1177/09610006221096477. [ CrossRef ] [ Google Scholar ]
  • Aswani R, Ghrera S, Kar AK, Chandra S. Identifying buzz in social media: a hybrid approach using artificial bee colony and k-nearest neighbors for outlier detection. Soc Netw Anal Min. 2017; 7 (1):1–10. doi: 10.1007/s13278-017-0461-2. [ CrossRef ] [ Google Scholar ]
  • Avram M, Micallef N, Patil S, Menczer F (2020) Exposure to social engagement metrics increases vulnerability to misinformation. arXiv preprint arxiv:2005.04682 , 10.37016/mr-2020-033
  • Badawy A, Lerman K, Ferrara E (2019) Who falls for online political manipulation? In: Companion proceedings of the 2019 world wide web conference, pp 162–168 10.1145/3308560.3316494
  • Bahad P, Saxena P, Kamal R. Fake news detection using bi-directional LSTM-recurrent neural network. Procedia Comput Sci. 2019; 165 :74–82. doi: 10.1016/j.procs.2020.01.072. [ CrossRef ] [ Google Scholar ]
  • Bakdash J, Sample C, Rankin M, Kantarcioglu M, Holmes J, Kase S, Zaroukian E, Szymanski B (2018) The future of deception: machine-generated and manipulated images, video, and audio? In: 2018 international workshop on social sensing (SocialSens), IEEE, pp 2–2 10.1109/SocialSens.2018.00009
  • Balmas M. When fake news becomes real: combined exposure to multiple news sources and political attitudes of inefficacy, alienation, and cynicism. Commun Res. 2014; 41 (3):430–454. doi: 10.1177/0093650212453600. [ CrossRef ] [ Google Scholar ]
  • Baptista JP, Gradim A. Understanding fake news consumption: a review. Soc Sci. 2020 doi: 10.3390/socsci9100185. [ CrossRef ] [ Google Scholar ]
  • Baptista JP, Gradim A. A working definition of fake news. Encyclopedia. 2022; 2 (1):632–645. doi: 10.3390/encyclopedia2010043. [ CrossRef ] [ Google Scholar ]
  • Bastick Z. Would you notice if fake news changed your behavior? An experiment on the unconscious effects of disinformation. Comput Hum Behav. 2021; 116 :106633. doi: 10.1016/j.chb.2020.106633. [ CrossRef ] [ Google Scholar ]
  • Batailler C, Brannon SM, Teas PE, Gawronski B. A signal detection approach to understanding the identification of fake news. Perspect Psychol Sci. 2022; 17 (1):78–98. doi: 10.1177/1745691620986135. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Bessi A, Ferrara E (2016) Social bots distort the 2016 US presidential election online discussion. First Monday 21(11-7). 10.5210/fm.v21i11.7090
  • Bhattacharjee A, Shu K, Gao M, Liu H (2020) Disinformation in the online information ecosystem: detection, mitigation and challenges. arXiv preprint arXiv:2010.09113
  • Bhuiyan MM, Zhang AX, Sehat CM, Mitra T. Investigating differences in crowdsourced news credibility assessment: raters, tasks, and expert criteria. Proc ACM Hum Comput Interact. 2020; 4 (CSCW2):1–26. doi: 10.1145/3415164. [ CrossRef ] [ Google Scholar ]
  • Bode L, Vraga EK. In related news, that was wrong: the correction of misinformation through related stories functionality in social media. J Commun. 2015; 65 (4):619–638. doi: 10.1111/jcom.12166. [ CrossRef ] [ Google Scholar ]
  • Bondielli A, Marcelloni F. A survey on fake news and rumour detection techniques. Inf Sci. 2019; 497 :38–55. doi: 10.1016/j.ins.2019.05.035. [ CrossRef ] [ Google Scholar ]
  • Bovet A, Makse HA. Influence of fake news in Twitter during the 2016 US presidential election. Nat Commun. 2019; 10 (1):1–14. doi: 10.1038/s41467-018-07761-2. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Brashier NM, Pennycook G, Berinsky AJ, Rand DG. Timing matters when correcting fake news. Proc Natl Acad Sci. 2021 doi: 10.1073/pnas.2020043118. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Brewer PR, Young DG, Morreale M. The impact of real news about “fake news”: intertextual processes and political satire. Int J Public Opin Res. 2013; 25 (3):323–343. doi: 10.1093/ijpor/edt015. [ CrossRef ] [ Google Scholar ]
  • Bringula RP, Catacutan-Bangit AE, Garcia MB, Gonzales JPS, Valderama AMC. “Who is gullible to political disinformation?” Predicting susceptibility of university students to fake news. J Inf Technol Polit. 2022; 19 (2):165–179. doi: 10.1080/19331681.2021.1945988. [ CrossRef ] [ Google Scholar ]
  • Buccafurri F, Lax G, Nicolazzo S, Nocera A (2017) Tweetchain: an alternative to blockchain for crowd-based applications. In: International conference on web engineering, Springer, Berlin, pp 386–393. 10.1007/978-3-319-60131-1_24
  • Burshtein S. The true story on fake news. Intell Prop J. 2017; 29 (3):397–446. [ Google Scholar ]
  • Cardaioli M, Cecconello S, Conti M, Pajola L, Turrin F (2020) Fake news spreaders profiling through behavioural analysis. In: CLEF (working notes)
  • Cardoso Durier da Silva F, Vieira R, Garcia AC (2019) Can machines learn to detect fake news? A survey focused on social media. In: Proceedings of the 52nd Hawaii international conference on system sciences. 10.24251/HICSS.2019.332
  • Carmi E, Yates SJ, Lockley E, Pawluczuk A (2020) Data citizenship: rethinking data literacy in the age of disinformation, misinformation, and malinformation. Intern Policy Rev 9(2):1–22 10.14763/2020.2.1481
  • Celliers M, Hattingh M (2020) A systematic review on fake news themes reported in literature. In: Conference on e-Business, e-Services and e-Society. Springer, Berlin, pp 223–234. 10.1007/978-3-030-45002-1_19
  • Chen Y, Li Q, Wang H (2018) Towards trusted social networks with blockchain technology. arXiv preprint arXiv:1801.02796
  • Cheng L, Guo R, Shu K, Liu H (2020) Towards causal understanding of fake news dissemination. arXiv preprint arXiv:2010.10580
  • Chiu MM, Oh YW. How fake news differs from personal lies. Am Behav Sci. 2021; 65 (2):243–258. doi: 10.1177/0002764220910243. [ CrossRef ] [ Google Scholar ]
  • Chung M, Kim N. When I learn the news is false: how fact-checking information stems the spread of fake news via third-person perception. Hum Commun Res. 2021; 47 (1):1–24. doi: 10.1093/hcr/hqaa010. [ CrossRef ] [ Google Scholar ]
  • Clarke J, Chen H, Du D, Hu YJ. Fake news, investor attention, and market reaction. Inf Syst Res. 2020 doi: 10.1287/isre.2019.0910. [ CrossRef ] [ Google Scholar ]
  • Clayton K, Blair S, Busam JA, Forstner S, Glance J, Green G, Kawata A, Kovvuri A, Martin J, Morgan E, et al. Real solutions for fake news? Measuring the effectiveness of general warnings and fact-check tags in reducing belief in false stories on social media. Polit Behav. 2020; 42 (4):1073–1095. doi: 10.1007/s11109-019-09533-0. [ CrossRef ] [ Google Scholar ]
  • Collins B, Hoang DT, Nguyen NT, Hwang D (2020) Fake news types and detection models on social media: a state-of-the-art survey. In: Asian conference on intelligent information and database systems. Springer, Berlin, pp 562–573 10.1007/978-981-15-3380-8_49
  • Conroy NK, Rubin VL, Chen Y. Automatic deception detection: methods for finding fake news. Proc Assoc Inf Sci Technol. 2015; 52 (1):1–4. doi: 10.1002/pra2.2015.145052010082. [ CrossRef ] [ Google Scholar ]
  • Cooke NA. Posttruth, truthiness, and alternative facts: Information behavior and critical information consumption for a new age. Libr Q. 2017; 87 (3):211–221. doi: 10.1086/692298. [ CrossRef ] [ Google Scholar ]
  • Coscia M, Rossi L. Distortions of political bias in crowdsourced misinformation flagging. J R Soc Interface. 2020; 17 (167):20200020. doi: 10.1098/rsif.2020.0020. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Dame Adjin-Tettey T. Combating fake news, disinformation, and misinformation: experimental evidence for media literacy education. Cogent Arts Human. 2022; 9 (1):2037229. doi: 10.1080/23311983.2022.2037229. [ CrossRef ] [ Google Scholar ]
  • Deepak S, Chitturi B. Deep neural approach to fake-news identification. Procedia Comput Sci. 2020; 167 :2236–2243. doi: 10.1016/j.procs.2020.03.276. [ CrossRef ] [ Google Scholar ]
  • de Cock Buning M (2018) A multi-dimensional approach to disinformation: report of the independent high level group on fake news and online disinformation. Publications Office of the European Union
  • Del Vicario M, Quattrociocchi W, Scala A, Zollo F. Polarization and fake news: early warning of potential misinformation targets. ACM Trans Web (TWEB) 2019; 13 (2):1–22. doi: 10.1145/3316809. [ CrossRef ] [ Google Scholar ]
  • Demuyakor J, Opata EM. Fake news on social media: predicting which media format influences fake news most on facebook. J Intell Commun. 2022 doi: 10.54963/jic.v2i1.56. [ CrossRef ] [ Google Scholar ]
  • Derakhshan H, Wardle C (2017) Information disorder: definitions. In: Understanding and addressing the disinformation ecosystem, pp 5–12
  • Desai AN, Ruidera D, Steinbrink JM, Granwehr B, Lee DH. Misinformation and disinformation: the potential disadvantages of social media in infectious disease and how to combat them. Clin Infect Dis. 2022; 74 (Supplement–3):e34–e39. doi: 10.1093/cid/ciac109. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Di Domenico G, Sit J, Ishizaka A, Nunan D. Fake news, social media and marketing: a systematic review. J Bus Res. 2021; 124 :329–341. doi: 10.1016/j.jbusres.2020.11.037. [ CrossRef ] [ Google Scholar ]
  • Dias N, Pennycook G, Rand DG. Emphasizing publishers does not effectively reduce susceptibility to misinformation on social media. Harv Kennedy School Misinform Rev. 2020 doi: 10.37016/mr-2020-001. [ CrossRef ] [ Google Scholar ]
  • DiCicco KW, Agarwal N (2020) Blockchain technology-based solutions to fight misinformation: a survey. In: Disinformation, misinformation, and fake news in social media. Springer, Berlin, pp 267–281, 10.1007/978-3-030-42699-6_14
  • Douglas KM, Uscinski JE, Sutton RM, Cichocka A, Nefes T, Ang CS, Deravi F. Understanding conspiracy theories. Polit Psychol. 2019; 40 :3–35. doi: 10.1111/pops.12568. [ CrossRef ] [ Google Scholar ]
  • Edgerly S, Mourão RR, Thorson E, Tham SM. When do audiences verify? How perceptions about message and source influence audience verification of news headlines. J Mass Commun Q. 2020; 97 (1):52–71. doi: 10.1177/1077699019864680. [ CrossRef ] [ Google Scholar ]
  • Egelhofer JL, Lecheler S. Fake news as a two-dimensional phenomenon: a framework and research agenda. Ann Int Commun Assoc. 2019; 43 (2):97–116. doi: 10.1080/23808985.2019.1602782. [ CrossRef ] [ Google Scholar ]
  • Elhadad MK, Li KF, Gebali F (2019) A novel approach for selecting hybrid features from online news textual metadata for fake news detection. In: International conference on p2p, parallel, grid, cloud and internet computing. Springer, Berlin, pp 914–925, 10.1007/978-3-030-33509-0_86
  • ERGA (2018) Fake news, and the information disorder. European Broadcasting Union (EBU)
  • ERGA (2021) Notions of disinformation and related concepts. European Regulators Group for Audiovisual Media Services (ERGA)
  • Escolà-Gascón Á. New techniques to measure lie detection using Covid-19 fake news and the Multivariable Multiaxial Suggestibility Inventory-2 (MMSI-2) Comput Hum Behav Rep. 2021; 3 :100049. doi: 10.1016/j.chbr.2020.100049. [ CrossRef ] [ Google Scholar ]
  • Fazio L. Pausing to consider why a headline is true or false can help reduce the sharing of false news. Harv Kennedy School Misinformation Rev. 2020 doi: 10.37016/mr-2020-009. [ CrossRef ] [ Google Scholar ]
  • Ferrara E, Varol O, Davis C, Menczer F, Flammini A. The rise of social bots. Commun ACM. 2016; 59 (7):96–104. doi: 10.1145/2818717. [ CrossRef ] [ Google Scholar ]
  • Flynn D, Nyhan B, Reifler J. The nature and origins of misperceptions: understanding false and unsupported beliefs about politics. Polit Psychol. 2017; 38 :127–150. doi: 10.1111/pops.12394. [ CrossRef ] [ Google Scholar ]
  • Fraga-Lamas P, Fernández-Caramés TM. Fake news, disinformation, and deepfakes: leveraging distributed ledger technologies and blockchain to combat digital deception and counterfeit reality. IT Prof. 2020; 22 (2):53–59. doi: 10.1109/MITP.2020.2977589. [ CrossRef ] [ Google Scholar ]
  • Freeman D, Waite F, Rosebrock L, Petit A, Causier C, East A, Jenner L, Teale AL, Carr L, Mulhall S, et al. Coronavirus conspiracy beliefs, mistrust, and compliance with government guidelines in England. Psychol Med. 2020 doi: 10.1017/S0033291720001890. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Friggeri A, Adamic L, Eckles D, Cheng J (2014) Rumor cascades. In: Proceedings of the international AAAI conference on web and social media
  • García SA, García GG, Prieto MS, Moreno Guerrero AJ, Rodríguez Jiménez C. The impact of term fake news on the scientific community. Scientific performance and mapping in web of science. Soc Sci. 2020 doi: 10.3390/socsci9050073. [ CrossRef ] [ Google Scholar ]
  • Garrett RK, Bond RM. Conservatives’ susceptibility to political misperceptions. Sci Adv. 2021 doi: 10.1126/sciadv.abf1234. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Giachanou A, Ríssola EA, Ghanem B, Crestani F, Rosso P (2020) The role of personality and linguistic patterns in discriminating between fake news spreaders and fact checkers. In: International conference on applications of natural language to information systems. Springer, Berlin, pp 181–192 10.1007/978-3-030-51310-8_17
  • Golbeck J, Mauriello M, Auxier B, Bhanushali KH, Bonk C, Bouzaghrane MA, Buntain C, Chanduka R, Cheakalos P, Everett JB et al (2018) Fake news vs satire: a dataset and analysis. In: Proceedings of the 10th ACM conference on web science, pp 17–21. doi:10.1145/3201064.3201100
  • Goldani MH, Momtazi S, Safabakhsh R. Detecting fake news with capsule neural networks. Appl Soft Comput. 2021;101:106991. doi:10.1016/j.asoc.2020.106991
  • Goldstein I, Yang L. Good disclosure, bad disclosure. J Financ Econ. 2019;131(1):118–138. doi:10.1016/j.jfineco.2018.08.004
  • Grinberg N, Joseph K, Friedland L, Swire-Thompson B, Lazer D. Fake news on Twitter during the 2016 US presidential election. Science. 2019;363(6425):374–378. doi:10.1126/science.aau2706
  • Guadagno RE, Guttieri K (2021) Fake news and information warfare: an examination of the political and psychological processes from the digital sphere to the real world. In: Research anthology on fake news, political warfare, and combatting the spread of misinformation. IGI Global, pp 218–242. doi:10.4018/978-1-7998-7291-7.ch013
  • Guess A, Nagler J, Tucker J. Less than you think: prevalence and predictors of fake news dissemination on Facebook. Sci Adv. 2019. doi:10.1126/sciadv.aau4586
  • Guo C, Cao J, Zhang X, Shu K, Yu M (2019) Exploiting emotions for fake news detection on social media. arXiv preprint arXiv:1903.01728
  • Guo B, Ding Y, Yao L, Liang Y, Yu Z. The future of false information detection on social media: new perspectives and trends. ACM Comput Surv (CSUR). 2020;53(4):1–36. doi:10.1145/3393880
  • Gupta A, Li H, Farnoush A, Jiang W. Understanding patterns of Covid infodemic: a systematic and pragmatic approach to curb fake news. J Bus Res. 2022;140:670–683. doi:10.1016/j.jbusres.2021.11.032
  • Ha L, Andreu Perez L, Ray R. Mapping recent development in scholarship on fake news and misinformation, 2008 to 2017: disciplinary contribution, topics, and impact. Am Behav Sci. 2021;65(2):290–315. doi:10.1177/0002764219869402
  • Habib A, Asghar MZ, Khan A, Habib A, Khan A. False information detection in online content and its role in decision making: a systematic literature review. Soc Netw Anal Min. 2019;9(1):1–20. doi:10.1007/s13278-019-0595-5
  • Hage H, Aïmeur E, Guedidi A (2021) Understanding the landscape of online deception. In: Research anthology on fake news, political warfare, and combatting the spread of misinformation. IGI Global, pp 39–66. doi:10.4018/978-1-7998-2543-2.ch014
  • Hakak S, Alazab M, Khan S, Gadekallu TR, Maddikunta PKR, Khan WZ. An ensemble machine learning approach through effective feature extraction to classify fake news. Futur Gener Comput Syst. 2021;117:47–58. doi:10.1016/j.future.2020.11.022
  • Hamdi T, Slimi H, Bounhas I, Slimani Y (2020) A hybrid approach for fake news detection in Twitter based on user features and graph embedding. In: International conference on distributed computing and internet technology. Springer, Berlin, pp 266–280. doi:10.1007/978-3-030-36987-3_17
  • Hameleers M. Separating truth from lies: comparing the effects of news media literacy interventions and fact-checkers in response to political misinformation in the US and Netherlands. Inf Commun Soc. 2022;25(1):110–126. doi:10.1080/1369118X.2020.1764603
  • Hameleers M, Powell TE, Van Der Meer TG, Bos L. A picture paints a thousand lies? The effects and mechanisms of multimodal disinformation and rebuttals disseminated via social media. Polit Commun. 2020;37(2):281–301. doi:10.1080/10584609.2019.1674979
  • Hameleers M, Brosius A, de Vreese CH. Whom to trust? Media exposure patterns of citizens with perceptions of misinformation and disinformation related to the news media. Eur J Commun. 2022. doi:10.1177/02673231211072667
  • Hartley K, Vu MK. Fighting fake news in the Covid-19 era: policy insights from an equilibrium model. Policy Sci. 2020;53(4):735–758. doi:10.1007/s11077-020-09405-z
  • Hasan HR, Salah K. Combating deepfake videos using blockchain and smart contracts. IEEE Access. 2019;7:41596–41606. doi:10.1109/ACCESS.2019.2905689
  • Hiriyannaiah S, Srinivas A, Shetty GK, Siddesh G, Srinivasa K (2020) A computationally intelligent agent for detecting fake news using generative adversarial networks. In: Hybrid computational intelligence: challenges and applications, pp 69–96. doi:10.1016/B978-0-12-818699-2.00004-4
  • Hosseinimotlagh S, Papalexakis EE (2018) Unsupervised content-based identification of fake news articles with tensor decomposition ensembles. In: Proceedings of the workshop on misinformation and misbehavior mining on the web (MIS2)
  • Huckle S, White M. Fake news: a technological approach to proving the origins of content, using blockchains. Big Data. 2017;5(4):356–371. doi:10.1089/big.2017.0071
  • Huffaker JS, Kummerfeld JK, Lasecki WS, Ackerman MS (2020) Crowdsourced detection of emotionally manipulative language. In: Proceedings of the 2020 CHI conference on human factors in computing systems, pp 1–14. doi:10.1145/3313831.3376375
  • Ireton C, Posetti J. Journalism, fake news & disinformation: handbook for journalism education and training. Paris: UNESCO Publishing; 2018.
  • Islam MR, Liu S, Wang X, Xu G. Deep learning for misinformation detection on online social networks: a survey and new perspectives. Soc Netw Anal Min. 2020;10(1):1–20. doi:10.1007/s13278-020-00696-x
  • Ismailov M, Tsikerdekis M, Zeadally S. Vulnerabilities to online social network identity deception detection research and recommendations for mitigation. Fut Internet. 2020;12(9):148. doi:10.3390/fi12090148
  • Jakesch M, Koren M, Evtushenko A, Naaman M (2019) The role of source and expressive responding in political news evaluation. In: Computation and journalism symposium
  • Jamieson KH. Cyberwar: how Russian hackers and trolls helped elect a president: what we don’t, can’t, and do know. Oxford: Oxford University Press; 2020.
  • Jiang S, Chen X, Zhang L, Chen S, Liu H (2019) User-characteristic enhanced model for fake news detection in social media. In: CCF international conference on natural language processing and Chinese computing. Springer, Berlin, pp 634–646. doi:10.1007/978-3-030-32233-5_49
  • Jin Z, Cao J, Zhang Y, Luo J (2016) News verification by exploiting conflicting social viewpoints in microblogs. In: Proceedings of the AAAI conference on artificial intelligence
  • Jing TW, Murugesan RK (2018) A theoretical framework to build trust and prevent fake news in social media using blockchain. In: International conference of reliable information and communication technology. Springer, Berlin, pp 955–962. doi:10.1007/978-3-319-99007-1_88
  • Jones-Jang SM, Mortensen T, Liu J. Does media literacy help identification of fake news? Information literacy helps, but other literacies don’t. Am Behav Sci. 2021;65(2):371–388. doi:10.1177/0002764219869406
  • Jungherr A, Schroeder R. Disinformation and the structural transformations of the public arena: addressing the actual challenges to democracy. Soc Media Soc. 2021. doi:10.1177/2056305121988928
  • Kaliyar RK (2018) Fake news detection using a deep neural network. In: 2018 4th international conference on computing communication and automation (ICCCA). IEEE, pp 1–7. doi:10.1109/CCAA.2018.8777343
  • Kaliyar RK, Goswami A, Narang P, Sinha S. FNDNet—a deep convolutional neural network for fake news detection. Cogn Syst Res. 2020;61:32–44. doi:10.1016/j.cogsys.2019.12.005
  • Kapantai E, Christopoulou A, Berberidis C, Peristeras V. A systematic literature review on disinformation: toward a unified taxonomical framework. New Media Soc. 2021;23(5):1301–1326. doi:10.1177/1461444820959296
  • Kapusta J, Benko L, Munk M (2019) Fake news identification based on sentiment and frequency analysis. In: International conference Europe Middle East and North Africa information systems and technologies to support learning. Springer, Berlin, pp 400–409. doi:10.1007/978-3-030-36778-7_44
  • Kaur S, Kumar P, Kumaraguru P. Automating fake news detection system using multi-level voting model. Soft Comput. 2020;24(12):9049–9069. doi:10.1007/s00500-019-04436-y
  • Khan SA, Alkawaz MH, Zangana HM (2019) The use and abuse of social media for spreading fake news. In: 2019 IEEE international conference on automatic control and intelligent systems (I2CACIS). IEEE, pp 145–148. doi:10.1109/I2CACIS.2019.8825029
  • Kim J, Tabibian B, Oh A, Schölkopf B, Gomez-Rodriguez M (2018) Leveraging the crowd to detect and reduce the spread of fake news and misinformation. In: Proceedings of the eleventh ACM international conference on web search and data mining, pp 324–332. doi:10.1145/3159652.3159734
  • Klein D, Wueller J. Fake news: a legal perspective. J Internet Law. 2017;20(10):5–13.
  • Kogan S, Moskowitz TJ, Niessner M (2019) Fake news: evidence from financial markets. Available at SSRN 3237763
  • Kuklinski JH, Quirk PJ, Jerit J, Schwieder D, Rich RF. Misinformation and the currency of democratic citizenship. J Polit. 2000;62(3):790–816. doi:10.1111/0022-3816.00033
  • Kumar S, Shah N (2018) False information on web and social media: a survey. arXiv preprint arXiv:1804.08559
  • Kumar S, West R, Leskovec J (2016) Disinformation on the web: impact, characteristics, and detection of Wikipedia hoaxes. In: Proceedings of the 25th international conference on world wide web, pp 591–602. doi:10.1145/2872427.2883085
  • La Barbera D, Roitero K, Demartini G, Mizzaro S, Spina D (2020) Crowdsourcing truthfulness: the impact of judgment scale and assessor bias. In: European conference on information retrieval. Springer, Berlin, pp 207–214. doi:10.1007/978-3-030-45442-5_26
  • Lanius C, Weber R, MacKenzie WI. Use of bot and content flags to limit the spread of misinformation among social networks: a behavior and attitude survey. Soc Netw Anal Min. 2021;11(1):1–15. doi:10.1007/s13278-021-00739-x
  • Lazer DM, Baum MA, Benkler Y, Berinsky AJ, Greenhill KM, Menczer F, Metzger MJ, Nyhan B, Pennycook G, Rothschild D, et al. The science of fake news. Science. 2018;359(6380):1094–1096. doi:10.1126/science.aao2998
  • Le T, Shu K, Molina MD, Lee D, Sundar SS, Liu H (2019) 5 sources of clickbaits you should know! Using synthetic clickbaits to improve prediction and distinguish between bot-generated and human-written headlines. In: 2019 IEEE/ACM international conference on advances in social networks analysis and mining (ASONAM). IEEE, pp 33–40. doi:10.1145/3341161.3342875
  • Lewandowsky S (2020) Climate change, disinformation, and how to combat it. Annual Review of Public Health 42. doi:10.1146/annurev-publhealth-090419-102409
  • Liu Y, Wu YF (2018) Early detection of fake news on social media through propagation path classification with recurrent and convolutional networks. In: Proceedings of the AAAI conference on artificial intelligence, pp 354–361
  • Luo M, Hancock JT, Markowitz DM. Credibility perceptions and detection accuracy of fake news headlines on social media: effects of truth-bias and endorsement cues. Commun Res. 2022;49(2):171–195. doi:10.1177/0093650220921321
  • Lutzke L, Drummond C, Slovic P, Árvai J. Priming critical thinking: simple interventions limit the influence of fake news about climate change on Facebook. Glob Environ Chang. 2019;58:101964. doi:10.1016/j.gloenvcha.2019.101964
  • Maertens R, Anseel F, van der Linden S. Combatting climate change misinformation: evidence for longevity of inoculation and consensus messaging effects. J Environ Psychol. 2020;70:101455. doi:10.1016/j.jenvp.2020.101455
  • Mahabub A. A robust technique of fake news detection using ensemble voting classifier and comparison with other classifiers. SN Appl Sci. 2020;2(4):1–9. doi:10.1007/s42452-020-2326-y
  • Mahbub S, Pardede E, Kayes A, Rahayu W. Controlling astroturfing on the internet: a survey on detection techniques and research challenges. Int J Web Grid Serv. 2019;15(2):139–158. doi:10.1504/IJWGS.2019.099561
  • Marsden C, Meyer T, Brown I. Platform values and democratic elections: how can the law regulate digital disinformation? Comput Law Secur Rev. 2020;36:105373. doi:10.1016/j.clsr.2019.105373
  • Masciari E, Moscato V, Picariello A, Sperlí G (2020) Detecting fake news by image analysis. In: Proceedings of the 24th symposium on international database engineering and applications, pp 1–5. doi:10.1145/3410566.3410599
  • Mazzeo V, Rapisarda A. Investigating fake and reliable news sources using complex networks analysis. Front Phys. 2022;10:886544. doi:10.3389/fphy.2022.886544
  • McGrew S. Learning to evaluate: an intervention in civic online reasoning. Comput Educ. 2020;145:103711. doi:10.1016/j.compedu.2019.103711
  • McGrew S, Breakstone J, Ortega T, Smith M, Wineburg S. Can students evaluate online sources? Learning from assessments of civic online reasoning. Theory Res Soc Educ. 2018;46(2):165–193. doi:10.1080/00933104.2017.1416320
  • Meel P, Vishwakarma DK. Fake news, rumor, information pollution in social media and web: a contemporary survey of state-of-the-arts, challenges and opportunities. Expert Syst Appl. 2020;153:112986. doi:10.1016/j.eswa.2019.112986
  • Meese J, Frith J, Wilken R. Covid-19, 5G conspiracies and infrastructural futures. Media Int Aust. 2020;177(1):30–46. doi:10.1177/1329878X20952165
  • Metzger MJ, Hartsell EH, Flanagin AJ. Cognitive dissonance or credibility? A comparison of two theoretical explanations for selective exposure to partisan news. Commun Res. 2020;47(1):3–28. doi:10.1177/0093650215613136
  • Micallef N, He B, Kumar S, Ahamad M, Memon N (2020) The role of the crowd in countering misinformation: a case study of the Covid-19 infodemic. arXiv preprint arXiv:2011.05773
  • Mihailidis P, Viotty S. Spreadable spectacle in digital culture: civic expression, fake news, and the role of media literacies in “post-fact” society. Am Behav Sci. 2017;61(4):441–454. doi:10.1177/0002764217701217
  • Mishra R (2020) Fake news detection using higher-order user to user mutual-attention progression in propagation paths. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops, pp 652–653
  • Mishra S, Shukla P, Agarwal R. Analyzing machine learning enabled fake news detection techniques for diversified datasets. Wirel Commun Mobile Comput. 2022. doi:10.1155/2022/1575365
  • Molina MD, Sundar SS, Le T, Lee D. “Fake news” is not simply false information: a concept explication and taxonomy of online content. Am Behav Sci. 2021;65(2):180–212. doi:10.1177/0002764219878224
  • Moro C, Birt JR (2022) Review bombing is a dirty practice, but research shows games do benefit from online feedback. The Conversation. https://research.bond.edu.au/en/publications/review-bombing-is-a-dirty-practice-but-research-shows-games-do-be
  • Mustafaraj E, Metaxas PT (2017) The fake news spreading plague: was it preventable? In: Proceedings of the 2017 ACM on web science conference, pp 235–239. doi:10.1145/3091478.3091523
  • Nagel TW. Measuring fake news acumen using a news media literacy instrument. J Media Liter Educ. 2022;14(1):29–42. doi:10.23860/JMLE-2022-14-1-3
  • Nakov P (2020) Can we spot the “fake news” before it was even written? arXiv preprint arXiv:2008.04374
  • Nekmat E. Nudge effect of fact-check alerts: source influence and media skepticism on sharing of news misinformation in social media. Soc Media Soc. 2020. doi:10.1177/2056305119897322
  • Nygren T, Brounéus F, Svensson G. Diversity and credibility in young people’s news feeds: a foundation for teaching and learning citizenship in a digital era. J Soc Sci Educ. 2019;18(2):87–109. doi:10.4119/jsse-917
  • Nyhan B, Reifler J. Displacing misinformation about events: an experimental test of causal corrections. J Exp Polit Sci. 2015;2(1):81–93. doi:10.1017/XPS.2014.22
  • Nyhan B, Porter E, Reifler J, Wood TJ. Taking fact-checks literally but not seriously? The effects of journalistic fact-checking on factual beliefs and candidate favorability. Polit Behav. 2020;42(3):939–960. doi:10.1007/s11109-019-09528-x
  • Nyow NX, Chua HN (2019) Detecting fake news with tweets’ properties. In: 2019 IEEE conference on application, information and network security (AINS). IEEE, pp 24–29. doi:10.1109/AINS47559.2019.8968706
  • Ochoa IS, de Mello G, Silva LA, Gomes AJ, Fernandes AM, Leithardt VRQ (2019) FakeChain: a blockchain architecture to ensure trust in social media networks. In: International conference on the quality of information and communications technology. Springer, Berlin, pp 105–118. doi:10.1007/978-3-030-29238-6_8
  • Ozbay FA, Alatas B. Fake news detection within online social media using supervised artificial intelligence algorithms. Physica A. 2020;540:123174. doi:10.1016/j.physa.2019.123174
  • Ozturk P, Li H, Sakamoto Y (2015) Combating rumor spread on social media: the effectiveness of refutation and warning. In: 2015 48th Hawaii international conference on system sciences. IEEE, pp 2406–2414. doi:10.1109/HICSS.2015.288
  • Parikh SB, Atrey PK (2018) Media-rich fake news detection: a survey. In: 2018 IEEE conference on multimedia information processing and retrieval (MIPR). IEEE, pp 436–441. doi:10.1109/MIPR.2018.00093
  • Parrish K (2018) Deep learning & machine learning: what’s the difference? Online: https://parsers.me/deep-learning-machine-learning-whats-the-difference/. Accessed 20 May 2020
  • Paschen J. Investigating the emotional appeal of fake news using artificial intelligence and human contributions. J Prod Brand Manag. 2019;29(2):223–233. doi:10.1108/JPBM-12-2018-2179
  • Pathak A, Srihari RK (2019) Breaking! Presenting fake news corpus for automated fact checking. In: Proceedings of the 57th annual meeting of the association for computational linguistics: student research workshop, pp 357–362
  • Peng J, Detchon S, Choo KKR, Ashman H. Astroturfing detection in social media: a binary n-gram-based approach. Concurr Comput: Pract Exp. 2017;29(17):e4013. doi:10.1002/cpe.4013
  • Pennycook G, Rand DG. Fighting misinformation on social media using crowdsourced judgments of news source quality. Proc Natl Acad Sci. 2019;116(7):2521–2526. doi:10.1073/pnas.1806781116
  • Pennycook G, Rand DG. Who falls for fake news? The roles of bullshit receptivity, overclaiming, familiarity, and analytic thinking. J Pers. 2020;88(2):185–200. doi:10.1111/jopy.12476
  • Pennycook G, Bear A, Collins ET, Rand DG. The implied truth effect: attaching warnings to a subset of fake news headlines increases perceived accuracy of headlines without warnings. Manag Sci. 2020;66(11):4944–4957. doi:10.1287/mnsc.2019.3478
  • Pennycook G, McPhetres J, Zhang Y, Lu JG, Rand DG. Fighting Covid-19 misinformation on social media: experimental evidence for a scalable accuracy-nudge intervention. Psychol Sci. 2020;31(7):770–780. doi:10.1177/0956797620939054
  • Potthast M, Kiesel J, Reinartz K, Bevendorff J, Stein B (2017) A stylometric inquiry into hyperpartisan and fake news. arXiv preprint arXiv:1702.05638
  • Previti M, Rodriguez-Fernandez V, Camacho D, Carchiolo V, Malgeri M (2020) Fake news detection using time series and user features classification. In: International conference on the applications of evolutionary computation (part of EvoStar). Springer, Berlin, pp 339–353. doi:10.1007/978-3-030-43722-0_22
  • Przybyla P (2020) Capturing the style of fake news. In: Proceedings of the AAAI conference on artificial intelligence, pp 490–497. doi:10.1609/aaai.v34i01.5386
  • Qayyum A, Qadir J, Janjua MU, Sher F. Using blockchain to rein in the new post-truth world and check the spread of fake news. IT Prof. 2019;21(4):16–24. doi:10.1109/MITP.2019.2910503
  • Qian F, Gong C, Sharma K, Liu Y (2018) Neural user response generator: fake news detection with collective user intelligence. In: IJCAI, vol 18, pp 3834–3840. doi:10.24963/ijcai.2018/533
  • Raza S, Ding C. Fake news detection based on news content and social contexts: a transformer-based approach. Int J Data Sci Anal. 2022;13(4):335–362. doi:10.1007/s41060-021-00302-z
  • Ricard J, Medeiros J (2020) Using misinformation as a political weapon: Covid-19 and Bolsonaro in Brazil. Harv Kennedy School Misinformation Rev 1(3). https://misinforeview.hks.harvard.edu/article/using-misinformation-as-a-political-weapon-covid-19-and-bolsonaro-in-brazil/
  • Roozenbeek J, van der Linden S. Fake news game confers psychological resistance against online misinformation. Palgrave Commun. 2019;5(1):1–10. doi:10.1057/s41599-019-0279-9
  • Roozenbeek J, van der Linden S, Nygren T. Prebunking interventions based on the psychological theory of “inoculation” can reduce susceptibility to misinformation across cultures. Harv Kennedy School Misinformation Rev. 2020. doi:10.37016//mr-2020-008
  • Roozenbeek J, Schneider CR, Dryhurst S, Kerr J, Freeman AL, Recchia G, Van Der Bles AM, Van Der Linden S. Susceptibility to misinformation about Covid-19 around the world. R Soc Open Sci. 2020;7(10):201199. doi:10.1098/rsos.201199
  • Rubin VL, Conroy N, Chen Y, Cornwell S (2016) Fake news or truth? Using satirical cues to detect potentially misleading news. In: Proceedings of the second workshop on computational approaches to deception detection, pp 7–17
  • Ruchansky N, Seo S, Liu Y (2017) CSI: a hybrid deep model for fake news detection. In: Proceedings of the 2017 ACM on conference on information and knowledge management, pp 797–806. doi:10.1145/3132847.3132877
  • Schuyler AJ (2019) Regulating facts: a procedural framework for identifying, excluding, and deterring the intentional or knowing proliferation of fake news online. Univ Ill JL Technol Pol’y 2019:211–240
  • Shae Z, Tsai J (2019) AI blockchain platform for trusting news. In: 2019 IEEE 39th international conference on distributed computing systems (ICDCS). IEEE, pp 1610–1619. doi:10.1109/ICDCS.2019.00160
  • Shang W, Liu M, Lin W, Jia M (2018) Tracing the source of news based on blockchain. In: 2018 IEEE/ACIS 17th international conference on computer and information science (ICIS). IEEE, pp 377–381. doi:10.1109/ICIS.2018.8466516
  • Shao C, Ciampaglia GL, Flammini A, Menczer F (2016) Hoaxy: a platform for tracking online misinformation. In: Proceedings of the 25th international conference companion on world wide web, pp 745–750. doi:10.1145/2872518.2890098
  • Shao C, Ciampaglia GL, Varol O, Yang KC, Flammini A, Menczer F. The spread of low-credibility content by social bots. Nat Commun. 2018;9(1):1–9. doi:10.1038/s41467-018-06930-7
  • Shao C, Hui PM, Wang L, Jiang X, Flammini A, Menczer F, Ciampaglia GL. Anatomy of an online misinformation network. PLoS ONE. 2018;13(4):e0196087. doi:10.1371/journal.pone.0196087
  • Sharma K, Qian F, Jiang H, Ruchansky N, Zhang M, Liu Y. Combating fake news: a survey on identification and mitigation techniques. ACM Trans Intell Syst Technol (TIST). 2019;10(3):1–42. doi:10.1145/3305260
  • Sharma K, Seo S, Meng C, Rambhatla S, Liu Y (2020) Covid-19 on social media: analyzing misinformation in Twitter conversations. arXiv preprint arXiv:2003.12309
  • Shen C, Kasra M, Pan W, Bassett GA, Malloch Y, O’Brien JF. Fake images: the effects of source, intermediary, and digital media literacy on contextual assessment of image credibility online. New Media Soc. 2019;21(2):438–463. doi:10.1177/1461444818799526
  • Sherman IN, Redmiles EM, Stokes JW (2020) Designing indicators to combat fake media. arXiv preprint arXiv:2010.00544
  • Shi P, Zhang Z, Choo KKR. Detecting malicious social bots based on clickstream sequences. IEEE Access. 2019;7:28855–28862. doi:10.1109/ACCESS.2019.2901864
  • Shu K, Sliva A, Wang S, Tang J, Liu H. Fake news detection on social media: a data mining perspective. ACM SIGKDD Explor Newsl. 2017;19(1):22–36. doi:10.1145/3137597.3137600
  • Shu K, Mahudeswaran D, Wang S, Lee D, Liu H (2018a) FakeNewsNet: a data repository with news content, social context and spatialtemporal information for studying fake news on social media. arXiv preprint arXiv:1809.01286. doi:10.1089/big.2020.0062
  • Shu K, Wang S, Liu H (2018b) Understanding user profiles on social media for fake news detection. In: 2018 IEEE conference on multimedia information processing and retrieval (MIPR). IEEE, pp 430–435. doi:10.1109/MIPR.2018.00092
  • Shu K, Wang S, Liu H (2019a) Beyond news contents: the role of social context for fake news detection. In: Proceedings of the twelfth ACM international conference on web search and data mining, pp 312–320. doi:10.1145/3289600.3290994
  • Shu K, Zhou X, Wang S, Zafarani R, Liu H (2019b) The role of user profiles for fake news detection. In: Proceedings of the 2019 IEEE/ACM international conference on advances in social networks analysis and mining, pp 436–439. doi:10.1145/3341161.3342927
  • Shu K, Bhattacharjee A, Alatawi F, Nazer TH, Ding K, Karami M, Liu H. Combating disinformation in a social media age. Wiley Interdiscip Rev: Data Min Knowl Discov. 2020;10(6):e1385. doi:10.1002/widm.1385
  • Shu K, Mahudeswaran D, Wang S, Liu H. Hierarchical propagation networks for fake news detection: investigation and exploitation. Proc Int AAAI Conf Web Soc Media. 2020;14:626–637.
  • Shu K, Wang S, Lee D, Liu H (2020c) Mining disinformation and fake news: concepts, methods, and recent advancements. In: Disinformation, misinformation, and fake news in social media. Springer, Berlin, pp 1–19. doi:10.1007/978-3-030-42699-6_1
  • Shu K, Zheng G, Li Y, Mukherjee S, Awadallah AH, Ruston S, Liu H (2020d) Early detection of fake news with multi-source weak social supervision. In: ECML/PKDD (3), pp 650–666
  • Singh VK, Ghosh I, Sonagara D. Detecting fake news stories via multimodal analysis. J Am Soc Inf Sci. 2021;72(1):3–17. doi:10.1002/asi.24359
  • Sintos S, Agarwal PK, Yang J (2019) Selecting data to clean for fact checking: minimizing uncertainty vs. maximizing surprise. Proc VLDB Endow 12(13):2408–2421. doi:10.14778/3358701.3358708
  • Snow J (2017) Can AI win the war against fake news? MIT Technology Review. Online: https://www.technologyreview.com/s/609717/can-ai-win-the-war-against-fake-news/. Accessed 3 Oct 2020
  • Song G, Kim S, Hwang H, Lee K (2019) Blockchain-based notarization for social media. In: 2019 IEEE international conference on consumer electronics (ICCE). IEEE, pp 1–2. doi:10.1109/ICCE.2019.8661978
  • Starbird K, Arif A, Wilson T (2019) Disinformation as collaborative work: surfacing the participatory nature of strategic information operations. In: Proceedings of the ACM on human–computer interaction, vol 3(CSCW), pp 1–26. doi:10.1145/3359229
  • Sterret D, Malato D, Benz J, Kantor L, Tompson T, Rosenstiel T, Sonderman J, Loker K, Swanson E (2018) Who shared it? How Americans decide what news to trust on social media. Technical report, NORC working paper series WP-2018-001, pp 1–24
  • Sutton RM, Douglas KM. Conspiracy theories and the conspiracy mindset: implications for political ideology. Curr Opin Behav Sci. 2020;34:118–122. doi:10.1016/j.cobeha.2020.02.015
  • Tandoc EC Jr, Thomas RJ, Bishop L. What is (fake) news? Analyzing news values (and more) in fake stories. Media Commun. 2021;9(1):110–119. doi:10.17645/mac.v9i1.3331
  • Tchakounté F, Faissal A, Atemkeng M, Ntyam A. A reliable weighting scheme for the aggregation of crowd intelligence to detect fake news. Information. 2020;11(6):319. doi:10.3390/info11060319
  • Tchechmedjiev A, Fafalios P, Boland K, Gasquet M, Zloch M, Zapilko B, Dietze S, Todorov K (2019) ClaimsKG: a knowledge graph of fact-checked claims. In: International semantic web conference. Springer, Berlin, pp 309–324. doi:10.1007/978-3-030-30796-7_20
  • Treen KMd, Williams HT, O’Neill SJ. Online misinformation about climate change. Wiley Interdiscip Rev Clim Change. 2020;11(5):e665. doi:10.1002/wcc.665
  • Tsang SJ. Motivated fake news perception: the impact of news sources and policy support on audiences’ assessment of news fakeness. J Mass Commun Q. 2020. doi:10.1177/1077699020952129
  • Tschiatschek S, Singla A, Gomez Rodriguez M, Merchant A, Krause A (2018) Fake news detection in social networks via crowd signals. In: Companion proceedings of the web conference 2018, pp 517–524. doi:10.1145/3184558.3188722
  • Uppada SK, Manasa K, Vidhathri B, Harini R, Sivaselvan B. Novel approaches to fake news and fake account detection in OSNs: user social engagement and visual content centric model. Soc Netw Anal Min. 2022;12(1):1–19. doi:10.1007/s13278-022-00878-9
  • Van der Linden S, Roozenbeek J (2020) Psychological inoculation against fake news. In: The psychology of fake news: accepting, sharing, and correcting misinformation. doi:10.4324/9780429295379-11
  • Van der Linden S, Panagopoulos C, Roozenbeek J. You are fake news: political bias in perceptions of fake news. Media Cult Soc. 2020;42(3):460–470. doi:10.1177/0163443720906992
  • Valenzuela S, Muñiz C, Santos M. Social media and belief in misinformation in Mexico: a case of maximal panic, minimal effects? Int J Press Polit. 2022. doi:10.1177/19401612221088988
  • Vasu N, Ang B, Teo TA, Jayakumar S, Raizal M, Ahuja J (2018) Fake news: national security in the post-truth era. RSIS
  • Vereshchaka A, Cosimini S, Dong W (2020) Analyzing and distinguishing fake and real news to mitigate the problem of disinformation. Computational and Mathematical Organization Theory, pp 1–15. doi:10.1007/s10588-020-09307-8
  • Verstraete M, Bambauer DE, Bambauer JR (2017) Identifying and countering fake news. Arizona legal studies discussion paper 73(17-15). doi:10.2139/ssrn.3007971
  • Vilmer J, Escorcia A, Guillaume M, Herrera J (2018) Information manipulation: a challenge for our democracies. Report by the Policy Planning Staff (CAPS) of the Ministry for Europe and Foreign Affairs and the Institute for Strategic Research (IRSEM) of the Ministry for the Armed Forces
  • Vishwakarma DK, Varshney D, Yadav A. Detection and veracity analysis of fake news via scrapping and authenticating the web search. Cogn Syst Res. 2019;58:217–229. doi:10.1016/j.cogsys.2019.07.004
  • Vlachos A, Riedel S (2014) Fact checking: task definition and dataset construction. In: Proceedings of the ACL 2014 workshop on language technologies and computational social science, pp 18–22. doi:10.3115/v1/W14-2508
  • von der Weth C, Abdul A, Fan S, Kankanhalli M (2020) Helping users tackle algorithmic threats on social media: a multimedia research agenda. In: Proceedings of the 28th ACM international conference on multimedia, pp 4425–4434. doi:10.1145/3394171.3414692
  • Vosoughi S, Roy D, Aral S. The spread of true and false news online. Science. 2018;359(6380):1146–1151. doi:10.1126/science.aap9559
  • Vraga EK, Bode L. Using expert sources to correct health misinformation in social media. Sci Commun. 2017;39(5):621–645. doi:10.1177/1075547017731776
  • Waldman AE. The marketplace of fake news. Univ Pa J Const Law. 2017;20:845.
  • Wang WY (2017) “Liar, liar pants on fire”: a new benchmark dataset for fake news detection. arXiv preprint arXiv:1705.00648
  • Wang L, Wang Y, de Melo G, Weikum G. Understanding archetypes of fake news via fine-grained classification. Soc Netw Anal Min. 2019;9(1):1–17. doi:10.1007/s13278-019-0580-z
  • Wang Y, Han H, Ding Y, Wang X, Liao Q (2019b) Learning contextual features with multi-head self-attention for fake news detection. In: International conference on cognitive computing. Springer, Berlin, pp 132–142. doi:10.1007/978-3-030-23407-2_11
  • Wang Y, McKee M, Torbica A, Stuckler D. Systematic literature review on the spread of health-related misinformation on social media. Soc Sci Med. 2019;240:112552. doi:10.1016/j.socscimed.2019.112552
  • Wang Y, Yang W, Ma F, Xu J, Zhong B, Deng Q, Gao J (2020) Weak supervision for fake news detection via reinforcement learning. In: Proceedings of the AAAI conference on artificial intelligence, pp 516–523. doi:10.1609/aaai.v34i01.5389
  • Wardle C (2017) Fake news. It’s complicated. Online: https://medium.com/1st-draft/fake-news-its-complicated-d0f773766c79. Accessed 3 Oct 2020
  • Wardle C. The need for smarter definitions and practical, timely empirical research on information disorder. Digit J. 2018;6(8):951–963. doi:10.1080/21670811.2018.1502047
  • Wardle C, Derakhshan H. Information disorder: toward an interdisciplinary framework for research and policy making. Council Eur Rep. 2017;27:1–107.
  • Weiss AP, Alwan A, Garcia EP, Garcia J. Surveying fake news: assessing university faculty’s fragmented definition of fake news and its impact on teaching critical thinking. Int J Educ Integr. 2020;16(1):1–30. doi:10.1007/s40979-019-0049-x
  • Wu L, Liu H (2018) Tracing fake-news footprints: characterizing social media messages by how they propagate. In: Proceedings of the eleventh ACM international conference on web search and data mining, pp 637–645. doi:10.1145/3159652.3159677
  • Wu L, Rao Y (2020) Adaptive interaction fusion networks for fake news detection. arXiv preprint arXiv:2004.10009
  • Wu L, Morstatter F, Carley KM, Liu H. Misinformation in social media: definition, manipulation, and detection. ACM SIGKDD Explor Newsl. 2019;21(2):80–90. doi:10.1145/3373464.3373475
  • Wu Y, Ngai EW, Wu P, Wu C. Fake news on the internet: a literature review, synthesis and directions for future research. Internet Res. 2022. doi:10.1108/INTR-05-2021-0294
  • Xu K, Wang F, Wang H, Yang B. Detecting fake news over online social media via domain reputations and content understanding. Tsinghua Sci Technol. 2019;25(1):20–27. doi:10.26599/TST.2018.9010139
  • Yang F, Pentyala SK, Mohseni S, Du M, Yuan H, Linder R, Ragan ED, Ji S, Hu X (2019a) XFake: explainable fake news detector with visualizations. In: The world wide web conference, pp 3600–3604. doi:10.1145/3308558.3314119
  • Yang X, Li Y, Lyu S (2019b) Exposing deep fakes using inconsistent head poses. In: ICASSP 2019-2019 IEEE international conference on acoustics, speech and signal processing (ICASSP). IEEE, pp 8261–8265. doi:10.1109/ICASSP.2019.8683164
  • Yaqub W, Kakhidze O, Brockman ML, Memon N, Patil S (2020) Effects of credibility indicators on social media news sharing intent. In: Proceedings of the 2020 CHI conference on human factors in computing systems, pp 1–14. doi:10.1145/3313831.3376213
  • Yavary A, Sajedi H, Abadeh MS. Information verification in social networks based on user feedback and news agencies. Soc Netw Anal Min. 2020;10(1):1–8. doi:10.1007/s13278-019-0616-4
  • Yazdi KM, Yazdi AM, Khodayi S, Hou J, Zhou W, Saedy S. Improving fake news detection using k-means and support vector machine approaches. Int J Electron Commun Eng. 2020;14(2):38–42. doi:10.5281/zenodo.3669287
  • Zannettou S, Sirivianos M, Blackburn J, Kourtellis N. The web of false information: rumors, fake news, hoaxes, clickbait, and various other shenanigans. J Data Inf Qual (JDIQ). 2019;11(3):1–37. doi:10.1145/3309699
  • Zellers R, Holtzman A, Rashkin H, Bisk Y, Farhadi A, Roesner F, Choi Y (2019) Defending against neural fake news. arXiv preprint arXiv:1905.12616
  • Zhang X, Ghorbani AA. An overview of online fake news: characterization, detection, and discussion. Inf Process Manag. 2020;57(2):102025. doi:10.1016/j.ipm.2019.03.004
  • Zhang J, Dong B, Philip SY (2020) FakeDetector: effective fake news detection with deep diffusive neural network. In: 2020 IEEE 36th international conference on data engineering (ICDE). IEEE, pp 1826–1829. doi:10.1109/ICDE48307.2020.00180
  • Zhang Q, Lipani A, Liang S, Yilmaz E (2019a) Reply-aided detection of misinformation via Bayesian deep learning. In: The world wide web conference, pp 2333–2343. doi:10.1145/3308558.3313718
  • Zhang X, Karaman S, Chang SF (2019b) Detecting and simulating artifacts in GAN fake images. In: 2019 IEEE international workshop on information forensics and security (WIFS). IEEE, pp 1–6. doi:10.1109/WIFS47025.2019.9035107
  • Zhou X, Zafarani R. A survey of fake news: fundamental theories, detection methods, and opportunities. ACM Comput Surv (CSUR). 2020;53(5):1–40. doi:10.1145/3395046
  • Zubiaga A, Aker A, Bontcheva K, Liakata M, Procter R. Detection and resolution of rumours in social media: a survey. ACM Comput Surv (CSUR). 2018;51(2):1–36. doi:10.1145/3161603


Fake news, disinformation and misinformation in social media: a review

  • Original Article
  • Published: 09 February 2023
  • Volume 13, article number 30 (2023)


  • Esma Aïmeur, ORCID: orcid.org/0000-0001-7414-5454
  • Sabrine Amri, ORCID: orcid.org/0000-0002-5009-4573
  • Gilles Brassard, ORCID: orcid.org/0000-0002-4380-117X


Abstract

Online social networks (OSNs) are rapidly growing and have become a huge source of all kinds of global and local news for millions of users. However, OSNs are a double-edged sword. Despite the great advantages they offer, such as easy, unlimited communication and instant access to news and information, they also present many disadvantages and issues. One of their most challenging issues is the spread of fake news. Fake news identification is still a complex, unresolved problem. Furthermore, fake news detection on OSNs presents unique characteristics and challenges that make finding a solution anything but trivial. Meanwhile, artificial intelligence (AI) approaches are still incapable of overcoming this challenging problem. To make matters worse, AI techniques such as machine learning and deep learning are leveraged to deceive people by creating and disseminating fake content. Consequently, automatic fake news detection remains a huge challenge, primarily because the content is designed to closely resemble the truth, and it is often hard to determine its veracity by AI alone without additional information from third parties. This work provides a comprehensive and systematic review of fake news research as well as a fundamental review of existing approaches used to detect and prevent fake news from spreading via OSNs. We present the research problem and the existing challenges, discuss the state of the art of existing approaches for fake news detection, and point out future research directions for tackling the challenges.


1 Introduction

1.1 Context and motivation

Fake news, disinformation and misinformation have become such a scourge that Marcia McNutt, president of the National Academy of Sciences of the United States, said (making an implicit reference to the COVID-19 pandemic) in a joint statement of the National Academies Footnote 1 posted on July 15, 2021: “Misinformation is worse than an epidemic: It spreads at the speed of light throughout the globe and can prove deadly when it reinforces misplaced personal bias against all trustworthy evidence.” Indeed, although online social networks (OSNs), also called social media, have improved the ease with which real-time information is broadcast, their popularity and massive use have expanded the spread of fake news by increasing the speed and scope at which it can travel. Fake news may refer to the manipulation of information, carried out either through the production of false information or through the distortion of true information. However, this problem was not created by social media: long before it existed, the traditional media carried rumors that Elvis was not dead, Footnote 2 that the Earth was flat, Footnote 3 that aliens had invaded us, Footnote 4 and so on.

Social media has therefore become a powerful vehicle for fake news dissemination (Sharma et al. 2019; Shu et al. 2017). According to Pew Research Center’s analysis of news use across social media platforms, in 2020 about half of American adults got news on social media at least sometimes, Footnote 5 whereas in 2018 only one-fifth of them said they often got news via social media. Footnote 6

Hence, fake news can have a significant impact on society, as manipulated and false content is easier to generate and harder to detect (Kumar and Shah 2018) and as disinformation actors change their tactics (Kumar and Shah 2018; Micallef et al. 2020). In 2017, Snow predicted in the MIT Technology Review (Snow 2017) that most individuals in mature economies would consume more false than valid information by 2022.

Recent news about the COVID-19 pandemic has flooded the web, created panic in many countries, and has often been reported as fake. Footnote 7 For example, holding your breath for ten seconds to one minute is not a self-test for COVID-19 Footnote 8 (see Fig. 1). Similarly, online posts claiming to reveal various “cures” for COVID-19, such as eating boiled garlic or drinking chlorine dioxide (an industrial bleach), were verified Footnote 9 as fake and in some cases as dangerous, and will never cure the infection.

Social media has outperformed television as the major news source for young people in the UK and the USA. Footnote 10 Moreover, as it is easier to generate and disseminate news online than through traditional media or face to face, large volumes of fake news are produced online for many reasons (Shu et al. 2017). Furthermore, a previous study on the spread of online news on Twitter (Vosoughi et al. 2018) reported that false news spreads six times faster online than truthful content and that 70% of users could not distinguish real from fake news (Vosoughi et al. 2018), owing to the attraction of the novelty of the latter (Bovet and Makse 2019). That study determined that falsehood spreads significantly farther, faster, deeper and more broadly than the truth in all categories of information, and that the effects are more pronounced for false political news than for false news about terrorism, natural disasters, science, urban legends, or financial information (Vosoughi et al. 2018).

Over 1 million tweets were estimated to be related to fake news by the end of the 2016 US presidential election. Footnote 11 In 2017, a German government spokesman affirmed: “We are dealing with a phenomenon of a dimension that we have not seen before,” referring to an unprecedented spread of fake news on social networks. Footnote 12 Given the strength of this new phenomenon, fake news was chosen as the word of the year by the Macquarie Dictionary both in 2016 Footnote 13 and in 2018, Footnote 14 as well as by the Collins Dictionary in 2017. Footnote 15 and Footnote 16 Since 2020, the new term “infodemic” has been coined, reflecting widespread researchers’ concern (Gupta et al. 2022; Apuke and Omar 2021; Sharma et al. 2020; Hartley and Vu 2020; Micallef et al. 2020) about the proliferation of misinformation linked to the COVID-19 pandemic.

Figure 1: Fake news example about a self-test for COVID-19 (source: https://cdn.factcheck.org/UploadedFiles/Screenshot031120_false.jpg, last accessed 26-12-2022)

The Gartner Group’s top strategic predictions for 2018 and beyond included the need for IT leaders to quickly develop artificial intelligence (AI) algorithms to address counterfeit reality and fake news. Footnote 17 However, fake news identification is a complex issue. Snow (2017) questioned the ability of AI to win the war against fake news. Similarly, other researchers concurred that even the best AI for spotting fake news is still ineffective. Footnote 18 Besides, recent studies have shown that AI algorithms are less powerful at identifying fake news than at creating it (Paschen 2019). Consequently, automatic fake news detection remains a huge challenge, primarily because the content is designed to closely resemble the truth in order to deceive users, and as a result, it is often hard for AI alone to determine its veracity. It is therefore crucial to consider more effective approaches to solve the problem of fake news in social media.

1.2 Contribution

The fake news problem has been addressed by researchers from various perspectives related to different topics. These topics include, but are not restricted to, the following:

  • Social science studies, which investigate why and who falls for fake news (Altay et al. 2022; Batailler et al. 2022; Sterret et al. 2018; Badawy et al. 2019; Pennycook and Rand 2020; Weiss et al. 2020; Guadagno and Guttieri 2021), whom to trust and how perceptions of misinformation and disinformation relate to media trust and media consumption patterns (Hameleers et al. 2022), how fake news differs from personal lies (Chiu and Oh 2021; Escolà-Gascón 2021), how the law can regulate digital disinformation and how governments can regulate the values of social media companies that themselves regulate disinformation spread on their platforms (Marsden et al. 2020; Schuyler 2019; Vasu et al. 2018; Burshtein 2017; Waldman 2017; Alemanno 2018; Verstraete et al. 2017), and the challenges that disinformation poses to democracy (Jungherr and Schroeder 2021).
  • Behavioral intervention studies, which examine what literacy ideas mean in the age of dis-, mis- and malinformation (Carmi et al. 2020), investigate whether media literacy helps the identification of fake news (Jones-Jang et al. 2021), and attempt to improve people’s news literacy (Apuke et al. 2022; Dame Adjin-Tettey 2022; Hameleers 2022; Nagel 2022; Jones-Jang et al. 2021; Mihailidis and Viotty 2017; García et al. 2020) by encouraging people to pause to assess the credibility of headlines (Fazio 2020), by promoting civic online reasoning (McGrew 2020; McGrew et al. 2018) and critical thinking (Lutzke et al. 2019), and through evaluations of credibility indicators (Bhuiyan et al. 2020; Nygren et al. 2019; Shao et al. 2018a; Pennycook et al. 2020a, b; Clayton et al. 2020; Ozturk et al. 2015; Metzger et al. 2020; Sherman et al. 2020; Nekmat 2020; Brashier et al. 2021; Chung and Kim 2021; Lanius et al. 2021).
  • Social media-driven studies, which investigate the effect of signals (e.g., sources) on the detection and recognition of fake news (Vraga and Bode 2017; Jakesch et al. 2019; Shen et al. 2019; Avram et al. 2020; Hameleers et al. 2020; Dias et al. 2020; Nyhan et al. 2020; Bode and Vraga 2015; Tsang 2020; Vishwakarma et al. 2019; Yavary et al. 2020) and investigate fake and reliable news sources using complex network analysis based on search engine optimization metrics (Mazzeo and Rapisarda 2022).

The impacts of fake news have reached various areas and disciplines beyond online social networks and society (García et al. 2020), such as economics (Clarke et al. 2020; Kogan et al. 2019; Goldstein and Yang 2019), psychology (Roozenbeek et al. 2020a; Van der Linden and Roozenbeek 2020; Roozenbeek and van der Linden 2019), political science (Valenzuela et al. 2022; Bringula et al. 2022; Ricard and Medeiros 2020; Van der Linden et al. 2020; Allcott and Gentzkow 2017; Grinberg et al. 2019; Guess et al. 2019; Baptista and Gradim 2020), health science (Alonso-Galbán and Alemañy-Castilla 2022; Desai et al. 2022; Apuke and Omar 2021; Escolà-Gascón 2021; Wang et al. 2019c; Hartley and Vu 2020; Micallef et al. 2020; Pennycook et al. 2020b; Sharma et al. 2020; Roozenbeek et al. 2020b), and environmental science (e.g., climate change) (Treen et al. 2020; Lutzke et al. 2019; Lewandowsky 2020; Maertens et al. 2020).

Interesting research has been carried out to review and study the fake news issue in online social networks. Some works focus not only on fake news but also distinguish between fake news and rumor (Bondielli and Marcelloni 2019; Meel and Vishwakarma 2020), while others tackle the whole problem, from characterization to processing techniques (Shu et al. 2017; Guo et al. 2020; Zhou and Zafarani 2020). However, they mostly study approaches from a machine learning perspective (Bondielli and Marcelloni 2019), a data mining perspective (Shu et al. 2017), a crowd intelligence perspective (Guo et al. 2020), or a knowledge-based perspective (Zhou and Zafarani 2020). Furthermore, most of these studies ignore at least one of the mentioned perspectives, and in many cases they do not cover other existing detection approaches, such as those based on blockchain and fact-checking, or analyses of the metrics used for search engine optimization (Mazzeo and Rapisarda 2022). In our work, by contrast, and to the best of our knowledge, we cover all the approaches used for fake news detection. Indeed, we investigate the proposed solutions from broader perspectives (i.e., the detection techniques that are used, as well as the different aspects and types of information used).

In this paper, we are therefore highly motivated by the following facts. First, fake news detection on social media is still at an early stage of development, and many challenging issues remain that require deeper investigation. Hence, it is necessary to discuss potential research directions that can improve fake news detection and mitigation tasks. Second, the dynamic nature of fake news propagation through social networks further complicates matters (Sharma et al. 2019). False information can easily reach and impact a large number of users in a short time (Friggeri et al. 2014; Qian et al. 2018). Moreover, fact-checking organizations cannot keep up with the dynamics of propagation, as they require human verification, which can hold back a timely and cost-effective response (Kim et al. 2018; Ruchansky et al. 2017; Shu et al. 2018a).

Our work focuses primarily on understanding the “fake news” problem, its related challenges and root causes, and reviewing automatic fake news detection and mitigation methods in online social networks as addressed by researchers. The main contributions that differentiate us from other works are summarized below:

  • We present the general context from which the fake news problem emerged (i.e., online deception).
  • We review existing definitions of fake news, identify the terms and features most commonly used to define fake news, and categorize related works accordingly.
  • We propose a fake news typology classification based on the various categorizations of fake news reported in the literature.
  • We point out the most challenging factors preventing researchers from proposing highly effective solutions for automatic fake news detection in social media.
  • We highlight and classify representative studies in the domain of automatic fake news detection and mitigation on online social networks, including the key methods and techniques used to generate detection models.
  • We discuss the key shortcomings that may inhibit the effectiveness of the proposed fake news detection methods in online social networks.
  • We provide recommendations that can help address these shortcomings and improve the quality of research in this domain.

The rest of this article is organized as follows. We explain the methodology with which the studied references were collected and selected in Sect. 2. We introduce the online deception problem in Sect. 3. We highlight the modern-day problem of fake news in Sect. 4, followed by the challenges facing fake news detection and mitigation tasks in Sect. 5. We provide a comprehensive literature review of the most relevant scholarly works on fake news detection in Sect. 6. We provide a critical discussion and recommendations that may fill some of the gaps we have identified, as well as a classification of the reviewed automatic fake news detection approaches, in Sect. 7. Finally, we provide a conclusion and propose some future directions in Sect. 8.

2 Review methodology

This section introduces the systematic review methodology on which we relied to perform our study. We start with the formulation of the research questions, which allowed us to select the relevant research literature. Then, we provide the different sources of information together with the search and inclusion/exclusion criteria we used to select the final set of papers.

2.1 Research questions formulation

The research scope, research questions, and inclusion/exclusion criteria were established following an initial evaluation of the literature, and the following research questions were formulated and addressed.

RQ1: What is fake news in social media, how is it defined in the literature, what are its related concepts, and what are its different types?

RQ2: What are the existing challenges and issues related to fake news?

RQ3: What are the available techniques used to perform fake news detection in social media?

2.2 Sources of information

We searched broadly for journal and conference research articles, books, and magazines as sources from which to extract relevant articles. We used the main scientific databases and digital libraries in our search, such as Google Scholar, Footnote 19 IEEE Xplore, Footnote 20 Springer Link, Footnote 21 ScienceDirect, Footnote 22 Scopus, Footnote 23 and the ACM Digital Library. Footnote 24 We also screened most of the related high-profile conferences, such as WWW, SIGKDD, VLDB and ICDE, to identify recent work.

2.3 Search criteria

We focused our research on a period of ten years, but we made sure that about two-thirds of the research papers we considered were published in or after 2019. Additionally, we defined a set of keywords with which to search the above-mentioned scientific databases, since we concentrated on reviewing the current state of the art in addition to the challenges and future directions. The set of keywords includes the following terms: fake news, disinformation, misinformation, information disorder, social media, detection techniques, detection methods, survey, literature review.
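To make this step concrete, the following minimal Python sketch shows how such a keyword set can be combined into boolean search queries. The term lists and the query grammar below are illustrative assumptions only; the actual queries posed to each database are those listed in Table 1, and every database has its own query syntax.

```python
from itertools import product

# Illustrative term groups drawn from the keyword set above (Sect. 2.3);
# the grouping itself is an assumption made for this sketch.
TOPIC_TERMS = ["fake news", "disinformation", "misinformation", "information disorder"]
CONTEXT_TERMS = ["social media"]
FOCUS_TERMS = ["detection techniques", "detection methods", "survey", "literature review"]

def build_queries():
    """Return one boolean search string per topic/context/focus combination."""
    return [
        f'"{topic}" AND "{context}" AND "{focus}"'
        for topic, context, focus in product(TOPIC_TERMS, CONTEXT_TERMS, FOCUS_TERMS)
    ]

if __name__ == "__main__":
    for query in build_queries():
        print(query)  # e.g., "fake news" AND "social media" AND "survey"
```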

2.4 Study selection, exclusion and inclusion criteria

To retrieve relevant research articles, based on our sources of information and search criteria, a systematic keyword-based search was carried out by posing different search queries, as shown in Table  1 .

This search yielded a primary list of articles. To this initial list of studies, we applied the set of inclusion/exclusion criteria presented in Table 2 in order to select the appropriate research papers; these principles determine whether a study should be included or not.
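As an illustration of this screening step, the sketch below applies simple inclusion/exclusion checks to a candidate study. The Study fields, the publication window, and the topic keywords are assumptions made for illustration; the authoritative criteria are those listed in Table 2.

```python
from dataclasses import dataclass

@dataclass
class Study:
    title: str
    year: int
    language: str
    peer_reviewed: bool
    abstract: str

# Hypothetical topic keywords; the real screening used the criteria of Table 2.
KEYWORDS = ("fake news", "disinformation", "misinformation")

def include(study: Study, window=(2013, 2022)) -> bool:
    """Apply illustrative inclusion/exclusion screening to one candidate study."""
    in_window = window[0] <= study.year <= window[1]
    on_topic = any(k in study.abstract.lower() for k in KEYWORDS)
    return in_window and study.language == "English" and study.peer_reviewed and on_topic

if __name__ == "__main__":
    candidate = Study("Detecting fake news on Twitter", 2020, "English", True,
                      "We study fake news detection on social media ...")
    print(include(candidate))  # True: within window, English, peer-reviewed, on topic
```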

After reading the abstracts, we excluded the articles that did not meet our criteria and kept the most important research to help us understand the field. We then reviewed the selected articles in full and found only 61 research papers that discuss the definition of the term fake news and its related concepts (see Table 4). We used the remaining papers to understand the field, reveal the challenges, review the detection techniques, and discuss future directions.

3 A brief introduction to online deception

The Cambridge Online Dictionary defines deception as “the act of hiding the truth, especially to get an advantage.” Deception relies on people’s trust, doubt and strong emotions, which may prevent them from thinking and acting clearly (Aïmeur et al. 2018). In previous work (Aïmeur et al. 2018), we also define it as the process that undermines the ability to consciously make decisions and take convenient actions, in accordance with personal values and boundaries. In other words, deception gets people to do things they would not otherwise do. In the context of online deception, several factors need to be considered: the deceiver, the purpose or aim of the deception, the social media service, the deception technique and the potential target (Aïmeur et al. 2018; Hage et al. 2021).

Researchers are working on developing new ways to protect users and prevent online deception (Aïmeur et al. 2018). Due to the sophistication of attacks, this is a complex task: malicious attackers are using ever more sophisticated tools and strategies to deceive users. Furthermore, the way information is organized and exchanged in social media may expose OSN users to many risks (Aïmeur et al. 2013).

In fact, this field is one of the recent research areas that needs the collaborative efforts of multiple disciplines such as psychology, sociology, journalism and computer science, as well as cyber-security and digital marketing (which are not yet well explored in the field of dis-, mis- and malinformation but are relevant for future research). Moreover, Ismailov et al. (2020) analyzed the main causes that could be responsible for the efficiency gap between laboratory results and real-world implementations.

Reviewing the state of the art in online deception is beyond the scope of this paper. However, we think it is crucial to note that fake news, misinformation and disinformation are indeed part of the larger landscape of online deception (Hage et al. 2021).

4 Fake news, the modern-day problem

Fake news has existed for a very long time, well before its wide circulation was facilitated by the invention of the printing press. Footnote 25 For instance, Socrates was condemned to death nearly twenty-five hundred years ago on the basis of the fake news that he was guilty of impiety against the pantheon of Athens and of corrupting the youth. Footnote 26 A Google Trends analysis of the term “fake news” reveals an explosion in popularity around the time of the 2016 US presidential election. Footnote 27 Fake news detection is a problem that has recently been addressed by numerous organizations, including the European Union Footnote 28 and NATO. Footnote 29

In this section, we first give an overview of the fake news definitions provided in the literature, identify the terms and features used in these definitions, and classify the definitions accordingly. We then propose a fake news typology based on distinct categorizations, and we define and compare the most cited forms of one specific category of fake news (i.e., the intent-based category).

4.1 Definitions of fake news

“Fake news” is defined in the Collins English Dictionary as false and often sensational information disseminated under the guise of news reporting, Footnote 30 yet the term has evolved over time and has become synonymous with the spread of false information (Cooke 2017 ).

The first definition of the term fake news was provided by Allcott and Gentzkow ( 2017 ): news articles that are intentionally and verifiably false and could mislead readers. Other definitions were later provided in the literature, and they all agree that fake news is not authentic (i.e., it is verifiably false and non-factual). However, they disagree on the inclusion or exclusion of related concepts such as satire , rumors , conspiracy theories , misinformation and hoaxes . More recently, Nakov ( 2020 ) reported that the term fake news has started to mean different things to different people, and for some politicians it even means "news that I do not like."

Hence, there is still no agreed definition of the term "fake news." Moreover, many different terms and concepts are used in the literature to refer to this phenomenon: fake news (Van der Linden et al. 2020; Molina et al. 2021; Abu Arqoub et al. 2022; Allen et al. 2020; Allcott and Gentzkow 2017; Shu et al. 2017; Sharma et al. 2019; Zhou and Zafarani 2020; Zhang and Ghorbani 2020; Conroy et al. 2015; Celliers and Hattingh 2020; Nakov 2020; Shu et al. 2020c; Jin et al. 2016; Rubin et al. 2016; Balmas 2014; Brewer et al. 2013; Egelhofer and Lecheler 2019; Mustafaraj and Metaxas 2017; Klein and Wueller 2017; Potthast et al. 2017; Lazer et al. 2018; Weiss et al. 2020; Tandoc Jr et al. 2021; Guadagno and Guttieri 2021), disinformation (Kapantai et al. 2021; Shu et al. 2020a, c; Kumar et al. 2016; Bhattacharjee et al. 2020; Marsden et al. 2020; Jungherr and Schroeder 2021; Starbird et al. 2019; Ireton and Posetti 2018), misinformation (Wu et al. 2019; Shu et al. 2020c; Shao et al. 2016, 2018b; Pennycook and Rand 2019; Micallef et al. 2020), malinformation (Dame Adjin-Tettey 2022; Carmi et al. 2020; Shu et al. 2020c), false information (Kumar and Shah 2018; Guo et al. 2020; Habib et al. 2019), information disorder (Shu et al. 2020c; Wardle and Derakhshan 2017; Wardle 2018; Derakhshan and Wardle 2017), information warfare (Guadagno and Guttieri 2021) and information pollution (Meel and Vishwakarma 2020).

There is also a remarkable amount of disagreement over the classification of the term fake news in the research literature, as well as in policy (de Cock Buning 2018; ERGA 2018, 2021). Some consider fake news a type of misinformation (Allen et al. 2020; Singh et al. 2021; Ha et al. 2021; Pennycook and Rand 2019; Shao et al. 2018b; Di Domenico et al. 2021; Sharma et al. 2019; Celliers and Hattingh 2020; Klein and Wueller 2017; Potthast et al. 2017; Islam et al. 2020), others consider it a type of disinformation (de Cock Buning 2018; Bringula et al. 2022; Baptista and Gradim 2022; Tsang 2020; Tandoc Jr et al. 2021; Bastick 2021; Khan et al. 2019; Shu et al. 2017; Nakov 2020; Shu et al. 2020c; Egelhofer and Lecheler 2019), while others associate the term with both disinformation and misinformation (Wu et al. 2022; Dame Adjin-Tettey 2022; Hameleers et al. 2022; Carmi et al. 2020; Allcott and Gentzkow 2017; Zhang and Ghorbani 2020; Potthast et al. 2017; Weiss et al. 2020; Tandoc Jr et al. 2021; Guadagno and Guttieri 2021). Still others prefer to differentiate fake news from both terms (ERGA 2018; Molina et al. 2021; ERGA 2021; Zhou and Zafarani 2020; Jin et al. 2016; Rubin et al. 2016; Balmas 2014; Brewer et al. 2013).

The existing terms can be separated into two groups. The first group contains the general terms (information disorder, false information and fake news), each of which includes a subset of terms from the second group. The second group contains the elementary terms (misinformation, disinformation and malinformation). The literature agrees on the definitions of the elementary terms, but there is still no agreed-upon definition of the general terms. In Fig.  2 , we model the relationship between the terms most used in the literature.

Fig. 2: Modeling of the relationship between terms related to fake news

The terms most used in the literature to refer to, categorize and classify fake news can be summarized and defined as shown in Table  3 , in which we capture the similarities and differences between the terms based on two common key features: the intent and the authenticity of the news content. The intent feature refers to the intention behind the term that is used (i.e., whether or not the purpose is to mislead or cause harm), whereas the authenticity feature refers to its factual aspect (i.e., whether the content is verifiably false or not; we label the content as genuine in the latter case). Some of these terms are explicitly used to refer to fake news (i.e., disinformation, misinformation and false information), while others are not (i.e., malinformation). In the comparison table, the empty dash (–) cell denotes that the classification does not apply.
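To make the two-feature comparison concrete, the taxonomy of the elementary terms can be expressed as a small data structure. The following Python sketch is purely illustrative and reflects our reading of the definitions above; the authoritative cell values are those of Table 3.

from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class TermProfile:
    """Position of a term along the two key features of Table 3."""
    intent_to_mislead: Optional[bool]  # None encodes the "-" (does not apply) cell
    verifiably_false: Optional[bool]   # False means the content is genuine

# Illustrative encoding of the three elementary terms:
TAXONOMY = {
    "disinformation": TermProfile(intent_to_mislead=True, verifiably_false=True),
    "misinformation": TermProfile(intent_to_mislead=False, verifiably_false=True),
    "malinformation": TermProfile(intent_to_mislead=True, verifiably_false=False),
}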

In Fig.  3 , we identify the different features used in the literature to define fake news (i.e., intent, authenticity and knowledge). Some definitions are based on two key features, authenticity and intent (i.e., news articles that are intentionally and verifiably false and could mislead readers), while others are based on either authenticity or intent alone. Still other researchers categorize false information on the web and social media based on its intent and knowledge (i.e., when there is a single ground truth). In Table  4 , we classify the existing fake news definitions based on the term and the features used. In this classification, the references in the cells refer to the research studies in which a fake news definition was provided, while the empty dash (–) cells denote that the classification does not apply.

Fig. 3: The features used for fake news definition

4.2 Fake news typology

Various categorizations of fake news have been provided in the literature. We distinguish two major categories of fake news based on the perspective studied (i.e., intention or content), as shown in Fig.  4 . Note that our proposed fake news typology is not about detection methods, and its categories are not mutually exclusive: a given instance of fake news can be described from both perspectives (i.e., intention and content) at the same time. For instance, satire (intent-based fake news) can contain text and/or multimedia content types of data (e.g., headline, body, image, video), i.e., content-based fake news.

Fig. 4: Fake news typology

Most researchers classify fake news based on intent (Collins et al. 2020; Bondielli and Marcelloni 2019; Zannettou et al. 2019; Kumar et al. 2016; Wardle 2017; Shu et al. 2017; Kumar and Shah 2018) (see Sect.  4.2.2 ). However, other researchers (Parikh and Atrey 2018; Fraga-Lamas and Fernández-Caramés 2020; Hasan and Salah 2019; Masciari et al. 2020; Bakdash et al. 2018; Elhadad et al. 2019; Yang et al. 2019b) focus on the content, categorizing types of fake news by distinguishing the different formats and content types of data in the news (e.g., text and/or multimedia).

Recently, another classification was proposed by Zhang and Ghorbani ( 2020 ). It is based on the combination of content and intent to categorize fake news. They distinguish physical news content and non-physical news content from fake news. Physical content consists of the carriers and format of the news, and non-physical content consists of the opinions, emotions, attitudes and sentiments that the news creators want to express.

4.2.1 Content-based fake news category

According to researchers of this category (Parikh and Atrey 2018; Fraga-Lamas and Fernández-Caramés 2020; Hasan and Salah 2019; Masciari et al. 2020; Bakdash et al. 2018; Elhadad et al. 2019; Yang et al. 2019b), forms of fake news may include false text, such as hyperlinks or embedded content, and multimedia, such as false videos (Demuyakor and Opata 2022), images (Masciari et al. 2020; Shen et al. 2019) and audio (Demuyakor and Opata 2022). Moreover, we can also find multimodal content (Shu et al. 2020a), that is, fake news articles and posts composed of multiple types of data combined together, for example, a fabricated image along with a related text (Shu et al. 2020a). Examples of such forms of fake news include deepfake videos (Yang et al. 2019b) and GAN-generated fake images (Zhang et al. 2019b), i.e., artificial intelligence-based, machine-generated fake content that is hard for unsophisticated social network users to identify.

The effects of these forms of fake news content on credibility assessment vary, as do their effects on sharing intentions, which in turn influence the spread of fake news on OSNs. For instance, people with little knowledge about an issue, compared to those who are strongly concerned about it, are easier to convince that misleading or fake news is real, especially when it is shared via a video modality rather than text or audio (Demuyakor and Opata 2022).

4.2.2 Intent-based fake news category

The forms of fake news most often mentioned and discussed by researchers in this category include, but are not restricted to, clickbait , hoaxes , rumors , satire , propaganda , framing and conspiracy theories . In the following subsections, we explain these types of fake news as they are defined in the literature and briefly compare them, as depicted in Table  5 . The comparison of these most cited forms of intent-based fake news is based on what we consider the most common criteria mentioned by researchers.

Clickbait refers to misleading headlines and thumbnails of content on the web (Zannettou et al. 2019) that tend to accompany fake stories with catchy headlines aimed at enticing the reader to click on a link (Collins et al. 2020). This type is considered the least severe form of false information, because if a user reads/views the whole content, it is possible to tell whether the headline and/or the thumbnail was misleading (Zannettou et al. 2019). The goal behind using clickbait is to increase traffic to a website (Zannettou et al. 2019).

A hoax is a false (Zubiaga et al. 2018) or inaccurate (Zannettou et al. 2019), intentionally fabricated (Collins et al. 2020) news story used to masquerade the truth (Zubiaga et al. 2018) and presented as factual (Zannettou et al. 2019) to deceive the public or audiences (Collins et al. 2020). This category is also known as half-truth or factoid stories (Zannettou et al. 2019). Popular examples of hoaxes are stories that report the false death of celebrities (Zannettou et al. 2019) and public figures (Collins et al. 2020). Recently, hoaxes about COVID-19 have been circulating through social media.

The term rumor refers to ambiguous or never confirmed claims (Zannettou et al. 2019) that are disseminated with a lack of evidence to support them (Sharma et al. 2019). This kind of information is widely propagated on OSNs (Zannettou et al. 2019). However, rumors originate from unverified sources and are not necessarily false: they may turn out to be true, or remain unresolved (Zubiaga et al. 2018).

Satire refers to stories that contain a lot of irony and humor (Zannettou et al. 2019). It presents stories as news that might be factually incorrect, but the intent is not to deceive but rather to call out, ridicule or expose behavior that is shameful, corrupt or otherwise "bad" (Golbeck et al. 2018). This is done with a fabricated story or by exaggerating the truth reported in mainstream media in the form of comedy (Collins et al. 2020). The intent behind satire appears legitimate, yet many authors (such as Wardle (Wardle 2017 )) include satire as a type of fake news: although there is no intention to cause harm, it has the potential to mislead or fool people.

Golbeck et al. ( 2018 ) also mention that there is a spectrum from fake to satirical news, which they found to be exploited by many fake news sites. These sites used disclaimers at the bottom of their webpages to suggest they were "satirical", even when there was nothing satirical about their articles, in order to protect themselves from accusations of being fake. What distinguishes the satirical form of fake news is that the authors or the host present themselves as comedians or entertainers rather than journalists informing the public (Collins et al. 2020). However, many audiences believe the information passed in this satirical form, because the comedian usually takes news from mainstream media and frames it to suit their program (Collins et al. 2020).

Propaganda refers to news stories created by political entities to mislead people. It is a special instance of fabricated stories that aim to harm the interests of a particular party and typically has a political context (Zannettou et al. 2019). Propaganda was widely used during both World Wars (Collins et al. 2020) and during the Cold War (Zannettou et al. 2019). It is a consequential type of false information, as it can change the course of human history (e.g., by changing the outcome of an election) (Zannettou et al. 2019). States are the main actors of propaganda. Recently, propaganda has been used by politicians and media organizations to support a certain position or view (Collins et al. 2020). Online astroturfing is one example of a tool used for the dissemination of propaganda. It is a covert manipulation of public opinion (Peng et al. 2017) that aims to make it seem that many people share the same opinion about something. Depending on the domain of interest it affects, online astroturfing can be divided into political astroturfing, corporate astroturfing and astroturfing in e-commerce or online services (Mahbub et al. 2019). Propaganda-type fake news can be debunked with manual fact-based detection models, such as expert-based fact-checkers (Collins et al. 2020).

Framing refers to employing some aspect of reality to make content more visible while the truth is concealed (Collins et al. 2020), in order to deceive and misguide readers. People understand certain concepts based on the way they are coined and framed. An example of framing was provided by Collins et al. ( 2020 ): suppose a leader X says "I will neutralize my opponent", simply meaning that he will beat his opponent in a given election. Such a statement could be framed as "leader X threatens to kill Y", which provides a total misrepresentation of the original meaning.

Conspiracy Theories

Conspiracy theories refer to the belief that an event is the result of secret plots generated by powerful conspirators. Conspiracy belief refers to people’s adoption and belief of conspiracy theories, and it is associated with psychological, political and social factors (Douglas et al. 2019 ). Conspiracy theories are widespread in contemporary democracies (Sutton and Douglas 2020 ), and they have major consequences. For instance, lately and during the COVID-19 pandemic, conspiracy theories have been discussed from a public health perspective (Meese et al. 2020 ; Allington et al. 2020 ; Freeman et al. 2020 ).

4.2.3 Comparison between the most popular intent-based types of fake news

Following a review of the most popular intent-based types of fake news, we compare them in Table  5 based on the most common criteria mentioned by researchers in their definitions, as listed below:

the intent behind the news, which refers to whether a given news type was mainly created to intentionally deceive people or not (e.g., humor, irony, entertainment, etc.);

the way that the news propagates through OSN, which determines the nature of the propagation of each type of fake news and this can be either fast or slow propagation;

the severity of the impact of the news on OSN users, which refers to whether the public has been highly impacted by the given type of fake news; the mentioned impact of each fake news type is mainly the proportion of the negative impact;

and the goal behind disseminating the news, which can be to gain popularity for a particular entity (e.g., a political party), to make profit (e.g., a lucrative business), or other goals: humor and irony in the case of satire; spreading panic or anger and manipulating the public in the case of hoaxes; making up stories about a particular person or entity in the case of rumors; and misguiding readers in the case of framing.

Note, however, that the comparison provided in Table  5 is deduced from the studied research papers; it reflects our point of view and is not based on empirical data.

We suspect that the most dangerous types of fake news are those with a high intention to deceive the public, fast propagation through social media, a highly negative impact on OSN users, and complicated hidden goals and agendas. The other types of fake news are less dangerous, but they should not be ignored.

Moreover, it is important to highlight that these types of fake news may overlap, so a given piece of false information may fall within multiple categories (Zannettou et al. 2019). Zannettou et al. ( 2019 ) provide two examples that illustrate possible overlaps: (1) a rumor may also use clickbait techniques to increase the audience that will read the story, and (2) a propaganda story may be a special instance of a framing story.

5 Challenges related to fake news detection and mitigation

To alleviate fake news and its threats, it is crucial to first identify and understand the factors involved that continue to challenge researchers. Thus, the main question is to explore and investigate the factors that make it easier to fall for manipulated information. Despite the tremendous progress made in alleviating some of the challenges in fake news detection (Sharma et al. 2019 ; Zhou and Zafarani 2020 ; Zhang and Ghorbani 2020 ; Shu et al. 2020a ), much more work needs to be accomplished to address the problem effectively.

In this section, we discuss several open issues that make fake news detection in social media a challenging problem. These issues can be summarized as follows: content-based issues (i.e., deceptive content that resembles the truth very closely), contextual issues (i.e., lack of user awareness, social bots as spreaders of fake content, and the dynamic nature of OSNs, which leads to fast propagation), as well as the issue of existing datasets (i.e., there is still no one-size-fits-all benchmark dataset for fake news detection). These various aspects have been shown (Shu et al. 2017) to have a great impact on the accuracy of fake news detection approaches.

5.1 Content-based issues: deceptive content

Automatic fake news detection remains a huge challenge, primarily because the content is designed to closely resemble the truth. Besides, most deceivers choose their words carefully and use their language strategically to avoid being caught. Therefore, it is often hard for AI to determine the veracity of such content without relying on additional information from third parties such as fact-checkers.

Abdullah-All-Tanvir et al. ( 2020 ) reported that fake news tends to have more complicated stories and hardly ever makes any references. It is also more likely to contain a greater number of words that express negative emotions. This makes the content so sophisticated that manually assessing its credibility becomes practically impossible for humans, which makes detecting fake news on social media quite challenging. Moreover, fake news appears in multiple types and forms, which makes it hard to define a single global solution able to capture and deal with all the disseminated content. Consequently, detecting false information is not a straightforward task due to its various types and forms (Zannettou et al. 2019).

5.2 Contextual issues

Contextual issues are challenges that, we suspect, are not related to the content of the news but rather are inferred from the context of the online news post: humans as the weakest factor due to lack of awareness, social bot spreaders, and the dynamic nature of online social platforms leading to the fast propagation of fake news.

5.2.1 Humans are the weakest factor due to the lack of awareness

Recent statistics Footnote 31 show that the percentage of unintentional fake news spreaders (people who share fake news without the intention to mislead) on social media is five times higher than that of intentional spreaders. Moreover, other recent statistics Footnote 32 show that the percentage of people who were confident about their ability to discern fact from fiction is ten times higher than the percentage of those who were not confident about the truthfulness of what they were sharing. From this, we can deduce a lack of human awareness about the rise of fake news.

Public susceptibility and lack of user awareness (Sharma et al. 2019 ) have always been the most challenging problem when dealing with fake news and misinformation. This is a complex issue because many people believe almost everything on the Internet and the ones who are new to digital technology or have less expertise may be easily fooled (Edgerly et al. 2020 ).

Moreover, it has been widely shown (Metzger et al. 2020; Edgerly et al. 2020) that people are often motivated to support and accept information that matches their preexisting viewpoints and beliefs, and to reject information that does not. In this vein, Shu et al. ( 2017 ) illustrate an interesting correlation between fake news spread and psychological and cognitive theories. They further suggest that humans are more likely to believe information that confirms their existing views and ideological beliefs. Consequently, they deduce that humans are naturally not very good at differentiating real information from fake information.

Recent research by Giachanou et al. ( 2020 ) studies the role of personality and linguistic patterns in discriminating between fake news spreaders and fact-checkers. They classify a user as a potential fact-checker or a potential fake news spreader based on features that represent users’ personality traits and linguistic patterns used in their tweets. They show that leveraging personality traits and linguistic patterns can improve the performance in differentiating between checkers and spreaders.

Furthermore, several researchers studied the prevalence of fake news on social networks during (Allcott and Gentzkow 2017 ; Grinberg et al. 2019 ; Guess et al. 2019 ; Baptista and Gradim 2020 ) and after (Garrett and Bond 2021 ) the 2016 US presidential election and found that individuals most likely to engage with fake news sources were generally conservative-leaning, older, and highly engaged with political news.

Metzger et al. ( 2020 ) examine how individuals evaluate the credibility of biased news sources and stories. They investigate the role of both cognitive dissonance and credibility perceptions in selective exposure to attitude-consistent news information. They found that online news consumers tend to perceive attitude-consistent news stories as more accurate and more credible than attitude-inconsistent stories.

Similarly, Edgerly et al. ( 2020 ) explore the impact of news headlines on the audience’s intent to verify whether given news is true or false. They concluded that participants exhibit higher intent to verify the news only when they believe the headline to be true, which is predicted by perceived congruence with preexisting ideological tendencies.

Luo et al. ( 2022 ) evaluate the effects of endorsement cues in social media on message credibility and detection accuracy. Results showed that headlines associated with a high number of likes increased credibility, thereby enhancing detection accuracy for real news but undermining accuracy for fake news. Consequently, they highlight the urgency of empowering individuals to assess both news veracity and endorsement cues appropriately on social media.

Moreover, misinformed people are a greater problem than uninformed people (Kuklinski et al. 2000), because the former hold inaccurate opinions (which may concern politics, climate change or medicine) that are harder to correct. Indeed, people find it difficult to update their misinformation-based beliefs even after these have been proved false (Flynn et al. 2017). Moreover, even when a person has accepted the corrected information, the original belief may still affect their opinion (Nyhan and Reifler 2015).

Falling for disinformation may also be explained by a lack of critical thinking and of the need for evidence that supports information (Vilmer et al. 2018; Badawy et al. 2019). However, it is also possible that people choose misinformation because they engage in directionally motivated reasoning (Badawy et al. 2019; Flynn et al. 2017). Online users are generally vulnerable and tend to perceive social media as reliable, as reported by Abdullah-All-Tanvir et al. ( 2019 ), who propose to automate fake news detection.

It is worth noting that, in addition to bots being responsible for the outpouring of most misrepresentations, specific individuals also contribute a large share of this issue (Abdullah-All-Tanvir et al. 2019). Furthermore, Vosoughi et al. ( 2018 ) found that, contrary to conventional wisdom, robots accelerated the spread of real and fake news at the same rate, implying that fake news spreads more than the truth because humans, not robots, are more likely to spread it.

In this respect, verified users and those with numerous followers were not necessarily the ones responsible for spreading the corrupted posts (Abdullah-All-Tanvir et al. 2019).

Viral fake news can wreak havoc on our society. Therefore, to mitigate the negative impact of fake news, it is important to analyze the factors that lead people to fall for misinformation and to further understand why people spread fake news (Cheng et al. 2020). Measuring the accuracy, credibility, veracity and validity of news content can also be a key countermeasure to consider.

5.2.2 Social bot spreaders

Several authors (Shu et al. 2018b, 2017; Shi et al. 2019; Bessi and Ferrara 2016; Shao et al. 2018a) have also shown that fake news is likely to be created and spread by non-human accounts with similar attributes and structure in the network, such as social bots (Ferrara et al. 2016). Bots (short for software robots) have existed since the early days of computers. A social bot is a computer algorithm that automatically produces content and interacts with humans on social media, trying to emulate and possibly alter their behavior (Ferrara et al. 2016). Although they are designed to provide a useful service, they can be harmful, for example when they contribute to the spread of unverified information or rumors (Ferrara et al. 2016). However, it is important to note that bots are simply tools created and maintained by humans for specific hidden agendas.

Social bots tend to connect with legitimate users instead of other bots. They try to act like a human with fewer words and fewer followers on social media. This contributes to the forwarding of fake news (Jiang et al. 2019 ). Moreover, there is a difference between bot-generated and human-written clickbait (Le et al. 2019 ).

Many researchers have addressed ways of identifying and analyzing possible sources of fake news spread in social media. Recent research by Shu et al. ( 2020a ) describes two strategies that social bots use to spread low-credibility content. First, they amplify interactions with the content as soon as it is created, to make it look legitimate and to facilitate its spread across social networks. Second, they try to increase public exposure to the created content, and thus boost its perceived credibility, by targeting influential users who are more likely to believe disinformation, in the hope of getting them to "repost" the fabricated content. They further discuss the social bot detection systems taxonomy proposed by Ferrara et al. ( 2016 ), which divides bot detection methods into three classes: (1) graph-based, (2) crowdsourcing and (3) feature-based social bot detection methods.

Similarly, Shao et al. ( 2018a ) examine social bots and how they promote the spread of misinformation through millions of Twitter posts during and following the 2016 US presidential campaign. They found that social bots played a disproportionate role in spreading articles from low-credibility sources by amplifying such content in the early spreading moments and targeting users with many followers through replies and mentions to expose them to this content and induce them to share it.

Ismailov et al. ( 2020 ) assert that the techniques used to detect bots depend on the social platform and the objective. They note that a malicious bot designed to make friends with as many accounts as possible will require a different detection approach than a bot designed to repeatedly post links to malicious websites. Therefore, they identify two models for detecting malicious accounts, each using a different set of features. Social context models achieve detection by examining features related to an account’s social presence including features such as relationships to other accounts, similarities to other users’ behaviors, and a variety of graph-based features. User behavior models primarily focus on features related to an individual user’s behavior, such as frequency of activities (e.g., number of tweets or posts per time interval), patterns of activity and clickstream sequences.
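As an illustration of a user behavior model, the sketch below computes a few activity-frequency features of the kind mentioned above. It is a minimal, hypothetical example (the feature choices are ours, not those of Ismailov et al.); in practice, one such feature row per account would be fed to a classifier such as logistic regression.

import numpy as np

def behavior_features(timestamps, texts):
    """Activity-frequency features for one account; timestamps in seconds, sorted."""
    days = max((timestamps[-1] - timestamps[0]) / 86400.0, 1.0)
    gaps = np.diff(timestamps) if len(timestamps) > 1 else np.array([0.0])
    return [
        len(timestamps) / days,                            # posts per day
        float(gaps.std()),                                 # bots often post at very regular intervals
        float(np.mean([t.count("http") for t in texts])),  # link density per post
    ]

# Toy demo: a human-like account vs. a metronomic, link-heavy one.
print(behavior_features([0, 4000, 90000, 200000],
                        ["lunch!", "great game", "so tired", "news http://a.b"]))
print(behavior_features([0, 600, 1200, 1800],
                        ["buy http://x"] * 4))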

Therefore, it is crucial to consider bot detection techniques to distinguish bots from normal users to better leverage user profile features to detect fake news.

However, there is also another "bot-like" strategy that aims to massively promote disinformation and fake content on social platforms: bot farms, also called troll farms. These are not social bots, but groups of organized individuals engaging in trolling or bot-like promotion of narratives in a coordinated fashion (Wardle 2018), hired to massively spread fake news or any other harmful content. A prominent troll farm example is the Russia-based Internet Research Agency (IRA), which disseminated inflammatory content online to influence the outcome of the 2016 US presidential election. Footnote 33 As a result, Twitter suspended accounts connected to the IRA and deleted 200,000 tweets from Russian trolls (Jamieson 2020). Another example in this category is review bombing (Moro and Birt 2022), which refers to coordinated groups of people massively performing the same negative actions online (e.g., dislikes, negative reviews/comments) on an online video, game, post, product, etc., in order to reduce its aggregate review score. The review bombers can be humans or bots coordinated to cause harm and mislead people by falsifying facts.

5.2.3 Dynamic nature of online social platforms and fast propagation of fake news

Sharma et al. ( 2019 ) affirm that the fast proliferation of fake news through social networks makes it challenging to assess information credibility on social media. Similarly, Qian et al. ( 2018 ) assert that fake news and fabricated content propagate exponentially at an early stage of creation and can cause significant losses in a short amount of time (Friggeri et al. 2014), including manipulating the outcome of political events (Liu and Wu 2018; Bessi and Ferrara 2016).

Moreover, while analyzing the way source and promoters of fake news operate over the web through multiple online platforms, Zannettou et al. ( 2019 ) discovered that false information is more likely to spread across platforms (18% appearing on multiple platforms) compared to real information (11%).

Furthermore, Shu et al. ( 2020c ) recently attempted to understand the propagation of disinformation and fake news in social media and found that such content is produced and disseminated faster and more easily through social media because of the low barriers that prevent doing so. Similarly, Shu et al. ( 2020b ) studied hierarchical propagation networks for fake news detection. They performed a comparative analysis between fake and real news from structural, temporal and linguistic perspectives, and demonstrated the potential and effectiveness of these features for fake news detection.

Lastly, Abdullah-All-Tanvir et al. ( 2020 ) note that it is almost impossible to manually detect the sources and authenticity of fake news effectively and efficiently, due to its fast circulation in such a small amount of time. Therefore, it is crucial to note that the dynamic nature of the various online social platforms, which results in the continued rapid and exponential propagation of such fake content, remains a major challenge that requires further investigation while defining innovative solutions for fake news detection.

5.3 Dataset issues

Existing approaches lack inclusive datasets with multidimensional information covering fake news characteristics, which limits the achievable accuracy of machine learning classification models (Nyow and Chua 2019). These datasets are primarily dedicated to validating the machine learning model; they are the ultimate frame of reference for training the model and analyzing its performance. Therefore, if researchers evaluate their model on an unrepresentative dataset, the validity and efficiency of the model become questionable when the fake news detection approach is applied in a real-world scenario.

Moreover, several researchers (Shu et al. 2020d; Wang et al. 2020; Pathak and Srihari 2019; Przybyla 2020) believe that fake news is diverse and dynamic in terms of content, topics, publishing methods and media platforms, with sophisticated linguistic styles geared to emulate true news. Consequently, training machine learning models on such sophisticated content requires large-scale annotated fake news data that are difficult to obtain (Shu et al. 2020d).

Therefore, improving datasets is also a promising direction for enhancing data quality and obtaining better results with the proposed solutions. Adversarial learning techniques (e.g., GAN, SeqGAN) can be used to produce machine-generated data for training deeper models and building robust systems that distinguish fake examples from real ones. This approach can counter the lack of datasets and the scarcity of data available to train models.
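As a minimal sketch of this augmentation idea, an off-the-shelf language model can stand in for the generator (a trained GAN/SeqGAN generator would play the same role); the model name and seed prompts below are arbitrary choices for illustration.

from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # stand-in for a trained generator

seed_headlines = ["Scientists confirm that", "Government officials deny"]
synthetic = [
    out["generated_text"]
    for prompt in seed_headlines
    for out in generator(prompt, max_length=40, num_return_sequences=3, do_sample=True)
]
# Label `synthetic` as machine-generated and mix it into the training set so the
# detector learns to separate generated text from authentic articles.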

6 Fake news detection literature review

Fake news detection in social networks is still in the early stage of development and there are still challenging issues that need further investigation. This has become an emerging research area that is attracting huge attention.

There are various research studies on fake news detection in online social networks, a few of which focus on the automatic detection of fake news using artificial intelligence techniques. In this section, we review the existing approaches used in automatic fake news detection, as well as the techniques that have been adopted. We then provide a critical discussion built on a primary classification scheme based on a specific set of criteria.

6.1 Categories of fake news detection

In this section, we give an overview of most of the existing automatic fake news detection solutions adopted in the literature. A recent classification by Sharma et al. ( 2019 ) distinguishes three categories of fake news identification methods, each further divided based on the type of existing methods (i.e., content-based, feedback-based and intervention-based methods). However, our review of the literature on fake news detection in online social networks shows that existing studies can be classified into broader categories based on two major aspects that most authors inspect and use to define an adequate solution. These aspects are the major sources of information used for fake news detection and can be summarized as follows: the content-based aspect (i.e., related to the content of the news post) and the contextual aspect (i.e., related to the context of the news post).

Consequently, the studies we reviewed can be classified into three categories based on the two aspects mentioned above (the third category being hybrid). As depicted in Fig.  5 , fake news detection solutions can be categorized into news content-based approaches, social context-based approaches (which can be divided into network-based and user-based approaches), and hybrid approaches. The latter combine both content-based and contextual approaches to define a solution.

Fig. 5: Classification of fake news detection approaches

6.1.1 News content-based category

News content-based approaches are fake news detection approaches that use content information (i.e., information extracted from the content of the news post) and that focus on studying and exploiting the news content in their proposed solutions. Content refers to the body of the news, including its source, headline, text and images/videos, which can reflect subtle differences.

Researchers of this category rely on content-based detection cues (i.e., text and multimedia-based cues), which are features extracted from the content of the news post. Text-based cues are features extracted from the text of the news, whereas multimedia-based cues are features extracted from the images and videos attached to the news. Figure  6 summarizes the most widely used news content representation (i.e., text and multimedia/images) and detection techniques (i.e., machine learning (ML), deep Learning (DL), natural language processing (NLP), fact-checking, crowdsourcing (CDS) and blockchain (BKC)) in news content-based category of fake news detection approaches. Most of the reviewed research works based on news content for fake news detection rely on the text-based cues (Kapusta et al. 2019 ; Kaur et al. 2020 ; Vereshchaka et al. 2020 ; Ozbay and Alatas 2020 ; Wang 2017 ; Nyow and Chua 2019 ; Hosseinimotlagh and Papalexakis 2018 ; Abdullah-All-Tanvir et al. 2019 , 2020 ; Mahabub 2020 ; Bahad et al. 2019 ; Hiriyannaiah et al. 2020 ) extracted from the text of the news content including the body of the news and its headline. However, a few researchers such as Vishwakarma et al. ( 2019 ) and Amri et al. ( 2022 ) try to recognize text from the associated image.

Most researchers in this category rely on artificial intelligence (AI) techniques (such as ML, DL and NLP models) to improve prediction accuracy. Others use different techniques such as fact-checking, crowdsourcing and blockchain. Specifically, the AI- and ML-based approaches in this category extract features from the news content, which they later use for content analysis and training tasks. In this particular case, the extracted features are the different types of information considered relevant for the analysis. Feature extraction is considered one of the best techniques to reduce the size of the data used in automatic fake news detection: it aims to choose a subset of features from the original set to improve classification performance (Yazdi et al. 2020).
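A minimal sketch of such a content-based pipeline is shown below, assuming a hypothetical annotated corpus (`texts`, `labels`, with 1 = fake); it combines a TF-IDF representation, feature subset selection and a classical classifier, with arbitrary parameter values.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

model = Pipeline([
    ("tfidf", TfidfVectorizer(max_features=50000, ngram_range=(1, 2), stop_words="english")),
    ("select", SelectKBest(chi2, k=5000)),        # keep a subset of features, as discussed above
    ("clf", LogisticRegression(max_iter=1000)),   # a commonly used baseline classifier
])
# model.fit(texts, labels)
# model.predict(["Breaking: miracle cure suppressed by doctors!"])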

Table  6 lists the distinct features and metadata, as well as the used datasets in the news content-based category of fake news detection approaches.

Fig. 6: News content-based category: news content representation and detection techniques

6.1.2 Social context-based category

Unlike news content-based solutions, social context-based approaches capture the skeptical social context of the online news (Zhang and Ghorbani 2020) rather than focusing on the news content. The social context-based category contains fake news detection approaches that use contextual aspects (i.e., information related to the context of the news post). These aspects are based on the social context, and they offer additional information to help detect fake news. They are the surrounding data outside of the fake news article itself, and they can be an essential part of automatic fake news detection. Some useful examples of contextual information include checking whether the news itself and the source that published it are credible, checking the date of the news and its supporting resources, and checking whether other online news platforms are reporting the same or similar stories (Zhang and Ghorbani 2020).

Social context-based aspects can be classified into two subcategories, user-based and network-based, and they can be used for context analysis and training tasks in the case of AI- and ML-based approaches. User-based aspects refer to information captured from OSN users such as user profile information (Shu et al. 2019b ; Wang et al. 2019c ; Hamdi et al. 2020 ; Nyow and Chua 2019 ; Jiang et al. 2019 ) and user behavior (Cardaioli et al. 2020 ) such as user engagement (Uppada et al. 2022 ; Jiang et al. 2019 ; Shu et al. 2018b ; Nyow and Chua 2019 ) and response (Zhang et al. 2019a ; Qian et al. 2018 ). Meanwhile, network-based aspects refer to information captured from the properties of the social network where the fake content is shared and disseminated such as news propagation path (Liu and Wu 2018 ; Wu and Liu 2018 ) (e.g., propagation times and temporal characteristics of propagation), diffusion patterns (Shu et al. 2019a ) (e.g., number of retweets, shares), as well as user relationships (Mishra 2020 ; Hamdi et al. 2020 ; Jiang et al. 2019 ) (e.g., friendship status among users).
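As an illustration of network-based aspects, the hypothetical helper below derives simple diffusion-pattern features from a reshare cascade given as (parent, child) edges; it is only a sketch of the kind of signals such approaches use, not an implementation from the cited works.

import networkx as nx

def cascade_features(edges):
    """Diffusion-pattern features from (parent, child) reshare edges of one news post."""
    g = nx.DiGraph(edges)
    return {
        "size": g.number_of_nodes(),                       # how many accounts the post reached
        "depth": nx.dag_longest_path_length(g),            # how far from the source it travelled
        "max_breadth": max(d for _, d in g.out_degree()),  # widest single-hop burst of reshares
    }

# Example: a post reshared in two hops.
print(cascade_features([("post", "u1"), ("post", "u2"), ("u1", "u3")]))
# {'size': 4, 'depth': 2, 'max_breadth': 2}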

Figure  7 summarizes some of the most widely adopted social context representations, as well as the most used detection techniques (i.e., AI, ML, DL, fact-checking and blockchain), in the social context-based category of approaches.

Table  7 lists the distinct features and metadata, the adopted detection cues, as well as the used datasets, in the context-based category of fake news detection approaches.

Fig. 7: Social context-based category: social context representation and detection techniques

6.1.3 Hybrid approaches

Most researchers focus on employing a specific method rather than a combination of both content- and context-based methods. This is because some of them (Wu and Rao 2020) believe that there are still challenging limitations in traditional fusion strategies due to existing feature correlations and semantic conflicts. For this reason, some researchers focus on extracting content-based information, while others capture social context-based information for their proposed approaches.

However, it has proven challenging to successfully automate fake news detection based on just a single type of feature (Ruchansky et al. 2017 ). Therefore, recent directions tend to do a mixture by using both news content-based and social context-based approaches for fake news detection.

Table  8 lists the distinct features and metadata, as well as the used datasets, in the hybrid category of fake news detection approaches.

6.2 Fake news detection techniques

Another way of classifying automatic fake news detection is to look at the techniques used in the literature. Hence, we classify the detection methods into three groups based on their techniques:

Human-based techniques: This category mainly includes the use of crowdsourcing and fact-checking techniques, which rely on human knowledge to check and validate the veracity of news content.

Artificial Intelligence-based techniques: This category includes the most used AI approaches for fake news detection in the literature. Specifically, these are the approaches in which researchers use classical ML, deep learning techniques such as convolutional neural network (CNN), recurrent neural network (RNN), as well as natural language processing (NLP).

Blockchain-based techniques: This category includes solutions using blockchain technology to detect and mitigate fake news in social media by checking source reliability and establishing the traceability of the news content.

6.2.1 Human-based techniques

One specific research direction for fake news detection consists of using human-based techniques such as crowdsourcing (Pennycook and Rand 2019 ; Micallef et al. 2020 ) and fact-checking (Vlachos and Riedel 2014 ; Chung and Kim 2021 ; Nyhan et al. 2020 ) techniques.

These approaches can be considered low-computational-requirement techniques, since both rely on human knowledge and expertise for fake news detection. However, fake news identification cannot be addressed solely through human effort: it demands a lot of time and cost, and it is ineffective at preventing the fast spread of fake content.

Crowdsourcing. Crowdsourcing approaches (Kim et al. 2018 ) are based on the “wisdom of the crowds” (Collins et al. 2020 ) for fake content detection. These approaches rely on the collective contributions and crowd signals (Tschiatschek et al. 2018 ) of a group of people for the aggregation of crowd intelligence to detect fake news (Tchakounté et al. 2020 ) and to reduce the spread of misinformation on social media (Pennycook and Rand 2019 ; Micallef et al. 2020 ).

Micallef et al. ( 2020 ) highlight the role of the crowd in countering misinformation. They suspect that concerned citizens (i.e., the crowd), who use platforms where disinformation appears, can play a crucial role in spreading fact-checking information and in combating the spread of misinformation.

Recently, Tchakounté et al. ( 2020 ) proposed a voting system as a new method for the binary aggregation of crowd opinions with the knowledge of a third-party expert. The aggregator is based on majority voting on the crowd side and weighted averaging on the third-party side.
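A minimal sketch of such a binary aggregation scheme is shown below; the 0.6 expert weight and the 0.5 decision threshold are arbitrary illustration values, not those of Tchakounté et al.

def aggregate(crowd_votes, expert_score, expert_weight=0.6):
    """Majority vote over the crowd, then a weighted average with an expert signal.

    crowd_votes: list of 0/1 judgments (1 = fake); expert_score: expert belief in [0, 1].
    """
    crowd_score = sum(crowd_votes) / len(crowd_votes)  # crowd majority expressed as a score
    combined = expert_weight * expert_score + (1 - expert_weight) * crowd_score
    return 1 if combined >= 0.5 else 0

print(aggregate([1, 1, 0, 1, 0], expert_score=0.9))  # -> 1 (likely fake)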

Similarly, Huffaker et al. ( 2020 ) propose crowdsourced detection of emotionally manipulative language. They introduce an approach that transforms the classification problem into a comparison task to mitigate conflation, allowing the crowd to detect text that uses manipulative emotional language to sway users toward positions or actions. The proposed system leverages anchor comparison to distinguish between intrinsically emotional content and emotionally manipulative language.

La Barbera et al. ( 2020 ) try to understand how people perceive the truthfulness of information presented to them. They collect data from US-based crowd workers, build a dataset of crowdsourced truthfulness judgments for political statements, and compare it with expert annotation data generated by fact-checkers such as PolitiFact.

Coscia and Rossi ( 2020 ) introduce a crowdsourced system for flagging online news. Their bipolar model of news flagging attempts to capture the main ingredients that they observe in empirical research on fake news and disinformation.

Unlike the previously mentioned researchers who focus on news content in their approaches, Pennycook and Rand ( 2019 ) focus on using crowdsourced judgments of the quality of news sources to combat social media disinformation.

Fact-Checking. The fact-checking task is commonly manually performed by journalists to verify the truthfulness of a given claim. Indeed, fact-checking features are being adopted by multiple online social network platforms. For instance, Facebook Footnote 34 started addressing false information through independent fact-checkers in 2017, followed by Google Footnote 35 the same year. Two years later, Instagram Footnote 36 followed suit. However, the usefulness of fact-checking initiatives is questioned by journalists Footnote 37 , as well as by researchers such as Andersen and Søe ( 2020 ). On the other hand, work is being conducted to boost the effectiveness of these initiatives to reduce misinformation (Chung and Kim 2021 ; Clayton et al. 2020 ; Nyhan et al. 2020 ).

Most researchers use fact-checking websites (e.g., politifact.com, Footnote 38 snopes.com Footnote 39 and Reuters Footnote 40 ) as data sources to build their datasets and train their models. Therefore, in the following, we review examples of solutions that use fact-checking (Vlachos and Riedel 2014) to help build datasets that can be further used for the automatic detection of fake content.
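For instance, a dataset can be bootstrapped by mapping fact-checking verdicts to binary labels. The mapping below is a hypothetical illustration; in particular, where to cut ambiguous verdicts such as "half-true" is a modeling decision.

# Hypothetical binarization of PolitiFact-style verdicts (1 = fake, 0 = real).
VERDICT_TO_LABEL = {
    "true": 0, "mostly-true": 0, "half-true": 0,
    "mostly-false": 1, "false": 1, "pants-fire": 1,
}

def to_training_pair(claim_text, verdict):
    label = VERDICT_TO_LABEL.get(verdict.lower())
    return None if label is None else (claim_text, label)  # skip unfamiliar verdicts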

Yang et al. ( 2019a ) use the PolitiFact fact-checking website as a data source to train, tune and evaluate their model, named XFake, on political data. The XFake system is an explainable fake news detector that helps end users assess news credibility. The fakeness of news items is detected and interpreted by considering both content and contextual information (e.g., the statements and their speakers).

Based on the idea that fact-checkers cannot clean all data and must select what "matters most" to clean while checking a claim, Sintos et al. ( 2019 ) propose a solution to help fact-checkers combat problems related to data quality (where inaccurate data lead to incorrect conclusions) and data phishing. The proposed solution combines data cleaning and perturbation analysis to avoid uncertainties and errors in the data and the possibility that the data can be phished.

Tchechmedjiev et al. ( 2019 ) propose a system named "ClaimsKG", a knowledge graph of fact-checked claims aiming to facilitate structured queries about their truth values, authors, dates, journalistic reviews and other kinds of metadata. "ClaimsKG" models the relationships between vocabularies; to gather these vocabularies, a semi-automated pipeline periodically collects data from popular fact-checking websites.

6.2.2 AI-based techniques

Previous work by Yaqub et al. ( 2020 ) has shown that people lack trust in automated solutions for fake news detection. However, work is already being undertaken to increase this trust, for instance by von der Weth et al. ( 2020 ).

Most researchers consider fake news detection a classification problem and use artificial intelligence techniques, as shown in Fig.  8 . The adopted AI techniques may include machine learning (ML) (e.g., naïve Bayes, logistic regression, support vector machines (SVM)), deep learning (DL) (e.g., convolutional neural networks (CNN), recurrent neural networks (RNN), long short-term memory (LSTM)) and natural language processing (NLP) (e.g., count vectorizer, TF-IDF vectorizer). Most combine several AI techniques in their solutions rather than relying on one specific approach.

Fig. 8: Examples of the most widely used AI techniques for fake news detection

Many researchers are developing machine learning models in their solutions for fake news detection. Recently, deep neural network techniques have also been employed, as they generate promising results (Islam et al. 2020). A neural network is a massively parallel distributed processor made of simple units that can store important information and make it available for use (Hiriyannaiah et al. 2020). Moreover, it has been shown (Cardoso Durier da Silva et al. 2019) that the most widely used method for automatic detection of fake news is not a single classical machine learning technique, but rather a fusion of classical techniques coordinated by a neural network.

Some researchers define purely machine learning models (Del Vicario et al. 2019; Elhadad et al. 2019; Aswani et al. 2017; Hakak et al. 2021; Singh et al. 2021) in their fake news detection approaches. The most commonly used machine learning algorithms for classification problems are naïve Bayes, logistic regression and SVM (Abdullah-All-Tanvir et al. 2019).

Other researchers (Wang et al. 2019c; Wang 2017; Liu and Wu 2018; Mishra 2020; Qian et al. 2018; Zhang et al. 2020; Goldani et al. 2021) prefer to combine different deep learning models without resorting to classical machine learning techniques; some even show that deep learning techniques outperform traditional machine learning techniques (Mishra et al. 2022). Deep learning is one of the most popular research topics in machine learning. Unlike traditional machine learning approaches, which are based on manually crafted features, deep learning approaches can learn hidden representations from simpler inputs, both in context and content variations (Bondielli and Marcelloni 2019). Moreover, traditional machine learning algorithms almost always require structured data and are designed to "learn" from labeled data, needing human intervention to correct them when a result is wrong (Parrish 2018). Deep learning networks, by contrast, rely on layers of artificial neural networks (ANN) that place data in a hierarchy of different concepts and ultimately learn from their own mistakes, without such intervention (Parrish 2018). The two most widely implemented paradigms in deep neural networks are recurrent neural networks (RNN) and convolutional neural networks (CNN).
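As a minimal sketch of the RNN paradigm applied to this task, the following Keras model classifies token sequences as fake or real; the vocabulary size, dimensions and layer choices are arbitrary, and `padded_token_ids`/`labels` are assumed to come from a hypothetical preprocessed corpus.

import tensorflow as tf

VOCAB_SIZE = 20000  # assumed size of the tokenizer vocabulary

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, 128),               # learn word representations
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),  # capture sequential context
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),           # outputs P(fake)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(padded_token_ids, labels, validation_split=0.1, epochs=3)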

Still other researchers (Abdullah-All-Tanvir et al. 2019; Kaliyar et al. 2020; Zhang et al. 2019a; Deepak and Chitturi 2020; Shu et al. 2018a; Wang et al. 2019c) prefer to combine traditional machine learning and deep learning classification models. Others combine machine learning and natural language processing techniques, and a few combine deep learning models with natural language processing (Vereshchaka et al. 2020). Some other researchers (Kapusta et al. 2019; Ozbay and Alatas 2020; Ahmed et al. 2020) combine natural language processing with machine learning models. Furthermore, others (Abdullah-All-Tanvir et al. 2019; Kaur et al. 2020; Kaliyar 2018; Abdullah-All-Tanvir et al. 2020; Bahad et al. 2019) prefer to combine all the previously mentioned techniques (i.e., ML, DL and NLP) in their approaches.

Table  11 , which is relegated to the Appendix (after the bibliography) because of its size, shows a comparison of the fake news detection solutions that we have reviewed based on their main approaches, the methodology that was used and the models.

6.2.3 Blockchain-based techniques for source reliability and traceability

Another research direction for detecting and mitigating fake news in social media focuses on using blockchain solutions. Blockchain technology is recently attracting researchers’ attention due to the interesting features it offers. Immutability, decentralization, tamperproof, consensus, record keeping and non-repudiation of transactions are some of the key features that make blockchain technology exploitable, not just for cryptocurrencies, but also to prove the authenticity and integrity of digital assets.

However, the proposed blockchain approaches are few in number and remain fundamental and theoretical. Specifically, the solutions currently available are still in the research, prototype and beta-testing stages (DiCicco and Agarwal 2020; Tchechmedjiev et al. 2019). Furthermore, most researchers (Ochoa et al. 2019; Song et al. 2019; Shang et al. 2018; Qayyum et al. 2019; Jing and Murugesan 2018; Buccafurri et al. 2017; Chen et al. 2018) do not specify which type of fake news they are mitigating in their studies; they mention news content in general, which is not adequate for innovative solutions. Serious implementations are therefore needed to prove the usefulness and feasibility of this newly developing research vision.
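To make the record-keeping idea concrete, the toy hash chain below shows how blockchain-style linking makes tampering with archived news records evident; it is a didactic sketch of immutability and traceability, not a consensus-based blockchain.

import hashlib, json, time

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

chain = [{"index": 0, "prev": "0" * 64, "article_sha256": "genesis", "ts": 0}]

def publish(article_text):
    """Append an article record whose hash commits to the entire history (traceability)."""
    chain.append({"index": len(chain), "prev": block_hash(chain[-1]),
                  "article_sha256": hashlib.sha256(article_text.encode()).hexdigest(),
                  "ts": time.time()})

def verify():
    """Tampering with any earlier record breaks every later `prev` link (immutability)."""
    return all(chain[i]["prev"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))

publish("Article A")
publish("Article B")
chain[1]["article_sha256"] = "forged"  # tamper with an archived record...
print(verify())                        # ...and verification fails: False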

Table  9 shows a classification of the reviewed blockchain-based approaches. In the classification, we list the following:

The type of fake news that authors are trying to mitigate, which can be multimedia-based or text-based fake news.

The techniques used for fake news mitigation, which can be either blockchain only, or blockchain combined with other techniques such as AI, Data mining, Truth-discovery, Preservation metadata, Semantic similarity, Crowdsourcing, Graph theory and SIR model (Susceptible, Infected, Recovered).

The feature that is offered as an advantage of the given solution (e.g., reliability, authenticity and traceability). Reliability is the credibility and truthfulness of the news content, which consists of proving the trustworthiness of the content. Traceability is the ability to trace and archive the content. Authenticity consists of checking whether the content is genuine rather than fabricated.

A checkmark (✓) in Table 9 denotes that the criterion is explicitly addressed in the proposed solution, while a dash (–) denotes that the criterion was either not explicitly mentioned in the work (e.g., the fake news type) or that the classification does not apply (e.g., techniques/other).

7 Discussion

After reviewing the most relevant state of the art for automatic fake news detection, we classify the works as shown in Table 10, based on the detection aspects (i.e., content-based, contextual, or hybrid aspects) and the techniques used (i.e., AI, crowdsourcing, fact-checking, blockchain or hybrid techniques). Hybrid techniques refer to solutions that simultaneously combine techniques from different categories (i.e., inter-hybrid methods), as well as techniques within the same class of methods (i.e., intra-hybrid methods), in order to define innovative solutions for fake news detection; a hybrid method should bring the best of both worlds. We then discuss the reviewed works along several axes.

7.1 News content-based methods

Most of the news content-based approaches consider fake news detection as a classification problem and use AI techniques such as classical machine learning (e.g., regression, Bayesian methods) as well as deep learning (i.e., neural methods such as CNN and RNN). More specifically, classification of social media content is a fundamental task for social media mining, so most existing methods regard it as a text categorization problem and mainly focus on content features, such as words and hashtags (Wu and Liu 2018). The main challenge facing these approaches is how to extract features in a way that reduces the data needed to train the models, and which features are most suitable for accurate results.
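The sketch below illustrates what such hand-crafted content features can look like in practice; the specific feature set is an illustrative assumption rather than one drawn from a particular reviewed paper.

```python
import re
import numpy as np

def content_features(post: str) -> np.ndarray:
    """Hand-crafted content features of the kind classical models rely on."""
    words = post.split()
    return np.array([
        len(words),                                    # length
        sum(w.startswith("#") for w in words),         # hashtags
        sum(w.isupper() for w in words if len(w) > 1), # shouting words
        post.count("!"),                               # exclamation marks
        len(re.findall(r"https?://\S+", post)),        # embedded links
    ], dtype=float)

X = np.vstack([content_features(p) for p in
               ["BREAKING!!! #miracle cure http://bit.ly/x",
                "Parliament votes on the budget today."]])
print(X)
```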

Researchers using such approaches are motivated by the fact that the news content is the main entity in the deception process and a straightforward factor to analyze when looking for predictive clues of deception. However, detecting fake news from the content alone is not enough, because fake news is created in a strategic, intentional way to mimic the truth (i.e., the content can be intentionally manipulated by the spreader to make it look like real news). It is therefore considered challenging, if not impossible, to identify useful features (Wu and Liu 2018) and, consequently, to tell the nature of such news solely from its content.

Moreover, works that utilize only the news content for fake news detection ignore the rich information and latent user intelligence (Qian et al. 2018) stored in user responses to previously disseminated articles. This auxiliary information is therefore deemed crucial for an effective fake news detection approach.

7.2 Social context-based methods

The context-based approaches explore the data surrounding the news content, which can be an effective direction with advantages in areas where content approaches based on text classification run into issues. However, most existing studies implementing contextual methods focus mainly on additional information coming from users and network diffusion patterns. Moreover, from a technical perspective, they are limited to the use of sophisticated machine learning techniques for feature extraction, and they ignore the usefulness of results coming from techniques such as web search and crowdsourcing, which may save much time and help in the early detection and identification of fake content.
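As a toy sketch of the network side of such contextual features (the cascade and the feature choices are illustrative assumptions), basic diffusion statistics can be read off a reshare graph:

```python
import networkx as nx

# Toy reshare cascade: edges point from the original post to resharers.
cascade = nx.DiGraph([("post", "u1"), ("post", "u2"),
                      ("u1", "u3"), ("u3", "u4")])

depth = nx.dag_longest_path_length(cascade)   # how deep the diffusion goes
breadth = cascade.out_degree("post")          # immediate amplification
size = cascade.number_of_nodes() - 1          # users reached
print({"depth": depth, "breadth": breadth, "size": size})
```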

7.3 Hybrid approaches

Hybrid approaches can simultaneously model different aspects of fake news, such as the content-based aspects as well as the contextual aspects based on both the OSN user and the OSN network patterns. However, these approaches are deemed more complex in terms of models (Bondielli and Marcelloni 2019), data availability, and the number of features. Furthermore, it remains difficult to decide which information within each category (i.e., content-based and context-based information) is most suitable and appropriate for achieving accurate and precise results. Therefore, there are still very few studies in this category of hybrid approaches.
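In its simplest form, a hybrid model concatenates the two feature families before classification, as in the following minimal sketch (the context values and the tiny corpus are illustrative assumptions):

```python
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["shocking secret cure #share", "city council approves new park"]
labels = [1, 0]

content = TfidfVectorizer().fit_transform(texts)     # content aspect
context = csr_matrix(np.array([[250.0, 3.0, 0.1],    # resharers, cascade depth,
                               [12.0, 1.0, 0.9]]))   # poster credibility (toy values)
X = hstack([content, context])                       # hybrid feature matrix
clf = LogisticRegression().fit(X, labels)
```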

7.4 Early detection

As fake news usually evolves and spreads very fast on social media, it is critical and urgent to consider early detection directions. Yet this is a challenging task, especially on highly dynamic platforms such as social networks. Both news content-based and social context-based approaches suffer from this challenge of early fake news detection.

Although approaches that detect fake news based on content analysis face this issue less, they are still limited by the lack of information required for verification while the news is in its early stage of spread. Approaches that detect fake news based on contextual analysis are the most likely to suffer from the lack of early detection, since most of them rely on information that becomes available only after the spread of fake content, such as social engagement, user responses, and propagation patterns. It is therefore crucial to consider both trusted human verification and historical data in an attempt to detect fake content during its early stage of propagation.
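One simple operational ingredient of early detection (a sketch under the assumption that engagement records carry timestamps) is to restrict feature computation to the signals available within a short window after publication:

```python
from datetime import datetime, timedelta

def early_engagements(engagements, published_at, window=timedelta(hours=1)):
    """Keep only the reactions visible within the first hour after
    publication, so features are computed with detection-time data."""
    cutoff = published_at + window
    return [e for e in engagements if e["time"] <= cutoff]

published = datetime(2022, 1, 1, 12, 0)
engagements = [{"user": "u1", "time": published + timedelta(minutes=5)},
               {"user": "u2", "time": published + timedelta(hours=3)}]
print(len(early_engagements(engagements, published)))  # 1: only the early reaction
```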

8 Conclusion and future directions

In this paper, we introduced the general context of the fake news problem as one of the major issues of the online deception problem in online social networks. Based on a review of the most relevant state of the art, we summarized and classified existing definitions of fake news and its related terms. We also listed various typologies and existing categorizations of fake news, such as intent-based fake news (including clickbait, hoaxes, rumors, satire, propaganda, conspiracy theories and framing) and content-based fake news (including text- and multimedia-based fake news, the latter covering deepfake videos and GAN-generated fake images). We discussed the major challenges related to fake news detection and mitigation in social media, including the deceptive nature of fabricated content, the lack of human awareness in the field of fake news, the issue of non-human spreaders (e.g., social bots), the dynamicity of such online platforms, which results in fast propagation of fake content, and the quality of existing datasets, which still limits the efficiency of the proposed solutions. We reviewed existing researchers’ visions regarding the automatic detection of fake news based on the adopted approaches (i.e., news content-based approaches, social context-based approaches, or hybrid approaches) and the techniques used (i.e., artificial intelligence-based methods; crowdsourcing, fact-checking, and blockchain-based methods; and hybrid methods), and then presented a comparative study of the reviewed works. We also provided a critical discussion of the reviewed approaches along different axes, such as the adopted aspect for fake news detection (i.e., content-based, contextual, and hybrid aspects) and the early detection perspective.

To conclude, we present the main issues in combating the fake news problem that need to be further investigated when proposing new detection approaches. We believe that, to define an efficient fake news detection approach, we need to consider the following:

Our choice of sources of information and search criteria may have introduced biases in our research. If so, it would be desirable to identify those biases and mitigate them.

News content is the fundamental source to find clues to distinguish fake from real content. However, contextual information derived from social media users and from the network can provide useful auxiliary information to increase detection accuracy. Specifically, capturing users’ characteristics and users’ behavior toward shared content can be a key task for fake news detection.

Moreover, capturing users’ historical behavior, including their emotions and/or opinions toward news content, can help in the early detection and mitigation of fake news.

Furthermore, adversarial learning techniques (e.g., GAN, SeqGAN) can be considered a promising direction for mitigating the lack and scarcity of available datasets by providing machine-generated data that can be used to train and build robust systems that distinguish fake examples from real ones (a minimal sketch follows this list).

Lastly, analyzing how sources and promoters of fake news operate over the web through multiple online platforms is crucial; Zannettou et al. (2019) discovered that false information is more likely to spread across platforms (18% appearing on multiple platforms) compared to valid information (11%).
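To make the adversarial-augmentation idea above concrete, here is a minimal GAN sketch over generic article feature vectors; the dimensions, hyperparameters and the Gaussian stand-in for real data are illustrative assumptions, not a surveyed method.

```python
import torch
import torch.nn as nn

# Generator maps noise to synthetic "article" feature vectors; the
# discriminator learns to separate them from real ones.
FEAT, NOISE = 20, 8
G = nn.Sequential(nn.Linear(NOISE, 32), nn.ReLU(), nn.Linear(32, FEAT))
D = nn.Sequential(nn.Linear(FEAT, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

real_data = torch.randn(256, FEAT) + 2.0  # stand-in for a scarce real dataset

for step in range(200):
    # Discriminator step: real -> 1, generated -> 0.
    fake = G(torch.randn(64, NOISE)).detach()
    real = real_data[torch.randint(0, 256, (64,))]
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator step: try to fool the discriminator.
    fake = G(torch.randn(64, NOISE))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

# Machine-generated samples that can augment a scarce training set.
augmented = G(torch.randn(100, NOISE)).detach()
```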

Availability of data and material

All the data and material are available in the papers cited in the references.

https://www.nationalacademies.org/news/2021/07/as-surgeon-general-urges-whole-of-society-effort-to-fight-health-misinformation-the-work-of-the-national-academies-helps-foster-an-evidence-based-information-environment , last access date: 26-12-2022.

https://time.com/4897819/elvis-presley-alive-conspiracy-theories/ , last access date: 26-12-2022.

https://www.therichest.com/shocking/the-evidence-15-reasons-people-think-the-earth-is-flat/ , last access date: 26-12-2022.

https://www.grunge.com/657584/the-truth-about-1952s-alien-invasion-of-washington-dc/ , last access date: 26-12-2022.

https://www.journalism.org/2021/01/12/news-use-across-social-media-platforms-in-2020/ , last access date: 26-12-2022.

https://www.pewresearch.org/fact-tank/2018/12/10/social-media-outpaces-print-newspapers-in-the-u-s-as-a-news-source/ , last access date: 26-12-2022.

https://www.buzzfeednews.com/article/janelytvynenko/coronavirus-fake-news-disinformation-rumors-hoaxes , last access date: 26-12-2022.

https://www.factcheck.org/2020/03/viral-social-media-posts-offer-false-coronavirus-tips/ , last access date: 26-12-2022.

https://www.factcheck.org/2020/02/fake-coronavirus-cures-part-2-garlic-isnt-a-cure/ , last access date: 26-12-2022.

https://www.bbc.com/news/uk-36528256 , last access date: 26-12-2022.

https://en.wikipedia.org/wiki/Pizzagate_conspiracy_theory , last access date: 26-12-2022.

https://www.theguardian.com/world/2017/jan/09/germany-investigating-spread-fake-news-online-russia-election , last access date: 26-12-2022.

https://www.macquariedictionary.com.au/resources/view/word/of/the/year/2016 , last access date: 26-12-2022.

https://www.macquariedictionary.com.au/resources/view/word/of/the/year/2018 , last access date: 26-12-2022.

https://apnews.com/article/47466c5e260149b1a23641b9e319fda6 , last access date: 26-12-2022.

https://blog.collinsdictionary.com/language-lovers/collins-2017-word-of-the-year-shortlist/ , last access date: 26-12-2022.

https://www.gartner.com/smarterwithgartner/gartner-top-strategic-predictions-for-2018-and-beyond/ , last access date: 26-12-2022.

https://www.technologyreview.com/s/612236/even-the-best-ai-for-spotting-fake-news-is-still-terrible/ , last access date: 26-12-2022.

https://scholar.google.ca/ , last access date: 26-12-2022.

https://ieeexplore.ieee.org/ , last access date: 26-12-2022.

https://link.springer.com/ , last access date: 26-12-2022.

https://www.sciencedirect.com/ , last access date: 26-12-2022.

https://www.scopus.com/ , last access date: 26-12-2022.

https://www.acm.org/digital-library , last access date: 26-12-2022.

https://www.politico.com/magazine/story/2016/12/fake-news-history-long-violent-214535 , last access date: 26-12-2022.

https://en.wikipedia.org/wiki/Trial_of_Socrates , last access date: 26-12-2022.

https://trends.google.com/trends/explore?hl=en-US&tz=-180&date=2013-12-06+2018-01-06&geo=US&q=fake+news&sni=3 , last access date: 26-12-2022.

https://ec.europa.eu/digital-single-market/en/tackling-online-disinformation , last access date: 26-12-2022.

https://www.nato.int/cps/en/natohq/177273.htm , last access date: 26-12-2022.

https://www.collinsdictionary.com/dictionary/english/fake-news , last access date: 26-12-2022.

https://www.statista.com/statistics/657111/fake-news-sharing-online/ , last access date: 26-12-2022.

https://www.statista.com/statistics/657090/fake-news-recogition-confidence/ , last access date: 26-12-2022.

https://www.nbcnews.com/tech/social-media/now-available-more-200-000-deleted-russian-troll-tweets-n844731 , last access date: 26-12-2022.

https://www.theguardian.com/technology/2017/mar/22/facebook-fact-checking-tool-fake-news , last access date: 26-12-2022.

https://www.theguardian.com/technology/2017/apr/07/google-to-display-fact-checking-labels-to-show-if-news-is-true-or-false , last access date: 26-12-2022.

https://about.instagram.com/blog/announcements/combatting-misinformation-on-instagram , last access date: 26-12-2022.

https://www.wired.com/story/instagram-fact-checks-who-will-do-checking/ , last access date: 26-12-2022.

https://www.politifact.com/ , last access date: 26-12-2022.

https://www.snopes.com/ , last access date: 26-12-2022.

https://www.reutersagency.com/en/ , last access date: 26-12-2022.

Abdullah-All-Tanvir, Mahir EM, Akhter S, Huq MR (2019) Detecting fake news using machine learning and deep learning algorithms. In: 7th international conference on smart computing and communications (ICSCC), IEEE, pp 1–5 https://doi.org/10.1109/ICSCC.2019.8843612

Abdullah-All-Tanvir, Mahir EM, Huda SMA, Barua S (2020) A hybrid approach for identifying authentic news using deep learning methods on popular Twitter threads. In: International conference on artificial intelligence and signal processing (AISP), IEEE, pp 1–6 https://doi.org/10.1109/AISP48273.2020.9073583

Abu Arqoub O, Abdulateef Elega A, Efe Özad B, Dwikat H, Adedamola Oloyede F (2022) Mapping the scholarship of fake news research: a systematic review. J Pract 16(1):56–86. https://doi.org/10.1080/17512786.2020.1805791

Ahmed S, Hinkelmann K, Corradini F (2020) Development of fake news model using machine learning through natural language processing. Int J Comput Inf Eng 14(12):454–460

Aïmeur E, Brassard G, Rioux J (2013) Data privacy: an end-user perspective. Int J Comput Netw Commun Secur 1(6):237–250

Aïmeur E, Hage H, Amri S (2018) The scourge of online deception in social networks. In: 2018 international conference on computational science and computational intelligence (CSCI), IEEE, pp 1266–1271 https://doi.org/10.1109/CSCI46756.2018.00244

Alemanno A (2018) How to counter fake news? A taxonomy of anti-fake news approaches. Eur J Risk Regul 9(1):1–5. https://doi.org/10.1017/err.2018.12

Allcott H, Gentzkow M (2017) Social media and fake news in the 2016 election. J Econ Perspect 31(2):211–36. https://doi.org/10.1257/jep.31.2.211

Allen J, Howland B, Mobius M, Rothschild D, Watts DJ (2020) Evaluating the fake news problem at the scale of the information ecosystem. Sci Adv. https://doi.org/10.1126/sciadv.aay3539

Allington D, Duffy B, Wessely S, Dhavan N, Rubin J (2020) Health-protective behaviour, social media usage and conspiracy belief during the Covid-19 public health emergency. Psychol Med. https://doi.org/10.1017/S003329172000224X

Alonso-Galbán P, Alemañy-Castilla C (2022) Curbing misinformation and disinformation in the Covid-19 era: a view from cuba. MEDICC Rev 22:45–46 https://doi.org/10.37757/MR2020.V22.N2.12

Altay S, Hacquin AS, Mercier H (2022) Why do so few people share fake news? It hurts their reputation. New Media Soc 24(6):1303–1324. https://doi.org/10.1177/1461444820969893

Amri S, Sallami D, Aïmeur E (2022) Exmulf: an explainable multimodal content-based fake news detection system. In: International symposium on foundations and practice of security. Springer, Berlin, pp 177–187. https://doi.org/10.1109/IJCNN48605.2020.9206973

Andersen J, Søe SO (2020) Communicative actions we live by: the problem with fact-checking, tagging or flagging fake news-the case of Facebook. Eur J Commun 35(2):126–139. https://doi.org/10.1177/0267323119894489

Apuke OD, Omar B (2021) Fake news and Covid-19: modelling the predictors of fake news sharing among social media users. Telematics Inform 56:101475. https://doi.org/10.1016/j.tele.2020.101475

Apuke OD, Omar B, Tunca EA, Gever CV (2022) The effect of visual multimedia instructions against fake news spread: a quasi-experimental study with Nigerian students. J Librariansh Inf Sci. https://doi.org/10.1177/09610006221096477

Aswani R, Ghrera S, Kar AK, Chandra S (2017) Identifying buzz in social media: a hybrid approach using artificial bee colony and k-nearest neighbors for outlier detection. Soc Netw Anal Min 7(1):1–10. https://doi.org/10.1007/s13278-017-0461-2

Avram M, Micallef N, Patil S, Menczer F (2020) Exposure to social engagement metrics increases vulnerability to misinformation. arXiv preprint arXiv:2005.04682. https://doi.org/10.37016/mr-2020-033

Badawy A, Lerman K, Ferrara E (2019) Who falls for online political manipulation? In: Companion proceedings of the 2019 world wide web conference, pp 162–168 https://doi.org/10.1145/3308560.3316494

Bahad P, Saxena P, Kamal R (2019) Fake news detection using bi-directional LSTM-recurrent neural network. Procedia Comput Sci 165:74–82. https://doi.org/10.1016/j.procs.2020.01.072

Bakdash J, Sample C, Rankin M, Kantarcioglu M, Holmes J, Kase S, Zaroukian E, Szymanski B (2018) The future of deception: machine-generated and manipulated images, video, and audio? In: 2018 international workshop on social sensing (SocialSens), IEEE, pp 2–2 https://doi.org/10.1109/SocialSens.2018.00009

Balmas M (2014) When fake news becomes real: combined exposure to multiple news sources and political attitudes of inefficacy, alienation, and cynicism. Commun Res 41(3):430–454. https://doi.org/10.1177/0093650212453600

Baptista JP, Gradim A (2020) Understanding fake news consumption: a review. Soc Sci. https://doi.org/10.3390/socsci9100185

Baptista JP, Gradim A (2022) A working definition of fake news. Encyclopedia 2(1):632–645. https://doi.org/10.3390/encyclopedia2010043

Bastick Z (2021) Would you notice if fake news changed your behavior? An experiment on the unconscious effects of disinformation. Comput Hum Behav 116:106633. https://doi.org/10.1016/j.chb.2020.106633

Batailler C, Brannon SM, Teas PE, Gawronski B (2022) A signal detection approach to understanding the identification of fake news. Perspect Psychol Sci 17(1):78–98. https://doi.org/10.1177/1745691620986135

Bessi A, Ferrara E (2016) Social bots distort the 2016 US presidential election online discussion. First Monday 21(11-7). https://doi.org/10.5210/fm.v21i11.7090

Bhattacharjee A, Shu K, Gao M, Liu H (2020) Disinformation in the online information ecosystem: detection, mitigation and challenges. arXiv preprint arXiv:2010.09113

Bhuiyan MM, Zhang AX, Sehat CM, Mitra T (2020) Investigating differences in crowdsourced news credibility assessment: raters, tasks, and expert criteria. Proc ACM Hum Comput Interact 4(CSCW2):1–26. https://doi.org/10.1145/3415164

Bode L, Vraga EK (2015) In related news, that was wrong: the correction of misinformation through related stories functionality in social media. J Commun 65(4):619–638. https://doi.org/10.1111/jcom.12166

Bondielli A, Marcelloni F (2019) A survey on fake news and rumour detection techniques. Inf Sci 497:38–55. https://doi.org/10.1016/j.ins.2019.05.035

Bovet A, Makse HA (2019) Influence of fake news in Twitter during the 2016 US presidential election. Nat Commun 10(1):1–14. https://doi.org/10.1038/s41467-018-07761-2

Brashier NM, Pennycook G, Berinsky AJ, Rand DG (2021) Timing matters when correcting fake news. Proc Natl Acad Sci. https://doi.org/10.1073/pnas.2020043118

Brewer PR, Young DG, Morreale M (2013) The impact of real news about “fake news’’: intertextual processes and political satire. Int J Public Opin Res 25(3):323–343. https://doi.org/10.1093/ijpor/edt015

Bringula RP, Catacutan-Bangit AE, Garcia MB, Gonzales JPS, Valderama AMC (2022) “Who is gullible to political disinformation?’’ Predicting susceptibility of university students to fake news. J Inf Technol Polit 19(2):165–179. https://doi.org/10.1080/19331681.2021.1945988

Buccafurri F, Lax G, Nicolazzo S, Nocera A (2017) Tweetchain: an alternative to blockchain for crowd-based applications. In: International conference on web engineering, Springer, Berlin, pp 386–393. https://doi.org/10.1007/978-3-319-60131-1_24

Burshtein S (2017) The true story on fake news. Intell Prop J 29(3):397–446

Cardaioli M, Cecconello S, Conti M, Pajola L, Turrin F (2020) Fake news spreaders profiling through behavioural analysis. In: CLEF (working notes)

Cardoso Durier da Silva F, Vieira R, Garcia AC (2019) Can machines learn to detect fake news? A survey focused on social media. In: Proceedings of the 52nd Hawaii international conference on system sciences. https://doi.org/10.24251/HICSS.2019.332

Carmi E, Yates SJ, Lockley E, Pawluczuk A (2020) Data citizenship: rethinking data literacy in the age of disinformation, misinformation, and malinformation. Intern Policy Rev 9(2):1–22 https://doi.org/10.14763/2020.2.1481

Celliers M, Hattingh M (2020) A systematic review on fake news themes reported in literature. In: Conference on e-Business, e-Services and e-Society. Springer, Berlin, pp 223–234. https://doi.org/10.1007/978-3-030-45002-1_19

Chen Y, Li Q, Wang H (2018) Towards trusted social networks with blockchain technology. arXiv preprint arXiv:1801.02796

Cheng L, Guo R, Shu K, Liu H (2020) Towards causal understanding of fake news dissemination. arXiv preprint arXiv:2010.10580

Chiu MM, Oh YW (2021) How fake news differs from personal lies. Am Behav Sci 65(2):243–258. https://doi.org/10.1177/0002764220910243

Chung M, Kim N (2021) When I learn the news is false: how fact-checking information stems the spread of fake news via third-person perception. Hum Commun Res 47(1):1–24. https://doi.org/10.1093/hcr/hqaa010

Clarke J, Chen H, Du D, Hu YJ (2020) Fake news, investor attention, and market reaction. Inf Syst Res. https://doi.org/10.1287/isre.2019.0910

Clayton K, Blair S, Busam JA, Forstner S, Glance J, Green G, Kawata A, Kovvuri A, Martin J, Morgan E et al (2020) Real solutions for fake news? Measuring the effectiveness of general warnings and fact-check tags in reducing belief in false stories on social media. Polit Behav 42(4):1073–1095. https://doi.org/10.1007/s11109-019-09533-0

Collins B, Hoang DT, Nguyen NT, Hwang D (2020) Fake news types and detection models on social media a state-of-the-art survey. In: Asian conference on intelligent information and database systems. Springer, Berlin, pp 562–573 https://doi.org/10.1007/978-981-15-3380-8_49

Conroy NK, Rubin VL, Chen Y (2015) Automatic deception detection: methods for finding fake news. Proc Assoc Inf Sci Technol 52(1):1–4. https://doi.org/10.1002/pra2.2015.145052010082

Cooke NA (2017) Posttruth, truthiness, and alternative facts: Information behavior and critical information consumption for a new age. Libr Q 87(3):211–221. https://doi.org/10.1086/692298

Coscia M, Rossi L (2020) Distortions of political bias in crowdsourced misinformation flagging. J R Soc Interface 17(167):20200020. https://doi.org/10.1098/rsif.2020.0020

Dame Adjin-Tettey T (2022) Combating fake news, disinformation, and misinformation: experimental evidence for media literacy education. Cogent Arts Human 9(1):2037229. https://doi.org/10.1080/23311983.2022.2037229

Deepak S, Chitturi B (2020) Deep neural approach to fake-news identification. Procedia Comput Sci 167:2236–2243. https://doi.org/10.1016/j.procs.2020.03.276

de Cock Buning M (2018) A multi-dimensional approach to disinformation: report of the independent high level group on fake news and online disinformation. Publications Office of the European Union

Del Vicario M, Quattrociocchi W, Scala A, Zollo F (2019) Polarization and fake news: early warning of potential misinformation targets. ACM Trans Web (TWEB) 13(2):1–22. https://doi.org/10.1145/3316809

Demuyakor J, Opata EM (2022) Fake news on social media: predicting which media format influences fake news most on facebook. J Intell Commun. https://doi.org/10.54963/jic.v2i1.56

Derakhshan H, Wardle C (2017) Information disorder: definitions. In: Understanding and addressing the disinformation ecosystem, pp 5–12

Desai AN, Ruidera D, Steinbrink JM, Granwehr B, Lee DH (2022) Misinformation and disinformation: the potential disadvantages of social media in infectious disease and how to combat them. Clin Infect Dis 74(Supplement–3):e34–e39. https://doi.org/10.1093/cid/ciac109

Di Domenico G, Sit J, Ishizaka A, Nunan D (2021) Fake news, social media and marketing: a systematic review. J Bus Res 124:329–341. https://doi.org/10.1016/j.jbusres.2020.11.037

Dias N, Pennycook G, Rand DG (2020) Emphasizing publishers does not effectively reduce susceptibility to misinformation on social media. Harv Kennedy School Misinform Rev. https://doi.org/10.37016/mr-2020-001

DiCicco KW, Agarwal N (2020) Blockchain technology-based solutions to fight misinformation: a survey. In: Disinformation, misinformation, and fake news in social media. Springer, Berlin, pp 267–281, https://doi.org/10.1007/978-3-030-42699-6_14

Douglas KM, Uscinski JE, Sutton RM, Cichocka A, Nefes T, Ang CS, Deravi F (2019) Understanding conspiracy theories. Polit Psychol 40:3–35. https://doi.org/10.1111/pops.12568

Edgerly S, Mourão RR, Thorson E, Tham SM (2020) When do audiences verify? How perceptions about message and source influence audience verification of news headlines. J Mass Commun Q 97(1):52–71. https://doi.org/10.1177/1077699019864680

Egelhofer JL, Lecheler S (2019) Fake news as a two-dimensional phenomenon: a framework and research agenda. Ann Int Commun Assoc 43(2):97–116. https://doi.org/10.1080/23808985.2019.1602782

Elhadad MK, Li KF, Gebali F (2019) A novel approach for selecting hybrid features from online news textual metadata for fake news detection. In: International conference on p2p, parallel, grid, cloud and internet computing. Springer, Berlin, pp 914–925, https://doi.org/10.1007/978-3-030-33509-0_86

ERGA (2018) Fake news, and the information disorder. European Broadcasting Union (EBU)

ERGA (2021) Notions of disinformation and related concepts. European Regulators Group for Audiovisual Media Services (ERGA)

Escolà-Gascón Á (2021) New techniques to measure lie detection using Covid-19 fake news and the Multivariable Multiaxial Suggestibility Inventory-2 (MMSI-2). Comput Hum Behav Rep 3:100049. https://doi.org/10.1016/j.chbr.2020.100049

Fazio L (2020) Pausing to consider why a headline is true or false can help reduce the sharing of false news. Harv Kennedy School Misinformation Rev. https://doi.org/10.37016/mr-2020-009

Ferrara E, Varol O, Davis C, Menczer F, Flammini A (2016) The rise of social bots. Commun ACM 59(7):96–104. https://doi.org/10.1145/2818717

Flynn D, Nyhan B, Reifler J (2017) The nature and origins of misperceptions: understanding false and unsupported beliefs about politics. Polit Psychol 38:127–150. https://doi.org/10.1111/pops.12394

Fraga-Lamas P, Fernández-Caramés TM (2020) Fake news, disinformation, and deepfakes: leveraging distributed ledger technologies and blockchain to combat digital deception and counterfeit reality. IT Prof 22(2):53–59. https://doi.org/10.1109/MITP.2020.2977589

Freeman D, Waite F, Rosebrock L, Petit A, Causier C, East A, Jenner L, Teale AL, Carr L, Mulhall S et al (2020) Coronavirus conspiracy beliefs, mistrust, and compliance with government guidelines in England. Psychol Med. https://doi.org/10.1017/S0033291720001890

Friggeri A, Adamic L, Eckles D, Cheng J (2014) Rumor cascades. In: Proceedings of the international AAAI conference on web and social media

García SA, García GG, Prieto MS, Moreno Guerrero AJ, Rodríguez Jiménez C (2020) The impact of term fake news on the scientific community. Scientific performance and mapping in web of science. Soc Sci. https://doi.org/10.3390/socsci9050073

Garrett RK, Bond RM (2021) Conservatives’ susceptibility to political misperceptions. Sci Adv. https://doi.org/10.1126/sciadv.abf1234

Giachanou A, Ríssola EA, Ghanem B, Crestani F, Rosso P (2020) The role of personality and linguistic patterns in discriminating between fake news spreaders and fact checkers. In: International conference on applications of natural language to information systems. Springer, Berlin, pp 181–192 https://doi.org/10.1007/978-3-030-51310-8_17

Golbeck J, Mauriello M, Auxier B, Bhanushali KH, Bonk C, Bouzaghrane MA, Buntain C, Chanduka R, Cheakalos P, Everett JB et al (2018) Fake news vs satire: a dataset and analysis. In: Proceedings of the 10th ACM conference on web science, pp 17–21, https://doi.org/10.1145/3201064.3201100

Goldani MH, Momtazi S, Safabakhsh R (2021) Detecting fake news with capsule neural networks. Appl Soft Comput 101:106991. https://doi.org/10.1016/j.asoc.2020.106991

Goldstein I, Yang L (2019) Good disclosure, bad disclosure. J Financ Econ 131(1):118–138. https://doi.org/10.1016/j.jfineco.2018.08.004

Grinberg N, Joseph K, Friedland L, Swire-Thompson B, Lazer D (2019) Fake news on Twitter during the 2016 US presidential election. Science 363(6425):374–378. https://doi.org/10.1126/science.aau2706

Guadagno RE, Guttieri K (2021) Fake news and information warfare: an examination of the political and psychological processes from the digital sphere to the real world. In: Research anthology on fake news, political warfare, and combatting the spread of misinformation. IGI Global, pp 218–242 https://doi.org/10.4018/978-1-7998-7291-7.ch013

Guess A, Nagler J, Tucker J (2019) Less than you think: prevalence and predictors of fake news dissemination on Facebook. Sci Adv. https://doi.org/10.1126/sciadv.aau4586

Guo C, Cao J, Zhang X, Shu K, Yu M (2019) Exploiting emotions for fake news detection on social media. arXiv preprint arXiv:1903.01728

Guo B, Ding Y, Yao L, Liang Y, Yu Z (2020) The future of false information detection on social media: new perspectives and trends. ACM Comput Surv (CSUR) 53(4):1–36. https://doi.org/10.1145/3393880

Gupta A, Li H, Farnoush A, Jiang W (2022) Understanding patterns of covid infodemic: a systematic and pragmatic approach to curb fake news. J Bus Res 140:670–683. https://doi.org/10.1016/j.jbusres.2021.11.032

Ha L, Andreu Perez L, Ray R (2021) Mapping recent development in scholarship on fake news and misinformation, 2008 to 2017: disciplinary contribution, topics, and impact. Am Behav Sci 65(2):290–315. https://doi.org/10.1177/0002764219869402

Habib A, Asghar MZ, Khan A, Habib A, Khan A (2019) False information detection in online content and its role in decision making: a systematic literature review. Soc Netw Anal Min 9(1):1–20. https://doi.org/10.1007/s13278-019-0595-5

Hage H, Aïmeur E, Guedidi A (2021) Understanding the landscape of online deception. In: Research anthology on fake news, political warfare, and combatting the spread of misinformation. IGI Global, pp 39–66. https://doi.org/10.4018/978-1-7998-2543-2.ch014

Hakak S, Alazab M, Khan S, Gadekallu TR, Maddikunta PKR, Khan WZ (2021) An ensemble machine learning approach through effective feature extraction to classify fake news. Futur Gener Comput Syst 117:47–58. https://doi.org/10.1016/j.future.2020.11.022

Hamdi T, Slimi H, Bounhas I, Slimani Y (2020) A hybrid approach for fake news detection in Twitter based on user features and graph embedding. In: International conference on distributed computing and internet technology. Springer, Berlin, pp 266–280. https://doi.org/10.1007/978-3-030-36987-3_17

Hameleers M (2022) Separating truth from lies: comparing the effects of news media literacy interventions and fact-checkers in response to political misinformation in the us and netherlands. Inf Commun Soc 25(1):110–126. https://doi.org/10.1080/1369118X.2020.1764603

Hameleers M, Powell TE, Van Der Meer TG, Bos L (2020) A picture paints a thousand lies? The effects and mechanisms of multimodal disinformation and rebuttals disseminated via social media. Polit Commun 37(2):281–301. https://doi.org/10.1080/10584609.2019.1674979

Hameleers M, Brosius A, de Vreese CH (2022) Whom to trust? media exposure patterns of citizens with perceptions of misinformation and disinformation related to the news media. Eur J Commun. https://doi.org/10.1177/02673231211072667

Hartley K, Vu MK (2020) Fighting fake news in the Covid-19 era: policy insights from an equilibrium model. Policy Sci 53(4):735–758. https://doi.org/10.1007/s11077-020-09405-z

Hasan HR, Salah K (2019) Combating deepfake videos using blockchain and smart contracts. IEEE Access 7:41596–41606. https://doi.org/10.1109/ACCESS.2019.2905689

Hiriyannaiah S, Srinivas A, Shetty GK, Siddesh G, Srinivasa K (2020) A computationally intelligent agent for detecting fake news using generative adversarial networks. Hybrid computational intelligence: challenges and applications. pp 69–96 https://doi.org/10.1016/B978-0-12-818699-2.00004-4

Hosseinimotlagh S, Papalexakis EE (2018) Unsupervised content-based identification of fake news articles with tensor decomposition ensembles. In: Proceedings of the workshop on misinformation and misbehavior mining on the web (MIS2)

Huckle S, White M (2017) Fake news: a technological approach to proving the origins of content, using blockchains. Big Data 5(4):356–371. https://doi.org/10.1089/big.2017.0071

Huffaker JS, Kummerfeld JK, Lasecki WS, Ackerman MS (2020) Crowdsourced detection of emotionally manipulative language. In: Proceedings of the 2020 CHI conference on human factors in computing systems. pp 1–14 https://doi.org/10.1145/3313831.3376375

Ireton C, Posetti J (2018) Journalism, fake news & disinformation: handbook for journalism education and training. UNESCO Publishing, Paris

Islam MR, Liu S, Wang X, Xu G (2020) Deep learning for misinformation detection on online social networks: a survey and new perspectives. Soc Netw Anal Min 10(1):1–20. https://doi.org/10.1007/s13278-020-00696-x

Ismailov M, Tsikerdekis M, Zeadally S (2020) Vulnerabilities to online social network identity deception detection research and recommendations for mitigation. Fut Internet 12(9):148. https://doi.org/10.3390/fi12090148

Jakesch M, Koren M, Evtushenko A, Naaman M (2019) The role of source and expressive responding in political news evaluation. In: Computation and journalism symposium

Jamieson KH (2020) Cyberwar: how Russian hackers and trolls helped elect a president: what we don’t, can’t, and do know. Oxford University Press, Oxford. https://doi.org/10.1093/poq/nfy049

Jiang S, Chen X, Zhang L, Chen S, Liu H (2019) User-characteristic enhanced model for fake news detection in social media. In: CCF International conference on natural language processing and Chinese computing, Springer, Berlin, pp 634–646. https://doi.org/10.1007/978-3-030-32233-5_49

Jin Z, Cao J, Zhang Y, Luo J (2016) News verification by exploiting conflicting social viewpoints in microblogs. In: Proceedings of the AAAI conference on artificial intelligence

Jing TW, Murugesan RK (2018) A theoretical framework to build trust and prevent fake news in social media using blockchain. In: International conference of reliable information and communication technology. Springer, Berlin, pp 955–962, https://doi.org/10.1007/978-3-319-99007-1_88

Jones-Jang SM, Mortensen T, Liu J (2021) Does media literacy help identification of fake news? Information literacy helps, but other literacies don’t. Am Behav Sci 65(2):371–388. https://doi.org/10.1177/0002764219869406

Jungherr A, Schroeder R (2021) Disinformation and the structural transformations of the public arena: addressing the actual challenges to democracy. Soc Media Soc. https://doi.org/10.1177/2056305121988928

Kaliyar RK (2018) Fake news detection using a deep neural network. In: 2018 4th international conference on computing communication and automation (ICCCA), IEEE, pp 1–7 https://doi.org/10.1109/CCAA.2018.8777343

Kaliyar RK, Goswami A, Narang P, Sinha S (2020) Fndnet—a deep convolutional neural network for fake news detection. Cogn Syst Res 61:32–44. https://doi.org/10.1016/j.cogsys.2019.12.005

Kapantai E, Christopoulou A, Berberidis C, Peristeras V (2021) A systematic literature review on disinformation: toward a unified taxonomical framework. New Media Soc 23(5):1301–1326. https://doi.org/10.1177/1461444820959296

Kapusta J, Benko L, Munk M (2019) Fake news identification based on sentiment and frequency analysis. In: International conference Europe middle east and North Africa information systems and technologies to support learning. Springer, Berlin, pp 400–409. https://doi.org/10.1007/978-3-030-36778-7_44

Kaur S, Kumar P, Kumaraguru P (2020) Automating fake news detection system using multi-level voting model. Soft Comput 24(12):9049–9069. https://doi.org/10.1007/s00500-019-04436-y

Khan SA, Alkawaz MH, Zangana HM (2019) The use and abuse of social media for spreading fake news. In: 2019 IEEE international conference on automatic control and intelligent systems (I2CACIS), IEEE, pp 145–148. https://doi.org/10.1109/I2CACIS.2019.8825029

Kim J, Tabibian B, Oh A, Schölkopf B, Gomez-Rodriguez M (2018) Leveraging the crowd to detect and reduce the spread of fake news and misinformation. In: Proceedings of the eleventh ACM international conference on web search and data mining, pp 324–332. https://doi.org/10.1145/3159652.3159734

Klein D, Wueller J (2017) Fake news: a legal perspective. J Internet Law 20(10):5–13

Kogan S, Moskowitz TJ, Niessner M (2019) Fake news: evidence from financial markets. Available at SSRN 3237763

Kuklinski JH, Quirk PJ, Jerit J, Schwieder D, Rich RF (2000) Misinformation and the currency of democratic citizenship. J Polit 62(3):790–816. https://doi.org/10.1111/0022-3816.00033

Kumar S, Shah N (2018) False information on web and social media: a survey. arXiv preprint arXiv:1804.08559

Kumar S, West R, Leskovec J (2016) Disinformation on the web: impact, characteristics, and detection of Wikipedia hoaxes. In: Proceedings of the 25th international conference on world wide web, pp 591–602. https://doi.org/10.1145/2872427.2883085

La Barbera D, Roitero K, Demartini G, Mizzaro S, Spina D (2020) Crowdsourcing truthfulness: the impact of judgment scale and assessor bias. In: European conference on information retrieval. Springer, Berlin, pp 207–214. https://doi.org/10.1007/978-3-030-45442-5_26

Lanius C, Weber R, MacKenzie WI (2021) Use of bot and content flags to limit the spread of misinformation among social networks: a behavior and attitude survey. Soc Netw Anal Min 11(1):1–15. https://doi.org/10.1007/s13278-021-00739-x

Lazer DM, Baum MA, Benkler Y, Berinsky AJ, Greenhill KM, Menczer F, Metzger MJ, Nyhan B, Pennycook G, Rothschild D et al (2018) The science of fake news. Science 359(6380):1094–1096. https://doi.org/10.1126/science.aao2998

Le T, Shu K, Molina MD, Lee D, Sundar SS, Liu H (2019) 5 sources of clickbaits you should know! Using synthetic clickbaits to improve prediction and distinguish between bot-generated and human-written headlines. In: 2019 IEEE/ACM international conference on advances in social networks analysis and mining (ASONAM). IEEE, pp 33–40. https://doi.org/10.1145/3341161.3342875

Lewandowsky S (2020) Climate change, disinformation, and how to combat it. In: Annual Review of Public Health 42. https://doi.org/10.1146/annurev-publhealth-090419-102409

Liu Y, Wu YF (2018) Early detection of fake news on social media through propagation path classification with recurrent and convolutional networks. In: Proceedings of the AAAI conference on artificial intelligence, pp 354–361

Luo M, Hancock JT, Markowitz DM (2022) Credibility perceptions and detection accuracy of fake news headlines on social media: effects of truth-bias and endorsement cues. Commun Res 49(2):171–195. https://doi.org/10.1177/0093650220921321

Lutzke L, Drummond C, Slovic P, Árvai J (2019) Priming critical thinking: simple interventions limit the influence of fake news about climate change on Facebook. Glob Environ Chang 58:101964. https://doi.org/10.1016/j.gloenvcha.2019.101964

Maertens R, Anseel F, van der Linden S (2020) Combatting climate change misinformation: evidence for longevity of inoculation and consensus messaging effects. J Environ Psychol 70:101455. https://doi.org/10.1016/j.jenvp.2020.101455

Mahabub A (2020) A robust technique of fake news detection using ensemble voting classifier and comparison with other classifiers. SN Applied Sciences 2(4):1–9. https://doi.org/10.1007/s42452-020-2326-y

Mahbub S, Pardede E, Kayes A, Rahayu W (2019) Controlling astroturfing on the internet: a survey on detection techniques and research challenges. Int J Web Grid Serv 15(2):139–158. https://doi.org/10.1504/IJWGS.2019.099561

Marsden C, Meyer T, Brown I (2020) Platform values and democratic elections: how can the law regulate digital disinformation? Comput Law Secur Rev 36:105373. https://doi.org/10.1016/j.clsr.2019.105373

Masciari E, Moscato V, Picariello A, Sperlí G (2020) Detecting fake news by image analysis. In: Proceedings of the 24th symposium on international database engineering and applications, pp 1–5. https://doi.org/10.1145/3410566.3410599

Mazzeo V, Rapisarda A (2022) Investigating fake and reliable news sources using complex networks analysis. Front Phys 10:886544. https://doi.org/10.3389/fphy.2022.886544

McGrew S (2020) Learning to evaluate: an intervention in civic online reasoning. Comput Educ 145:103711. https://doi.org/10.1016/j.compedu.2019.103711

McGrew S, Breakstone J, Ortega T, Smith M, Wineburg S (2018) Can students evaluate online sources? Learning from assessments of civic online reasoning. Theory Res Soc Educ 46(2):165–193. https://doi.org/10.1080/00933104.2017.1416320

Meel P, Vishwakarma DK (2020) Fake news, rumor, information pollution in social media and web: a contemporary survey of state-of-the-arts, challenges and opportunities. Expert Syst Appl 153:112986. https://doi.org/10.1016/j.eswa.2019.112986

Meese J, Frith J, Wilken R (2020) Covid-19, 5G conspiracies and infrastructural futures. Media Int Aust 177(1):30–46. https://doi.org/10.1177/1329878X20952165

Metzger MJ, Hartsell EH, Flanagin AJ (2020) Cognitive dissonance or credibility? A comparison of two theoretical explanations for selective exposure to partisan news. Commun Res 47(1):3–28. https://doi.org/10.1177/0093650215613136

Micallef N, He B, Kumar S, Ahamad M, Memon N (2020) The role of the crowd in countering misinformation: a case study of the Covid-19 infodemic. arXiv preprint arXiv:2011.05773

Mihailidis P, Viotty S (2017) Spreadable spectacle in digital culture: civic expression, fake news, and the role of media literacies in “post-fact society. Am Behav Sci 61(4):441–454. https://doi.org/10.1177/0002764217701217

Mishra R (2020) Fake news detection using higher-order user to user mutual-attention progression in propagation paths. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops, pp 652–653

Mishra S, Shukla P, Agarwal R (2022) Analyzing machine learning enabled fake news detection techniques for diversified datasets. Wirel Commun Mobile Comput. https://doi.org/10.1155/2022/1575365

Molina MD, Sundar SS, Le T, Lee D (2021) “Fake news’’ is not simply false information: a concept explication and taxonomy of online content. Am Behav Sci 65(2):180–212. https://doi.org/10.1177/0002764219878224

Moro C, Birt JR (2022) Review bombing is a dirty practice, but research shows games do benefit from online feedback. Conversation. https://research.bond.edu.au/en/publications/review-bombing-is-a-dirty-practice-but-research-shows-games-do-be

Mustafaraj E, Metaxas PT (2017) The fake news spreading plague: was it preventable? In: Proceedings of the 2017 ACM on web science conference, pp 235–239. https://doi.org/10.1145/3091478.3091523

Nagel TW (2022) Measuring fake news acumen using a news media literacy instrument. J Media Liter Educ 14(1):29–42. https://doi.org/10.23860/JMLE-2022-14-1-3

Nakov P (2020) Can we spot the “fake news” before it was even written? arXiv preprint arXiv:2008.04374

Nekmat E (2020) Nudge effect of fact-check alerts: source influence and media skepticism on sharing of news misinformation in social media. Soc Media Soc. https://doi.org/10.1177/2056305119897322

Nygren T, Brounéus F, Svensson G (2019) Diversity and credibility in young people’s news feeds: a foundation for teaching and learning citizenship in a digital era. J Soc Sci Educ 18(2):87–109. https://doi.org/10.4119/jsse-917

Nyhan B, Reifler J (2015) Displacing misinformation about events: an experimental test of causal corrections. J Exp Polit Sci 2(1):81–93. https://doi.org/10.1017/XPS.2014.22

Nyhan B, Porter E, Reifler J, Wood TJ (2020) Taking fact-checks literally but not seriously? The effects of journalistic fact-checking on factual beliefs and candidate favorability. Polit Behav 42(3):939–960. https://doi.org/10.1007/s11109-019-09528-x

Nyow NX, Chua HN (2019) Detecting fake news with tweets’ properties. In: 2019 IEEE conference on application, information and network security (AINS), IEEE, pp 24–29. https://doi.org/10.1109/AINS47559.2019.8968706

Ochoa IS, de Mello G, Silva LA, Gomes AJ, Fernandes AM, Leithardt VRQ (2019) Fakechain: a blockchain architecture to ensure trust in social media networks. In: International conference on the quality of information and communications technology. Springer, Berlin, pp 105–118. https://doi.org/10.1007/978-3-030-29238-6_8

Ozbay FA, Alatas B (2020) Fake news detection within online social media using supervised artificial intelligence algorithms. Physica A 540:123174. https://doi.org/10.1016/j.physa.2019.123174

Ozturk P, Li H, Sakamoto Y (2015) Combating rumor spread on social media: the effectiveness of refutation and warning. In: 2015 48th Hawaii international conference on system sciences, IEEE, pp 2406–2414. https://doi.org/10.1109/HICSS.2015.288

Parikh SB, Atrey PK (2018) Media-rich fake news detection: a survey. In: 2018 IEEE conference on multimedia information processing and retrieval (MIPR), IEEE, pp 436–441. https://doi.org/10.1109/MIPR.2018.00093

Parrish K (2018) Deep learning & machine learning: what’s the difference? Online: https://parsers.me/deep-learning-machine-learning-whats-the-difference/ . Accessed 20 May 2020

Paschen J (2019) Investigating the emotional appeal of fake news using artificial intelligence and human contributions. J Prod Brand Manag 29(2):223–233. https://doi.org/10.1108/JPBM-12-2018-2179

Pathak A, Srihari RK (2019) Breaking! Presenting fake news corpus for automated fact checking. In: Proceedings of the 57th annual meeting of the association for computational linguistics: student research workshop, pp 357–362

Peng J, Detchon S, Choo KKR, Ashman H (2017) Astroturfing detection in social media: a binary n-gram-based approach. Concurr Comput: Pract Exp 29(17):e4013. https://doi.org/10.1002/cpe.4013

Pennycook G, Rand DG (2019) Fighting misinformation on social media using crowdsourced judgments of news source quality. Proc Natl Acad Sci 116(7):2521–2526. https://doi.org/10.1073/pnas.1806781116

Pennycook G, Rand DG (2020) Who falls for fake news? The roles of bullshit receptivity, overclaiming, familiarity, and analytic thinking. J Pers 88(2):185–200. https://doi.org/10.1111/jopy.12476

Pennycook G, Bear A, Collins ET, Rand DG (2020a) The implied truth effect: attaching warnings to a subset of fake news headlines increases perceived accuracy of headlines without warnings. Manag Sci 66(11):4944–4957. https://doi.org/10.1287/mnsc.2019.3478

Pennycook G, McPhetres J, Zhang Y, Lu JG, Rand DG (2020b) Fighting Covid-19 misinformation on social media: experimental evidence for a scalable accuracy-nudge intervention. Psychol Sci 31(7):770–780. https://doi.org/10.1177/0956797620939054

Potthast M, Kiesel J, Reinartz K, Bevendorff J, Stein B (2017) A stylometric inquiry into hyperpartisan and fake news. arXiv preprint arXiv:1702.05638

Previti M, Rodriguez-Fernandez V, Camacho D, Carchiolo V, Malgeri M (2020) Fake news detection using time series and user features classification. In: International conference on the applications of evolutionary computation (Part of EvoStar), Springer, Berlin, pp 339–353. https://doi.org/10.1007/978-3-030-43722-0_22

Przybyla P (2020) Capturing the style of fake news. In: Proceedings of the AAAI conference on artificial intelligence, pp 490–497. https://doi.org/10.1609/aaai.v34i01.5386

Qayyum A, Qadir J, Janjua MU, Sher F (2019) Using blockchain to rein in the new post-truth world and check the spread of fake news. IT Prof 21(4):16–24. https://doi.org/10.1109/MITP.2019.2910503

Qian F, Gong C, Sharma K, Liu Y (2018) Neural user response generator: fake news detection with collective user intelligence. In: IJCAI, vol 18, pp 3834–3840. https://doi.org/10.24963/ijcai.2018/533

Raza S, Ding C (2022) Fake news detection based on news content and social contexts: a transformer-based approach. Int J Data Sci Anal 13(4):335–362. https://doi.org/10.1007/s41060-021-00302-z

Ricard J, Medeiros J (2020) Using misinformation as a political weapon: Covid-19 and Bolsonaro in Brazil. Harv Kennedy School misinformation Rev 1(3). https://misinforeview.hks.harvard.edu/article/using-misinformation-as-a-political-weapon-covid-19-and-bolsonaro-in-brazil/

Roozenbeek J, van der Linden S (2019) Fake news game confers psychological resistance against online misinformation. Palgrave Commun 5(1):1–10. https://doi.org/10.1057/s41599-019-0279-9

Roozenbeek J, van der Linden S, Nygren T (2020a) Prebunking interventions based on the psychological theory of “inoculation’’ can reduce susceptibility to misinformation across cultures. Harv Kennedy School Misinformation Rev. https://doi.org/10.37016//mr-2020-008

Roozenbeek J, Schneider CR, Dryhurst S, Kerr J, Freeman AL, Recchia G, Van Der Bles AM, Van Der Linden S (2020b) Susceptibility to misinformation about Covid-19 around the world. R Soc Open Sci 7(10):201199. https://doi.org/10.1098/rsos.201199

Rubin VL, Conroy N, Chen Y, Cornwell S (2016) Fake news or truth? Using satirical cues to detect potentially misleading news. In: Proceedings of the second workshop on computational approaches to deception detection, pp 7–17

Ruchansky N, Seo S, Liu Y (2017) Csi: a hybrid deep model for fake news detection. In: Proceedings of the 2017 ACM on conference on information and knowledge management, pp 797–806. https://doi.org/10.1145/3132847.3132877

Schuyler AJ (2019) Regulating facts: a procedural framework for identifying, excluding, and deterring the intentional or knowing proliferation of fake news online. Univ Ill JL Technol Pol’y, vol 2019, pp 211–240

Shae Z, Tsai J (2019) AI blockchain platform for trusting news. In: 2019 IEEE 39th international conference on distributed computing systems (ICDCS), IEEE, pp 1610–1619. https://doi.org/10.1109/ICDCS.2019.00160

Shang W, Liu M, Lin W, Jia M (2018) Tracing the source of news based on blockchain. In: 2018 IEEE/ACIS 17th international conference on computer and information science (ICIS), IEEE, pp 377–381. https://doi.org/10.1109/ICIS.2018.8466516

Shao C, Ciampaglia GL, Flammini A, Menczer F (2016) Hoaxy: A platform for tracking online misinformation. In: Proceedings of the 25th international conference companion on world wide web, pp 745–750. https://doi.org/10.1145/2872518.2890098

Shao C, Ciampaglia GL, Varol O, Yang KC, Flammini A, Menczer F (2018) The spread of low-credibility content by social bots. Nat Commun 9(1):1–9. https://doi.org/10.1038/s41467-018-06930-7

Shao C, Hui PM, Wang L, Jiang X, Flammini A, Menczer F, Ciampaglia GL (2018) Anatomy of an online misinformation network. PLoS ONE 13(4):e0196087. https://doi.org/10.1371/journal.pone.0196087

Sharma K, Qian F, Jiang H, Ruchansky N, Zhang M, Liu Y (2019) Combating fake news: a survey on identification and mitigation techniques. ACM Trans Intell Syst Technol (TIST) 10(3):1–42. https://doi.org/10.1145/3305260

Sharma K, Seo S, Meng C, Rambhatla S, Liu Y (2020) Covid-19 on social media: analyzing misinformation in Twitter conversations. arXiv preprint arXiv:2003.12309

Shen C, Kasra M, Pan W, Bassett GA, Malloch Y, O’Brien JF (2019) Fake images: the effects of source, intermediary, and digital media literacy on contextual assessment of image credibility online. New Media Soc 21(2):438–463. https://doi.org/10.1177/1461444818799526

Sherman IN, Redmiles EM, Stokes JW (2020) Designing indicators to combat fake media. arXiv preprint arXiv:2010.00544

Shi P, Zhang Z, Choo KKR (2019) Detecting malicious social bots based on clickstream sequences. IEEE Access 7:28855–28862. https://doi.org/10.1109/ACCESS.2019.2901864

Shu K, Sliva A, Wang S, Tang J, Liu H (2017) Fake news detection on social media: a data mining perspective. ACM SIGKDD Explor Newsl 19(1):22–36. https://doi.org/10.1145/3137597.3137600

Shu K, Mahudeswaran D, Wang S, Lee D, Liu H (2018a) Fakenewsnet: a data repository with news content, social context and spatialtemporal information for studying fake news on social media. arXiv preprint arXiv:1809.01286 , https://doi.org/10.1089/big.2020.0062

Shu K, Wang S, Liu H (2018b) Understanding user profiles on social media for fake news detection. In: 2018 IEEE conference on multimedia information processing and retrieval (MIPR), IEEE, pp 430–435. https://doi.org/10.1109/MIPR.2018.00092

Shu K, Wang S, Liu H (2019a) Beyond news contents: the role of social context for fake news detection. In: Proceedings of the twelfth ACM international conference on web search and data mining, pp 312–320. https://doi.org/10.1145/3289600.3290994

Shu K, Zhou X, Wang S, Zafarani R, Liu H (2019b) The role of user profiles for fake news detection. In: Proceedings of the 2019 IEEE/ACM international conference on advances in social networks analysis and mining, pp 436–439. https://doi.org/10.1145/3341161.3342927

Shu K, Bhattacharjee A, Alatawi F, Nazer TH, Ding K, Karami M, Liu H (2020a) Combating disinformation in a social media age. Wiley Interdiscip Rev: Data Min Knowl Discov 10(6):e1385. https://doi.org/10.1002/widm.1385

Shu K, Mahudeswaran D, Wang S, Liu H (2020b) Hierarchical propagation networks for fake news detection: investigation and exploitation. Proc Int AAAI Conf Web Soc Media 14:626–637. AAAI Press

Shu K, Wang S, Lee D, Liu H (2020c) Mining disinformation and fake news: concepts, methods, and recent advancements. In: Disinformation, misinformation, and fake news in social media. Springer, Berlin, pp 1–19 https://doi.org/10.1007/978-3-030-42699-6_1

Shu K, Zheng G, Li Y, Mukherjee S, Awadallah AH, Ruston S, Liu H (2020d) Early detection of fake news with multi-source weak social supervision. In: ECML/PKDD (3), pp 650–666

Singh VK, Ghosh I, Sonagara D (2021) Detecting fake news stories via multimodal analysis. J Am Soc Inf Sci 72(1):3–17. https://doi.org/10.1002/asi.24359

Sintos S, Agarwal PK, Yang J (2019) Selecting data to clean for fact checking: minimizing uncertainty vs. maximizing surprise. Proc VLDB Endowm 12(13), 2408–2421. https://doi.org/10.14778/3358701.3358708

Snow J (2017) Can AI win the war against fake news? MIT Technology Review Online: https://www.technologyreview.com/s/609717/can-ai-win-the-war-against-fake-news/ . Accessed 3 Oct. 2020

Song G, Kim S, Hwang H, Lee K (2019) Blockchain-based notarization for social media. In: 2019 IEEE international conference on consumer clectronics (ICCE), IEEE, pp 1–2 https://doi.org/10.1109/ICCE.2019.8661978

Starbird K, Arif A, Wilson T (2019) Disinformation as collaborative work: Surfacing the participatory nature of strategic information operations. In: Proceedings of the ACM on human–computer interaction, vol 3(CSCW), pp 1–26 https://doi.org/10.1145/3359229

Sterret D, Malato D, Benz J, Kantor L, Tompson T, Rosenstiel T, Sonderman J, Loker K, Swanson E (2018) Who shared it? How Americans decide what news to trust on social media. Technical report, Norc Working Paper Series, WP-2018-001, pp 1–24

Sutton RM, Douglas KM (2020) Conspiracy theories and the conspiracy mindset: implications for political ideology. Curr Opin Behav Sci 34:118–122. https://doi.org/10.1016/j.cobeha.2020.02.015

Tandoc EC Jr, Thomas RJ, Bishop L (2021) What is (fake) news? Analyzing news values (and more) in fake stories. Media Commun 9(1):110–119. https://doi.org/10.17645/mac.v9i1.3331

Tchakounté F, Faissal A, Atemkeng M, Ntyam A (2020) A reliable weighting scheme for the aggregation of crowd intelligence to detect fake news. Information 11(6):319. https://doi.org/10.3390/info11060319

Tchechmedjiev A, Fafalios P, Boland K, Gasquet M, Zloch M, Zapilko B, Dietze S, Todorov K (2019) Claimskg: a knowledge graph of fact-checked claims. In: International semantic web conference. Springer, Berlin, pp 309–324 https://doi.org/10.1007/978-3-030-30796-7_20

Treen KMd, Williams HT, O’Neill SJ (2020) Online misinformation about climate change. Wiley Interdiscip Rev Clim Change 11(5):e665. https://doi.org/10.1002/wcc.665

Tsang SJ (2020) Motivated fake news perception: the impact of news sources and policy support on audiences’ assessment of news fakeness. J Mass Commun Q. https://doi.org/10.1177/1077699020952129

Tschiatschek S, Singla A, Gomez Rodriguez M, Merchant A, Krause A (2018) Fake news detection in social networks via crowd signals. In: Companion proceedings of the the web conference 2018, pp 517–524. https://doi.org/10.1145/3184558.3188722

Uppada SK, Manasa K, Vidhathri B, Harini R, Sivaselvan B (2022) Novel approaches to fake news and fake account detection in OSNS: user social engagement and visual content centric model. Soc Netw Anal Min 12(1):1–19. https://doi.org/10.1007/s13278-022-00878-9

Van der Linden S, Roozenbeek J (2020) Psychological inoculation against fake news. In: Accepting, sharing, and correcting misinformation, the psychology of fake news. https://doi.org/10.4324/9780429295379-11

Van der Linden S, Panagopoulos C, Roozenbeek J (2020) You are fake news: political bias in perceptions of fake news. Media Cult Soc 42(3):460–470. https://doi.org/10.1177/0163443720906992

Valenzuela S, Muñiz C, Santos M (2022) Social media and belief in misinformation in mexico: a case of maximal panic, minimal effects? Int J Press Polit. https://doi.org/10.1177/19401612221088988

Vasu N, Ang B, Teo TA, Jayakumar S, Raizal M, Ahuja J (2018) Fake news: national security in the post-truth era. RSIS

Vereshchaka A, Cosimini S, Dong W (2020) Analyzing and distinguishing fake and real news to mitigate the problem of disinformation. In: Computational and mathematical organization theory, pp 1–15. https://doi.org/10.1007/s10588-020-09307-8

Verstraete M, Bambauer DE, Bambauer JR (2017) Identifying and countering fake news. Arizona legal studies discussion paper 73(17-15). https://doi.org/10.2139/ssrn.3007971

Vilmer J, Escorcia A, Guillaume M, Herrera J (2018) Information manipulation: a challenge for our democracies. In: Report by the Policy Planning Staff (CAPS) of the ministry for europe and foreign affairs, and the institute for strategic research (RSEM) of the Ministry for the Armed Forces

Vishwakarma DK, Varshney D, Yadav A (2019) Detection and veracity analysis of fake news via scrapping and authenticating the web search. Cogn Syst Res 58:217–229. https://doi.org/10.1016/j.cogsys.2019.07.004

Vlachos A, Riedel S (2014) Fact checking: task definition and dataset construction. In: Proceedings of the ACL 2014 workshop on language technologies and computational social science, pp 18–22. https://doi.org/10.3115/v1/W14-2508

von der Weth C, Abdul A, Fan S, Kankanhalli M (2020) Helping users tackle algorithmic threats on social media: a multimedia research agenda. In: Proceedings of the 28th ACM international conference on multimedia, pp 4425–4434. https://doi.org/10.1145/3394171.3414692

Vosoughi S, Roy D, Aral S (2018) The spread of true and false news online. Science 359(6380):1146–1151. https://doi.org/10.1126/science.aap9559

Vraga EK, Bode L (2017) Using expert sources to correct health misinformation in social media. Sci Commun 39(5):621–645. https://doi.org/10.1177/1075547017731776

Waldman AE (2017) The marketplace of fake news. Univ Pa J Const Law 20:845

Wang WY (2017) “Liar, liar pants on fire”: a new benchmark dataset for fake news detection. arXiv preprint arXiv:1705.00648

Wang L, Wang Y, de Melo G, Weikum G (2019a) Understanding archetypes of fake news via fine-grained classification. Soc Netw Anal Min 9(1):1–17. https://doi.org/10.1007/s13278-019-0580-z

Wang Y, Han H, Ding Y, Wang X, Liao Q (2019b) Learning contextual features with multi-head self-attention for fake news detection. In: International conference on cognitive computing. Springer, Berlin, pp 132–142. https://doi.org/10.1007/978-3-030-23407-2_11

Wang Y, McKee M, Torbica A, Stuckler D (2019c) Systematic literature review on the spread of health-related misinformation on social media. Soc Sci Med 240:112552. https://doi.org/10.1016/j.socscimed.2019.112552

Wang Y, Yang W, Ma F, Xu J, Zhong B, Deng Q, Gao J (2020) Weak supervision for fake news detection via reinforcement learning. In: Proceedings of the AAAI conference on artificial intelligence, pp 516–523. https://doi.org/10.1609/aaai.v34i01.5389

Wardle C (2017) Fake news. It’s complicated. Online: https://medium.com/1st-draft/fake-news-its-complicated-d0f773766c79 . Accessed 3 Oct 2020

Wardle C (2018) The need for smarter definitions and practical, timely empirical research on information disorder. Digit J 6(8):951–963. https://doi.org/10.1080/21670811.2018.1502047

Wardle C, Derakhshan H (2017) Information disorder: toward an interdisciplinary framework for research and policy making. Council Eur Rep 27:1–107

Weiss AP, Alwan A, Garcia EP, Garcia J (2020) Surveying fake news: assessing university faculty’s fragmented definition of fake news and its impact on teaching critical thinking. Int J Educ Integr 16(1):1–30. https://doi.org/10.1007/s40979-019-0049-x

Wu L, Liu H (2018) Tracing fake-news footprints: characterizing social media messages by how they propagate. In: Proceedings of the eleventh ACM international conference on web search and data mining, pp 637–645. https://doi.org/10.1145/3159652.3159677

Wu L, Rao Y (2020) Adaptive interaction fusion networks for fake news detection. arXiv preprint arXiv:2004.10009

Wu L, Morstatter F, Carley KM, Liu H (2019) Misinformation in social media: definition, manipulation, and detection. ACM SIGKDD Explor Newsl 21(2):80–90. https://doi.org/10.1145/3373464.3373475

Wu Y, Ngai EW, Wu P, Wu C (2022) Fake news on the internet: a literature review, synthesis and directions for future research. Intern Res. https://doi.org/10.1108/INTR-05-2021-0294

Xu K, Wang F, Wang H, Yang B (2019) Detecting fake news over online social media via domain reputations and content understanding. Tsinghua Sci Technol 25(1):20–27. https://doi.org/10.26599/TST.2018.9010139

Yang F, Pentyala SK, Mohseni S, Du M, Yuan H, Linder R, Ragan ED, Ji S, Hu X (2019a) Xfake: explainable fake news detector with visualizations. In: The world wide web conference, pp 3600–3604. https://doi.org/10.1145/3308558.3314119

Yang X, Li Y, Lyu S (2019b) Exposing deep fakes using inconsistent head poses. In: ICASSP 2019-2019 IEEE international conference on acoustics, speech and signal processing (ICASSP), IEEE, pp 8261–8265. https://doi.org/10.1109/ICASSP.2019.8683164

Yaqub W, Kakhidze O, Brockman ML, Memon N, Patil S (2020) Effects of credibility indicators on social media news sharing intent. In: Proceedings of the 2020 CHI conference on human factors in computing systems, pp 1–14. https://doi.org/10.1145/3313831.3376213

Yavary A, Sajedi H, Abadeh MS (2020) Information verification in social networks based on user feedback and news agencies. Soc Netw Anal Min 10(1):1–8. https://doi.org/10.1007/s13278-019-0616-4

Yazdi KM, Yazdi AM, Khodayi S, Hou J, Zhou W, Saedy S (2020) Improving fake news detection using k-means and support vector machine approaches. Int J Electron Commun Eng 14(2):38–42. https://doi.org/10.5281/zenodo.3669287

Zannettou S, Sirivianos M, Blackburn J, Kourtellis N (2019) The web of false information: rumors, fake news, hoaxes, clickbait, and various other shenanigans. J Data Inf Qual (JDIQ) 11(3):1–37. https://doi.org/10.1145/3309699

Zellers R, Holtzman A, Rashkin H, Bisk Y, Farhadi A, Roesner F, Choi Y (2019) Defending against neural fake news. arXiv preprint arXiv:1905.12616

Zhang X, Ghorbani AA (2020) An overview of online fake news: characterization, detection, and discussion. Inf Process Manag 57(2):102025. https://doi.org/10.1016/j.ipm.2019.03.004

Zhang J, Dong B, Philip SY (2020) Fakedetector: effective fake news detection with deep diffusive neural network. In: 2020 IEEE 36th international conference on data engineering (ICDE), IEEE, pp 1826–1829. 10.1109/ICDE48307.2020.00180

Zhang Q, Lipani A, Liang S, Yilmaz E (2019a) Reply-aided detection of misinformation via Bayesian deep learning. In: The world wide web conference, pp 2333–2343. https://doi.org/10.1145/3308558.3313718

Zhang X, Karaman S, Chang SF (2019b) Detecting and simulating artifacts in GAN fake images. In: 2019 IEEE international workshop on information forensics and security (WIFS), IEEE, pp 1–6 https://doi.org/10.1109/WIFS47025.2019.9035107

Zhou X, Zafarani R (2020) A survey of fake news: fundamental theories, detection methods, and opportunities. ACM Comput Surv (CSUR) 53(5):1–40. https://doi.org/10.1145/3395046

Zubiaga A, Aker A, Bontcheva K, Liakata M, Procter R (2018) Detection and resolution of rumours in social media: a survey. ACM Comput Surv (CSUR) 51(2):1–36. https://doi.org/10.1145/3161603

Download references

This work is supported in part by Canada’s Natural Sciences and Engineering Research Council.

Author information

Authors and Affiliations

Department of Computer Science and Operations Research (DIRO), University of Montreal, Montreal, Canada

Esma Aïmeur, Sabrine Amri & Gilles Brassard


Contributions

The order of authors is alphabetic as is customary in the third author’s field. The lead author was Sabrine Amri, who collected and analyzed the data and wrote a first draft of the paper, all along under the supervision and tight guidance of Esma Aïmeur. Gilles Brassard reviewed, criticized and polished the work into its final form.

Corresponding author

Correspondence to Sabrine Amri.

Ethics declarations

Conflict of interest.

On behalf of all authors, the corresponding author states that there is no conflict of interest.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix: A Comparison of AI-based fake news detection techniques

This Appendix consists only of the rather long Table 11. It compares the fake news detection solutions based on artificial intelligence that we have reviewed according to their main approaches, the methodologies that were used, and the models, as explained in Sect. 6.2.2.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article

Aïmeur, E., Amri, S. & Brassard, G. Fake news, disinformation and misinformation in social media: a review. Soc. Netw. Anal. Min. 13 , 30 (2023). https://doi.org/10.1007/s13278-023-01028-5


Received: 20 October 2022

Revised: 07 January 2023

Accepted: 12 January 2023

Published: 09 February 2023

DOI: https://doi.org/10.1007/s13278-023-01028-5


Keywords: Disinformation · Misinformation · Information disorder · Online deception · Online social networks


Open Access

Peer-reviewed

Research Article

A systematic review on fake news research through the lens of news creation and consumption: Research efforts, challenges, and future directions

Authors: Bogoan Kim, Aiping Xiong, Dongwon Lee, Kyungsik Han

Affiliations: School of Intelligence Computing, Hanyang University, Seoul, Republic of Korea; College of Information Sciences and Technology, Pennsylvania State University, State College, PA, United States of America

Author roles (CRediT) spanned conceptualization, data curation, formal analysis, funding acquisition, investigation, methodology, project administration, resources, software, supervision, validation, visualization, and writing (original draft, review and editing).

* E-mail: [email protected]

  • Published: December 9, 2021
  • https://doi.org/10.1371/journal.pone.0260080

28 Dec 2023: The PLOS One Staff (2023) Correction: A systematic review on fake news research through the lens of news creation and consumption: Research efforts, challenges, and future directions. PLOS ONE 18(12): e0296554. https://doi.org/10.1371/journal.pone.0296554


Although fake news creation and consumption are mutually related and each can turn into the other, our review indicates that a significant amount of research has primarily focused on news creation. To mitigate this research gap, we present a comprehensive survey of fake news research, conducted in the fields of computer and social sciences, through the lens of news creation and consumption with internal and external factors.

We collect 2,277 fake news-related articles by searching six primary publishers (ACM, IEEE, arXiv, APA, ELSEVIER, and Wiley) from July to September 2020. These articles are screened according to specific inclusion criteria (see Fig 1). Eligible articles are categorized, and temporal trends of fake news research are examined.

As a way to acquire a more comprehensive understanding of fake news and identify effective countermeasures, our review suggests (1) developing a computational model that considers the characteristics of news consumption environments by leveraging insights from social science, (2) understanding the diversity of news consumers through mental models, and (3) increasing consumers’ awareness of the characteristics and impacts of fake news through the support of transparent information access and education.

We discuss the importance and direction of supporting one’s “digital media literacy” in various news generation and consumption environments through the convergence of computational and social science research.

Citation: Kim B, Xiong A, Lee D, Han K (2021) A systematic review on fake news research through the lens of news creation and consumption: Research efforts, challenges, and future directions. PLoS ONE 16(12): e0260080. https://doi.org/10.1371/journal.pone.0260080

Editor: Luigi Lavorgna, Universita degli Studi della Campania Luigi Vanvitelli, ITALY

Received: March 24, 2021; Accepted: November 2, 2021; Published: December 9, 2021

Copyright: © 2021 Kim et al. This is an open access article distributed under the terms of the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: All relevant data are within the manuscript.

Funding: This research was supported by the Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (2019-0-01584, 2020-0-01373).

Competing interests: The authors have declared that no competing interests exist.

1 Introduction

The spread of fake news not only deceives the public, but also affects society, politics, the economy and culture. For instance, Buzzfeed ( https://www.buzzfeed.com/ ) compared engagement (e.g., likes, comments, share activities) with the 20 real news and 20 fake news articles that spread the most on Facebook during the last three months of the 2016 US Presidential Election. According to the results, engagement with the fake news (8.7 million) was higher than with the mainstream news (7.3 million), and 17 of the 20 fake news stories favored the eventual winner of the election [ 1 ]. Pakistan’s Ministry of Defense posted a tweet fiercely condemning Israel after coming to believe that Israel had threatened Pakistan with nuclear weapons, a threat that was later found to be false [ 2 ]. Recently, the spread of the absurd rumor that COVID-19 propagates through 5G base stations in the UK upset many people and resulted in a base station being set on fire [ 3 ].

The fake news phenomenon has been rapidly evolving with the emergence of social media [ 4 , 5 ]. Fake news can be quickly shared by friends, followers, or even strangers within only a few seconds. Repeating a series of these processes can lead the public to form a mistaken collective intelligence [ 6 ], which can further develop into diverse social problems (e.g., setting a base station on fire because of rumors). In addition, some people believe and propagate fake news due to their personal norms, regardless of the factuality of the content [ 7 ]. Research in social science has suggested that cognitive bias (e.g., confirmation bias, bandwagon effect, and choice-supportive bias) [ 8 ] is one of the most pivotal factors in irrational decisions regarding both the creation and consumption of fake news [ 9 , 10 ]. Cognitive bias greatly contributes to the formation and reinforcement of the echo chamber [ 11 ], meaning that news consumers share and consume information only in the direction of strengthening their existing beliefs [ 12 ].

Research using computational techniques (e.g., machine or deep learning) has been actively conducted for the past decade to investigate the current state of fake news and detect it effectively [ 13 ]. In particular, research into text-based feature selection and the development of detection models has been very active and extensive [ 14 – 17 ]. Research has also been active in the collection of fake news datasets [ 18 , 19 ] and fact-checking methodologies for model development [ 20 – 22 ]. Recently, Deepfake, which can manipulate images or videos through deep learning technology, has been used to create fake news images or videos, significantly increasing social concerns [ 23 ], and a growing body of research is being conducted to find ways of mitigating such concerns [ 24 – 26 ]. In addition, some research on system development (e.g., games to increase awareness of the negative aspects of fake news) has been conducted to educate the public and prevent them from falling into echo chambers, misunderstanding, wrong decision-making, blind belief, and the propagation of fake news [ 27 – 29 ].

While the creation and consumption of fake news are clearly different behaviors, due to the characteristics of the online environment (e.g., information can be easily created, shared, and consumed by anyone at any time from anywhere), the boundaries between fake news creators and consumers have started to blur. Depending on the situation, people can quickly change their roles from fake news consumers to creators, or vice versa (with or without intention). Furthermore, news creation and consumption are the most fundamental aspects that form the relationship between news and people. However, a significant amount of fake news research has focused on news creation, while considerably less attention has been paid to news consumption (see Figs 1 & 2 ). This suggests that we must consider fake news from the comprehensive perspective of both news creation and consumption.


Fig 2. The papers were published in IEEE, ACM, ELSEVIER, arXiv, Wiley, and APA from 2010 to 2020, classified by publisher, main category, sub-category, and evaluation method (left to right).

In this paper, we looked into fake news research through the lens of news creation and consumption ( Fig 3 ). Our survey results offer different yet salient insights on fake news research compared with other survey papers (e.g., [ 13 , 30 , 31 ]), which primarily focus on fake news creation. The main contributions of our survey are as follows:

  • We investigate trends in fake news research from 2010 to 2020 and confirm a need for applying a comprehensive perspective to fake news phenomenon.
  • We present fake news research through the lens of news creation and consumption with external and internal factors.
  • We examine key findings with a mental model approach, which highlights individuals’ differences in information understandings, expectations, or consumption.
  • We summarize our review and discuss complementary roles of computer and social sciences and potential future directions for fake news research.

Fig 3. We investigate the fake news research trend (Section 2) and examine fake news creation and consumption through the lenses of external and internal factors. We also investigate research efforts to mitigate the external factors of fake news creation and consumption: (a) indicates fake news creation (Section 3), and (b) indicates fake news consumption (Section 4). “Possible moves” indicates that news consumers “possibly” create/propagate fake news without being aware of any negative impact.

2 Fake news definition and trends

There is still no definition of fake news that can encompass false news and various types of disinformation (e.g., satire, fabricated content) and can reach a social consensus [ 30 ]. The definition continues to change over time and may vary depending on the research focus. Some research has defined fake news as false news based on the intention and factuality of the information [ 4 , 15 , 32 – 36 ]. For example, Allcott and Gentzkow [ 4 ] defined fake news as “news articles that are intentionally and verifiably false and could mislead readers.” On the other hand, other studies have defined it as “a news article or message published and propagated through media, carrying false information regardless of the means and motives behind it” [ 13 , 37 – 43 ]. Given this definition, fake news refers to false information that causes an individual to be deceived or doubt the truth, and fake news only serves its purpose if it actually deceives or confuses consumers. Zhou and Zafarani [ 31 ] proposed a broad definition (“Fake news is false news.”) that encompasses false online content and a narrow definition (“Fake news is intentionally and verifiably false news published by a news outlet.”). The narrow definition is valid from the fake news creation perspective. However, given that fake news creators and consumers are now interchangeable (e.g., news consumers also play the role of gatekeeper for fake news propagation), it has become important to understand and investigate fake news from the consumption perspective. Thus, in this paper, we use the broad definition of fake news.

Our research motivation for considering news creation and consumption in fake news research was based on a trend analysis. We collected 2,277 pieces of fake news-related literature using four keywords (i.e., fake news, false information, misinformation, rumor) to identify longitudinal trends of fake news research from 2010 to 2020. The data collection was conducted from July to September 2020. The criterion for data collection was whether any of these keywords appeared in the title or abstract. To reflect diverse research backgrounds/domains, we considered six primary publishers (ACM, IEEE, arXiv, APA, ELSEVIER, and Wiley). The number of papers collected for each publisher is as follows: 852 IEEE (37%), 639 ACM (28%), 463 ELSEVIER (20%), 142 arXiv (7%), 141 Wiley (6%), 40 APA (2%). We excluded 59 papers that did not have an abstract and used 2,218 papers for the analysis. We then randomly chose 200 papers, and two coders conducted manual inspection and categorization. The inter-coder reliability was verified with Cohen’s Kappa. The scores for each main/sub-category were higher than 0.72 (min: 0.72, max: 0.95, avg: 0.85), indicating that the inter-coder reliability lies between “substantial” and “perfect” [ 44 ]. Through the coding procedure, we excluded non-English studies (n = 12) and reports on study protocol only (n = 6), leaving 182 papers included in the synthesis. The PRISMA flow chart depicts the number of articles identified, included, and excluded (see Fig 1).
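To make the reliability check concrete, the snippet below is a minimal sketch (not the authors’ actual analysis code) of computing Cohen’s Kappa with scikit-learn; the two coders’ label lists are hypothetical placeholders.

```python
# A minimal sketch of the inter-coder reliability computation described above.
# The label lists are hypothetical: each entry is the main category one coder
# assigned to a given paper.
from sklearn.metrics import cohen_kappa_score

coder_a = ["creation", "creation", "consumption", "creation", "consumption"]
coder_b = ["creation", "consumption", "consumption", "creation", "consumption"]

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa: {kappa:.2f}")  # 0.61-0.80 is conventionally "substantial"
```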

The papers were categorized into two main categories: (1) creation (studies with efforts to detect fake news or mitigate the spread of fake news) and (2) consumption (studies that report the social impacts of fake news on individuals or societies and how to appropriately handle fake news). Each main category was then classified into sub-categories. Fig 4 shows the frequency of the entire literature by year and the overall trend of fake news research. It appears that the consumption perspective of fake news still has not received sufficient attention compared with the creation perspective ( Fig 4(a) ). Fake news studies have exploded since the 2016 US Presidential Election, and the increase in fake news research continues. In the creation category, the majority of papers (135 out of 158; 85%) were related to false information (e.g., fake news, rumor, clickbait, spam) detection models ( Fig 4(b) ). On the other hand, in the consumption category, much research pertains to data-driven fake news trend analysis (18 out of 42; 43%) or fake content consumption behavior (16 out of 42; 38%), including studies on media literacy education or echo chamber awareness ( Fig 4(c) ).

Fig 4. We collected 2,277 fake news-related papers and randomly chose and categorized 200 papers. Each marker indicates the number of fake news studies per type published in a given year. Fig 4(a) shows the research trend of news creation and consumption (main category). Fig 4(b) and 4(c) show trends in the sub-categories of news creation and consumption. In Fig 4(b), “Miscellaneous” includes studies on stance/propaganda detection and a survey paper. In Fig 4(c), “Data-driven fake news trend analysis” mainly covers studies reporting the influence of fake news spread around specific political/social events (e.g., fake news in the 2016 Presidential Election, rumors on Weibo after the 2015 Tianjin explosions). “Conspiracy theory” refers to an unverified rumor that was passed on to the public.

3 Fake news creation

Fake news is no longer merely propaganda spread by inflammatory politicians; it is also made for financial benefit or personal enjoyment [ 45 ]. With the development of social media platforms, people often create completely false information for reasons beyond satire. Furthermore, there is a vicious cycle in which this false information is abused by politicians and agitators.

Fake news creators are indiscriminately producing fake news while considering the behavioral and psychological characteristics of today’s news consumers [ 46 ]. For instance, the sleeper effect [ 47 ] refers to a phenomenon in which the persuasive effect of a message increases over time, even though its source shows low reliability. In other words, after a long period of time, memories of the source fade and only the content tends to be remembered, regardless of the source’s reliability. Through this process, less reliable information becomes more persuasive over time. Fake news creators have effectively created and propagated fake news by targeting the public’s preference for news consumption through peripheral processing routes [ 35 , 48 ].

Peripheral routes are based on the elaboration likelihood model (ELM) [ 49 ], one of the representative psychological theories of persuasive message processing. According to the ELM, the processing of a persuasive message can follow either the central or the peripheral route, depending on the level of involvement. On one hand, if the message recipient puts a great deal of cognitive effort into processing, the central route is chosen. On the other hand, if processing of the message is limited due to personal characteristics or distractions, the peripheral route is chosen. Through a peripheral route, a decision is made based on secondary cues (e.g., speakers, comments) rather than the logic or strength of the argument.

Wang et al. [ 50 ] demonstrated that most of the links shared or mentioned in social media have never even been clicked. This implies that many people perceive and process information in only a fragmentary way, such as via news headlines and the people sharing the news, rather than considering the logical flow of the news content.

In this section, we closely examined each of the external and internal factors affecting fake news creation, as well as the research efforts carried out to mitigate the negative results based on the fake news creation perspective.

3.1 External factors: Fake news creation facilitators

We identified two external factors that facilitate fake news creation and propagation: (1) the unification of news creation, consumption, and distribution, and (2) the misuse of AI technology (see Fig 5).

Fig 5. We identify two external factors—the unification of news creation, consumption, and distribution, and the misuse of AI technology—that facilitate fake news creation.

3.1.1 The unification of news creation, consumption, and distribution.

The public’s perception of news and the major media of news consumption have gradually changed. The public no longer passively consumes news exclusively through traditional news organizations with specific formats (e.g., the inverted pyramid style, verified sources), nor views news simply as a medium for information acquisition. The public’s active news consumption behaviors began in earnest with the advent of citizen journalism, which implements journalistic behavior based on citizen participation [ 51 ], and became commonplace with the emergence of social media. As a result, the public began to prefer interactive media, in which new information can be acquired, opinions can be offered, and the news can be discussed with other news consumers. This environment has motivated the public to create content about their beliefs and deliver it to many people as “news.” For example, a recent police crackdown video posted on social media quickly spread around the world, influencing protesters and civic movements, and was later reported by the mainstream media [ 52 ].

The boundaries between professional journalists and amateurs, as well as between news consumers and creators, are disappearing. This has led to a potential increase in deceptive communications, making news consumers suspicious and prone to misinterpreting reality. Online platforms (e.g., YouTube, Facebook) that allow users to freely produce and distribute content have been growing significantly. As a result, fake news content can be used to attract secondary income (e.g., multinational enterprises’ advertising fees), which contributes to accelerating fake news creation and propagation. An environment in which the public can consume only news that suits their preferences and personal cognitive biases has made it much easier for fake news creators to achieve their specific purposes (e.g., supporting a certain political party or a candidate they favor).

3.1.2 The misuse of AI technology.

The development of AI technology has made it easier to develop and utilize tools for creating fake news, and many studies have confirmed the impact of these technologies— (1) social bots, (2) trolls, and (3) fake media —on social networks and democracy over the past decade.

3.1.2.1 Social bots. Shao et al. [ 53 ] analyzed the patterns of fake news spread and confirmed that social bots play a significant role in fake news propagation, with automated social bot accounts especially active in the initial stage of spreading fake news. In general, it is not easy for the public to determine whether such accounts are people or bots. In addition, social bots are not illegal tools, and many companies legally purchase them as part of their marketing; thus, it is not easy to curb the use of social bots systematically.

3.1.2.2 Trolls. The term “trolls” refers to people who deliberately cause conflict or division by uploading inflammatory, provocative content or unrelated posts to online communities. They work with the aim of stimulating people’s feelings or beliefs and hindering mature discussions. For example, the Russian troll army has been active in social media to advance its political agenda and cause social turmoil in the US [ 54 ]. Zannettou et al. [ 55 ] confirmed how effectively the Russian troll army has been spreading fake news URLs on Twitter and its significant impact on making other Twitter users believe misleading information.

3.1.2.3 Fake media. It is now possible to manipulate or reproduce content in 2D or even 3D through AI technology. In particular, the advent of fake news using Deepfake technology (superimposing images onto an original video to generate a different video) has raised another major social concern that had not been imagined before. Due to the popularity of image and video sharing on social media, such media types have become the dominant form of news consumption, and Deepfake technology itself is becoming more advanced and is being applied to images and videos in a variety of domains. We witnessed a video clip of former US President Barack Obama criticizing Donald Trump, which was fabricated by the US online media company BuzzFeed to highlight the influence and danger of Deepfake, causing substantial social confusion [ 56 ].

3.2 Internal factors: Fake news creation purposes

We identified three main purposes for fake news creation— (1) ideological purposes, (2) monetary purposes, and (3) fear/panic reduction .

3.2.1 Ideological purpose.

Fake news has been created and propagated for political purposes by individuals or groups, to positively affect the parties or candidates they support or to undermine those who are not on the same side. Fake news with this political purpose has been shown to negatively influence people and society. For instance, Russia created a fake Facebook account that caused many political disputes and enhanced polarization, affecting the 2016 US Presidential Election [ 57 ]. As polarization has intensified, there has also been a trend in the US of “unfriending” people who have different political tendencies [ 58 ]. This has led the public to decide whether to trust the news regardless of its factuality and has worsened in-group biases. During the Brexit campaign in the UK, many selective news articles were exposed on Facebook, and social bots and trolls were also confirmed as being involved in creating public opinion [ 59 , 60 ].

3.2.2 Monetary purpose.

Financial benefit is another strong motivation for many fake news creators [ 34 , 61 ]. Fake news websites usually reach the public through social media and make profits through posted advertisements. The majority of fake websites focus on earning advertising revenue by spreading fake news that attracts readers’ attention, rather than on political goals. For example, during the 2016 US Presidential Election, young Macedonians in their 10s and 20s used content from some extremely right-leaning blogs in the US to mass-produce fake news, earning huge advertising revenues [ 62 ]. This is also why fake news creators use provocative titles, such as clickbait headlines, to induce clicks and attempt to produce as many fake news articles as possible.

3.2.3 Fear and panic reduction.

In general, when epidemics spread around the world, rumors of absurd and false medical tips spread rapidly in social media. When there is a lack of verified information, people feel greatly anxious and afraid and easily believe such tips, regardless of whether they are true [ 63 , 64 ]. The term infodemic , which first appeared during the 2003 SARS pandemic, describes this phenomenon [ 65 ]. Regarding COVID-19, health authorities have recently announced that preventing the creation and propagation of fake news about the virus is as important as alleviating the contagious power of COVID-19 itself [ 66 , 67 ]. The spread of fake news due to the absence of verified information has become more common around health-related social issues (e.g., infectious diseases), natural disasters, etc. For example, people with disorders affecting cognition (e.g., neurodegenerative disorders) tend to easily believe unverified medical news [ 68 – 70 ]. Robledo and Jankovic [ 68 ] confirmed that many fake or exaggerated medical articles mislead people with Parkinson’s disease by giving false hopes and unfounded claims. Another example is a rumor that climate activists had set fires to raise awareness of climate change, which quickly spread as fake news when a wildfire broke out in Australia in 2019 [ 71 ]. As a result, people became suspicious and tended to believe that the causes of climate change (e.g., global warming) may not be related to humans, despite scientific evidence and research data.

3.3 Fake news detection and prevention

The main purpose of fake news creation is to make people confused or deceived, regardless of topic, social atmosphere, or timing. Because of this purpose, fake news tends to have similar frames and structural patterns. Many studies have attempted to mitigate the spread of fake news based on these identifiable patterns. In particular, research on developing computational models that detect fake information (text/images/videos) based on machine or deep learning techniques has been actively conducted, as summarized in Table 1. Other modeling studies address the credibility of weblogs [ 84 , 85 ], communication quality [ 88 ], susceptibility level [ 90 ], and political stance [ 86 , 87 ]. The table is intended to characterize the scope and direction of this line of research (e.g., the features employed in each model), not to present an exhaustive list.


3.3.1 Fake text information detection.

Research has considered many text-based features, such as structural information (e.g., website URLs and headlines with all capital letters or exclamations) and linguistic information (e.g., grammar, spelling, and punctuation errors) about the news. Key features for model development have also included the sentiment of news articles, the frequency of the words used, information about the users who left comments on the news articles, and the social network information among users (who were connected through activities of commenting, replying, liking, or following). These text-based models have been developed not only for fake news articles but also for other types of fake information, such as clickbaits, fake reviews, spams, and spammers. Many of the models developed in this context performed a binary classification that distinguished between fake and non-fake articles, with the accuracy of such models ranging from 86% to 93%. Mainstream news articles were used to build most models, and some studies used articles on social media, such as Twitter [ 15 , 17 ]. Some studies developed fake news detection models by extracting features from images, as well as text, in news articles [ 16 , 17 , 75 ].
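To ground the setup described above, here is a minimal sketch of such a binary fake/non-fake text classifier using TF-IDF features and logistic regression. It is not one of the surveyed models, and the toy articles and labels are hypothetical placeholders for a real labeled corpus.

```python
# A minimal sketch of binary fake/non-fake text classification with
# TF-IDF features and logistic regression. All articles and labels
# below are invented toy data, not a real corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

articles = [
    "BREAKING!!! Miracle cure hidden by doctors",
    "Government confirms aliens built the pyramids",
    "You won't believe what this celebrity did",
    "Scientists discover virus spread by 5G towers",
    "City council approves new public transit budget",
    "Researchers publish peer-reviewed vaccine study",
    "Central bank holds interest rates steady",
    "Local school announces updated lunch program",
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]  # 1 = fake, 0 = non-fake

X_train, X_test, y_train, y_test = train_test_split(
    articles, labels, test_size=0.25, random_state=42)

# Word and bigram frequencies stand in for the richer structural,
# linguistic, and social-network features described above.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```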

3.3.2 Fake visual media detection.

The generative adversarial network (GAN) is an unsupervised learning method that estimates the probability distribution of original data and allows an artificial neural network to produce similar distributions [ 109 ]. With the advancement of GANs, it has become possible to transform faces in images into those of others. However, photos of famous celebrities have been misused (e.g., being distorted into pornographic videos), increasing concerns about the possible misuse of such technology [ 110 ] (e.g., creating rumors about a certain political candidate). To mitigate this, research has been conducted to develop detection models for fake images. Most studies developed binary classification models (fake image or not), and the accuracy of fake image detection models was high, ranging from 81% to 97%. However, challenges still exist. Unlike fake news detection models that employ fact-checking websites or mainstream news for data verification or ground truth, fake image detection models were developed using the same or slightly modified image datasets (e.g., CelebA [ 97 ], FFHQ [ 99 ]), calling for the collection and preparation of a large amount of highly diverse data.
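For illustration, the sketch below shows the general shape of a binary fake-image classifier in PyTorch. It is a toy stand-in for the surveyed detectors, assuming 64x64 RGB inputs labeled 1 (generated) or 0 (authentic); random tensors replace a real dataset such as CelebA or FFHQ.

```python
# A toy binary fake-image detector: a small CNN producing one logit
# per image (1 = generated/fake, 0 = authentic). Random tensors stand
# in for a real training set; shapes assume 64x64 RGB inputs.
import torch
import torch.nn as nn

class FakeImageDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, 1)  # single fake/authentic logit

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = FakeImageDetector()
loss_fn = nn.BCEWithLogitsLoss()
images = torch.randn(4, 3, 64, 64)                # placeholder batch
targets = torch.tensor([[1.], [0.], [1.], [0.]])  # 1 = fake, 0 = authentic
loss = loss_fn(model(images), targets)
print(loss.item())
```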

4 Fake news consumption

4.1 External factors: Fake news consumption circumstances

The implicit social contract between civil society and the media has gradually disintegrated in modern society, and accordingly, citizens’ trust in the media began to decline [ 111 ]. In addition, the growing number of digital media platforms has changed people’s news consumption environment. This change has increased the diversity of news content and the autonomy of information creation and sharing. At the same time, however, it blurred the line between traditional mainstream media news and fake news in the Internet environment, contributing to polarization.

Here, we identified three external factors that have forced the public to encounter fake news: (1) the decline of trust in the mainstream media, (2) a high-choice media environment, and (3) the use of social media as a news platform.

4.1.1 Fall of mainstream media trust.

Misinformation and unverified or biased reports have gradually undermined the credibility of the mainstream media. According to the 2019 American mass media trust survey conducted by Gallup, only 13% of Americans said they trusted traditional mainstream media: newspapers or TV news [ 112 ]. The decline in traditional media trust is not only a problem for the US, but also a common concern in Europe and Asia [ 113 – 115 ].

4.1.2 High-choice media environment.

Over the past decade, news consumption channels have been radically diversified, and the mainstream has shifted from broadcasting and print media to mobile and social media environments. Despite the diversity of news consumption channels, personalized preferences and repetitive patterns have led people to be exposed to limited information and to consume such information increasingly [ 116 ]. This selective news consumption attitude has enhanced the polarization of the public across many multi-media environments [ 117 ]. In addition, the commercialization of digital platforms has created an environment in which cognitive bias can easily be strengthened. In other words, a digital platform based on recommendation algorithms conveniently provides similar content continuously after a given type of content is consumed. As a result, users may easily fall into the echo chamber because they access only recommended content. A survey of 1,000 YouTube videos found that more than two-thirds of the videos contained content in favor of a particular candidate [ 118 ].
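As a toy illustration of this narrowing effect, the sketch below ranks a hypothetical catalog by TF-IDF cosine similarity to the article a user just consumed; items closest to the consumed content dominate the next recommendations, which is the seed of a filter bubble. The catalog entries are invented.

```python
# A toy demonstration of recommendation-driven narrowing: items most
# similar to what the user just consumed dominate the next ranking.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

catalog = [
    "candidate A holds campaign rally in the capital",
    "candidate A campaign rally promises tax cuts",
    "candidate B unveils climate policy platform",
    "scientists report progress on fusion energy",
]
tfidf = TfidfVectorizer().fit_transform(catalog)

consumed = 0  # the user just read the first candidate-A article
scores = cosine_similarity(tfidf[consumed], tfidf).ravel()
ranking = scores.argsort()[::-1][1:]  # exclude the consumed article itself
print([catalog[i] for i in ranking])  # the other candidate-A rally article ranks first
```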

News consumption in social media does not simply mean the delivery of messages from creators to consumers. The multi-directionality of social media has blurred the boundaries between information creators and consumers. In other words, users are already interacting with one another in various fashions, and when a new interaction type emerges and is supported by the platform, users will display other types of new interactions, which will also influence ways of consuming news information.

4.1.3 Use of social media as news platform.

Here we focus on the most widely used social media platforms—YouTube, Facebook, and Twitter—each of which has characteristics that encourage limited news consumption.

First, YouTube is the most unidirectional of the social media platforms. Many YouTube creators tend to convey arguments in a strong, definitive tone through their videos, and these content characteristics lead viewers to judge the objectivity of the information via non-verbal elements (e.g., speaker, thumbnail, title, comments) rather than facts. Furthermore, many comments often support the content of the video, which may increase the chances of viewers accepting somewhat biased information. In addition, YouTube’s video recommendation algorithm causes users who watch certain news to be continuously exposed to other news containing the same or similar information. This kind of isolated content consumption could undermine viewers’ media literacy and is likely to create a screening effect that blocks users’ eyes and ears.

Second, Facebook largely hides the details of news articles because the platform ostensibly shows only the title, the number of likes, and the comments on posts. Often, users have to click on the article and go to the URL to read it. This structure and Facebook’s consumptive content orientation present obstacles that prevent users from checking the details of posts. As a result, users have become likely to make limited and biased judgments and to perceive content through provocative headlines and comments.

Third, the most distinctive feature of Twitter is anonymity, as Twitter asks users to create their own pseudonyms [ 119 ]. Tweets are limited in length, and compared with other platforms, users can produce and spread indiscriminate information anonymously, without others knowing who is behind the anonymity [ 120 , 121 ]. On the other hand, many accounts on Facebook operate under real names and generally share information with others who are friends or followers. Anonymous information creators are thus not held accountable for the information they spread.

4.2 Internal factors: Cognitive mechanism

Due to the characteristics of the Internet and social media, people are accustomed to consuming information quickly, such as reading only news headlines and checking the photos in news articles. This type of news consumption practice could lead people to judge news information mostly based on their beliefs or values. Such a practice can make it easier for people to fall into an echo chamber and can deepen social confusion. We identified two internal factors affecting fake news consumption: (1) cognitive biases and (2) personal traits (see Fig 6).


4.2.1 Cognitive biases.

Cognitive bias is an observer effect that is broadly recognized in cognitive science and includes basic statistical and memory errors [ 8 ]. However, this bias may vary depending on what factors are most important to affect individual judgments and choices. We identified five cognitive biases that affect fake news consumption: confirmation bias, in-group bias, choice-supportive bias, cognitive dissonance, and primacy effect.

Confirmation bias relates to a human tendency to seek out information in line with personal thoughts or beliefs, as well as to ignore information that goes against such beliefs. This stems from the human desire to be reaffirmed, rather than accept denials of one’s opinion or hypothesis. If the process of confirmation bias is repeated, a more solid belief is gradually formed, and the belief remains unchanged even after encountering logical and objective counterexamples. Evaluating information with an objective attitude is essential to properly investigating any social phenomenon. However, confirmation bias significantly hinders this. Kunda [ 122 ] discussed experiments that investigated the cognitive processes as a function of accuracy goals and directional goals. Her analysis demonstrated that people use different cognitive processes to achieve the two different goals. For those who pursue accuracy goals (reaching a “right conclusion”), information is used as a tool to determine whether they are right or not [ 123 ], and for those with directional goals (reaching a desirable conclusion), information is used as a tool to justify their claims. Thus, biased information processing is more frequently observed by people with directional goals [ 124 ].

People with directional goals have a desire to reach the conclusion they want. The more we emphasize the seriousness and omnipresence of fake news, the less people with directional goals can identify fake news. Moreover, their confirmation bias through social media could result in an echo chamber, triggering a differentiation of public opinion in the media. The algorithm of the media platform further strengthens the tendency of biased information consumption (e.g., filter bubble).

In-group bias is a phenomenon in which an individual favors the group that he or she belongs to. There are two causes of in-group bias [ 125 ]. One is the categorization process, which exaggerates the similarities between members within one category (the internal group) and the differences with others (the external groups). Consequently, positive reactions towards the internal group and negative reactions (e.g., hostility) towards the external group are both increased. The other is self-respect based on social identity theory: to positively evaluate the internal group, a member tends to perceive that other group members are similar to himself or herself.

In-group bias has a significant impact on fake news consumption because of radical changes in the media environment [ 126 ]. The public recognizes and forms groups around issues through social media. The emotions and intentions of such online groups can easily be transferred to or developed into offline activities, such as demonstrations and rallies. Information exchange within such internal groups proceeds similarly to the situation with confirmation bias. Whereas confirmation bias means keeping to one’s own beliefs, in-group bias equates the beliefs of one’s group with one’s own beliefs.

Choice-supportive bias refers to an individual’s tendency to justify his or her decision by highlighting evidence that he or she did not consider in making the decision [ 127 ]. For instance, people sometimes have no particular purpose when they purchase a certain brand of product or service, or support a particular politician or political party. They nevertheless emphasize that their choices at the time were right and inevitable. They also tend to focus more on positive aspects than on negative effects or consequences to justify their choice. However, these positive aspects can be distorted because they are mainly based on memory. Thus, choice-supportive bias can be regarded as a cognitive error caused by memory distortion.

The behavioral condition of choice-supportive bias is used to justify oneself, which usually occurs in the context of external factors (e.g., maintaining social status or relationships) [ 7 ]. For example, if people express a certain political opinion within a social group, people may seek information with which to justify the opinion and minimize its flaws. In this procedure, people may accept fake news as a supporting source for their opinions.

The theory of cognitive dissonance is based on the notion that psychological tension occurs when an individual holds two inconsistent perceptions [ 128 ]. Humans have a desire to identify and resolve the psychological tension that arises when a cognitive dissonance is established. Regarding fake news consumption, people easily accept fake news if it is aligned with their beliefs or faith. However, if such news works against their beliefs or faith, people define even real news as fake and consume biased information in order to avoid cognitive dissonance. This is quite similar to confirmation bias. Selective exposure to biased information intensifies its extent and impact in social media. In these circumstances, an individual’s cognitive state is likely to be formed by information from unclear sources, which can be seen as a negative state of perception. In that case, information consumers selectively consume only information that harmonizes with those negative perceptions.

The primacy effect means that information presented earlier has a stronger effect on memory and decision-making than information presented later [ 129 ]. “Interference theory” [ 130 ] is often referred to as a theoretical basis for the primacy effect; it highlights the fact that the impression formed by information presented earlier influences subsequent judgments and the process of forming the next impression.

The significance of the primacy effect for fake news consumption is that it can be the starting point of a biased cognitive process. If an individual first encounters an issue in fake news and does not go through a critical thinking process about that information, he or she may form false attitudes regarding the issue [ 131 , 132 ]. Fake news is a complex combination of facts and fiction, making it difficult for information consumers to correctly judge whether the news is right or wrong. These cognitive biases induce the selective collection of information that feels more valid to news consumers, rather than information that is really valid.

4.2.2 Personal traits.

We identified two aspects of personal characteristics, or traits, that can influence one’s news consumption behaviors: susceptibility and personality.

4.2.2.1 Susceptibility. The most prominent feature of social media is that consumers can also be creators, so the boundaries between the creators and consumers of information become unclear. New media literacy (i.e., the ability to critically and suitably consume messages in a variety of digital media channels, such as social media) can have a significant impact on the degree of consumption and dissemination of fake news [ 133 , 134 ]. In other words, the higher one’s new media literacy, the more likely one is to take a critical standpoint toward fake news. Also, one’s susceptibility to fake news is related to one’s selective news consumption behaviors. Bessi et al. [ 35 ] studied misinformation on Facebook and found that users who frequently interact with alternative media tend to interact with intentionally false claims more often.

Personality is an individual’s traits or behavior style. Many scholars have agreed that the personality can be largely divided into five categories (Big Five)—extraversion, agreeableness, neuroticism, openness, and conscientiousness [ 135 , 136 ]—and used them to understand the relationship between personality and news consumption.

Extraversion is related to active information use. Previous studies have confirmed that extraverts tend to use social media, that their main purpose of use is to acquire information [ 137 ], and that they better determine the factuality of news on social media [ 138 ]. Furthermore, people with high agreeableness, which refers to being friendly, warm, and tactful, tend to trust real news more than fake news [ 138 ]. Neuroticism refers to a broad personality trait dimension representing the degree to which a person experiences the world as distressing, threatening, and unsafe. People with high neuroticism usually show negative emotions and information sharing behavior [ 139 ], and neuroticism is positively related to fake news consumption [ 138 ]. Openness refers to the degree of enjoying new experiences. High openness is associated with high curiosity and engagement in learning [ 140 ], which enhances critical thinking ability and decreases the negative effects of fake news consumption [ 138 , 141 ]. Conscientiousness refers to a person’s work ethic, orderliness, and thoroughness [ 142 ]. People with high conscientiousness tend to regard social media use as a distraction from their tasks [ 143 – 145 ].

4.3 Fake news awareness and prevention

4.3.1 Decision-making support tools.

News on social media does not go through a verification process because of the high degree of freedom to create, share, and access information. One study predicted that, by 2022, citizens in advanced countries would consume more fake information than real information [ 146 ]. This indicates that the potential personal and social damage from fake news may increase. Paradoxically, many countries that suffer from fake news problems strongly guarantee freedom of expression under their constitutions; thus, it would be very difficult to block all possible production and distribution of fake news sources through laws and regulations. In this respect, what is needed are not only technical efforts to detect and prevent the production and dissemination of fake news but also social efforts to make news consumers aware of the characteristics of online fake information.

Inoculation theory holds that human attitudes and beliefs can form psychological resistance when people are properly exposed to counterarguments in advance. To build the ability to strongly resist an argument, it is necessary to first be exposed to, and refute, a weakened version of the same sort of content. Doris-Down et al. [ 147 ] asked people from different political backgrounds to communicate directly through a mobile app and investigated whether this method alleviated their echo-chamber tendencies. As a result, the participants made changes, such as realizing that they had a lot in common with people of conflicting political backgrounds and that what they thought divided them was actually trivial. Karduni et al. [ 148 ] provided comprehensive information (e.g., connections among news accounts and a summary of location entities) to study participants through a visual analytic system they developed and examined how the participants accepted fake news. Another study examined how people determine the veracity of news by building a system similar to social media and analyzing the eye movements of study participants while they read fake news articles [ 28 ].

Some research has applied inoculation theory to gamification. A “Bad News” game was designed to proactively warn people and expose them to small doses of false information through interactions with the gamified system [ 29 , 149 ]. The results confirmed the high effectiveness of inoculation through the game and highlighted the need to educate people about how to respond appropriately to misinformation through computer systems and games [ 29 ].

4.3.2 Fake information propagation analysis.

Fake information tends to show certain patterns of consumption and propagation, and many studies have attempted to identify the propagation patterns of fake information (e.g., the count of unique users, the depth of a sharing network) [ 150 – 153 ].
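
As a minimal illustration of such propagation metrics, the sketch below computes the count of unique users and the depth of a single share cascade. The cascade data, field layout, and user names are hypothetical, invented purely for illustration; none of this comes from the studies cited above.

```python
from collections import defaultdict

# Hypothetical share cascade: (parent, child) pairs meaning `child`
# reshared the item from `parent`; "origin" is the original poster.
shares = [
    ("origin", "u1"), ("origin", "u2"),
    ("u1", "u3"), ("u3", "u4"), ("u2", "u5"),
]

children = defaultdict(list)
for parent, child in shares:
    children[parent].append(child)

def cascade_depth(node, graph):
    """Length (in hops) of the longest reshare chain starting at `node`."""
    if node not in graph:
        return 0
    return 1 + max(cascade_depth(c, graph) for c in graph[node])

unique_users = {u for pair in shares for u in pair}
print("unique users:", len(unique_users))                   # 6 (origin included)
print("cascade depth:", cascade_depth("origin", children))  # 3
```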

4.3.2.1 Psychological characteristics . The theoretical foundation of research examining the diffusion patterns of fake news lies in psychology [ 154 , 155 ], because psychological theories explain why and how people react to fake news. For instance, a news consumer who comes across fake news will first have doubts, judge the news against his or her background knowledge, and want to verify the sources cited in the news. This series of processes ends when sufficient evidence has been collected, with the news consumer accepting, ignoring, or remaining suspicious of the news. The psychological elements that can be identified in this process are doubt, negation, conjecture, and skepticism [ 156 ].

4.3.2.2 Temporal characteristics . Fake news exhibits different propagation patterns from real news. The propagation of real news tends to decrease slowly over time after a single peak in the public’s interest, whereas fake news has no fixed timing for peak consumption, and multiple peaks appear in many cases [ 157 ]. Tambuscio et al. [ 151 ] showed that the pattern of rumor spreading resembles existing epidemic models [ 158 ]. Their empirical observations confirmed that the same fake news reappears periodically and infects new news consumers. For example, rumors carrying the malicious political message that “Obama is a Muslim” were still being spread a decade later [ 159 ]. This pattern of proliferation and consumption suggests that fake news may be consumed for a certain purpose.
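
The epidemic analogy can be made concrete with a small simulation. The sketch below is a deliberately simplified SIR-style model, not the actual model of Tambuscio et al. [ 151 ]: susceptible users become “believers” through contact, and believers later recover (e.g., after seeing a fact-check). All parameter values are illustrative assumptions.

```python
# Simplified SIR-style dynamics of hoax spreading (fractions of users).
# beta: contact/infection rate; gamma: debunking/recovery rate. Both are
# made-up values for illustration, not estimates from data.
def simulate(beta=0.3, gamma=0.1, s=0.99, i=0.01, r=0.0, steps=101):
    history = []
    for _ in range(steps):
        history.append((s, i, r))
        new_believers = beta * s * i   # susceptibles converted by believers
        recovered = gamma * i          # believers debunked this step
        s, i, r = s - new_believers, i + new_believers - recovered, r + recovered
    return history

for t, (s, i, r) in enumerate(simulate()):
    if t % 25 == 0:
        print(f"t={t:3d}  susceptible={s:.3f}  believers={i:.3f}  recovered={r:.3f}")
```

Allowing recovered users to become susceptible again (an SIS-style variant) would reproduce the periodic reappearance of the same rumor noted above.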

5 A mental-model approach

We have examined news consumers’ susceptibility to fake news due to internal and external factors, including personal traits, cognitive biases, and context. Beyond an investigation at the factor level, we seek to understand people’s susceptibility to misinformation by considering their internal representations and external environments holistically [ 5 ]. Specifically, we propose to study people’s mental models of fake news. In this section, we first briefly introduce mental models and discuss their connection to misinformation. Then, we discuss the potential contributions of a mental-model approach to the field of misinformation.

5.1 Mental models

A mental model is an internal representation or simulation that people carry in their minds of how the world works [ 160 , 161 ]. Typically, mental models are constructed in people’s working memory, in which information from long-term memory and from the environment are combined [ 162 ]. Individuals thus represent complex phenomena with some abstraction, based on their own experiences and understanding of the context. People rely on mental models to understand and predict their interactions with environments, artifacts, and computing systems, as well as with other individuals [ 163 , 164 ]. Generally, an individual’s ability to represent continually changing environments is limited and unique. Thus, mental models tend to be functional and dynamic but not necessarily accurate or complete [ 163 , 165 ]. Mental models also differ between groups, in particular between experts and novices [ 164 , 166 ].

5.2 Mental models and misinformation

Mental models have been used to understand human behaviors in spatial navigation [ 167 ], learning [ 168 , 169 ], deductive reasoning [ 170 ], mental representations of real or imagined situations [ 171 ], risk communication [ 172 ], and usable cybersecurity and privacy [ 166 , 173 , 174 ]. People use mental models to facilitate their comprehension, judgment, and actions, and these models can be the basis of individual behaviors. In particular, the connection between a mental-model approach and misinformation has been revealed in risk communication regarding vaccines [ 175 , 176 ]. For example, Downs et al. [ 176 ] interviewed 30 parents from three US cities to understand their mental models about vaccination for their children aged 18 to 23 months. The results revealed two mental models about vaccination: (1) health oriented : parents who focused on health-oriented topics trusted anecdotal communication more than statistical arguments; and (2) risk oriented : parents with some knowledge about vaccine mechanisms trusted communication with statistical arguments more than anecdotal information. The authors also found that many parents, even those favorable to vaccination, can be confused by the ongoing debate, suggesting that their mental models are somewhat incomplete.

5.3 Potential contributions of a mental-model approach

Recognizing and dealing with the plurality of news consumers’ perceptions, cognition, and actions is currently considered a key aspect of misinformation research. Thus, a mental-model approach could significantly improve our understanding of people’s susceptibility to misinformation, as well as inform the development of mechanisms to mitigate misinformation.

One possible direction is to investigate demographic differences in the context of mental models. As more Americans have adopted social media, the social media user base has become more representative of the population. Usage by older adults has increased in recent years, rising from about 12% in 2012 to about 35% in 2016 ( https://www.pewresearch.org/internet/fact-sheet/social-media/ ). Guess et al. (2019) analyzed participants’ profiles and their sharing activity on Facebook during the 2016 US presidential campaign and revealed a strong age effect. After controlling for ideology and education, their results showed that Facebook users over 65 years old shared nearly seven times as many articles from fake news domains on Facebook as those aged 18 to 29, and about 2.3 times as many as those aged 45 to 65.

Besides older adults, college students have also been shown to be susceptible to misinformation [ 177 ]. We can identify which mental models a particular age group holds and compare the incompleteness or incorrectness of those mental models by age. On the other hand, such comparisons might inform the design of general mechanisms to mitigate misinformation independent of the different concrete mental models possessed by different types of users.

Users’ actions and decisions are directed by their mental models. We can also explore news consumers’ mental models and discover unanticipated and potentially risky human–system interactions, which will inform the development and design of user interactions and education endeavors to mitigate misinformation.

A mental-model approach supplies an important, and as yet unconsidered, dimension to fake news research. To date, research on people’s susceptibility to fake news on social media has lagged behind computational research on fake news. Scholars have not considered news consumers’ susceptibility across the spectrum of their internal representations and external environments. An investigation from the mental-model perspective is a step toward addressing this need.

6 Discussion and future work

In this section, we highlight the importance of balancing research efforts on fake news creation and consumption and discuss potential future directions of fake news research.

6.1 Leveraging insights from social science in model development

Fake news detection models have achieved strong performance. The feature groups used in these models are diverse, including linguistic, visual, sentiment, topic, user, and network features, and many models combine multiple groups to increase performance. Using datasets of different sizes and characteristics, research has demonstrated the effectiveness of such models through comparative analyses. However, much of this research has relied on features that are easily quantifiable, and many of these features have unclear justification or rationale for being used in modeling. For example, what is the relationship between the use of question marks (?), exclamation marks (!), or quotation marks (“…”) and fake news? Why should a longer description relate to news trustworthiness? There are also many important aspects that could serve as additional modeling features but have not yet been quantified. For example, journalistic style is an important characteristic that determines a level of information credibility [ 156 ], but it is challenging to quantify accurately and reliably. There are many intentions (e.g., ideological standpoint, financial gain, panic creation) that authors may implicitly or explicitly display in a post, but measuring them is not straightforward. Social science research can play a role here by providing valid research methodologies to measure such subjective perceptions or notions, considering their various types and characteristics depending on the context or environment. Some research efforts in this direction include quantifying salient factors of people’s decision-making identified in social science research and demonstrating the effectiveness of using those factors to improve model performance and interpret model results [ 70 ]. Yet more research that applies socio-technical perspectives to model development and application is needed to better study the complex characteristics of fake news.
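
To make the feature-based approach concrete, here is a minimal sketch of a detection model built on exactly the kind of easily quantifiable surface features questioned above. The toy headlines, the labels, and the implication that these features signal fakeness are illustrative assumptions, not validated findings.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def stylistic_features(headline):
    """Easily quantifiable surface features of the kind surveyed above."""
    words = headline.split()
    return [
        headline.count("!"),              # exclamation marks
        headline.count("?"),              # question marks
        headline.count('"'),              # quotation marks
        len(words),                       # headline length in words
        sum(w.isupper() for w in words),  # ALL-CAPS words
    ]

# Toy labeled headlines (1 = fake, 0 = real); purely illustrative.
data = [
    ("SHOCKING! You won't BELIEVE what happened next!!", 1),
    ("Doctors HATE this one weird trick?!", 1),
    ("Senate passes annual budget bill", 0),
    ("Local council approves new school funding", 0),
]
X = np.array([stylistic_features(h) for h, _ in data])
y = np.array([label for _, label in data])

model = LogisticRegression().fit(X, y)
print(model.predict([stylistic_features("UNBELIEVABLE!! Aliens spotted?")]))
```

The model runs, but nothing in it explains why an exclamation mark should indicate fakeness, which is precisely the justification gap that social science research could fill.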

6.1.1 Future direction.

Insights from social science may help develop transparent and applicable fake news detection models. Such socio-technical models may allow news consumers to better understand fake news detection results and their application, as well as to take more appropriate actions to control the fake news phenomenon.

6.2 Lack of research on fake news consumption

Regarding fake news consumption, we confirmed that only a few studies involve the development of web- or mobile-based systems to help consumers become aware of the possible dangers of fake news. Those studies [ 28 , 29 , 147 , 148 ] tried to demonstrate the feasibility of the developed self-awareness systems through user studies. However, due to the limited number of study participants (min: 11, max: 60) and their lack of demographic diversity (e.g., only college students of one school or the psychology research pool at the authors’ institution), the generalizability and applicability of these systems are still questionable. On the other hand, research that develops fake news detection models or uses network analysis to identify patterns of fake news propagation has been relatively active. These results can be used to identify people (or entities) who intentionally create malicious fake content; however, it remains challenging to restrict people who originally showed no signs of sharing or creating fake information but later manipulated real news into fake news or disseminated fake news out of malicious intent or cognitive bias.

In other words, although fake news detection models have shown promising performance, their influence may be exerted only in limited cases. This is because fake news detection models rely heavily on data labeled as fake by fact-checking institutions or sites. If someone manipulates news that was not covered by fact-checking, the format or characteristics of the manipulated news may differ from the conventional features identified and managed in the detection model, and such differences may not be captured by the model. Therefore, to prevent the fake news phenomenon more effectively, research needs to consider changes in news consumption.

6.2.1 Future direction.

It may be desirable to help people recognize that their news consumption behaviors (e.g., liking, commenting, sharing) can have a significant ripple effect. Developing a system that tracks people’s news consumption and creation activities, measures the similarities and differences between those activities, and presents the resulting behaviors or patterns back to people would be helpful.
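
One hedged way to realize the similarity measurement mentioned above is to represent each user’s consumption and creation activity as a count vector and compare users with cosine similarity; the activity categories and counts below are hypothetical.

```python
import numpy as np

# Hypothetical per-user activity counts: [likes, comments, shares, posts].
activity = {
    "user_a": np.array([120.0, 15.0, 40.0, 2.0]),
    "user_b": np.array([110.0, 10.0, 35.0, 1.0]),
    "user_c": np.array([5.0, 90.0, 3.0, 60.0]),
}

def cosine_similarity(u, v):
    """Similarity of two activity profiles, independent of overall volume."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine_similarity(activity["user_a"], activity["user_b"]))  # near 1: similar patterns
print(cosine_similarity(activity["user_a"], activity["user_c"]))  # lower: consumer vs. creator
```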

6.3 Limited coverage of fact-checking websites and regulatory approach

Some well-known fact-checking websites (e.g., snopes.com, politifact.com) cover news shared mostly on the Internet and label the authenticity or deficiencies of the content (e.g., miscaptioned, legend, misattributed). However, these fact-checking websites may have limited coverage in that they are used only by those who are willing to check the veracity of certain news articles. Social media platforms have been making continuous efforts to mitigate the spread of fake news. For example, Facebook gives content that has been rated false by fact-checkers relatively less exposure in news feeds and shows warning indicators [ 178 ]. Instagram has also changed the way warning labels are displayed when users attempt to view content that has been rated false [ 179 ]. However, this type of interface could lead news consumers to rely on algorithmic decision-making rather than self-judgment, because these ostensible regulations (e.g., warning labels) tend to lack transparency about how the decision was made. As we explained previously, this is related to filter bubbles. Therefore, it is important to provide a clearer and more transparent communicative interface for news consumers to access and understand the information underlying algorithmic results.

6.3.1 Future direction.

It is necessary to create a news consumption environment that provides wider coverage of fake news and more transparent information about algorithmic decisions on news credibility. This will help news consumers preemptively avoid fake news consumption and contribute more to preventing fake news propagation. Consumers can also make more appropriate and accurate decisions based on their understanding of the news.

6.4 New media literacy

With the diversification of news channels, we can consume news easily. However, we are also in a media environment that asks us to self-critically verify news content (e.g., whether the news title reads like clickbait, whether the title and content are related), which in reality is hard to do. Moreover, on social media, news consumers can be news creators or reproducers, and during this process news information can be changed according to a consumer’s beliefs or interests. A problem here is that people may not know how to verify news content or may not be aware that the information could be distorted or biased. As the news consumption environment changes rapidly and faces the modern media deluge, media literacy education becomes highly important. Media literacy refers to the ability to decipher media content; in a broad sense, it is the ability to understand the principles of media operation and media content sensibly and critically, and in turn to utilize and creatively reproduce content. Being a “lazy thinker” makes one more susceptible to fake news than having a “partisan bias” does [ 32 ]. As “screen time” (i.e., time spent looking at smartphone, computer, or television screens) has become more common, people consume mostly stimulating (e.g., sensually pleasurable and exciting) information [ 180 ]. This could gradually lower one’s capacity for critical, reasonable thinking, leading to wrong judgments and actions. In France, as the fake news problem became more serious, great efforts were made to establish a “European Media Literacy Week” in schools [ 181 ]. The US is also making legislative efforts to add media literacy to the general education curriculum [ 182 ]. However, the acquisition of new media literacy through education may be limited to people in school (e.g., young students) and would be challenging to expand to wider populations. Thus, there is also a need for supplementary tools and research efforts to support more people in critically interpreting and appropriately consuming news.

In addition, more critical social attention is needed because visual content (e.g., images, videos), which has traditionally been accepted as fact, can be maliciously manipulated and still look very natural. We have seen that people prefer watching YouTube videos for news consumption over reading news articles. Visual content makes it relatively easy for news consumers to trust the content compared with text-based information, and information can be obtained simply by playing a video. Since visual content will become an even more dominant medium in future news consumption, educating and inoculating news consumers about the potential threats of fake information in such media is important. More attention and research are needed on technology that supports awareness of fake visual content.

6.4.1 Future direction.

Research in both computer science and social science should find ways (e.g., developing game-based education systems or curricula) to help news consumers become aware of their news consumption practices and maintain sound news consumption behaviors.

7 Conclusion

We presented a comprehensive summary of fake news research through the lenses of news creation and consumption. The trend analysis indicated rapid growth in fake news research, with far more focus on news creation than on news consumption. By looking into internal and external factors, we unpacked the characteristics of fake news creation and consumption and presented the use of people’s mental models to better understand their susceptibility to misinformation. Based on the review, we suggested four future directions for fake news research—(1) socio-technical model development using insights from social science, (2) in-depth understanding of news consumption behaviors, (3) preemptive decision-making and action support, and (4) educational, new media literacy support—as ways to reduce the gaps between news creation and consumption and between computer science and social science research, and to support healthy news environments.

Supporting information

S1 Checklist.

https://doi.org/10.1371/journal.pone.0260080.s001

  • 2. Goldman R. Reading fake news, Pakistani minister directs nuclear threat at Israel. The New York Times . 2016;24.
  • 6. Lévy P, Bononno R. Collective intelligence: Mankind’s emerging world in cyberspace. Perseus Books; 1997.
  • 11. Jamieson KH, Cappella JN. Echo chamber: Rush Limbaugh and the conservative media establishment. Oxford University Press; 2008.
  • 14. Shu K, Cui L, Wang S, Lee D, Liu H. dEFEND: Explainable fake news detection. In: In Proc. of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD); 2019. p. 395–405.
  • 15. Ruchansky N, Seo S, Liu Y. Csi: A hybrid deep model for fake news detection. In: In Proc. of the 2017 ACM on Conference on Information and Knowledge Management (CIKM); 2017. p. 797–806.
  • 16. Cui L, Wang S, Lee D. Same: sentiment-aware multi-modal embedding for detecting fake news. In: In Proc. of the 2019 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM); 2019. p. 41–48.
  • 17. Wang Y, Ma F, Jin Z, Yuan Y, Xun G, Jha K, et al. Eann: Event adversarial neural networks for multi-modal fake news detection. In: In Proc. of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data mining (KDD); 2018. p. 849–857.
  • 18. Nørregaard J, Horne BD, Adalı S. NELA-GT-2018: A large multi-labelled news dataset for the study of misinformation in news articles. In: In Proc. of the International AAAI Conference on Web and Social Media (ICWSM). vol. 13; 2019. p. 630–638.
  • 20. Nguyen AT, Kharosekar A, Krishnan S, Krishnan S, Tate E, Wallace BC, et al. Believe it or not: Designing a human-ai partnership for mixed-initiative fact-checking. In: In Proc. of the 31st Annual ACM Symposium on User Interface Software and Technology (UIST); 2018. p. 189–199.
  • 23. Brandon J. Terrifying high-tech porn: creepy ‘deepfake’ videos are on the rise. Fox News . 2018;20.
  • 24. Nguyen TT, Nguyen CM, Nguyen DT, Nguyen DT, Nahavandi S. Deep Learning for Deepfakes Creation and Detection. arXiv . 2019;1.
  • 25. Rossler A, Cozzolino D, Verdoliva L, Riess C, Thies J, Nießner M. Faceforensics++: Learning to detect manipulated facial images. In: IEEE International Conference on Computer Vision (ICCV); 2019. p. 1–11.
  • 26. Nirkin Y, Keller Y, Hassner T. Fsgan: Subject agnostic face swapping and reenactment. In: In Proc. of the IEEE International Conference on Computer Vision (ICCV); 2019. p. 7184–7193.
  • 28. Simko J, Hanakova M, Racsko P, Tomlein M, Moro R, Bielikova M. Fake news reading on social media: an eye-tracking study. In: In Proc. of the 30th ACM Conference on Hypertext and Social Media (HT); 2019. p. 221–230.
  • 35. Horne B, Adali S. This just in: Fake news packs a lot in title, uses simpler, repetitive content in text body, more similar to satire than real news. In: In Proc. of the 11th International AAAI Conference on Web and Social Media (ICWSM); 2017. p. 759–766.
  • 36. Golbeck J, Mauriello M, Auxier B, Bhanushali KH, Bonk C, Bouzaghrane MA, et al. Fake news vs satire: A dataset and analysis. In: In Proc. of the 10th ACM Conference on Web Science (WebSci); 2018. p. 17–21.
  • 37. Mustafaraj E, Metaxas PT. The fake news spreading plague: was it preventable? In: In Proc. of the 9th ACM Conference on Web Science (WebSci); 2017. p. 235–239.
  • 40. Jin Z, Cao J, Zhang Y, Luo J. News verification by exploiting conflicting social viewpoints in microblogs. In: In Proc. of the 13th AAAI Conference on Artificial Intelligence (AAAI); 2016. p. 2972–2978.
  • 41. Rubin VL, Conroy N, Chen Y, Cornwell S. Fake news or truth? using satirical cues to detect potentially misleading news. In: In Proc. of the Second Workshop on Computational Approaches to Deception Detection ; 2016. p. 7–17.
  • 45. Kahneman D, Tversky A. Prospect theory: An analysis of decision under risk. In: Handbook of the fundamentals of financial decision making: Part I. World Scientific; 2013. p. 99–127.
  • 46. Hanitzsch T, Wahl-Jorgensen K. Journalism studies: Developments, challenges, and future directions. The Handbook of Journalism Studies . 2020; p. 3–20.
  • 48. Osatuyi B, Hughes J. A tale of two internet news platforms-real vs. fake: An elaboration likelihood model perspective. In: In Proc. of the 51st Hawaii International Conference on System Sciences (HICSS); 2018. p. 3986–3994.
  • 49. Cacioppo JT, Petty RE. The elaboration likelihood model of persuasion. ACR North American Advances. 1984; p. 673–675.
  • 50. Wang LX, Ramachandran A, Chaintreau A. Measuring click and share dynamics on social media: a reproducible and validated approach. In Proc of the 10th International AAAI Conference on Web and Social Media (ICWSM). 2016; p. 108–113.
  • 51. Bowman S, Willis C. How audiences are shaping the future of news and information. We Media . 2003; p. 1–66.
  • 52. Hill E, Tiefenthäler A, Triebert C, Jordan D, Willis H, Stein R. 8 Minutes and 46 Seconds: How George Floyd Was Killed in Police Custody; 2020. Available from: https://www.nytimes.com/2020/06/18/us/george-floyd-timing.html .
  • 54. Carroll O. St Petersburg ‘troll farm’ had 90 dedicated staff working to influence US election campaign; 2017.
  • 55. Zannettou S, Caulfield T, Setzer W, Sirivianos M, Stringhini G, Blackburn J. Who let the trolls out? towards understanding state-sponsored trolls. In: Proc. of the 10th ACM Conference on Web Science (WebSci); 2019. p. 353–362.
  • 56. Vincent J. Watch Jordan Peele use AI to make Barack Obama deliver a PSA about fake news. The Verge . 2018;17.
  • 58. Linder M. Block. Mute. Unfriend. Tensions rise on Facebook after election results. Chicago Tribune . 2016;9.
  • 60. Howard PN, Kollanyi B. Bots, #StrongerIn, and #Brexit: computational propaganda during the UK-EU referendum. arXiv . 2016; p. arXiv–1606.
  • 61. Kasra M, Shen C, O’Brien JF. Seeing is believing: how people fail to identify fake images on the Web. In Proc of the 2018 CHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI). 2018; p. 1–6.
  • 62. Kirby EJ. The city getting rich from fake news. BBC News . 2016;5.
  • 63. Hu Z, Yang Z, Li Q, Zhang A, Huang Y. Infodemiological study on COVID-19 epidemic and COVID-19 infodemic. Preprints . 2020; p. 2020020380.
  • 71. Knaus C. Disinformation and lies are spreading faster than Australia’s bushfires. The Guardian . 2020;11.
  • 72. Karimi H, Roy P, Saba-Sadiya S, Tang J. Multi-source multi-class fake news detection. In: In Proc. of the 27th International Conference on Computational Linguistics ; 2018. p. 1546–1557.
  • 73. Wang WY. “Liar, liar pants on fire”: A new benchmark dataset for fake news detection. arXiv . 2017; p. arXiv–1705.
  • 74. Pérez-Rosas V, Kleinberg B, Lefevre A, Mihalcea R. Automatic Detection of Fake News. arXiv . 2017; p. arXiv–1708.
  • 75. Yang Y, Zheng L, Zhang J, Cui Q, Li Z, Yu PS. TI-CNN: Convolutional Neural Networks for Fake News Detection. arXiv . 2018; p. arXiv–1806.
  • 76. Kumar V, Khattar D, Gairola S, Kumar Lal Y, Varma V. Identifying clickbait: A multi-strategy approach using neural networks. In: In Proc. of the 41st International ACM SIGIR Conference on Research & Development in Information Retrieval (SIGIR); 2018. p. 1225–1228.
  • 77. Yoon S, Park K, Shin J, Lim H, Won S, Cha M, et al. Detecting incongruity between news headline and body text via a deep hierarchical encoder. In: Proc. of the AAAI Conference on Artificial Intelligence. vol. 33; 2019. p. 791–800.
  • 78. Lu Y, Zhang L, Xiao Y, Li Y. Simultaneously detecting fake reviews and review spammers using factor graph model. In: In Proc. of the 5th Annual ACM Web Science Conference (WebSci); 2013. p. 225–233.
  • 79. Mukherjee A, Venkataraman V, Liu B, Glance N. What yelp fake review filter might be doing? In: In Proc. of The International AAAI Conference on Weblogs and Social Media (ICWSM); 2013. p. 409–418.
  • 80. Benevenuto F, Magno G, Rodrigues T, Almeida V. Detecting spammers on twitter. In: In Proc. of the 8th Annual Collaboration , Electronic messaging , Anti-Abuse and Spam Conference (CEAS). vol. 6; 2010. p. 12.
  • 81. Lee K, Caverlee J, Webb S. Uncovering social spammers: social honeypots+ machine learning. In: In Proc. of the 33rd International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR); 2010. p. 435–442.
  • 82. Li FH, Huang M, Yang Y, Zhu X. Learning to identify review spam. In: In Proc. of the 22nd International Joint Conference on Artificial Intelligence (IJCAI); 2011. p. 2488–2493.
  • 83. Wang J, Wen R, Wu C, Huang Y, Xion J. Fdgars: Fraudster detection via graph convolutional networks in online app review system. In: In Proc. of The 2019 World Wide Web Conference (WWW); 2019. p. 310–316.
  • 84. Castillo C, Mendoza M, Poblete B. Information credibility on twitter. In: In Proc. of the 20th International Conference on World Wide Web (WWW); 2011. p. 675–684.
  • 85. Jo Y, Kim M, Han K. How Do Humans Assess the Credibility on Web Blogs: Qualifying and Verifying Human Factors with Machine Learning. In: In Proc. of the 2019 CHI Conference on Human Factors in Computing Systems (CHI); 2019. p. 1–12.
  • 86. Che X, Metaxa-Kakavouli D, Hancock JT. Fake News in the News: An Analysis of Partisan Coverage of the Fake News Phenomenon. In: In Proc. of the 21st ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW); 2018. p. 289–292.
  • 87. Potthast M, Kiesel J, Reinartz K, Bevendorff J, Stein B. A Stylometric Inquiry into Hyperpartisan and Fake News. arXiv . 2017; p. arXiv–1702.
  • 89. Popat K, Mukherjee S, Strötgen J, Weikum G. Credibility assessment of textual claims on the web. In: In Proc. of the 25th ACM International on Conference on Information and Knowledge Management (CIKM); 2016. p. 2173–2178.
  • 90. Shen TJ, Cowell R, Gupta A, Le T, Yadav A, Lee D. How gullible are you? Predicting susceptibility to fake news. In: In Proc. of the 10th ACM Conference on Web Science (WebSci); 2019. p. 287–288.
  • 91. Gupta A, Lamba H, Kumaraguru P, Joshi A. Faking sandy: characterizing and identifying fake images on twitter during hurricane sandy. In: In Proc. of the 22nd International Conference on World Wide Web ; 2013. p. 729–736.
  • 92. He P, Li H, Wang H. Detection of fake images via the ensemble of deep representations from multi color spaces. In: In Proc. of the 26th IEEE International Conference on Image Processing (ICIP). IEEE; 2019. p. 2299–2303.
  • 93. Sun Y, Chen Y, Wang X, Tang X. Deep learning face representation by joint identification-verification. Advances in Neural Information Processing Systems . 2014; p. 1–9.
  • 94. Huh M, Liu A, Owens A, Efros AA. Fighting fake news: Image splice detection via learned self-consistency. In: In Proc. of the European Conference on Computer Vision (ECCV); 2018. p. 101–117.
  • 95. Dang H, Liu F, Stehouwer J, Liu X, Jain AK. On the detection of digital face manipulation. In: In Proc. of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR); 2020. p. 5781–5790.
  • 96. Tariq S, Lee S, Kim H, Shin Y, Woo SS. Detecting both machine and human created fake face images in the wild. In Proc of the 2nd International Workshop on Multimedia Privacy and Security (MPS). 2018; p. 81–87.
  • 97. Liu Z, Luo P, Wang X, Tang X. Deep learning face attributes in the wild. In: In Proc. of the IEEE International Conference on Computer Vision (ICCV); 2015. p. 3730–3738.
  • 98. Wang R, Ma L, Juefei-Xu F, Xie X, Wang J, Liu Y. Fakespotter: A simple baseline for spotting ai-synthesized fake faces. arXiv . 2019; p. arXiv–1909.
  • 99. Karras T, Laine S, Aila T. A style-based generator architecture for generative adversarial networks. In: In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2019. p. 4401–4410.
  • 100. Yang X, Li Y, Qi H, Lyu S. Exposing GAN-synthesized faces using landmark locations. In Proc of the ACM Workshop on Information Hiding and Multimedia Security (IH&MMSec). 2019; p. 113–118.
  • 101. Zhang X, Karaman S, Chang SF. Detecting and simulating artifacts in gan fake images. In Proc of the 2019 IEEE International Workshop on Information Forensics and Security (WIFS). 2019; p. 1–6.
  • 102. Amerini I, Galteri L, Caldelli R, Del Bimbo A. Deepfake video detection through optical flow based cnn. In Proc of the IEEE International Conference on Computer Vision Workshops (ICCV). 2019; p. 1205–1207.
  • 103. Li Y, Lyu S. Exposing deepfake videos by detecting face warping artifacts. arXiv . 2018; p. 46–52.
  • 104. Korshunov P, Marcel S. Deepfakes: a new threat to face recognition? assessment and detection. arXiv . 2018; p. arXiv–1812.
  • 105. Jeon H, Bang Y, Woo SS. Faketalkerdetect: Effective and practical realistic neural talking head detection with a highly unbalanced dataset. In Proc of the IEEE International Conference on Computer Vision Workshops (ICCV). 2019; p. 1285–1287.
  • 106. Chung JS, Nagrani A, Zisserman A. Voxceleb2: Deep speaker recognition. arXiv . 2018; p. arXiv–1806.
  • 107. Songsri-in K, Zafeiriou S. Complement face forensic detection and localization with faciallandmarks. arXiv . 2019; p. arXiv–1910.
  • 108. Ma S, Cui L, Dai D, Wei F, Sun X. Livebot: Generating live video comments based on visual and textual contexts. In Proc of the AAAI Conference on Artificial Intelligence (AAAI). 2019; p. 6810–6817.
  • 109. Goodfellow I, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, et al. Generative adversarial nets. Advances in Neural Information Processing Systems . 2014; p. arXiv–1406.
  • 110. Metz R. The number of deepfake videos online is spiking. Most are porn; 2019. Available from: https://cnn.it/3xPJRT2 .
  • 111. Strömbäck J. In search of a standard: Four models of democracy and their normative implications for journalism. Journalism Studies . 2005; p. 331–345.
  • 112. Brenan M. Americans’ Trust in Mass Media Edges Down to 41%; 2019. Available from: https://bit.ly/3ejl6ql .
  • 114. Ladd JM. Why Americans hate the news media and how it matters. Princeton University Press; 2012.
  • 116. Weisberg J. Bubble trouble: Is web personalization turning us into solipsistic twits; 2011. Available from: https://bit.ly/3xOGFqD .
  • 117. Pariser E. The filter bubble: How the new personalized web is changing what we read and how we think. Penguin; 2011.
  • 118. Lewis P, McCormick E. How an ex-YouTube insider investigated its secret algorithm. The Guardian . 2018;2.
  • 120. Kavanaugh AL, Yang S, Li LT, Sheetz SD, Fox EA, et al. Microblogging in crisis situations: Mass protests in Iran, Tunisia, Egypt; 2011.
  • 121. Mustafaraj E, Metaxas PT, Finn S, Monroy-Hernández A. Hiding in Plain Sight: A Tale of Trust and Mistrust inside a Community of Citizen Reporters. In Proc of the 6th International AAAI Conference on Weblogs and Social Media (ICWSM) . 2012; p. 250–257.
  • 125. Tajfel H. Human groups and social categories: Studies in social psychology. Cup Archive ; 1981.
  • 127. Correia V, Festinger L. Biased argumentation and critical thinking. Rhetoric and Cognition: Theoretical Perspectives and Persuasive Strategies . 2014; p. 89–110.
  • 128. Festinger L. A theory of cognitive dissonance. Stanford University Press; 1957.
  • 136. John OP, Srivastava S, et al. The Big Five trait taxonomy: History, measurement, and theoretical perspectives. Handbook of Personality: theory and research . 1999; p. 102–138.
  • 138. Shu K, Wang S, Liu H. Understanding user profiles on social media for fake news detection. In: 2018 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR). IEEE; 2018. p. 430–435.
  • 142. Costa PT, McCrae RR. The NEO personality inventory. Psychological Assessment Resources; 1985.
  • 146. Panetta K. Gartner top strategic predictions for 2018 and beyond; 2017. Available from: https://gtnr.it/33kuljQ .
  • 147. Doris-Down A, Versee H, Gilbert E. Political blend: an application designed to bring people together based on political differences. In Proc of the 6th International Conference on Communities and Technologies (C&T). 2013; p. 120–130.
  • 148. Karduni A, Wesslen R, Santhanam S, Cho I, Volkova S, Arendt D, et al. Can You Verifi This? Studying Uncertainty and Decision-Making About Misinformation Using Visual Analytics. In Proc of the 12th International AAAI Conference on Web and Social Media (ICWSM). 2018;12(1).
  • 149. Basol M, Roozenbeek J, van der Linden S. Good news about bad news: gamified inoculation boosts confidence and cognitive immunity against fake news. Journal of Cognition . 2020;3(1).
  • 151. Tambuscio M, Ruffo G, Flammini A, Menczer F. Fact-checking effect on viral hoaxes: A model of misinformation spread in social networks. In Proc of the 24th International Conference on World Wide Web (WWW). 2015; p. 977–982.
  • 152. Friggeri A, Adamic L, Eckles D, Cheng J. Rumor cascades. In Proc of the 8th International AAAI Conference on Weblogs and Social Media (ICWSM) . 2014;8.
  • 153. Lerman K, Ghosh R. Information contagion: An empirical study of the spread of news on digg and twitter social networks. arXiv . 2010; p. arXiv–1003.
  • 155. Cantril H. The invasion from Mars: A study in the psychology of panic. Transaction Publishers; 1952.
  • 158. Bailey NT, et al. The mathematical theory of infectious diseases and its applications. Charles Griffin & Company Ltd; 1975.
  • 159. Pew Forum on Religion & Public Life. Growing Number of Americans Say Obama Is a Muslim; 2010.
  • 160. Craik KJW. The nature of explanation. Cambridge University Press; 1943.
  • 161. Johnson-Laird PN. Mental models: Towards a cognitive science of language, inference, and consciousness. 6. Harvard University Press; 1983.
  • 162. Johnson-Laird PN, Girotto V, Legrenzi P. Mental models: a gentle guide for outsiders. Sistemi Intelligenti . 1998;9(68).
  • 164. Rouse WB, Morris NM. On looking into the black box: Prospects and limits in the search for mental models. Psychological Bulletin . 1986;100(3).
  • 166. Wash R, Rader E. Influencing mental models of security: a research agenda. In Proc of the 2011 New Security Paradigms Workshop (NSPW). 2011; p. 57–66.
  • 167. Tversky B. Cognitive maps, cognitive collages, and spatial mental models. In Proc of European conference on spatial information theory (COSIT). 1993; p. 14–24.
  • 169. Mayer RE, Mathias A, Wetzell K. Fostering understanding of multimedia messages through pre-training: Evidence for a two-stage theory of mental model construction. Journal of Experimental Psychology: Applied . 2002;8(3).
  • 172. Morgan MG, Fischhoff B, Bostrom A, Atman CJ, et al. Risk communication: A mental models approach. Cambridge University Press; 2002.
  • 174. Kang R, Dabbish L, Fruchter N, Kiesler S. “My Data Just Goes Everywhere:” User mental models of the internet and implications for privacy and security. In Proc of 11th Symposium On Usable Privacy and Security . 2015; p. 39–52.
  • 178. Facebook Journalism Project. Facebook’s Approach to Fact-Checking: How It Works; 2020. https://bit.ly/34QgOlj .
  • 179. Sardarizadeh S. Instagram fact-check: Can a new flagging tool stop fake news?; 2019. Available from: https://bbc.in/33fg5ZR .
  • 180. Greenfield S. Mind change: How digital technologies are leaving their mark on our brains. Random House Incorporated ; 2015.
  • 181. European Commission. European Media Literacy Week; 2020. https://bit.ly/36H9MR3 .
  • 182. Media Literacy Now. U.S. media literacy policy report 2020; 2020. https://bit.ly/33LkLqQ .

Understanding the complex links between social media and health behaviour

  • Fabiana Zollo, professor 1 2
  • Cornelia Betsch, professor 4 5
  • Marco Delmastro, research fellow 1 6
  • 1 Ca’ Foscari University of Venice, Venice, Italy
  • 2 New Institute Centre for Environmental Humanities, Venice, Italy
  • 3 City University of London, London, United Kingdom
  • 4 University of Erfurt, Erfurt, Germany
  • 5 Bernhard Nocht Institute for Tropical Medicine, Hamburg, Germany
  • 6 Enrico Fermi Center for Study and Research, Rome, Italy
  • 7 Sapienza University, Rome, Italy
  • Correspondence to: F Zollo fabiana.zollo{at}unive.it

Fabiana Zollo and colleagues call for comprehensive, robust research on the influence of social media on health behaviour in order to improve public health responses

Key messages

Monitoring social media is important to understand public perceptions, biases, and false beliefs

Drawing conclusions on how social media affects health behaviour is difficult because measures are unstandardised, sources are limited, and data are incomplete and biased

Rigorous research is needed from varied settings and demographics to improve understanding of the effect of social media on health behaviour

Over 90% of people connected to the internet are active on social media, with a total of 4.76 billion users worldwide in January 2023. 1 The digital revolution has reshaped the news landscape and changed the way users interact with information. Social media’s targeted communication rapidly reaches vast audiences, who in turn actively participate in shaping and engaging with content. This marks a departure from the more passive consumption patterns associated with traditional media.

Over the past few years, social media have emerged as a primary source of news for many people, despite widespread user concerns about potential misinformation (box 1) and the necessity of discerning between reliable and untrustworthy information. 4 Data from six continents also indicate a preference among users for content that reflects their reading or viewing history, rather than content selected by journalists, suggesting a shift towards personalised and user driven content curation. In this evolving landscape, celebrities, influencers, and social media personalities are increasingly assuming roles as news sources, especially on platforms such as TikTok, Instagram, and Snapchat.

What is misinformation?

The term “misinformation” is commonly used, yet its definitions can vary between studies, methods, and scholars, leading to disagreements on its precise meaning 2

Misinformation encompasses false, inaccurate, or misleading information, and is often distinguished from disinformation, which is deliberately created and disseminated with the intent to deceive. Classifications may also extend to conspiracy theories and propaganda

Misinformation poses a substantial risk to public health since it can undermine compliance with important public health measures such as vaccination uptake or physical distancing guidelines 3

Public health organisations have recognised the crucial role of social media in shaping the public debate and are working to utilise social media platforms to inform the public, combat misinformation, and improve health knowledge, attitudes, and behaviour. However, causal research on how social media information affects actual health behaviour is inconclusive, primarily because of methodological challenges associated with connecting online activity to offline actions and accurately measuring behavioural outcomes. 5 Thus, exploring the complex relations between information consumption, personal beliefs, and societal effects remains an important area of study. The development of vaccines against covid-19 was accompanied by an infodemic—an overabundance of information, not all of which is accurate. 6 Study of this phenomenon provides useful insight into the interplay between social media and health behaviours and the opportunities and challenges for research and practice.

Access to data on misinformation and health behaviour

Social media have provided unprecedented opportunities through which health information, including misinformation, can be amplified and spread. However, the impact of exposure to and interaction with misinformation on health behaviour remains a subject of debate within the scientific community. 7 While considerable evidence indicates that misinformation can affect knowledge, attitudes, or behavioural intentions, reaching a consensus in the scientific community on the links between social media and actual health behaviour has been challenging because of a lack of data and inherent limitations in study design.

A recent systematic review of randomised controlled trials, for example, highlighted the need for more conceptual and theoretical work on the causal pathways through which misinformation shapes people’s beliefs and behaviours. 8 This influence is often indirect: exposure to misinformation may change health behaviour by shaping psychological factors such as beliefs, feelings, and motivations (the so called psychological antecedents), which are commonly used to explain and predict behaviours. However, the roles of potential mediators such as emotions, social norms, and trust are still poorly understood. While all the studies in that review assessed the effect of misinformation on antecedents (intentions, attitudes, and subjective norms), only two of them measured actual behaviour. These studies included behavioural measures of activism, such as the act of signing petitions, yet none examined the effects of misinformation exposure on direct health measures or behaviours, such as vaccination. Indeed, the literature is unclear about the causal effect of individual online activity on behaviour. For example, while some research shows that risk perceptions and vaccination intentions can be affected by short visits to antivaccination websites, 9 exposure to antivaccination comments posted on news stories online appears to have little influence on individuals’ perspectives regarding vaccines, although it could potentially undermine individuals’ trust in important health communication institutions. 10

Furthermore, drawing links or establishing causality is not a trivial endeavour. One important obstacle to understanding the effect of social media on behaviour lies in the challenge of linking online activity with offline behaviour. This difficulty stems from factors such as data scarcity and privacy concerns, particularly regarding personal and sensitive information, which complicate efforts to assess how online interactions translate into real world actions. Establishing a clear connection between information consumption on social media platforms and tangible behavioural outcomes, while excluding the influence of external variables, is a complex task, especially when examining behaviour over medium to long term periods. Examining behaviour in the long term requires longitudinal data, which are often lacking because of the resources (such as costs and time) required for such research. Adding to these challenges is the lack of standardised measures and definitions across studies. As we have seen, misinformation is not unanimously defined (box 1), and health behaviours also encompass a variety of actions—in the covid-19 pandemic alone, behaviours ranged from adherence to nonpharmaceutical measures such as physical distancing to lockdowns, from handwashing to vaccinations. Even with clear definitions, measuring health behaviours reliably and accurately remains a considerable challenge. 11 Many studies rely heavily on self reported data, which may have a low correlation with objectively measured behaviour. Moreover, many differences exist between countries in terms of how data of this nature are collected. This variability makes it difficult to extrapolate definitive findings from different settings and contexts.

The covid-19 pandemic presented a unique opportunity to further investigate the potential effect of infodemics on health behaviour, especially on vaccine hesitancy and refusal. The evidence in the literature paints a complex picture of the relationship between social media misinformation and vaccination. On the one hand, researchers have identified a negative relationship between sharing misinformation online and vaccination uptake in the United States of America. 12 Similarly, a study in the UK and US suggests that exposure to misinformation reduces individuals’ intention to vaccinate for their own and others’ protection. 13 These findings highlight the potentially detrimental effect of misinformation on public health efforts. A large review of 205 articles looked more specifically into conspiracies around vaccination (under review, not yet accepted). While some studies showed causal evidence of the effect of exposure to conspiracies on vaccination intentions, most studies were correlative, and behaviour was not investigated. Thus, the findings of many studies suggest that uncertainty persists about the causality of this relationship. Further investigation into the association between social media behaviour and attitudes towards covid-19 vaccines showed that vaccine hesitancy was associated with interaction with and consumption of low quality information online. 14 These results remained significant even after accounting for relevant variables, suggesting that social media behaviour may play an important role in predicting vaccine attitudes. Supporting this finding, a recent systematic review examining the role of social media as a predictor of covid-19 vaccine outcomes showed predominantly negative associations between social media predictors and vaccine perceptions, in particular concerning vaccine hesitancy. However, the evidence suggests a multifaceted landscape, with findings varying across different social media predictors, populations, and platforms. 15 Moreover, while concerns about infodemics shaping individuals’ behavioural intentions are prevalent, some findings suggest a more nuanced reality. Despite the proliferation of information and debate on covid-19 vaccines, the relatively stable and positive trend in vaccine acceptance rates at an aggregated level challenges simplistic explanations about the effect of misinformation. 16

Overall, despite the importance of the effect of social media and misinformation on health behaviour and the extensive assumptions within policy debates, the literature fails to provide definitive conclusions on a clear association between social media and health behaviour. As discussed earlier, measuring behavioural change is challenging due to the scarcity of studies incorporating actual behavioural measures, limitations in laboratory experiments, and difficulties in establishing connections with online activity. Studies are often confined to specific geographical areas, primarily Western countries (notably the US), or limited to specific time periods. In addition, data samples are often constrained by the lack of comprehensive information, such as demographics or geolocation. Furthermore, longitudinal studies are required with extensive access to social media behaviour as well as access to actual behavioural data.

Therefore, further studies are needed to assess the causal effect of social media on offline behaviour. This will require overcoming the ethical issues of data linkage and protection. Such studies will need to integrate social media data with information from different sources, adjusting statistical methods to handle sampling biases, and accounting for the inherent dynamics of social media discussions, which are often characterised by extreme polarisation and user segregation.

Social media dynamics and health

Social media debates are often marked by intense segregation. Users tend to seek out information that aligns with their existing beliefs while dismissing opposing viewpoints. Social media platforms, especially those employing content filtering algorithms, tend to exploit this natural tendency by favouring content aligned with the user’s history and preferences. 17 After all, platforms such as Facebook are built on the foundational unit of the “like,” which represents the most fundamental action a user can take within the environment. Selective exposure to like minded content can contribute to the formation of echo chambers—that is, well separated groups of like minded users—where individuals are surrounded by others who share similar opinions. This phenomenon can act as a breeding ground for the spread of misinformation and hinder its correction. 18 Analysis of Facebook users has shown the existence of opposing and separate communities—provaccine and antivaccine—with the latter group generally being more active. 19 Another study on the public discussion on covid-19 vaccines found similar results, showing users’ inclination to interact with like minded individuals and the presence of segregated communities, with antivaccine groups exhibiting greater cohesiveness and stability over time. 20 Recent research has found that, on Facebook, like minded sources—that is, sources that align with users’ political leanings—are indeed prevalent in what people see on the platform, although they do not seem to affect polarisation. In other words, no measurable effect on polarisation was seen when exposure to content from like minded sources was reduced. 21 Echo chambers and user segregation are crucial factors, as provaccination campaigns, for example, may become confined to individuals who already support vaccination, thus limiting their overall effectiveness. Recent research has explored how users engaged with covid-19 information on social media, and how such engagement changed over time. 22 Despite earlier findings suggesting that false news might spread faster than trustworthy information, analysis of various platforms indicates no substantial difference in the spread of reliable versus questionable information. Posts and interactions with misinformation sources follow similar growth patterns to those of reliable ones, although scaling factors specific to the platform apply. Mainstream platforms and Reddit have a smaller proportion of posts from questionable sources relative to reliable ones, while Gab stands out by notably amplifying posts from questionable sources. These results suggest that the primary drivers behind the spreading of reliable information and misinformation are the specific rules of the platform and the behaviours of groups and individuals engaged in the discussion, rather than the nature of the content.

Opportunities and challenges for using social media to improve health

The public actively engages in public debates through social media platforms based on their prior perceptions and beliefs. Identity is important; the extent to which people identify with their vaccination status is linked to the way social media platforms are used. People who identify more strongly with being unvaccinated are less likely to use traditional news sources and rely more on information from social media and messaging services. 23 In this context, monitoring social media has become an essential and powerful tool for a dynamic and real time understanding of the information available to large parts of the public, their perceptions, and the presence of biases and false beliefs. The vast amount of data generated online enables the exploration and analysis of sociocognitive factors underlying the consumption and processing of information. When examined and aggregated, these data can provide valuable insights and reveal hidden patterns in people’s perspectives. These insights can, in turn, support public communication efforts, ranging from monitoring public sentiment, concerns, and reactions to helping identify the informational needs of the population. Ultimately, this information can drive the development of recommendations aimed at improving the effectiveness of communication strategies and health measures. For instance, a recent World Health Organization manual offers a guide to addressing the gap between health guidance recommendations and population behaviour using social listening. 24 Social media sources can be used to respond to specific questions of concern, such as understanding why a certain community remains undervaccinated despite widespread availability of vaccines and strong recommendations for vaccination. This approach may facilitate a deeper understanding of the information environment of the population, their behaviour in seeking health information, and their health behaviours, thereby enabling the development of tailored strategies and recommendations.

Social media analyses usually rely on large amounts of data. However, it is important to acknowledge that these data may relate to unrepresentative segments of the population.25 Therefore, it is crucial to pay careful attention to sample creation, which involves selecting a smaller subset of data from a larger population using a predefined selection method. This statistical challenge, known as sample selection bias, must be duly considered when seeking information about the overall population or about specific groups who are less inclined to use social media. Although often oversimplified, social media presents a varied landscape, and the extent of sample selection problems may vary across countries and platforms.26 For instance, Facebook usually covers a broader spectrum of the population in terms of both audience size and diversity of social groups, while X (formerly Twitter) and TikTok predominantly cater to specific subgroups, such as professionals and younger individuals. Additionally, the varying levels of user engagement in actively participating in conversations on social networks through comments, posts, likes, and other forms of interaction can also contribute to sample bias. Combining social data with information from other sources (for example, census data, electoral rolls, surveys, and health data) and employing statistical methods to adjust for sampling biases are thus crucial to obtaining solid research outcomes (see, in another context, previous work on the Brexit referendum27). Such considerations are important for designing health communication campaigns that are inclusive and resonate with target audiences.
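To make the adjustment step concrete, below is a minimal post-stratification sketch in Python; the age groups, population shares, and the provaccine outcome column are invented for illustration, and the article itself does not prescribe any particular weighting method.

```python
import pandas as pd

# Hypothetical sample of social media users with an outcome of interest
sample = pd.DataFrame({
    "age_group": ["18-29", "18-29", "30-49", "50+", "30-49", "18-29"],
    "provaccine": [1, 0, 1, 1, 0, 1],
})

# Known population shares, e.g. from census data (numbers invented here)
population_share = {"18-29": 0.20, "30-49": 0.35, "50+": 0.45}

# Post-stratification: weight each stratum by population share / sample share
sample_share = sample["age_group"].value_counts(normalize=True)
sample["weight"] = sample["age_group"].map(
    lambda g: population_share[g] / sample_share[g]
)

raw_est = sample["provaccine"].mean()
weighted_est = (sample["provaccine"] * sample["weight"]).sum() / sample["weight"].sum()
print(f"raw: {raw_est:.2f}, weighted: {weighted_est:.2f}")
```

In this toy example, younger users are over-represented in the sample relative to the census shares, so their responses are weighted down before estimating the population-level outcome.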

Learning the lessons

The covid-19 pandemic has heightened concerns about the potential for misinformation to pose risks to public health. Yet the issue extends beyond the recent crisis and is important in shaping our response to future pandemics. Ensuring dissemination of accurate information is essential not only to safeguard public health in the present but also to mitigate risks and enhance preparedness for potential future crises. Assessing the effect of social media use on health behaviour is a complex task, with current evidence yet to be consolidated. To avoid biased outcomes, a comprehensive, multidimensional, and causal approach is necessary when investigating the interplay between online information and real-world behaviour. Understanding causal relationships and their drivers will allow interventions to be developed to reduce the detrimental effects of online information on health. It is also essential to define clear outcomes. Indeed, online information about health can cover various aspects, including the formation of public opinion, effects on public discourse and agenda setting, interactions between doctors and patients, as well as influences on health behaviours in the short, medium, or long term.28

Further research is required to identify vulnerable populations and gain a better understanding of the sociodemographic and ideological factors influencing users’ behaviour. Cultural differences in information consumption and behaviours must also be considered to develop targeted and effective interventions and mitigate the influence of health misinformation.29

Addressing these questions requires robust data and study designs, with collaboration from digital platforms being crucial in accessing such data. A recent cooperation with Meta30 allowed researchers to conduct multiple experiments and provided extensive access to user data from Facebook and Instagram. However, the success of this model relies entirely on the willingness of social media companies to participate. This highlights the need for ethical, transparent collaboration and advocates for the democratisation of social media research through equitable data access.31 Future studies should replicate these efforts in contexts other than politics, such as health, and expand research beyond the US to achieve a more comprehensive understanding of the effect of social media on behaviour globally.

Acknowledgments

AB, FZ, and WQ acknowledge support from the IRIS Infodemic Coalition (UK government, grant No SCH-00001-3391).

Contributors and sources: The authors have collective experience in studying social dynamics and misinformation. AB, FZ, and WQ have expertise in data science and are cofounders of the IRIS Academic Research Group, dedicated to understanding infodemics and fostering healthy information ecosystems through cross disciplinary collaboration. CB specialises in health communication and decision making. MD is an economist with expertise in investigating social issues, including public discourse and health related decisions. All authors contributed to the writing of the paper and developing the list of references. FZ is the guarantor.

Competing interests: We have read and understood the BMJ policy on declaration of interests and have no interests to declare.

Provenance and peer review: Commissioned; externally peer reviewed.

This article is part of a collection that was proposed by the Advancing Health Online Initiative (AHO), a consortium of partners including Meta and MSD, and several non-profit collaborators ( https://www.bmj.com/social-media-influencing-vaccination ). Research articles were submitted following invitations by The BMJ and associated BMJ journals, after consideration by an internal BMJ committee. Non-research articles were independently commissioned by The BMJ with advice from Sander van der Linden, Alison Buttenheim, Briony Swire-Thompson, and Charles Shey Wiysonge. Peer review, editing, and decisions to publish articles were carried out by the respective BMJ journals. Emma Veitch was the editor for this collection.

This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.

  • Kemp S. Digital 2023: global overview report. DataReportal, 2023. https://datareportal.com/reports/digital-2023-global-overview-report
  • Reuters Institute. Digital news report 2023. https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2023-06/Digital_News_Report_2023.pdf
  • World Health Organization. Infodemic. 2023. https://www.who.int/health-topics/infodemic
  • Valensise CM, Cinelli M, Nadini M, Galeazzi A, Peruzzi A, Etta G, et al. Lack of evidence for correlation between COVID-19 infodemic and vaccine acceptance. arXiv 2021:2107.07946. [Preprint.] DOI: 10.48550/arXiv.2107.07946
  • World Health Organization, United Nations Children’s Fund. How to build an infodemic insights report in six steps. 2023. https://www.who.int/publications/i/item/9789240075658
  • Tollefson J. Tweaking Facebook feeds is no easy fix for polarization, studies find. Nature 2023 Jul 27. DOI: 10.1038/d41586-023-02420-z



Fake news and the spread of misinformation: A research roundup

This collection of research offers insights into the impacts of fake news and other forms of misinformation, including fake Twitter images, and how people use the internet to spread rumors and misinformation.


This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.

by Denise-Marie Ordway, The Journalist's Resource, September 1, 2017


It’s too soon to say whether Google’s and Facebook’s attempts to clamp down on fake news will have a significant impact. But fabricated stories posing as serious journalism are not likely to go away as they have become a means for some writers to make money and potentially influence public opinion. Even as Americans recognize that fake news causes confusion about current issues and events, they continue to circulate it. A December 2016 survey by the Pew Research Center suggests that 23 percent of U.S. adults have shared fake news, knowingly or unknowingly, with friends and others.

“Fake news” is a term that can mean different things, depending on the context. News satire is often called fake news as are parodies such as the “Saturday Night Live” mock newscast Weekend Update. Much of the fake news that flooded the internet during the 2016 election season consisted of written pieces and recorded segments promoting false information or perpetuating conspiracy theories. Some news organizations published reports spotlighting examples of hoaxes, fake news and misinformation  on Election Day 2016.

The news media has written a lot about fake news and other forms of misinformation, but scholars are still trying to understand it — for example, how it travels and why some people believe it and even seek it out. Below, Journalist’s Resource has pulled together academic studies to help newsrooms better understand the problem and its impacts. Two other resources that may be helpful are the Poynter Institute’s tips on debunking fake news stories and the First Draft Partner Network, a global collaboration of newsrooms, social media platforms and fact-checking organizations that was launched in September 2016 to battle fake news. In mid-2018, JR’s managing editor, Denise-Marie Ordway, wrote an article for Harvard Business Review explaining what researchers know to date about the amount of misinformation people consume, why they believe it and the best ways to fight it.

—————————

“The Science of Fake News” Lazer, David M. J.; et al. Science, March 2018. DOI: 10.1126/science.aao2998.

Summary: “The rise of fake news highlights the erosion of long-standing institutional bulwarks against misinformation in the internet age. Concern over the problem is global. However, much remains unknown regarding the vulnerabilities of individuals, institutions, and society to manipulations by malicious actors. A new system of safeguards is needed. Below, we discuss extant social and computer science research regarding belief in fake news and the mechanisms by which it spreads. Fake news has a long history, but we focus on unanswered scientific questions raised by the proliferation of its most recent, politically oriented incarnation. Beyond selected references in the text, suggested further reading can be found in the supplementary materials.”

“Who Falls for Fake News? The Roles of Bullshit Receptivity, Overclaiming, Familiarity, and Analytical Thinking” Pennycook, Gordon; Rand, David G. May 2018. Available at SSRN. DOI: 10.2139/ssrn.3023545.

Abstract:  “Inaccurate beliefs pose a threat to democracy and fake news represents a particularly egregious and direct avenue by which inaccurate beliefs have been propagated via social media. Here we present three studies (MTurk, N = 1,606) investigating the cognitive psychological profile of individuals who fall prey to fake news. We find consistent evidence that the tendency to ascribe profundity to randomly generated sentences — pseudo-profound bullshit receptivity — correlates positively with perceptions of fake news accuracy, and negatively with the ability to differentiate between fake and real news (media truth discernment). Relatedly, individuals who overclaim regarding their level of knowledge (i.e. who produce bullshit) also perceive fake news as more accurate. Conversely, the tendency to ascribe profundity to prototypically profound (non-bullshit) quotations is not associated with media truth discernment; and both profundity measures are positively correlated with willingness to share both fake and real news on social media. We also replicate prior results regarding analytic thinking — which correlates negatively with perceived accuracy of fake news and positively with media truth discernment — and shed further light on this relationship by showing that it is not moderated by the presence versus absence of information about the new headline’s source (which has no effect on perceived accuracy), or by prior familiarity with the news headlines (which correlates positively with perceived accuracy of fake and real news). Our results suggest that belief in fake news has similar cognitive properties to other forms of bullshit receptivity, and reinforce the important role that analytic thinking plays in the recognition of misinformation.”

“Social Media and Fake News in the 2016 Election” Allcott, Hunt; Gentzkow, Matthew. Working paper for the National Bureau of Economic Research, No. 23089, 2017.

Abstract: “We present new evidence on the role of false stories circulated on social media prior to the 2016 U.S. presidential election. Drawing on audience data, archives of fact-checking websites, and results from a new online survey, we find: (i) social media was an important but not dominant source of news in the run-up to the election, with 14 percent of Americans calling social media their “most important” source of election news; (ii) of the known false news stories that appeared in the three months before the election, those favoring Trump were shared a total of 30 million times on Facebook, while those favoring Clinton were shared eight million times; (iii) the average American saw and remembered 0.92 pro-Trump fake news stories and 0.23 pro-Clinton fake news stories, with just over half of those who recalled seeing fake news stories believing them; (iv) for fake news to have changed the outcome of the election, a single fake article would need to have had the same persuasive effect as 36 television campaign ads.”

“Debunking: A Meta-Analysis of the Psychological Efficacy of Messages Countering Misinformation” Chan, Man-pui Sally; Jones, Christopher R.; Jamieson, Kathleen Hall; Albarracín, Dolores. Psychological Science, September 2017. DOI: 10.1177/0956797617714579.

Abstract: “This meta-analysis investigated the factors underlying effective messages to counter attitudes and beliefs based on misinformation. Because misinformation can lead to poor decisions about consequential matters and is persistent and difficult to correct, debunking it is an important scientific and public-policy goal. This meta-analysis (k = 52, N = 6,878) revealed large effects for presenting misinformation (ds = 2.41–3.08), debunking (ds = 1.14–1.33), and the persistence of misinformation in the face of debunking (ds = 0.75–1.06). Persistence was stronger and the debunking effect was weaker when audiences generated reasons in support of the initial misinformation. A detailed debunking message correlated positively with the debunking effect. Surprisingly, however, a detailed debunking message also correlated positively with the misinformation-persistence effect.”

“Displacing Misinformation about Events: An Experimental Test of Causal Corrections” Nyhan, Brendan; Reifler, Jason. Journal of Experimental Political Science, 2015. DOI: 10.1017/XPS.2014.22.

Abstract: “Misinformation can be very difficult to correct and may have lasting effects even after it is discredited. One reason for this persistence is the manner in which people make causal inferences based on available information about a given event or outcome. As a result, false information may continue to influence beliefs and attitudes even after being debunked if it is not replaced by an alternate causal explanation. We test this hypothesis using an experimental paradigm adapted from the psychology literature on the continued influence effect and find that a causal explanation for an unexplained event is significantly more effective than a denial even when the denial is backed by unusually strong evidence. This result has significant implications for how to most effectively counter misinformation about controversial political events and outcomes.”

“Rumors and Health Care Reform: Experiments in Political Misinformation” Berinsky, Adam J. British Journal of Political Science, 2015. DOI: 10.1017/S0007123415000186.

Abstract: “This article explores belief in political rumors surrounding the health care reforms enacted by Congress in 2010. Refuting rumors with statements from unlikely sources can, under certain circumstances, increase the willingness of citizens to reject rumors regardless of their own political predilections. Such source credibility effects, while well known in the political persuasion literature, have not been applied to the study of rumor. Though source credibility appears to be an effective tool for debunking political rumors, risks remain. Drawing upon research from psychology on ‘fluency’ — the ease of information recall — this article argues that rumors acquire power through familiarity. Attempting to quash rumors through direct refutation may facilitate their diffusion by increasing fluency. The empirical results find that merely repeating a rumor increases its power.”

“Rumors and Factitious Informational Blends: The Role of the Web in Speculative Politics” Rojecki, Andrew; Meraz, Sharon. New Media & Society, 2016. DOI: 10.1177/1461444814535724.

Abstract: “The World Wide Web has changed the dynamics of information transmission and agenda-setting. Facts mingle with half-truths and untruths to create factitious informational blends (FIBs) that drive speculative politics. We specify an information environment that mirrors and contributes to a polarized political system and develop a methodology that measures the interaction of the two. We do so by examining the evolution of two comparable claims during the 2004 presidential campaign in three streams of data: (1) web pages, (2) Google searches, and (3) media coverage. We find that the web is not sufficient alone for spreading misinformation, but it leads the agenda for traditional media. We find no evidence for equality of influence in network actors.”

“Analyzing How People Orient to and Spread Rumors in Social Media by Looking at Conversational Threads” Zubiaga, Arkaitz; et al. PLOS ONE, 2016. DOI: 10.1371/journal.pone.0150989.

Abstract: “As breaking news unfolds people increasingly rely on social media to stay abreast of the latest updates. The use of social media in such situations comes with the caveat that new information being released piecemeal may encourage rumors, many of which remain unverified long after their point of release. Little is known, however, about the dynamics of the life cycle of a social media rumor. In this paper we present a methodology that has enabled us to collect, identify and annotate a dataset of 330 rumor threads (4,842 tweets) associated with 9 newsworthy events. We analyze this dataset to understand how users spread, support, or deny rumors that are later proven true or false, by distinguishing two levels of status in a rumor life cycle i.e., before and after its veracity status is resolved. The identification of rumors associated with each event, as well as the tweet that resolved each rumor as true or false, was performed by journalist members of the research team who tracked the events in real time. Our study shows that rumors that are ultimately proven true tend to be resolved faster than those that turn out to be false. Whilst one can readily see users denying rumors once they have been debunked, users appear to be less capable of distinguishing true from false rumors when their veracity remains in question. In fact, we show that the prevalent tendency for users is to support every unverified rumor. We also analyze the role of different types of users, finding that highly reputable users such as news organizations endeavor to post well-grounded statements, which appear to be certain and accompanied by evidence. Nevertheless, these often prove to be unverified pieces of information that give rise to false rumors. Our study reinforces the need for developing robust machine learning techniques that can provide assistance in real time for assessing the veracity of rumors. The findings of our study provide useful insights for achieving this aim.”

“Miley, CNN and The Onion” Berkowitz, Dan; Schwartz, David Asa. Journalism Practice, 2016. DOI: 10.1080/17512786.2015.1006933.

Abstract: “Following a twerk-heavy performance by Miley Cyrus on the Video Music Awards program, CNN featured the story on the top of its website. The Onion — a fake-news organization — then ran a satirical column purporting to be by CNN’s Web editor explaining this decision. Through textual analysis, this paper demonstrates how a Fifth Estate comprised of bloggers, columnists and fake news organizations worked to relocate mainstream journalism back to within its professional boundaries.”

“Emotions, Partisanship, and Misperceptions: How Anger and Anxiety Moderate the Effect of Partisan Bias on Susceptibility to Political Misinformation” Weeks, Brian E. Journal of Communication, 2015. DOI: 10.1111/jcom.12164.

Abstract: “Citizens are frequently misinformed about political issues and candidates but the circumstances under which inaccurate beliefs emerge are not fully understood. This experimental study demonstrates that the independent experience of two emotions, anger and anxiety, in part determines whether citizens consider misinformation in a partisan or open-minded fashion. Anger encourages partisan, motivated evaluation of uncorrected misinformation that results in beliefs consistent with the supported political party, while anxiety at times promotes initial beliefs based less on partisanship and more on the information environment. However, exposure to corrections improves belief accuracy, regardless of emotion or partisanship. The results indicate that the unique experience of anger and anxiety can affect the accuracy of political beliefs by strengthening or attenuating the influence of partisanship.”

“Deception Detection for News: Three Types of Fakes” Rubin, Victoria L.; Chen, Yimin; Conroy, Niall J. Proceedings of the Association for Information Science and Technology, 2015, Vol. 52. DOI: 10.1002/pra2.2015.145052010083.

Abstract: “A fake news detection system aims to assist users in detecting and filtering out varieties of potentially deceptive news. The prediction of the chances that a particular news item is intentionally deceptive is based on the analysis of previously seen truthful and deceptive news. A scarcity of deceptive news, available as corpora for predictive modeling, is a major stumbling block in this field of natural language processing (NLP) and deception detection. This paper discusses three types of fake news, each in contrast to genuine serious reporting, and weighs their pros and cons as a corpus for text analytics and predictive modeling. Filtering, vetting, and verifying online information continues to be essential in library and information science (LIS), as the lines between traditional news and online information are blurring.”

“When Fake News Becomes Real: Combined Exposure to Multiple News Sources and Political Attitudes of Inefficacy, Alienation, and Cynicism” Balmas, Meital. Communication Research, 2014, Vol. 41. DOI: 10.1177/0093650212453600.

Abstract: “This research assesses possible associations between viewing fake news (i.e., political satire) and attitudes of inefficacy, alienation, and cynicism toward political candidates. Using survey data collected during the 2006 Israeli election campaign, the study provides evidence for an indirect positive effect of fake news viewing in fostering the feelings of inefficacy, alienation, and cynicism, through the mediator variable of perceived realism of fake news. Within this process, hard news viewing serves as a moderator of the association between viewing fake news and their perceived realism. It was also demonstrated that perceived realism of fake news is stronger among individuals with high exposure to fake news and low exposure to hard news than among those with high exposure to both fake and hard news. Overall, this study contributes to the scientific knowledge regarding the influence of the interaction between various types of media use on political effects.”

“Faking Sandy: Characterizing and Identifying Fake Images on Twitter During Hurricane Sandy” Gupta, Aditi; Lamba, Hemank; Kumaraguru, Ponnurangam; Joshi, Anupam. Proceedings of the 22nd International Conference on World Wide Web, 2013. DOI: 10.1145/2487788.2488033.

Abstract: “In today’s world, online social media plays a vital role during real world events, especially crisis events. There are both positive and negative effects of social media coverage of events. It can be used by authorities for effective disaster management or by malicious entities to spread rumors and fake news. The aim of this paper is to highlight the role of Twitter during Hurricane Sandy (2012) to spread fake images about the disaster. We identified 10,350 unique tweets containing fake images that were circulated on Twitter during Hurricane Sandy. We performed a characterization analysis, to understand the temporal, social reputation and influence patterns for the spread of fake images. Eighty-six percent of tweets spreading the fake images were retweets, hence very few were original tweets. Our results showed that the top 30 users out of 10,215 users (0.3 percent) resulted in 90 percent of the retweets of fake images; also network links such as follower relationships of Twitter, contributed very little (only 11 percent) to the spread of these fake photos URLs. Next, we used classification models, to distinguish fake images from real images of Hurricane Sandy. Best results were obtained from Decision Tree classifier, we got 97 percent accuracy in predicting fake images from real. Also, tweet-based features were very effective in distinguishing fake images tweets from real, while the performance of user-based features was very poor. Our results showed that automated techniques can be used in identifying real images from fake images posted on Twitter.”
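To illustrate the kind of pipeline such a study implies (without reproducing the authors’ actual features or data, which are not available here), a minimal scikit-learn decision tree over invented tweet-level features might look like this:

```python
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Hypothetical tweet-level features (e.g., retweet flag, URL count,
# text length); labels mark whether the attached image was fake.
X = [
    [1, 2, 140], [0, 0, 80], [1, 1, 120], [0, 0, 60],
    [1, 3, 139], [0, 1, 95], [1, 2, 130], [0, 0, 70],
]
y = [1, 0, 1, 0, 1, 0, 1, 0]  # 1 = fake image, 0 = real image

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)
clf = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

The feature values here are placeholders; the study’s reported 97 percent accuracy comes from its own Hurricane Sandy feature set, not from anything this sketch could reproduce.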

“The Impact of Real News about ‘Fake News’: Intertextual Processes and Political Satire” Brewer, Paul R.; Young, Dannagal Goldthwaite; Morreale, Michelle. International Journal of Public Opinion Research, 2013. DOI: 10.1093/ijpor/edt015.

Abstract: “This study builds on research about political humor, press meta-coverage, and intertextuality to examine the effects of news coverage about political satire on audience members. The analysis uses experimental data to test whether news coverage of Stephen Colbert’s Super PAC influenced knowledge and opinion regarding Citizens United, as well as political trust and internal political efficacy. It also tests whether such effects depended on previous exposure to The Colbert Report (Colbert’s satirical television show) and traditional news. Results indicate that exposure to news coverage of satire can influence knowledge, opinion, and political trust. Additionally, regular satire viewers may experience stronger effects on opinion, as well as increased internal efficacy, when consuming news coverage about issues previously highlighted in satire programming.”

“With Facebook, Blogs, and Fake News, Teens Reject Journalistic ‘Objectivity’” Marchi, Regina. Journal of Communication Inquiry, 2012. DOI: 10.1177/0196859912458700.

Abstract: “This article examines the news behaviors and attitudes of teenagers, an understudied demographic in the research on youth and news media. Based on interviews with 61 racially diverse high school students, it discusses how adolescents become informed about current events and why they prefer certain news formats to others. The results reveal changing ways news information is being accessed, new attitudes about what it means to be informed, and a youth preference for opinionated rather than objective news. This does not indicate that young people disregard the basic ideals of professional journalism but, rather, that they desire more authentic renderings of them.”

Keywords: alt-right, credibility, truth discovery, post-truth era, fact checking, news sharing, news literacy, misinformation, disinformation



Research News


Why you shouldn't worry about invasive Joro spiders


June 14, 2024 • Joro spiders are spreading across the east coast. They are an invasive species that most likely arrived in shipping containers from eastern Asia. Today, we look into why some people find them scary, why to not panic about them and what their trajectory illustrates about the wider issue of invasive species.

Misconduct claims may derail MDMA psychedelic treatment for PTSD


June 3, 2024 • People with post-traumatic stress disorder (PTSD) may soon have a new treatment option: MDMA, the chemical found in ecstasy. In August, the Food and Drug Administration plans to decide whether MDMA-assisted therapy for PTSD will be approved for market based on years of research. But serious allegations of research misconduct may derail the approval timeline.


Trump repeats claims — without evidence — that his trial was rigged

May 31, 2024 • Former President Donald Trump reiterated many of his claims — without evidence — that his criminal trial was rigged, a day after a New York jury found him guilty of 34 counts of falsifying business records.

Plastic junk? Researchers find tiny particles in men's testicles


May 22, 2024 • The new study has scientists concerned that microplastics may be contributing to reproductive health issues.

To escape hungry bats, these flying beetles create an ultrasound 'illusion'


May 22, 2024 • A study of tiger beetles has found a possible explanation for why they produce ultrasound noises right before an echolocating bat swoops in for the kill.


When sea otters lose their favorite foods, they can use tools to go after new ones

May 20, 2024 • Some otters rely on tools to bust open hard-shelled prey items like snails, and a new study suggests this tool use is helping them to survive as their favorite, easier-to-eat foods disappear.

On this unassuming trail near LA, bird watchers see something spectacular


May 13, 2024 • At Bear Divide, just outside Los Angeles, you can see a rare spectacle of nature. This is one of the only places in the western United States where you can see bird migration during daylight hours.

AI gets scientists one step closer to mapping the organized chaos in our cells


May 13, 2024 • As artificial intelligence seeps into some realms of society, it rushes into others. One area it's making a big difference is protein science — as in the "building blocks of life," proteins! Producer Berly McCoy talks to host Emily Kwong about the newest advance in protein science: AlphaFold3, an AI program from Google DeepMind. Plus, they talk about the wider field of AI protein science and why researchers hope it will solve a range of problems, from disease to the climate.

NOAA Issues First Severe Geomagnetic Storm Watch Since 2005


May 10, 2024 • Scientists at the National Oceanic and Atmospheric Administration observed a cluster of sunspots on the surface of the sun this week. With them came solar flares that kicked off a severe geomagnetic storm. That storm is expected to last throughout the weekend as at least five coronal mass ejections — chunks of the sun — are flung out into space, towards Earth! NOAA uses a five point scale to rate these storms, and this weekend's storm is a G4. It's expected to produce auroras as far south as Alabama. To contextualize this storm, we are looking back at the largest solar storm on record: the Carrington Event.

In a decade of drug overdoses, more than 320,000 American children lost a parent


May 8, 2024 • New research documents how many children lost a parent to an opioid or other overdose in the period from 2011 to 2021. Bereaved children face elevated risks to their physical and emotional health.

Largest-ever marine reptile found with help from an 11-year-old girl


May 6, 2024 • A father and daughter discovered fossil remnants of a giant ichthyosaur that scientists say may have been the largest-known marine reptile to ever swim the seas.

When PTO stands for 'pretend time off': Doctors struggle to take real breaks


May 4, 2024 • What's a typical vacation activity for doctors? Work. A new study finds that most physicians do work on a typical day off. In this essay, a family doctor considers why that is and why it matters.

'Dance Your Ph.D.' winner on science, art, and embracing his identity


May 4, 2024 • Weliton Menário Costa's award-winning music video showcases his research on kangaroo personality and behavior — and offers a celebration of human diversity, too.

Orangutan in the wild applied medicinal plant to heal its own injury, biologists say


May 3, 2024 • It is "the first known case of active wound treatment in a wild animal with a medical plant," biologist Isabelle Laumer told NPR. She says the orangutan, called Rakus, is now thriving.

Launching an effective bird flu vaccine quickly could be tough, scientists warn


May 3, 2024 • Federal health officials say the U.S. has the building blocks to make a vaccine to protect humans from bird flu, if needed. But experts warn we're nowhere near prepared for another pandemic.

For birds, siblinghood can be a matter of life or death


May 1, 2024 • Some birds kill their siblings soon after hatching. Other birds spend their whole lives with their siblings and will even risk their lives to help each other.

How do you counter misinformation? Critical thinking is step one


April 30, 2024 • An economic perspective on misinformation

Scientists restore brain cells impaired by a rare genetic disorder


April 30, 2024 • A therapy that restores brain cells impaired by a rare genetic disorder may offer a strategy for treating conditions like autism, epilepsy, and schizophrenia.

Helping women get better sleep by calming the relentless 'to-do lists' in their heads


April 26, 2024 • A recent survey found that Americans' sleep patterns have been getting worse. Adult women under 50 are among the most sleep-deprived demographics.

As bird flu spreads in cows, here are 4 big questions scientists are trying to answer


April 26, 2024 • Health officials say there's very little risk to humans from the bird flu outbreak among dairy cattle, but there's still much they don't know. Here are four questions scientists are trying to answer.

Animals get stressed during eclipses. But not for the reason you think


April 25, 2024 • After studying various species earlier this month, some scientists now say they understand the origin of animal behavior during solar eclipses.

A woman with failing kidneys receives genetically modified pig organs


April 24, 2024 • Surgeons transplanted a kidney and thymus gland from a gene-edited pig into a 54-year-old woman in an attempt to extend her life. It's the latest experimental use of animal organs in humans.


Computer Science > Information Retrieval

Title: Evaluating Ensemble Methods for News Recommender Systems

Abstract: News recommendation is crucial for facilitating individuals' access to articles, particularly amid the increasingly digital landscape of news consumption. Consequently, extensive research is dedicated to News Recommender Systems (NRS) with increasingly sophisticated algorithms. Despite this sustained scholarly inquiry, there exists a notable research gap regarding the potential synergy achievable by amalgamating these algorithms to yield superior outcomes. This paper endeavours to address this gap by demonstrating how ensemble methods can be used to combine many diverse state-of-the-art algorithms to achieve superior results on the Microsoft News dataset (MIND). Additionally, we identify scenarios where ensemble methods fail to improve results and offer explanations for this occurrence. Our findings demonstrate that a combination of NRS algorithms can outperform individual algorithms, provided that the base learners are sufficiently diverse, with improvements of up to 5% observed for an ensemble consisting of a content-based BERT approach and the collaborative filtering LSTUR algorithm. Additionally, our results demonstrate the absence of any improvement when combining insufficiently distinct methods. These findings provide insight into successful approaches of ensemble methods in NRS and advocate for the development of better systems through appropriate ensemble solutions.
Subjects: Information Retrieval (cs.IR); Artificial Intelligence (cs.AI)
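For intuition about what such an amalgamation can look like in practice, the sketch below blends normalised scores from two base recommenders with a weighted sum; the min-max normalisation, function names, and toy scores are assumptions made for illustration, not the specific ensemble evaluated in the paper.

```python
import numpy as np

def minmax(scores):
    """Rescale scores to [0, 1] so different recommenders are comparable."""
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo) if hi > lo else np.zeros_like(scores)

def ensemble_rank(candidate_ids, score_fns, weights=None):
    """Rank candidates by a weighted sum of normalised base-learner scores."""
    weights = weights or [1.0] * len(score_fns)
    blended = np.zeros(len(candidate_ids))
    for w, fn in zip(weights, score_fns):
        blended += w * minmax(fn(candidate_ids))
    return [candidate_ids[i] for i in np.argsort(-blended)]

# Toy stand-ins for a content-based and a collaborative filtering model
content_scores = lambda ids: np.array([0.9, 0.2, 0.5])
collab_scores = lambda ids: np.array([0.1, 0.8, 0.7])
print(ensemble_rank(["a", "b", "c"], [content_scores, collab_scores]))
```

As the abstract notes, a blend like this only helps when the base learners disagree in informative ways; averaging two near-identical scorers simply reproduces their shared ranking.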



News consumption in the UK

This series of reports looks at UK adults' consumption across television, radio, print, social media and other websites or apps.

We look at the UK news market as a whole, but also publish individual reports for Northern Ireland, Scotland and Wales.

Documents published: 20 July 2023.

Teen News Consumption Survey

Our research into news consumption across television, radio, print and online among children and teenagers aged 12-15.

Documents published: 26 May 2023.

News Consumption Survey

Our research into news consumption across television, radio, print and online.

Documents published: 26 May 2023.

Older reports

News Consumption in the UK: 2022 (PDF, 3.9 MB)

News Consumption in the UK: overview of 2022 (PDF, 218.2 KB)

Adults News Consumption Survey combined F2F and online technical report (PDF, 213.5 KB)

Adults News Consumption Survey F2F questionnaire (PDF, 512.3 KB)

Adults News Consumption Survey online questionnaire (PDF, 506.7 KB)

Adults News Consumption Survey combined F2F and online data tables (XLSX, 9.8 MB)

Adults News Consumption Survey combined F2F and online respondent level (CSV, 29.3 MB)

Adults News Consumption Survey online sample only data tables (XLSX, 10.4 MB)

Adults News Consumption Survey online sample only respondent level (CSV, 37.3 MB)

Adults News Consumption Survey online sample only technical report (PDF, 164.4 KB)

Teens News Consumption Survey technical report (PDF, 197.6 KB)

Teens News Consumption Survey questionnaire (PDF, 317.9 KB)

Teens News Consumption Survey data tables (XLSX, 3.3 MB)

Teens News Consumption Survey respondent level (CSV, 2.8 MB)

Nations reports

News Consumption Survey 2022 - Wales (PDF, 357.6 KB)

News Consumption Survey 2022 - Scotland (PDF, 858.7 KB)

News Consumption Survey 2022 - Northern Ireland (PDF, 535.9 KB)

News Consumption in the UK: 2021 report (PPTX, 4.6 MB)

News Consumption in the UK: overview of 2021 findings (PDF, 193.2 KB)

Adults News Consumption Survey 2021 combined CATI and online data tables (XLSX, 1.6 MB)

Adults News Consumption Survey 2021 combined CATI and online respondent level (CSV, 7.7 MB)

Adults News Consumption Survey 2021 combined CATI and online technical report (PDF, 193.9 KB)

Adults News Consumption Survey 2021 online sample only data tables (XLSX, 11.7 MB)

Adults News Consumption Survey 2021 online sample only respondent level (CSV, 36.5 MB)

Adults News Consumption Survey 2021 online sample only technical report (PDF, 186.0 KB)

Adults News Consumption Survey 2021 online questionnaire (PDF, 486.6 KB)

Adults News Consumption Survey 2021 CATI questionnaire (PDF, 472.6 KB)

Childrens News Consumption Survey 2021 data tables (XLSX, 4.1 MB)

Childrens News Consumption Survey 2021 respondent level (CSV, 2.9 MB)

Childrens News Consumption Survey 2021 technical report (PDF, 173.1 KB)

Childrens News Consumption Survey 2021 questionnaire (PDF, 320.2 KB)

News Consumption Survey 2021 - Wales (PPTX, 608.3 KB)

News Consumption Survey 2021 - Scotland (PPTX, 517.4 KB)

News Consumption Survey 2021 - Northern Ireland (PPTX, 623.5 KB)

News consumption in the UK: 2020 report (PDF, 6.7 MB)

News consumption in the UK: overview of 2020 findings (PDF, 297.5 KB)

Adults news consumption in the UK: 2020 data tables (XLSX, 10.5 MB)

Adults news consumption in the UK: 2020 questionnaire (PDF, 530.7 KB)

Adults news consumption in the UK: 2020 raw data (CSV, 39.2 MB)

Adults news consumption in the UK: 2020 technical report (PDF, 189.5 KB)

Children's news consumption in the UK: 2020 data tables (XLSX, 3.2 MB)

Children's news consumption in the UK: 2020 questionnaire (PDF, 384.0 KB)

Children's news consumption in the UK: 2020 raw data (CSV, 2.8 MB)

Children's news consumption in the UK: 2020 technical report (PDF, 158.2 KB)

News Consumption Survey - Scotland (PDF, 951.7 KB)

News Consumption Survey - Northern Ireland (PDF, 891.1 KB)

News Consumption Survey - Wales (PDF, 816.5 KB)

News consumption in the UK: 2019 report (PDF, 2.2 MB)

News consumption in the UK: overview of 2019 findings (PDF, 270.3 KB)

News consumption in the UK: 2019 data tables (XLSX, 6.7 MB)

News consumption in the UK: 2019 questionnaire (PDF, 417.5 KB)

News consumption in the UK: 2019 raw data (CSV, 80.9 MB)

News consumption in the UK: 2019 technical report (PDF, 164.2 KB)

News Consumption Survey - Scotland (PDF, 682.6 KB)

News Consumption Survey - Northern Ireland (PDF, 490.8 KB)

News Consumption Survey - Wales (PDF, 737.1 KB)

This report provides the findings of Ofcom’s 2018 research into news consumption across television, radio, print and online. It is published as part of our range of market research reports which examine the consumption of content, and attitudes towards that content, across different platforms.

The aim of this slide pack report is to inform an understanding of news consumption across the UK and within each UK nation. This includes sources and platforms used,  the perceived importance of different outlets for news, attitudes to individual news sources, local news use and news consumption in the nations.

The primary source is Ofcom’s News Consumption Survey. The report also contains information from our Media Tracker survey, and a range of industry currencies including BARB for television viewing, Touchpoints for national newspaper readership, ABC for newspaper circulation, and comScore for online consumption.

Please note, because of changes we have made to the 2018 News Consumption Survey methodology, it is not possible to make direct comparisons to previous data. Further detail on how and why we changed the methodology can be found on slide 143 of the report.

News consumption in the UK: 2018 report (PDF, 2.6 MB)

News consumption in the UK: 2018 data tables (XLSX, 18.2 MB)

News consumption in the UK: 2018 questionnaire (PDF, 2.1 MB)

News consumption in the UK: 2018 raw data

News consumption in the UK: 2018 technical report (PDF, 183.3 KB)

Related content

Media plurality and online news

Our programme of work on the future of media plurality in the UK, and the role of online intermediaries in the news ecosystem.

Media Nations reports

Media Nations reviews key trends in the television and online video sectors, as well as the radio and other audio sectors.


6 key takeaways about the state of the news media in 2020

Every two years, Pew Research Center updates its series of fact sheets on the U.S. news media industry, tracking key audience and economic indicators for a variety of sectors. Here are some key findings about the state of the industry in 2020.

The State of the News Media fact sheets use a range of different methodologies to study the health of the U.S. news industry, including custom analysis of news audience behavior, secondary analysis of industry data and direct reporting to solicit information unavailable elsewhere. All sources are cited in chart and graphic notes or within the text of the report. Read the methodology .

Pew Research Center is a subsidiary of The Pew Charitable Trusts, its primary funder. This is the latest report in Pew Research Center’s ongoing investigation of the state of news, information and journalism in the digital age, a research program funded by The Pew Charitable Trusts, with generous support from the John S. and James L. Knight Foundation.

A line graph showing the estimated advertising and circulation revenue of the newspaper industry

For the first time, newspapers made more money from circulation than from advertising, according to an analysis of Securities and Exchange Commission (SEC) filings of publicly traded newspaper companies. For more than 50 years, U.S. newspapers had more annual revenue from advertising than from circulation (e.g., selling subscriptions or single issues). But with ad revenue in a long-term decline and circulation revenue holding steady, the two streams finally crossed in 2020.

A line graph showing the average audience for cable TV prime news

In a year dominated by major news events , cable news channels saw explosive audience growth in 2020. In prime time, Fox News’ average audience increased by 61%, CNN’s increased by 72% and MSNBC’s grew by 28%, according to Comscore TV Essentials® data. Other TV news sectors also saw audience growth, but to smaller degrees. The average audience for network nightly news , for example, increased by between 7% and 16%, and the average audience for local TV evening news increased 4%. Spanish-language news on Telemundo and Univision also generally saw an audience increase in 2020.

A line graph showing the political advertising revenue at local TV companies

Political ad revenue at local TV stations was dramatically higher in 2020 . Though it always rises in election years, it totaled $2 billion in 2020 – far above any prior year, according to an analysis of SEC filings of five major publicly held local TV station companies.

Individual giving is making up a larger piece of the revenue pie for public broadcasters. In 2014, for example, just 3% of nonpublic funding for the PBS NewsHour came from individuals. By 2020, the share had climbed to 24%. Over the same period, contributions from corporations fell from 41% to 18%, according to information provided by PBS NewsHour. At public radio stations, meanwhile, individual contributions rose from $261 million to $430 million between 2008 and 2019, while underwriting revenue has risen far less, according to an analysis of public filings provided by 123 of the largest news-oriented licensees.

A bar chart showing the digital and nondigital advertising revenue

Total advertising revenue – beyond just news – is now mostly digital , according to eMarketer estimates. As of 2019, more ad revenue came from digital advertising than nondigital advertising, such as print and broadcast. A major driver of this trend has been mobile advertising, which rose roughly sixtyfold between 2011 and 2020, from $1.7 billion to $102.6 billion.

While terrestrial radio listenership declined in 2020, the audience for online audio has grown . NPR’s weekly podcast audience, for instance, nearly doubled in the past two years, from about 7 million in 2018 to about 14 million in 2020, according to data provided by the broadcaster. (NPR now makes more money from underwriting on its podcasts than its radio shows.) Around three-in-ten Americans ages 12 and older (28%) now say they listened to podcasts in the past week , according to “The Infinite Dial” report by Edison Research and Triton Digital.

A line graph showing the weekly terrestrial radio listenership

But after years of almost perfectly steady listenership, terrestrial radio (i.e., AM/FM) saw its overall audience – not just for news – decline in 2020. The decrease coincided with a sharp decrease in automobile use during the COVID-19 pandemic. In 2020, 83% of Americans ages 12 and older listened to terrestrial radio, down from 89% in 2019 and 92% in 2009, according to Nielsen Media Research data published by the Radio Advertising Bureau .

Note: To learn more, explore all eight fact sheets on the state of the news media and the methodologies used to compile them.


Michael Barthel is a former senior researcher focusing on journalism research at Pew Research Center .


1615 L St. NW, Suite 800 Washington, DC 20036 USA (+1) 202-419-4300 | Main (+1) 202-857-8562 | Fax (+1) 202-419-4372 |  Media Inquiries

Research Topics

  • Email Newsletters

ABOUT PEW RESEARCH CENTER  Pew Research Center is a nonpartisan fact tank that informs the public about the issues, attitudes and trends shaping the world. It conducts public opinion polling, demographic research, media content analysis and other empirical social science research. Pew Research Center does not take policy positions. It is a subsidiary of  The Pew Charitable Trusts .

© 2024 Pew Research Center

SBU News

SoCJ mass communication faculty publishes research on social media and e-cigarette use

[Photo: Xia Zheng]

A team of faculty from the School of Communication and Journalism recently published a paper that continues to explore how social media impacts health behavior.

“It is increasingly apparent that social media has enormous impacts on our health and behavior,” said Laura Lindenfeld, dean of the SoCJ and executive director of the Alda Center for Communicating Science. “The messages and content we consume on these digital platforms can also impact what we believe, and how we act, with regards to our health.

“I applaud these faculty for their ongoing work in understanding how these relationships develop, and in exploring how we can mitigate adverse effects.”

The researchers published the piece, about the impact of social media on e-cigarette use among youth, in Addictive Behaviors. Assistant professors of mass communication Xia Zheng, Wenbo Li and Ruobing Li, along with two colleagues from Indiana University, wrote “Exposure to user-generated e-cigarette content on social media associated with greater vulnerability to e-cigarette use among youth non-users.”

Their study found that when youth non-users saw people they knew, whether friends or celebrities, posting about using e-cigarettes on social media, those youths were more likely to perceive a reduced risk and more positive norms in using those products themselves. The researchers suggest that encouraging individuals, influencers and celebrities to post interventional messages about the risks of e-cigarettes might help mitigate non-users’ e-cigarette vulnerabilities and, perhaps, reduce e-cigarette use in young people.


How right-wing disinformation is fueling conspiracy theories about the 2024 election

By Laura Barrón-López and Ali Schmitz

It’s been more than three years since baseless claims about the 2020 election inspired an attack on the Capitol, but the lies haven’t stopped. With less than five months until November, Donald Trump is at it again with help from right-wing media. Laura Barrón-López discusses the conspiracy theories and their impact with David Becker of the nonpartisan Center for Election Innovation and Research.

Read the Full Transcript

Notice: Transcripts are machine and human generated and lightly edited for accuracy. They may contain errors.

Geoff Bennett:

It's been more than three years since baseless claims about a rigged 2020 election inspired an attack on the U.S. Capitol, but the lies have not stopped.

Laura Barron-Lopez is here with more — Laura.

Laura Barron-Lopez:

Thanks, Geoff.

Before and after the 2020 election, Donald Trump repeatedly sowed doubt about the legitimacy of the U.S. election system. Now, less than five months before November, he's doing it again. Here he is in the swing state of Wisconsin this week.

Donald Trump, Former President of the United States (R) and Current U.S. Presidential Candidate: The radical left Democrats rigged the presidential election in 2020, and we're not going to let them rig the presidential election in 2024.

(Cheering and applause)

Donald Trump:

And every time — we're not going to let them do it.

And, much like last time, the former president has help from right-wing media.

Greg Gutfeld, FOX News Anchor:

What is up the Dems' sleeve to drag that body back into the White House? What's the dog that's not barking? And then let's say by some weird miraculous chance that we didn't see coming, given that Trump is ahead, has a 66 percent chance of winning, looks like he's going to get the electoral count, and Joe still wins.

Well, then what do you do after you win? How do you convince anyone that's real? Have they even thought of that? Like, even the Dems behind the scenes better hope he doesn't win, because no one's going to believe it.

Laura Barron-Lopez:

To separate fact from fiction, I'm joined by David Becker, executive director of the nonpartisan Center for Election Innovation and Research.

David, thanks so much for joining us.

Those two examples were just from recent days. The FOX News host, Greg Gutfeld, repeated his claim, saying that if President Biden wins in November, he will only win if there are — quote — "shenanigans," AKA cheating.

Debunk this for us.

David Becker, Executive Director, Center for Election Innovation and Research: Well, our elections, by every measure, are more secure, transparent, and verified than ever before.

We know this because we have more paper ballots than ever before. Over 95 percent of all voters in the United States are going to vote on verifiable paper ballots this fall, and that's the highest percentage ever. It was about 95 percent or so in 2020. Those ballots are audited. The machines are audited to make sure they were tabulated correctly.

Our voter lists are cleaner than ever before, and we have more litigation both before and after the election to confirm the results and the rules than ever before. Our elections are very, very good in the United States. So people should know and can know that we will know the winner, and that winner will be correct.

Laura Barron-Lopez:

That spreading of disinformation by Republican politicians, Americans across social media, and right-wing media, is it worse this election cycle than previous cycles?

David Becker:

I think it's worse because of the cumulative effect that we have seen over about four years.

Of course, we had disinformation in the 2020 election cycle, especially during the pandemic, where people were isolated and alone, when people had strong opinions about the election. We saw record turnout, 20 million more ballots cast in 2020 than we'd ever seen in any election before.

There was a lot of disinformation spread, particularly after the election, by former President Trump after he had lost. But that election in 2020 was the most scrutinized election in American history. Yet roughly 20 to 30 percent of the American public still thinks there was something wrong with the most secure, transparent, and verified election we have ever had.

And that potentially could be problematic for 2024 and the aftermath.

Laura Barron-Lopez:

There's another big election conspiracy theory being spread by Republicans right now.

Donald Trump:

He's going to let everybody come in, because you know what they're trying to do? They're trying to sign these people up and register them. They're not citizens. They're not allowed to do it. It's illegal as hell. So what they're trying to do is they're trying to use all of these people that are pouring into our country to vote. What other reason?

Laura Barron-Lopez:

Trump isn't the only person saying this.

This week, in response to President Biden's action to streamline a pathway to citizenship for undocumented spouses of U.S. citizens, House Speaker Mike Johnson posted on X on June 18 — quote — "This is proof-positive of the Democrats' plan to turn illegal aliens into voters."

And FOX News hosts also claimed this week that 49 states are providing voter registration without showing proof of citizenship to undocumented migrants. What's the reality here?

David Becker:

The reality is that this is again a misstatement of what the law and the facts are here in the United States.

First, it is against the law for noncitizens to vote in federal elections. It has been for decades. It's very clear. It comes with criminal penalties. Second, every single voter in the United States to register to vote in a federal election has to provide I.D., almost always a driver's license number.

And thanks to REAL ID and other things, go on to your driver's bureau's Web site and see what you need to bring. You need to bring proof of legal presence, which will either prove that you're a U.S. citizen or you're a noncitizen who's here legally in most cases, in which case you shouldn't be registered to vote when that I.D. is checked against the database, which it is.

And we know this has been incredibly successful. We know that very, very few, if any, noncitizens ever actually vote. And we know this because, in states like Georgia, Republican Secretary of State Brad Raffensperger did a complete audit of the voter list as recently as 2022 and looked at all of those he couldn't find proof of citizenship for in the database.

It was only about 1,500 statewide out of millions of voters. And the total number of those individuals who had voted in previous elections was zero. We are incredibly successful in terms of keeping noncitizens from voting. Very, very few noncitizens vote.

Laura Barron-Lopez:

Zero noncitizens in Georgia.

David Becker:

In Georgia in that one audit, yes.

Laura Barron-Lopez:

You work with Republican and Democratic election officials who administer elections and who oversee them. Is this disinformation directly impacting them?

David Becker:

Their jobs are much, much harder now. They're having to face disinformation all the time. They're getting it in their offices. They're getting it at election meetings that are public. They're getting it through public records requests that demand duplicative things and suck up their bandwidth.

I have even heard from particularly Republican election officials that they're getting it in their communities, that, when they go to the grocery store or to their children's school or even to their places of worship, they have people who are accusing them of being engaged in a massive conspiracy with millions of people to overturn the will of the voters.

Laura Barron-Lopez:

What are the two big disinformation waves that you think are coming this election cycle?

David Becker:

So I think those waves are really divided by the close of the polls on election night.

We're going to see a wave before then that tries to influence voters, making them think that voting is rigged or hard, or that their particular polling place or method of voting might not be available to them, in order to get them to self-suppress and not show up to vote, even though they should still be able to vote.

People should be very skeptical and only rely upon official sources of information, their official election office in their county or locality or state.

And then, after the polls close, I think it's very likely we're going to see a really dangerous wave of disinformation that's really going to be focused on the losing candidate, or the candidate who thinks he's losing, and is designed to make his supporters feel as if the election has been stolen.

This could lead to a lot of instability and chaos in the post-election period of time and potentially violence like we saw on January 6.

Laura Barron-Lopez:

David Becker, thank you for your time.

David Becker:

Thank you, Laura.


Laura Barrón-López is the White House Correspondent for the PBS News Hour, where she covers the Biden administration for the nightly news broadcast. She is also a CNN political analyst.


Researchers Create New Class of Materials Called ‘Glassy Gels’

[Photo: gloved hands stretch a thin sheet of clear material over a nail; the material stretches over the sharp point without breaking]

For Immediate Release

Researchers have created a new class of materials called “glassy gels” that are very hard and difficult to break despite containing more than 50% liquid. Coupled with the fact that glassy gels are simple to produce, the material holds promise for a variety of applications.

Gels and glassy polymers are classes of materials that have historically been viewed as distinct from one another. Glassy polymers are hard, stiff and often brittle. They’re used to make things like water bottles or airplane windows. Gels – such as contact lenses – contain liquid and are soft and stretchy.

“We’ve created a class of materials that we’ve termed glassy gels, which are as hard as glassy polymers, but – if you apply enough force – can stretch up to five times their original length, rather than breaking,” says Michael Dickey, corresponding author of a paper on the work and the Camille and Henry Dreyfus Professor of Chemical and Biomolecular Engineering at North Carolina State University. “What’s more, once the material has been stretched, you can get it to return to its original shape by applying heat. In addition, the surface of the glassy gels is highly adhesive, which is unusual for hard materials.”

“A key thing that distinguishes glassy gels is that they are more than 50% liquid, which makes them more efficient conductors of electricity than common plastics that have comparable physical characteristics,” says Meixiang Wang, co-lead author of the paper and a postdoctoral researcher at NC State.

“Considering the number of unique properties they possess, we’re optimistic that these materials will be useful,” Wang says.

Glassy gels, as the name suggests, are effectively a material that combines some of the most attractive properties of both glassy polymers and gels. To make them, the researchers start with the liquid precursors of glassy polymers and mix them with an ionic liquid. This combined liquid is poured into a mold and exposed to ultraviolet light, which “cures” the material. The mold is then removed, leaving behind the glassy gel.

“The ionic liquid is a solvent, like water, but is made entirely of ions,” says Dickey. “Normally when you add a solvent to a polymer, the solvent pushes apart the polymer chains, making the polymer soft and stretchable. That’s why a wet contact lens is pliable, and a dry contact lens isn’t. In glassy gels, the solvent pushes the molecular chains in the polymer apart, which allows it to be stretchable like a gel. However, the ions in the solvent are strongly attracted to the polymer, which prevents the polymer chains from moving. The inability of chains to move is what makes it glassy. The end result is that the material is hard due to the attractive forces, but is still capable of stretching due to the extra spacing.”

The researchers found that glassy gels could be made with a variety of different polymers and ionic liquids, though not all classes of polymers can be used to create glassy gels.

“Polymers that are charged or polar hold promise for glassy gels, because they’re attracted to the ionic liquid,” Dickey says.

In testing, the researchers found that the glassy gels don’t evaporate or dry out, even though they consist of 50-60% liquid.

“Maybe the most intriguing characteristic of the glassy gels is how adhesive they are,” says Dickey. “Because while we understand what makes them hard and stretchable, we can only speculate about what makes them so sticky.”

The researchers also think glassy gels hold promise for practical applications because they’re easy to make.

“Creating glassy gels is a simple process that can be done by curing it in any type of mold or by 3D printing it,” says Dickey. “Most plastics with similar mechanical properties require manufacturers to create polymer as a feedstock and then transport that polymer to another facility where the polymer is melted and formed into the end product.

“We’re excited to see how glassy gels can be used and are open to working with collaborators on identifying applications for these materials.”

The paper, “Glassy Gels Toughened by Solvent,” is published in the journal Nature. Co-lead author of the paper is Xun Xiao of the University of North Carolina at Chapel Hill. The paper was co-authored by Salma Siddika, a Ph.D. student at NC State; Mohammad Shamsi, a former Ph.D. student at NC State; Ethan Frey, a former undergraduate at NC State; Brendan O’Connor, a professor of mechanical and aerospace engineering at NC State; Wubin Bai, a professor of applied physical sciences at UNC; and Wen Qian, a research associate professor of mechanical and materials engineering at the University of Nebraska-Lincoln.

A video explaining glassy gels, and demonstrating their properties, can be found at https://www.youtube.com/watch?v=LV-vgxxbNeY

The work was partially supported by funding from the Coastal Studies Institute.

Note to Editors: The study abstract follows.

“Glassy Gels Toughened by Solvent”

Authors: Meixiang Wang, Salma Siddika, Mohammad Shamsi, Ethan Frey, Brendan T. O’Connor and Michael D. Dickey, North Carolina State University; Xun Xiao and Wubin Bai, University of North Carolina at Chapel Hill; Wen Qian, University of Nebraska-Lincoln

Published: June 19, Nature

DOI: 10.1038/s41586-024-07564-0

Abstract: Glassy polymers are generally stiff and strong yet have limited extensibility. By swelling with solvent, glassy polymers can become gels that are soft and weak yet have enhanced extensibility. The marked changes in properties arise from the solvent increasing free volume between chains while weakening polymer–polymer interactions. Here we show that solvating polar polymers with ionic liquids (that is, ionogels) at appropriate concentrations can produce a unique class of materials called glassy gels with desirable properties of both glasses and gels. The ionic liquid increases free volume and therefore extensibility despite the absence of conventional solvent (for example, water). Yet, the ionic liquid forms strong and abundant non-covalent crosslinks between polymer chains to render a stiff, tough, glassy, and homogeneous network (that is, no phase separation) at room temperature. Despite being more than 54 wt% liquid, the glassy gels exhibit enormous fracture strength (42 MPa), toughness (110 MJ m⁻³), yield strength (73 MPa) and Young’s modulus (1 GPa). These values are similar to those of thermoplastics such as polyethylene, yet unlike thermoplastics, the glassy gels can be deformed up to 670% strain with full and rapid recovery on heating. These transparent materials form by a one-step polymerization and have impressive adhesive, self-healing and shape-memory properties.
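A note on the units in that abstract: toughness is conventionally defined as the area under the stress-strain curve, which is why it is reported in energy per unit volume (MJ m⁻³). The LaTeX block below gives that standard definition plus a rough plausibility check against the reported numbers; it is our illustration, not a derivation from the paper:

    % Standard definition of toughness as the area under the stress-strain
    % curve; an illustration, not taken from the paper itself.
    \documentclass{article}
    \begin{document}
    Toughness is the area under the stress--strain curve,
    \[
      U_T = \int_{0}^{\varepsilon_f} \sigma(\varepsilon)\,\mathrm{d}\varepsilon ,
    \]
    with units of energy per volume: integrating stress (MPa) over
    dimensionless strain gives $\mathrm{MJ\,m^{-3}}$, since
    $1\ \mathrm{Pa} = 1\ \mathrm{J\,m^{-3}}$. With fracture strain
    $\varepsilon_f = 6.7$ (670\%) and fracture strength $42\ \mathrm{MPa}$,
    a constant-stress curve would bound the area at
    $42 \times 6.7 \approx 280\ \mathrm{MJ\,m^{-3}}$, so the reported
    toughness of $110\ \mathrm{MJ\,m^{-3}}$ sits comfortably below that bound.
    \end{document}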

