Misinformation Syllabus (Advanced Topic)
Course Organization
This course examines online misinformation and disinformation from interdisciplinary perspectives — drawing on communication studies, political science, cognitive psychology, and computational methods. Lectures move from definitional and theoretical foundations through empirical analysis of spread and vulnerability, to computational detection techniques and platform-level interventions, and finally to the emerging challenge of AI-generated misinformation.
For a broader overview of the Trust and Safety space, see the Trust & Safety class.
Lectures and Delivery Order
Lecture 1: Defining Misinformation (Consortium Information Environment)
- Source: Define Misinfo (Consortium Information Environment)
Establishes the foundational vocabulary of the course: the distinctions among misinformation, disinformation, and malinformation; the Wardle & Derakhshan information disorder framework; typologies of false and misleading content; and the information environment as the broader context in which misinformation operates. Students leave with a shared conceptual language for the rest of the course.
Lecture 2: Content Moderation Overview (Consortium)
- Source: Content Moderation Overview (Consortium)
Introduces how platforms respond to misinformation through reactive and proactive moderation. Covers the spectrum of moderation models (removal, labeling, demotion, counter-speech), the role of human reviewers vs. automated systems, and the inherent tradeoffs between free expression and harm reduction. Provides operational context before the course turns to technical detection.
Lecture 3: Detection and Discovery of Misinformation Sources
- Source: Detection and Discovery of Misinformation Sources
Technical lecture covering how to identify and classify misinformation-producing websites using SEO network features, backlinking patterns, and multi-class classification. Key topics: construction of the SEO network from CommonCrawl data, predictive power of network features over credibility labels, limitations of current approaches (implied content, propaganda vs. opinion, link decay). Establishes the computational approach that underpins Lectures 4 and 5.
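The core classification idea can be sketched in a few lines. Everything below is invented for illustration (a toy backlink graph, made-up domain names and labels, and crude degree/PageRank features); the lecture's actual pipeline extracts far richer SEO-network features from CommonCrawl and uses multi-class credibility labels.

```python
# Toy sketch: predict a domain's credibility label from backlink-network
# features, then classify an unlabeled domain (the "discovery" step).
import networkx as nx
from sklearn.ensemble import RandomForestClassifier

# Directed backlink graph: edge (a, b) means site a links to site b.
G = nx.DiGraph([
    ("blogA", "reliable1"), ("blogA", "unreliable1"),
    ("schemeHub", "unreliable1"), ("schemeHub", "unreliable2"),
    ("schemeHub", "unreliable3"), ("blogB", "unreliable2"),
    ("reliable2", "reliable1"), ("reliable1", "reliable2"),
])

# Hypothetical credibility labels (in practice, from rating services).
labels = {"reliable1": "reliable", "reliable2": "reliable",
          "unreliable1": "unreliable", "unreliable2": "unreliable",
          "unreliable3": "unreliable"}

pagerank = nx.pagerank(G)

def features(domain):
    # In-degree, out-degree, and PageRank as crude stand-ins for
    # richer SEO-network features.
    return [G.in_degree(domain), G.out_degree(domain), pagerank[domain]]

X = [features(d) for d in labels]
y = list(labels.values())
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Classify a domain that has no label yet.
pred = clf.predict([features("schemeHub")])[0]
print(pred)
```

The point of the sketch is the shape of the pipeline (graph → per-domain network features → supervised classifier → labels for unseen domains), not the specific features or model.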
Lecture 4: Misinformation Resilient Search Rankings
- Source: Misinformation Resilient Search Rankings
Builds on the source detection methods from Lecture 3 to ask: how do we intervene at the search level? Covers small-scale interventions (PageRank, Personalized PageRank, authority-based reranking), large-scale interventions targeting link schemes, and the design principles that make interventions robust. Discusses evidence that link schemes disproportionately link to unreliable news and that "multi-category" scheme removal has higher marginal effectiveness.
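The small-scale intervention can be illustrated with Personalized PageRank, seeding the teleport distribution on a vetted credible-source set. The graph and domain names below are hypothetical; the lecture's interventions operate on web-scale link graphs.

```python
# Sketch: authority-based reranking with Personalized PageRank.
import networkx as nx

G = nx.DiGraph([
    ("trusted1", "newsA"), ("trusted2", "newsA"), ("newsA", "trusted1"),
    ("scheme1", "lowcredB"), ("scheme2", "lowcredB"), ("lowcredB", "scheme1"),
])

# Teleport only to vetted credible sources; everything else gets
# authority only by being linked from (or near) them.
seeds = {n: (1.0 if n in {"trusted1", "trusted2"} else 0.0) for n in G}
ppr = nx.pagerank(G, alpha=0.85, personalization=seeds)

# Rerank a candidate search-result list by the trust-weighted score.
results = ["lowcredB", "newsA"]
reranked = sorted(results, key=lambda d: ppr[d], reverse=True)
print(reranked)  # newsA outranks lowcredB: the scheme cycle gets no seed mass
```

Here "lowcredB" is propped up only by a link scheme that the seed set never teleports into, which is why scheme removal and seed choice matter so much at scale.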
Lecture 5: Credibility Pluralism Tradeoff
- Source: Credibility Pluralism Tradeoff
Complicates the intervention story. Using CommonCrawl and GDELT data, this lecture demonstrates that credibility-based filtering and viewpoint diversity (pluralism) are in tension: interventions that reduce low-credibility content tend to reduce the diversity of sources users encounter. Introduces assortativity analysis and Wasserstein distances as tools for measuring polarization in news transition matrices.
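Both measurement tools can be sketched with illustrative numbers (not GDELT data; the lean scores and the tiny transition graph below are made up):

```python
# (1) 1-D Wasserstein distance between the distribution of outlet
# lean scores (-1 left ... +1 right) a user encounters before vs.
# after credibility filtering: a larger shift means the intervention
# changed the viewpoint mix more.
from scipy.stats import wasserstein_distance
import networkx as nx

before = [-0.9, -0.4, -0.1, 0.0, 0.2, 0.5, 0.8]
after = [-0.4, -0.1, 0.0, 0.2]  # extremes filtered out with the low-cred tail
shift = wasserstein_distance(before, after)

# (2) Attribute assortativity on a toy news-transition graph: positive
# values mean transitions stay within same-lean sources, a signature
# of polarization.
G = nx.Graph([("leftA", "leftB"), ("rightA", "rightB"), ("leftA", "rightA")])
nx.set_node_attributes(G, {"leftA": "L", "leftB": "L",
                           "rightA": "R", "rightB": "R"}, "lean")
r = nx.attribute_assortativity_coefficient(G, "lean")
print(round(shift, 3), round(r, 3))
```

The tension the lecture documents shows up exactly here: interventions that shrink the low-credibility tail also shrink the spread of the lean distribution, i.e. they trade pluralism for credibility.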
Lecture 6: Media Influences — Structural Dimensions of Credibility, Bias, and Ownership
- Source: Media Influences
Zooms out to the structural and economic dimensions of the media ecosystem: how source credibility, political bias, and corporate ownership interact to shape information quality. Covers multi-agent dynamic scenarios and behavioral determinants of media use. Serves as a research-synthesis and ongoing-work session, best positioned after students have seen the computational interventions of Lectures 3–5.
Tutorial Session: Fact-Checking NLP (IC2S2 2025 Tutorial)
- Source: Misinformation Detection Tutorial (IC2S2_25)
Hands-on notebook using the CT24 check-worthiness dataset. Covers the full pipeline: data loading and exploration, feature engineering, training a claim-level check-worthiness classifier, evaluation (precision, recall, F1), and error analysis. Recommended placement: after Lecture 3 (Detection and Discovery), once students have the conceptual vocabulary for detection. Can optionally be extended with the LLM-based detection approaches from the reading list.
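A compressed stand-in for the notebook's core loop (toy sentences instead of the CT24 dataset, and TF-IDF + logistic regression as one plausible baseline; the tutorial itself also covers data exploration and error analysis):

```python
# Minimal check-worthiness pipeline: featurize claims, train a binary
# classifier, report precision/recall/F1 on held-out examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_fscore_support

train_texts = [
    "Unemployment fell by 3 percent last year",
    "The vaccine caused 10,000 deaths in one month",
    "I love this song so much",
    "Good morning everyone, happy Friday",
    "The city budget doubled since 2020",
    "What a beautiful sunset tonight",
]
train_labels = [1, 1, 0, 0, 1, 0]  # 1 = check-worthy claim

vec = TfidfVectorizer(ngram_range=(1, 2))
clf = LogisticRegression(max_iter=1000).fit(
    vec.fit_transform(train_texts), train_labels)

test_texts = ["Crime rates tripled in five years", "Have a nice weekend!"]
gold = [1, 0]
preds = clf.predict(vec.transform(test_texts))
p, r, f1, _ = precision_recall_fscore_support(
    gold, preds, average="binary", zero_division=0)
print(list(preds), round(f1, 2))
```

With real CT24 data the evaluation numbers become meaningful; this sketch only shows where each metric plugs into the pipeline.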
Reference syllabi from other misinformation courses
- Uni Bamberg: Misinformation, Disinformation and Other Digital Fakery (Andreas Jungherr)
- UNC: Misinformation and Society (Francesca Tripodi)
Zotero (from King et al., 2025): https://www.zotero.org/groups/5535941/interventions-literature-review/library
Readings
Surveys and Review Articles
King, Catherine, Peter Carragher, and Kathleen M. Carley. "Mapping the Scientific Literature on Misinformation Interventions: A Bibliometric Review." Workshop Proceedings of the 19th International AAAI Conference on Web and Social Media. Vol. 2025. 2025.
Aïmeur, Esma, Sabrine Amri, and Gilles Brassard. "Fake news, disinformation and misinformation in social media: a review." Social Network Analysis and Mining 13.1 (2023): 30. https://doi.org/10.1007/s13278-023-01028-5
Altay, S., Berriche, M., Heuer, H., Farkas, J., & Rathje, S. (2023). A survey of expert views on misinformation: Definitions, determinants, solutions, and future of the field. Harvard Kennedy School Misinformation Review. https://doi.org/10.37016/mr-2020-119
Broda, E., & Strömbäck, J. (2024). Misinformation, Disinformation, and Fake News: Lessons from an Interdisciplinary, Systematic Literature Review. Annals of the International Communication Association, 48(2), 139–166. https://doi.org/10.1080/23808985.2024.2323736
Ecker, U. K. H., Tay, L. Q., Roozenbeek, J., van der Linden, S., Cook, J., Oreskes, N., & Lewandowsky, S. (2024). Why misinformation must not be ignored. American Psychologist. Advance online publication. https://doi.org/10.1037/amp0001448
Kapantai, E., Christopoulou, A., Berberidis, C., & Peristeras, V. (2020). A systematic literature review on disinformation: Toward a unified taxonomical framework. New Media & Society, 23(5), 1301-1326. https://doi.org/10.1177/1461444820959296
Murphy, G., de Saint Laurent, C., Reynolds, M., Aftab, O., Hegarty, K., Sun, Y., & Greene, C. M. (2023). What do we study when we study misinformation? A scoping review of experimental research (2016-2022). Harvard Kennedy School (HKS) Misinformation Review. https://doi.org/10.37016/mr-2020-130
Pérez-Escolar, M., Lilleker, D., & Tapia-Frade, A. (2023). A systematic literature review of the phenomenon of disinformation and misinformation. Media and Communication, 11(2), 76-87. https://doi.org/10.17645/mac.v11i2.6453
Saeidnia, H. R., Hosseini, E., Lund, B., Tehrani, M. A., Zaker, S., & Molaei, S. (2025). Artificial intelligence in the battle against disinformation and misinformation: A systematic review of challenges and approaches. Knowledge and Information Systems, 67(4), 3139–3158. https://doi.org/10.1007/s10115-024-02337-7
Tandoc, E. C., Jr. (2019). The facts of fake news: A research review. Sociology Compass, 13, e12724. https://doi.org/10.1111/soc4.12724
Conceptualizing misinformation
Chadwick, A., & Stanyer, J. (2022). Deception as a Bridging Concept in the Study of Disinformation, Misinformation, and Misperceptions: Toward a Holistic Framework. Communication Theory, 32(1), 1–24. https://doi.org/10.1093/ct/qtab019
Freelon, D., & Wells, C. (2020). Disinformation as Political Communication. Political Communication, 37(2), 145–156. https://doi.org/10.1080/10584609.2020.1723755
Molina, M. D., Sundar, S. S., Le, T., & Lee, D. (2021). “Fake News” Is Not Simply False Information: A Concept Explication and Taxonomy of Online Content. American Behavioral Scientist, 65(2), 180-212. https://doi.org/10.1177/0002764219878224
Starbird, K. (2024). Facts, frames, and (mis)interpretations: Understanding rumors as collective sensemaking.
Tandoc, E. C., Lim, Z. W., & Ling, R. (2017). Defining “Fake News”: A typology of scholarly definitions. Digital Journalism, 6(2), 137–153. https://doi.org/10.1080/21670811.2017.1360143
Wardle, C., & Derakhshan, H. (2017). Information disorder: Toward an interdisciplinary framework for research and policymaking (Vol. 27, pp. 1-107). Strasbourg: Council of Europe.
Wu, L., Morstatter, F., Carley, K. M., & Liu, H. (2019). Misinformation in social media: definition, manipulation, and detection. ACM SIGKDD explorations newsletter, 21(2), 80-90. https://doi.org/10.1145/3373464.3373475
Implications of misinformation
Adams, Z., Osman, M., Bechlivanidis, C., & Meder, B. (2023). (Why) Is Misinformation a Problem? Perspectives on Psychological Science, 18(6), 1436-1463. https://doi.org/10.1177/17456916221141344
Ecker, U., Roozenbeek, J., Van Der Linden, S., Tay, L. Q., Cook, J., Oreskes, N., & Lewandowsky, S. (2024). Misinformation poses a bigger threat to democracy than you might think. Nature, 630(8015), 29-32. https://www.nature.com/articles/d41586-024-01587-3
McKay, S., & Tenove, C. (2021). Disinformation as a Threat to Deliberative Democracy. Political Research Quarterly, 74(3), 703-717. https://doi.org/10.1177/1065912920938143
Woolley, S. C., & Howard, P. N. (2016). Political Communication, Computational Propaganda, and Autonomous Agents: Introduction. International Journal of Communication, 10.
Misinformation on misinformation (in research and public discourse)
Altay, S., Berriche, M., & Acerbi, A. (2023). Misinformation on Misinformation: Conceptual and Methodological Challenges. Social Media + Society, 9(1), 20563051221150412. https://doi.org/10.1177/20563051221150412
Budak, C., Nyhan, B., Rothschild, D. M., Thorson, E., & Watts, D. J. (2024). Misunderstanding the harms of online misinformation. Nature, 630(8015), 45–53. https://doi.org/10.1038/s41586-024-07417-w
Harsin, J. (2024). Three Critiques of Disinformation (For-Hire) Scholarship: Definitional Vortexes, Disciplinary Unneighborliness, and Cryptonormativity. Social Media + Society, 10(1). https://doi.org/10.1177/20563051231224732
Nyhan, B. (2020). Facts and Myths about Misperceptions. Journal of Economic Perspectives, 34(3), 220–236. https://doi.org/10.1257/jep.34.3.220
Pasquetto, I. V., Lim, G., & Bradshaw, S. (2024). Misinformed about misinformation: On the polarizing discourse on misinformation and its consequences for the field. Harvard Kennedy School (HKS) Misinformation Review, 5(5).
Simon, F. M., Altay, S., & Mercier, H. (2023). Misinformation reloaded? Fears about the impact of generative AI on misinformation are overblown. Harvard Kennedy School Misinformation Review, 4(5).
Prevalence and spread of misinformation
Allen, J., Howland, B., Mobius, M., Rothschild, D., & Watts, D. J. (2020). Evaluating the fake news problem at the scale of the information ecosystem. Science Advances, 6(14), eaay3539. https://doi.org/10.1126/sciadv.aay3539
Baribi-Bartov, S., Swire-Thompson, B., & Grinberg, N. (2024). Supersharers of fake news on Twitter. Science, 384(6699), 979–982. https://doi.org/10.1126/science.adl4435
Chadwick, A., Vaccari, C., & Kaiser, J. (2022). The Amplification of Exaggerated and False News on Social Media: The Roles of Platform Use, Motivations, Affect, and Ideology. American Behavioral Scientist, 69(2), 113-130. https://doi.org/10.1177/00027642221118264
Goel, P., Green, J., Lazer, D., et al. (2025). Using co-sharing to identify use of mainstream news for promoting potentially misleading narratives. Nature Human Behaviour. https://doi.org/10.1038/s41562-025-02223-4
Ozawa, J. V., Woolley, S., & Lukito, J. (2024). Taking the power back: How diaspora community organizations are fighting misinformation spread on encrypted messaging apps. Harvard Kennedy School Misinformation Review.
Pathak, R., Spezzano, F., & Pera, M. S. (2023). Understanding the contribution of recommendation algorithms on misinformation recommendation and misinformation dissemination on social networks. ACM Transactions on the Web, 17(4), 1-26.
Renault, T., Mosleh, M., & Rand, D. G. (2025). Republicans are flagged more often than Democrats for sharing misinformation on X’s Community Notes. Proceedings of the National Academy of Sciences, 122(25), e2502053122. https://doi.org/10.1073/pnas.2502053122
Tomassi, A., Falegnami, A., & Romano, E. (2024). Mapping automatic social media information disorder. The role of bots and AI in spreading misleading information in society. PLoS ONE, 19(5), e0303183.
Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146–1151. https://doi.org/10.1126/science.aap9559
Vulnerability to misinformation
Anspach, N. M., & Carlson, T. N. (2024). Not who you think? Exposure and vulnerability to misinformation. New Media & Society, 26(8), 4847–4866. https://doi.org/10.1177/14614448221130422
Altay, S., & Acerbi, A. (2024). People believe misinformation is a threat because they assume others are gullible. New Media & Society, 26(11), 6440–6461. https://doi.org/10.1177/14614448231153379
Aslett, K., Sanderson, Z., Godel, W., Persily, N., Nagler, J., & Tucker, J. A. (2024). Online searches to evaluate misinformation can increase its perceived veracity. Nature, 625(7995), 548–556. https://doi.org/10.1038/s41586-023-06883-y
Ceylan, G., Anderson, I. A., & Wood, W. (2023). Sharing of misinformation is habitual, not just lazy or biased. Proceedings of the National Academy of Sciences, 120(4), e2216614120. https://doi.org/10.1073/pnas.2216614120
Ecker, U. K. H., Lewandowsky, S., Cook, J., Schmid, P., Fazio, L. K., Brashier, N., Kendeou, P., Vraga, E. K., & Amazeen, M. A. (2022). The psychological drivers of misinformation belief and its resistance to correction. Nature Reviews Psychology, 1(1), 13–29. https://doi.org/10.1038/s44159-021-00006-y
Ecker, U. K. H., Lewandowsky, S., Fenton, O., & Martin, K. (2014). Do people keep believing because they want to? Preexisting attitudes and the continued influence of misinformation. Memory & Cognition, 42(2), 292–304. https://doi.org/10.3758/s13421-013-0358-x
Flynn, D. J., Nyhan, B., & Reifler, J. (2017). The Nature and Origins of Misperceptions: Understanding False and Unsupported Beliefs About Politics. Political Psychology, 38(S1), 127–150. https://doi.org/10.1111/pops.12394
Kunst, J. R., Gundersen, A. B., Krysińska, I., Piasecki, J., Wójtowicz, T., Rygula, R., van der Linden, S., & Morzy, M. (2024). Leveraging artificial intelligence to identify the psychological factors associated with conspiracy theory beliefs online. Nature Communications, 15(1), 7497. https://doi.org/10.1038/s41467-024-51740-9
Lazer, D. M. J., Baum, M. A., Benkler, Y., Berinsky, A. J., Greenhill, K. M., Menczer, F., Metzger, M. J., Nyhan, B., Pennycook, G., Rothschild, D., Schudson, M., Sloman, S. A., Sunstein, C. R., Thorson, E. A., Watts, D. J., & Zittrain, J. L. (2018). The science of fake news. Science, 359(6380), 1094–1096. https://doi.org/10.1126/science.aao2998
Pantazi, M., Hale, S., & Klein, O. (2021). Social and Cognitive Aspects of the Vulnerability to Political Misinformation. Political Psychology, 42(S1), 267–304. https://doi.org/10.1111/pops.12797
Pennycook, G., & Rand, D. G. (2019). Lazy, not biased: Susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning. Cognition, 188, 39–50. https://doi.org/10.1016/j.cognition.2018.06.011
Sultan, M., Tump, A. N., Ehmann, N., Lorenz-Spreen, P., Hertwig, R., Gollwitzer, A., & Kurvers, R. H. J. M. (2024). Susceptibility to online misinformation: A systematic meta-analysis of demographic and psychological factors. Proceedings of the National Academy of Sciences, 121(47), e2409329121. https://doi.org/10.1073/pnas.2409329121
Van Bavel, J. J., Harris, E. A., Pärnamets, P., Rathje, S., Doell, K. C., & Tucker, J. A. (2021). Political Psychology in the Digital (mis)Information age: A Model of News Belief and Sharing. Social Issues and Policy Review, 15(1), 84–113. https://doi.org/10.1111/sipr.12077
Weeks, B. E. (2015). Emotions, Partisanship, and Misperceptions: How Anger and Anxiety Moderate the Effect of Partisan Bias on Susceptibility to Political Misinformation. Journal of Communication, 65(4), 699–719. https://doi.org/10.1111/jcom.12164
Combatting misinformation
Arechar, A. A., Allen, J., Berinsky, A. J., Cole, R., Epstein, Z., Garimella, K., Gully, A., Lu, J. G., Ross, R. M., Stagnaro, M. N., Zhang, Y., Pennycook, G., & Rand, D. G. (2023). Understanding and combatting misinformation across 16 countries on six continents. Nature Human Behaviour, 7(9), 1502–1513. https://doi.org/10.1038/s41562-023-01641-6
Aruguete, N., Batista, F., Calvo, E., Guizzo-Altube, M., Scartascini, C., & Ventura, T. (2024). Framing fact-checks as a “confirmation” increases engagement with corrections of misinformation: A four-country study. Scientific Reports, 14(1), 3201. https://doi.org/10.1038/s41598-024-53337-0
Bak-Coleman, J. B., Kennedy, I., Wack, M., Beers, A., Schafer, J. S., Spiro, E. S., ... & West, J. D. (2022). Combining interventions to reduce the spread of viral misinformation. Nature Human Behaviour, 6(10), 1372-1380.
Ecker, U. K. H., & Ang, L. C. (2019). Political Attitudes and the Processing of Misinformation Corrections. Political Psychology, 40(2), 241–260. https://doi.org/10.1111/pops.12494
Feuerriegel, S., DiResta, R., Goldstein, J. A., Kumar, S., Lorenz-Spreen, P., Tomz, M., & Pröllochs, N. (2023). Research can help to tackle AI-generated disinformation. Nature Human Behaviour, 7(11), 1818–1821. https://doi.org/10.1038/s41562-023-01726-2
Hoes, E., Aitken, B., Zhang, J., Gackowski, T., & Wojcieszak, M. (2024). Prominent misinformation interventions reduce misperceptions but increase scepticism. Nature Human Behaviour, 8(8), 1545–1553. https://doi.org/10.1038/s41562-024-01884-x
Kozyreva, A., Lorenz-Spreen, P., Herzog, S. M., Ecker, U. K. H., Lewandowsky, S., Hertwig, R., Ali, A., Bak-Coleman, J., Barzilai, S., Basol, M., Berinsky, A. J., Betsch, C., Cook, J., Fazio, L. K., Geers, M., Guess, A. M., Huang, H., Larreguy, H., Maertens, R., … Wineburg, S. (2024). Toolbox of individual-level interventions against online misinformation. Nature Human Behaviour, 8(6), 1044–1052. https://doi.org/10.1038/s41562-024-01881-0
Lewandowsky, S., & van der Linden, S. (2021). Countering Misinformation and Fake News Through Inoculation and Prebunking. European Review of Social Psychology, 32(2), 348–384. https://doi.org/10.1080/10463283.2021.1876983
Maertens, R., Roozenbeek, J., Basol, M., & van der Linden, S. (2021). Long-term effectiveness of inoculation against misinformation: Three longitudinal experiments. Journal of Experimental Psychology: Applied, 27(1), 1.
Martel, C., & Rand, D. G. (2023). Misinformation warning labels are widely effective: A review of warning effects and their moderating features. Current Opinion in Psychology, 54, 101710. https://doi.org/10.1016/j.copsyc.2023.101710
Martel, C., & Rand, D. G. (2024). Fact-checker warning labels are effective even for those who distrust fact-checkers. Nature Human Behaviour, 8(10), 1957–1967. https://doi.org/10.1038/s41562-024-01973-x
McCabe, S. D., Ferrari, D., Green, J., et al. (2024). Post-January 6th deplatforming reduced the reach of misinformation on Twitter. Nature, 630, 132–140. https://doi.org/10.1038/s41586-024-07524-8
Nyhan, B., & Reifler, J. (2010). When Corrections Fail: The Persistence of Political Misperceptions. Political Behavior, 32(2), 303–330. https://doi.org/10.1007/s11109-010-9112-2
Nyhan, B. (2021). Why the backfire effect does not explain the durability of political misperceptions. Proceedings of the National Academy of Sciences, 118(15), e1912440117. https://doi.org/10.1073/pnas.1912440117
Pennycook, G., & Rand, D. G. (2022). Accuracy prompts are a replicable and generalizable approach for reducing the spread of misinformation. Nature Communications, 13(1), 2333. https://doi.org/10.1038/s41467-022-30073-5
van der Linden, S. (2022). Misinformation: Susceptibility, spread, and interventions to immunize the public. Nature Medicine, 28(3), 460–467. https://doi.org/10.1038/s41591-022-01713-6
Science-related misinformation
Allen, J., Watts, D. J., & Rand, D. G. (2024). Quantifying the impact of misinformation and vaccine-skeptical content on Facebook. Science, 384(6699), eadk3451. https://doi.org/10.1126/science.adk3451
Lenti, J., Mejova, Y., Kalimeri, K., Panisson, A., Paolotti, D., Tizzani, M., & Starnini, M. (2023). Global misinformation spillovers in the vaccination debate before and during the COVID-19 pandemic: multilingual Twitter study. JMIR infodemiology, 3, e44714.
Pielke Jr, R. A. (2004). When scientists politicize science: making sense of controversy over The Skeptical Environmentalist. Environmental Science & Policy, 7(5), 405-417.
Vicari, R., & Komendatova, N. (2023). Systematic meta-analysis of research on AI tools to deal with misinformation on social media during natural and anthropogenic hazards and disasters. Humanities and Social Sciences Communications, 10(1), 1-14.
West, J. D., & Bergstrom, C. T. (2021). Misinformation in and about science. Proceedings of the National Academy of Sciences, 118(15), e1912444117.
AI to address misinformation (incl. fact-checking)
Note: We do not attempt to cover the large CS literature on fake-news detection here; there is a ton of work in that space. These selected papers focus on applications of AI to countering misinformation in the era of generative AI.
Augenstein, I., Bakker, M., Chakraborty, T., Corney, D., Ferrara, E., Gurevych, I., Hale, S., Hovy, E., Ji, H., Larraz, I., Menczer, F., Nakov, P., Papotti, P., Sahnan, D., Warren, G., & Zagni, G. (2025). Community Moderation and the New Epistemology of Fact Checking on Social Media (No. arXiv:2505.20067). arXiv. https://doi.org/10.48550/arXiv.2505.20067
Augenstein, I., Baldwin, T., Cha, M., Chakraborty, T., Ciampaglia, G. L., Corney, D., DiResta, R., Ferrara, E., Hale, S., Halevy, A., Hovy, E., Ji, H., Menczer, F., Miguez, R., Nakov, P., Scheufele, D., Sharma, S., & Zagni, G. (2024). Factuality challenges in the era of large language models and opportunities for fact-checking. Nature Machine Intelligence, 6(8), 852–863. https://doi.org/10.1038/s42256-024-00881-z
Costello, T. H., Pennycook, G., & Rand, D. G. (2024). Durably reducing conspiracy beliefs through dialogues with AI. Science, 385(6714), eadq1814.
Costello, T. H., Pennycook, G., & Rand, D. (2025). Just the facts: How dialogues with AI reduce conspiracy beliefs. OSF Preprint.
Luceri, L., Salkar, T. V., Balasubramanian, A., Pinto, G., Sun, C., & Ferrara, E. (2025). Coordinated Inauthentic Behavior on TikTok: Challenges and Opportunities for Detection in a Video-First Ecosystem (No. arXiv:2505.10867). arXiv. https://doi.org/10.48550/arXiv.2505.10867
Shoaib, M. R., Wang, Z., Ahvanooey, M. T., & Zhao, J. (2023). Deepfakes, Misinformation, and Disinformation in the Era of Frontier AI, Generative AI, and Large AI Models. 2023 International Conference on Computer and Applications (ICCA), 1–7. https://doi.org/10.1109/ICCA59364.2023.10401723
Schmitt, V., Villa-Arenas, L.-F., Feldhus, N., Meyer, J., Spang, R. P., & Möller, S. (2024). The Role of Explainability in Collaborative Human-AI Disinformation Detection. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, 2157–2174. https://doi.org/10.1145/3630106.3659031
Wang, J., Wang, X., & Yu, A. (2025). Tackling misinformation in mobile social networks: a BERT-LSTM approach for enhancing digital literacy. Scientific Reports, 15(1), 1118.
Xu, D., Fan, S., & Kankanhalli, M. (2023, October). Combating misinformation in the era of generative AI models. In Proceedings of the 31st ACM International Conference on Multimedia (pp. 9291-9298).
Yang, K. C., Varol, O., Davis, C. A., Ferrara, E., Flammini, A., & Menczer, F. (2019). Arming the public with artificial intelligence to counter social bots. Human Behavior and Emerging Technologies, 1(1), 48-61.
Yi, J., Xu, Z., Huang, T., & Yu, P. (2025). Challenges and Innovations in LLM-Powered Fake News Detection: A Synthesis of Approaches and Future Directions. In Proceedings of the 2025 2nd International Conference on Generative Artificial Intelligence and Information Security (pp. 87–93). Association for Computing Machinery. https://doi.org/10.1145/3728725.3728739
Zhang, Y., Sharma, K., Du, L., & Liu, Y. (2024). Toward Mitigating Misinformation and Social Media Manipulation in LLM Era. Companion Proceedings of the ACM Web Conference 2024, 1302–1305. https://doi.org/10.1145/3589335.3641256
Zhao, Y., Liu, B., Ding, M., Liu, B., Zhu, T., & Yu, X. (2023). Proactive deepfake defence via identity watermarking. In Proceedings of the IEEE/CVF winter conference on applications of computer vision (pp. 4602-4611).
AI-generated misinformation datasets and detection
Chen, C., & Shu, K. (2024). Combating misinformation in the age of LLMs: Opportunities and challenges. AI Magazine, 45(3), 354-368. https://doi.org/10.1002/aaai.12188
- LLMs Meet Misinformation (Canyu Chen and Kai Shu) (Project Website)
Chen, C., & Shu, K. (2024). Can LLM-Generated Misinformation Be Detected? In The Twelfth International Conference on Learning Representations.
- Can LLM-Generated Misinformation Be Detected? (ICLR 2024) (GitHub Repo)
Huang, R., Dugan, L., Yang, Y., & Callison-Burch, C. (2024, November). MiRAGeNews: Multimodal Realistic AI-Generated News Detection. In Findings of the Association for Computational Linguistics: EMNLP 2024 (pp. 16436-16448).
Lin, L., Gupta, N., Zhang, Y., Ren, H., Liu, C.-H., Ding, F., Wang, X., Li, X., Verdoliva, L., & Hu, S. (2025). Detecting Multimedia Generated by Large AI Models: A Survey (No. arXiv:2402.00045). arXiv. https://doi.org/10.48550/arXiv.2402.00045
Liu, A., Sheng, Q., & Hu, X. (2024). Preventing and Detecting Misinformation Generated by Large Language Models. Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, 3001–3004. https://doi.org/10.1145/3626772.3661377
Wang, L. Z., Ma, Y., Gao, R., Guo, B., Zhu, H., Fan, W., ... & Ng, K. C. (2024). Megafake: a theory-driven dataset of fake news generated by large language models. arXiv preprint arXiv:2408.11871.
Zhou, J., Zhang, Y., Luo, Q., Parker, A. G., & De Choudhury, M. (2023, April). Synthetic lies: Understanding AI-generated misinformation and evaluating algorithmic and human solutions. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (pp. 1-20).
Sociopolitical implications of AI-generated misinformation
Barman, D., Guo, Z., & Conlan, O. (2024). The dark side of language models: Exploring the potential of LLMs in multimedia disinformation generation and dissemination. Machine Learning with Applications, 100545.
Calvo, P., & Saura García, C. (2024). Generative AI and Democracy: the synthetification of public opinion and its impacts. Available at SSRN 4911710.
Chu-Ke, C., & Dong, Y. (2024). Misinformation and Literacies in the Era of Generative Artificial Intelligence: A Brief Overview and a Call for Future Research. Emerging Media, 2(1), 70-85. https://doi.org/10.1177/27523543241240285
De Angelis, L., Baglivo, F., Arzilli, G., Privitera, G. P., Ferragina, P., Tozzi, A. E., & Rizzo, C. (2023). ChatGPT and the rise of large language models: the new AI-driven infodemic threat in public health. Frontiers in Public Health, 11, 1166120.
Ferrara, E. (2025). Charting the Landscape of Nefarious Uses of Generative Artificial Intelligence for Online Election Interference (No. arXiv:2406.01862). arXiv. https://doi.org/10.48550/arXiv.2406.01862
Garry, M., Chan, W. M., Foster, J., & Henkel, L. A. (2024). Large language models (LLMs) and the institutionalization of misinformation. Trends in cognitive sciences. https://www.cell.com/trends/cognitive-sciences/fulltext/S1364-6613(24)00221-3
Jaidka, K., Chen, T., Chesterman, S., Hsu, W., Kan, M.-Y., Kankanhalli, M., Lee, M. L., Seres, G., Sim, T., Taeihagh, A., Tung, A., Xiao, X., & Yue, A. (2025). Misinformation, Disinformation, and Generative AI: Implications for Perception and Policy. Digit. Gov.: Res. Pract., 6(1), 11:1-11:15. https://doi.org/10.1145/3689372
Schroeder, D. T., Cha, M., Baronchelli, A., Bostrom, N., Christakis, N. A., Garcia, D., Goldenberg, A., Kyrychenko, Y., Leyton-Brown, K., Lutz, N., Marcus, G., Menczer, F., Pennycook, G., Rand, D. G., Schweitzer, F., Summerfield, C., Tang, A., Bavel, J. V., Linden, S. van der, … Kunst, J. R. (2025). How Malicious AI Swarms Can Threaten Democracy (No. arXiv:2506.06299). arXiv. https://doi.org/10.48550/arXiv.2506.06299
Wack, M., Ehrett, C., Linvill, D., & Warren, P. (2025). Generative propaganda: Evidence of AI’s impact from a state-backed disinformation campaign. PNAS Nexus, 4(4), pgaf083. https://doi.org/10.1093/pnasnexus/pgaf083
How people respond to AI-generated misinformation
Bashardoust, A., Feuerriegel, S., & Shrestha, Y. R. (2024). Comparing the Willingness to Share for Human-generated vs. AI-generated Fake News. Proc. ACM Hum.-Comput. Interact., 8(CSCW2), 489:1-489:21. https://doi.org/10.1145/3687028
Danry, V., Pataranutaporn, P., Groh, M., & Epstein, Z. (2025). Deceptive Explanations by Large Language Models Lead People to Change their Beliefs About Misinformation More Often than Honest Explanations. Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems, 1–31. https://doi.org/10.1145/3706598.3713408
Groh, M., Sankaranarayanan, A., Singh, N., Kim, D. Y., Lippman, A., & Picard, R. (2024). Human detection of political speech deepfakes across transcripts, audio, and video. Nature Communications, 15(1), 7629. https://doi.org/10.1038/s41467-024-51998-z
Vaccari, C., & Chadwick, A. (2020). Deepfakes and disinformation: Exploring the impact of synthetic political video on deception, uncertainty, and trust in news. Social Media + Society, 6(1), 2056305120903408.
Wittenberg, C., Epstein, Z., Péloquin-Skulski, G., Berinsky, A. J., & Rand, D. G. (2025). Labeling AI-generated media online. PNAS Nexus, 4(6), pgaf170. https://doi.org/10.1093/pnasnexus/pgaf170