
Artificial Intelligence

Weaponized storytelling: How AI is helping researchers sniff out disinformation campaigns

The human proclivity for storytelling makes disinformation difficult to combat. Westend61 via Getty Images


Mark Finlayson, Florida International University and Azwad Anjum Islam, Florida International University

It is not often that cold, hard facts determine what people care most about and what they believe. Instead, it is the power and familiarity of a well-told story that reigns supreme. Whether it’s a heartfelt anecdote, a personal testimony or a meme echoing familiar cultural narratives, stories tend to stick with us, move us and shape our beliefs.

This characteristic of storytelling is precisely what can make it so dangerous when wielded by the wrong hands. For decades, foreign adversaries have used narrative tactics in efforts to manipulate public opinion in the United States. Social media platforms have brought new complexity and amplification to these campaigns. The phenomenon garnered ample public scrutiny after evidence emerged of Russian entities exerting influence over election-related material on Facebook in the lead-up to the 2016 election.

While artificial intelligence is exacerbating the problem, it is at the same time becoming one of the most powerful defenses against such manipulations. Researchers have been using machine learning techniques to analyze disinformation content. At the Cognition, Narrative and Culture Lab at Florida International University, we are building AI tools to help detect disinformation campaigns that employ tools of narrative persuasion. We are training AI to go beyond surface-level language analysis to understand narrative structures, trace personas and timelines, and decode cultural references.

Disinformation vs. misinformation

In July 2024, the Department of Justice disrupted a Kremlin-backed operation that used nearly a thousand fake social media accounts to spread false narratives. These weren’t isolated incidents. They were part of an organized campaign, powered in part by AI.

Disinformation differs crucially from misinformation. While misinformation is simply false or inaccurate information – getting facts wrong – disinformation is intentionally fabricated and shared specifically to mislead and manipulate.

A recent illustration of this came in October 2024, when a video purporting to show a Pennsylvania election worker tearing up mail-in ballots marked for Donald Trump swept platforms such as X and Facebook. Within days, the FBI traced the clip to a Russian influence outfit, but not before it racked up millions of views. This example vividly demonstrates how foreign influence campaigns artificially manufacture and amplify fabricated stories to manipulate U.S. politics and stoke divisions among Americans.

Humans are wired to process the world through stories. From childhood, we grow up hearing stories, telling them and using them to make sense of complex information. Narratives don’t just help people remember – they help us feel. They foster emotional connections and shape our interpretations of social and political events.
Stories have profound effects on human beliefs and behavior.
This makes them especially powerful tools for persuasion – and, consequently, for spreading disinformation. A compelling narrative can override skepticism and sway opinion more effectively than a flood of statistics. For example, a story about rescuing a sea turtle with a plastic straw in its nose often does more to raise concern about plastic pollution than volumes of environmental data.

Usernames, cultural context and narrative time

Using AI tools to piece together a picture of the narrator of a story, the timeline for how they tell it and cultural details specific to where the story takes place can help identify when a story doesn’t add up.

Narratives are not confined to the content users share – they also extend to the personas users construct to tell them. Even a social media handle can carry persuasive signals. We have developed a system that analyzes usernames to infer demographic and identity traits such as name, gender, location, sentiment and even personality, when such cues are embedded in the handle. This work, presented in 2024 at the International Conference on Web and Social Media, highlights how even a brief string of characters can signal how users want to be perceived by their audience.

For example, a user attempting to appear as a credible journalist might choose a handle like @JamesBurnsNYT rather than something more casual like @JimB_NYC. Both may suggest a male user from New York, but one carries the weight of institutional credibility. Disinformation campaigns often exploit these perceptions by crafting handles that mimic authentic voices or affiliations.

Although a handle alone cannot confirm whether an account is genuine, it plays an important role in assessing overall authenticity. By interpreting usernames as part of the broader narrative an account presents, AI systems can better evaluate whether an identity is manufactured to gain trust, blend into a target community or amplify persuasive content. This kind of semantic interpretation contributes to a more holistic approach to disinformation detection – one that considers not just what is said but who appears to be saying it and why.

Also, stories don’t always unfold chronologically. A social media thread might open with a shocking event, flash back to earlier moments and skip over key details in between. Humans handle this effortlessly – we’re used to fragmented storytelling.
But for AI, determining a sequence of events based on a narrative account remains a major challenge. Our lab is also developing methods for timeline extraction, teaching AI to identify events, understand their sequence and map how they relate to one another, even when a story is told in nonlinear fashion.

Objects and symbols often carry different meanings in different cultures, and without cultural awareness, AI systems risk misinterpreting the narratives they analyze. Foreign adversaries can exploit cultural nuances to craft messages that resonate more deeply with specific audiences, enhancing the persuasive power of disinformation.

Consider the following sentence: “The woman in the white dress was filled with joy.” In a Western context, the phrase evokes a happy image. But in parts of Asia, where white symbolizes mourning or death, it could feel unsettling or even offensive.

In order to use AI to detect disinformation that weaponizes symbols, sentiments and storytelling within targeted communities, it’s critical to give AI this sort of cultural literacy. In our research, we’ve found that training AI on diverse cultural narratives improves its sensitivity to such distinctions.
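As a toy illustration of the kind of username-based signal extraction described earlier in this section – a deliberately simplified sketch, not the FIU lab’s actual system – a few lines of code can show how surface cues might be pulled out of a handle like @JamesBurnsNYT. The lookup tables and field names here are illustrative assumptions:

```python
import re

# Toy heuristic for surfacing identity cues a handle may be engineered
# to project. Real systems use learned models, not lookup tables.
KNOWN_ORG_SUFFIXES = {"NYT": "The New York Times", "BBC": "BBC", "WSJ": "The Wall Street Journal"}
KNOWN_PLACES = {"NYC": "New York City", "LA": "Los Angeles", "TX": "Texas"}

def username_cues(handle: str) -> dict:
    """Extract surface cues a username may be designed to convey."""
    name = handle.lstrip("@")
    # Split on camel case, all-caps runs, digits and separators:
    # "JamesBurnsNYT" -> ["James", "Burns", "NYT"]
    tokens = re.findall(r"[A-Z][a-z]+|[A-Z]{2,}|[A-Z]|\d+|[a-z]+", name)
    cues = {"handle": handle, "tokens": tokens,
            "claimed_affiliation": None, "claimed_location": None}
    for tok in tokens:
        if tok in KNOWN_ORG_SUFFIXES:
            cues["claimed_affiliation"] = KNOWN_ORG_SUFFIXES[tok]
        if tok in KNOWN_PLACES:
            cues["claimed_location"] = KNOWN_PLACES[tok]
    return cues

print(username_cues("@JamesBurnsNYT")["claimed_affiliation"])  # The New York Times
print(username_cues("@JimB_NYC")["claimed_location"])          # New York City
```

The point of even this crude version is that a handle’s tokens can assert an affiliation or location the account never states outright – exactly the kind of manufactured credibility signal a detection pipeline needs to weigh.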

Who benefits from narrative-aware AI?

Narrative-aware AI tools can help intelligence analysts quickly identify orchestrated influence campaigns or emotionally charged storylines that are spreading unusually fast. They might use AI tools to process large volumes of social media posts in order to map persuasive narrative arcs, identify near-identical storylines and flag coordinated timing of social media activity. Intelligence services could then use countermeasures in real time.

In addition, crisis-response agencies could swiftly identify harmful narratives, such as false emergency claims during natural disasters. Social media platforms could use these tools to efficiently route high-risk content for human review without unnecessary censorship. Researchers and educators could also benefit by tracking how a story evolves across communities, making narrative analysis more rigorous and shareable.

Ordinary users can also benefit from these technologies. The AI tools could flag social media posts in real time as possible disinformation, allowing readers to be skeptical of suspect stories, thus counteracting falsehoods before they take root.

As AI takes on a greater role in monitoring and interpreting online content, its ability to understand storytelling beyond just traditional semantic analysis has become essential. To this end, we are building systems to uncover hidden patterns, decode cultural signals and trace narrative timelines to reveal how disinformation takes hold.

Mark Finlayson, Associate Professor of Computer Science, Florida International University and Azwad Anjum Islam, Ph.D. Student in Computing and Information Sciences, Florida International University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Artificial Intelligence

More than half of new articles on the internet are being written by AI – is human writing headed for extinction?

A new study finds over 50% of online articles are now AI-generated, raising questions about the future of human writing. Discover why formulaic content is most at risk, and why authentic, creative voices may become more valuable than ever.

Preserving the value of real human voices will likely depend on how people adapt to artificial intelligence and collaborate with it. BlackJack3D/E+ via Getty Images


Francesco Agnellini, Binghamton University, State University of New York

The line between human and machine authorship is blurring, particularly as it’s become increasingly difficult to tell whether something was written by a person or AI. Now, in what may seem like a tipping point, the digital marketing firm Graphite recently published a study showing that more than 50% of articles on the web are being generated by artificial intelligence.

As a scholar who explores how AI is built, how people are using it in their everyday lives, and how it’s affecting culture, I’ve thought a lot about what this technology can do and where it falls short. If you’re more likely to read something written by AI than by a human on the internet, is it only a matter of time before human writing becomes obsolete? Or is this simply another technological development that humans will adapt to?

It isn’t all or nothing

Thinking about these questions reminded me of Umberto Eco’s essay “Apocalyptic and Integrated,” which was originally written in the early 1960s. Parts of it were later included in an anthology titled “Apocalypse Postponed,” which I first read as a college student in Italy. In it, Eco draws a contrast between two attitudes toward mass media. There are the “apocalyptics” who fear cultural degradation and moral collapse. Then there are the “integrated” who champion new media technologies as a democratizing force for culture.
Italian philosopher, cultural critic and novelist Umberto Eco cautioned against overreacting to the impact of new technologies. Leonardo Cendamo/Getty Images
Back then, Eco was writing about the proliferation of TV and radio. Today, you’ll often see similar reactions to AI. Yet Eco argued that both positions were too extreme. It isn’t helpful, he wrote, to see new media as either a dire threat or a miracle. Instead, he urged readers to look at how people and communities use these new tools, what risks and opportunities they create, and how they shape – and sometimes reinforce – power structures.

While I was teaching a course on deepfakes during the 2024 election, Eco’s lesson also came back to me. Those were days when some scholars and media outlets were regularly warning of an imminent “deepfake apocalypse.” Would deepfakes be used to mimic major political figures and push targeted disinformation? What if, on the eve of an election, generative AI was used to mimic the voice of a candidate on a robocall telling voters to stay home?

Those fears weren’t groundless: Research shows that people aren’t especially good at identifying deepfakes. At the same time, they consistently overestimate their ability to do so. In the end, though, the apocalypse was postponed. Post-election analyses found that deepfakes did seem to intensify some ongoing political trends, such as the erosion of trust and polarization, but there’s no evidence that they affected the final outcome of the election.

Listicles, news updates and how-to guides

Of course, the fears that AI raises for supporters of democracy are not the same as those it creates for writers and artists. For them, the core concerns are about authorship: How can one person compete with a system trained on millions of voices that can produce text at hyper-speed? And if this becomes the norm, what will it do to creative work, both as an occupation and as a source of meaning?

It’s important to clarify what’s meant by “online content,” the phrase used in the Graphite study, which analyzed over 65,000 randomly selected articles of at least 100 words on the web. These can include anything from peer-reviewed research to promotional copy for miracle supplements. A closer reading of the Graphite study shows that the AI-generated articles consist largely of general-interest writing: news updates, how-to guides, lifestyle posts, reviews and product explainers. The primary economic purpose of this content is to persuade or inform, not to express originality or creativity.

Put differently, AI appears to be most useful when the writing in question is low-stakes and formulaic: the weekend-in-Rome listicle, the standard cover letter, the text produced to market a business. A whole industry of writers – mostly freelance, including many translators – has relied on precisely this kind of work, producing blog posts, how-to material, search engine optimization text and social media copy. The rapid adoption of large language models has already displaced many of the gigs that once sustained them.

Collaborating with AI

The dramatic loss of this work points toward another issue raised by the Graphite study: the question of authenticity, not only in identifying who or what produced a text, but also in understanding the value that humans attach to creative activity. How can you distinguish a human-written article from a machine-generated one? And does that ability even matter?

Over time, that distinction is likely to grow less significant, particularly as more writing emerges from interactions between humans and AI. A writer might draft a few lines, let an AI expand them and then reshape that output into the final text. This article is no exception. As a non-native English speaker, I often rely on AI to refine my language before sending drafts to an editor. At times the system attempts to reshape what I mean. But once its stylistic tendencies become familiar, it becomes possible to avoid them and maintain a personal tone.

Also, artificial intelligence is not entirely artificial, since it is trained on human-made material. It’s worth noting that even before AI, human writing was never entirely human, either. Every technology, from stylus and parchment to the typewriter and now AI, has shaped how people write and how readers make sense of it.

Another important point: AI models are increasingly trained on datasets that include not only human writing but also AI-generated and human–AI co-produced text. This has raised concerns about their ability to continue improving over time. Some commentators have already described a sense of disillusionment following the release of newer large models, with companies struggling to deliver on their promises.

Human voices may matter even more

But what happens when people become overly reliant on AI in their writing? Some studies show that writers may feel more creative when they use artificial intelligence for brainstorming, yet the range of ideas often becomes narrower. This uniformity affects style as well: These systems tend to pull users toward similar patterns of wording, which reduces the differences that usually mark an individual voice. Researchers also note a shift toward Western – and especially English-speaking – norms in the writing of people from other cultures, raising concerns about a new form of AI colonialism.

In this context, texts that display originality, voice and stylistic intention are likely to become even more meaningful within the media landscape, and they may play a crucial role in training the next generations of models. If you set aside the more apocalyptic scenarios and assume that AI will continue to advance – perhaps at a slower pace than in the recent past – it’s quite possible that thoughtful, original, human-generated writing will become even more valuable. Put another way: The work of writers, journalists and intellectuals will not become superfluous simply because much of the web is no longer written by humans.

Francesco Agnellini, Lecturer in Digital and Data Studies, Binghamton University, State University of New York

This article is republished from The Conversation under a Creative Commons license. Read the original article.
 





Artificial Intelligence

Learning with AI falls short compared to old-fashioned web search

Learning with AI falls short: New research with 10,000+ participants reveals people who learn using ChatGPT develop shallower knowledge than those using Google search. Discover why AI-generated summaries reduce learning effectiveness and how to use AI tools strategically for education.

The work of seeking and synthesizing information can improve understanding of it compared to reading a summary. Tom Werner/DigitalVision via Getty Images


Shiri Melumad, University of Pennsylvania

Since the release of ChatGPT in late 2022, millions of people have started using large language models to access knowledge. And it’s easy to understand their appeal: Ask a question, get a polished synthesis and move on – it feels like effortless learning. However, a new paper I co-authored offers experimental evidence that this ease may come at a cost: When people rely on large language models to summarize information on a topic for them, they tend to develop shallower knowledge about it compared to learning through a standard Google search.

Co-author Jin Ho Yun and I, both professors of marketing, reported this finding in a paper based on seven studies with more than 10,000 participants. Most of the studies used the same basic paradigm: Participants were asked to learn about a topic – such as how to grow a vegetable garden – and were randomly assigned to do so by using either an LLM like ChatGPT or the “old-fashioned way,” by navigating links using a standard Google search. No restrictions were put on how they used the tools; they could search on Google as long as they wanted and could continue to prompt ChatGPT if they felt they wanted more information. Once they completed their research, they were then asked to write advice to a friend on the topic based on what they learned.

The data revealed a consistent pattern: People who learned about a topic through an LLM versus web search felt that they learned less, invested less effort in subsequently writing their advice, and ultimately wrote advice that was shorter, less factual and more generic. In turn, when this advice was presented to an independent sample of readers, who were unaware of which tool had been used to learn about the topic, they found the advice to be less informative and less helpful, and they were less likely to adopt it. We found these differences to be robust across a variety of contexts.
For example, one possible reason LLM users wrote briefer and more generic advice is simply that the LLM results exposed users to less eclectic information than the Google results. To control for this possibility, we conducted an experiment where participants were exposed to an identical set of facts in the results of their Google and ChatGPT searches. Likewise, in another experiment we held constant the search platform – Google – and varied whether participants learned from standard Google results or Google’s AI Overview feature. The findings confirmed that, even when holding the facts and platform constant, learning from synthesized LLM responses led to shallower knowledge compared to gathering, interpreting and synthesizing information for oneself via standard web links.

Why it matters

Why did the use of LLMs appear to diminish learning? One of the most fundamental principles of skill development is that people learn best when they are actively engaged with the material they are trying to learn. When we learn about a topic through Google search, we face much more “friction”: We must navigate different web links, read informational sources, and interpret and synthesize them ourselves. While more challenging, this friction leads to the development of a deeper, more original mental representation of the topic at hand. But with LLMs, this entire process is done on the user’s behalf, transforming learning from a more active to passive process.

What’s next?

To be clear, we do not believe the solution to these issues is to avoid using LLMs, especially given the undeniable benefits they offer in many contexts. Rather, our message is that people simply need to become smarter, more strategic users of LLMs – which starts by understanding the domains where LLMs are beneficial versus harmful to their goals. Need a quick, factual answer to a question? Feel free to use your favorite AI copilot. But if your aim is to develop deep and generalizable knowledge in an area, relying on LLM syntheses alone will be less helpful.

As part of my research on the psychology of new technology and new media, I am also interested in whether it’s possible to make LLM learning a more active process. In another experiment we tested this by having participants engage with a specialized GPT model that offered real-time web links alongside its synthesized responses. There, however, we found that once participants received an LLM summary, they weren’t motivated to dig deeper into the original sources. The result was that the participants still developed shallower knowledge compared to those who used standard Google search.

Building on this, in my future research I plan to study generative AI tools that impose healthy frictions for learning tasks – specifically, examining which types of guardrails or speed bumps most successfully motivate users to learn actively beyond easy, synthesized answers. Such tools seem particularly critical in secondary education, where a major challenge for educators is how best to equip students to develop foundational reading, writing and math skills while also preparing them for a world where LLMs are likely to be an integral part of their daily lives.

The Research Brief is a short take on interesting academic work.

Shiri Melumad, Associate Professor of Marketing, University of Pennsylvania

This article is republished from The Conversation under a Creative Commons license. Read the original article.




Artificial Intelligence

Leading with Purpose in the Age of AI

Leading with Purpose in the Age of AI: Cognizant guides organizations in AI adoption, addressing challenges like talent shortages and governance while empowering employees to transform business practices and achieve lasting impact.


Last Updated on November 10, 2025 by Daily News Staff

Leading with Purpose in the Age of AI


(Family Features) In today’s AI-powered economy, transformation is no longer optional – it’s essential. Enterprises are eager to embrace generative and agentic AI, but many lack the clarity and confidence to scale it responsibly.

As a global leader in technology and consulting services, Cognizant is helping organizations bridge that gap – turning possibility into progress.

The Moment is Now

AI is reshaping industries, redefining roles, and revolutionizing decision-making. According to Cognizant Research, 61% of senior decision-makers expect AI to drive complete business transformation. Yet, 83% feel unprepared to embed AI into their organizations, citing gaps in talent, governance, and culture.

This disconnect presents a powerful opportunity.

“In the age of AI, transformation isn’t just about technology, it’s about trust, talent and the ability to turn possibility into progress,” said Shveta Arora, head of Cognizant Consulting. “The true impact of AI is delivered when organizations build trust, invest in adaptable talent and embrace bold ideas. By empowering people and embedding AI responsibly, leaders can bridge the gap between potential and progress, ensuring lasting value for business and society.”

A Trusted Voice in AI

As a recognized leader in AI strategy and enterprise transformation, Cognizant brings credibility and clarity to this evolving space. It has been named a Leader and Star Performer by Everest Group in their 2024 AI and Generative AI Services PEAK Matrix Assessment, underscoring its strategic vision and execution.

With thought leadership in AI strategy and enterprise transformation published across thousands of U.S. outlets, its position as a trusted voice in shaping the future of AI has been reinforced. It has also been recognized across the industry for excellence in client service and innovation.


Its platforms – Neuro, Flowsource and the Data and Intelligence Toolkit – are driving real-world impact across industries. Furthermore, a strategic collaboration with a leading enterprise-grade generative AI provider enables secure and scalable deployment of agentic AI in regulated settings, ensuring adherence to compliance and data governance standards.

Bridging the AI Adoption Gap

When a leading property intelligence provider’s IT systems were hampering turnaround times, the company turned to Cognizant’s gen AI-powered Data as a Service and Neuro Business Process (BP) platform. Driven by AI insights and learning, Neuro BP centralized business processing, automating data collection, case reviews and decision-making to align with the client’s goals. Powered by the platform, the organization saw a reduction in processing time and errors and an increase in productivity.

Stories like these are still the exception.

Despite enthusiasm and investment – global businesses are spending an average of $47.5 million on generative AI this year – many feel they’re moving too slowly. The barriers include talent shortages, infrastructure gaps and unclear governance. These challenges can be overcome by moving from experimentation to execution. With clarity, credibility and conviction, organizations can scale AI responsibly and effectively.

Accelerating Enterprise AI Transformations

Unlike traditional software, AI models are contextual computing engines. They don’t require every path to be spelled out in advance but instead interpret broad instructions and intent, and adapt based on the context they are given. Agentic AI systems lacking business-specific knowledge can lead to generic or unreliable outputs.

To address this, enterprises need systems that can deliver the right information and tools to AI models – enabling accurate decisions, alignment with human goals, compliance with policy frameworks and adaptability to real-time challenges. This is the role of context engineering, an emerging discipline focused on delivering the right context at the right time to agentic systems. Context refers to the sum of a company’s institutional knowledge, including its operating models, roles, goals, metrics, processes, policies and governance – essential ingredients for effective AI.
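To make the idea of context engineering concrete, here is a minimal, hypothetical sketch of what “delivering the right context at the right time” to an agent might look like in code. The field names, policy values and decision rule are illustrative assumptions, not part of Cognizant’s actual framework or any real API:

```python
# Hypothetical sketch: institutional knowledge (roles, policies, metrics)
# packaged as context so an agent's broad intent stays within guardrails.
INSTITUTIONAL_KNOWLEDGE = {
    "roles": {"claims_agent": "Reviews and routes insurance claims"},
    "policies": {"max_auto_approval": 5000},       # dollars; illustrative
    "metrics": {"target_turnaround_days": 2},
}

def build_context(task: str, role: str) -> dict:
    """Select the slices of institutional knowledge relevant to this task."""
    return {
        "task": task,
        "role_description": INSTITUTIONAL_KNOWLEDGE["roles"][role],
        "constraints": INSTITUTIONAL_KNOWLEDGE["policies"],
        "goals": INSTITUTIONAL_KNOWLEDGE["metrics"],
    }

def agent_decide(claim_amount: float, context: dict) -> str:
    """A context-aware decision: broad intent bounded by company policy."""
    if claim_amount <= context["constraints"]["max_auto_approval"]:
        return "auto-approve"
    return "escalate to human review"

ctx = build_context("review incoming claim", "claims_agent")
print(agent_decide(1200.0, ctx))   # within policy limit -> auto-approve
print(agent_decide(25000.0, ctx))  # exceeds limit -> escalate to human review
```

The design point the sketch illustrates: without the `constraints` slice of context, the agent would have no business-specific basis for its decision – which is exactly the “generic or unreliable outputs” problem described above.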

To guide clients through their AI journey, Cognizant developed the Strategic Enterprise Agentification Framework, an end-to-end model designed to unlock productivity, market expansion and new business models.

At its core is the Agent Development Lifecycle (ADLC), which guides the development of enterprise agents and agentic AI systems across six distinct stages. ADLC supports seamless integration with business applications and embeds context engineering throughout, ensuring agents are tailored to real-world enterprise dynamics.


To help bridge vision and execution, businesses can utilize the Neuro AI Multi-Agent Accelerator. This no-code framework allows rapid deployment of custom multi-agent systems.

People Power the Progress

Technology alone doesn’t transform enterprises – people do. With an AI-driven Workforce Transformation (WFT), Cognizant helps organizations reskill employees, redesign roles and build AI fluency. Integrated with the Agentification Framework, WFT is designed to accelerate transformation and support long-term resilience.

From Possibility to Progress

From strategic frameworks to enterprise platforms to workforce readiness, Cognizant equips organizations with the confidence to harness AI responsibly and at scale. In the age of AI, it’s not just about transformation – it’s about leading with purpose.

Explore more at cognizant.com.

Photo courtesy of Shutterstock

SOURCE:
Cognizant


