
AI ‘reanimations’: Making facsimiles of the dead raises ethical quandaries

This screenshot of an AI-generated video depicts Christopher Pelkey, who was killed in 2021. Screenshot: Stacey Wales/YouTube
Nir Eisikovits, UMass Boston and Daniel J. Feldman, UMass Boston

Christopher Pelkey was shot and killed in a road rage incident in 2021. On May 8, 2025, at the sentencing hearing for his killer, an AI video reconstruction of Pelkey delivered a victim impact statement. The trial judge reported being deeply moved by this performance and issued the maximum sentence for manslaughter.

As part of the ceremonies to mark Israel’s 77th year of independence on April 30, 2025, officials had planned to host a concert featuring four iconic Israeli singers. All four had died years earlier. The plan was to conjure them using AI-generated sound and video. The dead performers were supposed to sing alongside Yardena Arazi, a famous and still very much alive artist. In the end Arazi pulled out, citing the political atmosphere, and the event didn’t happen.

In April, the BBC created a deepfake version of the famous mystery writer Agatha Christie to teach a “maestro course on writing.” Fake Agatha would instruct aspiring murder mystery authors and “inspire” their “writing journey.”

The use of artificial intelligence to “reanimate” the dead for a variety of purposes is quickly gaining traction. Over the past few years, we’ve been studying the moral implications of AI at the Center for Applied Ethics at the University of Massachusetts, Boston, and we find these AI reanimations to be morally problematic.

Before we address the moral challenges the technology raises, it’s important to distinguish AI reanimations, or deepfakes, from so-called griefbots. Griefbots are chatbots trained on large swaths of data the dead leave behind – social media posts, texts, emails, videos. These chatbots mimic how the departed used to communicate and are meant to make life easier for surviving relations. The deepfakes we are discussing here have other aims; they are meant to promote legal, political and educational causes.
Chris Pelkey was shot and killed in 2021. This AI ‘reanimation’ of him was presented in court as a victim impact statement.

Moral quandaries

The first moral quandary the technology raises has to do with consent: Would the deceased have agreed to do what their likeness is doing? Would the dead Israeli singers have wanted to sing at an Independence ceremony organized by the nation’s current government? Would Pelkey, the road-rage victim, be comfortable with the script his family wrote for his avatar to recite? What would Christie think about her AI double teaching that class?

The answers to these questions can only be deduced circumstantially – from examining the kinds of things the dead did and the views they expressed when alive. And one could ask if the answers even matter. If those in charge of the estates agree to the reanimations, isn’t the question settled? After all, such trustees are the legal representatives of the departed.

But putting aside the question of consent, a more fundamental question remains. What do these reanimations do to the legacy and reputation of the dead? Doesn’t their reputation depend, to some extent, on the scarcity of appearance, on the fact that the dead can’t show up anymore? Dying can have a salutary effect on the reputation of prominent people; it was good for John F. Kennedy, and it was good for Israeli Prime Minister Yitzhak Rabin.

The fifth-century B.C. Athenian leader Pericles understood this well. In his famous Funeral Oration, delivered at the end of the first year of the Peloponnesian War, he asserts that a noble death can elevate one’s reputation and wash away their petty misdeeds. That is because the dead are beyond reach and their mystique grows postmortem. “Even extreme virtue will scarcely win you a reputation equal to” that of the dead, he insists.

Do AI reanimations devalue the currency of the dead by forcing them to keep popping up? Do they cheapen and destabilize their reputation by having them comment on events that happened long after their demise?

In addition, these AI representations can be a powerful tool to influence audiences for political or legal purposes. Bringing back a popular dead singer to legitimize a political event and reanimating a dead victim to offer testimony are acts intended to sway an audience’s judgment. It’s one thing to channel a Churchill or a Roosevelt during a political speech by quoting them or even trying to sound like them. It’s another thing to have “them” speak alongside you. The potential of harnessing nostalgia is supercharged by this technology. Imagine, for example, what the Soviets, who literally worshipped Lenin’s dead body, would have done with a deepfake of their old icon.

Good intentions

You could argue that because these reanimations are uniquely engaging, they can be used for virtuous purposes. Consider a reanimated Martin Luther King Jr., speaking to our currently polarized and divided nation, urging moderation and unity. Wouldn’t that be grand? Or what about a reanimated Mordechai Anielewicz, the commander of the Warsaw Ghetto uprising, speaking at the trial of a Holocaust denier like David Irving? But do we know what MLK would have thought about our current political divisions? Do we know what Anielewicz would have thought about restrictions on pernicious speech? Does bravely campaigning for civil rights mean we should call upon the digital ghost of King to comment on the impact of populism? Does fearlessly fighting the Nazis mean we should dredge up the AI shadow of an old hero to comment on free speech in the digital age?
No one can know with certainty what Martin Luther King Jr. would say about today’s society. AP Photo/Chick Harrity
Even if the political projects these AI avatars served were consistent with the deceased’s views, the problem of manipulation – of using the psychological power of deepfakes to appeal to emotions – remains.

But what about enlisting AI Agatha Christie to teach a writing class? Deepfakes may indeed have salutary uses in educational settings. The likeness of Christie could make students more enthusiastic about writing. Fake Aristotle could improve the chances that students engage with his austere Nicomachean Ethics. AI Einstein could help those who want to study physics get their heads around general relativity.

But producing these fakes comes with a great deal of responsibility. After all, given how engaging they can be, it’s possible that the interactions with these representations will be all that students pay attention to, rather than serving as a gateway to exploring the subject further.

Living on in the living

In a poem written in memory of W.B. Yeats, W.H. Auden tells us that, after the poet’s death, Yeats “became his admirers.” His memory was now “scattered among a hundred cities,” and his work subject to endless interpretation: “the words of a dead man are modified in the guts of the living.” The dead live on in the many ways we reinterpret their words and works. Auden did that to Yeats, and we’re doing it to Auden right here. That’s how people stay in touch with those who are gone.

In the end, we believe that using technological prowess to concretely bring them back disrespects them and, perhaps more importantly, is an act of disrespect to ourselves – to our capacity to abstract, think and imagine.

Nir Eisikovits, Professor of Philosophy and Director, Applied Ethics Center, UMass Boston and Daniel J. Feldman, Senior Research Fellow, Applied Ethics Center, UMass Boston

This article is republished from The Conversation under a Creative Commons license. Read the original article.





Learning with AI falls short compared to old-fashioned web search

Learning with AI falls short: New research with 10,000+ participants reveals people who learn using ChatGPT develop shallower knowledge than those using Google search. Discover why AI-generated summaries reduce learning effectiveness and how to use AI tools strategically for education.

The work of seeking and synthesizing information can improve understanding of it compared to reading a summary. Tom Werner/DigitalVision via Getty Images


Shiri Melumad, University of Pennsylvania

Since the release of ChatGPT in late 2022, millions of people have started using large language models to access knowledge. And it’s easy to understand their appeal: Ask a question, get a polished synthesis and move on – it feels like effortless learning. However, a new paper I co-authored offers experimental evidence that this ease may come at a cost: When people rely on large language models to summarize information on a topic for them, they tend to develop shallower knowledge about it compared to learning through a standard Google search.

Co-author Jin Ho Yun and I, both professors of marketing, reported this finding in a paper based on seven studies with more than 10,000 participants. Most of the studies used the same basic paradigm: Participants were asked to learn about a topic – such as how to grow a vegetable garden – and were randomly assigned to do so by using either an LLM like ChatGPT or the “old-fashioned way,” by navigating links using a standard Google search. No restrictions were put on how they used the tools; they could search on Google as long as they wanted and could continue to prompt ChatGPT if they felt they wanted more information. Once they completed their research, they were asked to write advice to a friend on the topic based on what they learned.

The data revealed a consistent pattern: People who learned about a topic through an LLM rather than web search felt that they learned less, invested less effort in subsequently writing their advice, and ultimately wrote advice that was shorter, less factual and more generic. In turn, when this advice was presented to an independent sample of readers, who were unaware of which tool had been used to learn about the topic, they found the advice to be less informative and less helpful, and they were less likely to adopt it.

We found these differences to be robust across a variety of contexts. For example, one possible reason LLM users wrote briefer and more generic advice is simply that the LLM results exposed users to less eclectic information than the Google results. To control for this possibility, we conducted an experiment where participants were exposed to an identical set of facts in the results of their Google and ChatGPT searches. Likewise, in another experiment we held constant the search platform – Google – and varied whether participants learned from standard Google results or Google’s AI Overview feature. The findings confirmed that, even when holding the facts and platform constant, learning from synthesized LLM responses led to shallower knowledge than gathering, interpreting and synthesizing information for oneself via standard web links.

Why it matters

Why did the use of LLMs appear to diminish learning? One of the most fundamental principles of skill development is that people learn best when they are actively engaged with the material they are trying to learn. When we learn about a topic through Google search, we face much more “friction”: We must navigate different web links, read informational sources, and interpret and synthesize them ourselves. While more challenging, this friction leads to the development of a deeper, more original mental representation of the topic at hand. But with LLMs, this entire process is done on the user’s behalf, transforming learning from an active process into a passive one.

What’s next?

To be clear, we do not believe the solution to these issues is to avoid using LLMs, especially given the undeniable benefits they offer in many contexts. Rather, our message is that people simply need to become smarter or more strategic users of LLMs – which starts by understanding the domains wherein LLMs are beneficial versus harmful to their goals. Need a quick, factual answer to a question? Feel free to use your favorite AI co-pilot. But if your aim is to develop deep and generalizable knowledge in an area, relying on LLM syntheses alone will be less helpful.

As part of my research on the psychology of new technology and new media, I am also interested in whether it’s possible to make LLM learning a more active process. In another experiment we tested this by having participants engage with a specialized GPT model that offered real-time web links alongside its synthesized responses. There, however, we found that once participants received an LLM summary, they weren’t motivated to dig deeper into the original sources. The result was that the participants still developed shallower knowledge compared to those who used standard Google.

Building on this, in my future research I plan to study generative AI tools that impose healthy frictions for learning tasks – specifically, examining which types of guardrails or speed bumps most successfully motivate users to learn actively beyond easy, synthesized answers. Such tools would seem particularly critical in secondary education, where a major challenge for educators is how best to equip students to develop foundational reading, writing and math skills while also preparing for a real world where LLMs are likely to be an integral part of their daily lives.

The Research Brief is a short take on interesting academic work.

Shiri Melumad, Associate Professor of Marketing, University of Pennsylvania

This article is republished from The Conversation under a Creative Commons license. Read the original article.







Leading with Purpose in the Age of AI

Leading with Purpose in the Age of AI: Cognizant guides organizations in AI adoption, addressing challenges like talent shortages and governance while empowering employees to transform business practices and achieve lasting impact.


Last Updated on November 10, 2025 by Daily News Staff

Leading with Purpose in the Age of AI

(Family Features) In today’s AI-powered economy, transformation is no longer optional – it’s essential. Enterprises are eager to embrace generative and agentic AI, but many lack the clarity and confidence to scale it responsibly.

As a global leader in technology and consulting services, Cognizant is helping organizations bridge that gap – turning possibility into progress.

The Moment is Now

AI is reshaping industries, redefining roles, and revolutionizing decision-making. According to Cognizant Research, 61% of senior decision-makers expect AI to drive complete business transformation. Yet, 83% feel unprepared to embed AI into their organizations, citing gaps in talent, governance, and culture.

This disconnect presents a powerful opportunity.

“In the age of AI, transformation isn’t just about technology, it’s about trust, talent and the ability to turn possibility into progress,” said Shveta Arora, head of Cognizant Consulting. “The true impact of AI is delivered when organizations build trust, invest in adaptable talent and embrace bold ideas. By empowering people and embedding AI responsibly, leaders can bridge the gap between potential and progress, ensuring lasting value for business and society.”

A Trusted Voice in AI

As a recognized leader in AI strategy and enterprise transformation, Cognizant brings credibility and clarity to this evolving space. It has been named a Leader and Star Performer by Everest Group in its 2024 AI and Generative AI Services PEAK Matrix Assessment, underscoring its strategic vision and execution.

Its thought leadership in AI strategy and enterprise transformation, published across thousands of U.S. outlets, has reinforced its position as a trusted voice in shaping the future of AI. It has also been recognized across the industry for excellence in client service and innovation.


Its platforms – Neuro, Flowsource and the Data and Intelligence Toolkit – are driving real-world impact across industries. Furthermore, a strategic collaboration with a leading enterprise-grade generative AI provider enables secure and scalable deployment of agentic AI in regulated settings, ensuring adherence to compliance and data governance standards.

Bridging the AI Adoption Gap

When a leading property intelligence provider’s IT systems were slowing turnaround times, the company turned to Cognizant’s Gen AI-powered Data as a Service and Neuro Business Process (BP) platform. Driven by AI insights and learning, Neuro BP centralized business processing. It automated data collection, case reviews and decision-making to align with the client’s goals. Powered by the platform, the organization saw a reduction in processing time and errors and an increase in productivity.

Stories like these are still the exception.

Despite enthusiasm and investment – global businesses are spending an average of $47.5 million on generative AI this year – many feel they’re moving too slowly. The barriers include talent shortages, infrastructure gaps and unclear governance. These challenges can be overcome by moving from experimentation to execution. With clarity, credibility and conviction, organizations can scale AI responsibly and effectively.

Accelerating Enterprise AI Transformations

Unlike traditional software, AI models are contextual computing engines. They don’t require every path to be spelled out in advance but instead interpret broad instructions and intent, and adapt based on the context they are given. Agentic AI systems that lack business-specific knowledge can produce generic or unreliable outputs.

To address this, enterprises need systems that can deliver the right information and tools to AI models – enabling accurate decisions, alignment with human goals, compliance with policy frameworks and adaptability to real-time challenges. This is the role of context engineering, an emerging discipline focused on delivering the right context at the right time to agentic systems. Context refers to the sum of a company’s institutional knowledge, including its operating models, roles, goals, metrics, processes, policies and governance – essential ingredients for effective AI.
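The article doesn’t spell out what such a system looks like in practice, but a minimal sketch can make the idea concrete. The Python snippet below shows the gist of context engineering as described here: institutional knowledge (roles, goals, policies, metrics) is assembled programmatically and delivered with the instructions an agent receives, so the model’s broad reasoning is grounded in the organization’s operating model. Every name in it – BusinessContext, build_prompt and the example values – is a hypothetical illustration, not a Cognizant platform API.

```python
# Hypothetical sketch of context engineering, not a vendor API:
# institutional knowledge is gathered into a structure and injected
# into the agent's prompt alongside the task at hand.
from dataclasses import dataclass, field


@dataclass
class BusinessContext:
    role: str                                   # who the agent acts as
    goals: list[str] = field(default_factory=list)
    policies: list[str] = field(default_factory=list)
    metrics: dict[str, str] = field(default_factory=dict)

    def render(self) -> str:
        """Flatten the operating model into prompt text."""
        lines = [f"You act as: {self.role}", "Goals:"]
        lines += [f"- {g}" for g in self.goals]
        lines.append("Policies you must follow:")
        lines += [f"- {p}" for p in self.policies]
        lines += [f"Metric {k}: {v}" for k, v in self.metrics.items()]
        return "\n".join(lines)


def build_prompt(ctx: BusinessContext, task: str) -> str:
    # "The right context at the right time": governance and goals travel
    # with the request, so the model's broad instructions stay grounded
    # in business-specific knowledge rather than producing generic output.
    return f"{ctx.render()}\n\nTask: {task}"


ctx = BusinessContext(
    role="claims-processing assistant",
    goals=["reduce case turnaround time"],
    policies=["never expose customer data", "escalate disputed claims"],
    metrics={"target turnaround": "3 days"},
)
print(build_prompt(ctx, "Summarize and triage today's open claims."))
```

In a real agentic system, the hard part is curating and retrieving this knowledge at scale – from process documentation, policy frameworks and live metrics – rather than hard-coding it, but the principle of delivering the right context at the right time is the same.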

To guide clients through their AI journey, Cognizant developed the Strategic Enterprise Agentification Framework, an end-to-end model designed to unlock productivity, market expansion and new business models.

At its core is the Agent Development Lifecycle (ADLC), which guides the development of enterprise agents and agentic AI systems across six distinct stages. Designed to align with real-world enterprise dynamics, ADLC supports seamless integration with business applications and embeds context engineering at every stage, ensuring agents are tailored to the business they serve.


To help bridge vision and execution, businesses can utilize the Neuro AI Multi-Agent Accelerator. This no-code framework allows rapid deployment of custom multi-agent systems.

People Power the Progress

Technology alone doesn’t transform enterprises – people do. With an AI-driven Workforce Transformation (WFT), Cognizant helps organizations reskill employees, redesign roles and build AI fluency. Integrated with the Agentification Framework, WFT is designed to accelerate transformation and support long-term resilience.

From Possibility to Progress

From strategic frameworks to enterprise platforms to workforce readiness, Cognizant equips organizations with the confidence to harness AI responsibly and at scale. In the age of AI, it’s not just about transformation – it’s about leading with purpose.

Explore more at cognizant.com.

Photo courtesy of Shutterstock

SOURCE:
Cognizant






Can AI keep students motivated, or does it do the opposite?

AI-based tools can be effective in motivating students but require proper design and thoughtful implementation. Associated Press

Yurou Wang, University of Alabama

Imagine a student using a writing assistant powered by a generative AI chatbot. As the bot serves up practical suggestions and encouragement, insights come more easily, drafts polish up quickly and feedback loops feel immediate. It can be energizing. But when that AI support is removed, some students report feeling less confident or less willing to engage.

These outcomes raise the question: Can AI tools genuinely boost student motivation? And what conditions can make or break that boost?

As AI tools become more common in classroom settings, the answers to these questions matter a lot. While tools for general use such as ChatGPT or Claude remain popular, more and more students are encountering AI tools that are purpose-built to support learning, such as Khan Academy’s Khanmigo, which personalizes lessons. Others, such as ALEKS, provide adaptive feedback. Both tools adjust to a learner’s level and highlight progress over time, which helps students feel capable and see improvement. But there are still many unknowns about the long-term effects of these tools on learners’ progress, an issue I continue to study as an educational psychologist.

What the evidence shows so far

Recent studies indicate that AI can boost motivation, at least for certain groups, when deployed under the right conditions. A 2025 experiment with university students showed that when AI tools delivered a high-quality performance and allowed meaningful interaction, students’ motivation and their confidence in being able to complete a task – known as self-efficacy – increased.

For foreign language learners, a 2025 study found that university students using AI-driven personalized systems took more pleasure in learning and had less anxiety and more self-efficacy compared with those using traditional methods. A recent cross-cultural analysis with participants from Egypt, Saudi Arabia, Spain and Poland who were studying diverse majors suggested that positive motivational effects are strongest when tools prioritize autonomy, self-direction and critical thinking. These individual findings align with a broader, systematic review of generative AI tools that found positive effects on student motivation and engagement across cognitive, emotional and behavioral dimensions.

A forthcoming meta-analysis from my team at the University of Alabama, which synthesized 71 studies, echoed these patterns. We found that generative AI tools on average produce moderate positive effects on motivation and engagement. The impact is larger when tools are used consistently over time rather than in one-off trials. Positive effects were also seen when teachers provided scaffolding, when students maintained agency in how they used the tool, and when the output quality was reliable.

But there are caveats. More than 50 of the studies we reviewed did not draw on a clear theoretical framework of motivation, and some used methods that we found were weak or inappropriate. This raises concerns about the quality of the evidence and underscores how much more careful research is needed before one can say with confidence that AI nurtures students’ intrinsic motivation rather than just making tasks easier in the moment.

When AI backfires

There is also research that paints a more sobering picture. A large study of more than 3,500 participants found that while human–AI collaboration improved task performance, it reduced intrinsic motivation once the AI was removed. Students reported more boredom and less satisfaction, suggesting that overreliance on AI can erode confidence in their own abilities.

Another study suggested that while learning achievement often rises with the use of AI tools, increases in motivation are smaller, inconsistent or short-lived. Quality matters as much as quantity. When AI delivers inaccurate results, or when students feel they have little control over how it is used, motivation quickly erodes. Confidence drops, engagement fades and students can begin to see the tool as a crutch rather than a support. And because there are not many long-term studies in this field, we still do not know whether AI can truly sustain motivation over time, or whether its benefits fade once the novelty wears off.

Not all AI tools work the same way

The impact of AI on student motivation is not one-size-fits-all. Our team’s meta-analysis shows that, on average, AI tools do have a positive effect, but the size of that effect depends on how and where they are used. When students work with AI regularly over time, when teachers guide them in using it thoughtfully, and when students feel in control of the process, the motivational benefits are much stronger.


We also saw differences across settings. College students seemed to gain more than younger learners, STEM and writing courses tended to benefit more than other subjects, and tools designed to give feedback or tutoring support outperformed those that simply generated content.

Specialized AI-based tools designed for learning tend to work better for students with proper teacher support compared to general-purpose chatbots such as ChatGPT and Claude. But those specialized products typically cost money, raising questions over equity and quality of education. Charlie Riedel/AP

There is also evidence that general-use tools like ChatGPT or Claude do not reliably promote intrinsic motivation or deeper engagement with content, compared to learning-specific platforms such as ALEKS and Khanmigo, which are more effective at supporting persistence and self-efficacy. However, these tools often come with subscription or licensing costs. This raises questions of equity, since the students who could benefit most from motivational support may also be the least likely to afford it.

These and other recent findings should be seen as only a starting point. Because AI is so new and is changing so quickly, what we know today may not hold true tomorrow. In a paper titled “The Death and Rebirth of Research in Education in the Age of AI,” the authors argue that the speed of technological change makes traditional studies outdated before they are even published. At the same time, AI opens the door to new ways of studying learning that are more participatory, flexible and imaginative. Taken together, the data and the critiques point to the same lesson: Context, quality and agency matter just as much as the technology itself.

Why it matters for all of us

The lessons from this growing body of research are straightforward. The presence of AI does not guarantee higher motivation, but it can make a difference if tools are designed and used with care and understanding of students’ needs. When it is used thoughtfully, in ways that strengthen students’ sense of competence, autonomy and connection to others, it can be a powerful ally in learning.

But without those safeguards, the short-term boost in performance could come at a steep cost. Over time, there is the risk of weakening the very qualities that matter most – motivation, persistence, critical thinking and the uniquely human capacities that no machine can replace.

For teachers, this means that while AI may prove a useful partner in learning, it should never serve as a stand-in for genuine instruction. For parents, it means paying attention to how children use AI at home, noticing whether they are exploring, practicing and building skills or simply leaning on it to finish tasks. For policymakers and technology developers, it means creating systems that support student agency, provide reliable feedback and avoid encouraging overreliance. And for students themselves, it is a reminder that AI can be a tool for growth, but only when paired with their own effort and curiosity.

Regardless of technology, students need to feel capable, autonomous and connected. Without these basic psychological needs in place, their sense of motivation will falter – with or without AI.

Yurou Wang, Associate Professor of Educational Psychology, University of Alabama

This article is republished from The Conversation under a Creative Commons license. Read the original article.



