This screenshot of an AI-generated video depicts Christopher Pelkey, who was killed in 2021.
Screenshot: Stacey Wales/YouTube
Nir Eisikovits, UMass Boston and Daniel J. Feldman, UMass Boston
Christopher Pelkey was shot and killed in a road rage incident in 2021. On May 8, 2025, at the sentencing hearing for his killer, an AI video reconstruction of Pelkey delivered a victim impact statement. The trial judge reported being deeply moved by this performance and issued the maximum sentence for manslaughter.
As part of the ceremonies to mark Israel’s 77th year of independence on April 30, 2025, officials had planned to host a concert featuring four iconic Israeli singers. All four had died years earlier. The plan was to conjure them using AI-generated sound and video. The dead performers were supposed to sing alongside Yardena Arazi, a famous and still very much alive artist. In the end Arazi pulled out, citing the political atmosphere, and the event didn’t happen.
In April, the BBC created a deepfake version of the famous mystery writer Agatha Christie to teach a “maestro course on writing.” Fake Agatha would instruct aspiring murder mystery authors and “inspire” their “writing journey.”
The use of artificial intelligence to “reanimate” the dead for a variety of purposes is quickly gaining traction. Over the past few years, we’ve been studying the moral implications of AI at the Center for Applied Ethics at the University of Massachusetts, Boston, and we find these AI reanimations to be morally problematic.
Before we address the moral challenges the technology raises, it’s important to distinguish AI reanimations, or deepfakes, from so-called griefbots. Griefbots are chatbots trained on large swaths of data the dead leave behind – social media posts, texts, emails, videos. These chatbots mimic how the departed used to communicate and are meant to make life easier for surviving relations. The deepfakes we are discussing here have other aims; they are meant to promote legal, political and educational causes.
Chris Pelkey was shot and killed in 2021. This AI ‘reanimation’ of him was presented in court as a victim impact statement.
Moral quandaries
The first moral quandary the technology raises has to do with consent: Would the deceased have agreed to do what their likeness is doing? Would the dead Israeli singers have wanted to sing at an Independence ceremony organized by the nation’s current government? Would Pelkey, the road-rage victim, be comfortable with the script his family wrote for his avatar to recite? What would Christie think about her AI double teaching that class?
The answers to these questions can only be deduced circumstantially – from examining the kinds of things the dead did and the views they expressed when alive. And one could ask if the answers even matter. If those in charge of the estates agree to the reanimations, isn’t the question settled? After all, such trustees are the legal representatives of the departed.
But putting aside the question of consent, a more fundamental question remains.
What do these reanimations do to the legacy and reputation of the dead? Doesn’t their reputation depend, to some extent, on the scarcity of appearance, on the fact that the dead can’t show up anymore? Dying can have a salutary effect on the reputation of prominent people; it was good for John F. Kennedy, and it was good for Israeli Prime Minister Yitzhak Rabin.
The fifth-century B.C. Athenian leader Pericles understood this well. In his famous Funeral Oration, delivered at the end of the first year of the Peloponnesian War, he asserts that a noble death can elevate one’s reputation and wash away their petty misdeeds. That is because the dead are beyond reach and their mystique grows postmortem. “Even extreme virtue will scarcely win you a reputation equal to” that of the dead, he insists.
Do AI reanimations devalue the currency of the dead by forcing them to keep popping up? Do they cheapen and destabilize their reputation by having them comment on events that happened long after their demise?
In addition, these AI representations can be a powerful tool to influence audiences for political or legal purposes. Bringing back a popular dead singer to legitimize a political event and reanimating a dead victim to offer testimony are acts intended to sway an audience’s judgment.
It’s one thing to channel a Churchill or a Roosevelt during a political speech by quoting them or even trying to sound like them. It’s another thing to have “them” speak alongside you. This technology supercharges the potential for harnessing nostalgia. Imagine, for example, what the Soviets, who literally worshipped Lenin’s dead body, would have done with a deepfake of their old icon.
Good intentions
You could argue that because these reanimations are uniquely engaging, they can be used for virtuous purposes. Consider a reanimated Martin Luther King Jr., speaking to our currently polarized and divided nation, urging moderation and unity. Wouldn’t that be grand? Or what about a reanimated Mordechai Anielewicz, the commander of the Warsaw Ghetto uprising, speaking at the trial of a Holocaust denier like David Irving?
But do we know what MLK would have thought about our current political divisions? Do we know what Anielewicz would have thought about restrictions on pernicious speech? Does bravely campaigning for civil rights mean we should call upon the digital ghost of King to comment on the impact of populism? Does fearlessly fighting the Nazis mean we should dredge up the AI shadow of an old hero to comment on free speech in the digital age?
No one can know with certainty what Martin Luther King Jr. would say about today’s society. AP Photo/Chick Harrity
Even if the political projects these AI avatars served were consistent with the deceased’s views, the problem of manipulation – of using the psychological power of deepfakes to appeal to emotions – remains.
But what about enlisting AI Agatha Christie to teach a writing class? Deepfakes may indeed have salutary uses in educational settings. The likeness of Christie could make students more enthusiastic about writing. Fake Aristotle could improve the chances that students engage with his austere Nicomachean Ethics. AI Einstein could help those who want to study physics get their heads around general relativity.
But producing these fakes comes with a great deal of responsibility. After all, given how engaging they can be, students may end up paying attention only to the representations themselves, rather than treating them as a gateway to exploring the subject further.
Living on in the living
In a poem written in memory of W.B. Yeats, W.H. Auden tells us that, after the poet’s death, Yeats “became his admirers.” His memory was now “scattered among a hundred cities,” and his work subject to endless interpretation: “the words of a dead man are modified in the guts of the living.”
The dead live on in the many ways we reinterpret their words and works. Auden did that to Yeats, and we’re doing it to Auden right here. That’s how people stay in touch with those who are gone. In the end, we believe that using technological prowess to concretely bring them back disrespects them and, perhaps more importantly, is an act of disrespect to ourselves – to our capacity to abstract, think and imagine.
Nir Eisikovits, Professor of Philosophy and Director, Applied Ethics Center, UMass Boston and Daniel J. Feldman, Senior Research Fellow, Applied Ethics Center, UMass Boston
This article is republished from The Conversation under a Creative Commons license. Read the original article.
STM Daily News is a vibrant news blog dedicated to sharing the brighter side of human experiences. Emphasizing positive, uplifting stories, the site focuses on delivering inspiring, informative, and well-researched content. With a commitment to accurate, fair, and responsible journalism, STM Daily News aims to foster a community of readers passionate about positive change and engaged in meaningful conversations. Join the movement and explore stories that celebrate the positive impacts shaping our world.
3D-printed model of a 500-year-old prosthetic hand hints at life of a Renaissance amputee
Technology is more than just mechanisms and design – it’s ultimately about people.
Adriene Simon/College of Liberal Arts, Auburn University, CC BY-SA
Heidi Hausse, Auburn University and Peden Jones, Auburn University
To think about an artificial limb is to think about a person. It’s an object of touch and motion made to be used, one that attaches to the body and interacts with its user’s world.
Historical artifacts of prosthetic limbs are far removed from this lived context. Their users are gone. They are damaged – deteriorated by time and exposure to the elements. They are motionless, kept on display or in museum storage.
Yet, such artifacts are rare direct sources into the lives of historical amputees. We focus on the tools amputees used in 16th- and 17th-century Europe. There are few records written from amputees’ perspectives at that time, and those that exist say little about what everyday life with a prosthesis was like.
Engineering offers historians new tools to examine physical evidence. This is particularly important for the study of early modern mechanical hands, a new kind of prosthetic technology that appeared at the turn of the 16th century. Most of the artifacts are of unknown provenance. Many work only partially and some not at all. Their practical functions remain a mystery.
But computer-aided design software can help scholars reconstruct the artifacts’ internal mechanisms. This, in turn, helps us understand how the objects once moved.
Even more exciting, 3D printing lets scholars create physical models. Rather than imagining how a Renaissance prosthesis worked, scholars can physically test one. It’s a form of investigation that opens new possibilities for exploring the development of prosthetic technology and user experience through the centuries. It creates a trail of breadcrumbs that can bring us closer to the everyday experiences of premodern amputees.
But what does this work, which brings together two very different fields, look like in action?
What follows is a glimpse into our experience of collaboration on a team of historians and engineers, told through the story of one week. Working together, we shared a model of a 16th-century prosthesis with the public and learned a lesson about humans and technology in the process.
A historian encounters a broken model
THE HISTORIAN: On a cloudy day in late March, I walked into the University of Alabama at Birmingham’s Center for Teaching and Learning holding a weatherproof case and brimming with excitement. Nestled within the case’s foam inserts was a functioning 3D-printed model of a 500-year-old prosthetic hand.
Fifteen minutes later, it broke.
This 3D-printed model of a 16th-century hand prosthesis has working mechanisms. Heidi Hausse, CC BY-SA
For two years, my team of historians and engineers at Auburn University had worked tirelessly to turn an idea – recreating the mechanisms of a 16th-century artifact from Germany – into reality. The original iron prosthesis, the Kassel Hand, is one of approximately 35 from Renaissance Europe known today.
As an early modern historian who studies these artifacts, I work with a mechanical engineer, Chad Rose, to find new ways to explore them. The Kassel Hand is our case study. Our goal is to learn more about the life of the unknown person who used this artifact 500 years ago.
Using 3D-printed models, we’ve run experiments to test what kinds of activities its user could have performed with it. We modeled in inexpensive polylactic acid – plastic – to make this fragile artifact accessible to anyone with a consumer-grade 3D printer. But before sharing our files with the public, we needed to see how the model fared when others handled it.
An invitation to guest lecture on our experiments in Birmingham was our opportunity to do just that.
We brought two models. The main release lever broke first in one and then the other. This lever has an interior triangular plate connected to a thin rod that juts out of the wrist like a trigger. After pressing the fingers into a locked position, pulling the trigger is the only way to free them. If it breaks, the fingers become stuck.
The thin rod of the main release lever snapped in this model. Heidi Hausse, CC BY-SA
I was baffled. During testing, the model had lifted a 20-pound simulation of a chest lid by its fingertips. Yet, the first time we shared it with a general audience, a mechanism that had never broken in testing simply snapped.
Was it a printing error? Material defect? Design flaw?
We consulted our Hand Whisperer: our lead student engineer whose feel for how the model works appears at times preternatural.
An engineer becomes a hand whisperer
THE ENGINEER: I was sitting at my desk in Auburn’s mechanical engineering 3D print lab when I heard the news.
As a mechanical engineering graduate student concentrating on additive manufacturing, commonly known as 3D printing, I explore how to use this technology to reconstruct historical mechanisms. Over the two years I’ve worked on this project, I’ve come to know the Kassel Hand model well. As we fine-tuned designs, I’ve created and edited its computer-aided design files – the digital 3D constructions of the model – and printed and assembled its parts countless times.
This view of the computer-aided design file of a strengthened version of the model, which includes ribs and fillets to reinforce the plastic material, highlights the main release lever in orange. Peden Jones, CC BY-SA
Examining parts mid-assembly is a crucial checkpoint for our prototypes. This quality control catches, corrects and prevents defects such as misprinted or damaged parts, and it is essential for creating consistent and repeatable experiments. A new model version or component change never leaves the lab without passing rigorous inspection. This process means there are ways this model has behaved over time that the rest of the team has never seen. But I have.
So when I heard the release lever had broken in Birmingham, it was just another Thursday. While it had never snapped when we tested the model on people, I’d seen it break plenty of times while performing checks on components.
Our model reconstructs the Kassel Hand’s original metal mechanisms in plastic. Heidi Hausse, CC BY-SA
After all, the model is made from relatively weak polylactic acid. Perhaps the most difficult part of our work is making a plastic model as durable as possible while keeping it visually consistent with the 500-year-old original. The iron rod of the artifact’s lever has at least five times the yield strength of our plastic version, so it can withstand far more force.
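As a rough sanity check on that factor – using generic handbook values we are assuming here, not measurements of these specific materials – typical yield strengths give a ratio of about five:

```latex
\frac{\sigma_{y,\,\text{wrought iron}}}{\sigma_{y,\,\text{printed PLA}}}
  \approx \frac{200\ \text{MPa}}{40\ \text{MPa}} \approx 5
```

Printed PLA varies widely with layer orientation and infill, so the real margin depends on how each part is made.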
I suspected the lever had snapped because people pulled the trigger too far back and too quickly. The challenge, then, was to prevent this. But redesigning the lever to be thicker or a different shape would make it less like the historical artifact.
This raised the question: Why could I use the model without breaking the lever, but no one else could?
The team makes a plan
THE TEAM: A flurry of discussion led to growing consensus – the crux of the issue was not the model, it was the user.
The original Kassel Hand’s wearer would have learned to use their prosthesis through practice. Likewise, our team had learned to use the model over time. Through the process of design and development, prototyping and printing, we were inadvertently practicing how to operate it.
We needed to teach others to do the same. And this called for a two-pronged approach.
A modern prosthetist’s perspective on using the Kassel Hand.
The engineers reexamined the opening through which the release trigger poked out of the model. They proposed shortening it to limit how far back users could pull it. When we checked how this change would affect the model’s accuracy, we found that a smaller opening was actually closer to the artifact’s dimensions. While the larger opening had been necessary for an earlier version of the release lever that needed to travel farther, now it only caused problems. The engineers got to work.
The historians, meanwhile, created plans to document and share the various techniques for operating the model that the team hadn’t realized it had honed. To teach someone at home how to operate their own copy, we filmed a short video explaining how to lock and release the fingers and troubleshoot when a finger sticks.
Testing the plan
Exactly one week after what we called “the Birmingham Break,” we shared the model with a general audience again. This time we visited a colleague’s history class at Auburn.
We brought four copies. Each had an insert to shorten the opening around the trigger. First, we played our new instructional video on a projector. Then we turned the models over to the students to try.
The team brought these four models with inserts to shorten the opening below the release trigger to test with a general audience of undergraduate and graduate students. Heidi Hausse, CC BY-SA
The result? Not a single broken lever. We publicly launched the project on schedule.
The process of introducing the Kassel Hand model to the public highlights that just as the 16th-century amputee who wore the artifact had to learn to use it, one must learn to use the 3D-printed model, too.
It is a potent reminder that technology is not just a matter of mechanisms and design. It is fundamentally about people – and how people use it.
Heidi Hausse, Associate Professor of History, Auburn University and Peden Jones, Graduate Student in Mechanical Engineering, Auburn University
This article is republished from The Conversation under a Creative Commons license. Read the original article.
AI-generated images can exploit how your mind works − here’s why they fool you and how to spot them
Arryn Robbins discusses the challenges of recognizing AI-generated images due to human cognitive limitations and inattentional blindness, emphasizing the importance of critical thinking in a visually fast-paced online environment.
I’m more of a scroller than a poster on social media. Like many people, I wind down at the end of the day with a scroll binge, taking in videos of Italian grandmothers making pasta or baby pygmy hippos frolicking.
For a while, my feed was filled with immaculately designed tiny homes, fueling my desire for a minimalist paradise. Then, I started seeing AI-generated images; many contained obvious errors, such as staircases to nowhere or sinks within sinks. Yet, commenters rarely pointed them out, instead admiring the aesthetic.
These images were clearly AI-generated and didn’t depict reality. Did people just not notice? Not care?
As a cognitive psychologist, I’d guess “yes” and “yes.” My expertise is in how people process and use visual information. I primarily investigate how people look for objects and information visually, from the mundane searches of daily life, such as trying to find a dropped earring, to more critical searches, like those conducted by radiologists or search-and-rescue teams.
With my understanding of how people process images and notice – or don’t notice – detail, it’s not surprising to me that people aren’t tuning in to the fact that many images are AI-generated.
We’ve been here before
The struggle to detect AI-generated images mirrors past detection challenges such as spotting photoshopped images or computer-generated images in movies.
But there’s a key difference: Photo editing and CGI require intentional design by artists, while AI images are generated by algorithms trained on datasets, often without human oversight. The lack of oversight can lead to imperfections or inconsistencies that can feel unnatural, such as the unrealistic physics or lack of consistency between frames that characterize what’s sometimes called “AI slop.”
Despite these differences, studies show people struggle to distinguish real images from synthetic ones, regardless of origin. Even when explicitly asked to identify images as real, synthetic or AI-generated, accuracy hovers near the level of chance – people do only a little better than if they had simply guessed.
In everyday interactions, where you aren’t actively scrutinizing images, your ability to detect synthetic content might even be weaker.
Attention shapes what you see, what you miss
Spotting errors in AI images requires noticing small details, but the human visual system isn’t wired for that when you’re casually scrolling. Instead, while online, people take in the gist of what they’re viewing and can overlook subtle inconsistencies.
Visual attention operates like a zoom lens: You scan broadly to get an overview of your environment or phone screen, but fine details require focused effort. Human perceptual systems evolved to quickly assess environments for any threats to survival, with sensitivity to sudden changes – such as a quick-moving predator – sacrificing precision for speed of detection.
This speed-accuracy trade-off allows for rapid, efficient processing, which helped early humans survive in natural settings. But it’s a mismatch with modern tasks such as scrolling through devices, where small mistakes or unusual details in AI-generated images can easily go unnoticed.
People also miss things they aren’t actively paying attention to or looking for. Psychologists call this inattentional blindness: Focusing on one task causes you to overlook other details, even obvious ones. In the famous invisible gorilla study, participants asked to count basketball passes in a video failed to notice someone in a gorilla suit walking through the middle of the scene.
If you’re counting how many passes the people in white make, do you even notice someone walk through in a gorilla suit?
Similarly, when your focus is on the broader content of an AI image, such as a cozy tiny home, you’re less likely to notice subtle distortions. In a way, the sixth finger in an AI image is today’s invisible gorilla – hiding in plain sight because you’re not looking for it.
Efficiency over accuracy in thinking
Our cognitive limitations go beyond visual perception. Human thinking uses two types of processing: fast, intuitive thinking based on mental shortcuts, and slower, analytical thinking that requires effort. When scrolling, our fast system likely dominates, leading us to accept images at face value.
Adding to this issue is the tendency to seek information that confirms your beliefs or reject information that goes against them. This means AI-generated images are more likely to slip by you when they align with your expectations or worldviews. If an AI-generated image of a basketball player making an impossible shot jibes with a fan’s excitement, they might accept it, even if something feels exaggerated.
While not a big deal for tiny home aesthetics, these issues become concerning when AI-generated images may be used to influence public opinion. For example, research shows that people tend to assume images are relevant to accompanying text. Even when the images provide no actual evidence, they make people more likely to accept the text’s claims as true.
Misleading real or generated images can make false claims seem more believable and even cause people to misremember real events. AI-generated images have the power to shape opinions and spread misinformation in ways that are difficult to counter.
How to spot AI-generated images
Trust your gut. If something feels off, it probably is. Your brain expertly recognizes objects and faces, even under varying conditions. Perhaps you’ve experienced what psychologists call the uncanny valley and felt unease with certain humanoid faces. This experience shows people can detect anomalies, even when they can’t fully explain what’s wrong.
Scan for clues. AI struggles with certain elements: hands, text, reflections, lighting inconsistencies and unnatural textures. If an image seems suspicious, take a closer look.
Think critically. Sometimes, AI generates photorealistic images with impossible scenarios. If you see a political figure casually surprising baristas or a celebrity eating concrete, ask yourself: Does this make sense? If not, it’s probably fake.
Check the source. Is the poster a real person? Reverse image search can help trace a picture’s origin, and a quick look at the file’s metadata – as in the sketch below – can offer another clue: if it’s missing, the image might be AI-generated.
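For the metadata part of that last tip, here is a minimal sketch using Python’s Pillow library (the filename is hypothetical). Bear in mind that EXIF data is routinely stripped by social platforms and can also be forged, so its absence is a clue, not proof:

```python
# A minimal metadata check with the Pillow library.
# Absent EXIF data is a clue, not proof: platforms routinely strip it,
# and it can also be forged.
from PIL import Image
from PIL.ExifTags import TAGS

def inspect_exif(path: str) -> None:
    exif = Image.open(path).getexif()
    if not exif:
        print(f"{path}: no EXIF metadata found")
        return
    for tag_id, value in exif.items():
        name = TAGS.get(tag_id, tag_id)  # fall back to the raw tag id
        print(f"{name}: {value}")

inspect_exif("suspicious_photo.jpg")  # hypothetical filename
```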
AI-generated images are becoming harder to spot. During scrolling, the brain processes visuals quickly, not critically, making it easy to miss details that reveal a fake. As technology advances, slow down, look closer and think critically.
How close are quantum computers to being really useful? Podcast
Quantum computers could revolutionize science by solving complex problems. However, scaling and error correction remain significant challenges before achieving practical applications.
Quantum computers have the potential to solve big scientific problems that are beyond the reach of today’s most powerful supercomputers, such as discovering new antibiotics or developing new materials.
But to achieve these breakthroughs, quantum computers will need to perform better than today’s best classical computers at solving real-world problems. And they’re not quite there yet. So what is still holding quantum computing back from becoming useful?
In this episode of The Conversation Weekly podcast, we speak to quantum computing expert Daniel Lidar at the University of Southern California in the US about what problems scientists are still wrestling with when it comes to scaling up quantum computing, and how close they are to overcoming them.
Quantum computers harness the power of quantum mechanics, the laws that govern subatomic particles. Instead of the classical bits of information used by microchips inside traditional computers, which are either a 0 or a 1, the chips in quantum computers use qubits, which can be both 0 and 1 at the same time or anywhere in between. Daniel Lidar explains:
“Put a lot of these qubits together and all of a sudden you have a computer that can simultaneously represent many, many different possibilities … and that is the starting point for the speed up that we can get from quantum computing.”
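To make that picture concrete, here is a toy numerical sketch – our illustration, not the researchers’ code – of a single qubit written as two complex amplitudes, using Python and NumPy. It shows the linear algebra on paper; it is not how real quantum hardware is programmed:

```python
# A toy sketch of one qubit as a two-element complex state vector.
import numpy as np

ket0 = np.array([1, 0], dtype=complex)  # the |0> state
ket1 = np.array([0, 1], dtype=complex)  # the |1> state

# An equal superposition: loosely, "0 and 1 at the same time."
psi = (ket0 + ket1) / np.sqrt(2)

# The Born rule: measurement probabilities are the squared amplitudes.
print(np.abs(psi) ** 2)  # [0.5 0.5]

# n qubits need 2**n amplitudes to describe classically --
# the exponential state space behind the hoped-for speedups.
n = 20
print(f"{n} qubits -> {2**n:,} amplitudes")
```

The last line hints at why scaling matters: every added qubit doubles the number of amplitudes a classical simulation must track.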
Faulty qubits
One of the biggest problems scientists face is how to scale up quantum computing power. Qubits are notoriously prone to errors – which means that they can quickly revert to being either a 0 or a 1, and so lose their advantage over classical computers.
Scientists have focused on trying to solve these errors through the concept of redundancy – linking strings of physical qubits together into what’s called a “logical qubit” – to maximize the number of reliable steps in a computation. And, little by little, they’re getting there.
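The classical intuition behind that redundancy can be seen in the three-bit repetition code – a deliberately simplified stand-in we use here for illustration, since real quantum error correction must also handle phase errors and avoid directly measuring the encoded state:

```python
# A classical caricature of redundancy: the three-bit repetition code.
# Encode one logical bit as three noisy physical copies, then take a
# majority vote on readout.
import random

def noisy_copy(bit: int, p_flip: float) -> int:
    """Return the bit, flipped with probability p_flip."""
    return bit ^ 1 if random.random() < p_flip else bit

def logical_readout(bit: int, p_flip: float) -> int:
    """Majority vote over three independently noisy copies."""
    copies = [noisy_copy(bit, p_flip) for _ in range(3)]
    return 1 if sum(copies) >= 2 else 0

p = 0.05          # physical error rate per copy
trials = 100_000
errors = sum(logical_readout(0, p) != 0 for _ in range(trials))
print(f"physical error rate {p:.3f} -> logical error rate {errors / trials:.4f}")
# Roughly 3*p**2 ~= 0.0075: below p, so redundancy helps when p is small.
```

Below a threshold error rate, adding redundancy suppresses errors rather than amplifying them – the qualitative behavior quantum error-correcting codes pursue at far greater cost.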
In December 2024, Google announced that its new quantum chip, Willow, had demonstrated what’s called “beyond breakeven”: its logical qubits worked better than their constituent physical qubits and even kept improving as the system scaled up.
Lidar says right now the development of this technology is happening very fast:
“For quantum computing to scale and to take off is going to still take some real science breakthroughs, some real engineering breakthroughs, and probably overcoming some yet unforeseen surprises before we get to the point of true quantum utility. With that caution in mind, I think it’s still very fair to say that we are going to see truly functional, practical quantum computers kicking into gear, helping us solve real-life problems, within the next decade or so.”
Listen to Lidar explain more about how quantum computers and quantum error correction work on The Conversation Weekly podcast.
This episode of The Conversation Weekly was written and produced by Gemma Ware with assistance from Katie Flood and Mend Mariwany. Sound design was by Michelle Macklem, and theme music by Neeta Sarl.
Privacy & Cookies: This site uses cookies. By continuing to use this website, you agree to their use.
To find out more, including how to control cookies, see here:
Cookie Policy