
Artificial Intelligence

With a Lithium-6 Test Case, Quantum Computing Comes to a Historic Nuclear Physics Problem

Researchers worked out how to efficiently prepare wave functions for the lithium-6 nuclear ground state and implemented those on quantum hardware.

Credit: Image courtesy of O.O. Kiss et al., Quantum computing of the 6Li nucleus via ordered unitary coupled clusters. Physical Review C 106, 034325 (2022).
Training curves using trial wave functions that employ a different order of operators to generate them. The x axis shows the number of steps used in the computation. The y axis shows the accuracy of the ground-state energy.

The Science

In nuclear physics, quantum computing cannot yet solve problems better than classical computing. However, quantum computing hardware continues to advance, and this progress makes it worthwhile to evaluate how these tools could be used to solve physics problems. This research applied quantum computing to determine the energy levels of the lithium-6 nucleus. Nuclear energy levels correspond to different configurations of the protons and neutrons in a nucleus. To prepare the ground state of a nucleus—its lowest energy level—on a quantum computer, scientists must try out many different sequences of quantum operations that build up that state. Scientists try these alternatives to see which ordering of operations produces an accurate description. This research applied this approach to the lithium-6 nucleus on quantum hardware, demonstrating how quantum computing can solve real physics problems.

The Impact

This work shows how to solve a historic yet realistic nuclear physics research problem on present-day quantum computers. The problem involves working out how to refine trial wave functions with a small number of steps to find the best approximation of a nucleus's ground state. Each step must be simple enough to be implemented on today's limited quantum computer hardware. At the same time, the steps taken together must be sophisticated enough to actually solve the problem. This work showed how to efficiently order the operations ("steps") that prepare wave functions, the quantum-mechanical descriptions of the protons and neutrons in a nucleus. It compared different orderings and successfully implemented the optimal strategy on a commercially available quantum chip.

Summary

An international team of researchers worked out how to solve a nuclear physics research problem dating from the 1960s on present-day quantum computers. Quantum computers can perform operations on trial states to refine them until they solve a given problem. It is important to determine which operations should be applied to prepare such states, and in what order. This research tested different schemes for preparing states of an atomic nucleus. An efficient scheme was found and used to perform these operations on quantum hardware. The researchers computed the ground state and an excited state of the lithium-6 nucleus. They also found that mitigating measurement errors is essential for achieving accurate results. These errors stem from noise that disturbs the operation of today's quantum chips.
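The variational idea behind such calculations can be illustrated classically. The sketch below is a toy stand-in, not the paper's lithium-6 Hamiltonian or its ordered unitary coupled-cluster ansatz: a one-parameter trial wave function is refined until its energy expectation value reaches the exact ground-state energy of a small model Hamiltonian, mimicking the refine-and-measure loop run on quantum hardware.

```python
import numpy as np

# Toy 2x2 Hermitian "Hamiltonian" (illustrative only; the real lithium-6
# problem involves many interacting protons and neutrons).
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])

def trial_state(theta):
    # A one-parameter trial wave function |psi(theta)>.
    return np.array([np.cos(theta), np.sin(theta)])

def energy(theta):
    # Expectation value <psi|H|psi> -- the quantity a quantum computer
    # estimates by repeated measurement.
    psi = trial_state(theta)
    return psi @ H @ psi

# A simple grid search stands in for the optimizer that refines the state.
thetas = np.linspace(0, np.pi, 1000)
best = min(thetas, key=energy)

exact = np.linalg.eigvalsh(H)[0]  # exact ground-state energy, for comparison
print(round(energy(best), 4), round(exact, 4))  # → -1.118 -1.118
```

On real hardware the trial state is built by applying parametrized quantum gates, and the art studied in this work is choosing the order of those operations so that few steps suffice.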

Funding

This work was supported by CERN Quantum Technology Initiative, the Department of Energy (DOE) Office of Science, Office of Nuclear Physics, and by the Quantum Science Center, a DOE National Quantum Information Science Research Center. Access to the IBM Quantum Services was obtained through the IBM Quantum Hub at CERN.

Journal Link: Physical Review C, Sep-2022

Source:  Department of Energy, Office of Science

Discover more from Daily News

Subscribe to get the latest posts sent to your email.


Artificial Intelligence

AI ‘reanimations’: Making facsimiles of the dead raises ethical quandaries

This screenshot of an AI-generated video depicts Christopher Pelkey, who was killed in 2021. Screenshot: Stacey Wales/YouTube
Nir Eisikovits, UMass Boston and Daniel J. Feldman, UMass Boston

Christopher Pelkey was shot and killed in a road rage incident in 2021. On May 8, 2025, at the sentencing hearing for his killer, an AI video reconstruction of Pelkey delivered a victim impact statement. The trial judge reported being deeply moved by this performance and issued the maximum sentence for manslaughter.

As part of the ceremonies to mark Israel's 77th year of independence on April 30, 2025, officials had planned to host a concert featuring four iconic Israeli singers. All four had died years earlier. The plan was to conjure them using AI-generated sound and video. The dead performers were supposed to sing alongside Yardena Arazi, a famous and still very much alive artist. In the end Arazi pulled out, citing the political atmosphere, and the event didn't happen.

In April, the BBC created a deep-fake version of the famous mystery writer Agatha Christie to teach a "maestro course on writing." Fake Agatha would instruct aspiring murder mystery authors and "inspire" their "writing journey."

The use of artificial intelligence to "reanimate" the dead for a variety of purposes is quickly gaining traction. Over the past few years, we've been studying the moral implications of AI at the Center for Applied Ethics at the University of Massachusetts, Boston, and we find these AI reanimations to be morally problematic.

Before we address the moral challenges the technology raises, it's important to distinguish AI reanimations, or deepfakes, from so-called griefbots. Griefbots are chatbots trained on large swaths of data the dead leave behind – social media posts, texts, emails, videos. These chatbots mimic how the departed used to communicate and are meant to make life easier for surviving relations. The deepfakes we are discussing here have other aims; they are meant to promote legal, political and educational causes.
Chris Pelkey was shot and killed in 2021. This AI ‘reanimation’ of him was presented in court as a victim impact statement.

Moral quandaries

The first moral quandary the technology raises has to do with consent: Would the deceased have agreed to do what their likeness is doing? Would the dead Israeli singers have wanted to sing at an Independence ceremony organized by the nation's current government? Would Pelkey, the road-rage victim, be comfortable with the script his family wrote for his avatar to recite? What would Christie think about her AI double teaching that class?

The answers to these questions can only be deduced circumstantially – from examining the kinds of things the dead did and the views they expressed when alive. And one could ask if the answers even matter. If those in charge of the estates agree to the reanimations, isn't the question settled? After all, such trustees are the legal representatives of the departed.

But putting aside the question of consent, a more fundamental question remains. What do these reanimations do to the legacy and reputation of the dead? Doesn't their reputation depend, to some extent, on the scarcity of appearance, on the fact that the dead can't show up anymore? Dying can have a salutary effect on the reputation of prominent people; it was good for John F. Kennedy, and it was good for Israeli Prime Minister Yitzhak Rabin.

The fifth-century B.C. Athenian leader Pericles understood this well. In his famous Funeral Oration, delivered at the end of the first year of the Peloponnesian War, he asserts that a noble death can elevate one's reputation and wash away their petty misdeeds. That is because the dead are beyond reach and their mystique grows postmortem. "Even extreme virtue will scarcely win you a reputation equal to" that of the dead, he insists.

Do AI reanimations devalue the currency of the dead by forcing them to keep popping up? Do they cheapen and destabilize their reputation by having them comment on events that happened long after their demise?
In addition, these AI representations can be a powerful tool to influence audiences for political or legal purposes. Bringing back a popular dead singer to legitimize a political event and reanimating a dead victim to offer testimony are acts intended to sway an audience’s judgment. It’s one thing to channel a Churchill or a Roosevelt during a political speech by quoting them or even trying to sound like them. It’s another thing to have “them” speak alongside you. The potential of harnessing nostalgia is supercharged by this technology. Imagine, for example, what the Soviets, who literally worshipped Lenin’s dead body, would have done with a deep fake of their old icon.

Good intentions

You could argue that because these reanimations are uniquely engaging, they can be used for virtuous purposes. Consider a reanimated Martin Luther King Jr., speaking to our currently polarized and divided nation, urging moderation and unity. Wouldn’t that be grand? Or what about a reanimated Mordechai Anielewicz, the commander of the Warsaw Ghetto uprising, speaking at the trial of a Holocaust denier like David Irving? But do we know what MLK would have thought about our current political divisions? Do we know what Anielewicz would have thought about restrictions on pernicious speech? Does bravely campaigning for civil rights mean we should call upon the digital ghost of King to comment on the impact of populism? Does fearlessly fighting the Nazis mean we should dredge up the AI shadow of an old hero to comment on free speech in the digital age?
No one can know with certainty what Martin Luther King Jr. would say about today’s society. AP Photo/Chick Harrity
Even if the political projects these AI avatars served were consistent with the deceased's views, the problem of manipulation – of using the psychological power of deepfakes to appeal to emotions – remains.

But what about enlisting AI Agatha Christie to teach a writing class? Deep fakes may indeed have salutary uses in educational settings. The likeness of Christie could make students more enthusiastic about writing. Fake Aristotle could improve the chances that students engage with his austere Nicomachean Ethics. AI Einstein could help those who want to study physics get their heads around general relativity.

But producing these fakes comes with a great deal of responsibility. After all, given how engaging they can be, it's possible that the interactions with these representations will be all that students pay attention to, rather than serving as a gateway to exploring the subject further.

Living on in the living

In a poem written in memory of W.B. Yeats, W.H. Auden tells us that, after the poet's death, Yeats "became his admirers." His memory was now "scattered among a hundred cities," and his work subject to endless interpretation: "the words of a dead man are modified in the guts of the living." The dead live on in the many ways we reinterpret their words and works. Auden did that to Yeats, and we're doing it to Auden right here. That's how people stay in touch with those who are gone.

In the end, we believe that using technological prowess to concretely bring them back disrespects them and, perhaps more importantly, is an act of disrespect to ourselves – to our capacity to abstract, think and imagine.

Nir Eisikovits, Professor of Philosophy and Director, Applied Ethics Center, UMass Boston and Daniel J. Feldman, Senior Research Fellow, Applied Ethics Center, UMass Boston

This article is republished from The Conversation under a Creative Commons license. Read the original article.

STM Daily News is a vibrant news blog dedicated to sharing the brighter side of human experiences. Emphasizing positive, uplifting stories, the site focuses on delivering inspiring, informative, and well-researched content. With a commitment to accurate, fair, and responsible journalism, STM Daily News aims to foster a community of readers passionate about positive change and engaged in meaningful conversations. Join the movement and explore stories that celebrate the positive impacts shaping our world.

https://stmdailynews.com/



STM Blog

AI-generated images can exploit how your mind works − here’s why they fool you and how to spot them

Arryn Robbins discusses the challenges of recognizing AI-generated images due to human cognitive limitations and inattentional blindness, emphasizing the importance of critical thinking in a visually fast-paced online environment.


Arryn Robbins, University of Richmond

I’m more of a scroller than a poster on social media. Like many people, I wind down at the end of the day with a scroll binge, taking in videos of Italian grandmothers making pasta or baby pygmy hippos frolicking.

For a while, my feed was filled with immaculately designed tiny homes, fueling my desire for a minimalist paradise. Then, I started seeing AI-generated images; many contained obvious errors, such as staircases to nowhere or sinks within sinks. Yet, commenters rarely pointed them out, instead admiring the aesthetic.

https://www.facebook.com/photo?fbid=948015667455503&set=gm.665399635824007&idorvanity=351768197187154

These images were clearly AI-generated and didn’t depict reality. Did people just not notice? Not care?

As a cognitive psychologist, I’d guess “yes” and “yes.” My expertise is in how people process and use visual information. I primarily investigate how people look for objects and information visually, from the mundane searches of daily life, such as trying to find a dropped earring, to more critical searches, like those conducted by radiologists or search-and-rescue teams.

With my understanding of how people process images and notice − or don’t notice − detail, it’s not surprising to me that people aren’t tuning in to the fact that many images are AI-generated.

We’ve been here before

The struggle to detect AI-generated images mirrors past detection challenges such as spotting photoshopped images or computer-generated images in movies.


But there’s a key difference: Photo editing and CGI require intentional design by artists, while AI images are generated by algorithms trained on datasets, often without human oversight. The lack of oversight can lead to imperfections or inconsistencies that can feel unnatural, such as the unrealistic physics or lack of consistency between frames that characterize what’s sometimes called “AI slop.”

Despite these differences, studies show people struggle to distinguish real images from synthetic ones, regardless of origin. Even when explicitly asked to identify images as real, synthetic or AI-generated, accuracy hovers near the level of chance, meaning people did only a little better than if they’d just guessed.

In everyday interactions, where you aren’t actively scrutinizing images, your ability to detect synthetic content might even be weaker.

Attention shapes what you see, what you miss

Spotting errors in AI images requires noticing small details, but the human visual system isn’t wired for that when you’re casually scrolling. Instead, while online, people take in the gist of what they’re viewing and can overlook subtle inconsistencies.

Visual attention operates like a zoom lens: You scan broadly to get an overview of your environment or phone screen, but fine details require focused effort. Human perceptual systems evolved to quickly assess environments for any threats to survival, with sensitivity to sudden changes − such as a quick-moving predator − sacrificing precision for speed of detection.

This speed-accuracy trade-off allows for rapid, efficient processing, which helped early humans survive in natural settings. But it’s a mismatch with modern tasks such as scrolling through devices, where small mistakes or unusual details in AI-generated images can easily go unnoticed.

People also miss things they aren’t actively paying attention to or looking for. Psychologists call this inattentional blindness: Focusing on one task causes you to overlook other details, even obvious ones. In the famous invisible gorilla study, participants asked to count basketball passes in a video failed to notice someone in a gorilla suit walking through the middle of the scene.

If you’re counting how many passes the people in white make, do you even notice someone walk through in a gorilla suit?

Similarly, when your focus is on the broader content of an AI image, such as a cozy tiny home, you’re less likely to notice subtle distortions. In a way, the sixth finger in an AI image is today’s invisible gorilla − hiding in plain sight because you’re not looking for it.

Efficiency over accuracy in thinking

Our cognitive limitations go beyond visual perception. Human thinking uses two types of processing: fast, intuitive thinking based on mental shortcuts, and slower, analytical thinking that requires effort. When scrolling, our fast system likely dominates, leading us to accept images at face value.

Adding to this issue is the tendency to seek information that confirms your beliefs or reject information that goes against them. This means AI-generated images are more likely to slip by you when they align with your expectations or worldviews. If an AI-generated image of a basketball player making an impossible shot jibes with a fan’s excitement, they might accept it, even if something feels exaggerated.

While not a big deal for tiny home aesthetics, these issues become concerning when AI-generated images may be used to influence public opinion. For example, research shows that people tend to assume images are relevant to accompanying text. Even when the images provide no actual evidence, they make people more likely to accept the text’s claims as true.

Misleading real or generated images can make false claims seem more believable and even cause people to misremember real events. AI-generated images have the power to shape opinions and spread misinformation in ways that are difficult to counter.

https://www.facebook.com/photo/?fbid=1010254754457256&set=a.407186301430774

Beating the machine

While AI gets better at detecting AI, humans need tools to do the same. Here’s how:

  1. Trust your gut. If something feels off, it probably is. Your brain expertly recognizes objects and faces, even under varying conditions. Perhaps you’ve experienced what psychologists call the uncanny valley and felt unease with certain humanoid faces. This experience shows people can detect anomalies, even when they can’t fully explain what’s wrong.
  2. Scan for clues. AI struggles with certain elements: hands, text, reflections, lighting inconsistencies and unnatural textures. If an image seems suspicious, take a closer look.
  3. Think critically. Sometimes, AI generates photorealistic images with impossible scenarios. If you see a political figure casually surprising baristas or a celebrity eating concrete, ask yourself: Does this make sense? If not, it’s probably fake.
  4. Check the source. Is the poster a real person? Reverse image search can help trace a picture’s origin. If the metadata is missing, it might be generated by AI.

AI-generated images are becoming harder to spot. During scrolling, the brain processes visuals quickly, not critically, making it easy to miss details that reveal a fake. As technology advances, slow down, look closer and think critically. The Conversation

Arryn Robbins, Assistant Professor of Psychology, University of Richmond

This article is republished from The Conversation under a Creative Commons license. Read the original article.


A beautiful kitchen to scroll past – but check out the clock. Tiny Homes via Facebook


Tech

How close are quantum computers to being really useful? Podcast

Quantum computers could revolutionize science by solving complex problems. However, scaling and error correction remain significant challenges before achieving practical applications.

Audio und verbung/Shutterstock

Gemma Ware, The Conversation

Quantum computers have the potential to solve big scientific problems that are beyond the reach of today’s most powerful supercomputers, such as discovering new antibiotics or developing new materials.

But to achieve these breakthroughs, quantum computers will need to perform better than today’s best classical computers at solving real-world problems. And they’re not quite there yet. So what is still holding quantum computing back from becoming useful?

In this episode of The Conversation Weekly podcast, we speak to quantum computing expert Daniel Lidar at the University of Southern California in the US about what problems scientists are still wrestling with when it comes to scaling up quantum computing, and how close they are to overcoming them.

https://cdn.theconversation.com/infographics/561/4fbbd099d631750693d02bac632430b71b37cd5f/site/index.html

Quantum computers harness the power of quantum mechanics, the laws that govern subatomic particles. Instead of the classical bits of information used by microchips inside traditional computers, which are either a 0 or a 1, the chips in quantum computers use qubits, which can be both 0 and 1 at the same time or anywhere in between. Daniel Lidar explains:

“Put a lot of these qubits together and all of a sudden you have a computer that can simultaneously represent many, many different possibilities … and that is the starting point for the speed up that we can get from quantum computing.”
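The idea can be modeled classically as a sanity check: a qubit's state is a normalized pair of complex amplitudes, and n qubits together carry 2**n amplitudes. The NumPy sketch below is illustrative only, not how quantum hardware is actually programmed:

```python
import numpy as np

# A classical bit is 0 or 1; a qubit is a normalized pair of complex
# amplitudes over the basis states |0> and |1>.
zero = np.array([1, 0], dtype=complex)
one = np.array([0, 1], dtype=complex)

# Equal superposition: "both 0 and 1 at the same time", with
# measurement probability 1/2 for each outcome.
plus = (zero + one) / np.sqrt(2)
probs = np.abs(plus) ** 2
print(probs)  # → [0.5 0.5]

# n qubits live in a 2**n-dimensional space -- the "many, many
# different possibilities" Lidar describes. Three qubits in equal
# superposition already hold 8 amplitudes at once.
three_qubits = np.kron(np.kron(plus, plus), plus)
print(len(three_qubits))  # → 8
```

Doubling the qubit count squares the number of amplitudes, which is why even modest quantum chips represent state spaces far beyond classical memory.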

Faulty qubits

One of the biggest problems scientists face is how to scale up quantum computing power. Qubits are notoriously prone to errors – which means that they can quickly revert to being either a 0 or a 1, and so lose their advantage over classical computers.

Scientists have focused on trying to solve these errors through the concept of redundancy – linking strings of physical qubits together into what’s called a “logical qubit” to try and maximise the number of steps in a computation. And, little by little, they’re getting there.
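The redundancy idea has a simple classical analogue, the repetition code, sketched below. This is a toy: real quantum error correction (such as the surface code behind Google's Willow result) must protect fragile quantum states without directly reading them out, which is far harder. But the payoff is the same in spirit: the encoded "logical" bit fails far less often than any single physical bit.

```python
import random

def encode(bit):
    # One logical bit stored redundantly in three physical bits.
    return [bit, bit, bit]

def noisy(bits, p, rng):
    # Flip each physical bit independently with probability p.
    return [b ^ (rng.random() < p) for b in bits]

def decode(bits):
    # Majority vote recovers the logical bit unless 2+ bits flipped.
    return int(sum(bits) >= 2)

rng = random.Random(0)
p = 0.1           # per-bit error rate
trials = 100_000

# Unprotected bit: fails whenever its single flip occurs (~10%).
raw_errors = sum(rng.random() < p for _ in range(trials))
# Encoded bit: fails only when 2 or more of 3 bits flip,
# about 3*p**2*(1-p) + p**3 ≈ 2.8%.
coded_errors = sum(decode(noisy(encode(0), p, rng)) != 0
                   for _ in range(trials))

print(raw_errors / trials, coded_errors / trials)
```

"Beyond breakeven" is the quantum analogue of the moment when the encoded error rate drops below the raw one, and keeps dropping as more redundancy is added.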


In December 2024, Google announced that its new quantum chip, Willow, had demonstrated what’s called “beyond breakeven”, when its logical qubits worked better than the constituent parts and even kept on improving as it scaled up.

Lidar says right now the development of this technology is happening very fast:

“For quantum computing to scale and to take off is going to still take some real science breakthroughs, some real engineering breakthroughs, and probably overcoming some yet unforeseen surprises before we get to the point of true quantum utility. With that caution in mind, I think it’s still very fair to say that we are going to see truly functional, practical quantum computers kicking into gear, helping us solve real-life problems, within the next decade or so.”

Listen to Lidar explain more about how quantum computers and quantum error correction works on The Conversation Weekly podcast.


This episode of The Conversation Weekly was written and produced by Gemma Ware with assistance from Katie Flood and Mend Mariwany. Sound design was by Michelle Macklem, and theme music by Neeta Sarl.

Clips in this episode from Google Quantum AI and 10 Hours Channel.

You can find us on Instagram at theconversationdotcom or via e-mail. You can also subscribe to The Conversation’s free daily e-mail here.

Listen to The Conversation Weekly via any of the apps listed above, download it directly via our RSS feed or find out how else to listen here.


Gemma Ware, Host, The Conversation Weekly Podcast, The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.
