Quantum Material Exhibits “Non-Local” Behavior That Mimics Brain Function
New research shows a possible way to improve energy-efficient computing

Electrical stimuli passed between neighboring electrodes can also affect non-neighboring electrodes.
Newswise — We often assume computers are more efficient than humans. After all, a computer can solve a complex math equation in an instant and recall the name of that one actor we keep forgetting. Yet human brains can process complicated layers of information quickly, accurately, and with almost no energy input: recognizing a face after seeing it only once, or instantly knowing the difference between a mountain and the ocean. These simple human tasks demand enormous processing power and energy from computers, and even then the results are only variably accurate.
Creating brain-like computers with minimal energy requirements would revolutionize nearly every aspect of modern life. Funded by the Department of Energy, Quantum Materials for Energy Efficient Neuromorphic Computing (Q-MEEN-C) — a nationwide consortium led by the University of California San Diego — has been at the forefront of this research.
UC San Diego Assistant Professor of Physics Alex Frañó is co-director of Q-MEEN-C and thinks of the center’s work in phases. In the first phase, he worked closely with Robert Dynes, President Emeritus of the University of California and Professor of Physics, as well as Rutgers Professor of Engineering Shriram Ramanathan. Together, their teams succeeded in finding ways to create or mimic the properties of a single brain element (such as a neuron or synapse) in a quantum material.
Now, in phase two, new research from Q-MEEN-C, published in Nano Letters, shows that electrical stimuli passed between neighboring electrodes can also affect non-neighboring electrodes. This behavior, known as non-locality, is a crucial milestone on the journey toward neuromorphic computing: new types of devices that mimic brain functions.
“In the brain it’s understood that these non-local interactions are nominal — they happen frequently and with minimal exertion,” stated Frañó, one of the paper’s co-authors. “It’s a crucial part of how the brain operates, but similar behaviors replicated in synthetic materials are scarce.”
Like many research projects now bearing fruit, the idea to test whether non-locality in quantum materials was possible came about during the pandemic. Physical lab spaces were shuttered, so the team ran calculations on arrays that contained multiple devices to mimic the multiple neurons and synapses in the brain. In running these tests, they found that non-locality was theoretically possible.
When labs reopened, they refined this idea further and enlisted UC San Diego Jacobs School of Engineering Associate Professor Duygu Kuzum, whose work in electrical and computer engineering helped them turn a simulation into an actual device.
This involved taking a thin film of nickelate — a “quantum material” ceramic that displays rich electronic properties — inserting hydrogen ions, and then placing a metal conductor on top. A wire is attached to the metal so that an electrical signal can be sent to the nickelate. The signal causes the gel-like hydrogen atoms to move into a certain configuration and when the signal is removed, the new configuration remains.
“This is essentially what a memory looks like,” stated Frañó. “The device remembers that you perturbed the material. Now you can fine tune where those ions go to create pathways that are more conductive and easier for electricity to flow through.”
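To make the idea of a material that “remembers” a perturbation concrete, here is a deliberately simple toy model in Python. It is a hypothetical classical sketch, not the actual nickelate physics: a write pulse nudges a conductance value, the new value persists after the pulse ends, and a read leaves the state untouched. The class name, update rule, and units are all invented for illustration.

```python
# Toy model of a non-volatile resistive element: "the device remembers
# that you perturbed the material." Hypothetical sketch only; the
# linear update rule below is an assumption, not the nickelate physics.

class ToyMemristiveDevice:
    def __init__(self, conductance: float = 1.0):
        self.conductance = conductance  # arbitrary units

    def pulse(self, voltage: float) -> None:
        """Apply a write pulse; ion rearrangement nudges the conductance."""
        self.conductance = max(self.conductance + 0.1 * voltage, 0.0)

    def read(self) -> float:
        """Read non-destructively: the new configuration persists."""
        return self.conductance


device = ToyMemristiveDevice()
device.pulse(5.0)      # perturb the material
print(device.read())   # 1.5: the state remains after the signal is removed
```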
Traditionally, creating networks that transport sufficient electricity to power something like a laptop requires complicated circuits with continuous connection points, which is both inefficient and expensive. The design concept from Q-MEEN-C is much simpler because the non-local behavior in the experiment means all the wires in a circuit do not have to be connected to each other. Think of a spider web, where movement in one part can be felt across the entire web.
This is analogous to how the brain learns: not in a linear fashion, but in complex layers. Each piece of learning creates connections in multiple areas of the brain, allowing us to differentiate not just trees from dogs, but an oak tree from a palm tree or a golden retriever from a poodle.
To date, the pattern recognition tasks that the brain executes so beautifully can only be simulated through computer software. AI programs like ChatGPT and Bard use complex algorithms to mimic brain-based activities like thinking and writing, and they do it remarkably well. But without correspondingly advanced hardware to support it, software will at some point reach its limit.
Frañó is eager for a hardware revolution to parallel the one currently happening with software, and showing that it’s possible to reproduce non-local behavior in a synthetic material inches scientists one step closer. The next step will involve creating more complex arrays with more electrodes in more elaborate configurations.
“This is a very important step forward in our attempts to understand and simulate brain functions,” said Dynes, who is also a co-author. “Showing a system that has non-local interactions leads us further in the direction of how our brains think. Our brains are, of course, much more complicated than this, but a physical system that is capable of learning must be highly interactive, and this is a necessary first step. We can now think of longer-range coherence in space and time.”
“It’s widely understood that in order for this technology to really explode, we need to find ways to improve the hardware — a physical machine that can perform the task in conjunction with the software,” Frañó stated. “The next phase will be one in which we create efficient machines whose physical properties are the ones that are doing the learning. That will give us a new paradigm in the world of artificial intelligence.”
This work is primarily supported by Quantum Materials for Energy Efficient Neuromorphic Computing (Q-MEEN-C), an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Basic Energy Sciences (award DE-SC0019273). A full list of funders can be found in the paper’s acknowledgements.
Source: University of California San Diego
How close are quantum computers to being really useful? Podcast
Quantum computers could revolutionize science by solving complex problems. However, scaling and error correction remain significant challenges before achieving practical applications.

Quantum computers have the potential to solve big scientific problems that are beyond the reach of today’s most powerful supercomputers, such as discovering new antibiotics or developing new materials.
But to achieve these breakthroughs, quantum computers will need to perform better than today’s best classical computers at solving real-world problems. And they’re not quite there yet. So what is still holding quantum computing back from becoming useful?
In this episode of The Conversation Weekly podcast, we speak to quantum computing expert Daniel Lidar at the University of Southern California in the US about what problems scientists are still wrestling with when it comes to scaling up quantum computing, and how close they are to overcoming them.
Quantum computers harness the power of quantum mechanics, the laws that govern subatomic particles. Instead of the classical bits of information used by microchips inside traditional computers, which are either a 0 or a 1, the chips in quantum computers use qubits, which can be both 0 and 1 at the same time or anywhere in between. Daniel Lidar explains:
“Put a lot of these qubits together and all of a sudden you have a computer that can simultaneously represent many, many different possibilities … and that is the starting point for the speed up that we can get from quantum computing.”
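In standard textbook notation (not spelled out in the episode, but basic quantum mechanics), a single qubit is a superposition of the two basis states, and a register of n qubits carries 2^n complex amplitudes at once, which is the exponential headroom Lidar is describing:

```latex
\[
  |\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
  \qquad |\alpha|^2 + |\beta|^2 = 1
\]
\[
  |\Psi\rangle = \sum_{x \in \{0,1\}^n} c_x\,|x\rangle,
  \qquad \sum_{x \in \{0,1\}^n} |c_x|^2 = 1
\]
```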
Faulty qubits
One of the biggest problems scientists face is how to scale up quantum computing power. Qubits are notoriously prone to errors – which means they can quickly revert to being either a 0 or a 1, and so lose their advantage over classical computers.
Scientists have focused on trying to solve these errors through the concept of redundancy – linking strings of physical qubits together into what’s called a “logical qubit” to try and maximise the number of steps in a computation. And, little by little, they’re getting there.
In December 2024, Google announced that its new quantum chip, Willow, had demonstrated what’s called “beyond breakeven” performance: its logical qubits worked better than their constituent physical qubits, and they kept improving as the code was scaled up.
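Quantum error correction is subtler than any classical scheme (qubits cannot simply be copied, and phase errors must be corrected too), but the redundancy idea can be illustrated with a classical repetition code. The Python sketch below is an analogy under that caveat: when the physical error rate is below threshold, a majority-vote “logical bit” fails less often than any single bit, and it keeps improving as the code grows, which is the spirit of “beyond breakeven.”

```python
import random

def logical_error_rate(p: float, n: int, trials: int = 100_000) -> float:
    """Estimate the failure rate of an n-bit repetition code with
    per-bit flip probability p, decoded by majority vote."""
    errors = 0
    for _ in range(trials):
        flips = sum(random.random() < p for _ in range(n))
        if flips > n // 2:   # majority vote decodes incorrectly
            errors += 1
    return errors / trials

p = 0.05  # assumed physical error rate, well below the 50% threshold
for n in (1, 3, 5, 7):
    print(f"n={n}: logical error rate ~ {logical_error_rate(p, n):.5f}")
# The printed rate falls as n grows: the encoded bit outperforms its
# constituent parts, loosely analogous to a logical qubit beyond breakeven.
```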
Lidar says right now the development of this technology is happening very fast:
“For quantum computing to scale and to take off is going to still take some real science breakthroughs, some real engineering breakthroughs, and probably overcoming some yet unforeseen surprises before we get to the point of true quantum utility. With that caution in mind, I think it’s still very fair to say that we are going to see truly functional, practical quantum computers kicking into gear, helping us solve real-life problems, within the next decade or so.”
Listen to Lidar explain more about how quantum computers and quantum error correction work on The Conversation Weekly podcast.
This episode of The Conversation Weekly was written and produced by Gemma Ware with assistance from Katie Flood and Mend Mariwany. Sound design was by Michelle Macklem, and theme music by Neeta Sarl.
Clips in this episode from Google Quantum AI and 10 Hours Channel.
Gemma Ware, Host, The Conversation Weekly Podcast, The Conversation
This article is republished from The Conversation under a Creative Commons license. Read the original article.
AI gives nonprogrammers a boost in writing computer code

Leo Porter, University of California, San Diego and Daniel Zingaro, University of Toronto
What do you think there are more of: professional computer programmers or computer users who do a little programming?
It’s the second group. There are millions of so-called end-user programmers. They’re not going into a career as a professional programmer or computer scientist. They’re going into business, teaching, law, or any number of professions – and they just need a little programming to be more efficient. The days of programmers being confined to software development companies are long gone.
If you’ve written formulas in Excel, filtered your email based on rules, modded a game, written a script in Photoshop, used R to analyze some data, or automated a repetitive work process, you’re an end-user programmer.
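As a concrete (and hypothetical) example of end-user code, the short Python script below prefixes each PDF in a folder with its last-modified date so reports sort chronologically. The folder name and naming scheme are invented; the point is the scale: a few lines that serve a day job, not a software product.

```python
# Hypothetical end-user automation: prefix scanned reports with their
# last-modified date so they sort chronologically in a file browser.
from datetime import datetime
from pathlib import Path

folder = Path("reports")  # assumed folder of PDFs; adjust as needed
for pdf in folder.glob("*.pdf"):
    stamp = datetime.fromtimestamp(pdf.stat().st_mtime).strftime("%Y-%m-%d")
    pdf.rename(pdf.with_name(f"{stamp}_{pdf.name}"))  # run once per folder
```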
As educators who teach programming, we want to help students in fields other than computer science achieve their goals. But learning how to program well enough to write finished programs can be hard to accomplish in a single course because there is so much to learn about the programming language itself. Artificial intelligence can help.
Lost in the weeds
Learning the syntax of a programming language – for example, where to place colons and where indentation is required – takes a lot of time for many students. Spending time at the level of syntax is a waste for students who simply want to use coding to help solve problems rather than learn the skill of programming.
As a result, we feel our existing classes haven’t served these students well. Indeed, many students end up barely able to write small functions – short, discrete pieces of code – let alone write a full program that can help make their lives better.

Tools built on large language models such as GitHub Copilot may allow us to change these outcomes. These tools have already changed how professionals program, and we believe we can use them to help future end-user programmers write software that is meaningful to them.
These AIs almost always write syntactically correct code and can often write small functions based on prompts in plain English. Because students can use these tools to handle some of the lower-level details of programming, it frees them to focus on bigger-picture questions that are at the heart of writing software programs. Numerous universities now offer programming courses that use Copilot.
At the University of California, San Diego, we’ve created an introductory programming course primarily for those who are not computer science students that incorporates Copilot. In this course, students learn how to program with Copilot as their AI assistant, following the curriculum from our book. In our course, students learn high-level skills such as decomposing large tasks into smaller tasks, testing code to ensure its correctness, and reading and fixing buggy code.
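As a hedged illustration of that workflow, the student’s “prompt” is often just a plain-English docstring, and the graded skill is testing the result. The function name and task below are invented, and the body merely stands in for the kind of suggestion an assistant might make; it is not actual Copilot output.

```python
# The student writes the plain-English prompt as a docstring; the
# assistant proposes the body (representative, not real Copilot output).
def count_long_words(text: str, min_len: int = 7) -> int:
    """Return how many words in text have at least min_len characters."""
    return sum(1 for word in text.split() if len(word) >= min_len)

# The higher-level skill our course emphasizes: verify the suggestion.
assert count_long_words("the amazing neighborhood cat") == 2
assert count_long_words("", min_len=1) == 0
```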
Freed to solve problems
In this course, we’ve been giving students large, open-ended projects and couldn’t be happier with what they have created.
For example, in a project where students had to find and analyze online datasets, a neuroscience major created a data visualization tool that illustrated how age and other factors affect stroke risk. In another project, students integrated their personal art into a collage after applying filters they had created in the programming language Python. These projects were well beyond the scope of what we could ask students to do before the advent of large language model AIs.
Given the rhetoric about how AI is ruining education by writing papers for students and doing their homework, you might be surprised to hear educators like us talking about its benefits. AI, like any other tool people have created, can be helpful in some circumstances and unhelpful in others.
In our introductory programming course with a majority of students who are not computer science majors, we see firsthand how AI can empower students in specific ways – and promises to expand the ranks of end-user programmers.
Leo Porter, Teaching Professor of Computer Science and Engineering, University of California, San Diego and Daniel Zingaro, Associate Professor of Mathematical and Computational Sciences, University of Toronto
This article is republished from The Conversation under a Creative Commons license. Read the original article.
From shrimp Jesus to fake self-portraits, AI-generated images have become the latest form of social media spam

Renee DiResta, Stanford University; Abhiram Reddy, Georgetown University, and Josh A. Goldstein, Georgetown University
If you’ve spent time on Facebook over the past six months, you may have noticed photorealistic images that are too good to be true: children holding paintings that look like the work of professional artists, or majestic log cabin interiors that are the stuff of Airbnb dreams.
Others, such as renderings of Jesus made out of crustaceans, are just bizarre.
Like the AI image of the pope in a puffer jacket that went viral in March 2023, these AI-generated images are increasingly prevalent – and popular – on social media platforms. Even as many of them border on the surreal, they’re often used to bait engagement from ordinary users.
Our team of researchers from the Stanford Internet Observatory and Georgetown University’s Center for Security and Emerging Technology investigated over 100 Facebook pages that posted high volumes of AI-generated content. We published the results in March 2024 as a preprint paper, meaning the findings have not yet gone through peer review.
We explored patterns of images, unearthed evidence of coordination between some of the pages, and tried to discern the likely goals of the posters.
Page operators seemed to be posting pictures of AI-generated babies, kitchens or birthday cakes for a range of reasons.
There were content creators innocuously looking to grow their followings with synthetic content; scammers using pages stolen from small businesses to advertise products that don’t seem to exist; and spammers sharing AI-generated images of animals while referring users to websites filled with advertisements, which allow the owners to collect ad revenue without creating high-quality content.
Our findings suggest that these AI-generated images draw in users – and Facebook’s recommendation algorithm may be organically promoting these posts.
Generative AI meets scams and spam
Internet spammers and scammers are nothing new.
For more than two decades, they’ve used unsolicited bulk email to promote pyramid schemes. They’ve targeted senior citizens while posing as Medicare representatives or computer technicians.
On social media, profiteers have used clickbait articles to drive users to ad-laden websites. Recall the 2016 U.S. presidential election, when Macedonian teenagers shared sensational political memes on Facebook and collected advertising revenue after users visited the URLs they posted. The teens didn’t care who won the election. They just wanted to make a buck.
In the early 2010s, spammers captured people’s attention with ads promising that anyone could lose belly fat or learn a new language with “one weird trick.”
AI-generated content has become another “weird trick.”
It’s visually appealing and cheap to produce, allowing scammers and spammers to generate high volumes of engaging posts. Some of the pages we observed uploaded dozens of unique images per day. In doing so, they followed Meta’s own advice for page creators. Frequent posting, the company suggests, helps creators get the kind of algorithmic pickup that leads their content to appear in the “Feed,” formerly known as the “News Feed.”
Much of the content is still, in a sense, clickbait: Shrimp Jesus makes people pause to gawk and inspires shares purely because it is so bizarre.
Many users react by liking the post or leaving a comment. This signals to the algorithmic curators that perhaps the content should be pushed into the feeds of even more people.
Some of the more established spammers we observed, likely recognizing this, improved their engagement by pivoting from posting URLs to posting AI-generated images. They would then comment on the post of the AI-generated images with the URLs of the ad-laden content farms they wanted users to click.
But more ordinary creators capitalized on the engagement of AI-generated images, too, without obviously violating platform policies.
Rate ‘my’ work!
When we looked up the posts’ captions on CrowdTangle – a social media monitoring platform owned by Meta and set to sunset in August – we found that they were “copypasta” captions, which means that they were repeated across posts.
Some of the copypasta captions baited interaction by directly asking users to, for instance, rate a “painting” by a first-time artist – even when the image was generated by AI – or to wish an elderly person a happy birthday. Facebook users often replied to AI-generated images with comments of encouragement and congratulations.
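The paper details our full methodology; as a rough sketch of how repeated captions can be surfaced, one can normalize case and whitespace and count exact duplicates. The function and threshold below are illustrative assumptions, not the study’s actual pipeline.

```python
from collections import Counter

def find_copypasta(captions: list[str], min_repeats: int = 3) -> dict[str, int]:
    """Return captions that recur across posts after light normalization."""
    normalized = (" ".join(c.lower().split()) for c in captions)
    counts = Counter(normalized)
    return {text: n for text, n in counts.items() if n >= min_repeats}

# Hypothetical data: the same engagement-bait caption reused across pages.
posts = ["Rate my painting!  First attempt :)"] * 4 + ["Good morning everyone"]
print(find_copypasta(posts))  # {'rate my painting! first attempt :)': 4}
```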
Algorithms push AI-generated content
Our investigation noticeably altered our own Facebook feeds: Within days of visiting the pages – and without commenting on, liking or following any of the material – Facebook’s algorithm recommended reams of other AI-generated content.
Interestingly, the fact that we had viewed clusters of, for example, AI-generated miniature cow pages didn’t lead to a short-term increase in recommendations for pages focused on actual miniature cows, normal-sized cows or other farm animals. Rather, the algorithm recommended pages on a range of topics and themes, but with one thing in common: They contained AI-generated images.
In 2022, the technology website The Verge detailed an internal Facebook memo about proposed changes to the company’s algorithm.
The algorithm, according to the memo, would become a “discovery-engine,” allowing users to come into contact with posts from individuals and pages they didn’t explicitly seek out, akin to TikTok’s “For You” page.
We analyzed Facebook’s own “Widely Viewed Content Reports,” which list the most popular content, domains, links, pages and posts on the platform each quarter.
The reports showed that the proportion of content users saw from pages and people they don’t follow steadily increased between 2021 and 2023. Changes to the algorithm have allowed more room for AI-generated content to be organically recommended without prior engagement – perhaps explaining our experiences and those of other users.
‘This post was brought to you by AI’
Since Meta currently does not flag AI-generated content by default, we sometimes observed users warning others about scams or spam AI content with infographics.
Meta, however, seems to be aware of potential issues if AI-generated content blends into the information environment without notice. The company has released several announcements about how it plans to deal with AI-generated content.
In May 2024, Facebook will begin applying a “Made with AI” label to content it can reliably detect as synthetic.
But the devil is in the details. How accurate will the detection models be? What AI-generated content will slip through? What content will be inappropriately flagged? And what will the public make of such labels?
While our work focused on Facebook spam and scams, there are broader implications.
Reporters have written about AI-generated videos targeting kids on YouTube and influencers on TikTok who use generative AI to turn a profit.
Social media platforms will have to reckon with how to treat AI-generated content; it’s certainly possible that user engagement will wane if online worlds become filled with artificially generated posts, images and videos.
Shrimp Jesus may be an obvious fake. But the challenge of assessing what’s real is only heating up.
Renee DiResta, Research Manager of the Stanford Internet Observatory, Stanford University; Abhiram Reddy, Research Assistant at the Center for Security and Emerging Technology, Georgetown University, and Josh A. Goldstein, Research Fellow at the Center for Security and Emerging Technology, Georgetown University
This article is republished from The Conversation under a Creative Commons license. Read the original article.