Tech

From shrimp Jesus to fake self-portraits, AI-generated images have become the latest form of social media spam


Last Updated on December 21, 2024 by Daily News Staff

Many of the AI images generated by spammers and scammers have religious themes. immortal70/iStock via Getty Images

Renee DiResta, Stanford University; Abhiram Reddy, Georgetown University, and Josh A. Goldstein, Georgetown University

If you’ve spent time on Facebook over the past six months, you may have noticed photorealistic images that are too good to be true: children holding paintings that look like the work of professional artists, or majestic log cabin interiors that are the stuff of Airbnb dreams.

Others, such as renderings of Jesus made out of crustaceans, are just bizarre.

Like the AI image of the pope in a puffer jacket that went viral in March 2023, these AI-generated images are increasingly prevalent – and popular – on social media platforms. Even as many of them border on the surreal, they’re often used to bait engagement from ordinary users.

Our team of researchers from the Stanford Internet Observatory and Georgetown University’s Center for Security and Emerging Technology investigated over 100 Facebook pages that posted high volumes of AI-generated content. We published the results in March 2024 as a preprint paper, meaning the findings have not yet gone through peer review.

We explored patterns of images, unearthed evidence of coordination between some of the pages, and tried to discern the likely goals of the posters.

Page operators seemed to be posting pictures of AI-generated babies, kitchens or birthday cakes for a range of reasons.

There were content creators innocuously looking to grow their followings with synthetic content; scammers using pages stolen from small businesses to advertise products that don’t seem to exist; and spammers sharing AI-generated images of animals while referring users to websites filled with advertisements, which allow the owners to collect ad revenue without creating high-quality content.

Our findings suggest that these AI-generated images draw in users – and Facebook’s recommendation algorithm may be organically promoting these posts.

Generative AI meets scams and spam

Internet spammers and scammers are nothing new.


For more than two decades, they’ve used unsolicited bulk email to promote pyramid schemes. They’ve targeted senior citizens while posing as Medicare representatives or computer technicians.

On social media, profiteers have used clickbait articles to drive users to ad-laden websites. Recall the 2016 U.S. presidential election, when Macedonian teenagers shared sensational political memes on Facebook and collected advertising revenue after users visited the URLs they posted. The teens didn’t care who won the election. They just wanted to make a buck.

In the early 2010s, spammers captured people’s attention with ads promising that anyone could lose belly fat or learn a new language with “one weird trick.”

AI-generated content has become another “weird trick.”

It’s visually appealing and cheap to produce, allowing scammers and spammers to generate high volumes of engaging posts. Some of the pages we observed uploaded dozens of unique images per day. In doing so, they followed Meta’s own advice for page creators. Frequent posting, the company suggests, helps creators get the kind of algorithmic pickup that leads their content to appear in the “Feed,” formerly known as the “News Feed.”

Much of the content is still, in a sense, clickbait: Shrimp Jesus makes people pause to gawk and inspires shares purely because it is so bizarre.

Many users react by liking the post or leaving a comment. This signals to the algorithmic curators that perhaps the content should be pushed into the feeds of even more people.

Some of the more established spammers we observed, likely recognizing this, improved their engagement by pivoting from posting URLs to posting AI-generated images. They would then post the URLs of the ad-laden content farms they wanted users to visit in the comments beneath those images.

But more ordinary creators capitalized on the engagement of AI-generated images, too, without obviously violating platform policies.

Rate ‘my’ work!

When we looked up the posts’ captions on CrowdTangle – a social media monitoring platform owned by Meta and set to sunset in August – we found that they were “copypasta” captions, which means that they were repeated across posts.


Some of the copypasta captions baited interaction by directly asking users to, for instance, rate a “painting” by a first-time artist – even when the image was generated by AI – or to wish an elderly person a happy birthday. Facebook users often replied to AI-generated images with comments of encouragement and congratulations.

Algorithms push AI-generated content

Our investigation noticeably altered our own Facebook feeds: Within days of visiting the pages – and without commenting on, liking or following any of the material – Facebook’s algorithm recommended reams of other AI-generated content.

Interestingly, the fact that we had viewed clusters of, for example, AI-generated miniature cow pages didn’t lead to a short-term increase in recommendations for pages focused on actual miniature cows, normal-sized cows or other farm animals. Rather, the algorithm recommended pages on a range of topics and themes, but with one thing in common: They contained AI-generated images.

In 2022, the technology website The Verge detailed an internal Facebook memo about proposed changes to the company’s algorithm.

The algorithm, according to the memo, would become a “discovery-engine,” allowing users to come into contact with posts from individuals and pages they didn’t explicitly seek out, akin to TikTok’s “For You” page.

We analyzed Facebook’s own “Widely Viewed Content Reports,” which list the most popular content, domains, links, pages and posts on the platform per quarter.

It showed that the proportion of content that users saw from pages and people they don’t follow steadily increased between 2021 and 2023. Changes to the algorithm have allowed more room for AI-generated content to be organically recommended without prior engagement – perhaps explaining our experiences and those of other users.

‘This post was brought to you by AI’

Since Meta currently does not flag AI-generated content by default, we sometimes observed users posting infographics to warn others about scam or spam AI content.

Meta, however, seems to be aware of potential issues if AI-generated content blends into the information environment without notice. The company has released several announcements about how it plans to deal with AI-generated content.

Starting in May 2024, Facebook will apply a “Made with AI” label to content it can reliably detect as synthetic.


But the devil is in the details. How accurate will the detection models be? What AI-generated content will slip through? What content will be inappropriately flagged? And what will the public make of such labels?

While our work focused on Facebook spam and scams, there are broader implications.

Reporters have written about AI-generated videos targeting kids on YouTube and influencers on TikTok who use generative AI to turn a profit.

Social media platforms will have to reckon with how to treat AI-generated content; it’s certainly possible that user engagement will wane if online worlds become filled with artificially generated posts, images and videos.

Shrimp Jesus may be an obvious fake. But the challenge of assessing what’s real is only heating up.

Renee DiResta, Research Manager of the Stanford Internet Observatory, Stanford University; Abhiram Reddy, Research Assistant at the Center for Security and Emerging Technology, Georgetown University, and Josh A. Goldstein, Research Fellow at the Center for Security and Emerging Technology, Georgetown University

This article is republished from The Conversation under a Creative Commons license. Read the original article.





CES 2026

Inside the Computing Power Behind Spatial Filmmaking: Hugh Hou Goes Hands-On at GIGABYTE Suite During CES 2026



Spatial filmmaking is having a moment—but at CES 2026, the more interesting story wasn’t a glossy trailer or a perfectly controlled demo. It was the workflow.

According to a recent GIGABYTE press release, VR filmmaker and educator Hugh Hou ran a live spatial computing demonstration inside the GIGABYTE suite, walking attendees through how immersive video is actually produced in real-world conditions—capture to post to playback—without leaning on pre-rendered “best case scenario” content. In other words: not theory, not a lab. A production pipeline, running live, on a show floor.


A full spatial pipeline—executed live

The demo gave attendees a front-row view of a complete spatial filmmaking pipeline:

  • Capture
  • Post-production
  • Final playback across multiple devices

And the key detail here is that the workflow was executed live at CES—mirroring the same processes used in commercial XR projects. That matters because spatial video isn’t forgiving. Once you’re working in 360-degree environments (and pushing into 8K), you’re no longer just chasing “fast.” You’re chasing:

  • System stability
  • Performance consistency
  • Thermal reliability

Those are the unsexy requirements that make or break actual production days.

Playback across Meta Quest, Apple Vision Pro, and Galaxy XR

The session culminated with attendees watching a two-minute spatial film trailer across:

  • Meta Quest
  • Apple Vision Pro
  • Newly launched Galaxy XR headsets
  • A 3D tablet display offering an additional 180-degree viewing option

That multi-device playback is a quiet flex. Spatial content doesn’t live in one ecosystem anymore—creators are being pulled toward cross-platform deliverables, which adds even more pressure on the pipeline to stay clean and consistent.

Where AI fits (when it’s not the headline)

One of the better notes in the release: AI wasn’t positioned as a shiny feature. It was framed as what it’s becoming for a lot of editors—an embedded toolset that speeds up the grind without hijacking the creative process.

In the demo, AI-assisted processes supported tasks like:

  • Enhancement
  • Tracking
  • Preview workflows

The footage moved through industry-standard software—Adobe Premiere Pro and DaVinci Resolve—with AI-based:

  • Upscaling
  • Noise reduction
  • Detail refinement

And in immersive VR, those steps aren’t optional polish. Any artifact, softness, or weird noise pattern becomes painfully obvious when the viewer can look anywhere.

Why the hardware platform matters for spatial workloads

Underneath the demo was a custom-built GIGABYTE AI PC designed for sustained spatial video workloads. Per the release, the system included:

  • AMD Ryzen 7 9800X3D processor
  • Radeon AI PRO R9700 AI TOP GPU
  • X870E AORUS MASTER X3D ICE motherboard

The point GIGABYTE is making is less “look at these parts” and more: spatial computing workloads demand a platform that can run hard continuously—real-time 8K playback and rendering—without throttling, crashing, or drifting into inconsistent performance.

That’s the difference between “cool demo” and “reliable production machine.”

The bigger takeaway: spatial filmmaking is moving from experiment to repeatable process

By running a demanding spatial filmmaking workflow live—and repeatedly—at CES 2026, GIGABYTE is positioning spatial production as something creators can depend on, not just test-drive.

And that’s the shift worth watching in 2026: spatial filmmaking isn’t just about headsets getting better. It’s about the behind-the-scenes pipeline becoming stable enough that creators can treat immersive production like a real, repeatable craft—because the tools finally hold up under pressure.

Source: PRNewswire – GIGABYTE press release





Science

AI-induced cultural stagnation is no longer speculation − it’s already happening

A 2026 study found that when generative AI operates autonomously, it produces homogeneous content – what the researchers called “visual elevator music” – despite diverse prompts. This convergence toward bland outputs points to a risk of cultural stagnation as AI perpetuates familiar themes, limiting innovation and diversity in creative expression.


Elevator with people in modern building.
When generative AI was left to its own devices, its outputs landed on a set of generic images – what researchers called ‘visual elevator music.’ Wang Zhao/AFP via Getty Images

Ahmed Elgammal, Rutgers University

Generative AI was trained on centuries of art and writing produced by humans.

But scientists and critics have wondered what would happen once AI became widely adopted and started training on its outputs.

A new study points to some answers.

In January 2026, artificial intelligence researchers Arend Hintze, Frida Proschinger Åström and Jory Schossau published a study showing what happens when generative AI systems are allowed to run autonomously – generating and interpreting their own outputs without human intervention.

The researchers linked a text-to-image system with an image-to-text system and let them iterate – image, caption, image, caption – over and over and over.

Regardless of how diverse the starting prompts were – and regardless of how much randomness the systems were allowed – the outputs quickly converged onto a narrow set of generic, familiar visual themes: atmospheric cityscapes, grandiose buildings and pastoral landscapes. Even more striking, the system quickly “forgot” its starting prompt.

The researchers called the outcomes “visual elevator music” – pleasant and polished, yet devoid of any real meaning.

For example, they started with the image prompt, “The Prime Minister pored over strategy documents, trying to sell the public on a fragile peace deal while juggling the weight of his job amidst impending military action.” The resulting image was then captioned by AI. This caption was used as a prompt to generate the next image.

After repeating this loop, the researchers ended up with a bland image of a formal interior space – no people, no drama, no real sense of time and place.
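The experiment the researchers describe is essentially a closed caption-image loop: each model’s output becomes the other’s input. A minimal sketch of that loop in Python, where `text_to_image` and `image_to_text` are hypothetical stand-ins for the actual generative models used in the study:

```python
# Sketch of the autonomous caption-image loop described in the study.
# text_to_image and image_to_text are hypothetical placeholder stubs;
# in the real experiment these would be generative models.

def text_to_image(prompt: str) -> str:
    # Placeholder: a real system would render an image from the prompt.
    return f"<image of: {prompt[:40]}>"

def image_to_text(image: str) -> str:
    # Placeholder: a real system would caption the image. To mimic the
    # drift the study observed, this stub keeps only the most "stable,"
    # generic reading of the content.
    return "a formal interior space with ornate furnishings"

def run_loop(seed_prompt: str, iterations: int) -> list[str]:
    """Iterate image -> caption -> image, recording each caption."""
    captions = []
    prompt = seed_prompt
    for _ in range(iterations):
        image = text_to_image(prompt)
        prompt = image_to_text(image)  # the caption becomes the next prompt
        captions.append(prompt)
    return captions

history = run_loop(
    "The Prime Minister pored over strategy documents...", iterations=5
)
# With stubs like these, the seed prompt is "forgotten" after one pass:
# later captions no longer reference it, which is the convergence
# the researchers observed with real models.
```

The stubs make the collapse trivial by construction; the study’s finding is that real text-to-image and image-to-text models behave the same way, converging on generic themes regardless of the seed prompt.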

A collage of AI-generated images that begins with a politician surrounded by policy papers and progresses to a room with fancy red curtains.
A prompt that begins with a prime minister under stress ends with an image of an empty room with fancy furnishings. Arend Hintze, Frida Proschinger Åström and Jory Schossau, CC BY

As a computer scientist who studies generative models and creativity, I see the findings from this study as an important piece of the debate over whether AI will lead to cultural stagnation.

The results show that generative AI systems themselves tend toward homogenization when used autonomously and repeatedly. They even suggest that AI systems are currently operating in this way by default.


The familiar is the default

This experiment may appear beside the point: Most people don’t ask AI systems to endlessly describe and regenerate their own images. Yet the convergence to a set of bland, stock images happened without retraining. No new data was added. Nothing was learned. The collapse emerged purely from repeated use.

But I think the setup of the experiment can be thought of as a diagnostic tool. It reveals what generative systems preserve when no one intervenes.

A rolling, green field with a tree and a clear, blue sky.
Pretty … boring. Chris McLoughlin/Moment via Getty Images

This has broader implications, because modern culture is increasingly influenced by exactly these kinds of pipelines. Images are summarized into text. Text is turned into images. Content is ranked, filtered and regenerated as it moves between words, images and videos. New articles on the web are now more likely to be written by AI than humans. Even when humans remain in the loop, they are often choosing from AI-generated options rather than starting from scratch.

The findings of this recent study show that the default behavior of these systems is to compress meaning toward what is most familiar, recognizable and easy to regenerate.

Cultural stagnation or acceleration?

For the past few years, skeptics have warned that generative AI could lead to cultural stagnation by flooding the web with synthetic content that future AI systems then train on. Over time, the argument goes, this recursive loop would narrow diversity and innovation.

Champions of the technology have pushed back, pointing out that fears of cultural decline accompany every new technology. Humans, they argue, will always be the final arbiter of creative decisions.

What has been missing from this debate is empirical evidence showing where homogenization actually begins.

The new study does not test retraining on AI-generated data. Instead, it shows something more fundamental: Homogenization happens before retraining even enters the picture. The content that generative AI systems naturally produce – when used autonomously and repeatedly – is already compressed and generic.

This reframes the stagnation argument. The risk is not only that future models might train on AI-generated content, but that AI-mediated culture is already being filtered in ways that favor the familiar, the describable and the conventional.

Retraining would amplify this effect. But it is not its source.

This is no moral panic

Skeptics are right about one thing: Culture has always adapted to new technologies. Photography did not kill painting. Film did not kill theater. Digital tools have enabled new forms of expression.


But those earlier technologies never forced culture to be endlessly reshaped across various mediums at a global scale. They did not summarize, regenerate and rank cultural products – news stories, songs, memes, academic papers, photographs or social media posts – millions of times per day, guided by the same built-in assumptions about what is “typical.”

The study shows that when meaning is forced through such pipelines repeatedly, diversity collapses not because of bad intentions, malicious design or corporate negligence, but because only certain kinds of meaning survive the text-to-image-to-text repeated conversions.

This does not mean cultural stagnation is inevitable. Human creativity is resilient. Institutions, subcultures and artists have always found ways to resist homogenization. But in my view, the findings of the study show that stagnation is a real risk – not a speculative fear – if generative systems are left to operate in their current iteration.

They also help clarify a common misconception about AI creativity: Producing endless variations is not the same as producing innovation. A system can generate millions of images while exploring only a tiny corner of cultural space.

In my own research on creative AI, I found that novelty requires designing AI systems with incentives to deviate from the norms. Without such incentives, systems optimize for familiarity, because familiarity is what they have learned best. The study reinforces this point empirically. Autonomy alone does not guarantee exploration. In some cases, it accelerates convergence.

This pattern has already emerged in the real world: One study found that AI-generated lesson plans featured the same drift toward conventional, uninspiring content, underscoring that AI systems converge toward what’s typical rather than what’s unique or creative.

A cityscape of tall buildings on a fall morning.
AI’s outputs are familiar because they revert to average displays of human creativity. Bulgac/iStock via Getty Images

Lost in translation

Whenever you write a caption for an image, details will be lost. Likewise for generating an image from text. And this happens whether it’s being performed by a human or a machine.

In that sense, the convergence that took place is not a failure that’s unique to AI. It reflects a deeper property of bouncing from one medium to another. When meaning passes repeatedly through two different formats, only the most stable elements persist.

But by highlighting what survives during repeated translations between text and images, the authors are able to show that meaning is processed inside generative systems with a quiet pull toward the generic.

The implication is sobering: Even with human guidance – whether that means writing prompts, selecting outputs or refining results – these systems are still stripping away some details and amplifying others in ways that are oriented toward what’s “average.”

If generative AI is to enrich culture rather than flatten it, I think systems need to be designed in ways that resist convergence toward statistically average outputs. There can be rewards for deviation and support for less common and less mainstream forms of expression.


The study makes one thing clear: Absent these interventions, generative AI will continue to drift toward mediocre and uninspired content.

Cultural stagnation is no longer speculation. It’s already happening.

Ahmed Elgammal, Professor of Computer Science and Director of the Art & AI Lab, Rutgers University

This article is republished from The Conversation under a Creative Commons license. Read the original article.



Consumer Corner

LUMISTAR Draws Record Crowds at CES 2026 With AI Tennis and Basketball Training Systems

LUMISTAR’s CES 2026 debut showcased TERO and CARRY, AI sports training systems that adapt to athletes in real time, turning practice into competitive play and performance data into measurable skill development. Pre-orders start March 2026.


LUMISTAR drew record crowds at CES 2026 with live demos of its AI tennis system TERO and AI basketball trainer CARRY, built to adapt in real time and turn performance data into actionable training.
Tero and Carry at Lumistar CES

LUMISTAR wrapped up its CES 2026 debut in Las Vegas with record-level attention, as live demos of its AI-powered sports training systems consistently drew full crowds throughout the show, according to the company.

The sports-focused AI brand showcased TERO, its AI tennis training system, and CARRY, its AI basketball training system—both described by attendees as “game changers” for how training can be delivered, measured, and scaled.

Why the Booth Stayed Packed

Across multiple days of hands-on demonstrations, LUMISTAR’s booth became a focal point for athletes, coaches, club operators, and sports technology professionals. Visitors repeatedly pointed to one key difference: the systems don’t just record results—they actively participate in training.

That’s a major break from the standard model in sports tech, where:

  • traditional ball machines run pre-set drills, and
  • wearables/video tools analyze performance after a session ends.

Training That Adapts in Real Time

LUMISTAR says both TERO and CARRY combine real-time computer vision, adaptive decision-making, and on-court execution to respond instantly to athlete behavior—adjusting difficulty, tempo, and training logic shot by shot.

Attendees noted that this turns practice from repetition into something closer to competition—an evolving back-and-forth between athlete and system.

“This is not an incremental improvement—it’s a complete rethink of what training equipment should do,” one professional coach attending CES said in the release. “For the first time, the machine is reacting to the athlete, not the other way around.”

From Data Collection to Action

Another standout point from CES feedback: the platform’s focus on turning performance data into immediate training outcomes.

LUMISTAR’s approach emphasizes:

  • continuous data retention across sessions
  • real-time performance interpretation
  • clear visualization of progress and training efficiency

Coaches and athletes highlighted that this could reduce wasted training time and accelerate skill development by making each session measurable and comparable.

What’s Next: Pre-Orders and Kickstarter

LUMISTAR outlined a 2026 rollout plan following CES:

  • TERO opens for pre-orders in March 2026, with full market availability beginning May 2026
  • CARRY launches via Kickstarter in Q2 2026
  • The company will continue private demonstrations and pilot programs with select training institutions worldwide ahead of commercial release

More information is available at https://www.lumistar.ai.

Source: PRNewswire press release from LUMISTAR (Jan. 11, 2026)



