Tech
From shrimp Jesus to fake self-portraits, AI-generated images have become the latest form of social media spam
Last Updated on December 21, 2024 by Daily News Staff
Renee DiResta, Stanford University; Abhiram Reddy, Georgetown University, and Josh A. Goldstein, Georgetown University
If you’ve spent time on Facebook over the past six months, you may have noticed photorealistic images that are too good to be true: children holding paintings that look like the work of professional artists, or majestic log cabin interiors that are the stuff of Airbnb dreams.
Others, such as renderings of Jesus made out of crustaceans, are just bizarre.
Like the AI image of the pope in a puffer jacket that went viral in March 2023, these AI-generated images are increasingly prevalent – and popular – on social media platforms. Even as many of them border on the surreal, they’re often used to bait engagement from ordinary users.
Our team of researchers from the Stanford Internet Observatory and Georgetown University’s Center for Security and Emerging Technology investigated over 100 Facebook pages that posted high volumes of AI-generated content. We published the results in March 2024 as a preprint paper, meaning the findings have not yet gone through peer review.
We explored patterns of images, unearthed evidence of coordination between some of the pages, and tried to discern the likely goals of the posters.
Page operators seemed to be posting pictures of AI-generated babies, kitchens or birthday cakes for a range of reasons.
There were content creators innocuously looking to grow their followings with synthetic content; scammers using pages stolen from small businesses to advertise products that don’t seem to exist; and spammers sharing AI-generated images of animals while referring users to websites filled with advertisements, which allow the owners to collect ad revenue without creating high-quality content.
Our findings suggest that these AI-generated images draw in users – and Facebook’s recommendation algorithm may be organically promoting these posts.
Generative AI meets scams and spam
Internet spammers and scammers are nothing new.
For more than two decades, they’ve used unsolicited bulk email to promote pyramid schemes. They’ve targeted senior citizens while posing as Medicare representatives or computer technicians.
On social media, profiteers have used clickbait articles to drive users to ad-laden websites. Recall the 2016 U.S. presidential election, when Macedonian teenagers shared sensational political memes on Facebook and collected advertising revenue after users visited the URLs they posted. The teens didn’t care who won the election. They just wanted to make a buck.
In the early 2010s, spammers captured people’s attention with ads promising that anyone could lose belly fat or learn a new language with “one weird trick.”
AI-generated content has become another “weird trick.”
It’s visually appealing and cheap to produce, allowing scammers and spammers to generate high volumes of engaging posts. Some of the pages we observed uploaded dozens of unique images per day. In doing so, they followed Meta’s own advice for page creators. Frequent posting, the company suggests, helps creators get the kind of algorithmic pickup that leads their content to appear in the “Feed,” formerly known as the “News Feed.”
Much of the content is still, in a sense, clickbait: Shrimp Jesus makes people pause to gawk and inspires shares purely because it is so bizarre.
Many users react by liking the post or leaving a comment. This signals to the algorithmic curators that perhaps the content should be pushed into the feeds of even more people.
Some of the more established spammers we observed, likely recognizing this, improved their engagement by pivoting from posting URLs to posting AI-generated images. They would then comment on the post of the AI-generated images with the URLs of the ad-laden content farms they wanted users to click.
But more ordinary creators capitalized on the engagement of AI-generated images, too, without obviously violating platform policies.
Rate ‘my’ work!
When we looked up the posts’ captions on CrowdTangle – a social media monitoring platform owned by Meta and set to shut down in August 2024 – we found that they were “copypasta” captions, meaning they were repeated across posts.
Some of the copypasta captions baited interaction by directly asking users to, for instance, rate a “painting” by a first-time artist – even when the image was generated by AI – or to wish an elderly person a happy birthday. Facebook users often replied to AI-generated images with comments of encouragement and congratulations.
Algorithms push AI-generated content
Our investigation noticeably altered our own Facebook feeds: Within days of visiting the pages – and without commenting on, liking or following any of the material – Facebook’s algorithm recommended reams of other AI-generated content.
Interestingly, the fact that we had viewed clusters of, for example, AI-generated miniature cow pages didn’t lead to a short-term increase in recommendations for pages focused on actual miniature cows, normal-sized cows or other farm animals. Rather, the algorithm recommended pages on a range of topics and themes, but with one thing in common: They contained AI-generated images.
In 2022, the technology website The Verge detailed an internal Facebook memo about proposed changes to the company’s algorithm.
The algorithm, according to the memo, would become a “discovery-engine,” allowing users to come into contact with posts from individuals and pages they didn’t explicitly seek out, akin to TikTok’s “For You” page.
We analyzed Facebook’s own “Widely Viewed Content Reports,” which list the most popular content, domains, links, pages and posts on the platform each quarter.
It showed that the proportion of content that users saw from pages and people they don’t follow steadily increased between 2021 and 2023. Changes to the algorithm have allowed more room for AI-generated content to be organically recommended without prior engagement – perhaps explaining our experiences and those of other users.
‘This post was brought to you by AI’
Since Meta currently does not flag AI-generated content by default, we sometimes observed users posting infographics to warn others about scam or spam AI content.
Meta, however, seems to be aware of potential issues if AI-generated content blends into the information environment without notice. The company has released several announcements about how it plans to deal with AI-generated content.
In May 2024, Facebook will begin applying a “Made with AI” label to content it can reliably detect as synthetic.
But the devil is in the details. How accurate will the detection models be? What AI-generated content will slip through? What content will be inappropriately flagged? And what will the public make of such labels?
While our work focused on Facebook spam and scams, there are broader implications.
Reporters have written about AI-generated videos targeting kids on YouTube and influencers on TikTok who use generative AI to turn a profit.
Social media platforms will have to reckon with how to treat AI-generated content; it’s certainly possible that user engagement will wane if online worlds become filled with artificially generated posts, images and videos.
Shrimp Jesus may be an obvious fake. But the challenge of assessing what’s real is only heating up.
Renee DiResta, Research Manager of the Stanford Internet Observatory, Stanford University; Abhiram Reddy, Research Assistant at the Center for Security and Emerging Technology, Georgetown University, and Josh A. Goldstein, Research Fellow at the Center for Security and Emerging Technology, Georgetown University
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Discover more from Daily News
Subscribe to get the latest posts sent to your email.
Tech
When ‘Head in the Clouds’ Means Staying Ahead
Head in the Clouds: Cloud is no longer just storage—it’s the intelligent core of modern business. Explore how “cognitive cloud” blends AI and cloud infrastructure to enable real-time, self-optimizing operations, improve customer experiences, and accelerate enterprise modernization.
Last Updated on February 7, 2026 by Daily News Staff
(Family Features) You approve a mortgage in minutes, your medical claim is processed without a phone call and an order that left the warehouse this morning lands at your door by dinner. These moments define the rhythm of an economy powered by intelligent cloud infrastructure. Once seen as remote storage, the cloud has become the operational core where data, AI models and autonomous systems converge to make business faster, safer and more human. In this new reality, the smartest companies aren’t looking up to the cloud; they’re operating within it.

Public cloud spending is projected to reach $723 billion in 2025, according to Gartner research, reflecting a 21% increase year over year. At the same time, 90% of organizations are expected to adopt hybrid cloud by 2027. As cloud becomes the universal infrastructure for enterprise operations, the systems being built today aren’t just hosted in the cloud, they’re learning from it and adapting to it. Any cloud strategy that doesn’t treat AI workloads as native risks falling behind, holding the business back from delivering the experiences consumers rely on every day.

After more than a decade of experimentation, most enterprises are still only partway up the curve. Based on Cognizant’s experience, roughly 1 in 5 enterprise workloads has moved to the cloud, while many of the most critical, including core banking, health care claims and enterprise resource planning, remain tied to legacy systems. These older environments were never designed for the scale or intelligence the modern economy demands. The next wave of progress – AI-driven products, predictive operations and autonomous decision-making – depends on cloud architectures designed to support intelligence natively. This means cloud and AI will advance together or not at all.

The Cognitive Cloud: Cloud and AI as One System
For years, many organizations treated migration as a finish line. Applications were lifted and shifted into the cloud with little redesign, trading one set of constraints for another. The result, in many cases, has been higher costs, fragmented data and limited room for innovation.

“Cognitive cloud” represents a new phase of evolution. Imagine every process, from customer service to supply-chain management, powered by AI models that learn, reason and act within secure cloud environments. These systems store and interpret data, detect patterns, anticipate demand and automate decisions at a scale humans simply cannot match.

In this architecture, AI and cloud operate in concert. The cloud provides computing power, scale and governance while AI adds autonomy, context and insight. Together, they form an integrated platform where cloud foundations and AI intelligence combine to enable collaboration between people and systems. This marks the rise of the responsive enterprise: one that senses change, adjusts instantly and builds trust through reliability.

Cognitive cloud platforms combine data fabric, observability, FinOps and SecOps into an intelligent core that regulates itself in real time. The result is invisible to consumers but felt in every interaction: fewer errors, faster responses and consistent experiences.

Consumer Impact Is Growing
The impact of cognitive cloud is already visible. In health care, 65% of U.S. insurance claims run through modernized, cloud-enabled platforms designed to reduce errors and speed up reimbursement. In the life sciences industry, a pharmaceuticals and diagnostics firm used cloud-native automation to increase clinical trial investigations by 20%, helping get treatments to patients sooner. In food service, intelligent cloud systems have reduced peak staffing needs by 35%, in part through real-time demand forecasting and automated kitchen operation. In insurance, modernization has produced multi-million-dollar savings and faster policy issuance, improving both customer experience and financial performance.

Beneath these outcomes is the same principle: architecture that learns and responds in real time. AI-driven cloud systems process vast volumes of data, identify patterns as they emerge and automate routines so people can focus on innovation, care and service. For businesses, this means fewer bottlenecks and more predictive operations. For consumers, it means smarter, faster, more reliable services, quietly shaping everyday life.

While cloud engineering and AI disciplines remain distinct, their outcomes are increasingly intertwined. The most advanced architectures now treat intelligence and infrastructure as complementary forces, each amplifying the other.

Looking Ahead
This transformation is already underway. Self-correcting systems predict disruptions before they happen, AI models adapt to market shifts in real time and operations learn from every transaction. The organizations mastering this convergence are quietly redefining themselves and the competitive landscape.

Cloud and AI have become interdependent priorities within a shared ecosystem that moves data, decisions and experiences at the speed customers expect. Companies that modernize around this reality and treat intelligence as infrastructure will likely be empowered to reinvent continuously. Those that don’t may spend more time maintaining the systems of yesterday than building the businesses of tomorrow.

Learn more at cognizant.com.

Photo courtesy of Shutterstock
The Knowledge
Beneath the Waves: The Global Push to Build Undersea Railways
Undersea railways are transforming transportation, turning oceans from barriers into gateways. Proven by tunnels like the Channel and Seikan, these innovations offer cleaner, reliable connections for passengers and freight. Ongoing projects in China and Europe, alongside future proposals, signal a new era of global mobility beneath the waves.

For most of modern history, oceans have acted as natural barriers—dividing nations, slowing trade, and shaping how cities grow. But beneath the waves, a quiet transportation revolution is underway. Infrastructure once limited by geography is now being reimagined through undersea railways.
Undersea rail tunnels—like the Channel Tunnel and Japan’s Seikan Tunnel—proved decades ago that trains could reliably travel beneath the ocean floor. Today, new projects are expanding that vision even further.
Around the world, engineers and governments are investing in undersea railways—tunnels that allow high-speed trains to travel beneath oceans and seas. Once considered science fiction, these projects are now operational, under construction, or actively being planned.

Undersea Rail Is Already a Reality
Japan’s Seikan Tunnel and the Channel Tunnel between the United Kingdom and France proved decades ago that undersea railways are not only possible, but reliable. These tunnels carry passengers and freight beneath the sea every day, reshaping regional connectivity.
Undersea railways are cleaner than short-haul flights, more resilient than bridges, and capable of lasting more than a century. As climate pressures and congestion increase, rail beneath the sea is emerging as a practical solution for future mobility.
What’s Being Built Right Now
China is currently constructing the Jintang Undersea Railway Tunnel as part of the Ningbo–Zhoushan high-speed rail line, while Europe’s Fehmarnbelt Fixed Link will soon connect Denmark and Germany beneath the Baltic Sea. These projects highlight how transportation and technology are converging to solve modern mobility challenges.
The Mega-Projects Still on the Drawing Board
Looking ahead, proposals such as the Helsinki–Tallinn Tunnel and the long-studied Strait of Gibraltar rail tunnel could reshape global affairs by linking regions—and even continents—once separated by water.
Why Undersea Rail Matters
The future of transportation may not rise above the ocean—but run quietly beneath it.
CES 2026
Inside the Computing Power Behind Spatial Filmmaking: Hugh Hou Goes Hands-On at GIGABYTE Suite During CES 2026
Spatial filmmaking is having a moment—but at CES 2026, the more interesting story wasn’t a glossy trailer or a perfectly controlled demo. It was the workflow.
According to a recent GIGABYTE press release, VR filmmaker and educator Hugh Hou ran a live spatial computing demonstration inside the GIGABYTE suite, walking attendees through how immersive video is actually produced in real-world conditions—capture to post to playback—without leaning on pre-rendered “best case scenario” content. In other words: not theory, not a lab. A production pipeline, running live, on a show floor.

A full spatial pipeline—executed live
The demo gave attendees a front-row view of a complete spatial filmmaking pipeline:
- Capture
- Post-production
- Final playback across multiple devices
And the key detail here is that the workflow was executed live at CES—mirroring the same processes used in commercial XR projects. That matters because spatial video isn’t forgiving. Once you’re working in 360-degree environments (and pushing into 8K), you’re no longer just chasing “fast.” You’re chasing:
- System stability
- Performance consistency
- Thermal reliability
Those are the unsexy requirements that make or break actual production days.
Playback across Meta Quest, Apple Vision Pro, and Galaxy XR
The session culminated with attendees watching a two-minute spatial film trailer across:
- Meta Quest
- Apple Vision Pro
- Newly launched Galaxy XR headsets
- Plus a 3D tablet display offering an additional 180-degree viewing option
That multi-device playback is a quiet flex. Spatial content doesn’t live in one ecosystem anymore—creators are being pulled toward cross-platform deliverables, which adds even more pressure on the pipeline to stay clean and consistent.
Where AI fits (when it’s not the headline)
One of the better notes in the release: AI wasn’t positioned as a shiny feature. It was framed as what it’s becoming for a lot of editors—an embedded toolset that speeds up the grind without hijacking the creative process.
In the demo, AI-assisted processes supported tasks like:
- Enhancement
- Tracking
- Preview workflows
The footage moved through industry-standard software—Adobe Premiere Pro and DaVinci Resolve—with AI-based:
- Upscaling
- Noise reduction
- Detail refinement
And in immersive VR, those steps aren’t optional polish. Any artifact, softness, or weird noise pattern becomes painfully obvious when the viewer can look anywhere.
Why the hardware platform matters for spatial workloads
Underneath the demo was a custom-built GIGABYTE AI PC designed for sustained spatial video workloads. Per the release, the system included:
- AMD Ryzen 7 9800X3D processor
- Radeon AI PRO R9700 AI TOP GPU
- X870E AORUS MASTER X3D ICE motherboard
The point GIGABYTE is making is less “look at these parts” and more: spatial computing workloads demand a platform that can run hard continuously—real-time 8K playback and rendering—without throttling, crashing, or drifting into inconsistent performance.
That’s the difference between “cool demo” and “reliable production machine.”
The bigger takeaway: spatial filmmaking is moving from experiment to repeatable process
By running a demanding spatial filmmaking workflow live—and repeatedly—at CES 2026, GIGABYTE is positioning spatial production as something creators can depend on, not just test-drive.
And that’s the shift worth watching in 2026: spatial filmmaking isn’t just about headsets getting better. It’s about the behind-the-scenes pipeline becoming stable enough that creators can treat immersive production like a real, repeatable craft—because the tools finally hold up under pressure.
Source: PRNewswire – GIGABYTE press release
