
Tech

From shrimp Jesus to fake self-portraits, AI-generated images have become the latest form of social media spam


Many of the AI images generated by spammers and scammers have religious themes. immortal70/iStock via Getty Images

Renee DiResta, Stanford University; Abhiram Reddy, Georgetown University, and Josh A. Goldstein, Georgetown University

If you’ve spent time on Facebook over the past six months, you may have noticed photorealistic images that are too good to be true: children holding paintings that look like the work of professional artists, or majestic log cabin interiors that are the stuff of Airbnb dreams.

Others, such as renderings of Jesus made out of crustaceans, are just bizarre.

Like the AI image of the pope in a puffer jacket that went viral in May 2023, these AI-generated images are increasingly prevalent – and popular – on social media platforms. Even as many of them border on the surreal, they’re often used to bait engagement from ordinary users.

Our team of researchers from the Stanford Internet Observatory and Georgetown University’s Center for Security and Emerging Technology investigated over 100 Facebook pages that posted high volumes of AI-generated content. We published the results in March 2024 as a preprint paper, meaning the findings have not yet gone through peer review.

We explored patterns of images, unearthed evidence of coordination between some of the pages, and tried to discern the likely goals of the posters.

Page operators seemed to be posting pictures of AI-generated babies, kitchens or birthday cakes for a range of reasons.

There were content creators innocuously looking to grow their followings with synthetic content; scammers using pages stolen from small businesses to advertise products that don’t seem to exist; and spammers sharing AI-generated images of animals while referring users to websites filled with advertisements, which allow the owners to collect ad revenue without creating high-quality content.


Our findings suggest that these AI-generated images draw in users – and Facebook’s recommendation algorithm may be organically promoting these posts.

Generative AI meets scams and spam

Internet spammers and scammers are nothing new.

For more than two decades, they’ve used unsolicited bulk email to promote pyramid schemes. They’ve targeted senior citizens while posing as Medicare representatives or computer technicians.

On social media, profiteers have used clickbait articles to drive users to ad-laden websites. Recall the 2016 U.S. presidential election, when Macedonian teenagers shared sensational political memes on Facebook and collected advertising revenue after users visited the URLs they posted. The teens didn’t care who won the election. They just wanted to make a buck.

In the early 2010s, spammers captured people’s attention with ads promising that anyone could lose belly fat or learn a new language with “one weird trick.”

AI-generated content has become another “weird trick.”

It’s visually appealing and cheap to produce, allowing scammers and spammers to generate high volumes of engaging posts. Some of the pages we observed uploaded dozens of unique images per day. In doing so, they followed Meta’s own advice for page creators. Frequent posting, the company suggests, helps creators get the kind of algorithmic pickup that leads their content to appear in the “Feed,” formerly known as the “News Feed.”


Much of the content is still, in a sense, clickbait: Shrimp Jesus makes people pause to gawk and inspires shares purely because it is so bizarre.

Many users react by liking the post or leaving a comment. This signals to the algorithmic curators that perhaps the content should be pushed into the feeds of even more people.

Some of the more established spammers we observed, likely recognizing this, improved their engagement by pivoting from posting URLs to posting AI-generated images. They would then comment on the post of the AI-generated images with the URLs of the ad-laden content farms they wanted users to click.

But more ordinary creators capitalized on the engagement of AI-generated images, too, without obviously violating platform policies.

Rate ‘my’ work!

When we looked up the posts’ captions on CrowdTangle – a social media monitoring platform owned by Meta and set to sunset in August – we found that they were “copypasta” captions, which means that they were repeated across posts.

Some of the copypasta captions baited interaction by directly asking users to, for instance, rate a “painting” by a first-time artist – even when the image was generated by AI – or to wish an elderly person a happy birthday. Facebook users often replied to AI-generated images with comments of encouragement and congratulations.
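The kind of repeated-caption check described above can be sketched as a simple grouping pass. The data shape and field names below are illustrative stand-ins, not CrowdTangle’s actual schema:

```python
from collections import defaultdict

def find_copypasta(posts, min_repeats=2):
    """Group posts by normalized caption; return captions reused across pages.

    `posts` is assumed to be a list of (page_id, caption) tuples --
    illustrative only, not a real CrowdTangle export format.
    """
    groups = defaultdict(set)
    for page_id, caption in posts:
        # Normalize: lowercase and collapse whitespace so near-identical
        # copies of the same caption land in the same bucket.
        key = " ".join(caption.lower().split())
        groups[key].add(page_id)
    return {cap: pages for cap, pages in groups.items() if len(pages) >= min_repeats}

posts = [
    ("pageA", "Made it with my own hands. Rate my work!"),
    ("pageB", "made it with my own hands.  rate my work!"),
    ("pageC", "Happy 102nd birthday to grandma!"),
]
print(find_copypasta(posts))
```

Real pipelines would add fuzzier matching (hashing, edit distance), but even exact matching after normalization surfaces copypasta reused across many pages.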

Algorithms push AI-generated content

Our investigation noticeably altered our own Facebook feeds: Within days of visiting the pages – and without commenting on, liking or following any of the material – Facebook’s algorithm recommended reams of other AI-generated content.


Interestingly, the fact that we had viewed clusters of, for example, AI-generated miniature cow pages didn’t lead to a short-term increase in recommendations for pages focused on actual miniature cows, normal-sized cows or other farm animals. Rather, the algorithm recommended pages on a range of topics and themes, but with one thing in common: They contained AI-generated images.

In 2022, the technology website The Verge detailed an internal Facebook memo about proposed changes to the company’s algorithm.

The algorithm, according to the memo, would become a “discovery-engine,” allowing users to come into contact with posts from individuals and pages they didn’t explicitly seek out, akin to TikTok’s “For You” page.

We analyzed Facebook’s own “Widely Viewed Content Reports,” which list the most popular content, domains, links, pages and posts on the platform per quarter.

It showed that the proportion of content that users saw from pages and people they don’t follow steadily increased between 2021 and 2023. Changes to the algorithm have allowed more room for AI-generated content to be organically recommended without prior engagement – perhaps explaining our experiences and those of other users.

‘This post was brought to you by AI’

Since Meta currently does not flag AI-generated content by default, we sometimes observed users posting infographics to warn others about scam or spam AI content.

Meta, however, seems to be aware of potential issues if AI-generated content blends into the information environment without notice. The company has released several announcements about how it plans to deal with AI-generated content.


In May 2024, Facebook will begin applying a “Made with AI” label to content it can reliably detect as synthetic.

But the devil is in the details. How accurate will the detection models be? What AI-generated content will slip through? What content will be inappropriately flagged? And what will the public make of such labels?

While our work focused on Facebook spam and scams, there are broader implications.

Reporters have written about AI-generated videos targeting kids on YouTube and influencers on TikTok who use generative AI to turn a profit.

Social media platforms will have to reckon with how to treat AI-generated content; it’s certainly possible that user engagement will wane if online worlds become filled with artificially generated posts, images and videos.

Shrimp Jesus may be an obvious fake. But the challenge of assessing what’s real is only heating up.

Renee DiResta, Research Manager of the Stanford Internet Observatory, Stanford University; Abhiram Reddy, Research Assistant at the Center for Security and Emerging Technology, Georgetown University, and Josh A. Goldstein, Research Fellow at the Center for Security and Emerging Technology, Georgetown University


This article is republished from The Conversation under a Creative Commons license. Read the original article.

The science section of our news blog STM Daily News provides readers with captivating and up-to-date information on the latest scientific discoveries, breakthroughs, and innovations across various fields. We offer engaging and accessible content, ensuring that readers with different levels of scientific knowledge can stay informed. Whether it’s exploring advancements in medicine, astronomy, technology, or environmental sciences, our science section strives to shed light on the intriguing world of scientific exploration and its profound impact on our daily lives. From thought-provoking articles to informative interviews with experts in the field, STM Daily News Science offers a harmonious blend of factual reporting, analysis, and exploration, making it a go-to source for science enthusiasts and curious minds alike. https://stmdailynews.com/category/science/





Tech

T-Mobile, MeetMo, and NantStudios Win Prestigious 2025 Lumiere Award for Revolutionary Las Vegas Grand Prix Formula One Fan Experience


Radiant Images 360° 12K plate capture vehicle.

The world of motorsports just took a giant leap into the future! Excitement is in the air as T-Mobile, MeetMo, and NantStudios have clinched the illustrious 2025 Lumiere Award for Best Interactive Experience from the Advanced Imaging Society. This accolade is in recognition of their pioneering immersive video experience for fans at the celebrated Las Vegas Grand Prix!

A Game-Changing Experience

Imagine being able to step into a race track from the comfort of your own home, enveloped in a 360-degree augmented reality tour of the circuit, all captured in breathtaking 12K footage. Thanks to this remarkable collaboration, fans can now enjoy a race experience like never before, made possible by a spectacular fusion of 5G technology, virtual production, and artificial intelligence.


“By combining T-Mobile’s 5G Advanced Network Solutions with our real-time collaboration technology, we’ve created an immersive experience that brings fans closer to the action than ever before,” expressed Michael Mansouri, CEO of Radiant Images and MeetMo. His enthusiasm is shared by many, as this innovative project is seen as a quantum leap forward in the way motorsports are experienced.

The Technical Marvel Behind the Magic

Highlighting their technological finesse, the project transformed over 1.5TB of data into a stunningly interactive experience in mere hours—a feat that previously would have taken months. The journey began at the NantStudios headquarters in Los Angeles, where more than 10 minutes of ultra-high definition, immersive sequences were blended with telemetry and driver animation data captured tirelessly by Radiant Images’ crews in Las Vegas.

The astounding speed and efficiency were primarily powered by T-Mobile’s robust 5G infrastructure, allowing for rapid data transfers back and forth, ensuring seamless integration into the interactive app that fans could access. Chris Melus, VP of Product Management for T-Mobile’s Business Group, proudly remarked, “This collaboration broke new ground for immersive fan engagement.”

The Power of 5G

The integration of T-Mobile’s advanced network solutions turned the Las Vegas Grand Prix into a case study of innovation. With real-time capture and transmission capabilities utilizing Radiant Images’ cutting-edge 360° 12K camera car, production crews were able to capture immersive video feeds and transmit them instantaneously over the 5G network. This meant remote camera control and instant footage reviews, drastically cutting production time and resources.

Moreover, the seamless AR integration—thanks to the creative minds at NantStudios and their work with Unreal Engine—allowed the blending of virtual and real-world elements. Fans were treated to augmented reality overlays displaying real-time data, such as dashboard metrics and telemetry, all transmitted through the reliable 5G network.

Future of Fan Engagement

As Jim Chabin, President of the Advanced Imaging Society, eloquently noted, the remarkable work at the Las Vegas Grand Prix has set new standards for interactive sports entertainment. The recognition given to this innovative team underscores their commitment to pushing the envelope in immersive experiences.


Gary Marshall, Vice President of Virtual Production at NantStudios, also highlighted the project’s importance: “This recognition underscores NantStudios’ legacy of pioneering real-time VFX and virtual production achievements, reaffirming our position as a leader in modern virtual production.”

F1 Las Vegas Grand Prix Fan Experience – Drive the Las Vegas Grand Prix Strip Circuit

The 2025 Lumiere Award is not just a trophy; it symbolizes the melding of creativity and technology in a way that elevates the fan experience to new heights. The collaboration between T-Mobile, MeetMo, and NantStudios exemplifies a thrilling future where motorsports become more accessible, engaging, and immersive. It’s a thrilling time to be a fan, and the development teams behind this innovation have truly set a new standard for content creators everywhere.

With such defining moments in sports entertainment, we can’t help but wonder what spectacular innovations lie ahead. Buckle up; it’s going to be a wild ride!

About the Companies

MeetMo
MeetMo.io is revolutionizing how creative professionals collaborate by combining video conferencing, live streaming, and AI automation into a single, intuitive platform. With persistent virtual meeting rooms that adapt to users over time, our platform evolves into a true collaborative partner, enhancing creativity and productivity. For more information please visit: https://www.meetmo.io

Radiant Images
Radiant Images is a globally acclaimed, award-winning technology provider specializing in innovative tools and solutions for the media and entertainment industries. The company focuses on advancing cinema, immersive media, and live production. https://www.radiantimages.com

T-Mobile
T-Mobile US, Inc. (NASDAQ: TMUS) is America’s supercharged Un-carrier, delivering an advanced 4G LTE and transformative nationwide 5G network that will offer reliable connectivity for all. T-Mobile’s customers benefit from its unmatched combination of value and quality, unwavering obsession with offering them the best possible service experience and indisputable drive for disruption that creates competition and innovation in wireless and beyond. Based in Bellevue, Wash., T-Mobile provides services through its subsidiaries and operates its flagship brands, T-Mobile, Metro by T-Mobile and Mint Mobile. For more information please visit: https://www.t-mobile.com


NantStudios
NantStudios is the first real-time-native, full-service production house, re-imagined from the ground up to deliver exceptional creative results through next-generation technologies like virtual production. For more information please visit: https://nantstudios.com

SOURCE MeetMo

Looking for an entertainment experience that transcends the ordinary? Look no further than STM Daily News Blog’s vibrant Entertainment section. Immerse yourself in the captivating world of indie films, streaming and podcasts, movie reviews, music, expos, venues, and theme and amusement parks. Discover hidden cinematic gems, binge-worthy series and addictive podcasts, gain insights into the latest releases with our movie reviews, explore the latest trends in music, dive into the vibrant atmosphere of expos, and embark on thrilling adventures in breathtaking venues and theme parks. Join us at STM Entertainment and let your entertainment journey begin! https://stmdailynews.com/category/entertainment/





Tech

How close are quantum computers to being really useful? Podcast

Quantum computers could revolutionize science by solving complex problems. However, scaling and error correction remain significant challenges before achieving practical applications.


Audio und werbung/Shutterstock

Gemma Ware, The Conversation

Quantum computers have the potential to solve big scientific problems that are beyond the reach of today’s most powerful supercomputers, such as discovering new antibiotics or developing new materials.

But to achieve these breakthroughs, quantum computers will need to perform better than today’s best classical computers at solving real-world problems. And they’re not quite there yet. So what is still holding quantum computing back from becoming useful?

In this episode of The Conversation Weekly podcast, we speak to quantum computing expert Daniel Lidar at the University of Southern California in the US about what problems scientists are still wrestling with when it comes to scaling up quantum computing, and how close they are to overcoming them.

https://cdn.theconversation.com/infographics/561/4fbbd099d631750693d02bac632430b71b37cd5f/site/index.html

Quantum computers harness the power of quantum mechanics, the laws that govern subatomic particles. Instead of the classical bits of information used by microchips inside traditional computers, which are either a 0 or a 1, the chips in quantum computers use qubits, which can be both 0 and 1 at the same time or anywhere in between. Daniel Lidar explains:

“Put a lot of these qubits together and all of a sudden you have a computer that can simultaneously represent many, many different possibilities … and that is the starting point for the speed up that we can get from quantum computing.”

Faulty qubits

One of the biggest problems scientists face is how to scale up quantum computing power. Qubits are notoriously prone to errors – which means that they can quickly revert to being either a 0 or a 1, and so lose their advantage over classical computers.

Scientists have focused on trying to solve these errors through the concept of redundancy – linking strings of physical qubits together into what’s called a “logical qubit” to try to maximise the number of steps in a computation. And, little by little, they’re getting there.
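The redundancy idea has a classical ancestor that makes the arithmetic easy to see: the repetition code, which copies one bit across several and uses a majority vote to undo occasional flips. This is only an analogy – quantum states cannot be copied (the no-cloning theorem), and real quantum error correction uses stabilizer codes instead – but the way redundancy suppresses the error rate is the same in spirit:

```python
import random
from collections import Counter

def encode(bit, n=5):
    # Classical "logical bit": copy the value across n physical bits.
    return [bit] * n

def noisy(code, p, rng):
    # Flip each physical bit independently with probability p.
    return [b ^ 1 if rng.random() < p else b for b in code]

def decode(code):
    # Majority vote: correct whenever fewer than half the bits flipped.
    return Counter(code).most_common(1)[0][0]

rng = random.Random(0)
p, n, trials = 0.1, 5, 10_000
logical_errors = sum(decode(noisy(encode(0, n), p, rng)) != 0 for _ in range(trials))
print(f"physical error rate: {p}, logical error rate: {logical_errors / trials}")
```

With a 10% physical error rate and five-fold redundancy, the logical error rate drops below 1% – the same “beyond breakeven” behavior, in miniature, that quantum codes aim for.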


In December 2024, Google announced that its new quantum chip, Willow, had demonstrated what’s called “beyond breakeven”, when its logical qubits worked better than the constituent parts and even kept on improving as it scaled up.

Lidar says right now the development of this technology is happening very fast:

“For quantum computing to scale and to take off is going to still take some real science breakthroughs, some real engineering breakthroughs, and probably overcoming some yet unforeseen surprises before we get to the point of true quantum utility. With that caution in mind, I think it’s still very fair to say that we are going to see truly functional, practical quantum computers kicking into gear, helping us solve real-life problems, within the next decade or so.”

Listen to Lidar explain more about how quantum computers and quantum error correction works on The Conversation Weekly podcast.


This episode of The Conversation Weekly was written and produced by Gemma Ware with assistance from Katie Flood and Mend Mariwany. Sound design was by Michelle Macklem, and theme music by Neeta Sarl.

Clips in this episode from Google Quantum AI and 10 Hours Channel.

You can find us on Instagram at theconversationdotcom or via e-mail. You can also subscribe to The Conversation’s free daily e-mail here.

Listen to The Conversation Weekly via any of the apps listed above, download it directly via our RSS feed or find out how else to listen here.


Gemma Ware, Host, The Conversation Weekly Podcast, The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.





Tech

Why building big AIs costs billions – and how Chinese startup DeepSeek dramatically changed the calculus


DeepSeek burst on the scene – and may be bursting some bubbles. AP Photo/Andy Wong

Ambuj Tewari, University of Michigan

State-of-the-art artificial intelligence systems like OpenAI’s ChatGPT, Google’s Gemini and Anthropic’s Claude have captured the public imagination by producing fluent text in multiple languages in response to user prompts. Those companies have also captured headlines with the huge sums they’ve invested to build ever more powerful models.

An AI startup from China, DeepSeek, has upset expectations about how much money is needed to build the latest and greatest AIs. In the process, they’ve cast doubt on the billions of dollars of investment by the big AI players.

I study machine learning. DeepSeek’s disruptive debut comes down not to any stunning technological breakthrough but to a time-honored practice: finding efficiencies. In a field that consumes vast computing resources, that has proved to be significant.

Where the costs are

Developing such powerful AI systems begins with building a large language model. A large language model predicts the next word given previous words. For example, if the beginning of a sentence is “The theory of relativity was discovered by Albert,” a large language model might predict that the next word is “Einstein.” Large language models are trained to become good at such predictions in a process called pretraining.

Pretraining requires a lot of data and computing power. The companies collect data by crawling the web and scanning books. Computing is usually powered by graphics processing units, or GPUs. Why graphics? It turns out that both computer graphics and the artificial neural networks that underlie large language models rely on the same area of mathematics known as linear algebra. Large language models internally store hundreds of billions of numbers called parameters or weights. It is these weights that are modified during pretraining.

https://www.youtube.com/embed/MJQIQJYxey4?wmode=transparent&start=0

Large language models consume huge amounts of computing resources, which in turn means lots of energy.
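The prediction step can be caricatured in a few lines: one weight matrix, one matrix-vector product, and a softmax that turns scores into probabilities over the vocabulary. Everything here – the four-word vocabulary, the random weights, the eight-dimensional context vector – is invented for illustration; a real model stacks many such layers holding hundreds of billions of weights:

```python
import math
import random

random.seed(0)
vocab = ["Einstein", "Newton", "Bohr", "the"]
d_model = 8

# Toy "model": one weight matrix mapping a context vector to one score per word.
# These are the numbers that pretraining would adjust.
W = [[random.gauss(0, 1) for _ in range(d_model)] for _ in vocab]
context = [random.gauss(0, 1) for _ in range(d_model)]  # stand-in for the encoded prompt

# Linear algebra at work: a matrix-vector product produces the scores (logits).
logits = [sum(w * x for w, x in zip(row, context)) for row in W]

# Softmax converts scores into a probability for each candidate next word.
exps = [math.exp(l) for l in logits]
probs = [e / sum(exps) for e in exps]

prediction = vocab[probs.index(max(probs))]
print(prediction, [round(p, 3) for p in probs])
```

The GPU’s job during both training and inference is essentially to do enormous numbers of these matrix products very fast.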

Pretraining is, however, not enough to yield a consumer product like ChatGPT. A pretrained large language model is usually not good at following human instructions. It might also not be aligned with human preferences. For example, it might output harmful or abusive language, both of which are present in text on the web.

The pretrained model therefore usually goes through additional stages of training. One such stage is instruction tuning where the model is shown examples of human instructions and expected responses. After instruction tuning comes a stage called reinforcement learning from human feedback. In this stage, human annotators are shown multiple large language model responses to the same prompt. The annotators are then asked to point out which response they prefer.
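Reward-model training for this human-feedback stage is commonly formulated as a pairwise (Bradley-Terry-style) loss: the model should assign a higher score to the response the annotator preferred. The sketch below shows that standard textbook formulation, not any particular company’s implementation:

```python
import math

def preference_loss(reward_chosen, reward_rejected):
    """Pairwise loss: -log(sigmoid(r_chosen - r_rejected)).

    Small when the reward model already scores the annotator-preferred
    response higher; large when it disagrees with the annotator.
    """
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Model agrees with the annotator: small loss.
print(round(preference_loss(2.0, 0.5), 4))
# Model disagrees with the annotator: large loss, pushing scores to flip.
print(round(preference_loss(0.5, 2.0), 4))
```

Minimizing this loss over many annotated pairs trains a reward model, which is then used to steer the language model toward responses humans prefer.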


It is easy to see how costs add up when building an AI model: hiring top-quality AI talent, building a data center with thousands of GPUs, collecting data for pretraining, and running pretraining on GPUs. Additionally, there are costs involved in data collection and computation in the instruction tuning and reinforcement learning from human feedback stages.

All included, the cost of building a cutting-edge AI model can soar up to US$100 million. GPU training is a significant component of the total cost.

The expenditure does not stop when the model is ready. When the model is deployed and responds to user prompts, it uses additional computation known as test-time or inference-time compute. Test-time compute also needs GPUs. In December 2024, OpenAI announced a new phenomenon they saw with their latest model, o1: as test-time compute increased, the model got better at logical reasoning tasks such as math olympiad and competitive coding problems.

Slimming down resource consumption

Thus it seemed that the path to building the best AI models in the world was to invest in more computation during both training and inference. But then DeepSeek entered the fray and bucked this trend.

DeepSeek sent shockwaves through the tech financial ecosystem.

Their V-series models, culminating in the V3 model, used a series of optimizations to make training cutting-edge AI models significantly more economical. Their technical report states that it took them less than $6 million to train V3. They admit that this cost does not include costs of hiring the team, doing the research, trying out various ideas and data collection. But $6 million is still an impressively small figure for training a model that rivals leading AI models developed with much higher costs.

The reduction in costs was not due to a single magic bullet. It was a combination of many smart engineering choices including using fewer bits to represent model weights, innovation in the neural network architecture, and reducing communication overhead as data is passed around between GPUs.
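The “fewer bits” idea can be sketched with simple integer quantization: store each weight as a small integer plus one shared scale factor, and reconstruct an approximation on the fly. (DeepSeek’s report describes FP8 training, a different low-bit format, and the other optimizations are separate techniques; the code below is purely an illustration of why fewer bits save memory and bandwidth.)

```python
def quantize_int8(weights):
    # Symmetric 8-bit quantization: map each weight to an integer in
    # [-127, 127] plus one shared floating-point scale factor.
    scale = max(abs(w) for w in weights) / 127 or 1.0
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.12, -0.5, 0.33, -0.07]   # toy stand-ins for model weights
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)

# Rounding error is bounded by half a quantization step.
max_err = max(abs(w - a) for w, a in zip(weights, approx))
assert max_err <= scale / 2
print(q, round(scale, 6))
```

Each weight now occupies one byte instead of four (for 32-bit floats), which shrinks both GPU memory use and the data shuttled between GPUs – the communication overhead the text mentions.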

It is interesting to note that due to U.S. export restrictions on China, the DeepSeek team did not have access to high-performance GPUs like the Nvidia H100. Instead they used Nvidia H800 GPUs, which Nvidia designed with lower performance to comply with U.S. export restrictions. Working with this limitation seems to have unleashed even more ingenuity from the DeepSeek team.


DeepSeek also innovated to make inference cheaper, reducing the cost of running the model. Moreover, they released a model called R1 that is comparable to OpenAI’s o1 model on reasoning tasks.

They released all the model weights for V3 and R1 publicly. Anyone can download and further improve or customize their models. Furthermore, DeepSeek released their models under the permissive MIT license, which allows others to use the models for personal, academic or commercial purposes with minimal restrictions.

Resetting expectations

DeepSeek has fundamentally altered the landscape of large AI models. An open weights model trained economically is now on par with more expensive and closed models that require paid subscription plans.

The research community and the stock market will need some time to adjust to this new reality.

Ambuj Tewari, Professor of Statistics, University of Michigan

This article is republished from The Conversation under a Creative Commons license. Read the original article.


STM Daily News is a vibrant news blog dedicated to sharing the brighter side of human experiences. Emphasizing positive, uplifting stories, the site focuses on delivering inspiring, informative, and well-researched content. With a commitment to accurate, fair, and responsible journalism, STM Daily News aims to foster a community of readers passionate about positive change and engaged in meaningful conversations. Join the movement and explore stories that celebrate the positive impacts shaping our world.

https://stmdailynews.com/



