If you’ve spent time on Facebook over the past six months, you may have noticed photorealistic images that are too good to be true: children holding paintings that look like the work of professional artists, or majestic log cabin interiors that are the stuff of Airbnb dreams.
Others, such as renderings of Jesus made out of crustaceans, are just bizarre.
Like the AI image of the pope in a puffer jacket that went viral in May 2023, these AI-generated images are increasingly prevalent – and popular – on social media platforms. Even as many of them border on the surreal, they’re often used to bait engagement from ordinary users.
Our team of researchers from the Stanford Internet Observatory and Georgetown University’s Center for Security and Emerging Technology investigated over 100 Facebook pages that posted high volumes of AI-generated content. We published the results in March 2024 as a preprint paper, meaning the findings have not yet gone through peer review.
We explored patterns of images, unearthed evidence of coordination between some of the pages, and tried to discern the likely goals of the posters.
Page operators seemed to be posting pictures of AI-generated babies, kitchens or birthday cakes for a range of reasons.
There were content creators innocuously looking to grow their followings with synthetic content; scammers using pages stolen from small businesses to advertise products that don’t seem to exist; and spammers sharing AI-generated images of animals while referring users to websites filled with advertisements, which allow the owners to collect ad revenue without creating high-quality content.
Our findings suggest that these AI-generated images draw in users – and Facebook’s recommendation algorithm may be organically promoting these posts.
Generative AI meets scams and spam
Internet spammers and scammers are nothing new.
For more than two decades, they’ve used unsolicited bulk email to promote pyramid schemes. They’ve targeted senior citizens while posing as Medicare representatives or computer technicians.
On social media, profiteers have used clickbait articles to drive users to ad-laden websites. Recall the 2016 U.S. presidential election, when Macedonian teenagers shared sensational political memes on Facebook and collected advertising revenue after users visited the URLs they posted. The teens didn’t care who won the election. They just wanted to make a buck.
In the early 2010s, spammers captured people’s attention with ads promising that anyone could lose belly fat or learn a new language with “one weird trick.”
AI-generated content has become another “weird trick.”
It’s visually appealing and cheap to produce, allowing scammers and spammers to generate high volumes of engaging posts. Some of the pages we observed uploaded dozens of unique images per day. In doing so, they followed Meta’s own advice for page creators. Frequent posting, the company suggests, helps creators get the kind of algorithmic pickup that leads their content to appear in the “Feed,” formerly known as the “News Feed.”
Much of the content is still, in a sense, clickbait: Shrimp Jesus makes people pause to gawk and inspires shares purely because it is so bizarre.
Many users react by liking the post or leaving a comment. This signals to the algorithmic curators that perhaps the content should be pushed into the feeds of even more people.
Some of the more established spammers we observed, likely recognizing this, improved their engagement by pivoting from posting URLs to posting AI-generated images. They would then comment on these image posts with the URLs of the ad-laden content farms they wanted users to click.
But more ordinary creators capitalized on the engagement of AI-generated images, too, without obviously violating platform policies.
Rate ‘my’ work!
When we looked up the posts’ captions on CrowdTangle – a social media monitoring platform owned by Meta and set to sunset in August – we found that they were “copypasta” captions, which means that they were repeated across posts.
Some of the copypasta captions baited interaction by directly asking users to, for instance, rate a “painting” by a first-time artist – even when the image was generated by AI – or to wish an elderly person a happy birthday. Facebook users often replied to AI-generated images with comments of encouragement and congratulations.
Algorithms push AI-generated content
Our investigation noticeably altered our own Facebook feeds: Within days of visiting the pages – and without commenting on, liking or following any of the material – Facebook’s algorithm recommended reams of other AI-generated content.
Interestingly, the fact that we had viewed clusters of, for example, AI-generated miniature cow pages didn’t lead to a short-term increase in recommendations for pages focused on actual miniature cows, normal-sized cows or other farm animals. Rather, the algorithm recommended pages on a range of topics and themes, but with one thing in common: They contained AI-generated images.
In 2022, the technology website The Verge detailed an internal Facebook memo about proposed changes to the company’s algorithm.
The algorithm, according to the memo, would become a “discovery-engine,” allowing users to come into contact with posts from individuals and pages they didn’t explicitly seek out, akin to TikTok’s “For You” page.
We analyzed Facebook’s own “Widely Viewed Content Reports,” which list the most popular content, domains, links, pages and posts on the platform per quarter.
It showed that the proportion of content that users saw from pages and people they don’t follow steadily increased between 2021 and 2023. Changes to the algorithm have allowed more room for AI-generated content to be organically recommended without prior engagement – perhaps explaining our experiences and those of other users.
‘This post was brought to you by AI’
Since Meta currently does not flag AI-generated content by default, we sometimes observed users warning others about scams or spam AI content with infographics.
Meta, however, seems to be aware of potential issues if AI-generated content blends into the information environment without notice. The company has released several announcements about how it plans to deal with AI-generated content.
In May 2024, Facebook will begin applying a “Made with AI” label to content it can reliably detect as synthetic.
But the devil is in the details. How accurate will the detection models be? What AI-generated content will slip through? What content will be inappropriately flagged? And what will the public make of such labels?
While our work focused on Facebook spam and scams, there are broader implications.
Reporters have written about AI-generated videos targeting kids on YouTube and influencers on TikTok who use generative AI to turn a profit.
Social media platforms will have to reckon with how to treat AI-generated content; it’s certainly possible that user engagement will wane if online worlds become filled with artificially generated posts, images and videos.
Socially Engaged Design of Nuclear Energy Technologies
What prompted the idea for the course?
The two of us had some experience with participatory design coming into this course, and we had a shared interest in bringing virtual reality into a first-year design class at the University of Michigan.
It seemed like a good fit to help students learn about nuclear technologies, given that hands-on experience can be difficult to provide in that context. We both wanted to teach students about the social and environmental implications of engineering work, too.
Aditi is a nuclear engineer and had been using participatory design in her research, and Katie had been teaching ethics and design to engineering students for many years.
What does the course explore?
Broadly, the course explores engineering design. We introduce our students to the principles of nuclear engineering and energy systems design, and we go through ethical concerns. They also learn communication strategies – like writing for different audiences.
Students learn to design the exterior features of nuclear energy facilities in collaboration with local communities. The course focuses on a different nuclear energy technology each year.
In the first year, the focus was on fusion energy systems. In fall 2024, we looked at locating nuclear microreactors near local communities.
The main project was to collaboratively decide where a microreactor might be sited, what it might look like, and what outcomes the community would like to see versus which would cause concern.
Students also think about designing nuclear systems with both future generations and a shared common good in mind.
The class explores engineering as a sociotechnical practice – meaning that technologies are not neutral. They shape and affect social life, for better and for worse. To us, a sociotechnical engineer is someone who adheres to scientific and engineering fundamentals, communicates ethically and designs in collaboration with the people who are likely to be affected by their work.
In class, we help our students reflect on these challenges and responsibilities.
Why is this course relevant now?
Nuclear energy system design is advancing quickly, allowing engineers to rethink how they approach design. Fusion energy systems and fission microreactors are two areas of rapidly evolving innovation.
Microreactors are smaller than traditional nuclear energy systems, so planners can place them closer to communities. These smaller reactors will likely be safer to operate, and may be a good fit for rural communities looking to transition to carbon-neutral energy systems.
But for the needs, concerns and knowledge of local people to shape the design process, local communities need to be involved in these reactor siting and design conversations.
Students in the course explore nuclear facilities in virtual reality. Thomas Barwick/DigitalVision via Getty Images
What materials does the course feature?
We use virtual reality models of both fission and fusion reactors, along with models of energy system facilities. AI image generators are helpful for rapid prototyping – we have used these in class with students and in workshops.
This year, we are also inviting students to do some hands-on prototyping with scrap materials for a project on nuclear energy systems.
What will the course prepare students to do?
Students leave the course understanding that community engagement is an essential – not optional – component of good design. We equip students to approach technology use and development with users’ needs and concerns in mind.
Specifically, they learn how to engage with and observe communities using ethical, respectful methods that align with the university’s engineering research standards.
What’s a critical lesson from the course?
As instructors, we have an opportunity – and probably also an obligation – to learn from students as much as we are teaching them course content. Gen Z students have grown up with environmental and social concerns as centerpieces of their media diets, and we’ve noticed that they tend to be more strongly invested in these topics than previous generations of engineering students.
Aditi Verma, Assistant Professor of Nuclear Engineering and Radiological Sciences, University of Michigan and Katie Snyder, Lecturer III in Technical Communication, College of Engineering, University of Michigan
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Amplify any Aesthetic & Unleash Your Full Creative Potential with the Most Feature-Packed Camera in its Class
MELVILLE, N.Y. /PRNewswire/ — Today, Nikon announced the new full-frame / FX format Z5II, an entirely new generation of intermediate-level camera that miraculously manages to fit the latest high-end features into a lightweight camera body that will help kickstart any creative spark. The Nikon Z5II is the easiest way to level up a user’s captures with full-frame image quality, incredibly fast and intelligent autofocus (AF), excellent low-light performance, one-touch film-inspired color presets and the brightest viewfinder of any competing camera.1
The new Nikon Z5II uses the same high-power EXPEED 7 image processing engine as Nikon’s highest caliber professional models, the Z8 and Z9. The benefits of this processor are immediately apparent, affording incredible levels of performance and extremely fast AF with subject detection powered by deep learning (AI) technology. This highly accurate, high-speed focus is a massive leap from its predecessor, locking in at approximately one third2 the time. In addition, the new camera now utilizes a highly sensitive back-illuminated (BSI) CMOS sensor for beautiful rendering of textures and details, even in dimly lit situations such as indoors or nighttime landscapes, with minimal noise. The Z5II further fuels your creative drive with a dedicated Picture Control button and innovative tools like Imaging Recipes and Flexible Color Picture Controls, all of which help users create a truly distinctive look with unparalleled creative control of colors.
“The benefits of the Z5II go far beyond its attainable price and small size, offering users the benefits of our most advanced EXPEED 7 processing engine, a proven full-frame sensor along with unexpected pro-level features and performance,” said Fumiko Kawabata, Sr. Vice President of Marketing and Planning, Nikon Inc. “This is the camera many people have been waiting for in order to make the move to mirrorless, since nothing comes close to matching the value of features and performance in its class.”
Reliably Fast Focus and Performance
The AF on the Nikon Z5II is remarkably precise and super-fast, effortlessly locking on and tracking a wide range of moving subjects. Whether capturing fast-paced portraits or action shots, the system helps you never miss a crucial moment, even when a subject is backlit. The cutting-edge AF system can detect up to nine types of subjects for stills and video, including people (faces, eyes, heads and upper bodies), dogs, cats, birds, cars, motorcycles, bicycles, airplanes, and trains. But it’s not just the focus that’s fast: thanks to the next-generation processing power, the Z5II also offers high-performance features from pro-level Z models, to excel in any shoot.
3D-tracking AF mode keeps the target subject in focus even if it moves rapidly or erratically. This allows for subject tracking, even at high burst speeds, for sharp images again and again when photographing sports, animals or other fast-moving subjects.
The first full-frame mirrorless Nikon camera with AF-A focus mode. In this mode, the camera automatically switches between AF-S and AF-C focus modes in response to subject movement or changes in composition with still shooting. This allows the camera to automatically focus on the subject, with no setting adjustments when photographing. This new feature makes it simple to photograph pets, kids or other subjects whose movements are difficult to predict.
Fast continuous shooting speeds3 with a maximum frame rate of 14 frames per second in mechanical shutter mode and up to 15 or 30 frames per second (electronic shutter) with full autofocus.
Pre-Release Capture4 function when shooting in C15 and C30 modes is capable of recording images buffered up to one second before the shutter-release button is fully pressed, capturing the action before a user can react.
Embrace Low Light Like Never Before
There’s no need to be afraid of the dark with the Nikon Z5II. Featuring a powerful combination of the full-frame back-illuminated CMOS sensor and the EXPEED 7 image-processing engine, the Z5II delivers the best low-light performance in its class, rendering images and video with minimal noise while maintaining incredible AF detection in low light. Whether shooting indoors, twilight cityscapes or the night sky, the Z5II is built to help you capture confidently in nearly any light, preserving details and textures throughout the broad ISO range.
Class-leading autofocus detection down to -10EV5 delivers accurate, reliable focus in dim and dark conditions—great for concerts, live performances, festivities, available light portraiture, astrophotography and more.
A broad standard ISO sensitivity range of 100-64,000, expandable to Hi 1.7 (ISO 204,800 equivalent), delivers exceptional low-light capabilities and outstanding image quality with minimal noise. The max ISO is 51,200 for video recording.
The 5-axis in-camera vibration reduction (VR) system provides superior image stabilization equivalent to a 7.5-stop6 increase in shutter speed at the center and a 6.0-stop increase at the peripheral areas of the frame. This allows users to create with confidence in lower light and get sharper results, even when handheld or at lower shutter speeds.
Focus-point VR7 tailors stabilization to the area covered by the active AF point, for sharp rendering of the subject, even when it is positioned near the edge of the frame.
Starlight View Mode makes focus and composition simple in extremely low light, while the Warm Color Display Mode helps preserve night vision when working in complete darkness.
Extended shutter speeds up to 900 seconds (15 minutes) in manual exposure mode. Perfect for extreme long-exposure nightscapes and star trails.
Engineered to be Used, Made to be Loved
Shooting with the compact and lightweight Z5II is a satisfying and comfortable experience. The electronic viewfinder (EVF) is simply stunning and is 6x brighter than any competing model. At up to 3000 nits brightness, users can easily shoot even in the brightest direct sun with a perfect view of the frame, with real-time exposure information. Additionally, the rear 3.2″ Vari-angle LCD touchscreen rotates freely to nearly any angle, giving full freedom of composition. Get down in the street or hold it high above everyone’s heads and still be able to accurately frame the perfect shot. The grip is deep and comfortable to minimize fatigue. Additionally, the Nikon Z5II’s front, back, and top covers are made from magnesium alloy, which delivers exceptional durability and outstanding dust- and drip-resistance.
Feel the Color with Picture Controls
The Nikon Z5II is the latest camera to support one-button access to Picture Controls, plus compatibility with the Nikon Imaging Cloud. The dedicated Picture Control button opens new possibilities for expressive color, with imaginative film-inspired looks that instantly change the color tone and color of a scene. In a single press, the user can see in real-time the effects of up to 31 built-in color presets plus Imaging Recipes downloaded by the user.
Nikon Imaging Cloud connectivity allows users to download a wide variety of free Imaging Recipes created by Nikon and by popular creators, and to apply these recipes when shooting. In addition, the Z5II supports Flexible Color Picture Control, which allows users to create their own unique color styles using Nikon’s free NX Studio software. Flexible Color allows for a wider variety of color and tone adjustments, including hue, brightness and contrast. What’s more, these settings can also be saved as Custom Picture Controls that can be imported to the Z5II for use while shooting.
Powerful Video Features for Hybrid Users
The Z5II offers an impressive array of video features for content creators:
Capture immensely vivid and detailed 4K/30 UHD video, with no crop. This gives creatives the ability to shoot in 4K at full-frame, with more wide-angle freedom. For higher frame rates, the camera can also capture up to 4K/60 with a 1.5x crop.
Flexible in-camera video recording options with 12-bit N-RAW8, 10-bit H.265, and 8-bit H.264. This is the first camera to be able to record N-RAW to an SD card.
N-Log9 tone modes offer greater flexibility for color grading. This means Z5II users also have access to the free RED LUTs, which were developed in collaboration with RED for users to enjoy cinematic looks.
Full HD/120p for flexibility to create 5x slow motion videos in 8-bit H.264.
Hi-Res Zoom10 uses 4K resolution to zoom up to 2X in-camera during Full HD shooting, without any loss of quality. This is useful when using prime lenses to get closer to a subject and add a dynamic look to footage.
Product Review Mode will seamlessly switch focus between the user and any objects that they hold up to the camera. Users can even customize the size of the active AF area.
Upgraded streaming when connected via the UVC/UAC-compliant USB port, transforming the camera into a high-quality webcam for live streaming.
The Z5II also includes ports for headphones and microphones.
Additional Features of the Nikon Z5II
Dual SD card slots
Bird detection mode makes it easier to detect birds in motion and in flight.
Equipped with Nikon’s exclusive portrait functions, including Rich Tone Portrait, which delivers radiant, beautiful rendering of skin textures, and Skin Softening, which smooths the skin while leaving hair, eyes, and other details sharp.
Capture high-resolution images with Pixel Shift shooting11 to portray stunning depth and rich textures, from architectural details to rocky landscapes and vibrant artwork, creating images at a staggeringly high resolution of up to approximately 96 megapixels (images must be processed with Nikon’s free NX Studio software).
Adobe Creative Cloud Promotion
For a limited time, customers who purchase the Nikon Z5II and register their camera will also get 1 year of Lightroom + 1TB of Adobe Creative Cloud storage. For more information and terms of this promotion, please visit www.nikonusa.com.
Price and Availability
The new Nikon Z5II full-frame mirrorless camera will be available in April 2025 for a suggested retail price (SRP) of $1699.95* for the body only. Kit configurations include the NIKKOR Z 24-50mm f/4-6.3 lens for $1999.95* SRP, and the NIKKOR Z 24-200mm f/4-6.3 VR lens for $2499.95* SRP.
For more information about the latest Nikon products, including the vast collection of NIKKOR Z lenses and the entire line of Z series cameras, please visit www.nikonusa.com.

About Nikon
Nikon Inc. is a world leader in digital imaging, precision optics and technologies for photo and video capture; globally recognized for setting new standards in product design and performance for an award-winning array of equipment that enables visual storytelling and content creation. Nikon Inc. distributes consumer and professional Z Series mirrorless cameras, digital SLR cameras, a vast array of NIKKOR and NIKKOR Z lenses, Speedlights and system accessories, Nikon COOLPIX® compact digital cameras and Nikon software products. For more information, dial (800) NIKON-US or visit www.nikonusa.com, which links all levels of photographers and visual storytellers to the Web’s most comprehensive learning and sharing communities. Connect with Nikon on Facebook, X, YouTube, Instagram, Threads, and TikTok.
Specifications, equipment, and release dates are subject to change without any notice or obligation on the part of the manufacturer.

1. Based on Nikon research as of April 3rd, 2025.
2. Measured in accordance with CIPA standards. The measurement values are based on the following testing conditions: subject brightness of 10 EV; in photo mode using aperture-priority auto (A), single-servo AF (AF-S), single-point AF (center), at 70-mm focal length with the NIKKOR Z 24-70mm f/4 S.
3. Max frame rate of 14 fps in mechanical shutter mode is JPEG only. C15 and C30 are JPEG-only, electronic shutter modes. Rolling shutter distortion may occur depending on the type of subject and shooting conditions.
4. Available only with JPEG recording.
5. In photo mode using single-servo AF (AF-S), single-point AF (center), at ISO 100 equivalent and a temperature of 20°C/68°F with an f/1.2 lens.
6. Based on CIPA 2024 Standard. Yaw/pitch/roll compensation performance when using the NIKKOR Z 24-120mm f/4 S (telephoto end, NORMAL).
7. Only in photo mode with NIKKOR Z lenses not equipped with VR. Does not function when multiple focus points are displayed.
8. When a frame size and rate of [[FX] 4032×2268 30p], [[FX] 4032×2268 25p], [[FX] 4032×2268 24p], [[DX] 3984×2240 30p], [[DX] 3984×2240 25p], or [[DX] 3984×2240 24p] is selected for [Frame size/frame rate] in the video recording menu. Picture quality is equivalent to that of a video quality setting of [Normal]. Use of Video Speed Class 90 (V90) SD memory cards is recommended.
9. When [H.265 10-bit (MOV)] or [N-RAW 12-bit (NEV)] is selected for [Video file type] in the video recording menu.
10. Hi-Res Zoom is available when all the following conditions are met: [FX] is selected for [Image area] > [Choose image area] in the video recording menu; [H.265 10-bit (MOV)], [H.265 8-bit (MOV)], or [H.264 8-bit (MP4)] is selected for [Video file type] in the video recording menu; and a frame size and rate of [1920×1080; 30p], [1920×1080; 25p], or [1920×1080; 24p] is selected for [Frame size/frame rate] in the video recording menu.
11. Both the subject and the camera must be still. RAW images shot with the pixel shift shooting feature must be combined using NX Studio.

*SRP (Suggested Retail Price) listed only as a suggestion. Actual prices are set by dealers and are subject to change at any time.
SOURCE Nikon Inc.
Immigration enforcement is a key justification for repurposing government data.
Photo by U.S. Immigration and Customs Enforcement via Getty Images

Nicole M. Bennett, Indiana University
A whistleblower at the National Labor Relations Board reported an unusual spike in potentially sensitive data flowing out of the agency’s network in early March 2025 when staffers from the Department of Government Efficiency, which goes by DOGE, were granted access to the agency’s databases. On April 7, the Department of Homeland Security gained access to Internal Revenue Service tax data.
These seemingly unrelated events are examples of recent developments in the transformation of the structure and purpose of federal government data repositories. I am a researcher who studies the intersection of migration, data governance and digital technologies. I’m tracking how data that people provide to U.S. government agencies for public services such as tax filing, health care enrollment, unemployment assistance and education support is increasingly being redirected toward surveillance and law enforcement.
Originally collected to facilitate health care, determine eligibility and administer public services, this information is now shared across government agencies and with private companies, reshaping the infrastructure of public services into a mechanism of control. Once confined to separate bureaucracies, data now flows freely through a network of interagency agreements, outsourcing contracts and commercial partnerships built up in recent decades.
These data-sharing arrangements often take place outside public scrutiny, driven by national security justifications, fraud prevention initiatives and digital modernization efforts. The result is that the structure of government is quietly transforming into an integrated surveillance apparatus, capable of monitoring, predicting and flagging behavior at an unprecedented scale.
Executive orders signed by President Donald Trump aim to remove remaining institutional and legal barriers to completing this massive surveillance system.
DOGE and the private sector
Central to this transformation is DOGE, which is tasked via an executive order to “promote inter-operability between agency networks and systems, ensure data integrity, and facilitate responsible data collection and synchronization.” An additional executive order calls for the federal government to eliminate its information silos.
By building interoperable systems, DOGE can enable real-time, cross-agency access to sensitive information and create a centralized database on people within the U.S. These developments are framed as administrative streamlining but lay the groundwork for mass surveillance.
Key to this data repurposing are public-private partnerships. The DHS and other agencies have turned to third-party contractors and data brokers to bypass direct restrictions. These intermediaries also consolidate data from social media, utility companies, supermarkets and many other sources, enabling enforcement agencies to construct detailed digital profiles of people without explicit consent or judicial oversight.
Palantir, a private data firm and prominent federal contractor, supplies investigative platforms to agencies such as Immigration and Customs Enforcement, the Department of Defense, the Centers for Disease Control and Prevention and the Internal Revenue Service. These platforms aggregate data from various sources – driver’s license photos, social services, financial information, educational data – and present it in centralized dashboards designed for predictive policing and algorithmic profiling. These tools extend government reach in ways that challenge existing norms of privacy and consent.
The role of AI
Artificial intelligence has further accelerated this shift.
Predictive algorithms now scan vast amounts of data to generate risk scores, detect anomalies and flag potential threats.
These systems ingest data from school enrollment records, housing applications, utility usage and even social media, all made available through contracts with data brokers and tech companies. Because these systems rely on machine learning, their inner workings are often proprietary, unexplainable and beyond meaningful public accountability.
Sometimes the results are inaccurate, generated by AI hallucinations – responses AI systems produce that sound convincing but are incorrect, made up or irrelevant. Minor data discrepancies can lead to major consequences: job loss, denial of benefits and wrongful targeting in law enforcement operations. Once flagged, individuals rarely have a clear pathway to contest the system’s conclusions.
Digital profiling
Everyday acts of civic participation, such as applying for a loan, seeking disaster relief or requesting student aid, now contribute to a person’s digital footprint. Government entities could later interpret that data in ways that allow them to deny access to assistance. Data collected under the banner of care could be mined for evidence to justify placing someone under surveillance. And with growing dependence on private contractors, the boundaries between public governance and corporate surveillance continue to erode.
Artificial intelligence, facial recognition systems and predictive profiling systems lack oversight. They also disproportionately affect low-income individuals, immigrants and people of color, who are more frequently flagged as risks.
Initially built for benefits verification or crisis response, these data systems now feed into broader surveillance networks. The implications are profound. What began as a system targeting noncitizens and fraud suspects could easily be generalized to everyone in the country.
Eyes on everyone
This is not merely a question of data privacy. It is a broader transformation in the logic of governance. Systems once designed for administration have become tools for tracking and predicting people’s behavior. In this new paradigm, oversight is sparse and accountability is minimal.
AI allows for the interpretation of behavioral patterns at scale without direct interrogation or verification. Inferences replace facts. Correlations replace testimony.
The risk extends to everyone. While these technologies are often first deployed at the margins of society – against migrants, welfare recipients or those deemed “high risk” – there’s little to limit their scope. As the infrastructure expands, so does its reach into the lives of all citizens.
With every form submitted, interaction logged and device used, a digital profile deepens, often out of sight. The infrastructure for pervasive surveillance is in place. What remains uncertain is how far it will be allowed to go.
Nicole M. Bennett, Ph.D. Candidate in Geography and Assistant Director at the Center for Refugee Studies, Indiana University
This article is republished from The Conversation under a Creative Commons license. Read the original article.