
Tech

GIGABYTE Brings Supercomputer Power to Your Desktop with AI TOP ATOM

GIGABYTE launches AI TOP ATOM on October 15th, a compact personal AI supercomputer powered by the NVIDIA Grace Blackwell GB10 Superchip. It delivers 1 petaFLOP of FP4 AI performance for on-premises AI development and supports models of up to 200 billion parameters on your desktop.

Last Updated on October 15, 2025 by Daily News Staff

Global Launch Set for October 15th as Tech Giant Democratizes AI Development

GIGABYTE Announces its Personal AI Supercomputer AI TOP ATOM Will be Available Globally on October 15

GIGABYTE is making a bold move to put enterprise-level AI computing power directly into the hands of developers, researchers, and students. The company’s latest innovation, AI TOP ATOM, launches globally on October 15th, promising to transform how we think about on-premises AI development.

Desktop Supercomputing Becomes Reality

What makes AI TOP ATOM remarkable isn’t just its specs—though those are impressive—it’s the promise of bringing supercomputer performance into a compact form factor that fits on your desk. Powered by NVIDIA’s Grace Blackwell GB10 Superchip, this personal AI supercomputer delivers up to 1 petaFLOP of FP4 AI performance. To put that in perspective, we’re talking about the kind of computational muscle that can handle large-scale models with up to 200 billion parameters right in your office.

The system comes equipped with 128GB of unified system memory and supports up to 4TB SSD storage, giving users the resources they need for serious AI workloads without the traditional infrastructure headaches.

Scale When You Need It

Here’s where things get interesting for power users: GIGABYTE designed AI TOP ATOM with scalability in mind. Need to tackle even larger models? Connect two units using the built-in NVIDIA ConnectX-7 NIC, and you can handle models up to 405 billion parameters. It’s like having a modular supercomputer that grows with your ambitions.
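The parameter counts GIGABYTE quotes line up with a rough back-of-the-envelope memory estimate, assuming FP4 weights at 4 bits per parameter and ignoring KV-cache and activation overhead (those assumptions are ours, not GIGABYTE’s):

```python
# Rough sanity check: can FP4-quantized model weights fit in unified memory?
# Assumption: 4 bits per parameter, ignoring KV cache and activations.

def fp4_weight_gb(params_billions: float) -> float:
    """Approximate weight memory in GB for an FP4-quantized model."""
    bits = params_billions * 1e9 * 4   # 4 bits per parameter
    return bits / 8 / 1e9              # bits -> bytes -> GB (decimal)

single_unit_memory = 128   # GB of unified memory per AI TOP ATOM
dual_unit_memory = 256     # two units linked over ConnectX-7

print(fp4_weight_gb(200))  # ~100 GB, fits within one 128 GB unit
print(fp4_weight_gb(405))  # ~202.5 GB, needs the dual-unit 256 GB pool
```

On this simple estimate, a 200B-parameter model’s weights occupy about 100 GB, comfortably inside one unit’s 128 GB, while 405B parameters need roughly 202.5 GB, which is why the larger figure only appears in the two-unit configuration.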

Software That Actually Makes Sense

Hardware is only half the story. AI TOP ATOM ships with NVIDIA’s complete AI software stack preinstalled—the full suite of tools, frameworks, and libraries designed specifically for generative AI workloads. But GIGABYTE didn’t stop there. They’ve integrated their exclusive AI TOP Utility, which provides an intuitive interface for the tasks that matter most: model fine-tuning, inference, and deployment across large language models (LLMs), large multimodal models (LMMs), and modern machine learning applications.

This approach addresses one of the biggest pain points in AI development—getting everything configured and working together. With AI TOP ATOM, you’re ready to start prototyping and developing from day one.

Who’s This For?

GIGABYTE is positioning AI TOP ATOM as a solution for anyone serious about AI development, from individual developers and academic researchers to students and educational institutions. The compact chassis means it works in environments where traditional server infrastructure simply isn’t practical—dorm rooms, small offices, research labs with limited space.

The “personal AI supercomputer” concept represents a significant shift in accessibility. What once required cloud computing budgets or dedicated data center space can now happen on-premises, giving developers more control over their data, faster iteration cycles, and potentially lower long-term costs.

The Bigger Picture

As AI development continues to accelerate across industries, tools like AI TOP ATOM signal an important trend: the democratization of high-performance AI computing. When powerful AI development tools become more accessible, innovation happens in unexpected places—and that’s exactly what GIGABYTE seems to be betting on.


AI TOP ATOM launches globally on October 15th. For complete specifications, pricing, and availability in your region, visit the official GIGABYTE website.


Source: GIGABYTE.com



Science

AI-induced cultural stagnation is no longer speculation − it’s already happening

A 2026 study found that when generative AI systems run autonomously, they converge on homogeneous content the researchers dubbed “visual elevator music,” no matter how diverse the prompts. The finding points to a risk of cultural stagnation as AI perpetuates familiar themes, potentially limiting innovation and diversity in creative expression.

When generative AI was left to its own devices, its outputs landed on a set of generic images – what researchers called ‘visual elevator music.’ Wang Zhao/AFP via Getty Images

Ahmed Elgammal, Rutgers University

Generative AI was trained on centuries of art and writing produced by humans.

But scientists and critics have wondered what would happen once AI became widely adopted and started training on its outputs.

A new study points to some answers.

In January 2026, artificial intelligence researchers Arend Hintze, Frida Proschinger Åström and Jory Schossau published a study showing what happens when generative AI systems are allowed to run autonomously – generating and interpreting their own outputs without human intervention.

The researchers linked a text-to-image system with an image-to-text system and let them iterate – image, caption, image, caption – over and over and over.

Regardless of how diverse the starting prompts were – and regardless of how much randomness the systems were allowed – the outputs quickly converged onto a narrow set of generic, familiar visual themes: atmospheric cityscapes, grandiose buildings and pastoral landscapes. Even more striking, the system quickly “forgot” its starting prompt.
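The closed loop described above can be sketched in toy form. The study used real text-to-image and image-to-text models; the stub functions below are illustrative stand-ins (a hypothetical set of “familiar” concepts, not anything from the paper) that mimic only the lossy translation step, to show why repeated conversion tends toward a fixed point:

```python
# Toy simulation of the image -> caption -> image loop. The actual study
# used real generative models; these stubs only mimic the lossy
# translation step that drives convergence.

GENERIC = {"building", "sky", "landscape", "city", "room"}  # "stable" concepts

def to_image(caption: str) -> frozenset:
    """Stub text-to-image: an 'image' is the set of concepts rendered.
    Lossy on purpose: only familiar concepts survive generation."""
    words = set(caption.lower().split())
    return frozenset(words & GENERIC or {"room"})

def to_caption(image: frozenset) -> str:
    """Stub image-to-text: describe whatever concepts the image contains."""
    return " ".join(sorted(image))

caption = "The Prime Minister pored over strategy documents in a city building"
history = []
for _ in range(10):
    caption = to_caption(to_image(caption))
    history.append(caption)

# The starting prompt's specifics vanish after one pass, and the loop
# settles on a fixed point it never leaves.
assert history[-1] == history[-2]
print(history[-1])
```

Nothing here is trained or retrained; the collapse comes purely from iterating a lossy round trip, which mirrors the study’s observation that the system quickly “forgot” its starting prompt.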

The researchers called the outcomes “visual elevator music” – pleasant and polished, yet devoid of any real meaning.

For example, they started with the image prompt, “The Prime Minister pored over strategy documents, trying to sell the public on a fragile peace deal while juggling the weight of his job amidst impending military action.” The resulting image was then captioned by AI. This caption was used as a prompt to generate the next image.

After repeating this loop, the researchers ended up with a bland image of a formal interior space – no people, no drama, no real sense of time and place.

A prompt that begins with a prime minister under stress ends with an image of an empty room with fancy furnishings. Arend Hintze, Frida Proschinger Åström and Jory Schossau, CC BY

As a computer scientist who studies generative models and creativity, I see the findings from this study as an important piece of the debate over whether AI will lead to cultural stagnation.

The results show that generative AI systems themselves tend toward homogenization when used autonomously and repeatedly. They even suggest that AI systems are currently operating in this way by default.


The familiar is the default

This experiment may appear beside the point: Most people don’t ask AI systems to endlessly describe and regenerate their own images. But consider what the result rules out: the convergence to a set of bland, stock images happened without retraining. No new data was added. Nothing was learned. The collapse emerged purely from repeated use.

But I think the setup of the experiment can be thought of as a diagnostic tool. It reveals what generative systems preserve when no one intervenes.

Pretty … boring. Chris McLoughlin/Moment via Getty Images

This has broader implications, because modern culture is increasingly influenced by exactly these kinds of pipelines. Images are summarized into text. Text is turned into images. Content is ranked, filtered and regenerated as it moves between words, images and videos. New articles on the web are now more likely to be written by AI than humans. Even when humans remain in the loop, they are often choosing from AI-generated options rather than starting from scratch.

The findings of this recent study show that the default behavior of these systems is to compress meaning toward what is most familiar, recognizable and easy to regenerate.

Cultural stagnation or acceleration?

For the past few years, skeptics have warned that generative AI could lead to cultural stagnation by flooding the web with synthetic content that future AI systems then train on. Over time, the argument goes, this recursive loop would narrow diversity and innovation.

Champions of the technology have pushed back, pointing out that fears of cultural decline accompany every new technology. Humans, they argue, will always be the final arbiter of creative decisions.

What has been missing from this debate is empirical evidence showing where homogenization actually begins.

The new study does not test retraining on AI-generated data. Instead, it shows something more fundamental: Homogenization happens before retraining even enters the picture. The content that generative AI systems naturally produce – when used autonomously and repeatedly – is already compressed and generic.

This reframes the stagnation argument. The risk is not only that future models might train on AI-generated content, but that AI-mediated culture is already being filtered in ways that favor the familiar, the describable and the conventional.

Retraining would amplify this effect. But it is not its source.

This is no moral panic

Skeptics are right about one thing: Culture has always adapted to new technologies. Photography did not kill painting. Film did not kill theater. Digital tools have enabled new forms of expression.


But those earlier technologies never forced culture to be endlessly reshaped across various mediums at a global scale. They did not summarize, regenerate and rank cultural products – news stories, songs, memes, academic papers, photographs or social media posts – millions of times per day, guided by the same built-in assumptions about what is “typical.”

The study shows that when meaning is forced through such pipelines repeatedly, diversity collapses not because of bad intentions, malicious design or corporate negligence, but because only certain kinds of meaning survive the text-to-image-to-text repeated conversions.

This does not mean cultural stagnation is inevitable. Human creativity is resilient. Institutions, subcultures and artists have always found ways to resist homogenization. But in my view, the findings of the study show that stagnation is a real risk – not a speculative fear – if generative systems are left to operate in their current iteration.

They also help clarify a common misconception about AI creativity: Producing endless variations is not the same as producing innovation. A system can generate millions of images while exploring only a tiny corner of cultural space.

In my own research on creative AI, I found that novelty requires designing AI systems with incentives to deviate from the norms. Without such incentives, systems optimize for familiarity because familiarity is what they have learned best. The study reinforces this point empirically. Autonomy alone does not guarantee exploration. In some cases, it accelerates convergence.

This pattern has already emerged in the real world: one study found that AI-generated lesson plans showed the same drift toward conventional, uninspiring content, underscoring that AI systems converge toward what’s typical rather than what’s unique or creative.

AI’s outputs are familiar because they revert to average displays of human creativity. Bulgac/iStock via Getty Images

Lost in translation

Whenever you write a caption for an image, details will be lost. Likewise for generating an image from text. And this happens whether it’s being performed by a human or a machine.

In that sense, the convergence that took place is not a failure that’s unique to AI. It reflects a deeper property of bouncing from one medium to another. When meaning passes repeatedly through two different formats, only the most stable elements persist.

But by highlighting what survives during repeated translations between text and images, the authors are able to show that meaning is processed inside generative systems with a quiet pull toward the generic.

The implication is sobering: Even with human guidance – whether that means writing prompts, selecting outputs or refining results – these systems are still stripping away some details and amplifying others in ways that are oriented toward what’s “average.”

If generative AI is to enrich culture rather than flatten it, I think systems need to be designed in ways that resist convergence toward statistically average outputs. There can be rewards for deviation and support for less common and less mainstream forms of expression.


The study makes one thing clear: Absent these interventions, generative AI will continue to drift toward mediocre and uninspired content.

Cultural stagnation is no longer speculation. It’s already happening.

Ahmed Elgammal, Professor of Computer Science and Director of the Art & AI Lab, Rutgers University

This article is republished from The Conversation under a Creative Commons license. Read the original article.




Consumer Corner

LUMISTAR Draws Record Crowds at CES 2026 With AI Tennis and Basketball Training Systems

LUMISTAR’s CES 2026 debut showcased TERO and CARRY, innovative AI sports training systems that engage athletes actively. The systems allow real-time adaptations, transforming training into competitive practice while effectively utilizing performance data for measurable skill development. Pre-orders start March 2026.

Tero and Carry at Lumistar CES

LUMISTAR wrapped up its CES 2026 debut in Las Vegas with record-level attention, as live demos of its AI-powered sports training systems consistently drew full crowds throughout the show, according to the company.

The sports-focused AI brand showcased TERO, its AI tennis training system, and CARRY, its AI basketball training system—both described by attendees as “game changers” for how training can be delivered, measured, and scaled.

Why the Booth Stayed Packed

Across multiple days of hands-on demonstrations, LUMISTAR’s booth became a focal point for athletes, coaches, club operators, and sports technology professionals. Visitors repeatedly pointed to one key difference: the systems don’t just record results—they actively participate in training.

That’s a major break from the standard model in sports tech, where:

  • traditional ball machines run pre-set drills, and
  • wearables/video tools analyze performance after a session ends.

Training That Adapts in Real Time

LUMISTAR says both TERO and CARRY combine real-time computer vision, adaptive decision-making, and on-court execution to respond instantly to athlete behavior—adjusting difficulty, tempo, and training logic shot by shot.

Attendees noted that this turns practice from repetition into something closer to competition—an evolving back-and-forth between athlete and system.

“This is not an incremental improvement—it’s a complete rethink of what training equipment should do,” one professional coach attending CES said in the release. “For the first time, the machine is reacting to the athlete, not the other way around.”

From Data Collection to Action

Another standout point from CES feedback: the platform’s focus on turning performance data into immediate training outcomes.

LUMISTAR’s approach emphasizes:

  • continuous data retention across sessions
  • real-time performance interpretation
  • clear visualization of progress and training efficiency

Coaches and athletes highlighted that this could reduce wasted training time and accelerate skill development by making each session measurable and comparable.

What’s Next: Pre-Orders and Kickstarter

LUMISTAR outlined a 2026 rollout plan following CES:

  • TERO opens for pre-orders in March 2026, with full market availability beginning May 2026
  • CARRY launches via Kickstarter in Q2 2026
  • The company will continue private demonstrations and pilot programs with select training institutions worldwide ahead of commercial release

More information is available at https://www.lumistar.ai.

Source: PRNewswire press release from LUMISTAR (Jan. 11, 2026)

STM Daily News is tracking the biggest CES 2026 stories shaping entertainment, culture, and the tech that’s changing how we watch, play, train, and live—bringing you quick-hit updates, standout product debuts, and follow-up coverage as launches roll out in 2026. https://stmdailynews.com/entertainment/





Festivals

Arspura Brings “Cook Freely, Breathe Freely” to CES 2026 With IQV™ Kitchen Ventilation Tech

Arspura showcased its IQV™ kitchen ventilation technology at CES 2026, highlighting PM2.5 health risks, high-airspeed smoke capture, and a cleaner, healthier cooking experience.

Professor Francesca Dominici highlights the importance of indoor air quality awareness.

Arspura used CES 2026 to make a very specific point: the kitchen isn’t just where meals happen—it’s where indoor air quality can quietly take a hit. From January 6–8 in Las Vegas, the premium smart home appliance brand showcased its latest IQV™ innovations and hosted a three-day brand program focused on respiratory wellness, user experience, and next-gen ventilation designed to tackle smoke, grease particles, and lingering odors right at the source.

At the center of the showcase was Arspura’s proprietary IQV™ Dynamic Particulate Capture Technology, built to reduce the “smoke escape” problem that many households still deal with using traditional range hoods. The brand’s message throughout the event was simple and consumer-friendly: “Cook freely, breathe freely.”

Arspura also shared CES showcase highlights here:

https://www.youtube.com/watch?v=owPq9B5GcLE

Day 1: The Invisible Kitchen Health Issue—PM2.5

Arspura opened its CES program with a keynote from Professor Francesca Dominici of the Harvard T.H. Chan School of Public Health, who emphasized the health risks tied to fine particulate matter (PM2.5). Her message: even low-level exposure can contribute to illness, and cooking-related particulates can be especially concerning when they spread throughout the home.

Dominici noted that PM2.5 exposure can be particularly hazardous for:

  • Older adults
  • People with existing health conditions
  • Individuals with asthma

The takeaway for everyday households was clear: preventing cooking-related PM2.5 from dispersing indoors—and reducing exposure at the source—can be an important step toward healthier living.

Arspura tied that research directly to its product mission, highlighting a focus on helping households (especially those with asthma or nasal sensitivities) cook with less irritation from smoke and odor.

Day 2: Aiming for Smoke Capture With Less “Escape”

On day two, Arspura shifted from health awareness to product mechanics. In a technical session led by the company’s product manager, Arspura explained how its IQV™ airflow design pairs with high-airspeed capture (up to 13 m/s) to improve capture performance while minimizing smoke escape.

The company framed IQV™ as more than a spec sheet upgrade—it’s meant to be a blend of technology and daily usability that makes “healthy cooking” feel effortless instead of high-maintenance.

Arspura also hosted an on-site visit and interview with media figure Yang Lan, who toured the booth and shared positive feedback on the IQV™ technology and the IQV Hood concept—especially for people who are sensitive to cooking fumes and want a more comfortable kitchen environment.

Day 3: Real Users, Real Kitchens, Real Results

Arspura closed out CES 2026 with momentum, including five awards earned during the show. To wrap the three-day program, the brand invited its first group of IQV Hood users for an in-person sharing session paired with hands-on demos.


The focus here wasn’t just “health protection”—it was also practicality. Arspura positioned the IQV Hood experience around two everyday wins:

  • Health protection through improved smoke capture and deodorization
  • Easy cleaning for real-life kitchen routines

Users shared that they had tried multiple traditional range hoods in the past and still dealt with smoke escape and stubborn odors. In contrast, they reported noticeably improved smoke capture with Arspura’s IQV™ performance—making cooking more enjoyable and, in some cases, convincing friends and family to consider upgrading after seeing it in action.

What’s Next for Arspura

With CES recognition and user-driven validation, Arspura is betting on a growing shift in the category: kitchen ventilation that prioritizes health + usability, not one or the other. The company says it will continue developing smarter, cleaner-air technologies for modern homes.

For more information, visit arspura.com.

Source: Arspura via PRNewswire (Jan. 16, 2026)




