Honeywell and Google Cloud to Accelerate Autonomous Operations with AI Agents for the Industrial Sector
Google Cloud AI to enhance Honeywell’s product offerings
and help upskill the industrial workforce
New solutions will connect to enterprise-wide industrial data from Honeywell Forge,
a leading IoT platform for industrials
CHARLOTTE, N.C. and SUNNYVALE, Calif. /PRNewswire/ — Honeywell (NASDAQ: HON) and Google Cloud announced a unique collaboration connecting artificial intelligence (AI) agents with assets, people and processes to accelerate safer, autonomous operations for the industrial sector.
This partnership will bring together the multimodality and natural language capabilities of Gemini on Vertex AI – Google Cloud’s AI platform – and the massive data set on Honeywell Forge, a leading Internet of Things (IoT) platform for industrials. This will unleash easy-to-understand, enterprise-wide insights across a multitude of use cases. Honeywell’s customers across the industrial sector will benefit from opportunities to reduce maintenance costs, increase operational productivity and upskill employees. The first solutions built with Google Cloud AI will be available to Honeywell’s customers in 2025.
“The path to autonomy requires assets working harder, people working smarter and processes working more efficiently,” said Vimal Kapur, Chairman and CEO of Honeywell. “By combining Google Cloud’s AI technology with our deep domain expertise–including valuable data on our Honeywell Forge platform–customers will receive unparalleled, actionable insights bridging the physical and digital worlds to accelerate autonomous operations, a key driver of Honeywell’s growth.”
“Our partnership with Honeywell represents a significant step forward in bringing the transformative power of AI to industrial operations,” said Thomas Kurian, CEO of Google Cloud. “With Gemini on Vertex AI, combined with Honeywell’s industrial data and expertise, we’re creating new opportunities to optimize processes, empower workforces and drive meaningful business outcomes for industrial organizations worldwide.”
With the mass retirement of workers from the baby boomer generation, the industrial sector faces both labor and skills shortages, and AI can be part of the solution, as a revenue generator rather than a job eliminator. More than four out of five (82%) industrial AI leaders believe their companies are early adopters of AI, yet only 17% have fully launched their initial AI plans, according to Honeywell’s 2024 Industrial AI Insights report. This partnership will provide AI agents that augment existing operations and workforces, helping drive AI adoption and enabling companies across the sector to benefit from expanding automation.
Honeywell and Google Cloud will co-innovate solutions around:
Purpose-Built, Industrial AI Agents
Built on Google Cloud’s Vertex AI Search and tailored to engineers’ specific needs, a new AI-powered agent will help automate tasks and reduce project design cycles, enabling users to focus on driving innovation and delivering exceptional customer experiences.
Additional agents will utilize Google’s large language models (LLMs) to help technicians resolve maintenance issues more quickly (e.g., “How did a unit perform last night?” “How do I replace the input/output module?” or “Why is my system making this sound?”). By leveraging Gemini’s multimodal capabilities, users will be able to work with various data types such as images, videos, text and sensor readings, helping engineers get the answers they need quickly and going beyond simple chat and predictions.
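For readers curious what such a multimodal query might look like in practice, here is a minimal, illustrative sketch using Google Cloud’s publicly documented Vertex AI Python SDK. It is not Honeywell’s actual integration; the project ID, storage path and model name are placeholders.

```python
# A minimal sketch (not Honeywell's actual integration) of sending a multimodal
# maintenance question to Gemini on Vertex AI with Google Cloud's Python SDK.
# Project ID, location, bucket path and model name below are placeholders.
import vertexai
from vertexai.generative_models import GenerativeModel, Part

vertexai.init(project="your-gcp-project", location="us-central1")

model = GenerativeModel("gemini-1.5-pro")

# Combine a photo of the faulty module with a plain-language question.
image = Part.from_uri("gs://your-bucket/io-module-photo.jpg", mime_type="image/jpeg")
question = (
    "This input/output module is reporting intermittent faults. "
    "What are the most likely causes, and how do I replace it safely?"
)

response = model.generate_content([image, question])
print(response.text)
```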
Enhanced Cybersecurity
Google Threat Intelligence – featuring frontline insight from Mandiant – will be integrated into current Honeywell cybersecurity products, including Global Analysis, Research and Defense (GARD) Threat Intelligence and Secure Media Exchange (SMX), to help enhance threat detection and protect global infrastructure for industrial customers.
On-the-Edge Device Advances
Looking ahead, Honeywell will explore using Google’s Gemini Nano model to enhance the intelligence of Honeywell edge AI devices across multiple use cases and verticals, ranging from scanning performance to voice-based guided workflows, maintenance, operations and alarm assistance, all without the need to connect to the internet or the cloud. This is the beginning of a new wave of more intelligent devices and solutions, which will be the subject of future Honeywell announcements.
By leveraging AI to enable growth and productivity, the integration of Google Cloud technology further supports the alignment of Honeywell’s portfolio with three compelling megatrends, including automation.
About Honeywell
Honeywell is an integrated operating company serving a broad range of industries and geographies around the world. Our business is aligned with three powerful megatrends – automation, the future of aviation and energy transition – underpinned by our Honeywell Accelerator operating system and Honeywell Forge IoT platform. As a trusted partner, we help organizations solve the world’s toughest, most complex challenges, providing actionable solutions and innovations through our Aerospace Technologies, Industrial Automation, Building Automation and Energy and Sustainability Solutions business segments that help make the world smarter and safer as well as more secure and sustainable. For more news and information on Honeywell, please visit www.honeywell.com/newsroom.
About Google Cloud
Google Cloud is the new way to the cloud, providing AI, infrastructure, developer, data, security, and collaboration tools built for today and tomorrow. Google Cloud offers a powerful, fully integrated, and optimized AI stack with its own planet-scale infrastructure, custom-built chips, generative AI models, and development platform, as well as AI-powered applications, to help organizations transform. Customers in more than 200 countries and territories turn to Google Cloud as their trusted technology partner.
SOURCE Honeywell
The science section of our news blog STM Daily News provides readers with captivating and up-to-date information on the latest scientific discoveries, breakthroughs, and innovations across various fields. We offer engaging and accessible content, ensuring that readers with different levels of scientific knowledge can stay informed. Whether it’s exploring advancements in medicine, astronomy, technology, or environmental sciences, our science section strives to shed light on the intriguing world of scientific exploration and its profound impact on our daily lives. From thought-provoking articles to informative interviews with experts in the field, STM Daily News Science offers a harmonious blend of factual reporting, analysis, and exploration, making it a go-to source for science enthusiasts and curious minds alike. https://stmdailynews.com/category/science/
T-Mobile, MeetMo, and NantStudios Win Prestigious 2025 Lumiere Award for Revolutionary Las Vegas Grand Prix Formula One Fan Experience

The world of motorsports just took a giant leap into the future! Excitement is in the air as T-Mobile, MeetMo, and NantStudios have clinched the illustrious 2025 Lumiere Award for Best Interactive Experience from the Advanced Imaging Society. This accolade is in recognition of their pioneering immersive video experience for fans at the celebrated Las Vegas Grand Prix!
A Game-Changing Experience
Imagine being able to step into a race track from the comfort of your own home, enveloped in a 360-degree augmented reality tour of the circuit, all captured in breathtaking 12K footage. Thanks to this remarkable collaboration, fans can now enjoy a race experience like never before, made possible by a spectacular fusion of 5G technology, virtual production, and artificial intelligence.
“By combining T-Mobile’s 5G Advanced Network Solutions with our real-time collaboration technology, we’ve created an immersive experience that brings fans closer to the action than ever before,” said Michael Mansouri, CEO of Radiant Images and MeetMo. His enthusiasm is shared by many, as this innovative project is seen as a quantum leap forward in the way motorsports are experienced.
The Technical Marvel Behind the Magic
Highlighting their technological finesse, the project transformed over 1.5TB of data into a stunningly interactive experience in mere hours—a feat that previously would have taken months. The journey began at the NantStudios headquarters in Los Angeles, where more than 10 minutes of ultra-high definition, immersive sequences were blended with telemetry and driver animation data captured tirelessly by Radiant Images’ crews in Las Vegas.
The astounding speed and efficiency were primarily powered by T-Mobile’s robust 5G infrastructure, allowing for rapid data transfers back and forth, ensuring seamless integration into the interactive app that fans could access. Chris Melus, VP of Product Management for T-Mobile’s Business Group, proudly remarked, “This collaboration broke new ground for immersive fan engagement.”
The Power of 5G
The integration of T-Mobile’s advanced network solutions turned the Las Vegas Grand Prix into a case study of innovation. With real-time capture and transmission capabilities utilizing Radiant Images’ cutting-edge 360° 12K camera car, production crews were able to capture immersive video feeds and transmit them instantaneously over the 5G network. This meant remote camera control and instant footage reviews, drastically cutting production time and resources.
Moreover, the seamless AR integration—thanks to the creative minds at NantStudios and their work with Unreal Engine—allowed the blending of virtual and real-world elements. Fans were treated to augmented reality overlays displaying real-time data, such as dashboard metrics and telemetry, all transmitted through the reliable 5G network.
Future of Fan Engagement
As Jim Chabin, President of the Advanced Imaging Society, eloquently noted, the remarkable work at the Las Vegas Grand Prix has set new standards for interactive sports entertainment. The recognition given to this innovative team underscores their commitment to pushing the envelope in immersive experiences.
Gary Marshall, Vice President of Virtual Production at NantStudios, also highlighted the project’s importance: “This recognition underscores NantStudios’ legacy of pioneering real-time VFX and virtual production achievements, reaffirming our position as a leader in modern virtual production.”
The 2025 Lumiere Award is not just a trophy; it symbolizes the melding of creativity and technology in a way that elevates the fan experience to new heights. The collaboration between T-Mobile, MeetMo, and NantStudios exemplifies a thrilling future where motorsports become more accessible, engaging, and immersive. It’s a thrilling time to be a fan, and the development teams behind this innovation have truly set a new standard for content creators everywhere.
With such defining moments in sports entertainment, we can’t help but wonder what spectacular innovations lie ahead. Buckle up; it’s going to be a wild ride!
About the Companies
MeetMo
MeetMo.io is revolutionizing how creative professionals collaborate by combining video conferencing, live streaming, and AI automation into a single, intuitive platform. With persistent virtual meeting rooms that adapt to users over time, our platform evolves into a true collaborative partner, enhancing creativity and productivity. For more information please visit: https://www.meetmo.io
Radiant Images
Radiant Images is a globally acclaimed, award-winning technology provider specializing in innovative tools and solutions for the media and entertainment industries. The company focuses on advancing cinema, immersive media, and live production. https://www.radiantimages.com
T-Mobile
T-Mobile US, Inc. (NASDAQ: TMUS) is America’s supercharged Un-carrier, delivering an advanced 4G LTE and transformative nationwide 5G network that will offer reliable connectivity for all. T-Mobile’s customers benefit from its unmatched combination of value and quality, unwavering obsession with offering them the best possible service experience and indisputable drive for disruption that creates competition and innovation in wireless and beyond. Based in Bellevue, Wash., T-Mobile provides services through its subsidiaries and operates its flagship brands, T-Mobile, Metro by T-Mobile and Mint Mobile. For more information please visit: https://www.t-mobile.com
NantStudios
NantStudios is the first real-time-native, full-service production house, re-imagined from the ground up to deliver exceptional creative results through next-generation technologies like Virtual Production. For more information please visit: https://nantstudios.com
SOURCE MeetMo
Looking for an entertainment experience that transcends the ordinary? Look no further than STM Daily News Blog’s vibrant Entertainment section. Immerse yourself in the captivating world of indie films, streaming and podcasts, movie reviews, music, expos, venues, and theme and amusement parks. Discover hidden cinematic gems, binge-worthy series and addictive podcasts, gain insights into the latest releases with our movie reviews, explore the latest trends in music, dive into the vibrant atmosphere of expos, and embark on thrilling adventures in breathtaking venues and theme parks. Join us at STM Entertainment and let your entertainment journey begin! https://stmdailynews.com/category/entertainment/
How close are quantum computers to being really useful? Podcast
Quantum computers could revolutionize science by solving complex problems. However, scaling and error correction remain significant challenges before achieving practical applications.

Quantum computers have the potential to solve big scientific problems that are beyond the reach of today’s most powerful supercomputers, such as discovering new antibiotics or developing new materials.
But to achieve these breakthroughs, quantum computers will need to perform better than today’s best classical computers at solving real-world problems. And they’re not quite there yet. So what is still holding quantum computing back from becoming useful?
In this episode of The Conversation Weekly podcast, we speak to quantum computing expert Daniel Lidar at the University of Southern California in the US about what problems scientists are still wrestling with when it comes to scaling up quantum computing, and how close they are to overcoming them.
Quantum computers harness the power of quantum mechanics, the laws that govern subatomic particles. Instead of the classical bits of information used by microchips inside traditional computers, which are either a 0 or a 1, the chips in quantum computers use qubits, which can be both 0 and 1 at the same time or anywhere in between. Daniel Lidar explains:
“Put a lot of these qubits together and all of a sudden you have a computer that can simultaneously represent many, many different possibilities … and that is the starting point for the speed up that we can get from quantum computing.”
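To make that concrete, here is a toy NumPy sketch (not a real quantum computation) showing why the description of a quantum state grows so fast: an n-qubit state is specified by 2^n complex amplitudes, so every additional qubit doubles the size of the state vector.

```python
# A toy illustration, using plain NumPy rather than a real quantum device, of
# how quickly qubits scale: an n-qubit state needs 2**n complex amplitudes.
import numpy as np

def uniform_superposition(n_qubits):
    """State vector with equal amplitude on all 2**n basis states."""
    dim = 2 ** n_qubits
    return np.full(dim, 1 / np.sqrt(dim), dtype=complex)

for n in (1, 2, 10, 20):
    state = uniform_superposition(n)
    print(f"{n} qubits -> state vector of length {len(state):,}")
    # Measurement probabilities are the squared amplitude magnitudes and sum to 1.
    assert np.isclose(np.sum(np.abs(state) ** 2), 1.0)
```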
Faulty qubits
One of the biggest problems scientists face is how to scale up quantum computing power. Qubits are notoriously prone to errors, which means they can quickly revert to being either a 0 or a 1 and so lose their advantage over classical computers.
Scientists have focused on trying to solve these errors through the concept of redundancy – linking strings of physical qubits together into what’s called a “logical qubit” to try and maximise the number of steps in a computation. And, little by little, they’re getting there.
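The intuition behind redundancy can be seen in a classical toy example. The sketch below majority-votes three noisy copies of a bit; real quantum error-correcting codes, such as the surface codes used in recent experiments, are far more subtle because qubits cannot simply be copied, but the payoff is similar: the logical error rate falls below the physical one.

```python
# A toy classical analogy for redundancy-based error correction: encode one
# logical bit as three physical bits and decode by majority vote. With a small
# per-bit flip probability p, the logical error rate drops to roughly 3*p**2.
import random

def transmit(bit, flip_prob):
    """Flip the bit with probability flip_prob (a toy noise model)."""
    return bit ^ (random.random() < flip_prob)

def logical_error_rate(flip_prob, trials=100_000):
    errors = 0
    for _ in range(trials):
        received = [transmit(0, flip_prob) for _ in range(3)]  # encode 0 as 000
        decoded = 1 if sum(received) >= 2 else 0               # majority vote
        errors += (decoded != 0)
    return errors / trials

p = 0.05
print(f"physical error rate: {p:.3f}")
print(f"logical error rate:  {logical_error_rate(p):.4f}")  # about 0.007
```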
In December 2024, Google announced that its new quantum chip, Willow, had demonstrated what’s called “beyond breakeven”, when its logical qubits worked better than the constituent parts and even kept on improving as it scaled up.
Lidar says right now the development of this technology is happening very fast:
“For quantum computing to scale and to take off is going to still take some real science breakthroughs, some real engineering breakthroughs, and probably overcoming some yet unforeseen surprises before we get to the point of true quantum utility. With that caution in mind, I think it’s still very fair to say that we are going to see truly functional, practical quantum computers kicking into gear, helping us solve real-life problems, within the next decade or so.”
Listen to Lidar explain more about how quantum computers and quantum error correction work on The Conversation Weekly podcast.
This episode of The Conversation Weekly was written and produced by Gemma Ware with assistance from Katie Flood and Mend Mariwany. Sound design was by Michelle Macklem, and theme music by Neeta Sarl.
Clips in this episode from Google Quantum AI and 10 Hours Channel.
You can find us on Instagram at theconversationdotcom or via e-mail. You can also subscribe to The Conversation’s free daily e-mail here.
Listen to The Conversation Weekly via any of the apps listed above, download it directly via our RSS feed or find out how else to listen here.
Gemma Ware, Host, The Conversation Weekly Podcast, The Conversation
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Why building big AIs costs billions – and how Chinese startup DeepSeek dramatically changed the calculus

Ambuj Tewari, University of Michigan
State-of-the-art artificial intelligence systems like OpenAI’s ChatGPT, Google’s Gemini and Anthropic’s Claude have captured the public imagination by producing fluent text in multiple languages in response to user prompts. Those companies have also captured headlines with the huge sums they’ve invested to build ever more powerful models.
An AI startup from China, DeepSeek, has upset expectations about how much money is needed to build the latest and greatest AIs. In the process, it has cast doubt on the billions of dollars of investment by the big AI players.
I study machine learning. DeepSeek’s disruptive debut comes down not to any stunning technological breakthrough but to a time-honored practice: finding efficiencies. In a field that consumes vast computing resources, that has proved to be significant.
Where the costs are
Developing such powerful AI systems begins with building a large language model. A large language model predicts the next word given previous words. For example, if the beginning of a sentence is “The theory of relativity was discovered by Albert,” a large language model might predict that the next word is “Einstein.” Large language models are trained to become good at such predictions in a process called pretraining.
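As a rough illustration of what “predicting the next word” means, the toy Python snippet below assigns made-up scores (logits) to a handful of candidate words and converts them into probabilities with a softmax; in a real large language model, a deep neural network over subword tokens produces those scores.

```python
# A heavily simplified sketch of next-word prediction. A real large language
# model computes these scores with a deep neural network; here a hand-written
# table of logits plus a softmax stands in for that network.
import numpy as np

vocab = ["Einstein", "Newton", "Curie", "banana"]
# Hypothetical scores for each candidate next word, given the context
# "The theory of relativity was discovered by Albert".
logits = np.array([9.1, 4.0, 3.2, -2.5])

probs = np.exp(logits - logits.max())
probs /= probs.sum()  # softmax turns raw scores into probabilities

for word, p in sorted(zip(vocab, probs), key=lambda x: -x[1]):
    print(f"{word:10s} {p:.4f}")
# Pretraining adjusts the model's weights so the true next word gets high probability.
```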
Pretraining requires a lot of data and computing power. The companies collect data by crawling the web and scanning books. Computing is usually powered by graphics processing units, or GPUs. Why graphics? It turns out that both computer graphics and the artificial neural networks that underlie large language models rely on the same area of mathematics, known as linear algebra. Large language models internally store hundreds of billions of numbers called parameters or weights. It is these weights that are modified during pretraining. Large language models consume huge amounts of computing resources, which in turn means lots of energy.
Pretraining is, however, not enough to yield a consumer product like ChatGPT. A pretrained large language model is usually not good at following human instructions. It might also not be aligned with human preferences. For example, it might output harmful or abusive language, both of which are present in text on the web.
The pretrained model therefore usually goes through additional stages of training. One such stage is instruction tuning where the model is shown examples of human instructions and expected responses. After instruction tuning comes a stage called reinforcement learning from human feedback. In this stage, human annotators are shown multiple large language model responses to the same prompt. The annotators are then asked to point out which response they prefer.
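A common way to use those preference judgments is to train a reward model with a pairwise loss. The sketch below shows one standard formulation (not any particular company’s recipe), in which the model is penalized whenever it scores the rejected response above the preferred one.

```python
# A minimal sketch of the pairwise preference loss often used to train the
# reward model in reinforcement learning from human feedback. This is one
# standard formulation, not a specific lab's training recipe.
import numpy as np

def preference_loss(reward_chosen, reward_rejected):
    """Negative log-probability that the chosen response beats the rejected one."""
    # Sigmoid of the reward gap: a large gap in the right direction -> low loss.
    return -np.log(1.0 / (1.0 + np.exp(-(reward_chosen - reward_rejected))))

print(preference_loss(2.0, 0.5))  # small loss: the model agrees with the annotator
print(preference_loss(0.5, 2.0))  # large loss: the model disagrees and gets corrected
```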
It is easy to see how costs add up when building an AI model: hiring top-quality AI talent, building a data center with thousands of GPUs, collecting data for pretraining, and running pretraining on GPUs. Additionally, there are costs involved in data collection and computation in the instruction tuning and reinforcement learning from human feedback stages.
All included, costs for building a cutting edge AI model can soar up to US$100 million. GPU training is a significant component of the total cost.
The expenditure does not stop when the model is ready. When the model is deployed and responds to user prompts, it uses more computation known as test time or inference time compute. Test time compute also needs GPUs. In December 2024, OpenAI announced a new phenomenon they saw with their latest model o1: as test time compute increased, the model got better at logical reasoning tasks such as math olympiad and competitive coding problems.
Slimming down resource consumption
Thus it seemed that the path to building the best AI models in the world was to invest in more computation during both training and inference. But then DeepSeek entered the fray and bucked this trend.
Their V-series models, culminating in the V3 model, used a series of optimizations to make training cutting-edge AI models significantly more economical. Their technical report states that it took them less than $6 million to train V3. They admit that this cost does not include the costs of hiring the team, doing the research, trying out various ideas and collecting data. But $6 million is still an impressively small figure for training a model that rivals leading AI models developed at much higher cost.
The reduction in costs was not due to a single magic bullet. It was a combination of many smart engineering choices including using fewer bits to represent model weights, innovation in the neural network architecture, and reducing communication overhead as data is passed around between GPUs.
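The first of those choices, using fewer bits per weight, is easy to illustrate. The toy snippet below applies naive symmetric int8 quantization to a small weight matrix; DeepSeek’s actual pipeline reportedly relied on carefully engineered low-precision formats such as FP8 during training, which is considerably more involved.

```python
# A toy illustration of "using fewer bits to represent model weights": naive
# symmetric int8 quantization of a small weight matrix. Real low-precision
# training pipelines are far more sophisticated than this sketch.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(scale=0.02, size=(4, 4)).astype(np.float32)

scale = np.abs(weights).max() / 127.0                    # map the largest weight to +/-127
quantized = np.round(weights / scale).astype(np.int8)    # 1 byte per weight instead of 4
dequantized = quantized.astype(np.float32) * scale       # approximate reconstruction

print("max reconstruction error:", np.abs(weights - dequantized).max())
print("memory per weight: 4 bytes (fp32) -> 1 byte (int8)")
```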
It is interesting to note that due to U.S. export restrictions on China, the DeepSeek team did not have access to high performance GPUs like the Nvidia H100. Instead they used Nvidia H800 GPUs, which Nvidia designed to be lower performance so that they comply with U.S. export restrictions. Working with this limitation seems to have unleashed even more ingenuity from the DeepSeek team.
DeepSeek also innovated to make inference cheaper, reducing the cost of running the model. Moreover, they released a model called R1 that is comparable to OpenAI’s o1 model on reasoning tasks.
They released all the model weights for V3 and R1 publicly. Anyone can download and further improve or customize their models. Furthermore, DeepSeek released their models under the permissive MIT license, which allows others to use the models for personal, academic or commercial purposes with minimal restrictions.
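In practice, fetching openly released weights is a one-liner with the Hugging Face Hub client, as in the hedged sketch below. The repository ID is an assumption about where the weights are mirrored, and the full models are very large (hundreds of gigabytes), so check the official release notes before downloading.

```python
# A minimal sketch of downloading openly released model weights with the
# Hugging Face Hub client. The repo_id below is an assumption about where the
# R1 weights are published; verify it against DeepSeek's official release.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="deepseek-ai/DeepSeek-R1",      # assumed repository ID
    local_dir="./deepseek-r1-weights",      # the full download is very large
)
print("weights downloaded to:", local_dir)
```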
Resetting expectations
DeepSeek has fundamentally altered the landscape of large AI models. An open weights model trained economically is now on par with more expensive and closed models that require paid subscription plans.
The research community and the stock market will need some time to adjust to this new reality.
Ambuj Tewari, Professor of Statistics, University of Michigan
This article is republished from The Conversation under a Creative Commons license. Read the original article.
STM Daily News is a vibrant news blog dedicated to sharing the brighter side of human experiences. Emphasizing positive, uplifting stories, the site focuses on delivering inspiring, informative, and well-researched content. With a commitment to accurate, fair, and responsible journalism, STM Daily News aims to foster a community of readers passionate about positive change and engaged in meaningful conversations. Join the movement and explore stories that celebrate the positive impacts shaping our world.