Science
Using machine learning to help monitor climate-induced hazards
Study finds way to track hurricane landfalls, other hazards using satellite data
Newswise — CHICAGO – Combining satellite technology with machine learning may allow scientists to better track and prepare for climate-induced natural hazards, according to research presented last month at the annual meeting of the American Geophysical Union.
Over the last few decades, rising global temperatures have caused many natural phenomena like hurricanes, snowstorms, floods and wildfires to grow in intensity and frequency.
While humans can’t prevent these disasters from occurring, the rapidly increasing number of satellites orbiting Earth offers a greater opportunity to monitor their evolution, said C.K. Shum, co-author of the study and a professor at the Byrd Polar Research Center and in earth sciences at The Ohio State University. Giving people in affected areas the information to make timely, informed decisions, he said, could improve the effectiveness of local disaster response and management.
“Predicting the future is a pretty difficult task, but by using remote sensing and machine learning, our research aims to help create a system that will be able to monitor these climate-induced hazards in a manner that enables a timely and informed disaster response,” said Shum.
Shum’s research uses geodesy — the science of measuring the planet’s size, shape and orientation in space — to study phenomena related to global climate change.
Using geodetic data gathered from various space agency satellites, researchers conducted several case studies to test whether a mix of remote sensing and deep machine learning analytics could accurately monitor abrupt weather episodes, including floods, droughts and storm surges in some areas of the world.
In one experiment, the team tested whether radar signals from Earth’s Global Navigation Satellite System (GNSS), reflected off the ocean surface and picked up by GNSS receivers in coastal towns along the Gulf of Mexico, could be used to track hurricane evolution by measuring rising sea levels ahead of landfall. Between 2020 and 2021, the team studied how seven storms, including Hurricane Hanna and Hurricane Delta, affected coastal sea levels before they made landfall along the Gulf of Mexico. By monitoring these complex changes, they found a positive correlation between higher coastal sea levels and the intensity of the resulting storm surges.
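The study’s analysis is far more involved, but the basic idea of testing for that relationship can be sketched in a few lines. The example below uses made-up per-storm values, not data from the study, and a simple Pearson correlation; every variable name and number is a placeholder.

```python
# Illustrative sketch only: checking whether higher coastal sea-level
# anomalies go along with stronger storm surges. Values are placeholders,
# not measurements from the study.
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-storm summaries for seven storms: peak coastal sea-level
# anomaly observed ahead of landfall, and peak storm-surge height (meters).
sea_level_anomaly_m = np.array([0.12, 0.25, 0.31, 0.18, 0.40, 0.22, 0.35])
surge_height_m = np.array([0.8, 1.9, 2.4, 1.1, 3.0, 1.5, 2.6])

r, p_value = pearsonr(sea_level_anomaly_m, surge_height_m)
print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")
```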
The data they used was collected by NASA and the German Aerospace Center’s Gravity Recovery And Climate Experiment (GRACE) mission and its successor, GRACE Follow-On. Both missions have monitored changes in Earth’s mass over the past two decades, but so far have only been able to resolve those changes at a spatial resolution of a little more than 400 miles. Using deep machine learning analytics, Shum’s team was able to sharpen that resolution to about 15 miles, substantially improving the ability to monitor natural hazards.
“Taking advantage of deep machine learning means having to condition the algorithm to continuously learn from various data inputs to achieve the goal you want to accomplish,” Shum said. In this instance, the satellite data allowed researchers to quantify the path and evolution of storm surges induced by two Category 4 Atlantic hurricanes, Hurricane Harvey in August 2017 and Hurricane Laura in August 2020, during their respective landfalls over Texas and Louisiana.
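As a rough illustration of the kind of learned downscaling described above, the sketch below upsamples a coarse gridded field with a small convolutional network in PyTorch. This is not the team’s model: the architecture, grid sizes, and random placeholder input are assumptions, and a real system would be trained against independent higher-resolution reference data.

```python
# Minimal downscaling sketch: map a coarse field (think ~400-mile cells)
# toward a finer grid. Architecture and sizes are illustrative assumptions.
import torch
import torch.nn as nn

class Downscaler(nn.Module):
    def __init__(self, scale: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Upsample(scale_factor=scale, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),
        )

    def forward(self, coarse_field: torch.Tensor) -> torch.Tensor:
        return self.net(coarse_field)

model = Downscaler(scale=8)
coarse = torch.randn(1, 1, 8, 8)   # placeholder input, not GRACE data
fine = model(coarse)               # shape: (1, 1, 64, 64)
print(fine.shape)
```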
Accurate measurements of these natural hazards could one day help improve hurricane forecasting, said Shum. But in the short term, Shum would like to see countries and organizations make their satellite data more readily available to scientists, as projects that rely on deep machine learning often need large amounts of wide-ranging data to help make accurate forecasts.
“Many of these novel satellite techniques require time and effort to process massive amounts of accurate data,” said Shum. “If researchers have access to more resources, we’ll be able to potentially develop technologies to better prepare people to adapt, as well as allow disaster management agencies to improve their response to intense and frequent climate-induced natural hazards.”
Co-authors of the project were Yu Zhang, Yuanyuan Jia, Yihang Ding and Junyi Guo of Ohio State; Orhan Akyilmaz and Metehan Uz of Istanbul Technical University; and Kazim Atman of Queen Mary University of London. This work was supported by the United States Agency for International Development (USAID), the National Science Foundation (NSF), the National Aeronautics and Space Administration and the Scientific and Technological Research Council of Türkiye (TÜBİTAK).
Source: Ohio State University
Forgotten Genius Fridays
Valerie Thomas: NASA Engineer, Inventor, and STEM Trailblazer
Last Updated on February 10, 2026 by Daily News Staff
Valerie Thomas is a true pioneer in the world of science and technology. A NASA engineer and physicist, she is best known for inventing the illusion transmitter, a groundbreaking device that creates 3D images using concave mirrors. This invention laid the foundation for modern 3D imaging and virtual reality technologies.
Beyond her inventions, Thomas broke barriers as an African American woman in STEM, mentoring countless young scientists and advocating for diversity in science and engineering. Her work at NASA’s Goddard Space Flight Center helped advance satellite technology and data visualization, making her contributions both innovative and enduring.
In our latest short video, we highlight Valerie Thomas’ remarkable journey—from her early passion for science to her groundbreaking work at NASA. Watch and be inspired by a true STEM pioneer whose legacy continues to shape the future of space and technology.
🎥 Watch the video here: https://youtu.be/P5XTgpcAoHw
Dive into “The Knowledge,” where curiosity meets clarity. This playlist, in collaboration with STMDailyNews.com, is designed for viewers who value historical accuracy and insightful learning. Our short videos, ranging from 30 seconds to a minute and a half, make complex subjects easy to grasp in no time. Covering everything from historical events to contemporary processes and entertainment, “The Knowledge” bridges the past with the present. In a world where information is abundant yet often misused, our series aims to guide you through the noise, preserving vital knowledge and truths that shape our lives today. Perfect for curious minds eager to discover the ‘why’ and ‘how’ of everything around us. Subscribe and join in as we explore the facts that matter. https://stmdailynews.com/the-knowledge/
Forgotten Genius Fridays
https://stmdailynews.com/the-knowledge-2/forgotten-genius-fridays/
🧠 Forgotten Genius Fridays
A Short-Form Series from The Knowledge by STM Daily News
Every Friday, STM Daily News shines a light on brilliant minds history overlooked.
Forgotten Genius Fridays is a weekly collection of short videos and articles dedicated to inventors, innovators, scientists, and creators whose impact changed the world—but whose names were often left out of the textbooks.
From life-saving inventions and cultural breakthroughs to game-changing ideas buried by bias, our series digs up the truth behind the minds that mattered.
Each episode of The Knowledge runs 30–90 seconds, designed for curious minds on the go—perfect for YouTube Shorts, TikTok, Reels, and quick reads.
Because remembering these stories isn’t just about the past—it’s about restoring credit where it’s long overdue.
🔔 New episodes every Friday
📺 Watch now at: stmdailynews.com/the-knowledge
🧠 Now you know.
The Knowledge
Beneath the Waves: The Global Push to Build Undersea Railways
Undersea railways are transforming transportation, turning oceans from barriers into gateways. Proven by tunnels like the Channel and Seikan, these innovations offer cleaner, reliable connections for passengers and freight. Ongoing projects in China and Europe, alongside future proposals, signal a new era of global mobility beneath the waves.

For most of modern history, oceans have acted as natural barriers—dividing nations, slowing trade, and shaping how cities grow. But beneath the waves, a quiet transportation revolution is underway. Infrastructure once limited by geography is now being reimagined through undersea railways.
Undersea rail tunnels—like the Channel Tunnel and Japan’s Seikan Tunnel—proved decades ago that trains could reliably travel beneath the ocean floor. Today, new projects are expanding that vision even further.
Around the world, engineers and governments are investing in undersea railways—tunnels that allow high-speed trains to travel beneath oceans and seas. Once considered science fiction, these projects are now operational, under construction, or actively being planned.

Undersea Rail Is Already a Reality
Japan’s Seikan Tunnel and the Channel Tunnel between the United Kingdom and France proved decades ago that undersea railways are not only possible, but reliable. These tunnels carry passengers and freight beneath the sea every day, reshaping regional connectivity.
Undersea railways are cleaner than short-haul flights, more resilient than bridges, and capable of lasting more than a century. As climate pressures and congestion increase, rail beneath the sea is emerging as a practical solution for future mobility.
What’s Being Built Right Now
China is currently constructing the Jintang Undersea Railway Tunnel as part of the Ningbo–Zhoushan high-speed rail line, while Europe’s Fehmarnbelt Fixed Link will soon connect Denmark and Germany beneath the Baltic Sea. These projects highlight how transportation and technology are converging to solve modern mobility challenges.
The Mega-Projects Still on the Drawing Board
Looking ahead, proposals such as the Helsinki–Tallinn Tunnel and the long-studied Strait of Gibraltar rail tunnel could reshape global affairs by linking regions—and even continents—once separated by water.
Why Undersea Rail Matters
The future of transportation may not rise above the ocean; it may run quietly beneath it.
child education
Special Education Is Turning to AI to Fill Staffing Gaps—But Privacy and Bias Risks Remain
With special education staffing shortages worsening, schools are using AI to draft IEPs, support training, and assist assessments. Experts warn the benefits come with major risks—privacy, bias, and trust.
Seth King, University of Iowa
In special education in the U.S., funding is scarce and personnel shortages are pervasive, leaving many school districts struggling to hire qualified and willing practitioners.
Amid these long-standing challenges, there is rising interest in using artificial intelligence tools to help close some of the gaps that districts currently face and lower labor costs.
Over 7 million children receive federally funded entitlements under the Individuals with Disabilities Education Act, which guarantees students access to instruction tailored to their unique physical and psychological needs, as well as legal processes that allow families to negotiate support. Special education involves a range of professionals, including rehabilitation specialists, speech-language pathologists and classroom teaching assistants. But these specialists are in short supply, despite the proven need for their services.
As an associate professor in special education who works with AI, I see its potential and its pitfalls. While AI systems may be able to reduce administrative burdens, deliver expert guidance and help overwhelmed professionals manage their caseloads, they can also present ethical challenges – ranging from machine bias to broader issues of trust in automated systems. They also risk amplifying existing problems with how special ed services are delivered.
Yet some in the field are opting to test out AI tools, rather than waiting for a perfect solution.
A faster IEP, but how individualized?
AI is already shaping special education planning, personnel preparation and assessment.
One example is the individualized education program, or IEP, the primary instrument for guiding which services a child receives. An IEP draws on a range of assessments and other data to describe a child’s strengths, determine their needs and set measurable goals. Every part of this process depends on trained professionals.
But persistent workforce shortages mean districts often struggle to complete assessments, update plans and integrate input from parents. Most districts develop IEPs using software that requires practitioners to choose from a generalized set of rote responses or options, leading to a level of standardization that can fail to meet a child’s true individual needs.
Preliminary research has shown that large language models such as ChatGPT can be adept at generating key special education documents such as IEPs by drawing on multiple data sources, including information from students and families. Chatbots that can quickly craft IEPs could potentially help special education practitioners better meet the needs of individual children and their families. Some professional organizations in special education have even encouraged educators to use AI for documents such as lesson plans.
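As a purely illustrative sketch of what drafting with a large language model might look like, the snippet below assumes the OpenAI Python SDK (version 1.x) and an API key in the environment; the model name, prompt, and student profile are placeholders. Because real IEP data is protected, only synthetic or properly de-identified text should ever be sent to a commercial model, and a licensed educator would still need to review and revise the output.

```python
# Hypothetical drafting aid, not a recommended district workflow.
# Assumes the OpenAI Python SDK (>= 1.0) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

synthetic_profile = (
    "Grade 4 student (synthetic example). Present level: reads 62 words "
    "correct per minute on grade-level text; strong listening comprehension; "
    "needs support decoding multisyllabic words."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "Draft one measurable annual reading goal for an IEP and "
                    "flag anything a licensed educator must verify."},
        {"role": "user", "content": synthetic_profile},
    ],
)
print(response.choices[0].message.content)
```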
Training and diagnosing disabilities
There is also potential for AI systems to help support professional training and development. My own work on personnel development combines several AI applications with virtual reality to enable practitioners to rehearse instructional routines before working directly with children. Here, AI can function as a practical extension of existing training models, offering repeated practice and structured support in ways that are difficult to sustain with limited personnel.
Some districts have begun using AI for assessments, which can involve a range of academic, cognitive and medical evaluations. AI applications that pair automatic speech recognition and language processing are now being employed in computer-mediated oral reading assessments to score tests of student reading ability.
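The scoring step such systems automate can be sketched simply: align the recognized words against the passage the student read and count words read correctly per minute. The transcript and timing below are placeholders, and a real pipeline would also need to handle hesitations, repetitions, speaker separation, and classroom noise.

```python
# Toy oral-reading score: words correct per minute (WCPM) from an ASR
# transcript. Both the transcript and the timing are placeholder values.
from difflib import SequenceMatcher

passage = "the quick brown fox jumps over the lazy dog".split()
asr_transcript = "the quick brown fox jumped over the dog".split()

matcher = SequenceMatcher(None, passage, asr_transcript)
words_correct = sum(block.size for block in matcher.get_matching_blocks())

reading_time_minutes = 0.25   # placeholder: 15 seconds of audio
wcpm = words_correct / reading_time_minutes
print(f"Words correct: {words_correct}, WCPM: {wcpm:.0f}")
```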
Practitioners often struggle to make sense of the volume of data that schools collect. AI-driven machine learning tools also can help here, by identifying patterns that may not be immediately visible to educators for evaluation or instructional decision-making. Such support may be especially useful in diagnosing disabilities such as autism or learning disabilities, where masking, variable presentation and incomplete histories can make interpretation difficult. My ongoing research shows that current AI can make predictions based on data likely to be available in some districts.
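For a sense of what that kind of pattern-finding looks like in code, the sketch below fits a basic classifier to made-up tabular features with scikit-learn. It is a conceptual illustration, not a screening or diagnostic tool; as discussed below, any model trained on historical records can inherit the biases in how students were identified in the past.

```python
# Conceptual sketch: a simple classifier over placeholder student features
# (e.g. fluency scores, attendance). Synthetic data only; not diagnostic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                     # made-up features
y = (X[:, 0] + 0.5 * X[:, 1] +
     rng.normal(scale=0.5, size=500) > 0).astype(int)  # made-up labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```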
Privacy and trust concerns
There are serious ethical – and practical – questions about these AI-supported interventions, ranging from risks to students’ privacy to machine bias and deeper issues tied to family trust. Some hinge on the question of whether or not AI systems can deliver services that truly comply with existing law.
The Individuals with Disabilities Education Act requires nondiscriminatory methods of evaluating disabilities to avoid inappropriately identifying students for services or neglecting to serve those who qualify. And the Family Educational Rights and Privacy Act explicitly protects students’ data privacy and the rights of parents to access and hold their children’s data.
What happens if an AI system uses biased data or methods to generate a recommendation for a child? What if a child’s data is misused or leaked by an AI system? Using AI systems to perform some of the functions described above puts families in a position where they are expected to put their faith not only in their school district and its special education personnel, but also in commercial AI systems, the inner workings of which are largely inscrutable.
These ethical qualms are hardly unique to special ed; many have been raised in other fields and addressed by early adopters. For example, while automatic speech recognition, or ASR, systems have struggled to accurately assess accented English, many vendors now train their systems to accommodate specific ethnic and regional accents.
But ongoing research work suggests that some ASR systems are limited in their capacity to accommodate speech differences associated with disabilities, account for classroom noise, and distinguish between different voices. While these issues may be addressed through technical improvement in the future, they are consequential at present.
Embedded bias
At first glance, machine learning models might appear to improve on traditional clinical decision-making. Yet AI models must be trained on existing data, meaning their decisions may continue to reflect long-standing biases in how disabilities have been identified.
Indeed, research has shown that AI systems are routinely hobbled by biases within both training data and system design. AI models can also introduce new biases, either by missing subtle information revealed during in-person evaluations or by overrepresenting characteristics of groups included in the training data.
Such concerns, defenders might argue, are addressed by safeguards already embedded in federal law. Families have considerable latitude in what they agree to, and can opt for alternatives, provided they are aware they can direct the IEP process.
By a similar token, using AI tools to build IEPs or lessons may seem like an obvious improvement over underdeveloped or perfunctory plans. Yet true individualization would require feeding protected data into large language models, which could violate privacy regulations. And while AI applications can readily produce better-looking IEPs and other paperwork, this does not necessarily result in improved services.
Filling the gap
Indeed, it is not yet clear whether AI provides a standard of care equivalent to the high-quality, conventional treatment to which children with disabilities are entitled under federal law.
The Supreme Court in 2017 rejected the notion that the Individuals with Disabilities Education Act merely entitles students to trivial, “de minimis” progress, which weakens one of the primary rationales for pursuing AI – that it can meet a minimum standard of care and practice. And since AI really has not been empirically evaluated at scale, it has not been proved that it adequately meets the low bar of simply improving beyond the flawed status quo.
But this does not change the reality of limited resources. For better or worse, AI is already being used to fill the gap between what the law requires and what the system actually provides.
Seth King, Associate Professor of Special Education, University of Iowa
This article is republished from The Conversation under a Creative Commons license. Read the original article.
