
NASA to Provide Live Coverage of Space Station Cargo Launch, Docking

Last Updated on May 22, 2023 by Daily News Staff

The Roscosmos Progress 81 cargo resupply ship is pictured after undocking from the International Space Station’s Zvezda service module. It would later reenter the Earth’s atmosphere above the Pacific Ocean for a safe demise, completing an eight-month space station resupply mission.
Credits: NASA

NASA will provide live launch and docking coverage of the Roscosmos Progress 84 cargo spacecraft carrying about three tons of food, fuel, and supplies for the Expedition 69 crew aboard the International Space Station.

The unpiloted spacecraft is scheduled to launch at 8:56 a.m. EDT (5:56 p.m. Baikonur time) on Wednesday, May 24, on a Soyuz rocket from the Baikonur Cosmodrome in Kazakhstan. NASA coverage will begin at 8:30 a.m. on NASA Television, the NASA app, and the agency’s website.

The Progress spacecraft will be placed into a two-orbit journey to the station, leading to an automatic docking to the Poisk module at 12:20 p.m. NASA coverage will resume at 11:30 a.m. for rendezvous and docking.

The spacecraft will remain at the orbiting laboratory for approximately six months, then undock for a destructive but safe re-entry into Earth’s atmosphere to dispose of trash loaded by the crew.

The International Space Station is a convergence of science, technology, and human innovation that enables research not possible on Earth. For more than 22 years, NASA has supported a continuous U.S. human presence aboard the orbiting laboratory, through which humans have learned to live and work in space for extended periods of time. The space station is a springboard for the development of a low Earth orbit economy and NASA’s next great leaps in exploration, including missions to the Moon under Artemis and ultimately, human exploration of Mars.

Get breaking news, images, and features from the space station on Instagram, Facebook, and Twitter.

Learn more about the space station, its research, and crew, at:

https://www.nasa.gov/station



Special Education Is Turning to AI to Fill Staffing Gaps—But Privacy and Bias Risks Remain

With special education staffing shortages worsening, schools are using AI to draft IEPs, support training, and assist assessments. Experts warn the benefits come with major risks—privacy, bias, and trust.

Seth King, University of Iowa


In special education in the U.S., funding is scarce and personnel shortages are pervasive, leaving many school districts struggling to hire qualified and willing practitioners.

Amid these long-standing challenges, there is rising interest in using artificial intelligence tools to help close some of the gaps that districts currently face and lower labor costs.

Over 7 million children receive federally funded entitlements under the Individuals with Disabilities Education Act, which guarantees students access to instruction tailored to their unique physical and psychological needs, as well as legal processes that allow families to negotiate support. Special education involves a range of professionals, including rehabilitation specialists, speech-language pathologists and classroom teaching assistants. But these specialists are in short supply, despite the proven need for their services.

As an associate professor in special education who works with AI, I see its potential and its pitfalls. While AI systems may be able to reduce administrative burdens, deliver expert guidance and help overwhelmed professionals manage their caseloads, they can also present ethical challenges – ranging from machine bias to broader issues of trust in automated systems. They also risk amplifying existing problems with how special ed services are delivered.

Yet some in the field are opting to test out AI tools, rather than waiting for a perfect solution.

A faster IEP, but how individualized?

AI is already shaping special education planning, personnel preparation and assessment.

One example is the individualized education program, or IEP, the primary instrument for guiding which services a child receives. An IEP draws on a range of assessments and other data to describe a child’s strengths, determine their needs and set measurable goals. Every part of this process depends on trained professionals.

But persistent workforce shortages mean districts often struggle to complete assessments, update plans and integrate input from parents. Most districts develop IEPs using software that requires practitioners to choose from a generalized set of rote responses or options, leading to a level of standardization that can fail to meet a child’s true individual needs.

Preliminary research has shown that large language models such as ChatGPT can be adept at generating key special education documents such as IEPs by drawing on multiple data sources, including information from students and families. Chatbots that can quickly craft IEPs could potentially help special education practitioners better meet the needs of individual children and their families. Some professional organizations in special education have even encouraged educators to use AI for documents such as lesson plans.

Training and diagnosing disabilities

There is also potential for AI systems to help support professional training and development. My own work on personnel development combines several AI applications with virtual reality to enable practitioners to rehearse instructional routines before working directly with children. Here, AI can function as a practical extension of existing training models, offering repeated practice and structured support in ways that are difficult to sustain with limited personnel.


Some districts have begun using AI for assessments, which can involve a range of academic, cognitive and medical evaluations. AI applications that pair automatic speech recognition and language processing are now being employed in computer-mediated oral reading assessments to score tests of student reading ability.

Practitioners often struggle to make sense of the volume of data that schools collect. AI-driven machine learning tools can also help here, identifying patterns that may not be immediately visible to educators and informing evaluation or instructional decision-making. Such support may be especially useful in diagnosing disabilities such as autism or learning disabilities, where masking, variable presentation and incomplete histories can make interpretation difficult. My ongoing research shows that current AI can make predictions based on data likely to be available in some districts.

Privacy and trust concerns

There are serious ethical – and practical – questions about these AI-supported interventions, ranging from risks to students’ privacy to machine bias and deeper issues tied to family trust. Some hinge on the question of whether or not AI systems can deliver services that truly comply with existing law.

The Individuals with Disabilities Education Act requires nondiscriminatory methods of evaluating disabilities to avoid inappropriately identifying students for services or neglecting to serve those who qualify. And the Family Educational Rights and Privacy Act explicitly protects students’ data privacy and the rights of parents to access and hold their children’s data.

What happens if an AI system uses biased data or methods to generate a recommendation for a child? What if a child’s data is misused or leaked by an AI system? Using AI systems to perform some of the functions described above puts families in a position where they are expected to put their faith not only in their school district and its special education personnel, but also in commercial AI systems, the inner workings of which are largely inscrutable.

These ethical qualms are hardly unique to special ed; many have been raised in other fields and addressed by early adopters. For example, while automatic speech recognition, or ASR, systems have struggled to accurately assess accented English, many vendors now train their systems to accommodate specific ethnic and regional accents.

But ongoing research suggests that some ASR systems are limited in their capacity to accommodate speech differences associated with disabilities, account for classroom noise, and distinguish between different voices. While these issues may be addressed through technical improvements in the future, they are consequential at present.

Embedded bias

At first glance, machine learning models might appear to improve on traditional clinical decision-making. Yet AI models must be trained on existing data, meaning their decisions may continue to reflect long-standing biases in how disabilities have been identified.

Indeed, research has shown that AI systems are routinely hobbled by biases within both training data and system design. AI models can also introduce new biases, either by missing subtle information revealed during in-person evaluations or by overrepresenting characteristics of groups included in the training data.

Such concerns, defenders might argue, are addressed by safeguards already embedded in federal law. Families have considerable latitude in what they agree to, and can opt for alternatives, provided they are aware they can direct the IEP process.


By a similar token, using AI tools to build IEPs or lessons may seem like an obvious improvement over underdeveloped or perfunctory plans. Yet true individualization would require feeding protected data into large language models, which could violate privacy regulations. And while AI applications can readily produce better-looking IEPs and other paperwork, this does not necessarily result in improved services.

Filling the gap

Indeed, it is not yet clear whether AI provides a standard of care equivalent to the high-quality, conventional treatment to which children with disabilities are entitled under federal law.

The Supreme Court in 2017 rejected the notion that the Individuals with Disabilities Education Act merely entitles students to trivial, “de minimis” progress, which weakens one of the primary rationales for pursuing AI – that it can meet a minimum standard of care and practice. And because AI has not been empirically evaluated at scale, it has not even been shown to clear the low bar of simply improving on the flawed status quo.

But this does not change the reality of limited resources. For better or worse, AI is already being used to fill the gap between what the law requires and what the system actually provides.

Seth King, Associate Professor of Special Education, University of Iowa

This article is republished from The Conversation under a Creative Commons license. Read the original article.




Fact Check: Did Mike Rogers Admit the Travis Walton UFO Case Was a Hoax?

A fact check of viral claims that Mike Rogers admitted the Travis Walton UFO case was a hoax. We examine the evidence, the spotlight theory, and what the record actually shows.

Last Updated on February 6, 2026 by Daily News Staff


In recent years, viral YouTube videos and podcast commentary have revived claims that the 1975 Travis Walton UFO abduction case was an admitted hoax. One of the most widely repeated allegations asserts that Mike Rogers, the logging crew’s foreman, supposedly confessed that he and Walton staged the entire event using a spotlight from a ranger tower to fool their coworkers.

So, is there any truth to this claim?

After a review of decades of interviews, skeptical investigations, and public records, the answer is clear:

There is no verified evidence that Mike Rogers ever admitted the Travis Walton incident was a hoax.


 

Where the Viral Claim Comes From

The “confession” story has circulated for years in online forums and was recently amplified by commentary-style YouTube and podcast content, including popular personality-driven shows. These versions often claim:

  • Rogers and Walton planned the incident in advance

  • A spotlight from a ranger or observation tower simulated the UFO

  • The rest of the crew was unaware of the hoax

  • Rogers later “admitted” this publicly

However, none of these claims are supported by primary documentation.


What the Documented Record Shows

No Recorded Confession Exists

  • There is no audio, video, affidavit, court record, or signed statement in which Mike Rogers admits staging the incident.

  • Rogers has repeatedly denied hoax allegations in interviews spanning decades.

  • Even prominent skeptical organizations do not cite any confession by Rogers.

If such an admission existed, it would be widely referenced in skeptical literature and would have effectively closed the case. It has not.


The “Ranger Tower Spotlight” Theory Lacks Evidence

  • No confirmed ranger tower or spotlight installation matching the claim has been documented at the location.

  • No ranger, third party, or equipment operator has ever come forward.

  • No physical evidence or corroborating testimony supports this explanation.

Even professional skeptics typically label this idea as speculative, not factual.


Why Skepticism Still Exists (Legitimately)

While the viral claim lacks evidence, skepticism about the Walton case is not unfounded. Common, well-documented critiques include:

  • Financial pressure tied to a logging contract

  • The limitations and inconsistency of polygraph testing

  • Walton’s later use of hypnosis, which is a controversial tool for memory recall

  • Possible cultural influence from 1970s UFO media

Importantly, none of these critiques rely on a confession by Mike Rogers, because none exists.


Updates & Current Status of the Case

As of today:

  • No new witnesses have come forward to confirm a hoax

  • No participant has recanted their core testimony

  • No physical evidence has conclusively proven or disproven the event

  • Walton and Rogers have both continued to deny hoax allegations

The case remains unresolved, not debunked.


Why Viral Misinformation Persists

Online commentary formats often compress nuance into dramatic statements. Over time:

  • Speculation becomes repeated as “fact”

  • Hypothetical explanations are presented as admissions

  • Entertainment content is mistaken for investigative reporting

This is especially common with long-standing mysteries like the Walton case, where ambiguity invites exaggeration.


Viral Claims vs. Verified Facts

Viral Claim:

Mike Rogers admitted he and Travis Walton staged the UFO incident.

Verified Fact:

No documented confession exists. Rogers has consistently denied hoax claims.


Viral Claim:


A ranger tower spotlight was used to fake the UFO.

Verified Fact:

No evidence confirms a tower, spotlight, or third-party involvement.


Viral Claim:

The case was “officially debunked.”

Verified Fact:

No authoritative body has conclusively debunked or confirmed the incident.


Viral Claim:

All skeptics agree it was a hoax.

Verified Fact:


Even skeptical researchers acknowledge the absence of definitive proof.


Viral Claim:

Hollywood exposed the truth in Fire in the Sky.

Verified Fact:

The film significantly fictionalized Walton’s testimony for dramatic effect.


Bottom Line

  • ❌ There is no verified admission by Mike Rogers

  • ❌ There is no evidence of a ranger tower spotlight hoax

  • ✅ There are legitimate unanswered questions about the case

  • ✅ The incident remains debated, not solved

The Travis Walton story persists not because it has been proven — but because it has never been conclusively explained.  


Author

  • Rod Washington

    Rod: A creative force, blending words, images, and flavors. Blogger, writer, filmmaker, and photographer. Cooking enthusiast with a sci-fi vision. Passionate about his upcoming series and dedicated to TNC Network. Partnered with Rebecca Washington for a shared journey of love and art.





AI-induced cultural stagnation is no longer speculation – it’s already happening

A 2026 study revealed that when generative AI operates autonomously, it produces homogeneous content – what researchers call “visual elevator music” – despite diverse prompts. This convergence leads to bland outputs and indicates a risk of cultural stagnation as AI perpetuates familiar themes, potentially limiting innovation and diversity in creative expression.

When generative AI was left to its own devices, its outputs landed on a set of generic images – what researchers called ‘visual elevator music.’ Wang Zhao/AFP via Getty Images

Ahmed Elgammal, Rutgers University

Generative AI was trained on centuries of art and writing produced by humans.

But scientists and critics have wondered what would happen once AI became widely adopted and started training on its outputs.

A new study points to some answers.

In January 2026, artificial intelligence researchers Arend Hintze, Frida Proschinger Åström and Jory Schossau published a study showing what happens when generative AI systems are allowed to run autonomously – generating and interpreting their own outputs without human intervention.

The researchers linked a text-to-image system with an image-to-text system and let them iterate – image, caption, image, caption – over and over and over.
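
In outline, the setup is a simple closed loop in which each model’s output becomes the other’s input. The sketch below is a minimal, hypothetical illustration of that loop in Python; generate_image and caption_image are placeholder stand-ins, not the specific models the researchers used.

    # Minimal sketch of the closed text-image loop described above.
    # generate_image() and caption_image() are hypothetical stand-ins for a
    # text-to-image model and an image-to-text captioning model.

    def generate_image(prompt: str):
        """Stand-in for a text-to-image model; plug in a real system here."""
        raise NotImplementedError

    def caption_image(image) -> str:
        """Stand-in for an image-to-text captioning model."""
        raise NotImplementedError

    def run_loop(start_prompt: str, iterations: int = 20) -> list[str]:
        """Alternate generation and captioning; each caption seeds the next image."""
        captions = []
        prompt = start_prompt
        for _ in range(iterations):
            image = generate_image(prompt)   # text -> image
            prompt = caption_image(image)    # image -> text
            captions.append(prompt)
        return captions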

Regardless of how diverse the starting prompts were – and regardless of how much randomness the systems were allowed – the outputs quickly converged onto a narrow set of generic, familiar visual themes: atmospheric cityscapes, grandiose buildings and pastoral landscapes. Even more striking, the system quickly “forgot” its starting prompt.

The researchers called the outcomes “visual elevator music” – pleasant and polished, yet devoid of any real meaning.

For example, they started with the image prompt, “The Prime Minister pored over strategy documents, trying to sell the public on a fragile peace deal while juggling the weight of his job amidst impending military action.” The resulting image was then captioned by AI. This caption was used as a prompt to generate the next image.

After repeating this loop, the researchers ended up with a bland image of a formal interior space – no people, no drama, no real sense of time and place.

A prompt that begins with a prime minister under stress ends with an image of an empty room with fancy furnishings. Arend Hintze, Frida Proschinger Åström and Jory Schossau, CC BY

As a computer scientist who studies generative models and creativity, I see the findings from this study as an important piece of the debate over whether AI will lead to cultural stagnation.

The results show that generative AI systems themselves tend toward homogenization when used autonomously and repeatedly. They even suggest that AI systems are currently operating in this way by default.


The familiar is the default

This experiment may appear beside the point: Most people don’t ask AI systems to endlessly describe and regenerate their own images. Yet the convergence to a set of bland, stock images happened without retraining. No new data was added. Nothing was learned. The collapse emerged purely from repeated use.

But I think the setup of the experiment can be thought of as a diagnostic tool. It reveals what generative systems preserve when no one intervenes.

Pretty … boring. Chris McLoughlin/Moment via Getty Images

This has broader implications, because modern culture is increasingly influenced by exactly these kinds of pipelines. Images are summarized into text. Text is turned into images. Content is ranked, filtered and regenerated as it moves between words, images and videos. New articles on the web are now more likely to be written by AI than humans. Even when humans remain in the loop, they are often choosing from AI-generated options rather than starting from scratch.

The findings of this recent study show that the default behavior of these systems is to compress meaning toward what is most familiar, recognizable and easy to regenerate.

Cultural stagnation or acceleration?

For the past few years, skeptics have warned that generative AI could lead to cultural stagnation by flooding the web with synthetic content that future AI systems then train on. Over time, the argument goes, this recursive loop would narrow diversity and innovation.

Champions of the technology have pushed back, pointing out that fears of cultural decline accompany every new technology. Humans, they argue, will always be the final arbiter of creative decisions.

What has been missing from this debate is empirical evidence showing where homogenization actually begins.

The new study does not test retraining on AI-generated data. Instead, it shows something more fundamental: Homogenization happens before retraining even enters the picture. The content that generative AI systems naturally produce – when used autonomously and repeatedly – is already compressed and generic.

This reframes the stagnation argument. The risk is not only that future models might train on AI-generated content, but that AI-mediated culture is already being filtered in ways that favor the familiar, the describable and the conventional.

Retraining would amplify this effect. But it is not its source.

This is no moral panic

Skeptics are right about one thing: Culture has always adapted to new technologies. Photography did not kill painting. Film did not kill theater. Digital tools have enabled new forms of expression.


But those earlier technologies never forced culture to be endlessly reshaped across various mediums at a global scale. They did not summarize, regenerate and rank cultural products – news stories, songs, memes, academic papers, photographs or social media posts – millions of times per day, guided by the same built-in assumptions about what is “typical.”

The study shows that when meaning is forced through such pipelines repeatedly, diversity collapses not because of bad intentions, malicious design or corporate negligence, but because only certain kinds of meaning survive the text-to-image-to-text repeated conversions.

This does not mean cultural stagnation is inevitable. Human creativity is resilient. Institutions, subcultures and artists have always found ways to resist homogenization. But in my view, the findings of the study show that stagnation is a real risk – not a speculative fear – if generative systems are left to operate in their current iteration.

They also help clarify a common misconception about AI creativity: Producing endless variations is not the same as producing innovation. A system can generate millions of images while exploring only a tiny corner of cultural space.

In my own research on creative AI, I found that novelty requires designing AI systems with incentives to deviate from the norms. Without such incentives, systems optimize for familiarity because familiarity is what they have learned best. The study reinforces this point empirically. Autonomy alone does not guarantee exploration. In some cases, it accelerates convergence.

This pattern has already emerged in the real world: One study found that AI-generated lesson plans featured the same drift toward conventional, uninspiring content, underscoring that AI systems converge toward what’s typical rather than what’s unique or creative.

AI’s outputs are familiar because they revert to average displays of human creativity. Bulgac/iStock via Getty Images

Lost in translation

Whenever you write a caption for an image, details will be lost. Likewise for generating an image from text. And this happens whether it’s being performed by a human or a machine.

In that sense, the convergence that took place is not a failure that’s unique to AI. It reflects a deeper property of bouncing from one medium to another. When meaning passes repeatedly through two different formats, only the most stable elements persist.

But by highlighting what survives during repeated translations between text and images, the authors are able to show that meaning is processed inside generative systems with a quiet pull toward the generic.

The implication is sobering: Even with human guidance – whether that means writing prompts, selecting outputs or refining results – these systems are still stripping away some details and amplifying others in ways that are oriented toward what’s “average.”

If generative AI is to enrich culture rather than flatten it, I think systems need to be designed in ways that resist convergence toward statistically average outputs. There can be rewards for deviation and support for less common and less mainstream forms of expression.


The study makes one thing clear: Absent these interventions, generative AI will continue to drift toward mediocre and uninspired content.

Cultural stagnation is no longer speculation. It’s already happening.

Ahmed Elgammal, Professor of Computer Science and Director of the Art & AI Lab, Rutgers University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

