
Too many em dashes? Weird words like ‘delves’? Spotting text written by ChatGPT is still more art than science

Language experts fare no better than everyday people. Aitor Diago/Moment via Getty Images


Roger J. Kreuz, University of Memphis

People are now routinely using chatbots to write computer code, summarize articles and books, or solicit advice. But these chatbots are also employed to quickly generate text from scratch, with some users passing off the words as their own.

This has, not surprisingly, created headaches for teachers tasked with evaluating their students’ written work. It’s also created issues for people seeking advice on forums like Reddit, or consulting product reviews before making a purchase.

Over the past few years, researchers have been exploring whether it’s even possible to distinguish human writing from artificial intelligence-generated text. But the best strategies for telling the two apart may come from the chatbots themselves.

Too good to be human?

Several recent studies have highlighted just how difficult it is to determine whether text was generated by a human or a chatbot. Research participants recruited for a 2021 online study, for example, were unable to distinguish between human- and AI-generated stories, news articles and recipes.

Language experts fare no better. In a 2023 study, editorial board members for top linguistics journals were unable to determine which article abstracts had been written by humans and which were generated by ChatGPT. And a 2024 study found that 94% of undergraduate exams written by ChatGPT went undetected by graders at a British university. Clearly, humans aren’t very good at this.

A commonly held belief is that rare or unusual words can serve as “tells” regarding authorship, just as a poker player might somehow give away that they hold a winning hand.

Researchers have, in fact, documented a dramatic increase in relatively uncommon words, such as “delves” or “crucial,” in articles published in scientific journals over the past couple of years. This suggests that unusual terms could serve as tells that generative AI has been used. It also implies that some researchers are actively using bots to write or edit parts of their submissions to academic journals. Whether this practice reflects wrongdoing is up for debate.

In another study, researchers asked people about characteristics they associate with chatbot-generated text. Many participants pointed to the excessive use of em dashes – an elongated dash used to set off text or serve as a break in thought – as one marker of computer-generated output. But even in this study, the participants’ rate of AI detection was only marginally better than chance.

Given such poor performance, why do so many people believe that em dashes are a clear tell for chatbots? Perhaps it’s because this form of punctuation is primarily employed by experienced writers. In other words, people may believe that writing that is “too good” must be artificially generated.

But if people can’t intuitively tell the difference, perhaps there are other methods for determining human versus artificial authorship.

Stylometry to the rescue?

Some answers may be found in the field of stylometry, in which researchers employ statistical methods to detect variations in the writing styles of authors.

I’m a cognitive scientist who authored a book on the history of stylometric techniques. In it, I document how researchers developed methods to establish authorship in contested cases, or to determine who may have written anonymous texts.

One tool for determining authorship was proposed by the Australian scholar John Burrows. He developed Burrows’ Delta, a computerized technique that examines the relative frequency of common words, as opposed to rare ones, that appear in different texts. It may seem counterintuitive to think that someone’s use of words like “the,” “and” or “to” can determine authorship, but the technique has been impressively effective.
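For readers curious about the mechanics, here is a minimal sketch of the Delta calculation in Python. It illustrates the general idea rather than the exact procedure from any particular study: the vocabulary size, the normalization details and the inputs are all stand-ins.

```python
from collections import Counter
import math

def rel_freqs(text, vocab):
    """Relative frequency of each vocabulary word in one text."""
    words = text.lower().split()
    counts = Counter(words)
    return [counts[w] / max(len(words), 1) for w in vocab]

def burrows_delta(disputed, candidates, top_n=30):
    """Toy Burrows' Delta: lower scores mean a closer stylistic match.

    `candidates` maps author names to writing samples. Real studies
    use hundreds of common words and much longer texts.
    """
    corpus = [disputed] + list(candidates.values())

    # 1. The most frequent words across the whole corpus -- deliberately
    #    common ones like "the", "and" and "to", not rare words.
    vocab = [w for w, _ in
             Counter(" ".join(corpus).lower().split()).most_common(top_n)]

    # 2. Each text's relative-frequency profile over that vocabulary.
    profiles = [rel_freqs(t, vocab) for t in corpus]

    # 3. Z-score each word's frequency against the corpus mean and
    #    standard deviation, so all vocabulary words weigh equally.
    means = [sum(col) / len(col) for col in zip(*profiles)]
    stds = [math.sqrt(sum((x - m) ** 2 for x in col) / len(col)) or 1e-9
            for col, m in zip(zip(*profiles), means)]
    zscores = [[(p[i] - means[i]) / stds[i] for i in range(len(vocab))]
               for p in profiles]

    # 4. Delta = mean absolute difference between z-score profiles.
    disputed_z = zscores[0]
    return {name: sum(abs(a - b) for a, b in zip(disputed_z, z)) / len(vocab)
            for name, z in zip(candidates, zscores[1:])}
```

Feeding it a disputed passage and samples from candidate authors returns a Delta score for each candidate; the smallest score marks the likeliest author, provided there is enough text for the word frequencies to stabilize.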
A stylometric technique called Burrows’ Delta was used to identify LaSalle Corbell Pickett as the author of love letters attributed to her deceased husband, Confederate Gen. George Pickett. Encyclopedia Virginia
Burrows’ Delta, for example, was used to establish that Ruth Plumly Thompson, L. Frank Baum’s successor, was the author of a disputed book in the “Wizard of Oz” series. It was also used to determine that love letters attributed to Confederate Gen. George Pickett were actually the inventions of his widow, LaSalle Corbell Pickett.

A major drawback of Burrows’ Delta and similar techniques is that they require a fairly large amount of text to reliably distinguish between authors. A 2016 study found that at least 1,000 words from each author may be required. A relatively short student essay, therefore, wouldn’t provide enough input for a statistical technique to work its attribution magic.

More recent work has made use of what are known as BERT language models, which are trained on large amounts of human- and chatbot-generated text. The models learn the patterns that are common in each type of writing, and they can be much more discriminating than people: The best ones are between 80% and 98% accurate.

However, these machine-learning models are “black boxes” – that is, we don’t really know which features of texts are responsible for their impressive abilities. Researchers are actively trying to find ways to make sense of them, but for now, it isn’t clear whether the models are detecting specific, reliable signals that humans can look for on their own.
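In code, such a detector is simply a binary text classifier. The sketch below shows the general shape using the Hugging Face transformers library; the checkpoint name is a placeholder for a model fine-tuned on human versus AI text, not a reference to any specific published detector.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder name -- substitute an actual fine-tuned detector checkpoint.
MODEL_NAME = "your-org/bert-ai-text-detector"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

def ai_probability(text: str) -> float:
    """Estimated probability that `text` is machine-generated.

    Assumes label index 1 means "AI-written"; check the checkpoint's
    id2label mapping before trusting the direction of the answer.
    """
    inputs = tokenizer(text, truncation=True, max_length=512,
                       return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()

print(ai_probability("This delves into the crucial implications at hand."))
```

Note what the function returns: a single probability with no explanation attached. That opacity is exactly the “black box” problem described above.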

A moving target

Another challenge for identifying bot-generated text is that the models themselves are constantly changing – sometimes in major ways. Early in 2025, for example, users began to express concerns that ChatGPT had become overly obsequious, with mundane queries deemed “amazing” or “fantastic.” OpenAI addressed the issue by rolling back some changes it had made.

Of course, the writing style of a human author may change over time as well, but it typically does so more gradually.

At some point, I wondered what the bots had to say for themselves. I asked ChatGPT-4o: “How can I tell if some prose was generated by ChatGPT? Does it have any ‘tells,’ such as characteristic word choice or punctuation?”

The bot admitted that distinguishing human from nonhuman prose “can be tricky.” Nevertheless, it did provide me with a 10-item list, replete with examples. These included the use of hedges – words like “often” and “generally” – as well as redundancy, an overreliance on lists and a “polished, neutral tone.” It also mentioned “predictable vocabulary,” including certain adjectives such as “significant” and “notable,” along with academic terms like “implication” and “complexity.”

However, though it noted that these features of chatbot-generated text are common, it concluded that “none are definitive on their own.”

Chatbots are known to hallucinate, or make factual errors. But when it comes to talking about themselves, they appear to be surprisingly perceptive.

Roger J. Kreuz, Associate Dean and Professor of Psychology, University of Memphis

This article is republished from The Conversation under a Creative Commons license. Read the original article.



Valerie Thomas: NASA Engineer, Inventor, and STEM Trailblazer



Valerie Thomas is a true pioneer in the world of science and technology. A NASA engineer and physicist, she is best known for inventing the illusion transmitter, a groundbreaking device that creates 3D images using concave mirrors. This invention laid the foundation for modern 3D imaging and virtual reality technologies.

Beyond her inventions, Thomas broke barriers as an African American woman in STEM, mentoring countless young scientists and advocating for diversity in science and engineering. Her work at NASA’s Goddard Space Flight Center helped advance satellite technology and data visualization, making her contributions both innovative and enduring.

In our latest short video, we highlight Valerie Thomas’ remarkable journey—from her early passion for science to her groundbreaking work at NASA. Watch and be inspired by a true STEM pioneer whose legacy continues to shape the future of space and technology.

🎥 Watch the video here: https://youtu.be/P5XTgpcAoHw

Dive into “The Knowledge,” where curiosity meets clarity. This playlist, in collaboration with STMDailyNews.com, is designed for viewers who value historical accuracy and insightful learning. Our short videos, ranging from 30 seconds to a minute and a half, make complex subjects easy to grasp in no time. Covering everything from historical events to contemporary processes and entertainment, “The Knowledge” bridges the past with the present. In a world where information is abundant yet often misused, our series aims to guide you through the noise, preserving vital knowledge and truths that shape our lives today. Perfect for curious minds eager to discover the ‘why’ and ‘how’ of everything around us. Subscribe and join in as we explore the facts that matter.  https://stmdailynews.com/the-knowledge/

Forgotten Genius Fridays

https://stmdailynews.com/the-knowledge-2/forgotten-genius-fridays/

🧠 Forgotten Genius Fridays

A Short-Form Series from The Knowledge by STM Daily News

Every Friday, STM Daily News shines a light on brilliant minds history overlooked.

Advertisement
Get More From A Face Cleanser And Spa-like Massage

Forgotten Genius Fridays is a weekly collection of short videos and articles dedicated to inventors, innovators, scientists, and creators whose impact changed the world—but whose names were often left out of the textbooks.

From life-saving inventions and cultural breakthroughs to game-changing ideas buried by bias, our series digs up the truth behind the minds that mattered.

Each episode of The Knowledge runs 30–90 seconds, designed for curious minds on the go—perfect for YouTube Shorts, TikTok, Reels, and quick reads.

Because remembering these stories isn’t just about the past—it’s about restoring credit where it’s long overdue.

 🔔 New episodes every Friday

📺 Watch now at: stmdailynews.com/the-knowledge

 🧠 Now you know.  

Author

  • Rod Washington

Rod: A creative force, blending words, images, and flavors. Blogger, writer, filmmaker, and photographer. Cooking enthusiast with a sci-fi vision. Passionate about his upcoming series and dedicated to TNC Network. Partnered with Rebecca Washington for a shared journey of love and art.




Beneath the Waves: The Global Push to Build Undersea Railways

Undersea railways are transforming transportation, turning oceans from barriers into gateways. Proven by tunnels like the Channel and Seikan, these innovations offer cleaner, reliable connections for passengers and freight. Ongoing projects in China and Europe, alongside future proposals, signal a new era of global mobility beneath the waves.


Trains beneath the ocean are no longer science fiction—they’re already in operation.

For most of modern history, oceans have acted as natural barriers—dividing nations, slowing trade, and shaping how cities grow. But beneath the waves, a quiet transportation revolution is underway. Infrastructure once limited by geography is now being reimagined through undersea railways.

Around the world, engineers and governments are investing in undersea railways—tunnels that allow high-speed trains to travel beneath oceans and seas. Once considered science fiction, these projects are now operational, under construction, or actively being planned.


Undersea Rail Is Already a Reality

Japan’s Seikan Tunnel and the Channel Tunnel between the United Kingdom and France proved decades ago that undersea railways are not only possible, but reliable. These tunnels carry passengers and freight beneath the sea every day, reshaping regional connectivity.

Undersea railways are cleaner than short-haul flights, more resilient than bridges, and capable of lasting more than a century. As climate pressures and congestion increase, rail beneath the sea is emerging as a practical solution for future mobility.

What’s Being Built Right Now

China is currently constructing the Jintang Undersea Railway Tunnel as part of the Ningbo–Zhoushan high-speed rail line, while Europe’s Fehmarnbelt Fixed Link will soon connect Denmark and Germany beneath the Baltic Sea. These projects highlight how transportation and technology are converging to solve modern mobility challenges.

The Mega-Projects Still on the Drawing Board

Looking ahead, proposals such as the Helsinki–Tallinn Tunnel and the long-studied Strait of Gibraltar rail tunnel could reshape global affairs by linking regions—and even continents—once separated by water.

Why Undersea Rail Matters

The future of transportation may not rise above the ocean—but run quietly beneath it.




Special Education Is Turning to AI to Fill Staffing Gaps—But Privacy and Bias Risks Remain

With special education staffing shortages worsening, schools are using AI to draft IEPs, support training, and assist assessments. Experts warn the benefits come with major risks—privacy, bias, and trust.


Seth King, University of Iowa


In special education in the U.S., funding is scarce and personnel shortages are pervasive, leaving many school districts struggling to hire qualified and willing practitioners.

Amid these long-standing challenges, there is rising interest in using artificial intelligence tools to help close some of the gaps that districts currently face and lower labor costs.

Over 7 million children receive federally funded entitlements under the Individuals with Disabilities Education Act, which guarantees students access to instruction tailored to their unique physical and psychological needs, as well as legal processes that allow families to negotiate support. Special education involves a range of professionals, including rehabilitation specialists, speech-language pathologists and classroom teaching assistants. But these specialists are in short supply, despite the proven need for their services.

As an associate professor in special education who works with AI, I see its potential and its pitfalls. While AI systems may be able to reduce administrative burdens, deliver expert guidance and help overwhelmed professionals manage their caseloads, they can also present ethical challenges – ranging from machine bias to broader issues of trust in automated systems. They also risk amplifying existing problems with how special ed services are delivered.

Yet some in the field are opting to test out AI tools, rather than waiting for a perfect solution.

A faster IEP, but how individualized?

AI is already shaping special education planning, personnel preparation and assessment.

One example is the individualized education program, or IEP, the primary instrument for guiding which services a child receives. An IEP draws on a range of assessments and other data to describe a child’s strengths, determine their needs and set measurable goals. Every part of this process depends on trained professionals.

But persistent workforce shortages mean districts often struggle to complete assessments, update plans and integrate input from parents. Most districts develop IEPs using software that requires practitioners to choose from a generalized set of rote responses or options, leading to a level of standardization that can fail to meet a child’s true individual needs.

Preliminary research has shown that large language models such as ChatGPT can be adept at generating key special education documents such as IEPs by drawing on multiple data sources, including information from students and families. Chatbots that can quickly craft IEPs could potentially help special education practitioners better meet the needs of individual children and their families. Some professional organizations in special education have even encouraged educators to use AI for documents such as lesson plans.
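As a rough sketch of what this looks like in practice, the snippet below asks a large language model for draft IEP goals from a synthetic, de-identified student profile. The client library is OpenAI’s, but the model name is a placeholder, and, as the privacy discussion below makes clear, real student records should never be sent to a commercial chatbot without a compliant data agreement.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Synthetic, de-identified profile -- never paste real student records
# into a commercial model without a privacy-compliant agreement.
student_profile = """
Grade: 4
Area of need: reading fluency
Present level: reads 45 words correct per minute (grade norm: ~110)
Strengths: strong listening comprehension, engaged in small groups
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any capable model works
    messages=[
        {"role": "system",
         "content": "You draft measurable annual IEP goals for review "
                    "by a licensed special education teacher."},
        {"role": "user",
         "content": f"Draft two measurable reading-fluency goals for:\n"
                    f"{student_profile}"},
    ],
)

print(response.choices[0].message.content)  # a draft, not a final IEP
```

The output is a draft for a licensed practitioner to revise against actual assessment data; the time savings are in the writing, not the decision-making.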

Training and diagnosing disabilities

There is also potential for AI systems to help support professional training and development. My own work on personnel development combines several AI applications with virtual reality to enable practitioners to rehearse instructional routines before working directly with children. Here, AI can function as a practical extension of existing training models, offering repeated practice and structured support in ways that are difficult to sustain with limited personnel.


Some districts have begun using AI for assessments, which can involve a range of academic, cognitive and medical evaluations. AI applications that pair automatic speech recognition and language processing are now being employed in computer-mediated oral reading assessments to score tests of student reading ability.
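The scoring half of that pipeline is straightforward to sketch: align whatever transcript the ASR system produces against the passage the student was asked to read, and count the matches. The example below uses Python’s standard difflib for the alignment; the transcript here is a made-up stand-in for real ASR output.

```python
import difflib

def words_correct(reference: str, transcript: str) -> int:
    """Count words read correctly, by aligning the ASR transcript
    against the target passage."""
    ref = reference.lower().split()
    hyp = transcript.lower().split()
    matcher = difflib.SequenceMatcher(a=ref, b=hyp)
    return sum(block.size for block in matcher.get_matching_blocks())

# Toy example: the passage vs. what the ASR system heard.
passage = "the quick brown fox jumps over the lazy dog"
asr_output = "the quick brown fox jumped over the dog"

correct = words_correct(passage, asr_output)
print(f"{correct}/{len(passage.split())} words correct")  # 7/9
```

Dividing the count by reading time gives the familiar words-correct-per-minute score. The hard problems discussed below, such as accented speech and classroom noise, live in the recognition step, not in this arithmetic.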

Practitioners often struggle to make sense of the volume of data that schools collect. AI-driven machine learning tools also can help here, by identifying patterns that may not be immediately visible to educators for evaluation or instructional decision-making. Such support may be especially useful in diagnosing disabilities such as autism or learning disabilities, where masking, variable presentation and incomplete histories can make interpretation difficult. My ongoing research shows that current AI can make predictions based on data likely to be available in some districts.
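A minimal sketch of such a screening model, built with scikit-learn on entirely synthetic data, shows the shape of the approach. The features and labels here are hypothetical stand-ins for records a district might already hold, and a real tool would need far more careful validation plus the bias checks discussed below.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Hypothetical features a district might already have on file:
# reading score, absences, teacher-rated attention, prior referrals.
n = 500
X = rng.normal(size=(n, 4))
# Synthetic labels standing in for "evaluated and qualified for services".
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.8, size=n) > 0.7).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Flag students for human review -- the model screens; it never decides.
print(classification_report(y_test, model.predict(X_test)))
print("feature importances:", model.feature_importances_.round(2))
```

Because the labels encode past identification decisions, the model can only learn the patterns, good and bad, already present in them, which is the source of the embedded bias described below.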

Privacy and trust concerns

There are serious ethical – and practical – questions about these AI-supported interventions, ranging from risks to students’ privacy to machine bias and deeper issues tied to family trust. Some hinge on the question of whether or not AI systems can deliver services that truly comply with existing law.

The Individuals with Disabilities Education Act requires nondiscriminatory methods of evaluating disabilities to avoid inappropriately identifying students for services or neglecting to serve those who qualify. And the Family Educational Rights and Privacy Act explicitly protects students’ data privacy and the rights of parents to access and hold their children’s data.

What happens if an AI system uses biased data or methods to generate a recommendation for a child? What if a child’s data is misused or leaked by an AI system? Using AI systems to perform some of the functions described above puts families in a position where they are expected to put their faith not only in their school district and its special education personnel, but also in commercial AI systems, the inner workings of which are largely inscrutable.

These ethical qualms are hardly unique to special ed; many have been raised in other fields and addressed by early adopters. For example, while automatic speech recognition, or ASR, systems have struggled to accurately assess accented English, many vendors now train their systems to accommodate specific ethnic and regional accents.

But ongoing research work suggests that some ASR systems are limited in their capacity to accommodate speech differences associated with disabilities, account for classroom noise, and distinguish between different voices. While these issues may be addressed through technical improvement in the future, they are consequential at present.

Embedded bias

At first glance, machine learning models might appear to improve on traditional clinical decision-making. Yet AI models must be trained on existing data, meaning their decisions may continue to reflect long-standing biases in how disabilities have been identified.

Indeed, research has shown that AI systems are routinely hobbled by biases within both training data and system design. AI models can also introduce new biases, either by missing subtle information revealed during in-person evaluations or by overrepresenting characteristics of groups included in the training data.

Such concerns, defenders might argue, are addressed by safeguards already embedded in federal law. Families have considerable latitude in what they agree to, and can opt for alternatives, provided they are aware they can direct the IEP process.


By a similar token, using AI tools to build IEPs or lessons may seem like an obvious improvement over underdeveloped or perfunctory plans. Yet true individualization would require feeding protected data into large language models, which could violate privacy regulations. And while AI applications can readily produce better-looking IEPs and other paperwork, this does not necessarily result in improved services.

Filling the gap

Indeed, it is not yet clear whether AI provides a standard of care equivalent to the high-quality, conventional treatment to which children with disabilities are entitled under federal law.

The Supreme Court in 2017 rejected the notion that the Individuals with Disabilities Education Act merely entitles students to trivial, “de minimis” progress, which weakens one of the primary rationales for pursuing AI – that it can meet a minimum standard of care and practice. And since AI really has not been empirically evaluated at scale, it has not been proved that it adequately meets the low bar of simply improving beyond the flawed status quo.

But this does not change the reality of limited resources. For better or worse, AI is already being used to fill the gap between what the law requires and what the system actually provides.

Seth King, Associate Professor of Special Education, University of Iowa

This article is republished from The Conversation under a Creative Commons license. Read the original article.

