Researchers Find Little Evidence of Cheating with Online, Unsupervised Exams

Last Updated on September 6, 2025 by Daily News Staff

Credit: Christopher Gannon/Iowa State University.
Students work on laptops above “Gene Pool,” a tile mosaic by Andrew Leicester inside the Molecular Biology Building at Iowa State University.

Newswise — AMES, IA — When Iowa State University switched from in-person to remote learning halfway through the spring semester of 2020, psychology professor Jason Chan was worried. Would unsupervised, online exams unleash rampant cheating?

His initial reaction flipped to surprise as test results rolled in. Individual student scores were slightly higher but consistent with their results from in-person, proctored exams. Those receiving B’s before the COVID-19 lockdown were still pulling in B’s when the tests were online and unsupervised. This pattern held true for students up and down the grading scale.

“The fact that the student rankings stayed mostly the same regardless of whether they were taking in-person or online exams indicated that cheating was either not prevalent or that it was ineffective at significantly boosting scores,” says Chan.

To find out whether this pattern held at a broader level, Chan and Dahwi Ahn, a Ph.D. candidate in psychology, analyzed test score data from nearly 2,000 students across 18 classes during the spring 2020 semester. Their sample ranged from high-enrollment, lecture-style courses, like introductory statistics, to advanced courses in engineering and veterinary medicine.

Across different academic disciplines, class sizes, course levels and test styles (i.e., predominantly multiple choice or short answer), the researchers found the same results. Unsupervised, online exams produced scores very similar to in-person, proctored exams, indicating they can provide a valid and reliable assessment of student learning.

The research findings were recently published in Proceedings of the National Academy of Sciences.

“Before conducting this research, I had doubts about online and unproctored exams, and I was quite hesitant to use them if there was an option to have them in-person. But after seeing the data, I feel more confident and hope other instructors will, as well,” says Ahn.

Both researchers say they’ve continued to give exams online, even for in-person classes. Chan says this format provides more flexibility for students who have part-time jobs or travel for sports and extracurricular activities. It also expands options for teaching remote classes. Ahn taught her first online course over the summer.

Why might cheating have had a minimal effect on test scores?

The researchers say students more likely to cheat might be underperforming in the class and anxious about failing. Perhaps they’ve skipped lectures, fallen behind with studying or feel uncomfortable asking for help. Even with the option of searching Google during an unmonitored exam, students may struggle to find the correct answer if they don’t understand the content. In their paper, the researchers point to evidence from previous studies comparing test scores from open-book and closed-book exams.

Another factor that may deter cheating is academic integrity or a sense of fairness, something many students value, says Chan. Those who have studied hard and take pride in their grades may be more inclined to protect their exam answers from students they view as freeloaders.


Still, the researchers say instructors should be aware of potential weak spots with unsupervised, online exams. For example, some platforms have the option of showing students the correct answer immediately after they select a multiple-choice option. This makes it much easier for students to share answers in a group text.

To counter this and other forms of cheating, instructors can:

  • Wait to release exam answers until the test window closes.
  • Use larger, randomized question banks (see the sketch after this list).
  • Add more answer options to multiple-choice questions and make the correct choice less obvious.
  • Adjust grade cutoffs.
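As a rough illustration of the question-bank idea, here is a minimal sketch in Python. The bank contents, topic names and sample sizes are hypothetical and not tied to any particular exam platform:

    import random

    # Hypothetical question bank: each topic maps to a pool of
    # interchangeable questions of similar difficulty.
    QUESTION_BANK = {
        "memory":     ["Q1", "Q2", "Q3", "Q4", "Q5"],
        "perception": ["Q6", "Q7", "Q8", "Q9"],
        "learning":   ["Q10", "Q11", "Q12", "Q13"],
    }

    def build_exam(per_topic=2, seed=None):
        """Draw a random subset from each topic pool and shuffle the
        overall order, so no two students see the same exam."""
        rng = random.Random(seed)
        exam = []
        for pool in QUESTION_BANK.values():
            exam.extend(rng.sample(pool, per_topic))
        rng.shuffle(exam)
        return exam

    print(build_exam())  # e.g. ['Q12', 'Q2', 'Q7', 'Q4', 'Q9', 'Q13']

Because each student receives a different draw in a different order, a shared answer key loses most of its value.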

COVID-19 and ChatGPT

Chan and Ahn say the spring 2020 semester provided a unique opportunity to research the validity of online exams for student evaluations. However, there were some limitations. For example, it wasn’t clear what role stress and other COVID-19-related disruptions may have played for students, faculty and teaching assistants. Perhaps instructors were more lenient with grading or gave longer windows of time to complete exams.

The researchers said another limitation was not knowing whether the 18 classes in the sample normally become easier or harder as the semester progresses. In an ideal experiment, half of the students would have taken online exams for the first half of the semester and in-person exams for the second half.

They attempted to account for these two concerns by looking at older test score data from a subset of the 18 classes during semesters when they were fully in-person. The researchers found the distribution of grades in each class was consistent with the spring 2020 semester and concluded that the materials covered in the first and second halves of the semester did not differ in their difficulty.
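To make the consistency check concrete, here is a minimal sketch of how one might test whether student rankings are preserved across exam formats, using a rank correlation. This is an illustration only, not the researchers’ analysis code, and the scores are invented:

    # Illustrative only: hypothetical scores for the same eight students
    # on a proctored in-person exam and an unproctored online exam.
    from scipy.stats import spearmanr

    in_person = [92, 85, 78, 88, 70, 95, 60, 82]
    online    = [94, 86, 80, 87, 73, 96, 63, 85]

    rho, p = spearmanr(in_person, online)
    print(f"Spearman rank correlation: {rho:.2f} (p = {p:.4f})")

A correlation near 1 would mean the B students stayed B students and the ranking survived the change in format, which is the pattern the researchers describe.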

At the time of data collection for this study, ChatGPT wasn’t available to students. But the researchers acknowledge AI writing tools are a game-changer in education and could make it much harder for instructors to evaluate their students. Understanding how instructors should approach online exams with the advent of ChatGPT is something Ahn intends to research.

The study was supported by a National Science Foundation Science of Learning and Augmented Intelligence Grant.

Journal Link: Proceedings of the National Academy of Sciences

Source: Iowa State University


More than half of new articles on the internet are being written by AI – is human writing headed for extinction?

A new study finds over 50% of online articles are now AI-generated, raising questions about the future of human writing. Discover why formulaic content is most at risk, and why authentic, creative voices may become more valuable than ever.

Preserving the value of real human voices will likely depend on how people adapt to artificial intelligence and collaborate with it. BlackJack3D/E+ via Getty Images


Francesco Agnellini, Binghamton University, State University of New York

The line between human and machine authorship is blurring, particularly as it’s become increasingly difficult to tell whether something was written by a person or AI. Now, in what may seem like a tipping point, the digital marketing firm Graphite recently published a study showing that more than 50% of articles on the web are being generated by artificial intelligence.

As a scholar who explores how AI is built, how people are using it in their everyday lives, and how it’s affecting culture, I’ve thought a lot about what this technology can do and where it falls short. If you’re more likely to read something written by AI than by a human on the internet, is it only a matter of time before human writing becomes obsolete? Or is this simply another technological development that humans will adapt to?

It isn’t all or nothing

Thinking about these questions reminded me of Umberto Eco’s essay “Apocalyptic and Integrated,” which was originally written in the early 1960s. Parts of it were later included in an anthology titled “Apocalypse Postponed,” which I first read as a college student in Italy. In it, Eco draws a contrast between two attitudes toward mass media. There are the “apocalyptics” who fear cultural degradation and moral collapse. Then there are the “integrated” who champion new media technologies as a democratizing force for culture.
Italian philosopher, cultural critic and novelist Umberto Eco cautioned against overreacting to the impact of new technologies. Leonardo Cendamo/Getty Images
Back then, Eco was writing about the proliferation of TV and radio. Today, you’ll often see similar reactions to AI. Yet Eco argued that both positions were too extreme. It isn’t helpful, he wrote, to see new media as either a dire threat or a miracle. Instead, he urged readers to look at how people and communities use these new tools, what risks and opportunities they create, and how they shape – and sometimes reinforce – power structures.

While I was teaching a course on deepfakes during the 2024 election, Eco’s lesson also came back to me. Those were days when some scholars and media outlets were regularly warning of an imminent “deepfake apocalypse.” Would deepfakes be used to mimic major political figures and push targeted disinformation? What if, on the eve of an election, generative AI was used to mimic the voice of a candidate on a robocall telling voters to stay home?

Those fears weren’t groundless: Research shows that people aren’t especially good at identifying deepfakes. At the same time, they consistently overestimate their ability to do so. In the end, though, the apocalypse was postponed. Post-election analyses found that deepfakes did seem to intensify some ongoing political trends, such as the erosion of trust and polarization, but there’s no evidence that they affected the final outcome of the election.

Listicles, news updates and how-to guides

Of course, the fears that AI raises for supporters of democracy are not the same as those it creates for writers and artists. For them, the core concerns are about authorship: How can one person compete with a system trained on millions of voices that can produce text at hyper-speed? And if this becomes the norm, what will it do to creative work, both as an occupation and as a source of meaning?

It’s important to clarify what’s meant by “online content,” the phrase used in the Graphite study, which analyzed over 65,000 randomly selected articles of at least 100 words on the web. These can include anything from peer-reviewed research to promotional copy for miracle supplements. A closer reading of the Graphite study shows that the AI-generated articles consist largely of general-interest writing: news updates, how-to guides, lifestyle posts, reviews and product explainers. The primary economic purpose of this content is to persuade or inform, not to express originality or creativity.

Put differently, AI appears to be most useful when the writing in question is low-stakes and formulaic: the weekend-in-Rome listicle, the standard cover letter, the text produced to market a business. A whole industry of writers – mostly freelance, including many translators – has relied on precisely this kind of work, producing blog posts, how-to material, search engine optimization text and social media copy. The rapid adoption of large language models has already displaced many of the gigs that once sustained them.

Collaborating with AI

The dramatic loss of this work points toward another issue raised by the Graphite study: the question of authenticity, not only in identifying who or what produced a text, but also in understanding the value that humans attach to creative activity. How can you distinguish a human-written article from a machine-generated one? And does that ability even matter?

Over time, that distinction is likely to grow less significant, particularly as more writing emerges from interactions between humans and AI. A writer might draft a few lines, let an AI expand them and then reshape that output into the final text. This article is no exception. As a non-native English speaker, I often rely on AI to refine my language before sending drafts to an editor. At times the system attempts to reshape what I mean. But once its stylistic tendencies become familiar, it becomes possible to avoid them and maintain a personal tone.

Also, artificial intelligence is not entirely artificial, since it is trained on human-made material. It’s worth noting that even before AI, human writing has never been entirely human, either. Every technology, from parchment and stylus to the typewriter and now AI, has shaped how people write and how readers make sense of it.

Another important point: AI models are increasingly trained on datasets that include not only human writing but also AI-generated and human-AI co-produced text. This has raised concerns about their ability to continue improving over time. Some commentators have already described a sense of disillusionment following the release of newer large models, with companies struggling to deliver on their promises.

Human voices may matter even more

But what happens when people become overly reliant on AI in their writing? Some studies show that writers may feel more creative when they use artificial intelligence for brainstorming, yet the range of ideas often becomes narrower. This uniformity affects style as well: These systems tend to pull users toward similar patterns of wording, which reduces the differences that usually mark an individual voice. Researchers also note a shift toward Western – and especially English-speaking – norms in the writing of people from other cultures, raising concerns about a new form of AI colonialism.

In this context, texts that display originality, voice and stylistic intention are likely to become even more meaningful within the media landscape, and they may play a crucial role in training the next generations of models. If you set aside the more apocalyptic scenarios and assume that AI will continue to advance – perhaps at a slower pace than in the recent past – it’s quite possible that thoughtful, original, human-generated writing will become even more valuable. Put another way: The work of writers, journalists and intellectuals will not become superfluous simply because much of the web is no longer written by humans.

Francesco Agnellini, Lecturer in Digital and Data Studies, Binghamton University, State University of New York

This article is republished from The Conversation under a Creative Commons license. Read the original article.
 




Learning with AI falls short compared to old-fashioned web search

Learning with AI falls short: New research with 10,000+ participants reveals people who learn using ChatGPT develop shallower knowledge than those using Google search. Discover why AI-generated summaries reduce learning effectiveness and how to use AI tools strategically for education.

The work of seeking and synthesizing information can improve understanding of it compared to reading a summary. Tom Werner/DigitalVision via Getty Images


Shiri Melumad, University of Pennsylvania

Since the release of ChatGPT in late 2022, millions of people have started using large language models to access knowledge. And it’s easy to understand their appeal: Ask a question, get a polished synthesis and move on – it feels like effortless learning. However, a new paper I co-authored offers experimental evidence that this ease may come at a cost: When people rely on large language models to summarize information on a topic for them, they tend to develop shallower knowledge about it compared to learning through a standard Google search.

Co-author Jin Ho Yun and I, both professors of marketing, reported this finding in a paper based on seven studies with more than 10,000 participants. Most of the studies used the same basic paradigm: Participants were asked to learn about a topic – such as how to grow a vegetable garden – and were randomly assigned to do so by using either an LLM like ChatGPT or the “old-fashioned way,” by navigating links using a standard Google search. No restrictions were put on how they used the tools; they could search on Google as long as they wanted and could continue to prompt ChatGPT if they felt they wanted more information. Once they completed their research, they were then asked to write advice to a friend on the topic based on what they learned.

The data revealed a consistent pattern: People who learned about a topic through an LLM versus web search felt that they learned less, invested less effort in subsequently writing their advice, and ultimately wrote advice that was shorter, less factual and more generic. In turn, when this advice was presented to an independent sample of readers, who were unaware of which tool had been used to learn about the topic, they found the advice to be less informative and less helpful, and they were less likely to adopt it.

We found these differences to be robust across a variety of contexts. For example, one possible reason LLM users wrote briefer and more generic advice is simply that the LLM results exposed users to less eclectic information than the Google results. To control for this possibility, we conducted an experiment where participants were exposed to an identical set of facts in the results of their Google and ChatGPT searches. Likewise, in another experiment we held constant the search platform – Google – and varied whether participants learned from standard Google results or Google’s AI Overview feature. The findings confirmed that, even when holding the facts and platform constant, learning from synthesized LLM responses led to shallower knowledge compared to gathering, interpreting and synthesizing information for oneself via standard web links.

Why it matters

Why did the use of LLMs appear to diminish learning? One of the most fundamental principles of skill development is that people learn best when they are actively engaged with the material they are trying to learn. When we learn about a topic through Google search, we face much more “friction”: We must navigate different web links, read informational sources, and interpret and synthesize them ourselves. While more challenging, this friction leads to the development of a deeper, more original mental representation of the topic at hand. But with LLMs, this entire process is done on the user’s behalf, transforming learning from an active process into a passive one.

What’s next?

To be clear, we do not believe the solution to these issues is to avoid using LLMs, especially given the undeniable benefits they offer in many contexts. Rather, our message is that people simply need to become smarter or more strategic users of LLMs – which starts by understanding the domains wherein LLMs are beneficial versus harmful to their goals. Need a quick, factual answer to a question? Feel free to use your favorite AI co-pilot. But if your aim is to develop deep and generalizable knowledge in an area, relying on LLM syntheses alone will be less helpful.

As part of my research on the psychology of new technology and new media, I am also interested in whether it’s possible to make LLM learning a more active process. In another experiment we tested this by having participants engage with a specialized GPT model that offered real-time web links alongside its synthesized responses. There, however, we found that once participants received an LLM summary, they weren’t motivated to dig deeper into the original sources. The result was that the participants still developed shallower knowledge compared to those who used standard Google.

Building on this, in my future research I plan to study generative AI tools that impose healthy frictions for learning tasks – specifically, examining which types of guardrails or speed bumps most successfully motivate users to actively learn more beyond easy, synthesized answers. Such tools would seem particularly critical in secondary education, where a major challenge for educators is how best to equip students to develop foundational reading, writing and math skills while also preparing for a real world where LLMs are likely to be an integral part of their daily lives.

The Research Brief is a short take on interesting academic work.

Shiri Melumad, Associate Professor of Marketing, University of Pennsylvania

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Leading with Purpose in the Age of AI

Leading with Purpose in the Age of AI: Cognizant guides organizations in AI adoption, addressing challenges like talent shortages and governance while empowering employees to transform business practices and achieve lasting impact.

Last Updated on November 10, 2025 by Daily News Staff


(Family Features) In today’s AI-powered economy, transformation is no longer optional – it’s essential. Enterprises are eager to embrace generative and agentic AI, but many lack the clarity and confidence to scale it responsibly.

As a global leader in technology and consulting services, Cognizant is helping organizations bridge that gap – turning possibility into progress.

The Moment is Now

AI is reshaping industries, redefining roles, and revolutionizing decision-making. According to Cognizant Research, 61% of senior decision-makers expect AI to drive complete business transformation. Yet 83% feel unprepared to embed AI into their organizations, citing gaps in talent, governance, and culture.

This disconnect presents a powerful opportunity.

“In the age of AI, transformation isn’t just about technology, it’s about trust, talent and the ability to turn possibility into progress,” said Shveta Arora, head of Cognizant Consulting. “The true impact of AI is delivered when organizations build trust, invest in adaptable talent and embrace bold ideas. By empowering people and embedding AI responsibly, leaders can bridge the gap between potential and progress, ensuring lasting value for business and society.”

A Trusted Voice in AI

As a recognized leader in AI strategy and enterprise transformation, Cognizant brings credibility and clarity to this evolving space. It has been named a Leader and Star Performer by Everest Group in its 2024 AI and Generative AI Services PEAK Matrix Assessment, underscoring its strategic vision and execution.

Thought leadership in AI strategy and enterprise transformation, published across thousands of U.S. outlets, has reinforced its position as a trusted voice in shaping the future of AI. It has also been recognized across the industry for excellence in client service and innovation.


Its platforms – Neuro, Flowsource and the Data and Intelligence Toolkit – are driving real-world impact across industries. Furthermore, a strategic collaboration with a leading enterprise-grade generative AI provider enables secure and scalable deployment of agentic AI in regulated settings, ensuring adherence to compliance and data governance standards.

Bridging the AI Adoption Gap

When a leading property intelligence provider’s IT systems were slowing turnaround times, the company turned to Cognizant’s Gen AI-powered Data as a Service and Neuro Business Process (BP) platform. Driven by AI insights and learning, Neuro BP centralized business processing. It automated data collection, case reviews and decision-making to align with the client’s goals. Powered by the platform, the organization saw a reduction in processing time and errors and an increase in productivity.

Stories like these are still the exception.

Despite enthusiasm and investment – global businesses are spending an average of $47.5 million on generative AI this year – many organizations feel they’re moving too slowly. The barriers include talent shortages, infrastructure gaps and unclear governance. These challenges can be overcome by moving from experimentation to execution. With clarity, credibility and conviction, organizations can scale AI responsibly and effectively.

Accelerating Enterprise AI Transformations

Unlike traditional software, AI models are contextual computing engines. They don’t require every path to be spelled out in advance; instead, they interpret broad instructions and intent and adapt based on the context they are given. Agentic AI systems that lack business-specific knowledge can produce generic or unreliable outputs.

To address this, enterprises need systems that can deliver the right information and tools to AI models – enabling accurate decisions, alignment with human goals, compliance with policy frameworks and adaptability to real-time challenges. This is the role of context engineering, an emerging discipline focused on delivering the right context at the right time to agentic systems. Context refers to the sum of a company’s institutional knowledge, including its operating models, roles, goals, metrics, processes, policies and governance – essential ingredients for effective AI.
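As a generic sketch of what context engineering can look like in practice, here is a minimal Python illustration of assembling institutional knowledge into an agent’s prompt. All names and values are hypothetical; this is not Cognizant’s platform code:

    from dataclasses import dataclass, field

    # Hypothetical container for the institutional knowledge an agent needs.
    @dataclass
    class EnterpriseContext:
        operating_model: str
        policies: list = field(default_factory=list)
        metrics: dict = field(default_factory=dict)

    def build_prompt(task, ctx):
        """Combine the task with the business context so the model's
        output stays aligned with policy and current goals."""
        policy_text = "\n".join(f"- {p}" for p in ctx.policies)
        metric_text = "\n".join(f"- {k}: {v}" for k, v in ctx.metrics.items())
        return (
            f"Operating model: {ctx.operating_model}\n"
            f"Policies to comply with:\n{policy_text}\n"
            f"Current metrics:\n{metric_text}\n"
            f"Task: {task}"
        )

    ctx = EnterpriseContext(
        operating_model="Regional claims processing with a 48-hour SLA",
        policies=["Escalate any claim over $50,000 to a human reviewer"],
        metrics={"avg_processing_hours": 36.5},
    )
    print(build_prompt("Triage today's incoming claims.", ctx))

The point of the pattern is that the instruction the model receives is mostly context, not task: the policies, metrics and operating model travel with every request rather than being assumed.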

To guide clients through their AI journey, Cognizant developed the Strategic Enterprise Agentification Framework, an end-to-end model designed to unlock productivity, market expansion and new business models.

At its core is the Agent Development Lifecycle (ADLC), which guides the development of enterprise agents and agentic AI systems across six distinct stages. ADLC embeds context engineering throughout and supports seamless integration with business applications, ensuring agents are tailored to real-world enterprise dynamics.


To help bridge vision and execution, businesses can utilize the Neuro AI Multi-Agent Accelerator. This no-code framework allows rapid deployment of custom multi-agent systems.

People Power the Progress

Technology alone doesn’t transform enterprises – people do. With an AI-driven Workforce Transformation (WFT), Cognizant helps organizations reskill employees, redesign roles and build AI fluency. Integrated with the Agentification Framework, WFT is designed to accelerate transformation and support long-term resilience.

From Possibility to Progress

From strategic frameworks to enterprise platforms to workforce readiness, Cognizant equips organizations with the confidence to harness AI responsibly and at scale. In the age of AI, it’s not just about transformation – it’s about leading with purpose.

Explore more at cognizant.com.

Photo courtesy of Shutterstock

SOURCE: Cognizant

