Artificial Intelligence
Researchers Find Little Evidence of Cheating with Online, Unsupervised Exams
Newswise — AMES, IA — When Iowa State University switched from in-person to remote learning halfway through the spring semester of 2020, psychology professor Jason Chan was worried. Would unsupervised, online exams unleash rampant cheating?
His initial reaction flipped to surprise as test results rolled in. Individual student scores were slightly higher but consistent with their results from in-person, proctored exams. Those receiving B’s before the COVID-19 lockdown were still pulling in B’s when the tests were online and unsupervised. This pattern held true for students up and down the grading scale.
“The fact that the student rankings stayed mostly the same regardless of whether they were taking in-person or online exams indicated that cheating was either not prevalent or that it was ineffective at significantly boosting scores,” says Chan.
To know if this was happening at a broader level, Chan and Dahwi Ahn, a Ph.D. candidate in psychology, analyzed test score data from nearly 2,000 students across 18 classes during the spring 2020 semester. Their sample ranged from high-enrollment, lecture-style courses, like introduction to statistics, to advanced courses in engineering and veterinary medicine.
Across different academic disciplines, class sizes, course levels and test styles (i.e., predominantly multiple choice or short answer), the researchers found the same results. Unsupervised, online exams produced scores very similar to in-person, proctored exams, indicating they can provide a valid and reliable assessment of student learning.
The research findings were recently published in Proceedings of the National Academy of Sciences.
“Before conducting this research, I had doubts about online and unproctored exams, and I was quite hesitant to use them if there was an option to have them in-person. But after seeing the data, I feel more confident and hope other instructors will, as well,” says Ahn.
Both researchers say they’ve continued to give exams online, even for in-person classes. Chan says this format provides more flexibility for students who have part-time jobs or travel for sports and extra-curriculars. It also expands options for teaching remote classes. Ahn led her first online course over the summer.
Why might cheating have had a minimal effect on test scores?
The researchers say students more likely to cheat might be underperforming in the class and anxious about failing. Perhaps they’ve skipped lectures, fallen behind with studying or feel uncomfortable asking for help. Even with the option of searching Google during an unmonitored exam, students may struggle to find the correct answer if they don’t understand the content. In their paper, the researchers point to evidence from previous studies comparing test scores from open-book and closed-book exams.
Another factor that may deter cheating is academic integrity or a sense of fairness, something many students value, says Chan. Those who have studied hard and take pride in their grades may be more inclined to protect their exam answers from students they view as freeloaders.
Still, the researchers say instructors should be aware of potential weak spots with unsupervised, online exams. For example, some platforms have the option of showing students the correct answer immediately after they select a multiple-choice option. This makes it much easier for students to share answers in a group text.
To counter this and other forms of cheating, instructors can:
- Wait to release exam answers until the test window closes.
- Use larger, randomized question banks.
- Add more options to multiple-choice questions and make the right choice less obvious.
- Adjust grade cutoffs.
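The randomized-question-bank idea above can be sketched in a few lines. This is a minimal illustration, not any exam platform's actual implementation; the bank contents and per-student seeds are hypothetical placeholders:

```python
import random

# Hypothetical question bank: each topic maps to several interchangeable variants.
question_bank = {
    "t-tests": ["Variant A ...", "Variant B ...", "Variant C ..."],
    "regression": ["Variant A ...", "Variant B ...", "Variant C ..."],
    "anova": ["Variant A ...", "Variant B ...", "Variant C ..."],
}

def build_exam(bank, seed):
    """Draw one random variant per topic, shuffling topic order per student."""
    rng = random.Random(seed)  # seeding per student keeps each exam reproducible
    topics = list(bank)
    rng.shuffle(topics)
    return [(topic, rng.choice(bank[topic])) for topic in topics]

exam_a = build_exam(question_bank, seed="student-001")
exam_b = build_exam(question_bank, seed="student-002")
# Every student is tested on every topic, but the variants and their
# order differ, so sharing answers in a group text is far less useful.
```

Because each exam is derived from a seed rather than stored, regenerating a student's exact exam for grading or dispute resolution is trivial.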
COVID-19 and ChatGPT
Chan and Ahn say the spring 2020 semester provided a unique opportunity to research the validity of online exams for student evaluations. However, there were some limitations. For example, it wasn’t clear what role stress and other COVID-19-related impacts may have played on students, faculty and teaching assistants. Perhaps instructors were more lenient with grading or gave longer windows of time to complete exams.
The researchers said another limitation was not knowing if the 18 classes in the sample normally get easier or harder as the semester progresses. In an ideal experiment, half of the students would have taken online exams for the first half of the semester and in-person exams for the second half.
They attempted to account for these two concerns by looking at older test score data from a subset of the 18 classes during semesters when they were fully in-person. The researchers found the distribution of grades in each class was consistent with the spring 2020 semester and concluded that the materials covered in the first and second halves of the semester did not differ in their difficulty.
At the time of data collection for this study, ChatGPT wasn’t available to students. But the researchers acknowledge AI writing tools are a game changer in education and could make it much harder for instructors to evaluate their students. Understanding how instructors should approach online exams with the advent of ChatGPT is something Ahn intends to research.
The study was supported by a National Science Foundation Science of Learning and Augmented Intelligence Grant.
Journal Link: Proceedings of the National Academy of Sciences
Source: Iowa State University
Discover more from Daily News
Subscribe to get the latest posts sent to your email.
Tech
From shrimp Jesus to fake self-portraits, AI-generated images have become the latest form of social media spam
Renee DiResta, Stanford University; Abhiram Reddy, Georgetown University, and Josh A. Goldstein, Georgetown University
If you’ve spent time on Facebook over the past six months, you may have noticed photorealistic images that are too good to be true: children holding paintings that look like the work of professional artists, or majestic log cabin interiors that are the stuff of Airbnb dreams.
Others, such as renderings of Jesus made out of crustaceans, are just bizarre.
Like the AI image of the pope in a puffer jacket that went viral in May 2023, these AI-generated images are increasingly prevalent – and popular – on social media platforms. Even as many of them border on the surreal, they’re often used to bait engagement from ordinary users.
Our team of researchers from the Stanford Internet Observatory and Georgetown University’s Center for Security and Emerging Technology investigated over 100 Facebook pages that posted high volumes of AI-generated content. We published the results in March 2024 as a preprint paper, meaning the findings have not yet gone through peer review.
We explored patterns of images, unearthed evidence of coordination between some of the pages, and tried to discern the likely goals of the posters.
Page operators seemed to be posting pictures of AI-generated babies, kitchens or birthday cakes for a range of reasons.
There were content creators innocuously looking to grow their followings with synthetic content; scammers using pages stolen from small businesses to advertise products that don’t seem to exist; and spammers sharing AI-generated images of animals while referring users to websites filled with advertisements, which allow the owners to collect ad revenue without creating high-quality content.
Our findings suggest that these AI-generated images draw in users – and Facebook’s recommendation algorithm may be organically promoting these posts.
Generative AI meets scams and spam
Internet spammers and scammers are nothing new.
For more than two decades, they’ve used unsolicited bulk email to promote pyramid schemes. They’ve targeted senior citizens while posing as Medicare representatives or computer technicians.
On social media, profiteers have used clickbait articles to drive users to ad-laden websites. Recall the 2016 U.S. presidential election, when Macedonian teenagers shared sensational political memes on Facebook and collected advertising revenue after users visited the URLs they posted. The teens didn’t care who won the election. They just wanted to make a buck.
In the early 2010s, spammers captured people’s attention with ads promising that anyone could lose belly fat or learn a new language with “one weird trick.”
AI-generated content has become another “weird trick.”
It’s visually appealing and cheap to produce, allowing scammers and spammers to generate high volumes of engaging posts. Some of the pages we observed uploaded dozens of unique images per day. In doing so, they followed Meta’s own advice for page creators. Frequent posting, the company suggests, helps creators get the kind of algorithmic pickup that leads their content to appear in the “Feed,” formerly known as the “News Feed.”
Much of the content is still, in a sense, clickbait: Shrimp Jesus makes people pause to gawk and inspires shares purely because it is so bizarre.
Many users react by liking the post or leaving a comment. This signals to the algorithmic curators that perhaps the content should be pushed into the feeds of even more people.
Some of the more established spammers we observed, likely recognizing this, improved their engagement by pivoting from posting URLs to posting AI-generated images. They would then post the URLs of the ad-laden content farms they wanted users to click in the comments beneath those images.
But more ordinary creators capitalized on the engagement of AI-generated images, too, without obviously violating platform policies.
Rate ‘my’ work!
When we looked up the posts’ captions on CrowdTangle – a social media monitoring platform owned by Meta and set to sunset in August – we found that they were “copypasta” captions, which means that they were repeated across posts.
Some of the copypasta captions baited interaction by directly asking users to, for instance, rate a “painting” by a first-time artist – even when the image was generated by AI – or to wish an elderly person a happy birthday. Facebook users often replied to AI-generated images with comments of encouragement and congratulations.
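Spotting “copypasta” captions is, at its core, a duplicate count over caption strings. A minimal sketch of that idea, using invented sample captions rather than the study's actual CrowdTangle data:

```python
from collections import Counter

# Hypothetical captions scraped from several pages (illustrative, not real data).
captions = [
    "Made it with my own hands!",
    "My first painting, rate it 1-10!",
    "My first painting, rate it 1-10!",
    "Happy 102nd birthday to grandma!",
    "My first painting, rate it 1-10!",
]

def copypasta(captions, min_repeats=2):
    """Return captions repeated across posts, most frequent first."""
    counts = Counter(c.strip().lower() for c in captions)  # normalize lightly
    return [(text, n) for text, n in counts.most_common() if n >= min_repeats]

# The repeated "rate it" caption surfaces with its count; unique captions drop out.
repeated = copypasta(captions)
```

Real pipelines would add fuzzier matching (emoji stripping, near-duplicate hashing), but exact-match counting alone already surfaces the most blatant reuse.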
Algorithms push AI-generated content
Our investigation noticeably altered our own Facebook feeds: Within days of visiting the pages – and without commenting on, liking or following any of the material – Facebook’s algorithm recommended reams of other AI-generated content.
Interestingly, the fact that we had viewed clusters of, for example, AI-generated miniature cow pages didn’t lead to a short-term increase in recommendations for pages focused on actual miniature cows, normal-sized cows or other farm animals. Rather, the algorithm recommended pages on a range of topics and themes, but with one thing in common: They contained AI-generated images.
In 2022, the technology website The Verge detailed an internal Facebook memo about proposed changes to the company’s algorithm.
The algorithm, according to the memo, would become a “discovery-engine,” allowing users to come into contact with posts from individuals and pages they didn’t explicitly seek out, akin to TikTok’s “For You” page.
We analyzed Facebook’s own “Widely Viewed Content Reports,” which list the most popular content, domains, links, pages and posts on the platform per quarter.
It showed that the proportion of content that users saw from pages and people they don’t follow steadily increased between 2021 and 2023. Changes to the algorithm have allowed more room for AI-generated content to be organically recommended without prior engagement – perhaps explaining our experiences and those of other users.
‘This post was brought to you by AI’
Since Meta currently does not flag AI-generated content by default, we sometimes observed users warning others about scams or spam AI content with infographics.
Meta, however, seems to be aware of potential issues if AI-generated content blends into the information environment without notice. The company has released several announcements about how it plans to deal with AI-generated content.
Starting in May 2024, Facebook will apply a “Made with AI” label to content it can reliably detect as synthetic.
But the devil is in the details. How accurate will the detection models be? What AI-generated content will slip through? What content will be inappropriately flagged? And what will the public make of such labels?
While our work focused on Facebook spam and scams, there are broader implications.
Reporters have written about AI-generated videos targeting kids on YouTube and influencers on TikTok who use generative AI to turn a profit.
Social media platforms will have to reckon with how to treat AI-generated content; it’s certainly possible that user engagement will wane if online worlds become filled with artificially generated posts, images and videos.
Shrimp Jesus may be an obvious fake. But the challenge of assessing what’s real is only heating up.
Renee DiResta, Research Manager of the Stanford Internet Observatory, Stanford University; Abhiram Reddy, Research Assistant at the Center for Security and Emerging Technology, Georgetown University, and Josh A. Goldstein, Research Fellow at the Center for Security and Emerging Technology, Georgetown University
This article is republished from The Conversation under a Creative Commons license. Read the original article.
The science section of our news blog STM Daily News provides readers with captivating and up-to-date information on the latest scientific discoveries, breakthroughs, and innovations across various fields. We offer engaging and accessible content, ensuring that readers with different levels of scientific knowledge can stay informed. Whether it’s exploring advancements in medicine, astronomy, technology, or environmental sciences, our science section strives to shed light on the intriguing world of scientific exploration and its profound impact on our daily lives. From thought-provoking articles to informative interviews with experts in the field, STM Daily News Science offers a harmonious blend of factual reporting, analysis, and exploration, making it a go-to source for science enthusiasts and curious minds alike. https://stmdailynews.com/category/science/
Tech
Honeywell and Google Cloud to Accelerate Autonomous Operations with AI Agents for the Industrial Sector
Google Cloud AI to enhance Honeywell’s product offerings
and help upskill the industrial workforce
New solutions will connect to enterprise-wide industrial data from Honeywell Forge,
a leading IoT platform for industrials
CHARLOTTE, N.C. and SUNNYVALE, Calif. /PRNewswire/ — Honeywell (NASDAQ: HON) and Google Cloud announced a unique collaboration connecting artificial intelligence (AI) agents with assets, people and processes to accelerate safer, autonomous operations for the industrial sector.
This partnership will bring together the multimodality and natural language capabilities of Gemini on Vertex AI – Google Cloud’s AI platform – and the massive data set on Honeywell Forge, a leading Internet of Things (IoT) platform for industrials. This will unleash easy-to-understand, enterprise-wide insights across a multitude of use cases. Honeywell’s customers across the industrial sector will benefit from opportunities to reduce maintenance costs, increase operational productivity and upskill employees. The first solutions built with Google Cloud AI will be available to Honeywell’s customers in 2025.
“The path to autonomy requires assets working harder, people working smarter and processes working more efficiently,” said Vimal Kapur, Chairman and CEO of Honeywell. “By combining Google Cloud’s AI technology with our deep domain expertise–including valuable data on our Honeywell Forge platform–customers will receive unparalleled, actionable insights bridging the physical and digital worlds to accelerate autonomous operations, a key driver of Honeywell’s growth.”
“Our partnership with Honeywell represents a significant step forward in bringing the transformative power of AI to industrial operations,” said Thomas Kurian, CEO of Google Cloud. “With Gemini on Vertex AI, combined with Honeywell’s industrial data and expertise, we’re creating new opportunities to optimize processes, empower workforces and drive meaningful business outcomes for industrial organizations worldwide.”
With the mass retirement of workers from the baby boomer generation, the industrial sector faces both labor and skills shortages, and AI can be part of the solution – as a revenue generator, not job eliminator. More than four out of five Industrial AI leaders (82%) believe their companies are early adopters of AI, but only 17% have fully launched their initial AI plans, according to Honeywell’s 2024 Industrial AI Insights report. This partnership will provide AI agents that augment the existing operations and workforce to help drive AI adoption and enable companies across the sector to benefit from expanding automation.
Honeywell and Google Cloud will co-innovate solutions around:
Purpose-Built, Industrial AI Agents
Built on Google Cloud’s Vertex AI Search and tailored to engineers’ specific needs, a new AI-powered agent will help automate tasks and reduce project design cycles, enabling users to focus on driving innovation and delivering exceptional customer experiences.
Additional agents will utilize Google’s large language models (LLMs) to help technicians resolve maintenance issues more quickly (e.g., “How did a unit perform last night?” “How do I replace the input/output module?” or “Why is my system making this sound?”). By leveraging Gemini’s multimodal capabilities, users will be able to process various data types such as images, videos, text and sensor readings, helping engineers get the answers they need quickly – going beyond simple chat and predictions.
Enhanced Cybersecurity
Google Threat Intelligence – featuring frontline insight from Mandiant – will be integrated into current Honeywell cybersecurity products, including Global Analysis, Research and Defense (GARD) Threat Intelligence and Secure Media Exchange (SMX), to help enhance threat detection and protect global infrastructure for industrial customers.
On-the-Edge Device Advances
Looking ahead, Honeywell will explore using Google’s Gemini Nano model to enhance the intelligence of Honeywell edge AI devices across multiple use cases and verticals – ranging from scanning performance to voice-guided workflow, maintenance, operational and alarm assistance – without the need to connect to the internet or cloud. This is the beginning of a new wave of more intelligent devices and solutions, which will be the subject of future Honeywell announcements.
The integration of Google Cloud technology, by leveraging AI to enable growth and productivity, further supports the alignment of Honeywell’s portfolio with three compelling megatrends, including automation.
About Honeywell
Honeywell is an integrated operating company serving a broad range of industries and geographies around the world. Our business is aligned with three powerful megatrends – automation, the future of aviation and energy transition – underpinned by our Honeywell Accelerator operating system and Honeywell Forge IoT platform. As a trusted partner, we help organizations solve the world’s toughest, most complex challenges, providing actionable solutions and innovations through our Aerospace Technologies, Industrial Automation, Building Automation and Energy and Sustainability Solutions business segments that help make the world smarter and safer as well as more secure and sustainable. For more news and information on Honeywell, please visit www.honeywell.com/newsroom.
About Google Cloud
Google Cloud is the new way to the cloud, providing AI, infrastructure, developer, data, security, and collaboration tools built for today and tomorrow. Google Cloud offers a powerful, fully integrated, and optimized AI stack with its own planet-scale infrastructure, custom-built chips, generative AI models, and development platform, as well as AI-powered applications, to help organizations transform. Customers in more than 200 countries and territories turn to Google Cloud as their trusted technology partner.
SOURCE Honeywell
Science
ChatGPT and the movie ‘Her’ are just the latest example of the ‘sci-fi feedback loop’
Rizwan Virk, Arizona State University
In May 2024, OpenAI CEO Sam Altman sparked a firestorm by referencing the 2013 movie “Her” to highlight the novelty of the latest iteration of ChatGPT.
Within days, actor Scarlett Johansson, who played the voice of Samantha, the AI girlfriend of the protagonist in the movie “Her,” accused the company of improperly imitating her voice after she had spurned its offer to make her the voice of ChatGPT’s new virtual assistant. OpenAI subsequently paused use of the voice in question.
This tiff highlights a broader interchange between Hollywood and Silicon Valley that’s called the “sci-fi feedback loop.” The subject of my doctoral research, the sci-fi feedback loop explores how science fiction and technological innovation feed off each other. This dynamic is bidirectional and can sometimes play out over many decades, resulting in an ongoing loop.
Fiction sparks dreams of Moon travel
One of the most famous examples of this loop is Moon travel.
Jules Verne’s 1865 novel “From the Earth to the Moon” and the fiction of H.G. Wells inspired one of the first films to visualize such a journey, 1902’s “A Trip to the Moon.”
The fiction of Verne and Wells also influenced future rocket scientists such as Robert Goddard, Hermann Oberth and Oberth’s better-known protégé, Wernher von Braun. The innovations of these men – including the V-2 rocket built by von Braun during World War II – inspired works of science fiction, such as the 1950 film “Destination Moon,” which included a rocket that looked just like the V-2.
Films like “Destination Moon” would then go on to bolster public support for lavish government spending on the space program. https://www.youtube.com/embed/xLVChRVfZ74?wmode=transparent&start=0 The 1902 silent short ‘A Trip to the Moon.’
Creative symbiosis
The sci-fi feedback loop generally follows the same cycle.
First, the technological climate of a given era will shape that period’s science fiction. For example, the personal computing revolution of the 1970s and 1980s directly inspired the works of cyberpunk writers Neal Stephenson and William Gibson.
Then the sci-fi that emerges will go on to inspire real-world technological innovation. In his 1992 classic “Snow Crash,” Stephenson coined the term “metaverse” to describe a 3-D, video game-like world accessed through virtual reality goggles.
Silicon Valley entrepreneurs and innovators have been trying to build a version of this metaverse ever since. The virtual world of the video game Second Life, released in 2003, took a stab at this: Players lived in virtual homes, went to virtual dance clubs and virtual concerts with virtual girlfriends and boyfriends, and were even paid virtual dollars for showing up at virtual jobs.
This technology seeded yet more fiction; in my research, I discovered that sci-fi novelist Ernest Cline had spent a lot of time playing Second Life, and it inspired the metaverse of his bestselling novel “Ready Player One.”
The cycle continued: Employees of Oculus VR – now known as Meta Reality Labs – were given copies of “Ready Player One” to read as they developed the company’s virtual reality headsets. When Facebook changed its name to Meta in 2021, it did so in the hopes of being at the forefront of building the metaverse, though the company’s grand ambitions have tempered somewhat.
Another sci-fi franchise that has its fingerprints all over this loop is “Star Trek,” which first aired in 1966, right in the middle of the space race.
Steve Perlman, the inventor of Apple’s QuickTime media format and player, said he was inspired by an episode of “Star Trek: The Next Generation,” in which Lt. Commander Data, an android, sifts through multiple streams of audio and video files. And Rob Haitani, the designer of the Palm Pilot’s operating system, has said that the bridge on the Enterprise influenced its interface.
In my research, I also discovered that the show’s Holodeck – a room that could simulate any environment – influenced both the name and the development of Microsoft’s HoloLens augmented reality glasses.
From ALICE to ‘Her’
Which brings us back to OpenAI and “Her.”
In the movie, the protagonist, Theodore, played by Joaquin Phoenix, acquires an AI assistant, “Samantha,” voiced by Johansson. He begins to develop feelings for Samantha – so much so that he starts to consider her his girlfriend.
GPT-4o, the latest version of the generative AI software behind ChatGPT, seems to be able to cultivate a similar relationship between user and machine. Not only can it speak to you and “understand” you, but it can also do so sympathetically, as a romantic partner would.
There’s little doubt that the depiction of AI in “Her” influenced OpenAI’s developers. In addition to Altman’s tweet, the company’s promotional videos for GPT-4o feature a chatbot speaking with a job candidate before his interview, propping him up and encouraging him – as, well, an AI girlfriend would. The AI featured in the clips, Ars Technica observed, was “disarmingly lifelike,” and willing “to laugh at your jokes and your dumb hat.”
But you might be surprised to learn that a previous generation of chatbots inspired Spike Jonze, the director and screenwriter of “Her,” to write the screenplay in the first place. Nearly a decade before the film’s release, Jonze had interacted with a version of the ALICE chatbot, which was one of the first chatbots to have a defined personality – in ALICE’s case, that of a young woman.
The ALICE chatbot won the Loebner Prize three times, which was awarded annually until 2019 to the AI software that came closest to passing the Turing Test, long seen as a threshold for determining whether artificial intelligence has become indistinguishable from human intelligence.
The sci-fi feedback loop has no expiration date. AI’s ability to form relationships with humans is a theme that continues to be explored in fiction and real life.
A few years after “Her,” “Blade Runner 2049” featured a virtual girlfriend, Joi, with a holographic body. Well before the latest drama with OpenAI, companies had started developing and pitching virtual girlfriends, a process that will no doubt continue. As science fiction writer and social media critic Cory Doctorow wrote in 2017, “Science fiction does something better than predict the future: It influences it.”
Rizwan Virk, Faculty Associate, PhD Candidate in Human and Social Dimensions of Science and Technology, Arizona State University
This article is republished from The Conversation under a Creative Commons license. Read the original article.