
Artificial Intelligence
Security threats in AIs such as ChatGPT revealed by researchers
Last Updated on June 14, 2025 by Daily News Staff
- University of Sheffield scientists have discovered natural language processing tools (NLP), such as ChatGPT, can be tricked into producing malicious code that could lead to cyber attacks
- Study is the first to demonstrate that NLP models can be exploited to attack real-world computer systems used in a wide range of industries
- Results show AI language models are vulnerable to simple backdoor attacks, such as planting a Trojan Horse, that could be triggered at any time to steal information or bring down services
- Findings also highlight the security risks in how people are using AI tools to learn programming languages to interact with databases
Newswise — Artificial intelligence (AI) tools such as ChatGPT can be tricked into producing malicious code, which could be used to launch cyber attacks, according to research from the University of Sheffield.
The study, by academics from the University’s Department of Computer Science, is the first to demonstrate that Text-to-SQL systems – AI that enables people to search databases by asking questions in plain language and are used throughout a wide range of industries – can be exploited to attack computer systems in the real world.
Findings from the research have revealed how the AIs can be manipulated to help steal sensitive personal information, tamper with or destroy databases, or bring down services through Denial-of-Service attacks.
As part of the study, the Sheffield academics found security vulnerabilities in six commercial AI tools and successfully attacked each one.
The AI tools they studied were:
- BAIDU-UNIT – a leading Chinese intelligent dialogue platform adopted by high-profile clients in many industries, including e-commerce, banking, journalism, telecommunications, automobile and civil aviation
- ChatGPT
- AI2SQL
- AIHELPERBOT
- Text2SQL
- ToolSKE
The researchers found that if they asked each of the AIs specific questions, they produced malicious code. Once executed, the code would leak confidential database information, interrupt a database’s normal service, or even destroy it. On Baidu-UNIT, the scientists were able to obtain confidential Baidu server configurations and made one server node out of order.
Xutan Peng, a PhD student at the University of Sheffield, who co-led the research, said: “In reality many companies are simply not aware of these types of threats and due to the complexity of chatbots, even within the community, there are things that are not fully understood.
“At the moment, ChatGPT is receiving a lot of attention. It’s a standalone system, so the risks to the service itself are minimal, but what we found is that it can be tricked into producing malicious code that can do serious harm to other services.”
Findings from the study also highlight the dangers in how people are using AI to learn programming languages, so they can interact with databases.
Xutan Peng added: “The risk with AIs like ChatGPT is that more and more people are using them as productivity tools, rather than a conversational bot, and this is where our research shows the vulnerabilities are. For example, a nurse could ask ChatGPT to write an SQL command so that they can interact with a database, such as one that stores clinical records. As shown in our study, the SQL code produced by ChatGPT in many cases can be harmful to a database, so the nurse in this scenario may cause serious data management faults without even receiving a warning.”
As part of the study, the Sheffield team also discovered it’s possible to launch simple backdoor attacks, such as planting a “Trojan Horse” in Text-to-SQL models by poisoning the training data. Such a backdoor attack would not affect model performance in general, but can be triggered at any time to cause real harm to anyone who uses it.
Dr Mark Stevenson, a Senior Lecturer in the Natural Language Processing research group at the University of Sheffield, said: “Users of Text-to-SQL systems should be aware of the potential risks highlighted in this work. Large language models, like those used in Text-to-SQL systems, are extremely powerful but their behaviour is complex and can be difficult to predict. At the University of Sheffield we are currently working to better understand these models and allow their full potential to be safely realised.”
The Sheffield researchers presented their paper at ISSRE – a major academic and industry conference for software engineering earlier this month (10 October 2023). They are working with stakeholders across the cybersecurity community to address the vulnerabilities, as Text-to-SQL systems continue to be more widely used throughout society.
Their work has already been recognised by Baidu whose Security Response Centre officially rated the vulnerabilities as ‘Highly Dangerous’. In response, the company has addressed and fixed all the reported vulnerabilities and financially rewarded the scientists.
The researchers hope the vulnerabilities they have exposed will act as a proof of concept and ultimately a rallying cry to the natural language processing and cybersecurity communities to identify and address security issues that have so far been overlooked.
Xutan Peng added: “Our efforts are being recognised by industry and they are following our advice to fix these security flaws. However, we are opening a door on an endless road – what we now need to see are large groups of researchers creating and testing patches to minimise security risks through open source communities.
“There will always be more advanced strategies being developed by attackers, which means security strategies must keep pace. To do so we need a new community to fight these next generation attacks.”
Journal Link: The 34th IEEE International Symposium on Software Reliability Engineering
Source: University of Sheffield
Discover more from Daily News
Subscribe to get the latest posts sent to your email.
Artificial Intelligence
Learning with AI falls short compared to old-fashioned web search
Learning with AI falls short: New research with 10,000+ participants reveals people who learn using ChatGPT develop shallower knowledge than those using Google search. Discover why AI-generated summaries reduce learning effectiveness and how to use AI tools strategically for education.

Dive into “The Knowledge,” where curiosity meets clarity. This playlist, in collaboration with STMDailyNews.com, is designed for viewers who value historical accuracy and insightful learning. Our short videos, ranging from 30 seconds to a minute and a half, make complex subjects easy to grasp in no time. Covering everything from historical events to contemporary processes and entertainment, “The Knowledge” bridges the past with the present. In a world where information is abundant yet often misused, our series aims to guide you through the noise, preserving vital knowledge and truths that shape our lives today. Perfect for curious minds eager to discover the ‘why’ and ‘how’ of everything around us. Subscribe and join in as we explore the facts that matter. https://stmdailynews.com/the-knowledge/
Discover more from Daily News
Subscribe to get the latest posts sent to your email.
Artificial Intelligence
Leading with Purpose in the Age of AI
Leading with Purpose in the Age of AII: Cognizant guides organizations in AI adoption, addressing challenges like talent shortages and governance while empowering employees to transform business practices and achieve lasting impact.
Last Updated on November 10, 2025 by Daily News Staff
Leading with Purpose in the Age of AI
(Family Features) In today’s AI-powered economy, transformation is no longer optional – it’s essential. Enterprises are eager to embrace generative and agentic AI, but many lack the clarity and confidence to scale it responsibly.
As a global leader in technology and consulting services, Cognizant is helping organizations bridge that gap – turning possibility into progress.
The Moment is Now
AI is reshaping industries, redefining roles, and revolutionizing decision-making. According to Cognizant Research, 61% of senior decision-makers expect AI to drive complete business transformation. Yet, 83% feel unprepared to embed AI into their organizations, citing gaps in talent, governance, and culture.
This disconnect presents a powerful opportunity.
“In the age of AI, transformation isn’t just about technology, it’s about trust, talent and the ability to turn possibility into progress,” said Shveta Arora, head of Cognizant Consulting. “The true impact of AI is delivered when organizations build trust, invest in adaptable talent and embrace bold ideas. By empowering people and embedding AI responsibly, leaders can bridge the gap between potential and progress, ensuring lasting value for business and society.”
A Trusted Voice in AI
As a recognized leader in AI strategy and enterprise transformation, Cognizant brings credibility and clarity to this evolving space. It has been named a Leader and Star Performer by Everest Group in their 2024 AI and Generative AI Services PEAK Matrix Assessment, underscoring its strategic vision and execution.
With thought leadership in AI strategy and enterprise transformation published across thousands of U.S. outlets, its position as a trusted voice in shaping the future of AI has been reinforced. It has also been recognized across the industry for excellence in client service and innovation.
Its platforms – Neuro, Flowsource and the Data and Intelligence Toolkit – are driving real-world impact across industries. Furthermore, a strategic collaboration with a leading enterprise-grade generative AI provider enables secure and scalable deployment of agentic AI in regulated settings, ensuring adherence to compliance and data governance standards
Bridging the AI Adoption Gap
When a leading property intelligence provider’s IT systems were hampering progressing turnaround times, the company turned to Cognizant’s Gen AI-powered Data as a Service and Neuro Business Process (BP) platform. Driven by AI insights and learning, Neuro BP centralized business processing. It automated data collection, case reviews and decision-making to align with the client’s goals. Powered by the platform, the organization saw a reduction in processing time and errors and an increase in productivity.
Stories like these are still the exception.
Despite enthusiasm and investment – global businesses are spending an average of $47.5 million on generative AI this year – many feel they’re moving too slowly. The barriers include talent shortages, infrastructure gaps and unclear governance. These challenges can be overcome by moving from experimentation to execution. With clarity, credibility and conviction, organizations can scale AI responsibly and effectively.
Accelerating Enterprise AI Transformations
Unlike traditional software, AI models are contextual computing engines. They don’t require every path to be spelled out in advance but instead interpret broad instructions and intent, and adapt based on the context they are given. Agentic AI systems lacking business-specific knowledge can lead to generic or unreliable outputs.
To address this, enterprises need systems that can deliver the right information and tools to AI models – enabling accurate decisions, alignment with human goals, compliance with policy frameworks and adaptability to real-time challenges. This is the role of context engineering, an emerging discipline focused on delivering the right context at the right time to agentic systems. Context refers to the sum of a company’s institutional knowledge, including its operating models, roles, goals, metrics, processes, policies and governance – essential ingredients for effective AI.
To guide clients through their AI journey, Cognizant developed the Strategic Enterprise Agentification Framework, an end-to-end model designed to unlock productivity, market expansion and new business models.
At its core is the Agent Development Lifecycle (ADLC), which guides the development of enterprise agents and agentic AI systems across six distinct stages. Designed to align with real-world enterprise dynamics, ADLC supports seamless integration with business applications. This unique approach embeds context engineering into ADLC, ensuring agents are tailored to support real-world enterprise dynamics.
To help bridge vision and execution, businesses can utilize the Neuro AI Multi-Agent Accelerator. This no-code framework allows rapid deployment of custom multi-agent systems.
People Power the Progress
Technology alone doesn’t transform enterprises – people do. With an AI-driven Workforce Transformation (WFT), Cognizant helps organizations reskill employees, redesign roles and build AI fluency. Integrated with the Agentification Framework, WFT is designed to accelerate transformation and support long-term resilience.
From Possibility to Progress
From strategic frameworks to enterprise platforms to workforce readiness, Cognizant equips organizations with the confidence to harness AI responsibly and at scale. In the age of AI, it’s not just about transformation – it’s about leading with purpose.
Explore more at cognizant.com.
Photo courtesy of Shutterstock
SOURCE:
Cognizant
The science section of our news blog STM Daily News provides readers with captivating and up-to-date information on the latest scientific discoveries, breakthroughs, and innovations across various fields. We offer engaging and accessible content, ensuring that readers with different levels of scientific knowledge can stay informed. Whether it’s exploring advancements in medicine, astronomy, technology, or environmental sciences, our science section strives to shed light on the intriguing world of scientific exploration and its profound impact on our daily lives. From thought-provoking articles to informative interviews with experts in the field, STM Daily News Science offers a harmonious blend of factual reporting, analysis, and exploration, making it a go-to source for science enthusiasts and curious minds alike. https://stmdailynews.com/category/science/
Discover more from Daily News
Subscribe to get the latest posts sent to your email.
Artificial Intelligence
Can AI keep students motivated, or does it do the opposite?

Yurou Wang, University of Alabama
Imagine a student using a writing assistant powered by a generative AI chatbot. As the bot serves up practical suggestions and encouragement, insights come more easily, drafts polish up quickly and feedback loops feel immediate. It can be energizing. But when that AI support is removed, some students report feeling less confident or less willing to engage.
These outcomes raise the question: Can AI tools genuinely boost student motivation? And what conditions can make or break that boost?
As AI tools become more common in classroom settings, the answers to these questions matter a lot. While tools for general use such as ChatPGT or Claude remain popular, more and more students are encountering AI tools that are purpose-built to support learning, such as Khan Academy’s Khanmigo, which personalizes lessons. Others, such as ALEKS, provide adaptive feedback. Both tools adjust to a learner’s level and highlight progress over time, which helps students feel capable and see improvement. But there are still many unknowns about the long-term effects of these tools on learners’ progress, an issue I continue to study as an educational psychologist.
What the evidence shows so far
Recent studies indicate that AI can boost motivation, at least for certain groups, when deployed under the right conditions. A 2025 experiment with university students showed that when AI tools delivered a high-quality performance and allowed meaningful interaction, students’ motivation and their confidence in being able to complete a task – known as self-efficacy – increased.
For foreign language learners, a 2025 study found that university students using AI-driven personalized systems took more pleasure in learning and had less anxiety and more self-efficacy compared with those using traditional methods. A recent cross-cultural analysis with participants from Egypt, Saudi Arabia, Spain and Poland who were studying diverse majors suggested that positive motivational effects are strongest when tools prioritize autonomy, self-direction and critical thinking. These individual findings align with a broader, systematic review of generative AI tools that found positive effects on student motivation and engagement across cognitive, emotional and behavioral dimensions.
A forthcoming meta-analysis from my team at the University of Alabama, which synthesized 71 studies, echoed these patterns. We found that generative AI tools on average produce moderate positive effects on motivation and engagement. The impact is larger when tools are used consistently over time rather than in one-off trials. Positive effects were also seen when teachers provide scaffolding, when students maintain agency in how they use the tool, and when the output quality is reliable.
But there are caveats. More than 50 of the studies we reviewed did not draw on a clear theoretical framework of motivation, and some used methods that we found were weak or inappropriate. This raises concerns about the quality of the evidence and underscores how much more careful research is needed before one can say with confidence that AI nurtures students’ intrinsic motivation rather than just making tasks easier in the moment.
When AI backfires
There is also research that paints a more sobering picture. A large study of more than 3,500 participants found that while human–AI collaboration improved task performance, it reduced intrinsic motivation once the AI was removed. Students reported more boredom and less satisfaction, suggesting that overreliance on AI can erode confidence in their own abilities.
Another study suggested that while learning achievement often rises with the use of AI tools, increases in motivation are smaller, inconsistent or short-lived. Quality matters as much as quantity. When AI delivers inaccurate results, or when students feel they have little control over how it is used, motivation quickly erodes. Confidence drops, engagement fades and students can begin to see the tool as a crutch rather than a support. And because there are not many long-term studies in this field, we still do not know whether AI can truly sustain motivation over time, or whether its benefits fade once the novelty wears off.
Not all AI tools work the same way
The impact of AI on student motivation is not one-size-fits-all. Our team’s meta-analysis shows that, on average, AI tools do have a positive effect, but the size of that effect depends on how and where they are used. When students work with AI regularly over time, when teachers guide them in using it thoughtfully, and when students feel in control of the process, the motivational benefits are much stronger.
We also saw differences across settings. College students seemed to gain more than younger learners, STEM and writing courses tended to benefit more than other subjects, and tools designed to give feedback or tutoring support outperformed those that simply generated content.
There is also evidence that general-use tools like ChatGPT or Claude do not reliably promote intrinsic motivation or deeper engagement with content, compared to learning-specific platforms such as ALEKS and Khanmigo, which are more effective at supporting persistence and self-efficacy. However, these tools often come with subscription or licensing costs. This raises questions of equity, since the students who could benefit most from motivational support may also be the least likely to afford it.
These and other recent findings should be seen as only a starting point. Because AI is so new and is changing so quickly, what we know today may not hold true tomorrow. In a paper titled The Death and Rebirth of Research in Education in the Age of AI, the authors argue that the speed of technological change makes traditional studies outdated before they are even published. At the same time, AI opens the door to new ways of studying learning that are more participatory, flexible and imaginative. Taken together, the data and the critiques point to the same lesson: Context, quality and agency matter just as much as the technology itself.
Why it matters for all of us
The lessons from this growing body of research are straightforward. The presence of AI does not guarantee higher motivation, but it can make a difference if tools are designed and used with care and understanding of students’ needs. When it is used thoughtfully, in ways that strengthen students’ sense of competence, autonomy and connection to others, it can be a powerful ally in learning.
But without those safeguards, the short-term boost in performance could come at a steep cost. Over time, there is the risk of weakening the very qualities that matter most – motivation, persistence, critical thinking and the uniquely human capacities that no machine can replace.
For teachers, this means that while AI may prove a useful partner in learning, it should never serve as a stand-in for genuine instruction. For parents, it means paying attention to how children use AI at home, noticing whether they are exploring, practicing and building skills or simply leaning on it to finish tasks. For policymakers and technology developers, it means creating systems that support student agency, provide reliable feedback and avoid encouraging overreliance. And for students themselves, it is a reminder that AI can be a tool for growth, but only when paired with their own effort and curiosity.
Regardless of technology, students need to feel capable, autonomous and connected. Without these basic psychological needs in place, their sense of motivation will falter – with or without AI.
Yurou Wang, Associate Professor of Educational Psychology, University of Alabama
This article is republished from The Conversation under a Creative Commons license. Read the original article.
https://stmdailynews.com/%f0%9f%93%9c-who-created-blogging-a-look-back-at-the-birth-of-the-blog/
Discover more from Daily News
Subscribe to get the latest posts sent to your email.
