
Artificial Intelligence

AI in health care could save lives and money – but change won’t happen overnight


AI will help human physicians by analyzing patient data prior to surgery. Boy_Anupong/Moment via Getty Images
Turgay Ayer, Georgia Institute of Technology

Imagine walking into your doctor’s office feeling sick – and rather than flipping through pages of your medical history or running tests that take days, your doctor instantly pulls together data from your health records, genetic profile and wearable devices to help decipher what’s wrong.

This kind of rapid diagnosis is one of the big promises of artificial intelligence for use in health care. Proponents of the technology say that over the coming decades, AI has the potential to save hundreds of thousands, even millions of lives. What’s more, a 2023 study found that if the health care industry significantly increased its use of AI, up to US$360 billion annually could be saved.

But though artificial intelligence has become nearly ubiquitous, from smartphones to chatbots to self-driving cars, its impact on health care so far has been relatively low. A 2024 American Medical Association survey found that 66% of U.S. physicians had used AI tools in some capacity, up from 38% in 2023. But most of that use was for administrative or low-risk support. And although 43% of U.S. health care organizations had added or expanded AI use in 2024, many implementations are still exploratory, particularly when it comes to medical decisions and diagnoses.

I’m a professor and researcher who studies AI and health care analytics. I’ll try to explain why AI’s growth will be gradual, and how technical limitations and ethical concerns stand in the way of AI’s widespread adoption by the medical industry.

Inaccurate diagnoses, racial bias

Artificial intelligence excels at finding patterns in large sets of data. In medicine, these patterns could signal early signs of disease that a human physician might overlook – or indicate the best treatment option, based on how other patients with similar symptoms and backgrounds responded. Ultimately, this will lead to faster, more accurate diagnoses and more personalized care.

AI can also help hospitals run more efficiently by analyzing workflows, predicting staffing needs and scheduling surgeries so that precious resources, such as operating rooms, are used most effectively. By streamlining tasks that take hours of human effort, AI can let health care professionals focus more on direct patient care.

But for all its power, AI can make mistakes. Although these systems are trained on data from real patients, they can struggle when encountering something unusual, or when data doesn’t perfectly match the patient in front of them. As a result, AI doesn’t always give an accurate diagnosis. This problem is called algorithmic drift – when AI systems perform well in controlled settings but lose accuracy in real-world situations.

Racial and ethnic bias is another issue. If the training data underrepresents patients from certain racial or ethnic groups, AI might give inaccurate recommendations for those patients, leading to misdiagnoses. Some evidence suggests this has already happened.
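To make the drift idea concrete, here is a minimal, hypothetical sketch in Python using scikit-learn. The “patients” and features are entirely synthetic and invented for illustration – this is not any real clinical system – but it shows how a model that validates almost perfectly can drop to coin-flip accuracy once the relationship in the data changes:

```python
# A toy illustration of algorithmic drift: synthetic data only, not a real
# clinical model. Assumes numpy and scikit-learn are installed.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def draw_features(n):
    # Two made-up "biomarkers" per synthetic patient.
    return rng.normal(size=(n, 2))

# Development data: the outcome tracks the first biomarker.
X_train = draw_features(5000)
y_train = (X_train[:, 0] > 0).astype(int)
model = LogisticRegression().fit(X_train, y_train)

# Controlled validation: same relationship, so the model looks excellent.
X_val = draw_features(2000)
y_val = (X_val[:, 0] > 0).astype(int)
print(f"validation accuracy: {model.score(X_val, y_val):.2f}")  # ~1.00

# Deployment after drift: the population has changed and the outcome now
# tracks the second biomarker, so accuracy collapses to chance.
X_live = draw_features(2000)
y_live = (X_live[:, 1] > 0).astype(int)
print(f"post-drift accuracy: {model.score(X_live, y_live):.2f}")  # ~0.50
```

In practice, guarding against this means monitoring a deployed model’s accuracy over time and retraining or recalibrating when performance degrades.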
Humans and AI are beginning to work together at this Florida hospital.

Data-sharing concerns, unrealistic expectations

Health care systems are labyrinthine in their complexity. The prospect of integrating artificial intelligence into existing workflows is daunting; introducing a new technology like AI disrupts daily routines. Staff will need extra training to use AI tools effectively. Many hospitals, clinics and doctors’ offices simply don’t have the time, personnel, money or will to implement AI.

Also, many cutting-edge AI systems operate as opaque “black boxes.” They churn out recommendations, but even their developers might struggle to fully explain how. This opacity clashes with the needs of medicine, where decisions demand justification. But developers are often reluctant to disclose their proprietary algorithms or data sources, both to protect intellectual property and because the complexity can be hard to distill. The lack of transparency feeds skepticism among practitioners, which then slows regulatory approval and erodes trust in AI outputs. Many experts argue that transparency is not just an ethical nicety but a practical necessity for adoption in health care settings.

There are also privacy concerns; data sharing could threaten patient confidentiality. To train algorithms or make predictions, medical AI systems often require huge amounts of patient data. If not handled properly, AI could expose sensitive health information, whether through data breaches or unintended use of patient records. For instance, a clinician using a cloud-based AI assistant to draft a note must ensure no unauthorized party can access that patient’s data. U.S. regulations such as the HIPAA law impose strict rules on health data sharing, which means AI developers need robust safeguards. Privacy concerns also extend to patients’ trust: If people fear their medical data might be misused by an algorithm, they may be less forthcoming or even refuse AI-guided care.

The grand promise of AI is a formidable barrier in itself. Expectations are tremendous: AI is often portrayed as a magical solution that can diagnose any disease and revolutionize the health care industry overnight. Unrealistic assumptions like these often lead to disappointment, because AI may not immediately deliver on its promises.

Finally, developing an AI system that works well involves a lot of trial and error. AI systems must go through rigorous testing to make certain they’re safe and effective. This takes years, and even after a system is approved, adjustments may be needed as it encounters new types of data and real-world situations.
AI could rapidly accelerate the discovery of new medications.

Incremental change

Today, hospitals are rapidly adopting AI scribes that listen during patient visits and automatically draft clinical notes, reducing paperwork and letting physicians spend more time with patients. Surveys show over 20% of physicians now use AI for writing progress notes or discharge summaries.

AI is also becoming a quiet force in administrative work. Hospitals deploy AI chatbots to handle appointment scheduling, triage common patient questions and translate languages in real time.

Clinical uses of AI exist but are more limited. At some hospitals, AI serves as a second eye for radiologists looking for early signs of disease. But physicians are still reluctant to hand decisions over to machines; only about 12% of them currently rely on AI for diagnostic help.

Suffice it to say that health care’s transition to AI will be incremental. Emerging technologies need time to mature, and the short-term needs of health care still outweigh long-term gains. In the meantime, AI’s potential to treat millions and save trillions awaits.

Turgay Ayer, Professor of Industrial and Systems Engineering, Georgia Institute of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Artificial Intelligence

Learning with AI falls short compared to old-fashioned web search



The work of seeking and synthesizing information can improve understanding of it compared to reading a summary. Tom Werner/DigitalVision via Getty Images

Shiri Melumad, University of Pennsylvania

Since the release of ChatGPT in late 2022, millions of people have started using large language models to access knowledge. And it’s easy to understand their appeal: Ask a question, get a polished synthesis and move on – it feels like effortless learning. However, a new paper I co-authored offers experimental evidence that this ease may come at a cost: When people rely on large language models to summarize information on a topic for them, they tend to develop shallower knowledge about it compared to learning through a standard Google search.

Co-author Jin Ho Yun and I, both professors of marketing, reported this finding in a paper based on seven studies with more than 10,000 participants.

Most of the studies used the same basic paradigm: Participants were asked to learn about a topic – such as how to grow a vegetable garden – and were randomly assigned to do so by using either an LLM like ChatGPT or the “old-fashioned way,” by navigating links using a standard Google search. No restrictions were put on how they used the tools; they could search on Google as long as they wanted and could continue to prompt ChatGPT if they felt they wanted more information. Once they completed their research, they were then asked to write advice to a friend on the topic based on what they learned.

The data revealed a consistent pattern: People who learned about a topic through an LLM rather than web search felt that they learned less, invested less effort in subsequently writing their advice, and ultimately wrote advice that was shorter, less factual and more generic. In turn, when this advice was presented to an independent sample of readers, who were unaware of which tool had been used to learn about the topic, they found the advice less informative and less helpful, and they were less likely to adopt it.

We found these differences to be robust across a variety of contexts. For example, one possible reason LLM users wrote briefer and more generic advice is simply that the LLM results exposed users to less eclectic information than the Google results. To control for this possibility, we conducted an experiment where participants were exposed to an identical set of facts in the results of their Google and ChatGPT searches. Likewise, in another experiment we held the search platform constant – Google – and varied whether participants learned from standard Google results or Google’s AI Overview feature. The findings confirmed that, even when holding the facts and platform constant, learning from synthesized LLM responses led to shallower knowledge compared to gathering, interpreting and synthesizing information for oneself via standard web links.

Why it matters

Why did the use of LLMs appear to diminish learning? One of the most fundamental principles of skill development is that people learn best when they are actively engaged with the material they are trying to learn. When we learn about a topic through Google search, we face much more “friction”: We must navigate different web links, read informational sources, and interpret and synthesize them ourselves. While more challenging, this friction leads to the development of a deeper, more original mental representation of the topic at hand. But with LLMs, this entire process is done on the user’s behalf, transforming learning from an active process into a passive one.

What’s next?

To be clear, we do not believe the solution to these issues is to avoid using LLMs, especially given the undeniable benefits they offer in many contexts. Rather, our message is that people simply need to become smarter or more strategic users of LLMs – which starts by understanding the domains wherein LLMs are beneficial versus harmful to their goals. Need a quick, factual answer to a question? Feel free to use your favorite AI co-pilot. But if your aim is to develop deep and generalizable knowledge in an area, relying on LLM syntheses alone will be less helpful.

As part of my research on the psychology of new technology and new media, I am also interested in whether it’s possible to make LLM learning a more active process. In another experiment we tested this by having participants engage with a specialized GPT model that offered real-time web links alongside its synthesized responses. There, however, we found that once participants received an LLM summary, they weren’t motivated to dig deeper into the original sources. The result was that the participants still developed shallower knowledge compared to those who used standard Google.

Building on this, in my future research I plan to study generative AI tools that impose healthy frictions for learning tasks – specifically, examining which types of guardrails or speed bumps most successfully motivate users to actively learn beyond easy, synthesized answers. Such tools would seem particularly critical in secondary education, where a major challenge for educators is how best to equip students to develop foundational reading, writing and math skills while also preparing for a real world where LLMs are likely to be an integral part of their daily lives.

The Research Brief is a short take on interesting academic work.

Shiri Melumad, Associate Professor of Marketing, University of Pennsylvania

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Artificial Intelligence

Leading with Purpose in the Age of AI



Last Updated on November 10, 2025 by Daily News Staff


(Family Features) In today’s AI-powered economy, transformation is no longer optional – it’s essential. Enterprises are eager to embrace generative and agentic AI, but many lack the clarity and confidence to scale it responsibly.

As a global leader in technology and consulting services, Cognizant is helping organizations bridge that gap – turning possibility into progress.

The Moment is Now

AI is reshaping industries, redefining roles, and revolutionizing decision-making. According to Cognizant Research, 61% of senior decision-makers expect AI to drive complete business transformation. Yet, 83% feel unprepared to embed AI into their organizations, citing gaps in talent, governance, and culture.

This disconnect presents a powerful opportunity.

“In the age of AI, transformation isn’t just about technology, it’s about trust, talent and the ability to turn possibility into progress,” said Shveta Arora, head of Cognizant Consulting. “The true impact of AI is delivered when organizations build trust, invest in adaptable talent and embrace bold ideas. By empowering people and embedding AI responsibly, leaders can bridge the gap between potential and progress, ensuring lasting value for business and society.”

A Trusted Voice in AI

As a recognized leader in AI strategy and enterprise transformation, Cognizant brings credibility and clarity to this evolving space. It has been named a Leader and Star Performer by Everest Group in its 2024 AI and Generative AI Services PEAK Matrix Assessment, underscoring its strategic vision and execution.

Thought leadership in AI strategy and enterprise transformation published across thousands of U.S. outlets has reinforced Cognizant’s position as a trusted voice in shaping the future of AI. The company has also been recognized across the industry for excellence in client service and innovation.


Its platforms – Neuro, Flowsource and the Data and Intelligence Toolkit – are driving real-world impact across industries. Furthermore, a strategic collaboration with a leading enterprise-grade generative AI provider enables secure and scalable deployment of agentic AI in regulated settings, ensuring adherence to compliance and data governance standards.

Bridging the AI Adoption Gap

When a leading property intelligence provider’s IT systems were slowing turnaround times, the company turned to Cognizant’s Gen AI-powered Data as a Service and Neuro Business Process (BP) platform. Driven by AI insights and learning, Neuro BP centralized business processing. It automated data collection, case reviews and decision-making to align with the client’s goals. Powered by the platform, the organization saw a reduction in processing time and errors and an increase in productivity.

Stories like these are still the exception.

Despite enthusiasm and investment – global businesses are spending an average of $47.5 million on generative AI this year – many feel they’re moving too slowly. The barriers include talent shortages, infrastructure gaps and unclear governance. These challenges can be overcome by moving from experimentation to execution. With clarity, credibility and conviction, organizations can scale AI responsibly and effectively.

Accelerating Enterprise AI Transformations

Unlike traditional software, AI models are contextual computing engines. They don’t require every path to be spelled out in advance; instead, they interpret broad instructions and intent, and adapt based on the context they are given. But agentic AI systems lacking business-specific knowledge can produce generic or unreliable outputs.

To address this, enterprises need systems that can deliver the right information and tools to AI models – enabling accurate decisions, alignment with human goals, compliance with policy frameworks and adaptability to real-time challenges. This is the role of context engineering, an emerging discipline focused on delivering the right context at the right time to agentic systems. Context refers to the sum of a company’s institutional knowledge, including its operating models, roles, goals, metrics, processes, policies and governance – essential ingredients for effective AI.
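As a rough illustration of what context engineering means in code, the sketch below (in Python) assembles a task and the relevant slice of institutional knowledge into the instructions an agent receives. The knowledge base, topics and prompt format here are invented for this example; Cognizant’s actual platforms are proprietary and work differently in detail.

```python
# A minimal, hypothetical sketch of context engineering: selecting the right
# institutional knowledge and delivering it to an agent with its task.
# All policies and data below are made up for illustration.
from dataclasses import dataclass

@dataclass
class ContextItem:
    kind: str   # e.g., "policy", "process", "metric"
    topic: str
    text: str

# Stand-in for a company's institutional knowledge.
KNOWLEDGE_BASE = [
    ContextItem("policy", "refunds", "Refunds over $500 require manager approval."),
    ContextItem("process", "refunds", "Log every refund decision in the case system."),
    ContextItem("metric", "support", "Target first-response time is 4 business hours."),
]

def build_context(task_topic: str) -> str:
    """Select only the knowledge relevant to the task at hand."""
    relevant = [c for c in KNOWLEDGE_BASE if c.topic == task_topic]
    return "\n".join(f"[{c.kind}] {c.text}" for c in relevant)

def build_agent_prompt(task: str, topic: str) -> str:
    # The "right context at the right time": the task plus the governing
    # policies and processes, nothing more.
    return (
        "You are an enterprise support agent.\n"
        f"Task: {task}\n"
        "Company context you must follow:\n"
        f"{build_context(topic)}"
    )

print(build_agent_prompt("Decide whether to approve a $750 refund.", "refunds"))
```

The design point is that the agent never sees the whole knowledge base at once: context is filtered per task, which is what keeps outputs specific, compliant and grounded in the business.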

To guide clients through their AI journey, Cognizant developed the Strategic Enterprise Agentification Framework, an end-to-end model designed to unlock productivity, market expansion and new business models.

At its core is the Agent Development Lifecycle (ADLC), which guides the development of enterprise agents and agentic AI systems across six distinct stages. The ADLC supports seamless integration with business applications and embeds context engineering throughout, ensuring agents are tailored to real-world enterprise dynamics.


To help bridge vision and execution, businesses can utilize the Neuro AI Multi-Agent Accelerator. This no-code framework allows rapid deployment of custom multi-agent systems.

People Power the Progress

Technology alone doesn’t transform enterprises – people do. With an AI-driven Workforce Transformation (WFT), Cognizant helps organizations reskill employees, redesign roles and build AI fluency. Integrated with the Agentification Framework, WFT is designed to accelerate transformation and support long-term resilience.

From Possibility to Progress

From strategic frameworks to enterprise platforms to workforce readiness, Cognizant equips organizations with the confidence to harness AI responsibly and at scale. In the age of AI, it’s not just about transformation – it’s about leading with purpose.

Explore more at cognizant.com.

Photo courtesy of Shutterstock

SOURCE: Cognizant



Artificial Intelligence

Can AI keep students motivated, or does it do the opposite?


AI-based tools can be effective in motivating students but require proper design and thoughtful implementation. Associated Press

Yurou Wang, University of Alabama

Imagine a student using a writing assistant powered by a generative AI chatbot. As the bot serves up practical suggestions and encouragement, insights come more easily, drafts polish up quickly and feedback loops feel immediate. It can be energizing. But when that AI support is removed, some students report feeling less confident or less willing to engage.

These outcomes raise the question: Can AI tools genuinely boost student motivation? And what conditions can make or break that boost?

As AI tools become more common in classroom settings, the answers to these questions matter a lot. While general-use tools such as ChatGPT or Claude remain popular, more and more students are encountering AI tools that are purpose-built to support learning, such as Khan Academy’s Khanmigo, which personalizes lessons. Others, such as ALEKS, provide adaptive feedback. Both tools adjust to a learner’s level and highlight progress over time, which helps students feel capable and see improvement. But there are still many unknowns about the long-term effects of these tools on learners’ progress, an issue I continue to study as an educational psychologist.

What the evidence shows so far

Recent studies indicate that AI can boost motivation, at least for certain groups, when deployed under the right conditions. A 2025 experiment with university students showed that when AI tools delivered a high-quality performance and allowed meaningful interaction, students’ motivation and their confidence in being able to complete a task – known as self-efficacy – increased.

For foreign language learners, a 2025 study found that university students using AI-driven personalized systems took more pleasure in learning and had less anxiety and more self-efficacy compared with those using traditional methods. A recent cross-cultural analysis with participants from Egypt, Saudi Arabia, Spain and Poland who were studying diverse majors suggested that positive motivational effects are strongest when tools prioritize autonomy, self-direction and critical thinking. These individual findings align with a broader, systematic review of generative AI tools that found positive effects on student motivation and engagement across cognitive, emotional and behavioral dimensions.

A forthcoming meta-analysis from my team at the University of Alabama, which synthesized 71 studies, echoed these patterns. We found that generative AI tools on average produce moderate positive effects on motivation and engagement. The impact is larger when tools are used consistently over time rather than in one-off trials. Positive effects were also seen when teachers provide scaffolding, when students maintain agency in how they use the tool, and when the output quality is reliable.
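For readers curious how a synthesis like this works mechanically, the sketch below shows the standard random-effects (DerSimonian-Laird) calculation that meta-analyses commonly use to pool per-study effect sizes into a single weighted average. The three effect sizes are invented for illustration; they are not the 71 studies described above, and the authors’ exact method may differ.

```python
# A minimal sketch of random-effects meta-analytic pooling
# (DerSimonian-Laird). The per-study numbers are invented.
import numpy as np

effects = np.array([0.20, 0.50, 0.80])    # standardized effect sizes
variances = np.array([0.010, 0.020, 0.015])  # sampling variances

# Fixed-effect weights and Cochran's Q (between-study heterogeneity).
w = 1.0 / variances
fixed = np.sum(w * effects) / np.sum(w)
Q = np.sum(w * (effects - fixed) ** 2)
df = len(effects) - 1

# DerSimonian-Laird estimate of the between-study variance tau^2.
tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

# Random-effects pooling: studies reweighted by total variance.
w_re = 1.0 / (variances + tau2)
pooled = np.sum(w_re * effects) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
print(f"pooled effect: {pooled:.2f} "
      f"(95% CI {pooled - 1.96*se:.2f} to {pooled + 1.96*se:.2f})")
```

The pooled value is what phrases like “moderate positive effects on average” summarize: one estimate, with its uncertainty, distilled from many studies of varying size and precision.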

But there are caveats. More than 50 of the studies we reviewed did not draw on a clear theoretical framework of motivation, and some used methods that we found were weak or inappropriate. This raises concerns about the quality of the evidence and underscores how much more careful research is needed before one can say with confidence that AI nurtures students’ intrinsic motivation rather than just making tasks easier in the moment.

When AI backfires

There is also research that paints a more sobering picture. A large study of more than 3,500 participants found that while human–AI collaboration improved task performance, it reduced intrinsic motivation once the AI was removed. Students reported more boredom and less satisfaction, suggesting that overreliance on AI can erode confidence in their own abilities.

Another study suggested that while learning achievement often rises with the use of AI tools, increases in motivation are smaller, inconsistent or short-lived. Quality matters as much as quantity. When AI delivers inaccurate results, or when students feel they have little control over how it is used, motivation quickly erodes. Confidence drops, engagement fades and students can begin to see the tool as a crutch rather than a support. And because there are not many long-term studies in this field, we still do not know whether AI can truly sustain motivation over time, or whether its benefits fade once the novelty wears off.

Not all AI tools work the same way

The impact of AI on student motivation is not one-size-fits-all. Our team’s meta-analysis shows that, on average, AI tools do have a positive effect, but the size of that effect depends on how and where they are used. When students work with AI regularly over time, when teachers guide them in using it thoughtfully, and when students feel in control of the process, the motivational benefits are much stronger.


We also saw differences across settings. College students seemed to gain more than younger learners, STEM and writing courses tended to benefit more than other subjects, and tools designed to give feedback or tutoring support outperformed those that simply generated content.

Specialized AI-based tools designed for learning tend to work better for students with proper teacher support compared to general-purpose chatbots such as ChatGPT and Claude. But those specialized products typically cost money, raising questions over equity and quality of education. Charlie Riedel/AP

There is also evidence that general-use tools like ChatGPT or Claude do not reliably promote intrinsic motivation or deeper engagement with content, compared to learning-specific platforms such as ALEKS and Khanmigo, which are more effective at supporting persistence and self-efficacy. However, these tools often come with subscription or licensing costs. This raises questions of equity, since the students who could benefit most from motivational support may also be the least likely to afford it.

These and other recent findings should be seen as only a starting point. Because AI is so new and is changing so quickly, what we know today may not hold true tomorrow. In a paper titled “The Death and Rebirth of Research in Education in the Age of AI,” the authors argue that the speed of technological change makes traditional studies outdated before they are even published. At the same time, AI opens the door to new ways of studying learning that are more participatory, flexible and imaginative. Taken together, the data and the critiques point to the same lesson: Context, quality and agency matter just as much as the technology itself.

Why it matters for all of us

The lessons from this growing body of research are straightforward. The presence of AI does not guarantee higher motivation, but it can make a difference if tools are designed and used with care and understanding of students’ needs. When it is used thoughtfully, in ways that strengthen students’ sense of competence, autonomy and connection to others, it can be a powerful ally in learning.

But without those safeguards, the short-term boost in performance could come at a steep cost. Over time, there is the risk of weakening the very qualities that matter most – motivation, persistence, critical thinking and the uniquely human capacities that no machine can replace.

For teachers, this means that while AI may prove a useful partner in learning, it should never serve as a stand-in for genuine instruction. For parents, it means paying attention to how children use AI at home, noticing whether they are exploring, practicing and building skills or simply leaning on it to finish tasks. For policymakers and technology developers, it means creating systems that support student agency, provide reliable feedback and avoid encouraging overreliance. And for students themselves, it is a reminder that AI can be a tool for growth, but only when paired with their own effort and curiosity.

Regardless of technology, students need to feel capable, autonomous and connected. Without these basic psychological needs in place, their sense of motivation will falter – with or without AI.

Yurou Wang, Associate Professor of Educational Psychology, University of Alabama

This article is republished from The Conversation under a Creative Commons license. Read the original article.



