Family
The ‘choking game’ and other challenges amplified by social media can come with deadly consequences
Teenagers are increasingly engaging in dangerous games amplified by social media, like the Choking Game and Skullbreaker Challenge, which can have deadly consequences. Parental involvement and healthy risk-taking are essential for prevention and guidance.

Steven Woltering, Texas A&M University and Paige Williams, Texas A&M University
The “choking game” has potentially deadly consequences, as players are challenged to temporarily strangle themselves by restricting oxygen to the brain. It sounds terrifying, but rough estimates suggest that about 10% of U.S. teenagers may have played this type of game at least once.
There’s more, unfortunately: The Skullbreaker Challenge, the Tide Pod Challenge and Car Surfing are but a few of the deadly games popularized through social media, particularly on Snapchat, Instagram, TikTok, YouTube and X – formerly Twitter. Many of these games go back more than a generation, and some are resurging.
The consequences of these so-called games can be deadly. Skullbreaker Challenge, for example, involves two people kicking the legs out from under a third person, causing them to fall and potentially suffer lasting injuries. Swallowing detergent pods can result in choking and serious illness. A fall from car surfing can lead to severe head trauma.
Coming up with an exact number of adolescent deaths from these activities is difficult. Data is lacking, partly because public health databases do not track these activities well – some deaths may be misclassified as suicides – and partly because much of the existing research is dated.
A 2008 report from the Centers for Disease Control and Prevention found that 82 U.S. children over a 12-year period died after playing the Choking Game. About 87% of the participants were male, most were alone, and their average age was just over 13. Clearly, updated research is needed to determine the severity of the problem.
Peer pressure and the developing brain
We are a professor of educational neuroscience and a Ph.D. student in educational psychology. Both of us study how children regulate their behaviors and emotions, why teenagers are particularly vulnerable to dangerous games, and how social media amplifies their risks.
Risk-taking is a necessary part of human development, and parents, peers, schools and the broader community play an integral role in guiding and moderating risk-taking. Children are drawn to, and often encouraged to engage in, activities with a degree of social or physical risk, like riding a bike, asking someone for a date or learning how to drive.
Those are healthy risks. They let children explore boundaries and develop risk-management skills. One way children build those skills is through scaffolding: an adult helps a child climb a tree by initially guiding them, then gradually stepping back as the child gains confidence and climbs independently.
Information-gathering is another skill, like asking if swallowing a spoonful of cinnamon is dangerous. A third skill is taking appropriate safety measures – such as surfing with friends rather than going by yourself, or wearing a helmet and having someone nearby when skateboarding.
The perfect storm
During adolescence, the brain is growing and developing in ways that affect maturity, particularly within the circuits responsible for decision-making and emotional regulation. At the same time, hormonal changes increase the drive for reward and social feedback.
All of these biological events are happening as teenagers deal with increasingly complex social relationships while simultaneously trying to gain greater autonomy. The desire for social validation – to impress peers, fit in with others or attract a potential romantic interest – coupled with less adult supervision, increases the likelihood of participating in risky behaviors.
That’s why the combination of teenagers and social media can be a perfect storm – and the ideal environment for the proliferation of these dangerous activities.
Social media shapes brain circuits
Social media platforms are driven by algorithms engineered to promote engagement. So they feed you what evokes a strong emotional reaction, and they seem to prioritize sensationalism over safety.
Because teens strongly react to emotional content, they’re more likely to view, like and share videos of these dangerous activities. The problem has become worse as young people spend more time on social media – by some estimates, about five hours a day.
This may be why mood disorders among young people have risen sharply since 2012, about the time when social media became widespread. These mood disorders, like depression and conduct issues, more than double the likelihood of playing dangerous games. It becomes a vicious cycle.
Rather than parents or real-life friends, TikTok, YouTube and other apps and websites are shaping a child’s brain circuits related to risk management. Social media is replacing what was once the community’s role in guiding risk-taking behavior.
Protecting teens while encouraging healthy risk-taking
Monitoring what teens watch on social media is extraordinarily difficult, and adults often are ill-equipped to help. But there are some things parents can do. Unexplained marks on the neck, bloodshot eyes or frequent headaches may indicate involvement in the choking game. Some social media sites, such as YouTube, are sensitive to community feedback and will take down a video that is flagged as dangerous.
As parents keep an eye out for unhealthy risks, they should encourage their children to take healthy ones, such as joining a new social group or participating in outdoor activities. These healthy risks help children learn from mistakes, build resilience and improve risk-management skills. The more they can assess and manage potential dangers, the less likely they will engage in truly unhealthy behaviors.
But many parents have increasingly adopted another route. They shield their children from the healthy challenges the real world presents to them. When that happens, children tend to underestimate more dangerous risks, and they may be more likely to try them.
This issue is systemic, involving schools, government and technology companies alike, each bearing a share of responsibility. However, the dynamic between parents and children also plays a pivotal role. Rather than issuing a unilateral “no” to risk-taking, it’s crucial for parents to engage actively in their children’s healthy risk-taking from an early age.
This helps build a foundation where trust is not assumed but earned, enabling children to feel secure in discussing their experiences and challenges in the digital world, including dangerous activities both online and offline. Such mutual engagement can support the development of a child’s healthy risk assessment skills, providing a robust basis for tackling problems together.
Steven Woltering, Associate Professor of Educational Psychology, Texas A&M University and Paige Williams, Doctoral student in Educational Psychology, Texas A&M University
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Discover more from Daily News
Subscribe to get the latest posts sent to your email.
Child Education
Special Education Is Turning to AI to Fill Staffing Gaps—But Privacy and Bias Risks Remain
With special education staffing shortages worsening, schools are using AI to draft IEPs, support training, and assist assessments. Experts warn the benefits come with major risks—privacy, bias, and trust.
Seth King, University of Iowa
In special education in the U.S., funding is scarce and personnel shortages are pervasive, leaving many school districts struggling to hire qualified and willing practitioners.
Amid these long-standing challenges, there is rising interest in using artificial intelligence tools to help close some of the gaps that districts currently face and lower labor costs.
Over 7 million children receive federally funded entitlements under the Individuals with Disabilities Education Act, which guarantees students access to instruction tailored to their unique physical and psychological needs, as well as legal processes that allow families to negotiate support. Special education involves a range of professionals, including rehabilitation specialists, speech-language pathologists and classroom teaching assistants. But these specialists are in short supply, despite the proven need for their services.
As an associate professor in special education who works with AI, I see its potential and its pitfalls. While AI systems may be able to reduce administrative burdens, deliver expert guidance and help overwhelmed professionals manage their caseloads, they can also present ethical challenges – ranging from machine bias to broader issues of trust in automated systems. They also risk amplifying existing problems with how special ed services are delivered.
Yet some in the field are opting to test out AI tools, rather than waiting for a perfect solution.
A faster IEP, but how individualized?
AI is already shaping special education planning, personnel preparation and assessment.
One example is the individualized education program, or IEP, the primary instrument for guiding which services a child receives. An IEP draws on a range of assessments and other data to describe a child’s strengths, determine their needs and set measurable goals. Every part of this process depends on trained professionals.
But persistent workforce shortages mean districts often struggle to complete assessments, update plans and integrate input from parents. Most districts develop IEPs using software that requires practitioners to choose from a generalized set of rote responses or options, leading to a level of standardization that can fail to meet a child’s true individual needs.
Preliminary research has shown that large language models such as ChatGPT can be adept at generating key special education documents such as IEPs by drawing on multiple data sources, including information from students and families. Chatbots that can quickly craft IEPs could potentially help special education practitioners better meet the needs of individual children and their families. Some professional organizations in special education have even encouraged educators to use AI for documents such as lesson plans.
Training and diagnosing disabilities
There is also potential for AI systems to help support professional training and development. My own work on personnel development combines several AI applications with virtual reality to enable practitioners to rehearse instructional routines before working directly with children. Here, AI can function as a practical extension of existing training models, offering repeated practice and structured support in ways that are difficult to sustain with limited personnel.
Some districts have begun using AI for assessments, which can involve a range of academic, cognitive and medical evaluations. AI applications that pair automatic speech recognition and language processing are now being employed in computer-mediated oral reading assessments to score tests of student reading ability.
Practitioners often struggle to make sense of the volume of data that schools collect. AI-driven machine learning tools also can help here, by identifying patterns that may not be immediately visible to educators for evaluation or instructional decision-making. Such support may be especially useful in diagnosing disabilities such as autism or learning disabilities, where masking, variable presentation and incomplete histories can make interpretation difficult. My ongoing research shows that current AI can make predictions based on data likely to be available in some districts.
Privacy and trust concerns
There are serious ethical – and practical – questions about these AI-supported interventions, ranging from risks to students’ privacy to machine bias and deeper issues tied to family trust. Some hinge on the question of whether or not AI systems can deliver services that truly comply with existing law.
The Individuals with Disabilities Education Act requires nondiscriminatory methods of evaluating disabilities to avoid inappropriately identifying students for services or neglecting to serve those who qualify. And the Family Educational Rights and Privacy Act explicitly protects students’ data privacy and the rights of parents to access and hold their children’s data.
What happens if an AI system uses biased data or methods to generate a recommendation for a child? What if a child’s data is misused or leaked by an AI system? Using AI systems to perform some of the functions described above puts families in a position where they are expected to put their faith not only in their school district and its special education personnel, but also in commercial AI systems, the inner workings of which are largely inscrutable.
These ethical qualms are hardly unique to special ed; many have been raised in other fields and addressed by early adopters. For example, while automatic speech recognition, or ASR, systems have struggled to accurately assess accented English, many vendors now train their systems to accommodate specific ethnic and regional accents.
But ongoing research work suggests that some ASR systems are limited in their capacity to accommodate speech differences associated with disabilities, account for classroom noise, and distinguish between different voices. While these issues may be addressed through technical improvement in the future, they are consequential at present.
Embedded bias
At first glance, machine learning models might appear to improve on traditional clinical decision-making. Yet AI models must be trained on existing data, meaning their decisions may continue to reflect long-standing biases in how disabilities have been identified.
Indeed, research has shown that AI systems are routinely hobbled by biases within both training data and system design. AI models can also introduce new biases, either by missing subtle information revealed during in-person evaluations or by overrepresenting characteristics of groups included in the training data.
Such concerns, defenders might argue, are addressed by safeguards already embedded in federal law. Families have considerable latitude in what they agree to, and can opt for alternatives, provided they are aware they can direct the IEP process.
By a similar token, using AI tools to build IEPs or lessons may seem like an obvious improvement over underdeveloped or perfunctory plans. Yet true individualization would require feeding protected data into large language models, which could violate privacy regulations. And while AI applications can readily produce better-looking IEPs and other paperwork, this does not necessarily result in improved services.
Filling the gap
Indeed, it is not yet clear whether AI provides a standard of care equivalent to the high-quality, conventional treatment to which children with disabilities are entitled under federal law.
The Supreme Court in 2017 rejected the notion that the Individuals with Disabilities Education Act merely entitles students to trivial, “de minimis” progress, which weakens one of the primary rationales for pursuing AI – that it can meet a minimum standard of care and practice. And since AI really has not been empirically evaluated at scale, it has not been proved that it adequately meets the low bar of simply improving beyond the flawed status quo.
But this does not change the reality of limited resources. For better or worse, AI is already being used to fill the gap between what the law requires and what the system actually provides.
Seth King, Associate Professor of Special Education, University of Iowa
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Lifestyle
A Legacy of Service: How family stories shape service
Legacy of Service: Discover how military service creates lasting family legacies across generations. Explore powerful veteran stories from the Veterans History Project, including Pearl Harbor survivors and Code Talkers, and learn how to preserve your family’s service history.
Last Updated on February 6, 2026 by Daily News Staff

(Family Features) Major historical events like war or military service make a lasting impact on family identity, values and traditions, often reverberating across multiple generations. Veterans frequently speak about their military units as if they were family, given the unbreakable bonds that develop between comrades. However, for some veterans, “brothers in arms” is more than a figurative turn of phrase. Throughout the 20th century, entire families felt the firsthand effects of war, with multiple generations serving. Brothers enlisted together. A father’s military legacy inspired his children to join up. Sweethearts met and married while in uniform. These stories not only illustrate the experiences of individual veterans but also provide an intimate glimpse into family legacies of military service.

Consider the Veterans History Project, a program overseen by the Library of Congress, which collects and preserves the firsthand remembrances of U.S. military veterans and makes them accessible for future generations to better understand veterans’ service and sacrifice. These personal stories encompass original correspondence, memoirs, diaries, photographs and oral history interviews, all offering deeper insight into the long-term impact of military service. Veterans’ narratives are collected by volunteers, and anyone who served from World War I to today can submit their personal story, regardless of whether they saw combat.

The collections frequently shed light on the importance of family in military experiences. Whether expressed through heartfelt letters home, enduring family legacies of service or the experience of serving alongside loved ones, these stories reflect profound connections.
Family Identity
During the Cold War, Jennifer McNeill rose from Army Dental Assistant to Command Sergeant Major at the Army Eisenhower Medical Center in Fort Gordon, Georgia. Her collection includes a poignant photograph of her mother sharing images of her four military daughters in uniform, underscoring how family identity and military service are closely connected.

Values
Military service makes a lasting impression on veterans, shaping the experiences and the values that guide them through life. Ray Chavez is one such example. He was the oldest known Pearl Harbor survivor before his passing in 2018. For most of his life, he remained silent about his experiences, but in 1991, his daughter, Kathleen Chavez, who served in the U.S. Navy during Desert Storm, convinced him to return to Pearl Harbor. That trip marked the first time he spoke openly about his service. Kathleen shared their family’s deep military legacy in her oral history for the Veterans History Project.

Traditions Across Generations
Serving in the military is a deeply personal journey, but for many veterans, it’s an experience that transcends generations. Bill Toledo enlisted in the Marine Corps in October 1942 at the age of 18. Along with his uncle, Frank Toledo, and cousin, Preston Toledo, he served as a Code Talker, transmitting military messages through secret codes. In his oral history, Bill vividly recalled both the challenges of combat during the invasion of Iwo Jima in February 1945 and the treasured moments spent with his uncle.

These and many other family stories of military service and remembrance are available to the public at loc.gov/vets.

Photo courtesy of Shutterstock (men looking at scrapbook)
Photo courtesy of the Library of Congress (man and woman on park bench)
Entertainment
Smart Gaming: How Parents Can Keep Kids Safe Online
Parents can enhance kids’ safety during online gaming by using privacy settings, researching games, enabling age checks, keeping personal information private, and utilizing parental controls and security tools.
Last Updated on January 21, 2026 by Daily News Staff
Smart Gaming: How Parents Can Keep Kids Safe Online
(Family Features) Playing video games can be a fun, social experience. However, online gaming also poses real risks, especially for kids. As a parent, you don’t necessarily need to be a gamer yourself to help keep your children safe when the controller is in their hands.
Consider taking proactive steps like these to create a healthy online gaming environment for kids of all ages.
Check System Privacy Settings
As a first line of defense – before your child even starts gaming – spend some time in the device or console privacy settings. Here you can turn off sharing, disable location tracking, limit microphone and camera access and restrict how other users can interact with your child’s profile. Similarly, many games and platforms include built-in privacy settings that can be tailored to your child’s age and online experience. These settings may allow you to limit who can view your child’s profile or send a friend request, message or voice chat.
Research Games
Because not all games are created equal, look up game ratings through a service such as ESRB before buying or downloading to understand the maturity level of the game and determine if it’s appropriate for your child. To take it a step further, read reviews from other parents or watch gameplay videos to see if you deem not only the content but also the social interaction acceptable.
Use Facial Age Estimation
Online platforms are increasingly looking for ways to keep users safe, and that includes added levels of verification. As part of a multilayered approach to safety, Roblox is the first online gaming platform to require age checks for users of all ages to access chat features, enabling age-appropriate communication and limiting conversations between adults and minors. These age checks are designed to be fast, easy and secure, using Facial Age Estimation technology directly within the app.
“Our commitment to safety is rooted in delivering the highest level of protection for our users,” said Matt Kaufman, chief safety officer at Roblox. “By building proactive, age-based barriers, we can empower users to create and connect in ways that are both safe and appropriate.”
Once age-checked, users are assigned to one of six age groups: under 9, 9-12, 13-15, 16-17, 18-20 or 21 and older, ensuring conversations are safe and age appropriate. Age checks are optional; however, features like chat will not be accessible unless an age check is completed. Chat is also turned off by default for children under age 9, unless a parent provides consent after an age check.
Keep Personal Information Private
It’s seldom a bad idea to be extra cautious when interacting with strangers online, even if they seem friendly enough while playing the game. Teach children what information not to share, including their full name, address, birthday, school name, phone number, email address, passwords or any photos that may contain any personal information (like a house number or school logo) in the background. Also encourage a screen name and generic avatar for added privacy.
Turn on Parental Controls
Designed to allow parents a supervisory role in their child’s online gaming experience, parental controls on many platforms include the ability to set schedules and limit playtime, restrict access to certain content or social features, require a password for purchases or set a spending limit.
Avoid Clicking Unfamiliar Links
Player profiles and in-game chats may include links to external sites, including those promising rewards or cheat codes. Because such links can be used to gain access to personal information, remind your children to ask an adult to verify that any unfamiliar links are trustworthy before clicking them while gaming.
Employ Privacy and Security Tools
While system or console-specific settings allow parents to set content restrictions, approve downloads, manage friends lists and more, additional layers of security are sometimes necessary. Extra safeguards such as antivirus and internet security software, DNS (domain name system) filtering and two-factor authentication can also be enabled to help keep kids safe online.
For more tools to help parents make informed decisions and support their children’s gaming experience, visit corp.roblox.com/safety.
Photo courtesy of Shutterstock (father and daughter playing video game)
SOURCE:
Roblox
Our Lifestyle section on STM Daily News is a hub of inspiration and practical information, offering a range of articles that touch on various aspects of daily life. From tips on family finances to guides for maintaining health and wellness, we strive to empower our readers with knowledge and resources to enhance their lifestyles. Whether you’re seeking outdoor activity ideas, fashion trends, or travel recommendations, our lifestyle section has got you covered. Visit us today at https://stmdailynews.com/category/lifestyle/ and embark on a journey of discovery and self-improvement.
