
Artificial Intelligence

As OpenAI attracts billions in new investment, its goal of balancing profit with purpose is getting more challenging to pull off


What’s in store for OpenAI is the subject of many anonymously sourced reports. AP Photo/Michael Dwyer

Alnoor Ebrahim, Tufts University

OpenAI, the artificial intelligence company that developed the popular ChatGPT chatbot and the text-to-art program DALL-E, is at a crossroads. On Oct. 2, 2024, it announced that it had obtained US$6.6 billion in new funding from investors and that the business was worth an estimated $157 billion – placing it among the handful of startups ever valued at more than $100 billion.

Unlike other big tech companies, OpenAI is a nonprofit with a for-profit subsidiary that is overseen by a nonprofit board of directors. Since its founding in 2015, OpenAI’s official mission has been “to build artificial general intelligence (AGI) that is safe and benefits all of humanity.”

By late September 2024, The Associated Press, Reuters, The Wall Street Journal and many other media outlets were reporting that OpenAI plans to discard its nonprofit status and become a for-profit tech company managed by investors. These stories have all cited anonymous sources. The New York Times, referencing documents from the recent funding round, reported that unless this change happens within two years, the $6.6 billion in equity would become debt owed to the investors who provided that funding.

The Conversation U.S. asked Alnoor Ebrahim, a Tufts University management scholar, to explain why OpenAI’s leaders’ reported plans to change its structure would be significant and potentially problematic.

How have its top executives and board members responded?

There has been a lot of leadership turmoil at OpenAI. The disagreements boiled over in November 2023, when its board briefly ousted Sam Altman, its CEO. He got his job back in less than a week, and then three board members resigned. The departing directors were advocates for building stronger guardrails and encouraging regulation to protect humanity from potential harms posed by AI.

Over a dozen senior staff members have quit since then, including several other co-founders and executives responsible for overseeing OpenAI’s safety policies and practices. At least two of them have joined Anthropic, a rival founded by a former OpenAI executive responsible for AI safety. Some of the departing executives say that Altman has pushed the company to launch products prematurely.

Safety “has taken a backseat to shiny products,” said OpenAI’s former safety team leader Jan Leike, who quit in May 2024.

OpenAI CEO Sam Altman, center, speaks at an event in September 2024. Bryan R. Smith/Pool Photo via AP

Why would OpenAI’s structure change?

OpenAI’s deep-pocketed investors cannot own shares in the organization under its existing nonprofit governance structure, nor can they get a seat on its board of directors. That’s because OpenAI is incorporated as a nonprofit whose purpose is to benefit society rather than private interests. Until now, all rounds of investments, including a reported total of $13 billion from Microsoft, have been channeled through a for-profit subsidiary that belongs to the nonprofit.

The current structure allows OpenAI to accept money from private investors in exchange for a future portion of its profits. But those investors do not get a voting seat on the board, and their profits are “capped.” According to information previously made public, OpenAI’s original investors can’t earn more than 100 times the money they provided. The goal of this hybrid governance model is to balance profits with OpenAI’s safety-focused mission.

Becoming a for-profit enterprise would make it possible for its investors to acquire ownership stakes in OpenAI and no longer have to face a cap on their potential profits. Down the road, OpenAI could also go public and raise capital on the stock market.

Altman reportedly seeks to personally acquire a 7% equity stake in OpenAI, according to a Bloomberg article that cited unnamed sources.


That arrangement is not allowed for nonprofit executives, according to BoardSource, an association of nonprofit board members and executives. Instead, the association explains, nonprofits “must reinvest surpluses back into the organization and its tax-exempt purpose.”

What kind of company might OpenAI become?

The Washington Post and other media outlets have reported, also citing unnamed sources, that OpenAI might become a “public benefit corporation” – a business that aims to benefit society and earn profits.

Examples of businesses with this status, known as B Corps, include outdoor clothing and gear company Patagonia and eyewear maker Warby Parker.

It’s more typical that a for-profit business – not a nonprofit – becomes a benefit corporation, according to B Lab, a network that sets standards and offers certification for B Corps. It is unusual for a nonprofit to do this because nonprofit governance already requires those groups to benefit society.

Boards of companies with this legal status are free to consider the interests of society, the environment and people who aren’t its shareholders, but that is not required. The board may still choose to make profits a top priority and can drop its benefit status to satisfy its investors. That is what online craft marketplace Etsy did in 2017, two years after becoming a publicly traded company.

In my view, any attempt to convert a nonprofit into a public benefit corporation is a clear move away from focusing on the nonprofit’s mission. And there will be a risk that becoming a benefit corporation would just be a ploy to mask a shift toward focusing on revenue growth and investors’ profits.

Many legal scholars and other experts are predicting that OpenAI will not do away with its hybrid ownership model entirely because of legal restrictions on the placement of nonprofit assets in private hands.

But I think OpenAI has a possible workaround: It could try to dilute the nonprofit’s control by making it a minority shareholder in a new for-profit structure. This would effectively eliminate the nonprofit board’s power to hold the company accountable. Such a move could lead to an investigation by the office of the relevant state attorney general and potentially by the Internal Revenue Service.

What could happen if OpenAI turns into a for-profit company?

The stakes for society are high.

AI’s potential harms are wide-ranging, and some are already apparent, such as deceptive political campaigns and bias in health care.


If OpenAI, an industry leader, begins to focus more on earning profits than ensuring AI’s safety, I believe that these dangers could get worse. Geoffrey Hinton, who won the 2024 Nobel Prize in physics for his artificial intelligence research, has cautioned that AI may exacerbate inequality by replacing “lots of mundane jobs.” He believes that there’s a 50% probability “that we’ll have to confront the problem of AI trying to take over” from humanity.

And even if OpenAI did retain board members for whom safety is a top concern, the only common denominator for the members of its new corporate board would be their obligation to protect the interests of the company’s shareholders, who would expect to earn a profit. While such expectations are common on a for-profit board, they constitute a conflict of interest on a nonprofit board where mission must come first and board members cannot benefit financially from the organization’s work.

The arrangement would, no doubt, please OpenAI’s investors. But would it be good for society? The purpose of nonprofit control over a for-profit subsidiary is to ensure that profit does not interfere with the nonprofit’s mission. Without guardrails to ensure that the board seeks to limit harm to humanity from AI, there would be little reason for it to prevent the company from maximizing profit, even if its chatbots and other AI products endanger society.

Regardless of what OpenAI does, most artificial intelligence companies are already for-profit businesses. So, in my view, the only way to manage the potential harms is through better industry standards and regulations that are starting to take shape.

California’s governor vetoed such a bill in September 2024 on the grounds it would slow innovation – but I believe slowing it down is exactly what is needed, given the dangers AI already poses to society.

Alnoor Ebrahim, Thomas Schmidheiny Professor of International Business, The Fletcher School & Tisch College of Civic Life, Tufts University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The science section of our news blog STM Daily News provides readers with captivating and up-to-date information on the latest scientific discoveries, breakthroughs, and innovations across various fields. We offer engaging and accessible content, ensuring that readers with different levels of scientific knowledge can stay informed. Whether it’s exploring advancements in medicine, astronomy, technology, or environmental sciences, our science section strives to shed light on the intriguing world of scientific exploration and its profound impact on our daily lives. From thought-provoking articles to informative interviews with experts in the field, STM Daily News Science offers a harmonious blend of factual reporting, analysis, and exploration, making it a go-to source for science enthusiasts and curious minds alike. https://stmdailynews.com/category/science/




Child Education

Special Education Is Turning to AI to Fill Staffing Gaps—But Privacy and Bias Risks Remain

With special education staffing shortages worsening, schools are using AI to draft IEPs, support training, and assist assessments. Experts warn the benefits come with major risks—privacy, bias, and trust.


Seth King, University of Iowa


In special education in the U.S., funding is scarce and personnel shortages are pervasive, leaving many school districts struggling to hire qualified and willing practitioners.

Amid these long-standing challenges, there is rising interest in using artificial intelligence tools to help close some of the gaps that districts currently face and lower labor costs.

Over 7 million children receive federally funded entitlements under the Individuals with Disabilities Education Act, which guarantees students access to instruction tailored to their unique physical and psychological needs, as well as legal processes that allow families to negotiate support. Special education involves a range of professionals, including rehabilitation specialists, speech-language pathologists and classroom teaching assistants. But these specialists are in short supply, despite the proven need for their services.

As an associate professor in special education who works with AI, I see its potential and its pitfalls. While AI systems may be able to reduce administrative burdens, deliver expert guidance and help overwhelmed professionals manage their caseloads, they can also present ethical challenges – ranging from machine bias to broader issues of trust in automated systems. They also risk amplifying existing problems with how special ed services are delivered.

Yet some in the field are opting to test out AI tools, rather than waiting for a perfect solution.

A faster IEP, but how individualized?

AI is already shaping special education planning, personnel preparation and assessment.

One example is the individualized education program, or IEP, the primary instrument for guiding which services a child receives. An IEP draws on a range of assessments and other data to describe a child’s strengths, determine their needs and set measurable goals. Every part of this process depends on trained professionals.

But persistent workforce shortages mean districts often struggle to complete assessments, update plans and integrate input from parents. Most districts develop IEPs using software that requires practitioners to choose from a generalized set of rote responses or options, leading to a level of standardization that can fail to meet a child’s true individual needs.

Preliminary research has shown that large language models such as ChatGPT can be adept at generating key special education documents such as IEPs by drawing on multiple data sources, including information from students and families. Chatbots that can quickly craft IEPs could potentially help special education practitioners better meet the needs of individual children and their families. Some professional organizations in special education have even encouraged educators to use AI for documents such as lesson plans.
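To make that workflow concrete, here is a minimal sketch of how a district tool might ask a large language model to draft one section of an IEP from de-identified assessment summaries. It uses the OpenAI Python SDK, but the model name, data fields and prompt wording are illustrative assumptions rather than a description of any system schools actually use, and a real deployment would have to address the privacy and bias concerns discussed later in this article.

```python
# Hypothetical sketch: drafting one IEP goal section from de-identified
# assessment summaries with a large language model (OpenAI Python SDK).
# Model name, prompt and data fields are assumptions for illustration only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# De-identified inputs a practitioner might assemble (no names or record IDs)
assessment_summary = {
    "grade_level": "3rd grade",
    "reading_level": "about 1.5 grade levels below peers",
    "strengths": "strong listening comprehension; engaged during read-alouds",
    "needs": "decoding multisyllabic words; reading fluency",
    "family_input": "family requests support with homework routines",
}

prompt = (
    "Draft one measurable annual reading goal and two short-term objectives "
    "for an IEP based on this de-identified summary. Use clear, "
    "family-friendly language.\n\n"
    + "\n".join(f"{key}: {value}" for key, value in assessment_summary.items())
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any chat-capable model could be swapped in
    messages=[
        {"role": "system",
         "content": "You help special educators draft IEP language. The output "
                    "is a draft for professional review, not a final plan."},
        {"role": "user", "content": prompt},
    ],
)

print(response.choices[0].message.content)  # practitioner reviews and individualizes
```

Even in a sketch like this, the practitioner remains the author of record: the model’s output is a starting draft to be checked against the actual assessment data, not a finished plan.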

Training and diagnosing disabilities

There is also potential for AI systems to help support professional training and development. My own work on personnel development combines several AI applications with virtual reality to enable practitioners to rehearse instructional routines before working directly with children. Here, AI can function as a practical extension of existing training models, offering repeated practice and structured support in ways that are difficult to sustain with limited personnel.


Some districts have begun using AI for assessments, which can involve a range of academic, cognitive and medical evaluations. AI applications that pair automatic speech recognition and language processing are now being employed in computer-mediated oral reading assessments to score tests of student reading ability.
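As a rough illustration of what the scoring step involves, the sketch below aligns a speech recognizer’s transcript of a child’s oral reading with the target passage and counts words read correctly. The transcript string is a made-up stand-in for real ASR output; production systems also account for timing, hesitations, self-corrections and background noise, which this toy example ignores.

```python
# Toy sketch of scoring a computer-mediated oral reading assessment:
# align an ASR transcript with the target passage and count words read correctly.
# The transcript below is an assumed stand-in for real speech recognizer output.
from difflib import SequenceMatcher

def words_correct(target_text: str, asr_transcript: str) -> tuple[int, int]:
    """Return (correct_words, total_target_words) from a word-level alignment."""
    target = target_text.lower().split()
    spoken = asr_transcript.lower().split()
    matcher = SequenceMatcher(a=target, b=spoken)
    correct = sum(block.size for block in matcher.get_matching_blocks())
    return correct, len(target)

passage = "The small dog ran across the wide green field to find its ball"
transcript = "the small dog ran across the green field to find the ball"

correct, total = words_correct(passage, transcript)
print(f"{correct}/{total} words read correctly ({correct / total:.0%})")
# A fuller assessment would also use reading time to estimate fluency
# (words correct per minute) and log error types for instructional planning.
```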

Practitioners often struggle to make sense of the volume of data that schools collect. AI-driven machine learning tools also can help here, by identifying patterns that may not be immediately visible to educators for evaluation or instructional decision-making. Such support may be especially useful in diagnosing disabilities such as autism or learning disabilities, where masking, variable presentation and incomplete histories can make interpretation difficult. My ongoing research shows that current AI can make predictions based on data likely to be available in some districts.

Privacy and trust concerns

There are serious ethical – and practical – questions about these AI-supported interventions, ranging from risks to students’ privacy to machine bias and deeper issues tied to family trust. Some hinge on the question of whether or not AI systems can deliver services that truly comply with existing law.

The Individuals with Disabilities Education Act requires nondiscriminatory methods of evaluating disabilities to avoid inappropriately identifying students for services or neglecting to serve those who qualify. And the Family Educational Rights and Privacy Act explicitly protects students’ data privacy and the rights of parents to access and hold their children’s data.

What happens if an AI system uses biased data or methods to generate a recommendation for a child? What if a child’s data is misused or leaked by an AI system? Using AI systems to perform some of the functions described above puts families in a position where they are expected to put their faith not only in their school district and its special education personnel, but also in commercial AI systems, the inner workings of which are largely inscrutable.

These ethical qualms are hardly unique to special ed; many have been raised in other fields and addressed by early adopters. For example, while automatic speech recognition, or ASR, systems have struggled to accurately assess accented English, many vendors now train their systems to accommodate specific ethnic and regional accents.

But ongoing research work suggests that some ASR systems are limited in their capacity to accommodate speech differences associated with disabilities, account for classroom noise, and distinguish between different voices. While these issues may be addressed through technical improvement in the future, they are consequential at present.

Embedded bias

At first glance, machine learning models might appear to improve on traditional clinical decision-making. Yet AI models must be trained on existing data, meaning their decisions may continue to reflect long-standing biases in how disabilities have been identified.

Indeed, research has shown that AI systems are routinely hobbled by biases within both training data and system design. AI models can also introduce new biases, either by missing subtle information revealed during in-person evaluations or by overrepresenting characteristics of groups included in the training data.

Such concerns, defenders might argue, are addressed by safeguards already embedded in federal law. Families have considerable latitude in what they agree to, and can opt for alternatives, provided they are aware they can direct the IEP process.


By a similar token, using AI tools to build IEPs or lessons may seem like an obvious improvement over underdeveloped or perfunctory plans. Yet true individualization would require feeding protected data into large language models, which could violate privacy regulations. And while AI applications can readily produce better-looking IEPs and other paperwork, this does not necessarily result in improved services.

Filling the gap

Indeed, it is not yet clear whether AI provides a standard of care equivalent to the high-quality, conventional treatment to which children with disabilities are entitled under federal law.

The Supreme Court in 2017 rejected the notion that the Individuals with Disabilities Education Act merely entitles students to trivial, “de minimis” progress, which weakens one of the primary rationales for pursuing AI – that it can meet a minimum standard of care and practice. And since AI really has not been empirically evaluated at scale, it has not been proved that it adequately meets the low bar of simply improving beyond the flawed status quo.

But this does not change the reality of limited resources. For better or worse, AI is already being used to fill the gap between what the law requires and what the system actually provides.

Seth King, Associate Professor of Special Education, University of Iowa

This article is republished from The Conversation under a Creative Commons license. Read the original article.




Science

AI-induced cultural stagnation is no longer speculation − it’s already happening

A 2026 study revealed that when generative AI operates autonomously, it produces homogeneous content – what the researchers called “visual elevator music” – despite diverse starting prompts. This convergence toward bland outputs points to a risk of cultural stagnation as AI perpetuates familiar themes, potentially limiting innovation and diversity in creative expression.


When generative AI was left to its own devices, its outputs landed on a set of generic images – what researchers called ‘visual elevator music.’ Wang Zhao/AFP via Getty Images

Ahmed Elgammal, Rutgers University

Generative AI was trained on centuries of art and writing produced by humans.

But scientists and critics have wondered what would happen once AI became widely adopted and started training on its outputs.

A new study points to some answers.

In January 2026, artificial intelligence researchers Arend Hintze, Frida Proschinger Åström and Jory Schossau published a study showing what happens when generative AI systems are allowed to run autonomously – generating and interpreting their own outputs without human intervention.

The researchers linked a text-to-image system with an image-to-text system and let them iterate – image, caption, image, caption – over and over and over.

Regardless of how diverse the starting prompts were – and regardless of how much randomness the systems were allowed – the outputs quickly converged onto a narrow set of generic, familiar visual themes: atmospheric cityscapes, grandiose buildings and pastoral landscapes. Even more striking, the system quickly “forgot” its starting prompt.
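The structure of that loop is simple enough to sketch in a few lines of Python. The generate_image and caption_image functions below are placeholders for whichever text-to-image and image-to-text models get plugged in; they are assumptions for illustration, not the study’s actual code.

```python
# Minimal sketch of the closed loop: text -> image -> caption -> image -> ...,
# with no human intervention. generate_image() and caption_image() stand in for
# whatever text-to-image and image-to-text models are used; they are placeholders,
# not the researchers' actual implementation.

def run_loop(start_prompt, steps, generate_image, caption_image):
    """Alternate image generation and captioning, keeping each caption."""
    captions = [start_prompt]
    prompt = start_prompt
    for _ in range(steps):
        image = generate_image(prompt)   # text -> image
        prompt = caption_image(image)    # image -> text, fed back as the next prompt
        captions.append(prompt)
    return captions

# Example usage with stand-in models (swap in real model calls to reproduce the setup):
if __name__ == "__main__":
    fake_generate = lambda text: f"<image of: {text[:40]}...>"
    fake_caption = lambda image: "A grand interior with red curtains and soft light."
    history = run_loop("The Prime Minister pored over strategy documents...",
                       5, fake_generate, fake_caption)
    for step, caption in enumerate(history):
        print(step, caption)
```

Because each new image depends only on the most recent caption, the original prompt has no lasting influence, which helps explain why the system so quickly “forgot” where it started.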

The researchers called the outcomes “visual elevator music” – pleasant and polished, yet devoid of any real meaning.

For example, they started with the image prompt, “The Prime Minister pored over strategy documents, trying to sell the public on a fragile peace deal while juggling the weight of his job amidst impending military action.” The resulting image was then captioned by AI. This caption was used as a prompt to generate the next image.

After repeating this loop, the researchers ended up with a bland image of a formal interior space – no people, no drama, no real sense of time and place.

A prompt that begins with a prime minister under stress ends with an image of an empty room with fancy furnishings. Arend Hintze, Frida Proschinger Åström and Jory Schossau, CC BY

As a computer scientist who studies generative models and creativity, I see the findings from this study as an important piece of the debate over whether AI will lead to cultural stagnation.

The results show that generative AI systems themselves tend toward homogenization when used autonomously and repeatedly. They even suggest that AI systems are currently operating in this way by default.


The familiar is the default

This experiment may appear beside the point: Most people don’t ask AI systems to endlessly describe and regenerate their own images. Yet the convergence to a set of bland, stock images happened without retraining. No new data was added. Nothing was learned. The collapse emerged purely from repeated use.

But I think the setup of the experiment can be thought of as a diagnostic tool. It reveals what generative systems preserve when no one intervenes.

Pretty … boring. Chris McLoughlin/Moment via Getty Images

This has broader implications, because modern culture is increasingly influenced by exactly these kinds of pipelines. Images are summarized into text. Text is turned into images. Content is ranked, filtered and regenerated as it moves between words, images and videos. New articles on the web are now more likely to be written by AI than humans. Even when humans remain in the loop, they are often choosing from AI-generated options rather than starting from scratch.

The findings of this recent study show that the default behavior of these systems is to compress meaning toward what is most familiar, recognizable and easy to regenerate.

Cultural stagnation or acceleration?

For the past few years, skeptics have warned that generative AI could lead to cultural stagnation by flooding the web with synthetic content that future AI systems then train on. Over time, the argument goes, this recursive loop would narrow diversity and innovation.

Champions of the technology have pushed back, pointing out that fears of cultural decline accompany every new technology. Humans, they argue, will always be the final arbiter of creative decisions.

What has been missing from this debate is empirical evidence showing where homogenization actually begins.

The new study does not test retraining on AI-generated data. Instead, it shows something more fundamental: Homogenization happens before retraining even enters the picture. The content that generative AI systems naturally produce – when used autonomously and repeatedly – is already compressed and generic.

This reframes the stagnation argument. The risk is not only that future models might train on AI-generated content, but that AI-mediated culture is already being filtered in ways that favor the familiar, the describable and the conventional.

Retraining would amplify this effect. But it is not its source.

This is no moral panic

Skeptics are right about one thing: Culture has always adapted to new technologies. Photography did not kill painting. Film did not kill theater. Digital tools have enabled new forms of expression.


But those earlier technologies never forced culture to be endlessly reshaped across various mediums at a global scale. They did not summarize, regenerate and rank cultural products – news stories, songs, memes, academic papers, photographs or social media posts – millions of times per day, guided by the same built-in assumptions about what is “typical.”

The study shows that when meaning is forced through such pipelines repeatedly, diversity collapses not because of bad intentions, malicious design or corporate negligence, but because only certain kinds of meaning survive the text-to-image-to-text repeated conversions.

This does not mean cultural stagnation is inevitable. Human creativity is resilient. Institutions, subcultures and artists have always found ways to resist homogenization. But in my view, the findings of the study show that stagnation is a real risk – not a speculative fear – if generative systems are left to operate in their current iteration.

They also help clarify a common misconception about AI creativity: Producing endless variations is not the same as producing innovation. A system can generate millions of images while exploring only a tiny corner of cultural space.

In my own research on creative AI, I found that novelty requires designing AI systems with incentives to deviate from the norm. Without such incentives, systems optimize for familiarity because familiarity is what they have learned best. The study reinforces this point empirically. Autonomy alone does not guarantee exploration. In some cases, it accelerates convergence.

This pattern already emerged in the real world: One study found that AI-generated lesson plans featured the same drift toward conventional, uninspiring content, underscoring that AI systems converge toward what’s typical rather than what’s unique or creative.

AI’s outputs are familiar because they revert to average displays of human creativity. Bulgac/iStock via Getty Images

Lost in translation

Whenever you write a caption for an image, details will be lost. Likewise for generating an image from text. And this happens whether it’s being performed by a human or a machine.

In that sense, the convergence that took place is not a failure that’s unique to AI. It reflects a deeper property of bouncing from one medium to another. When meaning passes repeatedly through two different formats, only the most stable elements persist.

But by highlighting what survives during repeated translations between text and images, the authors are able to show that meaning is processed inside generative systems with a quiet pull toward the generic.

The implication is sobering: Even with human guidance – whether that means writing prompts, selecting outputs or refining results – these systems are still stripping away some details and amplifying others in ways that are oriented toward what’s “average.”

If generative AI is to enrich culture rather than flatten it, I think systems need to be designed in ways that resist convergence toward statistically average outputs. There can be rewards for deviation and support for less common and less mainstream forms of expression.


The study makes one thing clear: Absent these interventions, generative AI will continue to drift toward mediocre and uninspired content.

Cultural stagnation is no longer speculation. It’s already happening.

Ahmed Elgammal, Professor of Computer Science and Director of the Art & AI Lab, Rutgers University

This article is republished from The Conversation under a Creative Commons license. Read the original article.




Artificial Intelligence

More than half of new articles on the internet are being written by AI – is human writing headed for extinction?

A new study finds over 50% of online articles are now AI-generated, raising questions about the future of human writing. Discover why formulaic content is most at risk, and why authentic, creative voices may become more valuable than ever.


Preserving the value of real human voices will likely depend on how people adapt to artificial intelligence and collaborate with it. BlackJack3D/E+ via Getty Images


Francesco Agnellini, Binghamton University, State University of New York

The line between human and machine authorship is blurring, particularly as it’s become increasingly difficult to tell whether something was written by a person or AI. Now, in what may seem like a tipping point, the digital marketing firm Graphite recently published a study showing that more than 50% of articles on the web are being generated by artificial intelligence.

As a scholar who explores how AI is built, how people are using it in their everyday lives, and how it’s affecting culture, I’ve thought a lot about what this technology can do and where it falls short. If you’re more likely to read something written by AI than by a human on the internet, is it only a matter of time before human writing becomes obsolete? Or is this simply another technological development that humans will adapt to?

It isn’t all or nothing

Thinking about these questions reminded me of Umberto Eco’s essay “Apocalyptic and Integrated,” which was originally written in the early 1960s. Parts of it were later included in an anthology titled “Apocalypse Postponed,” which I first read as a college student in Italy.

In it, Eco draws a contrast between two attitudes toward mass media. There are the “apocalyptics” who fear cultural degradation and moral collapse. Then there are the “integrated” who champion new media technologies as a democratizing force for culture.
Italian philosopher, cultural critic and novelist Umberto Eco cautioned against overreacting to the impact of new technologies. Leonardo Cendamo/Getty Images
Back then, Eco was writing about the proliferation of TV and radio. Today, you’ll often see similar reactions to AI. Yet Eco argued that both positions were too extreme. It isn’t helpful, he wrote, to see new media as either a dire threat or a miracle. Instead, he urged readers to look at how people and communities use these new tools, what risks and opportunities they create, and how they shape – and sometimes reinforce – power structures.

While I was teaching a course on deepfakes during the 2024 election, Eco’s lesson also came back to me. Those were days when some scholars and media outlets were regularly warning of an imminent “deepfake apocalypse.” Would deepfakes be used to mimic major political figures and push targeted disinformation? What if, on the eve of an election, generative AI was used to mimic the voice of a candidate on a robocall telling voters to stay home?

Those fears weren’t groundless: Research shows that people aren’t especially good at identifying deepfakes. At the same time, they consistently overestimate their ability to do so. In the end, though, the apocalypse was postponed. Post-election analyses found that deepfakes did seem to intensify some ongoing political trends, such as the erosion of trust and polarization, but there’s no evidence that they affected the final outcome of the election.

Listicles, news updates and how-to guides

Of course, the fears that AI raises for supporters of democracy are not the same as those it creates for writers and artists. For them, the core concerns are about authorship: How can one person compete with a system trained on millions of voices that can produce text at hyper-speed? And if this becomes the norm, what will it do to creative work, both as an occupation and as a source of meaning?

It’s important to clarify what’s meant by “online content,” the phrase used in the Graphite study, which analyzed over 65,000 randomly selected articles of at least 100 words on the web. These can include anything from peer-reviewed research to promotional copy for miracle supplements. A closer reading of the Graphite study shows that the AI-generated articles consist largely of general-interest writing: news updates, how-to guides, lifestyle posts, reviews and product explainers.

The primary economic purpose of this content is to persuade or inform, not to express originality or creativity. Put differently, AI appears to be most useful when the writing in question is low-stakes and formulaic: the weekend-in-Rome listicle, the standard cover letter, the text produced to market a business.

A whole industry of writers – mostly freelance, including many translators – has relied on precisely this kind of work, producing blog posts, how-to material, search engine optimization text and social media copy. The rapid adoption of large language models has already displaced many of the gigs that once sustained them.

Collaborating with AI

The dramatic loss of this work points toward another issue raised by the Graphite study: the question of authenticity, not only in identifying who or what produced a text, but also in understanding the value that humans attach to creative activity. How can you distinguish a human-written article from a machine-generated one? And does that ability even matter?

Over time, that distinction is likely to grow less significant, particularly as more writing emerges from interactions between humans and AI. A writer might draft a few lines, let an AI expand them and then reshape that output into the final text. This article is no exception. As a non-native English speaker, I often rely on AI to refine my language before sending drafts to an editor. At times the system attempts to reshape what I mean. But once its stylistic tendencies become familiar, it becomes possible to avoid them and maintain a personal tone.

Also, artificial intelligence is not entirely artificial, since it is trained on human-made material. It’s worth noting that even before AI, human writing was never entirely human, either. Every technology, from parchment and stylus to the typewriter and now AI, has shaped how people write and how readers make sense of it.

Another important point: AI models are increasingly trained on datasets that include not only human writing but also AI-generated and human-AI co-produced text. This has raised concerns about their ability to continue improving over time. Some commentators have already described a sense of disillusionment following the release of newer large models, with companies struggling to deliver on their promises.

Human voices may matter even more

But what happens when people become overly reliant on AI in their writing? Some studies show that writers may feel more creative when they use artificial intelligence for brainstorming, yet the range of ideas often becomes narrower. This uniformity affects style as well: These systems tend to pull users toward similar patterns of wording, which reduces the differences that usually mark an individual voice. Researchers also note a shift toward Western – and especially English-speaking – norms in the writing of people from other cultures, raising concerns about a new form of AI colonialism.

In this context, texts that display originality, voice and stylistic intention are likely to become even more meaningful within the media landscape, and they may play a crucial role in training the next generations of models. If you set aside the more apocalyptic scenarios and assume that AI will continue to advance – perhaps at a slower pace than in the recent past – it’s quite possible that thoughtful, original, human-generated writing will become even more valuable.

Put another way: The work of writers, journalists and intellectuals will not become superfluous simply because much of the web is no longer written by humans.

Francesco Agnellini, Lecturer in Digital and Data Studies, Binghamton University, State University of New York

This article is republished from The Conversation under a Creative Commons license. Read the original article.
 

Dive into “The Knowledge,” where curiosity meets clarity. This playlist, in collaboration with STMDailyNews.com, is designed for viewers who value historical accuracy and insightful learning. Our short videos, ranging from 30 seconds to a minute and a half, make complex subjects easy to grasp in no time. Covering everything from historical events to contemporary processes and entertainment, “The Knowledge” bridges the past with the present. In a world where information is abundant yet often misused, our series aims to guide you through the noise, preserving vital knowledge and truths that shape our lives today. Perfect for curious minds eager to discover the ‘why’ and ‘how’ of everything around us. Subscribe and join in as we explore the facts that matter.  https://stmdailynews.com/the-knowledge/



