
Artificial Intelligence

As OpenAI attracts billions in new investment, its goal of balancing profit with purpose is getting more challenging to pull off


What’s in store for OpenAI is the subject of many anonymously sourced reports. AP Photo/Michael Dwyer

Alnoor Ebrahim, Tufts University

OpenAI, the artificial intelligence company that developed the popular ChatGPT chatbot and the text-to-art program DALL-E, is at a crossroads. On Oct. 2, 2024, it announced that it had obtained US$6.6 billion in new funding from investors and that the business was worth an estimated $157 billion – making it only the second startup ever to be valued at over $100 billion.

Unlike other big tech companies, OpenAI is a nonprofit with a for-profit subsidiary that is overseen by a nonprofit board of directors. Since its founding in 2015, OpenAI’s official mission has been “to build artificial general intelligence (AGI) that is safe and benefits all of humanity.”

By late September 2024, The Associated Press, Reuters, The Wall Street Journal and many other media outlets were reporting that OpenAI plans to discard its nonprofit status and become a for-profit tech company managed by investors. These stories have all cited anonymous sources. The New York Times, referencing documents from the recent funding round, reported that unless this change happens within two years, the $6.6 billion in equity would become debt owed to the investors who provided that funding.

The Conversation U.S. asked Alnoor Ebrahim, a Tufts University management scholar, to explain why OpenAI’s leaders’ reported plans to change its structure would be significant and potentially problematic.

How have its top executives and board members responded?

There has been a lot of leadership turmoil at OpenAI. The disagreements boiled over in November 2023, when its board briefly ousted Sam Altman, its CEO. He got his job back in less than a week, and then three board members resigned. The departing directors were advocates for building stronger guardrails and encouraging regulation to protect humanity from potential harms posed by AI.

Over a dozen senior staff members have quit since then, including several other co-founders and executives responsible for overseeing OpenAI’s safety policies and practices. At least two of them have joined Anthropic, a rival founded by a former OpenAI executive responsible for AI safety. Some of the departing executives say that Altman has pushed the company to launch products prematurely.

Safety “has taken a backseat to shiny products,” said OpenAI’s former safety team leader Jan Leike, who quit in May 2024.

OpenAI CEO Sam Altman, center, speaks at an event in September 2024. Bryan R. Smith/Pool Photo via AP

Why would OpenAI’s structure change?

OpenAI’s deep-pocketed investors cannot own shares in the organization under its existing nonprofit governance structure, nor can they get a seat on its board of directors. That’s because OpenAI is incorporated as a nonprofit whose purpose is to benefit society rather than private interests. Until now, all rounds of investments, including a reported total of $13 billion from Microsoft, have been channeled through a for-profit subsidiary that belongs to the nonprofit.

The current structure allows OpenAI to accept money from private investors in exchange for a future portion of its profits. But those investors do not get a voting seat on the board, and their profits are “capped.” According to information previously made public, OpenAI’s original investors can’t earn more than 100 times the money they provided. The goal of this hybrid governance model is to balance profits with OpenAI’s safety-focused mission.

Becoming a for-profit enterprise would make it possible for its investors to acquire ownership stakes in OpenAI and no longer have to face a cap on their potential profits. Down the road, OpenAI could also go public and raise capital on the stock market.

Altman reportedly seeks to personally acquire a 7% equity stake in OpenAI, according to a Bloomberg article that cited unnamed sources.

That arrangement is not allowed for nonprofit executives, according to BoardSource, an association of nonprofit board members and executives. Instead, the association explains, nonprofits “must reinvest surpluses back into the organization and its tax-exempt purpose.”

What kind of company might OpenAI become?

The Washington Post and other media outlets have reported, also citing unnamed sources, that OpenAI might become a “public benefit corporation” – a business that aims to benefit society and earn profits.

Examples of businesses with this status, known as B Corps, include outdoor clothing and gear company Patagonia and eyewear maker Warby Parker.

It’s more typical for a for-profit business – not a nonprofit – to become a benefit corporation, according to B Lab, a network that sets standards and offers certification for B Corps. It is unusual for a nonprofit to do this because nonprofit governance already requires those groups to benefit society.


Boards of companies with this legal status are free to consider the interests of society, the environment and people who aren’t its shareholders, but that is not required. The board may still choose to make profits a top priority and can drop its benefit status to satisfy its investors. That is what online craft marketplace Etsy did in 2017, two years after becoming a publicly traded company.

In my view, any attempt to convert a nonprofit into a public benefit corporation is a clear move away from focusing on the nonprofit’s mission. And there will be a risk that becoming a benefit corporation would just be a ploy to mask a shift toward focusing on revenue growth and investors’ profits.

Many legal scholars and other experts are predicting that OpenAI will not do away with its hybrid ownership model entirely because of legal restrictions on the placement of nonprofit assets in private hands.

But I think OpenAI has a possible workaround: It could try to dilute the nonprofit’s control by making it a minority shareholder in a new for-profit structure. This would effectively eliminate the nonprofit board’s power to hold the company accountable. Such a move could lead to an investigation by the office of the relevant state attorney general and potentially by the Internal Revenue Service.

What could happen if OpenAI turns into a for-profit company?

The stakes for society are high.

AI’s potential harms are wide-ranging, and some are already apparent, such as deceptive political campaigns and bias in health care.

If OpenAI, an industry leader, begins to focus more on earning profits than ensuring AI’s safety, I believe that these dangers could get worse. Geoffrey Hinton, who won the 2024 Nobel Prize in physics for his artificial intelligence research, has cautioned that AI may exacerbate inequality by replacing “lots of mundane jobs.” He believes that there’s a 50% probability “that we’ll have to confront the problem of AI trying to take over” from humanity.


And even if OpenAI did retain board members for whom safety is a top concern, the only common denominator for the members of its new corporate board would be their obligation to protect the interests of the company’s shareholders, who would expect to earn a profit. While such expectations are common on a for-profit board, they constitute a conflict of interest on a nonprofit board where mission must come first and board members cannot benefit financially from the organization’s work.

The arrangement would, no doubt, please OpenAI’s investors. But would it be good for society? The purpose of nonprofit control over a for-profit subsidiary is to ensure that profit does not interfere with the nonprofit’s mission. Without guardrails to ensure that the board seeks to limit harm to humanity from AI, there would be little reason for it to prevent the company from maximizing profit, even if its chatbots and other AI products endanger society.

Regardless of what OpenAI does, most artificial intelligence companies are already for-profit businesses. So, in my view, the only way to manage the potential harms is through better industry standards and regulations that are starting to take shape.

California’s governor vetoed such a bill in September 2024 on the grounds it would slow innovation – but I believe slowing it down is exactly what is needed, given the dangers AI already poses to society.

Alnoor Ebrahim, Thomas Schmidheiny Professor of International Business, The Fletcher School & Tisch College of Civic Life, Tufts University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


The science section of our news blog STM Daily News provides readers with captivating and up-to-date information on the latest scientific discoveries, breakthroughs, and innovations across various fields. We offer engaging and accessible content, ensuring that readers with different levels of scientific knowledge can stay informed. Whether it’s exploring advancements in medicine, astronomy, technology, or environmental sciences, our science section strives to shed light on the intriguing world of scientific exploration and its profound impact on our daily lives. From thought-provoking articles to informative interviews with experts in the field, STM Daily News Science offers a harmonious blend of factual reporting, analysis, and exploration, making it a go-to source for science enthusiasts and curious minds alike. https://stmdailynews.com/category/science/


Discover more from Daily News

Subscribe to get the latest posts sent to your email.


Tech

From shrimp Jesus to fake self-portraits, AI-generated images have become the latest form of social media spam


Many of the AI images generated by spammers and scammers have religious themes. immortal70/iStock via Getty Images

Renee DiResta, Stanford University; Abhiram Reddy, Georgetown University, and Josh A. Goldstein, Georgetown University

If you’ve spent time on Facebook over the past six months, you may have noticed photorealistic images that are too good to be true: children holding paintings that look like the work of professional artists, or majestic log cabin interiors that are the stuff of Airbnb dreams.

Others, such as renderings of Jesus made out of crustaceans, are just bizarre.

Like the AI image of the pope in a puffer jacket that went viral in March 2023, these AI-generated images are increasingly prevalent – and popular – on social media platforms. Even as many of them border on the surreal, they’re often used to bait engagement from ordinary users.

Our team of researchers from the Stanford Internet Observatory and Georgetown University’s Center for Security and Emerging Technology investigated over 100 Facebook pages that posted high volumes of AI-generated content. We published the results in March 2024 as a preprint paper, meaning the findings have not yet gone through peer review.

We explored patterns of images, unearthed evidence of coordination between some of the pages, and tried to discern the likely goals of the posters.

Page operators seemed to be posting pictures of AI-generated babies, kitchens or birthday cakes for a range of reasons.

There were content creators innocuously looking to grow their followings with synthetic content; scammers using pages stolen from small businesses to advertise products that don’t seem to exist; and spammers sharing AI-generated images of animals while referring users to websites filled with advertisements, which allow the owners to collect ad revenue without creating high-quality content.


Our findings suggest that these AI-generated images draw in users – and Facebook’s recommendation algorithm may be organically promoting these posts.

Generative AI meets scams and spam

Internet spammers and scammers are nothing new.

For more than two decades, they’ve used unsolicited bulk email to promote pyramid schemes. They’ve targeted senior citizens while posing as Medicare representatives or computer technicians.

On social media, profiteers have used clickbait articles to drive users to ad-laden websites. Recall the 2016 U.S. presidential election, when Macedonian teenagers shared sensational political memes on Facebook and collected advertising revenue after users visited the URLs they posted. The teens didn’t care who won the election. They just wanted to make a buck.

In the early 2010s, spammers captured people’s attention with ads promising that anyone could lose belly fat or learn a new language with “one weird trick.”

AI-generated content has become another “weird trick.”

It’s visually appealing and cheap to produce, allowing scammers and spammers to generate high volumes of engaging posts. Some of the pages we observed uploaded dozens of unique images per day. In doing so, they followed Meta’s own advice for page creators. Frequent posting, the company suggests, helps creators get the kind of algorithmic pickup that leads their content to appear in the “Feed,” formerly known as the “News Feed.”


Much of the content is still, in a sense, clickbait: Shrimp Jesus makes people pause to gawk and inspires shares purely because it is so bizarre.

Many users react by liking the post or leaving a comment. This signals to the algorithmic curators that perhaps the content should be pushed into the feeds of even more people.

Some of the more established spammers we observed, likely recognizing this, improved their engagement by pivoting from posting URLs to posting AI-generated images. They would then comment on the post of the AI-generated images with the URLs of the ad-laden content farms they wanted users to click.

But more ordinary creators capitalized on the engagement of AI-generated images, too, without obviously violating platform policies.

Rate ‘my’ work!

When we looked up the posts’ captions on CrowdTangle – a social media monitoring platform owned by Meta and set to sunset in August 2024 – we found that they were “copypasta” captions, meaning they were repeated across posts.

Some of the copypasta captions baited interaction by directly asking users to, for instance, rate a “painting” by a first-time artist – even when the image was generated by AI – or to wish an elderly person a happy birthday. Facebook users often replied to AI-generated images with comments of encouragement and congratulations.
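The researchers’ actual detection pipeline isn’t described here, but the core idea – spotting captions repeated verbatim across many posts – can be illustrated with a minimal sketch. All names and sample data below are hypothetical; real analysis would also catch near-duplicates, not just exact matches:

```python
from collections import Counter

def find_copypasta(posts, min_repeats=3):
    """Return captions that recur across at least `min_repeats` posts,
    after normalizing case and collapsing whitespace."""
    normalized = [" ".join(p["caption"].lower().split()) for p in posts]
    counts = Counter(normalized)
    return {caption: n for caption, n in counts.items() if n >= min_repeats}

posts = [
    {"caption": "Rate my painting! First time artist."},
    {"caption": "rate my painting!  first time artist."},
    {"caption": "Rate my painting! First time artist."},
    {"caption": "Wish grandma a happy birthday!"},
]
print(find_copypasta(posts))  # → {'rate my painting! first time artist.': 3}
```

Exact-match counting like this is often enough to reveal copypasta at scale; fuzzier techniques (shingling, embeddings) would be needed to catch lightly edited variants.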

Algorithms push AI-generated content

Our investigation noticeably altered our own Facebook feeds: Within days of visiting the pages – and without commenting on, liking or following any of the material – Facebook’s algorithm recommended reams of other AI-generated content.


Interestingly, the fact that we had viewed clusters of, for example, AI-generated miniature cow pages didn’t lead to a short-term increase in recommendations for pages focused on actual miniature cows, normal-sized cows or other farm animals. Rather, the algorithm recommended pages on a range of topics and themes, but with one thing in common: They contained AI-generated images.

In 2022, the technology website The Verge detailed an internal Facebook memo about proposed changes to the company’s algorithm.

The algorithm, according to the memo, would become a “discovery-engine,” allowing users to come into contact with posts from individuals and pages they didn’t explicitly seek out, akin to TikTok’s “For You” page.

We analyzed Facebook’s own “Widely Viewed Content Reports,” which lists the most popular content, domains, links, pages and posts on the platform per quarter.

It showed that the proportion of content that users saw from pages and people they don’t follow steadily increased between 2021 and 2023. Changes to the algorithm have allowed more room for AI-generated content to be organically recommended without prior engagement – perhaps explaining our experiences and those of other users.

‘This post was brought to you by AI’

Since Meta currently does not flag AI-generated content by default, we sometimes observed users warning others about scams or spam AI content with infographics.

Meta, however, seems to be aware of potential issues if AI-generated content blends into the information environment without notice. The company has released several announcements about how it plans to deal with AI-generated content.


Starting in May 2024, Meta will begin applying a “Made with AI” label to content it can reliably detect as synthetic.

But the devil is in the details. How accurate will the detection models be? What AI-generated content will slip through? What content will be inappropriately flagged? And what will the public make of such labels?

While our work focused on Facebook spam and scams, there are broader implications.

Reporters have written about AI-generated videos targeting kids on YouTube and influencers on TikTok who use generative AI to turn a profit.

Social media platforms will have to reckon with how to treat AI-generated content; it’s certainly possible that user engagement will wane if online worlds become filled with artificially generated posts, images and videos.

Shrimp Jesus may be an obvious fake. But the challenge of assessing what’s real is only heating up.

Renee DiResta, Research Manager of the Stanford Internet Observatory, Stanford University; Abhiram Reddy, Research Assistant at the Center for Security and Emerging Technology, Georgetown University, and Josh A. Goldstein, Research Fellow at the Center for Security and Emerging Technology, Georgetown University


This article is republished from The Conversation under a Creative Commons license. Read the original article.





Tech

Honeywell and Google Cloud to Accelerate Autonomous Operations with AI Agents for the Industrial Sector


Google Cloud AI to enhance Honeywell’s product offerings and help upskill the industrial workforce

New solutions will connect to enterprise-wide industrial data from Honeywell Forge, a leading IoT platform for industrials

CHARLOTTE, N.C. and SUNNYVALE, Calif. /PRNewswire/ — Honeywell (NASDAQ: HON) and Google Cloud announced a unique collaboration connecting artificial intelligence (AI) agents with assets, people and processes to accelerate safer, autonomous operations for the industrial sector.

The first solutions built with Google Cloud will be available to Honeywell customers in 2025.

This partnership will bring together the multimodality and natural language capabilities of Gemini on Vertex AI – Google Cloud’s AI platform – and the massive data set on Honeywell Forge, a leading Internet of Things (IoT) platform for industrials. This will unleash easy-to-understand, enterprise-wide insights across a multitude of use cases. Honeywell’s customers across the industrial sector will benefit from opportunities to reduce maintenance costs, increase operational productivity and upskill employees. The first solutions built with Google Cloud AI will be available to Honeywell’s customers in 2025.

“The path to autonomy requires assets working harder, people working smarter and processes working more efficiently,” said Vimal Kapur, Chairman and CEO of Honeywell. “By combining Google Cloud’s AI technology with our deep domain expertise – including valuable data on our Honeywell Forge platform – customers will receive unparalleled, actionable insights bridging the physical and digital worlds to accelerate autonomous operations, a key driver of Honeywell’s growth.”

“Our partnership with Honeywell represents a significant step forward in bringing the transformative power of AI to industrial operations,” said Thomas Kurian, CEO of Google Cloud. “With Gemini on Vertex AI, combined with Honeywell’s industrial data and expertise, we’re creating new opportunities to optimize processes, empower workforces and drive meaningful business outcomes for industrial organizations worldwide.”

With the mass retirement of workers from the baby boomer generation, the industrial sector faces both labor and skills shortages, and AI can be part of the solution – as a revenue generator, not a job eliminator. A large majority (82%) of industrial AI leaders believe their companies are early adopters of AI, but only 17% have fully launched their initial AI plans, according to Honeywell’s 2024 Industrial AI Insights report. This partnership will provide AI agents that augment existing operations and workforces to help drive AI adoption and enable companies across the sector to benefit from expanding automation.

Honeywell and Google Cloud will co-innovate solutions around:


Purpose-Built, Industrial AI Agents 
Built on Google Cloud’s Vertex AI Search and tailored to engineers’ specific needs, a new AI-powered agent will help automate tasks and reduce project design cycles, enabling users to focus on driving innovation and delivering exceptional customer experiences.

Additional agents will utilize Google’s large language models (LLMs) to help technicians resolve maintenance issues more quickly (e.g., “How did a unit perform last night?” “How do I replace the input/output module?” or “Why is my system making this sound?”). By leveraging Gemini’s multimodal capabilities, users will be able to process various data types such as images, videos, text and sensor readings, helping engineers get the answers they need quickly – going beyond simple chat and predictions.

Enhanced Cybersecurity
Google Threat Intelligence – featuring frontline insight from Mandiant – will be integrated into current Honeywell cybersecurity products, including Global Analysis, Research and Defense (GARD) Threat Intelligence and Secure Media Exchange (SMX), to help enhance threat detection and protect global infrastructure for industrial customers. 

On-the-Edge Device Advances 
Looking ahead, Honeywell will explore using Google’s Gemini Nano model to enhance the intelligence of its edge AI devices across multiple use cases and verticals, ranging from scanning performance to voice-based guided workflows, maintenance, operations and alarm assistance, without the need to connect to the internet or cloud. This is the beginning of a new wave of more intelligent devices and solutions, which will be the subject of future Honeywell announcements.

By leveraging AI to enable growth and productivity, the integration of Google Cloud technology also further supports Honeywell’s alignment of its portfolio to three compelling megatrends, including automation.

About Honeywell

Honeywell is an integrated operating company serving a broad range of industries and geographies around the world. Our business is aligned with three powerful megatrends – automation, the future of aviation and energy transition – underpinned by our Honeywell Accelerator operating system and Honeywell Forge IoT platform. As a trusted partner, we help organizations solve the world’s toughest, most complex challenges, providing actionable solutions and innovations through our Aerospace Technologies, Industrial Automation, Building Automation and Energy and Sustainability Solutions business segments that help make the world smarter and safer as well as more secure and sustainable. For more news and information on Honeywell, please visit www.honeywell.com/newsroom.

About Google Cloud

Google Cloud is the new way to the cloud, providing AI, infrastructure, developer, data, security, and collaboration tools built for today and tomorrow. Google Cloud offers a powerful, fully integrated, and optimized AI stack with its own planet-scale infrastructure, custom-built chips, generative AI models, and development platform, as well as AI-powered applications, to help organizations transform. Customers in more than 200 countries and territories turn to Google Cloud as their trusted technology partner.


SOURCE Honeywell




Science

ChatGPT and the movie ‘Her’ are just the latest example of the ‘sci-fi feedback loop’


ChatGPT-4o and the films ‘Her’ and ‘Blade Runner 2049’ all pull from one another as they develop the concept of a virtual assistant. Warner Bros.

Rizwan Virk, Arizona State University


In May 2024, OpenAI CEO Sam Altman sparked a firestorm by referencing the 2013 movie “Her” to highlight the novelty of the latest iteration of ChatGPT.

Within days, actor Scarlett Johansson, who played the voice of Samantha, the AI girlfriend of the protagonist in the movie “Her,” accused the company of improperly using her voice after she had spurned their offer to make her the voice of ChatGPT’s new virtual assistant. Johansson ended up suing OpenAI and has been invited to testify before Congress.

This tiff highlights a broader interchange between Hollywood and Silicon Valley that’s called the “sci-fi feedback loop.” The subject of my doctoral research, the sci-fi feedback loop explores how science fiction and technological innovation feed off each other. This dynamic is bidirectional and can sometimes play out over many decades, resulting in an ongoing loop.

Fiction sparks dreams of Moon travel

One of the most famous examples of this loop is Moon travel.

Jules Verne’s 1865 novel “From the Earth to the Moon” and the fiction of H.G. Wells inspired one of the first films to visualize such a journey, 1902’s “A Trip to the Moon.”

The fiction of Verne and Wells also influenced future rocket scientists such as Robert Goddard, Hermann Oberth and Oberth’s better-known protégé, Wernher von Braun. The innovations of these men – including the V-2 rocket built by von Braun during World War II – inspired works of science fiction, such as the 1950 film “Destination Moon,” which included a rocket that looked just like the V-2.

Films like “Destination Moon” would then go on to bolster public support for lavish government spending on the space program.

The 1902 silent short ‘A Trip to the Moon.’


Creative symbiosis

The sci-fi feedback loop generally follows the same cycle.

First, the technological climate of a given era will shape that period’s science fiction. For example, the personal computing revolution of the 1970s and 1980s directly inspired the works of cyberpunk writers Neal Stephenson and William Gibson.

Then the sci-fi that emerges will go on to inspire real-world technological innovation. In his 1992 classic “Snow Crash,” Stephenson coined the term “metaverse” to describe a 3-D, video game-like world accessed through virtual reality goggles.

Silicon Valley entrepreneurs and innovators have been trying to build a version of this metaverse ever since. The virtual world of the video game Second Life, released in 2003, took a stab at this: Players lived in virtual homes, went to virtual dance clubs and virtual concerts with virtual girlfriends and boyfriends, and were even paid virtual dollars for showing up at virtual jobs.

This technology seeded yet more fiction; in my research, I discovered that sci-fi novelist Ernest Cline had spent a lot of time playing Second Life, and it inspired the metaverse of his bestselling novel “Ready Player One.”

The cycle continued: Employees of Oculus VR – now known as Meta Reality Labs – were given copies of “Ready Player One” to read as they developed the company’s virtual reality headsets. When Facebook changed its name to Meta in 2021, it did so in the hopes of being at the forefront of building the metaverse, though the company’s grand ambitions have tempered somewhat.

Metaverse Fashion Week, the first virtual fashion week, was hosted by the Decentraland virtual world in 2022. Vittorio Zunino Celotto/Getty Images

Another sci-fi franchise that has its fingerprints all over this loop is “Star Trek,” which first aired in 1966, right in the middle of the space race.

Steve Perlman, the inventor of Apple’s QuickTime media format and player, said he was inspired by an episode of “Star Trek: The Next Generation,” in which Lt. Commander Data, an android, sifts through multiple streams of audio and video files. And Rob Haitani, the designer of the Palm Pilot’s operating system, has said that the bridge on the Enterprise influenced its interface.


In my research, I also discovered that the show’s Holodeck – a room that could simulate any environment – influenced both the name and the development of Microsoft’s HoloLens augmented reality glasses.

From ALICE to ‘Her’

Which brings us back to OpenAI and “Her.”

In the movie, the protagonist, Theodore, played by Joaquin Phoenix, acquires an AI assistant, “Samantha,” voiced by Johansson. He begins to develop feelings for Samantha – so much so that he starts to consider her his girlfriend.

ChatGPT-4o, the latest version of the generative AI software, seems to be able to cultivate a similar relationship between user and machine. Not only can ChatGPT-4o speak to you and “understand” you, but it can also do so sympathetically, as a romantic partner would.

There’s little doubt that the depiction of AI in “Her” influenced OpenAI’s developers. In addition to Altman’s tweet, the company’s promotional videos for ChatGPT-4o feature a chatbot speaking with a job candidate before his interview, propping him up and encouraging him – as, well, an AI girlfriend would. The AI featured in the clips, Ars Technica observed, was “disarmingly lifelike,” and willing “to laugh at your jokes and your dumb hat.”

But you might be surprised to learn that a previous generation of chatbots inspired Spike Jonze, the director and screenwriter of “Her,” to write the screenplay in the first place. Nearly a decade before the film’s release, Jonze had interacted with a version of the ALICE chatbot, which was one of the first chatbots to have a defined personality – in ALICE’s case, that of a young woman.

Filmmaker Spike Jonze won the Oscar for best original screenplay for ‘Her’ in 2014. Kevork Djansezian/Getty Images

The ALICE chatbot won the Loebner Prize three times. The prize, awarded annually until 2019, went to the AI software that came closest to passing the Turing test – long seen as a threshold for determining whether artificial intelligence has become indistinguishable from human intelligence.

The sci-fi feedback loop has no expiration date. AI’s ability to form relationships with humans is a theme that continues to be explored in fiction and real life.


A few years after “Her,” “Blade Runner 2049” featured a virtual girlfriend, Joi, with a holographic body. Well before the latest drama with OpenAI, companies had started developing and pitching virtual girlfriends, a process that will no doubt continue. As science fiction writer and social media critic Cory Doctorow wrote in 2017, “Science fiction does something better than predict the future: It influences it.”

Rizwan Virk, Faculty Associate, PhD Candidate in Human and Social Dimensions of Science and Technology, Arizona State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


