

As OpenAI attracts billions in new investment, its goal of balancing profit with purpose is getting more challenging to pull off


What’s in store for OpenAI is the subject of many anonymously sourced reports. AP Photo/Michael Dwyer

Alnoor Ebrahim, Tufts University

OpenAI, the artificial intelligence company that developed the popular ChatGPT chatbot and the text-to-art program Dall-E, is at a crossroads. On Oct. 2, 2024, it announced that it had obtained US$6.6 billion in new funding from investors and that the business was worth an estimated $157 billion – making it only the second startup ever to be valued at over $100 billion.

Unlike other big tech companies, OpenAI is a nonprofit with a for-profit subsidiary that is overseen by a nonprofit board of directors. Since its founding in 2015, OpenAI’s official mission has been “to build artificial general intelligence (AGI) that is safe and benefits all of humanity.”

By late September 2024, The Associated Press, Reuters, The Wall Street Journal and many other media outlets were reporting that OpenAI plans to discard its nonprofit status and become a for-profit tech company managed by investors. These stories have all cited anonymous sources. The New York Times, referencing documents from the recent funding round, reported that unless this change happens within two years, the $6.6 billion in equity would become debt owed to the investors who provided that funding.

The Conversation U.S. asked Alnoor Ebrahim, a Tufts University management scholar, to explain why OpenAI’s leaders’ reported plans to change its structure would be significant and potentially problematic.

How have its top executives and board members responded?

There has been a lot of leadership turmoil at OpenAI. The disagreements boiled over in November 2023, when its board briefly ousted Sam Altman, its CEO. He got his job back in less than a week, and then three board members resigned. The departing directors were advocates for building stronger guardrails and encouraging regulation to protect humanity from potential harms posed by AI.

Over a dozen senior staff members have quit since then, including several other co-founders and executives responsible for overseeing OpenAI’s safety policies and practices. At least two of them have joined Anthropic, a rival founded by a former OpenAI executive responsible for AI safety. Some of the departing executives say that Altman has pushed the company to launch products prematurely.

Safety “has taken a backseat to shiny products,” said OpenAI’s former safety team leader Jan Leike, who quit in May 2024.

Open AI CEO Sam Altman, center, speaks at an event in September 2024. Bryan R. Smith/Pool Photo via AP

Why would OpenAI’s structure change?

OpenAI’s deep-pocketed investors cannot own shares in the organization under its existing nonprofit governance structure, nor can they get a seat on its board of directors. That’s because OpenAI is incorporated as a nonprofit whose purpose is to benefit society rather than private interests. Until now, all rounds of investments, including a reported total of $13 billion from Microsoft, have been channeled through a for-profit subsidiary that belongs to the nonprofit.

The current structure allows OpenAI to accept money from private investors in exchange for a future portion of its profits. But those investors do not get a voting seat on the board, and their profits are “capped.” According to information previously made public, OpenAI’s original investors can’t earn more than 100 times the money they provided. The goal of this hybrid governance model is to balance profits with OpenAI’s safety-focused mission.
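The cap itself is simple arithmetic on an investor’s payout. Here is a rough sketch under the reported terms – the exact contractual mechanics are not public, and the figures below are purely illustrative:

```python
def capped_return(amount_invested: float, uncapped_payout: float,
                  cap_multiple: float = 100.0) -> float:
    """Pay out an investor's profit share only up to cap_multiple times what they put in."""
    return min(uncapped_payout, cap_multiple * amount_invested)

# Illustration: an early backer who invested $10 million could receive at most $1 billion,
# no matter how large their uncapped share of profits grew.
print(capped_return(10_000_000, 5_000_000_000))  # 1000000000.0
```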

Becoming a for-profit enterprise would make it possible for its investors to acquire ownership stakes in OpenAI and no longer have to face a cap on their potential profits. Down the road, OpenAI could also go public and raise capital on the stock market.

Altman reportedly seeks to personally acquire a 7% equity stake in OpenAI, according to a Bloomberg article that cited unnamed sources.

That arrangement is not allowed for nonprofit executives, according to BoardSource, an association of nonprofit board members and executives. Instead, the association explains, nonprofits “must reinvest surpluses back into the organization and its tax-exempt purpose.”

What kind of company might OpenAI become?

The Washington Post and other media outlets have reported, also citing unnamed sources, that OpenAI might become a “public benefit corporation” – a business that aims to benefit society and earn profits.

Examples of businesses with this status, known as B Corps, include outdoor clothing and gear company Patagonia and eyewear maker Warby Parker.

It’s more typical that a for-profit business – not a nonprofit – becomes a benefit corporation, according to the B Lab, a network that sets standards and offers certification for B Corps. It is unusual for a nonprofit to do this because nonprofit governance already requires those groups to benefit society.


Boards of companies with this legal status are free to consider the interests of society, the environment and people who aren’t its shareholders, but that is not required. The board may still choose to make profits a top priority and can drop its benefit status to satisfy its investors. That is what online craft marketplace Etsy did in 2017, two years after becoming a publicly traded company.

In my view, any attempt to convert a nonprofit into a public benefit corporation is a clear move away from focusing on the nonprofit’s mission. And there will be a risk that becoming a benefit corporation would just be a ploy to mask a shift toward focusing on revenue growth and investors’ profits.

Many legal scholars and other experts are predicting that OpenAI will not do away with its hybrid ownership model entirely because of legal restrictions on the placement of nonprofit assets in private hands.

But I think OpenAI has a possible workaround: It could try to dilute the nonprofit’s control by making it a minority shareholder in a new for-profit structure. This would effectively eliminate the nonprofit board’s power to hold the company accountable. Such a move could lead to an investigation by the office of the relevant state attorney general and potentially by the Internal Revenue Service.

What could happen if OpenAI turns into a for-profit company?

The stakes for society are high.

AI’s potential harms are wide-ranging, and some are already apparent, such as deceptive political campaigns and bias in health care.

If OpenAI, an industry leader, begins to focus more on earning profits than ensuring AI’s safety, I believe that these dangers could get worse. Geoffrey Hinton, who won the 2024 Nobel Prize in physics for his artificial intelligence research, has cautioned that AI may exacerbate inequality by replacing “lots of mundane jobs.” He believes that there’s a 50% probability “that we’ll have to confront the problem of AI trying to take over” from humanity.


And even if OpenAI did retain board members for whom safety is a top concern, the only common denominator for the members of its new corporate board would be their obligation to protect the interests of the company’s shareholders, who would expect to earn a profit. While such expectations are common on a for-profit board, they constitute a conflict of interest on a nonprofit board where mission must come first and board members cannot benefit financially from the organization’s work.

The arrangement would, no doubt, please OpenAI’s investors. But would it be good for society? The purpose of nonprofit control over a for-profit subsidiary is to ensure that profit does not interfere with the nonprofit’s mission. Without guardrails to ensure that the board seeks to limit harm to humanity from AI, there would be little reason for it to prevent the company from maximizing profit, even if its chatbots and other AI products endanger society.

Regardless of what OpenAI does, most artificial intelligence companies are already for-profit businesses. So, in my view, the only way to manage the potential harms is through better industry standards and regulations that are starting to take shape.

California’s governor vetoed such a bill in September 2024 on the grounds it would slow innovation – but I believe slowing it down is exactly what is needed, given the dangers AI already poses to society.

Alnoor Ebrahim, Thomas Schmidheiny Professor of International Business, The Fletcher School & Tisch College of Civic Life, Tufts University

This article is republished from The Conversation under a Creative Commons license. Read the original article.







AI-generated images can exploit how your mind works − here’s why they fool you and how to spot them

Arryn Robbins discusses the challenges of recognizing AI-generated images due to human cognitive limitations and inattentional blindness, emphasizing the importance of critical thinking in a visually fast-paced online environment.


Arryn Robbins, University of Richmond

I’m more of a scroller than a poster on social media. Like many people, I wind down at the end of the day with a scroll binge, taking in videos of Italian grandmothers making pasta or baby pygmy hippos frolicking.

For a while, my feed was filled with immaculately designed tiny homes, fueling my desire for a minimalist paradise. Then, I started seeing AI-generated images; many contained obvious errors, such as staircases to nowhere or sinks within sinks. Yet, commenters rarely pointed them out, instead admiring the aesthetic.

These images were clearly AI-generated and didn’t depict reality. Did people just not notice? Not care?

As a cognitive psychologist, I’d guess “yes” and “yes.” My expertise is in how people process and use visual information. I primarily investigate how people look for objects and information visually, from the mundane searches of daily life, such as trying to find a dropped earring, to more critical searches, like those conducted by radiologists or search-and-rescue teams.

With my understanding of how people process images and notice − or don’t notice − detail, it’s not surprising to me that people aren’t tuning in to the fact that many images are AI-generated.

We’ve been here before

The struggle to detect AI-generated images mirrors past detection challenges such as spotting photoshopped images or computer-generated images in movies.


But there’s a key difference: Photo editing and CGI require intentional design by artists, while AI images are generated by algorithms trained on datasets, often without human oversight. The lack of oversight can lead to imperfections or inconsistencies that can feel unnatural, such as the unrealistic physics or lack of consistency between frames that characterize what’s sometimes called “AI slop.”

Despite these differences, studies show people struggle to distinguish real images from synthetic ones, regardless of origin. Even when explicitly asked to identify images as real, synthetic or AI-generated, accuracy hovers near the level of chance, meaning people do only a little better than if they had simply guessed.

In everyday interactions, where you aren’t actively scrutinizing images, your ability to detect synthetic content might even be weaker.

Attention shapes what you see, what you miss

Spotting errors in AI images requires noticing small details, but the human visual system isn’t wired for that when you’re casually scrolling. Instead, while online, people take in the gist of what they’re viewing and can overlook subtle inconsistencies.

Visual attention operates like a zoom lens: You scan broadly to get an overview of your environment or phone screen, but fine details require focused effort. Human perceptual systems evolved to quickly assess environments for any threats to survival, with sensitivity to sudden changes − such as a quick-moving predator − sacrificing precision for speed of detection.

This speed-accuracy trade-off allows for rapid, efficient processing, which helped early humans survive in natural settings. But it’s a mismatch with modern tasks such as scrolling through devices, where small mistakes or unusual details in AI-generated images can easily go unnoticed.

People also miss things they aren’t actively paying attention to or looking for. Psychologists call this inattentional blindness: Focusing on one task causes you to overlook other details, even obvious ones. In the famous invisible gorilla study, participants asked to count basketball passes in a video failed to notice someone in a gorilla suit walking through the middle of the scene.

If you’re counting how many passes the people in white make, do you even notice someone walk through in a gorilla suit?

Similarly, when your focus is on the broader content of an AI image, such as a cozy tiny home, you’re less likely to notice subtle distortions. In a way, the sixth finger in an AI image is today’s invisible gorilla − hiding in plain sight because you’re not looking for it.

Efficiency over accuracy in thinking

Our cognitive limitations go beyond visual perception. Human thinking uses two types of processing: fast, intuitive thinking based on mental shortcuts, and slower, analytical thinking that requires effort. When scrolling, our fast system likely dominates, leading us to accept images at face value.

Adding to this issue is the tendency to seek information that confirms your beliefs or reject information that goes against them. This means AI-generated images are more likely to slip by you when they align with your expectations or worldviews. If an AI-generated image of a basketball player making an impossible shot jibes with a fan’s excitement, they might accept it, even if something feels exaggerated.

While not a big deal for tiny home aesthetics, these issues become concerning when AI-generated images may be used to influence public opinion. For example, research shows that people tend to assume images are relevant to accompanying text. Even when the images provide no actual evidence, they make people more likely to accept the text’s claims as true.

Misleading real or generated images can make false claims seem more believable and even cause people to misremember real events. AI-generated images have the power to shape opinions and spread misinformation in ways that are difficult to counter.

Beating the machine

While AI gets better at detecting AI, humans need tools to do the same. Here’s how:

  1. Trust your gut. If something feels off, it probably is. Your brain expertly recognizes objects and faces, even under varying conditions. Perhaps you’ve experienced what psychologists call the uncanny valley and felt unease with certain humanoid faces. This experience shows people can detect anomalies, even when they can’t fully explain what’s wrong.
  2. Scan for clues. AI struggles with certain elements: hands, text, reflections, lighting inconsistencies and unnatural textures. If an image seems suspicious, take a closer look.
  3. Think critically. Sometimes, AI generates photorealistic images with impossible scenarios. If you see a political figure casually surprising baristas or a celebrity eating concrete, ask yourself: Does this make sense? If not, it’s probably fake.
  4. Check the source. Is the poster a real person? Reverse image search can help trace a picture’s origin. If the metadata is missing, it might be generated by AI (see the sketch below).
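For the metadata point in step 4, one quick check is whether a file carries any camera metadata at all. Here is a minimal sketch using the Pillow library; the file name is hypothetical, and the absence of EXIF data is only a hint, since screenshots and re-saved web images also strip it:

```python
from PIL import Image  # pip install pillow

def has_camera_metadata(path: str) -> bool:
    """True if the image file carries any EXIF tags (camera model, timestamps and so on)."""
    with Image.open(path) as img:
        return len(img.getexif()) > 0

# "suspicious_post.jpg" is a hypothetical file name used for illustration.
print(has_camera_metadata("suspicious_post.jpg"))
```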

AI-generated images are becoming harder to spot. During scrolling, the brain processes visuals quickly, not critically, making it easy to miss details that reveal a fake. As technology advances, slow down, look closer and think critically.

Arryn Robbins, Assistant Professor of Psychology, University of Richmond

This article is republished from The Conversation under a Creative Commons license. Read the original article.


A beautiful kitchen to scroll past – but check out the clock. Tiny Homes via Facebook







How close are quantum computers to being really useful? Podcast

Quantum computers could revolutionize science by solving complex problems. However, scaling and error correction remain significant challenges before achieving practical applications.


Audio und verbung/Shutterstock

Gemma Ware, The Conversation

Quantum computers have the potential to solve big scientific problems that are beyond the reach of today’s most powerful supercomputers, such as discovering new antibiotics or developing new materials.

But to achieve these breakthroughs, quantum computers will need to perform better than today’s best classical computers at solving real-world problems. And they’re not quite there yet. So what is still holding quantum computing back from becoming useful?

In this episode of The Conversation Weekly podcast, we speak to quantum computing expert Daniel Lidar at the University of Southern California in the US about what problems scientists are still wrestling with when it comes to scaling up quantum computing, and how close they are to overcoming them.

https://cdn.theconversation.com/infographics/561/4fbbd099d631750693d02bac632430b71b37cd5f/site/index.html

Quantum computers harness the power of quantum mechanics, the laws that govern subatomic particles. Instead of the classical bits of information used by microchips inside traditional computers, which are either a 0 or a 1, the chips in quantum computers use qubits, which can be both 0 and 1 at the same time or anywhere in between. Daniel Lidar explains:

“Put a lot of these qubits together and all of a sudden you have a computer that can simultaneously represent many, many different possibilities …  and that is the starting point for the speed up that we can get from quantum computing.”
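One way to make that concrete is to count what a register of qubits keeps track of: an n-qubit state is described by 2^n amplitudes, so every added qubit doubles the number of possibilities represented at once. Below is a minimal numpy sketch of that bookkeeping – not a real quantum simulator, just the counting:

```python
import numpy as np

def uniform_superposition(n_qubits: int) -> np.ndarray:
    """State vector for n qubits in an equal superposition of every basis state."""
    dim = 2 ** n_qubits                    # one amplitude per classical bit-string
    return np.full(dim, 1 / np.sqrt(dim))  # equal weight on all of them

state = uniform_superposition(10)
print(state.size)                  # 1024 possibilities tracked at once
print(np.sum(np.abs(state) ** 2))  # 1.0 -- the probabilities still sum to one
```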

Faulty qubits

One of the biggest problems scientists face is how to scale up quantum computing power. Qubits are notoriously prone to errors – which means that they can quickly revert to being either a 0 or a 1, and so lose their advantage over classical computers.

Scientists have focused on trying to solve these errors through the concept of redundancy – linking strings of physical qubits together into what’s called a “logical qubit” to try and maximise the number of steps in a computation. And, little by little, they’re getting there.
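Real quantum error-correcting codes are far more involved, but the intuition behind redundancy is classical and simple: store several copies of the information and take a majority vote. A minimal sketch of that idea, using ordinary classical bits purely for illustration:

```python
from collections import Counter
import random

def encode(bit: int, copies: int = 3) -> list[int]:
    """Redundancy: store one logical bit as several physical copies."""
    return [bit] * copies

def noisy(bits: list[int], error_rate: float = 0.1) -> list[int]:
    """Flip each physical copy independently with some probability."""
    return [b ^ (random.random() < error_rate) for b in bits]

def decode(bits: list[int]) -> int:
    """Majority vote recovers the logical bit unless most copies were corrupted."""
    return Counter(bits).most_common(1)[0][0]

random.seed(0)
logical = 1
received = noisy(encode(logical))
print(decode(received) == logical)  # usually True: redundancy suppresses single errors
```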


In December 2024, Google announced that its new quantum chip, Willow, had demonstrated what’s called “beyond breakeven”, when its logical qubits worked better than the constituent parts and even kept on improving as it scaled up.

Lidar says right now the development of this technology is happening very fast:

“For quantum computing to scale and to take off is going to still take some real science breakthroughs, some real engineering breakthroughs, and probably overcoming some yet unforeseen surprises before we get to the point of true quantum utility. With that caution in mind, I think it’s still very fair to say that we are going to see truly functional, practical quantum computers kicking into gear, helping us solve real-life problems, within the next decade or so.”

Listen to Lidar explain more about how quantum computers and quantum error correction works on The Conversation Weekly podcast.


This episode of The Conversation Weekly was written and produced by Gemma Ware with assistance from Katie Flood and Mend Mariwany. Sound design was by Michelle Macklem, and theme music by Neeta Sarl.

Clips in this episode from Google Quantum AI and 10 Hours Channel.

You can find us on Instagram at theconversationdotcom or via e-mail. You can also subscribe to The Conversation’s free daily e-mail here.

Listen to The Conversation Weekly via any of the apps listed above, download it directly via our RSS feed or find out how else to listen here.


Gemma Ware, Host, The Conversation Weekly Podcast, The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.






AI gives nonprogrammers a boost in writing computer code


AI coding handles the hard parts for nonprogrammers. Andriy/Moment via Getty Images

Leo Porter, University of California, San Diego and Daniel Zingaro, University of Toronto

What do you think there are more of: professional computer programmers or computer users who do a little programming?

It’s the second group. There are millions of so-called end-user programmers. They’re not going into a career as a professional programmer or computer scientist. They’re going into business, teaching, law, or any number of professions – and they just need a little programming to be more efficient. The days of programmers being confined to software development companies are long gone.

If you’ve written formulas in Excel, filtered your email based on rules, modded a game, written a script in Photoshop, used R to analyze some data, or automated a repetitive work process, you’re an end-user programmer.
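Those scripts are usually tiny. As one illustration of the “automated a repetitive work process” case, here is the kind of few-line Python script an end-user programmer might write, using only the standard library and a hypothetical spreadsheet export:

```python
import csv

# "monthly_sales.csv" is a hypothetical spreadsheet export with an "amount" column.
total = 0.0
with open("monthly_sales.csv", newline="") as f:
    for row in csv.DictReader(f):
        total += float(row["amount"])

print(f"Total for the month: {total:.2f}")
```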

As educators who teach programming, we want to help students in fields other than computer science achieve their goals. But learning how to program well enough to write finished programs can be hard to accomplish in a single course because there is so much to learn about the programming language itself. Artificial intelligence can help.

Lost in the weeds

Learning the syntax of a programming language – for example, where to place colons and where indentation is required – takes a lot of time for many students. Spending time at the level of syntax is a waste for students who simply want to use coding to help solve problems rather than learn the skill of programming.

As a result, we feel our existing classes haven’t served these students well. Indeed, many students end up barely able to write small functions – short, discrete pieces of code – let alone write a full program that can help make their lives better.

Learning a programming language can be difficult for those who are not computer science students. LordHenriVoton/E+ via Getty Images

Tools built on large language models such as GitHub Copilot may allow us to change these outcomes. These tools have already changed how professionals program, and we believe we can use them to help future end-user programmers write software that is meaningful to them.

These AIs almost always write syntactically correct code and can often write small functions based on prompts in plain English. Because students can use these tools to handle some of the lower-level details of programming, it frees them to focus on bigger-picture questions that are at the heart of writing software programs. Numerous universities now offer programming courses that use Copilot.
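In practice, the “prompt in plain English” is often just a comment or docstring, and the assistant drafts the body. The sketch below illustrates that kind of exchange; the code is our illustration, not an actual Copilot transcript:

```python
import string

# Prompt a student might write, in plain English:
# "Return the average word length in a sentence, ignoring punctuation."

def average_word_length(sentence: str) -> float:
    words = [w.strip(string.punctuation) for w in sentence.split()]
    words = [w for w in words if w]
    return sum(len(w) for w in words) / len(words) if words else 0.0

print(average_word_length("Hello, world!"))  # 5.0
```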


At the University of California, San Diego, we’ve created an introductory programming course primarily for those who are not computer science students that incorporates Copilot. In this course, students learn how to program with Copilot as their AI assistant, following the curriculum from our book. In our course, students learn high-level skills such as decomposing large tasks into smaller tasks, testing code to ensure its correctness, and reading and fixing buggy code.
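The testing habit is a good example of what replaces syntax drill: students check the assistant’s output against cases they have worked out by hand. A minimal sketch of that habit, where the function stands in for whatever the assistant produced:

```python
def count_vowels(text: str) -> int:
    """A small function as an AI assistant might draft it."""
    return sum(1 for ch in text.lower() if ch in "aeiou")

# Student-written checks against cases worked out by hand.
assert count_vowels("") == 0
assert count_vowels("rhythm") == 0
assert count_vowels("Education") == 5
print("all checks passed")
```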

Freed to solve problems

In this course, we’ve been giving students large, open-ended projects and couldn’t be happier with what they have created.

For example, in a project where students had to find and analyze online datasets, a neuroscience major created a data visualization tool that illustrated how age and other factors affected stroke risk. In another project, students integrated their personal art into a collage after applying filters they had created using the programming language Python. These projects were well beyond the scope of what we could ask students to do before the advent of large language model AIs.
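Projects like the collage come down to a handful of library calls once the AI assistant handles the syntax. Here is a hedged sketch of one possible filter using the Pillow library; the filter choice and file names are ours, not the students’ actual code:

```python
from PIL import Image, ImageFilter, ImageOps  # pip install pillow

def stylize(path: str, out_path: str) -> None:
    """One possible hand-built filter: grayscale, boosted contrast, slight blur."""
    img = Image.open(path)
    img = ImageOps.grayscale(img)
    img = ImageOps.autocontrast(img)
    img = img.filter(ImageFilter.GaussianBlur(radius=1))
    img.save(out_path)

# Hypothetical file names for illustration.
stylize("artwork.jpg", "artwork_filtered.jpg")
```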

Given the rhetoric about how AI is ruining education by writing papers for students and doing their homework, you might be surprised to hear educators like us talking about its benefits. AI, like any other tool people have created, can be helpful in some circumstances and unhelpful in others.

In our introductory programming course with a majority of students who are not computer science majors, we see firsthand how AI can empower students in specific ways – and promises to expand the ranks of end-user programmers.

Leo Porter, Teaching Professor of Computer Science and Engineering, University of California, San Diego and Daniel Zingaro, Associate Professor of Mathematical and Computational Sciences, University of Toronto

This article is republished from The Conversation under a Creative Commons license. Read the original article.




