
That’s Funny – but AI Models Don’t Get the Joke


Newswise — ITHACA, N.Y. — Large neural networks, a form of artificial intelligence, can generate thousands of jokes along the lines of “Why did the chicken cross the road?” But do they understand why they’re funny?

Using hundreds of entries from the New Yorker magazine’s Cartoon Caption Contest as a testbed, researchers challenged AI models and humans with three tasks: matching a joke to a cartoon; identifying a winning caption; and explaining why a winning caption is funny.

In all tasks, humans performed demonstrably better than machines, even as AI advances such as ChatGPT have closed the performance gap. So are machines beginning to “understand” humor? In short, they’re making some progress, but aren’t quite there yet.

“The way people challenge AI models for understanding is to build tests for them – multiple choice tests or other evaluations with an accuracy score,” said Jack Hessel, Ph.D. ’20, research scientist at the Allen Institute for AI (AI2). “And if a model eventually surpasses whatever humans get at this test, you think, ‘OK, does this mean it truly understands?’ It’s a defensible position to say that no machine can truly ‘understand’ because understanding is a human thing. But, whether the machine understands or not, it’s still impressive how well they do on these tasks.”

Hessel is lead author of “Do Androids Laugh at Electric Sheep? Humor ‘Understanding’ Benchmarks from The New Yorker Caption Contest,” which won a best-paper award at the 61st annual meeting of the Association for Computational Linguistics, held July 9-14 in Toronto.

Lillian Lee ’93, the Charles Roy Davis Professor in the Cornell Ann S. Bowers College of Computing and Information Science, and Yejin Choi, Ph.D. ’10, professor in the Paul G. Allen School of Computer Science and Engineering at the University of Washington, and the senior director of common-sense intelligence research at AI2, are also co-authors on the paper.

For their study, the researchers compiled 14 years’ worth of New Yorker caption contests – more than 700 in all. Each contest included: a captionless cartoon; that week’s entries; the three finalists selected by New Yorker editors; and, for some contests, crowd quality estimates for each submission.  


For each contest, the researchers tested two kinds of AI – “from pixels” (computer vision) and “from description” (analysis of human summaries of cartoons) – for the three tasks.

“There are datasets of photos from Flickr with captions like, ‘This is my dog,’” Hessel said. “The interesting thing about the New Yorker case is that the relationships between the images and the captions are indirect, playful, and reference lots of real-world entities and norms. And so the task of ‘understanding’ the relationship between these things requires a bit more sophistication.”

In the experiment, matching required AI models to select the finalist caption for the given cartoon from among “distractors” that were finalists but for other contests; quality ranking required models to differentiate a finalist caption from a nonfinalist; and explanation required models to generate free text saying how a high-quality caption relates to the cartoon.
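To make the matching setup concrete, here is a minimal sketch of how a multiple-choice matching evaluation of this kind could be scored. The data layout, field names and the model_score function are illustrative assumptions, not the paper’s actual code or dataset schema.

```python
# Hypothetical sketch of the "matching" task: for each contest, the model
# must pick the true finalist caption from distractor finalists drawn from
# other contests. Accuracy is the fraction of contests where it succeeds.

def evaluate_matching(contests, model_score):
    """contests: list of dicts with a cartoon (image or description), the true
    finalist caption, and distractor captions taken from other contests.
    model_score(cartoon, caption) -> float, higher means a better match."""
    correct = 0
    for contest in contests:
        candidates = [contest["finalist"]] + contest["distractors"]
        # The model "answers" by choosing the candidate it scores highest
        # against this cartoon; a hit means it picked the true finalist.
        best = max(candidates, key=lambda cap: model_score(contest["cartoon"], cap))
        correct += int(best == contest["finalist"])
    return correct / len(contests)

# Toy usage (made-up data):
# contests = [{"cartoon": "A dog sits behind a therapist's desk...",
#              "finalist": "Tell me about your childhood.",
#              "distractors": ["finalist caption from another contest", "..."]}]
# accuracy = evaluate_matching(contests, my_model_score)
```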

Hessel penned the majority of human-generated explanations himself, after crowdsourcing the task proved unsatisfactory. He generated 60-word explanations for more than 650 cartoons.

“A number like 650 doesn’t seem very big in a machine-learning context, where you often have thousands or millions of data points,” Hessel said, “until you start writing them out.”

This study revealed a significant gap between AI- and human-level “understanding” of why a cartoon is funny. The best AI performance on the multiple-choice task of matching cartoon to caption was only 62% accuracy, far behind humans’ 94% in the same setting. And when human-written explanations were compared with AI-generated ones, the human-written explanations were preferred roughly 2-to-1.

While AI might not be able to “understand” humor yet, the authors wrote, it could be a collaborative tool humorists could use to brainstorm ideas.


Other contributors include Ana Marasovic, assistant professor at the University of Utah School of Computing; Jena D. Hwang, research scientist at AI2; Jeff Da, research assistant at the University of Washington; Rowan Zellers, researcher at OpenAI; and humorist Robert Mankoff, president of Cartoon Collections and long-time cartoon editor at the New Yorker.

The authors wrote this paper in the spirit of the subject matter, with playful comments and footnotes throughout.

“This three or four years of research wasn’t always super fun,” Lee said, “but something we try to do in our work, or at least in our writing, is to encourage more of a spirit of fun.”

This work was funded in part by the Defense Advanced Research Projects Agency; AI2; and a Google Focused Research Award.

Source: Cornell University


NASA Report: No Evidence of Extraterrestrial Origin for UFOs

NASA’s recent report debunks extraterrestrial claims, finding no evidence linking UFOs to aliens. Discover the scientific findings.


With AI as their companion, the NASA study team intends to develop advanced algorithms capable of discerning patterns, identifying anomalies, and separating genuine UAP sightings from misidentifications or mundane phenomena. These algorithms will be trained on a wealth of data, including sensor data from aircraft, satellites, and ground-based observatories, aiming to unveil the secrets hidden within the cosmic tapestry.



Additionally, the team plans to collaborate with international partners and engage the scientific community, encouraging the rigorous examination of UAP sightings through a multi-disciplinary approach. By fostering collaboration and sharing data, NASA aspires to cultivate a comprehensive understanding of these enigmatic aerial phenomena.

What does this report mean for the public’s perception of UFOs/UAPs?
The unveiling of NASA’s report serves as a beacon of scientific scrutiny amidst the sea of conjecture that often surrounds UFOs and UAPs. It offers a nuanced perspective, grounded in empirical evidence and systematic investigation. While the report does not present evidence of extraterrestrial origins for these phenomena, it does advocate for a serious and rational examination of UAP sightings.



This newfound transparency from NASA can potentially shift the public’s perception of UFOs/UAPs from sensationalism and speculation toward a more measured and evidence-based discourse. The report encourages open dialogue, scientific inquiry, and the exploration of alternative explanations, ultimately fostering a greater understanding of the mysteries that reside in our celestial realm.

As humanity continues its relentless quest for knowledge and understanding of the universe, NASA’s report stands as a testament to the power of science, reminding us that even in the face of cosmic enigmas, rational investigation remains our most potent tool.

Summary: An independent study team appointed by NASA has not found evidence of extraterrestrial unidentified anomalous phenomena (UAPs), nor have they found any terrestrial explanations.

https://www.nasa.gov/press-release/update-nasa-shares-uap-independent-study-report-names-director



TIME Reveals Inaugural TIME100 AI List of the World’s Most Influential People in Artificial Intelligence



NEW YORK /PRNewswire/ — Today, TIME reveals the inaugural TIME100 AI, a new list highlighting the 100 most influential people in artificial intelligence.

The 2023 TIME100 AI issue features a worldwide cover illustrated by Neil Jamieson for TIME, depicting 28 list-makers including Sam Altman of OpenAI, Dario and Daniela Amodei of Anthropic, Demis Hassabis of Google DeepMind, and more from the new list.



Published alongside the TIME100 AI are in-depth profiles and interviews with musician Holly Herndon, co-founder of character.ai Noam Shazeer, world-renowned researcher Geoffrey Hinton, president of Signal Meredith Whittaker, co-founder and chief AGI scientist of Google DeepMind Shane Legg, co-founder and president of OpenAI Greg Brockman, co-founder and chief scientist of OpenAI Ilya Sutskever, co-founder of Schmidt Futures Eric Schmidt, science fiction writer Ted Chiang, policy adviser Alondra Nelson and more.

To assemble the list, TIME’s editors and reporters solicited nominations and recommendations from industry leaders and dozens of expert sources. The result is a list of 100 leaders, pioneers, innovators and thinkers who are shaping today’s AI landscape.

“TIME’s mission is to highlight the people and ideas that are making the world a better, more equitable place,” said TIME Chief Executive Officer Jessica Sibley. “At this critical moment of exceptional growth and advancement in AI, we are proud to reveal the first-ever TIME100 AI list to recognize the individuals leading AI innovation, including those advancing major conversations to promote equity in AI.”

Of the inaugural TIME100 AI list, TIME Editor-in-Chief Sam Jacobs writes: “Reporting on people and influence is what TIME does best. That led us to the TIME100 AI.…This group of 100 individuals is in many ways a map of the relationships and power centers driving the development of AI. They are rivals and regulators, scientists and artists, advocates and executives—the competing and cooperating humans whose insights, desires, and flaws will shape the direction of an increasingly influential technology.” https://bit.ly/3r0lgfH

HIGHLIGHTS FROM THE 2023 TIME100 AI LIST: 

The 2023 TIME100 AI list features 43 CEOs, founders and co-founders: Elon Musk of xAI, Sam Altman of OpenAI, Andrew Hopkins of Exscientia, Nancy Xu of Moonhub, Kate Kallot of Amini, Pelonomi Moiloa of Lelapa AI, Jack Clark of Anthropic, Raquel Urtasan of Waabi, Aidan Gomez of Cohere and more.

The list features 41 women and nonbinary individuals, including: CEO & co-founder of Humane Intelligence Rumman Chowdhury, cognitive scientist Abeba Birhane, COO of Google DeepMind Lila Ibrahim, General Manager of the Data Center and AI Group at Intel Sandra Rivera, chief AI ethics scientist at Hugging Face Margaret Mitchell, Stanford professor Fei-Fei Li, artist Linda Dounia Rebeiz, artist Kelly McKernan and more.


The youngest individual recognized on the TIME100 AI list is 18-year-old Sneha Revanur, who recently met with the Biden Administration as part of her work leading Encode Justice, a youth-led movement organizing for ethical AI. On the other end is 76-year-old Geoffrey Hinton, who left his position at Google this spring to speak freely about the dangers of the technology he helped bring into existence.

Policy-makers and government officials on this year’s list include: U.S. representatives Anna Eshoo and Ted Lieu, chair of the U.K.’s AI Foundation Model Taskforce Ian Hogarth, Taiwan’s minister of digital affairs Audrey Tang, and the UAE’s minister for artificial intelligence Omar Al Olama.

Scientists, professors, researchers and activists recognized on the list include those focused on AI ethics, bias and safety: president of Future of Life Institute Max Tegmark, professor Emily M. Bender, professor Yoshua Bengio, professor and researcher Kate Crawford, researcher Yi Zeng, computer scientist and artist Joy Buolamwini, labor organizer Richard Mathenge, researcher Inioluwa Deborah Raji, researcher Timnit Gebru, and more.

Also recognized on the list is Rootport, the anonymous author of Japanese manga who used Midjourney to produce the first completely AI-illustrated Japanese comic.

The list also features creatives interrogating the influence of AI on society or experimenting with the technology, including: musician Grimes, science fiction writer Ted Chiang, Black Mirror creator Charlie Brooker, filmmaker Lilly Wachowski, musician Holly Herndon, artist Linda Dounia Rebeiz, artist Sougwen Chung and more.

See the full TIME100 AI list here: https://time.com/collection/time100-ai/

TIME TO CONVENE SERIES OF EVENTS FOCUSED ON WOMEN IN AI 


Following the publication of the inaugural TIME100 AI list, TIME will host a series of new events that will convene leaders to facilitate meaningful conversations to drive impact with a focus on finding solutions to create a more inclusive future with AI.

TIME will host a series of TIME100 Talks showcasing the foundational role female leadership plays in AI innovation during Dreamforce on September 12-14. Featured speakers include Alondra Nelson, Fei-Fei Li and Ayanna Howard.

With presenting partner Meta, TIME will convene the “TIME100 Impact Dinner: Women in AI” to spotlight influential leaders in AI in October. 

TIME will also host a special “TIME100 Talks” on the topic of AI accessibility and responsible AI in November, presented by Intel.

About TIME
TIME is the 100-year-old global media brand that reaches a combined audience of over 120 million around the world through its iconic magazine and digital platforms. With unparalleled access to the world’s most influential people, the trust of consumers and partners globally, and an unrivaled power to convene, TIME’s mission is to tell the essential stories of the people and ideas that shape and improve the world. Today, TIME also includes the Emmy Award®-winning film and television division TIME Studios; a significantly expanded live events business built on the powerful TIME100 and Person of the Year franchises and custom experiences; TIME for Kids, which provides trusted news with a focus on news literacy for kids and valuable resources for teachers and families; the award-winning branded content studio Red Border Studios; an industry-leading web3 division; the website-building platform TIME Sites; the sustainability and climate action platform TIME CO2; the new e-commerce and content platform TIME Stamped, and more. 

SOURCE TIME



AI and Friendships: Negative Effects of AI-Assisted Messages

AI-assisted messages can harm friendships, leading to dissatisfaction and uncertainty in relationships.


A recent study conducted by Ohio State University suggests that using artificial intelligence (AI) to help write messages to friends may have negative consequences on relationships. The research found that when participants discovered that their friend had used AI assistance or received help from another person to compose a message, they perceived less effort being put into the relationship. This perception not only affected the message itself but also had broader implications. Participants reported feeling less satisfied with their relationships and experienced increased uncertainty about where they stood with their friends.



Interestingly, the study revealed that negative effects were observed even when participants learned that their friend had received assistance from another human. This suggests that people value the personal effort and investment put into maintaining relationships, rather than relying on external aids.

As AI chatbots like ChatGPT gain popularity, the issue of how to use them appropriately becomes more complex. The study involved 208 adults who were instructed to write messages to a fictional friend named Taylor, who then responded with a message that was either AI-assisted, assisted by another person, or written solely by Taylor. Participants who received AI-assisted replies rated them as less appropriate, which led to decreased satisfaction and increased uncertainty about the friendship.

The study’s lead author, Bingjie Liu, emphasizes the importance of sincerity and authenticity in relationships. While most people may not disclose their use of AI to craft messages, Liu suggests that as AI technology becomes more prevalent, individuals may unknowingly question the authenticity of messages, potentially harming relationships.

Ultimately, the study highlights the value of putting in personal effort and avoiding shortcuts in maintaining meaningful connections. Technology should not be used solely for convenience; sincerity and authenticity remain fundamental in fostering strong and fulfilling relationships.

Journal Link: Journal of Social and Personal Relationships

https://www.newswise.com/articles/ai-can-help-write-a-message-to-a-friend-but-don-t-do-it?sc=lwhn&user=10022176

