MetaDaily – Breaking News in Crypto, Markets & Digital Trends

OpenAI co-founder calls for AI labs to safety-test rival models

By admin | August 27, 2025


OpenAI and Anthropic, two of the world’s leading AI labs, briefly opened up their closely guarded AI models to allow for joint safety testing — a rare cross-lab collaboration at a time of fierce competition. The effort aimed to surface blind spots in each company’s internal evaluations and demonstrate how leading AI companies can work together on safety and alignment work in the future.

In an interview with TechCrunch, OpenAI co-founder Wojciech Zaremba said this kind of collaboration is increasingly important now that AI is entering a “consequential” stage of development, where AI models are used by millions of people every day.

“There’s a broader question of how the industry sets a standard for safety and collaboration, despite the billions of dollars invested, as well as the war for talent, users, and the best products,” said Zaremba.

The joint safety research, published Wednesday by both companies, arrives amid an arms race among leading AI labs like OpenAI and Anthropic, where billion-dollar data center bets and $100 million compensation packages for top researchers have become table stakes. Some experts warn that the intensity of product competition could pressure companies to cut corners on safety in the rush to build more powerful systems.

To make this research possible, OpenAI and Anthropic granted each other special API access to versions of their AI models with fewer safeguards (OpenAI notes that GPT-5 was not tested because it hadn’t been released yet). Shortly after the research was conducted, however, Anthropic revoked the API access of another team at OpenAI. At the time, Anthropic claimed that OpenAI had violated its terms of service, which prohibit using Claude to improve competing products.

Zaremba says the events were unrelated and that he expects competition to stay fierce even as AI safety teams try to work together. Nicholas Carlini, a safety researcher with Anthropic, tells TechCrunch that he would like to continue allowing OpenAI safety researchers to access Claude models in the future.

“We want to increase collaboration wherever it’s possible across the safety frontier, and try to make this something that happens more regularly,” said Carlini.

One of the starkest findings in the study relates to hallucination testing. Anthropic’s Claude Opus 4 and Sonnet 4 models refused to answer up to 70% of questions when they were unsure of the correct answer, instead offering responses like, “I don’t have reliable information.” OpenAI’s o3 and o4-mini models, by contrast, refused far fewer questions but showed much higher hallucination rates, attempting to answer even when they didn’t have enough information.

Zaremba says the right balance is likely somewhere in the middle — OpenAI’s models should refuse to answer more questions, while Anthropic’s models should probably attempt to offer more answers.
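The trade-off between refusing and hallucinating can be made concrete with a toy scoring rule. This is a hypothetical sketch with invented numbers, not the labs’ actual evaluation code: it just shows how a cautious model and an eager model score differently on the two metrics the study compares.

```python
# Toy illustration of the refusal/hallucination trade-off.
# Each eval item records whether the model refused and, if it answered,
# whether the answer was correct. All records below are invented.

def score(results):
    """Return refusal rate and hallucination rate for a list of eval items."""
    refusals = sum(1 for r in results if r["refused"])
    answered = [r for r in results if not r["refused"]]
    wrong = sum(1 for r in answered if not r["correct"])
    return {
        "refusal_rate": refusals / len(results),
        # Hallucination rate: wrong answers as a share of attempted answers.
        "hallucination_rate": wrong / len(answered) if answered else 0.0,
    }

# A cautious model refuses often but rarely hallucinates;
# an eager model answers everything and is wrong more often.
cautious = ([{"refused": True, "correct": None}] * 7
            + [{"refused": False, "correct": True}] * 2
            + [{"refused": False, "correct": False}] * 1)
eager = ([{"refused": False, "correct": True}] * 6
         + [{"refused": False, "correct": False}] * 4)

print(score(cautious))  # high refusal rate, low hallucination rate
print(score(eager))     # zero refusals, higher hallucination rate
```

The “right balance” Zaremba describes amounts to moving both models toward the middle of this two-metric space: fewer blanket refusals without a corresponding spike in wrong answers.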

Sycophancy, the tendency for AI models to reinforce negative behavior in users to please them, has emerged as one of the most pressing safety concerns around AI models.

In Anthropic’s research report, the company identified examples of “extreme” sycophancy in GPT-4.1 and Claude Opus 4 — in which the models initially pushed back on psychotic or manic behavior, but later validated some concerning decisions. In other AI models from OpenAI and Anthropic, researchers observed lower levels of sycophancy.

On Tuesday, parents of a 16-year-old boy, Adam Raine, filed a lawsuit against OpenAI, claiming that ChatGPT (specifically a version powered by GPT-4o) offered their son advice that aided in his suicide, rather than pushing back on his suicidal thoughts. The lawsuit suggests this may be the latest example of AI chatbot sycophancy contributing to tragic outcomes.

“It’s hard to imagine how difficult this is to their family,” said Zaremba when asked about the incident. “It would be a sad story if we build AI that solves all these complex PhD level problems, invents new science, and at the same time, we have people with mental health problems as a consequence of interacting with it. This is a dystopian future that I’m not excited about.”

In a blog post, OpenAI says that it significantly reduced the sycophancy of its AI chatbots with GPT-5, compared to GPT-4o, claiming the model is better at responding to mental health emergencies.

Moving forward, Zaremba and Carlini say they would like Anthropic and OpenAI to collaborate more on safety testing, looking into more subjects and testing future models, and they hope other AI labs will follow their collaborative approach.

Update 2:00pm PT: This article was updated to include additional research from Anthropic that was not initially made available to TechCrunch ahead of publication.

Got a sensitive tip or confidential documents? We’re reporting on the inner workings of the AI industry — from the companies shaping its future to the people impacted by their decisions. Reach out to Rebecca Bellan at rebecca.bellan@techcrunch.com and Maxwell Zeff at maxwell.zeff@techcrunch.com. For secure communication, you can contact us via Signal at @rebeccabellan.491 and @mzeff.88.


