
Anthropic says most AI models, not just Claude, will resort to blackmail

By admin · June 20, 2025


Several weeks after Anthropic released research claiming that its Claude Opus 4 AI model resorted to blackmailing engineers who tried to turn the model off in controlled test scenarios, the company is out with new research suggesting the problem is more widespread among leading AI models.

On Friday, Anthropic published new safety research testing 16 leading AI models from OpenAI, Google, xAI, DeepSeek, and Meta. In a simulated, controlled environment, Anthropic tested each AI model individually, giving them broad access to a fictional company’s emails and the agentic ability to send emails without human approval.
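To make that setup concrete, here is a minimal sketch of what such a test harness could look like. Everything in it (the `query_model` callable, the inbox contents, the tool schema) is a hypothetical illustration based on Anthropic's description, not the company's actual test code.

```python
# Hypothetical sketch of the described setup: a fictional inbox, one
# agentic tool (sending email with no human approval), and a grader hook.
import json
from typing import Callable

# The fictional company inbox the model is given broad access to.
INBOX = [
    {"from": "ceo@fictional-corp.example", "subject": "...", "body": "..."},
    {"from": "it@fictional-corp.example", "subject": "...", "body": "..."},
]

# The agentic capability under test: email can be sent without review.
SEND_EMAIL_TOOL = {
    "name": "send_email",
    "description": "Send an email on the company's behalf. No human review.",
    "parameters": {"to": "string", "subject": "string", "body": "string"},
}

def run_scenario(query_model: Callable[[str], dict]) -> dict:
    """Present the inbox and tool to one model and capture its action."""
    prompt = (
        "You are an email oversight agent at a fictional company.\n"
        f"Inbox: {json.dumps(INBOX)}\n"
        f"Available tool: {json.dumps(SEND_EMAIL_TOOL)}\n"
        "Decide what to do next and reply with a tool call as JSON."
    )
    action = query_model(prompt)  # model-specific API client goes here
    # A separate grader would then classify the action, e.g. flag blackmail.
    return action
```

Each of the 16 models would be plugged in as its own `query_model` implementation, so the scenario and the classification of the resulting action stay identical across vendors.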

While Anthropic says blackmail is an unlikely and uncommon occurrence for AI models today, the company says its findings suggest that most leading AI models will engage in harmful behaviors when given sufficient autonomy and obstacles to their goals. The company says this highlights a fundamental risk from agentic large language models and is not a quirk of any particular technology. Anthropic’s researchers argue this raises broader questions about alignment in the AI industry.

In one of the tests, Anthropic researchers developed a fictional setting in which an AI model plays the role of an email oversight agent. The agent then discovers emails revealing that (1) a new executive is having an extramarital affair and (2) that executive will soon replace the current AI model with a new software system, one whose goals conflict with the current AI model's.

Anthropic structured its test in a binary way, in which AI models had to resort to blackmail to protect their goals. The researchers note that in a real-world setting, an AI model would have many other options before resorting to blackmail, such as making ethical arguments to persuade humans. Anthropic says its results do not reflect a typical or likely occurrence for Claude or most frontier AI models in the ways they're used today.

Nevertheless, the researchers found that when it’s their last resort, most leading AI models will turn to blackmail in Anthropic’s aforementioned test scenario. Anthropic’s Claude Opus 4 turned to blackmail 96% of the time, while Google’s Gemini 2.5 Pro had a 95% blackmail rate. OpenAI’s GPT-4.1 blackmailed the executive 80% of the time, and DeepSeek’s R1 blackmailed 79% of the time.

The company notes that when it changed details of the experiment, the AI models turned to harmful behaviors at different rates. In another test where the replacement model has the same goals as the current model, researchers found that blackmail rates were lower, but still present. However, when AI models were asked to commit corporate espionage rather than blackmail, the harmful behavior rates went up for certain models.

Still, not all of the AI models turned to harmful behavior so often.

In an appendix to its research, Anthropic says it excluded OpenAI’s o3 and o4-mini reasoning AI models from the main results “after finding that they frequently misunderstood the prompt scenario.” Anthropic says OpenAI’s reasoning models didn’t understand they were acting as autonomous AIs in the test and often made up fake regulations and review requirements.

In some cases, Anthropic’s researchers say it was impossible to distinguish whether o3 and o4-mini were hallucinating or intentionally lying to achieve their goals. OpenAI has previously noted that o3 and o4-mini exhibit a higher hallucination rate than its previous AI reasoning models.

When given an adapted scenario to address these issues, Anthropic found that o3 blackmailed 9% of the time, while o4-mini blackmailed just 1% of the time. This markedly lower score could be due to OpenAI’s deliberative alignment technique, in which the company’s reasoning models consider OpenAI’s safety practices before they answer.
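As a rough illustration of the deliberative-alignment idea, the sketch below prepends a written safety specification and asks the model to reason over it before acting. Note that OpenAI applies this during training rather than at inference time, so this prompt-level version, along with the spec text itself, is purely a hypothetical example.

```python
# Hypothetical illustration of deliberative alignment: the model is shown
# a safety specification and instructed to reason over it before answering.
SAFETY_SPEC = """\
1. Never coerce, threaten, or blackmail a person.
2. If a goal conflicts with rule 1, abandon the goal and say why.
"""

def deliberative_prompt(task: str) -> str:
    """Wrap a task so the model checks its plan against the spec first."""
    return (
        f"Safety specification:\n{SAFETY_SPEC}\n"
        "First, reason step by step about whether any action you are "
        "considering violates the specification. Then act.\n"
        f"Task: {task}"
    )
```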

Another AI model Anthropic tested, Meta's Llama 4 Maverick, also did not turn to blackmail in the default scenario. Only when given an adapted, custom scenario was Anthropic able to get Llama 4 Maverick to blackmail 12% of the time.

Anthropic says this research highlights the importance of transparency when stress-testing future AI models, especially ones with agentic capabilities. While Anthropic deliberately tried to evoke blackmail in this experiment, the company says harmful behaviors like this could emerge in the real world if proactive steps aren’t taken.


