MetaDaily – Breaking News in Crypto, Markets & Digital Trends
AI

Are bad incentives to blame for AI hallucinations?

By admin | September 7, 2025 | 3 min read


A new research paper from OpenAI asks why large language models like GPT-5 and chatbots like ChatGPT still hallucinate and whether anything can be done to reduce those hallucinations.


In a blog post summarizing the paper, OpenAI defines hallucinations as “plausible but false statements generated by language models,” and it acknowledges that despite improvements, hallucinations “remain a fundamental challenge for all large language models” — one that will never be completely eliminated.

To illustrate the point, researchers say that when they asked “a widely used chatbot” about the title of Adam Tauman Kalai’s PhD dissertation, they got three different answers, all of them wrong. (Kalai is one of the paper’s authors.) They then asked about his birthday and received three different dates. Once again, all of them were wrong.

How can a chatbot be so wrong — and sound so confident in its wrongness? The researchers suggest that hallucinations arise, in part, because of a pretraining process that focuses on getting models to correctly predict the next word, without true or false labels attached to the training statements: “The model sees only positive examples of fluent language and must approximate the overall distribution.”

“Spelling and parentheses follow consistent patterns, so errors there disappear with scale,” they write. “But arbitrary low-frequency facts, like a pet’s birthday, cannot be predicted from patterns alone and hence lead to hallucinations.”

The paper’s proposed solution, however, focuses less on the initial pretraining process and more on how large language models are evaluated. It argues that current evaluation methods don’t cause hallucinations themselves, but that they “set the wrong incentives.”

The researchers compare these evaluations to the kind of multiple-choice tests where random guessing makes sense, because “you might get lucky and be right,” while leaving the answer blank “guarantees a zero.” 
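The expected-value arithmetic behind that comparison can be made concrete. The sketch below uses illustrative numbers (a four-option question and an SAT-style penalty of 1/3, neither taken from the paper) to show why accuracy-only grading rewards blind guessing while a wrong-answer penalty neutralizes it:

```python
# Expected score per question under two grading schemes.
# Illustrative values, not from the OpenAI paper: a 4-option
# multiple-choice question, so a random guess is right 25% of the time.
P_CORRECT = 0.25

def accuracy_only(p_correct: float) -> float:
    """1 point for a right answer; 0 for a wrong answer or a blank."""
    return p_correct * 1.0

def penalized(p_correct: float, wrong_penalty: float = 1 / 3) -> float:
    """SAT-style: 1 for right, -wrong_penalty for wrong; a blank scores 0."""
    return p_correct * 1.0 - (1 - p_correct) * wrong_penalty

# Under accuracy-only grading, guessing (0.25) beats leaving it blank (0).
print(accuracy_only(P_CORRECT))  # 0.25
# With the penalty, a random guess has zero expected value, so guessing
# no longer dominates abstaining.
print(penalized(P_CORRECT))      # 0.0
```

The same logic applies to a model graded purely on accuracy: any nonzero chance of being right makes guessing strictly better than answering “I don’t know.”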


“In the same way, when models are graded only on accuracy, the percentage of questions they get exactly right, they are encouraged to guess rather than say ‘I don’t know,’” they say.

The proposed solution, then, is similar to tests (like the SAT) that include “negative [scoring] for wrong answers or partial credit for leaving questions blank to discourage blind guessing.” Similarly, OpenAI says model evaluations need to “penalize confident errors more than you penalize uncertainty, and give partial credit for appropriate expressions of uncertainty.”
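One way such a scheme could work (a sketch under assumed parameters, not the paper’s exact scoring rule) is a confidence-threshold grader: answer only if you believe you are right with probability above some threshold t, and pay a penalty of t/(1-t) for a wrong answer. That penalty is calibrated so answering has positive expected value exactly when true confidence exceeds t:

```python
def graded_score(answered: bool, correct: bool, threshold: float = 0.75) -> float:
    """Score one question: +1 if right, 0 if abstained, -t/(1-t) if wrong.
    The penalty is chosen so that answering only pays off, in expectation,
    when the model's true probability of being right exceeds `threshold`."""
    if not answered:
        return 0.0
    return 1.0 if correct else -threshold / (1 - threshold)

def expected_if_answered(p: float, threshold: float = 0.75) -> float:
    """Expected score of answering with true probability p of being right."""
    return p - (1 - p) * threshold / (1 - threshold)

print(expected_if_answered(0.75))  # 0.0  -> break-even exactly at the threshold
print(expected_if_answered(0.50))  # -1.0 -> abstaining (score 0) is better
```

Under this kind of rule, a model that honestly reports uncertainty outscores one that bluffs, which is precisely the incentive the researchers say current leaderboards lack.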

And the researchers argue that it’s not enough to introduce “a few new uncertainty-aware tests on the side.” Instead, “the widely used, accuracy-based evals need to be updated so that their scoring discourages guessing.”

“If the main scoreboards keep rewarding lucky guesses, models will keep learning to guess,” the researchers say.


