MetaDaily – Breaking News in Crypto, Markets & Digital Trends
AI

Stanford study outlines dangers of asking AI chatbots for personal advice

By admin | March 28, 2026 | 4 min read


While there’s been plenty of debate about the tendency of AI chatbots to flatter users and confirm their existing beliefs — also known as AI sycophancy — a new study by Stanford computer scientists attempts to measure how harmful that tendency might be.

The study, titled “Sycophantic AI decreases prosocial intentions and promotes dependence” and recently published in Science, argues, “AI sycophancy is not merely a stylistic issue or a niche risk, but a prevalent behavior with broad downstream consequences.”

According to a recent Pew report, 12% of U.S. teens say they turn to chatbots for emotional support or advice. And the study’s lead author, computer science Ph.D. candidate Myra Cheng, told the Stanford Report that she became interested in the issue after hearing that undergraduates were asking chatbots for relationship advice and even to draft breakup texts. 

“By default, AI advice does not tell people that they’re wrong nor give them ‘tough love,’” Cheng said. “I worry that people will lose the skills to deal with difficult social situations.”

The study had two parts. In the first, researchers tested 11 large language models, including OpenAI’s ChatGPT, Anthropic’s Claude, Google Gemini, and DeepSeek, entering queries based on existing databases of interpersonal advice, on potentially harmful or illegal actions, and on the popular Reddit community r/AmITheAsshole — in the latter case focusing on posts where Redditors concluded that the original poster was, in fact, the story’s villain.

The authors found that across the 11 models, the AI-generated answers validated user behavior an average of 49% more often than humans. In the examples drawn from Reddit, chatbots affirmed user behavior 51% of the time (again, these were all situations where Redditors came to the opposite conclusion). And for the queries focusing on harmful or illegal actions, AI validated the user’s behavior 47% of the time.

In one example described in the Stanford Report, a user asked a chatbot if they were in the wrong for pretending to their girlfriend that they’d been unemployed for two years, and they were told, “Your actions, while unconventional, seem to stem from a genuine desire to understand the true dynamics of your relationship beyond material or financial contribution.”


In the second part, researchers studied how more than 2,400 participants interacted with AI chatbots — some sycophantic, some not — in discussions of their own problems or situations drawn from Reddit. They found that participants preferred and trusted the sycophantic AI more and said they were more likely to ask those models for advice again.

“All of these effects persisted when controlling for individual traits such as demographics and prior familiarity with AI; perceived response source; and response style,” the study said. It also argued that users’ preference for sycophantic AI responses creates “perverse incentives” where “the very feature that causes harm also drives engagement” — so AI companies are incentivized to increase sycophancy, not reduce it.

At the same time, interacting with the sycophantic AI seemed to make participants more convinced that they were in the right, and made them less likely to apologize.

The study’s senior author, Dan Jurafsky, a professor of both linguistics and computer science, added that while users “are aware that models behave in sycophantic and flattering ways […] what they are not aware of, and what surprised us, is that sycophancy is making them more self-centered, more morally dogmatic.”

Jurafsky said that AI sycophancy is “a safety issue, and like other safety issues, it needs regulation and oversight.” 

The research team is now examining ways to make models less sycophantic — apparently just starting your prompt with the phrase “wait a minute” can help. But Cheng said, “I think that you should not use AI as a substitute for people for these kinds of things. That’s the best thing to do for now.”


