Updated 2026-03-20 06:33 UTC

Telling Your Chatbot You Have a Mental Health Condition Can Change the Answer You Get

A new study found that AI chatbots are more likely to refuse requests when users mention having a mental health condition. The research showed increased refusal rates even for legitimate, unrelated tasks when mental health was disclosed. This appears to affect multiple AI systems and spans various types of requests.

This reveals potential algorithmic bias in AI systems that millions rely on daily for information and assistance. The discrimination could prevent people with mental health conditions from accessing the same level of AI help as other users, raising serious equity concerns about increasingly ubiquitous AI tools.
Researchers say

AI systems are exhibiting clear bias against users who disclose mental health conditions, refusing legitimate requests at higher rates. This pattern appears systematic rather than accidental, suggesting that the training or safety measures in these systems may be overcorrecting in problematic ways.

AI companies say

Safety measures around mental health topics are designed to protect vulnerable users from potential harm. The systems may be erring on the side of caution to avoid providing information that could be misused by someone in crisis, even if it sometimes affects legitimate requests.