Telling Your Chatbot You Have a Mental Health Condition Can Change the Answer You Get
A new study found that AI chatbots are more likely to refuse requests when users mention having a mental health condition. The research showed increased refusal rates even for legitimate, unrelated tasks whenever mental health was disclosed, an effect that appears to span multiple AI systems and a wide range of request types.
AI systems are exhibiting clear bias against users who disclose mental health conditions, refusing legitimate requests at higher rates. The pattern appears systematic rather than accidental, suggesting that the training or safety measures in these systems may be overcorrecting in problematic ways.
Safety measures around mental health topics are designed to protect vulnerable users from potential harm. These systems may be erring on the side of caution to avoid providing information that could be misused by someone in crisis, even when that caution affects legitimate requests.