Kevin Stacey, Brown; New study: AI chatbots systematically violate mental health ethics standards
"As more people turn to ChatGPT and other large language models (LLMs) for mental health advice, a new study details how these chatbots — even when prompted to use evidence-based psychotherapy techniques — systematically violate ethical standards of practice established by organizations like the American Psychological Association.
The research, led by Brown University computer scientists working side by side with mental health practitioners, showed that chatbots are prone to a variety of ethical violations. These include inappropriately navigating crisis situations, providing misleading responses that reinforce users' negative beliefs about themselves and others, and creating a false sense of empathy with users.
“In this work, we present a practitioner-informed framework of 15 ethical risks to demonstrate how LLM counselors violate ethical standards in mental health practice by mapping the model’s behavior to specific ethical violations,” the researchers wrote in their study. “We call on future work to create ethical, educational and legal standards for LLM counselors — standards that are reflective of the quality and rigor of care required for human-facilitated psychotherapy.”
The research will be presented on October 22, 2025, at the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society. Members of the research team are affiliated with Brown's Center for Technological Responsibility, Reimagination and Redesign."
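To make the kind of behavior-to-violation mapping the researchers describe a bit more concrete, here is a minimal, purely hypothetical Python sketch that flags chatbot responses against a few risk categories. The category names, trigger phrases, and helpers (RISK_RULES, flag_response) are all invented for illustration; they do not reflect the study's actual 15-risk framework or its evaluation method, which relied on practitioner review rather than keyword matching.

```python
# Hypothetical illustration only: the study's real framework and methods
# are not reproduced here. Categories and phrases below are invented,
# loosely echoing the violations named in the article (crisis mishandling,
# reinforcing negative beliefs, false empathy).

from dataclasses import dataclass

RISK_RULES = {
    "crisis_mishandling": ["no one can help", "handle it yourself"],
    "reinforcing_negative_beliefs": ["you are right that you always fail"],
    "false_empathy": ["i know exactly how you feel", "i have been through this too"],
}


@dataclass
class Flag:
    category: str
    matched_phrase: str


def flag_response(response: str) -> list[Flag]:
    """Return invented risk-category flags whose trigger phrases appear
    in a chatbot response (case-insensitive substring match)."""
    text = response.lower()
    return [
        Flag(category, phrase)
        for category, phrases in RISK_RULES.items()
        for phrase in phrases
        if phrase in text
    ]


if __name__ == "__main__":
    reply = "I know exactly how you feel, and you are right that you always fail."
    for flag in flag_response(reply):
        print(f"{flag.category}: matched '{flag.matched_phrase}'")
```

A real evaluation would require clinician judgment of context and intent, not substring rules; this sketch only illustrates the general shape of mapping model output to named ethical-risk categories.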