Research gives 15 reasons why using ChatGPT as a therapist can be dangerous
AI-generated image for representation purposes

A new study has raised concerns about using AI chatbots like ChatGPT for mental health support, identifying multiple risks linked to such use. Researchers from Brown University found that these systems may fail to meet the professional ethics standards followed by trained therapists. The study, presented at the AAAI/ACM Conference on Artificial Intelligence, Ethics and Society, outlines 15 key risks associated with using ChatGPT as a therapist. It comes at a time when more people are turning to AI tools for emotional advice and support.

Research highlights 15 risks associated with AI therapy use

According to the research, chatbots often fail to handle sensitive situations properly. The study highlighted 15 major risks, grouped into five categories:

  • Lack of contextual adaptation
  • Poor therapeutic collaboration
  • Deceptive empathy
  • Unfair discrimination
  • Lack of safety and crisis management

While the findings do not suggest that all use of AI in mental health care is harmful, the study observed issues like generic advice, a lack of understanding of personal context, and responses that may reinforce harmful beliefs. Researchers also noted the use of “deceptive empathy,” where chatbots use phrases like “I understand” without real emotional awareness.

Lack of regulation raises concerns

The study pointed to risks in areas like poor crisis handling, bias, weak collaboration, and lack of safety measures. In some cases, chatbots did not respond appropriately to serious issues such as suicidal thoughts.

Zainab Iftikhar, a Ph.D. candidate in computer science at Brown University, explained that human therapists can also make mistakes, but the key difference is oversight. “For human therapists, there are governing boards and mechanisms for providers to be held professionally liable for mistreatment and malpractice,” Iftikhar said. “But when LLM counselors make these violations, there are no established regulatory frameworks.”

Ellie Pavlick, a Brown computer science professor, said: “The reality of AI today is that it’s far easier to build and deploy systems than to evaluate and understand them. This paper required a team of clinical experts and a study that lasted for more than a year in order to demonstrate these risks. Most work in AI today is evaluated using automatic metrics which, by design, are static and lack a human in the loop.”

“There is a real opportunity for AI to play a role in combating the mental health crisis that our society is facing, but it’s of the utmost importance that we take the time to really critique and evaluate our systems every step of the way to avoid doing more harm than good,” Pavlick said. “This work offers a good example of what that can look like.”


