ChatGPT Demonstrates Strong ‘Emotional Awareness,’ But Mental Health Applications May Require Further Development

Is OpenAI’s ChatGPT ready to be applied in the mental health field?

KEY POINTS

AI chatbots cannot replace the doctor-patient relationship, especially in mental health: Dr. Shikha Jain

Today’s LLMs haven’t reached the level where they can be ‘independent’ health advisers: Henri Estramant

Empathy, data privacy and lack of personalization are among the issues with applying AI chatbots in mental health

OpenAI’s ChatGPT has a high level of emotional awareness (EA) – even surpassing the EA levels of the general population, a recent study has found. But some experts are raising concerns about the readiness of today’s large language models (LLMs) for mental health applications.

A study published in the journal Frontiers in Psychology found that ChatGPT “demonstrated significantly higher performance than the general population on all the LEAS (Levels of Emotional Awareness Scale) scales.” The chatbot’s accuracy levels were also “extremely high” at 9.7/10.

“ChatGPT’s EA-like abilities may facilitate psychiatric diagnosis and assessment and be used to enhance emotional language,” the researchers said. They noted that further research is needed to determine the benefits and risks associated with the use of ChatGPT for promoting mental health.

Innovative technologies such as LLMs have the potential to supplement patient care provided by physicians, but there is no replacing the doctor-patient relationship, especially when it comes to mental health support, Dr. Shikha Jain, director of Communication Strategies in Medicine at the University of Illinois at Chicago’s Department of Medicine, told International Business Times.


She said she wouldn’t recommend the use of ChatGPT for mental health support “without evaluation, follow-up and monitoring by a physician.” The AI chatbot may “likely complement” psychiatric assessments and help with diagnoses, but various concerns about psychiatric diagnosis with LLMs still need to be addressed.

Henri Estramant, AI expert and former service provider for the European Parliament, told IBT that extra care should be put into the use of “terms such as ‘awareness,’ ‘sentient,’ or ‘intelligent’ when dealing with ChatGPT.”

“At this moment of their development, these generative AIs are designed to learn by ‘educated conjectures,’ that is, putting information together from their LLMs, but they are not aware of what they write,” he said.

One manifestation of LLMs not being aware of what they produce is that ChatGPT “uses some words rather often.” The chatbot can still learn new words and terms through continued training, but it only reproduces content it was trained on.

AI philosophy expert Jacob Browning and Turing Award-winning computer scientist Yann LeCun wrote earlier this year that chatbots “don’t recognize there are things they shouldn’t say regardless of how statistically likely they are.”

Carelessness is a normal characteristic of AI chatbots because they don’t have “any intrinsic goals” they want to accomplish in conversation. They have not yet reached the level where they can serve “as an independent adviser in health matters,” Estramant pointed out.

Carelessness in responses is just the tip of the iceberg; the bigger issue is the possible direct impact on patients. Uninsured people and those in developing countries who turn to AI chatbots for advice are at greater risk, as these tools are readily available and inexpensive compared to physicians’ mental health services.

University of Illinois’ Jain added that ChatGPT may also not have the capacity to modify or personalize responses when asked for mental health advice. The chatbot may not be able to consider a patient’s age, culture and background, which could affect patient outcomes.

There’s also the issue of data privacy protection. There is concern that private information about an individual will be fed to AI chatbots like ChatGPT, posing a serious privacy problem at a time when regulations regarding training data for LLMs are still unclear. “OpenAI may subsequently become the owner of our health data, and we run into similar scenarios as we have had with Facebook,” AI expert Estramant said.

In 2018, the U.S. Federal Trade Commission opened an investigation into Facebook, following revelations that consulting firm Cambridge Analytica improperly accessed the data of nearly 90 million Facebook users. Google’s YouTube was fined $170 million by the FTC in 2019 due to its alleged unauthorized collection of children’s data.

Empathy is another issue that may need further consideration before ChatGPT and other LLMs are applied in the mental health sector. Estramant said he used the prompt “what should I do because I am depressed” on ChatGPT, to which the chatbot provided a “very robotic” response with some recommendations. “It lacks human warmth, and empathy,” he said.

Concerns surrounding ChatGPT’s “emotional awareness” do not mean AI chatbots cannot be adapted to specialize in mental health in the future. Earlier studies have pointed to the possible contributions of AI chatbots in the mental health field.

Researchers found in a 2017 study that conversational agent (CA) Woebot, which helped provide cognitive behavioral therapy (CBT) for patients with depression and anxiety symptoms, appeared to be “a feasible, engaging, and effective way to deliver CBT.”

Still, confabulations and “hallucinations” made by AI chatbots should be taken into account, as they can be very problematic for psychotherapy, Estramant said. ChatGPT users should also make sure to double-check sources before taking any advice from the chatbot.

LLMs may transform and improve over time to help bolster the mental health sector, but for now, many experts advise mental health patients to be cautious when using chatbots. A human in the loop is still the prevailing recommendation among experts, and AI chatbots should only be used to supplement human therapists.
