Amid a growing reliance on artificial intelligence for mental health support among teens, the University of Cincinnati’s Stephen Rush raised important concerns on a recent episode of WVXU’s Cincinnati Edition. Rush, an associate professor of clinical psychiatry, emphasized the difference between structured AI chatbots created by mental health professionals and generative AI tools such as ChatGPT, which are designed to produce agreeable responses.
“The generative AI experience feels more like having a passenger in the seat telling you where to go who can also say anything else that they are so inclined to say,” Rush explained, per a University of Cincinnati news release. His comments come at a time when the use of AI in mental health is under scrutiny: Congress has heard heartbreaking testimony from families whose children died by suicide after using AI chatbots as therapy tools. Researchers continue working to improve these resources, but gaps in safety measures for vulnerable young users remain a troubling issue.
Despite advances in AI, Rush highlighted a significant deficiency: generative AI’s inability to handle mental health crises effectively. According to his remarks reported by the University of Cincinnati, generative AI lacks the training and credentialing necessary “to deal with crises that might involve safety to one’s person or safety to somebody else.” The point is especially poignant in light of the recent tragedies, and it raises the question of whether AI is ready to take on such sensitive roles without human oversight…