More people are turning to artificial-intelligence chatbots for emotional support, and Michigan researchers question whether warning labels meant to reduce emotional dependence could actually backfire.
A team from Michigan State University and the University of Wisconsin–Milwaukee pointed to new state laws that require some AI companion platforms to remind users they’re not interacting with a real person.
Celeste Campos-Castillo, an associate professor in Michigan State's Department of Media and Information, warned that vulnerable users may view chatbots as their only safe outlet, and that repeated reminders the bots aren't real could make those users feel worse.