New Guardrails for AI Companions Could Be Coming to Oregon

Operators of artificial intelligence chatbots would have to refer suicidal users to a crisis hotline, and clearly tell users they are talking to software—not a human—under a bill that has been moving through Salem in recent weeks.

“One of the most important features of this bill is, it tries to take a moment of crisis and turn it into a moment of intervention, of hope,” says Dwight Holton, CEO of Lines for Life, a major operator of crisis hotlines in Oregon.

The proposal is part of a broader AI regulatory bill that looks to establish safety guardrails on an emerging technology. It comes as experts sound louder alarms about the way sycophantic chatbots and other AI companions manipulate users. Experts say the systems are, in many cases, designed to hook users and extract their monetizable personal data…
