Duke innovates in implementing and assessing AI in health spaces

As artificial intelligence enters the healthcare space, Duke University researchers are working to make sure it is applied safely and fairly.

Large Language Models (LLMs) are making administrative tasks in healthcare more efficient. LLMs are a class of AI that includes ChatGPT and Meta’s LLaMa, among others. They respond to questions in a human-like way by predicting the most likely next word in a sentence, based on the text they were trained on. In healthcare, they are being used to take medical notes and respond to patient portal messages.

Especially in a space as sensitive as the clinic, researchers emphasize the importance of AI governance—the practice of reviewing and assessing AI tools—to ensure that the technology is implemented in an ethical, transparent, and accountable way.

Solving ‘pajama time’

Imagine that after a long day of seeing patients, your doctor gets home, changes into their lounge clothes and plops down on the couch. Instead of turning on the TV, they open their work laptop to finish taking notes. “Pajama time,” as these after-hours tasks are sometimes called, can contribute to feelings of burnout in medical practitioners…
