14:30 - 15:30
Day 3
Trust me, I don't know: Detecting hallucinations in chatbots
Assembly-Event
An open discussion on uncertainty estimation for language models, focusing on methods to detect hallucinations and exploring their technical, societal, and regulatory implications.

Many of us have come across chatbots or language models that produce answers which sound convincing but are factually wrong, so-called hallucinations. At best this is unfortunate; at worst it is dangerous, since these systems are now deployed in production everywhere.

This session is about uncertainty estimation for language models, a research direction in machine learning that develops methods to detect hallucinations. It is not a talk or lecture but an open discussion, one that may random-walk from technical details to societal impact, governance, and regulation. Everyone is welcome to share their thoughts!
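
To give a flavor of the kind of method the discussion might touch on, here is a minimal, illustrative sketch of one idea from the uncertainty-estimation literature (not any specific paper's algorithm): sample the model several times on the same question and treat disagreement between the samples as a sign of uncertainty. The function names and threshold below are hypothetical choices for illustration.

```python
"""Sketch: sampling-based disagreement as a hallucination signal.

Assumes you already have several answers sampled from a model for the
same prompt; the toy strings below stand in for real LLM samples."""

import math
from collections import Counter


def answer_entropy(answers: list[str]) -> float:
    """Shannon entropy (bits) of the empirical answer distribution.

    Answers are normalized (lowercased, stripped) so trivial formatting
    differences don't count as disagreement. 0.0 means all samples
    agree; higher values mean the model answers inconsistently."""
    normalized = [a.strip().lower() for a in answers]
    counts = Counter(normalized)
    total = len(normalized)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())


def looks_like_hallucination(answers: list[str], threshold: float = 0.5) -> bool:
    """Flag the response as unreliable if the sampled answers disagree too much."""
    return answer_entropy(answers) > threshold


if __name__ == "__main__":
    # Consistent samples -> low entropy -> probably grounded.
    print(looks_like_hallucination(["Paris", "paris", "Paris"]))  # False
    # Inconsistent samples -> high entropy -> possibly hallucinated.
    print(looks_like_hallucination(["1942", "1945", "1939"]))     # True
```

Real methods are more subtle (for instance, clustering answers by meaning rather than by exact string match), which is exactly the kind of detail the discussion can dig into.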