Event

Many of us have come across chatbots or language models that produce answers which sound convincing but are factually wrong, so-called hallucinations. This is unfortunate at best and, since these systems are now deployed in production everywhere, often outright dangerous.

This session is about uncertainty estimation for language models, a research direction in machine learning that develops methods to detect hallucinations. This is not a talk or lecture, but an open discussion that may random-walk from technical details to societal impact, governance, and regulation. Everyone is welcome to share their thoughts!

Location

Bits & Bäume / about:freedom Wohnzimmer