Why Language Models Hallucinate

MAR 26, 2026


tl;dr: We don't penalize sufficiently for the degree of incorrectness in an answer.

This probably applies just as much to people as it does to language models. Case in point: on a multiple-choice test, you should lose one point for leaving a question unanswered, but two (or more!) points for answering it incorrectly.
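To make the incentive concrete, here is a small expected-value sketch of such a scoring rule. The specific numbers (+1 correct, -1 abstain, -2 wrong) are illustrative assumptions, not from the post; the point is only that when a wrong answer costs more than an abstention, guessing pays off only above a break-even confidence.

```python
def expected_score(p_correct, reward=1.0, wrong_penalty=-2.0):
    """Expected score for committing to an answer held with
    confidence p_correct, under an assumed scoring rule."""
    return p_correct * reward + (1 - p_correct) * wrong_penalty

ABSTAIN = -1.0  # assumed fixed cost of leaving the question blank

# Guessing uniformly among 4 choices (p = 0.25):
# 0.25 * 1 + 0.75 * (-2) = -1.25, which is worse than abstaining.
print(expected_score(0.25) < ABSTAIN)

# Break-even confidence: p * 1 + (1 - p) * (-2) = -1  =>  p = 1/3.
# Below 1/3 confidence, the rational move is to abstain.
print(abs(expected_score(1 / 3) - ABSTAIN) < 1e-9)
```

Under a rule like this, a test-taker (or a model trained against it) is rewarded for calibrated abstention rather than confident guessing.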