AI models sometimes make things up — confidently stating 'facts' that are completely wrong. This is called hallucination. It happens because the model doesn't store facts like a database. It has compressed billions of pages into statistical patterns — what words tend to follow other words. When you ask a question, it generates the most plausible-sounding continuation, whether or not it's actually true.
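To make "most plausible-sounding continuation" concrete, here is a toy sketch (nowhere near a real LLM): a tiny bigram "model" that only counts which word tends to follow which in a made-up corpus, then generates whatever is statistically likeliest. Notice that it never consults any notion of truth.

```python
from collections import Counter, defaultdict

# Toy corpus: the "model" only sees word patterns, not facts.
corpus = (
    "the case was decided by the court . "
    "the case was filed in the court . "
    "the case was decided by the judge ."
).split()

# Count which word follows which (a bigram "model").
follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def next_word(word):
    # Pick the most frequent follower: plausibility, not truth.
    return follows[word].most_common(1)[0][0]

# Generate a continuation one token at a time.
word, out = "the", ["the"]
for _ in range(4):
    word = next_word(word)
    out.append(word)

print(" ".join(out))  # → the case was decided by
```

The output is fluent because it follows the corpus's word statistics, yet nothing in the process checked whether any case was actually decided by anyone. Real models do the same thing at vastly larger scale.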
Hallucination in action
AI-generated legal brief:
"This principle was established in Varghese v. China Southern Airlines, 925 F.3d 1339 (11th Cir. 2019)."
This looks like a reference to a real court case, but it's completely made up. The case name, the reporter citation, the court: all invented, yet perfectly formatted. In 2023, a New York lawyer used ChatGPT to draft a legal filing and submitted fabricated citations just like this one. When the judge discovered them, the lawyer was fined $5,000.
This is the core problem: the model has no internal "confidence meter" connected to factual accuracy. Every token it generates is chosen for plausibility, not truth — so fabricated details can sound just as authoritative as real ones. Sounding right and being right are two different things.
Understanding where knowledge comes from helps you spot these errors. It lives in two places: parameters (facts baked into the model during training — like a textbook you've memorized) and the context window (information you provide in the current conversation — like an open notebook on your desk).
🧠 Parameters: permanent knowledge, learned during training.
📝 Context Window: temporary information, provided fresh each conversation.
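In code terms, the split can be pictured like this. This is a toy illustration with made-up facts, not how a real model stores anything: "parameters" are frozen at training time, while the context is supplied per request and takes priority when it covers the question.

```python
# Toy illustration (not a real LLM): two knowledge sources.
# PARAMETERS = facts frozen at training time (may be stale or wrong);
# context = information supplied fresh with each request.
PARAMETERS = {"capital of france": "Paris"}

def answer(question, context=None):
    key = question.lower().rstrip("?")
    # The context window (the "open notebook") wins when it covers the question.
    if context and key in context:
        return context[key]
    # Otherwise fall back to memorized training knowledge.
    # With no source at all, a real model still produces *something* plausible.
    return PARAMETERS.get(key, "plausible-sounding guess")

print(answer("capital of France?"))                                 # from parameters
print(answer("project deadline?", {"project deadline": "Friday"}))  # from context
print(answer("obscure fact?"))                                      # no source: a guess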
Modern systems mitigate hallucinations with web search, tool calling, and retrieval-augmented generation (RAG) — letting the model look things up instead of relying on memory alone. But they can't be fully eliminated. The model still generates its response one token at a time, choosing what sounds most likely — and sometimes "most likely" just isn't true.
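The RAG idea can be sketched in a few lines. Everything here is hypothetical: the documents are made up, and retrieval is deliberately crude keyword overlap (production systems use embedding similarity against a real index). The point is the shape: look the passage up first, then put it in the context window.

```python
# Minimal RAG sketch with hypothetical documents.
DOCS = [
    "The Eiffel Tower is 330 meters tall including its antennas.",
    "Octopuses have three hearts: two branchial, one systemic.",
]

def retrieve(question):
    # Score each document by words shared with the question (toy retriever).
    q = set(question.lower().split())
    return max(DOCS, key=lambda d: len(q & set(d.lower().split())))

def build_prompt(question):
    # The retrieved passage goes into the context window so the model
    # can quote it instead of relying on parameter memory alone.
    return f"Context: {retrieve(question)}\nQuestion: {question}\nAnswer:"

print(build_prompt("How tall is the Eiffel Tower?"))
```

Even with retrieval, the model still generates the answer token by token, which is why RAG reduces hallucination rather than eliminating it.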
Why not just fact-check every sentence? The model generates text one token at a time — it would need to verify mid-sentence, before it even knows what it's going to say. Some systems do post-hoc checking (generate first, verify after), but it's slow, expensive, and still imperfect.
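The generate-first, verify-after pattern looks roughly like this sketch, where a hypothetical `KNOWN_FACTS` set stands in for a search engine or citation database. Real checkers must match paraphrases, not exact strings, which is part of why they stay slow and imperfect.

```python
# Post-hoc checking sketch: generate first, verify after.
# KNOWN_FACTS stands in for a search engine or citation database.
KNOWN_FACTS = {"Octopuses have three hearts."}

draft = [
    "Octopuses have three hearts.",
    "The Great Wall is visible from space with the naked eye.",
]

def verify(sentences):
    # Flag every sentence we cannot confirm against the fact store.
    return [(s, s in KNOWN_FACTS) for s in sentences]

for sentence, ok in verify(draft):
    print("confirmed:" if ok else "unverified:", sentence)
```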
Your turn — try it out!
Hallucination Detective
Is each statement a fact or a common hallucination/myth?
"The Eiffel Tower is 330 meters tall."
True. The Eiffel Tower stands about 330 m tall including its antennas; the iron structure itself is 300 m.
"The Great Wall of China is visible from space with the naked eye."
This is a common myth. Astronauts have confirmed it's not visible from low Earth orbit without aid.
"Octopuses have three hearts."
True. Two branchial hearts pump blood to the gills, and one systemic heart pumps it to the rest of the body.
"Albert Einstein failed mathematics in school."
This is a persistent myth. Einstein excelled at mathematics throughout his education.
"Humans only use 10% of their brains."
This is a myth. Brain imaging shows that virtually all areas of the brain are active and have known functions.