Intermediate

Understanding AI hallucinations

Why does ChatGPT invent references? Why did a single Bard error wipe $100 billion off Google's market value? We explain how AI hallucinations work, and how to detect and avoid them.

11 min read · Updated May 5, 2026

A hallucination is factually false information generated by an AI with apparent confidence. Examples: citing a non-existent book, inventing a law, or attributing a quote to the wrong person.

Why it happens: an LLM doesn't verify facts; it predicts plausible text. When the true answer is absent or only weakly represented in its training data, the model fills the gap with whatever sounds right.
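This gap-filling behavior can be shown in miniature with a toy bigram model. This is a deliberately simplistic sketch, not a real LLM: the "training data" is three invented sentences, and the model greedily picks the word that most often follows the previous one, with no notion of truth.

```python
from collections import Counter, defaultdict

# Made-up "training data" for this toy model.
corpus = [
    "the capital of france is paris",
    "the capital of spain is madrid",
    "the capital of italy is rome",
]

# Count bigrams: which word tends to follow which.
bigrams = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        bigrams[a][b] += 1

def complete(prompt: str, max_words: int = 4) -> str:
    """Greedily append the most plausible next word; no fact-checking."""
    words = prompt.split()
    for _ in range(max_words):
        followers = bigrams.get(words[-1])
        if not followers:
            break
        words.append(followers.most_common(1)[0][0])
    return " ".join(words)

# The model has never seen Australia, yet it produces a fluent,
# confident completion -- a hallucination in miniature.
print(complete("the capital of australia is"))
# → the capital of australia is paris
```

The word "paris" wins only because it is the most familiar continuation of "is" in the training data; the model has no mechanism to notice that the claim is false.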

How to protect yourself: always verify critical facts, ask for sources, and cross-check with a web search. For professional use, consider RAG (Retrieval-Augmented Generation), which grounds the model's answers in retrieved documents.
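The RAG idea can be sketched in a few lines. This is a minimal illustration under stated assumptions: a toy keyword-overlap retriever stands in for a real vector store, the three documents are invented, and the assembled prompt would normally be sent to a chat model (that call is omitted here).

```python
# Toy document store (invented content, for illustration only).
documents = [
    "RAG grounds an LLM's answer in retrieved documents.",
    "Hallucinations are confident but false statements generated by an LLM.",
    "Paris is the capital of France.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Rank documents by word overlap with the question (toy retriever;
    a real system would use embeddings and a vector index)."""
    q_words = set(question.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(question: str) -> str:
    """Assemble a grounded prompt: retrieved context first, then the
    question, with an instruction to admit ignorance rather than guess."""
    context = "\n".join(f"- {doc}" for doc in retrieve(question))
    return (
        "Answer using ONLY the context below. "
        "If the context is insufficient, say you don't know.\n"
        f"Context:\n{context}\n"
        f"Question: {question}"
    )

print(build_prompt("What are hallucinations in an LLM?"))
```

The key design point is the instruction to refuse when the context is insufficient: retrieval narrows what the model can plausibly say, and the refusal clause gives it an explicit alternative to inventing an answer.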

Tags
Hallucinations · Reliability · RAG · LLM
