A hallucination is factually false information generated by an AI with apparent confidence. Examples: citing a non-existent book, inventing a law, or attributing a quote to the wrong person.
Why it happens: the LLM doesn't verify facts; it predicts plausible text. If the "true" answer isn't well represented in its training data, it fills the gap with whatever sounds right.
How to protect yourself: always verify critical facts, ask for sources, and cross-check with a web search. For professional use, ground the model's answers in your own documents with RAG (Retrieval-Augmented Generation), as sketched below.
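The idea behind RAG is simple: retrieve trusted passages first, then force the model to answer only from them. Below is a minimal sketch of that flow, assuming a tiny in-memory document store, a toy word-overlap retriever (standing in for real embedding search), and a hypothetical `call_llm` function in place of whatever LLM API you actually use; none of these names come from a specific library.

```python
from collections import Counter

# Toy document store: in practice this would be a vector database built
# over your own trusted documents (manuals, contracts, internal wikis, ...).
DOCUMENTS = [
    "The warranty on model X200 lasts 24 months from the date of purchase.",
    "Returns are accepted within 30 days if the product is unused.",
    "Support is available Monday to Friday, 9am to 6pm.",
]

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by simple word overlap with the question (a stand-in
    for real embedding similarity) and return the top-k passages."""
    q_words = Counter(question.lower().split())
    scored = sorted(
        docs,
        key=lambda d: sum(q_words[w] for w in d.lower().split()),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str, passages: list[str]) -> str:
    """Constrain the model to the retrieved passages so it cites real text
    instead of filling gaps with plausible-sounding inventions."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using ONLY the sources below. "
        "If the answer is not in them, say you don't know.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )

def call_llm(prompt: str) -> str:
    # Hypothetical placeholder: replace with a call to your actual LLM API.
    raise NotImplementedError

question = "How long is the warranty on the X200?"
prompt = build_prompt(question, retrieve(question, DOCUMENTS))
print(prompt)  # the grounded prompt you would send to the model
```

Printed out, the prompt contains the warranty passage plus the instruction to admit ignorance when the sources don't cover the question; that instruction, combined with retrieval from documents you control, is what makes the answer verifiable instead of merely plausible.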