When an AI model generates information that sounds correct but is completely made up. The model is not lying — it is filling a gap in knowledge with plausible-sounding tokens because that is how language models work.
Why it matters
Every AI agent will hallucinate eventually. The question is not "how do I prevent it" — it is "how do I detect it before it ships."
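One practical pattern for catching it before it ships: compare the agent's answer against the source documents it was supposed to rely on, and hold back anything that has no support. The sketch below is a crude lexical-overlap heuristic under that assumption; the function names and the threshold are illustrative, not any specific library's API, and a real pipeline would more likely use an entailment model or a citation verifier.

```python
# Minimal pre-ship hallucination check (illustrative sketch).
# Assumes you already hold the source documents the answer should be based on.

import re


def _tokens(text: str) -> set[str]:
    """Lowercase word tokens, ignoring very short words."""
    return {w for w in re.findall(r"[a-z0-9]+", text.lower()) if len(w) > 3}


def check_support(answer: str, sources: list[str], threshold: float = 0.6) -> bool:
    """Return True if every sentence of the answer shares enough vocabulary
    with at least one source document; otherwise flag it as unsupported.
    This is a stand-in for a stronger check such as an entailment model."""
    source_tokens = [_tokens(doc) for doc in sources]
    for sentence in re.split(r"(?<=[.!?])\s+", answer.strip()):
        sent_tokens = _tokens(sentence)
        if not sent_tokens:
            continue
        best = max(
            (len(sent_tokens & doc) / len(sent_tokens) for doc in source_tokens),
            default=0.0,
        )
        if best < threshold:
            return False  # this sentence has no supporting source
    return True


if __name__ == "__main__":
    sources = ["The refund policy allows returns within 30 days of purchase."]
    print(check_support("Returns are allowed within 30 days of purchase.", sources))   # True
    print(check_support("Refunds are issued within 24 hours, no questions asked.", sources))  # False
```

Word overlap is a blunt instrument, but even this kind of check turns "hope the answer is grounded" into a gate the answer has to pass before a human or a downstream system sees it.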
Related terms
Grounding
Connecting an AI model's responses to real, verifiable information instead of letting it generate from training data alone.
Guardrails
Rules and checks that prevent an AI agent from doing things it should not do. Built into prompts, code, or review processes.
RAG (Retrieval Augmented Generation)
A technique where an AI model looks up relevant information from a database before generating its answer. Combines a search index with text generation.
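Grounding and RAG share the same shape: fetch evidence first, then generate from it. Here is a minimal retrieve-then-generate sketch; the tiny keyword retriever, the example documents, and the prompt layout are all assumptions for illustration, and a real system would use an embedding index and an actual model call where the comment indicates.

```python
# Minimal retrieve-then-generate sketch (illustrative, not a specific product's API).

import re
from collections import Counter

DOCS = [
    "Hallucination: output that sounds correct but is not grounded in any source.",
    "Grounding connects model responses to verifiable documents.",
    "Guardrails are checks that stop an agent from shipping unreviewed output.",
]


def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by shared word count with the query and return the top k."""
    q = Counter(re.findall(r"[a-z0-9]+", query.lower()))
    scored = sorted(
        docs,
        key=lambda d: sum((q & Counter(re.findall(r"[a-z0-9]+", d.lower()))).values()),
        reverse=True,
    )
    return scored[:k]


def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble a prompt that confines the model to the retrieved context."""
    context = "\n".join(f"- {d}" for d in docs)
    return (
        "Answer using only the context below. If the answer is not there, say so.\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )


if __name__ == "__main__":
    question = "What does grounding mean?"
    prompt = build_prompt(question, retrieve(question, DOCS))
    print(prompt)  # this prompt would then be sent to the language model
```

The instruction to answer only from the supplied context, and to say so when the context falls short, is what makes retrieval a hallucination control rather than just a search feature.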
Build with this on OTP
OTP encodes coordination intelligence so AI agent teams can run on it. If this term shows up in your team's playbook, it belongs in your OOS.