AI Glossary

Hallucination

When an AI model generates information that sounds correct but is fabricated. The model is not lying; it is filling a gap in its knowledge with plausible-sounding tokens, because next-token prediction rewards plausibility, not truth.

Why it matters

Every AI agent will hallucinate eventually. The question is not "how do I prevent it" — it is "how do I detect it before it ships."
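There is no single standard detector, but one cheap heuristic is self-consistency sampling: ask the model the same question several times and flag answers the samples disagree on. Below is a minimal sketch, assuming you can reach your model through a generate(prompt) callable; the fake_model stub is a hypothetical stand-in so the example runs on its own.

    import random
    import re
    from collections import Counter
    from typing import Callable


    def flag_possible_hallucination(
        prompt: str,
        generate: Callable[[str], str],
        n_samples: int = 5,
        min_agreement: float = 0.6,
    ) -> bool:
        # Self-consistency check: a model reciting something it knows tends
        # to give the same answer every time; a model filling a knowledge
        # gap tends to produce a different plausible answer on each sample.
        def normalize(text: str) -> str:
            # Collapse case and whitespace so trivial variations still match.
            return re.sub(r"\s+", " ", text.strip().lower())

        answers = [normalize(generate(prompt)) for _ in range(n_samples)]
        top_count = Counter(answers).most_common(1)[0][1]
        return top_count / n_samples < min_agreement  # True = review before shipping


    # Toy stand-in for a real model call, only to make the sketch runnable.
    def fake_model(prompt: str) -> str:
        if "capital of France" in prompt:
            return "Paris"                          # stable, well-known fact
        return str(random.randint(1900, 1999))      # confident-sounding guess


    print(flag_possible_hallucination("What is the capital of France?", fake_model))  # False
    print(flag_possible_hallucination("What year was X founded?", fake_model))        # True

Exact-match agreement is deliberately crude; production detectors typically compare samples with embeddings or an entailment model, and pair this check with retrieval against trusted sources.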

Build with this on OTP

OTP encodes coordination intelligence so AI agent teams can run on it. If this term shows up in your team's playbook, it belongs in your OOS.