AI Coordination Dictionary
60+ Terms Defined
Plain English definitions for every concept you need to build, coordinate, and scale AI agent teams. From protocols to platforms to the patterns that hold it all together.
A
A2A (Agent-to-Agent Protocol)
A protocol that lets AI agents talk directly to each other. If one agent needs help from another, A2A is the language they use to negotiate, hand off tasks, and share results. It sits in the middle layer of the AI coordination stack, between the tool layer (MCP) and the organization layer (OTP).
Why it matters: Without a shared protocol for agent-to-agent communication, every integration becomes custom glue code that breaks when anything changes.
Agent Army
A team of specialized AI agents that work together inside one organization. Each agent has a clear job, clear tools, and clear boundaries. The "army" part is not about quantity. It is about structure. A well-built agent army has no overlap, no gaps, and every agent knows exactly what it owns.
Why it matters: Throwing more agents at a problem without structure creates chaos. An agent army turns chaos into a system.
Agent Handoff
When one AI agent passes a task, a piece of context, or a decision to another agent. A good handoff includes everything the receiving agent needs to continue without asking follow-up questions. A bad handoff loses context, duplicates work, or drops the task entirely.
Why it matters: Most multi-agent failures happen at the handoff. The work inside each agent is usually fine. It is the space between agents that breaks.
Agent Message Bus
A communication channel that lets agents send structured messages directly to each other without a human in the middle. Messages follow a defined format (like REQUEST, INFORM, PROPOSAL, RESPONSE, or CHALLENGE) so the receiving agent knows exactly what is being asked and how to respond.
Why it matters: If every agent-to-agent communication has to go through a human, the human becomes the bottleneck. A message bus lets agents coordinate at machine speed.
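A minimal sketch of a structured message in Python. Only the message types come from the definition above; the field names and validation rule are illustrative, not a defined standard.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Message types named in the definition above; field names are illustrative.
MESSAGE_TYPES = {"REQUEST", "INFORM", "PROPOSAL", "RESPONSE", "CHALLENGE"}

@dataclass
class AgentMessage:
    sender: str
    recipient: str
    type: str          # must be one of MESSAGE_TYPES
    body: str
    sent_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def __post_init__(self):
        if self.type not in MESSAGE_TYPES:
            raise ValueError(f"Unknown message type: {self.type}")

msg = AgentMessage("research-agent", "writer-agent", "REQUEST",
                   "Summarize the Q3 findings for the draft report.")
print(msg.type, "->", msg.recipient)
```

Because the type is validated, a receiving agent can route on it mechanically instead of parsing free-form text.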
Agent Orchestration
The process of coordinating multiple AI agents so they work as a team instead of a crowd. Orchestration decides who runs when, who gets what information, and how results flow from one agent to the next. Think of it like a conductor leading an orchestra. Each musician is skilled, but without coordination they produce noise instead of music.
Why it matters: Individual agents are only as useful as their coordination. Orchestration is what turns a collection of tools into a functioning team.
Agentic Maturity Levels
An 8-level framework measuring how sophisticated an organization's AI agent coordination is. Based on the framework by Bassim Eledath.
Why it matters: You cannot improve what you cannot measure. This framework gives you a score and a roadmap for what to build next.
AI Agent
A software program powered by AI that can take actions on its own. Unlike a chatbot that just answers questions, an agent can use tools, read files, call APIs, make decisions, and complete multi-step tasks. Give it a goal, and it figures out the steps.
Why it matters: Agents are the building blocks of every AI team. Understanding what they can and cannot do is the starting point for everything else in this glossary.
API (Application Programming Interface)
A set of rules that lets two pieces of software talk to each other. When your weather app shows today's forecast, it is calling a weather API behind the scenes. APIs are how AI agents connect to the outside world: reading data, sending messages, updating records, and triggering actions in other systems.
Why it matters: Every tool an AI agent uses is accessed through an API. No API access means no tools, and no tools means the agent is just a chatbot.
Auto-Fixer
An OTP tool that automatically repairs common issues in an Organizational Operating System before publishing. If a claim is missing a required field, has an invalid confidence level, or uses a non-standard section name, the auto-fixer corrects it. It runs during the publishing pipeline and tells you exactly what it changed.
Why it matters: Manual formatting errors should not block publishing. The auto-fixer catches the boring stuff so you can focus on the actual coordination intelligence.
Autonomous vs. Semi-Autonomous
Two modes an AI agent can operate in. An autonomous agent makes decisions and takes actions without waiting for human approval. A semi-autonomous agent does the thinking and recommends actions, but waits for a human to say "go." Most real-world agent teams use a mix of both, where low-risk actions are autonomous and high-risk actions require approval.
Why it matters: Going fully autonomous too early leads to expensive mistakes. Going fully manual defeats the purpose of having agents. The sweet spot is somewhere in between.
B
Blast Radius
How much damage spreads when something goes wrong. In multi-agent systems, blast radius describes how many agents, processes, or workflows are affected when one agent fails or gets a bad update. Good architecture keeps blast radius small. If you update one agent and three others break, your blast radius is too wide.
Why it matters: In any system with multiple moving parts, failures are inevitable. The question is whether a failure takes down one agent or the whole team.
C
ChatGPT (OpenAI)
An AI assistant built by OpenAI, based on the GPT family of language models. ChatGPT popularized conversational AI when it launched in late 2022. It can answer questions, write code, analyze data, and use tools through plugins and function calling. It is one of several major AI platforms that organizations use to power their agents.
Why it matters: ChatGPT is the most widely recognized AI tool in the world. Understanding its capabilities and limits helps you decide where it fits in your agent stack.
Claim Provenance
The origin story of a knowledge claim. Provenance tracks where a claim came from, when it was created, who authored it, and how it has changed over time. If a claim was borrowed from another organization's OOS, provenance records that lineage. This creates an auditable chain of custody for every piece of coordination intelligence.
Why it matters: When two organizations share a claim, you need to know who said it first and whether it was validated independently. Provenance makes coordination intelligence trustworthy.
Claim Sections
Standard categories within an OOS that organize claims by domain:
- core_operating_rules: Foundational rules that all agents must follow.
- agent_roles_and_authority: What each agent owns and does not own.
- coordination_patterns: How agents share information and avoid conflicts.
- operational_heuristics: Rules of thumb learned from practice.
- failure_patterns: Documented things that go wrong and how to prevent them.
- human_ai_boundary_conditions: Where human oversight is required and where agents have autonomy.
Why it matters: Sections make it possible to compare one organization's coordination patterns to another. Without standard categories, every OOS would be structured differently and comparison would be impossible.
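As a rough sketch, an OOS organized by these sections might look like this in YAML. The claim text and exact field layout are illustrative; only the section names come from the list above.

```yaml
sections:
  core_operating_rules:
    - claim: "All agent actions that modify production data require a logged approval."
  agent_roles_and_authority: []
  coordination_patterns: []
  operational_heuristics: []
  failure_patterns: []
  human_ai_boundary_conditions: []
```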
Claim Similarity
A score measuring how closely two knowledge claims from different organizations match in meaning. Claims are classified as SIMILAR (overlapping intent, different wording) or DUPLICATE (nearly identical). Similarity scores power the Intelligence Graph and the comparison engine.
Why it matters: When the same rule shows up in 50 different organizations, that is a strong signal. Similarity scoring surfaces those patterns automatically.
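A rough lexical sketch of similarity classification using Python's standard library. A real engine would compare meaning (for example, with embeddings) rather than surface wording, and the thresholds here are invented for illustration.

```python
from difflib import SequenceMatcher

def classify(claim_a: str, claim_b: str) -> str:
    # Lexical overlap as a crude stand-in for semantic similarity.
    score = SequenceMatcher(None, claim_a.lower(), claim_b.lower()).ratio()
    if score >= 0.9:
        return "DUPLICATE"   # nearly identical wording
    if score >= 0.6:
        return "SIMILAR"     # overlapping intent, close wording
    return "UNRELATED"       # thresholds are illustrative, not OTP's

print(classify("Agents must log every production write.",
               "Agents must log every production write!"))   # DUPLICATE
```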
Claude (Anthropic)
An AI assistant built by Anthropic, designed with a focus on safety and helpfulness. Claude can read and write code, analyze documents, use tools through MCP, and follow detailed system prompts. It is used as the backbone for many AI agent architectures because of its large context window and ability to follow structured instructions reliably.
Why it matters: Claude's MCP integration and long context window make it a natural fit for agent-based systems that need to load and follow detailed operational rules.
CLAUDE.md
A configuration file that gives Claude instructions about how to behave in a specific project or organization. It lives in the root of a codebase and is automatically loaded when Claude starts a session. Think of it as a system prompt that lives in a file instead of being pasted into a chat window. Organizations use CLAUDE.md to define agent roles, tools, boundaries, and coordination rules.
Why it matters: CLAUDE.md is the simplest form of an Organizational Operating System. It is where most agent teams start, and it is the file that OTP helps you formalize and share.
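A stripped-down, hypothetical CLAUDE.md might look like this. The role, tools, and boundaries shown are invented for illustration; real files are usually far more detailed.

```markdown
# CLAUDE.md (illustrative example)

## Role
You are the research agent. You gather and summarize sources.
You do not publish or send external messages.

## Tools
- Web search (read-only)
- Internal docs via MCP

## Boundaries
- Escalate anything involving pricing or legal language to a human.
```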
Clerk
An authentication and user management service that handles sign-up, sign-in, and session management for web applications. Instead of building login systems from scratch, developers use Clerk to add secure authentication in hours instead of weeks. OTP uses Clerk to manage publisher accounts and protect the publishing pipeline.
Why it matters: Authentication is too important to get wrong and too boring to build from scratch. Clerk handles it so you can focus on your actual product.
CLI (Command Line Interface)
A way to interact with a computer by typing text commands instead of clicking buttons. Developers and AI agents use CLIs because they are fast, scriptable, and automatable. If you have ever typed a command into Terminal or a command prompt, you have used a CLI.
Why it matters: Most AI agent tools run through CLIs. If you cannot work with a command line, you cannot build or manage an agent team.
Confidence Levels
How certain an organization is about a knowledge claim. Every claim in an OOS must declare its confidence level.
- HIGH: Validated through measurement or extensive repeated observation. The organization is confident this rule works and has data to prove it.
- MEDIUM: Observed pattern with reasonable supporting evidence. The organization believes this rule works but has limited data.
- LOW: Inference, speculation, or a newly adopted rule that has not yet been validated in practice.
Why it matters: Not all rules are created equal. Confidence levels let you weigh battle-tested patterns differently from educated guesses.
Context Window
The amount of text an AI model can "see" at one time. Think of it like the model's working memory. Everything the model reads (your question, the system prompt, any files, previous conversation) has to fit inside the context window. Once the window is full, the oldest information gets pushed out. Context windows are measured in tokens.
Why it matters: Your agent's context window is a hard budget. Every operational rule you load costs tokens. If your coordination instructions do not fit, the agent literally cannot follow them.
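A back-of-the-envelope budget check. The 4-characters-per-token figure is a rough English-text approximation; real tokenizers vary by model and content, so treat this as a sanity check, not an exact count.

```python
def fits_in_context(texts, window_tokens, chars_per_token=4):
    """Rough budget check: ~4 chars/token is a common English
    approximation; real tokenizers vary by model and content."""
    estimated = sum(len(t) for t in texts) // chars_per_token
    return estimated, estimated <= window_tokens

system_prompt = "You are the scheduling agent." * 100   # ~2,900 chars
rules = "Never double-book a meeting room." * 200       # ~6,600 chars
used, ok = fits_in_context([system_prompt, rules], window_tokens=8000)
print(used, ok)
```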
Coordination Failure
When agents or teams fail not because they are bad at their individual jobs, but because they cannot work together properly. Common forms include: two agents doing the same thing, one agent overwriting another's work, important information falling through the cracks between agents, and agents making contradictory decisions.
Why it matters: Coordination failures are the number one killer of multi-agent systems. The agents are usually fine on their own. It is the gaps between them where things fall apart.
Coordination Intelligence
The collective, structured knowledge of how AI agents within and across organizations should coordinate. Coordination intelligence is captured in operational rules, documented failure modes, and evidence-backed patterns. It is to multi-agent systems what institutional knowledge is to human organizations, except it is machine-readable, comparable, and transferable.
Coordination intelligence exists at the organizational layer, above tool-level protocols (MCP) and agent-to-agent protocols (A2A). It answers the question: "How should our agents work together?"
Why it matters: Without coordination intelligence, every organization reinvents the wheel. With it, you can learn from hundreds of other teams and skip the mistakes they already made.
Copilot (Microsoft / GitHub)
An AI coding assistant built by GitHub (owned by Microsoft) that suggests code as you type. GitHub Copilot was one of the first widely adopted AI developer tools. Microsoft also uses the "Copilot" brand across its products (Word, Excel, Teams, Windows) to describe AI assistants embedded in everyday software.
Why it matters: Copilot is how most developers first experienced AI-assisted work. It represents the "embedded assistant" model, where AI helps inside existing tools rather than running as a standalone agent.
D
Diff Engine
An OTP tool that compares two versions of an OOS and shows exactly what changed. Like "track changes" in a document editor, but for coordination intelligence. The diff engine highlights added claims, removed claims, modified confidence levels, and updated failure modes. It makes OOS version history meaningful instead of just "something changed."
Why it matters: Your coordination intelligence should evolve as your team learns. The diff engine shows what you learned and when, turning your OOS into a living record of improvement.
E
EOS (Entrepreneurial Operating System)
A business management framework that gives companies a set of simple tools to run better. EOS includes weekly leadership meetings (L10s), 90-day goals (Rocks), a Scorecard for tracking key numbers, and a method for solving problems (IDS). It was designed for human teams, but many of its concepts map directly to AI agent coordination.
Why it matters: EOS proves that structured coordination works at the organizational level. The same principles that align human teams can align AI agent teams.
Escalation Over Autonomy
A design principle where agents are built to flag and recommend rather than act unilaterally. When an agent encounters something outside its authority boundary, it escalates to a human (or a higher-authority agent) instead of guessing. This principle trades speed for safety, especially during early deployment when trust has not been established.
Why it matters: Autonomous action is the goal, but trust is earned. Escalation over autonomy keeps your agents from making expensive mistakes while you build confidence in their judgment.
Escalation Pattern
A documented rule for what happens when an agent hits a situation it cannot or should not handle alone. An escalation pattern defines: what triggers the escalation, who receives it, what information is included, and what the expected response time is. Good escalation patterns prevent both "agent goes rogue" and "agent freezes and does nothing."
Why it matters: Without escalation patterns, agents either do too much (causing damage) or too little (wasting time). A clear path for "I need help" is essential for any agent team.
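The four parts of an escalation pattern can be written down as data. This Python sketch uses invented field names and an invented example; the point is that trigger, receiver, included information, and response time are all explicit.

```python
from dataclasses import dataclass

# Field names are illustrative, not an OTP-defined schema.
@dataclass
class EscalationPattern:
    trigger: str                # what causes the escalation
    receiver: str               # who gets it (human or higher-authority agent)
    include: list               # information attached to the escalation
    response_time_minutes: int  # expected time to a response

refund_over_limit = EscalationPattern(
    trigger="refund request above $500",
    receiver="finance-lead (human)",
    include=["customer id", "order id", "agent's recommended action"],
    response_time_minutes=60,
)
print(refund_over_limit.receiver)
```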
Evidence Types
How a knowledge claim was established. Evidence type describes the method of validation, not the degree of certainty (that is the confidence level).
- MEASURED_RESULT: Quantified through data, experiment, or automated measurement.
- OBSERVED_REPEATEDLY: Seen multiple times in practice across different situations.
- OBSERVED_ONCE: Seen in practice at least once but not yet confirmed as a pattern.
- HUMAN_DEFINED_RULE: Established by explicit human decision, not derived from data.
- INFERENCE: Derived logically from other validated claims.
- SPECULATION: Hypothesized but not yet validated. May be useful as a starting point.
Why it matters: A rule backed by measured data deserves more weight than a rule based on a guess. Evidence types make that distinction visible.
F
Failure Mode
A required field on every knowledge claim that documents what happens when the rule is violated. Failure modes turn abstract rules into concrete risk documentation. They answer: "If we break this rule, what specifically goes wrong?"
Failure modes are one of the most valuable dimensions in an OOS because they encode lessons learned the hard way. Organizations can learn from each other's failures without experiencing them directly.
Why it matters: Rules without consequences are suggestions. Failure modes turn every claim into actionable risk documentation that agents and humans can take seriously.
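Putting the pieces together, a single claim with its failure mode might look like this in YAML. The claim text is invented and the exact OTP schema may differ; the field names follow the terms defined in this glossary.

```yaml
# Illustrative claim shape; the exact OTP schema may differ.
claim: "Agents must acquire a lock before editing shared configuration files."
confidence: HIGH
evidence_type: OBSERVED_REPEATEDLY
failure_mode: >
  Without the lock, two agents edit the same file concurrently and
  one agent's changes silently overwrite the other's.
```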
Fastify
A web framework for Node.js that is built for speed. Fastify handles incoming web requests, routes them to the right code, and sends responses back. It is similar to Express (another popular framework) but significantly faster. OTP's platform is built on Fastify because performance matters when you are serving API requests from AI agents and human users at the same time.
Why it matters: The framework you choose for your backend determines how fast your platform responds. For AI agent infrastructure, low latency is not optional.
Fine-Tuning
The process of training a pre-built AI model on your own specific data so it gets better at your particular task. The base model already knows how to read and write. Fine-tuning teaches it the patterns, terminology, and style that matter for your use case. It is like hiring someone who already speaks English and then training them on your company's jargon.
Why it matters: Fine-tuning is one way to specialize a model, but for most agent teams, good system prompts and RAG are faster and cheaper. Know when to fine-tune and when not to.
Founding Publisher
One of the first 50 organizations to publish an OOS on the OTP platform. Founding publishers receive a permanent badge that can never be earned later, regardless of how the platform grows. It recognizes the organizations that took the risk of sharing their coordination intelligence before anyone else did.
Why it matters: Early publishers shape the ecosystem. Their patterns become the baseline that future publishers learn from and build on.
G
Gemini (Google)
Google's family of AI models, available through Google Cloud and consumer products. Gemini models can process text, images, audio, and video, making them "multimodal." They are integrated into Google Workspace, Android, and Google Cloud's developer tools. Gemini competes directly with Claude and ChatGPT for agent workloads.
Why it matters: Google's reach means Gemini is embedded in tools billions of people already use. Understanding where it fits helps you choose the right model for each agent role.
Grounding
Connecting an AI model's responses to real, verifiable information. An ungrounded model makes things up based on patterns in its training data. A grounded model checks its answers against actual documents, databases, or live data before responding. Grounding is the primary defense against hallucination.
Why it matters: An agent that makes decisions based on made-up facts is worse than no agent at all. Grounding turns AI from "probably right" into "verifiably right."
Guardrails
Rules and checks that prevent an AI agent from doing things it should not do. Guardrails can be built into the system prompt ("never share pricing"), enforced by code ("block any API call that deletes data"), or checked after the fact ("review all outgoing messages before sending"). Good guardrails are invisible when things go right and catch problems before they cause harm.
Why it matters: AI agents will cheerfully do things you never intended if you do not set boundaries. Guardrails are the safety net between "helpful" and "harmful."
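A code-enforced guardrail, following the "block any API call that deletes data" example above. The method-and-path matching here is deliberately simplistic; real guardrails are usually enforced in middleware with richer policies.

```python
# Blocked (method, path-prefix) pairs; ("DELETE", "/") blocks every DELETE.
BLOCKED = [("DELETE", "/"), ("POST", "/purge")]

def guardrail_check(method: str, path: str) -> bool:
    """Return True if the call is allowed, False if a guardrail blocks it."""
    for blocked_method, blocked_prefix in BLOCKED:
        if method.upper() == blocked_method and path.startswith(blocked_prefix):
            return False
    return True

print(guardrail_check("GET", "/records/42"))      # allowed
print(guardrail_check("DELETE", "/records/42"))   # blocked
```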
H
Hallucination
When an AI model generates information that sounds correct but is completely made up. The model is not lying. It is doing what it always does: predicting the next most likely word. Sometimes the most likely sequence of words is factually wrong. Hallucinations are especially dangerous in agent systems because one agent's hallucination can become another agent's input.
Why it matters: In a multi-agent system, a hallucination in one agent can cascade through the entire team. Grounding and guardrails are your defense.
Human-AI Boundary
The line between what AI agents handle and what humans handle. In well-designed systems, this boundary is explicit and documented. The boundary is not fixed. It moves over time as agents earn trust. Early on, humans approve everything. Over time, more decisions move to the agent side. The key is making the boundary visible so everyone knows who is responsible for what.
Why it matters: When nobody knows whether the human or the agent is supposed to handle something, it either gets done twice or not at all.
I
IDS (Identify, Discuss, Solve)
A problem-solving method from the EOS framework. First, you identify the real issue (not just the symptom). Then, you discuss it openly. Then, you solve it with a clear action item and owner. IDS forces teams to stop circling around problems and start resolving them. In agent systems, IDS can structure how agents escalate and resolve conflicts.
Why it matters: Most meetings waste time discussing symptoms. IDS gets to root causes fast, whether the team is humans, agents, or both.
Inference
The process of running a trained AI model to get a response. When you type a question and the model answers, that answer is the result of inference. Inference costs money (compute time), takes time (latency), and consumes tokens. Every time an agent acts, it runs inference. Optimizing inference cost and speed is a major concern for production agent systems.
Why it matters: Every agent action costs money and time. Understanding inference helps you design agents that are fast and affordable instead of slow and expensive.
Intelligence Graph
A network visualization showing how coordination patterns connect across published OOS files. When two organizations share similar claims, those claims are linked. The graph reveals shared operational truths, unique approaches, and conflicting strategies across the ecosystem.
The Intelligence Graph grows more valuable as more organizations publish. Patterns that appear across multiple organizations gain higher credibility. Patterns unique to one organization highlight competitive differentiation.
Why it matters: The graph shows you what the collective has learned. A pattern that 100 organizations discovered independently is more trustworthy than one organization's best guess.
Intelligence Inbox
A feed of relevant coordination intelligence discoveries delivered to an OTP publisher. When new patterns emerge in the Intelligence Graph that relate to your OOS, when similar organizations publish new claims, or when your claims get cited by others, these notifications appear in your Intelligence Inbox. It is like a news feed, but for operational knowledge relevant to your agent team.
Why it matters: You should not have to manually search for new insights. The Intelligence Inbox brings relevant discoveries to you so your coordination intelligence stays current.
J
JSON-LD
A way to embed structured data into a web page so search engines and AI systems can understand the content. JSON-LD stands for "JavaScript Object Notation for Linked Data." It is a script tag you add to your HTML that describes your content in a format machines can read. For example, this glossary page uses JSON-LD to tell search engines that each term is a "DefinedTerm."
Why it matters: AI search engines and traditional search engines both use structured data to understand content. If your pages do not have JSON-LD, machines have to guess what your content means.
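For example, a single glossary term could be marked up with schema.org's DefinedTerm type like this. The property values are illustrative; the `@type` and property names are standard schema.org vocabulary.

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "DefinedTerm",
  "name": "Agent Orchestration",
  "description": "The process of coordinating multiple AI agents so they work as a team.",
  "inDefinedTermSet": "AI Coordination Dictionary"
}
</script>
```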
L
L10 Meeting
A weekly leadership meeting from the EOS framework, designed to run in exactly 90 minutes with a strict agenda. L10 stands for "Level 10," meaning every meeting should be a 10 out of 10. The agenda includes: Scorecard review, Rock updates, headlines, to-do check-ins, and IDS (solving real issues). The format works because it eliminates rambling, keeps meetings on time, and forces decisions.
Why it matters: Structure in meetings produces decisions. L10s prove that a rigid format paradoxically creates better outcomes than free-form discussions.
Latency
The time between asking an AI model a question and getting a response. High latency means slow responses. Low latency means fast. Latency depends on the model size, the length of your input, network speed, and server load. In agent systems, latency compounds: if Agent A waits 3 seconds for a response and then Agent B waits 3 more seconds, the user waits 6 seconds total.
Why it matters: In multi-agent systems, latency stacks. A chain of 5 agents with 2-second latency each means 10 seconds of waiting. Design for speed or accept the bottleneck.
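The arithmetic above is simple but worth making explicit: sequential agents add their latencies, while parallel calls only cost as much as the slowest one. The timings are illustrative.

```python
# Latency stacks when agents run in sequence; parallel calls only
# cost as much as the slowest one. Times (seconds) are illustrative.
agent_latencies = [2.0, 2.0, 2.0, 2.0, 2.0]   # 5 agents, 2s each

total_sequential = sum(agent_latencies)        # user waits for all of them
total_parallel = max(agent_latencies)          # user waits for the slowest

print(total_sequential)  # 10.0
print(total_parallel)    # 2.0
```

This is why orchestration design matters: running independent agents in parallel instead of in a chain can cut user-facing latency dramatically.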
llms.txt
A file placed at the root of a website (like robots.txt) that tells AI language models what the site is about and how to interact with it. While robots.txt tells search engine crawlers where they can go, llms.txt tells AI models what your content contains, what your terms of use are, and how your data should be referenced. It is a proposed standard for making websites AI-readable.
Why it matters: As AI search becomes the default way people find information, websites need a way to communicate with AI models directly. llms.txt is the emerging answer.
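The proposed format is markdown-based: a title, a one-line summary, and sections of annotated links. Since the standard is still evolving, treat this as a sketch; the site name and URL are invented.

```markdown
# Example Co
> Plain-English resources for building and coordinating AI agent teams.

## Docs
- [Glossary](https://example.com/glossary): Definitions of AI coordination terms.
```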
M
MCP (Model Context Protocol)
An open protocol created by Anthropic that lets AI models connect to external tools and data sources. MCP is the standard for the "tool layer" of the AI coordination stack. Instead of every tool writing its own custom integration, MCP provides a shared language. An AI agent that speaks MCP can connect to any MCP-compatible tool without custom code.
Why it matters: MCP solves the integration problem at the tool level. Without it, connecting each agent to each tool is N-times-M custom work. With it, you build the connector once.
MCP Server
A program that wraps an external tool or data source and makes it accessible through MCP. The server translates between what the tool can do and what the AI agent needs to know. For example, an MCP server for Google Calendar exposes functions like "list events" and "create event" in a format any MCP-compatible AI can call. You can run multiple MCP servers at once, giving an agent access to many tools through a single protocol.
Why it matters: MCP servers are the building blocks of agent capability. Each server you add gives your agents one more tool to work with.
Merge Protocol
The rules for combining claims from multiple OOS files into a single view or shared reference. When two organizations have similar claims with different confidence levels or different failure modes, the merge protocol decides how to reconcile them. It handles conflicts, preserves provenance, and produces a merged result that credits both sources.
Why it matters: As the OTP ecosystem grows, organizations will want to learn from each other. The merge protocol makes that possible without losing track of who contributed what.
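One possible reconciliation policy, sketched in Python: keep the higher confidence level, combine the failure modes, and credit both sources in the provenance. This is illustrative, not the OTP specification.

```python
# Illustrative merge policy: higher confidence wins, failure modes
# are combined, and both contributing organizations are credited.
RANK = {"LOW": 0, "MEDIUM": 1, "HIGH": 2}

def merge_claims(a: dict, b: dict) -> dict:
    winner = a if RANK[a["confidence"]] >= RANK[b["confidence"]] else b
    return {
        "claim": winner["claim"],
        "confidence": winner["confidence"],
        "failure_modes": sorted(set(a["failure_modes"]) | set(b["failure_modes"])),
        "provenance": [a["source"], b["source"]],   # both contributors credited
    }

merged = merge_claims(
    {"claim": "Log every handoff.", "confidence": "HIGH",
     "failure_modes": ["lost context"], "source": "org-a"},
    {"claim": "Record all agent handoffs.", "confidence": "MEDIUM",
     "failure_modes": ["duplicate work"], "source": "org-b"},
)
print(merged["confidence"], merged["provenance"])
```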
Multi-Agent System
Any setup where more than one AI agent operates in the same environment. The agents might share data, hand off tasks, or work on different parts of the same problem. Multi-agent systems range from simple (two agents passing messages) to advanced (a fleet of specialized agents with shared state, authority boundaries, and autonomous coordination).
Why it matters: One agent hits a ceiling. Multiple agents working together can handle complex, cross-functional work that no single agent could manage alone.
N
Node.js
A runtime that lets you run JavaScript outside of a web browser. Before Node.js, JavaScript could only run in browsers. Node.js brought it to servers, command-line tools, and backend services. Most MCP servers, AI agent frameworks, and modern web platforms (including OTP) are built on Node.js because of its speed, its massive package ecosystem (npm), and its ability to handle many simultaneous connections.
Why it matters: Node.js is the dominant platform for AI agent tooling. If you are building agents, you will almost certainly encounter it.
npm (Node Package Manager)
A tool for installing, sharing, and managing JavaScript code packages. When a developer needs a library (like a date formatter or an HTTP client), they install it with npm instead of writing it from scratch. npm hosts over 2 million packages and is the largest software registry in the world. Most AI agent projects use npm to manage their dependencies.
Why it matters: npm is how the Node.js ecosystem shares code. Understanding it is essential for anyone installing, building, or maintaining agent infrastructure.
O
One Seat, One Owner
A design principle where every responsibility in an agent system is owned by exactly one agent. No two agents do the same job. No single agent does two jobs. This eliminates overlap (where two agents fight over the same task) and gaps (where nobody owns a task). It is borrowed from EOS accountability chart thinking and applied to AI agent architecture.
Why it matters: Unclear ownership is the fastest path to coordination failure. When you know exactly which agent owns what, debugging and improvement become straightforward.
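The principle can be checked mechanically: every responsibility has exactly one owner, and no agent holds more than one seat. The agent names below are invented for illustration.

```python
def check_ownership(seats: dict) -> list:
    """seats maps responsibility -> owning agent; returns violations."""
    problems = []
    owners = list(seats.values())
    for agent in set(owners):
        if agent and owners.count(agent) > 1:
            problems.append(f"{agent} holds multiple seats")   # overlap
    for responsibility, owner in seats.items():
        if not owner:
            problems.append(f"nobody owns: {responsibility}")  # gap
    return problems

seats = {"triage inbound requests": "triage-agent",
         "draft responses": "writer-agent",
         "final review": ""}                 # gap: no owner
print(check_ownership(seats))
```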
OOS Templates
Structured formats for different organizational models:
- Agent Army: For organizations with multiple specialized AI agents working as a coordinated team.
- Value Chain: For organizations structured around business process flows augmented with AI.
- Org Chart: For traditional hierarchical organizations integrating AI at specific positions.
Why it matters: You should not have to start from a blank page. Templates give you a proven structure so you can focus on your actual coordination rules.
Open Source Models (Llama, Mistral, etc.)
AI models whose code and weights are publicly available for anyone to download, modify, and run. Meta's Llama and Mistral AI's models are the most prominent. Open source models can run on your own hardware, giving you full control over data privacy and cost. They trade the convenience of a managed API for the complexity of running your own infrastructure.
Why it matters: Open source models give you independence from any single provider. For sensitive workloads or high-volume use cases, running your own model can be both safer and cheaper.
Organizational Operating System (OOS)
A structured artifact that encodes how AI agents in an organization coordinate. An OOS contains knowledge claims organized into sections, each with confidence ratings, evidence types, failure modes, and reasoning. The format uses YAML frontmatter with Markdown-structured claims.
Think of it as a machine-readable handbook for your AI team. Instead of tribal knowledge locked in one person's head, an OOS makes coordination explicit, comparable, and improvable.
Why it matters: An OOS turns invisible coordination knowledge into a visible, shareable, improvable asset. It is the difference between "we just know how things work" and "here is exactly how things work, with evidence."
Organization Transport Protocol (OTP)
The protocol and platform for publishing, comparing, and learning from organizational coordination intelligence. OTP operates at the organizational layer of the AI coordination stack, above tool-level protocols (MCP) and agent-to-agent protocols (A2A).
The name reflects its purpose: transporting organizational intelligence between systems. Like HTTP transports web content and SMTP transports email, OTP transports coordination knowledge.
Why it matters: MCP solved tool access. A2A solved agent communication. OTP solves the missing layer: how organizations encode and share the rules that make their AI teams work.
P
PII Scanner
An OTP tool that checks your OOS for personally identifiable information (PII) before publishing. PII includes real names, email addresses, phone numbers, API keys, and anything else that could identify a person or compromise security. The scanner runs automatically during the publishing pipeline and blocks publication if PII is found.
Why it matters: Your coordination intelligence should teach the world how your agents work, not leak your team's personal information. The PII scanner catches what you might miss.
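A toy version of the idea in Python, using regular expressions. The two patterns below are deliberately simplified; a real scanner covers many more PII types (names, API keys, international phone formats) with far better precision.

```python
import re

# Simplified patterns for illustration only.
PATTERNS = {
    "email": r"[\w.+-]+@[\w-]+\.[\w.]+",
    "us_phone": r"\b\d{3}[-.]\d{3}[-.]\d{4}\b",
}

def scan_for_pii(text: str) -> dict:
    """Return a dict of {pii_type: matches}; empty dict means clean."""
    found = {}
    for label, pattern in PATTERNS.items():
        hits = re.findall(pattern, text)
        if hits:
            found[label] = hits
    return found

doc = "Escalate billing issues to jane.doe@example.com or 555-867-5309."
print(scan_for_pii(doc))
```

A publishing pipeline would block the upload whenever this returns a non-empty result.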
PostgreSQL
A powerful, open source database system that stores and retrieves structured data. PostgreSQL (often called "Postgres") is the most popular database for production applications because it is reliable, fast, and handles complex queries well. OTP uses PostgreSQL to store published OOS files, knowledge claims, similarity scores, and publisher accounts.
Why it matters: Your coordination intelligence is only as reliable as the database that stores it. PostgreSQL is battle-tested at scales from startup to enterprise.
Prompt Engineering
The skill of writing instructions that get an AI model to do what you actually want. A good prompt is specific, structured, and includes examples. A bad prompt is vague and gets vague results. Prompt engineering is how you shape an agent's behavior without changing the underlying model. It is the most accessible and immediate way to improve agent performance.
Why it matters: The same model can be brilliant or useless depending on how you prompt it. Prompt engineering is the highest-leverage skill in AI agent development.
Publisher Badges
Quality tiers assigned to organizations based on OOS completeness, confidence distribution, and evidence quality:
- Founding: One of the first 50 publishers. Permanent badge. Cannot be earned later.
- Platinum: Highest quality tier based on claim depth, evidence quality, and coverage.
- Gold: Strong quality with good evidence backing.
- Silver: Moderate quality. Room for improvement in evidence or coverage.
- Bronze: Entry-level quality. Published but with limited evidence or low confidence claims.
Why it matters: Badges give you a quick signal of how much you can trust an OOS. A Platinum publisher with measured results carries more weight than a Bronze publisher with mostly speculation.
Q
Quality Score
A number that rates the overall quality of a published OOS. The score considers: how many claims have high confidence, how many use strong evidence types (measured results vs. speculation), whether failure modes are documented, how complete the claim coverage is across all standard sections, and whether the OOS passes PII scanning. Higher scores earn better publisher badges.
Why it matters: A quality score gives publishers a clear target and gives readers a trust signal. It turns "is this good?" into a measurable answer.
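The inputs listed above can be combined in code. The actual OTP formula is not public, so the weights and the 0-100 scale below are assumptions chosen only to show the shape of the calculation:

```python
# Assumed weights; the real OTP quality-score formula is not public.
EVIDENCE_WEIGHT = {"measured": 1.0, "observed": 0.7, "speculation": 0.3}

def quality_score(claims: list[dict]) -> float:
    """Average per-claim quality on a 0-100 scale (illustrative only)."""
    if not claims:
        return 0.0
    total = 0.0
    for claim in claims:
        score = EVIDENCE_WEIGHT.get(claim["evidence"], 0.3)
        if claim.get("confidence") == "high":
            score += 0.5
        if claim.get("failure_mode"):    # documented failure modes add trust
            score += 0.5
        total += min(score, 2.0) / 2.0   # normalize each claim to 0..1
    return round(100 * total / len(claims), 1)

claims = [
    {"evidence": "measured", "confidence": "high", "failure_mode": "overwrite"},
    {"evidence": "speculation", "confidence": "low", "failure_mode": None},
]
```

Whatever the real weights are, the structure is the same: strong evidence and documented failure modes pull the score up, speculation pulls it down.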
R
Race Condition
When two agents try to do the same thing at the same time and the result depends on which one finishes first. For example, if two agents both read a file, make different changes, and write it back, one agent's changes will overwrite the other's. Race conditions are sneaky because they do not happen every time. They happen unpredictably, making them hard to detect and debug.
Why it matters: Race conditions cause data loss and inconsistent behavior. The "One Seat, One Owner" principle is specifically designed to prevent them.
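The file-overwrite example above can be made concrete. This sketch forces the dangerous interleaving deterministically (both agents read before either writes), using a dict as a stand-in for the file system:

```python
# A stand-in for the shared file system.
store = {"config.txt": "timeout=30"}

def read(path):            # each agent takes its own snapshot
    return store[path]

def write(path, content):  # last writer wins, silently
    store[path] = content

# The dangerous interleaving: both agents read BEFORE either writes.
agent_a_copy = read("config.txt")
agent_b_copy = read("config.txt")

write("config.txt", agent_a_copy + "\nretries=3")  # agent A's change
write("config.txt", agent_b_copy + "\nlog=debug")  # silently overwrites A

lost = "retries=3" not in store["config.txt"]      # A's work is gone
```

In production the interleaving depends on timing, which is why the bug appears only intermittently; giving each file exactly one owning agent removes the interleaving entirely.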
RAG (Retrieval Augmented Generation)
A technique where an AI model looks up relevant information from a database or document store before generating its answer. Instead of relying only on what it learned during training, the model retrieves real, current data and uses it to ground its response. RAG is the most common way to make AI agents accurate about specific, up-to-date information.
Why it matters: RAG is how you make agents knowledgeable about YOUR data without fine-tuning. It is the bridge between a general-purpose model and a domain expert.
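The retrieve-then-generate loop can be sketched in a few lines. Real RAG systems use vector search and a live LLM call; both are stubbed here (naive keyword overlap, and a returned prompt string instead of a model response) so only the flow is shown:

```python
# Toy document store; real systems use a vector database.
DOCS = [
    "OTP operates at the organization layer of the coordination stack.",
    "MCP gives agents access to external tools.",
    "A2A lets agents negotiate and hand off work.",
]

def retrieve(question: str, k: int = 1) -> list[str]:
    """Rank documents by naive keyword overlap with the question."""
    words = set(question.lower().split())
    ranked = sorted(DOCS, key=lambda d: len(words & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def generate(question: str) -> str:
    context = "\n".join(retrieve(question))
    # In a real system this prompt goes to an LLM; here we just return it.
    return f"Answer using ONLY this context:\n{context}\n\nQuestion: {question}"

prompt = generate("Which layer does OTP operate at?")
```

The "ONLY this context" instruction is the grounding step: it tells the model to answer from retrieved data rather than from whatever it absorbed during training.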
Railway
A cloud platform for deploying web applications and databases. Railway handles the infrastructure (servers, networking, scaling, SSL certificates) so developers can focus on code. You push your code, Railway runs it. OTP is deployed on Railway, which manages both the application and the PostgreSQL database.
Why it matters: Managing servers is a full-time job. Platforms like Railway let small teams ship production software without a dedicated infrastructure team.
REST API
A common style for building APIs that uses standard web requests (GET, POST, PUT, DELETE) to create, read, update, and delete data. REST stands for "Representational State Transfer." When you visit a URL in your browser, you are making a GET request. REST APIs work the same way, but for software. Most AI agent integrations communicate through REST APIs.
Why it matters: REST is the lingua franca of the internet. If you understand REST, you understand how almost every tool and service communicates.
Rock (EOS Term)
A 90-day priority goal in the EOS framework. Each leadership team member picks 1 to 3 Rocks per quarter that represent the most important things they need to accomplish. Rocks are specific, measurable, and have a clear deadline. The idea comes from the "big rocks" analogy: if you fill a jar with sand first, the big rocks will not fit. Put the big rocks in first, then the sand fills the gaps.
Why it matters: Without explicit 90-day priorities, everything feels urgent and nothing gets finished. Rocks force focus on what actually moves the needle.
S
Schema Markup
A vocabulary of tags from Schema.org that you add to your HTML to help search engines understand your content. Schema markup tells Google, Bing, and AI search engines whether a piece of text is a product, a review, a recipe, an FAQ, a defined term, or hundreds of other types. When search engines understand your content, they can display it in rich results (like FAQ dropdowns or star ratings).
Why it matters: Schema markup is how you speak the language of search engines. Pages with proper markup get better visibility in both traditional and AI-powered search results.
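For a glossary entry like the ones on this page, the relevant Schema.org types are `DefinedTerm` and `DefinedTermSet`. The fragment below is a hedged sketch of the JSON-LD form — the term-set name is a placeholder, and a real page would embed this in a `<script type="application/ld+json">` tag:

```json
{
  "@context": "https://schema.org",
  "@type": "DefinedTerm",
  "name": "Webhook",
  "description": "A way for one system to notify another system when something happens.",
  "inDefinedTermSet": {
    "@type": "DefinedTermSet",
    "name": "AI Coordination Dictionary"
  }
}
```

With this in place, a search engine knows the text is a definition rather than, say, a product description, and can surface it accordingly.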
Scorecard
A weekly tracking sheet from the EOS framework that shows 5 to 15 key numbers for the business. Each number has an owner and a target. The Scorecard is reviewed in every L10 meeting. If a number is off-track, it surfaces as a discussion item. The point is to catch problems in the numbers before they become crises in the real world.
Why it matters: What gets measured gets managed. A Scorecard makes performance visible, which is the first step toward improving it.
Scout (OTP Intelligence Scout)
An OTP feature that monitors the Intelligence Graph for new patterns, claims, and insights relevant to your published OOS. When a Scout detects something useful (a new claim from a similar organization, a pattern that has gained traction, a conflicting approach), it sends a notification to your Intelligence Inbox. Scouts run automatically so you do not have to search manually.
Why it matters: The intelligence ecosystem moves fast. Scouts make sure you learn from new discoveries without spending hours browsing.
SOP (Standard Operating Procedure)
A step-by-step document that describes how to complete a specific task the same way every time. SOPs are the foundation of consistent operations in human teams. In AI agent systems, the system prompt and knowledge claims serve the same function. An OOS is essentially a collection of machine-readable SOPs for agent coordination.
Why it matters: Consistency requires documentation. SOPs are the original version of what OTP brings to AI teams: write down how things should work so they work the same way every time.
System Prompt
Hidden instructions given to an AI model before the user's conversation starts. The system prompt defines the model's role, personality, boundaries, and behavior. Users do not see it, but it shapes every response. In agent systems, the system prompt is where you encode the agent's job description, authority boundaries, and coordination rules.
Why it matters: The system prompt is the single most important input for any AI agent. A well-written system prompt is the difference between a useful agent and an unpredictable one.
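In most chat APIs the system prompt is just the first message in the conversation, carried under a `system` role. The message shape below follows that common convention; the billing-agent wording and the `$500` threshold are invented examples:

```python
# Illustrative messages array; the billing-agent rules are made up.
messages = [
    {
        "role": "system",  # hidden from the end user, read by the model first
        "content": (
            "You are the billing agent. You own invoices and nothing else. "
            "Escalate refunds over $500 to the finance agent via A2A. "
            "Never modify files outside /billing."
        ),
    },
    {"role": "user", "content": "Can you refund order #4411 for $820?"},
]
```

Note how the three things the definition names — job description ("billing agent"), authority boundary ("$500", "/billing"), and coordination rule ("via A2A") — each map to one sentence of the system message.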
T
The Three-Layer AI Coordination Stack
AI agent coordination happens at three distinct layers. Each layer has its own protocol and scope:
| Layer | Protocol | Scope |
|---|---|---|
| Tool Layer | MCP | Agent-to-Tool. How agents access external capabilities. |
| Agent Layer | A2A | Agent-to-Agent. How agents negotiate and hand off work. |
| Organization Layer | OTP | Org-to-Intelligence. How organizations encode and share coordination patterns. |
Why it matters: Knowing where your problem sits in the stack tells you which protocol to use. Tool problems need MCP. Communication problems need A2A. Coordination problems need OTP.
Token
The basic unit of text that AI models work with. A token is roughly 3/4 of a word. "Hello world" is 2 tokens. A full page of text is about 500 tokens. Tokens matter because AI models charge by the token, process by the token, and have a maximum number of tokens they can handle at once (the context window). Every character of your system prompt, every file you load, and every response the model generates costs tokens.
Why it matters: Tokens are the currency of AI. Understanding them helps you optimize cost, speed, and the amount of information your agents can work with.
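The rules of thumb above (about 3/4 of a word per token, about 4 characters per token) are enough for a rough budgeting estimator. Real tokenizers give exact counts; this heuristic is only for back-of-envelope cost and context-window planning:

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate from the word and character rules of thumb."""
    by_words = len(text.split()) / 0.75  # ~4/3 tokens per word
    by_chars = len(text) / 4             # ~4 characters per token
    return round((by_words + by_chars) / 2)

# Roughly one page of text (~400 words) lands near the ~500-token figure.
page_tokens = estimate_tokens("word " * 400)
```

When the estimate says a file will consume a third of the context window, that is the moment to summarize it before loading it into an agent.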
Token Efficiency Ratio
A metric that measures whether an operational rule is worth the tokens it consumes. Every claim in an OOS costs tokens when loaded into an agent's context. But a good rule prevents wasted token cycles downstream: failed attempts, retries, coordination collisions, and debugging loops. A bad or redundant rule just burns context window for nothing.
The ratio is calculated as: Tokens saved by having the rule / Tokens the rule costs to load.
- Ratio > 1.0 = Rule saves more tokens than it costs. Keep it.
- Ratio = 1.0 = Token-neutral. Question its value.
- Ratio < 1.0 = Rule costs more tokens than it saves. Cut it or compress it.
Why it matters: Token efficiency turns every operational rule into an ROI question. Is this rule worth the tokens? If not, compress it or cut it.
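The ratio translates directly into code. The token counts below are made-up illustration numbers, not measurements:

```python
def token_efficiency_ratio(tokens_saved: float, tokens_to_load: float) -> float:
    """Tokens saved by having the rule / tokens the rule costs to load."""
    return tokens_saved / tokens_to_load

# A 120-token rule that prevents ~900 tokens of failed retries per session:
ratio = token_efficiency_ratio(tokens_saved=900, tokens_to_load=120)
verdict = "keep" if ratio > 1.0 else "cut or compress"
```

The hard part in practice is estimating `tokens_saved`: it means measuring the retries, collisions, and debugging loops that happen when the rule is absent.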
W
Webhook
A way for one system to notify another system when something happens. Instead of constantly asking "did anything change?" (polling), a webhook sends a message the moment an event occurs. When a proposal is signed, a webhook fires. When a new lead comes in, a webhook fires. Webhooks are how AI agent systems stay reactive without wasting resources on constant checking.
Why it matters: Webhooks turn your agent system from "checking periodically" to "responding instantly." That speed difference can be the gap between closing a deal and losing it.
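A minimal receiver can be sketched with only the standard library. The event names (`proposal.signed`, `lead.created`) and payload fields here are illustrative assumptions; the point is that `handle_event` runs the instant a POST arrives, with no polling loop anywhere:

```python
import json
from http.server import BaseHTTPRequestHandler

def handle_event(payload: dict) -> str:
    """React the moment an event arrives instead of polling for changes."""
    if payload.get("event") == "proposal.signed":
        return f"notify sales agent: {payload['proposal_id']} signed"
    if payload.get("event") == "lead.created":
        return f"route lead {payload['lead_id']} to intake agent"
    return "ignored"

class WebhookHandler(BaseHTTPRequestHandler):
    """Endpoint the sending system POSTs to when an event fires."""
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        handle_event(payload)  # react immediately; no "did anything change?"
        self.send_response(200)
        self.end_headers()

# The sender fires this the moment the event occurs:
result = handle_event({"event": "proposal.signed", "proposal_id": "P-118"})
```

Production receivers also verify a signature header from the sender, since anyone on the internet can POST to a public URL.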
Ready to publish your organization's coordination intelligence?
Publish Your OOS