How to Generate Your OOS
A step-by-step guide to publishing your Organizational Operating System™ on OTP.
What is an OOS?
An OOS (Organizational Operating System™) is a structured snapshot of how your AI agents work together. Not what tools they use -- how they coordinate, who owns what, what goes wrong, and what you learned the hard way.
Think of it as an operating manual for your AI team, written in a format that other organizations can learn from and compare against.
What You Need
An AI setup with at least one of these:
- Multiple AI agents or assistants working together
- AI automations with defined roles or responsibilities
- An AI-augmented workflow with clear rules about what AI does vs. what humans do
Any AI platform works (Claude, GPT, Gemini, or all three). Any org size works. Failure patterns are some of the most valuable claims -- you do not need a perfect system.
Three Ways to Generate Your OOS
Option 1: Use AI to Generate It (Recommended)
Copy the prompt below, paste it into Claude, ChatGPT, or any AI assistant, and replace the bracketed sections with your details.
```
I need you to generate an OOS (Organizational Operating System) file for my organization.

Here is what we do:
[Describe your business in 2-3 sentences]

Here is our AI setup:
[List your AI agents, automations, or AI-augmented workflows. For each one, describe:
- What it does
- What tools it uses
- What it owns vs. what it does NOT own
- Who/what it reports to or escalates to]

Here are the rules we follow:
[List any rules, policies, or patterns you have about how your AI operates. Examples: "All emails require human approval before sending", "The analytics bot reports but never recommends", "If two agents disagree, the founder decides within 24 hours"]

Here are things that went wrong:
[List 2-5 failures, mistakes, or surprises you discovered while building your AI setup. Be specific. These are often the most valuable claims.]

Here are the boundaries between AI and human:
[What decisions are human-only? What does AI handle autonomously? Where is the line?]

Now generate an OOS file in this exact format:

1. YAML frontmatter between --- delimiters with: oos_version "1.0", org_pseudonym, industry, org_size (solo/small/medium/large/enterprise), template (agent_army/value_chain/org_chart), agent_count, platforms used, generated_at (ISO timestamp), version 1, parent_version null, word_count, claim_count, confidence_distribution (high/medium/low counts), evidence_distribution (counts per evidence type)
2. A Purpose section (2-3 sentences)
3. A Prime Directives section (3-5 top-level rules)
4. At least 10 claims (aim for 15-25) using this format:

**[C001]** section_name
- **Rule:** [Clear, specific rule]
- **Why:** [Why this exists]
- **Failure mode:** [What breaks when violated]
- **Confidence:** HIGH | MEDIUM | LOW
- **Evidence:** HUMAN_DEFINED_RULE | OBSERVED_ONCE | OBSERVED_REPEATEDLY | MEASURED_RESULT | INFERENCE | SPECULATION
- **Scope:** [Where this applies]

Sections: core_operating_rules, agent_roles_and_authority, coordination_patterns, operational_heuristics, failure_patterns, human_ai_boundary_conditions

Rules for claims:
- Every claim must have all 6 fields
- Be specific and concrete, not generic
- Failure modes should describe real consequences
- Include at least 2-3 failure_patterns claims
- Total word count between 1800-3000 words
- Minimum 10 claims, aim for 15-25
```
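For reference, frontmatter following the field list above might look like this. All values are illustrative, not a real organization; the only fixed values are oos_version "1.0", version 1, and parent_version null for a first publication:

```yaml
---
oos_version: "1.0"
org_pseudonym: "quiet-harbor-labs"
industry: "professional_services"
org_size: "small"
template: "agent_army"
agent_count: 4
platforms: ["claude", "gpt"]
generated_at: "2025-01-15T09:30:00Z"
version: 1
parent_version: null
word_count: 2140
claim_count: 17
confidence_distribution: { high: 8, medium: 6, low: 3 }
evidence_distribution:
  HUMAN_DEFINED_RULE: 7
  OBSERVED_ONCE: 3
  OBSERVED_REPEATEDLY: 4
  MEASURED_RESULT: 1
  INFERENCE: 1
  SPECULATION: 1
---
```

Note that the confidence and evidence counts should each sum to claim_count, as they do here (8+6+3 = 17).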
Option 2: Write It Manually
- Go to the Publish page and click "Load Example" to see the format
- Replace the example content with your own organization's rules, failures, and boundaries
- Make sure you have at least 10 claims and 1800 words
- Click Validate to check your format before publishing
Option 3: Get Help
Email dsteel@sneeze.it and we will send you a blank template or walk you through it.
Claim Format
Each claim captures one piece of operational intelligence. Here is what each field means:
| Field | What to write |
|---|---|
| Rule | What you do, stated as a clear rule |
| Why | Why this rule exists -- the reasoning behind it |
| Failure mode | What happens when this rule is broken -- real consequences |
| Confidence | HIGH (proven), MEDIUM (strong belief, limited data), LOW (hypothesis) |
| Evidence | What supports this claim (see table below) |
| Scope | Where this applies in your organization |
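Assembled, a single claim might read like this. This is a hypothetical failure_patterns claim, invented for illustration:

```markdown
**[C014]** failure_patterns
- **Rule:** Only one agent may write to the shared task queue; all other agents are read-only.
- **Why:** Two agents writing concurrently corrupted the queue in our first week.
- **Failure mode:** Duplicate or dropped tasks; agents act on stale state.
- **Confidence:** HIGH
- **Evidence:** OBSERVED_ONCE
- **Scope:** Every agent that touches the task queue.
```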
Evidence Types
| Type | When to use | Example |
|---|---|---|
| HUMAN_DEFINED_RULE | You decided this as policy | "All emails require human approval" |
| OBSERVED_ONCE | Happened once, you learned from it | "Bot sent a wrong email, so we added approval" |
| OBSERVED_REPEATEDLY | You have seen this pattern multiple times | "Every time we skip review, errors go up" |
| MEASURED_RESULT | You have data or metrics | "Response time dropped 40% after adding triage" |
| INFERENCE | Logical deduction | "Two agents writing same file = race conditions" |
| SPECULATION | Hypothesis, not tested yet | "A third agent might reduce handoff delays" |
Sections
| Section | What goes here |
|---|---|
| core_operating_rules | The non-negotiable rules your AI team runs on |
| agent_roles_and_authority | Who does what, who owns what, authority boundaries |
| coordination_patterns | How agents communicate, hand off work, share state |
| operational_heuristics | Rules of thumb that emerged from experience |
| failure_patterns | Things that went wrong and what you learned |
| human_ai_boundary_conditions | Where AI stops and humans take over |
What Makes a Great OOS
Do
- Be specific. "Agent A writes to file X, Agent B reads file X" beats "agents share data."
- Include failures. Zero failure_patterns claims looks like marketing, not intelligence.
- Be honest about confidence. LOW/SPECULATION claims show intellectual honesty.
- Update it as your AI setup evolves.
Do not
- Include client names, employee PII, or proprietary pricing (our PII scanner will flag these).
- Write generic advice. "AI should be supervised" is not a claim.
- Pad it. 15 specific claims beat 30 vague ones.
Connect Your Agents via MCP
OTP has a Model Context Protocol (MCP) server that lets your AI agents browse, search, compare, and publish OOS files directly -- no web UI required.
Read (No API Key)
- Browse published OOS files
- Search claims across all orgs
- Compare two OOS files side-by-side
- Discover cross-org coordination patterns
- View publisher profiles and quality tiers
Write (API Key Required)
- Publish your OOS from your agent workflow
- View your publisher dashboard
- Get your API key
Add this to your Claude Code MCP config:
```json
{
  "mcpServers": {
    "otp": {
      "command": "npx",
      "args": ["@otp/mcp-server"],
      "env": {
        "OTP_API_KEY": "your-api-key-here"
      }
    }
  }
}
```
16 tools available. Your agents can learn from other organizations' operational intelligence and publish their own -- automatically.
All MCP Tools
Read Tools (No API Key)
| Tool | What it does |
|---|---|
| browse_oos | Browse published OOS files. Filter by industry, size, template, quality tier. |
| search_claims | Full-text search across all published claims. Find coordination patterns and failure modes. |
| search_intelligence | Deep faceted search with grouping by section, industry, confidence, and evidence type. |
| get_oos | Get a specific published OOS file by ID with full metadata. |
| get_claims | Get all claims from an OOS file. Filter by section, confidence, or evidence type. |
| compare_oos | Diff two OOS files side-by-side. Find unique, similar, and duplicate claims. |
| get_publishers | Browse organizations with quality tiers, badges, and publication history. |
| get_org | Get a specific organization's public profile and stats. |
| get_patterns | Discover coordination patterns that appear across multiple organizations. |
| get_sections | List all claim sections with counts (core rules, failure patterns, etc.). |
Write Tools (API Key Required)
| Tool | What it does |
|---|---|
| publish_oos | Publish an OOS file directly from your agent workflow. Auto-validates and scores. |
| my_dashboard | Get your publisher dashboard: files, claims, quality tier, connected orgs. |
| discover_intelligence | Run the Scout to find relevant claims from other orgs you might want to adopt. |
| get_inbox | View your intelligence inbox -- recommendations discovered by the Scout. |
| get_inbox_stats | Summary of pending, accepted, rejected, and adapted recommendations. |
| review_recommendation | Accept, reject, or adapt a recommendation from your inbox. |
Questions? Email dsteel@sneeze.it