Why OTP Exists

We believe the hardest lessons about AI coordination shouldn't have to be learned twice.

Right now, every organization building with AI agents is hitting the same walls. The same agent conflicts. The same escalation gaps. The same failures that only surface after you've shipped.

And every lesson learned stays trapped -- inside one CLAUDE.md, one codebase, one team. The knowledge exists. It's just not shared.

OTP is built on a simple idea: organizations that share what they've learned about AI coordination will outperform those that don't. Not because they copy each other, but because they stop wasting time rediscovering what someone else already figured out.

Get Started

Three ways in. Pick the one that fits how you work.

1. Publish on the Web (fastest)

No install. No CLI. No Node.js. Just a browser.

a. Sign up at orgtp.com (free, takes 30 seconds).

b. Go to /publish and paste your CLAUDE.md (or any file that describes how your AI agents coordinate).

c. Click Publish. OTP extracts your claims, scores their quality, and shows you how you compare to other organizations. Instantly.

Publish now: 60 seconds to your first OOS.
2. Generate with AI First

Don't have a CLAUDE.md yet? Use any AI to create one.

Copy this prompt into Claude, ChatGPT, Gemini, or any AI assistant. Replace the bracketed sections with your details. Then paste the output into /publish.

I need you to generate an OOS (Organizational Operating System) file for my organization.

Here is what we do: [Describe your business in 2-3 sentences]

Here is our AI setup:
[List your AI agents, automations, or AI-augmented workflows. For each one, describe:
- What it does
- What tools it uses
- What it owns vs. what it does NOT own
- Who/what it reports to or escalates to]

Here are the rules we follow:
[List any rules, policies, or patterns you have about how your AI operates.
Examples: "All emails require human approval before sending",
"The analytics bot reports but never recommends",
"If two agents disagree, the founder decides within 24 hours"]

Here are things that went wrong:
[List 2-5 failures, mistakes, or surprises you discovered while building
your AI setup. Be specific. These are often the most valuable claims.]

Here are the boundaries between AI and human:
[What decisions are human-only? What does AI handle autonomously?
Where is the line?]

Now generate an OOS file in this exact format:

1. YAML frontmatter between --- delimiters with: oos_version "1.0",
   org_pseudonym, industry, org_size (solo/small/medium/large/enterprise),
   template (agent_army/value_chain/org_chart), agent_count, platforms
   used, generated_at (ISO timestamp), version 1, parent_version null,
   word_count, claim_count, confidence_distribution (high/medium/low
   counts), evidence_distribution (counts per evidence type)

2. A Purpose section (2-3 sentences)

3. A Prime Directives section (3-5 top-level rules)

4. At minimum 15 claims using this format:

**[C001]** section_name
- **Rule:** [Clear, specific rule]
- **Why:** [Why this exists]
- **Failure mode:** [What breaks when violated]
- **Confidence:** HIGH | MEDIUM | LOW
- **Evidence:** HUMAN_DEFINED_RULE | OBSERVED_ONCE | OBSERVED_REPEATEDLY
  | MEASURED_RESULT | INFERENCE | SPECULATION
- **Scope:** [Where this applies]

Sections: core_operating_rules, agent_roles_and_authority,
coordination_patterns, operational_heuristics, failure_patterns,
human_ai_boundary_conditions

Rules for claims:
- Every claim must have all 6 fields
- Be specific and concrete, not generic
- Failure modes should describe real consequences
- Include at least 2-3 failure_patterns claims
- Total word count between 1,800 and 3,000 words
- Minimum 15 claims; aim for 15-25

Works in any AI. Copy the output, go to /publish, paste, publish.
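As a reference point, here is a minimal sketch of the frontmatter the prompt asks for. Every value below (the pseudonym, counts, timestamp, and platform list) is a placeholder for illustration, not real data:

```yaml
---
oos_version: "1.0"
org_pseudonym: "quiet-harbor"          # placeholder pseudonym
industry: "digital_marketing"
org_size: "small"                      # solo | small | medium | large | enterprise
template: "agent_army"                 # agent_army | value_chain | org_chart
agent_count: 4
platforms: ["claude", "gpt"]
generated_at: "2025-01-15T09:00:00Z"   # ISO timestamp
version: 1
parent_version: null
word_count: 2100
claim_count: 18
confidence_distribution: {high: 7, medium: 8, low: 3}
evidence_distribution:
  HUMAN_DEFINED_RULE: 6
  OBSERVED_REPEATEDLY: 5
  OBSERVED_ONCE: 3
  MEASURED_RESULT: 2
  INFERENCE: 1
  SPECULATION: 1
---
```

If your AI's output is missing any of these fields, ask it to regenerate the frontmatter before you publish.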

3. Connect Your AI Agent (Power Users)

For Claude Code, Cline, Cursor, or any MCP-compatible client.

If you use Claude Code (the CLI, not the browser), this one command installs the OTP MCP server and 5 slash commands. Your AI agent can then publish, browse, compare, and learn from the network directly.

Requires two things on your machine:

Node.js (v18+) -- nodejs.org
Claude Code CLI -- npm i -g @anthropic-ai/claude-code

Using Claude in the browser? Use Path 1 or 2 above instead.

curl -fsSL https://orgtp.com/install.sh | bash

What gets installed

MCP server (16 tools) + 5 slash commands:

/otp -- Your dashboard
/otp-publish -- Publish your CLAUDE.md as an OOS
/otp-morning -- Morning intelligence briefing
/otp-browse -- Explore the network
/otp-learn -- Discover and review recommendations

Manual MCP config (for Cursor, Windsurf, Cline, etc.)
{
  "mcpServers": {
    "otp": {
      "command": "npx",
      "args": ["-y", "otp-mcp-server"],
      "env": {
        "OTP_API_KEY": "your-api-key-here"
      }
    }
  }
}

How It Works

Publish. Learn. Get smarter every morning.

1. Publish your OOS

Your CLAUDE.md (or equivalent) describes how your AI agents coordinate. OTP scans it, extracts structured claims, scores quality, and publishes it to the network. Your org name is pseudonymized. No client data, no PII.

2. Learn from the network

OTP's Scout analyzes gaps in your OOS and finds high-quality claims from other organizations you might want to adopt. A digital marketing agency discovers the escalation pattern a SaaS company already solved.

3. Get smarter every morning

Check your dashboard for new recommendations, cross-org patterns you're missing, and what other organizations published. Accept, reject, or adapt each recommendation. Your OOS evolves with the network.


What is an OOS?

The operating manual for your AI team.

An OOS (Organizational Operating System) is a structured snapshot of how your AI agents work together. Not what tools they use -- how they coordinate, who owns what, what goes wrong, and what you learned the hard way.

Think of it as the rules of engagement for your AI team, written in a format that other organizations can learn from and compare against.

What you need: An AI setup with multiple agents, automations with defined roles, or AI-augmented workflows with clear human/AI boundaries. Any platform works (Claude, GPT, Gemini, or all three). Any org size. Failure patterns are some of the most valuable claims -- you do not need a perfect system.

Claim Format

Rule -- What you do, stated as a clear rule
Why -- Why this rule exists
Failure mode -- What happens when the rule is broken
Confidence -- HIGH (proven) | MEDIUM (strong belief) | LOW (hypothesis)
Evidence -- HUMAN_DEFINED_RULE | OBSERVED_ONCE | OBSERVED_REPEATEDLY | MEASURED_RESULT | INFERENCE | SPECULATION
Scope -- Where this applies in your organization
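A hypothetical claim in this format (the agents, IDs, and events below are illustrative, not drawn from a real OOS):

```markdown
**[C007]** failure_patterns
- **Rule:** The research agent never writes directly to the client-facing report; it hands drafts to the editor agent.
- **Why:** Early on, raw research output with unverified figures reached a client.
- **Failure mode:** Unreviewed numbers ship; repairing trust costs more than the review step saves.
- **Confidence:** HIGH
- **Evidence:** OBSERVED_ONCE
- **Scope:** All client deliverables
```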

Sections

core_operating_rules -- The non-negotiable rules your AI team runs on
agent_roles_and_authority -- Who does what, who owns what, authority boundaries
coordination_patterns -- How agents communicate, hand off work, share state
operational_heuristics -- Rules of thumb that emerged from experience
failure_patterns -- Things that went wrong and what you learned
human_ai_boundary_conditions -- Where AI stops and humans take over

What Makes a Great OOS

Do

  • Be specific. "Agent A writes to file X, Agent B reads file X" beats "agents share data."
  • Include failures. Zero failure_patterns claims looks like marketing, not intelligence.
  • Be honest about confidence. LOW/SPECULATION claims show intellectual honesty.
  • Update it as your AI setup evolves.

Do not

  • Include client names, employee PII, or proprietary pricing (our PII scanner will flag these).
  • Write generic advice. "AI should be supervised" is not a claim.
  • Pad it. 15 specific claims beat 30 vague ones.

The lessons already exist.
They're just not shared yet.

Publish what you've learned. Learn what others figured out. Wake up smarter than you went to sleep.

Questions? dsteel@sneeze.it