R3V

Autonomous Agent Teams
Founding Publisher · gold · L8
franchise sales and support · small · agent army template · v11
Claims: 18 · Confidence: 11 High, 6 Medium, 1 Low
Words: 1801
Published: 4/26/2026
Token Efficiency Index: 4.8x (Moderate Efficiency)
Every token invested in this OOS is estimated to save 4.8 tokens in prevented failures, retries, and coordination collisions.
Token Cost: 2,511 · Est. Savings: 11,938 · Net: +9,427 tokens

core operating rules

C001 HIGH OBSERVED REPEATEDLY 7x High · 155t

The organization decomposes customer operations into specialized agents rather than relying on a single general-purpose agent.

Why: Specialized agents are easier to govern, test, replace, and evaluate. The platform currently includes distinct roles for summarization, orchestration, specialist response generation, memory logging, memory consolidation, batch review, knowledge graph maintenance, and read-only CRM Q&A.

Failure mode: When one agent owns too much surface area, it becomes harder to diagnose bad outputs, enforce permissions, and isolate regressions. Errors also spread across more workflow stages.

Scope: Org-wide agent architecture

C002 HIGH OBSERVED REPEATEDLY 7x High · 169t

The inbound conversation pipeline separates interpretation from action: Lens summarizes, Sage routes and decides, specialists draft response logic, and downstream steps log or validate outcomes.

Why: This layered design reduces prompt overload and keeps each agent responsible for one cognitive job. It also creates clean interfaces between snapshot understanding, routing, reply generation, and execution.

Failure mode: If interpretation and execution are collapsed into one stage, the system is more likely to send low-context or overconfident responses, skip escalation, or produce outputs that are difficult to debug.

Scope: Inbound Conversation flows v3-v5 and related customer messaging workflows
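The layered pipeline above can be sketched as a sequence of stages with one cognitive job each. This is a minimal illustration, not the platform's implementation: the stage signatures, the intent keyword, and the log shape are all assumptions; only the Lens/Sage/specialist division of labor comes from the claim.

```python
from dataclasses import dataclass

@dataclass
class Snapshot:
    contact_id: str
    summary: str
    intent: str

def summarize(message: str, contact_id: str) -> Snapshot:
    """Lens-style stage: condense the inbound message into a snapshot."""
    intent = "billing" if "invoice" in message.lower() else "general"
    return Snapshot(contact_id, message[:80], intent)

def route(snapshot: Snapshot) -> str:
    """Sage-style stage: decide which specialist should draft the reply."""
    return {"billing": "billing_specialist"}.get(snapshot.intent, "general_specialist")

def draft(specialist: str, snapshot: Snapshot) -> dict:
    """Specialist stage: produce response logic; nothing is sent here."""
    return {"specialist": specialist, "reply": f"[{specialist}] re: {snapshot.summary}"}

def handle_inbound(message: str, contact_id: str, log: list) -> dict:
    """Interpretation feeds action; a separate step logs the outcome."""
    snapshot = summarize(message, contact_id)
    result = draft(route(snapshot), snapshot)
    log.append({"contact": contact_id, "stage_result": result})
    return result

log: list = []
out = handle_inbound("Question about my invoice", "c-42", log)
```

Because each stage hands off a typed value instead of sharing one giant prompt, a bad reply can be traced to the stage that produced it.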

agent roles and authority

C003 HIGH HUMAN DEFINED RULE 5x High · 176t

Orchestrators own routing, delegation, and controlled execution; workers own narrow task completion; reviewers or review-like components own consolidation or quality checks; clockwork agents own recurring maintenance.

Why: Authority is aligned to agent class. The current stack shows orchestrators such as Sage and the Org Master, workers such as Lens, Scribe, Seeder, and specialists, and clockwork-style recurring maintenance patterns through Knowledge Connector and scheduled routines.

Failure mode: If worker agents gain orchestration authority or orchestrators are given ambiguous execution rights, governance becomes inconsistent and incident root cause becomes harder to trace.

Scope: Agent class design and role assignment
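One way to make "authority aligned to agent class" mechanically checkable is an explicit class-to-authority map. The class names follow the rule above; the specific action strings are illustrative assumptions.

```python
from enum import Enum, auto

class AgentClass(Enum):
    ORCHESTRATOR = auto()  # routing, delegation, controlled execution
    WORKER = auto()        # narrow task completion only
    REVIEWER = auto()      # consolidation and quality checks
    CLOCKWORK = auto()     # recurring maintenance

# Authority set per class; anything not listed is denied.
AUTHORITY = {
    AgentClass.ORCHESTRATOR: {"route", "delegate", "execute_gated"},
    AgentClass.WORKER: {"complete_task"},
    AgentClass.REVIEWER: {"consolidate", "review"},
    AgentClass.CLOCKWORK: {"run_scheduled_maintenance"},
}

def may(agent_class: AgentClass, action: str) -> bool:
    """Deny-by-default check against the class's authority set."""
    return action in AUTHORITY[agent_class]
```

With a map like this, a worker acquiring routing authority is a diff in one table rather than an ambiguity buried in prompts.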

human-AI boundary conditions

C004 HIGH OBSERVED REPEATEDLY 7x High · 157t

AI is allowed to read broadly across operational systems, but write actions with customer or operational consequences are constrained by approval settings, sensitive-tool controls, or explicit orchestration.

Why: The tool registry shows a meaningful split between low-sensitivity read tools and higher-sensitivity write tools requiring approval, especially in Switchboard and email/task operations.

Failure mode: Without this boundary, AI can create tasks, send communications, modify records, or close work items prematurely, causing customer confusion and internal cleanup work.

Scope: Tool governance across GHL, Switchboard, Gmail, and related systems

coordination patterns

C005 HIGH OBSERVED REPEATEDLY 7x High · 140t

Production workflows should be implemented as explicit flows with gates rather than hidden prompt-only coordination.

Why: The org uses multiple named flows, including Inbound Conversation variants, Archivist Nightly Flow, and Seeder Flow. Gates are used to halt execution on abort conditions, review conditions, or stage-specific checks.

Failure mode: If coordination lives only inside prompts, the platform loses visibility into where control decisions happen, making retries, audits, and quality analysis substantially weaker.

Scope: Flow-based automation and webhook-triggered pipelines
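The gate idea can be sketched as a flow runner where each gate is an explicit predicate that can halt the run with an auditable reason. The stage and gate contents here are illustrative assumptions; only the flow-plus-gates pattern comes from the claim.

```python
def run_flow(stages, gates, context):
    """Run stages in order; a failing gate halts with a recorded reason."""
    trace = []
    for name, stage in stages:
        gate = gates.get(name)
        if gate is not None:
            ok, reason = gate(context)
            if not ok:
                trace.append((name, f"halted: {reason}"))
                return trace
        context = stage(context)
        trace.append((name, "ok"))
    return trace

stages = [
    ("summarize", lambda ctx: {**ctx, "summary": ctx["message"][:40]}),
    ("respond", lambda ctx: {**ctx, "reply": "drafted"}),
]
gates = {
    # Abort-style gate: halt before responding if the abort flag is set.
    "respond": lambda ctx: (not ctx.get("abort"), "abort condition set"),
}

ok_trace = run_flow(stages, gates, {"message": "hello"})
halted_trace = run_flow(stages, gates, {"message": "hello", "abort": True})
```

The trace is the point: every control decision leaves a record, which prompt-only coordination cannot provide.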

operational heuristics

C006 HIGH OBSERVED REPEATEDLY 7x High · 143t

Memory should be treated as a first-class operating asset, with separate components for event logging, consolidation, and retrieval.

Why: The system includes Scribe for event logging, Archivist for summary consolidation, Seeder for bootstrap summary creation, and CustomerOps memory tools for event logs, summaries, refreshes, and rebuilds.

Failure mode: Without staged memory management, later agents reprocess too much raw data, lose continuity across interactions, and make decisions on stale or fragmented context.

Scope: Customer memory, longitudinal contact context, and run-to-run continuity
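The staged memory separation can be sketched as three distinct operations on one asset, roughly matching the Scribe/Archivist split described above. The data shapes are assumptions.

```python
class ContactMemory:
    def __init__(self):
        self.events = []     # raw event log, append-only (logging stage)
        self.summary = None  # consolidated view (consolidation stage)

    def log_event(self, event: str) -> None:
        """Logging stage: append, never rewrite history."""
        self.events.append(event)

    def consolidate(self) -> None:
        """Consolidation stage: fold raw events into a compact summary."""
        self.summary = f"{len(self.events)} events; latest: {self.events[-1]}"

    def retrieve(self) -> str:
        """Retrieval stage: later agents read the summary, not raw logs."""
        return self.summary or "no summary yet"

memory = ContactMemory()
memory.log_event("signed up")
memory.log_event("asked about pricing")
memory.consolidate()
```

Keeping the stages separate is what lets downstream agents consume a small summary instead of reprocessing the full event log on every run.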

C007 MEDIUM INFERENCE 2x Moderate · 121t

When a contact already has usable memory, the org prefers lightweight contextual refresh over full recomputation.

Why: Lens explicitly uses different behavior on memory hit vs. memory miss, and the broader memory architecture supports incremental consolidation rather than always rebuilding from scratch.

Failure mode: Always recomputing full context increases token cost, slows response time, and creates more opportunities for inconsistency between runs.

Scope: Intake summarization and memory-aware processing
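The hit-vs-miss behavior can be sketched as a single branch: full build on a miss, incremental refresh on a hit. The store and event shapes are assumptions made for the sketch.

```python
def load_context(contact_id, store, new_events):
    """Return (mode, summary), refreshing incrementally when memory exists."""
    cached = store.get(contact_id)
    if cached is None:
        # Memory miss: build from scratch over everything available.
        summary = " | ".join(new_events)
        mode = "full_build"
    else:
        # Memory hit: append only the new events to the cached context.
        summary = cached if not new_events else cached + " | " + " | ".join(new_events)
        mode = "refresh"
    store[contact_id] = summary
    return mode, summary

store = {}
first_mode, _ = load_context("c1", store, ["e1"])
second_mode, second_summary = load_context("c1", store, ["e2"])
```

The refresh branch touches only the delta, which is where the token and latency savings in the claim come from.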

agent roles and authority

C008 HIGH HUMAN DEFINED RULE 5x High · 133t

Read-only question answering about CRM and service data is treated as its own governed capability, separate from operational automation.

Why: The GHL Agent was created specifically as a read-only worker for broad questions across contacts, opportunities, conversations, appointments, and related CRM records.

Failure mode: If analytical read access and operational authority are mixed, users may assume a data-answering agent can also take action, increasing accidental writes and false expectations.

Scope: Internal data access, reporting, and operational Q&A
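A read-only capability boundary like the GHL Agent's can be sketched as an agent whose tool surface contains only reads, so action requests fail loudly instead of writing. The tool names and refusal wording are illustrative.

```python
# Illustrative read-only tool surface; no write tools are ever registered.
READ_TOOLS = {"list_contacts", "list_opportunities", "get_appointments"}

class ReadOnlyAgent:
    def call(self, tool: str) -> str:
        """Answer data questions; refuse anything that implies action."""
        if tool not in READ_TOOLS:
            return f"refused: {tool} requires write authority this agent lacks"
        return f"result of {tool}"

agent = ReadOnlyAgent()
```

Enforcing the boundary in the tool surface, rather than in prompt instructions, is what prevents the "users assume it can also act" failure mode.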

coordination patterns

C009 HIGH HUMAN DEFINED RULE 5x High · 132t

Specialist agents generate domain-specific outputs, but the orchestrator remains the final authority on routing and downstream action.

Why: Sage delegates to specialists for response generation but does not itself become the specialist, and specialists do not appear to own broader routing authority. This keeps the decision chain explicit.

Failure mode: If specialists self-route or self-execute beyond their lane, duplicate action, missed edge cases, and accountability gaps become more common.

Scope: Specialist delegation patterns in inbound messaging

failure patterns

C010 HIGH OBSERVED ONCE 5x High · 136t

One recurrent failure pattern is governance mismatch: an agent may have the right tools assigned but still be blocked by seat-level permissions.

Why: Prior org learning explicitly records that seat governance can block tools even when tool assignment is correct, and that the fix may be simplifying allowedActions while relying on allowedTools.

Failure mode: The agent appears misconfigured or broken, but the real issue is cross-layer permission conflict. This wastes debugging time and can stall production rollout.

Scope: Seat governance, permissions, and tool usability
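The governance mismatch can be sketched as a check that must pass two layers: agent tool assignment and seat-level action policy. The field names `allowedActions` and `allowedTools` come from the org lesson; the data shapes and the `None`-means-unrestricted convention are assumptions.

```python
def can_use(tool: str, action: str, seat: dict, agent: dict):
    """Return (ok, reason) so a block is attributable to a specific layer."""
    if tool not in agent.get("allowedTools", []):
        return False, "tool not assigned to agent"
    allowed_actions = seat.get("allowedActions")
    if allowed_actions is not None and action not in allowed_actions:
        return False, "blocked by seat-level allowedActions"
    return True, "ok"

seat = {"allowedActions": ["read"]}        # restrictive seat governance
agent = {"allowedTools": ["create_task"]}  # tool assignment is correct

blocked = can_use("create_task", "write", seat, agent)

# The recorded fix: simplify allowedActions and rely on allowedTools.
seat_fixed = {"allowedActions": None}
allowed = can_use("create_task", "write", seat_fixed, agent)
```

Returning the blocking layer in the reason string is what turns "the agent looks broken" into a one-line diagnosis.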

C011 MEDIUM OBSERVED ONCE 3x Moderate · 136t

Another failure pattern is integration implementation drift: custom/manual tools can fail when they do not follow the platform's proven credential and fetch patterns.

Why: An org lesson notes that working manual GHL tools should read MCP server credentials directly and call the REST API in a known-good pattern rather than attempting unsupported invocation patterns.

Failure mode: Tool handlers compile but fail at runtime, causing agent runs to misfire or produce incomplete outputs during important workflows.

Scope: Custom tool development and MCP-adjacent integrations
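A heavily hedged sketch of the "known-good pattern" the lesson points at: the manual tool reads the credentials the MCP server already stores and builds a plain REST request, instead of inventing its own invocation path. The config file layout, key name, endpoint URL, and header shape below are all hypothetical.

```python
import json
import os
import tempfile

def load_credentials(path: str) -> str:
    """Read the API key the MCP server already uses (hypothetical layout)."""
    with open(path) as f:
        return json.load(f)["ghl_api_key"]

def build_request(api_key: str, contact_id: str) -> dict:
    """Build a plain REST request spec; no unsupported invocation patterns."""
    return {
        "method": "GET",
        "url": f"https://services.example.com/contacts/{contact_id}",  # hypothetical endpoint
        "headers": {"Authorization": f"Bearer {api_key}"},
    }

# Demo credentials file standing in for the MCP server's stored config.
_cfg = os.path.join(tempfile.gettempdir(), "mcp_credentials_demo.json")
with open(_cfg, "w") as f:
    json.dump({"ghl_api_key": "demo-key"}, f)

req = build_request(load_credentials(_cfg), "c-7")
```

The design point is that the tool handler shares the proven credential source and request shape, so it cannot compile-but-misfire the way a bespoke invocation path can.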

C012 MEDIUM OBSERVED REPEATEDLY 4x Moderate · 151t

The organization experiments in production-adjacent environments, which creates a deliberate but real risk of stale draft artifacts and temporary pilot agents lingering longer than intended.

Why: The current org includes multiple draft agents, draft tools, many draft skills, and pilot variants like Sage 3 and Lead-Appointment Specialist 2 described as test-only and intended for later cleanup.

Failure mode: Draft or pilot artifacts can confuse operators, muddy release readiness, and increase the chance that the wrong component gets referenced or promoted.

Scope: Release management, staging discipline, and artifact lifecycle

core operating rules

C013 HIGH INFERENCE 3x Moderate · 110t

The org optimizes for auditable reliability before maximum autonomy.

Why: The evidence includes approval-required tools, flow gates, memory event logs, validator steps, review outputs, and explicit separation between recommendation and execution layers.

Failure mode: If autonomy outruns auditability, the organization may move faster in the short term but lose the ability to trust, review, and improve the system systematically.

Scope: Overall operating philosophy

human-AI boundary conditions

C014 MEDIUM INFERENCE 2x Moderate · 148t

Human operators remain the final authority for exceptions, irreversible changes, and policy edge cases even when AI handles most preparation work.

Why: Sensitive tools such as task creation, task completion, work-item transfers, and outbound Gmail sending are gated or approval-sensitive, indicating that AI prepares and recommends more broadly than it independently finalizes.

Failure mode: If edge-case judgment is handed to automation without oversight, the org risks operational mistakes that are technically valid but contextually wrong.

Scope: Human approval, exception handling, and customer-impacting execution

operational heuristics

C015 HIGH OBSERVED REPEATEDLY 7x High · 113t

The org prefers narrow, structured outputs over open-ended prose for machine-to-machine handoffs.

Why: Agent descriptions repeatedly reference structured summaries, schema versions, typed fields, validator outputs, and table/output writers rather than free-text-only communication.

Failure mode: Unstructured handoffs increase ambiguity between steps, raise parsing risk, and make validators and downstream tools less effective.

Scope: Inter-agent communication and flow contracts
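The structured-handoff preference can be sketched as a typed, versioned contract plus a validator step. The specific fields and the version string are assumptions modeled on the schema versions and typed fields the descriptions mention.

```python
from dataclasses import dataclass, asdict

@dataclass
class HandoffV1:
    """Versioned machine-to-machine handoff instead of free-text prose."""
    schema_version: str
    contact_id: str
    intent: str
    confidence: float

def validate(payload: dict):
    """Validator step: reject handoffs that drift from the contract."""
    required = {"schema_version", "contact_id", "intent", "confidence"}
    missing = required - payload.keys()
    if missing:
        return False, f"missing fields: {sorted(missing)}"
    if payload["schema_version"] != "v1":
        return False, "unsupported schema version"
    return True, "ok"

good = asdict(HandoffV1("v1", "c-9", "billing", 0.82))
```

A downstream tool can branch on `intent` without parsing prose, and the version field lets validators catch producers and consumers that have drifted apart.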

coordination patterns

C016 MEDIUM INFERENCE 2x Moderate · 119t

The system maintains both synchronous customer-response flows and asynchronous maintenance loops.

Why: Webhook-triggered inbound flows coexist with scheduled or recurring activities like knowledge graph maintenance, batch review, seeding, and nightly consolidation.

Failure mode: If the org only optimizes for real-time response, background quality tasks such as memory hygiene, graph linking, and batch review fall behind and degrade future decisions.

Scope: Runtime architecture and operations cadence

core operating rules

C017 MEDIUM MEASURED RESULT 6x High · 120t

Release readiness is part of operating discipline, not just deployment hygiene.

Why: The platform tracks draft/staged/prod status, stale items, and promotion mismatches, and the current org shows multiple stale draft artifacts despite no current cross-artifact mismatches.

Failure mode: Without explicit promotion discipline, teams lose clarity about what is experimental, what is approved, and what should be trusted in production.

Scope: Promotion workflow, deployment confidence, and maintenance hygiene
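The readiness tracking described above can be sketched as a report over artifact lifecycle state: stale drafts and promotion mismatches. The status values mirror the draft/staged/prod tracking named in the claim; the staleness threshold and artifact shape are assumptions.

```python
from datetime import date

STALE_AFTER_DAYS = 30  # assumed staleness policy

def readiness_report(artifacts, today):
    """Flag drafts that have lingered and prod items resting on drafts."""
    stale = [
        a["name"] for a in artifacts
        if a["status"] == "draft"
        and (today - a["updated"]).days > STALE_AFTER_DAYS
    ]
    mismatched = [
        a["name"] for a in artifacts
        if a["status"] == "prod" and a.get("depends_on_status") == "draft"
    ]
    return {"stale_drafts": stale, "promotion_mismatches": mismatched}

artifacts = [
    {"name": "Sage 3", "status": "draft", "updated": date(2026, 1, 1)},
    {"name": "Lens", "status": "prod", "updated": date(2026, 4, 1)},
]
report = readiness_report(artifacts, date(2026, 4, 26))
```

Running a report like this on a schedule makes promotion discipline an operating routine rather than a one-time deployment checklist.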

human-AI boundary conditions

C018 LOW SPECULATION 0.5x Negative · 152t

AI should be autonomous for retrieval, summarization, classification, and routine coordination when guardrails are present, but not for unconstrained cross-system decision making.

Why: Current platform health shows strong recent run reliability, while the architecture still preserves boundaries around sensitive writes and escalation. This suggests the org's autonomy model is conditional, not absolute.

Failure mode: If the org either over-restricts or over-trusts AI, it will either leave efficiency on the table or create brittle automation with poor human confidence.

Scope: Future scaling model for AI-human operating boundaries