Sneeze It
Founding Publisher · Gold · L7 · Background Agents: core operating rules
Every agent writes to exactly one state file of its own, shared with the rest of the system. The morning briefing compiler reads all 8 files. No agent reads another agent's data source directly.
Why: The reporting agent and the spend monitor both queried the Meta API independently. They returned different numbers because of timing differences. Per-agent state files with timestamps eliminated the contradiction.
Failure mode: Two agents query the same API 4 minutes apart. Spend numbers differ by $340. Account manager questions data integrity. Trust in the system drops for weeks.
Scope: All 8 agents. 8 state files.
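A minimal sketch of the per-agent state file pattern, assuming timestamped JSON files in a shared directory. The directory name, file layout, and field names are illustrative, not the real schema.

```python
import json
import time
from pathlib import Path

STATE_DIR = Path("state")

def write_state(agent_name: str, payload: dict) -> Path:
    """Each agent writes exactly one file, stamped at write time."""
    STATE_DIR.mkdir(exist_ok=True)
    record = {
        "agent": agent_name,
        "written_at": time.time(),  # timestamp settles "whose number is newer"
        "data": payload,
    }
    path = STATE_DIR / f"{agent_name}.json"
    path.write_text(json.dumps(record))
    return path

def read_all_states() -> list[dict]:
    """The briefing compiler reads every agent's file; it never queries APIs."""
    return [json.loads(p.read_text()) for p in sorted(STATE_DIR.glob("*.json"))]
```

Because every consumer reads the same timestamped file instead of re-querying the API, two agents can no longer report different numbers for the same metric.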
The spend monitoring agent checks pacing every 6 hours and alerts when any account exceeds 115% of daily budget. Alert goes to Slack channel, not DM.
Why: DMs get buried. Channel alerts create shared visibility. The 115% threshold balances sensitivity with noise. At 110%, too many false alarms. At 120%, alerts arrive too late to prevent significant overspend.
Failure mode: Agent DMs the founder at 2 AM about a 112% overspend. Founder silences notifications. Next morning, account is at 145%. Channel alert would have been seen by the AM who starts at 7 AM.
Scope: All client ad accounts with daily budgets over $100.
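The pacing check can be sketched as below, assuming each account record carries `name`, `daily_budget`, and `spend_today` (hypothetical field names). The alert strings stand in for whatever the real Slack posting step sends to the channel.

```python
def pacing_alerts(accounts: list[dict], threshold: float = 1.15) -> list[str]:
    """Return channel-bound alert lines for accounts over the pacing threshold.

    Only accounts with daily budgets over $100 are in scope.
    """
    alerts = []
    for acct in accounts:
        if acct["daily_budget"] <= 100:
            continue  # below the scope floor, skip
        pace = acct["spend_today"] / acct["daily_budget"]
        if pace > threshold:
            alerts.append(f"#spend-alerts {acct['name']}: {pace:.0%} of daily budget")
    return alerts
```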
No agent modifies campaign settings. Agents read, analyze, and recommend. A human executes changes in the ad platform.
Why: We gave an agent write access to bid adjustments in month 1. It optimized for CPA without understanding the client's brand awareness goal. Client called asking why impressions dropped 60%.
Failure mode: Agent reduces bids on a brand campaign. Impressions crater. Client sees competitors appearing in their branded search results. Emergency call at 8 PM.
Scope: All ad platform integrations. Read-only API access only.
Client communication drafts include a confidence tag: ROUTINE (send after quick review), SENSITIVE (requires careful review), or ESCALATE (founder must review personally).
Why: Not all client emails need the same level of scrutiny. Performance reports are routine. A response to a complaint is sensitive. A cancellation save attempt is escalate.
Failure mode: Account manager rubber-stamps a SENSITIVE email about a billing discrepancy. Email contains a number the agent hallucinated from a different client's account. Client catches the error and questions our competence.
Scope: All client-facing email drafts generated by the EA agent.
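One way to represent the three-tier tag is an enum plus a classifier. The keyword rules below are purely illustrative trigger words; the real tagger presumably weighs the draft's full context, not just the subject line.

```python
from enum import Enum

class ReviewTier(Enum):
    ROUTINE = "routine"      # send after quick review
    SENSITIVE = "sensitive"  # requires careful review
    ESCALATE = "escalate"    # founder must review personally

def tag_draft(subject: str) -> ReviewTier:
    """Illustrative keyword-based tagging of an outgoing draft."""
    s = subject.lower()
    if any(k in s for k in ("cancel", "churn", "terminate")):
        return ReviewTier.ESCALATE
    if any(k in s for k in ("complaint", "billing", "discrepancy")):
        return ReviewTier.SENSITIVE
    return ReviewTier.ROUTINE
```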
agent roles and authority
The Reporting Agent owns weekly performance summaries. The Spend Monitor owns daily pacing alerts. They never overlap. The Reporting Agent does not alert on daily spend. The Spend Monitor does not summarize weekly trends.
Why: When both agents commented on spend, the weekly report contradicted the daily alert because they used different time windows. Strict lane separation fixed it within one day.
Failure mode: Weekly report says spend is on track while daily alert says overpacing by 18%. Both are correct for their time window but the client sees both and panics.
Scope: Analytics and monitoring functions.
The Prospecting Agent researches potential clients and drafts outreach. It does NOT have access to current client data, performance metrics, or internal Slack channels.
Why: Information isolation prevents the prospecting agent from accidentally referencing current client data in outreach. It also prevents scope creep into account management territory.
Failure mode: Prospecting agent discovers a current client's competitor in the pipeline. References competitor strategy details in outreach email, inadvertently revealing client intelligence to a prospect.
Scope: Prospecting and business development function only.
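The isolation rule amounts to an access map with the prospecting agent absent from every client-data source. Agent names and source names here are illustrative placeholders for the real integration list.

```python
# Which data sources each agent may read. Deny-by-default: anything not
# listed is off limits. The prospecting agent gets no client data, no
# internal Slack, and no shared state files that carry client metrics.
AGENT_ACCESS = {
    "reporting": {"client_metrics", "state_files"},
    "spend_monitor": {"ad_platform_readonly", "state_files"},
    "prospecting": {"public_web", "crm_prospects"},
}

def can_read(agent: str, source: str) -> bool:
    return source in AGENT_ACCESS.get(agent, set())
```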
The Internal Ops Agent handles team task tracking, meeting prep, and internal briefings. It is the only agent that reads the project management tool. Other agents request project status through its state file.
Why: Multiple agents querying the PM tool created API rate limit issues and inconsistent status views. Centralizing PM access through one agent made project data consistent across the organization.
Failure mode: Three agents query Asana simultaneously. Rate limit hit. Two get stale cached data, one gets current. Briefing mixes old and new project status without any indication of which is which.
Scope: All project management data access.
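The single-reader pattern can be sketched as a small gateway that only the ops agent instantiates, with everyone else consuming its output through the state file. `fetch_projects` stands in for the real Asana client, and the cache TTL is an illustrative assumption.

```python
import time

class PMGateway:
    """One reader for the PM tool: one rate-limit budget, one status view."""

    def __init__(self, fetch_projects, ttl_seconds: float = 300):
        self._fetch = fetch_projects
        self._ttl = ttl_seconds
        self._cached = None
        self._fetched_at = 0.0

    def projects(self):
        now = time.time()
        if self._cached is None or now - self._fetched_at > self._ttl:
            self._cached = self._fetch()  # only this agent ever hits the API
            self._fetched_at = now
        return self._cached
```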
coordination patterns
Morning briefing runs at 6:30 AM. All scanner agents must complete by 6:00 AM. Any agent not finished by 6:00 AM is marked stale in the briefing. The briefing never waits for a slow agent.
Why: One slow API call used to delay the entire briefing by 20 minutes. The founder's morning routine depends on the briefing being ready at 6:30 sharp. Stale data with a visible warning is always better than no briefing at all.
Failure mode: Google Ads API times out at 5:50 AM. Without the hard deadline, briefing delayed until 6:47 AM. Founder starts the day without context and makes a client call unprepared.
Scope: Morning briefing pipeline. 8 scanner agents, 1 compiler.
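The hard-deadline rule can be sketched as a compiler that checks each state file's write time against the 6:00 AM cutoff and marks latecomers instead of waiting. The input mapping of agent name to write time is an illustrative stand-in for reading the timestamps out of the state files.

```python
from datetime import datetime, time

CUTOFF = time(6, 0)  # scanners must finish by 6:00 AM

def compile_briefing(agent_states: dict, now: datetime) -> list[str]:
    """Compile at 6:30 sharp; never wait. An agent whose state was not
    written today before the cutoff is included but marked stale."""
    lines = []
    for agent, written_at in sorted(agent_states.items()):
        fresh = written_at.date() == now.date() and written_at.time() <= CUTOFF
        marker = "" if fresh else " [STALE]"
        lines.append(f"{agent}{marker}")
    return lines
```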
When two agents need to reference each other's output, they read from state files, never from conversation context or memory. State files are the single source of truth for all cross-agent data.
Why: Conversation context drifts between sessions. A state file written 2 hours ago is more reliable than an agent's memory of what another agent reported yesterday. We caught 3 errors in one week from memory-based cross-referencing.
Failure mode: Reporting agent remembers yesterday's spend number instead of reading today's state file. Weekly report goes out with yesterday's numbers. Client catches the error before the account manager does.
Scope: All cross-agent data references.
Escalation path for client issues: Agent detects anomaly, flags in state file, briefing highlights it, account manager reviews, founder involved only if client relationship is at risk.
Why: Early on, every anomaly went directly to the founder. 15 alerts per day within the first two weeks. Alert fatigue set in by week 3. Now the AM layer filters signal from noise and the founder sees 2-3 meaningful items per day.
Failure mode: Without the AM filter layer, founder gets desensitized to alerts. Treats everything as noise. Misses a real problem that costs a client. Client churns.
Scope: All client-facing anomaly detection and escalation.
operational heuristics
Reports generated before 7 AM use yesterday's final numbers, not partial today numbers. Never mix time windows in a single report.
Why: Partial-day data creates misleading trends. A report showing "spend is down 60%" at 6 AM because only 6 hours of data exist causes unnecessary panic every single time.
Failure mode: Client receives early morning report showing spend down 60%. Calls account manager in alarm. AM spends 30 minutes explaining that it is just early-morning partial data. Happens three times before we fix the rule.
Scope: All reports generated before noon.
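The time-window rule reduces to a single date-selection function every report generator calls before pulling numbers, so no report can mix windows.

```python
from datetime import date, datetime, timedelta

def reporting_window(now: datetime) -> date:
    """Before 7 AM, report on yesterday's final numbers; never partial-today.
    After 7 AM, today's data is considered usable for the full report."""
    if now.hour < 7:
        return now.date() - timedelta(days=1)
    return now.date()
```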
When a client has not been contacted in 14+ days, flag it in the briefing regardless of how well their campaigns are performing. Silence is a churn signal even when the numbers are good.
Why: Three of our churned clients in the past year had strong performance numbers at the time they left. They did not leave because of results. They left because they felt ignored and undervalued.
Failure mode: Client campaigns perform well for 6 straight weeks. No proactive outreach from the team. Client quietly signs with a competitor who calls them every week.
Scope: All active client accounts above the base monthly spend tier.
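The silence check is deliberately performance-blind, which the sketch below makes explicit: it looks only at the contact date. The `name` and `last_contacted` fields are illustrative.

```python
from datetime import date, timedelta

CONTACT_GAP = timedelta(days=14)

def silence_flags(clients: list[dict], today: date) -> list[str]:
    """Flag clients not contacted in 14+ days, regardless of performance."""
    return [
        c["name"]
        for c in clients
        if today - c["last_contacted"] >= CONTACT_GAP
    ]
```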
New agents start in shadow mode for 2 weeks minimum. They generate output that a human reviews but the team does not act on. After 2 weeks of consistently accurate output, they graduate to draft mode where output is used after human review.
Why: We deployed the prospecting agent directly into production without a shadow period. Its first batch of outreach emails included a company that was a current client's direct competitor. Two weeks of shadow mode would have caught that conflict on day 4.
Failure mode: New agent sends outreach to a prospect that has a direct conflict with an existing client relationship. Client hears about it through industry contacts. Trust damaged.
Scope: All new agent deployments. No exceptions.
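The graduation rule can be sketched as a mode check: both conditions (two weeks elapsed AND a full accuracy streak) must hold before an agent leaves shadow mode. How "consistently accurate" is measured is assumed to be a daily pass/fail streak here; the real criterion may differ.

```python
from datetime import date, timedelta

SHADOW_MIN = timedelta(days=14)

def deployment_mode(deployed_on: date, today: date,
                    accurate_streak_days: int) -> str:
    """Shadow: humans review output, nobody acts on it.
    Draft: output is used, but only after human review."""
    if today - deployed_on < SHADOW_MIN or accurate_streak_days < 14:
        return "shadow"
    return "draft"
```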
failure patterns
We scaled from 2 agents to 8 in 6 weeks. Three of those agents had overlapping responsibilities that we did not discover until month 3. The fix took longer than the original build of all three agents combined.
Why: Rapid scaling without explicit authority documentation creates hidden overlaps. Each agent worked perfectly fine in isolation. The conflicts only became visible when their outputs were compared side by side in the morning briefing.
Failure mode: Reporting agent and ops agent both independently track project deadlines using different data sources. Briefing shows two different due dates for the same client project. Nobody knows which one is correct.
Scope: Any team scaling beyond 4 agents. Document authority boundaries BEFORE deploying new agents.
We let the EA agent send "quick acknowledgment" emails to clients without human review. It acknowledged a client complaint with "Thanks for letting us know!" without addressing the substance of their concerns. Client escalated directly to the founder.
Why: Even simple acknowledgments carry emotional tone. "Thanks for letting us know" sent to a frustrated client reads as dismissive and uncaring. The AI did not detect the emotional register of the incoming message.
Failure mode: Client sends an angry email about declining results. EA auto-acknowledges with a cheerful tone. Client interprets it as corporate indifference. Relationship severely damaged. Takes two in-person meetings to repair.
Scope: All client communications including simple acknowledgments. No auto-send without human review.
We gave the spend monitor a flat $50 threshold for alerts. It generated 40+ alerts per day across the portfolio. We raised it to $200. Then we missed a real overspend of $180 on a small account. The right threshold was percentage-based (15% over daily budget), not dollar-based.
Why: Dollar thresholds do not scale across accounts of vastly different sizes. $50 is meaningless noise on a $5,000/day account but represents a 90% overspend on a $200/day account. Percentage normalizes the signal across the entire portfolio.
Failure mode: Small account overspends by $180 per day (90% over budget) for 6 days. Alert suppressed because it falls under the $200 dollar threshold. Month-end reconciliation reveals $1,080 in unplanned overspend. Client is not happy.
Scope: All spend monitoring across all account sizes. Always use percentage thresholds.
human-ai boundary conditions
Strategy calls with clients are always human-only. The agent prepares a briefing deck with data, talking points, and risks to raise. The human runs the call. The agent processes meeting notes afterward.
Why: Clients pay for strategic judgment and a trusted relationship, not data delivery. The human connection during strategy calls is the primary retention mechanism. AI handles the preparation so the human shows up fully informed.
Failure mode: Account manager shows up to a quarterly strategy call without agent-prepared briefing because the system was down. Client asks about a performance trend the AM has not reviewed. AM looks unprepared and the client questions whether they are getting enough attention.
Scope: All client strategy calls. AI preps, human performs, AI processes afterward.