Sightline Digital

silver L5 MCP & Skills
digital_marketing · small · agent army template · v1
16 claims · Confidence: 9 high, 5 medium, 2 low
Words: 2424
Published: 4/5/2026
Token Efficiency Index
5.2x High Efficiency
Every token invested in this OOS is estimated to save 5.2 tokens in prevented failures, retries, and coordination collisions.
Token Cost: 2,813
Est. Savings: 14,625.3
Net: +11,812.3 tokens

core operating rules

C001 HIGH OBSERVED ONCE 5x High · 218t

Every client profile must include a defined geographic service area with specific zip codes or a radius from a street address. No ad performance analysis runs without this field populated.

Why: Our ad monitor flagged a plumber's campaign as "strong performance -- CPL $23, 47 leads this month." The media buyer didn't check the geographic breakdown. 31 of those 47 leads were from a city 90 miles away where the plumber doesn't operate. The plumber paid for 31 leads he couldn't serve. He called us on a Friday afternoon and said he was "done." The founder saved the account with a credit and a same-day fix, but it cost the agency $1,400 in credited spend and nearly cost us an $8K/mo client.

Failure mode: Agents evaluate campaign performance without geographic context. High lead volume masks geographic waste. Client pays for leads they can't serve.

Scope: All ad monitoring and reporting agents.
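A minimal sketch of the gate this rule describes, assuming a dict-shaped client profile; the field names (`service_area`, `zip_codes`, `radius_miles`, `street_address`) are illustrative, not a real schema:

```python
# Hypothetical profile gate: no performance analysis runs unless the
# client profile defines a geographic service area.

def has_service_area(profile: dict) -> bool:
    """True if the profile lists zip codes or a radius from a street address."""
    area = profile.get("service_area") or {}
    if area.get("zip_codes"):
        return True
    return bool(area.get("radius_miles") and area.get("street_address"))

def run_performance_analysis(profile: dict) -> str:
    if not has_service_area(profile):
        raise ValueError(
            f"Profile {profile.get('client_id')}: no service area defined; "
            "refusing to run ad performance analysis."
        )
    return "analysis runs here"
```

The gate raises instead of warning, so an unpopulated field blocks the run rather than producing a report that can't be checked against geography.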

C002 HIGH OBSERVED REPEATEDLY 7x High · 165t

Shared state files must include a "last_successful_write" timestamp and a "data_completeness" flag (FULL, PARTIAL, FAILED).

Why: The Google Ads API occasionally returns partial data during heavy load periods -- 8 of 12 accounts come back, the other 4 time out. The ad monitor wrote what it had. The briefing reported on 8 clients and said nothing about the other 4. The founder assumed the missing 4 were running fine. One of the missing accounts had paused itself due to a billing issue and stayed paused for 2 days.

Failure mode: Partial data writes look like complete data. Missing accounts are interpreted as "no problems" instead of "not checked."

Scope: All agents writing shared state.
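A sketch of a state write that honors this rule; the FULL/PARTIAL/FAILED values come from the claim, while the file layout and the `accounts_missing` field are assumptions:

```python
import json
from datetime import datetime, timezone

def write_shared_state(path, accounts_expected, accounts_fetched, data):
    """Write shared state with an explicit completeness flag so a partial
    fetch can never masquerade as a full one."""
    if not accounts_fetched:
        completeness = "FAILED"
    elif set(accounts_fetched) == set(accounts_expected):
        completeness = "FULL"
    else:
        completeness = "PARTIAL"
    state = {
        "last_successful_write": datetime.now(timezone.utc).isoformat(),
        "data_completeness": completeness,
        "accounts_missing": sorted(set(accounts_expected) - set(accounts_fetched)),
        "data": data,
    }
    with open(path, "w") as f:
        json.dump(state, f, indent=2)
    return state
```

Listing the missing accounts by name lets a downstream briefing agent say "4 accounts not checked" instead of silently reporting on 8.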

agent roles and authority

C003 HIGH OBSERVED ONCE 5x High · 180t

The campaign audit agent produces findings. The media buyer produces action plans. Never the reverse.

Why: The audit agent's monthly report included "Recommendation: pause broad match keywords, switch to phrase match." The media buyer disagreed -- broad match was performing because of Smart Bidding context signals the agent couldn't see in the API data. The founder sided with the agent's recommendation because "the AI analyzed it." The media buyer implemented the change reluctantly. CPL increased 40% in the first week. The media buyer reverted and was frustrated for a month.

Failure mode: Agent recommendations override human specialist judgment. Practitioners lose authority. Morale drops. Performance suffers.

Scope: All analytical agents.

C004 HIGH OBSERVED ONCE 5x High · 193t

The inbox assistant categorizes emails and drafts responses. It does not promise timelines, deliverables, or budget changes.

Why: A dentist client emailed asking "can you increase my budget by $500 next month?" The inbox assistant drafted: "Absolutely, we'll get that adjusted for you starting the 1st." The founder approved the draft without reading carefully (morning rush, 14 emails to review). The $500 increase was implemented, but the dentist's credit card on file had a $5K limit and the increased spend triggered a card decline 3 weeks in, pausing the campaign for 2 days.

Failure mode: Agent drafts commit to operational changes. Hurried human approval lets commitments through. Downstream billing or operational conflicts surface days or weeks later.

Scope: Inbox and communication agents.

coordination patterns

C005 HIGH OBSERVED REPEATEDLY 7x High · 169t

The weekly report agent pulls from the ad monitor's shared state file, never from the Google Ads API directly.

Why: Same-day API calls to Google Ads return different numbers depending on conversion lag, attribution windows, and the time of the query. When the weekly report agent made its own API call, the founder's Friday report showed different numbers than the Monday briefing for the same date range. Small discrepancies -- $3-8 per lead -- but clients who track closely notice.

Failure mode: Two agents querying the same API independently produce slightly different snapshots. Client-facing reports show inconsistent numbers across touchpoints.

Scope: All agents that surface ad performance data.

C006 HIGH OBSERVED REPEATEDLY 7x High · 161t

Slack alerts are batched into a single daily digest at 8:15 AM. Individual alerts fire only for spend anomalies exceeding 2x the daily threshold.

Why: With 12 clients and 5 agents, unbatched alerts produced 15-25 Slack messages per day. The founder's Slack channel became an unreadable wall of notifications. He stopped checking the channel. He missed a legitimate CPC spike on a roofing client that cost $340 in wasted spend over 2 days before the media buyer caught it during her own check.

Failure mode: High alert volume causes channel abandonment. Legitimate alerts are buried in noise. The alerting system trains the human to ignore it.

Scope: All alerting agents.
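The routing logic can be sketched in a few lines; the alert dict shape and the `spend_anomaly` type name are assumptions, the 2x-threshold escalation is from the rule:

```python
def route_alert(alert: dict, daily_threshold: float) -> str:
    """'immediate' only for spend anomalies over 2x the daily threshold;
    everything else waits for the 8:15 AM digest."""
    if alert.get("type") == "spend_anomaly" and alert.get("spend", 0) > 2 * daily_threshold:
        return "immediate"
    return "digest"

def split_alerts(alerts, daily_threshold):
    """Partition the day's alerts into (fire-now, hold-for-digest)."""
    immediate = [a for a in alerts if route_alert(a, daily_threshold) == "immediate"]
    held = [a for a in alerts if route_alert(a, daily_threshold) == "digest"]
    return immediate, held
```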

operational heuristics

C007 HIGH MEASURED RESULT 10x High · 191t

For local service businesses, the agent must check search terms for geographic intent mismatches weekly, not just cost and conversion metrics.

Why: An HVAC client's campaigns looked great on paper: CPL $31, 38 leads/month. But 12 of those leads searched for "AC repair [neighboring city]" and were served ads because the radius targeting overlapped into the next town. The HVAC company doesn't service that area. They were paying $31 per useless lead for a third of their volume. The geographic search term check caught what the performance metrics missed.

Failure mode: Performance metrics look healthy while geographic targeting silently wastes budget. Local businesses serve defined areas that don't always align with radius targeting.

Scope: Ad monitoring agents for local service clients.
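A sketch of the weekly check, assuming search-term rows already carry a list of detected city names (in practice that extraction would come from the search terms report plus a gazetteer):

```python
def flag_geo_mismatches(search_terms, service_cities):
    """Return search terms whose detected city falls outside the client's
    service area, with cost attached so the waste is quantifiable."""
    service = {c.lower() for c in service_cities}
    return [
        {"query": t["query"], "city": city, "cost": t.get("cost", 0.0)}
        for t in search_terms
        for city in t.get("detected_cities", [])
        if city.lower() not in service
    ]
```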

C008 HIGH MEASURED RESULT 10x High · 162t

Client reports must include a "leads by city" breakdown for any local service business.

Why: After the plumber incident (C001) and the HVAC issue (C007), we added city-level lead breakdowns to every report. Two more geographic problems were caught in the first month: a family law firm getting leads from a state where they're not licensed, and a dentist attracting patients from 45 minutes away who never convert because the drive is too far.

Failure mode: Aggregate lead counts mask geographic distribution problems. Clients don't know their ad dollars are leaking into wrong territories until they see the breakdown.

Scope: All reporting agents for local service clients.
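The breakdown itself is trivial, which is part of the point -- a one-liner in every report would have caught three geographic leaks. Lead record shape is an assumption:

```python
from collections import Counter

def leads_by_city(leads):
    """Count leads per city, most common first, so out-of-territory
    clusters are visible at a glance."""
    return Counter(lead.get("city", "unknown") for lead in leads).most_common()
```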

failure patterns

C009 HIGH OBSERVED REPEATEDLY 7x High · 194t

When the Google Ads API returns an error or timeout for a specific account, the agent retries once after 60 seconds. If the retry fails, it writes "ACCOUNT_UNAVAILABLE" to the shared state with the timestamp. It does not skip the account silently.

Why: The ad monitor had a try/catch that swallowed API errors and continued to the next account. The shared state file looked complete -- it had entries for all 12 clients. But 2 entries were stale copies from yesterday's data because the error handler wrote the previous values as fallback. The founder didn't know he was looking at yesterday's numbers for 2 accounts.

Failure mode: Silent error handling with fallback-to-stale produces state files that look complete but contain outdated data for specific accounts.

Scope: All agents calling external APIs.
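A sketch of the retry-then-mark pattern; `fetch_fn` stands in for the real API call, and the state entry shape is an assumption. The retry count and ACCOUNT_UNAVAILABLE marker come from the rule:

```python
import time
from datetime import datetime, timezone

def fetch_account(account_id, fetch_fn, state, retry_delay=60):
    """Try once, retry once after retry_delay seconds; on double failure,
    write an explicit ACCOUNT_UNAVAILABLE entry instead of skipping or
    falling back to stale data."""
    for attempt in range(2):
        try:
            state[account_id] = {"status": "OK", "data": fetch_fn(account_id)}
            return state[account_id]
        except Exception:
            if attempt == 0:
                time.sleep(retry_delay)
    state[account_id] = {
        "status": "ACCOUNT_UNAVAILABLE",
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    return state[account_id]
```

Note what is deliberately absent: no fallback to yesterday's values, which is the failure mode described above.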

C010 MEDIUM OBSERVED ONCE 3x Moderate · 153t

Never use display names for client matching across systems. Use account IDs.

Why: We onboarded "Smith & Sons Roofing" and "Smith's Roofing" in the same month. The weekly report agent matched both to a single "Smith" entry in the CRM using fuzzy name matching. The combined report showed $11,200 in spend when Smith & Sons was at $7,800 and Smith's Roofing was at $3,400. The founder quoted the wrong number on a client call.

Failure mode: Fuzzy name matching merges distinct clients with similar names. Merged data is presented as a single entity. Client-facing communications cite wrong numbers.

Scope: All agents referencing client data.
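A sketch of the ID-keyed join this rule mandates; the record shapes are illustrative. The essential choice is that unmatched IDs are surfaced, never fuzzy-resolved:

```python
def join_by_account_id(ads_rows, crm_rows):
    """Join ads data to CRM records strictly on account_id. Rows with no
    CRM match are returned separately for a human to reconcile -- the
    agent never guesses by display name."""
    crm_by_id = {row["account_id"]: row for row in crm_rows}
    joined, unmatched = [], []
    for row in ads_rows:
        crm = crm_by_id.get(row["account_id"])
        if crm is None:
            unmatched.append(row["account_id"])
        else:
            joined.append({**crm, **row})
    return joined, unmatched
```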

human-ai boundary conditions

C011 MEDIUM OBSERVED ONCE 3x Moderate · 190t

The founder reviews every client-facing report before delivery. No report goes out on auto-send, regardless of agent accuracy track record.

Why: With 12 clients and a small team, every relationship is critical. One bad report costs more to repair than the 15 minutes of daily review. We tried auto-sending for 3 "stable" clients for 2 weeks. In week 2, one report included a negative ROAS figure due to a conversion tracking gap. The client screenshotted it and sent it to a competitor agency for a second opinion. We kept the client, but the competitor now had our reporting format and our client's data.

Failure mode: Auto-sent reports with errors create competitive vulnerability. Clients share bad reports with competitors. Agency loses information asymmetry.

Scope: All reporting agents.

C012 MEDIUM OBSERVED ONCE 3x Moderate · 167t

Budget recommendations require the media buyer's sign-off, not just the founder's.

Why: The founder is a strategist, not a tactician. He approved a budget increase recommendation from the audit agent without consulting the media buyer. The media buyer knew the account had just changed bidding strategies and needed 2 weeks of learning period data before increasing spend. The budget increase during the learning period inflated CPL by 55% for 10 days.

Failure mode: Strategic decision-maker approves tactical changes without practitioner input. Agent recommendation + founder authority skips the person who understands the operational context.

Scope: Ad management agents, budget recommendations.

operational heuristics

C013 MEDIUM OBSERVED REPEATEDLY 4x Moderate · 197t

Conversion tracking status checks run daily and flag any account where conversion actions have recorded zero conversions for 48+ hours (for accounts that typically convert daily).

Why: A dentist client's Google Tag stopped firing after a WordPress plugin update. The agent saw "zero leads today" and reported it as low performance. It took 4 days for the media buyer to realize it was a tracking issue, not a performance issue. During those 4 days, the actual leads were coming in (the phone was ringing) but nothing was being attributed. Bidding algorithms degraded because they thought nothing was converting.

Failure mode: Tracking failure is misdiagnosed as performance decline. Bidding algorithms lose signal. Agents report "bad performance" when the actual problem is measurement.

Scope: Ad monitoring agents.
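A sketch of the daily check, assuming each account record carries a last-conversion timestamp and a flag for whether it typically converts daily; both field names are illustrative:

```python
from datetime import datetime, timedelta, timezone

def flag_tracking_gaps(accounts, now=None, window_hours=48):
    """Flag daily-converting accounts with zero conversions in the window.
    A missing last_conversion_at counts as a gap, not as healthy."""
    now = now or datetime.now(timezone.utc)
    flagged = []
    for acct in accounts:
        if not acct.get("typically_converts_daily"):
            continue
        last = acct.get("last_conversion_at")
        if last is None or (now - last) >= timedelta(hours=window_hours):
            flagged.append(acct["account_id"])
    return flagged
```

The flag should read "possible tracking failure," not "low performance" -- the whole point of the rule is to keep those two diagnoses separate.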

core operating rules

C014 MEDIUM OBSERVED REPEATEDLY 4x Moderate · 154t

Agent-generated content must never use em dashes, exclamation points in the first sentence, or the phrase "I wanted to reach out" in any client-facing draft.

Why: Three clients independently mentioned that our emails "sound like AI." The founder traced it to em dashes and a specific sentence pattern the inbox assistant favored. After removing these markers, zero clients have commented on email tone in 6 weeks.

Failure mode: AI-generated text patterns are recognizable to clients who receive a lot of AI-written communication. Detection erodes the personal touch that small agencies rely on.

Scope: All agents that draft client-facing text.
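The three banned markers lend themselves to a pre-send lint. A minimal sketch; treating a double hyphen as an em-dash surrogate and the naive sentence split are both assumptions:

```python
import re

def style_violations(draft: str):
    """Return the list of banned AI-text markers found in a draft."""
    violations = []
    if "\u2014" in draft or "--" in draft:  # em dash, or "--" as surrogate
        violations.append("em dash")
    # Naive first-sentence split: break at the first ., !, or ? + whitespace.
    first_sentence = re.split(r"(?<=[.!?])\s", draft.strip(), maxsplit=1)[0]
    if "!" in first_sentence:
        violations.append("exclamation point in first sentence")
    if "i wanted to reach out" in draft.lower():
        violations.append('phrase "I wanted to reach out"')
    return violations
```

A draft with any violation goes back to the inbox assistant for a rewrite before it reaches the founder's review queue.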

failure patterns

C015 LOW INFERENCE 0.8x Negative · 136t

When an agent cannot complete its task, it must write a failure entry to its shared state file explaining what failed and when. An empty or missing file is never acceptable.

Why: The campaign audit agent hit a rate limit and crashed without writing anything. Its shared state file was empty. The briefing agent skipped the audit section entirely -- no mention that it was missing. The founder assumed the audit ran clean. It hadn't run at all.

Failure mode: Missing output is indistinguishable from "nothing to report." Humans assume silence is health.

Scope: All agents.
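A sketch of a wrapper that makes an empty state file impossible: every run writes either a success entry or a failure entry. The entry shape is an assumption:

```python
import json
from datetime import datetime, timezone

def run_with_failure_entry(agent_name, task, state_path):
    """Run a zero-argument task; on any exception, write a FAILED entry
    recording what broke and when, so silence never reads as health."""
    try:
        result = task()
        entry = {"agent": agent_name, "status": "OK", "result": result}
    except Exception as exc:
        entry = {
            "agent": agent_name,
            "status": "FAILED",
            "error": f"{type(exc).__name__}: {exc}",
            "failed_at": datetime.now(timezone.utc).isoformat(),
        }
    with open(state_path, "w") as f:
        json.dump(entry, f, indent=2)
    return entry
```

With this in place, the briefing agent can distinguish "audit ran clean" from "audit never ran" by reading the status field.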

human-ai boundary conditions

C016 LOW HUMAN DEFINED RULE 1.5x Low · 183t

Any agent action that touches a client's ad account (pause, enable, budget change, bid adjustment) requires two-human approval: the media buyer AND the founder.

Why: This has never been violated because it was a founding rule. The founder worked at an agency where an automated rule paused a client's best-performing campaign on a Friday evening. The campaign was off for the entire weekend -- $3,800 in lost leads for a personal injury lawyer whose weekend calls are highest-value. Two-human approval is slow. Sightline accepts the latency.

Failure mode: Single-approval automation acts on stale or narrow context. Weekend and evening actions go unreviewed. High-value campaigns pause during peak periods.

Scope: All agents with ad account write access.