# McFadyen Digital

### Operating Rules
All AI-generated client deliverables (proposals, architecture documents, code, recommendations) require human review and explicit sign-off by a named delivery lead before external distribution.
Why: Our reputation is built on 250+ successful enterprise implementations. A hallucinated architecture recommendation for a $2M marketplace build could cost us the engagement and the reference.
Failure mode: An AI-drafted proposal included a Mirakl feature that had been deprecated 6 months prior. The client's CTO caught it in the review meeting. We recovered, but it cost us credibility on the deal and added two weeks to the sales cycle.
Scope: All client-facing deliverables, proposals, SOWs, architecture documents, and code reviews.
Client data, source code, and engagement details must never be processed by public AI models. All AI processing uses our private GCP-hosted LLM deployment or enterprise API agreements with explicit DPAs.
Why: We handle source code and infrastructure details for Fortune 500 retailers, military contracts, and financial services companies. A single data leak would be existential.
Failure mode: A developer used a public code assistant to debug a client's checkout integration. The code snippet contained API keys embedded in comments. We caught it in a security audit 3 days later. No breach occurred, but it triggered an emergency policy rollout.
Scope: All AI tools, all employees, all offices. No exceptions.
AI cost per engagement must not exceed 3% of project margin. Track monthly per project.
Why: As a services business, margin discipline is survival. AI tools are force multipliers, not cost centers. If AI spend on a project exceeds 3% of margin, we are using it wrong.
Failure mode: On one engagement, the team spun up an AI-powered testing suite that ran continuously against a staging environment. The compute bill hit $14K in a month on a $180K project. The PM did not catch it until the monthly P&L review.
Scope: All active client engagements. Internal R&D has a separate budget.
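As a rough illustration, the monthly check could look like the sketch below. The field names, the sample project, and the margin figure are hypothetical; only the 3% ceiling comes from the rule.

```python
# Hypothetical sketch of the 3%-of-margin guardrail. Field names and the
# sample data are illustrative; only the 0.03 ceiling is from the rule.
from dataclasses import dataclass

AI_SPEND_CEILING = 0.03  # AI cost must not exceed 3% of project margin


@dataclass
class EngagementMonth:
    project_id: str
    ai_spend: float        # monthly AI tooling/compute cost, USD
    project_margin: float  # monthly project margin, USD


def flag_overspend(rows: list[EngagementMonth]) -> list[str]:
    """Return project IDs whose AI spend exceeded 3% of margin this month."""
    return [
        r.project_id
        for r in rows
        if r.project_margin > 0 and r.ai_spend / r.project_margin > AI_SPEND_CEILING
    ]


if __name__ == "__main__":
    # A spend pattern like the $14K testing-suite incident would trip this
    # check at the first monthly run (margin figure below is hypothetical).
    print(flag_overspend([EngagementMonth("MKT-042", 14_000, 60_000)]))
```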
---

### Agent Roles and Authority
The Proposal Engine (AI agent) drafts RFP responses and SOWs by pulling from our 250+ engagement library, matching past project patterns to incoming requirements. It generates a scored first draft with confidence ratings per section. A Solutions Architect must review and approve before it moves to the client.
Why: RFP response time is a competitive advantage. Our average was 12 days. The Proposal Engine cut it to 4 days with higher win rates because it surfaces relevant case studies automatically.
Failure mode: The engine once pulled a case study from a client under NDA as a reference in a proposal for their direct competitor. The Solutions Architect caught it. We now run a conflict-of-interest check as a hard gate before any case study inclusion.
Scope: All RFP responses, SOWs, and proposal documents.
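A minimal sketch of what that conflict-of-interest hard gate could look like. The registry structure and helper names are assumptions, not our actual schema:

```python
# Illustrative sketch of the conflict-of-interest hard gate described above.
# Hypothetical registry: case-study client -> set of competitor names it may
# never be referenced against (derived from NDA terms).
NDA_CONFLICTS: dict[str, set[str]] = {
    "ClientA": {"ProspectX", "ProspectY"},
}


def case_study_allowed(case_study_client: str, prospect: str) -> bool:
    """Block any case study whose client is under NDA against the prospect."""
    return prospect not in NDA_CONFLICTS.get(case_study_client, set())


def filter_case_studies(candidates: list[str], prospect: str) -> list[str]:
    # Runs before any case study reaches the draft; there is no override path.
    return [c for c in candidates if case_study_allowed(c, prospect)]
```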
The Knowledge Navigator (AI agent) indexes all internal Confluence documentation, Slack conversations, and GitHub repositories. Employees query it in natural language. It returns answers with source citations. It never creates or modifies documentation -- read-only.
Why: With 240 people across 5 offices, institutional knowledge was trapped in individual heads and buried Confluence pages. New hires took 90 days to become productive. The Knowledge Navigator cut onboarding ramp to ~55 days.
Failure mode: The navigator surfaced an outdated Confluence page about our VTEX integration patterns that had not been updated after a major API change. A junior developer followed it, burned 3 days, and introduced a regression. We now tag documentation with a staleness score and the navigator warns when citing pages older than 6 months.
Scope: All internal documentation, Slack history (non-client channels only), and public GitHub repos.
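The staleness warning on citations can be a simple date check; a sketch with hypothetical field names, where only the 6-month window comes from the rule:

```python
# Minimal sketch of the citation staleness warning. Page fields are assumed;
# the ~6-month window is from the rule above.
from datetime import datetime, timedelta

STALENESS_WINDOW = timedelta(days=182)  # ~6 months


def cite(page_title: str, last_updated: datetime, now: datetime | None = None) -> str:
    """Format a citation, prepending a staleness warning for old pages."""
    now = now or datetime.now()
    citation = f"[source: {page_title}]"
    if now - last_updated > STALENESS_WINDOW:
        citation = f"[STALE as of {last_updated:%Y-%m-%d}] {citation}"
    return citation
```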
The Delivery Monitor (AI agent) tracks all active Jira projects across delivery teams, flags velocity drops >20%, missed sprint commitments, and scope creep patterns. It reports to the SVP of Global Delivery daily. It does not reassign tasks, modify sprints, or communicate with clients.
Why: With 40+ concurrent engagements across timezones, delivery risk was invisible until it was too late. The SVP cannot review every standup note from every team.
Failure mode: When it flagged too aggressively during the early tuning period, PMs started ignoring alerts. We had to calibrate thresholds per project type -- a 20% velocity drop on a 6-month marketplace build means something different than on a 3-week integration sprint.
Scope: All active Jira projects across all delivery centers.
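Per-type calibration might look like the following sketch. The project types and non-default thresholds are illustrative; only the 20% default comes from the rule:

```python
# Sketch of per-project-type velocity thresholds. Types and the 0.35 figure
# are hypothetical calibration values; 0.20 is the default from the rule.
VELOCITY_DROP_THRESHOLDS = {
    "marketplace_build": 0.20,    # long engagements absorb noise; 20% is meaningful
    "integration_sprint": 0.35,   # short sprints are noisy; require a larger drop
}
DEFAULT_THRESHOLD = 0.20


def velocity_at_risk(project_type: str, prev_velocity: float, curr_velocity: float) -> bool:
    """Flag a project when velocity drops past the threshold for its type."""
    if prev_velocity <= 0:
        return False  # no baseline yet
    drop = (prev_velocity - curr_velocity) / prev_velocity
    return drop > VELOCITY_DROP_THRESHOLDS.get(project_type, DEFAULT_THRESHOLD)
```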
The Code Review Assistant (AI agent) performs first-pass code reviews on all PRs, checking for security vulnerabilities, platform-specific anti-patterns (Adobe Commerce, commercetools, VTEX), and adherence to our internal coding standards. It leaves inline comments. A senior developer must still approve the PR.
Why: Code review was the bottleneck in our delivery pipeline. Senior developers were spending 30% of their time reviewing junior code. The assistant handles the mechanical checks so senior devs can focus on architecture and logic.
Failure mode: The assistant approved a PR that passed all mechanical checks but introduced a business logic error in marketplace commission calculations. It calculated seller payouts at the wrong tier. A senior developer would have caught the domain error. We now require business logic sign-off as a separate gate from code quality.
Scope: All PRs across all platform codebases.
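The separate gates can be modeled as independent flags that must all hold before merge; a hypothetical sketch:

```python
# Sketch of the two-gate merge requirement; flag names are hypothetical.
from dataclasses import dataclass


@dataclass
class PullRequest:
    mechanical_checks_passed: bool  # AI assistant: security, anti-patterns, standards
    senior_code_approval: bool      # human: architecture and logic review
    business_logic_signoff: bool    # human: domain correctness (e.g., commission tiers)


def may_merge(pr: PullRequest) -> bool:
    # The gates are independent: passing mechanical checks never substitutes
    # for the domain sign-off that the commission-calculation incident required.
    return (pr.mechanical_checks_passed
            and pr.senior_code_approval
            and pr.business_logic_signoff)
```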
The Sales Intelligence Agent monitors HubSpot pipeline, enriches incoming leads with firmographic data, scores them against our ICP (B2B distributors/manufacturers with $50M+ revenue, existing marketplace aspirations), and routes qualified leads to the CRO's team with a priority score.
Why: Our CRO Ed Coke has sold over $1B in commerce services. His time should be spent on $500K+ opportunities, not qualifying $30K requests. The agent handles triage.
Failure mode: The scoring model initially weighted company size too heavily and deprioritized a mid-market chemical distributor that turned into our ChemDirect engagement -- one of our highest-profile marketplace launches. We added "marketplace intent signals" as a scoring factor.
Scope: All inbound leads via website, NRF events, and partner referrals.
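A toy version of the rebalanced scoring, with hypothetical weights, shows how intent signals keep company size from dominating:

```python
# Illustrative lead-scoring sketch; all weights and caps are assumptions.
# The lesson from the ChemDirect miss: size alone cannot dominate the score.
def score_lead(revenue_musd: float, is_b2b_distributor: bool,
               marketplace_intent_signals: int) -> float:
    """Return a 0-100 priority score combining ICP fit and marketplace intent."""
    size_fit = min(revenue_musd / 50.0, 2.0) * 20     # capped so size can't dominate
    icp_fit = 30 if is_b2b_distributor else 0         # B2B distributor/manufacturer
    intent = min(marketplace_intent_signals, 3) * 10  # e.g., RFP language, job posts
    return min(size_fit + icp_fit + intent, 100.0)
```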
The Marketplace Analyst (AI agent) monitors live marketplace deployments for our managed services clients -- tracking seller onboarding velocity, GMV trends, catalog health, and commission anomalies. It generates weekly health reports for account managers. It does not modify marketplace configurations or contact sellers directly.
Why: Clients on our managed marketplace services expect proactive issue detection. A marketplace with degrading seller health metrics needs intervention before sellers churn, not after.
Failure mode: The analyst flagged a "GMV decline" that was actually a seasonal pattern (post-holiday normalization). The account manager escalated unnecessarily, alarming the client. We now require 4-week rolling comparisons against same-period prior year before flagging GMV declines.
Scope: All managed marketplace services clients.
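The seasonality-aware check reduces to comparing rolling windows year over year; a sketch with data access assumed and an illustrative tolerance:

```python
# Sketch of the seasonality-aware GMV check. The 4-week rolling window and
# prior-year comparison come from the rule; the tolerance value is illustrative.
from statistics import mean


def gmv_decline_alert(recent_4w: list[float],
                      same_period_prior_year_4w: list[float],
                      tolerance: float = 0.10) -> bool:
    """Flag only when this year's 4-week GMV trails the same period last year."""
    current = mean(recent_4w)
    baseline = mean(same_period_prior_year_4w)
    if baseline <= 0:
        return False  # no prior-year baseline (e.g., first-year marketplace)
    # Post-holiday normalization appears in both series, so it no longer trips the alert.
    return (baseline - current) / baseline > tolerance
```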
---

### Coordination Patterns
Timezone handoffs between delivery centers (Virginia, Brazil, India) must include an AI-generated handoff summary posted to the project's Slack channel at the end of each team's working day. The summary includes: work completed, blockers encountered, decisions needed, and next priorities.
Why: We lost 2-3 days per sprint in "context reconstruction" where the next timezone team had to read through Jira comments and Slack threads to figure out where things stood. The handoff summaries cut this to under 30 minutes.
Failure mode: Teams started relying on the AI summary and stopped updating Jira tickets directly. When the summarizer hallucinated a "completed" status for a task that was actually blocked, the downstream team built on top of broken code. We now require Jira status to be the source of truth -- the AI summarizes Jira, it does not replace it.
Scope: All multi-timezone engagements (approximately 70% of active projects).
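Because Jira is the source of truth, the summarizer can be a straight transformation of ticket state rather than free-form generation; a minimal sketch with assumed issue fields and the Slack posting step omitted:

```python
# Sketch of a handoff summary built from Jira as the source of truth.
# Issue field names are assumptions; the four sections come from the rule.
def handoff_summary(issues: list[dict]) -> str:
    """Group Jira issues into the four required handoff sections."""
    sections = {"Work completed": [], "Blockers encountered": [],
                "Decisions needed": [], "Next priorities": []}
    for issue in issues:
        line = f'{issue["key"]}: {issue["summary"]}'
        if issue["status"] == "Done":
            sections["Work completed"].append(line)
        elif issue["status"] == "Blocked":
            sections["Blockers encountered"].append(line)
        elif issue.get("needs_decision"):
            sections["Decisions needed"].append(line)
        else:
            sections["Next priorities"].append(line)
    return "\n\n".join(
        f"*{title}*\n" + "\n".join(f"- {item}" for item in lines)
        for title, lines in sections.items() if lines
    )
```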
Weekly AI-generated "Suite Spot" competitive intelligence briefs are produced for the leadership team, tracking competitor platform releases, partnership announcements, and pricing changes across Mirakl, VTEX, commercetools, Shopify, and Salesforce Commerce Cloud ecosystems.
Why: As the publisher of the Marketplace Suite Spot Report, we must maintain the most current competitive intelligence in the industry. Falling behind on a platform capability change directly impacts our advisory credibility.
Failure mode: The brief once missed a commercetools pricing model change because the source was announced via a partner webinar, not a press release. Our monitoring was over-indexed on written publications. We added webinar transcript scanning.
Scope: All leadership team members, Solutions Architects, and the marketing team.
The Proposal Engine and Sales Intelligence Agent share a common client/prospect database. When the Sales Agent qualifies a lead, it pre-loads the Proposal Engine with firmographic data, industry vertical, and likely platform fit so the first draft is contextualized before a human touches it.
Why: Eliminates the "cold start" problem where proposal writers spend the first day just researching the prospect. The agent-to-agent handoff means the proposal draft is already 40% contextualized when the Solutions Architect opens it.
Failure mode: The Sales Agent once passed incorrect revenue data (confused parent company with subsidiary), which caused the Proposal Engine to scope the engagement for a $2B enterprise when the actual buyer was a $90M division. The SA caught it, but it burned half a day re-scoping.
Scope: All new business proposals originating from inbound leads.
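A sketch of the handoff payload; the fields are illustrative, but carrying the buying legal entity (not the parent brand) is the lesson from the failure above:

```python
# Sketch of the Sales Agent -> Proposal Engine handoff payload; field names
# are hypothetical. The subsidiary/parent mix-up is why the payload carries
# the legal entity being scoped, not just the brand name.
from dataclasses import dataclass


@dataclass
class QualifiedLeadHandoff:
    company_legal_entity: str  # the actual buying entity, not the parent brand
    revenue_musd: float        # revenue of the buying entity, not the group
    industry_vertical: str
    likely_platform_fit: str   # e.g., "Mirakl", "VTEX", "commercetools"
    priority_score: float
```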
---

### Operational Heuristics
When the Delivery Monitor flags a project as "at risk" (velocity drop >20% for 2 consecutive sprints), the PM must acknowledge within 24 hours with a remediation plan. If no acknowledgment in 24 hours, it escalates to the SVP of Global Delivery automatically.
Why: Silent project degradation was our biggest delivery risk. PMs naturally want to "fix it internally" before escalating. The forced acknowledgment window prevents hiding.
Failure mode: A PM acknowledged the alert but submitted a boilerplate remediation plan ("will add resources next sprint") without actually investigating the root cause. The project continued to degrade. We now require the PM to cite specific Jira tickets and the root cause in the acknowledgment.
Scope: All active delivery projects.
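The acknowledgment check can be mechanical; a sketch assuming Jira-style ticket keys and a plain-text remediation plan:

```python
# Sketch of the acknowledgment validator. The boilerplate-plan incident is
# why an acknowledgment without cited tickets and a root cause is rejected.
import re


def valid_acknowledgment(text: str) -> bool:
    """Require at least one Jira ticket reference and an explicit root cause."""
    has_ticket = bool(re.search(r"\b[A-Z][A-Z0-9]+-\d+\b", text))  # e.g., MKT-123
    has_root_cause = "root cause" in text.lower()
    return has_ticket and has_root_cause
```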
AI code review comments that go unaddressed for 48 hours are auto-escalated to the tech lead. This prevents PR queues from stalling because developers dismiss AI feedback.
Why: Early adoption showed developers ignoring AI comments at a 40% rate, assuming they were false positives. Some were. Many were not. The escalation creates accountability.
Failure mode: The escalation initially went to the PM instead of the tech lead. PMs did not have the technical context to evaluate whether the AI comment was valid. We rerouted to tech leads who can make the call in 5 minutes.
Scope: All platform codebases.
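A sketch of the escalation sweep, with assumed comment fields and the tech-lead routing left to the caller:

```python
# Sketch of the 48-hour escalation check; comment fields are assumptions.
from datetime import datetime, timedelta

ESCALATION_WINDOW = timedelta(hours=48)


def comments_to_escalate(comments: list[dict], now: datetime) -> list[dict]:
    """Return unaddressed AI review comments older than 48h for tech-lead routing."""
    return [
        c for c in comments
        if c["author"] == "code-review-assistant"
        and not c["resolved"]
        and now - c["created_at"] > ESCALATION_WINDOW
    ]
```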
The Knowledge Navigator's staleness scoring triggers automatic review requests to documentation owners when a page has not been updated in 6 months and has been cited more than 10 times.
Why: High-citation, stale documentation is the most dangerous kind -- it is trusted precisely because it is frequently referenced, but the information may be outdated.
Failure mode: Documentation owners were overwhelmed with review requests during the initial rollout (we had 3+ years of Confluence debt). We added a priority queue based on citation frequency × staleness to focus on the most dangerous pages first.
Scope: All internal Confluence spaces.
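A sketch of the trigger and the priority score; only the 6-month and 10-citation thresholds come from the rule, the scoring product is the heuristic named above:

```python
# Sketch of the review-request trigger and priority queue scoring.
def needs_review(citations: int, days_since_update: int) -> bool:
    """Trigger from the rule: stale beyond ~6 months and cited more than 10 times."""
    return days_since_update > 182 and citations > 10


def review_priority(citations: int, days_since_update: int) -> float:
    """Citation frequency x staleness: trusted-but-old pages float to the top."""
    return citations * days_since_update
```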
---

### Failure Patterns
AI agents must never autonomously communicate with clients, sellers, or external stakeholders. All external communication flows through a named human.
Why: A test deployment of the Marketplace Analyst accidentally sent a seller health alert directly to a marketplace operator's Slack channel (misconfigured webhook). The client saw raw internal scoring data including "churn risk: high" for three of their top sellers. The account team spent two weeks in damage control.
Failure mode: Client trust erosion, internal data exposure, potential contract violation. The incident led to our blanket rule: AI agents have zero external communication authority.
Scope: All AI agents, all communication channels.
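The blanket rule reduces to a single gate with no override path; a sketch with assumed channel metadata:

```python
# Sketch of the zero-external-authority gate; channel metadata is assumed.
# Agent-originated messages to anything outside internal channels are blocked
# (the misconfigured-webhook lesson above), with no override path.
def may_send(sender_is_agent: bool, channel_is_internal: bool) -> bool:
    if sender_is_agent and not channel_is_internal:
        return False  # blocked; a named human must relay external communication
    return True
```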
Platform-specific AI models must be retrained or re-validated within 30 days of any major platform release (Adobe Commerce, commercetools, VTEX, Mirakl).
Why: Our Code Review Assistant was trained on Adobe Commerce 2.4.5 patterns. When 2.4.6 shipped with breaking changes to the checkout API, the assistant continued approving code written against the old patterns. Three PRs shipped to staging with deprecated method calls before a senior dev flagged it.
Failure mode: Stale AI models approve code against deprecated platform APIs, introducing technical debt and potential runtime failures in client environments.
Scope: All platform-specific AI models and review tools.
When two AI agents produce conflicting recommendations (e.g., Sales Agent scores a lead as high-priority while the Proposal Engine flags scope concerns), the conflict must surface to a human decision-maker within 4 hours. Neither agent may override the other.
Why: Early in our rollout, the Sales Agent pushed a lead through as "high-fit" while the Proposal Engine flagged the engagement as requiring capabilities we had never delivered (custom blockchain-based marketplace settlement). The agents operated in parallel without conflict detection. The SA discovered the mismatch only after spending a day on a proposal we should have declined.
Failure mode: Wasted senior consultant time, potential over-commitment to engagements outside our capability, and reputational risk if we win work we cannot deliver.
Scope: All inter-agent handoffs and shared data flows.
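A sketch of the conflict record and the 4-hour sweep; the record structure is illustrative, the SLA comes from the rule, and neither agent's output wins:

```python
# Sketch of inter-agent conflict surfacing. The 4-hour SLA is from the rule;
# the record fields are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timedelta

CONFLICT_SLA = timedelta(hours=4)


@dataclass
class AgentConflict:
    lead_id: str
    sales_agent_verdict: str      # e.g., "high-priority"
    proposal_engine_verdict: str  # e.g., "scope concern: undelivered capability"
    raised_at: datetime


def overdue_conflicts(conflicts: list[AgentConflict], now: datetime) -> list[AgentConflict]:
    """Conflicts still unreviewed past the 4-hour window, for immediate human routing."""
    return [c for c in conflicts if now - c.raised_at > CONFLICT_SLA]
```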
---

### Human-AI Boundary Conditions
Pricing decisions, discount approvals, and engagement scoping are human-only. AI agents may suggest pricing based on historical data, but the CRO or a named VP must approve all commercial terms.
Why: Our pricing varies dramatically based on platform complexity, client maturity, offshore/onshore mix, and strategic account value. The Sales Agent once suggested a $280K price for an engagement that the CRO priced at $450K because of strategic upsell potential the AI could not see. The human context on relationship dynamics and long-term account value is irreplaceable.
Failure mode: Under-pricing erodes margin. Over-pricing loses deals. Both damage the business differently, and AI cannot weigh the tradeoffs with sufficient context on relationship history and strategic intent.
Scope: All commercial decisions, proposals, and SOWs.
Hiring decisions, performance evaluations, and team composition for client engagements are human-only. AI may surface utilization data, skills matching, and availability, but the staffing decision is made by the delivery lead.
Why: Staffing a $1M marketplace build is not a skills-matching problem. It requires understanding team dynamics, growth opportunities for junior staff, client personality fit, and timezone overlap preferences. AI sees the spreadsheet. Humans see the team.
Failure mode: An early experiment with AI-recommended staffing suggested putting two senior developers with known collaboration friction on the same engagement because their skills were complementary on paper. The delivery lead overrode it. We formalized the boundary.
Scope: All staffing and HR decisions.

---

*Generated by the Organizational Translucence Protocol (OTP) v1.0*
*Organization: McFadyen Digital | Generated: 2026-03-16*