This document defines where human oversight is required and where agents have full autonomy. These claims draw the trust boundary between human and AI decision-making. Getting it wrong either bottlenecks the organization (too much human oversight) or creates risk (too little).
All external communications require human approval.
Why: AI-drafted communications can state incorrect facts with full confidence.
Failure mode: An email goes out with incorrect numbers. The client loses trust.
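One way to enforce this boundary is a hard gate at the send step rather than a prompt-level instruction the agent could ignore. A minimal sketch, assuming a simple message record and a pluggable transport (the `OutboundMessage` shape and `transport` interface are illustrative, not an existing API):

```python
from dataclasses import dataclass
from typing import Optional

class ApprovalRequiredError(Exception):
    """Raised when an external send is attempted without human sign-off."""

@dataclass
class OutboundMessage:
    recipient: str
    body: str
    approved_by: Optional[str] = None  # name of the human approver, or None

def send_external(message: OutboundMessage, transport) -> None:
    # Refuse to send any external communication without a named human approver.
    if message.approved_by is None:
        raise ApprovalRequiredError(
            f"External message to {message.recipient} has no human approval"
        )
    transport.send(message.recipient, message.body)
```

The point of the design is that approval is a precondition checked in code, so a misbehaving agent cannot talk its way past the gate.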
Pricing and financial commitments require a human decision.
Why: They carry legal and relationship implications.
Failure mode: An agent applies a discount that violates margin floors.
Pricing decisions, discount approvals, and engagement scoping are human-only. AI agents may suggest pricing based on historical data, but the CRO or a named VP must approve all commercial terms.
Why: Our pricing varies dramatically based on platform complexity, client maturity, offshore/onshore mix, and strategic account value. The Sales Agent once suggested a $280K price for an engagement that the CRO priced at $450K because of strategic upsell potential the AI could not see. The human context on relationship dynamics and long-term account value is irreplaceable.
Failure mode: Under-pricing erodes margin. Over-pricing loses deals. Both damage the business differently, and AI cannot weigh the tradeoffs with sufficient context on relationship history and strategic intent.
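The "AI suggests, a named human commits" split can be made mechanical: the AI figure is recorded for audit but is never binding, and only an allowlisted role can set the committed price. A minimal sketch, assuming illustrative role names (`APPROVERS` and the returned record shape are assumptions, not an existing system):

```python
APPROVERS = {"CRO", "VP Sales"}  # illustrative allowlist of named human roles

def approve_pricing(ai_suggested: float, approver: str, final_price: float) -> dict:
    """Record a commercial commitment. The AI suggestion is advisory only;
    the committed price is whatever the named human approver decides."""
    if approver not in APPROVERS:
        raise PermissionError(f"{approver} may not approve commercial terms")
    return {
        "ai_suggested": ai_suggested,  # kept for audit, never binding
        "committed": final_price,
        "approved_by": approver,
    }
```

In the $280K/$450K example above, the agent's number survives only as an audit field; the CRO's price is the one that binds.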
Hiring decisions, performance evaluations, and team composition for client engagements are human-only. AI may surface utilization data, skills matching, and availability, but the staffing decision is made by the delivery lead.
Why: Staffing a $1M marketplace build is not a skills-matching problem. It requires understanding team dynamics, growth opportunities for junior staff, client personality fit, and timezone overlap preferences. AI sees the spreadsheet. Humans see the team.
Failure mode: An early experiment with AI-recommended staffing suggested putting two senior developers with known collaboration friction on the same engagement because their skills were complementary on paper. The delivery lead overrode it. We formalized the boundary.
Founder has unlimited override authority over all agents.
Why: A human must always be able to stop any AI action.
Failure mode: An agent publishes an unapproved spec change with no way to reverse it.
IP strategist has kill authority on the entire venture.
Why: External kill authority prevents sunk-cost fallacy.
Failure mode: The market thesis is invalidated, but the founder keeps building anyway.
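Both authorities above can be implemented as a central stop switch that every agent checks before acting. A minimal sketch, assuming illustrative role and agent names (the `Governor` class and its role sets are assumptions, not an existing component):

```python
class AuthorityError(Exception):
    """Raised when a role attempts an override it does not hold."""

class Governor:
    """Central stop switch. The founder can halt any individual agent;
    kill authority over everything is held externally as well, so a
    sunk-cost founder cannot be the only one who can pull the plug."""

    OVERRIDE_ROLES = {"founder"}
    KILL_ROLES = {"founder", "ip_strategist"}

    def __init__(self):
        self.halted_agents = set()
        self.killed = False

    def halt_agent(self, role: str, agent: str) -> None:
        if role not in self.OVERRIDE_ROLES:
            raise AuthorityError(f"{role} cannot halt agents")
        self.halted_agents.add(agent)

    def kill_all(self, role: str) -> None:
        if role not in self.KILL_ROLES:
            raise AuthorityError(f"{role} lacks kill authority")
        self.killed = True

    def may_act(self, agent: str) -> bool:
        # Every agent calls this before taking any action.
        return not self.killed and agent not in self.halted_agents
```

The design choice worth noting is that `may_act` is a gate the agents consult, not a request the agents grant: stopping never depends on the agent's cooperation.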
All external communications require founder approval. All pricing, contracts, and financial commitments are founder-only. All hiring and firing decisions are founder-only.
Why: These decisions have legal, financial, and relationship consequences that AI cannot fully assess.
Failure mode: Agent agrees to terms that violate margin floors. Agent sends outreach with incorrect positioning. Agent terminates a team member based on data without context.
Emotional and relational domains remain human. AI agents cannot substitute for human connection, empathy, or presence.
Why: Coaching breakthroughs, trust building, and relationship repair require human judgment, emotional intelligence, and authentic presence.
Failure mode: AI-managed human employee feels "managed by a system." Engagement drops. Performance follows. The manager-employee relationship becomes transactional.
AI writing must not sound like AI. No em dashes. No stacked adjectives. No filler openers. No hedging language. Read output aloud. If it sounds like LinkedIn or ChatGPT, rewrite.
Why: AI-sounding writing destroys trust. Clients, team members, and prospects detect AI patterns and disengage.
Failure mode: Agent sends coaching message with "Great job today!" opener and three em dashes. Human employee realizes the "manager" is a bot. Trust collapses.
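The "read it aloud" test can be partially automated as a pre-send lint for the specific tells this policy bans. A minimal sketch; the phrase lists are illustrative seeds, and this is a heuristic gate, not an AI detector:

```python
# Illustrative pattern lists; extend with tells observed in real output.
FILLER_OPENERS = ("great job", "i hope this finds you well", "just checking in")
HEDGES = ("it could be argued", "arguably", "you might want to consider")

def ai_tells(text: str) -> list:
    """Flag the surface patterns the writing policy bans. Matches are
    cheap string checks, meant to block a message pending human rewrite."""
    problems = []
    if "\u2014" in text:  # em dash
        problems.append("em dash")
    lines = text.strip().splitlines() or [""]
    first_line = lines[0].lower()
    if any(first_line.startswith(opener) for opener in FILLER_OPENERS):
        problems.append("filler opener")
    lowered = text.lower()
    problems += [f"hedge: {h}" for h in HEDGES if h in lowered]
    return problems
```

A message that trips any check gets bounced back for a human rewrite rather than sent, which enforces the "if it sounds like ChatGPT, rewrite" rule at the pipeline level.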