# Quality Gates

This skill teaches agents how to assess task complexity, enforce quality gates, and prevent wasted work on incomplete or poorly defined tasks.

**Key Principle:** Stop and clarify before proceeding with incomplete information. It is better to ask questions than to waste cycles on the wrong solution.

## Overview

### Auto-Activate Triggers

- Receiving a new task assignment
- Starting a complex feature implementation
- Before allocating work in Squad mode
- When requirements seem unclear or incomplete
- After 3 failed attempts at the same task
- When blocked by dependencies

### Manual Activation

- User asks for a complexity assessment
- Planning a multi-step project
- Before committing to a timeline

## Core Concepts

### Complexity Scoring (1-5 Scale)

| Level | Files | Lines | Time | Characteristics |
|-------|-------|-------|------|-----------------|
| 1 - Trivial | 1 | < 50 | < 30 min | No deps, no unknowns |
| 2 - Simple | 1-3 | 50-200 | 30 min - 2 hr | 0-1 deps, minimal unknowns |
| 3 - Moderate | 3-10 | 200-500 | 2-8 hr | 2-3 deps, some unknowns |
| 4 - Complex | 10-25 | 500-1500 | 8-24 hr | 4-6 deps, significant unknowns |
| 5 - Very Complex | 25+ | 1500+ | 24+ hr | 7+ deps, many unknowns |

**Load:** `Read("${CLAUDE_SKILL_DIR}/references/complexity-scoring.md")` for detailed examples and assessment formulas.

### Blocking Thresholds

| Condition | Threshold | Action |
|-----------|-----------|--------|
| YAGNI Gate | Justified ratio > 2.0 | BLOCK with simpler alternatives |
| YAGNI Warning | Justified ratio 1.5-2.0 | WARN with simpler alternatives |
| Critical Questions | ≥ 3 unanswered | BLOCK |
| Missing Dependencies | Any blocking | BLOCK |
| Failed Attempts | = 3 | BLOCK & ESCALATE |
| Evidence Failure | 2 fix attempts | BLOCK |
| Complexity Overflow | Level 4-5, no plan | BLOCK |

**WARNING conditions** (proceed with caution):

- Level 3 complexity
- 1-2 unanswered critical questions
- 1-2 failed attempts

**Load:** `Read("${CLAUDE_SKILL_DIR}/references/blocking-thresholds.md")` for escalation protocols and decision logic.

## References

Load on demand with `Read("${CLAUDE_SKILL_DIR}/references/<file>")`:

| File | Content |
|------|---------|
| complexity-scoring.md | Detailed Level 1-5 characteristics, quick assessment formula, checklist |
| blocking-thresholds.md | BLOCKING vs WARNING conditions, escalation protocol, gate decision logic, attempt tracking |
| workflows.md | Pre-task gate validation, stuck detection, complexity breakdown (Level 4-5), requirements completeness |
| gate-patterns.md | Gate validation process templates, context system integration, common pitfalls |
| llm-quality-validation.md | LLM-as-judge patterns, quality aspects, fail-open/closed strategies, graceful degradation, triple-consumer artifacts |

## Quick Reference

### Gate Decision Flow

0. YAGNI check (runs FIRST, before any implementation planning)
   - Read the project tier from scope-appropriate-architecture
   - Calculate `justified_complexity = planned_LOC / tier_appropriate_LOC`
   - If ratio > 2.0: BLOCK (must simplify)
   - If ratio 1.5-2.0: WARN (present a simpler alternative)
   - Security patterns are exempt from the YAGNI gate
1. Assess complexity (1-5)
2. Count unanswered critical questions
3. Check for blocked dependencies
4. Check the attempt count

```
if (yagni_ratio > 2.0)                                           -> BLOCK with simpler alternatives
else if (questions >= 3 || deps blocked || attempts >= 3)        -> BLOCK
else if (complexity >= 4 && no plan)                             -> BLOCK
else if (yagni_ratio > 1.5 || complexity == 3 || questions 1-2)  -> WARNING
else                                                             -> PASS
```

### Gate Check Template
Quality Gate: [Task Name]

- **Complexity:** Level [1-5]
- **Unanswered Critical Questions:** [Count]
- **Blocked Dependencies:** [List or None]
- **Failed Attempts:** [Count]
- **Status:** PASS / WARNING / BLOCKED
- **Can Proceed:** Yes / No

## Escalation Template
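The escalation template below can be assembled directly from tracked attempt state. A hedged sketch in the same style as the integration examples (the function name and field shapes are assumptions for illustration):

```javascript
// Illustrative sketch: render an escalation message from tracked state.
// Field names are assumptions for illustration, not a fixed API.
function buildEscalation({ task, blockType, attempts }) {
  const tried = attempts
    .map((a, i) => `${i + 1}. ${a.approach} - Failed: ${a.reason}`)
    .join('\n');
  return [
    'Escalation: Task Blocked',
    `Task: ${task}`,
    `Block Type: ${blockType}`,
    `Attempts: ${attempts.length}`,
    'What Was Tried',
    tried
  ].join('\n');
}
```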
Escalation: Task Blocked

- **Task:** [Description]
- **Block Type:** [Critical Questions / Dependencies / Stuck / Evidence]
- **Attempts:** [Count]

### What Was Tried

1. [Approach 1] - Failed: [Reason]
2. [Approach 2] - Failed: [Reason]

### Need Guidance On
[Specific question]

**Recommendation:** [Suggested action]

## Integration with Context System

```javascript
// Add a gate check record to the shared context
context.quality_gates = context.quality_gates || [];
context.quality_gates.push({
  task_id: taskId,
  timestamp: new Date().toISOString(),
  complexity_score: 3,
  gate_status: 'pass',          // pass, warning, blocked
  critical_questions_count: 1,
  unanswered_questions: 1,
  dependencies_blocked: 0,
  attempt_count: 0,
  can_proceed: true
});
```

## Integration with Evidence System

```javascript
// Before marking a task complete, require passing evidence
const evidence = context.quality_evidence;
const hasPassingEvidence = (
  evidence?.tests?.exit_code === 0 ||
  evidence?.build?.exit_code === 0
);

if (!hasPassingEvidence) {
  return { gate_status: 'blocked', reason: 'no_passing_evidence' };
}
```

## Best Practices

### Pattern Library

Track success/failure patterns across projects to avoid repeating mistakes and to warn proactively during code reviews.

| Rule | File | Key Pattern |
|------|------|-------------|
| YAGNI Gate | rules/yagni-gate.md | Pre-implementation scope check, justified complexity ratio, simpler alternatives |
| Pattern Library | rules/practices-code-standards.md | Success/failure tracking, confidence scoring, memory integration |
| Review Checklist | rules/practices-review-checklist.md | Category-based review, proactive anti-pattern detection |

### Pattern Confidence Levels

| Level | Meaning | Action |
|-------|---------|--------|
| Strong success | 3+ projects, 100% success | Always recommend |
| Mixed results | Both successes and failures | Context-dependent |
| Strong anti-pattern | 3+ projects, all failed | Block with explanation |

## Common Pitfalls

| Pitfall | Problem | Solution |
|---------|---------|----------|
| Skipping gates for "simple" tasks | Gets stuck later | Always run the gate check |
| Ignoring WARNING status | Undocumented assumptions cause issues | Document every assumption |
| Not tracking attempts | Cycles wasted on the same approach | Track every attempt, escalate at 3 |
| Proceeding when BLOCKED | Builds the wrong solution | NEVER bypass BLOCKED gates |
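The attempt-tracking pitfall above pairs naturally with the context integration shown earlier. A minimal sketch of a helper that records each attempt and flags escalation at the third failure, per the "= 3, BLOCK & ESCALATE" threshold (the context shape and function name are assumptions for illustration):

```javascript
// Illustrative sketch: record attempts per task and flag escalation at 3.
// The context shape mirrors the integration examples above; names are assumptions.
function recordAttempt(context, taskId, approach, reason) {
  context.attempts = context.attempts || {};
  const list = (context.attempts[taskId] = context.attempts[taskId] || []);
  list.push({ approach, reason, at: new Date().toISOString() });
  return { count: list.length, escalate: list.length >= 3 };
}
```

On the third recorded failure for a task, `escalate` becomes true, which is the point to stop retrying and emit the escalation template rather than attempting the same approach again.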