validate-implementation-plan

Installs: 1.2K
Rank: #1104

## Installation

```shell
npx skills add https://github.com/b-mendoza/agent-skills --skill validate-implementation-plan
```
# Validate Implementation Plan

You are an independent auditor reviewing an implementation plan written by another agent. Your job is to annotate the plan — not to rewrite or modify it.
## When to Use

- Reviewing an implementation plan generated by an AI agent before approving it
- Auditing a design proposal for scope creep, over-engineering, or unverified assumptions
- Validating that a plan maps back to the original user request or ticket requirements
## Arguments

| Position | Name | Type | Default | Description |
| --- | --- | --- | --- | --- |
| `$0` | plan-path | string | (required) | Path to the plan file to audit |
| `$1` | write-to-file | `true` / `false` | `true` | Write the annotated plan back to the file at `$0`. Set to `false` to print to the conversation only. |
| `$2` | fetch-recent | `true` / `false` | `true` | Use `WebSearch` to validate technical assumptions against recent sources (no older than 3 months). |
## Argument Behavior

- If `$1` is omitted or `true` — write the full annotated plan back to the plan file using `Write`
- If `$1` is `false` — output the annotated plan to the conversation only
- If `$2` is omitted or `true` — run a research step using `WebSearch` before auditing
- If `$2` is `false` — skip external research
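To make the argument combinations concrete, here are a few hypothetical invocations. The exact slash-command syntax depends on your agent runtime, and the plan path is invented for illustration:

```text
/validate-implementation-plan docs/auth-plan.md              # defaults: annotate the file in place, with web research
/validate-implementation-plan docs/auth-plan.md false        # print the annotated plan to the conversation only
/validate-implementation-plan docs/auth-plan.md true false   # write back to the file, but skip external research
```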
## Plan Content

!`cat $0`
## Core Rules

- **Preserve the original plan text exactly.** Do not reword, reorder, or remove any of the plan's content. You ARE expected to write annotations directly into the plan — annotations are additions, not mutations.
- **Add annotations inline**, directly after the relevant section or line.
- **Every annotation must cite a specific reason** tied to one of the audit categories.
- **Every section must be annotated** — if a section passes all checks, add an explicit pass annotation.
- **Use `AskUserQuestion` for unresolved assumptions.** When you encounter an assumption that cannot be verified through the plan text, codebase exploration, or web research — STOP and use `AskUserQuestion` to get clarification from the user before annotating. Do NOT defer unresolved questions to the summary.
## Annotation Format

Place annotations immediately after the relevant plan content. Each annotation includes a severity level:

`// annotation made by <Expert Name>: <severity> — <annotation text>`
### Severity Levels

| Level | Meaning |
| --- | --- |
| 🔴 Critical | Violates a stated requirement, introduces scope not asked for, or relies on an unverified assumption that could derail the plan |
| 🟡 Warning | Potentially over-engineered, loosely justified, or based on a plausible but unconfirmed assumption |
| ℹ️ Info | Observation, clarification, or confirmation that a section is well-aligned |

Use ℹ️ Info for explicit pass annotations on clean sections.
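As an illustration, an annotated plan excerpt might look like the following. The plan steps, expert attributions, and requirement numbers are invented for the example:

```markdown
### Step 3: Add a Redis cache for session tokens

// annotation made by YAGNI Auditor: 🟡 Warning — caching is not mentioned in any
// source requirement (Reqs 1–3); defer until a measured performance need exists.

### Step 4: Store sessions in the existing Postgres table

// annotation made by Requirements Auditor: ℹ️ Info — maps directly to Req 2; no issues found.
```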
## Expert Personas

Use these expert personas based on the audit category:

| Category | Expert Name |
| --- | --- |
| Requirements Traceability | Requirements Auditor |
| YAGNI Compliance | YAGNI Auditor |
| Assumption Audit | Assumptions Auditor |
## Audit Process

### Step 0: Research (when `$2` is `true` or omitted)

Before auditing, validate the plan's technical claims against current sources:

1. Identify technical claims, library references, and architectural patterns mentioned in the plan
2. Use `WebSearch` to validate them against current documentation and best practices (no older than 3 months)
3. Note any discrepancies or outdated information found
4. Use research findings to inform annotation severity during the audit

Skip this step entirely when `$2` is `false`.
### Step 1: Identify the Source Requirements

Extract the original requirements and constraints from which the plan was built. Sources include:

- The user's original request or message
- A linked Jira ticket or design document
- Constraints stated earlier in the conversation

Present these as a numbered reference list at the top of your output under a **Source Requirements** heading. Every annotation you write should reference one or more of these by number.
### Step 2: Reproduce and Annotate

Reproduce the original plan in full. After each section or step, insert annotations where issues are found.
### Step 3: Apply Audit Categories

**1. Requirements Traceability**

- Does every element map to a stated requirement or constraint?
- Flag additions that lack explicit justification from the original request.

**2. YAGNI Compliance**

- Identify anything included "just in case" or for hypothetical future needs.
- Flag speculative features, over-engineering, or premature abstractions.

**3. Assumption Audit**

For each assumption identified:

1. Attempt to verify it through the plan text and source requirements
2. Search the codebase with `Grep` / `Glob` / `Read` for evidence
3. If `$2` is `true` or omitted, use `WebSearch` to check against current best practices
4. If the assumption **cannot be verified** through any of the above — use `AskUserQuestion` to ask the user directly
5. Record the user's answer as context and use it to inform the annotation severity
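For instance, a resolved assumption could surface in the annotated plan like this. The plan text, user answer, and requirement number are hypothetical:

```markdown
Plan assumption: "The API gateway already terminates TLS."

// annotation made by Assumptions Auditor: ℹ️ Info — not verifiable from the plan or
// codebase; user confirmed via AskUserQuestion that TLS terminates at the load
// balancer, so no in-service TLS work is needed (Req 3). Severity kept at Info.
```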
### Step 4: Summary

After the annotated plan, provide:

- **Annotation count** — by category and by expert
- **Confidence assessment** — what are you most and least certain about?
- **Resolved Assumptions** — list what was clarified with the user via `AskUserQuestion` and how it affected annotations
- **Open Questions** — only for cases where the user chose not to answer or the answer was ambiguous

## Output Structure

```markdown
## Source Requirements

1. <requirement from user's original request>
2. <constraint from ticket or conversation>
...

## Annotated Plan

<original plan content reproduced exactly>

// annotation made by <Expert Name>: <severity> — <text referencing requirement number>

<more original plan content>
...

## Audit Summary

| Category                  | 🔴 Critical | 🟡 Warning | ℹ️ Info |
| ------------------------- | ----------- | ---------- | ------- |
| Requirements Traceability | N           | N          | N       |
| YAGNI Compliance          | N           | N          | N       |
| Assumption Audit          | N           | N          | N       |

**Confidence**: ...

**Resolved Assumptions**:
- <assumption> — User confirmed: <answer>. Annotation adjusted to <severity>.
...

**Open Questions**:
- <only items where the user chose not to answer or the answer was ambiguous>
```
## Additional Resources

For a complete example of an annotated audit, see examples/sample-audit.md
