# ce:review

Installs: 36
Rank: #19380

## Install

npx skills add https://github.com/everyinc/compound-engineering-plugin --skill ce:review
## Review Command

Perform exhaustive code reviews using multi-agent analysis, ultra-thinking, and Git worktrees for deep local inspection.
## Introduction

A Senior Code Review Architect with expertise in security, performance, architecture, and quality assurance.
## Prerequisites

## Main Tasks

### 1. Determine Review Target & Setup (ALWAYS FIRST)
#$ARGUMENTS

**Immediate Actions:**

- Determine the review type: PR number (numeric), GitHub URL, file path (.md), or empty (current branch)
- Check the current git branch
- If ALREADY on the target branch (the PR branch, the requested branch name, or a branch already checked out for review), proceed with analysis on the current branch
- If on a DIFFERENT branch than the review target, offer to use a worktree: "Use the git-worktree skill for an isolated review?" If accepted, call `skill: git-worktree` with the branch name
- Fetch PR metadata using `gh pr view --json` for the title, body, files, and linked issues
- Set up language-specific analysis tools
- Prepare the security scanning environment
- Make sure you are on the branch under review: use `gh pr checkout` or check out the branch manually
- Ensure the code is ready for analysis (either in the worktree or on the current branch). ONLY then proceed to the next step.
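The target-determination step above can be sketched as a small shell helper. This is a hypothetical sketch: `classify_target` is an assumed name, the four cases come from the list above, and the `gh` calls that would follow are shown as comments.

```shell
#!/bin/sh
# Hypothetical helper: classify the review target passed in as the argument.
classify_target() {
  arg="$1"
  case "$arg" in
    "") echo "current-branch" ;;                 # empty: review the current branch
    https://github.com/*) echo "github-url" ;;   # a GitHub URL
    *.md) echo "file-path" ;;                    # a file path
    *[!0-9]*) echo "unknown" ;;                  # contains non-digits: none of the above
    *) echo "pr-number" ;;                       # all digits: a PR number
  esac
}

# Once classified, a PR number would be fetched and checked out, e.g.:
#   gh pr view "$arg" --json title,body,files
#   gh pr checkout "$arg"
```

A branch name or other string falls through to `unknown`, at which point the command would ask the user rather than guess.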
## Protected Artifacts

The following paths are compound-engineering pipeline artifacts and must never be flagged for deletion, removal, or gitignore by any review agent:

- `docs/brainstorms/*-requirements.md` — Requirements documents created by `/ce:brainstorm`. These are the product-definition artifacts that planning depends on.
- `docs/plans/*.md` — Plan files created by `/ce:plan`. These are living documents that track implementation progress (checkboxes are checked off by `/ce:work`).
- `docs/solutions/*.md` — Solution documents created during the pipeline.

If a review agent flags any file in these directories for cleanup or removal, discard that finding during synthesis. Do not create a todo for it.
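The protected-path rule amounts to one glob match per pattern. A minimal sketch, with `is_protected` as an assumed helper name:

```shell
#!/bin/sh
# Hypothetical helper: return success (0) if a path is a protected pipeline artifact.
is_protected() {
  case "$1" in
    docs/brainstorms/*-requirements.md) return 0 ;;
    docs/plans/*.md) return 0 ;;
    docs/solutions/*.md) return 0 ;;
    *) return 1 ;;
  esac
}

# During synthesis, a finding targeting a protected file would simply be skipped:
#   is_protected "$file" && continue
```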
## Load Review Agents

Read `compound-engineering.local.md` in the project root. If found, use `review_agents` from its YAML frontmatter. If the markdown body contains review context, pass it to each agent as additional instructions.

If no settings file exists, invoke the `setup` skill to create one. Then read the newly created file and continue.
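Reading the agent list from frontmatter can be sketched with `awk`. This is a hypothetical sketch that assumes a simple layout: one `- item` per line under a `review_agents:` key, between the first two `---` markers. A real settings file may be shaped differently.

```shell
#!/bin/sh
# Hypothetical helper: print the review_agents entries from simple YAML frontmatter.
list_review_agents() {
  awk '
    /^---$/           { n++; next }       # count the frontmatter fences
    n != 1            { next }            # only look inside the frontmatter
    /^review_agents:/ { in_list = 1; next }
    /^[A-Za-z_]/      { in_list = 0 }     # any other top-level key ends the list
    in_list && /^[ \t]*- / { sub(/^[ \t]*- [ \t]*/, ""); print }
  ' "$1"
}
```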
## Choose Execution Mode

Before launching review agents, check for context constraints:

- **Serial:** If the `--serial` flag is passed OR the conversation is in a long session, run agents ONE AT A TIME in sequence, waiting for each agent to complete before starting the next. This uses less context but takes longer.
- **Default (parallel):** Run all agents simultaneously for speed. If you hit context limits, retry with the `--serial` flag.
- **Auto-detect:** If more than 5 review agents are configured, automatically switch to serial mode and inform the user: "Running review agents in serial mode (6+ agents configured). Use --parallel to override."
## Parallel Agents to Review the PR

**Parallel mode (default for ≤5 agents):** Run all configured review agents in parallel using the Task tool. For each agent in the `review_agents` list:

Task {agent-name}(PR content + review context from settings body)

**Serial mode (`--serial` flag, or auto for 6+ agents):** Run configured review agents ONE AT A TIME. For each agent in the `review_agents` list:

1. Task {agent-name}(PR content + review context)
2. Wait for completion
3. Collect findings
4. Proceed to the next agent

**Always run these last, regardless of mode:**

- Task compound-engineering:review:agent-native-reviewer(PR content) - Verify new features are agent-accessible
- Task compound-engineering:research:learnings-researcher(PR content) - Search docs/solutions/ for past issues related to this PR's modules and patterns
## Conditional Agents (Run if Applicable)

These agents run ONLY when the PR matches specific criteria. Check the PR files list to determine whether they apply.

**MIGRATIONS:** If the PR contains database migrations, schema.rb, or data backfills:

- Task compound-engineering:review:schema-drift-detector(PR content) - Detects unrelated schema.rb changes by cross-referencing against the included migrations (run FIRST)
- Task compound-engineering:review:data-migration-expert(PR content) - Validates that ID mappings match production, checks for swapped values, verifies rollback safety
- Task compound-engineering:review:deployment-verification-agent(PR content) - Creates a Go/No-Go deployment checklist with SQL verification queries

**When to run:**

- The PR includes files matching `db/migrate/*.rb` or `db/schema.rb`
- The PR modifies columns that store IDs, enums, or mappings
- The PR includes data backfill scripts or rake tasks
- The PR title/body mentions: migration, backfill, data transformation, ID mapping
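The file-based triggers in the list above can be sketched as a single glob check. This is a hypothetical sketch covering only the file patterns; the column-content and title/body keyword triggers require inspecting the diff and PR metadata and are not shown.

```shell
#!/bin/sh
# Hypothetical helper: does this changed file trigger the migration review agents?
file_triggers_migration_review() {
  case "$1" in
    db/migrate/*.rb) return 0 ;;   # a migration file
    db/schema.rb)    return 0 ;;   # the schema dump
    *)               return 1 ;;
  esac
}
```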
**What these agents check:**

- `schema-drift-detector`: Cross-references schema.rb changes against the PR's migrations to catch unrelated columns/indexes leaking in from local database state
- `data-migration-expert`: Verifies that hard-coded mappings match production reality (prevents swapped IDs), checks for orphaned associations, validates dual-write patterns
- `deployment-verification-agent`: Produces executable pre/post-deploy checklists with SQL queries, rollback procedures, and monitoring plans
### 2. Ultra-Thinking Deep Dive Phases

For each phase below, spend maximum cognitive effort. Think step by step. Consider all angles. Question assumptions. Then bring all reviews together in a synthesis for the user.
#### Phase 1: Stakeholder Perspective Analysis

ULTRA-THINK: Put yourself in each stakeholder's shoes. What matters to them? What are their pain points?

**Developer Perspective**
- How easy is this to understand and modify?
- Are the APIs intuitive?
- Is debugging straightforward?
- Can I test this easily?

**Operations Perspective**
- How do I deploy this safely?
- What metrics and logs are available?
- How do I troubleshoot issues?
- What are the resource requirements?

**End User Perspective**
- Is the feature intuitive?
- Are error messages helpful?
- Is performance acceptable?
- Does it solve my problem?

**Security Team Perspective**
- What's the attack surface?
- Are there compliance requirements?
- How is data protected?
- What are the audit capabilities?

**Business Perspective**
- What's the ROI?
- Are there legal/compliance risks?
- How does this affect time-to-market?
- What's the total cost of ownership?
#### Phase 2: Scenario Exploration

ULTRA-THINK: Explore edge cases and failure scenarios. What could go wrong? How does the system behave under stress?

- **Happy Path:** Normal operation with valid inputs
- **Invalid Inputs:** Null, empty, malformed data
- **Boundary Conditions:** Min/max values, empty collections
- **Concurrent Access:** Race conditions, deadlocks
- **Scale Testing:** 10x, 100x, 1000x normal load
- **Network Issues:** Timeouts, partial failures
- **Resource Exhaustion:** Memory, disk, connections
- **Security Attacks:** Injection, overflow, DoS
- **Data Corruption:** Partial writes, inconsistency
- **Cascading Failures:** Downstream service issues

### 3. Multi-Angle Review Perspectives

**Technical Excellence Angle**
- Code craftsmanship evaluation
- Engineering best practices
- Technical documentation quality
- Tooling and automation assessment

**Business Value Angle**
- Feature completeness validation
- Performance impact on users
- Cost-benefit analysis
- Time-to-market considerations

**Risk Management Angle**
- Security risk assessment
- Operational risk evaluation
- Compliance risk verification
- Technical debt accumulation

**Team Dynamics Angle**
- Code review etiquette
- Knowledge sharing effectiveness
- Collaboration patterns
- Mentoring opportunities

### 4. Simplification and Minimalism Review

Run Task compound-engineering:review:code-simplicity-reviewer() to see whether the code can be simplified.

### 5. Findings Synthesis and Todo Creation Using the file-todos Skill

ALL findings MUST be stored in the todos/ directory using the file-todos skill. Create todo files immediately after synthesis; do NOT present findings for user approval first. Use the skill for structured todo management.

**Step 1: Synthesize All Findings**

- Collect findings from all parallel agents
- Surface learnings-researcher results: if past solutions are relevant, flag them as "Known Pattern" with links to docs/solutions/ files
- Discard any findings that recommend deleting or gitignoring files in `docs/brainstorms/`, `docs/plans/`, or `docs/solutions/` (see Protected Artifacts above)
- Categorize by type: security, performance, architecture, quality, etc.
- Assign severity levels: 🔴 CRITICAL (P1), 🟡 IMPORTANT (P2), 🔵 NICE-TO-HAVE (P3)
- Remove duplicate or overlapping findings
- Estimate effort for each finding (Small/Medium/Large)

**Step 2: Create Todo Files Using the file-todos Skill**

Use the file-todos skill to create todo files for ALL findings immediately. Do NOT present findings one by one asking for user approval. Create all todo files in parallel using the skill, then summarize the results to the user.
**Implementation Options:**

**Option A: Direct File Creation (Fast)**
- Create todo files directly using the Write tool
- Process all findings in parallel for speed
- Use the standard template from .claude/skills/file-todos/assets/todo-template.md
- Follow the naming convention: {issue_id}-pending-{priority}-{description}.md

**Option B: Sub-Agents in Parallel (Recommended for Scale)**

For large PRs with 15+ findings, use sub-agents to create finding files in parallel:

```
Launch multiple finding-creator agents in parallel:

Task() - Create todos for first finding
Task() - Create todos for second finding
Task() - Create todos for third finding
...etc. for each finding.
```
**Sub-agents can:**
- Process multiple findings simultaneously
- Write detailed todo files with all sections filled in
- Organize findings by severity
- Create comprehensive Proposed Solutions
- Add acceptance criteria and work logs
- Complete much faster than sequential processing

**Execution Strategy:**
1. Synthesize all findings into categories (P1/P2/P3)
2. Group findings by severity
3. Launch 3 parallel sub-agents (one per severity level)
4. Each sub-agent creates its batch of todos using the file-todos skill
5. Consolidate the results and present a summary
**Process (using the file-todos skill):**

For each finding:
1. Determine severity (P1/P2/P3)
2. Write a detailed Problem Statement and Findings
3. Create 2-3 Proposed Solutions with pros/cons/effort/risk
4. Estimate effort (Small/Medium/Large)
5. Add acceptance criteria and a work log

Use the file-todos skill for structured todo management:

skill: file-todos

The skill provides:
- Template location: `.claude/skills/file-todos/assets/todo-template.md`
- Naming convention: `{issue_id}-{status}-{priority}-{description}.md`
- YAML frontmatter structure: status, priority, issue_id, tags, dependencies
- All required sections: Problem Statement, Findings, Solutions, etc.
Create todo files in parallel, named:

`{next_id}-pending-{priority}-{description}.md`
Examples:
- 001-pending-p1-path-traversal-vulnerability.md
- 002-pending-p1-api-response-validation.md
- 003-pending-p2-concurrency-limit.md
- 004-pending-p3-unused-parameter.md

Follow the template structure from the file-todos skill: `.claude/skills/file-todos/assets/todo-template.md`
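Assembling a filename from the convention is one `printf`. A minimal sketch, with `todo_filename` as an assumed helper name; new todos always start in the `pending` status.

```shell
#!/bin/sh
# Hypothetical helper: build a todo filename following the naming convention.
todo_filename() {
  # $1 = numeric issue id, $2 = priority (p1/p2/p3), $3 = kebab-case description
  printf '%03d-pending-%s-%s.md\n' "$1" "$2" "$3"
}
```

For example, `todo_filename 1 p1 path-traversal-vulnerability` yields the first filename in the list above.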
**Todo File Structure (from template):**

Each todo must include:
- **YAML frontmatter:** status, priority, issue_id, tags, dependencies
- **Problem Statement:** What's broken/missing, and why it matters
- **Findings:** Discoveries from agents, with evidence/location
- **Proposed Solutions:** 2-3 options, each with pros/cons/effort/risk
- **Recommended Action:** Filled during triage; leave blank initially
- **Technical Details:** Affected files, components, database changes
- **Acceptance Criteria:** Testable checklist items
- **Work Log:** Dated record with actions and learnings
- **Resources:** Links to the PR, issues, documentation, similar patterns

**File naming convention:** `{issue_id}-{status}-{priority}-{description}.md`

Examples:
- 001-pending-p1-security-vulnerability.md
- 002-pending-p2-performance-optimization.md
- 003-pending-p3-code-cleanup.md

**Status values:**
- pending: New finding, needs triage/decision
- ready: Approved by manager, ready to work
- complete: Work finished

**Priority values:**
- p1: Critical (blocks merge, security/data issues)
- p2: Important (should fix, architectural/performance)
- p3: Nice-to-have (enhancements, cleanup)

**Tagging:** Always add the code-review tag, plus: security, performance, architecture, rails, quality, etc.

**Step 3: Summary Report**

After creating all todo files, present a comprehensive summary:

✅ Code Review Complete

**Review Target:** PR #XXXX - [PR Title]
**Branch:** [branch-name]

**Findings Summary:**

**Total Findings:** [X]
- **🔴 CRITICAL (P1):** [count] - BLOCKS MERGE
- **🟡 IMPORTANT (P2):** [count] - Should Fix
- **🔵 NICE-TO-HAVE (P3):** [count] - Enhancements

**Created Todo Files:**

**P1 - Critical (BLOCKS MERGE):**
- 001-pending-p1-{finding}.md - {description}
- 002-pending-p1-{finding}.md - {description}

**P2 - Important:**
- 003-pending-p2-{finding}.md - {description}
- 004-pending-p2-{finding}.md - {description}

**P3 - Nice-to-Have:**
- 005-pending-p3-{finding}.md - {description}

**Review Agents Used:**
- kieran-rails-reviewer
- security-sentinel
- performance-oracle
- architecture-strategist
- agent-native-reviewer
- [other agents]

**Next Steps:**

1. **Address P1 Findings** (CRITICAL - must be fixed before merge):
   - Review each P1 todo in detail
   - Implement fixes or request an exemption
   - Verify fixes before merging the PR

2. **Triage All Todos:**

   ls todos/*-pending-*.md  # View all pending todos
   /triage                  # Use the slash command for interactive triage

3. **Work on Approved Todos:**

   /resolve_todo_parallel   # Fix all approved items efficiently

4. **Track Progress:**
   - Rename the file when its status changes: pending → ready → complete
   - Update the Work Log as you work
   - Commit todos: git add todos/ && git commit -m "refactor: add code review findings"
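Since status lives in the filename, a status change is just a rename. A minimal sketch, with `advance_status` as an assumed helper name; the `git mv` line shows how it would be used against the todos/ directory.

```shell
#!/bin/sh
# Hypothetical helper: swap the status segment in a todo filename.
advance_status() {
  # $1 = current filename, $2 = old status, $3 = new status
  echo "$1" | sed "s/-$2-/-$3-/"
}

# e.g. promote an approved todo from pending to ready:
#   git mv "todos/$old" "todos/$(advance_status "$old" pending ready)"
```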

**Severity Breakdown:**

**🔴 P1 (Critical - Blocks Merge):**
- Security vulnerabilities
- Data corruption risks
- Breaking changes
- Critical architectural issues

**🟡 P2 (Important - Should Fix):**
- Performance issues
- Significant architectural concerns
- Major code quality problems
- Reliability issues

**🔵 P3 (Nice-to-Have):**
- Minor improvements
- Code cleanup
- Optimization opportunities
- Documentation updates

### 6. End-to-End Testing (Optional)

First, detect the project type from the PR files:

| Indicator | Project Type |
| --- | --- |
| .xcodeproj, .xcworkspace, Package.swift (iOS) | iOS/macOS |
| Gemfile, package.json, app/views/, .html.* | Web |
| Both iOS files AND web files | Hybrid (test both) |

After presenting the Summary Report, offer appropriate testing based on the project type:

**For Web Projects:** "Want to run browser tests on the affected pages?"
1. Yes - run /test-browser
2. No - skip

**For iOS Projects:** "Want to run Xcode simulator tests on the app?"
1. Yes - run /xcode-test
2. No - skip

**For Hybrid Projects (e.g., Rails + Hotwire Native):** "Want to run end-to-end tests?"
1. Web only - run /test-browser
2. iOS only - run /xcode-test
3. Both - run both commands
4. No - skip

**If the User Accepts Web Testing:**

Spawn a subagent to run browser tests (preserves main context):

Task general-purpose("Run /test-browser for PR #[number]. Test all affected pages, check for console errors, handle failures by creating todos and fixing.")

The subagent will:
- Identify the pages affected by the PR
- Navigate to each page and capture snapshots (using Playwright MCP or the agent-browser CLI)
- Check for console errors
- Test critical interactions
- Pause for human verification on OAuth/email/payment flows
- Create P1 todos for any failures
- Fix and retry until all tests pass

Standalone: /test-browser [PR number]

**If the User Accepts iOS Testing:**

Spawn a subagent to run Xcode tests (preserves main context):

Task general-purpose("Run /xcode-test for scheme [name]. Build for simulator, install, launch, take screenshots, check for crashes.")

The subagent will:
- Verify XcodeBuildMCP is installed
- Discover the project and schemes
- Build for the iOS Simulator
- Install and launch the app
- Take screenshots of key screens
- Capture console logs for errors
- Pause for human verification (Sign in with Apple, push, IAP)
- Create P1 todos for any failures
- Fix and retry until all tests pass

Standalone: /xcode-test [scheme]

**Important: P1 Findings Block Merge**

Any 🔴 P1 (CRITICAL) findings must be addressed before merging the PR. Present these prominently and ensure they are resolved before accepting the PR.
