# dual-axis-skill-reviewer

Installs: 63
Rank: #11909

## Install

```bash
npx skills add https://github.com/tradermonty/claude-trading-skills --skill dual-axis-skill-reviewer
```

## Dual Axis Skill Reviewer

Run the dual-axis reviewer script and save reports to `reports/`. The script supports:

- Random or fixed skill selection
- Auto-axis scoring with optional test execution
- LLM prompt generation
- LLM JSON review merge with a weighted final score
- Cross-project review via `--project-root`

## When to Use

- Need reproducible scoring for one skill in `skills/*/SKILL.md`.
- Need improvement items when the final score is below 90.
- Need both deterministic checks and a qualitative LLM code/content review.
- Need to review skills in a different project from the command line.

## Prerequisites

- Python 3.9+
- uv (recommended; auto-resolves the pyyaml dependency via inline metadata)
- For tests: `uv sync --extra dev` or the equivalent in the target project
- For the LLM-axis merge: a JSON file that follows the LLM review schema (see Resources)

## Workflow

Determine the correct script path based on your context:

- Same project: `skills/dual-axis-skill-reviewer/scripts/run_dual_axis_review.py`
- Global install: `~/.claude/skills/dual-axis-skill-reviewer/scripts/run_dual_axis_review.py`

The examples below use `REVIEWER` as a placeholder. Set it once:

If reviewing from the same project:

```bash
REVIEWER=skills/dual-axis-skill-reviewer/scripts/run_dual_axis_review.py
```

If reviewing another project (global install):

```bash
REVIEWER=~/.claude/skills/dual-axis-skill-reviewer/scripts/run_dual_axis_review.py
```

### Step 1: Run Auto Axis + Generate LLM Prompt

```bash
uv run "$REVIEWER" \
  --project-root . \
  --emit-llm-prompt \
  --output-dir reports/
```

When reviewing a different project, point `--project-root` to it:

```bash
uv run "$REVIEWER" \
  --project-root /path/to/other/project \
  --emit-llm-prompt \
  --output-dir reports/
```

### Step 2: Run LLM Review

Use the generated prompt file in `reports/skill_review_prompt__.md` and ask the LLM to return strict JSON output. When running inside Claude Code, let Claude act as orchestrator: read the generated prompt, produce the LLM review JSON, and save it for the merge step.

### Step 3: Merge Auto + LLM Axes

```bash
uv run "$REVIEWER" \
  --project-root . \
  --skill <skill-name> \
  --llm-review-json <path-to-llm-review.json> \
  --auto-weight 0.5 \
  --llm-weight 0.5 \
  --output-dir reports/
```

### Step 4: Optional Controls

- Fix selection for reproducibility: `--skill` or `--seed`
- Review all skills at once: `--all`
- Skip tests for quick triage: `--skip-tests`
- Change the report location: `--output-dir`
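As a concrete illustration of how the two axes combine, the merge with `--auto-weight 0.5 --llm-weight 0.5` is a plain weighted average. The arithmetic below is a sketch based on the description above; the scores are made-up example values, and the script's exact rounding may differ:

```shell
# Illustrative weighted merge of the two axes.
# auto_score and llm_score are hypothetical example values, not real output.
auto_score=88
llm_score=94
auto_weight=0.5
llm_weight=0.5

# final = auto_score * auto_weight + llm_score * llm_weight
final=$(awk -v a="$auto_score" -v l="$llm_score" \
            -v aw="$auto_weight" -v lw="$llm_weight" \
            'BEGIN { printf "%.1f", a * aw + l * lw }')
echo "final=$final"   # 88*0.5 + 94*0.5 = 91.0
```

With equal weights, 88 and 94 merge to 91.0; shifting weight toward `--auto-weight` pulls the final score toward the deterministic axis.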

Increase `--auto-weight` for stricter deterministic gating. Increase `--llm-weight` when qualitative/code-review depth is prioritized.

## Output

- `reports/skill_review_.json`
- `reports/skill_review.md`
- `reports/skill_review_prompt_.md` (when `--emit-llm-prompt` is enabled)

## Installation (Global)

To use this skill from any project, symlink it into `~/.claude/skills/`:

```bash
ln -sfn /path/to/claude-trading-skills/skills/dual-axis-skill-reviewer \
  ~/.claude/skills/dual-axis-skill-reviewer
```

After this, Claude Code will discover the skill in all projects, and the script is accessible at `~/.claude/skills/dual-axis-skill-reviewer/scripts/run_dual_axis_review.py`.

## Resources

- The auto axis scores metadata, workflow coverage, execution safety, artifact presence, and test health.
- The auto axis detects knowledge_only skills and adjusts script/test expectations to avoid unfair penalties.
- The LLM axis scores deep content quality (correctness, risk, missing logic, maintainability).
- The final score is a weighted average. If it is below 90, improvement items are required and listed in the markdown report.
- Script: `skills/dual-axis-skill-reviewer/scripts/run_dual_axis_review.py`
- LLM schema: `references/llm_review_schema.md`
- Rubric detail: `references/scoring_rubric.md`
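When the skill is installed both in-project and globally, the `REVIEWER` placeholder can be resolved with a small fallback check. The `choose_reviewer` helper below is my own illustration, not part of the skill; only the two paths come from the documentation:

```shell
# Prefer the in-project copy of the reviewer script; fall back to the
# global install under ~/.claude/skills/. (Illustrative helper only.)
choose_reviewer() {
  local local_path="skills/dual-axis-skill-reviewer/scripts/run_dual_axis_review.py"
  local global_path="$HOME/.claude/skills/dual-axis-skill-reviewer/scripts/run_dual_axis_review.py"
  if [ -f "$local_path" ]; then
    printf '%s\n' "$local_path"
  else
    printf '%s\n' "$global_path"
  fi
}

REVIEWER="$(choose_reviewer)"
```

After this, every `uv run "$REVIEWER" ...` invocation works unchanged from either location.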
