# post-mortem

- Installs: 252
- Rank: #3474

## Install

```bash
npx skills add https://github.com/boshu2/agentops --skill post-mortem
```

## Post-Mortem Skill

Purpose: Wrap up completed work — validate it shipped correctly, extract learnings, process the knowledge backlog, activate high-value insights, and retire stale knowledge.

Six phases:

1. Council — Did we implement it correctly?
2. Extract — What did we learn?
3. Process Backlog — Score, deduplicate, and flag stale learnings
4. Activate — Promote high-value learnings to MEMORY.md and constraints
5. Retire — Archive stale and superseded learnings
6. Harvest — Surface next work for the flywheel

## Quick Start

| Command | Effect |
| --- | --- |
| `/post-mortem` | Wraps up recent work |
| `/post-mortem epic-123` | Wraps up a specific epic |
| `/post-mortem --quick "insight"` | Quick-captures a single learning (no council) |
| `/post-mortem --process-only` | Skips council + extraction; runs Phases 3-5 on the backlog |
| `/post-mortem --skip-activate` | Extracts + processes but doesn't write MEMORY.md |
| `/post-mortem --deep recent` | Thorough council review |
| `/post-mortem --mixed epic-123` | Cross-vendor (Claude + Codex) |
| `/post-mortem --explorers=2 epic-123` | Deep investigation before judging |
| `/post-mortem --debate epic-123` | Two-round adversarial review |
| `/post-mortem --skip-checkpoint-policy epic-123` | Skips ratchet chain validation |

## Flags

| Flag | Default | Description |
| --- | --- | --- |
| `--quick "text"` | off | Quick-capture a single learning directly to `.agents/learnings/` without running a full post-mortem. Formerly handled by `/retro --quick`. |
| `--process-only` | off | Skip council and extraction (Phases 1-2). Run Phases 3-5 on the existing backlog only. |
| `--skip-activate` | off | Extract and process learnings but do not write to MEMORY.md (skip Phase 4 promotions). |
| `--deep` | off | 3 judges (default for post-mortem) |
| `--mixed` | off | Cross-vendor (Claude + Codex) judges |
| `--explorers=N` | off | Each judge spawns N explorers before judging |
| `--debate` | off | Two-round adversarial review |
| `--skip-checkpoint-policy` | off | Skip ratchet chain validation |
| `--skip-sweep` | off | Skip pre-council deep audit sweep |

## Quick Mode

Given `/post-mortem --quick "insight text"`:

### Quick Step 1: Generate Slug

Create a slug from the content: first meaningful words, lowercase, hyphens, max 50 chars.

### Quick Step 2: Write Learning Directly

Write to: `.agents/learnings/YYYY-MM-DD-quick-<slug>.md`
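The slug in this filename comes from the Quick Step 1 rule. A minimal sketch, assuming a helper name `slugify` (the skill does not specify an implementation):

```shell
# Hypothetical slug helper for Quick Step 1: lowercase, hyphens, max 50 chars.
slugify() {
  printf '%s' "$1" \
    | tr '[:upper:]' '[:lower:]' \
    | tr -cs 'a-z0-9' '-' \
    | cut -c1-50 \
    | sed 's/^-//; s/-*$//'
}

slugify "Use O_CREATE|O_EXCL for atomic claims"
# -> use-o-create-o-excl-for-atomic-claims
```

The `cut` before the final `sed` keeps the 50-char cap from leaving a trailing hyphen.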


```markdown
---
type: learning
source: post-mortem-quick
date: YYYY-MM-DD
---

# Learning: <title>

**Category**: <auto-classify: debugging|architecture|process|testing|security>
**Confidence**: medium

## What We Learned

<user's insight text>

## Source

Quick capture via /post-mortem --quick
```

This skips the full pipeline — it writes directly to learnings, with no council or backlog processing.

### Quick Step 3: Confirm

```
Learned: <title>
Saved to: .agents/learnings/YYYY-MM-DD-quick-<slug>.md
For deeper reflection, use /post-mortem without --quick.
```

Done. Return immediately after confirmation.

## Execution Steps

### Pre-Flight Checks

Before proceeding, verify:

1. Git repo exists: `git rev-parse --git-dir 2>/dev/null` — if not, error: "Not in a git repository"
2. Work was done: `git log --oneline -1 2>/dev/null` — if empty, error: "No commits found. Run /implement first."
3. Epic context: if an epic ID is provided, verify it has closed children. If 0 closed children, error: "No completed work to review."

If `--process-only`: skip Pre-Flight Checks through Step 3 and jump directly to Phase 3: Process Backlog.

### Step 0.4: Load Reference Documents (MANDATORY)

Before Step 0.5 and Step 2.5, load the required reference docs into context using the Read tool:

```bash
REQUIRED_REFS=(
  "skills/post-mortem/references/checkpoint-policy.md"
  "skills/post-mortem/references/metadata-verification.md"
  "skills/post-mortem/references/closure-integrity-audit.md"
)
```

For each reference file, use the Read tool to load its content and hold it in context for later steps. Do NOT just test file existence with `[ -f ]` — actually read the content so it is available when Steps 0.5 and 2.5 need it.

If a reference file does not exist (Read returns an error), log a warning and add it as a checkpoint warning in the council context. Proceed only if the missing reference is intentionally deferred.

### Step 0.5: Checkpoint-Policy Preflight (MANDATORY)

Read `references/checkpoint-policy.md` for the full checkpoint-policy preflight procedure. It validates the ratchet chain, checks artifact availability, and runs idempotency checks. BLOCK on prior FAIL verdicts; WARN on everything else.
### Step 1: Identify Completed Work and Record Timing

Record the post-mortem start time for cycle-time tracking:

```bash
PM_START=$(date +%s)
```

If an epic/issue ID is provided: use it directly. If no ID, find recently completed work:

```bash
# Check for closed beads
bd list --status closed --since "7 days ago" 2>/dev/null | head -5

# Or check recent git activity
git log --oneline --since="7 days ago" | head -10
```

### Step 2: Load the Original Plan/Spec

Before invoking council, load the original plan for comparison:

1. If an epic/issue ID is provided: `bd show <epic-id>` to get the spec/description
2. Search for a plan doc: `ls .agents/plans/ | grep <slug>`
3. Check git log: `git log --oneline | head -10` to find the relevant bead reference

If a plan is found, include it in the council packet's `context.spec` field:

```json
{ "spec": { "source": "bead na-0042", "content": "<plan content>" } }
```

### Step 2.2: Load Implementation Summary

Check for a crank-generated phase-2 summary:

```bash
PHASE2_SUMMARY=$(ls -t .agents/rpi/phase-2-summary-*-crank.md 2>/dev/null | head -1)
if [ -n "$PHASE2_SUMMARY" ]; then
  echo "Phase-2 summary found: $PHASE2_SUMMARY"
  # Read the summary with the Read tool for implementation context
fi
```
If available, use the phase-2 summary to understand what was implemented, how many waves ran, and which files were modified.
### Step 2.3: Reconcile Plan vs Delivered Scope

Compare the original plan scope against what was actually delivered:

1. Read the plan from `.agents/plans/` (most recent)
2. Compare planned issues against closed issues (`bd children <epic-id>`)
3. Note any scope additions, removals, or modifications
4. Include the scope delta in the post-mortem findings
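The plan-vs-closed comparison can be sketched with `comm` over sorted ID lists (a hypothetical helper; the real step reads `bd children` output):

```shell
# Hypothetical scope-delta helper for Step 2.3.
# $1 = sorted file of planned issue IDs, $2 = sorted file of closed issue IDs.
scope_delta() {
  comm -23 "$1" "$2" | sed 's/^/planned but not closed: /'
  comm -13 "$1" "$2" | sed 's/^/closed but not planned: /'
}
```

Lines from the first `comm` are scope gaps; lines from the second are scope additions.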
### Step 2.4: Closure Integrity Audit (MANDATORY)

Read `references/closure-integrity-audit.md` for the full procedure. Mechanically verifies:

- Evidence precedence per child — every closed child resolves on the strongest available evidence in this order: `commit`, then `staged`, then `worktree`
- Phantom bead detection — flags children with generic titles ("task") or empty descriptions
- Orphaned children — beads in `bd list` but not linked to parent in `bd show`
- Multi-wave regression detection — for crank epics, checks if a later wave removed code added by an earlier wave
- Stretch goal audit — verifies deferred stretch goals have documented rationale

Include results in the council packet as `context.closure_integrity`. WARN on 1-2 findings, FAIL on 3+.

If a closure is evidence-only or closes before its proving commit exists, emit a proof artifact with `bash skills/post-mortem/scripts/write-evidence-only-closure.sh` and cite the durable tracked copy at `.agents/releases/evidence-only-closures/<epic-id>.json` in the council packet. The writer also emits a local council copy at `.agents/council/evidence-only-closures/<epic-id>.json`. The packet must record the selected `evidence_mode` plus repo-state detail that distinguishes staged files from broader worktree state, so active-session audits stay mechanically replayable.
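The commit-then-staged-then-worktree precedence can be sketched with plain git plumbing (a hypothetical helper; the shipped audit procedure in `references/closure-integrity-audit.md` is authoritative):

```shell
# Hypothetical evidence-mode resolver for Step 2.4: strongest evidence wins.
evidence_mode() {  # $1 = path
  if git log --oneline -1 -- "$1" 2>/dev/null | grep -q .; then
    echo commit      # path appears in committed history
  elif git diff --cached --name-only | grep -qxF "$1"; then
    echo staged      # path is staged but not yet committed
  elif git status --porcelain -- "$1" 2>/dev/null | grep -q .; then
    echo worktree    # path only exists as uncommitted worktree state
  else
    echo none
  fi
}
```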
### Step 2.5: Pre-Council Metadata Verification (MANDATORY)

Read `references/metadata-verification.md` for the full verification procedure. Mechanically checks: plan vs actual files, file existence in commits, cross-references in docs, and ASCII diagram integrity. Failures are included in the council packet as `context.metadata_failures`.
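A minimal version of the plan-vs-commit file check could look like this (an assumed helper; the full procedure lives in `references/metadata-verification.md`):

```shell
# Hypothetical Step 2.5 spot-check: every file path the plan names must exist in HEAD.
check_plan_files() {  # $1 = file listing planned paths, one per line
  while IFS= read -r path; do
    [ -n "$path" ] || continue
    if ! git ls-tree -r --name-only HEAD | grep -qxF "$path"; then
      echo "metadata_failure: $path not found in HEAD"
    fi
  done < "$1"
}
```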
### Step 2.6: Pre-Council Deep Audit Sweep

Skip if `--quick` or `--skip-sweep`.

Before council runs, dispatch a deep audit sweep to systematically discover issues across all changed files. This uses the same protocol as `/vibe --deep` — see the deep audit protocol in the vibe skill (`skills/vibe/`) for the full specification.

In summary:

1. Identify all files in scope (from epic commits or recent changes)
2. Chunk files into batches of 3-5 by line count (<=100 lines -> batch of 5, 101-300 -> batch of 3, >300 -> solo)
3. Dispatch up to 8 Explore agents in parallel, each with a mandatory 8-category checklist per file (resource leaks, string safety, dead code, hardcoded values, edge cases, concurrency, error handling, HTTP/web security)
4. Merge all explorer findings into a sweep manifest at `.agents/council/sweep-manifest.md`
5. Include the sweep manifest in the council packet — judges shift to adjudication mode (confirm/reject/reclassify sweep findings + add cross-cutting findings)

Why: Post-mortem council judges exhibit satisfaction bias when reviewing monolithic file sets — they stop at ~10 findings regardless of actual issue count. Per-file explorers with category checklists find 3x more issues, and the sweep manifest gives judges structured input to adjudicate rather than discover from scratch.

Skip conditions:

- `--quick` flag -> skip (fast inline path)
- `--skip-sweep` flag -> skip (old behavior: judges do pure discovery)
- No source files in scope -> skip (nothing to audit)
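The chunking rule above can be sketched as (a hypothetical helper mirroring the stated thresholds):

```shell
# Hypothetical batch-size rule for the Step 2.6 sweep chunking.
batch_size() {  # $1 = file line count
  if   [ "$1" -le 100 ]; then echo 5   # small files: batch of 5
  elif [ "$1" -le 300 ]; then echo 3   # medium files: batch of 3
  else                        echo 1   # large files: reviewed solo
  fi
}
```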
### Step 3: Council Validates the Work

Run `/council` with the `retrospective` preset and always 3 judges:

```
/council --deep --preset=retrospective validate
```

Default (3 judges with retrospective perspectives):

- plan-compliance — What was planned vs what was delivered? What's missing? What was added?
- tech-debt — What shortcuts were taken? What will bite us later? What needs cleanup?
- learnings — What patterns emerged? What should be extracted as reusable knowledge?

Post-mortem always uses 3 judges (`--deep`) because completed work deserves thorough review.

Timeout: Post-mortem inherits council timeout settings. If judges time out, the council report will note partial results. Post-mortem treats a partial council report the same as a full report — the verdict stands with available judges.

The plan/spec content is injected into the council packet context so the plan-compliance judge can compare planned vs delivered.

With `--quick` (inline, no spawning):

```
/council --quick validate
```

Single-agent structured review. Fast wrap-up without spawning.

With debate mode:

```
/post-mortem --debate epic-123
```

Enables adversarial two-round review for post-implementation validation. Use for high-stakes shipped work where missed findings have production consequences. See the /council docs for full `--debate` details.

Advanced options (passed through to council):

- `--mixed` — Cross-vendor (Claude + Codex) with retrospective perspectives
- `--preset=<name>` — Override with different personas (e.g., `--preset=ops` for production readiness)
- `--explorers=N` — Each judge spawns N explorers to investigate the implementation deeply before judging
- `--debate` — Two-round adversarial review (judges critique each other's findings before final verdict)

## Phase 2: Extract Learnings

Inline extraction of learnings from the completed work (formerly delegated to the retro skill).

### Step EX.1: Gather Context

```bash
# Recent commits
git log --oneline -20 --since="7 days ago"

# Epic children (if epic ID provided)
bd children <epic-id> 2>/dev/null | head -20

# Recent plans and research
ls -lt .agents/plans/ .agents/research/ 2>/dev/null | head -10
```

Read relevant artifacts: research documents, plan documents, commit messages, code changes. Use the Read tool and git commands to understand what was done.

If retrospecting an epic: run the closure integrity quick-check from `references/context-gathering.md` (Phantom Bead Detection + Multi-Wave Regression Scan). Include any warnings in the findings.
### Step EX.2: Classify Learnings

Ask these questions:

- What went well?
  - What approaches worked?
  - What was faster than expected?
  - What should we do again?
- What went wrong?
  - What failed?
  - What took longer than expected?
  - What would we do differently?
- What did we discover?
  - New patterns found
  - Codebase quirks learned
  - Tool tips discovered
  - Debugging insights

For each learning, capture:

- **ID** — L1, L2, L3...
- **Category** — debugging, architecture, process, testing, security
- **What** — the specific insight
- **Why it matters** — impact on future work
- **Confidence** — high, medium, low

### Step EX.3: Write Learnings

Write to: `.agents/learnings/YYYY-MM-DD-<slug>.md`

```markdown
---
id: learning-YYYY-MM-DD-<slug>
type: learning
date: YYYY-MM-DD
category: <category>
confidence: <high|medium|low>
---

# Learning: <title>

## What We Learned

<1-2 sentences describing the insight>

## Why It Matters

<1 sentence on impact/value>

## Source

<What work this came from>
```

Write one file per learning, carrying the `**ID**` (L1, L2, ...) assigned in Step EX.2.

### Step EX.4: Classify Learning Scope

For each learning extracted in Step EX.3, classify:

Question: "Does this learning reference specific files, packages, or architecture in THIS repo? Or is it a transferable pattern that helps any project?"

Repo-specific -> Write to `.agents/learnings/` (existing behavior from Step EX.3). Use `git rev-parse --show-toplevel` to resolve the repo root — never write relative to cwd.

Cross-cutting/transferable -> Rewrite to remove repo-specific context (file paths, function names, package names), then:

1. Write the abstracted version to `~/.agents/learnings/YYYY-MM-DD-<slug>.md` (NOT local — one copy only)
2. Run the abstraction lint check:

```bash
file="<learning-file>"
grep -iEn '(internal/|cmd/|\.go:|/pkg/|/src/|AGENTS\.md|CLAUDE\.md)' "$file" 2>/dev/null
grep -En '[A-Z][a-z]+[A-Z][a-z]+\.(go|py|ts|rs)' "$file" 2>/dev/null
grep -En '\./[a-z]+/' "$file" 2>/dev/null
```

If there are matches: WARN the user with the matched lines and ask whether to proceed or revise. Never block the write.

Note: Each learning goes to ONE location (local or global). No `promoted_to` needed — there's no local copy to mark when writing directly to global.

Example abstraction:

- Local: "Athena's validate package needs O_CREATE|O_EXCL for atomic claims because Zeus spawns concurrent workers"
- Global: "Use O_CREATE|O_EXCL for atomic file creation when multiple processes may race on the same path"

### Step EX.5: Write Structured Findings to Registry

Before backlog processing, normalize reusable council findings into `.agents/findings/registry.jsonl`. Use the tracked contract in `docs/contracts/finding-registry.md`:

- Persist only reusable findings that should change future planning or review behavior
- Require `dedup_key`, provenance, `pattern`, `detection_question`, `checklist_item`, `applicable_when`, and `confidence`
- `applicable_when` must use the controlled vocabulary from the contract
- Append or merge by `dedup_key`
- Use the contract's temp-file-plus-rename atomic write rule

This registry is the v1 advisory prevention surface. It complements learnings and next-work; it does not replace them.

### Step EX.6: Compile Constraint Templates (Deferred Follow-On)

Constraint compilation remains a follow-on surface. Do not present draft templates as active runtime enforcement in the v1 registry-first slice. If you are explicitly operating in a later compiler-enabled slice, for each extracted learning scoring >= 4/5 on actionability AND tagged "constraint" or "anti-pattern", run `bash hooks/constraint-compiler.sh` to generate a constraint template.

```bash
# Compile high-scoring constraint/anti-pattern learnings into enforcement templates
for f in .agents/learnings/YYYY-MM-DD-*.md; do
  [ -f "$f" ] || continue
  bash hooks/constraint-compiler.sh "$f" 2>/dev/null || true
done
```

This produces draft constraint templates in `.agents/constraints/` that can later be activated via `ao constraint activate`, but that activation path is not part of the v1 registry-first contract.

## Phase 3: Process Backlog

Score, deduplicate, and flag stale learnings across the full backlog. This phase runs on ALL learnings, not just those extracted in Phase 2. Read `references/backlog-processing.md` for detailed scoring formulas, deduplication logic, and staleness criteria.

### Step BP.1: Load Last-Processed Marker

```bash
MARKER=".agents/ao/last-processed"
mkdir -p .agents/ao
if [ ! -f "$MARKER" ]; then
  ( date -v-30d +%Y-%m-%dT%H:%M:%S 2>/dev/null \
    || date -d "30 days ago" --iso-8601=seconds ) > "$MARKER"
fi
LAST_PROCESSED=$(cat "$MARKER")
```

### Step BP.2: Scan Unprocessed Learnings

```bash
find .agents/learnings/ -name "*.md" -newer "$MARKER" -not -path "*/archive/*" -type f | sort
```

If zero files are found: report "Backlog empty — no unprocessed learnings" and skip to Phase 4.

### Step BP.3: Deduplicate

For each pair of unprocessed learnings:

1. Extract the `# Learning:` title
2. Normalize: lowercase, strip punctuation, collapse whitespace
3. If two normalized titles share >= 80% word overlap, merge:
   - Keep the file with the highest confidence (high > medium > low); if tied, keep the most recent
   - Archive the duplicate with a `merged_into:` pointer

### Step BP.4: Score Each Learning

Compute a composite score for each learning:

| Factor | Values | Points |
| --- | --- | --- |
| Confidence | high=3, medium=2, low=1 | 1-3 |
| Citations | default=1, +1 per cite in `.agents/ao/citations.jsonl` | 1+ |
| Recency | <7d=3, <30d=2, else=1 | 1-3 |

Score = confidence + citations + recency

### Step BP.5: Flag Stale

Learnings that are >30 days old AND have zero citations are flagged for retirement in Phase 5.

```bash
# Flag but do not archive yet — Phase 5 handles retirement
if [ "$DAYS_OLD" -gt 30 ] && [ "$CITE_COUNT" -eq 0 ]; then
  echo "STALE: $LEARNING_FILE (${DAYS_OLD}d old, 0 citations)"
fi
```

### Step BP.6: Report

```
Phase 3 (Process Backlog) Summary:
- N learnings scanned
- N duplicates merged
- N scored (range: X-Y)
- N flagged stale
```

## Phase 4: Activate

Promote high-value learnings and feed downstream systems. Read `references/activation-policy.md` for detailed promotion thresholds and procedures.

If `--skip-activate` is set: skip this phase entirely and report "Phase 4 skipped (--skip-activate)."

### Step ACT.1: Promote to MEMORY.md

Learnings with score >= 6 are promoted:

1. Read the learning file
2. Extract the title and core insight
3. Check MEMORY.md for duplicate entries (grep for key phrases)
4. If no duplicate: append to `## Key Lessons` in MEMORY.md
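The Step BP.4 composite score behind this score >= 6 gate can be sketched as (a hypothetical helper; the shipped scoring lives in `references/backlog-processing.md`):

```shell
# Hypothetical composite score: confidence + citations + recency (Step BP.4).
score_learning() {  # $1 = high|medium|low, $2 = citation count, $3 = age in days
  local c r
  case "$1" in high) c=3 ;; medium) c=2 ;; *) c=1 ;; esac
  if [ "$3" -lt 7 ]; then r=3; elif [ "$3" -lt 30 ]; then r=2; else r=1; fi
  echo $(( c + 1 + $2 + r ))   # citation points = 1 default + 1 per cite
}

score_learning high 2 3    # 3 + 3 + 3 = 9, promoted (>= 6)
score_learning low 0 45    # 1 + 1 + 1 = 3, stays in the backlog
```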

The appended `## Key Lessons` entry format:

```markdown
## Key Lessons

- **<title>** — <one-line insight> (source: `.agents/learnings/<filename>`)
```

Important: Append only. Never overwrite MEMORY.md.

### Step ACT.2: Compile Constraints (Deferred Follow-On)

The v1 finding-registry loop stops at structured registry entries. Draft constraint compilation remains a deferred follow-on and must not be described as active runtime enforcement in this slice. If explicitly operating in a later compiler-enabled slice, anti-patterns and constraint-tagged learnings can still flow through `bash hooks/constraint-compiler.sh`:

```bash
for f in .agents/learnings/YYYY-MM-DD-*.md; do
  [ -f "$f" ] || continue
  grep -qE "constraint|anti-pattern" "$f" 2>/dev/null || continue
  bash hooks/constraint-compiler.sh "$f" 2>/dev/null || true
done
```

### Step ACT.3: Feed Next-Work

Actionable improvements identified during processing -> append one schema v1.3 batch entry to `.agents/rpi/next-work.jsonl` using the tracked contract in `../../.agents/rpi/next-work.schema.md` and the write procedure in `references/harvest-next-work.md`:

```bash
mkdir -p .agents/rpi

# Build VALID_ITEMS via the schema-validation flow in references/harvest-next-work.md,
# then append one entry per post-mortem / epic.
ENTRY_TIMESTAMP="$(date -Iseconds)"
SOURCE_EPIC="${EPIC_ID:-recent}"
VALID_ITEMS_JSON="${VALID_ITEMS_JSON:-[]}"
printf '%s\n' "$(jq -cn \
  --arg source_epic "$SOURCE_EPIC" \
  --arg timestamp "$ENTRY_TIMESTAMP" \
  --argjson items "$VALID_ITEMS_JSON" \
  '{ source_epic: $source_epic, timestamp: $timestamp, items: $items,
     consumed: false, claim_status: "available", claimed_by: null,
     claimed_at: null, consumed_by: null, consumed_at: null }')" \
  >> .agents/rpi/next-work.jsonl
```

### Step ACT.4: Update Marker

```bash
date -Iseconds > .agents/ao/last-processed
```

This must be the LAST action in Phase 4.

### Step ACT.5: Report

```
Phase 4 (Activate) Summary:
- N promoted to MEMORY.md
- N duplicates merged
- N flagged for retirement
- N constraints compiled
- N improvements fed to next-work.jsonl
```

## Phase 5: Retire Stale

Archive learnings that are no longer earning their keep.

### Step RET.1: Archive Stale Learnings

Learnings flagged in Phase 3 (>30d old, zero citations):

```bash
mkdir -p .agents/learnings/archive
for f in <stale-files>; do
  mv "$f" .agents/learnings/archive/
  echo "Archived: $f (stale: >30d, 0 citations)"
done
```

### Step RET.2: Archive Superseded Learnings

Learnings merged during Phase 3 deduplication were already archived with `merged_into:` pointers. Verify the pointers are valid:

```bash
for f in .agents/learnings/archive/*.md; do
  [ -f "$f" ] || continue
  MERGED_INTO=$(grep "^merged_into:" "$f" 2>/dev/null | awk '{print $2}')
  if [ -n "$MERGED_INTO" ] && [ ! -f "$MERGED_INTO" ]; then
    echo "WARN: $f points to missing file: $MERGED_INTO"
  fi
done
```

### Step RET.3: Clean MEMORY.md References

If any archived learning was previously promoted to MEMORY.md, remove those entries:

```bash
for f in <archived-files>; do
  BASENAME=$(basename "$f")
  # Check if MEMORY.md references this file
  if grep -q "$BASENAME" MEMORY.md 2>/dev/null; then
    echo "WARN: MEMORY.md references archived learning: $BASENAME — consider removing"
  fi
done
```

Note: Do not auto-delete MEMORY.md entries. WARN the user and let them decide.
### Step RET.4: Report

```
Phase 5 (Retire) Summary:
- N stale learnings archived
- N superseded learnings archived
- N MEMORY.md references to review
```

### Step 4: Write Post-Mortem Report

Write to: `.agents/council/YYYY-MM-DD-post-mortem-<topic>.md`

```markdown
---
id: post-mortem-YYYY-MM-DD-<topic-slug>
type: post-mortem
date: YYYY-MM-DD
source: "[[.agents/plans/YYYY-MM-DD-<plan-slug>]]"
---

# Post-Mortem: <Epic/Topic>

**Epic:** <epic-id or "recent">
**Duration:** <elapsed time from PM_START to now>
**Cycle-Time Trend:** <compare against prior post-mortems — is this faster or slower? Check .agents/council/ for prior post-mortem Duration values>

## Council Verdict: PASS / WARN / FAIL

| Judge | Verdict | Key Finding |
| --- | --- | --- |
| Plan-Compliance | ... | ... |
| Tech-Debt | ... | ... |
| Learnings | ... | ... |

### Implementation Assessment
<council summary>

### Concerns
<any issues found>

## Learnings (from Phase 2)

### What Went Well
...

### What Was Hard
...

### Do Differently Next Time
...

### Patterns to Reuse
...

### Anti-Patterns to Avoid
...

### Footgun Entries (Required)

List discovered footguns — common mistakes or surprising behaviors that cost time:

| Footgun | Impact | Prevention |
| --- | --- | --- |
| description | how it wasted time | how to prevent |

These entries are promoted to `.agents/learnings/` and injected into future
worker prompts to prevent recurrence. Zero-cycle lag between discovery and
prevention.

## Knowledge Lifecycle

### Backlog Processing (Phase 3)
- Scanned: N learnings
- Merged: N duplicates
- Flagged stale: N

### Activation (Phase 4)
- Promoted to MEMORY.md: N
- Constraints compiled: N
- Next-work items fed: N

### Retirement (Phase 5)
- Archived: N learnings

## Proactive Improvement Agenda

| # | Area | Improvement | Priority | Horizon | Effort | Evidence |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | repo / execution / ci-automation | ... | P0/P1/P2 | now/next-cycle/later | S/M/L | ... |

## Prior Findings Resolution Tracking

| Metric | Value |
| --- | --- |
| Backlog entries analyzed | ... |
| Prior findings total | ... |
| Resolved findings | ... |
| Unresolved findings | ... |
| Resolution rate | ...% |

| Source Epic | Findings | Resolved | Unresolved | Resolution Rate |
| --- | ---: | ---: | ---: | ---: |
| ... | ... | ... | ... | ...% |

## Command-Surface Parity Checklist

| Command File | Run-path Covered by Test? | Evidence (file:line or test name) | Intentionally Uncovered? | Reason |
| --- | --- | --- | --- | --- |
| cli/cmd/ao/<command>.go | yes/no | ... | yes/no | ... |

## Next Work

| # | Title | Type | Severity | Source | Target Repo |
| --- | --- | --- | --- | --- | --- |
| 1 | <title> | tech-debt / improvement / pattern-fix / process-improvement | high / medium / low | council-finding / retro-learning / retro-pattern | <repo-name or *> |

### Recommended Next /rpi

/rpi "<highest-value improvement>"

## Status

- [ ] CLOSED - Work complete, learnings captured
- [ ] FOLLOW-UP - Issues need addressing (create new beads)
```

### Step 4.5: Synthesize Proactive Improvement Agenda (MANDATORY)

After writing the post-mortem report, analyze the extraction + council context and proactively propose improvements to repo quality and execution quality.

Read the extraction output (from Phase 2) and the council report (from Step 3). For each learning, ask:

- What process does this improve? (build, test, review, deploy, documentation, automation, etc.)
- What's the concrete change? (new check, new automation, workflow change, tooling improvement)
- Is it actionable in one RPI cycle? (if not, split into smaller pieces)

Coverage requirements:

- Include ALL improvements found (no cap).
- Cover all three surfaces: repo (code/contracts/docs quality), execution (planning/implementation/review workflow), ci-automation (validation/tooling reliability).
- Include at least 1 quick win (small, low-risk, same-session viable).
- Write process improvement items with type `process-improvement` (distinct from `tech-debt` or `improvement`).

Each item must have:

- title — imperative form, e.g. "Add pre-commit lint check"
- area — which part of the development process to improve
- description — 2-3 sentences describing the change and why retro evidence supports it
- evidence — which retro finding or council finding motivates this
- priority — P0 / P1 / P2
- horizon — now / next-cycle / later
- effort — S / M / L

These items feed directly into Step 5 (Harvest Next Work) alongside council findings. They are the flywheel's growth vector — each cycle makes the system smarter. Write this into the post-mortem report under `## Proactive Improvement Agenda`. Example output:

| # | Area | Improvement | Priority | Horizon | Effort | Evidence |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | ci-automation | Add validation metadata requirement for Go tasks | P0 | now | S | Workers shipped untested code when metadata didn't require `go test` |
| 2 | execution | Add consistency-check finding category in review | P1 | next-cycle | M | Partial refactoring left stale references undetected |

### Step 4.6: Prior-Findings Resolution Tracking (MANDATORY)

After Step 4.5, compute and include prior-findings resolution tracking from `.agents/rpi/next-work.jsonl`. Read `references/harvest-next-work.md` for the jq queries that compute totals and per-source resolution rates. Write the results into `## Prior Findings Resolution Tracking` in the post-mortem report.
### Step 4.7: Command-Surface Parity Gate (MANDATORY)

Before marking the post-mortem complete, enforce command-surface parity for modified CLI commands:

1. Identify modified command files under `cli/cmd/ao/` from the reviewed scope.
2. For each file, record at least one tested run-path (unit/integration/e2e) in `## Command-Surface Parity Checklist`.
3. Any intentionally uncovered command family must be explicitly listed with a reason and a follow-up item.

If any modified command file is missing both coverage evidence and an intentional-uncovered rationale, the post-mortem cannot be marked complete.

### Step 5: Harvest Next Work

Scan the council report and extracted learnings for actionable follow-up items:

- Council findings: Extract tech debt, warnings, and improvement suggestions from the council report (items with severity "significant" or "critical" that weren't addressed in this epic)
- Retro patterns: Extract recurring patterns from learnings that warrant dedicated RPIs (items from "Do Differently Next Time" and "Anti-Patterns to Avoid")
- Process improvements: Include all items from Step 4.5 (type: `process-improvement`). These are the flywheel's growth vector — each cycle makes development more effective.
- Footgun entries (REQUIRED): Extract platform-specific gotchas, surprising API behaviors, or silent-failure modes discovered during implementation. Each must include: trigger condition, observable symptom, and fix. Write as type `pattern-fix` with source `retro-learning`. If a footgun was discovered this cycle, it must appear in this harvest — do not defer.

Write the `## Next Work` section to the post-mortem report:

```markdown
## Next Work

| # | Title | Type | Severity | Source | Target Repo |
| --- | --- | --- | --- | --- | --- |
| 1 | <title> | tech-debt / improvement / pattern-fix / process-improvement | high / medium / low | council-finding / retro-learning / retro-pattern | <repo-name or *> |
```

SCHEMA VALIDATION (MANDATORY): Before writing, validate each harvested item against the tracked contract in `.agents/rpi/next-work.schema.md`. Read `references/harvest-next-work.md` for the validation function and write procedure. Drop invalid items; do NOT block the entire harvest.

Write to next-work.jsonl (canonical path: `.agents/rpi/next-work.jsonl`). Read `references/harvest-next-work.md` for the write procedure (target_repo assignment, claim/finalize lifecycle, JSONL format, required fields).

Do NOT auto-create bd issues. Report the items and suggest: "Run /rpi --spawn-next to create an epic from these items."

If no actionable items are found, write: "No follow-up items identified. Flywheel stable."
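The drop-don't-block validation can be sketched with jq (a hypothetical shape check; the authoritative rules live in `.agents/rpi/next-work.schema.md` and `references/harvest-next-work.md`):

```shell
# Hypothetical per-item shape check for harvested next-work items.
validate_item() {  # $1 = one JSON object
  printf '%s' "$1" | jq -e '
      (.title    | type == "string" and length > 0) and
      (.type     | IN("tech-debt", "improvement", "pattern-fix", "process-improvement")) and
      (.severity | IN("high", "medium", "low"))
    ' > /dev/null 2>&1
}

# Drop invalid items instead of blocking the harvest:
validate_item '{"title":"Add pre-commit lint","type":"process-improvement","severity":"medium"}' \
  && echo "keep" || echo "drop"
```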
</p> </blockquote> <h2 id="step-6-feed-the-knowledge-flywheel">Step 6: Feed the Knowledge Flywheel</h2> <p>Post-mortem automatically feeds learnings into the flywheel:</p> <pre><code>if command -v ao &> /dev/null; then
  ao forge markdown .agents/learnings/*.md 2>/dev/null
  echo "Learnings indexed in knowledge flywheel"

  # Validate and lock artifacts that passed council review
  ao temper validate --min-feedback 0 .agents/learnings/YYYY-MM-DD-*.md 2>/dev/null || true
  echo "Artifacts validated for tempering"

  # Close session and trigger full flywheel close-loop (includes adaptive feedback)
  ao session close 2>/dev/null || true
  ao flywheel close-loop --quiet 2>/dev/null || true
  echo "Session closed, flywheel loop triggered"
else
  # Learnings are already in .agents/learnings/ from Phase 2.
  # Without ao CLI, grep-based search in /research and /inject
  # will find them directly — no copy to pending needed.

  # Feedback-loop fallback: update confidence for cited learnings
  mkdir -p .agents/ao
  if [ -f .agents/ao/citations.jsonl ]; then
    echo "Processing citation feedback (ao-free fallback)..."
    # Read cited learning files and boost confidence notation
    while IFS= read -r line; do
      CITED_FILE=$(echo "$line" | grep -o '"learning_file":"[^"]*"' | cut -d'"' -f4)
      if [ -f "$CITED_FILE" ]; then
        # Note: confidence boost tracked via citation count, not file modification
        echo "Cited: $CITED_FILE"
      fi
    done < .agents/ao/citations.jsonl
  fi

  # Session-outcome fallback: record this session's outcome
  EPIC_ID="<epic-id>"
  echo "{\"epic\":\"$EPIC_ID\",\"verdict\":\"<council-verdict>\",\"cycle_time_minutes\":0,\"timestamp\":\"$(date -Iseconds)\"}" >> .agents/ao/outcomes.jsonl

  # Skip ao temper validate (no fallback needed — tempering is an optimization)
  echo "Flywheel fed locally (ao CLI not available — learnings searchable via grep)"
fi
</code></pre> <h2 id="step-7-report-to-user">Step 7: Report to User</h2> <p>Tell the user:</p> <ul> <li>Council verdict on implementation</li> <li>Key learnings</li> <li>Any follow-up items</li> <li>Location of post-mortem report</li> <li>Knowledge flywheel status</li> <li>Suggested next /rpi command from the harvested "Next Work" section (ALWAYS — this is how the flywheel spins itself)</li> <li>ALL proactive improvements, organized by priority (highlight one quick win)</li> <li>Knowledge lifecycle summary (Phase 3-5 stats)</li> </ul> <p>The next /rpi suggestion is MANDATORY, not opt-in. After every post-mortem, present the highest-severity harvested item as a ready-to-copy command:</p> <h2 id="flywheel-next-cycle">Flywheel: Next Cycle</h2> <p>Based on this post-mortem, the highest-priority follow-up is:</p> <blockquote> <p><strong>&lt;title&gt;</strong> (&lt;type&gt;, &lt;severity&gt;)</p> <p>&lt;1-line description&gt;</p> <p>Ready to run: /rpi "&lt;title&gt;"</p> <p>Or see all N harvested items in <code>.agents/rpi/next-work.jsonl</code>. If no items were harvested, write: "Flywheel stable — no follow-up items identified."
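Because next-work.jsonl is one JSON object per line, the "highest-severity harvested item" can be surfaced without the ao CLI, in the same ao-free spirit as the fallback above. A minimal sketch; the helper name and the high-before-medium-before-low scan order are illustrative assumptions:

```shell
# Print the title of the highest-severity item in a next-work JSONL file.
# Pure-shell sketch (no jq); assumes one JSON object per line with the
# "severity" and "title" fields from the Next Work table.
next_item() {
  file="${1:-.agents/rpi/next-work.jsonl}"
  for sev in high medium low; do
    line=$(grep -m1 "\"severity\":\"$sev\"" "$file") && {
      echo "$line" | grep -o '"title":"[^"]*"' | cut -d'"' -f4
      return 0
    }
  done
  return 1
}
```

The first match at the highest severity level wins, which is enough to fill in the ready-to-copy /rpi command.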
</p> <h2 id="integration-with-workflow">Integration with Workflow</h2> <pre><code>/plan epic-123
      |
      v
/pre-mortem   (council on plan)
      |
      v
/implement
      |
      v
/vibe         (council on code)
      |
      v
Ship it
      |
      v
/post-mortem  <-- You are here
      |
      |-- Phase 1: Council validates implementation
      |-- Phase 2: Extract learnings (inline)
      |-- Phase 3: Process backlog (score, dedup, flag stale)
      |-- Phase 4: Activate (promote to MEMORY.md, compile constraints)
      |-- Phase 5: Retire stale learnings
      |-- Phase 6: Harvest next work
      |-- Suggest next /rpi -------------------+
      |                                        |
      +----------------------------------------+
      | (flywheel: learnings become next work)
      v
/rpi "<highest-priority enhancement>"
</code></pre> <h2 id="examples">Examples</h2> <h3>Wrap Up Recent Work</h3> <p>User says: /post-mortem</p> <p>What happens:</p> <ol> <li>Agent scans recent commits (last 7 days)</li> <li>Runs /council --deep --preset=retrospective validate recent</li> <li>3 judges (plan-compliance, tech-debt, learnings) review</li> <li>Extracts learnings inline (Phase 2: context gathering, classification, writing)</li> <li>Processes backlog (Phase 3: scores, deduplicates, flags stale)</li> <li>Activates high-value learnings (Phase 4: promotes to MEMORY.md)</li> <li>Retires stale knowledge (Phase 5)</li> <li>Synthesizes process improvement proposals</li> <li>Harvests next-work items to .agents/rpi/next-work.jsonl</li> <li>Feeds learnings to knowledge flywheel via ao forge</li> </ol> <p>Result: Post-mortem report with learnings, tech debt identified, knowledge lifecycle stats, and a suggested next /rpi command.</p> <h3>Wrap Up Specific Epic</h3> <p>User says: /post-mortem ag-5k2</p> <p>What happens:</p> <ol> <li>Agent loads the original plan from bd show ag-5k2</li> <li>Council reviews implementation vs plan</li> <li>Phase 2 captures what went well and what was hard</li> <li>Phase 3 processes the full backlog (not just this epic's learnings)</li> <li>Phase 4 promotes 2 learnings to MEMORY.md, compiles 1 constraint</li> <li>Process improvements identified (e.g., "Add pre-commit lint check")</li> <li>Next-work items harvested and written to JSONL</li> </ol> <p>Result: Epic-specific post-mortem with 3 harvested follow-up items, 2 promoted learnings, 1 new constraint.</p> <h3>Quick Capture</h3> <p>User says: /post-mortem --quick "always use O_CREATE|O_EXCL for atomic file creation when racing"</p> <p>What happens:</p> <ol> <li>Agent generates slug: atomic-file-creation-racing</li> <li>Writes to .agents/learnings/2026-03-03-quick-atomic-file-creation-racing.md</li> <li>Confirms and returns immediately</li> </ol> <p>Result: Learning captured in 5 seconds, no council or backlog processing.</p> <h3>Process-Only Mode</h3> <p>User says: /post-mortem --process-only</p> <p>What happens:</p> <ol> <li>Skips council and extraction entirely</li> <li>Phase 3: Scans 47 learnings, merges 3 duplicates, flags 8 stale</li> <li>Phase 4: Promotes 5 high-scoring learnings to MEMORY.md, compiles 2 constraints</li> <li>Phase 5: Archives 8 stale learnings</li> </ol> <p>Result: Knowledge backlog cleaned up without running a new post-mortem.</p> <h3>Cross-Vendor Review</h3> <p>User says: /post-mortem --mixed ag-3b7</p> <p>What happens:</p> <ol> <li>Agent runs 3 Claude + 3 Codex judges</li> <li>Cross-vendor perspectives catch edge cases</li> <li>Verdict: WARN (missing error handling in 2 files)</li> <li>Phases 2-5 process learnings through the full lifecycle</li> <li>Harvests 1 tech-debt item</li> </ol> <p>Result: Higher-confidence validation with cross-vendor review before closing the epic.
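The slug rule used in the Quick Capture example above (lowercase, hyphens, max 50 chars) can be sketched as a small shell helper. Note this is an approximation: the skill also keeps only the "first meaningful words", and that stop-word filtering is deliberately omitted here:

```shell
# Slugify a quick-capture insight: lowercase, non-alphanumerics collapsed
# to hyphens, trimmed to 50 chars. Stop-word filtering ("first meaningful
# words") is intentionally left out of this sketch.
slugify() {
  echo "$1" \
    | tr '[:upper:]' '[:lower:]' \
    | tr -cs 'a-z0-9' '-' \
    | sed -e 's/^-//' -e 's/-$//' \
    | cut -c1-50
}

# slugify "Atomic File Creation!"  ->  atomic-file-creation
```

The resulting slug is then prefixed with the date to form the .agents/learnings/YYYY-MM-DD-quick-&lt;slug&gt;.md filename.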
</p> </blockquote> <h2 id="troubleshooting">Troubleshooting</h2> <table> <thead> <tr> <th>Problem</th> <th>Cause</th> <th>Solution</th> </tr> </thead> <tbody> <tr> <td>Council times out</td> <td>Epic too large or too many files changed</td> <td>Split post-mortem into smaller reviews or increase timeout</td> </tr> <tr> <td>No next-work items harvested</td> <td>Council found no tech debt or improvements</td> <td>Flywheel stable — write entry with empty items array to next-work.jsonl</td> </tr> <tr> <td>Schema validation failed</td> <td>Harvested item missing required field or has invalid enum value</td> <td>Drop invalid item, log error, proceed with valid items only</td> </tr> <tr> <td>Checkpoint-policy preflight blocks</td> <td>Prior FAIL verdict in ratchet chain without fix</td> <td>Resolve prior failure (fix + re-vibe) or skip via --skip-checkpoint-policy</td> </tr> <tr> <td>Metadata verification fails</td> <td>Plan vs actual files mismatch or missing cross-references</td> <td>Include failures in council packet as context.metadata_failures — judges assess severity</td> </tr> <tr> <td>Phase 3 finds zero learnings</td> <td>last-processed marker is very recent or no learnings exist</td> <td>Reset marker: date -v-30d +%Y-%m-%dT%H:%M:%S > .agents/ao/last-processed</td> </tr> <tr> <td>Phase 4 promotion duplicates</td> <td>MEMORY.md already has the insight</td> <td>Grep-based dedup should catch this; if not, manually deduplicate MEMORY.md</td> </tr> <tr> <td>Phase 5 archives too aggressively</td> <td>30-day window too short for slow-cadence projects</td> <td>Adjust the staleness threshold in references/backlog-processing.md</td> </tr> </tbody> </table> <h2 id="see-also">See Also</h2> <ul> <li>skills/council/SKILL.md — Multi-model validation council</li> <li>skills/vibe/SKILL.md — Council validates code (/vibe after coding)</li> <li>skills/pre-mortem/SKILL.md — Council validates plans (before implementation)</li> </ul> <h2 id="reference-documents">Reference Documents</h2> <ul> <li>references/harvest-next-work.md</li> <li>references/learning-templates.md</li> <li>references/plan-compliance-checklist.md</li> <li>references/closure-integrity-audit.md</li> <li>references/security-patterns.md</li> <li>references/checkpoint-policy.md</li> <li>references/metadata-verification.md</li> <li>references/context-gathering.md</li> <li>references/output-templates.md</li> <li>references/backlog-processing.md</li> <li>references/activation-policy.md</li> </ul> </article> </div> <aside
class="sidebar"></aside> </div> </div> </body> </html>