self-improvement

Installs: 4.9K
Rank: #596

Install:

```
npx skills add https://github.com/pskoett/self-improving-agent --skill self-improvement
```

# Self-Improvement Skill

Log learnings and errors to markdown files for continuous improvement. Coding agents can later process these into fixes, and important learnings get promoted to project memory.

## Quick Reference

| Situation | Action |
| --- | --- |
| Command/operation fails | Log to `.learnings/ERRORS.md` |
| User corrects you | Log to `.learnings/LEARNINGS.md` with category `correction` |
| User wants missing feature | Log to `.learnings/FEATURE_REQUESTS.md` |
| API/external tool fails | Log to `.learnings/ERRORS.md` with integration details |
| Knowledge was outdated | Log to `.learnings/LEARNINGS.md` with category `knowledge_gap` |
| Found better approach | Log to `.learnings/LEARNINGS.md` with category `best_practice` |
| Similar to existing entry | Link with `See Also`, consider a priority bump |
| Broadly applicable learning | Promote to `CLAUDE.md`, `AGENTS.md`, and/or `.github/copilot-instructions.md` |
| Workflow improvements | Promote to `AGENTS.md` (OpenClaw workspace) |
| Tool gotchas | Promote to `TOOLS.md` (OpenClaw workspace) |
| Behavioral patterns | Promote to `SOUL.md` (OpenClaw workspace) |

## OpenClaw Setup (Recommended)

OpenClaw is the primary platform for this skill. It uses workspace-based prompt injection with automatic skill loading.

### Installation

Via ClawdHub (recommended):

```
clawdhub install self-improving-agent
```

Manual:

```
git clone https://github.com/peterskoett/self-improving-agent.git ~/.openclaw/skills/self-improving-agent
```

### Workspace Structure

OpenClaw injects these files into every session:

```
~/.openclaw/workspace/
├── AGENTS.md            # Multi-agent workflows, delegation patterns
├── SOUL.md              # Behavioral guidelines, personality, principles
├── TOOLS.md             # Tool capabilities, integration gotchas
├── MEMORY.md            # Long-term memory (main session only)
├── memory/              # Daily memory files
│   └── YYYY-MM-DD.md
└── .learnings/          # This skill's log files
    ├── LEARNINGS.md
    ├── ERRORS.md
    └── FEATURE_REQUESTS.md
```

### Create Learning Files

```
mkdir -p ~/.openclaw/workspace/.learnings
```

Then create the log files (or copy them from `assets/`):

- `LEARNINGS.md` — corrections, knowledge gaps, best practices
- `ERRORS.md` — command failures, exceptions
- `FEATURE_REQUESTS.md` — user-requested capabilities

### Promotion Targets

When learnings prove broadly applicable, promote them to workspace files:

| Learning Type | Promote To | Example |
| --- | --- | --- |
| Behavioral patterns | `SOUL.md` | "Be concise, avoid disclaimers" |
| Workflow improvements | `AGENTS.md` | "Spawn sub-agents for long tasks" |
| Tool gotchas | `TOOLS.md` | "Git push needs auth configured first" |

### Inter-Session Communication

OpenClaw provides tools to share learnings across sessions:

- `sessions_list` — View active/recent sessions
- `sessions_history` — Read another session's transcript
- `sessions_send` — Send a learning to another session
- `sessions_spawn` — Spawn a sub-agent for background work

### Optional: Enable Hook

For automatic reminders at session start:

```
# Copy hook to OpenClaw hooks directory
cp -r hooks/openclaw ~/.openclaw/hooks/self-improvement

# Enable it
openclaw hooks enable self-improvement
```

See `references/openclaw-integration.md` for complete details.

## Generic Setup (Other Agents)

For Claude Code, Codex, Copilot, or other agents, create `.learnings/` in your project:

```
mkdir -p .learnings
```

Copy templates from `assets/`, or create the files with headers.

## Logging Format

### Learning Entry

Append to `.learnings/LEARNINGS.md`:

```markdown
## [LRN-YYYYMMDD-XXX] category

**Logged**: ISO-8601 timestamp
**Priority**: low | medium | high | critical
**Status**: pending
**Area**: frontend | backend | infra | tests | docs | config

### Summary
One-line description of what was learned

### Details
Full context: what happened, what was wrong, what's correct

### Suggested Action
Specific fix or improvement to make

### Metadata
- Source: conversation | error | user_feedback
- Tags: tag1, tag2
- See Also: LRN-20250110-001 (if related to an existing entry)
```
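
For concreteness, here is a minimal shell sketch of appending one such entry; the ID, timestamp, and content are illustrative values only (they reuse the pnpm example from the promotion section below):

```bash
# Append an example learning entry to the log (values are illustrative)
cat >> .learnings/LEARNINGS.md <<'EOF'

## [LRN-20250115-001] best_practice

**Logged**: 2025-01-15T14:30:00Z
**Priority**: medium
**Status**: pending
**Area**: config

### Summary
Project uses pnpm workspaces; npm install fails against pnpm-lock.yaml

### Details
Attempted `npm install`, which failed. The lock file is `pnpm-lock.yaml`, so `pnpm install` must be used.

### Suggested Action
Document the package manager in CLAUDE.md and use `pnpm install` everywhere.

### Metadata
- Source: error
- Tags: pnpm, dependencies
EOF
```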

### Error Entry

Append to `.learnings/ERRORS.md`:

```markdown
## [ERR-YYYYMMDD-XXX] skill_or_command_name

**Logged**: ISO-8601 timestamp
**Priority**: high
**Status**: pending
**Area**: frontend | backend | infra | tests | docs | config

### Summary
Brief description of what failed

### Error
Actual error message or output

### Context
- Command/operation attempted
- Input or parameters used
- Environment details if relevant

### Suggested Fix
If identifiable, what might resolve this

### Metadata
- Reproducible: yes | no | unknown
- Related Files: path/to/file.ext
- See Also: ERR-20250110-001 (if recurring)
```

### Feature Request Entry

Append to `.learnings/FEATURE_REQUESTS.md`:

```markdown
## [FEAT-YYYYMMDD-XXX] capability_name

**Logged**: ISO-8601 timestamp
**Priority**: medium
**Status**: pending
**Area**: frontend | backend | infra | tests | docs | config

### Requested Capability
What the user wanted to do

### User Context
Why they needed it, what problem they're solving

### Complexity Estimate
simple | medium | complex

### Suggested Implementation
How this could be built, what it might extend

### Metadata
- Frequency: first_time | recurring
```

## ID Generation

Format: `TYPE-YYYYMMDD-XXX`

- TYPE: `LRN` (learning), `ERR` (error), `FEAT` (feature)
- YYYYMMDD: Current date
- XXX: Sequential number or random 3 chars (e.g., `001`, `A7B`)

Examples: `LRN-20250115-001`, `ERR-20250115-A3F`, `FEAT-20250115-002`
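Agents that want to mint IDs programmatically can use a small helper along these lines; this is a sketch, `new_id` is not part of the skill, and a date-scoped sequential counter is equally valid:

```bash
# Generate an entry ID of the form TYPE-YYYYMMDD-XXX
# (random 3-character hex suffix; a sequential counter works just as well)
new_id() {
  printf '%s-%s-%03X\n' "$1" "$(date +%Y%m%d)" "$((RANDOM % 4096))"
}

new_id LRN    # e.g. LRN-20250115-A3F
```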

## Resolving Entries

When an issue is fixed, update the entry:

1. Change `Status: pending` → `Status: resolved` (a shell sketch for this follows the status list below)
2. Add a resolution block after Metadata:

```markdown
### Resolution
**Resolved**: 2025-01-16T09:00:00Z
**Commit/PR**: abc123 or #42
**Notes**: Brief description of what was done
```

Other status values:

- `in_progress` - Actively being worked on
- `wont_fix` - Decided not to address (add the reason in Resolution notes)
- `promoted` - Elevated to CLAUDE.md, AGENTS.md, or .github/copilot-instructions.md
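
As a minimal sketch, the status flip for a single entry can be scripted; the entry ID below is illustrative, this assumes GNU sed (on macOS/BSD use `sed -i ''`), and the Resolution block itself is still appended by hand or via a heredoc:

```bash
# Flip one entry's status to resolved, limited to the block between its header and the next entry header
sed -i '/^## \[ERR-20250115-A3F\]/,/^## \[/ s/\*\*Status\*\*: pending/**Status**: resolved/' .learnings/ERRORS.md
```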

## Promoting to Project Memory

When a learning is broadly applicable (not a one-off fix), promote it to permanent project memory.

### When to Promote

- Learning applies across multiple files/features
- Knowledge any contributor (human or AI) should know
- Prevents recurring mistakes
- Documents project-specific conventions

### Promotion Targets

| Target | What Belongs There |
| --- | --- |
| `CLAUDE.md` | Project facts, conventions, gotchas for all Claude interactions |
| `AGENTS.md` | Agent-specific workflows, tool usage patterns, automation rules |
| `.github/copilot-instructions.md` | Project context and conventions for GitHub Copilot |
| `SOUL.md` | Behavioral guidelines, communication style, principles (OpenClaw workspace) |
| `TOOLS.md` | Tool capabilities, usage patterns, integration gotchas (OpenClaw workspace) |

### How to Promote

1. Distill the learning into a concise rule or fact
2. Add it to the appropriate section in the target file (create the file if needed)
3. Update the original entry:
   - Change `Status: pending` → `Status: promoted`
   - Add `Promoted: CLAUDE.md`, `AGENTS.md`, or `.github/copilot-instructions.md`

### Promotion Examples

Learning (verbose): Project uses pnpm workspaces. Attempted `npm install` but it failed. The lock file is `pnpm-lock.yaml`, so `pnpm install` must be used.

In CLAUDE.md (concise):

```markdown
## Build & Dependencies
- Package manager: pnpm (not npm) - use `pnpm install`
```

Learning (verbose): When modifying API endpoints, the TypeScript client must be regenerated. Forgetting this causes type mismatches at runtime.

In AGENTS.md (actionable):

```markdown
## After API Changes
1. Regenerate client: `pnpm run generate:api`
2. Check for type errors: `pnpm tsc --noEmit`
```

## Recurring Pattern Detection

If logging something similar to an existing entry:

1. **Search first**: `grep -r "keyword" .learnings/`
2. **Link entries**: add `See Also: ERR-20250110-001` in Metadata
3. **Bump priority** if the issue keeps recurring
4. **Consider a systemic fix**

Recurring issues often indicate:

- Missing documentation (→ promote to CLAUDE.md or .github/copilot-instructions.md)
- Missing automation (→ add to AGENTS.md)
- Architectural problem (→ create a tech debt ticket)

## Periodic Review

Review `.learnings/` at natural breakpoints.

### When to Review

- Before starting a new major task
- After completing a feature
- When working in an area with past learnings
- Weekly during active development

### Quick Status Check

```bash
# Count pending items
grep -hF "Status**: pending" .learnings/*.md | wc -l

# List pending high-priority items
grep -h -B5 -F "Priority**: high" .learnings/*.md | grep "^## \["

# Find learnings for a specific area
grep -lF "Area**: backend" .learnings/*.md
```
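
A related one-liner that can help during review (a small sketch, not one of the skill's scripts) tallies entries per area tag:

```bash
# Count entries per area tag across all logs
grep -hF "Area**:" .learnings/*.md | sort | uniq -c | sort -rn
```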
### Review Actions

- Resolve fixed items
- Promote applicable learnings
- Link related entries
- Escalate recurring issues
## Detection Triggers

Automatically log when you notice:

**Corrections** (→ learning with `correction` category):

- "No, that's not right..."
- "Actually, it should be..."
- "You're wrong about..."
- "That's outdated..."

**Feature Requests** (→ feature request):

- "Can you also..."
- "I wish you could..."
- "Is there a way to..."
- "Why can't you..."

**Knowledge Gaps** (→ learning with `knowledge_gap` category):

- User provides information you didn't know
- Documentation you referenced is outdated
- API behavior differs from your understanding

**Errors** (→ error entry; see the sketch after this list):

- Command returns a non-zero exit code
- Exception or stack trace
- Unexpected output or behavior
- Timeout or connection failure
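
As one illustration of the error trigger, a failing command can be captured and turned into an `ERRORS.md` entry directly from the shell. This is a minimal sketch only; the command, ID suffix, and wording are illustrative, and real entries should follow the full format above:

```bash
# Run a command, and log an error entry if it exits non-zero (illustrative sketch)
cmd="pnpm run build"
if ! output=$($cmd 2>&1); then
  cat >> .learnings/ERRORS.md <<EOF

## [ERR-$(date +%Y%m%d)-001] ${cmd%% *}

**Logged**: $(date -u +%Y-%m-%dT%H:%M:%SZ)
**Priority**: high
**Status**: pending
**Area**: infra

### Summary
\`$cmd\` exited non-zero

### Error
$output

### Context
- Command/operation attempted: $cmd

### Metadata
- Reproducible: unknown
EOF
fi
```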
## Priority Guidelines

| Priority | When to Use |
| --- | --- |
| `critical` | Blocks core functionality, data loss risk, security issue |
| `high` | Significant impact, affects common workflows, recurring issue |
| `medium` | Moderate impact, workaround exists |
| `low` | Minor inconvenience, edge case, nice-to-have |
## Area Tags

Use these to filter learnings by codebase region:

| Area | Scope |
| --- | --- |
| `frontend` | UI, components, client-side code |
| `backend` | API, services, server-side code |
| `infra` | CI/CD, deployment, Docker, cloud |
| `tests` | Test files, testing utilities, coverage |
| `docs` | Documentation, comments, READMEs |
| `config` | Configuration files, environment, settings |
## Best Practices

- **Log immediately** - context is freshest right after the issue
- **Be specific** - future agents need to understand quickly
- **Include reproduction steps** - especially for errors
- **Link related files** - makes fixes easier
- **Suggest concrete fixes** - not just "investigate"
- **Use consistent categories** - enables filtering
- **Promote aggressively** - if in doubt, add to CLAUDE.md or .github/copilot-instructions.md
- **Review regularly** - stale learnings lose value
## Gitignore Options

**Keep learnings local** (per-developer):

```
.learnings/
```

**Track learnings in repo** (team-wide): don't add anything to `.gitignore` - learnings become shared knowledge.

**Hybrid** (track templates, ignore entries):

```
.learnings/*.md
!.learnings/.gitkeep
```
## Hook Integration

Enable automatic reminders through agent hooks. This is **opt-in** - you must explicitly configure hooks.

### Quick Setup (Claude Code / Codex)

Create `.claude/settings.json` in your project:

```json
{
  "hooks": {
    "UserPromptSubmit": [
      {
        "matcher": "",
        "hooks": [
          {
            "type": "command",
            "command": "./skills/self-improvement/scripts/activator.sh"
          }
        ]
      }
    ]
  }
}
```

This injects a learning evaluation reminder after each prompt (~50-100 tokens of overhead).
### Full Setup (With Error Detection)

```json
{
  "hooks": {
    "UserPromptSubmit": [
      {
        "matcher": "",
        "hooks": [
          {
            "type": "command",
            "command": "./skills/self-improvement/scripts/activator.sh"
          }
        ]
      }
    ],
    "PostToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          {
            "type": "command",
            "command": "./skills/self-improvement/scripts/error-detector.sh"
          }
        ]
      }
    ]
  }
}
```
### Available Hook Scripts

| Script | Hook Type | Purpose |
| --- | --- | --- |
| `scripts/activator.sh` | UserPromptSubmit | Reminds the agent to evaluate learnings after tasks |
| `scripts/error-detector.sh` | PostToolUse (Bash) | Triggers on command errors |

See `references/hooks-setup.md` for detailed configuration and troubleshooting.
## Automatic Skill Extraction

When a learning is valuable enough to become a reusable skill, extract it using the provided helper.

### Skill Extraction Criteria

A learning qualifies for skill extraction when ANY of these apply:

| Criterion | Description |
| --- | --- |
| Recurring | Has `See Also` links to 2+ similar issues |
| Verified | Status is `resolved` with a working fix |
| Non-obvious | Required actual debugging/investigation to discover |
| Broadly applicable | Not project-specific; useful across codebases |
| User-flagged | User says "save this as a skill" or similar |

### Extraction Workflow

1. **Identify candidate** - Learning meets the extraction criteria
2. **Run helper** (or create manually):

   ```
   ./skills/self-improvement/scripts/extract-skill.sh skill-name --dry-run
   ./skills/self-improvement/scripts/extract-skill.sh skill-name
   ```

3. **Customize SKILL.md** - Fill in the template with the learning content
4. **Update learning** - Set status to `promoted_to_skill`, add `Skill-Path`
5. **Verify** - Read the skill in a fresh session to ensure it's self-contained

### Manual Extraction

If you prefer manual creation:

1. Create `skills/<skill-name>/SKILL.md`
2. Use the template from `assets/SKILL-TEMPLATE.md`
3. Follow the Agent Skills spec:
   - YAML frontmatter with `name` and `description`
   - Name must match the folder name
   - No README.md inside the skill folder
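
A minimal sketch of the manual route, reusing the pnpm learning from the promotion example above; the skill name, description, and body are illustrative, and the real layout should follow `assets/SKILL-TEMPLATE.md`:

```bash
# Create a skill folder whose name matches the frontmatter name (illustrative content)
mkdir -p skills/pnpm-workspace-installs
cat > skills/pnpm-workspace-installs/SKILL.md <<'EOF'
---
name: pnpm-workspace-installs
description: In repos with pnpm-lock.yaml, install dependencies with pnpm, not npm.
---

# pnpm-workspace-installs

If a repository contains `pnpm-lock.yaml`, use `pnpm install`; `npm install` fails
against a pnpm workspace.
EOF
```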
### Extraction Detection Triggers

Watch for these signals that a learning should become a skill:

In conversation:

- "Save this as a skill"
- "I keep running into this"
- "This would be useful for other projects"
- "Remember this pattern"

In learning entries:

- Multiple `See Also` links (recurring issue)
- High priority + resolved status
- Category `best_practice` with broad applicability
- User feedback praising the solution

### Skill Quality Gates

Before extraction, verify:

- Solution is tested and working
- Description is clear without the original context
- Code examples are self-contained
- No project-specific hardcoded values
- Follows skill naming conventions (lowercase, hyphens)
## Multi-Agent Support

This skill works across different AI coding agents with agent-specific activation.

### Claude Code

- **Activation**: Hooks (UserPromptSubmit, PostToolUse)
- **Setup**: `.claude/settings.json` with hook configuration
- **Detection**: Automatic via hook scripts

### Codex CLI

- **Activation**: Hooks (same pattern as Claude Code)
- **Setup**: `.codex/settings.json` with hook configuration
- **Detection**: Automatic via hook scripts

### GitHub Copilot

- **Activation**: Manual (no hook support)
- **Setup**: Add to `.github/copilot-instructions.md`:

  ```markdown
  ## Self-Improvement

  After solving non-obvious issues, consider logging to .learnings/:
  1. Use the format from the self-improvement skill
  2. Link related entries with See Also
  3. Promote high-value learnings to skills

  Ask in chat: "Should I log this as a learning?"
  ```

- **Detection**: Manual review at session end

### OpenClaw

- **Activation**: Workspace injection + inter-agent messaging
- **Setup**: See the "OpenClaw Setup" section above
- **Detection**: Via session tools and workspace files

### Agent-Agnostic Guidance

Regardless of agent, apply self-improvement when you:

- **Discover something non-obvious** - the solution wasn't immediate
- **Correct yourself** - the initial approach was wrong
- **Learn project conventions** - you discovered undocumented patterns
- **Hit unexpected errors** - especially if diagnosis was difficult
- **Find better approaches** - you improved on your original solution

### Copilot Chat Integration

For Copilot users, add this to your prompts when relevant:

> After completing this task, evaluate if any learnings should be logged to .learnings/ using the self-improvement skill format.

Or use quick prompts:

- "Log this to learnings"
- "Create a skill from this solution"
- "Check .learnings/ for related issues"