# skill-system-memory

Installs: 44
Rank: #16607

## Install

```sh
npx skills add https://github.com/arthur0824hao/skills --skill skill-system-memory
```

## Skill System Memory (PostgreSQL)

Persistent shared memory for all AI agents. Requires PostgreSQL 14+ on Linux or Windows. Memory failures look like intelligence failures — this skill ensures the right memory is retrieved at the right time.

## Quick Start

The `agent_memory` database and all functions are created by `init.sql` in this skill directory.

### Linux

Replace `postgres` with your PostgreSQL superuser if different (e.g. your system username):

```sh
psql -U postgres -c "CREATE DATABASE agent_memory;"
psql -U postgres -d agent_memory -f init.sql
```

### Windows

Adjust the path to your `psql.exe`; replace `postgres` with your PG superuser if needed:

```powershell
& "C:\Program Files\PostgreSQL\18\bin\psql.exe" -U postgres -c "CREATE DATABASE agent_memory;"
& "C:\Program Files\PostgreSQL\18\bin\psql.exe" -U postgres -d agent_memory -f init.sql
```
### Note

If your PostgreSQL installation does not have a `postgres` role, use your actual PostgreSQL superuser name. On many Linux distros this matches your OS username. You can override at any time by setting `PGUSER` before running scripts: `export PGUSER=your_pg_username` (Linux/macOS) or `$env:PGUSER = "your_pg_username"` (PowerShell).

Verify:

```sql
SELECT * FROM memory_health_check();
```

## Pure Skill Mode (default)

This skill works without installing any plugin. In pure skill mode:

- you manually run scripts when you want (progressive disclosure)
- no global OpenCode config is modified automatically

## Optional bootstrap (asks + records choices + tries to install)

Notes:

- Interactive mode defaults to NOT installing heavy optional components.
- Use `-InstallAll` / `--install-all` only when you're ready to install everything.

Run the bootstrap script to choose optional components (pgpass, local embeddings, pgvector) and record decisions. Bootstrap can also optionally install the OpenCode compaction logging plugin (it will copy the plugin into your OpenCode plugins directory).

Windows:

Run from the skill directory:

```powershell
powershell.exe -NoProfile -ExecutionPolicy Bypass -File "scripts\bootstrap.ps1"
```

Linux/macOS:

Run from the skill directory:

```sh
bash "scripts/bootstrap.sh"
```

The selection record is stored at: `~/.config/opencode/skill-system-memory/setup.json`

Agent rule: if this file does not exist, ask the user whether they want to enable optional components.

Recommended: run bootstrap with all options enabled (then fix any failures it reports). On Windows, pgvector installation follows the official pgvector instructions (Visual Studio C++ + `nmake /F Makefile.win`). The bootstrap will attempt to install prerequisites via `winget`.

## Optional automation: compaction logging (OpenCode plugin)

If you want automatic compaction logging, install the OpenCode plugin template shipped with this skill.

Option A (recommended): run bootstrap and choose the plugin option.

Option B (manual):

1. Copy `plugins/skill-system-memory.js` to `~/.config/opencode/plugins/`
2. Restart OpenCode

Safety / rollback (if OpenCode gets stuck on startup):

1. Remove or rename `~/.config/opencode/plugins/skill-system-memory.js`
2. Restart OpenCode
3. Check logs:
   - macOS/Linux: `~/.local/share/opencode/log/`
   - Windows: `%USERPROFILE%\.local\share\opencode\log`

Plugin behavior notes:

- The plugin is designed to be a no-op unless you explicitly enabled it via bootstrap (`setup.json` sets `selected.opencode_plugin=true`).
- It only attempts a Postgres write if `selected.pgpass=true` (avoids hanging on auth prompts).

Uninstall:

1. Remove `~/.config/opencode/plugins/skill-system-memory.js`
2. Restart OpenCode

## Credentials (psql)

Do NOT hardcode passwords in scripts, skill docs, or config files.
Recommended options for non-interactive psql:

- `.pgpass` / `pgpass.conf` (recommended)
  - Linux/macOS: `~/.pgpass` (must be `chmod 0600 ~/.pgpass` or libpq will ignore it)
  - Windows: `%APPDATA%\postgresql\pgpass.conf` (example: `C:\Users\<username>\AppData\Roaming\postgresql\pgpass.conf`)
  - Format: `hostname:port:database:username:password`
  - Docs: https://www.postgresql.org/docs/current/libpq-pgpass.html
- `PGPASSFILE` (optional override): point to a custom location for the password file
- `PGPASSWORD` (not recommended): only for quick local testing; environment variables can leak on some systems
  - Docs: https://www.postgresql.org/docs/current/libpq-envars.html

Tip: set connection defaults once (per shell) to shorten commands:

```sh
export PGHOST=localhost
export PGPORT=5432
export PGDATABASE=agent_memory
export PGUSER=postgres   # change to your PG superuser if the postgres role does not exist
```
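The `.pgpass` entry described above can also be written programmatically. A minimal Python sketch (a hypothetical helper, not part of this skill's scripts; per the libpq docs, `:` and `\` inside any field must be backslash-escaped, and the file must be mode 0600 on POSIX):

```python
import os
from pathlib import Path

def pgpass_line(host, port, database, user, password):
    """Build one line of the .pgpass file: hostname:port:database:username:password."""
    def esc(field):
        # libpq requires backslash-escaping of '\' and ':' inside fields
        return str(field).replace("\\", "\\\\").replace(":", "\\:")
    return ":".join(esc(f) for f in (host, port, database, user, password))

def append_pgpass(line, path=None):
    """Append a credential line and set the 0600 mode libpq insists on (POSIX)."""
    path = Path(path) if path is not None else Path.home() / ".pgpass"
    with open(path, "a", encoding="utf-8") as fh:
        fh.write(line + "\n")
    os.chmod(path, 0o600)  # libpq silently ignores a group/world-readable file
```

On Windows, libpq reads `%APPDATA%\postgresql\pgpass.conf` instead and does not enforce the permission check.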

Shell copy/paste safety: avoid copying inline markdown backticks (e.g. `semantic`) into your shell. In zsh, backticks are command substitution. Prefer the wrapper scripts (`scripts/mem.sh`, `scripts/mem.ps1`) or copy from fenced code blocks.

## One-time setup helper scripts

This skill ships helper scripts (relative paths):

- `scripts/setup-pgpass.ps1`
- `scripts/setup-pgpass.sh`

OpenCode usage: run them from the skill directory.

Windows:

```powershell
powershell.exe -NoProfile -ExecutionPolicy Bypass -File "scripts\setup-pgpass.ps1"
```

Linux/macOS:

```sh
bash "scripts/setup-pgpass.sh"
```

## Memory Types

| Type | Lifespan | Use When |
|------|----------|----------|
| working | 24h auto-expire | Current conversation context (requires `session_id`) |
| episodic | Permanent + decay | Problem-solving experiences, debugging sessions |
| semantic | Permanent | Extracted facts, knowledge, patterns |
| procedural | Permanent | Step-by-step procedures, checklists (importance >= 7) |

## Core Functions

### store_memory(type, category, tags[], title, content, metadata, agent_id, session_id, importance)

Auto-deduplicates by content hash. Duplicate inserts bump `access_count` and `importance_score`.

```sql
SELECT store_memory(
  'semantic',
  'windows-networking',
  ARRAY['ssh', 'tunnel', 'port-conflict'],
  'SSH Tunnel Port Conflict Resolution',
  'Fix: 1) taskkill /F /IM ssh.exe 2) Use processId not pid 3) Wait 3s',
  '{"os": "Windows 11"}',
  'sisyphus',
  NULL,
  9.0
);
```

### Wrapper: scripts/mem.py (recommended: parameterized queries, no quoting issues)

Requirements: `pip install psycopg2-binary`
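Before the wrapper examples, the deduplication behaviour described above can be modelled in a few lines. This is an illustrative toy, not the implementation in `init.sql`; in particular, SHA-256 is an assumption — the skill only says "content hash":

```python
import hashlib

def content_hash(content: str) -> str:
    # Assumed mirror of the content_hash column: a SHA-256 digest of the body.
    return hashlib.sha256(content.encode("utf-8")).hexdigest()

class MemoryStore:
    """Toy in-memory model of store_memory()'s dedup behaviour."""

    def __init__(self):
        self.by_hash = {}

    def store(self, title, content, importance):
        h = content_hash(content)
        if h in self.by_hash:
            # Duplicate insert: bump counters instead of inserting a new row.
            row = self.by_hash[h]
            row["access_count"] += 1
            row["importance_score"] = max(row["importance_score"], importance)
            return row["id"]
        row = {"id": len(self.by_hash) + 1, "title": title,
               "importance_score": importance, "access_count": 0}
        self.by_hash[h] = row
        return row["id"]
```

Storing the same content twice returns the same id and bumps `access_count`, which is why repeated "store" calls are safe to automate.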

```sh
# DB connection status + total memory count
python3 scripts/mem.py status

# Search memories (safe even with special characters)
python3 scripts/mem.py search "pgvector windows install" 5

# Store a memory (--content flag)
python3 scripts/mem.py store semantic project "pgvector install" "postgres,pgvector,windows" 8 --content "Steps: ..."

# Store a memory (stdin)
printf '%s' "Steps: ..." | python3 scripts/mem.py store semantic project "pgvector install" "postgres,pgvector,windows" 8

# Pull relevant memories automatically at session start
python3 scripts/mem.py context "pgvector ssh tunnel"

# List all tags / categories in use
python3 scripts/mem.py tags
python3 scripts/mem.py categories
```

### Wrapper: scripts/mem.sh / scripts/mem.ps1 (shell fallback)

```sh
# connection status
bash "scripts/mem.sh" status

# search
bash "scripts/mem.sh" search "pgvector windows install" 5

# store (content via stdin)
printf '%s' "Steps: ..." | bash "scripts/mem.sh" store semantic project "pgvector install" "postgres,pgvector,windows" 8

# list tags / categories
bash "scripts/mem.sh" tags
bash "scripts/mem.sh" categories
```

```powershell
powershell.exe -NoProfile -ExecutionPolicy Bypass -File "scripts\mem.ps1" types
powershell.exe -NoProfile -ExecutionPolicy Bypass -File "scripts\mem.ps1" search "pgvector windows install" 5
"Steps: ..." | powershell.exe -NoProfile -ExecutionPolicy Bypass -File "scripts\mem.ps1" store semantic project "pgvector install" "postgres,pgvector,windows" 8
```

## Router Integration (optional)

If you use a Router skill that executes pinned pipelines, it can read a manifest embedded in this SKILL.md. For portability, the manifest block is fenced as YAML but the content is JSON (valid YAML). The Router parses it.

```yaml
{
  "schema_version": "2.0",
  "id": "skill-system-memory",
  "version": "0.2.0",
  "capabilities": ["memory-search", "memory-store", "memory-health", "memory-types", "memory-auto-write"],
  "effects": ["proc.exec", "db.read", "db.write"],
  "operations": {
    "search": {
      "description": "Search memories by natural language query. Returns ranked results with relevance scores.",
      "input": {
        "query": { "type": "string", "required": true, "description": "Natural language search query" },
        "limit": { "type": "integer", "required": false, "default": 5, "description": "Max results" }
      },
      "output": {
        "description": "Array of memory matches with id, title, content, relevance_score",
        "fields": { "status": "ok | error", "data": "array of {id, title, content, relevance_score}" }
      },
      "entrypoints": {
        "unix": ["bash", "scripts/router_mem.sh", "search", "{query}", "{limit}"],
        "windows": ["powershell.exe", "-NoProfile", "-ExecutionPolicy", "Bypass", "-File", "scripts\\router_mem.ps1", "search", "{query}", "{limit}"]
      }
    },
    "store": {
      "description": "Store a new memory. Auto-deduplicates by content hash.",
      "input": {
        "memory_type": { "type": "string", "required": true, "description": "One of: semantic, episodic, procedural, working" },
        "category": { "type": "string", "required": true, "description": "Category name" },
        "title": { "type": "string", "required": true, "description": "One-line summary" },
        "tags_csv": { "type": "string", "required": true, "description": "Comma-separated tags" },
        "importance": { "type": "integer", "required": true, "description": "1-10 importance score" }
      },
      "output": {
        "description": "Confirmation with stored memory id",
        "fields": { "status": "ok | error", "id": "integer" }
      },
      "entrypoints": {
        "unix": ["bash", "scripts/router_mem.sh", "store", "{memory_type}", "{category}", "{title}", "{tags_csv}", "{importance}"],
        "windows": ["powershell.exe", "-NoProfile", "-ExecutionPolicy", "Bypass", "-File", "scripts\\router_mem.ps1", "store", "{memory_type}", "{category}", "{title}", "{tags_csv}", "{importance}"]
      }
    },
    "health": {
      "description": "Check memory system health: total count, average importance, stale count.",
      "input": {},
      "output": {
        "description": "Health metrics",
        "fields": { "status": "ok | error", "data": "array of {metric, value, status}" }
      },
      "entrypoints": {
        "unix": ["bash", "scripts/router_mem.sh", "health"],
        "windows": ["powershell.exe", "-NoProfile", "-ExecutionPolicy", "Bypass", "-File", "scripts\\router_mem.ps1", "health"]
      }
    },
    "types": {
      "description": "List available memory types and their descriptions.",
      "input": {},
      "output": {
        "description": "Memory type definitions",
        "fields": { "status": "ok | error", "data": "array of {type, lifespan, description}" }
      },
      "entrypoints": {
        "unix": ["bash", "scripts/router_mem.sh", "types"],
        "windows": ["powershell.exe", "-NoProfile", "-ExecutionPolicy", "Bypass", "-File", "scripts\\router_mem.ps1", "types"]
      }
    },
    "auto-write": {
      "description": "Procedure template for automatically storing a memory after solving a non-obvious problem.",
      "input": {},
      "output": {
        "description": "Proposed memory fields to store",
        "fields": { "memory_type": "string", "category": "string", "title": "string", "tags_csv": "string", "importance": "integer", "content": "string" }
      },
      "entrypoints": { "agent": "Follow scripts/auto-write-template.md" }
    }
  },
  "stdout_contract": { "last_line_json": true }
}
```

Notes:

- The Router expects each step to print last-line JSON.
- These Router adapter scripts are separate from `mem.sh` / `mem.ps1` to avoid breaking existing workflows.

## Visualize Memories (Markdown export)

If querying PostgreSQL is too inconvenient for daily use, you can export memories into markdown files under `./Memory/` (current directory by default):

```sh
bash "scripts/sync_memory_to_md.sh" --out-dir "./Memory"
```

Outputs:

- `Memory/Long.md` (semantic + procedural)
- `Memory/Procedural.md` (procedural only)
- `Memory/Short.md` (friction + compaction-daily + procedural highlights)
- `Memory/Episodic.md` (episodic)

Backups: backups are stored under `Memory/.backups/` to avoid a noisy `git status`. Use `--no-backup` to disable. The sync script will also create `Memory/.gitignore` if it doesn't exist (ignores `.backups/` and `SYNC_STATUS.txt`).

Long index: `Memory/Long.md` includes an Index section (top categories + tags) to make the export browsable.

### search_memories(query, types[], categories[], tags[], agent_id, min_importance, limit)

Hybrid search: full-text (tsvector) + trigram similarity (pg_trgm) + tag filtering. Accepts plain English queries — no tsquery syntax needed. Relevance scoring: `text_score * decay * recency * importance`.
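As a rough mental model of that scoring formula (the exact decay and recency curves live inside `search_memories()` in `init.sql`; the 30-day half-life and exponential recency curve below are illustrative assumptions, not the skill's actual constants):

```python
from datetime import datetime, timezone

def relevance(text_score, importance_score, relevance_decay, last_accessed,
              now=None, half_life_days=30.0):
    # Illustrative ranking: text relevance damped by decay, recency and importance.
    # half_life_days and the exponential recency curve are assumptions; the real
    # formula is defined by search_memories() in init.sql.
    now = now or datetime.now(timezone.utc)
    age_days = max((now - last_accessed).total_seconds() / 86400.0, 0.0)
    recency = 0.5 ** (age_days / half_life_days)
    return text_score * relevance_decay * recency * (importance_score / 10.0)
```

The takeaway is the shape, not the constants: two memories with equal text match are ordered by how recently they were touched and how important they were rated.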
```sql
-- Natural language
SELECT * FROM search_memories('ssh tunnel port conflict', NULL, NULL, NULL, NULL, 7.0, 5);

-- Filter by type + tags
SELECT * FROM search_memories('troubleshooting steps', ARRAY['procedural']::memory_type[], NULL, ARRAY['ssh'], NULL, 0.0, 5);
```

Returns: `id, memory_type, category, title, content, importance_score, relevance_score, match_type`, where `match_type` is one of `fulltext`, `trigram_title`, `trigram_content`, `metadata`.

### memory_health_check()

Returns `metric | value | status` for `total_memories`, `avg_importance`, `stale_count`.

### apply_memory_decay()

Decays episodic memories by `0.9999^days_since_access`. Run daily.

### prune_stale_memories(age_days, max_importance, max_access_count)

Soft-deletes old episodic memories below thresholds. Default: 180 days, importance <= 3, never accessed.

## Agent Workflow

### Auto-Write Template

After fixing a bug or solving a non-obvious problem, store a memory using the standard template. Procedure: `scripts/auto-write-template.md`.

Best practice: prefer the `mem.py` wrapper to avoid shell quoting and authentication issues. For raw SQL examples, see Appendix: Raw SQL.

### Before a task

```sh
# automatically pull a summary of relevant memories (recommended)
python3 scripts/mem.py context "keywords from user request"

# or search and read manually
python3 scripts/mem.py search "keywords from user request" 5
```

If relevant memories are found, reference them: "Based on past experience (memory #1)..."
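One way to turn retrieved rows into that kind of preamble (a hypothetical helper; the field names match the columns `search_memories()` returns):

```python
def format_context(memories, max_items=5):
    """Render search result rows as a prompt preamble (illustrative helper)."""
    if not memories:
        return ""
    lines = ["Relevant past experience:"]
    for m in memories[:max_items]:
        # Each row carries id, title and relevance_score, per the Returns list above.
        lines.append(f"- memory #{m['id']}: {m['title']} (relevance {m['relevance_score']:.2f})")
    return "\n".join(lines)
```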
### After solving a problem

```sh
python3 scripts/mem.py store semantic category-name "One-line problem summary" "tag1,tag2,tag3" 8 --content "Detailed problem + solution"
```
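When calling the wrapper from another program rather than an interactive shell, passing argv as a list sidesteps quoting entirely. A sketch (`build_store_cmd` is a hypothetical helper; the argument order mirrors the command above):

```python
import subprocess

def build_store_cmd(category, title, tags, importance, content):
    # argv-list form: no shell is involved, so quotes, backticks and spaces
    # inside content cannot be reinterpreted by zsh/bash/PowerShell.
    return ["python3", "scripts/mem.py", "store", "semantic", category,
            title, ",".join(tags), str(importance), "--content", content]

def store(category, title, tags, importance, content):
    cmd = build_store_cmd(category, title, tags, importance, content)
    return subprocess.run(cmd, check=True, capture_output=True, text=True)
```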
### When delegating to subagents

Include in the prompt:

```
MUST DO FIRST:
python3 scripts/mem.py context 'relevant keywords'

MUST DO AFTER (if you solved something new):
python3 scripts/mem.py store semantic '<category>' '<title>' '<tags>' <importance> --content '<solution>'
```

### Check memory system health

```sh
python3 scripts/mem.py status
```

## Task Memory Layer (optional)

This skill also ships a minimal task/issue layer inspired by Beads: graph semantics + deterministic "ready work" queries.

Objects:

- `agent_tasks` — tasks (status, priority, assignee)
- `task_links` — typed links (`blocks`, `parent_child`, `related`, etc.)
- `blocked_tasks_cache` — materialized cache to make ready queries fast
- `task_memory_links` — link tasks to memories (`agent_memories`) for outcomes/notes

Create tasks:

```sql
INSERT INTO agent_tasks (title, description, created_by, priority)
VALUES ('Install pgvector', 'Windows build + enable extension', 'user', 1);
```

Add dependencies:

```sql
-- Task 1 blocks task 2
INSERT INTO task_links (from_task_id, to_task_id, link_type) VALUES (1, 2, 'blocks');

-- Task 2 is parent of task 3 (used for transitive blocking)
INSERT INTO task_links (from_task_id, to_task_id, link_type) VALUES (2, 3, 'parent_child');
```

Rebuild the blocked cache (usually automatic via triggers):

```sql
SELECT rebuild_blocked_tasks_cache();
```

Ready work query:

```sql
SELECT id, title, priority
FROM agent_tasks t
WHERE t.deleted_at IS NULL
  AND t.status IN ('open', 'in_progress')
  AND NOT EXISTS (SELECT 1 FROM blocked_tasks_cache b WHERE b.task_id = t.id)
ORDER BY priority ASC, updated_at ASC
LIMIT 50;
```

Claim a task (atomic):

```sql
SELECT claim_task(2, 'agent-1');
```

Link a task to a memory:

```sql
INSERT INTO task_memory_links (task_id, memory_id, link_type) VALUES (2, 123, 'outcome');
```

### Optional add-on: conditional_blocks (not implemented yet)

This is intentionally deferred until the core workflow feels solid.
If you need it now, store a condition in `task_links.metadata` (e.g., `{"os": "windows"}`) and treat it as documentation.

### Wrapper scripts (recommended)

To avoid re-typing SQL, use the wrapper scripts shipped with this skill.

Windows:

```powershell
powershell.exe -NoProfile -ExecutionPolicy Bypass -File "scripts\tasks.ps1" ready 50
powershell.exe -NoProfile -ExecutionPolicy Bypass -File "scripts\tasks.ps1" create "Install pgvector" 1
powershell.exe -NoProfile -ExecutionPolicy Bypass -File "scripts\tasks.ps1" claim 2 agent-1
```

Linux/macOS:

```sh
bash "scripts/tasks.sh" ready 50
bash "scripts/tasks.sh" create "Install pgvector" 1
bash "scripts/tasks.sh" claim 2 agent-1
```

## Compaction Log (high value)

Compaction can delete context. Treat every compaction as an important event and record it. If you're using OpenCode, prefer the OpenCode plugin route for automatic compaction logging.

### OpenCode plugin (experimental.session.compacting)

1. Copy `plugins/skill-system-memory.js` to `~/.config/opencode/plugins/`
2. Restart OpenCode

It writes local compaction events to `~/.config/opencode/skill-system-memory/compaction-events.jsonl` and will also attempt a best-effort Postgres `store_memory(...)` write (requires pgpass).

Verify:

```sql
SELECT id, title, relevance_score
FROM search_memories('compaction', NULL, NULL, NULL, NULL, 0, 10);
```

If nothing is inserted, set up `.pgpass` / `pgpass.conf` so psql can authenticate without prompting.

### Daily Compaction Consolidation

Raw compaction events are noisy. Run a daily consolidation job that turns many compaction events into one daily memory. The consolidation scripts default to the OpenCode plugin event log path and will fall back to Claude Code paths if needed.

- OpenCode events: `~/.config/opencode/skill-system-memory-postgres/compaction-events.jsonl`
- Output directory: `~/.config/opencode/skill-system-memory-postgres/compaction-daily/`

Windows run (manual):

```powershell
powershell.exe -NoProfile -ExecutionPolicy Bypass -File "scripts\consolidate-compactions.ps1"
```

Linux/macOS run (manual):

```sh
bash "scripts/consolidate-compactions.sh"
```

Scheduling:

- Windows Task Scheduler: create a daily task that runs the PowerShell command above
- Linux cron example:

```sh
# every day at 02:10 UTC
10 2 * * * bash "<skill-dir>/scripts/consolidate-compactions.sh" > /dev/null 2>&1
```

## Appendix: Claude Code compatibility (optional)

This repository also includes Claude Code hook scripts under `hooks/`. They are not required for OpenCode usage.

## Friction Log (turn pain into tooling)

Whenever something is annoying, brittle, or fails:

1. Store an `episodic` memory with category `friction` and tags for the tool/OS/error.
2. If it repeats (2+ times), promote it to `procedural` memory (importance >= 7) with a checklist.
3. Update this skill doc when the fix becomes a stable rule/workflow (so every agent learns it).

## Schema Overview

- `agent_memories` — General event log. Full-text search, trigram indexes, JSONB metadata, soft-delete.
- `soul_states` — One row per user. Structured personality/emotion/buffers JSONB. FK → `agent_memories`.
- `insight_facets` — Per-session facets with structured fields. FK → `agent_memories`.
- `evolution_snapshots` — Versioned evolution records with changes JSONB. FK → `agent_memories`.
- `user_preferences` — Key-value user preferences with confidence scores.
- `memory_links` — Graph relationships (references, supersedes, contradicts).
- `working_memory` — Ephemeral session context with auto-expire.

Typed table functions (dual-write to both the typed table and `agent_memories`):

- `upsert_soul_state(user, yaml, personality, emotion, ...)` → `soul_states`
- `insert_insight_facet(user, session_id, yaml, ...)` → `insight_facets`
- `insert_evolution_snapshot(user, version_tag, target, ...)` → `evolution_snapshots`
- `upsert_user_preference(user, key, value, source, confidence)` → `user_preferences`
- `get_soul_state(user)`, `get_recent_facets(user, limit)`, `get_evolution_history(user, limit)`, `get_user_preferences(user)` → typed reads
- `get_agent_context(user, facet_limit)` → aggregated context for plugin injection

Migration from existing data: run `migrate-typed-tables.sql` to backfill typed tables from `agent_memories`.

Key columns: `memory_type`, `category`, `tags[]`, `title`, `content`, `content_hash` (auto), `metadata` (JSONB), `importance_score`, `access_count`, `relevance_decay`, `search_vector` (auto).

## Anti-Patterns

| Don't | Do Instead |
|-------|------------|
| Store everything | Only store non-obvious solutions |
| Skip tags | Tag comprehensively: tech, error codes, platform |
| Use `to_tsquery` directly | `search_memories()` handles this via `plainto_tsquery` |
| One type for all data | Use the correct `memory_type` per content |
| Forget importance rating | Rate honestly: 9-10 battle-tested, 5-6 partial |

## Sharp Edges

| Issue | Severity | Mitigation |
|-------|----------|------------|
| Chunks lose context | Critical | Store full problem+solution as one unit |
| Old tech memories | High | `apply_memory_decay()` daily; prune stale |
| Duplicate memories | Medium | `store_memory()` auto-deduplicates by `content_hash` |
| No vector search | Info | pg_trgm provides fuzzy matching; pgvector can be added later |

## Cross-Platform Notes

- PostgreSQL 14-18 supported (no partitioning, no GENERATED ALWAYS)
- `pg_trgm` is the only required extension (ships with all PG distributions)
- Linux: `psql -U postgres -d agent_memory -f init.sql`
- Windows: use the full path to `psql.exe` or add the PG bin directory to PATH
- MCP `postgres_query`: works for read operations; DDL requires psql

## Maintenance

```sql
SELECT apply_memory_decay();                          -- daily
SELECT prune_stale_memories(180, 3.0, 0);             -- monthly
DELETE FROM working_memory WHERE expires_at < NOW();  -- daily
SELECT * FROM memory_health_check();                  -- anytime
```

## Optional: pgvector Semantic Search

If pgvector is installed on your PostgreSQL server, `init.sql` will:

- create extension `vector` (non-fatal if missing)
- add `agent_memories.embedding vector` (variable dimension)
- create `search_memories_vector(p_embedding, p_embedding_dim, ...)`

Note: this does NOT generate embeddings. You must populate `agent_memories.embedding` yourself. Once embeddings exist, you can do nearest-neighbor search:

```sql
-- p_embedding is a pgvector literal; pass it from your app.
-- Optionally filter by dimension (recommended when using multiple models).
SELECT id, title, similarity
FROM search_memories_vector('[0.01, 0.02, ...]'::vector, 768, NULL, NULL, NULL, NULL, 0.0, 10);
```

Note: variable-dimension vectors cannot be indexed with pgvector indexes. This is a tradeoff to support local models with different embedding sizes. If pgvector is not installed, everything else still works (full-text search + pg_trgm).

## Embedding Ingestion Pipeline

pgvector search only works after you populate `agent_memories.embedding`. This skill ships ingestion scripts (relative paths). Run them from the skill directory:

- `scripts/ingest-embeddings.ps1`
- `scripts/ingest-embeddings.sh`

They:

1. find memories with `embedding IS NULL`
2. call an OpenAI-compatible embeddings endpoint (including Ollama)
3. write vectors into `agent_memories.embedding`

Requirements:

- pgvector installed + `init.sql` applied (so `agent_memories.embedding` exists)
- `.pgpass` / `pgpass.conf` configured (so `psql -w` can write without prompting)
- env vars for the embedding API:
  - `EMBEDDING_PROVIDER` (`ollama` or `openai`; default `openai`)
  - `EMBEDDING_API_KEY` (required for `openai`; optional for `ollama`)
  - `EMBEDDING_API_URL` (default depends on provider)
  - `EMBEDDING_MODEL` (default depends on provider)
  - `EMBEDDING_DIMENSIONS` (optional; forwarded to the embeddings endpoint when supported)

Windows example:

```powershell
$env:EMBEDDING_PROVIDER = "ollama"
$env:EMBEDDING_MODEL = "nomic-embed-text"
powershell.exe -NoProfile -ExecutionPolicy Bypass -File "scripts\ingest-embeddings.ps1" -Limit 25
```

Linux/macOS example:

```sh
export EMBEDDING_API_KEY=...
export EMBEDDING_MODEL=text-embedding-3-small
bash "scripts/ingest-embeddings.sh"
```

Scheduling:

- run daily (or hourly) after you add new memories
- keep `Limit` small until you trust it

Robustness note: on Windows, very long SQL strings can be fragile when passed via `psql -c`.
The ingestion script writes per-row updates to a temporary `.sql` file and runs `psql -f` to avoid command-line length/quoting edge cases.
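The request that the ingestion step sends can be sketched as follows, assuming the endpoint follows the OpenAI-compatible `/v1/embeddings` shape (which Ollama also exposes); the helper names and the URL in the usage example are illustrative, not the skill's actual code:

```python
import json
import urllib.request

def build_embedding_request(texts, model, api_url, api_key=None, dimensions=None):
    # OpenAI-compatible /v1/embeddings payload; `dimensions` is forwarded
    # only when set, mirroring EMBEDDING_DIMENSIONS above.
    payload = {"model": model, "input": texts}
    if dimensions:
        payload["dimensions"] = dimensions
    headers = {"Content-Type": "application/json"}
    if api_key:
        headers["Authorization"] = f"Bearer {api_key}"
    return urllib.request.Request(api_url, data=json.dumps(payload).encode("utf-8"),
                                  headers=headers, method="POST")

def to_pgvector_literal(vec):
    # pgvector accepts '[0.01,0.02,...]' literals, as in the SQL example above.
    return "[" + ",".join(repr(float(x)) for x in vec) + "]"
```

The returned vector can then be embedded in an `UPDATE agent_memories SET embedding = '<literal>'::vector ...` statement written to the temporary `.sql` file described above.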