# gathering-migration


## Install

```shell
npx skills add https://github.com/autumnsgrove/groveengine --skill gathering-migration
```

## Gathering Migration 🌲🐻🐕

The drum echoes through the valleys. The Bear wakes from long slumber, gathering strength for the journey ahead. The Bloodhound sniffs the terrain, understanding every path and connection. Together they move mountains of data safely: nothing lost, nothing broken, everything finding its new home.

### When to Summon

- Complex data migrations requiring codebase exploration
- Moving data between different system architectures
- Schema changes affecting multiple relationships
- Migrations requiring careful pathfinding
- When you need to understand the territory before moving

### Grove Tools for This Gathering

Use `gw` and `gf` throughout. Quick reference for migration work:

```shell
# Find migration-related code and schemas
gf --agent search "table_name"   # Find references to affected tables
gf --agent db                    # Find database-related code
gf --agent migrations            # List existing migration files

# Commit completed migrations
gw git ship --write -a -m "feat: migrate description"
```

### The Gathering

```
SUMMON → ORGANIZE → EXECUTE → VALIDATE → COMPLETE
  ↓         ↓          ↓          ↓          ↓
Receive  Dispatch   Animals    Verify    Migration
Request  Animals     Work       Data     Complete
```

### Animals Mobilized

- 🐕 **Bloodhound** — Scout the codebase, understand data relationships
- 🐻 **Bear** — Migrate data with patient strength

### Phase 1: SUMMON

*The drum sounds. The valleys stir...*

Receive and parse the request.

**Clarify the Migration:**

- What data needs to move?
- From where to where?
- Are relationships involved?
- What's the rollback plan?

**Scope Check:**

> "I'll mobilize a migration gathering for: [migration description]
>
> This will involve:
>
> 🐕 Bloodhound scouting the codebase
> - Map data relationships
> - Find all references to affected tables
> - Identify integration points
> - Document current patterns
>
> 🐻 Bear migrating the data
> - Backup before moving
> - Transform in batches
> - Validate after each phase
> - Verify complete migration
>
> Proceed with the gathering?"

### Phase 2: ORGANIZE

*The animals prepare for the journey...*

Dispatch in sequence:

```
Dispatch Order:

Bloodhound ──→ Bear
    │           │
  Scout      Migrate
Territory     Data
```

Dependencies: Bloodhound must complete before Bear (Bear needs the relationship map first).

### Phase 3: EXECUTE

*The paths are known. The migration begins...*

Execute each phase by loading and running each animal's dedicated skill:

**🐕 BLOODHOUND — SCOUT**

Load skill: `bloodhound-scout`

Execute the full Bloodhound SCENT → TRACK → HUNT → REPORT → RETURN workflow focused on [the data being migrated]: tables, foreign key relationships, code references, orphaned records, and edge cases.

Handoff: complete territory map (data relationship map, affected files, migration risk assessment, edge case documentation) → Bear

**🐻 BEAR — MIGRATE**

Load skill: `bear-migrate`

Execute the full Bear WAKE → GATHER → MOVE → HIBERNATE → VERIFY workflow using the Bloodhound's territory map as the migration plan.
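The batched MOVE step described above can be sketched in Python. This is a minimal illustration, not the actual `bear-migrate` skill: the `users`/`user_preferences` schema, column names, and batch strategy are all assumptions made for the example.

```python
import sqlite3

def migrate_in_batches(conn: sqlite3.Connection, batch_size: int = 1000) -> None:
    """Copy preference columns from users into a new user_preferences
    table, committing one batch at a time so progress is durable.
    Schema and table names are illustrative, not from the skill itself."""
    conn.execute(
        """CREATE TABLE IF NOT EXISTS user_preferences (
               user_id INTEGER PRIMARY KEY REFERENCES users(id),
               theme TEXT,
               notifications INTEGER
           )"""
    )
    last_id = 0
    while True:
        # Keyset pagination: resume from the last migrated id
        rows = conn.execute(
            "SELECT id, theme, notifications FROM users "
            "WHERE id > ? ORDER BY id LIMIT ?",
            (last_id, batch_size),
        ).fetchall()
        if not rows:
            break
        conn.executemany(
            "INSERT OR REPLACE INTO user_preferences VALUES (?, ?, ?)", rows
        )
        conn.commit()          # each batch lands before the next starts
        last_id = rows[-1][0]  # safe resume point if the run is interrupted
```

Keyset pagination plus `INSERT OR REPLACE` makes the loop safely resumable: re-running after an interruption re-copies at most one batch.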
Handoff: migration complete (migrated data, validation reports, updated codebase) → VALIDATE phase

### Phase 4: VALIDATE

*The journey ends. Both animals confirm safe arrival...*

**Validation Checklist:**

- [ ] Bloodhound: All relationships mapped
- [ ] Bloodhound: All references found
- [ ] Bloodhound: Edge cases documented
- [ ] Bear: Backup created and verified
- [ ] Bear: Row counts match (source vs. dest)
- [ ] Bear: Data integrity checks pass
- [ ] Bear: Foreign keys intact
- [ ] Bear: Application tests pass
- [ ] Bear: Rollback tested

**Data Quality Checks:**

```sql
-- Row count validation
SELECT
  (SELECT COUNT(*) FROM old_table) AS source_count,
  (SELECT COUNT(*) FROM new_table) AS dest_count;
-- Should be equal

-- Foreign key integrity
SELECT COUNT(*) AS orphaned_records
FROM child_table c
LEFT JOIN parent_table p ON c.parent_id = p.id
WHERE p.id IS NULL;
-- Should be 0

-- Data sampling
SELECT * FROM new_table ORDER BY RANDOM() LIMIT 10;
-- Spot-check transformation logic
```

**Type Safety During Migration:**

- Validate migrated JSON columns with `safeJsonParse(raw, ZodSchema)` to catch corruption
- If migrating storage data, respect the Amber SDK's QuotaManager constraints
- Use text-mode KV reads (`kv.get(key)`) with `safeJsonParse()`, not `kv.get(key, "json")`

### Phase 5: COMPLETE

*The gathering ends. Data rests in its new home...*

**Completion Report:**

```
🌲 GATHERING MIGRATION COMPLETE

Migration: [Description]
Animals Mobilized: 🐕 Bloodhound → 🐻 Bear

Territory Mapped (Bloodhound)
- Tables affected: [count]
- Relationships found: [count]
- Code files referencing data: [count]
- Edge cases identified: [list]

Data Moved (Bear)
- Records migrated: [count]
- Duration: [time]
- Batches processed: [count]
- Errors encountered: [count]

Validation Results
- Row count match: ✅ [source] = [dest]
- Data integrity: ✅
- Foreign keys: ✅
- Application tests: ✅ [X/Y passing]
- Performance: ✅

Rollback Status
- Backup retained at: [location]
- Rollback tested: ✅
- Rollback time: [estimated]

Files Updated
- Migration scripts: [files]
- Application code: [files]
- Documentation: [files]

Time Elapsed: [Duration]
```

_The data has found its new home._ 🌲

### Example Gathering

User: `/gathering-migration Move user preferences from users table to separate table`

Gathering execution:

- 🌲 **SUMMON** — "Mobilizing for: Split user preferences. Move theme, notifications from users table to user_preferences table."
- 🌲 **ORGANIZE** — "Bloodhound scouts → Bear migrates"
- 🌲 **EXECUTE**
  - 🐕 Bloodhound: "Found 15,423 users. 234 have theme set. 12 have notifications disabled. Referenced in dashboard, settings, 3 API routes."
  - 🐻 Bear: "Backup created. Migrated in 16 batches. All rows accounted for. FK constraints maintained."
- 🌲 **VALIDATE** — "15,423 source = 15,423 dest. No orphans. All tests pass."
- 🌲 **COMPLETE** — "Preferences migrated. Code updated. Backup retained."

Every piece of data arrived safely. 🌲
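The Phase 4 data-quality checks can also be scripted rather than run by hand. A minimal sketch in Python with `sqlite3`, using the hypothetical `users`/`user_preferences` split from the example (the table and column names are illustrative assumptions):

```python
import sqlite3

def validate_migration(conn: sqlite3.Connection) -> tuple[bool, dict]:
    """Run the row-count and foreign-key checks from Phase 4.
    Table names are illustrative; adapt them to the real migration."""
    source = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
    dest = conn.execute("SELECT COUNT(*) FROM user_preferences").fetchone()[0]

    # Orphaned records: preference rows whose parent user no longer exists
    orphans = conn.execute(
        """SELECT COUNT(*) FROM user_preferences p
           LEFT JOIN users u ON p.user_id = u.id
           WHERE u.id IS NULL"""
    ).fetchone()[0]

    ok = source == dest and orphans == 0
    return ok, {"source": source, "dest": dest, "orphans": orphans}
```

Returning the counts alongside the boolean lets the completion report fill in its `[source] = [dest]` line directly from the check, instead of re-querying.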
