# Prompt Analysis Skill

Analyze AI prompting patterns using the local `prompts.db` SQLite database.
## What is Git AI?

Git AI is a tool that tracks AI-generated code and prompts in git. It stores:

- Every AI conversation (prompts and responses)
- Which lines of code came from AI vs. human edits
- Acceptance rates (how much AI code was kept vs. modified)
- Associated commits and authors

This skill queries that data to help users understand their AI coding patterns.
## Initialization

First, determine scope from the user's question:

| User mentions | Flags to use |
| --- | --- |
| "my prompts" or nothing specified | (default: current user, current repo) |
| "team", "everyone", "all authors" | `--all-authors` |
| "all projects", "all repos" | `--all-repositories` |
| a specific person's name | `--author "<name>"` |

## Iterating with Subagents

Give each subagent a prompt of this shape:

```
Run git-ai prompts next to get the next prompt.
Analyze the messages JSON to determine: [specific analysis task]
Then update the database:
git-ai prompts exec "UPDATE prompts SET [column]='[value]' WHERE id='[prompt_id]'"
Return your analysis result.
```

Spawn 3-5 subagents at a time, check results, and spawn more until `git-ai prompts next` returns "No more prompts."
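The batch loop above can be sketched in Python. The in-memory queue here is a stand-in for `git-ai prompts next`, which is assumed to hand out one unprocessed prompt per call and answer "No more prompts." when exhausted; the real command is external to this sketch.

```python
from collections import deque

# Stand-in for the prompt iterator: one item per call, sentinel when empty.
queue = deque(["p1", "p2", "p3", "p4", "p5", "p6", "p7"])

def next_prompt() -> str:
    return queue.popleft() if queue else "No more prompts."

BATCH_SIZE = 3  # spawn 3-5 subagents at a time
processed = []
while True:
    batch = []
    for _ in range(BATCH_SIZE):
        item = next_prompt()
        if item == "No more prompts.":
            break
        batch.append(item)
    if not batch:
        break
    # In the real skill, each item in the batch goes to a parallel subagent.
    processed.extend(batch)
```

The outer loop only stops when a whole batch comes back empty, so a partial final batch (here, the lone seventh prompt) is still processed.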
**IMPORTANT:** The `git-ai prompts next` command returns ALL the data needed for analysis as JSON, including:

- The full `messages` array with the complete conversation (human prompts and AI responses)
- Metadata like `model`, `tool`, `accepted_rate`, `accepted_lines`, etc.

Subagents should NOT run additional commands like `git show` or `git log`; everything needed is in the JSON output from `git-ai prompts next`. Instruct subagents explicitly:

```
IMPORTANT: All data you need is in the JSON output from git-ai prompts next.
Do NOT run git commands. Analyze the messages field in the JSON directly.
```
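For illustration, a minimal sketch of consuming that JSON. The exact payload shape, including the field layout and the `human`/`assistant` role names, is an assumption based on the fields listed above, not a documented schema:

```python
import json

# Hypothetical example of the JSON that `git-ai prompts next` might emit.
raw = """
{
  "id": "abc123",
  "model": "claude-sonnet",
  "tool": "claude-code",
  "accepted_rate": 0.72,
  "accepted_lines": 54,
  "messages": [
    {"role": "human", "content": "Add email validation to the signup form"},
    {"role": "assistant", "content": "I'll add a validator that checks the format..."}
  ]
}
"""

prompt = json.loads(raw)
# Everything a subagent needs is already in this one object:
human_turns = [m["content"] for m in prompt["messages"] if m["role"] == "human"]
print(prompt["model"], prompt["accepted_rate"], len(human_turns))
```

No follow-up `git show` or `git log` calls are needed; the conversation and the acceptance metadata arrive together.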
## Iterator Examples

### Example 1: Categorize prompts by work type

**User asks:** "Categorize my prompts by work type (bug fix, feature, refactor, docs)"

**Setup:**

```
git-ai prompts exec "ALTER TABLE prompts ADD COLUMN work_type TEXT"
git-ai prompts reset
```

**Subagent prompt:**
Run git-ai prompts next to get the next prompt as JSON.
IMPORTANT: All data you need is in this JSON output. Do NOT run git commands.
Analyze the messages field directly.
Read the messages JSON and categorize this prompt into ONE of:
- "bug_fix" - fixing broken behavior, errors, or regressions
- "feature" - adding new functionality
- "refactor" - restructuring code without changing behavior
- "docs" - documentation, comments, READMEs
- "test" - adding or modifying tests
- "config" - configuration, build, CI/CD changes
- "other" - doesn't fit above categories
Update the database:
```
git-ai prompts exec "UPDATE prompts SET work_type='[category]' WHERE id='[prompt_id]'"
```

### Example 2: Explain low acceptance rates

**Subagent prompt:**

Run git-ai prompts next to get the next prompt as JSON.
IMPORTANT: All data you need is in this JSON output. Do NOT run git commands.
Analyze the messages field directly.
This prompt had a low acceptance rate (human modified most of the AI's code).
Analyze the messages JSON and identify the likely reason:
- "vague_request" - the prompt was unclear or underspecified
- "wrong_approach" - AI took a fundamentally wrong approach
- "style_mismatch" - code worked but didn't match project conventions
- "partial_solution" - AI only solved part of the problem
- "overengineered" - AI added unnecessary complexity
- "context_missing" - AI lacked necessary context about the codebase
- "other" - explain briefly
Update the database:
```
git-ai prompts exec "UPDATE prompts SET low_acceptance_reason='[reason]' WHERE id='[prompt_id]'"
```

### Example 3: Find reusable patterns

**Subagent prompt:**

Run git-ai prompts next to get the next prompt as JSON.
IMPORTANT: All data you need is in this JSON output. Do NOT run git commands.
Analyze the messages field directly.
Analyze whether this prompt represents a reusable pattern worth saving:
- Look for: common coding tasks, useful abstractions, clever solutions
- Skip: one-off fixes, highly context-specific changes, trivial edits
If reusable, set reusable_pattern to a short name (e.g., "api_error_handling", "form_validation", "test_mocking")
and pattern_description to a one-sentence description of what it does.
If not reusable, set both to NULL.
```
git-ai prompts exec "UPDATE prompts SET reusable_pattern='[name]', pattern_description='[description]' WHERE id='[prompt_id]'"
```

### Example 4: Score prompt clarity

**Subagent prompt:**

Run git-ai prompts next to get the next prompt as JSON.
IMPORTANT: All data you need is in this JSON output. Do NOT run git commands.
Analyze the messages field directly.
Score the HUMAN's prompt clarity from 1-5:
5 = Crystal clear: specific goal, context provided, constraints stated
4 = Good: clear intent, minor ambiguities
3 = Adequate: understandable but missing helpful context
2 = Vague: required AI to make significant assumptions
1 = Unclear: AI had to guess what was wanted
Also provide brief feedback on how the prompt could be improved.
```
git-ai prompts exec "UPDATE prompts SET clarity_score=<1-5>, clarity_feedback='[feedback]' WHERE id='[prompt_id]'"
```

### Example 5: Identify prompting techniques

**Subagent prompt:**

Run git-ai prompts next to get the next prompt as JSON.
IMPORTANT: All data you need is in this JSON output. Do NOT run git commands.
Analyze the messages field directly.
Analyze the HUMAN's prompting technique in the messages. Identify which techniques were used:
- "example_driven" - provided examples of desired output or behavior
- "step_by_step" - broke down the request into steps or phases
- "context_heavy" - provided extensive background/context about the codebase
- "minimal" - terse, brief request with little context
- "iterative" - built up solution through back-and-forth refinement
- "constraint_focused" - emphasized what NOT to do or specific requirements
- "reference_based" - pointed to existing code/files to follow as patterns
- "multiple" - combined several techniques (list them in notes)
Also note any specific effective or ineffective patterns in technique_notes.
```
git-ai prompts exec "UPDATE prompts SET technique='[technique]', technique_notes='[notes]' WHERE id='[prompt_id]'"
```

### Example 6: Post-mortem zero-acceptance prompts

**Subagent prompt:**

Run git-ai prompts next to get the next prompt as JSON.
IMPORTANT: All data you need is in this JSON output. Do NOT run git commands.
Analyze the messages field directly.
This prompt resulted in 0% acceptance - the human rejected or completely rewrote all AI-generated code.
Perform a qualitative analysis by reading the full conversation in messages JSON:
1. What was requested? Summarize the human's goal.
2. What did the AI produce? Summarize what code/solution was generated.
3. Why did it fail? Categorize into:
- "misunderstood_intent" - AI solved the wrong problem
- "poor_code_quality" - bugs, errors, or broken code
- "wrong_technology" - used wrong framework/library/approach
- "incomplete" - only partially addressed the request
- "style_violation" - worked but violated project conventions
- "overcomplicated" - solution far more complex than needed
- "security_issue" - introduced vulnerabilities
- "abandoned" - user changed direction mid-conversation
4. How could the prompt be improved? Specific, actionable suggestion.
```
git-ai prompts exec "UPDATE prompts SET failure_category='[category]' WHERE id='[prompt_id]'"
```

### Example 7: Grade prompts against best practices

**Subagent prompt:**

Run git-ai prompts next to get the next prompt as JSON.
IMPORTANT: All data you need is in this JSON output. Do NOT run git commands.
Analyze the messages field directly.
Grade the HUMAN's prompt against these best practices for AI-assisted coding:
Grading Criteria (each worth 0-2 points):
1. Specificity (0-2): Does the prompt clearly state what needs to be done?
- 0: Vague or ambiguous ("fix this", "make it better")
- 1: General direction but missing details ("add validation")
- 2: Specific outcome described ("add email validation that checks format and shows inline error")
2. Context (0-2): Does the prompt provide necessary background?
- 0: No context, assumes AI knows everything
- 1: Some context but missing key details
- 2: Relevant files, constraints, or project patterns mentioned
3. Scope (0-2): Is the request appropriately sized?
- 0: Too broad ("rewrite the app") or impossibly vague
- 1: Large but manageable, could be broken down
- 2: Focused, single-responsibility task
4. Constraints (0-2): Are requirements and boundaries specified?
- 0: No constraints given when they'd be helpful
- 1: Some constraints but missing important ones
- 2: Clear about what to do AND what not to do
5. Testability (0-2): Can success be verified?
- 0: No way to know if the result is correct
- 1: Implicit success criteria
- 2: Explicit expected behavior or acceptance criteria
Calculate total (0-10) and assign grade:
- A (9-10): Excellent prompt, follows best practices
- B (7-8): Good prompt, minor improvements possible
- C (5-6): Adequate prompt, notable gaps
- D (3-4): Poor prompt, significant issues
- F (0-2): Ineffective prompt, needs major revision
Store breakdown as JSON: {"specificity": X, "context": X, "scope": X, "constraints": X, "testability": X}
```
git-ai prompts exec "UPDATE prompts SET grade='[A-F]' WHERE id='[prompt_id]'"
```
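The rubric arithmetic above (five criteria worth 0-2 points each, summed to a 0-10 total, then mapped to a letter) can be sketched as:

```python
def letter_grade(breakdown: dict) -> tuple:
    """Sum the five 0-2 criterion scores and map the total to a letter grade."""
    total = sum(breakdown.values())
    if total >= 9:
        grade = "A"   # excellent prompt, follows best practices
    elif total >= 7:
        grade = "B"   # good prompt, minor improvements possible
    elif total >= 5:
        grade = "C"   # adequate prompt, notable gaps
    elif total >= 3:
        grade = "D"   # poor prompt, significant issues
    else:
        grade = "F"   # ineffective prompt, needs major revision
    return total, grade

breakdown = {"specificity": 2, "context": 1, "scope": 2,
             "constraints": 1, "testability": 1}
print(letter_grade(breakdown))  # (7, 'B')
```

The `breakdown` dict uses exactly the keys the rubric stores as JSON, so a subagent can compute the grade and the stored breakdown from one structure.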
## Useful Queries

High-acceptance prompts:

```sql
SELECT ... FROM prompts
WHERE accepted_rate > 0.8 AND accepted_lines > 50
ORDER BY accepted_lines DESC LIMIT 10
```

Prompts by time of day:

```sql
SELECT
  CASE
    WHEN (start_time % 86400) / 3600 < 12 THEN 'morning'
    WHEN (start_time % 86400) / 3600 < 17 THEN 'afternoon'
    ELSE 'evening'
  END AS time_of_day,
  COUNT(*) AS count,
  AVG(accepted_rate) AS avg_rate
FROM prompts
GROUP BY time_of_day
```

## Cleanup

The prompts.db file is ephemeral. Delete it anytime to start fresh:

```
rm prompts.db
```

## Troubleshooting

If there are 0 prompts in the database, Git AI may not be set up correctly in this repository. Direct the user to:

- Check https://usegitai.com/docs/cli for setup instructions
- Verify git-ai is installed and configured
- Ensure they have made commits with AI assistance after setup

**Important:** Git AI only tracks data going forward. It cannot retroactively categorize or track prompts from before it was installed; only commits made after Git AI setup will have prompt data. If the user recently installed Git AI, they may have limited data until they accumulate more AI-assisted commits.
## Footguns and Data Caveats

Not all prompts have stats or commits:

- `commit_sha` is NULL when the prompt conversation happened but no code was ever committed
- `accepted_lines`, `overridden_lines`, and `accepted_rate` may be NULL or 0 for uncommitted work
- `total_additions` and `total_deletions` may be NULL for the same reason

Why this happens:

- User had a conversation but didn't accept the code
- User accepted code but never committed it
- User is still working (code not yet committed)

**Definition of "failed prompts":** When analyzing failed or unsuccessful prompts, use this definition:

- `accepted_lines IS NULL` - conversation happened but no code was committed
- `accepted_lines = 0` - code was generated but the human rejected or rewrote all of it

Query: `WHERE accepted_lines IS NULL OR accepted_lines = 0`

Handle in queries:

```sql
-- Exclude uncommitted prompts from acceptance rate analysis
SELECT model, AVG(accepted_rate)
FROM prompts
WHERE commit_sha IS NOT NULL AND accepted_rate IS NOT NULL
GROUP BY model

-- Find prompts with conversation but no committed code
SELECT id, tool, start_time
FROM prompts
WHERE commit_sha IS NULL
```

## Important Notes

- The `messages` column contains JSON - parse it to read conversation content
- Use subagents for any analysis requiring reading message content
- SQLite is a scratchpad - add columns freely for analysis
- Default scope is conservative (current user, current repo, 30 days)
- Expand scope only when the user explicitly mentions team/all repos

## Permissions Setup

To avoid being prompted for every git-ai command, add to project or user settings:

- Project: `.claude/settings.json`
- User: `~/.claude/settings.json`

```json
{
  "permissions": {
    "allow": ["Bash(git-ai:*)"]
  }
}
```

This is especially important for subagent iteration, as subagents don't inherit skill-level permissions.
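The scratchpad workflow described in the notes above (add a column, tag rows, query with NULL handling for the failed-prompt definition) can be exercised end to end. The table schema here is a minimal stand-in for illustration, not the real prompts.db layout:

```python
import sqlite3

# Toy prompts table with an assumed minimal schema (NOT the real prompts.db).
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE prompts (id TEXT, commit_sha TEXT,"
    " accepted_lines INTEGER, accepted_rate REAL)"
)
con.executemany("INSERT INTO prompts VALUES (?, ?, ?, ?)", [
    ("p1", "abc123", 40, 0.8),   # committed, mostly accepted
    ("p2", "def456", 0, 0.0),    # committed, fully rejected
    ("p3", None, None, None),    # conversation only, never committed
])

# "SQLite is a scratchpad": add an analysis column and tag a row.
con.execute("ALTER TABLE prompts ADD COLUMN work_type TEXT")
con.execute("UPDATE prompts SET work_type='feature' WHERE id='p1'")

# Failed prompts per the definition above: NULL (never committed) or 0 (rejected).
failed = con.execute(
    "SELECT id FROM prompts WHERE accepted_lines IS NULL OR accepted_lines = 0"
).fetchall()
print(failed)  # [('p2',), ('p3',)]
```

Note that the rejected-but-committed prompt and the never-committed conversation both count as failures, which is why the `WHERE` clause needs both conditions.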