# QA Docs Coverage (Jan 2026) - Discovery, Freshness, and Runbook Quality

## Modern Best Practices (January 2026)

- Docs as QA: Treat docs as production artifacts with owners, review cadence, and CI quality gates (links/style/contracts/freshness)
- Contract-first: Validate OpenAPI/AsyncAPI/JSON Schema in CI; use coverage tools (Swagger Coverage / OpenAPI Coverage) to detect gaps
- Runbook testability: Every runbook must be executable in staging; validate with synthetic tests and incident exercises
- Automation + observability: Track coverage %, freshness, and drift via CI dashboards; prevent regressions via PR checklists
This skill provides operational workflows for auditing existing codebases, identifying documentation gaps, and systematically generating missing documentation. It complements docs-codebase by providing the discovery and analysis layer.
Key Principle: Templates exist in docs-codebase. This skill tells you what to document and how to find undocumented components.
Core references: Diataxis (doc structure), OpenAPI (REST), AsyncAPI (events).
## When to use

- Auditing an existing repo for missing/outdated documentation
- Adding documentation quality gates (lint/link checks/contracts/freshness) to CI/CD
- Validating runbooks for incident readiness (MTTR reduction)

## When to avoid

- Writing new documentation from scratch without a component inventory (use discovery first)
- Publishing AI-generated docs without human review and command/link verification

## Quick start
Use progressive disclosure: load only the reference file you need.
1. Discover components: references/discovery-patterns.md
2. Measure coverage + gaps: references/audit-workflows.md (Phase 1-2) and assets/coverage-report-template.md
3. Prioritize work: references/priority-framework.md
4. Create an actionable backlog: assets/documentation-backlog-template.md and templates in docs-codebase
5. Prevent regression: references/cicd-integration.md and references/freshness-tracking.md
Optional (recommended scripts; run from the repo being audited):
- Local link check: `python3 frameworks/shared-skills/skills/qa-docs-coverage/scripts/check_local_links.py docs/`
- Freshness report: `python3 frameworks/shared-skills/skills/qa-docs-coverage/scripts/docs_freshness_report.py --docs-root docs/`
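If the bundled script is unavailable, the same idea can be approximated with a minimal sketch like the one below (it assumes `docs/` holds Markdown files; the link-resolution rules are illustrative and are not the bundled script's actual behavior):

```python
# Minimal local link check (illustrative sketch; not the bundled script).
import re
import sys
from pathlib import Path

LINK_RE = re.compile(r"\[[^\]]*\]\(([^)#\s]+)")  # capture link target before any #anchor

def check_links(docs_root: str) -> int:
    broken = 0
    for md in Path(docs_root).rglob("*.md"):
        text = md.read_text(encoding="utf-8", errors="ignore")
        for target in LINK_RE.findall(text):
            if target.startswith(("http://", "https://", "mailto:")):
                continue  # only local, relative links are verified here
            if not (md.parent / target).exists():
                print(f"{md}: broken link -> {target}")
                broken += 1
    return broken

if __name__ == "__main__":
    root = sys.argv[1] if len(sys.argv) > 1 else "docs/"
    sys.exit(1 if check_links(root) else 0)
```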
## Large Codebase Audit (100K-1M LOC)

For large codebases, the key principle is: LLMs don't need the entire codebase - they need the right context for the current task.
### Phase 0: Context Extraction
Before starting an audit, extract codebase context using tools:
| Tool | Command/URL | Use Case |
|------|-------------|----------|
| gitingest | Replace "github.com" with "gitingest.com" | Quick full-repo dump |
| repo2txt | https://github.com/kirill-markin/repo2txt | Selective file extraction |
| tree | `tree -L 3 --dirsfirst -I 'node_modules\|.git'` | Directory structure overview |
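Where `tree` is not installed, a rough equivalent can be scripted; the depth limit and ignore list in this sketch mirror the command above, and everything else is illustrative:

```python
# Depth-limited directory overview, roughly equivalent to `tree -L 3` (sketch).
from pathlib import Path

IGNORE = {"node_modules", ".git", "__pycache__", "dist"}

def overview(root: str = ".", max_depth: int = 3) -> None:
    base = Path(root).resolve()
    for path in sorted(base.rglob("*")):
        rel = path.relative_to(base)
        if not path.is_dir() or any(part in IGNORE for part in rel.parts):
            continue  # only directories within the depth limit are printed
        if len(rel.parts) <= max_depth:
            print("  " * (len(rel.parts) - 1) + rel.parts[-1] + "/")

if __name__ == "__main__":
    overview(".")
```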
### Hierarchical Audit Strategy

For monorepos and large projects, audit hierarchically:
```
1. Root Level (Week 1)
   ├── AGENTS.md / CLAUDE.md exists?
   ├── README.md quality
   ├── ARCHITECTURE.md exists?
   └── docs/ directory structure

2. Module Level (Week 2-3)
   ├── Each major directory has AGENTS.md?
   ├── API documentation complete?
   └── Service boundaries documented?

3. Component Level (Week 4+)
   ├── Individual component READMEs
   ├── Code comments quality
   └── Test documentation
```
### Cross-Platform Documentation Audit
Check for multi-tool compatibility:
- [ ] AGENTS.md exists (cross-platform standard)
- [ ] CLAUDE.md exists or symlinked to AGENTS.md
- [ ] GEMINI.md symlinked (if using Gemini)
- [ ] File size under 300 lines (use @references for depth)
- [ ] Subdirectory docs for each major module
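These checks are easy to script. The sketch below encodes the file names and the 300-line limit from the checklist above; the symlink handling and messages are otherwise illustrative:

```python
# Cross-platform doc audit sketch: mirrors the checklist above.
from pathlib import Path

def audit_root(root: str = ".") -> list[str]:
    base = Path(root)
    findings = []
    agents = base / "AGENTS.md"
    claude = base / "CLAUDE.md"
    if not agents.exists():
        findings.append("AGENTS.md missing (cross-platform standard)")
    elif len(agents.read_text(encoding="utf-8").splitlines()) > 300:
        findings.append("AGENTS.md exceeds 300 lines; move detail into @references")
    if not claude.exists():
        findings.append("CLAUDE.md missing (file or symlink to AGENTS.md)")
    elif claude.is_symlink() and agents.exists() and claude.resolve() != agents.resolve():
        findings.append("CLAUDE.md symlink does not point at AGENTS.md")
    return findings

if __name__ == "__main__":
    for finding in audit_root():
        print(f"WARN: {finding}")
```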
### Large Codebase Coverage Checklist
Context Extraction:

- [ ] Generated codebase dump (gitingest/repo2txt)
- [ ] Created directory structure overview
- [ ] Identified major modules/services

Root Documentation:

- [ ] AGENTS.md / CLAUDE.md present and <300 lines
- [ ] README.md with quick start
- [ ] ARCHITECTURE.md with system overview
- [ ] Symlinks configured for cross-platform

Module Documentation:

- [ ] Each major directory has AGENTS.md
- [ ] API endpoints documented
- [ ] Database schemas documented
- [ ] Event/message contracts documented

Maintenance:

- [ ] Documentation ownership assigned
- [ ] Freshness tracking enabled
- [ ] CI/CD checks configured
Sources: Anthropic Claude Code Best Practices, OpenAI AGENTS.md Guide
## Core QA (Default)

### What "Docs as QA" Means

Treat docs as production quality artifacts: they reduce MTTR, enable safe changes, and define expected behavior.

REQUIRED doc types for reliability and debugging ergonomics:

- "How to run locally/CI" and "how to test"
- Operational runbooks (alerts, common failures, rollback)
- Service contracts (OpenAPI/AsyncAPI) and schema examples
- Known issues and limitations (with workarounds)

### Coverage Model (Risk-Based)

Prioritize docs by impact:

- P1: externally consumed contracts and failure behavior (OpenAPI/AsyncAPI, auth, error codes, SLOs).
- P2: internal integration and operational workflows (events, jobs, DB schema, runbooks).
- P3: developer reference (configs, utilities).

### Freshness Checks (Prevent Stale Docs)

Define owners, review cadence, and a "last verified" field for critical docs.

CI economics:

- Block PRs only for missing/invalid P1 docs.
- Warn for P2/P3 gaps; track via backlog.
- Run link checks and linting as fast pre-merge steps.

### Runbook Testability

A runbook is "testable" if a new engineer can follow it and reach a measurable end state.

Include: prerequisites, exact commands, expected outputs, rollback criteria, and escalation paths.
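One way to make "testable" concrete is a synthetic check that replays a runbook's commands against staging and compares expected output markers. The structure below is a sketch: the step names, commands, URL, and markers are placeholders, not taken from any real runbook.

```python
# Synthetic runbook check sketch: each step pairs an exact command with an
# expected output marker; any mismatch fails the exercise.
import subprocess

# Placeholder steps; a real runbook supplies its own commands and markers.
STEPS = [
    {"name": "service responds",
     "cmd": ["curl", "-sf", "http://staging.example.internal/healthz"],
     "expect": "ok"},
    {"name": "queue is draining",
     "cmd": ["python3", "scripts/queue_depth.py"],
     "expect": "depth="},
]

def run_runbook(steps) -> bool:
    for step in steps:
        result = subprocess.run(step["cmd"], capture_output=True, text=True)
        if result.returncode != 0 or step["expect"] not in result.stdout:
            print(f"FAIL: {step['name']} (roll back / escalate per runbook)")
            return False
        print(f"PASS: {step['name']}")
    return True

if __name__ == "__main__":
    raise SystemExit(0 if run_runbook(STEPS) else 1)
```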
### Do / Avoid

Do:

- Keep docs close to code (same repo) and version them with changes.
- Use contracts and examples as the source of truth for integrations.
Avoid:
Large "doc-only" projects with no owners and no CI gates. Writing runbooks that cannot be executed in a sandbox/staging environment. Quick Reference Audit Task Tool/Pattern Output Reference Discover APIs /*Controller.cs, /routes/*/.ts Component inventory discovery-patterns.md Calculate Coverage Swagger Coverage, manual diff Coverage report coverage-report-template.md Prioritize Gaps External → P1, Internal → P2, Config → P3 Documentation backlog priority-framework.md Generate Docs AI-assisted + docs-codebase templates Documentation files audit-workflows.md Phase 3 Validate Contracts Spectral, AsyncAPI CLI, OpenAPI diff Lint report cicd-integration.md Track Freshness Git blame, last-modified metadata Staleness report freshness-tracking.md Automate Checks GitHub Actions, GitLab CI, PR templates Continuous coverage cicd-integration.md Decision Tree: Documentation Audit Workflow User needs: [Audit Type] ├─ Starting fresh audit? │ ├─ Public-facing APIs? → Priority 1: External-Facing (OpenAPI, webhooks, error codes) │ ├─ Internal services/events? → Priority 2: Internal Integration (endpoints, schemas, jobs) │ └─ Configuration/utilities? → Priority 3: Developer Reference (options, helpers, constants) │ ├─ Found undocumented component? │ ├─ API/Controller? → Scan endpoints → Use api-docs-template → Priority 1 │ ├─ Service/Handler? → List responsibilities → Document contracts → Priority 2 │ ├─ Database/Entity? → Generate ER diagram → Document entities → Priority 2 │ ├─ Event/Message? → Map producer/consumer → Schema + examples → Priority 2 │ └─ Config/Utility? → Extract options → Defaults + descriptions → Priority 3 │ ├─ Large codebase with many gaps? │ └─ Use phase-based approach: │ 1. Discovery Scan → Coverage Analysis │ 2. Prioritize by impact (P1 → P2 → P3) │ 3. Generate docs incrementally (critical first) │ 4. Set up maintenance (PR templates, quarterly audits) │ └─ Maintaining existing docs? └─ Check for: ├─ Outdated docs (code changed, docs didn't) → Update or archive ├─ Orphaned docs (references non-existent code) → Remove └─ Missing coverage → Add to backlog → Prioritize
## Navigation: Discovery & Analysis

### Component Discovery
Resource: references/discovery-patterns.md
Language-specific patterns for discovering documentable components:
- .NET/C# codebase (Controllers, Services, DbContexts, Kafka handlers)
- Node.js/TypeScript codebase (Routes, Services, Models, Middleware)
- Python codebase (Views, Models, Tasks, Config)
- Go, Java/Spring, React/Frontend patterns
- Discovery commands (ripgrep, grep, find); see the sketch below
- Cross-reference discovery (Kafka topics, external APIs, webhooks)
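As a flavor of what discovery looks like in practice, the sketch below inventories likely API components by filename convention. The globs here are illustrative; the per-language patterns live in discovery-patterns.md.

```python
# Filename-convention discovery sketch; real patterns live in discovery-patterns.md.
from pathlib import Path

PATTERNS = {
    "controller": ["**/*Controller.cs", "**/controllers/**/*.ts"],
    "route": ["**/routes/**/*.ts", "**/urls.py"],
    "consumer": ["**/*Consumer.cs", "**/handlers/**/*.py"],
}

def discover(root: str = ".") -> dict[str, list[str]]:
    base = Path(root)
    inventory = {}
    for kind, globs in PATTERNS.items():
        hits = sorted({str(p) for g in globs for p in base.glob(g)})
        inventory[kind] = hits
    return inventory

if __name__ == "__main__":
    for kind, files in discover().items():
        print(f"{kind}: {len(files)} candidate(s)")
```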
### Priority Framework

Resource: references/priority-framework.md
Framework for prioritizing documentation efforts:
- Priority 1: External-Facing (public APIs, webhooks, auth) - Must document
- Priority 2: Internal Integration (services, events, database) - Should document
- Priority 3: Developer Reference (config, utilities) - Nice to have
- Prioritization decision tree
- Documentation debt scoring (formula + interpretation); an illustrative sketch follows below
- Compliance considerations (ISO 27001, GDPR, HIPAA)
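The actual debt-score formula and its interpretation live in priority-framework.md. As a rough illustration of the shape such a score can take, the weights and staleness factor below are assumptions, not the reference formula:

```python
# Illustrative debt score only; see priority-framework.md for the real formula.
PRIORITY_WEIGHT = {"P1": 3, "P2": 2, "P3": 1}  # assumed weights

def debt_score(gaps: list[dict]) -> int:
    # Each gap: {"priority": "P1", "stale_days": 45}; staleness adds pressure.
    score = 0
    for gap in gaps:
        weight = PRIORITY_WEIGHT.get(gap["priority"], 1)
        score += weight * (1 + gap.get("stale_days", 0) // 30)
    return score

print(debt_score([{"priority": "P1", "stale_days": 45}, {"priority": "P3"}]))  # -> 7
```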
### Audit Workflows

Resource: references/audit-workflows.md
Systematic workflows for conducting audits:
- Phase 1: Discovery Scan (identify all components)
- Phase 2: Coverage Analysis (compare against existing docs); see the sketch below
- Phase 3: Generate Documentation (use templates)
- Phase 4: Maintain Coverage (PR templates, CI/CD checks)
- Audit types (full, incremental, targeted)
- Audit checklist (pre-audit, during, post-audit)
- Tools and automation
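The Phase 2 coverage math is a straightforward diff between the discovered inventory and what the docs mention. A sketch, where the inventory names and the "mentioned in docs" heuristic are assumptions for illustration:

```python
# Coverage-analysis sketch: coverage % = documented components / discovered components.
from pathlib import Path

def coverage(discovered: set[str], docs_root: str = "docs/") -> tuple[float, set[str]]:
    corpus = " ".join(p.read_text(encoding="utf-8", errors="ignore")
                      for p in Path(docs_root).rglob("*.md"))
    documented = {name for name in discovered if name in corpus}
    pct = 100.0 * len(documented) / len(discovered) if discovered else 100.0
    return pct, discovered - documented

if __name__ == "__main__":
    inventory = {"OrdersController", "PaymentsConsumer", "BillingJob"}  # from Phase 1
    pct, gaps = coverage(inventory)
    print(f"coverage: {pct:.0f}%  undocumented: {sorted(gaps)}")
```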
### CI/CD Integration

Resource: references/cicd-integration.md
Automated documentation checks and enforcement:
- PR template documentation checklists
- CI/CD coverage gates (GitHub Actions, GitLab CI, Jenkins); a gate sketch follows below
- Pre-commit hooks (Git, Husky)
- Documentation linters (markdownlint, Vale, link checkers)
- API contract validation (Spectral, AsyncAPI CLI)
- Coverage tools (Swagger Coverage, OpenAPI Coverage)
- Automated coverage reports
- Best practices and anti-patterns
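A coverage gate can encode the "block on P1, warn on P2/P3" economics from the Core QA section. The sketch below assumes the audit emits a gap list as JSON; the file name and field names are illustrative:

```python
# PR gate sketch: fail only on P1 gaps, warn on the rest (P2/P3).
import json
import sys
from pathlib import Path

def gate(gaps_file: str = "coverage-gaps.json") -> int:
    gaps = json.loads(Path(gaps_file).read_text(encoding="utf-8"))
    # Assumed shape: [{"priority": "P1", "item": "POST /payments undocumented"}, ...]
    blocking = False
    for gap in gaps:
        level = "ERROR" if gap["priority"] == "P1" else "WARN"
        blocking = blocking or gap["priority"] == "P1"
        print(f"{level}: {gap['priority']} gap: {gap['item']}")
    return 1 if blocking else 0

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1] if len(sys.argv) > 1 else "coverage-gaps.json"))
```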
### Freshness Tracking

Resource: references/freshness-tracking.md
Track documentation staleness and drift from code:
- Freshness metadata standards (last_verified, owner, review_cadence)
- Git-based freshness analysis scripts; a sketch follows below
- Staleness thresholds by priority (P1: 30 days, P2: 60 days, P3: 90 days)
- CI/CD freshness gates (GitHub Actions, GitLab CI)
- Observability dashboards and metrics
- Automated doc reminder bots
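A git-based staleness check can compare each doc's last commit date with the per-priority thresholds above. This sketch assumes the priority is recorded in the doc's front matter; the field name and fallback are illustrative:

```python
# Staleness sketch: flag docs whose last git commit is older than their threshold.
import subprocess
from datetime import datetime, timezone
from pathlib import Path

THRESHOLD_DAYS = {"P1": 30, "P2": 60, "P3": 90}

def last_commit_age_days(path: Path) -> int:
    out = subprocess.run(["git", "log", "-1", "--format=%cI", "--", str(path)],
                         capture_output=True, text=True).stdout.strip()
    if not out:
        return 0  # untracked file; treat as fresh for this sketch
    committed = datetime.fromisoformat(out)
    return (datetime.now(timezone.utc) - committed).days

def stale_docs(docs_root: str = "docs/"):
    for md in Path(docs_root).rglob("*.md"):
        text = md.read_text(encoding="utf-8", errors="ignore")
        priority = next((p for p in THRESHOLD_DAYS if f"priority: {p}" in text), "P3")
        age = last_commit_age_days(md)
        if age > THRESHOLD_DAYS[priority]:
            yield md, priority, age

if __name__ == "__main__":
    for doc, priority, age in stale_docs():
        print(f"STALE ({priority}, {age}d): {doc}")
```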
## Navigation: Templates

### Coverage Report Template

Template: assets/coverage-report-template.md
Structured coverage report with:
- Executive summary (coverage %, key findings, recommendations)
- Coverage by category (API, Service, Data, Events, Infrastructure)
- Gap analysis (P1, P2, P3 with impact/effort)
- Outdated documentation tracking
- Documentation debt score
- Action plan (sprints + ongoing)
### Documentation Backlog Template

Template: assets/documentation-backlog-template.md
Backlog tracking with:
- Status summary (In Progress, To Do P1/P2/P3, Blocked, Completed)
- Task organization by priority
- Templates reference (quick links)
- Effort estimates (Low < 2h, Medium 2-8h, High > 8h)
- Review cadence (weekly, bi-weekly, monthly, quarterly)
## Output Artifacts

After running an audit, produce these artifacts:
1. Coverage Report - `.codex/docs/audit/coverage-report.md`
   - Overall coverage percentage
   - Detailed findings by category
   - Gap analysis with priorities
   - Recommendations and next audit date

2. Documentation Backlog - `.codex/docs/audit/documentation-backlog.md`
   - In Progress items with owners
   - To Do items by priority (P1, P2, P3)
   - Blocked items with resolution path
   - Completed items with dates

3. Generated Documentation - `.codex/docs/` (organized by category)
   - API reference (public/private)
   - Event catalog (Kafka/messaging)
   - Database schema (ER diagrams)
   - Background jobs (runbooks)

## Integration with Foundation Skills
This skill works closely with:
docs-codebase - Provides templates for:
- api-docs-template.md - REST API documentation
- adr-template.md - Architecture decisions
- readme-template.md - Project overviews
- changelog-template.md - Release history
Workflow:
1. Use qa-docs-coverage to discover gaps
2. Use docs-codebase templates to fill gaps
3. Use qa-docs-coverage CI/CD integration to maintain coverage

## Anti-Patterns to Avoid

- Documenting everything at once - Prioritize by impact, document incrementally
- Merging doc drafts without review - Drafts must be validated by owners and runnable in practice
- Ignoring outdated docs - Outdated docs are worse than no docs
- Documentation without ownership - Assign owners for each doc area
- Skipping the audit - Don't assume you know what's documented
- Blocking all PRs - Only block for P1 gaps, warn for P2/P3

## Optional: AI / Automation
Do:
- Use AI to draft docs from code and tickets, then require human review and link/command verification.
- Use AI to propose "freshness diffs" and missing doc sections; validate by running the runbook steps.
Avoid:
- Publishing unverified drafts that include incorrect commands, unsafe advice, or hallucinated endpoints.

## Success Criteria
Immediate (After Audit):
- Coverage report clearly shows gaps with priorities
- Documentation backlog is actionable and assigned
- Critical gaps (P1) identified with owners
Short-term (1-2 Sprints):
- All P1 gaps documented
- Documentation coverage > 80% for external-facing components
- Documentation backlog actively managed
Long-term (Ongoing):
- Quarterly audits show improving coverage (upward trend)
- PR documentation checklist compliance > 90%
- "How do I" questions in Slack decrease
- Onboarding time for new engineers decreases

## Related Skills

- docs-codebase - Templates for writing documentation (README, ADR, API docs, changelog)
- docs-ai-prd - PRD and tech spec templates for new features
- software-code-review - Code review including documentation standards

## Usage Notes
For Claude: When auditing a codebase:
1. Start with discovery - Use references/discovery-patterns.md to find components
2. Calculate coverage - Compare discovered components vs existing docs
3. Prioritize gaps - Use references/priority-framework.md to assign P1/P2/P3
4. Follow workflows - Use references/audit-workflows.md for systematic approach
5. Use templates - Reference docs-codebase for documentation structure
6. Set up automation - Use references/cicd-integration.md for ongoing maintenance
Remember: The goal is not 100% coverage, but useful coverage for the target audience. Document what developers, operators, and integrators actually need.