# Test Researcher

Researches real-world problems and edge cases before test planning, so that tests cover actual user pain points, not just the acceptance criteria (AC).

**Paths:** File paths (`shared/`, `references/`, `../ln-`) are relative to the skills repo root. If a path is not found at CWD, locate this SKILL.md's directory and go up one level to reach the repo root.

## Inputs

| Input | Required | Source | Description |
| --- | --- | --- | --- |
| storyId | Yes | args, git branch, kanban, user | Story to process |

Resolution: Story Resolution Chain. Status filter: `To Review`.

## Purpose & Scope

- Research common problems for the feature domain using Web Search, MCP Ref, and Context7.
- Analyze how competitors solve the same problem.
- Find customer complaints and pain points in forums, StackOverflow, and Reddit.
- Post structured findings as a Linear comment for downstream skills (ln-522, ln-523).
- No test creation or status changes.

## When to Use

Use this skill when:

- Invoked by ln-520-test-planner at the start of the test planning pipeline
- The Story has non-trivial functionality (external APIs, file formats, authentication)
- Edge cases beyond the AC need to be discovered

Skip research when:

- The Story is trivial (simple CRUD, no external dependencies)
- A research comment already exists on the Story
- The user explicitly requests to skip

## Workflow

### Phase 1: Discovery

MANDATORY READ: Load `shared/references/input_resolution_pattern.md`.

Resolve storyId (per input_resolution_pattern.md):

1. IF args provided → use args
2. ELSE IF git branch matches `feature/{id}-` → extract id
3. ELSE IF kanban has exactly 1 Story in [To Review] → suggest it
4. ELSE → AskUserQuestion: show Stories from the kanban filtered by [To Review]

Auto-discover the Team ID from `docs/tasks/kanban_board.md`.

### Phase 2: Extract Feature Domain

1. Fetch the Story from Linear.
2. Parse the Story goal and AC to identify:
   - What technology/API/format is involved?
   - What is the user's goal?
(e.g., "translate XLIFF files", "authenticate via OAuth") Extract keywords for research queries Phase 3: Research Common Problems Use available tools to find real-world problems: Web Search: "[feature] common problems" "[format] edge cases" "[API] gotchas" "[technology] known issues" MCP Ref: ref_search_documentation("[feature] error handling best practices") ref_search_documentation("[format] validation rules") Context7: Query relevant library docs for known issues Check API documentation for limitations Phase 4: Research Competitor Solutions Web Search: "[competitor] [feature] how it works" "[feature] comparison" "[product type] best practices" Analysis: How do market leaders handle this functionality? What UX patterns do they use? What error handling approaches are common? Phase 5: Research Customer Complaints Web Search: "[feature] complaints" "[product type] user problems" "[format] issues reddit" "[format] issues stackoverflow" Analysis: What do users actually struggle with? What are common frustrations? What gaps exist between user expectations and typical implementations? Phase 6: Compile and Post Findings Compile findings into categories: Input validation issues (malformed data, encoding, size limits) Edge cases (empty input, special characters, Unicode) Error handling (timeouts, rate limits, partial failures) Security concerns (injection, authentication bypass) Competitor advantages (features we should match or exceed) Customer pain points (problems users actually complain about) Post Linear comment on Story with research summary:
```markdown
## Test Research: {Feature}

### Sources Consulted

### Common Problems Found

1. **Problem 1:** Description + test case suggestion
2. **Problem 2:** Description + test case suggestion

### Competitor Analysis

- **Competitor A:** How they handle this + what we can learn
- **Competitor B:** Their approach + gaps we can exploit

### Customer Pain Points

- **Complaint 1:** What users struggle with + test to prevent it
- **Complaint 2:** Common frustration + how to verify we solve it

### Recommended Test Coverage

- [ ] Test case for problem 1
- [ ] Test case for competitor parity
- [ ] Test case for customer pain point

_This research informs both manual tests (ln-522) and automated tests (ln-523)._
```

## Critical Rules

- **No test creation:** Only research and documentation.
- **No status changes:** Only a Linear comment.
- **Source attribution:** Always include URLs for sources consulted.
- **Actionable findings:** Each problem should suggest a test case.
- **Skip trivial Stories:** Don't research "Add button to page".

## Definition of Done

- Feature domain extracted from the Story (technology/API/format identified)
- Common problems researched (Web Search + MCP Ref + Context7)
- Competitor solutions analyzed (at least 1-2 competitors)
- Customer complaints found (forums, StackOverflow, Reddit)
- Findings compiled into categories
- Linear comment posted with the "## Test Research: {Feature}" header
- At least 3 recommended test cases suggested

Output: a Linear comment with research findings for ln-522 and ln-523 to use.

## Reference Files

- Research methodology: Web Search, MCP Ref, Context7 tools
- Comment format: structured markdown with sources
- Downstream consumers: ln-522-manual-tester, ln-523-auto-test-planner

MANDATORY READ: `shared/references/research_tool_fallback.md`

Version: 1.0.0 | Last Updated: 2026-01-15
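For skill implementers, the Story Resolution Chain from Phase 1 can be sketched as a small helper. This is a hypothetical sketch, not the authoritative logic (that lives in `input_resolution_pattern.md`); the `LN-123`-style story-id format and the function name are assumptions for illustration.

```python
import re

# Assumed branch shape: feature/{id}-{slug}, e.g. feature/LN-123-xliff-export.
BRANCH_RE = re.compile(r"^feature/([A-Za-z]+-\d+)-")

def resolve_story_id(args_id, git_branch, to_review_stories):
    """Return (story_id, source) following the Phase 1 chain:
    explicit args → git branch → single [To Review] Story → ask the user."""
    if args_id:
        return args_id, "args"
    m = BRANCH_RE.match(git_branch or "")
    if m:
        return m.group(1), "git_branch"
    if len(to_review_stories) == 1:
        return to_review_stories[0], "kanban_suggest"
    return None, "ask_user"  # fall through to AskUserQuestion

print(resolve_story_id(None, "feature/LN-123-xliff-export", []))
# → ('LN-123', 'git_branch')
```

Keeping the chain in one pure function makes the precedence order explicit and easy to unit test before wiring it to git or the kanban board.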