Dogfood web apps by browsing them as a real user would — with their goals, their patience, and their context. Goes beyond "does it work?" to "is it good?" by tracking emotional friction (trust, anxiety, confusion), counting click efficiency, testing resilience, and asking the ultimate question: "would I come back?" Uses Chrome MCP (for authenticated apps with your session) or Playwright for browser automation. Produces structured audit reports with findings ranked by impact.
## Browser Tool Detection
Before starting any mode, detect available browser tools:
- **Chrome MCP** (`mcp__claude-in-chrome__*`) — preferred for authenticated apps. Uses the user's logged-in Chrome session, so OAuth/cookies just work.
- **Playwright MCP** (`mcp__plugin_playwright_playwright__*`) — for public apps or parallel sessions.
- **`playwright-cli`** — for scripted flows and sub-agent browser tasks.
If none are available, inform the user and suggest installing Chrome MCP or Playwright.
See `references/browser-tools.md` for tool-specific commands.
## Operating Modes
### Mode 1: UX Walkthrough (Dogfooding)
**When**: "ux walkthrough", "walk through the app", "is the app intuitive?", "ux audit", "dogfood this"
This is the highest-value mode. You are *dogfooding* the app — using it as a real user would, with their goals, their constraints, and their patience level. Not a mechanical checklist pass, but genuinely trying to get a job done.
#### Step 1: Adopt a User Persona
Ask the user two questions:
1. **Task scenario**: What does the user need to accomplish? (e.g., "Create a new patient and book them for surgery")
2. **Who is the user?**: What's their context? (e.g., "A busy receptionist between phone calls, on a desktop, moderate tech comfort")
If the user doesn't specify a persona, adopt a reasonable default: a non-technical person who is time-poor, mildly distracted, and using this app for the first time today.
#### Step 2: Approach with Fresh Eyes
Navigate to the app's entry point. From here, attempt the task with **no prior knowledge of the UI**. Adopt the persona's mindset:
- Don't use browser dev tools or read source code to figure out where things are
- Don't assume labels mean what a developer intended — read them literally
- If something is confusing, don't power through — note it as friction
- If you feel uncertain about what a button will do, that's a finding
#### Step 3: Evaluate Each Screen
At each screen, evaluate against the walkthrough checklist (see `references/walkthrough-checklist.md`). Key questions to hold in mind:
- **Clarity**: Is the next step obvious without thinking?
- **Trust**: Do I feel confident this will do what I expect? Am I afraid I'll break something?
- **Efficiency**: How many clicks/steps is this taking? Is there a shorter path?
- **Recovery**: If I make a mistake right now, can I get back?
- **Delight vs. frustration**: Would I sigh, smile, or swear at this moment?
#### Step 4: Count the Cost
Track the effort required to complete the task:
- **Click count**: How many clicks from start to finish?
- **Decision points**: How many times did I have to stop and think?
- **Dead ends**: Did I go down a wrong path and have to backtrack?
- **Time impression**: Does this feel fast or tedious?
#### Step 5: Test Resilience
After completing the main task, test what happens when things go wrong:
- Navigate away mid-form — is data preserved?
- Submit with missing/bad data — are error messages helpful and specific?
- Use the back button — does the app handle it gracefully?
- Refresh the page — does state survive?
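Recording each probe's outcome in a consistent shape keeps resilience findings comparable across apps. A sketch under the assumption that results are plain dicts (none of these names come from the skill itself):

```python
# Each resilience probe pairs an action with the expectation to verify.
RESILIENCE_PROBES = [
    ("navigate away mid-form", "draft data is preserved on return"),
    ("submit with missing/bad data", "error messages are helpful and specific"),
    ("press the browser back button", "the app handles it gracefully"),
    ("refresh the page", "in-progress state survives the reload"),
]


def record_probe(results, action, expectation, passed, note=""):
    """Append one observed outcome to the running results list."""
    results.append({
        "action": action,
        "expectation": expectation,
        "passed": passed,
        "note": note,
    })


# Hypothetical observation from a walkthrough:
results = []
record_probe(results, *RESILIENCE_PROBES[0], passed=False,
             note="Form cleared after navigating to Settings and back")

# Failed probes feed straight into the audit report's findings.
failures = [r for r in results if not r["passed"]]
```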
#### Step 6: Ask the Big Questions
After completing (or failing) the task, reflect as the persona:
- **Would I come back?** Or would I look for an alternative?
- **Could I teach a colleague to use this?** In under 2 minutes?
- What's the one thing that would make this twice as easy?
#### Step 7: Document and Report
Take screenshots at friction points. Compile findings into a UX audit report.
Write the report to `docs/ux-audit-YYYY-MM-DD.md` using the template from `references/report-template.md`.
**Severity levels**:

- **Critical** — blocks the user from completing their task
- **High** — causes confusion or significant friction
- **Medium** — suboptimal, but the user can work around it
- **Low** — polish and minor improvements
### Mode 2: QA Sweep
**When**: "qa test", "test all pages", "check everything works", "qa sweep"
Systematic mechanical testing of all pages and features.
1. **Discover pages** — read the app's router config or sitemap, or manually navigate the sidebar/menu to find all routes
2. **Create a task list** of areas to test (grouped by feature area)
3. **For each page/feature**:
   - Page renders without errors
   - Data displays correctly (tables, lists, details)
   - Forms submit successfully (create)
   - Records can be edited (update)
   - Delete operations work with confirmation
   - Validation fires on bad input
   - Empty states display correctly
   - Error states are handled
4. **Cross-cutting concerns**:
   - Dark mode: all elements visible, no contrast issues
   - Mobile viewport (375px): layout doesn't break, touch targets are adequate
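Grouping discovered routes into a trackable task list can be sketched like this — the route names are made up for illustration, not taken from any real app:

```python
from collections import defaultdict

# Routes as discovered from the router config or sidebar navigation.
routes = [
    "/patients", "/patients/new", "/patients/archive",
    "/surgeries", "/surgeries/schedule",
    "/settings/profile",
]

# Group by top-level path segment to form feature areas.
areas = defaultdict(list)
for route in routes:
    area = route.strip("/").split("/")[0]
    areas[area].append(route)

# One checklist entry per page, ticked off as the sweep progresses.
task_list = [
    {"area": area, "route": route, "done": False}
    for area, pages in sorted(areas.items())
    for route in pages
]
```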
- Navigate pages, take screenshots, read page content, evaluate usability
- **Brief confirmation**: before starting a full QA sweep (can be lengthy), and before writing report files
- **Ask first**: before submitting forms with real data, and before clicking delete/destructive actions
## Tips
- Chrome MCP is ideal for authenticated apps — it uses your real session
- For long QA sweeps, use the task list to track progress across pages
- Take screenshots at key friction points — they make the report actionable
- Run the UX walkthrough before the QA sweep — finding "buttons work but users are confused" is more valuable than "all buttons work"
- Stay in persona throughout — if you catch yourself thinking "a developer would know to...", stop. The user isn't a developer.
- Every hesitation is a finding. If you paused to figure out what to click, that's friction worth reporting.
- The "one thing that would make this twice as easy" is often the most actionable insight in the whole report
## Reference Files
| When | Read |
| --- | --- |
| Evaluating each screen during walkthrough | `references/walkthrough-checklist.md` |
| Writing the audit report | `references/report-template.md` |
| Browser tool commands and selection | `references/browser-tools.md` |