Add automated QA testing with Playwright to an existing game project. Tests verify your game boots, scenes work, scoring functions, and visuals haven't broken — like a safety net for your game.
## Instructions
Analyze the game at `$ARGUMENTS` (or the current directory if no path is given). First, load the `game-qa` skill to get the full testing patterns and fixtures.
## Step 1: Audit testability
- Read `package.json` to identify the engine and dev server port
- Read `vite.config.js` for the server port
- Read `src/main.js` to check whether `window.GAME`, `window.GAME_STATE`, and `window.EVENT_BUS` are exposed
- Read `src/core/GameState.js` to understand what state is available
- Read `src/core/EventBus.js` to understand what events exist
- Read `src/core/Constants.js` to understand game parameters (rates, speeds, durations, max values)
- Read all scene files to understand the game flow
- Read `design-brief.md` if it exists — it documents expected mechanics, magnitudes, and win/lose reachability
## Step 2: Set up Playwright

- Create `playwright.config.js` with the correct dev server port and `webServer` config
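A minimal config sketch is below. The port and dev command are assumptions — substitute the values found during the audit:

```javascript
// playwright.config.js — minimal sketch, not a definitive setup.
// Port 3000 and `npm run dev` are assumed; use the values from Step 1.
import { defineConfig } from '@playwright/test';

export default defineConfig({
  testDir: './tests/e2e',
  use: {
    baseURL: 'http://localhost:3000',
  },
  webServer: {
    command: 'npm run dev',       // start the dev server before tests
    url: 'http://localhost:3000', // Playwright waits for this URL
    reuseExistingServer: true,    // don't restart a server already running
  },
});
```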
- Expose `window.GAME`, `window.GAME_STATE`, `window.EVENT_BUS`, and `window.EVENTS` in `src/main.js` if not already present
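One way to sketch that exposure. The stub objects below only make the example self-contained — substitute the game's real instances found in the audit; the `window` assignments are the point:

```javascript
// Hedged sketch of test hooks at the bottom of src/main.js.
// These stubs stand in for the real GameState/EventBus modules.
const game = { scene: 'boot' };                          // stub game instance
const gameState = { score: 0, won: null };               // stub GameState
const eventBus = { emit() {}, on() {} };                 // stub EventBus
const EVENTS = { GAME_OVER: 'game_over', WIN: 'win' };   // stub event names

// In the browser this is `window`; the guard keeps the snippet
// runnable under plain Node for illustration.
const root = typeof window !== 'undefined' ? window : globalThis;
root.GAME = game;
root.GAME_STATE = gameState;
root.EVENT_BUS = eventBus;
root.EVENTS = EVENTS;
```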
- Create the test directory structure:

```
tests/
├── e2e/
│   ├── game.spec.js
│   ├── visual.spec.js
│   └── perf.spec.js
├── fixtures/
│   └── game-test.js
└── helpers/
    └── seed-random.js
```
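The `seed-random.js` helper can be sketched as a small deterministic PRNG. mulberry32 is shown as one option (the game-qa skill's own fixture may differ); install it via `page.addInitScript()` before the game loads so gameplay that calls `Math.random()` is reproducible across runs:

```javascript
// tests/helpers/seed-random.js — sketch of a deterministic PRNG.
// mulberry32: a well-known small 32-bit generator; seed is arbitrary.
function mulberry32(seed) {
  let a = seed >>> 0;
  return function () {
    let t = (a += 0x6d2b79f5);
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296; // in [0, 1)
  };
}

// Replace Math.random with a seeded generator for the whole run.
function seedRandom(seed = 42) {
  Math.random = mulberry32(seed);
}
```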
- Add npm scripts: `test`, `test:ui`, `test:headed`, `test:update-snapshots`
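In `package.json` those scripts could look like this (a sketch; the flags are standard Playwright CLI options):

```json
{
  "scripts": {
    "test": "playwright test",
    "test:ui": "playwright test --ui",
    "test:headed": "playwright test --headed",
    "test:update-snapshots": "playwright test --update-snapshots"
  }
}
```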
## Step 3: Generate tests
Write tests based on what the game actually does:

- `game.spec.js` — boot test, scene transitions, input handling, scoring, game over, restart
- `visual.spec.js` — screenshot regression for stable scenes (gameplay initial state, game over). Skip active gameplay screenshots — moving objects make them unstable.
- `perf.spec.js` — load time budget, FPS during gameplay, canvas dimensions
Follow the game-qa skill patterns. Use the `gamePage` fixture. Use `page.evaluate()` to read game state. Use `page.keyboard.press()` for input.
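A boot-test sketch in the shape `game.spec.js` might take. It assumes the `gamePage` fixture from the skill's `tests/fixtures/game-test.js` behaves like a Playwright `Page` already loaded on the game; the scene names and state fields are assumptions to adapt:

```javascript
// tests/e2e/game.spec.js — sketch only; fixture and field names
// ('gamePage', GAME_STATE.scene, the Space start key) are assumptions.
import { test, expect } from '../fixtures/game-test.js';

test('game boots and exposes state', async ({ gamePage }) => {
  const state = await gamePage.evaluate(() => ({
    hasGame: typeof window.GAME !== 'undefined',
    scene: window.GAME_STATE?.scene ?? null,
  }));
  expect(state.hasGame).toBe(true);
  expect(state.scene).not.toBeNull();
});

test('input is handled', async ({ gamePage }) => {
  await gamePage.keyboard.press('Space'); // assumed start key
  const scene = await gamePage.evaluate(() => window.GAME_STATE.scene);
  expect(scene).not.toBe('boot');         // assumed scene name
});
```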
## Step 4: Design-intent tests
Add a `test.describe('Design Intent')` block to `game.spec.js`. These tests catch mechanics that technically exist but are too weak to matter.
### Lose condition

Detect deterministically whether the game has a lose state. Read `GameState.js` — if it has a `won`, `result`, or similar boolean/enum field, the game distinguishes win from loss. Also check `render_game_to_text()` in `main.js` — if it returns distinct outcome modes (e.g., `'win'` vs `'game_over'`), the game has a lose state.
If a lose state exists: start the game, provide NO input, and let it run to completion (use `page.waitForFunction` with the round duration from `Constants.js`). Assert the outcome is the losing one (e.g., `won === false`, `mode === 'game_over'`).

This assertion is non-negotiable. Do NOT write a test that passes when the player wins by doing nothing. If the current game behavior is "player wins with no input," that is a bug — write the test to catch it.
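A hedged sketch of that test. The duration constant and state fields (`gameOver`, `won`) are assumed names — substitute what `Constants.js` and `GameState.js` actually define:

```javascript
// Inside test.describe('Design Intent') — sketch only.
test('no-input run ends in a loss', async ({ gamePage }) => {
  const ROUND_DURATION_MS = 60_000; // assumed; read from Constants.js

  // Provide NO input; wait until the game reports it is over,
  // with headroom beyond the round duration.
  await gamePage.waitForFunction(
    () => window.GAME_STATE.gameOver === true,
    null,
    { timeout: ROUND_DURATION_MS + 5_000 }
  );

  // Non-negotiable: doing nothing must NOT count as a win.
  const won = await gamePage.evaluate(() => window.GAME_STATE.won);
  expect(won).toBe(false);
});
```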
### Opponent/AI pressure

If an AI-driven mechanic exists (auto-climbing bar, enemy spawning, difficulty ramp), test that it produces substantial state changes. Run the game for half its duration without player input. Assert the opponent's state reaches at least 25% of its maximum. If `design-brief.md` exists, use its expected magnitudes for thresholds. Otherwise, derive from `Constants.js`: calculate `rate * duration` and assert it reaches meaningful levels.
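The threshold derivation is plain arithmetic. All three constants below are assumed names and values — substitute the real ones from `Constants.js`:

```javascript
// Hedged sketch: derive the opponent-pressure threshold from constants.
const OPPONENT_RATE = 2;    // units per second (assumed)
const ROUND_DURATION = 60;  // seconds (assumed)
const OPPONENT_MAX = 100;   // maximum opponent progress (assumed)

// Expected opponent progress with no player input.
function expectedOpponentProgress(rate, seconds) {
  return rate * seconds;
}

// After half a round, the design-intent test asserts the opponent
// has reached at least 25% of its maximum.
const halfRound = expectedOpponentProgress(OPPONENT_RATE, ROUND_DURATION / 2);
const meetsThreshold = halfRound >= OPPONENT_MAX * 0.25;
```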
### Win condition

Test that active player input leads to a win. Provide rapid input throughout the round and assert the outcome is a win.
## Step 5: Entity interaction audit
Audit collision and interaction logic for asymmetries that would confuse a first-time player. If `design-brief.md` has an "Entity Interactions" section, use it as the checklist. Otherwise, audit `GameScene.js` directly:
- Find all collision handlers, overlap checks, or distance-based interactions
- Map which entities interact with which others
- Flag any visible moving entity that interacts with one side (player OR opponent) but not the other — add a `// QA FLAG: asymmetric interaction` comment in the test file noting the entity name and the asymmetry

This is a flag, not a hard fail. Some asymmetries are intentional (e.g., hazards that only affect the player). The flag ensures the asymmetry is a conscious design choice, not an oversight.
## Step 6: Run and verify
- Run `npx playwright test` to execute all tests
- If visual tests fail on the first run, that's expected — generate baselines with `npx playwright test --update-snapshots`
- Run again to verify all tests pass
- Summarize the results
## Step 7: Report
Tell the user in plain English:

- How many tests were created and what they check
- How to run them: `npm test` (headless), `npm run test:headed` (see the browser), `npm run test:ui` (interactive dashboard)
- "These tests are your safety net. Run them after making changes to make sure nothing broke."
## Next Step
Tell the user:

Your game now has automated tests! Finally, run `/game-creator:review-game` for a full architecture review — it checks your code structure and performance patterns, and gives you a score with specific improvement suggestions.

Pipeline progress: `/make-game` → `/design-game` → `/add-audio` → `/qa-game` → `/review-game`