# Debug and Testing
The project includes a dedicated debug interface and a batch test runner, both accessible only in non-production environments. These tools allow manual board manipulation, AI move inspection, position evaluation, and regression testing against known board states.
## Accessing Debug Mode
In local development (`FRONT_WHERE=local`), a wrench icon button appears in the header on the home page. Clicking it disables Player 2 AI, resets the board, and navigates to `/debug`. Direct navigation to `/debug` also works.
## Debug Page (`/debug`)
The debug page provides a full game board with manual control over stone placement and AI interaction. It reuses the same UI features available in the regular game (history mode, undo, evaluation, export/import -- see Features) but adds several debug-specific capabilities described below.
### Stone Placement

Click any intersection to place a stone. Unlike the regular game page, the debug page does not automatically request an AI move after placement. Stones are placed via `debugAddStoneToBoardData()`, which bypasses normal game-flow validation.
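To illustrate what "bypasses normal game-flow validation" means, here is a minimal sketch. `debugPlaceStone` is a hypothetical stand-in for `debugAddStoneToBoardData()`; the real function's signature and behavior may differ.

```typescript
type Stone = "." | "X" | "O";
type Board = Stone[][];

// Hypothetical debug placement helper: writes the stone directly into the
// board array, skipping capture and double-three checks entirely.
function debugPlaceStone(board: Board, x: number, y: number, stone: Stone): Board {
  const next = board.map((row) => [...row]); // copy rows so the input board is untouched
  next[y][x] = stone;
  return next;
}
```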
### Turn Lock
The Turn Lock toggle controls whether the active turn advances after placing a stone.
| State | Behavior |
|---|---|
| Locked (default) | Turn does not switch after placement. Allows placing multiple stones of the same color consecutively. |
| Unlocked | Turn alternates normally (X -> O -> X). |
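The table above reduces to a small pure function. This is a sketch of the rule, not the page's actual code:

```typescript
type Player = "X" | "O";

// Turn-lock rule: when locked, the active player stays the same after a
// placement; when unlocked, turns alternate normally.
function nextTurn(current: Player, turnLocked: boolean): Player {
  if (turnLocked) return current;       // Locked: keep placing the same color
  return current === "X" ? "O" : "X";   // Unlocked: alternate X -> O -> X
}
```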
### Requesting AI Moves

The Send button sends the current board state to the connected AI engine over WebSocket (the `/ws/debug` endpoint). The request includes:
- Full board state (19x19 grid)
- Capture scores for both players
- Game settings (capture rules, double-three restriction, difficulty)
- Last played move coordinates
The AI responds with its chosen move, which is placed on the board automatically.
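For illustration, assembling a request with those four pieces of information might look like the following. The field names (`type`, `scores`, `lastPlay`, and so on) are assumptions, not the documented schema; see WebSocket JSON Protocol for the authoritative format.

```typescript
// Assumed request shape -- field names are illustrative only.
interface DebugMoveRequest {
  type: "move";
  board: string[][];                    // 19x19 grid of ".", "X", "O"
  scores: { X: number; O: number };     // pair-capture scores for both players
  settings: {
    enableCapture: boolean;
    enableDoubleThreeRestriction: boolean;
    difficulty: string;
  };
  lastPlay: { x: number; y: number } | null; // last played move, if any
}

function buildDebugMoveRequest(
  board: string[][],
  scores: { X: number; O: number },
  settings: DebugMoveRequest["settings"],
  lastPlay: { x: number; y: number } | null,
): DebugMoveRequest {
  return { type: "move", board, scores, settings, lastPlay };
}
```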
### Manual Turn Switching
In debug mode, clicking the player avatars in the header or sidebar switches the active turn. These buttons are disabled outside of debug mode.
### Restart
The Restart button clears the board and sends a reset message to the backend, resetting any per-connection state (transposition tables, difficulty cache).
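A minimal sketch of what the reset message might look like on the wire. The `type: "reset"` shape is an assumption for illustration, not the documented protocol:

```typescript
// Hypothetical reset message; the real protocol's message shape may differ.
function buildResetMessage(): string {
  return JSON.stringify({ type: "reset" });
}
```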
## Debug-to-Test Workflow
The debug page and test page share the same JSON format for game state. A typical workflow for creating a new test case:
- Set up a board position in debug mode using manual stone placement
- Export the state as JSON (see Export / Import)
- Save the exported file as `init.json` in a new test case directory
- Create the corresponding `expected.json` with the expected AI response
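Before saving an export as `init.json`, a quick structural check along these lines can catch a truncated or malformed export. This is a sketch; the field names follow the Test Case Format section:

```typescript
// Sanity-check an exported game state before saving it as init.json.
// Checks only the top-level structure, not game-rule consistency.
function isCompleteGameState(state: any): boolean {
  return (
    Array.isArray(state.boardData) &&
    state.boardData.length === 19 &&               // full 19x19 board expected
    Array.isArray(state.histories) &&
    typeof state.settings === "object" &&
    state.settings !== null &&
    (state.turn === "X" || state.turn === "O") &&
    typeof state.gameOver === "boolean"
  );
}
```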
## WebSocket Endpoints

Debug mode connects to `/ws/debug` instead of `/ws`. Both endpoints share the same backend handler logic -- the separation exists for connection isolation, so debug sessions don't interfere with production games.
Local:

- Minimax: `ws://localhost:{LOCAL_MINIMAX}/ws/debug`
- AlphaZero: `ws://localhost:{LOCAL_ALPHAZERO}/ws/debug`

Production:

- Minimax: `wss://sungyongcho.com/minimax/ws/debug`
- AlphaZero: `wss://sungyongcho.com/alphazero/ws/debug`
See WebSocket JSON Protocol for the full request/response format.
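Endpoint selection can be sketched as a small helper. The hostnames and ports mirror the lists above; the function itself is illustrative, not the frontend's actual code:

```typescript
type Engine = "minimax" | "alphazero";

// Pick the debug WebSocket URL for a given engine and environment.
// Locally each engine runs on its own port; in production both sit
// behind path prefixes on the same host.
function debugEndpoint(engine: Engine, env: "local" | "prod", localPort: number): string {
  if (env === "local") return `ws://localhost:${localPort}/ws/debug`;
  return `wss://sungyongcho.com/${engine}/ws/debug`;
}
```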
## Test Page (`/test`)

The test page runs regression tests against the minimax engine using predefined board positions.
### How It Works

- Test cases are loaded from `front/assets/testCases/*/`
- Each test case directory contains:
  - `init.json` -- the initial board state to send to the AI
  - `expected.json` -- the expected board state after the AI responds
- The page sends the initial state as a `test` request to the minimax backend (this path always uses the hard-difficulty PVS search)
- The AI's response board is compared against the expected board
- A Passed / Not Passed badge indicates the result
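The comparison step amounts to a cell-by-cell equality check. A sketch of that logic:

```typescript
type Cell = { stone: string };

// A test passes only if the AI's response board matches expected.json
// at every intersection.
function boardsMatch(actual: Cell[][], expected: Cell[][]): boolean {
  if (actual.length !== expected.length) return false;
  return actual.every((row, y) =>
    row.length === expected[y].length &&
    row.every((cell, x) => cell.stone === expected[y][x].stone),
  );
}
```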
### Test Case Format

Each JSON file contains:

```json
{
  "boardData": [[{"stone": "."}, {"stone": "X"}, ...]],
  "histories": [
    {
      "coordinate": {"x": 8, "y": 9},
      "stone": "X",
      "capturedStones": []
    }
  ],
  "settings": {
    "enableCapture": true,
    "enableDoubleThreeRestriction": true,
    "totalPairCaptured": 5,
    ...
  },
  "turn": "O",
  "gameOver": false
}
```

`boardData` is a 19x19 array where each cell is `{"stone": "."}`, `{"stone": "X"}`, or `{"stone": "O"}`.
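For reference, the same format expressed as TypeScript types. This is a sketch: the element shape of `capturedStones` (empty in the sample) and the full `settings` field list are assumptions.

```typescript
// Typed view of the test-case JSON shown above.
interface BoardCell { stone: "." | "X" | "O"; }

interface HistoryEntry {
  coordinate: { x: number; y: number };
  stone: "X" | "O";
  capturedStones: { x: number; y: number }[]; // assumed shape; empty in the sample
}

interface TestCaseState {
  boardData: BoardCell[][];   // 19x19 in real files
  histories: HistoryEntry[];
  settings: {
    enableCapture: boolean;
    enableDoubleThreeRestriction: boolean;
    totalPairCaptured: number;
    [key: string]: unknown;   // remaining settings fields are elided in the sample
  };
  turn: "X" | "O";
  gameOver: boolean;
}
```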
### Running Tests

- Individual test -- click the bolt icon button on any test case accordion
- Run all -- click the "test all" button at the top; tests run sequentially with a 500ms delay between requests
- Debug a test case -- click the "debug" button on any test case to import its initial state into the debug page for manual inspection
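The "run all" behavior -- sequential execution with a fixed delay between requests -- can be sketched as follows (illustrative, not the page's actual code):

```typescript
// Resolve after ms milliseconds; used to throttle requests to the engine.
const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

// Run test cases one at a time, waiting for each verdict before the next
// request, with a fixed pause between cases (500ms on the real page).
async function runAllCases<T>(
  cases: T[],
  runOne: (c: T) => Promise<boolean>,
  delayMs = 500,
): Promise<boolean[]> {
  const results: boolean[] = [];
  for (const c of cases) {
    results.push(await runOne(c));
    await sleep(delayMs);
  }
  return results;
}
```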
### Adding a New Test Case

- Create a new directory under `front/assets/testCases/` with a descriptive name (e.g., `block-open-four`)
- Add `init.json` with the starting board state (see Debug-to-Test Workflow)
- Add `expected.json` with the board state you expect after the AI responds
- The test page auto-discovers test cases via `import.meta.glob`, so no registration is needed
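Auto-discovery boils down to grouping the paths returned by `import.meta.glob` (a Vite build-time feature) by directory name. The path-parsing part can be sketched as a pure function; `caseNameFromPath` is hypothetical, not the page's actual code:

```typescript
// Extract the test-case name (its directory) from a globbed asset path,
// e.g. "/assets/testCases/block-open-four/init.json" -> "block-open-four".
function caseNameFromPath(path: string): string | null {
  const match = path.match(/testCases\/([^/]+)\//);
  return match ? match[1] : null;
}
```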
### Current Test Cases

The test suite covers 24 scenarios, including:

- Threat detection: `attack-closed-four`, `block-open-three`, `block-four-three`
- Capture mechanics: `avoid-catch-by-opponent`, `block-open-three-by-catch`, `capture-critical`
- Win conditions: `breakable-five`, `non-breakable-five`
- Edge cases: `corner-capture-vulnerable`, `critical-capture-vulnerable`
## Relevant Environment Variables

| Variable | Default | Purpose |
|---|---|---|
| `FRONT_WHERE` | `prod` (in `.env.example`; set to `local` for development) | Set to `prod` to hide debug UI entry points |
| `LOCAL_MINIMAX` | `8005` | Minimax engine WebSocket port |
| `LOCAL_MINIMAX_GDB` | `8006` | Minimax port for GDB-attached debugging |
| `LOCAL_ALPHAZERO` | `8080` | AlphaZero engine WebSocket port |