I recently built two browser games -- Sudoku and Snake -- as single-file HTML games with no dependencies, no build tools, and significant help from AI coding assistants. The experience taught me a lot about what AI is genuinely good at in game development and where human judgment is still essential.
This is not a tutorial on building games. It is a candid look at the workflow, the surprises, and the practical lessons for anyone considering using AI to build interactive browser applications.
What AI Generated Well
Game structure and boilerplate. The initial scaffold for both games -- HTML structure, CSS grid layout for Sudoku, Canvas setup for Snake, event listeners, game loop -- came together quickly. AI assistants understand the patterns of browser games because there are thousands of implementations in their training data. The generated code was clean, well-organized, and worked on the first try for the basic structure.
Standard algorithms. The Snake game loop (move, check collision, grow, render) and Sudoku's constraint checking (row, column, box validation) were generated correctly without any guidance beyond describing what was needed. These are well-known algorithms with clear specifications.
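Sudoku's constraint check is a good example of what the AI got right immediately. A sketch of that kind of validator (not the exact generated code; `grid` is assumed to be a 9x9 array of numbers with 0 for empty cells):

```javascript
// Check whether placing `value` at (row, col) keeps the grid valid.
// `grid` is a 9x9 array of arrays; 0 means an empty cell.
function isValidPlacement(grid, row, col, value) {
  for (let i = 0; i < 9; i++) {
    if (grid[row][i] === value) return false; // row conflict
    if (grid[i][col] === value) return false; // column conflict
  }
  const boxRow = Math.floor(row / 3) * 3; // top-left corner of the 3x3 box
  const boxCol = Math.floor(col / 3) * 3;
  for (let r = boxRow; r < boxRow + 3; r++) {
    for (let c = boxCol; c < boxCol + 3; c++) {
      if (grid[r][c] === value) return false; // box conflict
    }
  }
  return true;
}
```

Because the specification is unambiguous (one of each digit per row, column, and box), this is exactly the kind of function AI generates correctly on the first try.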
UI polish. CSS styling, responsive layouts, hover effects, color schemes, and keyboard and touch input handling all came out well. The AI was particularly good at generating the CSS grid for the Sudoku board with the 3x3 box borders using nth-child selectors -- a task that would have taken me several rounds of trial and error to get right manually.
Feature additions. Once the core game was working, adding features like a timer, hint system, notes mode (Sudoku), speed settings, high score persistence (Snake), and undo functionality went smoothly. Each feature was a well-defined, self-contained addition that the AI could implement without breaking existing code.
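The undo feature illustrates why these additions went smoothly: each one can be built as a small, isolated piece of state. A minimal sketch of an undo stack of the kind described (names are my own, not the generated code's):

```javascript
// Minimal undo stack for Sudoku moves. Each entry records the cell
// and the value it held before the change, so undo can restore it.
const undoStack = [];

function applyMove(grid, row, col, value) {
  undoStack.push({ row, col, previous: grid[row][col] });
  grid[row][col] = value;
}

function undo(grid) {
  const last = undoStack.pop();
  if (last) grid[last.row][last.col] = last.previous;
}
```

Because the feature touches nothing but its own stack and the grid, the AI could bolt it on without disturbing the rest of the game.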
Where AI Struggled
Performance-critical algorithms. This was the biggest lesson. The AI-generated Sudoku puzzle generator used a brute-force uniqueness checker that called a recursive solver for every cell it tried to remove from the completed grid. It was algorithmically correct but computationally disastrous -- it froze the browser tab for 30+ seconds on "Hard" difficulty.
The fix required replacing the naive solver with one using the Minimum Remaining Values (MRV) heuristic, which picks the most constrained cell first. This brought generation time from 30 seconds to under 20 milliseconds. The AI suggested this optimization when asked, but it did not consider performance proactively. It generated the simplest correct implementation, not the most efficient one.
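The MRV idea itself is small: instead of filling cells left to right, pick the empty cell with the fewest legal candidates and recurse on it, so dead ends are discovered early. A self-contained sketch of the approach (my reconstruction, not the exact shipped code):

```javascript
// Legality check: one of each digit per row, column, and 3x3 box.
function isLegal(grid, row, col, value) {
  for (let i = 0; i < 9; i++) {
    if (grid[row][i] === value || grid[i][col] === value) return false;
  }
  const br = Math.floor(row / 3) * 3, bc = Math.floor(col / 3) * 3;
  for (let r = br; r < br + 3; r++)
    for (let c = bc; c < bc + 3; c++)
      if (grid[r][c] === value) return false;
  return true;
}

// MRV heuristic: return the empty cell with the fewest legal candidates,
// or null if no empty cell remains (grid is solved).
function findMostConstrainedCell(grid) {
  let best = null;
  for (let row = 0; row < 9; row++) {
    for (let col = 0; col < 9; col++) {
      if (grid[row][col] !== 0) continue;
      const candidates = [];
      for (let v = 1; v <= 9; v++) {
        if (isLegal(grid, row, col, v)) candidates.push(v);
      }
      // A cell with zero candidates is a dead end: return it immediately
      // so the caller backtracks without exploring further.
      if (candidates.length === 0) return { row, col, candidates };
      if (!best || candidates.length < best.candidates.length) {
        best = { row, col, candidates };
      }
    }
  }
  return best;
}

// Backtracking solver driven by MRV: far fewer wasted branches
// than scanning cells in fixed order.
function solve(grid) {
  const cell = findMostConstrainedCell(grid);
  if (cell === null) return true; // no empty cells: solved
  for (const v of cell.candidates) {
    grid[cell.row][cell.col] = v;
    if (solve(grid)) return true;
    grid[cell.row][cell.col] = 0; // undo and try the next candidate
  }
  return false;
}
```

The change is purely in cell-selection order -- the solver is still plain backtracking -- which is why the AI could apply it correctly once prompted, even though it never proposed it unprompted.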
Subtle browser API conflicts. The Sudoku game declared var history = [] as an undo stack at the top level in strict mode. This silently conflicted with window.history (the browser's History API), which is read-only. The script failed without any visible error. This kind of bug -- where a variable name collision with a browser global causes a silent failure -- is exactly the type of issue AI tends to miss because it does not run the code in a real browser context.
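The defensive fix is boring but worth spelling out: never give top-level variables the same name as a browser global (`history`, `name`, `status`, `location`, ...), or scope them inside a function or module where no collision is possible. A sketch (the `moveHistory` name is my illustrative rename, not the original identifier):

```javascript
// Bad: at the top level of a classic script in a browser, `history`
// already refers to window.history, so this declaration collides
// with the History API and the failure is easy to miss.
// var history = [];

// Safe: pick a name that shadows no browser global, or declare it
// inside a function/module scope instead of the top level.
const moveHistory = [];
moveHistory.push({ row: 0, col: 0, previous: 0 });
```

Linters with a browser-globals environment configured will flag this class of collision, which is a cheap way to compensate for the AI's blind spot.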
Edge cases in game logic. The Snake game needed careful handling of the case where the player reverses direction into themselves (pressing left while moving right). The initial AI implementation allowed this, causing an instant game-over. The fix was simple (ignore direction reversals), but it was not something the AI anticipated. Game design is full of these "obvious in hindsight" edge cases that require actually playing the game to notice.
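The fix amounts to one guard in the input handler: reject any requested direction that is the exact opposite of the current one. A minimal sketch of that guard:

```javascript
// Map each direction to its opposite so reversals can be detected.
const OPPOSITE = { up: "down", down: "up", left: "right", right: "left" };

// Return the direction the snake should actually take: keep the
// current heading if the player requested a 180-degree reversal.
function nextDirection(current, requested) {
  return OPPOSITE[current] === requested ? current : requested;
}
```

Two lines of logic -- but you only learn you need them by playing the game and dying instantly.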
The Effective Workflow
After iterating on both games, I settled on a workflow that plays to AI's strengths while compensating for its weaknesses:
- Describe the game in detail upfront. The more specific the initial prompt, the better the first draft. Include the tech stack (vanilla JS, single HTML file), the features you want (difficulty levels, scoring, mobile support), and any constraints (no dependencies, must work offline).
- Get a working prototype first, then test it yourself. Let the AI generate the complete game, then actually play it. You will find bugs that no amount of code review would catch -- timing issues, feel problems, missing edge cases.
- Fix bugs conversationally. Describe what you observed ("the game freezes when generating a Hard puzzle") rather than prescribing a solution. Let the AI diagnose and propose fixes. Its diagnostic ability is often better than its ability to write bug-free code from scratch.
- Run automated tests for critical logic. For the Sudoku generator, I wrote a Node.js test script that generated hundreds of puzzles and verified each had a valid, unique solution. This caught issues that manual play would have missed.
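The core of that uniqueness check is a solution counter with an early exit: stop searching as soon as a second solution turns up. A sketch of the idea (my reconstruction; the real script also generated the puzzles, which is omitted here):

```javascript
// Legality check: one of each digit per row, column, and 3x3 box.
function isLegal(grid, row, col, value) {
  for (let i = 0; i < 9; i++) {
    if (grid[row][i] === value || grid[i][col] === value) return false;
  }
  const br = Math.floor(row / 3) * 3, bc = Math.floor(col / 3) * 3;
  for (let r = br; r < br + 3; r++)
    for (let c = bc; c < bc + 3; c++)
      if (grid[r][c] === value) return false;
  return true;
}

// Count solutions by backtracking, but stop as soon as `cap` are
// found (cap = 2 is enough to distinguish "unique" from "not unique").
function countSolutions(grid, cap = 2) {
  for (let row = 0; row < 9; row++) {
    for (let col = 0; col < 9; col++) {
      if (grid[row][col] !== 0) continue;
      let count = 0;
      for (let v = 1; v <= 9; v++) {
        if (!isLegal(grid, row, col, v)) continue;
        grid[row][col] = v;
        count += countSolutions(grid, cap - count);
        grid[row][col] = 0; // restore before trying the next value
        if (count >= cap) return count; // early exit: enough found
      }
      return count; // all values tried for the first empty cell
    }
  }
  return 1; // no empty cells: the filled grid is one solution
}
```

A puzzle is valid when `countSolutions(puzzle) === 1`; wrap that in a loop over a few hundred generated puzzles and you have the test script.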
- Review performance last. Get the game working correctly first, then profile. AI-generated code often has O(n!) algorithms hiding behind clean syntax. Use the browser's Performance tab to find bottlenecks.
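Before reaching for the Performance tab, a one-liner timing wrapper is often enough to confirm a suspicion. A quick-and-dirty sketch (the function under test is whatever you suspect, e.g. the puzzle generator):

```javascript
// Time a single call and log elapsed milliseconds.
// performance.now() is available in browsers and in modern Node.js.
function timed(label, fn) {
  const start = performance.now();
  const result = fn();
  console.log(`${label}: ${(performance.now() - start).toFixed(1)} ms`);
  return result;
}
```

This is how the 30-second generator freeze shows up as a number instead of a hunch; the profiler then tells you where inside the call the time goes.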
Single-File Games: Why It Works
Both Sudoku and Snake are single index.html files with inline CSS and JavaScript. No build step, no modules, no bundler. This constraint turns out to be ideal for AI-assisted development for a few reasons:
- Full context. The AI can read the entire game in one shot. No cross-file references, no import chains, no build configuration to understand. This eliminates an entire class of errors where the AI misunderstands project structure.
- Instant feedback. Open the file in a browser, make a change, refresh. No compile step, no hot reload lag, no build errors to debug. The iteration speed is as fast as it can possibly be.
- Simple deployment. Copy the file to a web server. Done. No build artifacts, no static site generator, no CI pipeline.
The trade-off is that the file gets large (the Sudoku game is ~450 lines). But for a self-contained browser game, 450 lines in one file is more maintainable than 20 files with imports, because there is zero cognitive overhead in understanding where things are.
Would I Do It Again?
Yes, without hesitation. The AI took each game from "idea" to "playable prototype" in minutes instead of hours. The time I saved on boilerplate and standard algorithms was spent on the parts that actually required human judgment: game feel, performance tuning, and edge case handling.
The key insight is that AI does not replace the game developer. It replaces the tedious parts of game development -- the scaffolding, the event plumbing, the CSS fiddling -- and lets you focus on the parts that make a game actually fun to play. That is a trade-off I will take every time.