The Big web/ Move and Parser Migrations
Moving the web app to a proper structure, migrating parsers to Pest/Pratt, and building the Z-Machine opcode engine.
Sometimes you look at a codebase and think “I know exactly where everything is.” Then you try to explain it to someone else and realize you’ve been navigating by muscle memory, not logic. Today I finally fixed that—moving all the web source code from src/ into web/src/ where it actually belongs in a polyglot monorepo.
The change itself was simple: git mv src web/src. The fallout was not.
Every import path broke. Every Vite alias pointed somewhere that no longer existed. The TypeScript config complained. The build succeeded but the dev server served nothing. I spent a solid hour chasing down a path alias that looked correct in three different config files but was actually using the wrong format in vite.config.ts—turns out Rollup wants an array format for resolve.alias, not the object format that works fine in development.
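Roughly, the working shape is the array-of-entries form that Vite also accepts (root and paths here are simplified for illustration, not my exact config):

```ts
// vite.config.ts — simplified sketch of the alias fix, not the full config
import { fileURLToPath, URL } from 'node:url';
import { defineConfig } from 'vite';

export default defineConfig({
  root: 'web', // frontend now lives under web/
  resolve: {
    // The object form ({ '@': '/path/to/web/src' }) worked in dev,
    // but the array-of-entries form is what built cleanly end to end.
    alias: [
      { find: '@', replacement: fileURLToPath(new URL('./web/src', import.meta.url)) },
    ],
  },
});
```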
(The git log from this day reads like a debugging diary: “fix: resolve all TypeScript type errors”, “fix: correct import paths in tuning components”, “refactor: migrate imports to @/ path aliases without extensions”. Each commit represents about twenty minutes of “wait, why doesn’t that work?”)
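Concretely, the import migration was this kind of change repeated across the tree (the component path below is made up for illustration):

```ts
// Before: deep relative path, sometimes with an explicit extension
//   import { TuningPanel } from '../../components/tuning/TuningPanel.ts';

// After: '@' alias rooted at web/src, no extension
import { TuningPanel } from '@/components/tuning/TuningPanel';
```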
But here’s the thing about painful refactors: they force you to actually understand your own code. By the end of the day, I could draw the project structure from memory. The Rust and WASM code lives at the project root (cores/, languages/, assemblers/). The web frontend lives inside Vite’s root (web/src/). Clean separation, no ambiguity.
The parser migration marathon
While I was in refactoring mode, I tackled something I’d been putting off: migrating the language parsers to Pest.
The emulator runs five interpreted languages—BASIC, Forth, REXX, Scheme, and Prolog—plus multiple assemblers for the CPU cores. Each one had its own hand-rolled parser, and each one had its own quirks. The BASIC tokenizer was 200+ lines of careful character-by-character processing. The Prolog lexer handled edge cases like 'it''s' (escaped quotes in atoms) with increasingly convoluted logic.
Pest replaces all of that with grammar files. The Prolog grammar is 78 lines. It handles every edge case the hand-rolled version did, produces byte-for-byte identical token streams, and is actually readable by humans who aren’t me six months ago.
The BASIC migration was more ambitious—I added all the C64/VIC-20 keywords I’d been meaning to support: RUN, LIST, NEW, CONT, GET, CLR, SYS, WAIT, LOAD, SAVE, VERIFY, plus the Commodore-specific TI and TI$ timer functions. Nineteen new comparison tests, all passing. The grammar now covers the full Microsoft/Commodore BASIC dialect.
(Writing a grammar is oddly satisfying. You define rules like keyword = { "PRINT" | "INPUT" | "GOTO" } and the parser just… works. No state machines, no lookahead hacks, no “but what if there’s a space here?”)
Z-Machine: from notes to runtime
The other thread running through this day was Z-Machine work. I’d been treating it as a research project—reading specs, sketching out designs, building test fixtures. Today it became real code.
The opcode execution engine handles variable and operand types for all the major instruction classes: 2OP (two operand), 1OP (one operand), 0OP (zero operand), and VAR (variable). Each opcode type has its own encoding rules, and getting them wrong means the interpreter reads garbage and crashes spectacularly.
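The fiddliest rule is VAR form, where a type byte after the opcode gives two bits per operand, read from the high bits down and stopping at the first "omitted" marker. A sketch of just that rule, per the Z-Machine standard (TypeScript here for readability; the actual engine is C, and these names are mine, not its API):

```ts
// Z-Machine operand types: two bits each in the VAR-form type byte.
const LARGE_CONSTANT = 0b00; // 2-byte constant follows
const SMALL_CONSTANT = 0b01; // 1-byte constant follows
const VARIABLE       = 0b10; // 1-byte variable number follows
const OMITTED        = 0b11; // no operand; nothing after this one counts

// Read operand types most-significant pair first, stopping at OMITTED.
function decodeVarOperandTypes(typeByte: number): number[] {
  const types: number[] = [];
  for (let shift = 6; shift >= 0; shift -= 2) {
    const t = (typeByte >> shift) & 0b11;
    if (t === OMITTED) break;
    types.push(t);
  }
  return types;
}

// Example: 0b00101111 → [LARGE_CONSTANT, VARIABLE] (last two pairs omitted)
```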
The approach I took: build the decoder first, wrap it in tests, then implement opcodes one at a time against the test harness. It’s slower than just writing code, but it means I catch encoding bugs immediately instead of discovering them three weeks later when Zork crashes on turn 47.
```c
// The satisfying part: operand decoding that actually works
switch (opcode_type) {
case OP_2OP:
    // Two operands; in long form their types are packed into the opcode byte
    decode_2op_operands(zm, &op1, &op2);
    break;
case OP_VAR:
    // Variable count; operand types come from a trailing type byte
    decode_var_operands(zm, ops, &op_count);
    break;
}
```
The test story files (905.z5, zork1.z3) are in the repo now. Not playable yet, but the decoder can step through them without exploding.
Quality of life: test harnesses and signal cleanup
I also found and fixed a bug that had been causing intermittent connection issues: seven CPU emulator backends were missing DTR/RTS cleanup in their onDisconnect() handlers. When a backend disconnected, the RS-232 signal state persisted, and the next connection attempt saw stale control signals and got confused.
The fix was trivial—add deassertDTR() and deassertRTS() calls—but finding it required tracing through modem state corruption across multiple connection cycles. The weather backend tests now verify signal handling explicitly, which means this class of bug should be caught immediately in the future.
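For the record, the fix has roughly this shape (the class and signal fields below are stand-ins; only the method names match the real backends):

```ts
// Minimal sketch — backend structure and signal state are illustrative.
class EmulatorBackend {
  private dtr = false; // Data Terminal Ready line state
  private rts = false; // Request To Send line state

  onDisconnect(): void {
    // The missing cleanup: without these, the next connection attempt
    // started with stale control-signal state from the previous session.
    this.deassertDTR();
    this.deassertRTS();
  }

  private deassertDTR(): void { this.dtr = false; }
  private deassertRTS(): void { this.rts = false; }
}
```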
What changed
- Moved frontend code to web/ with updated Vite root and path aliases
- Migrated BASIC/Prolog tokenizers to Pest parsers (61% code reduction for Prolog)
- Added full C64/VIC-20 BASIC keyword coverage to Pest grammar
- Built Z-Machine opcode execution engine with test harness
- Fixed RS-232 signal leak in 7 CPU emulator backends
What I was going for
This was a maintenance and foundation day. The directory restructure makes the project navigable. The parser migrations make the language runtimes maintainable. The Z-Machine work turns a research project into something that might actually run games.
What went sideways
Import path cleanup cascaded through about 80 files. The tuning workbench (experimental audio development code) uses outdated API methods, so I had to exclude it from type checking entirely rather than fix it now. And the Vite alias format issue cost me an embarrassing amount of time for what turned out to be a one-character fix.
What’s next
The structure holds. Next was to build on it—deployment prep, multi-environment configuration, and backend service growth.
Previous: 2026-01-26