Deployment Prep and Backend Growth
Kubernetes/K3s deployment configs, CI/CD scaffolding, schema-driven API generation, and the Star Trek BASIC backend.
The emulator hit the size where “it runs for me” stopped being evidence. If I needed to ship tomorrow, I wanted to already know the shape would hold. So today was scaffolding: K3s configs, pipeline wiring, schema-driven codegen, and fixing a WASM race that made Star Trek BASIC fail silently.
Multi-Environment Boundaries
The deployment plan splits dev and prod into separate namespaces with their own ingress, secrets, and quotas. Path-based routing keeps the services distinct:
- /api/* goes to the API service (priority 1).
- /ws goes to the WebSocket service (priority 2).
- / is the frontend catch-all (priority 3).
That priority ordering is not optional. If the frontend catches /api/health first, you get HTML where the caller expects JSON, and the failure mode looks like a logic bug instead of a routing bug. Cilium’s ingress rules make the ordering explicit.
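A cheap way to catch that failure before it masquerades as a logic bug is a smoke test that hits /api/health and checks the content type. A minimal sketch, assuming a Node test context; the base URL variable is a placeholder, not part of the project's actual test suite:

```typescript
// Hypothetical smoke test: verifies the ingress routes /api/* to the API
// service instead of falling through to the frontend catch-all.
const BASE_URL = process.env.SMOKE_BASE_URL ?? "http://localhost:8080";

async function checkApiRouting(): Promise<void> {
  const res = await fetch(`${BASE_URL}/api/health`);
  const contentType = res.headers.get("content-type") ?? "";

  // A mis-ordered ingress answers here with the SPA's index.html:
  // status 200, but text/html instead of application/json.
  if (!contentType.includes("application/json")) {
    throw new Error(
      `/api/health returned "${contentType}" - likely routed to the frontend`
    );
  }
  console.log("API routing OK:", res.status);
}

checkApiRouting().catch((err) => {
  console.error(err);
  process.exit(1);
});
```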
CI/CD That Protects Time
Three workflows, each with a different time budget:
- ci-cd.yml for the full mainline pipeline, lint through deploy.
- pr-validation.yml for fast feedback on pull requests, targeting under 15 minutes.
- nightly.yml for the long, exhaustive run that catches slow regressions.
The sequence is deliberate: lint before tests, unit before E2E, security audits on every build. That ordering collapses wasted minutes into early failures. Aggressive caching (Cargo, npm, Docker layers) should cut build times by 3–5x once the caches warm up.
Schema-Driven API Client
The frontend client is generated from JSON schemas in server/schemas/ via npm run codegen:ts. The output is a typed API client that the build regenerates whenever schemas change.
The constraint: the server owns the contract, and the frontend inherits it. That eliminates silent drift between what the server sends and what the client expects.
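For illustration, the consuming side ends up looking something like the sketch below. The type, endpoint, and method names are hypothetical stand-ins for whatever codegen:ts actually emits from server/schemas/:

```typescript
// Hypothetical shape of a generated, schema-backed client; the real names
// come from server/schemas/ via `npm run codegen:ts`.
export interface MachineState {
  id: string;
  cpu: "z80" | "i8080" | "i8088" | "mos6502" | "x86";
  running: boolean;
}

export class ApiClient {
  constructor(private baseUrl: string) {}

  // Each method mirrors one schema: if the server changes the contract,
  // regeneration changes this signature and the compiler flags every caller.
  async getMachine(id: string): Promise<MachineState> {
    const res = await fetch(`${this.baseUrl}/api/machines/${id}`);
    if (!res.ok) throw new Error(`GET machine ${id} failed: ${res.status}`);
    return (await res.json()) as MachineState;
  }
}
```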
Star Trek BASIC and the WASM Gate
Star Trek BASIC gained a proper backend entry (555-1702), which immediately exposed a WASM timing issue. The program could load before the worker module was ready. Nothing crashed—the game just failed to run. Classic async initialization bug.
The fix is causal ordering: wait for the worker’s READY message, then load the program, then run. When the ready signal is the gate, the race disappears. The debugging took longer than the fix because the failure was silent; adding logging throughout the initialization flow made the sequence visible.
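In outline, the gate looks like this on the main-thread side; the message names and worker file are illustrative, not the emulator's actual protocol:

```typescript
// Illustrative main-thread gate: nothing is sent to the worker until it has
// announced READY, so "load then run" can never outrun WASM initialization.
const worker = new Worker(new URL("./basic-worker.js", import.meta.url));

const workerReady = new Promise<void>((resolve) => {
  const onMessage = (event: MessageEvent) => {
    if (event.data?.type === "READY") {
      worker.removeEventListener("message", onMessage);
      resolve();
    }
  };
  worker.addEventListener("message", onMessage);
});

export async function runProgram(source: string): Promise<void> {
  await workerReady;                             // gate on the READY signal
  worker.postMessage({ type: "LOAD", source });  // then load the program
  worker.postMessage({ type: "RUN" });           // then run it
}
```

Anything else that talks to the worker awaits the same promise, so the ordering holds regardless of which code path fires first.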
CPU Core Memory Hygiene
I also tightened memory behaviour in the CPU cores. The 8088’s 1 MB memory moves to Box<[u8]> for predictable allocation (no Vec resize overhead), UART output swaps String::clone() for std::mem::take() (zero-copy extraction), and buffers get explicit 64 KB caps to prevent unbounded growth. This touched all five cores: Z80, Intel 8080, Intel 8088, MOS 6502, and x86.
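A condensed Rust sketch of the three patterns, with struct and constant names invented for illustration rather than taken from the actual cores:

```rust
// Illustrative sketch; identifiers here are stand-ins, not the real ones.
const MEMORY_SIZE: usize = 1 << 20;       // 1 MB address space for the 8088
const MAX_UART_BUFFER: usize = 64 * 1024; // explicit 64 KB cap

struct Cpu8088 {
    // Box<[u8]> allocates the full 1 MB once; unlike Vec<u8> it can never
    // grow, so there is no resize overhead or excess capacity.
    memory: Box<[u8]>,
    uart_output: String,
}

impl Cpu8088 {
    fn new() -> Self {
        Self {
            memory: vec![0u8; MEMORY_SIZE].into_boxed_slice(),
            uart_output: String::new(),
        }
    }

    /// Hands the buffered UART output to the caller without copying:
    /// std::mem::take swaps in an empty String and returns the old one.
    fn take_uart_output(&mut self) -> String {
        std::mem::take(&mut self.uart_output)
    }

    /// Appends output while enforcing the cap so long sessions stay bounded.
    fn push_uart_byte(&mut self, byte: u8) {
        if self.uart_output.len() < MAX_UART_BUFFER {
            self.uart_output.push(byte as char);
        }
    }
}
```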
None of this is visible in a demo, but it reduces heap churn and keeps long sessions stable. All 157 unit tests still pass.
Tomorrow: push more hardware realism through the system now that the deployment skeleton can carry it. The viewport scaling story still needs attention—CRT layout on mobile is a known problem when the keyboard opens.
Previous: 2026-01-27