2026 / 01 | 5 min read

Deployment Prep and Backend Growth

Kubernetes/K3s deployment configs, CI/CD scaffolding, schema-driven API generation, and the Star Trek BASIC backend.

emulator kubernetes ci-cd deployment star-trek


I hit the point where “it works on my machine” stops being good enough. The emulator had grown complex enough that I needed real deployment infrastructure—not because I was launching tomorrow, but because I needed to know I could launch tomorrow.

So I spent the day building the scaffolding: Kubernetes configs, CI/CD pipelines, and the kind of boring infrastructure that makes everything else possible.

Multi-environment everything

The deployment architecture splits cleanly: emulator-ca-prod and emulator-ca-dev as separate namespaces, each with its own ingress, secrets, and resource quotas. Path-based routing handles the three main services:

  • /api/* → API service (priority 1)
  • /ws → WebSocket service (priority 2)
  • / → Frontend service (catch-all, priority 3)

That priority ordering matters more than you’d think. Get it wrong and your API calls hit the frontend instead, which returns a nice HTML 404 page that confuses everyone involved.

(The first time I tested this, I had the priorities backwards. The frontend happily served index.html for /api/health. Took me longer than I’d like to admit to figure out why my health check was returning <!DOCTYPE html>.)
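
If it helps to see the failure mode concretely, here is a toy model of priority-ordered prefix matching (not the ingress controller's actual code, and the rule shapes are made up) showing why the catch-all has to lose:

// Toy model of priority-ordered prefix routing. Hypothetical shapes only;
// the real ingress controller does not work like this internally.
type Route = { prefix: string; service: string; priority: number };

const routes: Route[] = [
  { prefix: '/api/', service: 'api', priority: 1 },
  { prefix: '/ws', service: 'websocket', priority: 2 },
  { prefix: '/', service: 'frontend', priority: 3 }, // catch-all
];

// Lowest priority number wins; the first matching rule takes the request.
function resolve(path: string): string {
  const match = [...routes]
    .sort((a, b) => a.priority - b.priority)
    .find((r) => path.startsWith(r.prefix));
  return match ? match.service : 'frontend';
}

resolve('/api/health'); // 'api' here; flip the priorities and '/' wins instead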

CI/CD that actually helps

I wrote three pipeline configurations:

Main pipeline (ci-cd.yml): Ten jobs from lint to deploy, about 45 minutes total. Aggressive caching for both Cargo and npm makes repeat runs much faster.

PR validation (pr-validation.yml): Fast feedback under 15 minutes. Parallel checks, fail-fast on lint errors, no point running expensive tests if the code doesn’t even compile cleanly.

Nightly builds (nightly.yml): The comprehensive run that takes 2+ hours but catches the subtle stuff—security audits, benchmark regressions, the full E2E suite.

The key insight was fail-fast ordering: lint before tests, unit before E2E, security audits on every build. If your code style is wrong, I don’t need to spend 40 minutes discovering your tests also fail.

Schema-driven sanity

One piece of infrastructure that doesn’t look impressive but prevents entire categories of bugs: generating the API client from JSON schema.

/**
 * EmulatorApiClient - Auto-generated typed API client
 *
 * DO NOT MODIFY BY HAND. Generated from server/schemas/api.schema.json
 * Run: npm run codegen:api
 */
export type Transport = 'websocket' | 'http' | 'auto';

The server defines the schema. The generator produces TypeScript types. The frontend inherits them. When someone changes a field name on the server, the build fails on the frontend instead of shipping a production bug.

(If you’ve ever debugged a production incident that turned out to be “someone renamed userId to user_id three weeks ago and nobody noticed until a customer complained”—yeah, this prevents that.)
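
To make that concrete, here is a toy example of what the regenerated types catch (the field names are illustrative, not the project's real schema or generator output):

// Regenerated client types after the server schema renames `userId` to `user_id`.
// Illustrative only; not the project's actual schema.
export interface UserSession {
  user_id: string;
  startedAt: string;
}

function greet(session: UserSession): string {
  // Any stale code still reading `session.userId` now fails the build with
  //   error TS2339: Property 'userId' does not exist on type 'UserSession'.
  return `welcome back, ${session.user_id}`;
}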

Star Trek BASIC and the WASM race condition

On the fun side: Star Trek BASIC got a proper backend. On the not-fun side: it immediately revealed a race condition in the WASM initialization.

The problem was timing: the program would try to load before the WASM module was fully initialized, causing silent failures. Everything looked fine—no errors, no warnings—but the game just didn’t run.

The fix required coordinating three async operations:

  1. Wait for the WASM READY message
  2. Load the program (awaited via a promise-based fetch so it finishes before the next step)
  3. Then—and only then—send the RUN command

// 1. Wait for the WASM READY message before loading the program
worker.addEventListener('message', async (e) => {
  if (e.data.type === 'READY') {
    // 2. NOW it's safe to load; awaiting it keeps anything from racing ahead
    await loadProgram();
    // 3. Only then send the RUN command (same message shape as READY)
    worker.postMessage({ type: 'RUN' });
  }
});

This is the kind of bug that doesn’t show up in tests but appears in production when timing varies by a few milliseconds.

Viewport scaling (the unsung hero)

I also spent time on something users will never notice unless it breaks: the viewport scaler that keeps the CRT interface visible on different screen sizes.

// Measure natural dimensions ONCE before any transforms
this.measureNaturalDimensions();

// Calculate scale to fit available space (never upscale past 1.2x)
const scaleX = availableWidth / this.naturalWidth;
const scaleY = availableHeight / this.naturalHeight;
const scale = Math.min(scaleX, scaleY, 1.2);

// Handle mobile keyboard specially: pin to the top so the CRT stays visible
if (this.isMobileKeyboardOpen && this.detectMobileKeyboard) {
  this.element.style.transformOrigin = 'top center';
} else {
  this.element.style.transformOrigin = 'center center';
}

// Apply the computed scale
this.element.style.transform = `scale(${scale})`;

When a phone keyboard opens, the available viewport shrinks dramatically. Without handling this, the CRT either gets clipped or jumps around weirdly. The fix: detect the keyboard and pin the transform origin to the top instead of centering.

(This took more debugging than the entire Kubernetes setup. Mobile Safari lies about viewport dimensions during keyboard transitions.)
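
For what it's worth, the usual trick is to watch window.visualViewport and treat a large height drop as the keyboard opening. A rough sketch of that idea, with an arbitrary threshold and a hypothetical callback, not necessarily what the scaler actually does:

// Heuristic keyboard detection via the VisualViewport API.
// The 0.75 threshold and the onChange callback are illustrative choices.
let keyboardOpen = false;

function watchForMobileKeyboard(onChange: (open: boolean) => void): void {
  const vv = window.visualViewport;
  if (!vv) return; // no VisualViewport support: skip detection entirely

  const baseline = window.innerHeight;
  vv.addEventListener('resize', () => {
    const open = vv.height < baseline * 0.75;
    if (open !== keyboardOpen) {
      keyboardOpen = open;
      onChange(open);
    }
  });
}

// Re-run the scaling pass whenever the keyboard state flips.
watchForMobileKeyboard((open) => {
  console.log(open ? 'keyboard open: pin origin to top' : 'keyboard closed: center');
});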

CPU core optimizations

While I was in infrastructure mode, I also improved the CPU cores’ memory handling:

  • Replace Vec<u8> with Box<[u8]> for the Intel 8088’s 1MB memory (eliminates allocation overhead)
  • Replace String::clone() with std::mem::take() for UART output (zero-copy extraction)
  • Add 64KB buffer limits to prevent unbounded UART buffer growth

These changes affect all five cores (Z80, Intel 8080, Intel 8088, MOS 6502, x86). Not visible to users, but the reduced heap allocations and better cache locality make everything slightly snappier.

What changed

  • Added Kubernetes/K3s deployment configs with multi-environment support
  • Built complete CI/CD pipeline with three workflow configurations
  • Introduced schema-driven API client generation
  • Fixed WASM initialization race condition in Star Trek BASIC
  • Added responsive viewport scaling with mobile keyboard detection
  • Optimized CPU core memory allocation across all emulator cores

What I was going for

“Deployable in anger” was the goal. Not just running locally, but running in a real environment with monitoring, rollback capability, and the kind of infrastructure that lets you sleep at night after a deploy.

What went sideways

The Star Trek race condition ate several hours. The viewport scaler mobile keyboard issue was surprisingly tricky. And I discovered that path priority ordering in Kubernetes ingress rules is not intuitive at all.

What’s next

With deployment infrastructure in place, I could ship features more confidently. Next was storage peripherals and audio realism—the mechanical sounds that make floppy drives and cassette decks feel real.

Previous: 2026-01-27