2026/02 · 7 min read

Deep Dive: Lazy Loading 42 Backends with Dynamic Imports

How I shrank the emulator's main bundle from 1MB to 104KB by lazy-loading backends during dial tones—and why users never notice the delay.

emulator vite performance code-splitting typescript lazy-loading


Hiding a megabyte of JavaScript behind modem handshakes


The emulator has 42 backends. Games, language interpreters, CPU emulators, BBS services—each one a self-contained world you can dial into. Some are simple (Tic-Tac-Toe), some are monsters (Z-Machine interpreter with WASM), but they all share one problem: each needs to exist somewhere.

For months, that somewhere was main.js. One bundle. One megabyte. Every visitor downloaded every backend whether they’d ever dial it or not.

This is the story of fixing that.

The Problem: 1MB of JavaScript Nobody Asked For

When I first built the backend system, the registry looked like this:

// The old way: static imports
import { BasicInterpreter } from '@/backend/basic-interpreter';
import { ForthInterpreter } from '@/backend/forth-interpreter';
import { DungeonCrawler } from '@/backend/dungeon';
import { ZMachineBackend } from '@languages/zil/backend-zmachine';
// ... 38 more imports ...

export const BACKEND_REGISTRY: Record<string, typeof BackendInterface> = {
  '5550300': BasicInterpreter,
  '5550400': ForthInterpreter,
  '5550100': DungeonCrawler,
  '5550365': ZMachineBackend,
  // ...
};

Clean. Simple. Every backend imported at the top of the file, registered by phone number. Vite dutifully bundled everything into one massive chunk.

The production build told the story:

dist/assets/main-abc123.js    1,057 KB

Over a megabyte for the main bundle. The Z-Machine interpreter alone was 200KB. The BASIC interpreter pulled in a WASM module. Every language, every game, every demo—all loaded on first page view.

On fast connections, nobody noticed. On mobile? On rural connections? First meaningful paint was painful. The terminal sat there, blank, while browsers chewed through a megabyte of JavaScript that 90% of users would never execute.

The Solution: Dynamic Imports and Code Splitting

The fix is conceptually simple: don’t import backends until someone dials them.

JavaScript has had dynamic imports since ES2020. Instead of import X from 'y' at the top of a file, you can write await import('y') anywhere. The module loads on demand, and bundlers like Vite are smart enough to split it into a separate chunk.
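
In isolation, the change looks like this (a generic illustration using one of the registry's own modules, not code lifted from the emulator):

// Static: resolved at build time, bundled into the importer's chunk
// import { ZMachineBackend } from '@languages/zil/backend-zmachine';

// Dynamic: Vite splits the module into its own chunk and fetches it the
// first time this line runs (top-level await is fine in an ES module)
const { ZMachineBackend } = await import('@languages/zil/backend-zmachine');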

Here’s what the registry looks like now:

// The new way: loader functions with dynamic imports
export type BackendLoader = () => Promise<new () => BackendInterface>;

export const BACKEND_REGISTRY: Record<string, BackendLoader> = {
  // === Games ===
  '5550100': () => import('@/backend/dungeon').then((m) => m.DungeonCrawler),
  '5550238': () => import('@/backend/adventure').then((m) => m.ColossalCaveAdventure),
  '5550601': () => import('@/backend/tictactoe').then((m) => m.TicTacToe),
  
  // === Language Interpreters ===
  '5550300': () => import('@/backend/basic-interpreter').then((m) => m.BasicInterpreter),
  '5550400': () => import('@/backend/forth-interpreter').then((m) => m.ForthInterpreter),
  '5550365': () => import('@languages/zil/backend-zmachine').then((m) => m.ZMachineBackend),
  
  // === CPU Emulators ===
  '5558008': () => import('@/backend/intel8008-test').then((m) => m.Intel8008Test),
  '5556502': () => import('@/backend/6502-video').then((m) => m.C64VideoBackend),
  
  // ... 34 more entries ...
} as const;

Each entry is now a function that returns a Promise. Vite sees those import() calls and creates separate chunks automatically. No configuration needed—it just works.

The build output after this change:

dist/assets/main-xyz789.js           104 KB  (90% reduction!)
dist/assets/dungeon-abc123.js         23 KB
dist/assets/adventure-def456.js       45 KB
dist/assets/basic-interpreter-ghi789.js  56 KB
dist/assets/backend-zmachine-jkl012.js   87 KB
... ~40 more chunks ...

104KB for the main bundle. Each backend in its own chunk, loaded only when dialed.

The Clever Bit: Preloading During Dial Tones

Here’s the thing about dial-up modems: they’re slow. Painfully, nostalgically slow. When you dial a number in the emulator, you hear:

  1. Dial tone (~500ms)
  2. DTMF digits (~700ms for 7 digits)
  3. Ring tone (~2-4 seconds)
  4. Modem handshake (~1-3 seconds depending on protocol)

That’s 4-8 seconds of audio before the connection establishes. The user is waiting—listening to those wonderful retro sounds, feeling the anticipation of a 1995 BBS login.

Why waste that time?

When dialing starts, we kick off the backend load in parallel:

// In ConnectionManager, when user starts dialing
modem.onDial((number) => {
  this.dialedNumber = normalizePhoneNumber(number);
  
  // Start preloading the backend while dial tones play
  // This allows lazy-loaded backends to load in parallel with audio
  if (this.backendFactory.preload) {
    const preloadPromise = this.backendFactory.preload(this.dialedNumber);
    if (preloadPromise) {
      console.log(`[ConnectionManager] Preloading backend for ${this.dialedNumber}`);
      // Swallow rejections here: the factory already logs failures, and
      // createAsync() will surface them again when the call connects
      preloadPromise.catch(() => {});
    }
  }
});

The preload() function starts the dynamic import but doesn’t await it:

// In RegistryBackendFactory
preload(phoneNumber: string): Promise<void> | null {
  const normalized = phoneNumber.replace(/[-\s()]/g, '');

  // Already loaded synchronously
  if (this.backends.has(normalized)) {
    return Promise.resolve();
  }

  // Already loading
  if (this.pendingLoads.has(normalized)) {
    return this.pendingLoads.get(normalized)!.then(() => {});
  }

  // Has a loader - start loading
  const loader = this.loaders.get(normalized);
  if (loader) {
    const loadPromise = loader();
    this.pendingLoads.set(normalized, loadPromise);

    loadPromise
      .then((BackendClass) => {
        this.backends.set(normalized, BackendClass);
        this.pendingLoads.delete(normalized);
        console.log(`[BackendFactory] Preloaded backend for ${normalized}`);
      })
      .catch((err) => {
        this.pendingLoads.delete(normalized);
        console.error(`[BackendFactory] Failed to preload ${normalized}:`, err);
      });

    return loadPromise.then(() => {});
  }

  // No backend registered
  return null;
}

The beauty of this approach: by the time the handshake finishes and we need to spawn the backend, it’s almost always already loaded. The network fetch happened during the dial tone. The JavaScript parsed during the ringing. Users perceive zero delay because the delay was hidden inside the expected waiting time.
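
Sketched from the consuming side (onHandshakeComplete and attachBackend are stand-ins for the real hooks, and createAsync() is covered in the next section):

// When the handshake finishes, createAsync() resolves immediately if the
// preload already landed, or joins the same in-flight Promise if it hasn't
modem.onHandshakeComplete(async () => {
  const backend = await this.backendFactory.createAsync(this.dialedNumber);
  if (backend) {
    this.attachBackend(backend);  // wire the backend's I/O to the terminal
  }
});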

(I love it when audio feedback serves double duty. The modem sounds aren’t just nostalgic—they’re a loading screen.)

The Factory Pattern: Sync vs Async Creation

The backend factory had to grow a split personality. Some code needs to create backends synchronously (tests, certain initialization paths). Other code can handle async creation (the actual call flow).

export class RegistryBackendFactory implements BackendFactory {
  private backends: Map<string, new () => BackendInterface>;
  private loaders: Map<string, BackendLoader>;
  private pendingLoads: Map<string, Promise<new () => BackendInterface>>;

  /**
   * Create a backend process for the given phone number (sync version)
   * Only works if backend is already loaded. Use createAsync for lazy-loaded backends.
   */
  create(phoneNumber: string): BackendProcess | null {
    const normalized = phoneNumber.replace(/[-\s()]/g, '');
    const BackendClass = this.backends.get(normalized);
    
    if (!BackendClass) {
      console.warn(`No backend for: ${normalized}`);
      return null;
    }

    return this.instantiateBackend(BackendClass, normalized);
  }

  /**
   * Create a backend process for the given phone number (async version)
   * Loads the backend if not already loaded, then creates an instance.
   */
  async createAsync(phoneNumber: string): Promise<BackendProcess | null> {
    const normalized = phoneNumber.replace(/[-\s()]/g, '');

    // Check if already loaded
    let BackendClass = this.backends.get(normalized);

    if (!BackendClass) {
      // Check if currently loading (preload in progress)
      const pending = this.pendingLoads.get(normalized);
      if (pending) {
        BackendClass = await pending;
      } else {
        // Check for loader
        const loader = this.loaders.get(normalized);
        if (loader) {
          const loadPromise = loader();
          this.pendingLoads.set(normalized, loadPromise);
          try {
            BackendClass = await loadPromise;
            this.backends.set(normalized, BackendClass);
            console.log(`[BackendFactory] Loaded backend for ${normalized}`);
          } catch (err) {
            console.error(`[BackendFactory] Failed to load ${normalized}:`, err);
            return null;
          } finally {
            this.pendingLoads.delete(normalized);
          }
        }
      }
    }

    if (!BackendClass) {
      console.warn(`No backend for: ${normalized}`);
      return null;
    }

    return this.instantiateBackend(BackendClass, normalized);
  }
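
  // Constructor and the shared instantiateBackend() helper are elided here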
}

The pendingLoads map is the key to avoiding duplicate fetches. If you dial a number, hang up, and immediately redial, the second dial will await the same Promise as the first. No wasted bandwidth, no race conditions.
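
That guarantee is easy to demonstrate (hypothetical test code; passing the registry to the constructor is an assumption about the factory's API, not its documented shape):

// Stub loader that counts how many times it actually runs
let loaderCalls = 0;
const factory = new RegistryBackendFactory({
  '5550100': async () => {
    loaderCalls++;
    return (await import('@/backend/dungeon')).DungeonCrawler;
  },
});

// Two overlapping dials: the second joins the first's pending Promise
const [first, second] = await Promise.all([
  factory.createAsync('555-0100'),
  factory.createAsync('5550100'),
]);
console.assert(loaderCalls === 1, 'loader should run exactly once');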

WASM Modules: They Come Along for the Ride

Several backends use WASM for the heavy lifting—the Z80 emulator, the BASIC interpreter, the Z-Machine. Each WASM module is a separate .wasm file that needs to load alongside its JavaScript glue code.

Here’s what a WASM-backed backend looks like internally:

// Inside z80-hello/index.ts
// Type-only import: erased at compile time, so it doesn't pull the WASM
// glue into this chunk or defeat the lazy loading below
import type { Z80Emulator as Z80EmulatorType } from '@cores/z80/pkg/z80_wasm.js';

let Z80Emulator: typeof Z80EmulatorType | null = null;

async function initZ80(): Promise<void> {
  if (Z80Emulator) return;  // Already initialized

  try {
    const emuModule = await import('@cores/z80/pkg/z80_wasm.js');
    await emuModule.default();  // Initialize WASM
    Z80Emulator = emuModule.Z80Emulator;
  } catch (err) {
    console.error('Failed to load Z80 WASM:', err);
    throw err;
  }
}

export class Z80HelloBackend extends BackendInterface {
  async onConnect(): Promise<void> {
    await initZ80();
    // Now we can use Z80Emulator
  }
}

When Vite sees that import('@cores/z80/pkg/z80_wasm.js'), it bundles the JavaScript glue into the backend’s chunk. The .wasm file gets copied to the build output and is fetched separately when emuModule.default() runs.

The timing works out nicely: the JavaScript chunk loads during preload, and the WASM binary loads during onConnect(). For most backends, the WASM is relatively small (20-80KB) and loads before the user finishes reading the welcome banner.

(For the truly massive WASM modules like the full C64 emulator, we might need a loading indicator. But that’s a problem for future me.)

Vite Configuration: Letting the Magic Happen

Vite’s code splitting is automatic for dynamic imports, but I added some manual chunking to keep related code together:

// vite.config.ts
export default defineConfig({
  build: {
    rollupOptions: {
      output: {
        manualChunks: (id) => {
          // Vendor libraries - split large packages for better caching
          if (id.includes('node_modules')) {
            if (id.includes('tone')) return 'vendor-tone';
            if (id.includes('@xterm')) return 'vendor-xterm';
            if (id.includes('lit')) return 'vendor-lit';
            return 'vendor';
          }

          // Backend shared infrastructure only - individual backends are lazy-loaded
          // via dynamic imports and get their own chunks automatically
          if (
            id.includes('backend-interface') ||
            id.includes('backend-disk-storage') ||
            id.includes('cpu-emulator-base') ||
            id.includes('worker-based-interpreter-backend')
          ) {
            return 'backend-base';
          }
          // Don't assign backends to a chunk - let dynamic imports create chunks

          // Serial/modem communication layer
          if (id.includes('/serial/')) return 'serial';

          // Audio system
          if (id.includes('/audio/')) return 'audio';
        },
      },
    },
  },
});

The key line is the comment: “Don’t assign backends to a chunk.” For anything with a dynamic import, let Vite decide. It’s smarter than me about dependency graphs.

The shared backend infrastructure (backend-interface, cpu-emulator-base) goes into a backend-base chunk that loads with the first backend. This prevents every backend from duplicating the base class code.

The Display Name Problem

Here’s a fun side effect of lazy loading: you can’t read BackendClass.name until you’ve loaded the class. But the welcome screen wants to show friendly names like “Dungeon Crawler” instead of phone numbers.

The old code did this:

// This required loading every backend just to get its name
for (const [phone, BackendClass] of Object.entries(BACKEND_REGISTRY)) {
  const displayName = getBackendDisplayName(BackendClass.name);
  welcomeScreen.addEntry(phone, displayName);
}

The new code uses a separate mapping:

const PHONE_DISPLAY_NAMES: Record<string, string> = {
  '5550100': 'Dungeon Crawler',
  '5550238': 'Colossal Cave',
  '5550300': 'BASIC',
  '5550365': 'Zork I',
  // ... all 42 entries ...
};

export function getDisplayNameForPhone(phoneNumber: string): string {
  const normalized = phoneNumber.replace(/[-\s()]/g, '');
  return PHONE_DISPLAY_NAMES[normalized] || normalized;
}

Is it redundant? Yes. Is it fast? Also yes. The display name mapping is a few KB of strings, not a megabyte of code. The welcome screen renders instantly without loading any backends.

Results: Before and After

Before lazy loading:

  • main.js: 1,057 KB
  • Time to first paint: ~3 seconds on slow 3G
  • All code loaded regardless of usage

After lazy loading:

  • main.js: 104 KB (90% reduction)
  • Backend chunks: 11-87 KB each (~40 chunks)
  • Time to first paint: ~800ms on slow 3G
  • Code loads only when dialed

Perceived latency:

  • Backend load time hidden behind dial tones
  • Most backends fully loaded before handshake completes
  • Zero user-visible delay in normal usage

The Tradeoffs

Nothing is free. Here’s what lazy loading costs:

1. First-Dial Latency on Fast Connections

If you’re on a fast connection and dial instantly, the backend might not be fully loaded yet. The createAsync() call blocks until loading completes. In practice, even on fiber, the dial sequence takes 2+ seconds, and most backends load in under 500ms. But it’s theoretically possible to hit a delay.

2. Increased Complexity

The factory now has sync and async paths. The registry is functions instead of classes. Tests need to handle Promises. Not dramatically harder, but more moving parts.

3. Network Waterfall for Heavy Backends

A backend with WASM loads in two stages: JavaScript chunk first, then WASM binary. On very slow connections, this waterfall could be noticeable. The preload helps, but doesn’t eliminate it.
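
One possible mitigation (a sketch, not what the emulator does today): kick off the WASM init at module scope, so the .wasm fetch starts as soon as the chunk evaluates during preload instead of waiting for onConnect():

// Eager kick-off when the chunk loads; initZ80() is the initializer from
// the Z80 example above. Failures are swallowed and retried in onConnect()
const z80Ready = initZ80().catch(() => {});

export class Z80HelloBackend extends BackendInterface {
  async onConnect(): Promise<void> {
    await z80Ready;   // usually already settled by now
    await initZ80();  // no-op if loaded; retries if the eager init failed
  }
}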

4. Module Duplication Risk

If two backends share a utility module, Vite might inline it into both chunks. The shared base classes are handled explicitly in manualChunks, but I need to watch for other shared code. A good problem to have compared to a 1MB bundle.

Lessons Learned

The loading screen was already there. Modem handshakes are naturally slow. Instead of fighting user expectations (“why is this taking so long?”), I leaned into them. The delay is authentic. The delay is hidden. Win-win.

Dynamic imports are trivially easy in modern tooling. I expected to fight Vite’s bundler. Instead, I changed import X from 'y' to () => import('y').then(m => m.X) and everything worked. Vite handled the chunking, the hashing, the cache-busting—all of it.

Display names shouldn’t require loading code. Metadata and behavior should live separately. I could have put a static displayName property on each class, but that still requires loading the class. A separate mapping is simpler and faster.
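
For contrast, here’s roughly what that rejected alternative would look like (sketch):

// Metadata on the class itself: reading it forces the whole chunk to load
export class DungeonCrawler extends BackendInterface {
  static displayName = 'Dungeon Crawler';
  // ...
}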

Preloading is the secret sauce. Lazy loading without preloading would feel sluggish. Starting the load early, while the user is happily listening to dial tones, makes it feel instant. The best performance optimization is the one users never notice.


See also: Deep Dive: Bell 103 Audio Modem — the dial tones that hide your loading times

See also: Deep Dive: Bell 212A and V.32bis Handshakes — even longer handshakes for even more loading time