2026/02 | 5 min read

Deep Dive: Custom Canvas Charts for 30fps Animation

Chart.js redraws the full scene every frame—correct for dashboards, wrong for playback. The fix was to treat the chart as a video surface.


The stutter was subtle at first. Around commit 800, the playback would hiccup; by commit 1,500, it was unwatchable. I had been blaming the browser, the dataset, the animation loop—everything except the chart library doing exactly what it was designed to do.

Chart.js redraws the entire chart on every update. That is correct behaviour for a dashboard where data arrives in batches. It is wrong for a streaming playback where I am adding one point per frame and expecting smooth motion. The tool was not broken; the model was.

This is the story of replacing Chart.js with a custom Canvas 2D renderer, and the three bugs I had to fix along the way.


The Shape of the Problem

The Code Evolution Analyzer plays back a repository’s history as an animation. Each frame advances the commit clock and adds new data to the chart—line counts, file counts, bytes—for every tracked language. The animation runs at roughly 30fps, which means the chart needs to redraw 30 times per second as the dataset grows.

At ~2,000 commits and ~16 languages, that is tens of thousands of line segments by the end of playback. Chart.js was not struggling because it was slow; it was struggling because it was doing a complete scene rebuild every frame. The garbage collector pressure alone was enough to cause visible stutter.

The constraint here is absolute: if the renderer cannot keep up with the frame rate, the animation breaks. Everything else—axis labels, legends, tooltips—is negotiable.
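At 30fps, that budget is 1000 / 30 ≈ 33 ms per frame, and every millisecond spent rebuilding datasets or repainting old pixels comes out of it.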

The Local Bug: O(n²) Before the Renderer

My first performance wall was self-inflicted. I was rebuilding the entire dataset on every frame:

// BEFORE: O(n²) - rebuilds all data each frame
function updateChart() {
  const datasets = [];

  for (const lang of ALL_LANGUAGES) {
    const data = [];
    for (let i = 0; i <= currentIndex; i++) {
      data.push(DATA[i].languages[lang]?.code || 0);
    }
    datasets.push({ label: lang, data });
  }

  chart.data.datasets = datasets;
  chart.update('none');
}

At 2,000 commits and 16 languages, that is ~32,000 freshly allocated array elements per frame before Chart.js even touches the canvas. Allocating and collecting that garbage 30 times a second caused the stutter on its own.
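The quadratic part is worth spelling out: over a full playback, the rebuild allocates 16 × (1 + 2 + … + 2,000) = 16 × 2,000 × 2,001 / 2 ≈ 32 million array elements in total, which is why the stutter grew worse the longer the animation ran.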

The fix was incremental appends:

// AFTER: O(languages) per frame - only append new points
let lastChartIndex = -1;

function updateChart() {
  if (currentIndex < lastChartIndex) {
    // Reset when scrubbing backwards
    lastChartIndex = -1;
    chart.data.datasets.forEach(d => d.data = []);
    chart.data.labels = [];
  }

  for (let i = lastChartIndex + 1; i <= currentIndex; i++) {
    chart.data.labels.push(i + 1);
    for (let j = 0; j < ALL_LANGUAGES.length; j++) {
      const lang = ALL_LANGUAGES[j];
      const value = DATA[i].languages[lang]?.code || 0;
      chart.data.datasets[j].data.push(value);
    }
  }

  lastChartIndex = currentIndex;
  chart.update('none');
}

That change removed the quadratic waste, but the render loop still stuttered with a growing dataset. Chart.js was now receiving minimal new data, but it was still walking the entire dataset on every repaint.

The Renderer: Canvas 2D With Bounded Work

I replaced Chart.js with a direct Canvas 2D renderer. The trade-off is clear: I lose declarative configuration, but I gain control over exactly which pixels get touched each frame.

The smallest behaviour here is “draw lines for N languages across M points.” That is straight-line work for the GPU, which browsers handle well—if you let them.

function renderChart() {
  ctx.fillStyle = '#0d1117';
  ctx.fillRect(0, 0, chartWidth, chartHeight);

  // Find max value for scaling
  let maxValue = 0;
  for (let i = 0; i <= currentIndex; i++) {
    for (const lang of ALL_LANGUAGES) {
      const val = DATA[i].languages[lang]?.code || 0;
      if (val > maxValue) maxValue = val;
    }
  }
  maxValue = Math.max(maxValue * 1.1, 1); // 10% headroom, and never divide by zero

  // Guard the x scale: i / currentIndex would be 0/0 on the very first frame
  const xDenom = Math.max(currentIndex, 1);

  // Draw each language line
  for (const lang of ALL_LANGUAGES) {
    ctx.strokeStyle = LANGUAGE_COLORS[lang];
    ctx.lineWidth = 1.5;
    ctx.beginPath();

    for (let i = 0; i <= currentIndex; i++) {
      const val = DATA[i].languages[lang]?.code || 0;
      const x = PADDING_LEFT + (i / xDenom) * plotWidth;
      const y = plotHeight - (val / maxValue) * plotHeight + PADDING_TOP;

      if (i === 0) ctx.moveTo(x, y);
      else ctx.lineTo(x, y);
    }

    ctx.stroke();
  }
}

The problem now is not speed; it is unbounded work. As the playback advances, the number of segments grows without limit. The renderer is fast, but “fast times infinity” is still trouble.

Decimation: Respect the Pixel Budget

A 1,000px-wide canvas cannot display 2,000 distinct x positions. If two data points map to the same pixel column, one of them is wasted work. The constraint is the screen, not the data.

I decimated the dataset so the renderer only ever draws ~800 points per language:

const MAX_RENDER_POINTS = 800;

function renderChart() {
  const totalPoints = currentIndex + 1;
  const step = Math.max(1, Math.ceil(totalPoints / MAX_RENDER_POINTS));
  const xDenom = Math.max(currentIndex, 1); // same 0/0 guard as before
  // background fill and the maxValue scan are unchanged from the previous version

  for (const lang of ALL_LANGUAGES) {
    ctx.strokeStyle = LANGUAGE_COLORS[lang];
    ctx.beginPath();
    let firstPoint = true;

    for (let i = 0; i <= currentIndex; i += step) {
      const val = DATA[i].languages[lang]?.code || 0;
      const x = PADDING_LEFT + (i / xDenom) * plotWidth;
      const y = plotHeight - (val / maxValue) * plotHeight + PADDING_TOP;

      if (firstPoint) {
        ctx.moveTo(x, y);
        firstPoint = false;
      } else {
        ctx.lineTo(x, y);
      }
    }

    // Always include the current point
    if (step > 1) {
      const val = DATA[currentIndex].languages[lang]?.code || 0;
      const x = PADDING_LEFT + plotWidth;
      const y = plotHeight - (val / maxValue) * plotHeight + PADDING_TOP;
      ctx.lineTo(x, y);
    }

    ctx.stroke();
  }
}

That “always include the current point” line stops the right edge from jittering as the sampling window shifts. Without it, the leading edge would snap between sample points during playback. It is a small detail that protects the main visual invariant: the rightmost point should track smoothly.

High-DPI and Frame Rate

Two more constraints showed up once the renderer was fast enough to notice details.

First, high-DPI canvases are blurry if you render at CSS size. The fix is to scale the canvas buffer by devicePixelRatio and draw in CSS coordinates:

const dpr = window.devicePixelRatio || 1;
const rect = chartCanvas.getBoundingClientRect();
chartCanvas.width = rect.width * dpr;
chartCanvas.height = rect.height * dpr;
ctx.scale(dpr, dpr); // the same ctx the renderer draws with; all coordinates stay in CSS pixels

Second, 60fps is wasted work for this animation. The data changes at most 30 times per second, so rendering faster burns CPU for invisible frames. I let requestAnimationFrame control timing and made the animation loop respect the frame budget rather than racing ahead.
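A minimal sketch of that loop, reusing the renderChart function above (FRAME_INTERVAL, lastFrameTime, and tick are illustrative names, not the tool's actual code):

// 30fps-capped playback loop driven by requestAnimationFrame
const FRAME_INTERVAL = 1000 / 30; // ~33ms budget per frame

let lastFrameTime = 0;

function tick(timestamp) {
  // Only advance and repaint once a full frame interval has elapsed
  if (timestamp - lastFrameTime >= FRAME_INTERVAL) {
    lastFrameTime = timestamp;
    currentIndex++;  // advance the commit clock
    renderChart();   // bounded, decimated redraw
  }
  if (currentIndex < DATA.length - 1) {
    requestAnimationFrame(tick); // rAF still owns the timing; we just skip frames
  }
}

requestAnimationFrame(tick);

Skipped frames cost almost nothing: the callback returns after a single timestamp comparison.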

Both changes are small individually, but together they are the difference between “fast” and “quietly fast.” The animation no longer heats the laptop when left running.

Draw Order: Keep Small Lines Visible

When 16 lines overlap, large languages bury small ones. JavaScript covers everything if you draw it last.

I sort languages by their current value and draw from largest to smallest, so the big lines land at the back and the small ones stay visible on top:

const langValues = ALL_LANGUAGES
  .map(lang => ({
    lang,
    color: LANGUAGE_COLORS[lang],
    currentValue: getMetricValue(DATA[currentIndex].languages[lang], currentMetric)
  }))
  .sort((a, b) => b.currentValue - a.currentValue); // descending: largest drawn first, so it sits behind

for (const { lang, color } of langValues) {
  ctx.strokeStyle = color;
  // ... draw line
}

This is not a rendering feature; it is a visibility contract. The chart is about watching relative change over time, and burying minority languages defeats the purpose. A language that appears in commit 50 and grows steadily should be visible at commit 2,000, even if JavaScript has 10x more lines.

What Changed in Practice

With Chart.js, I observed ~10–18fps on large repositories (1,000–2,000 commits). With the custom renderer and decimation, the playback holds near 60fps on the same data, and I cap it at 30fps to reduce CPU usage.

The specific numbers matter less than the shape: rendering cost is now bounded by the screen resolution, not by history length. A repository with 500 commits and a repository with 5,000 commits render at the same speed once decimation kicks in.
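To make that concrete: with MAX_RENDER_POINTS = 800, a 5,000-commit history decimates at step = Math.ceil(5000 / 800) = 7, so each language line draws about 715 segments per frame, the same order of work as an 800-commit history drawn at step 1.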


The model that finally held: the chart is a playback surface, not a reporting dashboard. Playback surfaces have frame budgets. Frame budgets demand bounded work. Bounded work means decimating to the pixel grid and drawing from back to front.

Chart.js is still excellent for static charts. For streaming animation with growing data, you need to own the render loop.


See also: Building a Code Evolution Analyzer in a Weekend — the full project story
See also: Deep Dive: Audio Sonification — the sound design experiment