Quick Start — Get a Worker Running in 5 Minutes
Six months ago, I shipped a data-visualization dashboard that crunched CSV files with 50,000+ rows client-side. Every time a user uploaded a file, the browser froze for 3–4 seconds. Buttons stopped responding, animations stuttered, the scroll bar locked up completely. The fix took about 30 lines of code and zero third-party libraries — just Web Workers.
Below is the bare minimum to move work off the main thread. No framework, no build step.
Create a file called worker.js:
// worker.js
self.onmessage = function (e) {
  const result = heavyComputation(e.data);
  self.postMessage(result);
};

function heavyComputation(data) {
  // Simulate expensive work
  let sum = 0;
  for (let i = 0; i < data.iterations; i++) {
    sum += Math.sqrt(i) * Math.sin(i);
  }
  return sum;
}
Then wire it up from your main script:
// main.js
const worker = new Worker('worker.js');

worker.postMessage({ iterations: 10_000_000 });

worker.onmessage = function (e) {
  console.log('Result from worker:', e.data);
  // Update the UI here — safe, back on the main thread
};

worker.onerror = function (err) {
  console.error('Worker error:', err.message);
};
Done. While the worker churns through ten million iterations, the UI thread stays free. Buttons respond, animations run, users don’t notice a thing.
Deep Dive — What Is Actually Happening
The Single-Threaded Problem
JavaScript runs on a single thread. The event loop handles everything: DOM updates, user input, network callbacks, your application logic. When one task runs long — say, sorting 100,000 records or decoding a large image — every other task waits its turn. That waiting is what users feel as a frozen interface.
Web Workers fix this by running JavaScript in a separate thread. Each worker gets its own heap, its own event loop, and its own global scope (self instead of window). Apart from the opt-in SharedArrayBuffer, the two threads never share memory. They talk through message passing, which keeps them safely isolated from each other.
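Here is the blocking problem in miniature. This is a contrived sketch (blockFor is not a real API): a synchronous busy loop that, on the main thread, would stall every click, scroll, and animation frame until it returns.

```javascript
// blockFor: synchronously spin for roughly `ms` milliseconds.
// While this runs, the event loop cannot advance, so nothing
// else on the thread gets a turn.
function blockFor(ms) {
  const start = Date.now();
  let spins = 0;
  while (Date.now() - start < ms) {
    spins++; // pure busy-wait
  }
  return spins;
}

// Even work scheduled "immediately" waits for the loop to finish:
setTimeout(() => console.log('runs only after blockFor returns'), 0);
blockFor(100); // ~100ms during which a UI would be frozen
```

Moving that loop into a worker changes nothing about the loop itself; it just moves the stall to a thread nobody is looking at.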
The Messaging Model
Data passed between threads is copied by default using the structured clone algorithm. It handles plain objects, arrays, typed arrays, Map, Set, Date, and ArrayBuffer. Functions and DOM nodes can't be cloned at all (postMessage throws a DataCloneError), and class instances come through as plain objects: the prototype chain, and with it every method, is stripped.
// Sending complex data
worker.postMessage({
  matrix: [[1, 2], [3, 4]],
  config: { normalize: true, precision: 4 }
});

// Inside worker.js
self.onmessage = function (e) {
  const { matrix, config } = e.data;
  const result = processMatrix(matrix, config);
  self.postMessage(result);
};
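You can probe these cloning rules without spinning up a worker at all: the global structuredClone() runs the same algorithm postMessage() uses. A quick sketch:

```javascript
// Plain data survives the clone intact:
const cloned = structuredClone({ list: [1, 2], tags: new Set(['a']) });
console.log(cloned.list, cloned.tags.has('a'));

// Functions can't be cloned at all:
try {
  structuredClone({ fn: () => {} });
} catch (err) {
  console.log(err.name); // DataCloneError
}

// Class instances clone as plain objects; methods are gone:
class Point {
  constructor(x) { this.x = x; }
  double() { return this.x * 2; }
}
const p = structuredClone(new Point(21));
console.log(p.x, typeof p.double); // 21 undefined
```

If you need methods on the other side, send plain data and reconstruct the instance inside the worker.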
Transferable Objects — Zero-Copy for Large Data
Copying a large ArrayBuffer — think raw audio samples or image pixel data — is expensive. Instead, you can transfer ownership to the worker. The transfer takes constant time regardless of buffer size:
// main.js — transfer a 10MB ArrayBuffer to the worker
const buffer = new ArrayBuffer(10 * 1024 * 1024);
worker.postMessage({ buffer }, [buffer]);
// After this line, `buffer` is detached — the main thread can no longer read it

// worker.js — transfer it back after processing
self.onmessage = function (e) {
  const buf = e.data.buffer;
  // ... process buf ...
  self.postMessage({ result: buf }, [buf]);
};
I use this pattern for canvas image processing in production. Handing over a 4K image buffer drops from ~8ms as a copy to under 0.1ms as a transfer. For anything above a few hundred KB, transferring beats copying every time.
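You can observe detachment directly. structuredClone() accepts the same transfer list as postMessage(), so this sketch demonstrates the ownership handoff without a worker:

```javascript
const buf = new ArrayBuffer(16);
console.log(buf.byteLength); // 16

// Transfer ownership instead of copying; postMessage(data, [buf])
// does the same thing across the worker boundary.
const moved = structuredClone(buf, { transfer: [buf] });

console.log(moved.byteLength); // 16 (the new owner sees all the data)
console.log(buf.byteLength);   // 0  (the original is detached)
```

Any attempt to construct a view over the detached buffer throws, which is exactly the isolation guarantee that makes the zero-copy handoff safe.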
Advanced Usage
Worker Pools for Parallel Processing
One worker gives you one extra thread. If the workload can be split — sorting chunks of a dataset, running parallel inference calls, processing individual video frames — a pool of workers scales throughput roughly with the number of CPU cores.
// worker-pool.js
class WorkerPool {
  constructor(workerScript, poolSize = navigator.hardwareConcurrency || 4) {
    this.workers = Array.from({ length: poolSize }, () => ({
      worker: new Worker(workerScript),
      busy: false
    }));
    this.queue = [];
  }

  run(data) {
    return new Promise((resolve, reject) => {
      const free = this.workers.find(w => !w.busy);
      if (free) {
        this._dispatch(free, data, resolve, reject);
      } else {
        this.queue.push({ data, resolve, reject });
      }
    });
  }

  _dispatch(slot, data, resolve, reject) {
    slot.busy = true;
    slot.worker.onmessage = (e) => {
      resolve(e.data);
      this._next(slot);
    };
    slot.worker.onerror = (err) => {
      reject(err);
      this._next(slot); // keep draining the queue even after an error
    };
    slot.worker.postMessage(data);
  }

  _next(slot) {
    slot.busy = false;
    if (this.queue.length > 0) {
      const next = this.queue.shift();
      this._dispatch(slot, next.data, next.resolve, next.reject);
    }
  }
}

// Usage
const pool = new WorkerPool('worker.js', 4);
const promises = chunks.map(chunk => pool.run(chunk));
const results = await Promise.all(promises);
navigator.hardwareConcurrency returns the number of logical CPU cores — 4 on a mid-range laptop, 12+ on a modern desktop. Sizing the pool to match avoids over-threading on low-end phones while still saturating faster hardware.
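Sizing the pool is half the job; the other half is splitting the input. A small illustrative helper (chunk is not part of the pool class above, just a name I'm using here):

```javascript
// chunk: split an array into `n` roughly equal pieces, one per worker.
function chunk(items, n) {
  const size = Math.ceil(items.length / n);
  const out = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// e.g. feed each piece to pool.run():
//   const results = await Promise.all(chunk(rows, 4).map(c => pool.run(c)));
console.log(chunk([1, 2, 3, 4, 5], 2)); // → [[1, 2, 3], [4, 5]]
```

Keep chunks coarse: each pool.run() call pays a message-passing round trip, so a handful of big chunks beats thousands of tiny ones.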
Inline Workers with Blob URLs
Shipping a separate worker.js file gets awkward with bundlers. Define the worker inline instead:
const workerCode = `
  self.onmessage = function(e) {
    const result = e.data.reduce((acc, n) => acc + n, 0);
    self.postMessage(result);
  };
`;

const blob = new Blob([workerCode], { type: 'application/javascript' });
const url = URL.createObjectURL(blob);
const worker = new Worker(url);
URL.revokeObjectURL(url); // Clean up the URL after the worker is created
Vite and webpack 5 both support new Worker(new URL('./worker.js', import.meta.url)) with full ESM module support. That’s the cleaner path if you’re on a modern build setup.
Shared Workers for Cross-Tab Communication
A regular Worker is private to one tab. A SharedWorker lives across every tab of the same origin — handy for shared WebSocket connections or a cross-tab caching layer:
// shared-worker.js
const connections = [];

self.onconnect = function (e) {
  const port = e.ports[0];
  connections.push(port);
  port.onmessage = function (msg) {
    // Broadcast to all connected tabs
    connections.forEach(p => p.postMessage(msg.data));
  };
  port.start();
};
// main.js (any tab)
const sw = new SharedWorker('shared-worker.js');
sw.port.onmessage = (e) => console.log('Broadcast received:', e.data);
sw.port.start();
sw.port.postMessage('Hello from Tab 1');
Practical Tips From Production
Know When NOT to Use a Worker
Workers aren’t free. Spinning one up costs roughly 5–10ms and a few MB of memory. For tasks that complete in under 50ms, the overhead likely cancels out the benefit. My rule of thumb: if the work would block the UI for more than one animation frame (~16ms), it belongs in a worker.
Good candidates: JSON parsing of large payloads, cryptography, compression/decompression, image convolution, ONNX inference, and any tight computational loop.
Poor candidates: sorting a few thousand items, string formatting, or anything that needs DOM access — workers have no document or window.
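The one-animation-frame rule of thumb can be automated. A rough sketch (shouldOffload is a hypothetical helper, not a standard API): time the task once with a representative input, and route it to a worker only if it blows the frame budget.

```javascript
// One frame at 60fps is ~16ms; work longer than that produces visible jank.
const FRAME_BUDGET_MS = 16;

function shouldOffload(task, sampleInput) {
  const start = performance.now();
  task(sampleInput);
  const elapsed = performance.now() - start;
  return elapsed > FRAME_BUDGET_MS;
}

// Cheap work stays on the main thread:
console.log(shouldOffload((n) => n + 1, 1)); // false
```

Note the probe itself runs the task once on the main thread, so do it at startup or behind a flag, not in a hot path.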
Error Handling and Graceful Degradation
const worker = new Worker('worker.js');

worker.onerror = (e) => {
  console.error(`Worker error at ${e.filename}:${e.lineno} — ${e.message}`);
  // Fall back to main-thread execution (heavyComputation must also be
  // available in the main bundle for this fallback to work)
  const result = heavyComputation(pendingData);
  updateUI(result);
};

// Always add a timeout for long-running workers
const TIMEOUT_MS = 30_000;
const timeoutId = setTimeout(() => {
  console.warn('Worker timed out, terminating');
  worker.terminate();
}, TIMEOUT_MS);

worker.onmessage = (e) => {
  clearTimeout(timeoutId);
  updateUI(e.data);
};
Debugging Workers in Chrome DevTools
Open DevTools → Sources tab → look for the Threads panel on the right. Workers show up there with their own call stacks. You can set breakpoints inside worker scripts exactly as you would in main-thread code. Honestly, I wish I’d found this earlier — I spent two hours logging values to track down a bug that a single breakpoint would’ve caught in ten seconds.
Measuring the Impact
Before shipping, measure with the Performance tab. Record a trace with and without the worker. The key signal is Long Tasks — any task blocking the main thread for more than 50ms. After moving my CSV parser into a worker, long tasks during file upload dropped from 4 to zero. Lighthouse’s Total Blocking Time fell from 680ms to 40ms on the same workload.
Web Workers are one of those APIs that look approachable until you hit the edge cases. The core API is three methods. The hard part is everything around it: deciding pool size, handling transferables correctly, writing graceful fallbacks, knowing when the overhead isn’t worth it. Get the mental model right — two independent threads communicating by message — and the rest falls into place. After that, spinning up a worker feels as routine as writing an async function.

