Introduction
JavaScript runs on a single thread per browser tab. Rendering, JS execution, and user input handling all happen on the same thread, so any heavy computation blocks the UI. Web Workers are a built-in browser API that solves this problem.
Why does a blocked main thread matter so much? Any main-thread task that runs longer than 50ms is classified as a Long Task by the browser's Long Tasks API. Since one frame at 60fps is roughly 16.7ms, a single 50ms task drops at least three consecutive frames. This is exactly why scrolling stutters, button clicks feel unresponsive, and animations jank.
Types of Web Workers
| Category | Dedicated Worker | Shared Worker | Service Worker |
|---|---|---|---|
| Scope | Only the creating script | Shared across multiple windows/iframes on the same origin | Acts as a network proxy |
| Communication | Direct postMessage | Via MessagePort | Event-based (FetchEvent, etc.) |
| Use case | Offloading CPU-intensive work | Sharing state across tabs, shared WebSocket | Offline cache, push notifications, PWA |
| Lifecycle | Terminates when page closes | Terminates when all connections close | Managed by browser (stateless) |
This article focuses on Dedicated Workers, the most general-purpose type.
Core API: postMessage and Structured Clone
The main thread and workers communicate via postMessage. Data is deep-copied using the Structured Clone Algorithm.
```ts
// worker.ts
self.onmessage = (e: MessageEvent<{ data: number[] }>) => {
  const { data } = e.data;
  const sorted = [...data].sort((a, b) => a - b);
  self.postMessage({ sorted });
};
```
```ts
// main.ts
const worker = new Worker(new URL('./worker.ts', import.meta.url), {
  type: 'module',
});

worker.postMessage({ data: hugeArray });

worker.onmessage = (e: MessageEvent<{ sorted: number[] }>) => {
  renderList(e.data.sorted);
};
```
What Can and Cannot Be Cloned
The Structured Clone algorithm supports a wide range of types.
- Supported: Primitives, Array, Map, Set, Date, RegExp, ArrayBuffer, TypedArray, Blob, File, Error
- Not supported: Function, DOM nodes, Symbol, prototype chains, getters/setters
If you encounter a DataCloneError, you're most likely trying to pass a function or DOM node. Always remember that workers cannot access the DOM.
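You can check whether a value will survive postMessage without spinning up a worker: modern browsers and Node.js 17+ expose the same algorithm as a global `structuredClone()`. A small sketch:

```typescript
// structuredClone uses the same Structured Clone Algorithm as postMessage.
const original = {
  nums: [3, 1, 2],
  created: new Date(0),
  tags: new Set(['a', 'b']),
};

const copy = structuredClone(original);

// Deep copy: same values, different identities.
console.log(copy.nums);                   // [3, 1, 2]
console.log(copy.nums === original.nums); // false
console.log(copy.tags.has('a'));          // true

// Functions are not cloneable: this throws a DataCloneError.
try {
  structuredClone({ handler: () => {} });
} catch (err) {
  console.log((err as Error).name); // DataCloneError
}
```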
postMessage Overhead
Structured Clone is not free. Here are benchmarks from Chrome.
| Number of object keys | Round-trip time |
|---|---|
| Under 1,000 | Under 1ms |
| 10,000 | ~2.5ms |
| 100,000 | ~35ms |
| 1,000,000 | ~550ms |
For objects with a few thousand properties or fewer, the overhead is negligible. But for large data, the cost itself becomes a problem. This is where Transferable Objects come in.
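The exact figures vary by machine and engine, but they are easy to reproduce: timing the global `structuredClone()` approximates the serialization cost of a postMessage round trip. A rough, illustrative sketch (the key count and names are arbitrary):

```typescript
// Build an object with 100,000 keys and time a structured clone of it.
const payload: Record<string, number> = {};
for (let i = 0; i < 100_000; i++) {
  payload['key' + i] = i;
}

const start = performance.now();
const cloned = structuredClone(payload);
const elapsedMs = performance.now() - start;

console.log(Object.keys(cloned).length); // 100000
console.log(`clone took ${elapsedMs.toFixed(1)}ms`);
```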
Transferable Objects: Ownership Transfer Instead of Copying
Transferable objects don't copy data — they transfer memory ownership. It's a zero-copy operation, so the transfer cost is nearly constant regardless of size.
```ts
// Comparing 32MB ArrayBuffer transfer
const buffer = new ArrayBuffer(32 * 1024 * 1024);

// Clone: ~302ms
worker.postMessage({ buffer });

// Transfer: ~6.6ms (roughly 45x faster)
worker.postMessage({ buffer }, [buffer]);

// After transfer: buffer.byteLength === 0 (detached)
```
Common transferable objects include ArrayBuffer, MessagePort, ReadableStream, OffscreenCanvas, and ImageBitmap.
One caveat I ran into: Transferring a large number of small Transferable objects at once can actually reverse the performance gain.
| Number of items (100 bytes each) | Clone | Transfer |
|---|---|---|
| 10,000 | 5ms | 2,015ms |
| 100,000 | 65ms | 7,609ms |
In Chrome/Edge, transferring tens of thousands of individual ArrayBuffers slowed down dramatically, far worse than linearly, due to per-buffer mapping overhead. Takeaway: Transfer works best for a small number of large objects; Clone is better for a large number of small objects.
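The detach semantics can also be observed without a worker: `structuredClone()` accepts the same transfer list as postMessage (supported in modern browsers and recent Node.js releases). A minimal sketch:

```typescript
const buffer = new ArrayBuffer(32);
console.log(buffer.byteLength); // 32

// Transfer ownership instead of copying: the clone owns the memory now.
const received = structuredClone({ buffer }, { transfer: [buffer] });

console.log(received.buffer.byteLength); // 32
console.log(buffer.byteLength);          // 0 (detached)
```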
Real-World Use Cases
1) Parsing/Filtering Large Datasets
This is the most common pattern. Parsing CSV/JSON files over 10MB, fuzzy-searching lists with tens of thousands of entries, or applying complex filter conditions — all handled in a worker.
```ts
// filter-worker.ts
interface FilterParams {
  records: Record<string, string | number>[];
  query: string;
  fields: string[];
}

self.onmessage = (e: MessageEvent<FilterParams>) => {
  const { records, query, fields } = e.data;
  const lowerQuery = query.toLowerCase();
  const filtered = records.filter((record) =>
    fields.some((field) => {
      const value = record[field];
      return typeof value === 'string' && value.toLowerCase().includes(lowerQuery);
    }),
  );
  self.postMessage({ filtered, total: records.length, matched: filtered.length });
};
```
2) Image Processing
Send Canvas pixel data to a worker to apply filters. Transferring an OffscreenCanvas lets the worker render directly.
```ts
// image-worker.ts
self.onmessage = (e: MessageEvent<{ imageData: ImageData }>) => {
  const { imageData } = e.data;
  const pixels = imageData.data;

  // Grayscale conversion
  for (let i = 0; i < pixels.length; i += 4) {
    const avg = (pixels[i] + pixels[i + 1] + pixels[i + 2]) / 3;
    pixels[i] = avg; // R
    pixels[i + 1] = avg; // G
    pixels[i + 2] = avg; // B
  }

  self.postMessage({ imageData }, [imageData.data.buffer]);
};
```
3) Encryption/Hash Operations
The Web Crypto API is accessible inside workers via self.crypto.subtle.
```ts
// hash-worker.ts
self.onmessage = async (e: MessageEvent<{ data: ArrayBuffer }>) => {
  const hashBuffer = await crypto.subtle.digest('SHA-256', e.data.data);
  const hashArray = Array.from(new Uint8Array(hashBuffer));
  const hashHex = hashArray.map((b) => b.toString(16).padStart(2, '0')).join('');
  self.postMessage({ hash: hashHex });
};
```
Simplifying Communication with Comlink
postMessage-based communication gets complex quickly. Comlink (~1.1KB) from Google Chrome Labs lets you call worker functions as if they were local async functions.
```ts
// math-worker.ts
import * as Comlink from 'comlink';

const api = {
  calculatePrimes(limit: number): number[] {
    const primes: number[] = [];
    for (let i = 2; i <= limit; i++) {
      let isPrime = true;
      for (let j = 2; j * j <= i; j++) {
        if (i % j === 0) {
          isPrime = false;
          break;
        }
      }
      if (isPrime) primes.push(i);
    }
    return primes;
  },
};

Comlink.expose(api);
```

```ts
// main.ts
import * as Comlink from 'comlink';

const worker = new Worker(new URL('./math-worker.ts', import.meta.url), {
  type: 'module',
});

const api = Comlink.wrap<{ calculatePrimes(limit: number): number[] }>(worker);

// Call it just like a local function
const primes = await api.calculatePrimes(1_000_000);
```
The postMessage/onmessage boilerplate disappears, and type safety is preserved. Transferable transfers are also supported via Comlink.transfer().
SharedArrayBuffer and Atomics
postMessage either copies data or transfers ownership. SharedArrayBuffer lets multiple threads directly share the same memory.
Security Requirements
After the 2018 Spectre vulnerability, SharedArrayBuffer is only available in environments with Cross-Origin Isolation enabled.
Cross-Origin-Opener-Policy: same-origin
Cross-Origin-Embedder-Policy: require-corp
You must verify that window.crossOriginIsolated is true before using it. These headers can cause compatibility issues with third-party iframes and images, so always assess the impact before adopting them.
Thread Synchronization with Atomics
When multiple threads access a SharedArrayBuffer concurrently, race conditions occur. Atomics provides atomic operations to prevent this.
```ts
// Sharing progress from worker to main thread
// main.ts
const sab = new SharedArrayBuffer(Int32Array.BYTES_PER_ELEMENT);
const progress = new Int32Array(sab);

const worker = new Worker(new URL('./heavy-worker.ts', import.meta.url), {
  type: 'module',
});
worker.postMessage({ progress });

// Poll progress from main thread
const interval = setInterval(() => {
  const current = Atomics.load(progress, 0);
  updateProgressBar(current);
  if (current >= 100) clearInterval(interval);
}, 100);
```

```ts
// heavy-worker.ts
self.onmessage = (e: MessageEvent<{ progress: Int32Array }>) => {
  const { progress } = e.data;
  for (let i = 0; i <= 100; i++) {
    performChunk(i);
    Atomics.store(progress, 0, i);
    Atomics.notify(progress, 0);
  }
};
```
Here's a summary of key Atomics methods.
| Method | Description |
|---|---|
| `Atomics.load(ta, idx)` | Atomic read |
| `Atomics.store(ta, idx, val)` | Atomic write |
| `Atomics.add` / `Atomics.sub` | Atomic addition/subtraction |
| `Atomics.compareExchange` | CAS (Compare-And-Swap) operation |
| `Atomics.wait(ta, idx, val)` | Blocks if the value matches (only usable in workers) |
| `Atomics.waitAsync(ta, idx, val)` | Non-blocking wait that returns a Promise (usable on the main thread too) |
| `Atomics.notify(ta, idx, count)` | Wakes up waiting threads |
SharedArrayBuffer is powerful, but for simple data transfer, postMessage is far simpler and safer. Concurrency bugs (race conditions, deadlocks) are extremely difficult to debug, so it's best to use SharedArrayBuffer only when you truly need shared memory.
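These primitives also work with no worker in the picture, since SharedArrayBuffer and Atomics are plain globals (always available in Node.js; browser availability depends on cross-origin isolation, as above). A minimal single-threaded sketch of the core operations:

```typescript
const sab = new SharedArrayBuffer(2 * Int32Array.BYTES_PER_ELEMENT);
const counter = new Int32Array(sab);

// Atomic read-modify-write: returns the value *before* the addition.
const before = Atomics.add(counter, 0, 5);
console.log(before);                   // 0
console.log(Atomics.load(counter, 0)); // 5

// CAS: writes 42 only if the current value is still 5; returns the old value.
const prev = Atomics.compareExchange(counter, 0, 5, 42);
console.log(prev);                     // 5
console.log(Atomics.load(counter, 0)); // 42

// A failed CAS (expected value doesn't match) leaves the value untouched.
Atomics.compareExchange(counter, 0, 999, 7);
console.log(Atomics.load(counter, 0)); // 42
```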
Bundler Integration
Webpack 5
```ts
const worker = new Worker(new URL('./worker.ts', import.meta.url));
```
Webpack recognizes this pattern and bundles the worker file as a separate entry point.
Vite
```ts
const worker = new Worker(new URL('./worker.ts', import.meta.url), {
  type: 'module',
});
```
A query string approach also works.
```ts
import MyWorker from './worker?worker';

const worker = new MyWorker();
```
Using Workers in React/Next.js
Worker initialization must happen inside useEffect, because the Worker API doesn't exist in SSR environments.
```tsx
import { useEffect, useRef, useState } from 'react';

function DataProcessor({ data }: { data: number[] }) {
  const workerRef = useRef<Worker | null>(null);
  const [result, setResult] = useState<number[]>([]);

  useEffect(() => {
    workerRef.current = new Worker(new URL('../workers/processor.ts', import.meta.url), {
      type: 'module',
    });
    workerRef.current.onmessage = (e: MessageEvent<{ processed: number[] }>) => {
      setResult(e.data.processed);
    };
    return () => {
      workerRef.current?.terminate();
    };
  }, []);

  useEffect(() => {
    workerRef.current?.postMessage({ data });
  }, [data]);

  return <List items={result} />;
}
```
When to Use Workers
Moving every task to a worker isn't always beneficial. You need to account for worker creation cost (~40ms) and postMessage overhead.
| Criterion | Description |
|---|---|
| Over 16ms | Exceeds one frame at 60fps. Worker candidate. |
| Over 50ms | Long Task threshold. Directly impacts UX. Strongly recommended to offload. |
| Communication > Computation | If the task itself takes only a few ms, offloading to a worker is actually slower. |
To measure, use performance.now() to time tasks, or monitor Long Tasks with PerformanceObserver.
```ts
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (entry.duration > 50) {
      reportLongTask(entry);
    }
  }
});
observer.observe({ type: 'longtask', buffered: true });
```
You can also visually identify Long Tasks in the Chrome DevTools Performance tab.
Key Caveats
1) No DOM access
document, window, localStorage, etc. are unavailable inside workers. self serves as the worker's global scope instead. However, fetch, IndexedDB, WebSocket, and crypto.subtle are available.
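For code shared between the main thread and a worker, a simple environment guard keeps DOM access from crashing the worker. A sketch (the variable and function names here are my own, not a standard API):

```typescript
// true inside a worker (or Node.js), false in a browser window context
const hasNoDOM = typeof document === 'undefined';

function describeContext(): string {
  // Every DOM touch stays behind the guard.
  return hasNoDOM ? 'worker-like scope (no DOM)' : `window: ${document.title}`;
}

console.log(describeContext());
```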
2) Worker creation has a cost
Creating a worker instance takes roughly 40ms, plus additional time for loading and parsing the separate JS file. If you create and terminate workers per task, the cost accumulates. It's better to use a Worker Pool pattern that reuses pre-created workers. Use navigator.hardwareConcurrency to determine the pool size based on the number of logical cores.
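The pool idea can be sketched in a few dozen lines. `WorkerPool`, `WorkerLike`, and `makeMockWorker` below are illustrative names, not a library API: `WorkerLike` is a stand-in for the real Worker interface so the queueing logic runs (and is testable) anywhere; in the browser you would pass real `new Worker(...)` instances instead of mocks.

```typescript
interface WorkerLike {
  postMessage(msg: unknown): void;
  onmessage: ((e: { data: unknown }) => void) | null;
  terminate(): void;
}

class WorkerPool {
  private idle: WorkerLike[];
  private queue: { msg: unknown; resolve: (v: unknown) => void }[] = [];

  constructor(workers: WorkerLike[]) {
    this.idle = [...workers];
  }

  // Resolves once an idle worker has processed the message.
  run(msg: unknown): Promise<unknown> {
    return new Promise((resolve) => {
      this.queue.push({ msg, resolve });
      this.drain();
    });
  }

  terminate(): void {
    this.idle.forEach((w) => w.terminate());
  }

  private drain(): void {
    while (this.idle.length > 0 && this.queue.length > 0) {
      const worker = this.idle.pop()!;
      const task = this.queue.shift()!;
      worker.onmessage = (e) => {
        worker.onmessage = null;
        this.idle.push(worker); // reuse the worker instead of re-creating it
        task.resolve(e.data);
        this.drain();
      };
      worker.postMessage(task.msg);
    }
  }
}

// Mock worker that squares a number asynchronously, standing in for a real
// `new Worker(...)`. In the browser, size the pool with
// navigator.hardwareConcurrency.
function makeMockWorker(): WorkerLike {
  const w: WorkerLike = {
    onmessage: null,
    postMessage(msg: unknown) {
      setTimeout(() => w.onmessage?.({ data: (msg as number) ** 2 }), 0);
    },
    terminate() {},
  };
  return w;
}

async function demo(): Promise<void> {
  const pool = new WorkerPool([makeMockWorker(), makeMockWorker()]);
  const results = await Promise.all([pool.run(2), pool.run(3), pool.run(4)]);
  console.log(results); // [4, 9, 16]
  pool.terminate();
}
demo();
```

With two workers and three tasks, the third task waits in the queue until a worker frees up, which is exactly the behavior you want instead of paying the ~40ms creation cost per task.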
3) Consider memory overhead
Each worker has its own JS engine instance and its own GC. Creating too many workers can cause memory usage to spike.
4) Check Module Workers support
The { type: 'module' } option is supported in Chrome 80+, Firefox 114+, and Safari 15+. This covers about 97% of users, but if you need to support legacy environments, you'll have to use classic workers.
Conclusion
Web Workers are the most direct way to keep the main thread responsive while handling heavy computation.
The criteria I currently use are straightforward:
- Computation over 50ms: Always offload to a worker
- Large binary data: Transfer with Transferable objects
- Simple JSON data: Structured Clone (default postMessage behavior)
- Complex worker communication: Abstract with Comlink
- Only when you truly need shared memory: SharedArrayBuffer + Atomics
With INP now established as a Core Web Vitals metric, main thread optimization is only growing in importance. Approaching Web Workers not as a mere performance optimization but as an architectural pattern for guaranteeing UI responsiveness makes it much easier to decide when to adopt them.
Thanks for reading.
References
- web.dev - Use web workers to run JavaScript off the browser's main thread
- web.dev - A concrete web worker use case
- MDN - Using Web Workers
- MDN - Transferable objects
- MDN - Structured clone algorithm
- MDN - SharedArrayBuffer
- MDN - Atomics
- Chrome Blog - Transferable objects: Lightning fast
- web.dev - Making your website cross-origin isolated using COOP and COEP
- GitHub - GoogleChromeLabs/comlink
- James Milner - Examining Web Worker Performance
- Performance issue of using massive transferable objects in Web Worker
- V8 - Atomics.wait, Atomics.notify, Atomics.waitAsync