## Introduction
I first used Tone.js when adding interactive sound features to an internal project at work.
The browser's Web Audio API is powerful, but using it directly means manually wiring up AudioNodes, calculating frequencies, and managing timing yourself. Tone.js abstracts away that complexity, letting you work with musical concepts (notes, BPM, effect chains) right out of the box.
Projects like Google's Chrome Music Lab and Ableton's Learning Music also use Tone.js under the hood.
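To get a feel for what "calculating frequencies yourself" means: with the raw Web Audio API you would convert notes to hertz by hand using equal temperament. A minimal sketch (the helper name is my own):

```javascript
// MIDI note number → frequency in Hz (12-tone equal temperament, A4 = 440 Hz).
const midiToFreq = (midi) => 440 * Math.pow(2, (midi - 69) / 12);

midiToFreq(69); // A4 → 440
midiToFreq(60); // middle C → ~261.63
```

Tone.js hides this arithmetic behind note names: Tone.Frequency('A4').toFrequency() gives you 440 directly.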
## Before You Start: The Autoplay Policy
Modern browsers block audio autoplay on page load. The AudioContext starts in a "suspended" state and can only be activated after a user gesture (e.g., a click).
```javascript
document.getElementById('start-btn')?.addEventListener('click', async () => {
  await Tone.start();
  // Audio is now available
});
```

You must call Tone.start() inside a user interaction handler. Miss this, and you'll get no sound at all.
## Core Architecture: Source → Effect → Destination
The audio signal flow in Tone.js is straightforward:
- Source: Generates sound (Synth, Player, Sampler, etc.)
- Effect: Processes sound (Reverb, Delay, Filter, etc.)
- Destination: Outputs to speakers
```javascript
const synth = new Tone.Synth();
const reverb = new Tone.Reverb(1.5);
const delay = new Tone.FeedbackDelay('8n', 0.3);

// Chain: Synth → Delay → Reverb → Speakers
synth.chain(delay, reverb, Tone.getDestination());
synth.triggerAttackRelease('C4', '8n');
```

Use .chain() to connect effects in order, ending with Tone.getDestination() to route to the speakers. For a single effect, .toDestination() is more concise.
## Time Representation
One of the things that makes Tone.js so convenient is its support for musical time notation.
| Format | Example | Description |
|---|---|---|
| Notation | "4n", "8n", "2n." | Quarter note, eighth note, dotted half note |
| Transport Time | "1:2:3" | Bar:Beat:Sixteenth |
| Seconds | 1.5 | Numbers are in seconds |
| Measures | "2m" | In measures |
| Now-relative | "+0.5" | Relative to current time |
When BPM changes, notation-based times adjust automatically. At 120 BPM, "4n" is 0.5 seconds; at 60 BPM, it's 1 second.
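The arithmetic behind that is simple enough to sketch. These helpers are hypothetical, purely for illustration — Tone.Time('4n').toSeconds() does all of this for you:

```javascript
// Duration of one quarter note ("4n") in seconds at a given BPM.
const quarterNoteSeconds = (bpm) => 60 / bpm;

// "bar:beat:sixteenth" (assuming 4/4 time) converted to seconds at a given BPM.
function transportTimeToSeconds(time, bpm) {
  const [bars, beats, sixteenths] = time.split(':').map(Number);
  const quarter = 60 / bpm;
  return bars * 4 * quarter + beats * quarter + sixteenths * (quarter / 4);
}

quarterNoteSeconds(120); // 0.5
transportTimeToSeconds('1:2:3', 120); // 3.375
```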
## Sources: Generating Sound
### Synthesizers
```javascript
// Basic synthesizer
const synth = new Tone.Synth().toDestination();
synth.triggerAttackRelease('C4', '8n'); // Play C4 for an eighth note

// FM synthesis
const fmSynth = new Tone.FMSynth({
  harmonicity: 3,
  modulationIndex: 10,
}).toDestination();
fmSynth.triggerAttackRelease('A3', '4n');

// Plucked string sound
const pluck = new Tone.PluckSynth().toDestination();
pluck.triggerAttack('E4');
```

All basic synthesizers are monophonic (single voice). To play chords, wrap them in PolySynth.
```javascript
const polySynth = new Tone.PolySynth(Tone.Synth).toDestination();
polySynth.triggerAttackRelease(['C4', 'E4', 'G4'], '2n'); // C major chord
```

### Playing Audio Files
```javascript
// Single file playback
const player = new Tone.Player('/samples/kick.wav').toDestination();
await Tone.loaded(); // Wait for all buffers to load
player.start();

// Sampler: Map files to pitches → use like an instrument
const sampler = new Tone.Sampler({
  urls: {
    A3: 'A3.mp3',
    C4: 'C4.mp3',
    E4: 'E4.mp3',
  },
  baseUrl: '/samples/piano/',
}).toDestination();
await Tone.loaded();
sampler.triggerAttackRelease('D4', '4n'); // Auto-repitched from the nearest mapped sample
```

Sampler automatically repitches unmapped notes from the closest available sample. You don't need to record every key.
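The idea behind that repitching can be sketched in a few lines. This is a rough illustration with helper names of my own invention — Sampler's internals differ, but the principle is the same: find the nearest mapped sample and shift its playback rate by the semitone difference.

```javascript
// Semitone index for sharp-based note names like "C4" or "A#3".
const SEMITONES = { C: 0, 'C#': 1, D: 2, 'D#': 3, E: 4, F: 5, 'F#': 6, G: 7, 'G#': 8, A: 9, 'A#': 10, B: 11 };

// Note name → MIDI number (C4 = 60).
function noteToMidi(note) {
  const pitch = note.slice(0, -1);
  const octave = Number(note.slice(-1));
  return 12 * (octave + 1) + SEMITONES[pitch];
}

// Pick the closest mapped sample and the playback rate that repitches it.
function nearestSample(target, mapped) {
  const t = noteToMidi(target);
  let best = mapped[0];
  for (const note of mapped) {
    if (Math.abs(noteToMidi(note) - t) < Math.abs(noteToMidi(best) - t)) best = note;
  }
  return { sample: best, playbackRate: Math.pow(2, (t - noteToMidi(best)) / 12) };
}

nearestSample('D4', ['A3', 'C4', 'E4']); // C4 played ~12% faster
```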
## Effects: Processing Sound
### Key Effects
```javascript
// Reverb (spatial depth)
const reverb = new Tone.Reverb({ decay: 2.5, wet: 0.6 });

// Delay (echo)
const delay = new Tone.FeedbackDelay({
  delayTime: '8n',
  feedback: 0.4,
  wet: 0.3,
});

// Distortion
const dist = new Tone.Distortion(0.8);

// Filter (frequency band removal)
const filter = new Tone.Filter({
  frequency: 1000,
  type: 'lowpass',
  rolloff: -24,
});

// Chorus (richer sound)
const chorus = new Tone.Chorus(4, 2.5, 0.5).start();
```

### Wet/Dry Control
Every effect has a wet property to control the ratio between the original and processed signal.
```javascript
const reverb = new Tone.Reverb(2);
reverb.wet.value = 0.5; // 50% original + 50% reverb

// Smooth transition
reverb.wet.rampTo(1, 3); // Ramp to 100% reverb over 3 seconds
```

## Transport: The Heart of Timing
Transport is the master timekeeper for your entire app. It manages BPM-based musical timing with a level of precision that JavaScript's setTimeout can't match.
setTimeout can drift by tens of milliseconds (far more in throttled background tabs), while Transport passes an exact time parameter to each callback, achieving sub-millisecond accuracy.
```javascript
Tone.Transport.bpm.value = 120;

// Schedule a callback at a specific point
Tone.Transport.schedule((time) => {
  synth.triggerAttackRelease('C4', '8n', time);
}, '0:0:0'); // Bar 0, beat 0

// Repeat at regular intervals
Tone.Transport.scheduleRepeat((time) => {
  synth.triggerAttackRelease('E4', '16n', time);
}, '4n'); // Every quarter note

Tone.Transport.start();
```

Here's an important gotcha. Transport callbacks are driven by the audio scheduling timeline, so mixing them directly with DOM updates can make the visual timing drift from the audio timing. If you need to update the UI, hand that work off with Tone.Draw.schedule() so it lines up with the browser's rendering timing.
```javascript
Tone.Transport.scheduleRepeat((time) => {
  synth.triggerAttackRelease('C4', '8n', time);
  // Use Draw for UI updates
  Tone.Draw.schedule(() => {
    highlightCurrentBeat();
  }, time);
}, '4n');
```

### Sequence and Part
Use Sequence or Part when building repeating patterns.
```javascript
// Sequence: Evenly spaced sequential events
const seq = new Tone.Sequence(
  (time, note) => {
    synth.triggerAttackRelease(note, '8n', time);
  },
  ['C4', 'E4', 'G4', null, 'B4', 'G4', 'E4', null], // null = rest
  '8n', // Eighth note intervals
).start(0);

// Part: Events with individual timing
const part = new Tone.Part(
  (time, event) => {
    synth.triggerAttackRelease(event.note, event.dur, time);
  },
  [
    { time: '0:0:0', note: 'C4', dur: '4n' },
    { time: '0:1:0', note: 'E4', dur: '4n' },
    { time: '0:2:0', note: 'G4', dur: '2n' },
  ],
).start(0);

Tone.Transport.start();
```

Sequence works well for evenly spaced patterns like drum beats, while Part is better suited for melodies where each note has its own timing.
### Loop
For simple repetition, Loop is the most concise option.
```javascript
const loop = new Tone.Loop((time) => {
  synth.triggerAttackRelease('C4', '8n', time);
}, '4n').start(0);
```

## Audio Visualization
Tone.js also provides analysis tools.
```javascript
// FFT: Frequency domain analysis
const fft = new Tone.FFT(256);
synth.connect(fft);
synth.toDestination();

function draw() {
  const values = fft.getValue(); // Float32Array (decibel values)
  renderFrequencyBars(values);
  requestAnimationFrame(draw);
}
draw();

// Waveform: Time domain waveform
const waveform = new Tone.Waveform(1024);
synth.connect(waveform);
synth.toDestination();

function drawWave() {
  const values = waveform.getValue(); // Float32Array (-1 to 1)
  renderWaveform(values);
  requestAnimationFrame(drawWave);
}
drawWave();

// Meter: Real-time volume
const meter = new Tone.Meter();
synth.connect(meter);
synth.toDestination();
// Read current dB value with meter.getValue()
```

Combined with Canvas or WebGL, you can build audio visualizers.
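For drawing, you typically need to normalize the FFT's decibel values into bar heights. A minimal sketch — the function name and the -100 dB floor are my assumptions, tune them to your visuals:

```javascript
// Map a decibel value (roughly -100 dB silence to 0 dB full scale)
// to a 0..1 bar height for drawing. Out-of-range values are clamped.
function dbToHeight(db, minDb = -100, maxDb = 0) {
  const t = (db - minDb) / (maxDb - minDb);
  return Math.min(1, Math.max(0, t));
}

dbToHeight(-50); // 0.5 — mid-height bar
```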
## Using with React/Next.js
### React Integration Pattern
Store Tone.js instances in useRef and initialize them in useEffect. This prevents re-creation on every re-render.
```tsx
import { useEffect, useRef, useState } from 'react';
import * as Tone from 'tone';

function SynthPad() {
  const synthRef = useRef<Tone.Synth | null>(null);
  const [isReady, setIsReady] = useState(false);

  useEffect(() => {
    synthRef.current = new Tone.Synth().toDestination();
    return () => {
      synthRef.current?.dispose();
    };
  }, []);

  const handleStart = async () => {
    await Tone.start();
    setIsReady(true);
  };

  const playNote = (note: string) => {
    if (!isReady) return;
    synthRef.current?.triggerAttackRelease(note, '8n');
  };

  return (
    <div>
      {!isReady && <button onClick={handleStart}>Start Audio</button>}
      {isReady && (
        <div>
          {['C4', 'D4', 'E4', 'F4', 'G4'].map((note) => (
            <button key={note} onPointerDown={() => playNote(note)}>
              {note}
            </button>
          ))}
        </div>
      )}
    </div>
  );
}
```

### Next.js SSR Considerations
There is no window or AudioContext on the server. Creating Tone.js objects at the top level of a component will throw during SSR, so always initialize them inside useEffect.
### Memory Management
Calling .dispose() disconnects all Web Audio nodes for that instance and makes them eligible for GC. This must be handled in React's useEffect cleanup.
Decoded AudioBuffers hold raw PCM, which is significantly larger than the compressed files they were decoded from. If you repeatedly create Player instances without disposing the previous ones, memory fills up fast.
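A back-of-the-envelope check makes the scale concrete. Assuming the Web Audio convention of 32-bit float samples per channel (engine overhead not included), a three-minute stereo track at 44.1 kHz decodes to roughly 63 MB:

```javascript
// Approximate memory footprint of a decoded AudioBuffer:
// seconds × sample rate × channels × 4 bytes per 32-bit float sample.
function audioBufferBytes(seconds, sampleRate = 44100, channels = 2) {
  return seconds * sampleRate * channels * 4;
}

audioBufferBytes(180); // 63,504,000 bytes ≈ 63 MB for 3 minutes of stereo
```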
## Gotchas
### 1) Mobile Browser Issues
On iOS Safari, events like incoming calls or headphone disconnection put the AudioContext into an "interrupted" state. This is an Apple-specific state not in the W3C spec, so you need to recover with Tone.context.resume().
### 2) Bundle Size
Tone.js is a full-featured music framework, making it larger than simple sound playback libraries like Howler.js (~7KB). If you only need notification sounds, it might be overkill. That said, it supports ESM, so tree shaking can strip unused code.
### 3) CPU-Heavy Nodes
Tone.Reverb (internally a ConvolverNode) and Tone.Panner3D are the most CPU-intensive. On mobile, it's best to limit the number of effects.
### 4) Latency Settings
The default lookAhead is 0.1 seconds. This value favors stability, but you might notice delay during real-time interactions. You can lower it if needed, but going too low risks audio glitches.
```javascript
Tone.setContext(new Tone.Context({ latencyHint: 'interactive' }));
```

## Takeaways
Tone.js made working with music in the browser realistically feasible. It absorbs most of the pain of raw Web Audio API — node management, timing calculations, and cross-browser compatibility.
Here are the key takeaways from my experience:
- Understanding the Sequence/Part/Loop patterns lets you express complex rhythms declaratively.
- When integrating with React, sticking to the useRef + useEffect + dispose pattern is essential to avoid memory issues.
If your project needs musical interaction rather than simple sound effects, Tone.js is the most practical choice.
Thanks for reading.