npm install audio-to-haptics

Analyze an audio or video file once; the library pre-computes when and how hard to vibrate. After that, just play the media and haptics fire automatically in sync, driven by a requestAnimationFrame loop.
import { useRef } from 'react'
import { useHaptics } from 'audio-to-haptics/react'

export default function App() {
  const ref = useRef<HTMLAudioElement>(null)
  const { analyze, ready } = useHaptics(ref)
  return (
    <div>
      <audio ref={ref} controls />
      <button onClick={() => analyze('YOUR_AUDIO_URL')}>
        {ready ? 'Re-analyze' : 'Load audio'}
      </button>
    </div>
  )
}

Both paths use the same hook: analyze() takes a URL; analyzeBuffer() takes raw bytes from a file input or drag-and-drop.
const { analyze, analyzeBuffer, ready } = useHaptics(ref)

// From URL
analyze('YOUR_AUDIO_URL')

// From file input: same pipeline, local files
const onFile = async (e: React.ChangeEvent<HTMLInputElement>) => {
  const file = e.target.files?.[0]
  if (file) analyzeBuffer(await file.arrayBuffer())
}

// In your JSX:
// <input type="file" accept="audio/*,video/*" onChange={onFile} />

Pass a videoRef instead of audioRef; the hook and engine attach to whichever media element you give them.
// Swap the ref type — everything else is identical
const ref = useRef<HTMLVideoElement>(null)
const { analyze, ready } = useHaptics(ref)
analyze('YOUR_VIDEO_URL')
// In your JSX:
// <video ref={ref} controls />

Suppress haptics without stopping playback or losing the analysis. One flag, instant toggle.
const { muted, toggleMuted } = useHaptics(ref)
// In your JSX:
// <button onClick={toggleMuted}>
// {muted ? 'Unmute haptics' : 'Mute haptics'}
// </button>

The library exposes per-frame amplitude and chain type so you can sync any visual (SVG, canvas, CSS) to the same data driving the haptics. The visualiser on the landing page is built with exactly this.
const {
analyze,
playbackBucketIntensity, // 0–1, varies frame-by-frame as audio decays
playbackChainIsShortBurst, // true = transient (kick, gunshot, heartbeat)
// false = sustained (bass, long note) or silence
} = useHaptics(ref)
// Both update every animation frame — no extra loop needed
// Plug into SVG, canvas, CSS — whatever you're building
// In your JSX:
// <circle r={20 + playbackBucketIntensity * 40} />

Tune how the algorithm detects haptic events. Try them live in the playground →
// Pass options as a second argument — applied at construction
const { analyze, ready } = useHaptics(ref, {
spikeRatio: 2.0, // higher = fewer, more dramatic haptics
intensityFloor: 0.65, // minimum vibration strength (0–1)
shortChainBuckets: 8, // chains shorter than this fire as a solid pulse
})

Call analyze() once before playback. The library fetches and decodes the audio, splits it into ~60ms buckets, and pre-computes exactly when and how hard to vibrate. Nothing is recalculated during playback; it's all ready ahead of time.
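For intuition, here is a minimal sketch of that pre-computation using the standard Web Audio API. This is illustrative, not the library's actual code; computeBuckets and its exact shape are assumptions.

// Illustrative sketch: decode audio, then reduce each ~60ms window of
// samples to a single RMS amplitude value.
async function computeBuckets(url: string, bucketSize = 2646): Promise<number[]> {
  const ctx = new AudioContext()
  const bytes = await (await fetch(url)).arrayBuffer()
  const decoded = await ctx.decodeAudioData(bytes)
  const samples = decoded.getChannelData(0) // one channel is enough for amplitude
  const buckets: number[] = []
  for (let i = 0; i < samples.length; i += bucketSize) {
    const end = Math.min(i + bucketSize, samples.length)
    let sum = 0
    for (let j = i; j < end; j++) sum += samples[j] * samples[j]
    buckets.push(Math.sqrt(sum / (end - i))) // RMS of this window
  }
  return buckets
}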
Each bucket is compared to the average of the buckets before it. If the current amplitude is significantly louder than the recent past, it triggers a vibration. This is edge detection — a kick drum stands out locally even inside a wall of loud music.
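Sketched in code, using the option names documented below (spikeRatio, neighborRadius); the library's real implementation may differ in detail:

// Illustrative edge detection over the bucket array: a bucket spikes when it
// is spikeRatio times louder than the average of the last neighborRadius buckets.
function isSpike(
  buckets: number[],
  i: number,
  spikeRatio = 1.5,
  neighborRadius = 4,
): boolean {
  const past = buckets.slice(Math.max(0, i - neighborRadius), i)
  if (past.length === 0) return false // nothing to compare against yet
  const baseline = past.reduce((a, b) => a + b, 0) / past.length
  return buckets[i] > baseline * spikeRatio
}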
Short chains (a kick, a gunshot, a heartbeat) fire as a single solid pulse — just impact. Chains longer than shortChainBuckets use PWM: rapid on/off cycles where the duty cycle represents intensity. The motor's inertia smooths the pulses into a perceived partial amplitude.
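A hedged sketch of the duty-cycle idea as a navigator.vibrate() pattern; pwmPattern is illustrative, not the library's internal scheduler:

// Build a vibrate pattern of alternating on/off durations whose duty cycle
// encodes intensity. intensityFloor keeps the motor from stalling when quiet.
function pwmPattern(
  durationMs: number,
  intensity: number, // 0–1 target strength
  cycleMs = 20,
  intensityFloor = 0.5,
): number[] {
  const duty = Math.max(intensity, intensityFloor)
  const on = Math.round(cycleMs * duty)
  const off = cycleMs - on
  const pattern: number[] = []
  for (let t = 0; t < durationMs; t += cycleMs) pattern.push(on, off)
  return pattern
}

// navigator.vibrate(pwmPattern(300, 0.7)) // ~70% perceived strength for 300ms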
All knobs are optional — pass any subset as the second argument to useHaptics() or the HapticEngine constructor. Defaults work well for most audio.
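For non-React usage, something like the following is the assumed shape. The import path, constructor signature, and engine.analyze() method are inferences from this page, so verify them against the package's actual exports:

import { HapticEngine } from 'audio-to-haptics' // assumed root export

const media = document.querySelector('audio')! // any media element should work
const engine = new HapticEngine(media, { spikeRatio: 2.0 }) // options at construction
await engine.analyze('YOUR_AUDIO_URL') // assumed to mirror the hook's analyze()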
Spike detection
spikeRatio (default: 1.5): How much louder the current moment needs to be compared to the recent past to trigger a vibration. Higher = fewer, punchier haptics; lower = more sensitive, fires on quieter sounds. See diagram ↑
sustainLowerBound (default: 0.75): How far a vibrating chain can decay before it stops. 0.75 means the current bucket can be as quiet as 75% of the previous one and still keep vibrating; this catches natural decay tails like a reverb or a long note fading out.
sustainUpperBound (default: 1.01): How much a vibrating chain can rise before it stops sustaining. Prevents a growing section of audio from being falsely held on; only true decay tails qualify, not crescendos.
neighborRadius (default: 4): How many past buckets (~240ms at the default bucketSize) to average when computing the spike baseline. Larger = the algorithm looks further back for its reference point.
vibrateThresholdRatio (default: 0.4): Noise floor as a fraction of peak amplitude. Buckets quieter than this fraction of the loudest moment in the audio never vibrate, regardless of spike ratio.
vibrateThresholdMin (default: 0.040): Absolute minimum amplitude below which haptics never fire. Catches very quiet audio where the relative noise floor alone wouldn't be enough. The two thresholds combine into one effective floor, as sketched below.
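A minimal sketch of that combined check, assuming the two limits are simply max-ed together; illustrative, not the library's internal code:

// A bucket may vibrate only if it clears both the relative and absolute limits.
function clearsNoiseFloor(
  amplitude: number,
  peakAmplitude: number, // loudest bucket in the file
  vibrateThresholdRatio = 0.4,
  vibrateThresholdMin = 0.040,
): boolean {
  return amplitude >= Math.max(peakAmplitude * vibrateThresholdRatio, vibrateThresholdMin)
}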
Intensity & timing
shortChainBuckets (default: 4): Chains shorter than this number of buckets fire as a single solid max-intensity pulse. Longer chains use PWM. Controls the split between punchy transients (kick, gunshot, heartbeat) and sustained sounds. See diagram ↑
intensityFloor (default: 0.5): Minimum PWM duty cycle for sustained chains. Prevents the motor from stalling on quiet audio; at 50% the motor stays spinning even at low amplitude.
cycleMs (default: 20): PWM cycle length in milliseconds. The motor's inertia smooths rapid on/off cycles at this duration into a perceived partial amplitude. Don't go below ~20ms; cycles shorter than the motor's response time lose the effect.
bucketSize (default: 2646): Audio samples per analysis bucket (~60ms at 44.1kHz). Determines the timing resolution of haptic events. Smaller = more precise, but the motor needs ~40ms to spin up, so there's a practical lower limit. The sample-to-millisecond conversion is sketched below.
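The bucket-to-milliseconds conversion is just sample count divided by sample rate:

// Duration of one bucket in ms. Default: 2646 / 44100 = 0.06s, i.e. 60ms.
const bucketMs = (bucketSize: number, sampleRate = 44100) =>
  (bucketSize / sampleRate) * 1000

bucketMs(2646) // 60, the default timing resolution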
Haptics require the Web Vibration API, which is supported on Android in Chrome, Samsung Internet, and Opera Mobile, but not on iOS (Safari or Chrome) or desktop. On unsupported platforms the library loads and analyzes normally; vibration calls are silently skipped.
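If you want to hide haptics-related UI on platforms that can't vibrate, a standard capability check is enough; the library itself needs no guard:

// Standard feature detection for the Web Vibration API.
const supportsHaptics = typeof navigator !== 'undefined' && 'vibrate' in navigator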
How it's built, the algorithm, and the Android mute window — read the full writeup on Dev.to →