jhartquist

Rust implementation of the Resonate algorithm for low-latency spectral analysis, with Python and WebAssembly bindings

Found Apr 23, 2026 at 29 stars
Language: Rust

AI Summary

A toolkit for real-time audio frequency analysis that produces spectrogram-like views of sound with very low latency, usable in web browsers, scripts, or native apps.

How It Works

1
🔍 Discover the sound visualizer

You find a cool tool online that turns audio into real-time pictures of pitches and frequencies, perfect for music or voice apps.

2
🎙️ Try the live demo

Plug in your microphone and speak or play music – watch colorful patterns light up instantly showing every note as it happens.

3
📥 Bring it to your project

Pick the flavor that fits your project—the npm package for web pages, the PyPI package for Python scripts, or the Rust crate—and add it with one install command.

4
🎵 Choose your sound detectors

Tell it which pitches or notes to watch for, like a piano keyboard from low to high.

5
▶️ Feed in your sounds

Play a song, record your voice, or stream live audio right into the tool.

6
🌈 See the frequency magic

Get instant pictures of sound waves breaking down into pitches over time, super smooth and fast without delays.
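As a rough sketch of step 4, here is one way to build a piano-style grid of target frequencies to hand to a resonator bank. This is illustrative Python, not the library's actual API; the 88-key, A4 = 440 Hz equal-temperament layout is an assumption matching the "piano keyboard from low to high" description.

```python
import numpy as np

# Hypothetical step 4: a piano-style frequency grid for the resonator bank.
# 88 keys in 12-tone equal temperament, tuned to A4 = 440 Hz.
midi_notes = np.arange(21, 109)                    # MIDI 21 (A0) .. 108 (C8)
freqs_hz = 440.0 * 2.0 ** ((midi_notes - 69) / 12.0)

print(round(freqs_hz[0], 1), round(freqs_hz[48], 1), round(freqs_hz[-1], 1))
# → 27.5 440.0 4186.0
```

A log-spaced grid like this is a natural fit for music, since resonator banks (unlike a plain FFT) are free to place bins at arbitrary, non-uniform frequencies.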

Create amazing audio experiences

Now build live music visualizers, smart voice analyzers, or train models that understand sound.


AI-Generated Review

What is resonators?

Resonators delivers spectral analysis in the vein of the STFT or CQT, but with per-sample updates and no windowing or buffering. You feed audio samples into a resonator bank tuned to custom frequencies and read out magnitudes, phases, or complex values on demand. A Rust core with Python (PyPI) and WebAssembly (npm) bindings makes it a drop-in for real-time apps, and it is portable across platforms with no Apple-specific dependencies.
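The per-sample update idea can be illustrated with a minimal bank of complex one-pole resonators. This is a from-scratch sketch of the general technique; the function name, parameters, and the single shared time constant `tau` are my assumptions, not the crate's API.

```python
import numpy as np

def resonator_bank(samples, freqs_hz, sample_rate, tau=0.05):
    """Sketch of per-sample spectral analysis (not the crate's API).

    Each bin is a complex one-pole filter: every sample, the state is
    rotated by its bin frequency, decayed by an exponential window of
    time constant `tau` seconds, and nudged toward the new input.
    Returns a (num_samples, num_bins) array of complex bin states.
    """
    rot = np.exp(2j * np.pi * np.asarray(freqs_hz, dtype=float) / sample_rate)
    alpha = np.exp(-1.0 / (tau * sample_rate))      # per-sample leak factor
    state = np.zeros(len(rot), dtype=complex)
    out = np.empty((len(samples), len(rot)), dtype=complex)
    for n, x in enumerate(samples):
        state = alpha * rot * state + (1.0 - alpha) * x
        out[n] = state                               # readable every sample
    return out

# Demo: one second of a 440 Hz sine should light up the 440 Hz bin.
sr = 8000
t = np.arange(sr) / sr
sine = np.sin(2 * np.pi * 440.0 * t)
mags = np.abs(resonator_bank(sine, [220.0, 440.0, 880.0], sr)[-1])
```

Because the state is updated (and readable) on every sample, latency is one sample rather than one window, which is the tradeoff the summary describes.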

Why is it gaining traction?

It outperforms the reference C++ implementation by 1.6x in benchmarks on an M2, with fixed memory use and per-bin time constants for tailored time-frequency tradeoffs. Browser demos stream microphone input to WebGL spectrograms via an AudioWorklet, and a cached GitHub Actions CI pipeline keeps Rust builds fast and reproducible. Unlike general-purpose FFT libraries, it is built for latency-first scenarios where frequency resolution can flex.
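One way per-bin time constants enable that tradeoff is a constant-Q-style scheme: each bin integrates over a fixed number of cycles of its own frequency. This is an assumed illustration, not necessarily the crate's default; the function and the `cycles` knob are hypothetical.

```python
import numpy as np

def per_bin_leak(freqs_hz, sample_rate, cycles=10.0):
    """Hypothetical per-bin leak factors (not the crate's API).

    Each resonator averages over `cycles` periods of its own frequency,
    so low bins get long windows (fine frequency resolution) and high
    bins get short windows (fast response).
    """
    freqs_hz = np.asarray(freqs_hz, dtype=float)
    tau_s = cycles / freqs_hz                # per-bin window length, seconds
    return np.exp(-1.0 / (tau_s * sample_rate))

leak = per_bin_leak([55.0, 440.0, 3520.0], 48_000)
# Low bins leak more slowly (closer to 1) than high bins.
```

A plain FFT forces one window length on every bin; letting each bin carry its own time constant is what makes the per-bin tradeoff possible.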

Who should use this?

Audio DSP engineers building live effects or VST plugins that need sub-frame latency. Web developers piping microphone data to shaders in browser games or visualizers. Python ML researchers training on non-uniform frequency grids. (To be clear, this is the audio-analysis resonators—not the Resonator archetype in Yu-Gi-Oh! Master Duel, the Wuthering Waves characters, or Path of Exile resonators.)

Verdict

Grab it for real-time spectral needs where latency matters more than precision; the Rust workflow and live demos shine. But a 1.0% credibility score and 29 stars mean it's alpha—a v0.1, albeit with strong docs, tests, and demos. Prototype against the live spectrogram before committing it to production.


