kiosvantra

🧬 Metronous — The measured mind. Local AI agent telemetry, benchmarking, and model calibration for OpenCode agents.

100% credibility
Found Apr 06, 2026 at 17 stars.
Language: Go

AI Summary

Metronous tracks AI agent usage in OpenCode, benchmarks performance and costs weekly, recommends model optimizations, and displays everything in a terminal dashboard.

How It Works

1. 🚀 Discover Metronous

You hear about a free tool that tracks your AI coding helpers' performance and suggests cheaper, smarter models.

2. 📥 Install with one command

Run a simple command to download and set it up – it automatically connects to your AI coding app without hassle.

3. ✅ See it connected

Restart your AI app and notice the 'Connected' message – now every session is tracked quietly in the background.

4. 🤖 Use your AI helpers normally

Keep building and coding with your agents – costs, speed, and accuracy are captured automatically.

5. 📊 Open the dashboard

Type one command to launch a beautiful screen showing live sessions, benchmarks, charts, and tips.

6. 💡 Spot improvements

Review weekly reports to see which helpers need better models for speed or savings.

🎉 Smarter, cheaper AI

Switch models based on real data – enjoy faster responses and lower bills while your projects thrive.
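The "switch models based on real data" decision can be sketched as a simple rule: if a cheaper model's benchmark accuracy stays within a tolerance of the current model's, recommend switching. This is a minimal illustration, not Metronous's actual logic; the `ModelStats` type, its fields, and the tolerance value are all assumptions.

```go
package main

import "fmt"

// ModelStats is a hypothetical summary of one model's weekly benchmark results.
type ModelStats struct {
	Name        string
	Accuracy    float64 // fraction of benchmark tasks passed
	CostPerTask float64 // average USD per task
}

// recommend suggests switching when the candidate is cheaper and its
// accuracy is within tol of the current model's accuracy.
func recommend(current, candidate ModelStats, tol float64) (string, bool) {
	if candidate.CostPerTask < current.CostPerTask &&
		current.Accuracy-candidate.Accuracy <= tol {
		saving := 100 * (1 - candidate.CostPerTask/current.CostPerTask)
		return fmt.Sprintf("switch to %s for %.0f%% savings", candidate.Name, saving), true
	}
	return "", false
}

func main() {
	sonnet := ModelStats{Name: "sonnet", Accuracy: 0.92, CostPerTask: 0.10}
	haiku := ModelStats{Name: "haiku", Accuracy: 0.89, CostPerTask: 0.07}
	if msg, ok := recommend(sonnet, haiku, 0.05); ok {
		fmt.Println(msg) // e.g. "switch to haiku for 30% savings"
	}
}
```

A real calibration would weigh more signals (latency, tool-call success rate), but the shape of the decision is the same: a threshold comparison over aggregated telemetry.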


AI-Generated Review

What is Metronous?

Metronous is a Go tool for local telemetry, benchmarking, and model calibration of OpenCode AI agents—like a metronome keeping your measured mind in sync. It captures real-time session data (tool calls, tokens, costs) via a plugin or MCP shim, stores it in SQLite, and runs weekly benchmarks to evaluate agent accuracy, ROI, and latency against editable thresholds. Developers get a TUI dashboard with live tracking, charts, and model switch recommendations, all without cloud services.

Why is it gaining traction?

One-command install sets up a background daemon (systemd/launchd), auto-patches OpenCode config, and discovers agents dynamically—no manual wiring. The 5-tab terminal UI delivers instant insights like session timelines and cost breakdowns, plus F5-triggered benchmarks, making agent optimization feel effortless. For OpenCode users, it turns vague performance hunches into data-driven actions like "switch to Haiku for 30% savings."

Who should use this?

OpenCode power users managing custom agents (build, plan, explore, or ecosystem ones like sdd-orchestrator/apply) who want to benchmark accuracy and trim LLM bills. Ideal for solo devs or small teams running local AI workflows, especially those tired of untracked costs in long sessions or iterating models blindly.

Verdict

Try it if you're deep into OpenCode: the solid TUI and automation punch above its 17 stars. Early-stage with great docs and tests, but wait for Windows polish if you're not on Linux/macOS.


