eden-dev-inc

High-performance, cache-friendly telemetry for Rust.

Found Apr 08, 2026 at 11 stars.
AI Summary

High-performance Rust library for tracking app metrics and traces without slowing down multi-threaded services.

How It Works

1
🔍 Discover fast performance tracking

You hear about a Rust tool that lets you measure your app's speed and activity without slowing it down.

2
📦 Add it to your project

You include it in your Rust app by adding a single dependency line to your Cargo.toml.
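Assuming the crate is published under the name used in the review below (an assumption; the actual crates.io name and version may differ), the dependency line would look like:

```toml
# Hypothetical entry: crate name and version are assumptions,
# not taken from the repo's actual Cargo.toml.
[dependencies]
fast-telemetry = "0.1"
```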

3
✨ Define what to watch

You create simple trackers for things like request counts or wait times in just a few lines.

4
📊 Start measuring

In your code, you record each event as it happens, such as a request arriving or a duration elapsing, with a single call.
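Steps 3 and 4 can be sketched with plain standard-library atomics. The names below (REQUESTS, handle_request) are illustrative stand-ins, not fast-telemetry's actual API:

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::time::Instant;

// Minimal stand-ins for the "trackers" described above: a request
// counter and a cumulative wait-time gauge, as plain std atomics.
static REQUESTS: AtomicU64 = AtomicU64::new(0);
static WAIT_NANOS_TOTAL: AtomicU64 = AtomicU64::new(0);

fn handle_request() {
    let start = Instant::now();
    // ... do the actual work here ...
    REQUESTS.fetch_add(1, Ordering::Relaxed);
    WAIT_NANOS_TOTAL.fetch_add(start.elapsed().as_nanos() as u64, Ordering::Relaxed);
}

fn main() {
    for _ in 0..3 {
        handle_request();
    }
    println!("requests={}", REQUESTS.load(Ordering::Relaxed));
}
```

A real telemetry library wraps this pattern behind typed metric handles; the point here is only that recording is a single cheap atomic operation on the hot path.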

5
🔗 Share with your dashboard

You connect it to your monitoring screen so data flows automatically.
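One common way dashboards consume this data is the Prometheus text exposition format, which a scrape endpoint serves as plain text. A minimal sketch of that wire format (the metric name is made up, and this is not the crate's exporter code):

```rust
// Render a single counter in the Prometheus text exposition format:
// a # HELP line, a # TYPE line, then the sample itself.
fn render_counter(name: &str, help: &str, value: u64) -> String {
    format!("# HELP {name} {help}\n# TYPE {name} counter\n{name} {value}\n")
}

fn main() {
    let body = render_counter("http_requests_total", "Total HTTP requests served.", 42);
    print!("{body}");
}
```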

🚀 Insights without slowdown

Your app keeps its full speed while you get a clear view of everything happening inside.

AI-Generated Review

What is fast-telemetry?

fast-telemetry delivers high-performance, cache-friendly metrics and lightweight spans for Rust applications, letting you track counters, gauges, histograms, distributions, and traces without slowing down hot paths. It exports directly to Prometheus text, DogStatsD UDP, or OTLP protobuf over HTTP, with derive macros for easy metric structs and background loops for production shipping. Born from a Redis proxy handling millions of ops/sec, it shards writes to eliminate contention in multi-core setups.
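The sharded-write idea mentioned above can be illustrated with a generic, std-only sketch: one cache-line-aligned atomic slot per shard, so concurrent writers on different cores avoid false sharing. This shows the general technique; fast-telemetry's actual internals may differ:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

const SHARDS: usize = 16;

// Align each slot to a cache line (64 bytes on most x86/ARM cores)
// so two shards never share a line.
#[repr(align(64))]
struct Padded(AtomicU64);

struct ShardedCounter {
    shards: [Padded; SHARDS],
}

impl ShardedCounter {
    fn new() -> Self {
        Self { shards: std::array::from_fn(|_| Padded(AtomicU64::new(0))) }
    }

    // Writers pick a shard by a caller-supplied hint (e.g. a thread
    // index), so increments land on distinct cache lines.
    fn incr(&self, hint: usize) {
        self.shards[hint % SHARDS].0.fetch_add(1, Ordering::Relaxed);
    }

    // Reads sum all shards; slower, but telemetry reads are rare
    // compared to hot-path writes.
    fn get(&self) -> u64 {
        self.shards.iter().map(|p| p.0.load(Ordering::Relaxed)).sum()
    }
}
```

The trade-off is classic: writes become contention-free at the cost of a scan over all shards on read, which suits metrics where exports happen at most a few times per second.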

Why is it gaining traction?

In benchmarks, it crushes OpenTelemetry SDK throughput—often 10-100x faster on contended counters under 16+ threads—while keeping export costs low via batched, compressed OTLP. Devs love the zero-config derive for labeled metrics (compile-time enums or runtime dynamics with eviction), plus span APIs that mimic W3C traceparent without SDK overhead. It's a drop-in for perf hogs needing raw speed over full ecosystem plumbing.
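The W3C traceparent header mentioned here has a fixed format defined by the Trace Context spec: a version byte, a 128-bit trace id, a 64-bit parent span id, and a flags byte, all lowercase hex. A minimal formatter (this is the spec's wire format, not the crate's span API):

```rust
// Format a W3C `traceparent` header (version 00) from raw ids.
// Layout: version-traceid-parentid-flags, all lowercase hex.
fn traceparent(trace_id: u128, span_id: u64, sampled: bool) -> String {
    format!("00-{:032x}-{:016x}-{:02x}", trace_id, span_id, if sampled { 1u8 } else { 0 })
}

fn main() {
    // Ids taken from the example in the W3C Trace Context spec.
    println!("{}", traceparent(0x4bf92f3577b34da6a3ce929d0e0e4736, 0x00f067aa0ba902b7, true));
}
```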

Who should use this?

Rust backend engineers scaling high-performance services like proxies, game servers (think Roblox telemetry fast flags), or F1 data pipelines where metrics bottleneck at millions/sec. Ideal for teams profiling OpenTelemetry as a hotspot and wanting Prometheus/DogStatsD/OTLP without latency spikes. Skip if you're under 10k events/sec or need auto-propagation.

Verdict

Grab it for contention-free telemetry in perf-critical Rust backends—benchmarks prove it works—but with 11 stars and 1.0% credibility, treat as experimental. Solid docs and harnesses help, but production needs more battle-testing before prime time.
