
envidera / zench

Public

Programmable benchmarks for Rust tests

16 stars

100% credibility

Found Mar 12, 2026 at 15 stars
AI Summary

Zench is a Rust library for embedding performance measurements into tests and code examples, delivering terminal reports with stats on time, stability, samples, and outliers.
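The "stats on time, stability, samples, and outliers" in such a report are standard descriptive statistics. A minimal, illustrative Rust sketch of how those numbers are typically computed (this is not zench's code; the function names are ours):

```rust
/// Median of a sample set (sorts a copy, handles even and odd lengths).
fn median(samples: &[f64]) -> f64 {
    let mut s = samples.to_vec();
    s.sort_by(|a, b| a.partial_cmp(b).unwrap());
    let n = s.len();
    if n % 2 == 1 { s[n / 2] } else { (s[n / 2 - 1] + s[n / 2]) / 2.0 }
}

/// Population standard deviation.
fn std_dev(samples: &[f64]) -> f64 {
    let n = samples.len() as f64;
    let mean = samples.iter().sum::<f64>() / n;
    let var = samples.iter().map(|x| (x - mean).powi(2)).sum::<f64>() / n;
    var.sqrt()
}

/// Coefficient of variation (std dev relative to the mean): a common
/// stability metric; lower means steadier timings.
fn cv(samples: &[f64]) -> f64 {
    let mean = samples.iter().sum::<f64>() / samples.len() as f64;
    std_dev(samples) / mean
}

fn main() {
    // Timing samples in nanoseconds, with one obvious outlier.
    let ns = [105.0, 98.0, 101.0, 99.0, 300.0];
    println!(
        "median {:.1} ns, std dev {:.1} ns, cv {:.2}, samples {}",
        median(&ns), std_dev(&ns), cv(&ns), ns.len()
    );
}
```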

How It Works

1
🔍 Discover Zench

You hear about a handy tool that makes checking your code's speed simple and fits right into your daily work.

2
📦 Add it easily

Add it to your project with a single cargo command; no extra harness or config files.

3
⏱️ Time your functions

In your tests, annotate the code you want to measure; no separate benchmark harness is required.

4
▶️ Run tests as usual

Run your usual test command, and terminal reports appear with timing and stability numbers.

5
📊 Spot improvements

Review stats on typical times, variance, and outliers to decide what to optimize.

6

🚀 Keep code speedy

Your code stays fast: slowdowns are caught early by warnings in the test output.
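The workflow above (tag code in a test, run the test, read the report) amounts to timing a region inside an ordinary test. A std-only sketch of that pattern, assuming nothing about zench's actual macro API; `time_ns` and `fib` are our own illustrative names:

```rust
use std::hint::black_box;
use std::time::Instant;

/// Example workload: naive recursive Fibonacci.
fn fib(n: u64) -> u64 {
    if n < 2 { n } else { fib(n - 1) + fib(n - 2) }
}

/// Time `f` over `iters` iterations and return mean nanoseconds per call.
/// A real tool like zench also collects per-sample stats; this is the bare idea.
fn time_ns<F: Fn()>(iters: u32, f: F) -> f64 {
    let start = Instant::now();
    for _ in 0..iters {
        f();
    }
    start.elapsed().as_nanos() as f64 / iters as f64
}

fn main() {
    // In practice this measurement would live inside a #[test] function;
    // it is called directly here so the sketch runs as-is.
    // black_box keeps the optimizer from deleting the measured work.
    let ns = time_ns(1_000, || {
        black_box(fib(black_box(15)));
    });
    println!("fib(15): {:.0} ns/iter", ns);
}
```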


Star Growth

This repo grew from 15 to 16 stars.
AI-Generated Review

What is zench?

Zench is a lightweight Rust benchmarking library designed for seamless workflow integration, letting you run benchmarks anywhere (unit tests, examples, or benches) without leaving your cargo test pipeline. Set ZENCH=warn (or ZENCH=panic) and cargo test --release prints terminal reports with median times, stability metrics such as CV and standard deviation, sample counts, and outlier detection. It handles durations from nanoseconds to seconds automatically, using only the standard library on stable Rust.
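The ZENCH=warn / ZENCH=panic switch described here is ordinary env-var dispatch. A hedged sketch of the idea; the real crate's parsing and mode names may differ:

```rust
use std::env;

/// Possible reporting modes; names mirror the values the page describes.
#[derive(Debug, PartialEq)]
enum Mode {
    Off,   // no assertion behavior
    Warn,  // print a warning when a check fails
    Panic, // fail the test when a check fails
}

/// Map an env-var value to a mode. Unrecognized or unset values fall
/// back to Off in this sketch.
fn mode_from(value: Option<&str>) -> Mode {
    match value {
        Some("warn") => Mode::Warn,
        Some("panic") => Mode::Panic,
        _ => Mode::Off,
    }
}

fn main() {
    let mode = mode_from(env::var("ZENCH").ok().as_deref());
    println!("reporting mode: {:?}", mode);
}
```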

Why is it gaining traction?

It stands out by embedding benchmarks in everyday tests, even around private functions, unlike Criterion's separate harness, which makes it well suited to catching perf regressions inline. Programmable reports let you filter and sort by median or outliers, split groups, and assert via the issue! macro, which warns or fails based on env vars, turning metrics into CI gates. No dependencies, a cargo-native workflow, and black_box wrappers keep it dead simple.
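The issue!-style gate described above turns a metric into a CI check: warn in one mode, fail the test in the other. A hypothetical std-only version of that idea; `check_budget` and its signature are our assumptions, not zench's API:

```rust
/// Compare a measured median against a time budget. In warn mode, return
/// a message for the report; in panic mode, fail outright, which would
/// fail the enclosing test in CI.
fn check_budget(name: &str, median_ns: f64, budget_ns: f64, panic_mode: bool) -> Option<String> {
    if median_ns <= budget_ns {
        return None; // within budget: nothing to report
    }
    let msg = format!("{name}: median {median_ns:.0} ns exceeds budget {budget_ns:.0} ns");
    if panic_mode {
        panic!("{msg}");
    }
    Some(msg)
}

fn main() {
    // Warn mode: over-budget result becomes a printed warning, not a failure.
    if let Some(warning) = check_budget("fib(15)", 950.0, 800.0, false) {
        eprintln!("warning: {warning}");
    }
}
```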

Who should use this?

Rust devs benchmarking algos (like fib or vec ops) in tests, not just benches; teams adding perf assertions to PRs/CI; anyone comparing impls (loop vs iterator) without profiler overhead. Skip if you need flame graphs—pair with cargo-flamegraph.
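Comparing two implementations, such as the loop-vs-iterator case mentioned above, reduces to timing each candidate over the same input. A hedged std-only sketch, not zench's API; in zench this comparison would presumably live inside a test:

```rust
use std::hint::black_box;
use std::time::Instant;

/// Candidate 1: explicit loop.
fn sum_loop(v: &[u64]) -> u64 {
    let mut s = 0u64;
    for &x in v {
        s += x;
    }
    s
}

/// Candidate 2: iterator adapter.
fn sum_iter(v: &[u64]) -> u64 {
    v.iter().sum()
}

/// Total nanoseconds for 1,000 calls of `f`; black_box keeps the
/// result observed so the work is not optimized away.
fn bench<F: Fn() -> u64>(f: F) -> u128 {
    let start = Instant::now();
    for _ in 0..1_000 {
        black_box(f());
    }
    start.elapsed().as_nanos()
}

fn main() {
    let v: Vec<u64> = (0..10_000).collect();
    println!("loop: {} ns, iter: {} ns", bench(|| sum_loop(&v)), bench(|| sum_iter(&v)));
}
```

A sanity check that both candidates compute the same answer belongs alongside any such comparison, since a faster wrong implementation is no win.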

Verdict

Early alpha (16 stars) with WIP docs, but solid examples and Linux-tested stability make it worth a dev-dep trial for Rust perf workflows. Production? Wait for 1.0; for now, prototype your perf checks here.

