evo-hq / evo

A plugin for Claude Code and Codex that turns your codebase into an autoresearch loop — discovers what to measure, instruments the benchmark, then runs tree search with parallel subagents.

238 stars · 21 forks · 100% credibility
Found Apr 14, 2026 at 238 stars.
AI Summary (Python)

Evo is a plugin for AI coding assistants that automatically discovers benchmarks, runs optimization experiments with parallel agents, and tracks improvements via a dashboard.

How It Works

1
🔍 Find Evo

You discover Evo in your AI coding assistant's plugin marketplace while looking for ways to improve your code.

2
Add It Easily

One command installs Evo into your coding workspace, making it available in any project.

3
🎯 Set Your Goal

You pick the code file to improve and tell Evo how to measure success, such as speed or accuracy, via a benchmark command.
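Concretely, the "goal" here boils down to a target file plus a scoring command. A minimal sketch, assuming a dict-style spec (the field names are illustrative assumptions, not Evo's actual configuration schema — check the repo's docs for the real format):

```python
# Hypothetical goal spec; field names are illustrative, not Evo's API.
goal = {
    "target": "src/model.py",        # the file Evo is allowed to modify
    "benchmark": "python bench.py",  # CLI command that prints a score
    "metric": "accuracy",            # what success means
    "direction": "maximize",         # higher is better
}
```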

4
🧪 Run First Check

Evo explores your code, instruments the benchmark, and runs it to record a baseline score everyone can measure against.
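The baseline step amounts to running the benchmark a few times and recording a reference score. A minimal sketch, assuming the benchmark is callable and returns a numeric score (names here are illustrative, not Evo's actual internals):

```python
import statistics
from typing import Callable

def measure_baseline(benchmark: Callable[[], float], runs: int = 3) -> float:
    """Run the benchmark several times and take the median,
    so a single noisy run doesn't skew the reference point."""
    return statistics.median(benchmark() for _ in range(runs))

# Toy benchmark standing in for a real eval suite.
def toy_benchmark() -> float:
    return 0.72  # e.g. accuracy on a held-out set

baseline = measure_baseline(toy_benchmark)
print(f"baseline score: {baseline}")  # baseline score: 0.72
```

Using the median rather than a single run is one common way to make a noisy benchmark comparable across experiments.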

5
🚀 Start Improvements

On your go-ahead, Evo launches parallel subagents that try out changes and keep only those that beat the current best.
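The "keep only the winners" step can be sketched as scoring candidates in parallel and applying a regression gate. In Evo each candidate runs in its own git worktree; in this simplified stand-in, plain functions play the role of patched code:

```python
from concurrent.futures import ThreadPoolExecutor

def run_experiments(candidates, baseline):
    """Score every candidate in parallel; keep the best one only if it
    beats the baseline (the regression gate)."""
    with ThreadPoolExecutor() as pool:
        scores = list(pool.map(lambda run: run(), candidates))
    best = max(scores)
    if best > baseline:
        return candidates[scores.index(best)], best
    return None, baseline  # nothing beat the baseline; keep it as-is

# Toy candidates standing in for patched variants of the target file.
candidates = [lambda: 0.70, lambda: 0.78, lambda: 0.65]
winner, score = run_experiments(candidates, baseline=0.72)
print(score)  # 0.78
```

The gate is what makes the loop monotone: a candidate that merely ties or regresses is discarded rather than committed.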

6
📊 Watch Progress

A local dashboard shows live experiment status, climbing scores, and the improvements that have been committed.

Better Code Wins

Your code ends up with proven improvements safely committed, plus a full record of what worked.

AI-Generated Review

What is evo?

Evo is a Python plugin for Claude Code and Codex that automates code optimization by turning your repo into an autoresearch loop. Point it at a target file and benchmark command—it auto-discovers metrics, instruments your eval suite, then spawns parallel subagents in git worktrees to explore improvements via tree search. You get a local dashboard for real-time monitoring of scores, traces, and experiment trees.

Why is it gaining traction?

Unlike pure hill-climbing loops such as Karpathy's autoresearch, evo branches multiple paths with shared failure traces, regression gating, and configurable subagents and budgets. Developers like the hands-off flow: run `/evo:discover` to establish a baseline, then `/evo:optimize` for autonomous iterations that commit winners. For Claude Code plugin users, it's a structured evolution tool that beats manual tuning.
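The branching behavior described above can be sketched as a best-first search over a frontier of scored experiment nodes. This is a simplified stand-in for Evo's actual tree search, not its code: instead of mutating only the current best (hill climbing), the loop keeps every scored branch and always expands the most promising one, so a temporarily weaker branch can be revisited later.

```python
import heapq

def tree_search(root_score, expand, budget):
    """Best-first search: repeatedly expand the highest-scoring node.
    `expand(score)` returns child scores (a stand-in for running
    subagent experiments off that node)."""
    frontier = [(-root_score, 0)]  # max-heap via negated scores; id breaks ties
    best = root_score
    next_id = 1
    for _ in range(budget):
        if not frontier:
            break
        neg_score, _node = heapq.heappop(frontier)
        for child_score in expand(-neg_score):
            best = max(best, child_score)
            heapq.heappush(frontier, (-child_score, next_id))
            next_id += 1
    return best
```

A pure hill climber would drop the lower-scoring child at each step; keeping the whole frontier is what lets the search back out of a dead end.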

Who should use this?

ML engineers with evals like accuracy or latency benchmarks, backend devs optimizing APIs under load, or game devs tuning simulations. It's a good fit if you have a CLI-runnable benchmark and want AI-driven optimization without babysitting agents; each experiment runs isolated in its own git worktree.

Verdict

Try evo if autonomous optimization fits your workflow; it's a clever, structured take on evolutionary code experiments, but still early at 238 stars. Solid docs and a working dashboard make it usable today, but wait for distributed evals in future releases before betting on it in production.
