kayba-ai

🪞 Make your agents recursively self-improve

100% credibility
Found by GitGems on Mar 28, 2026 at 34 stars.
AI Analysis
Python
AI Summary

recursive-improve captures interactions from AI agents, analyzes failure patterns in traces, and applies targeted code and prompt fixes to enable recursive self-improvement.
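
The loop the summary describes (capture, analyze, fix, re-test) can be sketched in a few lines. Everything here is a toy illustration: `ToyAgent` and the helper functions are hypothetical stand-ins, not the library's API.

```python
class ToyAgent:
    """Hypothetical agent that only succeeds once it has learned to retry."""
    def __init__(self, retries=0):
        self.retries = retries
    def run(self, task):
        return {"task": task, "ok": self.retries >= 1}

def analyze(traces):
    # Spot a failure pattern across captured runs.
    return ["needs_retry"] if any(not t["ok"] for t in traces) else []

def apply_fixes(agent, patterns):
    # A targeted "fix": give the agent one more retry.
    return ToyAgent(retries=agent.retries + 1) if "needs_retry" in patterns else agent

def evaluate(agent, tasks):
    return sum(agent.run(t)["ok"] for t in tasks) / len(tasks)

def improvement_cycle(agent, tasks):
    traces = [agent.run(t) for t in tasks]   # 1. capture interactions
    patterns = analyze(traces)               # 2. find failure patterns
    agent = apply_fixes(agent, patterns)     # 3. apply targeted fixes
    return agent, evaluate(agent, tasks)     # 4. re-test and measure

agent, score = improvement_cycle(ToyAgent(), ["book flight", "write test"])
print(score)  # 1.0
```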

How It Works

1
🔍 Discover the magic

You hear about a simple way to make your AI helper get smarter by learning from its own mistakes.

2
📦 Add to your project

You place the tool in your AI helper's folder and set it up with a quick start.

3
👀 Watch it work

Run your AI helper on tasks, and it quietly notes every conversation and decision it makes.
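
The quiet note-taking this step describes, recording every call the agent makes, boils down to wrapping the LLM call. A minimal sketch with a decorator; `traced` and `fake_llm_call` are hypothetical, and the real tool instead patches the OpenAI, Anthropic, and LiteLLM clients for you.

```python
import functools
from datetime import datetime, timezone

TRACE_LOG = []  # the real tool persists traces; a list will do here

def traced(fn):
    """Record every request/response pair passing through an LLM call."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        entry = {"ts": datetime.now(timezone.utc).isoformat(),
                 "prompt": kwargs.get("prompt")}
        result = fn(*args, **kwargs)
        entry["response"] = result
        TRACE_LOG.append(entry)
        return result
    return wrapper

@traced
def fake_llm_call(prompt=""):
    # stand-in for a real client call (openai / anthropic / litellm)
    return f"echo: {prompt}"

fake_llm_call(prompt="book a flight to Berlin")
fake_llm_call(prompt="list my open tickets")
print(len(TRACE_LOG))  # 2
```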

4
🛠️ Ask for fixes

Tell your coding AI to review the notes, spot patterns in failures, and suggest smart changes.
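
Spotting failure patterns such as loops and give-ups can start as simple trace bookkeeping. A sketch under assumed trace shapes; `find_failure_patterns` and the step format are illustrative, not the tool's actual schema.

```python
from collections import Counter

def find_failure_patterns(trace):
    """Flag two simple failure modes in a list of agent steps:
    loops (the same action repeated) and give-ups (an explicit abort)."""
    patterns = []
    counts = Counter(step["action"] for step in trace)
    for action, n in counts.items():
        if n >= 3:
            patterns.append(("loop", action))
    if trace and trace[-1]["action"] == "give_up":
        patterns.append(("give_up", trace[-1].get("reason", "")))
    return patterns

trace = [{"action": "search"}, {"action": "search"},
         {"action": "search"}, {"action": "give_up", "reason": "no results"}]
print(find_failure_patterns(trace))
# [('loop', 'search'), ('give_up', 'no results')]
```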

5
📊 Test improvements

Run your helper again on the same tasks and see the success scores climb higher.
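
Once each task is marked pass/fail, the before-and-after comparison is simple arithmetic. A toy example with made-up numbers:

```python
def success_rate(results):
    """Fraction of benchmark tasks that passed."""
    return sum(results) / len(results)

before = [1, 0, 0, 1, 0]   # pass/fail per task, first run
after  = [1, 1, 0, 1, 1]   # same tasks after applying fixes

delta = success_rate(after) - success_rate(before)
print(f"{success_rate(before):.0%} -> {success_rate(after):.0%} ({delta:+.0%})")
# 40% -> 80% (+40%)
```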

6
📈 View the dashboard

Open a simple screen showing before-and-after results for each round of fixes.

🚀 Smarter agent

Your AI helper now handles tougher problems, wastes less time, and succeeds more often.

AI-Generated Review

What is recursive-improve?

recursive-improve is a Python tool that lets your AI agents recursively self-improve by capturing every LLM call via simple patches for OpenAI, Anthropic, and LiteLLM. Drop it into your agent project with `recursive-improve init`, wrap runs in a tracing session, and it analyzes traces for failure patterns like loops or give-ups, then generates targeted code or prompt fixes via Claude skills. The result: agents that compound improvements over runs, tackling harder tasks with fewer tokens—no more manual tweaks.
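
Zero-code-change tracing of this kind typically works by patching the client method the agent already calls. A minimal sketch of the idea, using a `FakeClient` stand-in instead of a real SDK:

```python
captured = []  # calls recorded without touching the agent's own code

class FakeClient:
    """Stand-in for an SDK client (e.g. an OpenAI-style chat client)."""
    def complete(self, prompt):
        return prompt.upper()

_original = FakeClient.complete

def patched(self, prompt):
    captured.append(prompt)          # record the call for later analysis
    return _original(self, prompt)   # then behave exactly as before

FakeClient.complete = patched        # agent code keeps calling .complete()

client = FakeClient()
print(client.complete("fix the login bug"))  # FIX THE LOGIN BUG
print(captured)                              # ['fix the login bug']
```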

Why is it gaining traction?

It closes the stateless agent loop with a full pipeline: trace capture, a benchmark CLI for before/after metrics, a dashboard to visualize cycles across git branches, and `/ratchet` for overnight autonomous loops that only keep wins. Unlike one-off evals, it ties recursive criticism and improvement directly to git commits, making iteration measurable and reversible. Developers chasing recursive improvement dig the zero-code-change tracing and how it surfaces domain-specific metrics from their traces.
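
A keep-only-wins loop in the spirit of `/ratchet` can be modeled in a few lines. This is a toy: it optimizes a single number, whereas the real tool evaluates benchmark runs and commits or rolls back git state.

```python
import random

def ratchet(evaluate, propose, state, rounds=50):
    """Keep-only-wins loop: try one change per round and keep it
    only if the benchmark score does not drop."""
    score = evaluate(state)
    for _ in range(rounds):
        candidate = propose(state)
        cand_score = evaluate(candidate)
        if cand_score >= score:        # the real tool would commit to git here
            state, score = candidate, cand_score
        # a losing candidate is simply discarded (rolled back)
    return state, score

random.seed(7)                                     # deterministic toy run
target = 10.0
evaluate = lambda s: -abs(s - target)              # higher is better
propose = lambda s: s + random.uniform(-1.0, 1.0)  # a "suggested fix"

state, score = ratchet(evaluate, propose, state=0.0)
print(score >= evaluate(0.0))  # True: the score never regresses
```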

Who should use this?

Agent builders building AI agents from scratch, tweaking ones made in ChatGPT or with Copilot, or optimizing for production (including those exploring how to make money with AI agents on Reddit). Ideal for backend devs iterating on tools for anything from booking flights to game bots (even quirky ones like making agents talk in Valorant), where edge cases kill reliability.

Verdict

Early alpha with 34 stars and a 100% credibility score. Docs shine via the README and dashboard, but expect rough edges like custom eval integration. Worth a spin in a side project if you're serious about recursive agent improvement; init takes seconds and gains compound fast.
