JoaquinMulet

Autonomous code optimization that works while you sleep (Autoresearch with Claude Code). Define a metric, point it at your code, go to bed. Wake up to a faster, smaller, better system — with correctness verified at every step.

Found Mar 16, 2026 at 17 stars
AI Summary

AGR is an autonomous tool that uses AI to experiment on and optimize code for better speed, size, or accuracy while verifying correctness at every step.

How It Works

1
🔍 Discover AGR

You find this clever tool on GitHub that promises to make your code run faster or smaller automatically while you sleep.

2
📥 Add to Your AI Helper

You download it and slip it into your AI coding assistant like adding a new skill.

3
🎯 Pick Your Goal

You simply tell it what you want to improve, like speed, accuracy, or making files tinier.

4
🚀 Start the Magic

With one easy command, you launch it and it sets everything up on its own.

5
🛌 Relax Overnight

You go to bed while it quietly tests ideas, checks everything stays correct, and picks the winners.

6
🎉 Wake to Wins

In the morning, your code is dramatically better—faster, smaller, or smarter—with proof it still works perfectly.
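The overnight cycle described in the steps above boils down to a propose-measure-verify loop: time each candidate, reject anything whose output differs from the baseline, and keep only genuine wins. A minimal Python sketch of that pattern (hypothetical illustration only; `run_benchmark`, `checksum`, and `optimize` are not AGR's actual API):

```python
import hashlib
import time

def checksum(output: str) -> str:
    # Fingerprint the benchmark output so any behavior change is detected.
    return hashlib.sha256(output.encode()).hexdigest()

def run_benchmark(variant) -> tuple[float, str]:
    # Time one candidate implementation and capture its output.
    start = time.perf_counter()
    output = variant()
    return time.perf_counter() - start, output

def optimize(baseline, candidates):
    # Keep only candidates that reproduce the baseline output and run faster.
    best_time, base_out = run_benchmark(baseline)
    expected = checksum(base_out)
    best = baseline
    for candidate in candidates:
        elapsed, out = run_benchmark(candidate)
        if checksum(out) != expected:
            continue  # correctness guard: a wrong answer is never a "win"
        if elapsed < best_time:
            best, best_time = candidate, elapsed
    return best
```

The key design point is that correctness is checked before the metric is even consulted, so a fast-but-wrong candidate can never be selected.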

AI-Generated Review

What is Artificial-General-Research?

Artificial-General-Research (AGR) is a Claude Code skill for autonomous code optimization in research and production codebases. Point it at your repo with a metric such as speed, bundle size, or accuracy via simple /agr commands in Claude Code, then start a bash loop: it runs AI-driven experiments overnight, verifying correctness with checksums and guards. Wake up to measurably better code, such as the 45x speedup demonstrated on a C++/Python spatial library.

Why is it gaining traction?

It edges out basic autoresearch tools with user-noticeable reliability: per-benchmark variance handling spots real gains amid noise, exhausted-approach tracking skips dead ends, and supervisor audits catch hidden wins. The hook is true set-it-and-forget-it autonomy on GitHub repos, blending Claude Code's power with bash simplicity for consistent results across 10-100+ iterations, with no context drift or flaky benchmarks.
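The variance handling mentioned above amounts to accepting a speedup only when it clears the measured run-to-run noise floor. A minimal sketch of that idea (hypothetical; `is_real_gain` and the two-sigma threshold are illustrative assumptions, not AGR's code):

```python
import statistics

def is_real_gain(baseline_times, candidate_times, sigmas=2.0):
    # Accept a candidate only if its mean beats the baseline mean
    # by more than the combined run-to-run noise of both samples.
    b_mean = statistics.mean(baseline_times)
    c_mean = statistics.mean(candidate_times)
    noise = statistics.stdev(baseline_times) + statistics.stdev(candidate_times)
    return c_mean < b_mean - sigmas * noise
```

A 1% "improvement" on a benchmark with 5% jitter fails this test, which is exactly the kind of phantom win that derails long unattended runs.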

Who should use this?

Library maintainers chasing perf gains on wall-clock time or ML accuracy, backend teams optimizing API p95 latency or SQL queries, and AGI researchers automating prompt engineering evals or autonomous code evolution. Fits any project with a benchmark.py setup, from Docker image shrinks to cloud cost cuts.
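A project qualifies as soon as it exposes a benchmark script that reports a metric plus a correctness fingerprint. One hypothetical shape such a `benchmark.py` could take (the JSON keys and workload here are assumptions for illustration, not a documented AGR contract):

```python
"""Hypothetical minimal benchmark.py an autonomous optimizer could drive."""
import hashlib
import json
import statistics
import time

def workload():
    # Stand-in for the code path being optimized.
    return sorted(range(10_000, 0, -1))

def main():
    times = []
    for _ in range(5):  # repeat runs to expose run-to-run variance
        start = time.perf_counter()
        result = workload()
        times.append(time.perf_counter() - start)
    print(json.dumps({
        "metric_seconds": statistics.median(times),
        "checksum": hashlib.sha256(json.dumps(result).encode()).hexdigest(),
    }))

if __name__ == "__main__":
    main()
```

Printing both numbers in one machine-readable line lets an outer loop compare runs without parsing logs: the metric tells it whether an experiment helped, and the checksum tells it whether the result is still correct.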

Verdict

Worth a spin for autonomous coding workflows if you have Claude Code access: the case study shows it delivers, but 17 stars and 1.0% credibility scream early alpha. Solid docs offset thin maturity; prototype on toy code before trusting it with production.

