iii-hq

Autonomous ML research infrastructure inspired by Karpathy's autoresearch. Multi-GPU parallelism, structured experiment tracking, adaptive search strategies.

17 stars · 100% credibility
Found Mar 13, 2026 at 17 stars.
AI Analysis · Python

AI Summary

n-autoresearch provides infrastructure for AI agents to iteratively modify, train, and evaluate machine learning models on multiple GPUs with structured tracking and adaptive strategies.

How It Works

1
🔍 Discover n-autoresearch

You hear about this cool tool that lets AI automatically test and improve machine learning recipes on your powerful computers.

2
📥 Get everything ready

You download the files, prepare some sample data once, and make sure your computers with graphics cards are set up.

3
▶️ Start the helpers

In a few terminal windows, you launch the main organizer and one helper for each graphics card to get the system running smoothly.

4
🤖 Invite your AI assistant

You show your AI buddy the instructions and let it start suggesting changes to the recipe and running quick tests.

5
📊 Watch magic happen

The AI tries ideas, trains mini-models for just five minutes each, picks the winners, and makes smarter suggestions for the next tries.

6
🔄 Review progress anytime

You check reports to see improving scores, best results so far, and ideas for what to try next across all your graphics cards.

7
🏆 Enjoy better models

Your AI discovers stronger machine learning setups automatically, saving you tons of time and effort.
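The loop described in the steps above can be sketched in plain Python. This is a toy simulation under stated assumptions: the tweak space, the scoring function, and all names below are illustrative, not the repo's actual API; val_bpb (bits per byte, lower is better) is the metric the review mentions.

```python
import random

def propose_tweak(rng):
    """Stand-in for the agent suggesting a recipe change."""
    return {
        "lr": rng.choice([1e-4, 3e-4, 1e-3]),
        "n_layer": rng.choice([4, 6, 8]),
    }

def short_training_run(tweak, rng):
    """Fake 5-minute run: return a noisy val_bpb for this tweak."""
    base = 1.2 - 0.05 * (tweak["n_layer"] / 8) - 0.02 * (tweak["lr"] == 3e-4)
    return base + rng.gauss(0, 0.01)

def search(n_trials=10, seed=0):
    """Try n_trials tweaks and keep the one with the lowest val_bpb."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        tweak = propose_tweak(rng)
        score = short_training_run(tweak, rng)
        if best is None or score < best[0]:  # lower bpb wins
            best = (score, tweak)
    return best

best_bpb, best_tweak = search()
```

The real system runs each trial as an actual training job on a GPU worker, but the control flow (propose, run briefly, keep the winner) is the same shape.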


AI-Generated Review

What is n-autoresearch?

n-autoresearch is autonomous research infrastructure that lets AI agents such as Claude run ML experiments at scale. Inspired by Karpathy's autoresearch, it handles 5-minute training runs on train.py, tracks val_bpb metrics via a REST API (e.g., /api/experiment/register, /api/search/suggest), and supports multi-GPU parallelism with crash recovery. Built in Python with Rust GPU workers on iii-engine, it replaces ad-hoc bash loops with queryable history and adaptive exploration.
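A minimal sketch of how an agent might talk to that REST API. The endpoint paths (/api/experiment/register, /api/search/suggest) come from the review; the base URL, payload field names, and helper functions below are assumptions for illustration.

```python
import json
from urllib import request

BASE = "http://localhost:8000"  # assumed tracker address

def build_register_payload(name, config):
    """JSON body for /api/experiment/register (field names assumed)."""
    return {"name": name, "config": config}

def post_json(path, payload):
    """POST a JSON payload to the tracker and decode the response."""
    req = request.Request(
        BASE + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

# Usage (requires a running tracker):
# post_json("/api/experiment/register",
#           build_register_payload("lr-sweep", {"lr": 3e-4}))
# suggestion = post_json("/api/search/suggest", {"strategy": "explore"})
```

Keeping the payload builders pure makes them easy to unit-test without a live server.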

Why is it gaining traction?

It scales single-GPU autoresearch to N GPUs across machines, with adaptive strategies that shift from broad exploration to targeted exploit/combine/ablation moves based on history. Agents get concrete suggestions via the API, near-miss detection for combos, and TSV exports—features missing from the flat-log original. Developers like the structured state, GPU pool management (/api/pool/acquire), and easy integration into autonomous coding workflows.
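The explore-to-exploit shift can be illustrated with a toy scheduler. The strategy names echo the review's exploit/combine/ablation vocabulary, but the thresholds and logic here are illustrative assumptions, not the repo's algorithm.

```python
def pick_strategy(history, explore_budget=20):
    """Choose the next move from (config, val_bpb) history.

    Broad exploration until enough runs exist; then exploit while
    results keep improving, and ablate once progress stalls.
    """
    if len(history) < explore_budget:
        return "explore"            # not enough data: sample broadly
    best = min(history, key=lambda h: h[1])
    recent = [bpb for _, bpb in history[-5:]]
    if min(recent) <= best[1]:      # recent runs include the best so far
        return "exploit"
    return "ablate"                 # plateaued: test which parts matter

# Illustrative history: val_bpb steadily dropping from 1.30.
hist = [({"lr": 1e-3 / (i + 1)}, 1.3 - 0.01 * i) for i in range(25)]
```

Early on this returns "explore", then "exploit" while scores improve, and "ablate" once the last few runs stop beating the best result.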

Who should use this?

ML engineers tweaking language models with coding agents, especially those with multi-GPU clusters running Karpathy-style loops. Ideal for research groups that need parallel adaptive search without babysitting crashed runs. Skip it if you're working solo on CPU.

Verdict

Promising upgrade for autonomous research workflows, but at 17 stars it's early—the README quickstart is solid, but expect tweaks. Grab it if you have NVIDIA GPUs and want structured autonomy now.

