wanshuiyin / ARIS ⚔️ (Auto-Research-In-Sleep)

Claude Code skills for autonomous ML research: cross-model review loops, idea discovery, and experiment automation via Codex MCP

568 stars · 55 forks · 100% credibility
Found Mar 10, 2026 at 59 stars (roughly 10× growth since)
AI Summary

Primary language: TeX

Custom abilities for an AI assistant to autonomously review, iteratively improve, and experiment on machine learning research papers overnight.

How It Works

1
📰 Discover the Helper

You hear about a smart helper that can review and improve your research paper all by itself while you sleep.

2
🔧 Prepare Your AI Assistant

You add the special abilities to your AI assistant so it can handle research tasks on its own.

3
📄 Share Your Paper

You give your research paper to the assistant, ready for it to check and suggest fixes.

4
🚀 Start the Magic Loop

You tell it to automatically review, fix issues, run tests, and repeat until the paper is great — then go to bed.

5
😴 Rest Easy

The assistant works overnight, scoring your paper, finding weaknesses, and making it stronger without bothering you.

6
🌅 Wake to Success

You check in the morning to find your paper improved, experiments done, and ready to submit with confidence.
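The review-fix-repeat cycle described in the steps above can be sketched as a small capped loop. This is a toy Python sketch: `review`, `apply_fixes`, and `overnight_loop` are illustrative stand-ins, not the repo's actual API, and the scoring logic is stubbed purely for demonstration.

```python
# Hypothetical sketch of the overnight review loop: score the paper,
# fix flagged weaknesses, re-score, and repeat until good enough.

def review(paper: str) -> tuple[float, list[str]]:
    """Score the paper and list weaknesses (stubbed for illustration)."""
    issues = [w for w in ("vague claim", "missing baseline") if w in paper]
    return 10.0 - 2.0 * len(issues), issues

def apply_fixes(paper: str, issues: list[str]) -> str:
    """Rewrite each flagged weakness (stubbed: just replaces the text)."""
    for issue in issues:
        paper = paper.replace(issue, "fixed")
    return paper

def overnight_loop(paper: str, target: float = 8.0,
                   max_rounds: int = 4) -> tuple[str, float]:
    """Iterate review -> fix until the score clears the target or rounds run out."""
    score, issues = review(paper)
    for _ in range(max_rounds):
        if score >= target or not issues:
            break
        paper = apply_fixes(paper, issues)
        score, issues = review(paper)
    return paper, score
```

The cap on rounds is the important design point: an unattended loop needs a hard stop so it cannot burn compute all night on a paper it can no longer improve.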


AI-Generated Review

What is Auto-claude-code-research-in-sleep?

This project delivers custom Claude Code CLI skills for autonomous ML research: you kick off overnight workflows in which Claude Code reviews papers, runs GPU experiments, and iterates on fixes in cross-model loops with Codex MCP. Instead of manual debugging and rewriting, it handles scoring, weakness detection, and narrative pivots, aiming to turn borderline rejects into submission-ready work while you sleep. The skills are open source, install into an existing Claude Code setup, and use Claude's GitHub integration for repo handling.

Why is it gaining traction?

It stands out with safety-capped auto loops (at most 4 rounds, skipping large GPU jobs) and genuine cross-LLM review: Claude executes, Codex critiques, which avoids the self-grading bias of single-model review tools. Developers reach for commands like `/auto-review-loop` or `/research-lit` for instant literature reviews and gap spotting, and setup is free via a git clone. A real-run score progression from 5/10 to 7.5/10 suggests tangible ML paper upgrades without constant oversight.
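The cross-model split described here, where one model edits and a different model grades with a hard round cap, can be sketched generically. Every name below is a hypothetical stand-in (this is not the repo's code), and the critic's scoring stub merely mimics the reported 5/10 to 7.5/10 progression for illustration.

```python
# Illustrative executor/critic separation: the model that revises the
# paper never grades its own work, so scores come from an outside model.

class Executor:
    """Stands in for the model that edits the paper (e.g. Claude)."""
    def revise(self, paper: str, feedback: list[str]) -> str:
        return paper + " [revised: " + "; ".join(feedback) + "]"

class Critic:
    """Stands in for the independent reviewing model (e.g. Codex)."""
    def grade(self, paper: str) -> tuple[float, list[str]]:
        rounds = paper.count("[revised")
        score = min(5.0 + 1.25 * rounds, 7.5)  # stub mimicking 5 -> 7.5
        feedback = [] if score >= 7.5 else ["tighten related work"]
        return score, feedback

def cross_model_loop(paper: str, max_rounds: int = 4) -> float:
    """Alternate revise/grade until the critic is satisfied or rounds run out."""
    executor, critic = Executor(), Critic()
    score, feedback = critic.grade(paper)
    for _ in range(max_rounds):
        if not feedback:
            break
        paper = executor.revise(paper, feedback)
        score, feedback = critic.grade(paper)
    return score
```

Keeping the grader and the editor as separate objects (and, in the real tool, separate LLMs behind separate APIs) is what prevents the loop from inflating its own scores.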

Who should use this?

ML researchers grinding through arXiv submissions, iterating experiments on remote GPUs, or scouting NeurIPS ideas. It suits solo PhD students or small teams already using Claude Code with GitHub integration to automate review loops; skip it if your GitHub connector isn't working or you can't set up Codex MCP, since the cross-model loop depends on both. A natural fit for autonomous Claude workflows in areas like diffusion-model research.

Verdict

Promising for Claude Code skills enthusiasts, but at 37 stars and 1.0% credibility when reviewed, it's early-stage: solid docs, yet unproven at scale, so test it on toy projects first. Worth installing if you're deep in ML autonomy; fork it to customize the thresholds.
