InternScience

MLEvolve is an open-source autonomous system for end-to-end machine learning algorithm design and optimization powered by progressive search and experience-driven memory.

Found Feb 19, 2026 at 59 stars.
Language: Python

AI Summary

MLEvolve is an open-source AI agent system that autonomously solves machine learning competitions by iteratively generating, executing, debugging, and improving code solutions.

How It Works

1
🔍 Discover MLEvolve

You hear about MLEvolve, a smart helper that automatically solves fun data puzzles like Kaggle competitions.

2
📁 Prepare your puzzle data

Gather your data files into one folder, just like putting puzzle pieces in a box.

3
🔗 Connect a thinking brain

Link a helpful AI service so MLEvolve can think and create solutions for you.

4
🚀 Start the magic

Run a simple command to launch MLEvolve on your data puzzle.

5
🧠 Watch it solve automatically

MLEvolve thinks step-by-step, writes code, tests ideas, fixes mistakes, and improves until it finds great answers.

6
📊 Check the results

See the best solutions, scores, and ready-to-use code files appear in your folder.

7
🏆 Celebrate top scores

Your puzzle is solved with winning entries, ready to submit and shine on leaderboards!
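The generate, test, and improve loop in steps 4 through 6 can be sketched as a toy Python program. This is only a sketch of the general technique; `llm_propose_solution`, `evaluate`, and `solve` are hypothetical stand-ins, not MLEvolve's real API, and a random score simulates solution quality:

```python
import random

def llm_propose_solution(task, feedback):
    """Hypothetical stand-in for an LLM call that drafts candidate code."""
    # A real agent would send the task description plus past feedback to a
    # model such as Gemini; here a random score simulates solution quality,
    # nudged upward as feedback (experience) accumulates.
    return {"code": f"# candidate solution for {task}",
            "quality": random.random() + 0.01 * len(feedback)}

def evaluate(candidate):
    """Hypothetical stand-in for executing the code and scoring it."""
    return candidate["quality"]

def solve(task, iterations=20):
    """Generate, evaluate, and refine candidates, keeping the best one."""
    best, best_score, feedback = None, float("-inf"), []
    for _ in range(iterations):
        candidate = llm_propose_solution(task, feedback)
        score = evaluate(candidate)
        feedback.append(score)  # experience feeds later proposals
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

best, best_score = solve("tabular-kaggle-task", iterations=10)
```

The key design idea the steps describe is that each new proposal sees the accumulated experience from earlier attempts, so later candidates tend to improve rather than start from scratch.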


Star Growth

This repo grew from 59 to 130 stars.
AI-Generated Review

What is MLEvolve?

MLEvolve is an open-source Python system that autonomously handles end-to-end machine learning algorithm design and optimization for Kaggle-style competitions. You feed it a dataset via mle-bench, add your Gemini API key to config.yaml, and run `bash run_single_task.sh` to get a search tree of evolving solutions, with the best-scoring code saved in `./runs/`. Powered by progressive search and experience-driven memory, it produces competitive models without manual coding.
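The setup above centers on config.yaml. A minimal sketch of what such a file might contain, assuming illustrative key names (the repo's actual schema may differ):

```yaml
# Illustrative config.yaml sketch -- key names are assumptions,
# not taken from MLEvolve's actual schema.
llm:
  provider: gemini
  api_key: "YOUR_GEMINI_API_KEY"   # required for planning and code generation
search:
  max_iterations: 50               # hypothetical search-budget knob
output:
  runs_dir: ./runs                 # the review says best solutions land here
```

With the key in place, the launch command from the review is `bash run_single_task.sh` for a single task.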

Why is it gaining traction?

It sits at #1 on the MLE-bench leaderboard with a 61.33% any-medal rate in just 12 hours using Gemini, faster than rivals that need 24. Developers like the multi-mode planning (memory-enhanced or single-shot) and the cross-branch fusion that learns from failures, delivering diverse, high-performing ML pipelines. The global memory layer reinforces winning strategies across runs, making optimization feel smart and adaptive.

Who should use this?

ML engineers benchmarking agentic systems against OpenAI's MLE-bench, Kaggle competitors automating baseline-to-gold pipelines, or researchers prototyping autonomous learning algorithms. Ideal for anyone with Gemini access testing end-to-end optimization on tabular/image tasks, especially if you're tired of hand-tuning models.

Verdict

Promising for autonomous ML experimentation, but at 49 stars and 1.0% credibility, it's early-stage—expect setup tweaks and sparse docs. Try it on a single task if leaderboard SOTA intrigues you; skip for production until more battle-tested.


