ZJU-REAL / SDAR

Public

Official code for "Self-Distilled Agentic Reinforcement Learning"

Language: Python
AI Summary

SDAR is a reinforcement learning method that uses self-distillation to train AI agents more effectively on benchmarks like ALFWorld, WebShop, and Search-QA.
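
To make the idea concrete, here is a minimal sketch of what a self-distillation term can look like alongside a policy-gradient loss. Everything below (the function name, the frozen-teacher setup, the beta weight) is an illustrative assumption, not SDAR's actual API.

```python
# Minimal sketch: policy-gradient loss plus a self-distillation KL term.
# The "teacher" is assumed to be a frozen earlier copy of the same policy;
# all names here are illustrative, not taken from the SDAR codebase.
import torch
import torch.nn.functional as F

def self_distill_loss(student_logits, teacher_logits, action_ids, advantages, beta=0.1):
    log_probs = F.log_softmax(student_logits, dim=-1)                    # (B, T, V)
    chosen = log_probs.gather(-1, action_ids.unsqueeze(-1)).squeeze(-1)  # (B, T)
    pg_loss = -(advantages.unsqueeze(-1) * chosen).mean()                # REINFORCE-style term

    teacher_probs = F.softmax(teacher_logits.detach(), dim=-1)           # frozen teacher
    kl = (teacher_probs *
          (teacher_probs.clamp_min(1e-8).log() - log_probs)).sum(-1).mean()
    return pg_loss + beta * kl            # beta trades off RL signal vs. distillation

# Smoke test with random tensors: 2 trajectories, 5 steps, 32-token vocab.
B, T, V = 2, 5, 32
loss = self_distill_loss(
    torch.randn(B, T, V, requires_grad=True),
    torch.randn(B, T, V),
    torch.randint(0, V, (B, T)),
    torch.tensor([0.5, -0.2]),
)
loss.backward()
```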

How It Works

1. 📰 Discover SDAR

You find a new research paper on self-distilled agentic reinforcement learning that promises big improvements for AI agents on everyday tasks.

2. 🛠️ Set up your workspace

Create a fresh environment on your computer to build and train smarter AI agents.

3. Pick a practice world

- 🏠 Household tasks (ALFWorld): agents learn to pick up, clean, and heat objects in simulated kitchens.
- 🛒 Online shopping (WebShop): agents navigate storefronts to find and buy items.
- 🔍 Search challenges (Search-QA): agents answer questions using search tools.

4. 📥 Gather practice scenarios

Download the ready-made scenarios so your agent has plenty of tasks to learn from.

5. 🚀 Start agent training

Launch training and watch your agent practice, learn from its mistakes, and get smarter over time (a toy rollout loop is sketched after this walkthrough).

6. 📊 Review progress

Check the training curves and benchmark scores to see your agent beating the standard baselines.

🏆 Smarter agents ready

Your trained agent now handles complex tasks better and is ready for real-world challenges.
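
As referenced in step 5, here is a toy version of the interaction loop the walkthrough describes. The TextEnv class and the random agent_act policy are stand-ins for illustration; SDAR's real environments (ALFWorld, WebShop, Search-QA) and LLM policy are far richer.

```python
# Toy rollout loop in the spirit of the walkthrough above. TextEnv and
# agent_act are illustrative stand-ins, not SDAR's actual classes.
import random
from dataclasses import dataclass

@dataclass
class TextEnv:
    goal: str
    max_steps: int = 10
    steps: int = 0

    def reset(self) -> str:
        self.steps = 0
        return f"Task: {self.goal}. You are in a kitchen."

    def step(self, action: str):
        self.steps += 1
        done = action == "done" or self.steps >= self.max_steps
        reward = 1.0 if action == "done" else 0.0   # toy reward: declare success
        return f"You {action}.", reward, done

def agent_act(observation: str) -> str:
    # Placeholder policy; in SDAR this would be an LLM generating the action.
    return random.choice(["open fridge", "take apple", "heat apple", "done"])

env = TextEnv(goal="heat an apple")
obs, ret, done = env.reset(), 0.0, False
while not done:
    action = agent_act(obs)
    obs, reward, done = env.step(action)
    ret += reward
print(f"episode return: {ret}")
```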

AI-Generated Review

What is SDAR?

SDAR implements self-distilled agentic reinforcement learning, distilling knowledge from LLMs into RL agents to boost performance on text-based embodied tasks. It targets environments where baseline agents are weak, like ALFWorld (household chores), WebShop (e-commerce navigation), and Search-QA (retrieval), and delivers clear gains via Python scripts that train 3B models out of the box. Users get bash commands for setup, training on GPU clusters, and checkpoint merging, with vLLM or Torch plugged in for quick runs.
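
For the evaluation side, a merged checkpoint can be served through vLLM along these lines; the model path and prompt are hypothetical, and the repo's own scripts may wrap this differently.

```python
# Sketch of loading a merged (HuggingFace-format) checkpoint into vLLM for a
# quick greedy-decoding check. The path and prompt below are hypothetical.
from vllm import LLM, SamplingParams

llm = LLM(model="./checkpoints/sdar-3b-merged")          # assumed merge-script output
params = SamplingParams(temperature=0.0, max_tokens=64)  # greedy, short completion
outputs = llm.generate(
    ["Task: put a clean mug on the shelf. Next action:"], params
)
print(outputs[0].outputs[0].text)
```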

Why is it gaining traction?

Unlike generic RL libraries, SDAR bundles the environment installs (ALFWorld, WebShop, Search) into one-liners, skipping manual data prep and custom wrappers. The hook: it is the official code for a fresh arXiv paper (2605.15155), reproducing the reported SOTA lifts on real benchmarks, so devs grab it for verifiable results without reinventing env pipelines or distillation logic.

Who should use this?

RL researchers replicating agentic LLM papers, especially on embodied QA or web agents. It is ideal for academics tuning distillation on ALFWorld chores or WebShop shopping, and for labs extending it to retrieval-style agents in the vein of Search-R1.

Verdict

Grab it if you're in agentic RL: the solid README and scripts make it runnable, even if the low star count signals early days. Maturity lags (no broad test coverage yet), but the official code release and bundled environment support make it a low-risk tool for reproducing the paper's benchmarks.


