Qredence

DSPy's Recursive Language Model (RLM) with Modal Sandbox for secure cloud-based code execution

Found Feb 13, 2026 at 14 stars.
AI Summary

fleet-rlm enables AI agents to safely analyze large documents by generating and executing code in isolated cloud sandboxes.

How It Works

1. 🔍 Discover fleet-rlm: You hear about a smart helper that lets AI dig into giant books or reports without you copying everything over.
2. 📦 Get it ready: With one simple command, you bring the helper onto your computer.
3. 🔗 Link your AI brain: You connect a thinking service and a safe online workspace so your helper can work securely.
4. 🚀 Wake up your researcher: Your AI researcher springs to life, ready to tackle big questions with code it writes itself.
5. 💬 Chat or command: You start asking questions about huge files, or load one and say what to find.
6. Unlock insights: You get clear answers and discoveries from massive info, safely and easily.


AI-Generated Review

What is fleet-rlm?

Fleet-rlm is a Python package that pairs Stanford's DSPy framework with Modal to build recursive language models that generate and run code securely in cloud sandboxes. It lets DSPy agents write Python scripts that probe massive documents or datasets remotely, avoiding local downloads and prompt bloat. Users get a CLI for demos like `fleet-rlm code-chat`, a terminal UI for interactive sessions, and an API server for production.
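The core pattern can be sketched as follows. This is an illustrative toy, not fleet-rlm's actual internals: a local subprocess stands in for Modal's cloud Sandbox, and the generated script is hard-coded where a DSPy agent would produce it.

```python
import subprocess
import sys

def run_in_sandbox(code: str, timeout: int = 30) -> str:
    """Execute generated code in a separate interpreter and capture stdout.

    Local stand-in for a cloud sandbox: the code runs in its own process,
    not the caller's, so a crash or bad script can't take down the agent.
    """
    result = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True, text=True, timeout=timeout,
    )
    if result.returncode != 0:
        return f"error: {result.stderr.strip()}"
    return result.stdout.strip()

# A script an agent might generate to probe a document without pasting
# the whole thing into its prompt (the document here is a toy stand-in):
generated = """
text = "GET /users\\nPOST /users\\nGET /health"
print(sum(1 for line in text.splitlines() if line.startswith("GET")))
"""
print(run_in_sandbox(generated))  # prints: 2
```

The agent only ever sees the small string the script prints, which is what keeps prompts short regardless of document size.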

Why is it gaining traction?

Unlike stock DSPy examples or local interpreters, it sandboxes code execution in Modal's cloud for zero local risk, while DSPy's optimizers help models produce reliable code. The killer hook: agents handle long-context tasks by writing targeted scripts (grep, chunk, analyze) instead of stuffing everything into prompts. Claude Code integration via `fleet-rlm init` adds agent skills for DSPy CLI workflows, making it a natural fit for agent builders.
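The grep/chunk/analyze strategy above can be sketched in plain Python. The document and helpers here are toy stand-ins for what an agent would generate, not fleet-rlm's API:

```python
import re

# Toy 500-line "log" with an ERROR every 50 lines.
doc = "\n".join(
    f"line {i}: " + ("ERROR disk full" if i % 50 == 0 else "ok")
    for i in range(1, 501)
)

def grep(pattern: str, text: str) -> list[str]:
    """Return only matching lines -- a tiny prompt footprint."""
    return [ln for ln in text.splitlines() if re.search(pattern, ln)]

def chunk(text: str, size: int = 100) -> list[str]:
    """Split a long document into fixed-size line chunks for iteration."""
    lines = text.splitlines()
    return ["\n".join(lines[i:i + size]) for i in range(0, len(lines), size)]

hits = grep(r"ERROR", doc)
print(len(hits), "matches across", len(chunk(doc)), "chunks")
# prints: 10 matches across 5 chunks
```

Feeding the model 10 matching lines instead of 500 raw ones is the whole trade: a cheap script run buys a huge reduction in context.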

Who should use this?

DSPy developers iterating on tutorial-style agents who hit context limits while analyzing repos or logs. Claude users who need secure cloud-based code execution for research tasks like "extract API endpoints from docs." Teams building Copilot-style tools on DSPy for batch document processing without infrastructure headaches.
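The "extract API endpoints from docs" task mentioned above is exactly the kind of script such an agent would generate and run remotely. A toy local version, with hypothetical sample docs:

```python
import re

# Hypothetical API documentation excerpt.
docs = """
Use GET /v1/users to list users.
Create one with POST /v1/users.
Health checks live at GET /healthz.
"""

# HTTP method followed by a path; dedupe and sort the results.
ENDPOINT = re.compile(r"\b(GET|POST|PUT|PATCH|DELETE)\s+(/[\w/\-{}]*)")

endpoints = sorted({f"{m} {p}" for m, p in ENDPOINT.findall(docs)})
for e in endpoints:
    print(e)
# prints:
# GET /healthz
# GET /v1/users
# POST /v1/users
```

Only the short deduplicated list travels back to the model, so the approach scales to documentation far larger than any context window.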

Verdict

Early beta status and a low star count signal low maturity, but strong docs, tests, and a Makefile make it usable now. Grab it if you're deep in DSPy and Modal: solid for prototypes, but watch for stability.

