avbiswas

Recursive Language Models (https://arxiv.org/abs/2512.24601)

196 stars · 100% credibility
Found Feb 23, 2026 at 84 stars
AI Summary

fast-rlm is a lightweight Python package that lets AI models process long contexts by recursively delegating subtasks to sub-agents via an interactive code environment.

How It Works

1. 👀 Discover fast-rlm

You stumble upon this handy tool through a fun video or online search; it promises to help AI handle super long or tricky questions without forgetting details.

2. 📦 Set it up quickly

You easily add the tool to your Python setup with a simple install, getting everything ready in moments.

3. 🔗 Link your AI thinker

You connect a smart AI service account so the tool can borrow its powerful brain for deep thinking.

4. 💭 Ask a tough question

You write a simple line with your big challenge, like counting specific letters across dozens of fruit names.

5. ✨ Watch it solve step by step

You run it and see the AI explore, break down the task, call little helpers, and build the answer right before your eyes.

6. 📊 Check the story behind it

Logs save automatically, letting you peek at every thought and helper chat if you want to understand more.

7. ✅ Get spot-on results

You receive the exact answer you needed, plus a summary of all the smart work it did, feeling amazed at how it conquered the hard part.
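The delegation pattern in the steps above can be sketched in plain Python. This is a toy illustration only: where fast-rlm would hand each chunk to an LLM sub-agent, a plain function stands in for the sub-agent so the split-delegate-merge flow is visible.

```python
# Toy sketch of recursive delegation: split a long input into chunks,
# delegate each chunk to a "sub-agent", and merge the partial answers.
# A real fast-rlm run delegates to LLM sub-agents; the function below
# is just a stand-in so the flow is easy to follow.

def sub_agent(chunk: list[str], letter: str) -> int:
    """Stand-in sub-agent: count a letter within one small chunk."""
    return sum(name.lower().count(letter) for name in chunk)

def solve(names: list[str], letter: str, chunk_size: int = 4) -> int:
    """Root agent: split the long input, delegate chunks, merge results."""
    chunks = [names[i:i + chunk_size] for i in range(0, len(names), chunk_size)]
    return sum(sub_agent(c, letter) for c in chunks)

fruits = ["strawberry", "raspberry", "pear", "apple",
          "cherry", "grape", "orange", "banana"]
print(solve(fruits, "r"))  # → 11
```

Because each chunk is independent, the root agent never has to hold the whole input at once, which is the core trick behind the long-context claims.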

AI-Generated Review

What is fast-rlm?

fast-rlm is a Python library implementing recursive language models from the arXiv paper, letting LLMs tackle arbitrarily long prompts via an external REPL and recursive sub-agents. You pip install it, set an OpenRouter or OpenAI-compatible API key, and run queries like `fast_rlm.run("Count r's in 50 fruits")`; it auto-chunks context, spawns sub-agents up to a configurable depth, and caps spend. Built on a Deno and TypeScript backend, it spits out results plus usage stats, saving JSONL logs viewable via CLI or TUI.
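The JSONL logging the review describes can be sketched as a self-contained toy: one JSON object per line, read back afterwards for quick stats. The field names here (`step`, `depth`, `tokens`) are assumptions for illustration, not fast-rlm's real log schema.

```python
# Toy sketch of JSONL run-logging: append one JSON record per step,
# then re-read the file the way a CLI stats viewer might.
# Field names are assumed for illustration, not fast-rlm's actual schema.
import json, os, tempfile

steps = [
    {"step": 1, "depth": 0, "tokens": 120},  # root agent turn
    {"step": 2, "depth": 1, "tokens": 45},   # sub-agent call
    {"step": 3, "depth": 1, "tokens": 52},   # another sub-agent call
]

path = os.path.join(tempfile.mkdtemp(), "run.jsonl")
with open(path, "w") as f:
    for s in steps:
        f.write(json.dumps(s) + "\n")        # one record per line

with open(path) as f:
    records = [json.loads(line) for line in f]
print(len(records), sum(r["tokens"] for r in records))  # → 3 217
```

The appeal of JSONL for this job is that each step can be flushed as it happens, so a crashed or interrupted run still leaves a readable partial trace.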

Why is it gaining traction?

Unlike standard LLMs that choke on long contexts, this recursive language model implementation handles million-character inputs through smart recursive chunking, with hard budget limits and a max depth to avoid bill shock. Devs dig the live terminal UI tracking steps and usage, the post-run log viewers (CLI stats or a Bun-powered TUI tree), and the benchmarks for longbench/oolong tasks. A zero-setup Python API plus a config YAML beats verbose agent frameworks.
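The two guardrails mentioned above, a depth cap and a hard spend cap, can be illustrated with a minimal self-contained sketch. The names (`max_depth`, `Budget`) and the per-call cost are illustrative assumptions, not fast-rlm's actual API.

```python
# Toy illustration of depth- and budget-capped recursion.
# Names and costs are illustrative assumptions, not fast-rlm's API.

class Budget:
    def __init__(self, limit: float):
        self.limit = limit
        self.spent = 0.0

    def charge(self, cost: float) -> bool:
        """Record a (mock) sub-agent call; refuse once the cap is hit."""
        if self.spent + cost > self.limit:
            return False
        self.spent += cost
        return True

def recurse(text: str, depth: int, max_depth: int, budget: Budget) -> int:
    """Count characters by recursive halving, stopping at either guardrail."""
    # Base cases: input is small, depth cap reached, or budget exhausted.
    if len(text) <= 4 or depth >= max_depth or not budget.charge(0.01):
        return len(text)  # handle the chunk directly, no further delegation
    mid = len(text) // 2
    return (recurse(text[:mid], depth + 1, max_depth, budget)
            + recurse(text[mid:], depth + 1, max_depth, budget))

budget = Budget(limit=0.05)  # hard spend cap
total = recurse("x" * 100, depth=0, max_depth=3, budget=budget)
print(total, round(budget.spent, 2))  # → 100 0.05
```

Note the answer stays correct when a guardrail trips: the recursion just stops delegating and handles the remaining chunk directly, which is how a cap avoids bill shock without aborting the run.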

Who should use this?

AI experimenters replicating the recursive language models paper for long-context QA or counting tasks. Agent builders processing huge docs like books or datasets without context-window limits. Researchers benchmarking recursive LLMs on synthetic data.

Verdict

Early alpha with 48 stars and a 1.0% credibility score: solid README, video, docs, and benchmarks, but light tests mean experiment only. Grab it for fast recursive-language-model prototyping if long prompts haunt you.


