Tenobrus

Recursive Language Models for Claude Code — process arbitrarily long inputs via recursive sub-agent delegation

Found Feb 10, 2026 at 50 stars.
AI Summary

Instructions for building a helper skill that lets an AI assistant process very long texts by recursively splitting them into parts and analyzing each with sub-helpers.

How It Works

1
📰 Discover the Trick

You hear about a smart way to help your AI friend read and understand super long stories or documents without forgetting details.

2
📖 Read the Friendly Guide

You check out the simple instructions that explain the idea with easy examples, like finding every character in a huge book.

3
💡 Share Ideas with Your AI

You copy the guide into your chat with your AI assistant and ask it to create a custom helper based on these clever steps.

4
🔧 Your Helper Comes Alive

Your AI builds the helper right there, ready to tackle big jobs by breaking them into smaller pieces and checking each one.

5
📚 Feed It a Giant Text

You give it a massive document, like a full novel, and watch as it smartly divides the work and digs into every part.

✅ Perfect Full Analysis

You get a complete, thorough breakdown of the entire document, catching details that regular reading would miss—everything feels spot on!
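The steps above boil down to a split-and-recurse loop. A minimal bash sketch of the idea (the function names, line threshold, and halving strategy are illustrative assumptions, not the repo's actual scripts):

```shell
#!/bin/bash
# Hypothetical sketch of recursive split-and-analyze. If a file is small
# enough, "analyze" it directly; otherwise split it in half and recurse.
CHUNK_LINES=100   # analyze directly at or below this size (assumed value)

analyze() {
  # Stand-in for a sub-agent call: just report the chunk's size.
  printf 'chunk %s: %s lines\n' "$1" "$(wc -l < "$1")"
}

rlm() {
  local file=$1
  local lines half
  lines=$(wc -l < "$file")
  if (( lines <= CHUNK_LINES )); then
    analyze "$file"
  else
    half=$(( (lines + 1) / 2 ))
    head -n "$half" "$file" > "$file.a"
    tail -n +"$(( half + 1 ))" "$file" > "$file.b"
    rlm "$file.a"
    rlm "$file.b"
    rm -f "$file.a" "$file.b"
  fi
}

# Demo: a 250-line "document" splits into 125+125, then 63+62 twice,
# so four leaf chunks get analyzed.
seq 250 > /tmp/rlm_demo.txt
rlm /tmp/rlm_demo.txt
```

Each leaf call stands in for delegating a chunk to a sub-agent; the real skill would hand each chunk to Claude rather than counting lines.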

Star Growth

The repo grew from 50 to 75 stars.
AI-Generated Review

What is claude-rlm?

claude-rlm brings the recursive language models idea from the arXiv paper to Claude Code via simple shell scripts. It tackles context rot by treating huge inputs as external files that Claude can slice, analyze, and recurse on with sub-agents. You trigger it in a Claude session with /rlm or let it auto-activate on long contexts, and you get full-text processing without summarization loss: think analyzing entire books or repos in one go. Under the hood it uses bash for chunking, tmux for sub-sessions, and environment variables for config such as max recursion depth or parallel query count.
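The env-var config might look like the following hedged sketch: depth-limited recursion with a cap on concurrent background sub-agents. RLM_MAX_DEPTH and RLM_MAX_PARALLEL are assumed names, not necessarily the repo's actual variables:

```shell
#!/bin/bash
# Illustrative only: variable names and the chunk-naming scheme are
# assumptions, not taken from the claude-rlm repo.
: "${RLM_MAX_DEPTH:=3}"      # stop recursing past this depth
: "${RLM_MAX_PARALLEL:=4}"   # cap on concurrent sub-agent jobs

process() {
  local depth=$1 chunk=$2 part
  if (( depth >= RLM_MAX_DEPTH )); then
    # Leaf: hand this chunk to a sub-agent (simulated by an echo).
    echo "depth $depth: analyze '$chunk' directly"
    return
  fi
  for part in "$chunk-left" "$chunk-right"; do
    # Respect the concurrency cap before spawning another background job.
    while (( $(jobs -rp | wc -l) >= RLM_MAX_PARALLEL )); do
      wait -n
    done
    process $(( depth + 1 )) "$part" &
  done
  wait   # join this level's sub-agents before returning
}

# With depth 3, the binary split produces 8 leaf chunks.
process 0 root
```

Backgrounding each recursive call plus a `jobs`-based gate is a common bash pattern for bounded parallelism; `wait -n` needs bash 4.3 or newer.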

Why is it gaining traction?

Unlike basic RAG or compaction tools, claude-rlm enables true depth-N recursion with concurrency limits, catching details in 78K-token texts that standard Claude misses, such as minor characters in Frankenstein. Developers like the observability (tmux lets you watch sub-agents live) and the clean-room reimplementation advice, which sidesteps the risks of depending on the repo directly. Semantic work scales roughly O(n) with input length without API costs spiking.
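The tmux observability works roughly like this: each sub-agent runs in its own detached session that you can attach to or capture from at any time. A generic illustration (the session name and the sleep placeholder are assumptions, not the repo's conventions):

```shell
# Hypothetical: 'rlm-sub-1' stands in for a sub-agent session name.
tmux new-session -d -s rlm-sub-1 'sleep 30'   # launch a detached "sub-agent"
tmux list-sessions                            # see which sub-agents are live
tmux capture-pane -t rlm-sub-1 -p             # snapshot its output non-interactively
# To watch it live: tmux attach -t rlm-sub-1  (detach again with Ctrl-b d)
tmux kill-session -t rlm-sub-1                # clean up when done
```

Detached sessions survive independently of your main terminal, which is what makes "spying on sub-agents live" possible without interrupting their work.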

Who should use this?

Claude Code power users tackling long documents: researchers benchmarking recursive language models on long-context suites like Oolong, or developers auditing massive codebases. Ideal for AI engineers prototyping over full novels, legal texts, or git histories where precision matters more than speed.

Verdict

Worth a spin if you're deep in Claude workflows. Reimplement from the README for control: the low star count signals early-stage maturity, with solid docs but no tests. It delivers on the recursive language models paper's promise without Python overhead.


