rlm-cli

CLI for Recursive Language Models (arXiv:2512.24601)

142 stars · 13 forks · TypeScript · 100% credibility
Found Mar 13, 2026 at 140 stars.
AI Summary

A terminal chat tool that lets an LLM analyze and answer questions about large files or codebases by iteratively generating code to process them.

How It Works

1
📖 Discover the tool

Recursive Language Models let an AI work through huge documents or codebases piece by piece, instead of forgetting details in one oversized prompt.

2
🚀 Get it set up

Install it via npm so the `rlm` command is ready on your computer.

3
🤝 Link your AI

Choose your preferred AI provider (Anthropic, OpenAI, or Google) and enter an API key once to connect it securely.

4
💬 Start chatting

Launch `rlm` to open the interactive chat interface where everything happens.

5
📁 Add your files

Load files, folders, web pages, or pasted text to give it context.

6
🧠 Ask smart questions

Type your question and watch the model break the context into chunks, run sub-queries, and aggregate the results into an answer.

7
📋 Review past work

Browse saved sessions and their full trajectories to revisit how each answer was built.

🎉 Unlock big insights

You now easily get precise answers from massive files, saving hours of manual work.
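The steps above amount to a recursive chunk-and-aggregate loop. Here is a minimal TypeScript sketch of that idea, not rlm-cli's actual code: `AskFn` and `stubAsk` are hypothetical stand-ins for a real provider call (Anthropic, OpenAI, or Google in the real tool).

```typescript
// A model call: given a prompt and a context, return an answer.
// In rlm-cli this would hit a real LLM provider; here it is a stub.
type AskFn = (prompt: string, context: string) => string;

// Split a large context into fixed-size chunks.
function chunk(text: string, size: number): string[] {
  const chunks: string[] = [];
  for (let i = 0; i < text.length; i += size) {
    chunks.push(text.slice(i, i + size));
  }
  return chunks;
}

// Recursively narrow the context: if it fits the limit, query directly;
// otherwise answer a sub-query per chunk, then recurse over the joined
// partial answers to aggregate them.
function recursiveQuery(
  ask: AskFn,
  question: string,
  context: string,
  limit = 1000
): string {
  if (context.length <= limit) return ask(question, context);
  const partials = chunk(context, limit).map((c) =>
    ask(`Answer only from this excerpt: ${question}`, c)
  );
  return recursiveQuery(ask, question, partials.join("\n"), limit);
}

// Stub "model" for a needle-in-a-haystack task: report the token if the
// context contains it. Aggregation composes because the second pass sees
// only the partial answers.
const stubAsk: AskFn = (_q, ctx) => (ctx.includes("XYZZY") ? "XYZZY" : "");
```

The point of the sketch is that no single model call ever sees more than `limit` characters, yet the answer survives the aggregation pass.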


Star Growth

Grew from 140 stars at discovery to 142.
AI-Generated Review

What is rlm-cli?

rlm-cli is a TypeScript CLI that brings Recursive Language Models to your terminal, letting LLMs process huge contexts by generating Python code for chunking, sub-queries, and aggregation instead of cramming everything into one prompt. Fire up `rlm` for an interactive REPL where you load files, whole directories recursively (skipping node_modules), globs, or URLs, then query with Anthropic, OpenAI, or Google models. It saves full trajectories for review and includes benchmarks on long-context datasets.
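Recursive directory loading of the kind described above can be sketched in a few lines. This is a guess at the shape, not rlm-cli's actual loader (which also handles globs, URLs, and presumably binary filtering); `loadTree` and `SKIP` are names invented for the example.

```typescript
import * as fs from "node:fs";
import * as path from "node:path";

// Directories to skip while walking, as the review mentions for node_modules.
const SKIP = new Set(["node_modules", ".git"]);

// Walk a directory tree and collect text file contents keyed by path.
function loadTree(root: string): Map<string, string> {
  const out = new Map<string, string>();
  for (const entry of fs.readdirSync(root, { withFileTypes: true })) {
    const full = path.join(root, entry.name);
    if (entry.isDirectory()) {
      if (SKIP.has(entry.name)) continue; // skip vendored dependencies
      for (const [k, v] of loadTree(full)) out.set(k, v);
    } else if (entry.isFile()) {
      out.set(full, fs.readFileSync(full, "utf8"));
    }
  }
  return out;
}
```

Skipping vendored directories matters here because node_modules alone can dwarf the code you actually want the model to read.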

Why is it gaining traction?

Unlike basic LLM CLIs, rlm-cli shows the full loop live (code generation, execution, sub-queries) in a polished TUI with a trajectory viewer, beating direct prompts on tasks like timeline counting or narrative QA. Recursive glob loading and single-shot scripting (`rlm run --file codebase/ "find bugs"`) make it dead simple for big repos, while multi-provider switching and a GitHub-friendly setup hook devs tired of context blowups. The included benchmarks suggest it handles 10MB+ contexts without losing accuracy.
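A trajectory viewer implies that each step of the loop gets recorded and serialized for later browsing. The record shape below is an assumption for illustration, not rlm-cli's actual schema; `Trajectory`, `StepKind`, and the field names are all invented here.

```typescript
// Kinds of steps a recursive run might record.
type StepKind = "code_gen" | "execution" | "sub_query" | "answer";

interface TrajectoryStep {
  kind: StepKind;
  detail: string; // generated code, stdout, sub-question, or final text
  timestamp: number; // ms since epoch
}

// Accumulates steps during a run and round-trips them through JSON,
// which is what browsing a saved session would need.
class Trajectory {
  private steps: TrajectoryStep[] = [];

  record(kind: StepKind, detail: string): void {
    this.steps.push({ kind, detail, timestamp: Date.now() });
  }

  toJSON(): string {
    return JSON.stringify(this.steps, null, 2);
  }

  static fromJSON(json: string): TrajectoryStep[] {
    return JSON.parse(json) as TrajectoryStep[];
  }
}
```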

Who should use this?

Backend devs dissecting monorepos or legacy codebases, where a plain recursive grep or find falls short on semantic search. AI tinkerers benchmarking long-context performance, or ops folks scripting analysis of logs and docs via stdin or URLs. Pairs well with GitHub Copilot for code review, or with GitHub Actions for automated repo audits on Linux and Windows.

Verdict

Grab it if you're hitting LLM context walls: it installs cleanly via npm, the docs are crisp, and interactive mode shines. At 142 stars it's early, but mature enough for daily use; watch for more benchmarks as it grows.


