JaredStewart

Tree-sitter-powered code indexing server that gives LLM agents precise, on-demand access to symbols, implementations, callers, tests, and grep across multi-language projects - so they explore codebases through targeted queries instead of loading everything into context.

177 stars · 15 forks · 100% credibility
Found Feb 12, 2026 at 126 stars.
AI Analysis (Rust)

AI Summary

CodeRLM is an indexing system and AI coding companion add-on that enables precise exploration of codebases by providing targeted views of files, symbols, implementations, callers, and searches.

How It Works

1. 🔍 Discover CodeRLM

You hear about a handy tool that lets your AI coding buddy deeply understand big projects without getting overwhelmed.

2. 📦 Add to your AI

You easily install the add-on into your AI coding companion to unlock smarter code exploration.

3. ⚙️ Wake the code scanner

With a simple build and start, you launch the background helper that quietly scans and maps your project's files.

4. 📂 Open your project

You load your codebase into the AI chat, and it automatically connects to the scanner.

5. 💬 Ask smart questions

You chat naturally or use a quick command like '/coderlm how does login work?' to explore.

6. 🧠 AI explores precisely

Your AI pulls exact details like file overviews, function code, who calls what, and searches—feeling super efficient.

🎉 Codebase mastered

You and your AI navigate, understand, and work on even huge projects with confidence and speed.
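The flow above can be sketched as a minimal client talking to a local index server over JSON. This is an illustrative sketch only: the base URL, port, and endpoint paths here are assumptions, not CodeRLM's documented API.

```python
import json
import urllib.parse
import urllib.request

# Hypothetical base URL for the local index server; the real port may differ.
BASE_URL = "http://localhost:8080"


def build_query_url(endpoint: str, **params: str) -> str:
    """Build a request URL for an index endpoint (e.g. a grep or symbol lookup)."""
    query = urllib.parse.urlencode(params)
    return f"{BASE_URL}{endpoint}?{query}" if query else f"{BASE_URL}{endpoint}"


def query_index(endpoint: str, **params: str) -> dict:
    """Send a GET request to the index server and decode the JSON response."""
    with urllib.request.urlopen(build_query_url(endpoint, **params)) as resp:
        return json.loads(resp.read())


# Example (requires a running server; endpoint name is an assumption):
# overview = query_index("/symbols", path="src/auth.rs")
```

An agent would issue several such small, targeted requests per question rather than pasting whole files into its context window.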

AI-Generated Review

What is coderlm?

CodeRLM is a Rust-based indexing server that scans multi-language codebases (Rust, Python, TypeScript/JavaScript, Go) using tree-sitter to extract symbols, file trees, and cross-references. It exposes a JSON API for LLM agents to query precise details like implementations, callers, tests, variables, grep patterns, and code snippets on demand—instead of loading everything into context. A Claude Code plugin adds slash commands like `/coderlm query="how does authentication work?"` and a Python CLI for structured workflows.
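The "on demand instead of everything in context" idea can be illustrated with a small budgeting helper: an agent ranks search hits and keeps only the best ones under a line budget. This is a hedged sketch; the hit schema (`score`, `snippet` keys) is an assumption, not CodeRLM's actual response format.

```python
def trim_hits(hits: list[dict], max_lines: int) -> list[dict]:
    """Keep the highest-scoring search hits until the line budget is spent.

    Each hit is assumed to carry a relevance `score` and a code `snippet`;
    the budget caps how many snippet lines reach the agent's context.
    """
    kept, used = [], 0
    for hit in sorted(hits, key=lambda h: h["score"], reverse=True):
        cost = len(hit["snippet"].splitlines())
        if used + cost > max_lines:
            break
        kept.append(hit)
        used += cost
    return kept
```

With a helper like this, even a grep that matches hundreds of files yields a context payload of fixed, predictable size.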

Why is it gaining traction?

It stands out by letting agents explore large codebases through targeted queries, auto-reindexing on file changes, and juggling multiple projects (up to a configurable limit) without eviction hiccups. The Claude integration hooks into the session lifecycle for seamless init/search/impl flows, while API endpoints like `/symbols/callers` and `/grep` deliver exact results with context. Developers notice the speed: instead of heuristic file dumps, agents get indexed precision across symbols and grep.
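The search-then-callers flow described above can be expressed as an ordered request plan. The endpoint names (`/grep`, `/symbols/callers`) come from the review, but the query-parameter names here are assumptions made for illustration.

```python
import urllib.parse


def explore_plan(search_term: str, symbol: str) -> list[tuple[str, dict]]:
    """Plan a targeted exploration: grep for a term, then look up a symbol's callers."""
    return [
        ("/grep", {"pattern": search_term}),
        ("/symbols/callers", {"name": symbol}),
    ]


def render_paths(plan: list[tuple[str, dict]]) -> list[str]:
    """Turn each (endpoint, params) step into a concrete request path."""
    return [path + "?" + urllib.parse.urlencode(params) for path, params in plan]


# Example: investigate how login works, then see who calls the auth entry point.
# render_paths(explore_plan("login", "authenticate"))
```

Each step returns exact, indexed results, so the agent never has to guess file paths or scan whole directories.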

Who should use this?

Backend engineers refactoring unfamiliar Rust/Go monorepos, where agents need callers and tests without context bloat. Frontend devs auditing TypeScript codebases for symbol dependencies via quick CLI queries. Teams using Claude Code for code reviews, tired of agents hallucinating file paths.

Verdict

Worth a spin for LLM-driven code exploration: solid docs and a quickstart make setup fast, though the project is still early in its maturity. Increase max-projects for bigger teams; the MIT license invites contributions.


