chocks / locode

Public

Local-first AI coding CLI. Routes simple tasks to a local LLM (Ollama), complex tasks to Claude. Saves tokens.

21 stars · 100% credibility

Found Mar 10, 2026 at 19 stars.
TypeScript
AI Summary

Locode is a terminal-based AI assistant for coding that automatically routes simple tasks to a local model and complex ones to a cloud service like Claude to optimize speed and cost.

How It Works

1
🔍 Discover Locode

You hear about a coding assistant that runs mostly on your own machine, saving money while staying fast.

2
📥 Get it on your computer

Install the CLI globally with npm in one quick step.

3
🛠️ Run the welcome guide

Run `locode setup`, a simple wizard that prepares your local model via Ollama (such as qwen2.5-coder:7b) and optionally links your Anthropic key so Claude can handle the tough jobs.

4
💬 Start chatting about code

⚡ Quick local tasks

Simple searches and file reads run fast on your own machine.

🧠 Smart cloud boosts

Tricky refactors or new code get extra smarts from the cloud when needed.
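The local-vs-cloud split above can be sketched as a tiny router. This is a hypothetical reconstruction, not locode's actual source: the pattern lists and function names are assumptions, standing in for the regex rules the review mentions.

```typescript
// Hypothetical sketch of local-vs-cloud routing; not locode's actual code.
type Route = "local" | "cloud";

// Patterns suggesting a cheap, read-only task (assumed rules).
const LOCAL_PATTERNS: RegExp[] = [
  /\b(grep|search|find)\b/i,
  /\b(read|show|cat|open)\b.*\bfile\b/i,
  /\bgit (log|status|diff)\b/i,
];

// Patterns suggesting generation or multi-file reasoning (assumed rules).
const CLOUD_PATTERNS: RegExp[] = [
  /\b(refactor|rewrite|redesign)\b/i,
  /\b(write|add|generate)\b.*\b(test|tests|feature)\b/i,
];

function route(prompt: string): Route {
  if (CLOUD_PATTERNS.some((re) => re.test(prompt))) return "cloud";
  if (LOCAL_PATTERNS.some((re) => re.test(prompt))) return "local";
  // Default to the free local model; escalate later if it struggles.
  return "local";
}
```

A regex-first pass like this is cheap enough to run on every prompt, which is why only ambiguous cases would need an LLM confidence check.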

5
📊 See your savings

Check stats to watch how much time and money you're saving by mixing local and cloud help.
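The savings math behind those stats is simple: locally handled tokens cost nothing, so the bill covers only the escalated share. A back-of-the-envelope sketch, where the per-token price is a made-up placeholder rather than any real Anthropic rate:

```typescript
// Rough savings estimate: local tokens are free, only cloud tokens are billed.
// PRICE_PER_1K is an illustrative placeholder, not a real API rate.
const PRICE_PER_1K = 0.01; // USD per 1k tokens (assumed)

interface Usage {
  localTokens: number;
  cloudTokens: number;
}

function savings(u: Usage) {
  const total = u.localTokens + u.cloudTokens;
  const hybridCost = (u.cloudTokens / 1000) * PRICE_PER_1K;    // what you pay
  const cloudOnlyCost = (total / 1000) * PRICE_PER_1K;         // what you would have paid
  return {
    hybridCost,
    cloudOnlyCost,
    saved: cloudOnlyCost - hybridCost,
    localShare: total === 0 ? 0 : u.localTokens / total,
  };
}
```

With 80% of tokens handled locally, the hybrid bill is one fifth of the cloud-only bill, which is the kind of gap the stats view would surface.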

🚀 Code smarter every day

Your coding flows faster with reliable help right in your terminal, powerful yet affordable.


AI-Generated Review

What is locode?

Locode is a TypeScript CLI for local-first AI coding. It routes simple tasks such as grep, file reads, or git queries to a free local LLM via Ollama, while escalating complex coding jobs like refactoring or test writing to Claude. Install it globally with npm, run `locode setup` to fetch Ollama and a model like qwen2.5-coder:7b plus your Anthropic key, then chat in a REPL or fire off one-shot prompts with `locode run`. It tracks tokens, shows savings stats, and benchmarks hybrid vs. claude-only vs. local-only modes to prove the cost cuts.

Why is it gaining traction?

Unlike full-cloud tools that burn tokens on everything, locode routes based on regex rules or LLM confidence, keeping quick repo exploration local to save cash while handing architecture work to Claude's deeper reasoning. A custom `locode.yaml` lets you tweak models, thresholds, and even add MCP servers for tools like GitHub issues. The REPL's interactive confirmations and auto-escalation when the local model struggles make it feel responsive without babysitting.
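A `locode.yaml` along these lines would express that flexibility. This is a hypothetical sketch: the key names and values are assumptions for illustration, not the tool's documented schema.

```yaml
# Hypothetical locode.yaml — key names are illustrative, not the documented schema.
local:
  model: qwen2.5-coder:7b    # Ollama model named in the review
cloud:
  model: claude-sonnet       # placeholder model name
routing:
  confidence_threshold: 0.7  # escalate when local confidence drops below this (assumed)
mcp:
  - name: github             # example MCP server for GitHub issues
```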

Who should use this?

Backend devs debugging git diffs or grepping large codebases without API bills. Full-stack teams prototyping features where simple file ops stay local but complex logic needs Claude. Solo coders on laptops running Ollama who hate context-switching between local LLMs and paid APIs.

Verdict

Try it for local-first coding workflows: the REPL and routing shine despite alpha status, 14 stars, and a 1.0% credibility score. Low maturity means you should expect config tweaks, but the benchmarks and token logs make it worth the spin-up for token-conscious CLI fans.


