larsderidder/context-lens

See what your AI sees. Framework-agnostic LLM context window visualizer.

178 stars · 13 forks · 100% credibility
Found Feb 12, 2026 at 10 stars (18x growth since)
AI Analysis
TypeScript
AI Summary

Context Lens is a proxy that sits between AI coding assistants and the LLM APIs they call, visualizing context-window contents, token usage, costs, and conversation structure in a web interface.

How It Works

1. 🔍 Discover Context Lens

You hear about a tool that lets you look inside your AI coding assistant's context window to see exactly what's influencing its answers.

2. 📦 Get it set up

A single command gets it running on your machine; no install steps or extra dependencies.

3. 🚀 Launch with your AI assistant

Run a command like `npx context-lens claude "your prompt"`; it starts your assistant behind the proxy and opens the dashboard in your browser automatically.

4. 🌐 Watch the live view

A dashboard in your browser tracks every conversation turn in real time.

5. 💬 Chat and code away

Keep using your AI coding tool as usual while the proxy captures every request and response behind the scenes.

6. 📈 Explore the insights

Drill into charts of context-window usage and cost, diffs between turns, and findings that flag bigger issues.

Master your AI

Now you can spot why answers go off track, trim wasted context to cut costs, and make your coding sessions smarter and smoother.
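The per-turn tracking in the steps above can be sketched as a running accumulator. This is a minimal illustration, assuming a made-up `Turn` shape and a placeholder flat price; it is not Context Lens's actual schema or pricing logic:

```typescript
// Hypothetical per-session usage summary, in the spirit of the dashboard.

interface Turn {
  promptTokens: number;     // context sent to the model this turn
  completionTokens: number; // tokens the model produced
}

const USD_PER_1K_TOKENS = 0.01; // placeholder flat rate, not a real price

function sessionStats(turns: Turn[]) {
  const prompt = turns.reduce((s, t) => s + t.promptTokens, 0);
  const completion = turns.reduce((s, t) => s + t.completionTokens, 0);
  // Context growth between consecutive turns (the "diff between chats" view).
  const growth = turns
    .slice(1)
    .map((t, i) => t.promptTokens - turns[i].promptTokens);
  return {
    totalTokens: prompt + completion,
    costUsd: ((prompt + completion) / 1000) * USD_PER_1K_TOKENS,
    growth,
  };
}

const stats = sessionStats([
  { promptTokens: 1000, completionTokens: 200 },
  { promptTokens: 1500, completionTokens: 300 },
]);
// stats.growth[0] is 500: the context grew by 500 tokens between turns.
```

Real assistants report usage per model and per request; the point here is only that "space usage, money spent, changes between chats" reduces to sums and deltas over captured turns.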


AI-Generated Review

What is context-lens?

Context Lens is a TypeScript proxy that slips between your LLM coding tools (Claude Code, Aider, Codex, or Gemini CLI) and their APIs, capturing every request to visualize the context window. Fire it up with `npx context-lens claude "your prompt"` for a web UI at localhost:4041 showing treemaps of context composition (system prompts, tools, images), turn diffs, cost breakdowns, and findings like overflow risks or unused tools. See what your AI sees, like a lens over the context window, without changing your workflow.
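The treemap view described above amounts to bucketing each request's token budget by source. A minimal sketch of that idea follows; the `ContextPart` shape and the rough 4-characters-per-token estimate are illustrative assumptions, not Context Lens's real data model or tokenizer:

```typescript
// Rough context-window breakdown, in the spirit of the treemap view.

type ContextPart = { label: string; text: string };

// Common rough heuristic: ~4 characters per token (assumption, not exact).
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Token count and percentage share for each labelled slice of the context.
function breakdown(parts: ContextPart[]) {
  const counts = parts.map((p) => ({
    label: p.label,
    tokens: estimateTokens(p.text),
  }));
  const total = counts.reduce((sum, c) => sum + c.tokens, 0);
  return counts.map((c) => ({
    ...c,
    share: total === 0 ? 0 : Math.round((100 * c.tokens) / total),
  }));
}

// Example: a system prompt, tool schemas, and one user message.
const report = breakdown([
  { label: "system", text: "x".repeat(400) },   // ~100 tokens → 20%
  { label: "tools", text: "x".repeat(1200) },   // ~300 tokens → 60%
  { label: "messages", text: "x".repeat(400) }, // ~100 tokens → 20%
]);
```

A real proxy would count tokens with the model's tokenizer and read the slices out of captured API payloads, but the rendering step is just this kind of share calculation.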

Why is it gaining traction?

It auto-detects tools via headers or prompts, threads conversations across subagents, and flags waste like oversized tool results, which log diving can't match. Export sessions as LHAR for sharing, track costs per model, and persist data across restarts. Devs like the real-time streaming passthrough and the MITM mode for Cloudflare-blocked ChatGPT, which make it dead simple to spot context buildup from commit history or token-hungry chat history.
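The "flags waste like large tool results" behaviour can be pictured as simple threshold rules over captured requests. The rule, shapes, and 20% cutoff below are illustrative assumptions, not the tool's actual findings engine:

```typescript
// Hypothetical finding: flag tool results that eat a large share of context.

type ToolResult = { tool: string; tokens: number };

function flagLargeToolResults(
  results: ToolResult[],
  contextTokens: number,
  maxShare = 0.2 // assumption: warn above 20% of the window
): string[] {
  return results
    .filter((r) => r.tokens / contextTokens > maxShare)
    .map((r) => `${r.tool}: ${r.tokens} tokens is a large share of context`);
}

const findings = flagLargeToolResults(
  [
    { tool: "read_file", tokens: 30000 }, // 30% of the window: flagged
    { tool: "grep", tokens: 800 },        // under threshold: ignored
  ],
  100000
);
// findings → one warning, for read_file
```

The value of surfacing this in a dashboard rather than logs is that the threshold check runs against every captured request automatically, instead of requiring you to eyeball raw payloads.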

Who should use this?

AI coding-agent users tweaking prompts in Aider or Claude Code: spot repo contents bloating the context, usage spikes, or secrets leaking into tool calls. Also a fit for prompt engineers auditing system-prompt injections, and for devs checking what an agent pulls into context on large repos.

Verdict

Solid prototype at 10 stars: the early-development badge warns of rough edges, but the docs and CLI nail the quickstart. Grab it for debugging LLM context bloat if you run coding agents; teams that need stability should hold off until more tests and polish land.


