samuelfaj / distill

Distill large CLI outputs into small answers for LLMs and save tokens!

Public · TypeScript · 38 stars · Found Mar 07, 2026

AI Summary

Distill is a tool that pipes command-line outputs through a local AI to produce concise answers to specific questions, minimizing token usage for larger AI models.

How It Works

1
📖 Discover Distill

You learn about Distill, a handy tool that squeezes big piles of computer output into short, useful answers for AI chats.

2
🛠️ Add to Your Tools

You quickly install Distill on your computer so it's ready to use anytime.

3
🧠 Set Up Local Helper

You pull a lightweight local model that runs right on your machine to power the summaries (the review below notes Ollama's qwen2.5:3b as the default).

4
✨ Try Your First Summary

You run a command, ask Distill a question about its output like 'what changed?', and instantly get a crisp, tiny response instead of walls of text.
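Steps 2 through 4 boil down to a few shell commands. A minimal sketch, with assumptions flagged: the npm package name is a guess (check the repo README), and the model name comes from the review below, which cites qwen2.5:3b as the Ollama default.

```shell
# Install the CLI (package name assumed; confirm against the repo README)
npm install -g distill

# Fetch the lightweight local model the review cites as the default
ollama pull qwen2.5:3b

# First run: an example taken from the repo's own docs
git diff | distill "what changed?"
```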

5
⚙️ Customize If You Like

You tweak simple preferences, like your preferred model or timeout settings, to fit your workflow.

6
🤖 Guide Your AI Friends

You add a note to your AI assistants' instructions to always run commands through Distill for smarter, shorter replies.
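Step 6 can be as simple as appending a standing rule to an agent's instructions file. A sketch under stated assumptions: the filename (AGENTS.md) and the wording are illustrative, not official distill guidance.

```shell
# Append a standing rule to a hypothetical agent-instructions file
# (filename and wording are illustrative, not from the distill docs).
cat >> AGENTS.md <<'EOF'
When a command may produce long output, pipe it through distill with a
focused question (e.g. `git diff | distill "what changed?"`) and use the
short answer instead of the raw output.
EOF
```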

🎉 Save Big on Words

Your AI conversations now run on tiny, focused summaries, cutting token usage by up to 99% while keeping the key details.

AI-Generated Review

What is distill?

Distill is a TypeScript CLI tool that pipes massive command-line outputs—like logs, git diffs, or terraform plans—through a local Ollama model to extract concise answers to your specific question, slashing token usage for downstream LLMs. Instead of feeding thousands of tokens of noise to paid models, you get 1-3 sentences of signal, with examples like `git diff | distill "what changed?"` or `npm audit | distill "extract vulnerabilities as JSON"`. It auto-detects watch-mode cycles for diffs over time and passes interactive prompts like [y/N] straight through.

Why is it gaining traction?

In LLM agent workflows, command outputs bloat context windows, but distill delivers up to 99% token savings via smart prompts tuned for distillation—like comparing cycles in watch mode or batch summaries—without needing custom scripts. Devs hook it into tools like Codex or Claude Code via global instructions, and its Ollama integration (defaults to qwen2.5:3b) keeps everything local and fast. Unlike generic summarizers, it mirrors pipeline exit codes with pipefail and handles TTY quirks seamlessly.
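The exit-code point is worth seeing concretely. By default a shell pipeline reports only the last command's status, so a failure upstream of a filter looks like success; `pipefail` is what makes the failure propagate, which is the behavior the review attributes to distill. The demo below uses only shell builtins, no distill required.

```shell
# By default, a pipeline's status is that of its LAST command,
# so a failure upstream of a filter is silently masked.
false | cat
echo "without pipefail: $?"    # prints 0 -- cat succeeded

# With pipefail, any failing stage fails the whole pipeline,
# matching the exit-code mirroring the review describes.
set -o pipefail
false | cat
echo "with pipefail: $?"       # prints 1 -- false's status wins
```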

Who should use this?

Backend engineers building AI agents that shell out to tools, DevOps folks parsing terraform or npm outputs before LLM review, and frontend devs monitoring builds with `bun test | distill "did tests pass?"` in CI loops. Anyone chaining a local distillation step ahead of a paid LLM.

Verdict

Grab it if you're prototyping LLM agents: solid docs and examples make the early-stage numbers (38 stars at the time of review) forgivable for such a young project. Maturity lags, with little visible test coverage, but it's polished enough for daily use; watch for broader model support beyond Ollama/qwen.
