bilalimamoglu / sift

Public

Turn noisy command output into short, actionable diagnoses for AI coding agents.

Found Mar 20, 2026 at 24 stars.
TypeScript

AI Summary

Sift condenses noisy test failures, lint errors, build logs, and similar outputs into actionable root causes and fixes using smart shortcuts and lightweight AI.

How It Works

1
🔍 Discover sift

You hear about a handy tool that turns walls of confusing test errors into simple root causes and fixes.

2
📦 Get sift

You add it to your computer with one easy command, and it's ready to use right away.

3
🧠 Connect your AI helper

You set up a quick connection to a smart service so sift can think and summarize when needed.

4
🚀 Feed it your test output

You run your tests and pipe the messy results through sift, instantly getting a clear list of what's broken and how to fix it.

5
🔄 Rerun smarter

After a quick fix, you rerun just the remaining problems and see exactly what's left.

6
👀 Zoom in if needed

For tricky spots, you ask for more details or the original bits without digging through everything.

🎉 Debug like magic

Your tests pass faster, bugs vanish quicker, and your AI buddy focuses on real fixes instead of reading noise.
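The run-fix-rerun loop the steps describe can be sketched in TypeScript. Everything here (`runTests`, `failing`, the toy suite) is an illustrative stand-in for how a rerun-only-what-failed flow behaves, not sift's actual implementation:

```typescript
// Minimal sketch of the loop above: run tests, keep only the failures,
// fix something, then rerun just the remaining failures.
// All names are hypothetical stand-ins, not sift's API.

type TestResult = { name: string; passed: boolean };

// Pretend test runner: a test passes once a fix for it has been applied.
function runTests(tests: string[], fixed: Set<string>): TestResult[] {
  return tests.map((name) => ({ name, passed: fixed.has(name) }));
}

function failing(results: TestResult[]): string[] {
  return results.filter((r) => !r.passed).map((r) => r.name);
}

const suite = ["test_login", "test_signup", "test_logout"];
const fixed = new Set<string>();

// First run: everything fails; the condensed view is just the failing names.
let remaining = failing(runTests(suite, fixed));
console.log(remaining); // all three tests

// Apply one fix, then rerun only the remaining tests (the "rerun smarter" step).
fixed.add("test_login");
remaining = failing(runTests(remaining, fixed));
console.log(remaining); // only the two still-failing tests
```

The point of the sketch: each rerun operates on the shrinking `remaining` list, so the output your agent has to read shrinks with it.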


AI-Generated Review

What is sift?

Sift is a TypeScript CLI tool that captures noisy command output from tests, lints, builds, or diffs and distills it into concise diagnoses (root causes, failure anchors, and fix suggestions) for feeding to AI coding agents. Pipe pytest, vitest, or tsc logs through `sift exec --preset test-status -- pytest` and it shrinks a 13k-line log to a handful of actionable bullets, often via heuristics alone that skip model calls entirely. Unlike the SIFT image-feature projects that share its name, it targets dev workflows, saving tokens so your main agent can focus on fixes.
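The heuristics-first condensing idea can be illustrated with a small sketch: scan raw test output for known failure anchors and emit deduplicated bullets. The pattern list and the `condense` function are assumptions for illustration, not sift's real matchers:

```typescript
// Hypothetical sketch of heuristic condensing: keep only lines that look
// like failure anchors, deduplicate them, and drop everything else.

const FAILURE_PATTERNS: RegExp[] = [
  /^FAILED\s+\S+/, // pytest failure summary line
  /^\s*✕\s+.+$/,   // vitest/jest failing test marker
  /error TS\d+:/,  // tsc compile error
];

function condense(rawLog: string): string[] {
  const bullets = new Set<string>();
  for (const line of rawLog.split("\n")) {
    if (FAILURE_PATTERNS.some((p) => p.test(line))) {
      bullets.add(line.trim());
    }
  }
  return [...bullets];
}

const noisy = [
  "============ test session starts ============",
  "collected 640 items",
  "FAILED tests/test_api.py::test_login - AssertionError",
  "FAILED tests/test_api.py::test_login - AssertionError", // duplicate noise
  "============ 1 failed, 639 passed ============",
].join("\n");

console.log(condense(noisy));
// one deduplicated failure anchor instead of the full log
```

A few regexes like these can resolve most routine failures, which is how a tool in this style avoids spending model tokens on the common case.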

Why is it gaining traction?

It hooks devs by slashing AI context costs (benchmarks show 62% fewer tokens and 65% faster loops on a real 640-test suite) while presets handle common pains like shared env blockers or snapshot drift without always hitting LLMs. Rerun flows like `sift rerun --remaining` auto-narrow to the failing tests, and agent installers wire usage instructions into Claude/Codex prompts. No fluff: heuristics fire first, with cheap models as a fallback.
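The "heuristics fire first, cheap models as fallback" routing can be sketched as below. `KNOWN_CAUSES`, `classifyFailure`, and `callCheapModel` are hypothetical names, and the model call is stubbed out; this only illustrates the control flow, not sift's internals:

```typescript
// Sketch of heuristics-first diagnosis with a model fallback:
// known failure signatures short-circuit before any LLM is consulted.

const KNOWN_CAUSES: Array<[RegExp, string]> = [
  [/ECONNREFUSED|address already in use/i,
    "shared env blocker: a required service or port is unavailable"],
  [/snapshot/i,
    "snapshot drift: stored snapshot no longer matches output"],
];

function classifyFailure(log: string): string | null {
  for (const [pattern, diagnosis] of KNOWN_CAUSES) {
    if (pattern.test(log)) return diagnosis; // heuristic hit: no model call
  }
  return null;
}

function callCheapModel(log: string): string {
  // Placeholder for a lightweight LLM call; returns a stub here.
  return "needs model triage: " + log.slice(0, 40);
}

function diagnose(log: string): string {
  return classifyFailure(log) ?? callCheapModel(log);
}

console.log(diagnose("Error: listen EADDRINUSE: address already in use :::5432"));
console.log(diagnose("obscure stack trace with no known pattern"));
```

Only logs that fall through every pattern reach the (cheap) model, which is where the token savings come from.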

Who should use this?

Backend devs debugging floods of pytest/jest failures, frontend teams sifting walls of ESLint/tsc output, or infra folks scanning terraform plans and npm audits before AI triage. Ideal if you're piping logs to Claude or Codex and tired of burning tokens on noise.

Verdict

Try the test-status preset now; the docs and benchmarks are solid for an early TypeScript gem at 24 stars. Maturity shows in Vitest coverage (80%+ of lines), but scale cautiously until adoption grows.

