What is cinsights?
cinsights is a Python tool that analyzes coding agent sessions from Claude Code, Cursor, or Codex, turning raw traces into team-wide insights on friction, patterns, and improvements. It ingests data from local files, Entire.io git checkpoints, or Phoenix traces, then generates per-project digests with quick wins such as CLAUDE.md rules and feature suggestions, plus developer profiles that track interaction styles over time. Run it from the CLI (`pip install cinsights`, `cinsights refresh`, `cinsights digest project my-app`) and view results in a Svelte web UI at localhost:8100.
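The quickstart above can be sketched as a short shell session. The three commands are taken from the text; the comments, ordering, and the virtualenv suggestion are assumptions:

```shell
# Install the CLI (a Python environment, ideally a virtualenv, is assumed)
pip install cinsights

# Ingest and index sessions from configured sources
# (local files, Entire.io git checkpoints, or Phoenix traces)
cinsights refresh

# Generate a digest for one project; "my-app" is a placeholder project name
cinsights digest project my-app

# Results are then browsable in the Svelte web UI at http://localhost:8100
```

A typical loop is to re-run `cinsights refresh` as new sessions accumulate, then regenerate the digest for projects you care about.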
Why is it gaining traction?
Unlike basic coding agent benchmarks or GitHub coding assistants that focus on single runs, cinsights aggregates across sessions to compare coding agents, ranking agent skills by metrics such as read-edit ratios and error rates, with friction analysis grounded in evidence from the sessions themselves. It stands out for actionable outputs: copy-paste fixes for recurring pain points, plus support for local LLMs like Ollama, which makes open-source agent insights cheap to run without cloud costs. Developers gravitate to the "doctor" view for trends and to the zero-LLM indexing step that scores sessions before any model is called.
Who should use this?
Engineering leads at teams mixing Claude Code, Cursor, and Codex across projects who need visibility into agent effectiveness and workflow bottlenecks. It also suits developers tuning coding agent models or skills via Entire.io or Phoenix data, and managers benchmarking GitHub coding AI usage before scaling to paid tiers. Ideal for mid-sized squads evaluating coding agent pricing and ROI through session trends.
Verdict
Try it if you're deep into coding agents: quick setup and solid docs make the alpha stage (17 stars, 1.0% credibility) forgivable, and the strong tests and Makefile targets help contributors. Skip it for production until the database scales beyond SQLite; pair it with cron for daily insights.
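The cron pairing suggested above might look like the following crontab fragment. The schedule, log path, and the choice of `cinsights refresh` as the daily job are assumptions; adjust to whatever command your workflow actually needs:

```shell
# Run a daily ingest on weekday mornings (crontab -e to install this line)
# m  h  dom mon dow  command
  0  7  *   *   1-5  cinsights refresh >> "$HOME/cinsights-refresh.log" 2>&1
```

Logging to a file keeps failed runs visible, since cron otherwise discards output unless mail is configured.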