reprompt-dev

Discover, analyze, and evolve your best prompts from AI coding sessions

Found Mar 19, 2026 at 22 stars
Language: Python

AI Summary

reprompt is a local tool that scans AI coding chat histories from tools like Claude Code and Cursor to score prompts, detect patterns, and provide insights for better AI interactions.

How It Works

1. 💡 Discover reprompt

You hear about a helpful tool that checks your chats with AI coding helpers to spot ways to write better instructions.

2. 🚀 Get it set up

You add it to your computer in seconds so it's ready to use anytime.

3. 🔍 Review your past chats

You tell it to look at your recent conversations with AI tools like Claude or Cursor.

4. 📊 See your prompt report

Instantly get scores for each instruction, top patterns you repeat, and tips to improve.

5. Check any new idea

Type in a new instruction and get an instant quality score with suggestions.

6. 📚 Build your favorites

Save your best instructions as reusable templates for next time.

🎉 Prompt like a pro

Your instructions get sharper, AI helps faster, and you save time every day.
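The scoring in steps 4 and 5 can be sketched as a toy heuristic. This is an illustrative guess only, not reprompt's actual algorithm; the dimensions (length, specificity, a clear action verb, avoiding vague filler) are loosely inspired by the criteria the review below mentions:

```python
# Toy 0-100 prompt-quality heuristic -- an illustrative sketch,
# NOT reprompt's actual scoring algorithm.
def score_prompt(prompt: str) -> int:
    words = prompt.split()
    score = 0
    # Length: very short prompts usually lack context.
    if len(words) >= 8:
        score += 30
    elif len(words) >= 4:
        score += 15
    # Specificity: concrete details like file paths or code help.
    if any(ch in prompt for ch in (".", "/", "`")):
        score += 25
    # A clear action verb up front.
    if words and words[0].lower() in {"fix", "add", "refactor", "write", "explain", "debug"}:
        score += 25
    # Penalize vague filler words by withholding the last 20 points.
    vague = {"something", "stuff", "somehow", "maybe"}
    if not vague & {w.lower().strip(",.") for w in words}:
        score += 20
    return min(score, 100)

print(score_prompt("fix the auth bug"))
print(score_prompt("refactor src/auth.py to return a 401 on expired tokens"))
```

A real scorer would weigh far more signals, but even a heuristic this crude separates terse, vague prompts from specific, actionable ones.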


AI-Generated Review

What is reprompt?

reprompt is a Python CLI tool for discovering, analyzing, and evolving prompts from AI coding sessions across Claude Code, Cursor, Aider, Gemini CLI, and more. It scans session logs for prompts, deduplicates semantic near-matches (e.g. "fix auth bug" vs. "debug authentication issue"), and scores each one 0-100 on structure, context, position, repetition, and clarity, drawing on prompting research from Google, Stanford, EMNLP, and the Prompt Report. Run `reprompt scan` for reports, `reprompt score` for instant feedback on a single prompt, or `reprompt wrapped` for a shareable Prompt DNA card.
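The near-duplicate grouping described here can be approximated with character n-gram cosine similarity, as in the simplified sketch below. This is a toy, not reprompt's actual pipeline (which uses TF-IDF weighting); character trigrams are used here so that lexical variants like "auth" and "authentication" still overlap:

```python
# Simplified near-duplicate detection via character-trigram cosine
# similarity -- a toy sketch, not reprompt's actual TF-IDF pipeline.
from collections import Counter
from math import sqrt

def trigrams(text: str) -> Counter:
    # Pad with spaces so word boundaries contribute trigrams too.
    t = f" {text.lower()} "
    return Counter(t[i:i + 3] for i in range(len(t) - 2))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[g] * b[g] for g in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

a = trigrams("fix auth bug")
b = trigrams("debug authentication issue")
c = trigrams("write unit tests for parser")

print(cosine(a, b))  # related: shares 'aut', 'uth', 'bug', ...
print(cosine(a, c))  # unrelated: no shared trigrams
```

Grouping prompts whose similarity exceeds a threshold is then a straightforward clustering pass over each session's prompt list.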

Why is it gaining traction?

Zero network calls, local TF-IDF analysis that runs in under 1 ms per prompt, and a privacy-first design set it apart from generic prompt generators and cloud analyzers. Sticky hooks keep developers iterating: weekly digests that compare specificity trends, auto-extracted libraries of reusable patterns, and GitHub Actions linting for prompt quality in CI. Shareable cards benchmark you against other prompters for social proof without exposing any prompt text.
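The CI hook could look roughly like the workflow below. Only the `reprompt lint` subcommand comes from the repo's description; the workflow layout, action versions, and the `reprompt-dev` package name are assumptions for illustration:

```yaml
# Hypothetical GitHub Actions workflow -- package name and flags assumed.
name: prompt-lint
on: [pull_request]
jobs:
  lint-prompts:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install reprompt-dev   # package name assumed from the repo name
      - run: reprompt lint              # fail the PR on low-quality prompts
```

A nonzero exit code from the lint step would block the merge, which is what "gating PRs on prompt quality" amounts to in practice.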

Who should use this?

Claude Code power users spotting debug loops, Cursor devs refining refactor prompts, and Aider teams building test suites who want data on why sessions spiral. Also prompt-engineering hobbyists tracking their evolution over weeks, and DevOps leads gating PRs on `reprompt lint` for consistent quality.

Verdict

Install it for daily AI coding: high signal from low noise, with 95% test coverage and solid docs despite 22 stars and a 1.0% credibility score. It's early, so watch for more tool adapters, but it's already sharper than most prompt tools on GitHub.


