eric-tramel
47 stars · 100% credibility
Found Feb 20, 2026 at 21 stars
AI Summary (Python)

A standalone checker that analyzes text for formulaic AI writing patterns using regex rules, outputting a 0-100 score, violation details, and improvement suggestions.

How It Works

1
🔍 Discover slop-guard

You hear about a handy writing checker that spots robotic patterns in text and gives a simple score.

2
📥 Grab the tool

Download the free checker to your computer and run it locally.

3
🔌 Link to your AI helper

Connect it to your AI writing assistant so it can run a scan whenever you ask.

4
✏️ Paste or pick text

Share your writing with the assistant or select a file to check.

5
📊 Get your score

See a clear 0-100 score, highlights of issues, and friendly tips to make it sound more natural.

6
✨ Polish perfect prose

Apply the suggested fixes, and your writing reads fresh, human, and ready to shine.


Star Growth

This repo grew from 21 to 47 stars.
AI-Generated Review

What is slop-guard?

Slop-guard is a Python tool that scans text for formulaic AI writing patterns and assigns a 0-100 slop score using pure regex rules, with no LLMs or APIs required. Feed it prose via check_slop(text) or check_slop_file(path) and it returns JSON with the score, violation details with context snippets, per-category counts, and targeted advice such as "vary sentence lengths." It also runs as a stdio MCP server under uv, making it easy to wire into an assistant and catch AI slop in GitHub repos, YouTube scripts, or news drafts.
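The repo's actual rule set and function internals aren't reproduced here; as a rough illustration of the approach the review describes (regex rules, per-category penalties, a 0-100 score where clean prose scores high, JSON-ready output), a minimal sketch with made-up patterns and penalties might look like this:

```python
import re

# Hypothetical rule table in the spirit of slop-guard: category ->
# (regex, per-match penalty, advice). The real tool's rules differ.
RULES = {
    "buzzwords": (re.compile(r"\bdelve\b", re.I), 10,
                  "Swap 'delve' for a plainer verb."),
    "claude_tics": (re.compile(r"\bnot just \w+, but\b", re.I), 15,
                    "Drop the 'not just X, but Y' frame."),
}

def check_slop(text: str) -> dict:
    """Return a JSON-ready report: 100 is clean, 0 is pure slop."""
    violations, counts, advice, penalty = [], {}, [], 0
    for category, (pattern, cost, tip) in RULES.items():
        for m in pattern.finditer(text):
            lo = max(0, m.start() - 20)  # context snippet bounds
            violations.append({"category": category,
                               "match": m.group(),
                               "context": text[lo:m.end() + 20]})
            counts[category] = counts.get(category, 0) + 1
            penalty += cost
        if category in counts:
            advice.append(tip)
    return {"score": max(0, 100 - penalty),
            "violations": violations,
            "counts": counts,
            "advice": advice}

report = check_slop("Let's delve into why this is not just fast, but elegant.")
print(report["score"], report["counts"])
```

Keeping each rule as (pattern, penalty, advice) means the score, the per-category counts, and the improvement tips all fall out of one pass over the rule table.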

Why is it gaining traction?

Unlike cloud-based AI detectors, slop-guard runs locally and fast, flagging specifics such as overused "delve" verbs, bullet-heavy structure, and Claude tics like "not just X, but Y." Developers appreciate the actionable output (exact matches, penalties, and fix suggestions), plus benchmarks on real newspaper corpora showing that clean prose clusters at high scores. It's a checker you can run on a draft before slop sinks its quality.

Who should use this?

Content devs polishing blog posts or docs, AI-assisted writers auditing their own Claude output, and journalists vetting submissions for slop. Python scripters automating prose checks in CI pipelines or GitHub Actions will like the MCP server for Claude Code hooks.
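For the Claude Code hook-up, a standard MCP client config entry along these lines should work; the server name, the uv arguments, and the install path below are assumptions, so check the repo's README for the exact invocation:

```json
{
  "mcpServers": {
    "slop-guard": {
      "command": "uv",
      "args": ["run", "--directory", "/path/to/slop-guard", "slop-guard"]
    }
  }
}
```

Because the server speaks MCP over stdio, the client just launches the command and pipes JSON-RPC through it; no port or API key is involved.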

Verdict

Grab it if you're battling AI slop: solid docs, benchmarks, and an MIT license make it production-ready, though 12 stars and 1.0% credibility signal early maturity. Tune its parameters for custom needs, but test on your own corpus first.


