raroque/vibe-security-skill

Agent skill that audits vibe-coded apps for common security vulnerabilities introduced by AI coding assistants

Found Mar 16, 2026 at 25 stars
AI Analysis
AI Summary

A skill for AI coding assistants that audits applications for common security vulnerabilities like hardcoded secrets and weak database protections.
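The hardcoded-secrets problem the summary mentions can be sketched in a few lines. This is an illustrative example, not code from the skill itself; the function and variable names are invented for demonstration:

```typescript
// Anti-pattern AI assistants often generate: a secret baked into source.
// const STRIPE_KEY = "sk_live_abc123..."; // ends up in the repo and the bundle

// Safer pattern an audit nudges you toward: read secrets from the
// environment and fail fast when they are missing.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage (hypothetical): const stripeKey = requireEnv("STRIPE_SECRET_KEY");
```

Failing fast at startup beats discovering a missing or leaked key at request time.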

How It Works

1. 💡 Discover Vibe Security

While using your AI helper to quickly build an app, you learn about a handy security checker that spots common safety mistakes.

2. Add the security skill

You add this security tool to your AI helper so it can watch for risks as you build.

3. 🔍 Ask for a safety check

You tell your AI helper to review your app for security issues, and it scans everything thoroughly.

4. ⚠️ Spot the problems

Your AI helper highlights easy-to-miss dangers like exposed private info or weak protections.

5. 🔧 Fix the issues

Following your AI's suggestions, you patch the weak spots to make your app stronger.

6. Enjoy a secure app

Your app is now safe from common pitfalls, ready to share confidently with users.


AI-Generated Review

What is vibe-security-skill?

This agent skill audits apps built with AI coding assistants like Claude Code or OpenAI Codex for the security holes those tools commonly introduce, such as hardcoded secrets or missing database row-level security. You install it with npx skills add pointed at the GitHub repo, then trigger audits with commands like /vibe-security or natural queries like "check for vulns." It pulls in only the checks relevant to your stack (Supabase RLS, Stripe payments, React Native bundles) without bloating context.
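A minimal sketch of the kind of check such an audit performs. This is not the skill's actual rule set; the patterns below are illustrative guesses at well-known key formats (Stripe's sk_live_ prefix, AWS's AKIA prefix):

```typescript
// Illustrative hardcoded-secret scanner: flags source lines that look
// like live credentials committed to the repo. Real audit rules are broader.
const SECRET_PATTERNS: RegExp[] = [
  /sk_live_[0-9a-zA-Z]{10,}/,               // Stripe live secret key
  /AKIA[0-9A-Z]{16}/,                       // AWS access key ID
  /-----BEGIN (RSA |EC )?PRIVATE KEY-----/, // embedded private key
];

function findSecrets(source: string): { line: number; match: string }[] {
  const hits: { line: number; match: string }[] = [];
  source.split("\n").forEach((text, i) => {
    for (const pattern of SECRET_PATTERNS) {
      const m = text.match(pattern);
      if (m) hits.push({ line: i + 1, match: m[0] });
    }
  });
  return hits;
}
```

An agent skill wraps checks like this in prompts and file-walking logic, so the assistant can report the offending file and line directly in chat.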

Why is it gaining traction?

Unlike generic linters, it targets AI-specific anti-patterns like client-tamperable prices or tokens in localStorage, and it auto-activates when your agent touches auth or payments code. Devs on Reddit and GitHub praise its tight integration with Claude Code, GitHub Copilot in VS Code or IntelliJ, and skills.io repos, saying it saves hours of manual review. The MIT license and community contributions keep the rules fresh as agent tooling from GitHub, OpenAI, and Microsoft evolves.
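The "client-tamperable price" anti-pattern mentioned above can be sketched like this (a hypothetical checkout handler; the catalog and function names are invented for illustration):

```typescript
// Anti-pattern: trusting a price sent by the client.
// A user can edit the request body and pay $0.01 for anything.
// function checkout(req) { charge(req.body.price); }

// Safer pattern: look the price up server-side from a trusted catalog
// and ignore whatever amount the client claims.
const CATALOG: Record<string, number> = {
  "plan-basic": 900,  // prices in cents
  "plan-pro": 2900,
};

function computeChargeCents(productId: string, quantity: number): number {
  const unitPrice = CATALOG[productId];
  if (unitPrice === undefined) throw new Error(`Unknown product: ${productId}`);
  if (!Number.isInteger(quantity) || quantity < 1) {
    throw new Error("Invalid quantity");
  }
  return unitPrice * quantity;
}
```

The fix is the same regardless of payment provider: the only client inputs a checkout endpoint should accept are a product identifier and a quantity, never an amount.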

Who should use this?

Solo devs and small teams vibe-coding fullstack apps with GitHub Copilot, Claude agents, or Antigravity skills, especially those building on Supabase, Firebase, or Stripe. Frontend and mobile builders shipping React Native or Next.js apps who skip security basics. AI-agent enthusiasts weighing skills against MCP who need quick pre-deploy audits.

Verdict

Grab it if you're leaning on AI agents: solid docs and user-facing triggers make it dead simple, though 25 stars and a 1.0% credibility score signal early days. Maturity lags, with no tests visible, but you can contribute rules and watch it grow; skip it for production-critical scans until it's more battle-tested.


