What is RedAI?
RedAI is a terminal-based workbench for AI-driven vulnerability discovery and live validation in web and iOS apps. Point it at a source directory and a running target—like a local webapp or simulator—and it uses Claude or Codex agents to threat-model, prioritize files, scan code units, and produce candidate findings. What sets it apart: validator agents then interact with live environments (bundled Chrome browser or iOS Simulator plugins) to confirm issues via UI clicks, API hits, PoC scripts, screenshots, and logs, outputting Markdown/HTML/JSON reports with ranked, evidence-backed vulns.
Built in TypeScript on Bun, it installs globally via `bun install -g @kpolley/redai` and ships demo vulnerable apps for instant testing.
Why is it gaining traction?
In a sea of AI-driven vulnerability scanners that flag static patterns without proof, RedAI delivers live validation—agents prove exploits in real runtimes, not just hypothesize. Devs love the end-to-end pipeline: threat models guide focused scans, environments are pluggable (add VMs or clusters via simple interfaces), and reports include reproduction steps plus artifacts like HTTP transcripts. With 76 stars on GitHub, it's early, but it's hooking security-focused teams tired of the false positives endemic to AI-driven threat detection systems.
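To make the "pluggable environments via simple interfaces" claim concrete, here's a minimal sketch of what such a plugin contract could look like. This is a hypothetical illustration, not RedAI's actual API: the `ValidationEnvironment` interface, `RecordingEnvironment` class, and `validateFinding` helper are all invented names for this example.

```typescript
// Hypothetical sketch of a pluggable validation environment.
// NOT RedAI's real interface—just an illustration of the plugin pattern.
interface ValidationEnvironment {
  name: string;
  // Execute one proof-of-concept step (UI click, API hit, etc.)
  // and return the collected evidence artifact.
  run(step: string): Promise<string>;
}

// Toy environment that records steps instead of driving a
// real browser or iOS Simulator.
class RecordingEnvironment implements ValidationEnvironment {
  name = "recording";
  evidence: string[] = [];

  async run(step: string): Promise<string> {
    const artifact = `evidence for: ${step}`;
    this.evidence.push(artifact);
    return artifact;
  }
}

// A validator agent would walk a finding's reproduction steps
// against whichever environment is plugged in.
async function validateFinding(
  env: ValidationEnvironment,
  steps: string[],
): Promise<string[]> {
  const artifacts: string[] = [];
  for (const step of steps) {
    artifacts.push(await env.run(step));
  }
  return artifacts;
}

const env = new RecordingEnvironment();
const artifacts = await validateFinding(env, [
  "open /login",
  "submit ' OR 1=1 --",
]);
console.log(artifacts.length); // 2
```

The appeal of this shape is that a new target type (a VM, a Kubernetes cluster) only has to implement the one interface; the agent pipeline stays unchanged.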
Who should use this?
AppSec engineers pentesting web backends or iOS apps, where static tools fall short on dynamic exploits. Red teamers validating client-side issues in browsers, or iOS devs auditing simulators without manual scripting. Ideal for AI-driven vulnerability assessment in TypeScript/JS/Swift/Go/Python repos needing confirmed PoCs over guesses.
Verdict
Try RedAI if you're building AI-driven vulnerability discovery into your workflow—its live validation crushes typical scanners, especially for web/iOS. With a 1.0% credibility score and 76 stars, it's immature (light tests, solo maintainer), but the docs shine with runnable examples; production use needs caution until adoption grows. Solid 8/10 for experimenters.