coleam00/adversarial-dev

GAN-inspired three-agent harness that pits a generator against an adversarial evaluator to build applications with quality gates at every step. Built with Claude Agent SDK and Codex SDK.

17 stars · 100% credibility
Found Mar 30, 2026 at 17 stars.
AI Analysis
TypeScript
AI Summary

A system that coordinates specialized AI agents to plan, construct, and adversarially evaluate software applications in iterative sprints for higher quality outcomes.

How It Works

1
🔍 Discover the project

You find this GitHub project, which coordinates teams of AI agents that build real apps by planning, creating, and rigorously testing each other's work.

2
🛠️ Get everything ready

You clone the project and install the basic dependencies on your computer so everything is ready to go.

3
🧠 Choose your AI team

Claude team: go with the precise and thoughtful Claude AIs.

Codex team: select the fast and powerful Codex AIs.

4
💡 Share your app idea

You simply describe what you want, like 'a task manager with charts, categories, and search' in a sentence or two.

5
📋 AI planner creates the roadmap

The planner AI turns your idea into a detailed product plan with features broken into manageable sprints.

6
🔄 Build and battle-test in loops

The builder AI creates features one by one while the tester AI pokes holes and gives tough feedback, retrying until everything passes.

7
🎉 Enjoy your working app

Your complete, battle-tested application appears in a folder, ready to use with all features working smoothly.
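The plan, build, and battle-test flow in the steps above can be sketched as a minimal loop. Everything below is an illustrative stub, not the repo's actual API: in the real harness, `planSprints`, `build`, and `evaluate` would be calls into Claude Agent SDK or Codex SDK agents rather than local functions.

```typescript
// Sketch of the plan -> build -> evaluate loop described above.
// All names, types, and heuristics here are illustrative assumptions.

type Feature = { name: string; passed: boolean };

interface Sprint {
  features: string[];
}

// Planner: split a one-line idea into sprints of features (stubbed).
function planSprints(idea: string): Sprint[] {
  return [
    { features: [`${idea}: core model`] },
    { features: [`${idea}: UI`] },
  ];
}

// Builder: "implement" a feature (stubbed as always producing output).
function build(feature: string): string {
  return `code for ${feature}`;
}

// Evaluator: adversarially score the output 1-10 (stubbed heuristic).
function evaluate(code: string): number {
  return code.includes("code for") ? 10 : 1;
}

// Harness: retry each feature until the evaluator's gate passes,
// or give up after a bounded number of attempts.
function run(idea: string, gate = 8, maxRetries = 3): Feature[] {
  const results: Feature[] = [];
  for (const sprint of planSprints(idea)) {
    for (const feature of sprint.features) {
      let passed = false;
      for (let attempt = 0; attempt < maxRetries && !passed; attempt++) {
        passed = evaluate(build(feature)) >= gate;
      }
      results.push({ name: feature, passed });
    }
  }
  return results;
}
```

The key design point mirrored here is that the builder never grades its own work: only the evaluator's score, checked against a threshold, decides whether a feature is accepted or retried.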


AI-Generated Review

What is adversarial-dev?

Adversarial-dev is a GAN-inspired harness that pits a generator agent against an adversarial evaluator to build full applications from short prompts. You feed it a description like "build a task manager with REST API and dashboard," and it plans sprints, implements features with git commits, then runs ruthless quality gates—evaluator probes for breaks, scores 1-10, and forces retries until everything passes. Built in TypeScript with Bun, it runs on Claude Agent SDK or Codex SDK, outputting a working app in a workspace directory.
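The "score 1-10 and force retries" gate described above can be expressed as a tiny decision function. This is a sketch: the threshold, verdict shape, and feedback text are assumptions for illustration, not the repo's actual contract.

```typescript
// Sketch of the evaluator's quality gate: score 1-10, accept or force a retry.
// Threshold and verdict shape are assumed, not taken from the repo.

type Verdict = {
  score: number;
  action: "accept" | "retry";
  feedback?: string;
};

function gate(score: number, threshold = 8): Verdict {
  if (score < 1 || score > 10) {
    throw new RangeError("score must be between 1 and 10");
  }
  return score >= threshold
    ? { score, action: "accept" }
    : { score, action: "retry", feedback: "fix failing edge cases" };
}
```

Keeping the gate a pure function of the score makes the accept/retry decision auditable: the generator cannot bypass it, only raise its score.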

Why is it gaining traction?

It crushes self-evaluation bias in solo AI agents by creating adversarial tension: generator builds, evaluator attacks with edge cases and real runs, iterating via contracts until code survives. Developers dig the sprint-based flow with CLI simplicity—`bun run claude-harness "your prompt"`—and file-based agent comms that keep contexts sharp for reliable, production-like apps. The harness enforces dev gates at every step, turning vague prompts into robust prototypes faster than manual coding.
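The file-based agent comms mentioned above can be sketched as agents exchanging JSON messages through a shared workspace directory. The directory layout and message schema below are assumptions for illustration, not the harness's real protocol.

```typescript
// Sketch of file-based agent communication: each agent writes JSON messages
// into a shared directory and reads the other agent's replies from disk.
// Paths and the Message shape are illustrative assumptions.
import { mkdtempSync, writeFileSync, readFileSync } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

type Message = { from: "generator" | "evaluator"; body: string };

// Shared workspace directory standing in for the harness's workspace.
const inbox = mkdtempSync(join(tmpdir(), "agent-comms-"));

// Persist a message to disk and return its path for the other agent to read.
function send(msg: Message): string {
  const path = join(inbox, `${msg.from}-${Date.now()}.json`);
  writeFileSync(path, JSON.stringify(msg));
  return path;
}

// Read a message back from disk.
function receive(path: string): Message {
  return JSON.parse(readFileSync(path, "utf8")) as Message;
}
```

Passing messages through files rather than one long chat transcript is what keeps each agent's context small: an agent only reads the artifacts relevant to its current task.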

Who should use this?

AI workflow tinkerers prototyping web apps or APIs who are tired of flaky agent outputs. Indie devs needing quick MVPs with charts, search, and auth without endless debugging. Teams experimenting with adversarial development, using agent harnesses for security-style testing or complex full-stack builds.

Verdict

Grab it if you're into agentic dev: solid docs and dual SDK support make it easy to hack on, though only 17 stars signals early maturity. Test on toy projects first; it'll improve as the underlying models do.


