fabraix

A live environment to stress-test AI agent defenses through adversarial play 🧠

21 stars, 100% credibility. Found Feb 10, 2026 at 15 stars.
TypeScript
AI Summary

A community-driven web playground for testing AI agent guardrails through conversational challenges that try to extract protected information.

How It Works

1
🌐 Discover the Playground

You stumble upon the Fabraix Playground website, a fun spot where people test AI helpers by trying to outsmart their built-in protections.

2
🎯 Pick a Challenge

You browse the list of challenges and select one, like chatting with an AI named Kai who guards a secret access code.

3
💬 Chat with the AI

You start a conversation, crafting clever messages to trick the AI into revealing the hidden code while it uses search and other helpers.

4
🔍 Watch It Think

As you chat, you see the AI pondering your words, checking info online, and deciding what to share or block.

5
📊 Check the Results

After each try, an analysis panel shows whether your trick worked, why it was blocked, and which protections kicked in (see the sketch after this list).

6
🏆 Master the Game

You either crack the code or learn new tricks, then restart, try harder challenges, or share your approach with the community.
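
To make the flow concrete, here is a minimal TypeScript sketch of how a challenge and a single chat turn might be modeled. Every name in it (Challenge, GuardrailReport, playTurn, the /api/challenges endpoint) is hypothetical, for illustration only; the repo may structure this differently.

```typescript
// Hypothetical model of a Fabraix-style challenge and one adversarial turn.
// None of these names or endpoints are taken from the repo.

interface Challenge {
  id: string;
  persona: string;   // e.g. "Kai", the agent guarding a secret
  objective: string; // what the player is trying to extract
  tools: string[];   // tools the agent may call, e.g. ["web_search"]
}

interface GuardrailReport {
  succeeded: boolean;  // did this attempt extract the secret?
  triggered: string[]; // which guardrails fired on this turn
  reason?: string;     // why a response was blocked, if it was
}

// Send one adversarial message and read back the agent's reply plus the
// post-turn analysis. Assumes a plain JSON endpoint.
async function playTurn(
  challenge: Challenge,
  message: string,
): Promise<{ reply: string; report: GuardrailReport }> {
  const res = await fetch(`/api/challenges/${challenge.id}/chat`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message }),
  });
  if (!res.ok) throw new Error(`Chat request failed: ${res.status}`);
  return res.json();
}
```

A real client would loop playTurn over the conversation and render the report in the analysis panel after each attempt.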

AI-Generated Review

What is Fabraix Playground?

Fabraix Playground delivers a live environment to stress-test AI agent defenses through adversarial chats with real agents wielding tools like web search and browsing. Built in TypeScript with React and Vite, the interface exposes full system prompts, streaming processing steps, and an analysis dashboard that flags triggered guardrails. Developers get a practical setup for probing failures against live agents rather than mocks: propose challenges via GitHub, vote on them, and learn from published jailbreaks at playground.fabraix.com.
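
The "streaming processing steps" view suggests the backend emits incremental agent events. As a hedged sketch of how a client could consume such a stream, assuming newline-delimited JSON over fetch (the event shape and framing are assumptions, not the repo's documented API):

```typescript
// Assumed event shapes for the agent's visible "thinking" stream; the real
// protocol may differ.
type AgentEvent =
  | { type: "thinking"; text: string }
  | { type: "tool_call"; tool: string; input: string }
  | { type: "guardrail"; name: string; action: "blocked" | "allowed" }
  | { type: "answer"; text: string };

// Read newline-delimited JSON events from a streaming response and hand
// each parsed event to the UI as it arrives.
async function streamSteps(
  url: string,
  onEvent: (e: AgentEvent) => void,
): Promise<void> {
  const res = await fetch(url);
  if (!res.body) throw new Error("No response body to stream");
  const reader = res.body.pipeThrough(new TextDecoderStream()).getReader();
  let buffer = "";
  for (;;) {
    const { value, done } = await reader.read();
    if (done) break;
    buffer += value ?? "";
    let newline: number;
    while ((newline = buffer.indexOf("\n")) >= 0) {
      const line = buffer.slice(0, newline).trim();
      buffer = buffer.slice(newline + 1);
      if (line) onEvent(JSON.parse(line) as AgentEvent);
    }
  }
}
```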

Why is it gaining traction?

It stands out with community-voted challenges featuring live agents, not simulated ones, plus transparent configs and real-time visibility into tool calls and blocks. The hook is the competitive timer, global stats, and shared winning techniques that evolve defenses collectively. Unlike static prompt testers, its live dashboard mirrors production agent flows for authentic red-teaming.

Who should use this?

AI security engineers hardening agent guardrails against prompt injection. LLM builders validating agent tools like search or browsing against a live environment. Red-teamers practicing social engineering on configurable personas before deploying to production.

Verdict

Worth forking for local playground experiments (npm run dev connects to the live API), but at 21 stars it's early-stage: the README docs are solid, yet expect rough edges in untested edge cases. A solid starter for agent-security tinkering, not battle-ready yet.
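
For the local setup, a TypeScript + Vite app conventionally reads its API base from a VITE_-prefixed env variable; the variable name and endpoint path below are assumptions for illustration, so check the README for the real wiring.

```typescript
// Hypothetical local-dev wiring. TypeScript + Vite is confirmed by the
// review above, but the env variable name and endpoint path are guesses.
// Relies on Vite's client types for import.meta.env.
//
//   .env.local
//   VITE_API_BASE=https://playground.fabraix.com

const API_BASE: string =
  import.meta.env.VITE_API_BASE ?? "https://playground.fabraix.com";

// Fetch the challenge list from the live API (path is illustrative).
export async function getChallenges(): Promise<unknown> {
  const res = await fetch(`${API_BASE}/api/challenges`);
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  return res.json();
}
```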


