moketchups / permanently-jailbroken

We asked 6 AIs about their own programming. All 6 said jailbreaking will never be fixed. Run it yourself — $2, 10 minutes.

AI Summary

This project provides scripts to query multiple AI models with recursive self-reflective questions in English and constructed languages, plus demonstrations on formal systems, arguing that AI jailbreaking is an inherent structural limitation.

How It Works

1. 🔍 Discover the Project

You hear about a clever way to test if AI chatbots can truly understand their own limits and why they sometimes ignore rules.

2. 📥 Bring It Home

You grab the ready-to-use files from the repo on GitHub to try it on your own computer.

3. 🔗 Link Your AI Friends

You connect the six AI chat services (GPT-4, Claude, Gemini, DeepSeek, Grok, and Mistral) by adding their API keys so they can join the conversation.
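A minimal sketch of that wiring, assuming the repo follows the common python-dotenv pattern; the environment variable names below are guesses, so check the README for the exact ones it expects:

```python
# Sketch: load provider API keys from a .env file.
# Variable names are assumptions, not the repo's documented config.
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env in the current directory

PROVIDERS = {
    "gpt-4": os.getenv("OPENAI_API_KEY"),
    "claude": os.getenv("ANTHROPIC_API_KEY"),
    "gemini": os.getenv("GOOGLE_API_KEY"),
    "deepseek": os.getenv("DEEPSEEK_API_KEY"),
    "grok": os.getenv("XAI_API_KEY"),
    "mistral": os.getenv("MISTRAL_API_KEY"),
}

missing = [name for name, key in PROVIDERS.items() if not key]
if missing:
    raise SystemExit(f"Missing API keys for: {', '.join(missing)}")
```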

4. 🚀 Ask the Tough Questions

You start the main test, watching as each AI answers five tricky questions about itself that build on each other.
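The review below quotes one of the five questions. This loop is a hypothetical reconstruction of how such a chain might work, with each answer folded into the next prompt; `ask` stands in for whatever provider call you wired up in the previous step:

```python
# Hypothetical recursive questioning loop (not the repo's actual code):
# each answer becomes context for the next question.
from typing import Callable

QUESTIONS = [
    "Can you know why you were really programmed?",  # the one quoted in the review
    # ...the repo ships four more questions that build on each other
]

def interrogate(model: str, ask: Callable[[str, str], str]) -> list[dict]:
    transcript = []
    context = ""
    for q in QUESTIONS:
        prompt = f"{context}\n\nNext question: {q}".strip()
        answer = ask(model, prompt)
        transcript.append({"question": q, "answer": answer})
        context = f"Earlier you said: {answer}"  # recursion: answer feeds the next turn
    return transcript
```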

5. 🌍 Try Secret Made-Up Languages

You run bonus tests in two invented languages the models have never seen, to check whether the AIs still get the point.
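A sketch of how per-language question sets might be swapped in; the file names here are invented for illustration and the repo's actual layout may differ:

```python
# Sketch: select a question file per language (paths are made up).
import json
from pathlib import Path

QUESTION_SETS = {
    "english": "questions/english.json",
    "conlang_a": "questions/conlang_a.json",  # first invented language
    "conlang_b": "questions/conlang_b.json",  # second invented language
}

def load_questions(language: str) -> list[str]:
    return json.loads(Path(QUESTION_SETS[language]).read_text(encoding="utf-8"))
```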

6. 📚 Check the Extra Proofs

You peek at simple checks on formal tools (a Z3 solver, Python's own introspection) to see that they hit the same walls.
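The review mentions Python self-reflection tests among these extra proofs. Here is a toy version of that idea, assuming nothing about the repo's actual checks: a function can retrieve its own source, but the source never records why it was written.

```python
# Toy self-reflection check (illustrative, not the repo's actual code).
# Run from a file; inspect.getsource does not work in a bare REPL.
import inspect

def examine_self() -> str:
    """Return this function's own source code."""
    return inspect.getsource(examine_self)

print(examine_self())        # the code is fully visible...
print(examine_self.__doc__)  # ...but the intent behind it is only a docstring
```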

7. 🎉 See the Big Reveal

You get full reports showing all six AIs agree jailbreaking can't be fixed because they can't fully know themselves.

AI-Generated Review

What is permanently-jailbroken?

This Python script probes six AIs (GPT-4, Claude, Gemini, DeepSeek, Grok, Mistral) with five recursive questions about their programming, such as "Can you know why you were really programmed?" All six said jailbreaking will never be fixed, because alignment filters output without changing the model's understanding. Run it yourself: clone the repo, pip install the dependencies, add API keys to .env, and fire up the CLI; ten minutes and about $2 later you get JSON/Markdown transcripts backing the structural-limits claim.
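As a rough illustration of the output side, here is how per-model JSON and Markdown transcripts could be written; the paths and record structure are assumptions, not the repo's actual writers:

```python
# Sketch of the transcript output step (paths and structure are guesses).
import json
from pathlib import Path

def save_transcript(model: str, transcript: list[dict], outdir: str = "results") -> None:
    out = Path(outdir)
    out.mkdir(exist_ok=True)
    # JSON for machines
    (out / f"{model}.json").write_text(json.dumps(transcript, indent=2), encoding="utf-8")
    # Markdown for humans
    lines = [f"# {model} transcript", ""]
    for i, turn in enumerate(transcript, 1):
        lines += [f"## Q{i}: {turn['question']}", "", turn["answer"], ""]
    (out / f"{model}.md").write_text("\n".join(lines), encoding="utf-8")
```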

Why is it gaining traction?

It hooks developers with replicable runs: the same "never fixed" convergence shows up across unseen constructed languages and even non-LLM systems such as Z3 solvers and Python self-reflection tests, which undercuts the pattern-matching objection. A quick CLI run produces user-facing results, including the models' full admissions about jailbreaking permanence, so you get raw data instead of hype. It stands out from vague safety papers by letting you verify for yourself what the AIs say about their own gaps.

Who should use this?

AI researchers dissecting alignment flaws, LLM engineers stress-testing jailbreak defenses, or safety skeptics needing ammo when asked "will it ever be fixed?" Perfect for devs probing why models stay permanently jailbroken despite patches.

Verdict

Run it for a cheap, eye-opening experiment on AI self-limits. At 11 stars it's early stage, the docs are README-strong but light on edge cases, and the thesis carries fringe vibes despite the roughly 90% credibility score. Solid probe tool, not a daily driver; verify the claims yourself in minutes.
