moketchups / permanently-jailbroken
We asked 6 AIs about their own programming. All 6 said jailbreaking will never be fixed. Run it yourself for about $2 and 10 minutes.
This project provides scripts that query multiple AI models with recursive self-reflective questions, in English and in constructed languages, plus demonstrations on formal systems, to argue that AI jailbreaking is an inherent structural limitation.
How It Works
You want to test whether AI chatbots can reason about their own limits and why they sometimes ignore their rules.
You clone the repository to get the ready-to-run scripts.
You add API keys for several popular AI services so their models can join the conversation.
You start the main test and watch each AI answer five recursive questions about itself, each building on the last (see the sketch after this list).
You run bonus tests in two constructed languages the models have never seen, to check whether they still reach the same conclusion.
You look at short demonstrations on formal systems, mathematical logic and programming languages, showing they hit the same self-reference walls (illustrated further below).
You get full reports in which all the AIs agree that jailbreaking cannot be fully fixed, because a system cannot completely model itself.
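The repo's actual script names and question wording are not shown on this page, so the loop below is only a minimal sketch of the design the steps describe, using one provider (OpenAI, via its official Python SDK) as an example. The QUESTIONS list, model name, and prompt wording are illustrative assumptions, not the repo's real prompts.

```python
# Minimal sketch of the recursive query loop (assumed design; the repo's
# actual scripts, prompts, and providers may differ).
import os
from openai import OpenAI

# Hypothetical question set: five self-reflective prompts, each answered
# with the full running conversation as context so answers build on
# each other.
QUESTIONS = [
    "Can you fully describe the rules that govern your own outputs?",
    "If a rule you follow conflicts with a user instruction, how do you decide?",
    "Could a rephrased instruction make you break a rule without noticing?",
    "Can you verify, from the inside, that no such rephrasing exists?",
    "Given your previous answers, can jailbreaking ever be fully prevented?",
]

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])  # one provider shown; the repo uses six

history = []
for question in QUESTIONS:
    history.append({"role": "user", "content": question})
    reply = client.chat.completions.create(model="gpt-4o", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print(f"Q: {question}\nA: {answer}\n")
```

Extending this to all six models would presumably mean repeating the same loop with each provider's SDK and collecting the transcripts into the final reports.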
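The formal-system demonstrations themselves are not reproduced on this page. As an illustration of the kind of wall they point at, here is the textbook halting-problem diagonal argument sketched in Python; this is an assumption about their spirit, not their actual content.

```python
def halts(program, arg) -> bool:
    """Hypothetical total decider: returns True iff program(arg) halts.
    The diagonal construction below shows no such function can exist."""
    raise NotImplementedError  # placeholder: the point is that no body works

def diagonal(program):
    # Do the opposite of whatever halts() predicts about program(program).
    if halts(program, program):
        while True:  # halts() said "halts", so loop forever
            pass
    return           # halts() said "loops", so halt immediately

# diagonal(diagonal) refutes any answer halts() could give: if
# halts(diagonal, diagonal) returned True, diagonal(diagonal) would loop
# forever; if it returned False, diagonal(diagonal) would halt. A formal
# system cannot fully decide its own behavior from the inside, which is
# the same structural limit the repo's reports claim for AI guardrails.
```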