Deso-PK / make-trust-irrelevant
Plan-bound authorization architecture for governing privileged effects in untrusted computational agents.
This repository hosts a conceptual essay arguing that agentic AI safety requires kernel-enforced, reduce-only authority mechanisms to eliminate the need for trust.
How It Works
The essay's starting point is that trust is the wrong foundation for keeping AI agents in check. An agent granted broad, persistent authority will eventually cause harm, whether through honest mistakes or through manipulation such as prompt injection, regardless of how well-intentioned it is. Current setups permit concrete failures like accidental deletions and covert data exfiltration because authority is granted up front and never narrows.
The proposed alternative is mechanism instead of trust: grant agents only tight, short-lived permissions that can be reduced but never grown, enforced deep in the system rather than by the agent's own judgment, much as a game engine enforces its rules no matter what a player intends. Mechanics beat hoping for good behavior every time.
The conclusion: a system built this way stays safe even if the agent misbehaves, because the enforcement layer, not the agent, holds the authority.
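The reduce-only idea described above can be sketched as a small capability token. This is an illustrative Python sketch, not the repository's actual design: the `Capability` class, its `attenuate` method, and the scope strings are all hypothetical names chosen for the example. The invariant it demonstrates is the essay's core one: a derived token's permissions and lifetime can only shrink, never grow.

```python
import time


class Capability:
    """Hypothetical reduce-only capability token (illustrative sketch)."""

    def __init__(self, scopes, expires_at):
        self._scopes = frozenset(scopes)
        self._expires_at = expires_at

    def attenuate(self, scopes=None, expires_at=None):
        """Derive a narrower capability.

        Scopes may only shrink and the lifetime may only shorten;
        any attempt to grow authority raises an error.
        """
        new_scopes = self._scopes if scopes is None else frozenset(scopes)
        if not new_scopes <= self._scopes:
            raise PermissionError("capabilities can only be reduced, never grown")
        new_expiry = self._expires_at if expires_at is None else expires_at
        if new_expiry > self._expires_at:
            raise PermissionError("lifetime can only be shortened, never extended")
        return Capability(new_scopes, new_expiry)

    def allows(self, scope):
        """Check a single scope against the token, honoring expiry."""
        return scope in self._scopes and time.time() < self._expires_at


# The enforcement layer holds the root token; the agent only ever
# receives short-lived, attenuated derivatives.
root = Capability({"read:/project", "write:/project/tmp"}, time.time() + 3600)
agent_cap = root.attenuate(scopes={"read:/project"}, expires_at=time.time() + 60)
```

In a real system the check in `allows` would live in the kernel or a supervising process, so the agent cannot bypass it; the sketch only shows the attenuation invariant itself.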