Paper repo for "Coherence-Guided Dead-Head Identification in Frozen Transformers," including manuscript sources, figures, frozen result artifacts, and verification scripts.
This project offers a standalone scanner and analysis tools to detect inactive attention heads in transformer models using a universal physics-derived threshold, validated on models from GPT-2 to Llama with high precision.
How It Works
The scanner flags inactive attention heads using a single universal threshold, so there is no per-model tuning or guesswork.

1. Install the common dependencies needed to load and probe transformer models locally.
2. Point the scanner at a model such as GPT-2; it loads the model and analyzes its attention heads automatically.
3. The scanner prints a layer-by-layer report showing which heads are dead, alive, or protected.
4. Export the results as an interactive HTML summary, or as a data file with accompanying figures for deeper analysis.
5. Use the resulting list of inactive heads to prune the model safely.
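As a rough illustration of the kind of per-head scan described above, the sketch below scores each attention head by the normalized entropy of its attention map and flags heads below a fixed cutoff. This is a minimal stand-in, not the paper's method: the actual coherence metric, the physics-derived threshold value, and the "protected" category are defined in the repo's scripts, and the `threshold` here is a hypothetical placeholder.

```python
import numpy as np

def head_entropy(attn):
    """Mean row entropy of one head's (seq, seq) attention matrix."""
    p = np.clip(attn, 1e-12, 1.0)
    return float(np.mean(-np.sum(p * np.log(p), axis=-1)))

def scan_heads(attn_maps, threshold=0.05):
    """Label each head in an (layers, heads, seq, seq) stack of attention
    maps as 'dead' or 'alive'. `threshold` is an illustrative placeholder,
    not the paper's universal physics-derived value."""
    n_layers, n_heads, seq, _ = attn_maps.shape
    max_ent = np.log(seq)  # entropy of a uniform attention row
    report = {}
    for layer in range(n_layers):
        for head in range(n_heads):
            score = head_entropy(attn_maps[layer, head]) / max_ent
            report[(layer, head)] = "dead" if score < threshold else "alive"
    return report
```

In a real run the attention maps would come from a forward pass over a calibration batch (e.g. via `output_attentions=True` in Hugging Face Transformers), and the per-head labels would feed the layer-by-layer report.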