attnres is a Rust library implementing Attention Residuals, a technique that replaces fixed residual connections with learned depth-wise attention, for Transformer experiments built on the Burn framework. It ships with benchmarks, examples, and interactive web demos.
How It Works
You discover the project while looking for new ways Transformer models can route information across layers.
You read the introductory guide to see how attention residuals replace fixed additive skip connections with learned attention over earlier layer outputs.
With one click, you open the browser demo and watch the attention weights update in real-time visualizations.
You adjust the controls to change the model size, watching the residual attention shift from uniform to selective across depths.
You follow the examples to train a small model or compare behaviors on your own machine.
You come away seeing how each layer learns to weight earlier representations, ready to try attention residuals in your own experiments.
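The core idea the walkthrough describes, replacing the fixed update `h + f(h)` with a learned, softmax-weighted mix over the layer history, can be sketched in plain Rust. This is a hypothetical illustration with std-only vectors and hand-picked logits; the library itself implements the mechanism on Burn tensors with trained parameters, and the function names here are ours, not the crate's API.

```rust
// Minimal sketch of an attention residual. Instead of the fixed update
// h_{t+1} = h_t + f(h_t), the next state attends over ALL earlier hidden
// states with depth-wise logits (learned in the real library), so the
// residual stream becomes a softmax-weighted mix of the layer history.

fn softmax(logits: &[f32]) -> Vec<f32> {
    let max = logits.iter().cloned().fold(f32::NEG_INFINITY, f32::max);
    let exps: Vec<f32> = logits.iter().map(|&x| (x - max).exp()).collect();
    let sum: f32 = exps.iter().sum();
    exps.iter().map(|e| e / sum).collect()
}

/// Mix the history of layer outputs with per-depth logits,
/// then add the current layer's transformation `f_out`.
fn attention_residual(history: &[Vec<f32>], logits: &[f32], f_out: &[f32]) -> Vec<f32> {
    assert_eq!(history.len(), logits.len());
    let weights = softmax(logits);
    let dim = f_out.len();
    let mut mixed = vec![0.0f32; dim];
    for (h, w) in history.iter().zip(&weights) {
        for d in 0..dim {
            mixed[d] += w * h[d];
        }
    }
    // h_{t+1} = sum_k softmax(logits)_k * h_k + f(h_t)
    mixed.iter().zip(f_out).map(|(m, f)| m + f).collect()
}

fn main() {
    // Two earlier layer states and the current layer's output.
    let history = vec![vec![1.0, 0.0], vec![0.0, 1.0]];
    let f_out = vec![0.5, 0.5];
    // Equal logits reduce to a plain average of the history, i.e. a
    // "soft" version of the ordinary residual connection.
    let out = attention_residual(&history, &[0.0, 0.0], &f_out);
    println!("{:?}", out); // [1.0, 1.0]
}
```

With uniform logits the mix degenerates to an average, which is why the demo shows attention starting near-uniform and becoming selective as the logits are trained.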