DoubtedSteam / Flash_Attn_with_Score

Flash Attention implementation that returns both output and attention scores. High-performance, memory-efficient attention with score extraction for analysis and visualization.

AI Summary

This repository provides an optimized attention kernel that computes both the attention output and the full attention-score matrix in a single fused pass, making score analysis and visualization efficient.

How It Works

1
πŸ” Discover faster AI thinking

You hear about a fused attention kernel that runs faster than a naive implementation while also exposing exactly how the model connects tokens.

2
πŸ“¦ Add it to your collection

You add the library to your PyTorch project.

3
🧩 Prepare your idea pieces

You gather the query, key, and value tensors that carry the representations your model needs to relate.

4
⚑ Launch and see magic

With one call, you get both the attention output and a map of how tokens attended to each other, at fused-kernel speed; a usage sketch follows this list.

5
πŸ“Š Explore the connection map

You inspect the attention scores to see which tokens influenced each other most.

6
🏎️ Test the speed boost

You run the bundled benchmarks to confirm it is much faster than a naive implementation, with numbers to prove it.

πŸŽ‰ AI thinks faster, you see clearer

Your model's attention runs blazingly fast while giving you direct insight into which tokens drove its decisions.
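
Here is a minimal end-to-end sketch of the steps above. The function name and the causal flag come from the review below; the import path, shapes, and device are assumptions, so adjust them to the repo's actual layout.

```python
import torch
import matplotlib.pyplot as plt

# Assumed import path -- check the repo for the real module name.
from flash_attn_with_score import attention_with_scores

B, H, L, D = 1, 8, 256, 64  # batch, heads, sequence length, head dim
q, k, v = (torch.randn(B, H, L, D, device="cuda", dtype=torch.float16)
           for _ in range(3))

# One fused call returns the attention output and the raw score matrix.
out, scores = attention_with_scores(q, k, v, causal=True)
print(out.shape, scores.shape)  # (B, H, L, D) and (B, H, L, L)

# Step 5: visualize one head's scores as a query-to-key heatmap.
plt.imshow(scores[0, 0].float().cpu().numpy(), cmap="viridis")
plt.xlabel("key position")
plt.ylabel("query position")
plt.show()
```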


AI-Generated Review

What is Flash_Attn_with_Score?

This Python library extends FlashAttention-2 with PyTorch and Triton kernels to compute high-performance, memory-efficient attention while returning both the output and the raw attention scores (QK^T * scale) in one fused call. That removes the pain of recomputing scores in a separate pass for analysis or visualization, which is slow in standard FlashAttention setups because the fused kernel never materializes the score matrix. Drop in `attention_with_scores(q, k, v, causal=True)` and get scores shaped (B, H, Q_len, K_len) alongside the output, with causal masking and GQA support.
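
To make the returned quantities concrete, here is a naive reference of what the fused call is described as computing. This is a sketch for illustration, not the repo's Triton kernel; it assumes equal query and key lengths when the causal mask is applied.

```python
import torch

def naive_attention_with_scores(q, k, v, causal=True):
    # q, k, v: (B, H, L, D); assumes Q_len == K_len when causal=True.
    scale = q.shape[-1] ** -0.5
    scores = (q @ k.transpose(-2, -1)) * scale  # raw QK^T * scale, (B, H, Q_len, K_len)
    attn = scores
    if causal:
        mask = torch.ones(scores.shape[-2:], dtype=torch.bool, device=q.device).tril()
        attn = scores.masked_fill(~mask, float("-inf"))
    out = torch.softmax(attn, dim=-1) @ v  # (B, H, Q_len, D)
    return out, scores  # scores are pre-softmax, matching the description above
```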

Why is it gaining traction?

Unlike stock FlashAttention releases or PyTorch SDPA, it fuses score extraction into the kernel instead of duplicating the QK^T computation, and the repo's benchmarks on Qwen-style configs report 4-10x speedups over naive baselines. Developers grab it for attention analysis without the full-matrix memory blow-up of a second pass, plus extras like row/column score sums and cross-token sums for dissecting LLM attention patterns; the sketch below shows the equivalent reductions. A built-in benchmarking suite lets you compare it against FlashAttention-2 and naive PyTorch baselines instantly.
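
The repo reportedly ships helpers for these reductions; the sketch below shows the equivalent operations on the returned scores tensor. A random tensor stands in for real scores so the snippet runs standalone, and the span boundaries are made up for illustration.

```python
import torch

B, H, Q_len, K_len = 1, 8, 256, 256
scores = torch.randn(B, H, Q_len, K_len)  # stand-in for the fused call's scores

row_sums = scores.sum(dim=-1)  # (B, H, Q_len): score mass emitted by each query
col_sums = scores.sum(dim=-2)  # (B, H, K_len): score mass received by each key

# Cross-token sum: how much one span of queries attends to another span of
# keys, e.g. answer tokens looking back at the prompt segment.
q_span, k_span = slice(128, 256), slice(0, 128)
cross = scores[:, :, q_span, k_span].sum(dim=(-2, -1))  # (B, H)
```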

Who should use this?

Transformer researchers probing attention for interpretability, such as visualizing heads in Llama or debugging flash-linear-attention flows. LLM fine-tuners needing cross-segment analysis to spot information bottlenecks. PyTorch devs swapping out SDPA for production inference where scores aid monitoring; a sanity check for that swap is sketched below.
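
For the SDPA swap, one reasonable sanity check is that the output matches PyTorch's scaled_dot_product_attention. The fused call is stubbed with a naive computation here so the snippet runs standalone; substitute the repo's attention_with_scores to test the real kernel.

```python
import torch
import torch.nn.functional as F

B, H, L, D = 2, 4, 128, 64
q, k, v = (torch.randn(B, H, L, D) for _ in range(3))

# Reference: PyTorch SDPA with causal masking.
ref = F.scaled_dot_product_attention(q, k, v, is_causal=True)

# Naive stand-in for the fused call (swap in attention_with_scores here).
scores = (q @ k.transpose(-2, -1)) * D ** -0.5
mask = torch.ones(L, L, dtype=torch.bool).tril()
out = torch.softmax(scores.masked_fill(~mask, float("-inf")), dim=-1) @ v

assert torch.allclose(out, ref, atol=1e-5)
```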

Verdict

Grab it if you need scores with FlashAttention-level performance: the docs and API are solid, and the benchmarks are convincing. At around a dozen stars it's early, but mature enough for experiments; watch the repo's releases for polish.

