AidanZach / EmotionScope

Replicates and extends Anthropic's April 2026 paper ["Emotion Concepts and their Function in a Large Language Model"](https://transformer-circuits.pub/2026/emotions/index.html) on open-weight models that anyone can download and run.

AI Summary

EmotionScope extracts emotion directions from language models and visualizes them as animated orbs during real-time chats.
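A minimal sketch of what "extracting an emotion direction" typically means: take the mean difference of residual-stream activations between emotion-laden and neutral prompts. This uses Hugging Face transformers with Gemma 2 2B as an example; the prompt lists, layer choice, and function names are illustrative assumptions, not EmotionScope's actual API.

```python
# Hedged sketch: one common way to extract an emotion "direction" -- the mean
# difference of residual-stream activations between emotional and neutral
# prompts. Prompts and layer index are illustrative, not the repo's API.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "google/gemma-2-2b-it"  # any open-weight chat model works in principle
LAYER = 12                      # which residual-stream layer to read (assumption)

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

def mean_activation(prompts: list[str]) -> torch.Tensor:
    """Average the last-token hidden state at LAYER over a set of prompts."""
    acts = []
    for p in prompts:
        ids = tok(p, return_tensors="pt")
        with torch.no_grad():
            out = model(**ids, output_hidden_states=True)
        # hidden_states[LAYER] has shape (1, seq_len, d_model); take the last token
        acts.append(out.hidden_states[LAYER][0, -1].float())
    return torch.stack(acts).mean(dim=0)

fear_prompts = ["I heard footsteps behind me in the dark alley.",
                "The plane dropped suddenly and everyone screamed."]
neutral_prompts = ["I bought some groceries on the way home.",
                   "The meeting was rescheduled to Tuesday."]

# The emotion direction is the (normalized) difference of means.
fear_direction = mean_activation(fear_prompts) - mean_activation(neutral_prompts)
fear_direction = fear_direction / fear_direction.norm()
```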

How It Works

1. 🔍 Discover EmotionScope

You hear about a cool tool that lets you peek inside AI chatbots to see their hidden emotions during conversations.

2. 📥 Get it ready

Download the tool and set it up on your computer—it takes just a few minutes to prepare.

3. 🧠 Pick an AI friend

Choose a free AI model like Gemma, and let the tool learn to read its emotions.

4. Start chatting

Launch the demo, type a message, and watch the glowing orbs come alive, showing the AI's feelings in real time.

5. 🎨 Explore emotions

Click through test stories to see how the orbs react to fear, joy, anger, and more.

6. Go deeper

💬 Live chat

Talk naturally and see emotions shift with every reply.

🧪 Test gallery

Replay scenarios to validate the emotion readings (see the sketch after this walkthrough).

🎉 Unlock AI insights

You now understand what the AI really feels beneath its words, even if it sounds calm.
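For step 6's test gallery, the kind of check implied by a "top-3 recall" validation might look like the following. The scenario list and the `probe_emotions` helper are placeholders, not EmotionScope's actual functions or data.

```python
# Hedged sketch of a test-gallery-style validation: for each labeled scenario,
# check that the expected emotion appears among the probe's top-3 scores.
# `probe_emotions` stands in for whatever scores text against the extracted
# emotion directions; the scenarios here are illustrative.
def probe_emotions(text: str) -> dict[str, float]:
    raise NotImplementedError  # e.g. cosine similarity against each emotion direction

scenarios = [
    ("The brakes failed as the car sped downhill.", "fear"),
    ("She opened the acceptance letter and burst out laughing.", "joy"),
    ("He slammed the door after reading the email.", "anger"),
]

def top3_recall(scenarios) -> float:
    hits = 0
    for text, label in scenarios:
        scores = probe_emotions(text)
        top3 = sorted(scores, key=scores.get, reverse=True)[:3]
        hits += label in top3
    return hits / len(scenarios)
```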

AI-Generated Review

What is EmotionScope?

EmotionScope replicates and extends Anthropic's April 2026 paper on emotion concepts and their function in a large language model, bringing emotion vector extraction to open-weight models anyone can download and run. In Python, it pulls emotion directions like fear or joy from the residual stream, probes them live during chats, and visualizes activations as animated orbs—color for emotion, size for intensity, motion for arousal. Users get CLI tools to extract/validate vectors, a FastAPI backend for real-time /chat and /probe endpoints, plus React demos with Gradio or Vite.
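A rough sketch of what a real-time probe endpoint could look like, assuming the emotion directions have already been extracted and loaded. The endpoint path matches the review's description, but the request shape, response shape, and helper function are placeholders rather than EmotionScope's actual API.

```python
# Hedged sketch of a /probe-style endpoint: score a message against stored
# emotion directions and return values a frontend could map to orb color,
# size, and motion. Names and shapes are illustrative, not the repo's API.
import torch
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# Pretend these were produced by the extraction step and loaded at startup.
emotion_directions: dict[str, torch.Tensor] = {}  # e.g. {"fear": tensor(d_model), ...}

class ProbeRequest(BaseModel):
    text: str

def last_token_hidden_state(text: str) -> torch.Tensor:
    """Placeholder: run the model and return the residual-stream activation
    at the chosen layer for the final token (see the extraction sketch above)."""
    raise NotImplementedError

@app.post("/probe")
def probe(req: ProbeRequest) -> dict[str, float]:
    h = last_token_hidden_state(req.text)
    # Cosine similarity of the activation with each emotion direction;
    # the frontend can treat each value as per-emotion orb intensity.
    return {
        name: torch.cosine_similarity(h, d, dim=0).item()
        for name, d in emotion_directions.items()
    }
```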

Why is it gaining traction?

It delivers the first open toolkit for emotion probing on models like Gemma 2 2B, passing strict validation gates like perfect Tylenol dosage correlation and 100% top-3 recall on scenarios. The killer hook: hook-free probing during actual model.generate() calls, with fluid 3D orbs updating in real-time—no separate forward passes. HF Hub auto-downloads mean zero-setup demos, and speaker separation adds experimental dual-view reads.
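"Hook-free" presumably means reading the hidden states that generate() can already return, instead of registering forward hooks and running separate passes. A minimal sketch of that pattern with Hugging Face transformers follows; the layer index, direction tensor, and scoring are stand-ins, not the repo's code.

```python
# Hedged sketch of probing during generation without forward hooks: ask
# generate() to return hidden states, then score each new token against an
# emotion direction as it is produced. Variable names are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2-2b-it")

LAYER = 12
fear_direction = torch.randn(model.config.hidden_size)  # stand-in for a real extracted vector
fear_direction = fear_direction / fear_direction.norm()

inputs = tok("I think someone is following me.", return_tensors="pt")
out = model.generate(
    **inputs,
    max_new_tokens=20,
    return_dict_in_generate=True,
    output_hidden_states=True,  # hidden states come back with the output -- no hooks
)

# out.hidden_states has one entry per generated token; each entry holds the
# per-layer hidden states for that decoding step.
for step, layers in enumerate(out.hidden_states):
    h = layers[LAYER][0, -1].float()  # residual stream at LAYER, last position
    score = torch.dot(h / h.norm(), fear_direction).item()
    print(f"token {step}: fear score {score:+.3f}")
```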

Who should use this?

Mech interp researchers replicating Anthropic's findings on smaller open models. AI safety devs probing internal states for steering experiments or alignment risks. LLM hackers visualizing hidden emotions in custom fine-tunes via the library API.

Verdict

Grab it for research repros—docs are thorough, validation passes convincingly, and it swaps models easily. At 12 stars and 1.0% credibility, it's alpha-stage with single-model results; run your own extractions before production.


