shi3z

Browser GUI on top of facebook/tribev2: video → fsaverage5 fMRI prediction with timeline, brain renders, and ffmpeg cut tooling.

100% credibility
Found May 11, 2026 at 44 stars.
AI Analysis
Primary language: HTML
AI Summary

A browser-based tool for uploading short videos to predict and visualize brain activity patterns using a Meta AI model, with features to identify and generate cuts for boring segments.

How It Works

1
🔍 Discover Auto-Excitement

You hear about this fun tool that shows how videos excite the brain and helps cut out boring parts.

2
💻 Get it ready

You download and prepare the tool on your own computer so it's all set up for use.

3
🌐 Open the webpage

You start the tool and open its webpage right in your browser.

4
🎥 Upload your video

You pick a short video from your files and upload it with a progress bar showing how it's going.

5
See the analysis

You wait a moment while it listens to the words, understands the scenes, and predicts brain activity over time.

6
🧠 Watch the brain light up

A cool 3D brain model appears, glowing and changing colors perfectly in sync with your video as you drag to spin it.

7
✂️ Spot boring moments

You slide controls to highlight low-energy stretches and preview the video skipping those dull parts.

8
Download your edit

You get a simple ready-to-run script that trims the boring bits, leaving only the exciting highlights.
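Steps 7 and 8 above (thresholding low-excitement stretches, then exporting an ffmpeg cut script) can be sketched roughly like this. The function names, sample rate, and threshold are illustrative assumptions, not the repo's actual code:

```python
# Sketch of the boring-cut export: threshold a 1-D "excitement" signal,
# collect the stretches that stay above it, and emit an ffmpeg command
# that keeps only those spans. Signal shape, fps, and threshold are
# illustrative assumptions.

def keep_segments(excitement, fps, threshold):
    """Return (start_s, end_s) spans where excitement stays above threshold."""
    segments, start = [], None
    for i, value in enumerate(excitement):
        if value >= threshold and start is None:
            start = i
        elif value < threshold and start is not None:
            segments.append((start / fps, i / fps))
            start = None
    if start is not None:
        segments.append((start / fps, len(excitement) / fps))
    return segments

def ffmpeg_script(segments, src="input.mp4", dst="highlights.mp4"):
    """Build one ffmpeg command using select/aselect to drop boring spans."""
    select = "+".join(f"between(t,{a:.2f},{b:.2f})" for a, b in segments)
    return (
        f"ffmpeg -i {src} "
        f"-vf \"select='{select}',setpts=N/FRAME_RATE/TB\" "
        f"-af \"aselect='{select}',asetpts=N/SR/TB\" {dst}"
    )

spans = keep_segments([0.1, 0.8, 0.9, 0.2, 0.7], fps=1, threshold=0.5)
print(spans)  # [(1.0, 3.0), (4.0, 5.0)]
print(ffmpeg_script(spans))
```

The `select='between(t,a,b)+...'` / `setpts` pairing is the standard ffmpeg idiom for keeping multiple time windows in a single re-encode pass.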


AI-Generated Review

What is Auto-Excitement?

Auto-Excitement is a browser-based GUI built on Python and FastAPI that lets you upload a short video (MP4, WebM, etc.) and uses Meta's TribeV2 model to predict fMRI BOLD responses across the fsaverage5 cortical surface, collapsed into 7 Yeo networks plus brain-state axes like excitement and valence. You get a synced WebGL brain render, timeline with waveforms/transcripts/thumbnails, Chart.js charts, and an auto-generated ffmpeg script to cut boring segments based on low excitement thresholds. It handles English, Japanese (native or translated), and other languages via WhisperX, all from your browser after a quick server spin-up.
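The collapse from per-vertex fsaverage5 predictions into the 7 Yeo networks amounts to averaging vertices by network label. A minimal numpy sketch, assuming a per-vertex label array (the repo's actual data layout may differ):

```python
import numpy as np

# Minimal sketch: average predicted BOLD time series over the 7 Yeo networks.
# `vertex_ts` is (n_vertices, n_timepoints) predicted BOLD on one fsaverage5
# hemisphere (10,242 vertices); `yeo_labels` maps each vertex to a network
# id in 1..7 (0 = medial wall). Both arrays here are fake, for illustration.

def collapse_to_networks(vertex_ts, yeo_labels, n_networks=7):
    n_timepoints = vertex_ts.shape[1]
    network_ts = np.zeros((n_networks, n_timepoints))
    for net in range(1, n_networks + 1):
        mask = yeo_labels == net
        if mask.any():
            network_ts[net - 1] = vertex_ts[mask].mean(axis=0)
    return network_ts

rng = np.random.default_rng(0)
vertex_ts = rng.standard_normal((10242, 120))  # fake per-vertex predictions
yeo_labels = rng.integers(0, 8, size=10242)    # fake network labels
print(collapse_to_networks(vertex_ts, yeo_labels).shape)  # (7, 120)
```

Each of the 7 rows is then a network-level time course the GUI can chart or feed into brain-state axes like excitement.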

Why is it gaining traction?

It stands out by packing heavy brain prediction into a zero-install browser experience, with no Jupyter notebooks or custom viz scripts needed, while delivering 60fps WebGL brain spins, in-browser boring-cut previews, and one-click ffmpeg exports. Developers dig the real progress bars, SSE streaming, and language flexibility (the Japanese translation mode shines), making it a faster prototyping tool than grinding through the raw TribeV2 CLI. The automatic brain-axis heuristics hook cognitive devs who want quick video-brain insights without FreeSurfer pipelines.
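The SSE progress streaming mentioned above boils down to emitting `data:` frames from an async generator, which FastAPI can serve via `StreamingResponse(..., media_type="text/event-stream")`. A framework-agnostic sketch with hypothetical stage names:

```python
import asyncio
import json

# Sketch of an SSE progress stream: each analysis stage becomes one
# "data: ..." frame terminated by a blank line. In a FastAPI server this
# generator would be wrapped in StreamingResponse with the
# "text/event-stream" media type; the stage names are assumptions.
async def progress_events():
    stages = ["transcribing", "encoding", "predicting", "rendering"]
    for i, stage in enumerate(stages, start=1):
        payload = json.dumps({"stage": stage, "pct": 100 * i // len(stages)})
        yield f"data: {payload}\n\n"  # SSE frame = data line + blank line
        await asyncio.sleep(0)        # yield control between stages

async def main():
    return [frame async for frame in progress_events()]

frames = asyncio.run(main())
print(frames[-1])  # last frame carries {"stage": "rendering", "pct": 100}
```

On the browser side, `new EventSource("/api/progress")` (endpoint path hypothetical) would receive these frames and drive the progress bar.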

Who should use this?

Cognitive neuroscientists testing video stimuli on predicted fMRI. Video researchers or filmmakers auto-editing clips by engagement signals. Python ML devs building browser GUI prototypes for brain data viz, especially with GPU acceleration for V-JEPA encoding.

Verdict

Worth forking for brain-video experiments: solid README setup guide, API endpoints, and performance (78s end-to-end on GPU). But at 44 stars and 1.0% credibility it's early; expect tweaks before production use. Clone it from GitHub if you're prototyping auto-brain tools.


