facebookresearch

This repository contains the code to train and evaluate TRIBE v2, a multimodal model for brain response prediction.

365 stars · 64 forks · 100% credibility · Found Mar 27, 2026
AI Analysis
Jupyter Notebook
AI Summary

TRIBE v2 is an AI model from Meta that predicts fMRI brain activity patterns from video, audio, or text inputs using pretrained foundation models.

How It Works

1. 📰 Discover TRIBE v2

You hear about this tool from Meta AI that predicts how brains react to videos, sounds, or words, with a demo you can try right away.

2. 💻 Set it up easily

Follow a few install steps on your computer; no coding needed beyond a quick setup.

3. 🧠 Load the brain predictor

Grab the pretrained model, which processes sights, sounds, and language much like a real brain.

4. 🎥 Feed in your media

Supply a video clip, audio story, or text passage and run inference.

5. 🔮 See brain predictions

Instantly get maps of predicted activity across the brain's surface for every moment of the stimulus.

6. 🖼️ Visualize the results

Create brain images highlighting where activity lights up in response to your input.

Unlock brain insights

Now you can explore how brains respond to any media, perfect for research or curiosity!
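The steps above can be sketched as a minimal end-to-end workflow. Everything here is illustrative: `TribeEncoder`, its methods, the fsaverage5 resolution, and the TR value are hypothetical stand-ins, not the repository's actual API.

```python
from dataclasses import dataclass
import numpy as np

# Hypothetical stand-in for the pretrained encoder; the real repo's
# class names and signatures may differ.
@dataclass
class TribeEncoder:
    n_vertices: int = 20484  # fsaverage5: 10242 vertices per hemisphere
    tr: float = 1.49         # assumed fMRI repetition time in seconds

    @classmethod
    def from_pretrained(cls, name: str) -> "TribeEncoder":
        # Step 3: in the real repo this would download weights
        # (e.g. from Hugging Face); here we just return a stub.
        return cls()

    def predict(self, events: list[dict]) -> np.ndarray:
        # Steps 4-5: map timed media events to one cortical map per timepoint.
        duration = max(e["onset"] + e["duration"] for e in events)
        n_trs = int(np.ceil(duration / self.tr))
        # Random values stand in for predicted BOLD amplitudes.
        return np.random.default_rng(0).standard_normal((n_trs, self.n_vertices))

# Step 4: describe the stimulus as timed, typed events.
events = [{"modality": "video", "path": "clip.mp4", "onset": 0.0, "duration": 30.0}]

model = TribeEncoder.from_pretrained("tribe-v2")  # step 3
pred = model.predict(events)                      # steps 4-5
print(pred.shape)  # (timepoints, vertices): one brain map per TR
```

The key idea is the data flow: timed events in, a timepoints-by-vertices array out, which step 6 then renders on a cortical surface.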

AI-Generated Review

What is tribev2?

TRIBE v2 predicts fMRI brain responses to videos, audio, or text using a multimodal transformer that fuses LLaMA for language, V-JEPA 2 for vision, and Wav2Vec-BERT for sound, mapping the fused features to fsaverage cortical surfaces. This GitHub repository contains code to load pretrained weights from Hugging Face, run inference on any media file, and train new models on fMRI datasets, all in Python with PyTorch Lightning. Users get vertex-wise predictions for the "average" brain in seconds via a simple API like `model.predict(events)`.
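The "vertex-wise predictions on fsaverage" amount to a 2-D array of timepoints by cortical vertices. A sketch of how such an array splits into hemispheres (the fsaverage5 vertex counts and left-first ordering are assumptions, not confirmed from the repo):

```python
import numpy as np

# Hypothetical prediction array: T timepoints x fsaverage5 vertices
# (10242 per hemisphere). The real model's surface resolution may differ.
T, n_per_hemi = 100, 10242
pred = np.random.default_rng(1).standard_normal((T, 2 * n_per_hemi))

# Split vertex-wise predictions into hemispheres, left first (a common
# fsaverage convention; confirm the repo's ordering before relying on it).
left, right = pred[:, :n_per_hemi], pred[:, n_per_hemi:]
print(left.shape, right.shape)  # (100, 10242) each
```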

Why is it gaining traction?

It stands out for dead-simple inference (pip install, Colab-ready) and built-in brain visualizations via PyVista or Nilearn, skipping weeks of feature-extraction boilerplate. Pretrained on diverse fMRI studies such as Algonauts and BOLD Moments, it beats unimodal baselines on cortical encoding, making it ideal for quick prototyping without managing massive feature caches. The repo is cleanly structured, with experiment grids for Slurm-based training.
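Since the review mentions Nilearn for the built-in visualizations, a single predicted timepoint could be rendered roughly like this. The array contents and the fsaverage5 resolution are placeholder assumptions; only the Nilearn call pattern is real, and `fetch_surf_fsaverage` downloads mesh files on first use.

```python
import numpy as np

# One predicted cortical map for a single timepoint, sized for an
# fsaverage5 hemisphere (10242 vertices). Values are random placeholders.
stat_map = np.random.default_rng(2).standard_normal(10242)

try:
    from nilearn import datasets, plotting

    # fetch_surf_fsaverage returns paths to fsaverage5 meshes by default.
    fsaverage = datasets.fetch_surf_fsaverage()
    plotting.plot_surf_stat_map(
        fsaverage["infl_left"],  # inflated left-hemisphere mesh
        stat_map,
        hemi="left",
        colorbar=True,
    )
except Exception as exc:
    # Sketch only: skip gracefully if nilearn or its data are unavailable.
    print(f"skipping plot: {exc}")
```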

Who should use this?

Computational neuroscientists benchmarking brain encoding models on naturalistic stimuli. ML researchers in vision-language-neuroscience hybrids needing fMRI predictions for videos or narratives. Devs exploring in-silico neuroscience who want Hugging Face-style pretrained brain models without custom pipelines.

Verdict

Grab it for neuroscience ML experiments: pretrained weights and demos make it instantly useful, even though the modest 365-star count signals early maturity. Docs are solid via the README and Colab; add tests before production use.

