bowang-lab / EchoJEPA

Public

EchoJEPA: A Latent Predictive Foundation Model for Echocardiography

246 stars · 34 forks · 100% credibility
Found Feb 07, 2026 at 124 stars.
AI Analysis
Python
AI Summary

EchoJEPA is an open-source codebase for training self-supervised video foundation models on large echocardiography datasets and evaluating them on clinical tasks like ejection fraction estimation and view classification.

How It Works

1. 🔍 Discover EchoJEPA

You find this smart tool for understanding heart ultrasound videos through its research paper or website.

2. 💻 Set up your space

Create a Python environment and install the required dependencies.

3. 📥 Get pretrained brains

Download ready-made models trained on millions of heart scans to jumpstart your work.

4. 📹 Prepare your videos

List your echocardiography clips in a simple format so the tool can learn from them.

5. 🚀 Train on heart data

Watch the model learn heart structures and movements from your videos super efficiently.

6. 🧪 Test heart measurements

Check predictions for things like heart pumping strength or scan views with quick evaluations.

7. Analyze new scans

Use your trained model to get reliable insights from patient ultrasounds right away.
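Step 4's "simple format" is typically a file manifest. As a hedged sketch (the `video_path`/`split` column names and the `.avi` extension are assumptions for illustration, not EchoJEPA's documented schema), listing your clips could look like:

```python
import csv
from pathlib import Path

def build_manifest(clip_dir: str, out_csv: str) -> int:
    """Write a CSV manifest listing every video clip in a folder.

    Column names are hypothetical placeholders; check the repo's data
    docs for the exact schema it expects.
    """
    clips = sorted(Path(clip_dir).glob("*.avi"))
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["video_path", "split"])  # assumed columns
        for clip in clips:
            writer.writerow([str(clip), "train"])
    return len(clips)
```

Point the function at your clip folder once, then feed the resulting CSV to the training scripts.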


Star Growth

This repo grew from 124 to 246 stars.
AI-Generated Review

What is EchoJEPA?

EchoJEPA is a Python-based foundation model for echocardiography video analysis, using a latent predictive approach to learn robust anatomical representations that filter out ultrasound noise like speckle. It processes video clips to enable downstream tasks such as left ventricular ejection fraction (LVEF) estimation, right ventricular systolic pressure (RVSP) prediction, and view classification. Users get pretrained checkpoints from 18 million echos, plus scripts to pretrain on datasets like MIMIC-IV-ECHO and run frozen-backbone probes.

Why is it gaining traction?

It crushes baselines by 20% on LVEF and 17% on RVSP, with extreme sample efficiency: 79% view accuracy from just 1% of labeled data, versus 42% for competitors trained on full label sets. The model generalizes across adult and pediatric cases and resists acoustic perturbations, degrading only 2% where others drop 17%. Developers dig the plug-and-play eval configs for regression and classification on echo datasets.
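The robustness claim concerns stability under acoustic-style corruption. A generic way to probe this on your own model (multiplicative speckle-like noise is a common ultrasound corruption model; the noise level and metric here are illustrative, not the paper's protocol):

```python
import numpy as np

def add_speckle(frame, sigma=0.2, seed=0):
    """Multiplicative speckle-like noise: pixel * (1 + gaussian),
    clipped back to the [0, 1] intensity range."""
    rng = np.random.default_rng(seed)
    noisy = frame * (1.0 + sigma * rng.standard_normal(frame.shape))
    return np.clip(noisy, 0.0, 1.0)

def relative_degradation(clean_score, perturbed_score):
    """Fractional drop in a metric under perturbation
    (e.g. a 0.96 -> 0.94 accuracy drop is about 2%)."""
    return (clean_score - perturbed_score) / clean_score
```

Run your evaluation on clean and perturbed copies of the same clips and compare the two scores; a robust encoder should show a small relative degradation.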

Who should use this?

Cardiology AI engineers building LVEF/RVSP predictors from echo videos. Medical imaging researchers probing foundation models for noisy ultrasound data. Clinicians prototyping zero-shot tools on pediatric or perturbed scans without massive relabeling.

Verdict

Solid for echo AI prototyping: strong paper results and ready checkpoints make it a quick win over VideoMAE or EchoPrime. The low star count flags early maturity; test rigorously on your own data before production.


