justdubit

Code for 'JUST-DUB-IT: Video Dubbing via Joint Audio-Visual Diffusion'

210 stars · 15 forks · 69% credibility
Found Feb 05, 2026 at 20 stars (11× growth since).
AI Analysis · Python

AI Summary

A research project for AI-powered video dubbing that translates audio into new languages while synchronizing facial movements, with code and models promised soon.

How It Works

1
🔍 Discover Just Dub It

You search for an easy way to translate videos into other languages while keeping the speaker's face perfectly in sync.

2
📖 Read the exciting idea

You learn about this clever tool that changes a video's language and makes the lips match the new words naturally.

3
Show your support

Tap the star button to get notified when the tool is ready to use.

4
Wait for the launch

In just two weeks, the creators make everything available for you.

5
🚀 Grab the tool

Download the ready-to-use dubbing magic and set it up in moments.

6
📤 Upload your video

Choose the video you want to dub into a new language.

7
🎤 Pick the new language

Select the language you want and let the tool work its charm.

8
🎉 Enjoy perfect dubs

Watch your video now speak fluently in the new language with spot-on lip sync and the same voice feel.
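Once the release lands, steps 6 and 7 above could collapse into a single command. As a rough sketch only — the repo's actual CLI is not public yet, so the program name, flags, and defaults below are all hypothetical:

```python
# Hypothetical sketch of the upload-and-dub flow from the steps above.
# The real tool is not released; the entry-point name ("just-dub-it")
# and every flag here are assumptions, not the project's interface.
import shlex

def build_dub_command(video: str, language: str, output: str = "dubbed.mp4") -> list[str]:
    """Assemble a command line for a hypothetical `just-dub-it` CLI:
    choose an input video (step 6) and a target language (step 7)."""
    return [
        "just-dub-it",            # assumed console entry point
        "--video", video,         # the clip you want to dub
        "--language", language,   # target language for the new audio track
        "--output", output,       # where the lip-synced result is written
    ]

cmd = build_dub_command("talk.mp4", "fr")
print(shlex.join(cmd))
# → just-dub-it --video talk.mp4 --language fr --output dubbed.mp4
```

Wrapping the command builder in a function like this keeps it easy to swap in the real flags once the project ships.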


Star Growth

This repo grew from 20 to 210 stars.
AI-Generated Review

What is just-dub-it?

Just-dub-it is a Python-based GitHub repository delivering code for AI-driven video dubbing through joint audio-visual diffusion models. It takes an input video and target audio—like translated speech—and generates dubbed output with synchronized lip movements and preserved speaker identity, solving the pain of clunky, multi-step dubbing pipelines that fail on real-world motion. Developers get an inference pipeline via CLI for quick dubbing and a training guide to fine-tune LoRAs on custom data.
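The "joint audio-visual diffusion" framing means one model denoises video latents while conditioned on the target audio, rather than chaining a speech synthesizer into a separate face animator. A toy NumPy sketch of that idea, purely illustrative — the stand-in denoiser, shapes, and noise schedule are assumptions, not the repo's actual architecture:

```python
# Toy illustration of audio-conditioned diffusion: one reverse-diffusion
# loop denoises video latents with audio features fed into every step.
# Nothing here reflects just-dub-it's real model; it only shows the
# general DDPM-style update shape the review describes.
import numpy as np

rng = np.random.default_rng(0)

def toy_denoiser(z_t, audio_feat, t):
    # Stand-in for the diffusion network: predicts the noise in z_t.
    # A real model would attend over audio_feat; here we just mix it in.
    return 0.1 * z_t + 0.01 * audio_feat.mean() + 0.0 * t

def ddpm_step(z_t, audio_feat, t, alphas_cumprod):
    """One DDPM-style denoising step, audio-conditioned end to end."""
    a_t, a_prev = alphas_cumprod[t], alphas_cumprod[t - 1]
    eps = toy_denoiser(z_t, audio_feat, t)
    # Estimate the clean latent, then re-noise down to step t-1.
    z0_hat = (z_t - np.sqrt(1 - a_t) * eps) / np.sqrt(a_t)
    return np.sqrt(a_prev) * z0_hat + np.sqrt(1 - a_prev) * eps

T = 10
alphas_cumprod = np.linspace(0.99, 0.1, T)  # schedule: near-clean at t=0
z = rng.standard_normal((4, 64))            # noisy video-frame latents
audio = rng.standard_normal((4, 16))        # per-frame audio features
for t in range(T - 1, 0, -1):               # reverse diffusion loop
    z = ddpm_step(z, audio, t, alphas_cumprod)
print(z.shape)
# → (4, 64)
```

Because the audio features enter the denoiser at every step, lip motion and speech are generated by one model instead of being stitched together afterward — that is the end-to-end property the review credits for the sharper sync.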

Why is it gaining traction?

Unlike traditional dubbing tools relying on separate audio synthesis and face animation, this uses a single adapted diffusion model for end-to-end results, delivering sharper lip sync and visual fidelity even with complex dynamics. The GitHub README stands out with clear prompt formats, model checkpoints on Hugging Face, and dataset access, making it easy to experiment with multilingual "just dub it" scenarios. Early adopters praise its robustness over brittle alternatives.

Who should use this?

AI researchers prototyping audio-visual translation pipelines, content creators dubbing short clips for global audiences, or ML engineers building apps like real-time lyric dubbing tools. It's ideal for teams handling "just dub it" lyric workflows where lip sync matters more than perfect audio fidelity.

Verdict

Grab it if you're in audio-visual AI: solid inference code and docs make it playable today, despite a modest star count signaling early maturity. The 0.70 (69%) credibility score flags its research roots, so expect tweaks for production; pair with GitHub Copilot for faster iteration.


