Songluchuan / TDMM-LM dataset

Found Apr 30, 2026 at 19 stars
AI Summary (Python)

A large-scale dataset of synthesized facial videos with text prompts and 3D parameters for advancing text-driven facial animation models, including tools for extracting facial parameters using SMIRK and SPECTRE.

How It Works

1
📖 Discover TDMM-LM Dataset

You find a treasure trove of 80 hours of expressive face videos created from text prompts, perfect for training AI that animates emotions.

2
⬇️ Download Videos and Notes

Run the easy download script to grab the videos, matching text descriptions, and ready-to-use annotations.
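Once downloaded, pairing each clip with its text prompt is straightforward. A minimal sketch, assuming the annotations are a JSON map from clip IDs to a dict with a "prompt" field (the dataset's actual schema may differ):

```python
import json
from pathlib import Path

def load_annotations(path):
    """Load the clip-id -> annotation mapping from a JSON file.

    The schema assumed here (clip id keyed to a dict with a "prompt"
    field) is illustrative; check the dataset's actual layout.
    """
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)

def pair_videos_with_prompts(video_dir, annotations):
    """Match each downloaded .mp4 to its text prompt by file stem."""
    pairs = []
    for video in sorted(Path(video_dir).glob("*.mp4")):
        ann = annotations.get(video.stem)
        if ann is not None:
            pairs.append((str(video), ann["prompt"]))
    return pairs
```

Matching by file stem keeps the pairing robust to wherever the videos land on disk, as long as clip IDs and filenames agree.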

3
Pick Your Face Tracker
😊
Try SMIRK

Quick setup for capturing subtle smiles and wild expressions.

🗣️
Go with SPECTRE

Great for talking heads with natural mouth movements.

4
Extract Face Magic

Feed in your videos and the tracker outputs precise 3D shape, pose, and expression parameters for every frame.
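The per-frame extraction loop can be sketched as below. `run_tracker` is a hypothetical stand-in for a SMIRK or SPECTRE forward pass (both regress FLAME-style parameters, but the real repos ship their own batch scripts, and the array dimensions here are placeholders, not their actual output sizes):

```python
import numpy as np

def run_tracker(frame):
    """Hypothetical stand-in for one SMIRK/SPECTRE inference call.

    Returns FLAME-style parameters; the array sizes below are
    assumptions for illustration only.
    """
    return {
        "shape": np.zeros(300),      # identity coefficients
        "expression": np.zeros(50),  # per-frame expression
        "pose": np.zeros(6),         # jaw + global head pose
    }

def extract_sequence(frames):
    """Run the tracker on every frame and stack the params per key."""
    per_frame = [run_tracker(f) for f in frames]
    return {k: np.stack([p[k] for p in per_frame]) for k in per_frame[0]}
```

Stacking per key gives one `(num_frames, dim)` array per parameter type, which is the shape most sequence models expect.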

5
🎥 Preview Your Results

Render the details back into smooth videos to see lifelike faces matching the original prompts.

6
🚀 Power Up Your AI Faces

Use the rich data to train smarter models that bring any text description to emotional life.
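Tying the steps above together for training, a minimal iterator over (prompt, parameter-sequence) pairs might look like this. The file layout assumed here (one `<clip>.json` annotation next to one `<clip>.npz` of extracted parameters) is hypothetical, not the dataset's documented structure:

```python
import json
from pathlib import Path

import numpy as np

def iter_training_samples(root):
    """Yield (prompt, params) pairs for a text-to-animation model.

    Assumes <root>/<clip>.json holds a "prompt" field and
    <root>/<clip>.npz holds the extracted parameter arrays --
    an illustrative layout; adjust to the dataset's real one.
    """
    for ann_path in sorted(Path(root).glob("*.json")):
        npz_path = ann_path.with_suffix(".npz")
        if not npz_path.exists():
            continue  # skip clips without extracted parameters
        prompt = json.loads(ann_path.read_text())["prompt"]
        params = dict(np.load(npz_path))
        yield prompt, params
```

Skipping clips without a matching `.npz` lets you start training while parameter extraction is still running on the rest of the corpus.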

AI-Generated Review

What is TDMM-LM_data?

TDMM-LM_data delivers a massive Python-accessible dataset for the TDMM-LM project, packing ~80 hours of synthetic face videos spanning emotions, expressions, and head poses, each paired with text prompts and 3D facial parameters. It addresses the scarcity of high-quality data for training text-driven facial animation and understanding models by providing ~70 hours of downloadable videos plus JSON annotations. Users grab the data via a simple Google Drive shell script and process the videos into parameters using the bundled SMIRK or SPECTRE inversion tools.

Why is it gaining traction?

This stands out for its foundation-model-synthesized diversity, letting devs benchmark text-to-face generation faithfully across extremes like subtle smiles or wild head turns, which are rare in real-capture datasets. The ready-to-run Python batch scripts for parameter extraction skip manual preprocessing hassles, appealing to researchers who need quick evaluation pipelines rather than generic face-data dumps.

Who should use this?

Facial animation researchers fine-tuning LLMs for expressive talking heads, or CV engineers evaluating text-conditioned 3D face models on diverse emotions. Ideal for academics replicating TDMM-LM paper results or prototyping emotion-aware avatars without curating their own synth data.

Verdict

Grab it if you're in text-driven face gen—solid niche utility despite 19 stars and 1.0% credibility signaling early-stage maturity; docs are paper-linked but light on examples. Pair with the full TDMM-LM code for real impact.
