Visko-Platform

VEFX-Bench: A Holistic Benchmark for Generic Video Editing and Visual Effects

Found Apr 23, 2026 at 10 stars
AI Summary

VEFX-Bench provides a benchmark dataset and AI reward model to evaluate the quality of text-driven video edits on instruction following, render quality, and edit exclusivity.

How It Works

1
🔍 Discover VEFX-Bench

You hear about this handy tool for checking how well AI video edits turn out, perfect for comparing different editing apps.

2
📥 Get the tool ready

You download everything to your computer and set it up so it's all prepared for your videos.

3
📹 Gather your video pair

You pick an original video, your edited version, and write down what change you wanted to make.

4
⚡ Run the quality check

You feed in the videos and instruction, and the tool quickly analyzes how good the edit is.

5
📊 Review the scores

You get clear numbers on how well it followed instructions, looks realistic, and changed only what it should.

6
🏆 Check the leaderboard

You see how your results stack up against top video editing AIs on the rankings.

🎉 Master better edits

Now you know exactly what's great or needs work, helping you create amazing videos every time.
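The walkthrough above boils down to three inputs and three 1-4 scores. A minimal Python sketch of that shape (every name here is a hypothetical stand-in, not the actual VEFX-Bench API):

```python
from dataclasses import dataclass

@dataclass
class EditSample:
    # Hypothetical container for the three inputs gathered in steps 3-4.
    original_path: str
    edited_path: str
    instruction: str

def score_edit(sample: EditSample) -> dict[str, float]:
    # Stand-in for the real VLM reward model: each dimension is
    # scored on the 1-4 scale the tool reports (dummy constants here).
    return {
        "instruction_following": 3.0,
        "render_quality": 4.0,
        "edit_exclusivity": 2.0,
    }

sample = EditSample("original.mp4", "edited.mp4", "remove the backpack")
for dimension, value in score_edit(sample).items():
    print(f"{dimension}: {value}")
```

In the real tool, the constants would come from the reward model; the point is the shape of the inputs and outputs, not the scoring itself.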

AI-Generated Review

What is VEFX-Bench?

VEFX-Bench is a Python-based holistic benchmark for generic video editing and visual effects, letting you score text-driven edits with a VLM reward model from Hugging Face. Feed it original and edited video pairs plus an instruction, like "remove the backpack", and it outputs scores from 1-4 across instruction following, render quality, and edit exclusivity. Developers get a quick-start CLI, a Python API for single or batch scoring, and sample videos to test immediately, plus a 5k-example dataset and live leaderboard.
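The batch path might look like the sketch below; the CSV column names and the scoring function are assumptions for illustration, not the package's documented interface:

```python
import csv
import io

# Hypothetical CSV layout for batch scoring; the real VEFX-Bench
# column names may differ. An in-memory file stands in for a real one.
batch_csv = io.StringIO(
    "original,edited,instruction\n"
    "orig_001.mp4,edit_001.mp4,remove the backpack\n"
    "orig_002.mp4,edit_002.mp4,apply a watercolor style\n"
)

def score_pair(original: str, edited: str, instruction: str) -> dict[str, float]:
    # Stand-in for the VLM reward model: each dimension scored 1-4.
    return {"instruction_following": 3.0, "render_quality": 4.0, "edit_exclusivity": 3.0}

results = []
for row in csv.DictReader(batch_csv):
    scores = score_pair(row["original"], row["edited"], row["instruction"])
    results.append({**row, **scores})

print(f"scored {len(results)} pairs")
```

Driving scoring from a CSV like this is what lets the same input file fan out across multiple GPUs without custom scripting.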

Why is it gaining traction?

It stands out with a structured leaderboard ranking commercial tools like Kling against open-source ones, using a geometric aggregate score that balances all dimensions. The Python package handles video sampling at 4 FPS, runs on CUDA GPUs with bfloat16, and scales to multi-GPU batch jobs via CSV input, making it a natural fit for eval pipelines without custom scripting. It also ties into a paper and project page with demo GIFs showing real edits like object removal and style transfer.
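A geometric aggregate fits in a few lines. The exact VEFX-Bench formula is an assumption here, but the property it buys is clear: one weak dimension drags the whole score down more than an arithmetic mean would.

```python
import math

def aggregate(scores: dict[str, float]) -> float:
    # Geometric mean of the per-dimension scores (each 1-4).
    # A single low dimension sharply lowers the aggregate,
    # so strong scores elsewhere cannot mask a failed edit.
    values = list(scores.values())
    return math.prod(values) ** (1 / len(values))

scores = {"instruction_following": 4.0, "render_quality": 4.0, "edit_exclusivity": 1.0}
print(round(aggregate(scores), 3))  # → 2.52
```

With scores of 4, 4, and 1, the arithmetic mean would be 3.0, but the geometric mean is about 2.52: an edit that destroys one dimension cannot hide behind the others.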

Who should use this?

Video AI researchers benchmarking gen models on precise edits, like attribute changes or camera zooms. Teams at VFX studios or apps like Runway evaluating edit exclusivity to avoid over-editing side effects. Python devs integrating automated scoring into training loops for custom video editing systems.

Verdict

Early days, with 10 stars and a 1.0% credibility score, but solid docs, HF models, and an Apache license make it worth a spin for video benchmarking needs: install and score your samples in minutes. Skip it if you need battle-tested maturity; otherwise, it's a practical VEFX eval tool.


