hukcc/Awesome-Video-Hallucination

Paper list of Video LLM hallucination. Welcome to Star and Contribute!

22 stars · Found Feb 10, 2026 at 14 stars
AI Summary

A curated academic resource list compiling papers, benchmarks, and mitigation techniques for hallucinations in video large language models.

How It Works

1. πŸ” Discover the collection

You search online for why video AI sometimes describes things that aren't there and stumble upon this helpful list of studies.

2. πŸ“– Read the introduction

You open the page and see a clear overview of different kinds of video AI mistakes, like mixing up event order or inventing details.

3. 🌈 Understand the categories

The neat charts and explanations help you grasp how these errors happen, making complex ideas feel simple and organized.

4. πŸ“Š Explore test examples

You browse the benchmarks researchers use to measure whether video AIs are accurate, with links to try them out.

5. πŸ› οΈ Check fix ideas

You review the mitigation methods researchers propose, noting which ones are training-free and can be applied without extra work.

6. πŸ”— Dive into a resource

You click on a promising study or example that fits your interest and visit its page for more details.

πŸŽ‰ Gain new insights

Now you have a treasure trove of ideas to improve video AI reliability or understand its limits better.

AI-Generated Review

What is Awesome-Video-Hallucination?

This curated list delivers a structured survey of papers on hallucination in video LLMs, covering distorted outputs such as event misordering and fabricated content. It catalogs 19 benchmarks and 23 mitigation methods, organized by a mechanism-driven taxonomy that splits issues into dynamic distortion and content fabrication. Built as a plain markdown resource, it links directly to papers, venues, and code repos for quick evaluation.
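Because the list is plain markdown, its entries can also be pulled out programmatically. A minimal sketch in Python, assuming a conventional awesome-list bullet format of `- [Title](url)`; the sample entries and URLs below are illustrative, not the repo's actual contents:

```python
import re

# Assumed bullet format for an awesome-list entry: "- [Title](url)".
# This is a guess at the layout, not the repo's documented structure.
ENTRY_RE = re.compile(r"-\s*\[(?P<title>[^\]]+)\]\((?P<url>[^)]+)\)")

def extract_entries(markdown: str) -> list[tuple[str, str]]:
    """Return (title, url) pairs for every '- [Title](url)' bullet found."""
    return [(m.group("title"), m.group("url")) for m in ENTRY_RE.finditer(markdown)]

# Hypothetical sample in the assumed format.
sample = """
## Benchmarks
- [VideoHallucer](https://example.com/videohallucer) (ECCV 2024)
- [EventHallusion](https://example.com/eventhallusion) (arXiv 2024)
"""

for title, url in extract_entries(sample):
    print(title, "->", url)
```

A regex like this is enough for flat bullet lists; a real scraper of the repo would want a proper markdown parser to handle nested sections and code-availability badges.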

Why is it gaining traction?

Unlike scattered arXiv searches, this list stands out with a taxonomy that maps benchmarks and mitigation methods to specific hallucination types, plus training-free flags and code-availability badges. The hook for devs: a one-page overview of 2023-2026 papers from top venues like CVPR and NeurIPS, saving hours on video LLM reliability checks.

Who should use this?

ML engineers fine-tuning video LLMs for accurate temporal reasoning, like in surveillance or AR apps. Researchers benchmarking models against spatiotemporal errors or audio-visual conflicts. Teams auditing LLM outputs for fabrication in content generation pipelines.

Verdict

Solid starting point for video LLM devs despite 15 stars and a 1.0% credibility score: the docs are comprehensive and PR-friendly, but watch for updates as the field moves fast. Star it if you're diving into hallucination papers; skip it if you need production-ready tools.
