ZhihaoZhu

Forgetting-Aware Curriculum for VLM Self-Evolution — adversarial difficulty scheduling with forgetting detection across 6 VQA skill clusters

AI Summary

A framework for iteratively improving vision-language AI models through self-challenge cycles that detect and counteract forgetting across diverse visual tasks like math diagrams, charts, and documents.

How It Works

1
📖 Discover SelfEvolve-VLM

You learn about a helpful tool that trains AI to understand pictures and questions better over time, without forgetting what it already knows.

2
🛠️ Get ready to use it

You download the tool and set it up on your computer with strong graphics power, picking simple settings for skills like math charts or science diagrams.

3
🚀 Start the magic

You press go, and the AI begins challenging itself with image questions, learning round by round to stay sharp on every skill.

4
📈 Watch it grow

You follow along as it focuses more on weaker areas, like spatial puzzles or document reading, to keep all abilities balanced and strong.

5
🧪 Check the results

You test the improved AI on fixed sets of picture questions across math, charts, science, and more to see real progress.

🎉 AI remembers and excels

Your AI now masters visual reasoning in all areas, growing smarter without losing a single skill it once learned.
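The round-by-round cycle in the steps above can be sketched in plain Python. This is a hypothetical reconstruction from the description, not the repo's actual code: the cluster names, the `evaluate`/`train_on` callbacks, and the 10.0 upweighting factor are all illustrative assumptions.

```python
# Illustrative skill clusters drawn from the project description;
# names are assumptions, not the repo's actual identifiers.
CLUSTERS = ["math_diagrams", "charts", "documents",
            "science_diagrams", "spatial", "general_vqa"]

def schedule_weights(probe_accuracy, best_accuracy):
    """Upweight clusters whose probe accuracy has slipped below the best
    accuracy seen so far (a simple stand-in for forgetting-aware scheduling)."""
    weights = {}
    for c in CLUSTERS:
        drop = max(0.0, best_accuracy[c] - probe_accuracy[c])
        weights[c] = 1.0 + 10.0 * drop  # forgotten clusters get more samples
    total = sum(weights.values())
    return {c: w / total for c, w in weights.items()}

def self_evolve(rounds, evaluate, train_on):
    """One plausible shape of the loop: probe, reweight, train, repeat."""
    best = {c: 0.0 for c in CLUSTERS}
    for _ in range(rounds):
        acc = evaluate()                    # probe accuracy per cluster
        weights = schedule_weights(acc, best)
        for c in CLUSTERS:
            best[c] = max(best[c], acc[c])  # track best-ever per cluster
        train_on(weights)                   # sample training data by weight
    return best
```

With uniform probe accuracy the sampler stays uniform; a single regressed cluster pulls sampling weight toward itself, which is the balancing behavior step 4 describes.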

AI-Generated Review

What is forgetting-aware-vlm?

This Python project powers self-evolution for vision-language models (VLMs) on VQA tasks, tackling catastrophic forgetting where models lose old skills while gaining new ones. It detects forgetting across 6 skill clusters—like math charts, documents, and spatial reasoning—then schedules adversarial difficulty via a forgetting-aware curriculum to balance training data and rewards. Run it with `python scripts/train.py --config configs/default.yaml` on Qwen2.5-VL models using PyTorch and OpenRLHF for GRPO training.
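The "adversarial difficulty scheduling" piece can be illustrated with a minimal sketch: map each cluster's current probe accuracy to a target difficulty for its self-generated questions. The thresholds, the linear ramp, and the function name here are assumptions for illustration, not the repo's actual schedule.

```python
def pick_difficulty(probe_acc, low=0.3, high=0.8):
    """Map a cluster's probe accuracy to a target question difficulty in
    [0, 1]: strong clusters get harder adversarial challenges, weak or
    forgotten clusters get easier review questions. Thresholds are
    illustrative, not taken from the repo."""
    if probe_acc >= high:
        return 0.9   # cluster is strong: push hard adversarial questions
    if probe_acc <= low:
        return 0.2   # cluster is weak or forgotten: mostly review questions
    # linear ramp between the easy and hard regimes
    return 0.2 + 0.7 * (probe_acc - low) / (high - low)
```

Under this sketch, difficulty rises monotonically with competence, so the curriculum keeps challenging mastered clusters while easing off ones that need recovery.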

Why is it gaining traction?

Unlike plain self-play methods that let VLMs regress on mastered domains, this adds probe-based forgetting detection and dynamic scheduling to prioritize weak clusters without manual tuning. Developers dig the ablation configs to isolate curriculum vs. reward bonuses, plus standalone evaluation via `python scripts/evaluate.py` and debug modes for quick tests on smaller models. It delivers measurable gains in balanced VQA performance over uniform baselines.
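Probe-based forgetting detection, as described above, amounts to comparing each cluster's latest probe accuracy against its historical best on a fixed probe set. A minimal sketch, assuming a drop threshold and data shape that are illustrative rather than the repo's actual API:

```python
def detect_forgetting(history, threshold=0.05):
    """Flag clusters whose latest probe accuracy fell more than `threshold`
    below their best historical accuracy. `history` maps each cluster name
    to its list of per-round probe accuracies; the 0.05 threshold is an
    assumption for illustration."""
    flagged = {}
    for cluster, accs in history.items():
        best, latest = max(accs), accs[-1]
        drop = best - latest
        if drop > threshold:
            flagged[cluster] = drop  # record the size of the regression
    return flagged
```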

Who should use this?

VLM researchers iterating on self-improvement for multimodal reasoning, like boosting Qwen-VL on mixed VQA benchmarks. Fine-tuners handling skill drift in RLHF pipelines for charts, science diagrams, or DocVQA. Teams with A100 GPUs prototyping forgetting-aware training before scaling.

Verdict

Grab it for VLM self-evolution experiments—solid docs, 59 passing tests, and resumable runs make it usable now, despite 46 stars and 1.0% credibility signaling early days. Maturity lags production needs, but it's a smart prototype for preventing forgetting in VQA skill clusters.

