UT-Austin-RobIn / continual-vla-rl
Simple Recipe Works: Vision-Language-Action Models are Natural Continual Learners with Reinforcement Learning
Research codebase implementing continual reinforcement learning methods for vision-language-action models on robot manipulation tasks using the LIBERO benchmark.
How It Works
You likely found this project through a research paper on teaching robots new skills without forgetting old ones.
Set up a Python environment on your machine to run the robot-manipulation experiments.
Download the LIBERO task suites and pre-trained vision-language-action model checkpoints.
Run a short reinforcement-learning session in which the policy practices a single task, such as picking up an object.
Train on a sequence of tasks and observe how well earlier skills are retained.
Review success rates and plots to compare how different continual-learning methods perform.
You now have the tools to experiment with policies that adapt to new tasks while retaining old ones.
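The retention and comparison steps above hinge on how continual-learning runs are scored. As a minimal sketch (not the repo's actual code), assume that after training on each task in sequence you record a matrix `S` where `S[i][j]` is the success rate on task `j` after finishing training on task `i`; two common summaries are the average final success rate and the average forgetting (how much each earlier task dropped from its best score). The function names and the toy matrix below are hypothetical.

```python
def average_success(S):
    """Mean success rate over all tasks after the final training stage."""
    final = S[-1]
    return sum(final) / len(final)

def average_forgetting(S):
    """Mean drop from each earlier task's best-ever score to its final score.

    Measured over tasks trained before the last one, as is standard for
    forgetting metrics in continual learning.
    """
    T = len(S)
    drops = []
    for j in range(T - 1):
        best = max(S[i][j] for i in range(T))
        drops.append(best - S[-1][j])
    return sum(drops) / len(drops)

# Toy example with 3 tasks: task 0's success dips slightly after later training.
S = [
    [0.9, 0.0, 0.0],  # after training task 0
    [0.8, 0.9, 0.0],  # after training task 1
    [0.8, 0.9, 0.9],  # after training task 2
]
print(average_success(S))     # mean of the final row [0.8, 0.9, 0.9]
print(average_forgetting(S))  # mean of the drops [0.1, 0.0]
```

A method that "remembers" well shows high average success and near-zero forgetting; comparing these two numbers across methods is one simple way to read the graphs the project produces.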