gaolongsen

🤖🧠👾 Graph VLA with Control Barrier Function in Dual-Arm Robotics Manipulation

Found Mar 09, 2026 at 49 stars.
AI Summary (Python)

This repository provides code and models for training vision-language-action systems that enable safe dual-arm robotic manipulation using control barrier functions.

How It Works

1. 🔍 Discover safe robot teamwork

You find this project while researching how two robots can work together safely on tasks like building with blocks.

2. 🛠️ Prepare your robot lab

Gather your two robot arms and cameras, then follow simple guides to connect everything so they can see and move.

3. 📚 Teach with example videos

Show the robots videos of people doing tasks like stacking or picking, helping them learn language instructions too.

4. 🧠 Train smart robot brains

The system learns to understand sights, words, and safe movements, practicing in a virtual world first.

5. 🧪 Test in safe simulations

Watch virtual robots practice tasks without real-world risk, tuning until they follow instructions reliably.

6. 🔌 Connect real robots

Link your physical robots and add invisible safety shields that prevent crashes during movements.

Robots team up safely

Your dual robots now follow voice commands to manipulate objects together, always staying collision-free and precise.
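The six steps above amount to a perceive-plan-filter-execute loop. The sketch below is purely illustrative: `DummyRobot`, `dummy_policy`, and `clamp_filter` are hypothetical stand-ins, not the repo's API, and the clamp merely plays the role the real CBF-QP filter would play.

```python
# Hypothetical perceive -> VLA -> safety-filter -> execute loop.
# All names are illustrative placeholders, not the repo's actual API.

class DummyRobot:
    """Stands in for the dual-arm hardware/simulator interface."""
    def __init__(self):
        self.executed = []

    def perceive(self):
        return {"points": [], "joints": [0.0] * 12}   # fake observation

    def execute(self, action):
        self.executed.append(action)

def dummy_policy(obs, instruction):
    # A real VLA would map (point cloud, language) -> action; here, a constant.
    return [0.5, 0.0, 0.0]

def clamp_filter(u_nom, obs, limit=0.1):
    # Stand-in for the CBF-QP: minimally shrink the action into a safe set.
    scale = max(abs(v) for v in u_nom) or 1.0
    return [v * min(1.0, limit / scale) for v in u_nom]

def control_loop(policy, safety_filter, robot, instruction, steps=5):
    for _ in range(steps):
        obs = robot.perceive()
        u_nom = policy(obs, instruction)          # VLA proposes a dual-arm action
        robot.execute(safety_filter(u_nom, obs))  # filter keeps it inside limits

robot = DummyRobot()
control_loop(dummy_policy, clamp_filter, robot,
             "stack the red block on the blue block")
```

The point of the structure is that the safety filter sits between the learned policy and the hardware, so the policy never commands the arms directly.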


Star Growth

The repo grew from 49 to 65 stars.
AI-Generated Review

What is GFVLA_CBF?

GFVLA_CBF is a Python framework for training and deploying graph-fused vision-language-action (VLA) models on dual-arm robots like UR5e and UR10e, ensuring semantically safe manipulation via control barrier functions (CBFs). It takes point clouds and language instructions to generate coordinated actions, then applies real-time CBF filtering for collision avoidance between arms, obstacles, and workspace bounds. Users get simulation demos, hardware scripts, and QP solvers for guaranteed safety without retraining.
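For a single affine constraint, the CBF-QP filtering described above has a closed-form solution: project the nominal action just far enough to satisfy ḣ + αh ≥ 0. A minimal sketch, assuming single-integrator end-effector dynamics and a pairwise-distance barrier between the two arms; `cbf_filter` and all numeric values are illustrative, not taken from the repo:

```python
import numpy as np

def cbf_filter(u_nom, grad_h, h, alpha=1.0):
    """Minimally modify u_nom so that h_dot + alpha * h >= 0.

    Closed-form solution of the QP
        min ||u - u_nom||^2   s.t.   grad_h @ u + alpha * h >= 0
    for a single constraint under single-integrator dynamics.
    """
    u_nom = np.asarray(u_nom, dtype=float)
    a = np.asarray(grad_h, dtype=float)
    slack = a @ u_nom + alpha * h
    if slack >= 0:
        return u_nom                       # nominal action is already safe
    return u_nom - (slack / (a @ a)) * a   # minimal correction onto the boundary

# Barrier: keep the two end-effectors at least d_min apart.
p1 = np.array([0.30, 0.00, 0.50])   # arm-1 end-effector position (m)
p2 = np.array([0.42, 0.00, 0.50])   # arm-2 end-effector position (m)
d_min = 0.15
h = np.dot(p1 - p2, p1 - p2) - d_min**2   # h > 0  <=>  safe
grad_h = 2 * (p1 - p2)                    # gradient of h w.r.t. p1

u_nom = np.array([0.5, 0.0, 0.0])   # nominal velocity drives the arms together
u_safe = cbf_filter(u_nom, grad_h, h, alpha=5.0)
# u_safe pushes arm 1 away from arm 2 (its x-component turns negative)
```

When the nominal action already satisfies the barrier condition, the filter passes it through unchanged, which is the "minimal-action tweak" property; with multiple constraints or torque limits, a numeric QP solver replaces this closed form.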

Why is it gaining traction?

Unlike standard VLAs that risk collisions in multi-arm setups, this adds graph-based scene understanding and CBF barriers that minimally modify actions, enabling reliable language-conditioned tasks like block stacking. The repo's README details quick hardware integration via IP addresses, with simulation fallbacks; developers grab it for its plug-and-play safety on real robots. A low star count (45) but steady commit activity show active evolution.

Who should use this?

Robotics engineers building dual-arm manipulators for assembly or pick-place with natural language goals, especially those needing CBF safety layers. Ideal for researchers prototyping scene graph VLA policies on RLBench or MetaWorld, or hardware teams tuning UR arms without collisions.

Verdict

Worth forking for dual-arm VLA safety experiments, but 1.0% credibility and 45 stars signal early research stage—expect incomplete docs and RLBench tweaks. Pair with mature sims before production.


