AutoLab-SAI-SJTU

[RSS 2026] Official code & data for "OmniNavBench: Beyond Isolation — A Unified Benchmark for General-Purpose Navigation"

Found May 14, 2026 at 20 stars.
AI Summary

OmniNavBench is a simulation platform for testing AI robot navigation across varied robots, realistic homes, and mixed tasks using human demonstration paths.

How It Works

1
πŸ” Discover OmniNavBench

You hear about a fun way to test how smart AI can make robots explore rooms, follow people, or find objects in realistic homes.

2
📥 Grab the scenes and stories

Download ready-made home layouts and human-guided paths so your robot has places to practice navigating.

3
🤖 Pick your robot buddy

Choose a walking dog, rolling cart, or standing human robot, plus simple or chatty instructions.

4
🚀 Start the adventure

Click run and watch your robot zoom around, dodging furniture and heading to goals just like a real explorer!

5
📊 Test many homes

Repeat across dozens of rooms to see how well it handles different challenges.

πŸ† See your scores shine

Get clear success rates and rankings to know how smart your robot navigator really is!
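The "clear success rates" in the last step are typically reported as Success Rate (SR) and Success weighted by Path Length (SPL), the standard navigation metrics the review mentions below. A minimal sketch of how such scores could be computed from per-episode results; the field names here are illustrative, not taken from the repo:

```python
# Toy scoring sketch: SR is the fraction of successful episodes; SPL weights
# each success by shortest_path / max(agent_path, shortest_path), so inefficient
# routes score lower. Field names ("success", "shortest_path", "agent_path")
# are assumptions for illustration.

def success_rate(episodes):
    return sum(e["success"] for e in episodes) / len(episodes)

def spl(episodes):
    total = 0.0
    for e in episodes:
        if e["success"]:
            total += e["shortest_path"] / max(e["agent_path"], e["shortest_path"])
    return total / len(episodes)

episodes = [
    {"success": 1, "shortest_path": 5.0, "agent_path": 6.0},  # reached goal, slightly inefficient
    {"success": 0, "shortest_path": 4.0, "agent_path": 9.0},  # failed episode
]
print(success_rate(episodes))  # 0.5
print(spl(episodes))           # (5/6)/2 ~= 0.417
```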

AI-Generated Review

What is OmniNavBench?

OmniNavBench delivers a unified Python benchmark for general-purpose robot navigation, straight from RSS 2026. It tests policies on composite instructions mixing PointNav, VLN, ObjectNav, SocialNav, human following, and EQA across H1 humanoid, Aliengo quadruped, and Carter wheeled robots in 170 GRScenes synthetic plus Matterport3D real environments. Download the HF dataset, expose your policy via an HTTP endpoint, run evaluations with runBench.py, and submit trajectories to the live leaderboard for SR, SPL, and other metrics.
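The HTTP integration described above can be sketched with Python's standard library alone. The route, JSON payload fields, and action names below are assumptions for illustration, not the benchmark's actual protocol:

```python
# Hypothetical policy endpoint: the benchmark is assumed to POST a JSON
# observation and expect a JSON action back. Payload shape and action
# vocabulary are invented for this sketch.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def decide_action(obs):
    """Toy policy: stop when the (assumed) goal distance is small."""
    return "stop" if obs.get("goal_distance", float("inf")) < 0.2 else "move_forward"

class PolicyHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        obs = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps({"action": decide_action(obs)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

# To serve: HTTPServer(("127.0.0.1", 8000), PolicyHandler).serve_forever()
# then point the benchmark's policy URL at http://127.0.0.1:8000
```

The design choice of a plain HTTP boundary keeps your policy process (and its GPU stack) fully decoupled from the simulator.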

Why is it gaining traction?

It ditches isolated single-task benchmarks and A*-generated shortest paths in favor of 1,779 human-teleoperated trajectories that capture real behaviors like glancing and avoidance. Modular sensor support (RGB-D, LiDAR) and robot configs let you swap embodiments without rewriting code, with reference adapters for Uni-NaVid, MTU3D, and others. Batch scripts handle multi-GPU sweeps over instruction styles and scenes, perfect for RSS 2026 submissions.
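A multi-GPU sweep over styles and scenes could be sketched as below, assigning each (scene, style) combination round-robin across GPUs. Only runBench.py is named in the repo description; the scene IDs, style names, and command-line flags here are invented for illustration:

```python
# Hypothetical batch-sweep sketch: enumerate (scene, style) pairs and spread
# them round-robin over available GPUs. Scene IDs, styles, and the --scene /
# --style flags are assumptions, not documented runBench.py options.
import itertools

scenes = ["grscenes_0001", "grscenes_0002", "mp3d_17DRP"]  # illustrative IDs
styles = ["plain", "chatty"]
gpus = [0, 1]

commands = [
    f"CUDA_VISIBLE_DEVICES={gpus[i % len(gpus)]} "
    f"python runBench.py --scene {scene} --style {style}"
    for i, (scene, style) in enumerate(itertools.product(scenes, styles))
]
for cmd in commands:
    print(cmd)  # 6 commands, alternating between GPU 0 and GPU 1
```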

Who should use this?

Robotics researchers evaluating VLN or multi-task nav agents across morphologies. Navigation teams needing a 2026 benchmark with human-like data, or labs integrating policies via simple HTTP servers for leaderboard ranking.

Verdict

Strong RSS 2026 entry with excellent docs, an HF dataset, and a plug-and-play CLI; grab it for navigation benchmarks. But 17 stars and 1.0% credibility signal early days; test the forward baseline locally before committing.
