fengyi233 / carlaocc

Public

[CVPR 2026] An Instance-Centric Panoptic Occupancy Prediction Benchmark for Autonomous Driving

32 stars · 0 forks · 100% credibility
Language: Jupyter Notebook

AI Summary

CarlaOcc provides a benchmark dataset with 100K frames of multi-modal sensor data and voxel-level panoptic occupancy ground truth generated in the CARLA simulator for autonomous driving research.

How It Works

1. 🔍 Discover CarlaOcc

You stumble upon this exciting benchmark while exploring new datasets for self-driving car research.

2. 📥 Download the dataset

Grab the free data package from Hugging Face and bring realistic driving scenes to your machine (see the download sketch after this list).

3. 📦 Unpack your world

Extract the archives and 100K frames of detailed 3D driving scenes are ready to browse.

4. 👀 Explore the views

Open the viewer to see camera images, depth maps, LiDAR scans, and occupancy grids side by side.

5. 🚗 Dive into details

Check traffic info, semantic labels, and voxel-level ground-truth occupancy to understand each scene in full.

🎉 Ready for breakthroughs

With everything visualized and understood, you're set to train models or analyze autonomous driving scenes.
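
Here's a minimal sketch of steps 2 and 3, assuming the package is hosted as a Hugging Face dataset repo and ships as tar.gz archives; the repo id below is a placeholder, so use the one listed in the project's README.

```python
# Hedged sketch of the download + unpack steps. The repo id and the
# tar.gz packaging are assumptions -- check the CarlaOcc README for the
# real dataset id and archive format. Requires `pip install huggingface_hub`.
import tarfile
from pathlib import Path

from huggingface_hub import snapshot_download

# Placeholder dataset id; substitute the id from the project's README.
local_dir = snapshot_download(
    repo_id="fengyi233/carlaocc",   # assumed, not verified
    repo_type="dataset",
    local_dir="data/carlaocc",
)

# Unpack any tar.gz archives found in the download.
for archive in Path(local_dir).glob("*.tar.gz"):
    with tarfile.open(archive) as tf:
        tf.extractall(archive.parent)
```

From there, the repo's own viewer (`python vis_dataset.py --vis_modality all`, mentioned in the review below) renders the camera, depth, LiDAR, and occupancy modalities side by side.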

AI-Generated Review

What is carlaocc?

CarlaOcc delivers a benchmark dataset and toolchain for instance-centric panoptic occupancy prediction in autonomous driving, generating 100K frames of voxel-level (0.05 m resolution) semantic and instance labels from the CARLA simulator. It solves the lack of consistent 3D ground truth by producing multi-modal data (RGB from six cameras, depth, semantics, LiDAR, surface normals, and occupancy) via a full pipeline: data collection, UE5 scene export, and mesh-based voxelization. Python scripts and Jupyter tutorials let you download the data from Hugging Face, visualize modalities with `python vis_dataset.py --vis_modality all`, or regenerate everything yourself; see the CVPR 2026 accepted paper linked on GitHub for details.
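
As a quick illustration of what voxel-level panoptic labels look like in practice, here's a hypothetical sketch of loading one frame; the `.npz` layout and the `semantics`/`instances` array names are assumptions, not the dataset's documented format.

```python
# Hypothetical sketch: inspecting one CarlaOcc occupancy frame.
# Assumes labels ship as .npz files with "semantics" and "instances"
# arrays -- the actual file layout may differ; see the repo's docs.
import numpy as np

frame = np.load("occupancy/0000.npz")   # hypothetical path
sem = frame["semantics"]                # (X, Y, Z) semantic class ids
inst = frame["instances"]               # (X, Y, Z) instance ids

print("grid shape:", sem.shape)         # voxels at 0.05 m resolution
print("classes present:", np.unique(sem))
print("instances in frame:", len(np.unique(inst[inst > 0])))
```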

Why is it gaining traction?

Unlike occupancy labels derived from sparse nuScenes or Waymo sensor data, CarlaOcc offers physically consistent occupancy built from high-fidelity meshes, plus resampled grids (0.1 m to 0.4 m) for efficient training. Devs dig the end-to-end reproducibility in CARLA UE5, the quick mini-dataset for prototyping, and bash scripts like `gen_modalities.sh` for batch processing. It's drawing early attention from autonomous-driving ML researchers scouting CVPR 2026 repos.
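
To make the resampling idea concrete, here's a minimal NumPy sketch of mode-pooling a dense 0.05 m semantic grid down to 0.2 m; this is one plausible way to derive the coarser grids, not necessarily how `gen_modalities.sh` actually does it.

```python
# Minimal sketch: resample a dense 0.05 m semantic grid to 0.2 m by
# majority vote over 4x4x4 blocks. One plausible approach; CarlaOcc's
# own resampling may differ.
import numpy as np

def downsample_semantics(grid: np.ndarray, factor: int) -> np.ndarray:
    """Mode-pool a (X, Y, Z) integer label grid by `factor` per axis."""
    x, y, z = (s // factor for s in grid.shape)
    # Group voxels into (x, y, z) blocks of factor**3 labels each.
    blocks = grid[:x * factor, :y * factor, :z * factor].reshape(
        x, factor, y, factor, z, factor).transpose(0, 2, 4, 1, 3, 5)
    blocks = blocks.reshape(x, y, z, -1)
    # Majority label per block via bincount over the flattened block axis.
    out = np.empty((x, y, z), dtype=grid.dtype)
    for idx in np.ndindex(x, y, z):
        out[idx] = np.bincount(blocks[idx]).argmax()
    return out

coarse = downsample_semantics(np.random.randint(0, 5, (64, 64, 16)), 4)
```

Mode pooling assigns each coarse voxel the dominant label of its block, which is more stable than naive strided subsampling when the coarser grids feed training.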

Who should use this?

Autonomous driving researchers benchmarking 3D panoptic predictors against CVPR 2026 baselines, perception engineers needing simulated ground truth for occupancy networks in BEV or surround views, and sim-to-real teams generating custom traffic scenarios with varying densities.

Verdict

Grab the HF dataset now for CVPR 2026 experiments: solid docs and visualization tools make it instantly usable despite only 32 stars and still-pending training code. The low star count reflects early maturity, but the pipeline's reliability shines for autonomous driving benchmarks.

