3D Semantic Occupancy Prediction
5 papers with code • 0 benchmarks • 0 datasets
The task uses sparse LiDAR semantic labels for training and testing.
Most implemented papers
OccFormer: Dual-path Transformer for Vision-based 3D Semantic Occupancy Prediction
Vision-based perception for autonomous driving has shifted from bird's-eye-view (BEV) representations to 3D semantic occupancy.
PointOcc: Cylindrical Tri-Perspective View for Point-based 3D Semantic Occupancy Prediction
To address this, we propose a cylindrical tri-perspective view to represent point clouds effectively and comprehensively, and a PointOcc model to process them efficiently.
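A minimal NumPy sketch of the cylindrical tri-perspective idea: convert Cartesian points to cylindrical coordinates (rho, theta, z) and max-pool per-point features onto the three coordinate planes. The function name, grid sizes, and max-pooling choice are illustrative assumptions, not PointOcc's actual implementation.

```python
import numpy as np

def cylindrical_tpv(points, feats, grid=(16, 16, 8)):
    """Scatter point features onto three cylindrical TPV planes
    (rho-theta, theta-z, rho-z) by element-wise max-pooling.
    Sketch only; the real model uses learned encoders."""
    x, y, z = points.T
    rho = np.sqrt(x ** 2 + y ** 2)
    theta = np.arctan2(y, x)  # angle in (-pi, pi]

    def to_idx(v, lo, hi, n):
        # Map a coordinate range onto integer grid indices [0, n-1].
        return np.clip(((v - lo) / (hi - lo) * n).astype(int), 0, n - 1)

    R, T, Z = grid
    ri = to_idx(rho, 0.0, rho.max() + 1e-6, R)
    ti = to_idx(theta, -np.pi, np.pi, T)
    zi = to_idx(z, z.min(), z.max() + 1e-6, Z)

    C = feats.shape[1]
    planes = {
        "rho_theta": np.zeros((R, T, C)),
        "theta_z": np.zeros((T, Z, C)),
        "rho_z": np.zeros((R, Z, C)),
    }
    for n in range(len(points)):
        planes["rho_theta"][ri[n], ti[n]] = np.maximum(planes["rho_theta"][ri[n], ti[n]], feats[n])
        planes["theta_z"][ti[n], zi[n]] = np.maximum(planes["theta_z"][ti[n], zi[n]], feats[n])
        planes["rho_z"][ri[n], zi[n]] = np.maximum(planes["rho_z"][ri[n], zi[n]], feats[n])
    return planes
```

Each point then gets a feature by summing its projections onto the three planes, avoiding a dense 3D voxel grid.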
InverseMatrixVT3D: An Efficient Projection Matrix-Based Approach for 3D Occupancy Prediction
In contrast, our approach leverages two projection matrices to store the static mapping relationships and matrix multiplications to efficiently generate global Bird's Eye View (BEV) features and local 3D feature volumes.
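The core of this idea, sketched in NumPy under simplifying assumptions: because the camera-to-BEV geometry is static, the pixel-to-cell mapping can be baked into a fixed matrix once, and view transformation becomes a single matrix multiplication. The helper name, averaging normalization, and dense-matrix storage are illustrative choices, not the paper's exact construction.

```python
import numpy as np

def build_projection_matrix(num_pixels, num_cells, pixel_to_cell):
    """Bake a static pixel -> BEV-cell mapping into a matrix P so that
    BEV features = P @ flattened image features. Sketch only."""
    P = np.zeros((num_cells, num_pixels))
    for px, cell in pixel_to_cell.items():
        P[cell, px] = 1.0
    # Normalize rows so each cell averages its contributing pixels.
    counts = P.sum(axis=1, keepdims=True)
    P /= np.maximum(counts, 1.0)
    return P

# Toy example: 4 image pixels with C=2 channels, 2 BEV cells.
pixel_to_cell = {0: 0, 1: 0, 2: 1}   # pixel 3 hits no cell
P = build_projection_matrix(4, 2, pixel_to_cell)
img_feats = np.arange(8, dtype=float).reshape(4, 2)  # (num_pixels, C)
bev = P @ img_feats                                  # (num_cells, C)
```

Since P is precomputed, the per-frame cost is just the matrix product, which is the efficiency argument the abstract makes.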
Unleashing HyDRa: Hybrid Fusion, Depth Consistency and Radar for Unified 3D Perception
HyDRa achieves a new state-of-the-art for camera-radar fusion of 64.2 NDS (+1.8) and 58.4 AMOTA (+1.5) on the public nuScenes dataset.
GaussianFormer: Scene as Gaussians for Vision-Based 3D Semantic Occupancy Prediction
To address this, we propose an object-centric representation to describe 3D scenes with sparse 3D semantic Gaussians where each Gaussian represents a flexible region of interest and its semantic features.
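A simplified NumPy sketch of the object-centric representation: occupancy semantics at query locations are accumulated from a sparse set of 3D Gaussians. This assumes isotropic Gaussians with a scalar scale per Gaussian for brevity; GaussianFormer uses learned anisotropic covariances and a neural aggregation, so treat this as an illustration of the representation only.

```python
import numpy as np

def gaussian_occupancy(queries, means, scales, logits):
    """Accumulate semantic logits at query points from sparse
    isotropic 3D Gaussians (illustrative sketch).
    queries: (Q, 3), means: (G, 3), scales: (G,), logits: (G, K)."""
    # Squared distance from every query to every Gaussian center.
    d2 = ((queries[:, None, :] - means[None, :, :]) ** 2).sum(-1)  # (Q, G)
    # Gaussian weight of each center at each query point.
    w = np.exp(-0.5 * d2 / scales[None, :] ** 2)                   # (Q, G)
    # Weighted sum of per-Gaussian semantic logits.
    return w @ logits                                              # (Q, K)
```

Because only occupied regions need Gaussians, the number of primitives can be far smaller than the number of voxels in a dense grid.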