3D Semantic Occupancy Prediction

5 papers with code • 0 benchmarks • 0 datasets

Uses sparse LiDAR semantic labels for training and testing
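Because the ground-truth semantic labels come from sparse LiDAR points, evaluation is typically restricted to voxels that actually carry a label. A minimal sketch of this convention (the `ignore_index=255` marker for unlabeled voxels is an assumption, not taken from any specific benchmark):

```python
import numpy as np

def miou(pred, gt, num_classes, ignore_index=255):
    """Mean IoU over classes, computed only at labeled voxels."""
    mask = gt != ignore_index          # with sparse labels, skip unlabeled voxels
    pred, gt = pred[mask], gt[mask]
    ious = []
    for c in range(num_classes):
        inter = np.sum((pred == c) & (gt == c))
        union = np.sum((pred == c) | (gt == c))
        if union > 0:                  # only average over classes present
            ious.append(inter / union)
    return float(np.mean(ious))

pred = np.array([0, 1, 1, 2])
gt   = np.array([0, 1, 255, 2])        # third voxel is unlabeled
print(miou(pred, gt, num_classes=3))   # 1.0 — every labeled voxel is correct
```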

Most implemented papers

OccFormer: Dual-path Transformer for Vision-based 3D Semantic Occupancy Prediction

zhangyp15/occformer ICCV 2023

Vision-based perception for autonomous driving has undergone a transformation from bird's-eye-view (BEV) representations to 3D semantic occupancy.

PointOcc: Cylindrical Tri-Perspective View for Point-based 3D Semantic Occupancy Prediction

wzzheng/pointocc 31 Aug 2023

To address this, we propose a cylindrical tri-perspective view to represent point clouds effectively and comprehensively, and a PointOcc model to process them efficiently.
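The core idea can be sketched as follows: convert each point to cylindrical coordinates and pool its features onto three orthogonal planes (rho-theta, theta-z, rho-z). This is an illustrative toy version only; all bin counts, ranges, and names are assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
pts = rng.normal(scale=5.0, size=(100, 3))   # (x, y, z) points
feats = rng.random((100, 4))                 # per-point features

# Cartesian -> cylindrical coordinates.
rho = np.hypot(pts[:, 0], pts[:, 1])
theta = np.arctan2(pts[:, 1], pts[:, 0])
z = pts[:, 2]

def to_bin(v, lo, hi, n):
    """Discretize values into n bins over [lo, hi], clipping outliers."""
    return np.clip(((v - lo) / (hi - lo) * n).astype(int), 0, n - 1)

R, T, Z, C = 8, 8, 4, feats.shape[1]         # illustrative plane resolutions
ir = to_bin(rho, 0.0, 15.0, R)
it = to_bin(theta, -np.pi, np.pi, T)
iz = to_bin(z, -10.0, 10.0, Z)

planes = {"rho_theta": np.zeros((R, T, C)),
          "theta_z":   np.zeros((T, Z, C)),
          "rho_z":     np.zeros((R, Z, C))}

# Max-pool each point's features onto the three cylindrical planes.
for p in range(len(pts)):
    planes["rho_theta"][ir[p], it[p]] = np.maximum(planes["rho_theta"][ir[p], it[p]], feats[p])
    planes["theta_z"][it[p], iz[p]]   = np.maximum(planes["theta_z"][it[p], iz[p]], feats[p])
    planes["rho_z"][ir[p], iz[p]]     = np.maximum(planes["rho_z"][ir[p], iz[p]], feats[p])

print({k: v.shape for k, v in planes.items()})
```

A voxel's feature can then be recovered by indexing and combining the three plane features at its (rho, theta, z) location, which is what makes the representation compact relative to a dense 3D grid.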

InverseMatrixVT3D: An Efficient Projection Matrix-Based Approach for 3D Occupancy Prediction

danielming123/inversematrixvt3d 23 Jan 2024

In contrast, our approach leverages two projection matrices to store the static mapping relationships and matrix multiplications to efficiently generate global Bird's Eye View (BEV) features and local 3D feature volumes.
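The idea of a static projection matrix can be sketched in a few lines: a fixed matrix encodes which voxels map to which BEV cell, so collapsing a 3D feature volume to BEV becomes a single matrix multiplication. This toy example (all sizes and names are illustrative, not from the paper's code) averages voxel features over the height axis:

```python
import numpy as np

X, Y, Z, C = 4, 4, 2, 8          # voxel grid size and feature channels
N_vox, N_bev = X * Y * Z, X * Y  # flattened voxel and BEV cell counts

# Static mapping: BEV cell (x, y) aggregates all voxels (x, y, z).
# Built once; at runtime only the matmul below is needed.
P = np.zeros((N_bev, N_vox))
for x in range(X):
    for y in range(Y):
        for zi in range(Z):
            P[x * Y + y, (x * Y + y) * Z + zi] = 1.0 / Z   # mean over height

voxel_feats = np.random.rand(N_vox, C)   # per-voxel features
bev_feats = P @ voxel_feats              # one matmul yields BEV features
print(bev_feats.shape)                   # (16, 8)
```

A second projection matrix of the same kind can map image or voxel features into local 3D feature volumes; since both matrices are static, the mapping costs no per-frame geometry computation.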

Unleashing HyDRa: Hybrid Fusion, Depth Consistency and Radar for Unified 3D Perception

phi-wol/hydra 12 Mar 2024

HyDRa achieves a new state-of-the-art for camera-radar fusion of 64.2 NDS (+1.8) and 58.4 AMOTA (+1.5) on the public nuScenes dataset.

GaussianFormer: Scene as Gaussians for Vision-Based 3D Semantic Occupancy Prediction

huang-yh/gaussianformer 27 May 2024

To address this, we propose an object-centric representation to describe 3D scenes with sparse 3D semantic Gaussians where each Gaussian represents a flexible region of interest and its semantic features.
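The object-centric idea can be illustrated with a toy splatting step: a few isotropic semantic Gaussians are evaluated at voxel centers, and each voxel's semantics are the density-weighted sum of the Gaussians' features. This is a hedged sketch only; the Gaussians here are isotropic and hand-placed, unlike the learned, fully parameterized Gaussians in the paper:

```python
import numpy as np

means = np.array([[1.0, 1.0, 0.5],      # Gaussian centers (hand-picked)
                  [3.0, 3.0, 0.5]])
scales = np.array([0.8, 0.5])           # isotropic standard deviations
sem = np.array([[1.0, 0.0],             # per-Gaussian semantic features
                [0.0, 1.0]])            # (2 Gaussians, 2 classes)

# Centers of a 4x4x1 voxel grid.
xs, ys = np.meshgrid(np.arange(4) + 0.5, np.arange(4) + 0.5, indexing="ij")
centers = np.stack([xs, ys, np.full_like(xs, 0.5)], axis=-1).reshape(-1, 3)

# Density of each Gaussian at each voxel center: exp(-||d||^2 / (2 s^2)).
d2 = ((centers[:, None, :] - means[None]) ** 2).sum(-1)   # (16, 2)
density = np.exp(-0.5 * d2 / scales[None] ** 2)           # (16, 2)

occ = density @ sem                     # density-weighted semantic features
labels = occ.argmax(-1).reshape(4, 4)   # per-voxel semantic prediction
print(labels)
```

Because only a sparse set of Gaussians is stored instead of a dense voxel grid, the representation concentrates capacity on occupied regions of interest.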