The NYU-Depth V2 dataset comprises video sequences from a variety of indoor scenes, recorded by both the RGB and depth cameras of the Microsoft Kinect.
856 PAPERS • 20 BENCHMARKS
The Matterport3D dataset is a large RGB-D dataset for scene understanding in indoor environments. It contains 10,800 panoramic views inside 90 real building-scale scenes, constructed from 194,400 RGB-D images. Each scene is a residential building consisting of multiple rooms and floor levels, and is annotated with surface construction, camera poses, and semantic segmentation.
387 PAPERS • 5 BENCHMARKS
The dataset was collected using the Intel RealSense D435i camera, configured to produce synchronized accelerometer and gyroscope measurements at 400 Hz along with synchronized VGA-size (640 x 480) RGB and depth streams at 30 Hz. The depth frames are acquired using active stereo and are aligned to the RGB frames using the sensor's factory calibration. All measurements are timestamped.
18 PAPERS • 1 BENCHMARK
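The entry above describes 400 Hz inertial measurements synchronized with 30 Hz RGB-D frames via timestamps. A minimal sketch of how such streams can be associated, matching each frame to its nearest IMU sample by timestamp (the function name and the exact rates used here are illustrative assumptions, not part of the dataset's tooling):

```python
import numpy as np

# Hypothetical helper: for each camera frame timestamp, find the index
# of the temporally closest IMU sample. Assumes both timestamp arrays
# are sorted and expressed in the same clock/units.
def nearest_imu_indices(frame_ts, imu_ts):
    imu_ts = np.asarray(imu_ts)
    idx = np.searchsorted(imu_ts, frame_ts)      # insertion points
    idx = np.clip(idx, 1, len(imu_ts) - 1)
    left, right = imu_ts[idx - 1], imu_ts[idx]
    # pick whichever neighboring IMU sample is closer in time
    return np.where(frame_ts - left < right - frame_ts, idx - 1, idx)

frame_ts = np.arange(0.0, 1.0, 1 / 30)           # 30 Hz RGB/depth frames
imu_ts = np.arange(0.0, 1.0, 1 / 400)            # 400 Hz accel/gyro samples
matches = nearest_imu_indices(frame_ts, imu_ts)
```

With a 400 Hz IMU, each frame's matched sample is at most 1.25 ms away, which is why timestamped synchronization at these rates is sufficient for visual-inertial pipelines.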
TransCG is the first large-scale real-world dataset for transparent object depth completion and grasping. It contains 57,715 RGB-D images of 51 transparent objects and many opaque objects, captured from roughly 240 viewpoints across 130 real-world scenes. The samples were captured with two different cameras (RealSense D435 and L515).
5 PAPERS • 1 BENCHMARK
SuperCaustics is a simulation tool made in Unreal Engine for generating massive computer vision datasets that include transparent objects.
2 PAPERS • 1 BENCHMARK
Bosch Industrial Depth Completion Dataset (BIDCD) is an RGB-D dataset of static table-top scenes with industrial objects. The data were collected with a RealSense depth camera mounted on a robotic arm, i.e., from multiple points of view (POVs), approximately 60 per scene. Depth ground truth was generated with a customized pipeline that removes erroneous depth values; multi-view geometry was then applied to fuse the cleaned depth frames and fill in missing information. The fused scene mesh was back-projected to each POV, and finally a bilateral filter was applied to reduce the remaining holes.
1 PAPER • NO BENCHMARKS YET
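The BIDCD entry describes reducing residual depth holes with a filtering step after mesh back-projection. A simplified, hedged stand-in for that idea (not the actual BIDCD pipeline): fill invalid (zero) depth pixels with a Gaussian-weighted average of valid neighbors, where the kernel size and sigma are illustrative assumptions:

```python
import numpy as np

# Hypothetical sketch: fill zero-valued (invalid) depth pixels using a
# spatially Gaussian-weighted average of valid neighbors. A real
# bilateral filter would also weight by intensity/depth similarity.
def fill_depth_holes(depth, radius=2, sigma=1.0):
    h, w = depth.shape
    out = depth.copy()
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    weights = np.exp(-(xs**2 + ys**2) / (2 * sigma**2))  # spatial kernel
    for y in range(h):
        for x in range(w):
            if depth[y, x] > 0:
                continue                                  # valid pixel, keep
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            patch = depth[y0:y1, x0:x1]
            wpatch = weights[y0 - y + radius:y1 - y + radius,
                             x0 - x + radius:x1 - x + radius]
            valid = patch > 0
            if valid.any():
                out[y, x] = np.sum(patch[valid] * wpatch[valid]) / np.sum(wpatch[valid])
    return out

depth = np.full((5, 5), 2.0)
depth[2, 2] = 0.0                  # a single hole in a flat 2 m surface
filled = fill_depth_holes(depth)
```

On a flat surface this recovers the missing value exactly; in practice an edge-preserving (bilateral or joint-bilateral) filter is preferred so holes near depth discontinuities are not smeared across object boundaries.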
PLAD is a dataset in which sparse depth is provided by line-based visual SLAM; it was introduced to validate StructMDC.
1 PAPER • 1 BENCHMARK