DROID-SLAM is a deep learning-based SLAM system. It consists of recurrent iterative updates of camera pose and pixelwise depth through a Dense Bundle Adjustment layer. This layer leverages geometric constraints, improves accuracy and robustness, and enables a monocular system to handle stereo or RGB-D input without retraining. It builds a dense 3D map of the environment while simultaneously localizing the camera within that map.
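The core idea, iteratively refining pose and per-pixel depth together via a damped least-squares solve with the depth block eliminated by a Schur complement, can be illustrated on a toy problem. This is a hypothetical NumPy sketch, not the authors' code: a scalar "pose" `t` and per-pixel inverse depths `d` are jointly fit to observations `u ≈ t * d`, a 1-D stand-in for the reprojection constraints the real Dense Bundle Adjustment layer solves.

```python
import numpy as np

# Hypothetical sketch of a DBA-style recurrent update (not DROID-SLAM's code).
# Model: observation u_i ≈ t * d_i, where t is a scalar "pose" parameter and
# d_i are per-pixel inverse depths. Residual: r_i = t * d_i - u_i.
rng = np.random.default_rng(0)
n = 50
d_true = rng.uniform(0.5, 2.0, n)
t_true = 1.3
u = t_true * d_true                      # noiseless observations for clarity

t, d = 1.0, np.ones(n)                   # initial estimates
lam = 1e-4                               # damping (Levenberg-Marquardt style)
for _ in range(20):                      # recurrent iterative updates
    r = t * d - u
    # Jacobians: dr_i/dt = d_i, dr_i/dd_i = t.
    # The normal equations have arrow structure: a dense pose block plus a
    # diagonal depth block, so depths are eliminated cheaply (Schur complement),
    # mirroring how a DBA layer stays tractable for pixelwise depth.
    H_tt = np.sum(d * d) + lam
    H_td = t * d                         # pose-depth cross terms (per pixel)
    H_dd = t * t + lam                   # diagonal depth block
    b_t = -np.sum(d * r)
    b_d = -t * r
    S = H_tt - np.sum(H_td * H_td / H_dd)        # Schur complement on pose
    rhs = b_t - np.sum(H_td * b_d / H_dd)
    dt = rhs / S                                  # pose update
    dd = (b_d - H_td * dt) / H_dd                 # back-substituted depth update
    t += dt
    d += dd

print("max residual:", float(np.max(np.abs(t * d - u))))
```

Note the gauge ambiguity: only the product `t * d` is constrained, which is the toy analogue of monocular scale ambiguity; the damping term keeps the solve well-posed despite it.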
Source: DROID-SLAM: Deep Visual SLAM for Monocular, Stereo, and RGB-D Cameras
| Task | Papers | Share |
|---|---|---|
| Simultaneous Localization and Mapping | 2 | 33.33% |
| Point Cloud Generation | 1 | 16.67% |
| Point Cloud Registration | 1 | 16.67% |
| Semantic Segmentation | 1 | 16.67% |
| Pose Estimation | 1 | 16.67% |