Radar Object Detection
9 papers with code • 0 benchmarks • 2 datasets
The radar object detection (ROD) task aims to classify and localize objects in 3D purely from the radar's radio-frequency (RF) images.
Benchmarks
These leaderboards are used to track progress in Radar Object Detection
Libraries
Use these libraries to find Radar Object Detection models and implementations
Most implemented papers
K-Radar: 4D Radar Object Detection for Autonomous Driving in Various Weather Conditions
In this work, we introduce KAIST-Radar (K-Radar), a novel large-scale object detection dataset and benchmark that contains 35K frames of 4D Radar tensor (4DRT) data with power measurements along the Doppler, range, azimuth, and elevation dimensions, together with carefully annotated 3D bounding box labels of objects on the roads.
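The 4DRT described above is a power cube indexed along Doppler, range, azimuth, and elevation. As a rough sketch of how such a tensor is handled (the bin counts below are illustrative assumptions, not the actual K-Radar cube dimensions):

```python
import numpy as np

# Hypothetical 4D Radar tensor (4DRT): power measurements indexed by
# (Doppler, range, azimuth, elevation). Bin counts are illustrative only.
n_doppler, n_range, n_azimuth, n_elevation = 64, 256, 107, 37
rng = np.random.default_rng(0)
tensor_4drt = rng.random(
    (n_doppler, n_range, n_azimuth, n_elevation)
).astype(np.float32)

# A common preprocessing step: collapse the elevation axis to get a
# range-azimuth-Doppler cube, then sum over Doppler for a bird's-eye view.
rad_cube = tensor_4drt.sum(axis=3)   # shape (Doppler, range, azimuth)
bev_map = rad_cube.sum(axis=0)       # shape (range, azimuth)
print(bev_map.shape)                 # (256, 107)
```

Detectors then regress 3D boxes from features extracted over views like `bev_map` or the full cube.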
RODNet: Radar Object Detection Using Cross-Modal Supervision
Radar is usually more robust than the camera in severe driving scenarios, e.g., weak/strong lighting and bad weather.
RODNet: A Real-Time Radar Object Detection Network Cross-Supervised by Camera-Radar Fused Object 3D Localization
Finally, we propose a method to evaluate the object detection performance of the RODNet.
RADDet: Range-Azimuth-Doppler based Radar Object Detection for Dynamic Road Users
In this paper, we collect a novel radar dataset that contains radar data in the form of Range-Azimuth-Doppler tensors, along with bounding boxes on the tensor for dynamic road users, category labels, and 2D bounding boxes on the Cartesian bird's-eye-view range map.
DAROD: A Deep Automotive Radar Object Detector on Range-Doppler maps
Because raw-data automotive radar datasets are scarce and such radar sensors have low resolution, automotive radar object detection has been explored far less with deep learning models than camera- and lidar-based approaches.
A recurrent CNN for online object detection on raw radar frames
Exploiting time information (e.g., multiple frames) has been shown to help better capture the dynamics of objects and, therefore, the variation in their shape.
T-FFTRadNet: Object Detection with Swin Vision Transformers from Raw ADC Radar Signals
Object detection using Frequency Modulated Continuous Wave (FMCW) radar is becoming increasingly popular in the field of autonomous systems.
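The "raw ADC signals" in question are beat signals from FMCW chirps: a target at range R produces a beat frequency proportional to R, recovered by a range FFT. A minimal simulation sketch, with illustrative radar parameters (not taken from the paper):

```python
import numpy as np

# Illustrative FMCW parameters (assumptions, not from T-FFTRadNet).
c = 3e8                    # speed of light (m/s)
bandwidth = 1e9            # chirp bandwidth (Hz)
chirp_time = 50e-6         # chirp duration (s)
slope = bandwidth / chirp_time
fs = 10e6                  # ADC sample rate (Hz)
n_samples = 512

# Simulate the raw ADC beat signal for a single target at 30 m:
# beat frequency f_b = 2 * R * slope / c.
target_range = 30.0
beat_freq = 2 * target_range * slope / c
t = np.arange(n_samples) / fs
adc = np.cos(2 * np.pi * beat_freq * t)

# Range FFT: the peak bin maps back to range via c * fs / (2 * slope * N).
spectrum = np.abs(np.fft.rfft(adc))
peak_bin = int(np.argmax(spectrum[1:])) + 1
range_per_bin = c * fs / (2 * slope * n_samples)
print(round(peak_bin * range_per_bin, 1))  # ~30.0 (meters)
```

Networks that operate on raw ADC data, as in this paper, learn such transforms (or features beyond them) rather than relying on a fixed FFT pipeline.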
RadarFormer: Lightweight and Accurate Real-Time Radar Object Detection Model
This improvement was associated with the increasing use of LiDAR sensors and point cloud data to facilitate the task of object detection and recognition in autonomous driving.
RTNH+: Enhanced 4D Radar Object Detection Network using Combined CFAR-based Two-level Preprocessing and Vertical Encoding
Four-dimensional (4D) Radar is a useful sensor for 3D object detection and the relative radial speed estimation of surrounding objects under various weather conditions.
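The CFAR-based preprocessing mentioned in the title refers to constant false alarm rate detection, which thresholds each cell against the locally estimated noise floor. A minimal 1D cell-averaging CFAR (CA-CFAR) sketch; the window sizes and threshold factor are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def ca_cfar(power, n_train=8, n_guard=2, scale=10.0):
    """Return a boolean detection mask over a 1D power profile.

    For each cell under test (CUT), the noise level is the mean of
    n_train training cells on each side, skipping n_guard guard cells
    that may contain target leakage.
    """
    n = len(power)
    detections = np.zeros(n, dtype=bool)
    for i in range(n_train + n_guard, n - n_train - n_guard):
        left = power[i - n_guard - n_train : i - n_guard]
        right = power[i + n_guard + 1 : i + n_guard + 1 + n_train]
        noise = np.mean(np.concatenate([left, right]))
        detections[i] = power[i] > scale * noise
    return detections

rng = np.random.default_rng(1)
profile = rng.exponential(1.0, 128)  # simulated noise floor
profile[64] += 40.0                  # strong target return
hits = ca_cfar(profile)
print(hits[64])  # True: the target cell exceeds the adaptive threshold
```

Applying such a detector to the 4D Radar tensor keeps only cells that stand out from the local noise, greatly sparsifying the input before the detection network.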