Detection and Classification of Unmanned Aerial Vehicles with Camera and Radar
Synopsis
This paper presents a real-time system for unmanned aerial vehicle (UAV) detection that combines vision-based deep learning with FMCW range–Doppler radar processing. The visual subsystem runs a YOLOv8 model on the NVIDIA Jetson Orin Nano platform with an industrial Basler camera. The radar subsystem, implemented on a Raspberry Pi, applies Gaussian smoothing and background subtraction, then detects targets by threshold-based segmentation followed by DBSCAN clustering in the range domain. UDP communication between the subsystems enables temporal synchronization of detections and their fusion within a predefined time window. Experimental results show that the combined approach improves detection reliability over either sensing modality alone, particularly under challenging environmental conditions. The paper analyzes the system architecture and computational performance on edge devices, and discusses current limitations and directions for further improvement.
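The radar processing chain summarized above (smoothing, background subtraction, thresholding, range-domain clustering) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the kernel width, the threshold factor `k`, and the DBSCAN-style parameters `eps` and `min_samples` are all assumed values chosen for the example.

```python
import numpy as np

def detect_ranges(profile, background, eps=3, min_samples=2, k=2.5):
    """Toy range-domain detection sketch. All parameter values
    are illustrative assumptions, not taken from the paper."""
    # Gaussian smoothing with a small normalized kernel
    x = np.arange(-3, 4)
    kernel = np.exp(-x**2 / 2.0)
    kernel /= kernel.sum()
    smoothed = np.convolve(profile, kernel, mode="same")

    # Background subtraction against a precomputed background estimate
    residual = smoothed - background

    # Threshold-based segmentation: keep bins above mean + k * std
    thr = residual.mean() + k * residual.std()
    hits = np.flatnonzero(residual > thr)

    # DBSCAN-style grouping in 1-D: bins closer than `eps` share a cluster,
    # and a cluster is kept only if it has at least `min_samples` bins
    clusters, current = [], []
    for b in hits:
        if current and b - current[-1] > eps:
            if len(current) >= min_samples:
                clusters.append(current)
            current = []
        current.append(b)
    if len(current) >= min_samples:
        clusters.append(current)
    return [int(np.mean(c)) for c in clusters]  # one range bin per target
```

In 1-D the density-based grouping reduces to merging nearby above-threshold bins, which is why a simple single-pass scan stands in for a full DBSCAN here.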
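The UDP-based fusion step can likewise be sketched: each subsystem emits timestamped detection datagrams, and the fusion side pairs a camera detection with the nearest-in-time radar detection only if their timestamps fall within the window. The message schema, the window width, and the nearest-neighbour pairing rule are assumptions for illustration; the paper does not specify them here.

```python
import json
import socket
import time

FUSION_WINDOW_S = 0.1  # illustrative window width, not the paper's value

def send_detection(sock, addr, sensor, payload):
    """Emit a small timestamped JSON datagram over UDP (hypothetical schema)."""
    msg = {"sensor": sensor, "t": time.time(), **payload}
    sock.sendto(json.dumps(msg).encode(), addr)

def fuse(camera_dets, radar_dets, window=FUSION_WINDOW_S):
    """Pair each camera detection with the nearest-in-time radar detection;
    the pair counts as fused only if |dt| <= window."""
    fused = []
    for c in camera_dets:
        best = min(radar_dets, key=lambda r: abs(r["t"] - c["t"]), default=None)
        if best is not None and abs(best["t"] - c["t"]) <= window:
            fused.append((c, best))
    return fused
```

A datagram socket keeps the subsystems loosely coupled: a lost packet simply means one unfused detection rather than a stalled pipeline.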
License

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.