Improving the performance of low visibility 3D detection for autonomous vehicles with camera-radar fusion

Authors

  • Ruan Bispo Departamento de Engenharia Elétrica e de Computação, Escola de Engenharia de São Carlos, Universidade de São Paulo
  • Bruno Borges Departamento de Engenharia Elétrica e de Computação, Escola de Engenharia de São Carlos, Universidade de São Paulo
  • Valdir Grassi Jr. Departamento de Engenharia Elétrica e de Computação, Escola de Engenharia de São Carlos, Universidade de São Paulo

Keywords:

Self-driving cars, sensor fusion, nuScenes, camera-radar, adverse weather conditions

Abstract

Since the emergence of autonomous vehicles, tasks such as object detection have become increasingly important: the adoption or rejection of the technology depends on accurately locating and identifying vehicles and pedestrians on the streets. Human drivers can efficiently recognize these obstacles and estimate their distance under any weather and lighting conditions, so a feasibility requirement for deploying autonomous vehicles is that they perform the same task equally well or better, with comparable precision and execution time. This work therefore modifies a 3D object detection architecture based on camera-radar sensor fusion to reduce the processing time, data volume, and memory required by the base work. Results demonstrated a significant reduction in computational cost while keeping detection metrics at the same level as the base work.

Published

2024-10-18

Section

Articles