Distance Estimation of Visual Landmarks via Image Segmentation and Gaussian Processes

Authors

  • Serenini, D. Programa de Pós-Graduação em Engenharia de Sistemas e Automação, Universidade Federal de Lavras, MG
  • Lima, D. A. Departamento de Automática, Universidade Federal de Lavras, MG
  • Santos, R. A. Programa de Pós-Graduação em Engenharia de Sistemas e Automação, Universidade Federal de Lavras, MG
  • Barbosa, B. H. G. Departamento de Automática, Universidade Federal de Lavras, MG

Keywords:

Autonomous vehicle, Gaussian Process, Computer Vision, Deep Learning.

Abstract

In the constantly evolving landscape of autonomous vehicle technologies, vehicle localization accuracy remains a significant challenge. This work proposes an algorithm to estimate the distance between a camera-equipped vehicle and environmental landmarks using prediction techniques. Using Python and a real database of approximately 8000 samples collected by an autonomous vehicle, the performance of YOLO-v8 networks for image detection and segmentation, DETR (Detection Transformer), and SAM (Segment Anything Model), combined with a GPR (Gaussian Process Regression) model, was evaluated. The YOLO-v8 network was superior in object detection, with an average recall of 0.76 and mAP@0.5 of 0.891, showing that segmentation masks are also effective for the detection task. Integrating YOLO-v8 Segmentation with SAM improved environmental perception, achieving a Dice coefficient of 71.039% and reducing the distance-prediction error to an MAE of 1.15 meters. However, this combination increased processing time, posing challenges for real-time applications on the hardware used. Including segmentation features significantly improved the accuracy of the GPR model, demonstrating the potential of computer vision techniques to improve localization and decision-making in autonomous vehicles.
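The core idea of the pipeline can be illustrated as follows: segmentation-derived features of a detected landmark (e.g., mask area and bounding-box height, which shrink with distance) are fed to a Gaussian Process Regression model that predicts metric distance. This is a minimal sketch using scikit-learn and synthetic data; the feature choices and the geometric relations used to generate them are illustrative assumptions, not the authors' actual dataset or feature set.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Synthetic data (assumption): mask area decays with squared distance,
# bounding-box height with distance, mimicking perspective projection.
n = 200
true_dist = rng.uniform(2.0, 30.0, n)                    # meters
mask_area = 5e4 / true_dist**2 + rng.normal(0, 20, n)    # pixels^2
bbox_h = 800.0 / true_dist + rng.normal(0, 2, n)         # pixels
X = np.column_stack([mask_area, bbox_h])

# GPR with an RBF kernel per feature plus a noise term.
kernel = 1.0 * RBF(length_scale=[100.0, 10.0]) + WhiteKernel(noise_level=1.0)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gpr.fit(X[:150], true_dist[:150])

# Predict distances (with uncertainty) on held-out samples.
pred, std = gpr.predict(X[150:], return_std=True)
mae = np.mean(np.abs(pred - true_dist[150:]))
print(f"held-out MAE: {mae:.2f} m")
```

A practical advantage of GPR here is that `std` gives a per-sample uncertainty, which a downstream localization module can use to weight or reject landmark observations.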

Published

2024-10-18

Section

Articles