Applying Deep Learning for cockpit segmentation in the context of mixed reality

Authors

  • Alexandre Leles Sousa, Laboratório de Robótica, Sistemas Inteligentes e Complexos (RobSIC), Instituto de Ciências Tecnológicas, Universidade Federal de Itajubá, Campus Itabira, MG.
  • Pedro de Oliveira Nielson, Laboratório de Robótica, Sistemas Inteligentes e Complexos (RobSIC), Instituto de Ciências Tecnológicas, Universidade Federal de Itajubá, Campus Itabira, MG.
  • Erick Oliveira Rodrigues, Universidade Tecnológica Federal do Paraná (UTFPR), Campus Pato Branco, PR.
  • Rafael Francisco dos Santos, Laboratório de Robótica, Sistemas Inteligentes e Complexos (RobSIC), Instituto de Ciências Tecnológicas, Universidade Federal de Itajubá, Campus Itabira, MG.
  • Giovani Bernardes Vitor, Laboratório de Robótica, Sistemas Inteligentes e Complexos (RobSIC), Instituto de Ciências Tecnológicas, Universidade Federal de Itajubá, Campus Itabira, MG.

Keywords:

Image Processing, Computer Vision, Mixed Reality, Convolutional Neural Network, Semantic Segmentation

Abstract

Computer vision is an area that has been growing continuously. With the advance of first-person-view technologies, new development opportunities have emerged within the field. Mixed reality provides virtual environments in which objects from the physical world are displayed in real time. This demands attention to the user's immersion in the simulated environment, bringing it ever closer to the intended reality. This paper proposes an image-processing approach that segments images into foreground and background in order to ease the fusion of virtual and real imagery. To that end, the present work captures real images of the user operating the CAT793F off-highway truck simulator through a camera and segments those images with artificial intelligence techniques. The convolutional neural network architectures U-Net and DeepLabV3+ are applied to perform the image segmentation. As a result, metrics of around 90% accuracy were obtained and the best model was determined.
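The compositing step the abstract describes, once a segmentation model has produced a foreground mask, can be sketched as a per-pixel blend. The snippet below is a minimal illustration, not the authors' implementation: it assumes the network output has already been thresholded into a binary mask (1 = user/foreground, 0 = background) and uses NumPy broadcasting to merge the real camera frame with the rendered virtual scene.

```python
import numpy as np

def composite(real_frame, virtual_scene, mask):
    """Blend a real camera frame into a virtual scene using a
    per-pixel binary foreground mask (1 = foreground, 0 = background)."""
    # Add a channel axis so the H x W mask broadcasts over the RGB channels.
    mask3 = mask[..., None].astype(real_frame.dtype)
    return mask3 * real_frame + (1 - mask3) * virtual_scene

# Toy 4x4 example: the left half of the frame is treated as foreground.
real = np.full((4, 4, 3), 200, dtype=np.uint8)    # stand-in camera frame
virtual = np.full((4, 4, 3), 30, dtype=np.uint8)  # stand-in rendered scene
mask = np.zeros((4, 4), dtype=np.uint8)
mask[:, :2] = 1

out = composite(real, virtual, mask)  # foreground pixels keep the camera values
```

In practice the mask would come from the U-Net or DeepLabV3+ output (e.g. an argmax over class logits), and a soft mask in [0, 1] could be used instead of a binary one to feather the foreground edges.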

Published

2024-10-18

Section

Articles