Recognition and Translation of Libras Signs into Written Portuguese Using Deep Neural Networks

  • Jhon Lucas S. Silva, Instituto de Informática, Universidade Federal de Goiás, GO
  • Gabriel S. Vieira, Instituto Federal Goiano, Campus Urutaí, Urutaí, GO
  • Afonso U. Fonseca, Instituto de Informática, Universidade Federal de Goiás, GO
  • Fabrizzio Soares, Instituto de Informática, Universidade Federal de Goiás, GO
Keywords: Sign language recognition, sign language processing, continuous sign language recognition, machine learning, deep learning, computer vision, sign language

Abstract

The Brazilian Sign Language (Libras) enables deaf people to understand and interact with others, facilitating their access to culture, knowledge, and social integration. However, there are only a few solutions that reduce the communication barrier between deaf and hearing people. In this work, we propose a solution based on deep neural networks for sign language recognition. This is an exploratory study in which signs in Libras (“Hello”, “Good morning”, and “Thank you”) are used for training, recognition, and classification in continuous, real-time mode. We compared two machine learning models trained on the LSTM and BiLSTM neural network architectures. The results indicate the superior accuracy of the LSTM model, which reached 84.71% compared to the 77.07% achieved by the BiLSTM model. Therefore, the LSTM architecture is more suitable for classifying the signs investigated in this study, and its use in Libras sign recognition systems proves to be viable.
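
The abstract does not specify the feature pipeline or layer configuration, so the following is only a minimal sketch of the comparison it describes: an LSTM classifier versus a BiLSTM classifier over the three signs. The sequence length, per-frame feature dimension, layer sizes, and the use of Keras are illustrative assumptions, not the authors' implementation.

    # Sketch: LSTM vs. BiLSTM sequence classifiers for 3 Libras signs.
    # All shapes and hyperparameters below are assumptions for illustration.
    import numpy as np
    from tensorflow.keras import layers, models

    SEQ_LEN, N_FEATURES, N_CLASSES = 30, 126, 3  # "Hello", "Good morning", "Thank you"

    def build_model(bidirectional: bool) -> models.Model:
        """Builds a sequence classifier; `bidirectional` toggles LSTM vs. BiLSTM."""
        recurrent = layers.LSTM(64)
        if bidirectional:
            recurrent = layers.Bidirectional(recurrent)
        return models.Sequential([
            layers.Input(shape=(SEQ_LEN, N_FEATURES)),
            recurrent,
            layers.Dense(32, activation="relu"),
            layers.Dense(N_CLASSES, activation="softmax"),
        ])

    lstm_model = build_model(bidirectional=False)
    bilstm_model = build_model(bidirectional=True)
    for m in (lstm_model, bilstm_model):
        m.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    # Dummy data only to demonstrate the training call; real inputs would be
    # per-frame feature sequences extracted from video of the signs.
    X = np.random.rand(8, SEQ_LEN, N_FEATURES).astype("float32")
    y = np.random.randint(0, N_CLASSES, size=(8,))
    lstm_model.fit(X, y, epochs=1, verbose=0)

In such a setup, both models consume the same feature sequences, and the only architectural difference is whether the recurrent layer reads the sequence in one direction or in both, which is the contrast the reported accuracies (84.71% vs. 77.07%) evaluate.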
Published
2022-10-19
Section
Articles