Project-Based Learning Environment: Integration of an Educational Robot Arm with Computer Vision and ROS

  • Cleyson Fernando Araújo Teixeira, Control and Automation Engineering Department, Federal University of Ouro Preto
  • Kássia Fernanda da Silva, Control and Automation Engineering Department, Federal University of Ouro Preto
  • Anna Cristyna Martins Barros, Control and Automation Engineering Department, Federal University of Ouro Preto
  • Santino Martins Bitaraes, Control and Automation Engineering Department, Federal University of Ouro Preto
  • Alexandre Magno de S. Thiago Filho, Control and Automation Engineering Department, Federal University of Ouro Preto
  • Paulo Henrique dos Santos, Control and Automation Engineering Department, Federal University of Ouro Preto
  • José Alberto Naves Júnior Cocota, Control and Automation Engineering Department, Federal University of Ouro Preto
Keywords: Project-based learning (PBL), Robot Operating System (ROS), Computer vision, Mitsubishi Melfa RV-2AJ robot, Computer-vision-assisted programming

Abstract

Robotics is the science of controlling mechanical systems through electrical systems and computational techniques. It has a wide range of applications, but to make it easier to learn, methodologies that improve student performance and shorten the learning curve, such as project-based learning, are indispensable. Project-based learning diverges from the conventional approach by making students the agents of their own learning, giving them the power of choice and substantial control over the entire process. This article presents the project developed by students of the "Robotics and Its Elements" course taught at the Universidade Federal de Ouro Preto (UFOP). The project applied concepts, approaches, and techniques to build a robotic cell (a Mitsubishi Melfa RV-2AJ and its peripherals) that uses the ROS framework together with computer vision resources. Once completed, the cell was able to perform repetitive tasks such as detecting, picking up, and properly manipulating specific physical objects. To make these tasks possible, a Kinect camera was used to obtain the depth and location of the objects within the robot's reach. In addition, a detection mechanism built by combining the open-source computer vision library OpenCV with the HSV color space was an important component, so that color calibration and object orientation could be handled properly. Finally, ROS was used to connect the robot to its operating environment, handling the data exchange and input signals that drive the robot's motion.
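
As an illustrative sketch only (not the authors' actual implementation), the snippet below shows how an HSV-based color detector of the kind described above could be written with OpenCV in Python: the frame is converted to HSV, thresholded against calibrated bounds, and the centroid of the largest matching blob is returned. The HSV bounds, the file name `frame.png`, and the function name are hypothetical; in a real cell the detected pixel would then be combined with the Kinect depth image and sent to the robot over ROS.

```python
import cv2
import numpy as np


def detect_object_center(bgr_image, hsv_lower, hsv_upper):
    """Return the (u, v) pixel centroid of the largest blob inside the HSV range, or None."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, hsv_lower, hsv_upper)
    # Morphological opening removes small speckles before the contour search.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    m = cv2.moments(largest)
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])


if __name__ == "__main__":
    # Hypothetical HSV bounds for a red target; real values come from the color calibration step.
    lower = np.array([0, 120, 70], dtype=np.uint8)
    upper = np.array([10, 255, 255], dtype=np.uint8)
    frame = cv2.imread("frame.png")  # e.g. one RGB frame grabbed from the Kinect
    if frame is not None:
        print("object centroid (u, v):", detect_object_center(frame, lower, upper))
```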

Published: 2020-12-07
Section: Articles