Hybrid Particle Swarm Optimization Algorithm for Federated Learning of Artificial Neural Networks

Authors

  • Eliezer T. S. Sanhá, Graduate Program in Electrical Engineering, Universidade Federal de Minas Gerais (UFMG), Av. Antônio Carlos 6627, 31270-901, Belo Horizonte, Brazil
  • Fabricio Javier Erazo-Costa, Department of Electrical Engineering, Universidade Federal de Ouro Preto (UFOP)
  • Frederico Gadelha Guimarães, Department of Computer Science, Universidade Federal de Minas Gerais (UFMG)

DOI:

https://doi.org/10.20906/CBA2024/4825

Keywords:

Machine Learning, Neural Networks, Convolutional Neural Networks, Federated Learning, Particle Swarm Optimization

Abstract

The aggregation algorithms of Federated Learning (FL) often overlook the balance between performance and communication cost. In this study, we therefore propose a new federated training method called FLPSO-SGD, based on the hybrid Particle Swarm Optimization-Stochastic Gradient Descent (PSO-SGD) algorithm, to address both aspects. In contrast to classical FL training techniques, our method sends the clients' errors to the server rather than their model parameters, while the PSO-SGD algorithm conducts training on the client side. The algorithms were evaluated on classification problems, using the UC Irvine datasets for PSO-SGD and the CIFAR-10 dataset for FLPSO-SGD. The results highlight the promising performance of the PSO-SGD algorithm. Furthermore, the FLPSO-SGD algorithm achieved higher global-training accuracy than the FedAvg and FedPSO techniques. These results suggest that FLPSO-SGD is an effective alternative for FL training, particularly in applications where the clients' communication bandwidth is restricted.
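The abstract describes a hybrid scheme in which PSO's swarm updates are combined with gradient descent on each client. The paper itself gives no pseudocode here, so the following is only a minimal sketch of the general PSO-SGD idea on a toy objective: each particle holds a candidate parameter vector, and its velocity blends the classical inertia, personal-best, and global-best terms with a local gradient step. All names and coefficients are illustrative assumptions, not taken from the paper.

```python
# Hybrid PSO-SGD sketch (illustrative only; not the authors' exact method).
import numpy as np

rng = np.random.default_rng(0)

def loss(w):
    # Toy convex objective standing in for a neural-network loss.
    return np.sum((w - 3.0) ** 2, axis=-1)

def grad(w):
    return 2.0 * (w - 3.0)

n_particles, dim = 10, 5
pos = rng.normal(size=(n_particles, dim))   # candidate parameter vectors
vel = np.zeros_like(pos)
pbest = pos.copy()                          # personal bests
pbest_val = loss(pos)
gbest = pbest[np.argmin(pbest_val)].copy()  # global best

w_inertia, c1, c2, lr = 0.7, 1.5, 1.5, 0.05  # assumed hyperparameters
for _ in range(200):
    r1, r2 = rng.random((2, n_particles, dim))
    # PSO terms pull toward the personal and global bests;
    # the SGD term descends each particle's local gradient.
    vel = (w_inertia * vel
           + c1 * r1 * (pbest - pos)
           + c2 * r2 * (gbest - pos)
           - lr * grad(pos))
    pos = pos + vel
    vals = loss(pos)
    improved = vals < pbest_val
    pbest[improved] = pos[improved]
    pbest_val[improved] = vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print(float(loss(gbest)))
```

In the federated setting the abstract describes, each client would run an update of this kind locally and report only its error (fitness) to the server, which is what keeps the per-round communication small compared with exchanging full parameter vectors as FedAvg does.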

Published

2024-10-18

Section

Articles