Abstract
Aggregation methods in Federated Learning (FL) play a fundamental role in the performance and convergence of the global model. In this paper, we propose a novel aggregation strategy based on the concept of federated neural velocity. Neural velocity estimates the rate at which a neuron's learned function changes, providing insight into model convergence. Leveraging this property, we design a dynamic training approach in which the aggregation of the central model is adapted according to the neural velocity of the clients participating in training. We validate our method on multiple datasets under both Independent and Identically Distributed (IID) and non-IID data distributions, demonstrating that it improves model performance and robustness while addressing challenges related to data heterogeneity and resource management on edge devices. Moreover, owing to its modular nature, our approach can be seamlessly integrated into advanced federated learning frameworks, including client selection strategies, to further improve training efficiency.
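As a rough illustration of the idea summarized above (the precise definition of neural velocity and the actual aggregation rule are given in the body of the paper, not here), one plausible velocity-adapted variant of weighted federated averaging can be written as
$$ w^{t+1} \;=\; \sum_{k=1}^{K} \alpha_k^{t}\, w_k^{t+1}, \qquad \alpha_k^{t} \;=\; \frac{g\!\left(v_k^{t}\right)}{\sum_{j=1}^{K} g\!\left(v_j^{t}\right)}, $$
where $w_k^{t+1}$ denotes client $k$'s locally updated model at round $t$, $v_k^{t}$ an estimate of that client's neural velocity, and $g(\cdot)$ a monotone reweighting function. The symbols $\alpha_k^{t}$, $v_k^{t}$, and $g$ are introduced here purely for illustration and are assumptions, not the notation or weighting used by the paper itself.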