Show simple item record

dc.contributor.author: Liu, X
dc.contributor.author: Wang, S
dc.contributor.author: Deng, Y
dc.contributor.author: Nallanathan, A
dc.date.accessioned: 2024-07-11T10:57:18Z
dc.date.available: 2024-07-11T10:57:18Z
dc.date.issued: 2023-11-08
dc.identifier.citation: X. Liu, S. Wang, Y. Deng and A. Nallanathan, "Adaptive Federated Pruning in Hierarchical Wireless Networks," in IEEE Transactions on Wireless Communications, vol. 23, no. 6, pp. 5985-5999, June 2024, doi: 10.1109/TWC.2023.3329450.
dc.subject: Computational modeling; Servers; Adaptation models; Training; Resource management; Convergence; Analytical models; Hierarchical wireless network; federated pruning; machine learning; communication and computation latency
dc.identifier.issn: 1536-1276
dc.identifier.uri: https://qmro.qmul.ac.uk/xmlui/handle/123456789/98006
dc.description.abstract: Federated Learning (FL) is a promising privacy-preserving distributed learning framework where a server aggregates models updated by multiple devices without accessing their private datasets. Hierarchical FL (HFL), as a device-edge-cloud aggregation hierarchy, can enjoy both the cloud server's access to more datasets and the edge servers' efficient communications with devices. However, the learning latency increases with the HFL network scale due to the increasing number of edge servers and devices with limited local computation capability and communication bandwidth. To address this issue, in this paper, we introduce model pruning for HFL in wireless networks to reduce the neural network scale. We present a convergence analysis of an upper bound on the l2-norm of gradients for HFL with model pruning, analyze the computation and communication latency of the proposed model pruning scheme, and formulate an optimization problem to maximize the convergence rate under a given latency threshold by jointly optimizing the pruning ratio and wireless resource allocation. By decoupling the optimization problem and using Karush-Kuhn-Tucker (KKT) conditions, closed-form solutions for the pruning ratio and wireless resource allocation are derived. Simulation results show that our proposed HFL with model pruning achieves learning accuracy similar to that of HFL without model pruning while reducing communication cost by about 50%.
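The abstract combines model pruning with hierarchical aggregation. As a hypothetical illustration of the pruning step alone, the sketch below zeroes out the smallest-magnitude fraction of a weight array for a given pruning ratio. The function name and the use of magnitude-based pruning with NumPy are assumptions for illustration; the paper's joint optimization of the pruning ratio with wireless resource allocation is not modeled here.

```python
import numpy as np

def prune_by_magnitude(weights: np.ndarray, pruning_ratio: float) -> np.ndarray:
    """Return a copy of `weights` with the smallest-magnitude fraction zeroed.

    `pruning_ratio` is the fraction of entries removed (0 keeps everything).
    This is a generic magnitude-pruning sketch, not the paper's exact scheme.
    """
    flat = np.abs(weights).ravel()
    k = int(pruning_ratio * flat.size)  # number of entries to prune
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask
```

A smaller model after pruning shrinks both the local computation and the uplink payload, which is the latency trade-off the abstract's optimization problem balances against convergence rate.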
dc.format.extent: 5985-5999
dc.publisher: IEEE
dc.relation.ispartof: IEEE Transactions on Wireless Communications
dc.rights: © 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
dc.title: Adaptive Federated Pruning in Hierarchical Wireless Networks
dc.type: Article
dc.identifier.doi: 10.1109/TWC.2023.3329450
pubs.issue: 6
pubs.notes: Not known
pubs.publication-status: Published
pubs.volume: 23
rioxxterms.funder: Default funder
rioxxterms.identifier.project: Default project
rioxxterms.funder.project: b215eee3-195d-4c4f-a85d-169a4331c138

