dc.contributor.author | Zheng, Z | |
dc.contributor.author | Deng, Y | |
dc.contributor.author | Liu, X | |
dc.contributor.author | Nallanathan, A | |
dc.date.accessioned | 2024-07-16T14:40:16Z | |
dc.date.available | 2024-07-16T14:40:16Z | |
dc.date.issued | 2023-01-01 | |
dc.identifier.issn | 2334-0983 | |
dc.identifier.uri | https://qmro.qmul.ac.uk/xmlui/handle/123456789/98181 | |
dc.description.abstract | The emerging field of federated learning (FL) offers great potential for edge intelligence while protecting data privacy. However, as the system grows in scale or becomes more heterogeneous, new challenges arise, such as spectrum shortage and straggler issues. These issues can potentially be addressed by over-the-air computation (AirComp) and asynchronous FL, respectively; however, combining the two is difficult due to their conflicting requirements. In this paper, we propose a novel asynchronous FL scheme with AirComp operating in a time-triggered manner (async-AirFed). Conventional asynchronous aggregation requires historical data for model updates, which can cause the accumulation of channel noise and interference when AirComp is applied. To address this issue, we propose a simple but effective truncation method that retains only a limited length of historical data. Convergence analysis shows that the proposed async-AirFed converges on non-convex objective functions at a sub-linear rate. Simulation results show that the proposed scheme reaches 85% accuracy more than 34% faster than the benchmarks, improving time-utilization efficiency and reducing the impact of staleness and the channel. | en_US
dc.format.extent | 1345 - 1350 | |
dc.publisher | IEEE | en_US |
dc.title | Asynchronous Federated Learning via Over-the-Air Computation | en_US |
dc.type | Conference Proceeding | en_US |
dc.rights.holder | © 2024 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | |
dc.identifier.doi | 10.1109/GLOBECOM54140.2023.10437951 | |
pubs.notes | Not known | en_US |
pubs.publication-status | Published | en_US |
rioxxterms.funder | Default funder | en_US |
rioxxterms.identifier.project | Default project | en_US |
rioxxterms.funder.project | b215eee3-195d-4c4f-a85d-169a4331c138 | en_US |