Show simple item record

dc.contributor.author: Ji, Z
dc.contributor.author: Kiani, AK
dc.contributor.author: Qin, Z
dc.contributor.author: Ahmad, R
dc.date.accessioned: 2020-11-19T10:18:43Z
dc.date.available: 2020-10-31
dc.date.available: 2020-11-19T10:18:43Z
dc.date.issued: 2020-11-04
dc.identifier.issn: 2162-2337
dc.identifier.uri: https://qmro.qmul.ac.uk/xmlui/handle/123456789/68493
dc.description.abstract: Device-to-Device (D2D) communication can be used to improve system capacity and energy efficiency (EE) in cellular networks. One of the critical challenges in D2D communications is to extend network lifetime by efficient and effective resource management. Deep reinforcement learning (RL) provides a promising solution for resource management in wireless communication systems. This letter aims to maximise the EE while satisfying the system throughput constraints as well as the quality of service (QoS) requirements of D2D pairs and cellular users in an underlay D2D communication network. To achieve this, a deep RL based dynamic power optimization algorithm with dynamic rewards is proposed. Moreover, a novel algorithm with two parallel deep Q networks (DQNs) is designed to maximize the EE of the considered network. The proposed deep RL based power optimization method with dynamic rewards achieves higher EE while satisfying the system throughput requirements. (en_US)
dc.format.extent: 1 - 1
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE) (en_US)
dc.relation.ispartof: IEEE Wireless Communications Letters
dc.title: Power Optimization in Device-to-Device Communications: A Deep Reinforcement Learning Approach with Dynamic Reward (en_US)
dc.type: Article (en_US)
dc.rights.holder: © 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
dc.identifier.doi: 10.1109/lwc.2020.3035898
pubs.notes: Not known (en_US)
pubs.publication-status: Published (en_US)
dcterms.dateAccepted: 2020-10-31
rioxxterms.funder: Default funder (en_US)
rioxxterms.identifier.project: Default project (en_US)

