Show simple item record

dc.contributor.author          Liu, X                                                en_US
dc.date.accessioned            2022-11-02T15:59:55Z
dc.date.issued                 2021                                                  en_US
dc.identifier.uri              https://qmro.qmul.ac.uk/xmlui/handle/123456789/82216
dc.description.abstract        The intelligent Internet of Things (IoT) network is envisioned to be the internet of intelligent things. In this paradigm, billions of end devices with internet connectivity will provide interactive intelligence and revolutionise current wireless communications. In intelligent IoT networks, an unprecedented volume and variety of data are generated, making centralized cloud computing inefficient or even infeasible due to network congestion, resource-limited IoT devices, ultra-low latency applications and spectrum scarcity. Edge computing has been proposed to overcome these issues by pushing centralized communication and computation resources physically and logically closer to data providers and end users. However, compared with a cloud server, an edge server provides only finite computation and spectrum resources, making proper data processing and efficient resource allocation necessary. Machine learning techniques have been developed to handle the dynamic and complex problems and big data analysis in IoT networks. Specifically, Reinforcement Learning (RL) has been widely explored to address dynamic decision making problems, which motivates the research on machine learning enabled computation offloading and resource management. In this thesis, several original contributions are presented to address these challenges. First, efficient spectrum and power allocation is investigated for computation offloading in wireless powered IoT networks, where the IoT users offload all the collected data to the central server for a better data processing experience. A matching theory-based efficient channel allocation algorithm and an RL-based power allocation mechanism are then proposed. Second, the joint optimization problem of computation offloading and resource allocation is investigated for IoT edge computing networks via machine learning techniques. The IoT users choose to offload computation-intensive tasks to the edge server while executing simple tasks locally. In this case, a centralized user clustering algorithm is first proposed as a pre-processing step to group the IoT users into different clusters according to user priorities for spectrum allocation. The joint computation offloading, computation resource and power allocation problem for each IoT user is then formulated as an RL framework and solved by a proposed deep Q-network based computation offloading algorithm. Finally, to solve the simultaneous multiuser computation offloading problem, a stochastic game is exploited to formulate the joint problem of computation offloading by multiple selfish users and resource allocation (including spectrum, computation and radio access technology resources) as a non-cooperative multiuser computation offloading game. A multi-agent RL framework is then developed to solve the formulated game via a proposed independent learners based multi-agent Q-learning algorithm.        en_US
dc.language.iso                en                                                    en_US
dc.title                       Machine Learning for Intelligent IoT Networks with Edge Computing        en_US
pubs.notes                     Not known                                             en_US
rioxxterms.funder              Default funder                                        en_US
rioxxterms.identifier.project  Default project                                       en_US
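
The abstract's final contribution is an independent learners based multi-agent Q-learning algorithm for the non-cooperative multiuser computation offloading game. The sketch below is a minimal, hypothetical illustration of that general technique, not the thesis's actual formulation: each user keeps its own tabular Q-function over a toy offloading decision (local vs. edge), observes only its own reward, and ignores the other agents during its update. The state, action and reward definitions (LOCAL_REWARD, EDGE_CAPACITY_REWARD, the step function, a single dummy state) are assumptions introduced only for illustration; the thesis's state and action spaces cover spectrum, computation and radio access technology resources.

import random

# Hypothetical toy setting: N selfish IoT users repeatedly choose an action,
# 0 = execute locally, 1 = offload to the edge server. The edge reward shrinks
# as more users offload (shared spectrum/computation), which mimics the
# congestion effect of a non-cooperative offloading game.
N_USERS = 4
ACTIONS = [0, 1]
LOCAL_REWARD = 1.0          # assumed utility of local execution
EDGE_CAPACITY_REWARD = 3.0  # assumed edge utility, split among offloaders
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

def step(joint_action):
    """Return per-user rewards for one round under the assumed reward model."""
    n_offload = sum(joint_action)
    return [LOCAL_REWARD if a == 0 else EDGE_CAPACITY_REWARD / n_offload
            for a in joint_action]

# Independent learners: one Q-table per user, indexed only by that user's own
# (state, action) pair; the other users are treated as part of the environment.
q_tables = [{(0, a): 0.0 for a in ACTIONS} for _ in range(N_USERS)]

def choose_action(q, state, eps=EPSILON):
    """Epsilon-greedy action selection from a single user's Q-table."""
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[(state, a)])

for episode in range(5000):
    state = 0  # single dummy state in this toy sketch
    joint_action = [choose_action(q_tables[i], state) for i in range(N_USERS)]
    rewards = step(joint_action)
    for i in range(N_USERS):
        a, r = joint_action[i], rewards[i]
        best_next = max(q_tables[i][(state, b)] for b in ACTIONS)
        # Standard Q-learning update, applied independently per user.
        q_tables[i][(state, a)] += ALPHA * (r + GAMMA * best_next - q_tables[i][(state, a)])

for i, q in enumerate(q_tables):
    print(f"user {i}: Q(local)={q[(0, 0)]:.2f}  Q(offload)={q[(0, 1)]:.2f}")

In the single-user contribution described earlier in the abstract, the same Q-learning idea is scaled to larger state spaces by replacing the table with a deep Q-network; the independent-learners structure above is what distinguishes the multi-agent case.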



This item appears in the following Collection(s)

  • Theses [4235]
    Theses Awarded by Queen Mary University of London
