Energy-Efficient Cooperative Data Offloading in Cellular Networks Using Reinforcement Learning

Nabeel Abdolrazagh Yaseen Alrashedi, Rasool Sadeghi, Wael Hussein Zayer Al-Lamy, Mehdi Hamidkhani, Reihaneh Khorsand

Abstract

To address the growing need for energy efficiency in wireless communications, this paper proposes a new multi-agent reinforcement learning (MARL) approach to cooperative data offloading in heterogeneous cellular networks. The work is among the first to apply MARL at this scale, offering an end-to-end solution that combines cellular, Wi-Fi, and device-to-device (D2D) communications while accounting for practical network conditions such as user mobility and channel quality. We formulate the offloading problem as a Markov Decision Process (MDP) with accurate models of energy consumption and network state. A deep Q-network (DQN)-based MARL algorithm allows user equipment (UEs) to learn collaborative strategies that minimize overall energy consumption while offloading data in a timely manner. Simulations compare MARL against greedy, random, and independent Q-learning baselines under low- and high-mobility regimes. The results show that MARL reduces energy consumption by up to 40% relative to random offloading and 16.6% relative to greedy offloading, while also improving average delay, throughput, and fairness. The algorithm converges within 1000 episodes, and sensitivity analyses confirm its performance across a range of user densities and data sizes. Furthermore, the MARL framework adapts to dynamic network conditions, providing network operators with a flexible solution for maximizing performance and sustainability in current and future wireless networks.
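
As an illustration of the DQN-based agent the abstract describes, the sketch below shows a minimal per-UE Q-network with epsilon-greedy selection over cellular, Wi-Fi, and D2D interfaces and replay-based updates. The state features, reward weights, network sizes, and the reward function are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal per-UE DQN offloading agent (PyTorch). State features, reward weights,
# and network sizes are illustrative assumptions, not the paper's formulation.
import random
from collections import deque

import torch
import torch.nn as nn

ACTIONS = ["cellular", "wifi", "d2d"]  # candidate offloading interfaces


class QNet(nn.Module):
    """Small MLP mapping a UE's local state to Q-values over interfaces."""
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x):
        return self.net(x)


class OffloadAgent:
    """One agent per UE: epsilon-greedy action selection and replay-based updates."""
    def __init__(self, state_dim=4, n_actions=len(ACTIONS),
                 gamma=0.95, lr=1e-3, eps=0.1):
        self.q = QNet(state_dim, n_actions)
        self.opt = torch.optim.Adam(self.q.parameters(), lr=lr)
        self.gamma, self.eps = gamma, eps
        self.buffer = deque(maxlen=10_000)

    def act(self, state):
        # Explore with probability eps, otherwise pick the highest-Q interface.
        if random.random() < self.eps:
            return random.randrange(len(ACTIONS))
        with torch.no_grad():
            return int(self.q(torch.tensor(state).float()).argmax())

    def remember(self, s, a, r, s_next, done):
        self.buffer.append((s, a, r, s_next, done))

    def learn(self, batch_size=32):
        if len(self.buffer) < batch_size:
            return
        batch = random.sample(self.buffer, batch_size)
        s, a, r, s2, d = map(lambda x: torch.tensor(x).float(), zip(*batch))
        q_sa = self.q(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            target = r + self.gamma * self.q(s2).max(1).values * (1 - d)
        loss = nn.functional.mse_loss(q_sa, target)
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()


def reward(energy_j, delay_s, w_energy=1.0, w_delay=0.5):
    """Example reward shaping: penalize energy use and delay (weights assumed)."""
    return -(w_energy * energy_j + w_delay * delay_s)
```

In a cooperative setting, one such agent would run per UE, with the reward reflecting both the device's own energy cost and a shared delay or fairness term so that agents learn collaborative rather than purely selfish offloading.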
