Laurinavicius, Ignas, Zhu, Huiling, Pan, Yijin, Chen, Changrun and Wang, Jiangzhou (2025) Novel deep reinforcement learning for user association in fog radio access networks. IEEE Journal on Selected Areas in Communications, pp. 1-16. E-ISSN 1558-0008. (doi:10.1109/JSAC.2025.3574590) (KAR id:110201)
PDF (Pre-print, 917kB) | Language: English
Official URL: https://doi.org/10.1109/JSAC.2025.3574590
Abstract
As an evolution of the cloud radio access network (C-RAN), the fog radio access network (F-RAN) has become a promising architecture for future mobile communications by enabling processing and caching at fog access points (FAPs). In contrast to the centralised C-RAN, F-RAN has a semi-distributed architecture that aims to alleviate the traffic load on the fronthaul links of C-RAN. Under this semi-distributed architecture, which employs a cell-free multiple-input multiple-output (MIMO) access technique, decisions on joint user-FAP association and transmit power allocation are made at individual FAPs. To mitigate strong interference, FAPs need to exchange cooperative status information, such as channel state information (CSI), user association details, or transmit power levels. However, this exchange can lead to significant communication overhead within the network and introduce high complexity into the decision-making process. In this paper, accounting for the semi-distributed nature of the F-RAN architecture, reinforcement learning is leveraged as a potential solution to this problem, and a novel multi-agent dual deep Q-network (MA-DDQN) algorithm is proposed by introducing experience exchange in partially observable Markov decision process environments. Simulation results show that the proposed reinforcement-learning-based algorithm outperforms the DDQN algorithm as well as existing low-complexity algorithms.
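For readers unfamiliar with the DDQN family, the following is a minimal sketch, not the authors' implementation, of how per-FAP agents using a standard double-DQN-style update could exchange experience through a common replay buffer. The observation and action dimensions, network sizes, hyperparameters, reward placeholder, and the `FapAgent` class itself are illustrative assumptions rather than details taken from the paper.

```python
# Sketch only: multi-agent DQN agents (one per FAP) sharing a replay buffer.
import random
from collections import deque

import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS, GAMMA = 8, 4, 0.95  # assumed local observation size, action count, discount


def q_net() -> nn.Module:
    """Small Q-network mapping a local FAP observation to per-action values."""
    return nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))


class FapAgent:
    """One learning agent per FAP; all agents push transitions into a shared buffer."""

    def __init__(self, shared_buffer: deque):
        self.online, self.target = q_net(), q_net()
        self.target.load_state_dict(self.online.state_dict())
        self.opt = torch.optim.Adam(self.online.parameters(), lr=1e-3)
        self.buffer = shared_buffer  # experience exchange: the buffer is shared across agents
        self.steps = 0

    def act(self, obs: torch.Tensor, eps: float = 0.1) -> int:
        """Epsilon-greedy action selection from the local observation."""
        if random.random() < eps:
            return random.randrange(N_ACTIONS)
        with torch.no_grad():
            return int(self.online(obs).argmax().item())

    def learn(self, batch_size: int = 32) -> None:
        """One double-DQN-style update on a batch drawn from the shared buffer."""
        if len(self.buffer) < batch_size:
            return
        obs, act, rew, nxt = map(torch.stack, zip(*random.sample(self.buffer, batch_size)))
        with torch.no_grad():
            # Double-DQN target: the online net selects the next action,
            # the target net evaluates it.
            next_a = self.online(nxt).argmax(dim=1, keepdim=True)
            target = rew + GAMMA * self.target(nxt).gather(1, next_a).squeeze(1)
        q = self.online(obs).gather(1, act.unsqueeze(1)).squeeze(1)
        loss = nn.functional.smooth_l1_loss(q, target)
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
        self.steps += 1
        if self.steps % 100 == 0:  # periodically sync the target network
            self.target.load_state_dict(self.online.state_dict())


if __name__ == "__main__":
    shared = deque(maxlen=10_000)                  # shared replay buffer
    agents = [FapAgent(shared) for _ in range(3)]  # e.g. three cooperating FAPs
    for _ in range(200):
        for agent in agents:
            o, n = torch.randn(STATE_DIM), torch.randn(STATE_DIM)  # placeholder observations
            a = agent.act(o)
            r = torch.tensor(0.0)  # placeholder reward; the paper's own reward design is not shown here
            shared.append((o, torch.tensor(a), r, n))
            agent.learn()
```

The shared deque stands in for the inter-agent experience exchange described in the abstract; in an actual F-RAN setting each FAP would act on its own partial observation of channel and load state, and the reward would reflect the joint association and power-allocation objective.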
| Item Type: | Article |
|---|---|
| DOI/Identification number: | 10.1109/JSAC.2025.3574590 |
| Uncontrolled keywords: | reinforcement learning; deep reinforcement learning; multi-agent deep reinforcement learning; fog radio access networks; cell-free MIMO; user-AP association |
| Subjects: | T Technology > TK Electrical engineering. Electronics. Nuclear engineering > TK5101 Telecommunications > TK5103.4 Broadband communication systems; T Technology > TK Electrical engineering. Electronics. Nuclear engineering > TK5101 Telecommunications > TK5105 Data transmission systems |
| Institutional Unit: | Schools > School of Engineering, Mathematics and Physics > Engineering |
| Former Institutional Unit: | There are no former institutional units. |
| Funders: | UK Research and Innovation (https://ror.org/001aqnf71) |
| Depositing User: | Huiling Zhu |
| Date Deposited: | 05 Jun 2025 15:06 UTC |
| Last Modified: | 22 Jul 2025 09:23 UTC |
| Resource URI: | https://kar.kent.ac.uk/id/eprint/110201 (The current URI for this page, for reference purposes) |