Partial Computation Offloading in NOMA-Assisted Mobile-Edge Computing Systems Using Deep Reinforcement Learning

Mobile-edge computing (MEC) and nonorthogonal multiple access (NOMA) have been regarded as promising technologies for beyond fifth-generation (B5G) and sixth-generation (6G) networks. This study aims to reduce the computational overhead (the weighted sum of consumed energy and latency) in a NOMA-assisted MEC network by jointly optimizing the computation offloading policy and channel resource allocation in dynamic network environments with time-varying channels. To this end, we propose a deep reinforcement learning algorithm, named ACDQN, that combines the advantages of actor–critic and deep $Q$-network methods while maintaining low complexity. The proposed algorithm considers partial computation offloading, where each user can split a computation task so that part of it is executed on the local terminal and the remainder is offloaded to the MEC server. It also adopts a hybrid multiple access scheme that combines the advantages of NOMA and orthogonal multiple access to serve diverse user requirements. Extensive simulations show that the proposed algorithm converges stably to its optimal value, yields approximately 10%, 27%, and 69% lower computational overhead than the benchmark schemes of full offloading with NOMA, random offloading with NOMA, and fully local execution, respectively, and achieves near-optimal performance.
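
For reference, the computational overhead minimized here is commonly formulated as a per-user weighted sum of task-completion latency and energy consumption. A minimal sketch of this generic formulation is given below; the weights $\beta_n^{t}$, $\beta_n^{e}$ and the decision variables $\mathbf{a}$, $\mathbf{p}$ are illustrative placeholders and need not match the paper's exact notation:
\[
\min_{\mathbf{a},\,\mathbf{p}} \; \sum_{n=1}^{N} \left( \beta_n^{t}\, T_n(\mathbf{a},\mathbf{p}) + \beta_n^{e}\, E_n(\mathbf{a},\mathbf{p}) \right), \qquad \beta_n^{t} + \beta_n^{e} = 1,
\]
where $T_n$ and $E_n$ denote the latency and energy consumption of user $n$ under offloading decision $\mathbf{a}$ and channel resource allocation $\mathbf{p}$.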