Reinforcement learning-based adaptive handover in ultra-dense cellular networks with small cells

The dense deployment of small base stations (BSs) in fifth-generation (5G) communication systems can satisfy user demand for high-data-rate transmission. On the other hand, such a scenario also increases the complexity of mobility management. In this paper, we develop a Q-learning framework that exploits user radio conditions, namely reference signal received power (RSRP), signal-to-interference-plus-noise ratio (SINR), and transmission distance, to learn the optimal policy for handover triggering. The objective of the proposed approach is to improve user mobility robustness in ultra-dense networks (UDNs) by minimizing redundant handovers and the handover failure ratio. Simulation results show that the proposed triggering mechanism efficiently suppresses the ping-pong handover effect while keeping handover failures at an acceptable level. Moreover, the proposed mechanism can trigger the handover process directly, without a handover margin (HOM) or time-to-trigger (TTT), so the response speed of handover triggering is increased.
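As a rough illustration of the kind of agent the abstract describes, the sketch below implements tabular Q-learning over a discretized state of (RSRP, SINR, distance) with a binary action: stay on the serving cell or trigger a handover. The discretization bins, reward shaping (penalizing handover failures and ping-pong handovers), and hyperparameters are illustrative assumptions, not the paper's actual formulation.

```python
import random
from collections import defaultdict

# Illustrative tabular Q-learning agent for handover triggering.
# State: discretized (RSRP, SINR, distance); action: 0 = stay, 1 = trigger handover.
# Bins, reward terms, and hyperparameters are assumptions, not the paper's values.

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
ACTIONS = (0, 1)  # 0: keep serving cell, 1: trigger handover


def discretize(rsrp_dbm, sinr_db, dist_m):
    """Map continuous radio measurements to a coarse discrete state."""
    rsrp_bin = min(max(int((rsrp_dbm + 120) // 10), 0), 5)  # ~-120..-70 dBm, 10 dB steps
    sinr_bin = min(max(int((sinr_db + 10) // 5), 0), 5)     # ~-10..15 dB, 5 dB steps
    dist_bin = min(max(int(dist_m // 50), 0), 5)             # ~0..250 m, 50 m steps
    return (rsrp_bin, sinr_bin, dist_bin)


def reward(handover_failed, ping_pong, sinr_db):
    """Assumed reward: penalize failures and ping-pong, mildly reward good SINR."""
    if handover_failed:
        return -10.0
    if ping_pong:
        return -5.0
    return 0.1 * sinr_db


class HandoverAgent:
    def __init__(self):
        self.q = defaultdict(float)  # Q[(state, action)] -> value

    def act(self, state):
        """Epsilon-greedy action selection over the two triggering actions."""
        if random.random() < EPSILON:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def update(self, state, action, r, next_state):
        """One-step Q-learning update: Q <- Q + alpha * (r + gamma * max Q' - Q)."""
        best_next = max(self.q[(next_state, a)] for a in ACTIONS)
        self.q[(state, action)] += ALPHA * (r + GAMMA * best_next - self.q[(state, action)])
```

In use, the agent would be stepped once per measurement report: the current (RSRP, SINR, distance) tuple is discretized, an action is chosen, and the update is applied with the observed reward, so the learned policy replaces the fixed HOM/TTT triggering rule.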