A forwarding strategy based on reinforcement learning for Content-Centric Networking

This paper proposes a packet forwarding strategy for Information-Centric Networking. Our proposal is based on reinforcement learning techniques and aims at balancing the exploration of new paths with the exploitation of data acquired from previous explorations. The output interfaces of a node are ranked according to content retrieval time, and all interests whose prefix matches previously forwarded contents are sent through the interface with the lowest mean retrieval time. The exploration of new paths is probabilistic: each node simultaneously sends the same interest through the best interface and through another interface chosen at random. The goal is to retrieve the content over the best path found so far while also exploring copies recently stored in the caches of nearby nodes. Simulation results show that the proposed strategy reduces the number of hops traversed by retrieved contents by up to 28% and the interest load per node by up to 80% compared with other forwarding strategies.
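The abstract does not specify the exact update and selection rules, so the following Python sketch only illustrates the general idea under stated assumptions: interfaces are ranked per prefix by their mean content retrieval time, interests are forwarded through the best-ranked interface, and with some probability a duplicate interest also probes a randomly chosen alternative interface. Names such as `ForwardingTable`, `explore_prob`, and `record_retrieval` are illustrative assumptions, not the paper's API.

```python
import random
from collections import defaultdict


class InterfaceStats:
    """Running mean of content retrieval times observed on one output interface."""

    def __init__(self):
        self.count = 0
        self.mean_rtt = float("inf")  # unexplored interfaces start as unknown

    def update(self, rtt):
        # Incremental mean of retrieval times for data returned on this interface
        self.count += 1
        if self.count == 1:
            self.mean_rtt = rtt
        else:
            self.mean_rtt += (rtt - self.mean_rtt) / self.count


class ForwardingTable:
    """Per-prefix ranking of output interfaces by mean retrieval time (assumed structure)."""

    def __init__(self, interfaces, explore_prob=0.2):
        self.interfaces = list(interfaces)
        self.explore_prob = explore_prob  # assumed probability of probing a second path
        self.stats = defaultdict(lambda: {i: InterfaceStats() for i in self.interfaces})

    def select_interfaces(self, prefix):
        """Return the interfaces an interest for `prefix` should be sent through."""
        per_iface = self.stats[prefix]
        # Exploitation: interface with the lowest mean retrieval time seen so far
        best = min(self.interfaces, key=lambda i: per_iface[i].mean_rtt)
        selected = [best]
        # Exploration: with some probability, also forward the same interest
        # through another interface chosen at random
        others = [i for i in self.interfaces if i != best]
        if others and random.random() < self.explore_prob:
            selected.append(random.choice(others))
        return selected

    def record_retrieval(self, prefix, interface, rtt):
        """Feed back the measured retrieval time when data arrives on `interface`."""
        self.stats[prefix][interface].update(rtt)
```

A usage example would call `select_interfaces("/videos/news")` when an interest arrives and `record_retrieval("/videos/news", iface, rtt)` when the corresponding data packet returns, so that recently cached copies on nearby nodes can displace the current best interface over time.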