A Markov decision process model for dynamic wavelength allocation in WDM networks

This paper presents an optimal dynamic wavelength allocation scheme for all-optical WDM networks. A simple topology consisting of a 2-hop path network with three nodes is studied for three classes of traffic, where each class corresponds to a different source-destination pair. For each class, call interarrival and holding times are exponentially distributed. The objective is to determine a wavelength allocation policy that maximizes the weighted sum of users of all classes; the method can therefore provide differentiated services in the network. The problem is formulated as a Markov decision process in order to compute the optimal resource allocation policy. It is shown numerically that for two and three classes of users the optimal policy is of threshold type and monotonic. Simulation results compare the performance of the optimal policy with that of the complete sharing and complete partitioning policies.
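To make the formulation concrete, the following is a minimal sketch of such an MDP solved by value iteration on the uniformized chain. All parameters (wavelengths per link, arrival and service rates, class weights, discount rate) are illustrative assumptions, not values from the paper: classes 1 and 2 each occupy one link of the 2-hop path, class 3 occupies both, and the reward rate is the weighted number of calls in progress.

```python
import itertools

# Illustrative parameters (assumed, not taken from the paper).
W = 3                    # wavelengths per link (assumed)
lam = [0.8, 0.8, 0.5]    # Poisson arrival rates per class (assumed)
mu = [1.0, 1.0, 1.0]     # exponential service rates (assumed)
w = [1.0, 1.0, 2.5]      # class weights in the objective (assumed)
beta = 0.1               # discount rate of the continuous-time MDP (assumed)

def feasible(s):
    """Classes 1 and 2 use links 1 and 2; class 3 uses both links."""
    n1, n2, n3 = s
    return n1 + n3 <= W and n2 + n3 <= W

states = [s for s in itertools.product(range(W + 1), repeat=3) if feasible(s)]

# Uniformization constant: an upper bound on the total transition rate.
Lam = sum(lam) + sum(m * W for m in mu)

V = {s: 0.0 for s in states}
for _ in range(3000):  # value iteration on the uniformized chain
    Vn = {}
    for s in states:
        n = list(s)
        total = sum(w[i] * n[i] for i in range(3))  # reward rate: weighted occupancy
        used = 0.0
        for i in range(3):
            # Arrival of class i: admit (if capacity allows) or reject.
            n[i] += 1
            nxt = tuple(n)
            n[i] -= 1
            admit_val = V[nxt] if feasible(nxt) else -float("inf")
            total += lam[i] * max(admit_val, V[s])
            used += lam[i]
            # Departure of one class-i call in progress.
            d = n[i] * mu[i]
            if d > 0:
                n[i] -= 1
                total += d * V[tuple(n)]
                n[i] += 1
                used += d
        total += (Lam - used) * V[s]   # fictitious self-loop from uniformization
        Vn[s] = total / (Lam + beta)
    V = Vn

def admit(s, i):
    """Optimal action on a class-i arrival in state s: True = admit."""
    n = list(s)
    n[i] += 1
    nxt = tuple(n)
    return feasible(nxt) and V[nxt] >= V[s]
```

Inspecting `admit` across states for a fixed class exposes the structure of the optimal policy; under parameters like these, one can check numerically whether the admission decision for each class switches at a single occupancy level, i.e. whether the policy is of threshold type.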