An Asymptotically Optimal Policy for a Quantity-Based Network Revenue Management Problem

We consider a canonical revenue management problem in a network setting, where the goal is to find a customer admission policy that maximizes the total expected revenue over a fixed finite horizon. There is a set of resources, each with a fixed capacity, and several customer classes, each with an associated arrival process, price, and resource consumption vector. An accepted customer permanently removes the resources it consumes from the system. The exact solution cannot be computed for problems of realistic size due to the curse of dimensionality, and several approximate solution techniques have been proposed in the literature. One way to compare policies analytically is via an asymptotic analysis in which both resource capacities and arrival rates grow large. Many of the proposed policies are asymptotically optimal on the fluid scale. However, as we demonstrate in this paper, these policies may fail to be optimal on the more sensitive diffusion scale, even for quite simple problem instances. We develop a new policy that achieves diffusion-scale optimality. The policy starts with a probabilistic admission rule derived from the optimization of the fluid model, embeds a trigger function that tracks the difference between the actual and expected numbers of accepted customers, and sets threshold values for the trigger function whose violation invokes reoptimization of the admission rule. We show that re-solving the fluid model, which needs to be performed at most once, is required to extend the asymptotic optimality from the fluid scale to the diffusion scale. We demonstrate the implementation of the policy with numerical examples.
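The policy's structure can be illustrated with a minimal simulation sketch. This is not the paper's formal construction: the fluid model is replaced by a simple price-ordered greedy heuristic standing in for the fluid LP, arrivals are Bernoulli per period rather than a general arrival process, and the function names, instance data, and threshold value are all hypothetical choices for illustration. The sketch does, however, follow the three ingredients named in the abstract: a probabilistic admission rule from a fluid solution, a trigger that accumulates the gap between actual and expected acceptances, and a single re-solve of the fluid model when the trigger exceeds its threshold.

```python
import random


def fluid_admission_probs(rates, prices, consumption, capacity, horizon):
    """Illustrative greedy stand-in for the fluid-model optimization:
    serve the highest-price classes first, admitting each class with the
    largest probability the remaining fluid capacity allows."""
    n = len(rates)
    remaining = list(capacity)
    probs = [0.0] * n
    for j in sorted(range(n), key=lambda k: -prices[k]):
        demand = rates[j] * horizon  # expected number of class-j arrivals
        if demand <= 0:
            continue
        # Largest admissible fraction of class-j demand, per resource.
        frac = min((remaining[i] / (consumption[j][i] * demand)
                    for i in range(len(capacity)) if consumption[j][i] > 0),
                   default=1.0)
        probs[j] = max(0.0, min(1.0, frac))
        for i in range(len(capacity)):
            remaining[i] -= probs[j] * demand * consumption[j][i]
    return probs


def simulate_policy(rates, prices, consumption, capacity, horizon,
                    threshold=5.0, seed=0):
    """Run the trigger-based admission policy on one sample path."""
    rng = random.Random(seed)
    probs = fluid_admission_probs(rates, prices, consumption, capacity, horizon)
    cap = list(capacity)
    revenue, trigger, resolved = 0.0, 0.0, False
    for t in range(horizon):
        for j, lam in enumerate(rates):
            if rng.random() >= lam:  # no class-j arrival this period
                continue
            feasible = all(cap[i] >= consumption[j][i]
                           for i in range(len(cap)))
            accept = feasible and rng.random() < probs[j]
            # Trigger: actual minus expected acceptances so far.
            trigger += (1.0 if accept else 0.0) - probs[j]
            if accept:
                revenue += prices[j]
                for i in range(len(cap)):
                    cap[i] -= consumption[j][i]
        if not resolved and abs(trigger) > threshold:
            # Threshold violated: re-solve the fluid model once, using the
            # remaining capacity and remaining horizon, then reset the trigger.
            probs = fluid_admission_probs(rates, prices, consumption,
                                          cap, horizon - t - 1)
            trigger, resolved = 0.0, True
    return revenue, cap


# Hypothetical two-class, two-resource instance.
rates = [0.6, 0.3]               # per-period arrival probabilities
prices = [100.0, 250.0]
consumption = [[1, 0], [1, 1]]   # class-by-resource consumption matrix
capacity = [40, 20]
rev, leftover = simulate_policy(rates, prices, consumption, capacity,
                                horizon=100)
```

In the sketch, the `resolved` flag enforces the abstract's "at most once" re-solve; a real implementation would replace `fluid_admission_probs` with the actual fluid-model optimization.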