On the latency in client/server networks

We formalize the notions of latency, effective throughput, fairness, and transient period in a complexity-theoretic framework. This framework allows us to prove the first known complexity results and tight bounds on latency in a client/server distributed computing system. Using this formal complexity model, we study a general class of fair and maximally efficient control algorithms, i.e. algorithms that maximize the effective throughput and minimize the transient period. We show that any fair and maximally efficient algorithm incurs a latency of at least cN log N + O(N), where N is the number of greedy clients in the network and the constant c is a parameter of the chosen algorithm. This lower bound is also shown to be tight.
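
For concreteness, the main result can be restated informally in the notation of the abstract; the symbol L_A(N) for the latency of an algorithm A on N clients is introduced here only for readability, and the precise hypotheses of the paper's formal theorem may differ:

% Informal restatement of the lower bound; only N and c come from the
% abstract, the latency symbol L_A(N) is introduced for this sketch.
\[
  L_{\mathcal{A}}(N) \;\ge\; c \, N \log N + O(N)
\]
for every fair and maximally efficient control algorithm $\mathcal{A}$, where $N$ is the number of greedy clients and $c > 0$ is a constant determined by $\mathcal{A}$. Tightness means that some fair and maximally efficient algorithm attains latency $c \, N \log N + O(N)$, so the bound cannot be improved beyond the $O(N)$ term.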