Selection algorithms for replicated Web servers

Replication of documents on geographically distributed servers can improve both the performance and reliability of a Web service. Server selection algorithms allow Web clients to choose the replicated server that is "closest" to them and thereby minimize the response time of the Web service. Using traces collected at a client proxy server, we compare the effectiveness of several "proximity" metrics, including the number of hops between the client and the server, the ping round-trip time, and the HTTP request latency. Based on this analysis, we design two new algorithms for selecting among replicated servers and compare their performance against existing algorithms. We show that the new server selection algorithms outperform existing algorithms by 55% on average. In addition, the new algorithms improve performance over existing non-replicated Web servers by 69% on average.
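As a rough illustration only (not the paper's implementation), the sketch below shows how a client might rank candidate replicas by a measured proximity metric and pick the best one. The replica hostnames, the TCP-connect approximation of ping round-trip time, and the HTTP latency probe are all assumptions introduced here for the example; the paper's own algorithms and metrics may differ.

```python
import socket
import time
import urllib.request

# Hypothetical replica list; hostnames are placeholders.
REPLICAS = [
    "replica1.example.com",
    "replica2.example.com",
    "replica3.example.com",
]

def tcp_rtt(host, port=80, timeout=2.0):
    """Approximate round-trip time with a TCP connect
    (ICMP ping requires raw-socket privileges)."""
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.perf_counter() - start
    except OSError:
        return float("inf")  # unreachable replicas sort last

def http_latency(host, path="/", timeout=2.0):
    """Latency of a small HTTP request: a coarser but more
    end-to-end metric than raw round-trip time."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(f"http://{host}{path}", timeout=timeout):
            return time.perf_counter() - start
    except OSError:
        return float("inf")

def select_server(replicas, metric=tcp_rtt):
    """Return the replica with the smallest measured proximity metric."""
    return min(replicas, key=metric)

if __name__ == "__main__":
    print("selected replica:", select_server(REPLICAS))
```

In this toy version the metric is probed on demand for every request; a practical selector would cache recent measurements and re-probe periodically, since per-request probing adds exactly the latency it is trying to avoid.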
