Overhead Reduction in Network Communication for Web Computing

One major source of overhead in conventional network communication is the involvement of the operating system, which virtualizes the network interface card so that it can be shared among applications and processes the UDP/TCP/IP headers on their behalf. This involvement incurs overheads such as the extra write/read operations to the kernel buffer through which every inbound and outbound message must pass. Another overhead comes from scheduling, which may delay the immediate transfer of messages to and from the network interface buffer. Recently, U-Net over Fast Ethernet [9] addressed this problem with a scheme that bypasses the operating system in processing network communication. However, because that approach removes the IP header and performs multiplexing at the Ethernet layer, it is applicable only to an Ethernet LAN.

In this paper, we define a new protocol that allows messages to travel across routers while still supporting virtual sharing of the network interface card with minimal overhead. In short, we extend U-Net over Fast Ethernet [9] beyond routers to the WAN, opening a new way to exploit the vast amount of computing resources available across the Internet. With our protocol, cluster computing over any part of a WAN is realizable as long as the total delay introduced by the intervening routers is tolerable to the application.

As a justification of our approach, we show that a small part of a WAN containing a router exhibits latency comparable to (differing by less than 1 ms from) that of a shared Ethernet. Another motivation is the observation that the aggregate bandwidth of two subnets (a part of a WAN) is greater than that of a single subnet (a LAN); with the same number of hosts, each connection therefore sees more available bandwidth on the part of the WAN than on the LAN. Our experiment over a WAN involving a router shows a latency reduction comparable to that of U-Net over Fast Ethernet.
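To make the multiplexing idea concrete, the following C sketch illustrates one possible message layout under the assumptions stated in its comments; it is not the paper's actual wire format. The message retains a standard IPv4 header so that intervening routers can forward it, and replaces UDP/TCP processing with a small user-level multiplexing field that is resolved entirely in user space, copying the payload directly into the receiving application's buffer rather than into a kernel buffer. The struct layout, the field name endpoint_id, and the buffer sizes are hypothetical and chosen only for illustration.

/*
 * Hypothetical sketch only: the paper's wire format is not given in this
 * section, so the struct layout, the name "endpoint_id", and its 16-bit
 * width are illustrative assumptions.  The point shown: keep the IP header
 * so routers can forward the message, and replace UDP/TCP processing with a
 * tiny user-level multiplexing field resolved without entering the kernel.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Minimal IPv4 header (20 bytes, no options) -- left intact for routing. */
struct ip_hdr {
    uint8_t  ver_ihl;     /* version (4 bits) + header length (4 bits) */
    uint8_t  tos;
    uint16_t total_len;
    uint16_t id;
    uint16_t frag_off;
    uint8_t  ttl;
    uint8_t  proto;       /* protocol number assigned to this scheme */
    uint16_t checksum;
    uint32_t src_addr;
    uint32_t dst_addr;
};

/* Assumed user-level multiplexing header: identifies the destination
 * application endpoint directly, instead of a full UDP/TCP header. */
struct mux_hdr {
    uint16_t endpoint_id;
};

#define NUM_ENDPOINTS 4
#define BUF_LEN       1500

/* Per-endpoint receive buffers owned by the applications themselves. */
static uint8_t endpoint_buf[NUM_ENDPOINTS][BUF_LEN];

/* User-space demultiplexing: copy the payload straight into the buffer of
 * the endpoint named in the header -- no intermediate kernel buffer. */
static void demux(const uint8_t *frame, size_t len)
{
    if (len < sizeof(struct ip_hdr) + sizeof(struct mux_hdr))
        return;

    struct mux_hdr mh;
    memcpy(&mh, frame + sizeof(struct ip_hdr), sizeof mh);
    if (mh.endpoint_id >= NUM_ENDPOINTS)
        return;                              /* unknown endpoint: drop */

    const uint8_t *payload = frame + sizeof(struct ip_hdr) + sizeof mh;
    size_t payload_len = len - sizeof(struct ip_hdr) - sizeof mh;
    if (payload_len > BUF_LEN)
        payload_len = BUF_LEN;
    memcpy(endpoint_buf[mh.endpoint_id], payload, payload_len);
}

int main(void)
{
    /* Build a fake inbound frame: IP header + mux header + payload. */
    uint8_t frame[sizeof(struct ip_hdr) + sizeof(struct mux_hdr) + 5] = {0};
    struct mux_hdr mh = { .endpoint_id = 2 };
    memcpy(frame + sizeof(struct ip_hdr), &mh, sizeof mh);
    memcpy(frame + sizeof(struct ip_hdr) + sizeof mh, "hello", 5);

    demux(frame, sizeof frame);
    printf("endpoint 2 received: %.5s\n", endpoint_buf[2]);
    printf("per-message header overhead: %zu bytes\n",
           sizeof(struct ip_hdr) + sizeof(struct mux_hdr));
    return 0;
}

Because the IP header is preserved, such a message remains routable across subnets, while the demultiplexing step above adds only a single small field per message rather than full transport-layer processing in the kernel.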