Finding the Optimal Read Buffer Size for Grid Applications

Grid computing represents the next evolutionary step of distributed computing. The goal of this computing model is to make better use of distributed resources, combining them to achieve higher throughput and to tackle large-scale computational problems. Performance gains are sought at every level of an application. Grid data access is performed through locally and remotely located files. We present a study of the optimal read buffer size and its implications for the overall performance of grid and non-grid applications. This paper identifies and compares two methods of data access in a grid environment: access through the storage element (SE) and local access. The results presented here come from a series of benchmarks carried out on our local grid (GridMOSI), in which we determine, with minimal error, the optimal interval for the read buffer size.
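
As a rough illustration of the kind of measurement such a benchmark involves (this is a minimal sketch, not the code used in this paper), the following C program reads the same file sequentially with several buffer sizes and reports the throughput of each pass. The file path, the set of buffer sizes, and the omission of page-cache flushing between passes are all simplifying assumptions made for brevity.

/*
 * Sketch of a read-buffer benchmark: read one file sequentially with
 * increasing buffer sizes and report the throughput of each pass.
 * A real benchmark would also drop the page cache between passes.
 */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <time.h>

int main(void)
{
    const char *path = "testfile.dat";          /* hypothetical input file */
    const size_t sizes[] = { 1 << 10, 4 << 10, 64 << 10, 1 << 20, 4 << 20 };
    const int nsizes = sizeof(sizes) / sizeof(sizes[0]);

    for (int i = 0; i < nsizes; i++) {
        int fd = open(path, O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        char *buf = malloc(sizes[i]);
        if (!buf) { close(fd); return 1; }

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);

        ssize_t n;
        long long total = 0;
        while ((n = read(fd, buf, sizes[i])) > 0)   /* sequential read pass */
            total += n;

        clock_gettime(CLOCK_MONOTONIC, &t1);
        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;

        printf("buffer %8zu B: %lld bytes in %.3f s (%.1f MB/s)\n",
               sizes[i], total, secs, total / secs / 1e6);

        free(buf);
        close(fd);
    }
    return 0;
}

Plotting throughput against buffer size from such passes is what allows an optimal interval for the buffer size to be identified, which is the measurement this study performs for both SE-based and local access.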