Space and time savings through large data base compression and dynamic restructuring
The conventional general-purpose data management system tends to use storage space inefficiently. By reducing the physical size of the data, substantial storage-cost savings are available; reduction ratios of 4:1 and more are realizable. Compression also reduces the I/O time required to transfer data between secondary and primary memory. Since I/O time tends to be the pacing factor when processing large data bases, this can yield a 4:1 or greater reduction in response time. Data compression experience with four large data bases is described. In some applications, only a small fraction of the data transferred in an individual I/O operation is relevant to the query being processed. If usage patterns are measured, and data records and fields are rearranged so that those commonly referenced together are also physically stored together, additional savings become available. Once the data base has been partitioned into clusters of commonly accessed data, further efficiencies can be obtained by choosing data structures, compression strategies, and storage devices that are optimal for the usage pattern recently observed on each cluster.
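The 4:1 figure is plausible because conventional fixed-width record layouts are highly redundant (padded fields, repeated codes). The sketch below is not the paper's compression method; it simply uses a modern general-purpose compressor (`zlib`) on hypothetical padded records to illustrate how easily such redundancy yields reduction ratios well beyond 4:1.

```python
import zlib

# Hypothetical fixed-width personnel records: padded fields and
# repeated category codes, typical of the redundancy the paper exploits.
names = ["SMITH", "JONES", "TAYLOR", "BROWN"]
records = [
    f"{names[i % 4]:<20}{'ENGINEER':<15}{'NY':<10}{'ACTIVE':<10}".encode()
    for i in range(1000)
]
raw = b"".join(records)  # 55 bytes per record, 55,000 bytes total

compressed = zlib.compress(raw, level=9)
ratio = len(raw) / len(compressed)

print(f"raw: {len(raw)} bytes, compressed: {len(compressed)} bytes")
print(f"reduction ratio: {ratio:.1f}:1")
```

Because every byte saved is a byte not transferred, the same ratio applies to the I/O volume per read, which is why the abstract ties compression directly to response time on I/O-bound workloads.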
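The restructuring idea can also be sketched. Assuming a hypothetical query log in which each query references a set of fields (the field names and merge rule below are illustrative, not from the paper), one simple approach is to count how often pairs of fields are accessed together and merge co-accessed fields into the same physical cluster:

```python
from collections import Counter
from itertools import combinations

# Hypothetical query log: each query touches a set of fields.
queries = [
    {"name", "dept"},
    {"name", "dept", "salary"},
    {"addr", "phone"},
    {"name", "dept"},
    {"addr", "phone", "email"},
]

# Count how often each pair of fields is referenced together.
co_access = Counter()
for q in queries:
    for pair in combinations(sorted(q), 2):
        co_access[pair] += 1

# Greedy clustering via union-find: merge the most frequently
# co-accessed field pairs first.
parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path compression
        x = parent[x]
    return x

for (a, b), _ in co_access.most_common():
    ra, rb = find(a), find(b)
    if ra != rb:
        parent[ra] = rb

clusters = {}
for f in parent:
    clusters.setdefault(find(f), set()).add(f)
print(list(clusters.values()))
```

With this log, the fields split into two clusters ({name, dept, salary} and {addr, phone, email}), so a query touching only name and dept would transfer one cluster rather than the whole record, which is the source of the additional I/O savings the abstract describes.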