Text compression algorithms are normally defined in terms of a source alphabet Σ of 8-bit ASCII codes. The authors consider choosing Σ to be an alphabet whose symbols are the words of English or, more generally, alternating maximal strings of alphanumeric and non-alphanumeric characters. A compression algorithm over this alphabet can take advantage of longer-range correlations between words and thus achieve better compression. The large size of Σ leads to some implementation problems, but these are overcome to construct word-based LZW, word-based adaptive Huffman, and word-based context modelling compression algorithms.
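
As a rough illustration of the idea (a minimal sketch, not the authors' implementation), the Python code below splits text into alternating maximal runs of alphanumeric and non-alphanumeric characters, so each run becomes one symbol of Σ, and then applies a toy word-based LZW-style coder to the token stream. The tokenizer regex, the dictionary seeding, and all function names are assumptions made for this example.

import re
from itertools import count

def tokenize(text: str):
    """Split text into alternating maximal alphanumeric / non-alphanumeric runs."""
    return re.findall(r"[A-Za-z0-9]+|[^A-Za-z0-9]+", text)

def word_lzw_encode(tokens):
    """Toy LZW over word tokens: dictionary maps tuples of tokens to integer codes."""
    next_code = count()
    dictionary = {}
    # Assumption for this sketch: seed the dictionary with every distinct single
    # token in the input (a real coder would transmit or escape first occurrences).
    for tok in tokens:
        if (tok,) not in dictionary:
            dictionary[(tok,)] = next(next_code)

    output = []
    phrase = ()
    for tok in tokens:
        candidate = phrase + (tok,)
        if candidate in dictionary:
            phrase = candidate                        # keep extending the current phrase
        else:
            output.append(dictionary[phrase])         # emit code for longest known phrase
            dictionary[candidate] = next(next_code)   # learn the new, longer phrase
            phrase = (tok,)
    if phrase:
        output.append(dictionary[phrase])
    return output

if __name__ == "__main__":
    sample = "the cat sat on the mat, and the cat sat."
    toks = tokenize(sample)
    print(toks)
    print(word_lzw_encode(toks))

The only structural change relative to byte-oriented LZW is that dictionary keys are tuples of word tokens rather than strings of bytes; the larger alphabet is what creates the implementation problems (dictionary size and lookup) that the paper addresses.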