Abstract. We present a comparative analysis of features such as compression rate, image loss, and sensitivity to resolution variation for the still image file formats in most widespread use today for image storage and transmission on computer networks. © 2004 SPIE and IS&T. [DOI: 10.1117/1.1634591]

1 Introduction

Nowadays, digital images are used in an extensive number of computer applications. Despite the rapid growth of storage technology, images have become so popular that available resources always lag behind demand. On the Internet, millions of web pages are transmitted every moment, and images make up most of their content. Increasingly sophisticated image compression algorithms are continuously being developed [1,2] and incorporated into file "formats" such as BMP [3], PNG [4,5], TIFF [6], and JPEG [7-9], among others [10,11]. The original objective of image compression schemes was to store images efficiently, representing them accurately while using as little space as possible. Today, however, with file transmission over computer networks, a new aspect has grown in importance: the way the image is sent, and the time needed to give the observer enough elements to recognize it, may matter even more than the final size of the file. The correct selection of the algorithm for storage and/or transmission depends on a series of factors, such as the nature of the image, its resolution, its gamut (number of colors used), compression/decompression speed, how the algorithm decomposes images, and so on.

This paper presents a comparative analysis of compression algorithms in widespread use today in commercial tools for storage and transmission. Images were first classified according to their nature (landscapes, persons, objects, etc.), with resolutions of 75, 100, 150, 200, and 300 dpi.
Several features have been analyzed with the aim of establishing which algorithms are the most efficient for each type of image. The new graphic format JPEG2000
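The two headline features of the comparison, compression rate and image loss, can be sketched as simple functions. This is a minimal illustration only: the function names are ours, and the use of PSNR as the loss measure is our assumption rather than the paper's stated methodology.

```python
import math

# Compression rate: ratio of original size to compressed size.
# A 1 MB bitmap stored in 250 kB gives a ratio of 4 (i.e., 4:1).
def compression_ratio(original_bytes, compressed_bytes):
    return original_bytes / compressed_bytes

# Image loss, measured here (our assumption) as PSNR in dB between two
# pixel sequences; higher means less loss, infinity for a lossless
# round trip (zero mean squared error).
def psnr(original, reconstructed, max_value=255):
    mse = sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / len(original)
    if mse == 0:
        return math.inf
    return 10 * math.log10(max_value ** 2 / mse)
```

In a study like this one, such metrics would be computed for each format at each resolution and image class, giving one point per (format, resolution, class) combination.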
References

[1] M. Charrier et al., "JPEG2000, the next millennium compression standard for still images," Proc. IEEE Int. Conf. on Multimedia Computing and Systems, 1999.
[2] M. Abdel-Mottaleb et al., "Image browsing using hierarchical clustering," Proc. IEEE Int. Symp. on Computers and Communications, 1999.
[3] A. Lempel et al., "A universal algorithm for sequential data compression," IEEE Trans. Inf. Theory, 1977.
[4] M. A. Hearst et al., "Reexamining the cluster hypothesis: scatter/gather on retrieval results," SIGIR '96, 1996.
[5] G. K. Wallace et al., "The JPEG still picture compression standard," Commun. ACM, 1991.
[6] V. R. Algazi et al., "Objective picture quality scale (PQS) for image coding," IEEE Trans. Commun., 1998.
[7] D. Huffman, "A Method for the Construction of Minimum-Redundancy Codes," 1952.
[8] P. Scheunders et al., "A comparison of clustering algorithms applied to color image quantization," Pattern Recognit. Lett., 1997.
[9] H.-K. Lee et al., "A Ranking Algorithm Using Dynamic Clustering for Content-Based Image Retrieval," CIVR, 2002.
[10] H. Frigui et al., "Clustering by competitive agglomeration," Pattern Recognit., 1997.
[11] K. Sayood et al., Introduction to Data Compression, Third Edition (Morgan Kaufmann Series in Multimedia Information and Systems), 2005.
[12] D. A. Forsyth et al., "Learning the semantics of words and pictures," Proc. Eighth IEEE Int. Conf. on Computer Vision (ICCV), 2001.
[13] M. Nelson et al., The Data Compression Book, 2nd Edition, 1996.
[14] T. Ebrahimi et al., "The JPEG2000 still image coding system: an overview," IEEE Trans. Consumer Electron., 2000.
[15] J. Miano et al., Compressed Image File Formats: JPEG, PNG, GIF, XBM, BMP, 1999.