Organizing Compression of Hyperspectral Imagery to Allow Efficient Parallel Decompression
NASA Tech Briefs, January 2014

A family of schemes has been devised for organizing the output of an algorithm for predictive data compression of hyperspectral imagery so as to allow efficient parallelization of both the compressor and the decompressor. In these schemes, the compressor performs a number of iterations, during each of which a portion of the data is compressed via parallel threads operating on independent portions of the data. The general idea is that, for each iteration, the amount of compressed data to be produced by each thread is determined in advance.

A simple version of this technique applies when the image is divided into “pieces” that are compressed independently. For example, for a compressor that does not exploit interband correlation, a piece could be an individual spectral band or a fixed number of bands.

In the technique, the compressed output for a piece comprises multiple “chunks”; the concatenated chunks for a given piece form the compressed output for that piece. Most of the compressed image is produced over multiple iterations, with one chunk produced for each piece during a given iteration. Before an iteration starts, a chunk size is calculated for each piece; the chunks can then be produced, or decompressed, in parallel. Note that the amount of image data that goes into a chunk is not specified in advance, and a chunk may contain incomplete portions of encoded samples at its start or end.

The compressor iterates this process of choosing chunk sizes and producing a chunk of the requested size for each piece until compression of every piece is almost finished. At that point, the remainder of each piece is compressed serially, without a target chunk size. Typically, the chunk-size calculation should seek to balance progress through the pieces, i.e., to leave equal numbers of samples remaining in each piece; a suggested procedure has this aim.

A key requirement on the chunk-size calculation is that it depend only on information available to the decompressor before the chunks of an iteration are decoded, so that the decompressor can reproduce the calculation, locate the chunk boundaries in the compressed stream, and decode the chunks in parallel.
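To make the organization concrete, the following Python sketch illustrates the shape of the scheme. It is illustrative only, not code from the brief: the PieceCompressor placeholder “encodes” one sample per output byte so the round trip can be verified, and balanced_sizes is just one simple rule with the balancing aim described above. What the sketch does preserve is the key requirement: the chunk-size rule reads only state the decompressor also tracks, so both sides derive identical chunk boundaries.

    import io
    from concurrent.futures import ThreadPoolExecutor

    class PieceCompressor:
        """Stand-in for a real predictive coder operating on one
        independently compressible "piece" (e.g., one spectral band).
        Here one input sample "compresses" to one output byte so the
        round trip is easy to verify; a real coder's output size per
        sample varies, which is exactly why a chunk boundary may fall
        inside an encoded sample."""
        def __init__(self, samples: bytes):
            self._pending = bytearray(samples)

        def samples_left(self) -> int:
            return len(self._pending)

        def emit_chunk(self, nbytes: int) -> bytes:
            out = bytes(self._pending[:nbytes])
            del self._pending[:nbytes]
            return out

    def balanced_sizes(left, budget):
        """One simple chunk-size rule with the balancing aim described
        in the brief: try to leave equal numbers of samples remaining
        in every piece after the iteration. `left` holds per-piece
        remaining counts; `budget` is the rough total output for the
        iteration."""
        target_left = max(0, sum(left) - budget) // len(left)
        return [max(0, n - target_left) for n in left]

    def compress(pieces, budget=256, tail=64):
        out = io.BytesIO()
        with ThreadPoolExecutor() as pool:
            while max(p.samples_left() for p in pieces) > tail:
                sizes = balanced_sizes(
                    [p.samples_left() for p in pieces], budget)
                # One chunk per piece, produced by parallel threads.
                for chunk in pool.map(lambda t: t[0].emit_chunk(t[1]),
                                      zip(pieces, sizes)):
                    out.write(chunk)
        for p in pieces:              # serial tail, no target chunk size
            out.write(p.emit_chunk(p.samples_left()))
        return out.getvalue()

    def decompress(blob, piece_lengths, budget=256, tail=64):
        """Because balanced_sizes() depends only on per-piece sample
        counts the decoder already tracks, the decoder reproduces the
        compressor's chunk sizes, locates every chunk boundary without
        decoding, and could hand one iteration's chunks to parallel
        workers (done serially here for brevity)."""
        left = list(piece_lengths)
        out = [bytearray() for _ in piece_lengths]
        pos = 0
        while max(left) > tail:
            for i, size in enumerate(balanced_sizes(left, budget)):
                out[i] += blob[pos:pos + size]   # placeholder "decode"
                pos += size
                left[i] -= size
        for i, n in enumerate(left):             # serial tail
            out[i] += blob[pos:pos + n]
            pos += n
        return [bytes(b) for b in out]

    if __name__ == "__main__":
        bands = [bytes([40 + i]) * n for i, n in enumerate((300, 500, 200))]
        blob = compress([PieceCompressor(b) for b in bands])
        assert decompress(blob, [len(b) for b in bands]) == bands

A real implementation would recover each piece's remaining-sample count from the decoded output itself, and its chunk-size rule would fold in an estimate of compressed bytes per remaining sample rather than the one-byte-per-sample assumption used in this placeholder.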