An Adaptive Parallel Computer Vision System

An approach for designing a hybrid parallel system that can exploit different levels of parallelism adaptively is presented. An adaptive parallel computer vision system (APVIS) is proposed to attain this goal. The APVIS is constructed by tightly integrating two different types of parallel architectures, i.e., a multiprocessor-based system (MBS) and a memory-based processor array (MPA), into a single machine. One important feature of the APVIS is that the programming interface for executing data-parallel code on the MPA is the same as the usual subroutine-calling mechanism; thus the existence of the MPA is transparent to programmers. The goal of this research is to design an underlying base architecture that executes optimally across a broad range of vision tasks. A performance model is provided to show the effectiveness of the APVIS. It turns out that the proposed APVIS can provide significant performance improvement and cost effectiveness for highly parallel applications exhibiting a mixed set of parallelisms. Also, an example application composed of a series of vision algorithms, spanning low-level and medium-level processing steps, is mapped onto the MPA. Consequently, an APVIS with a few to tens of MPA modules can perform the chosen example application in real time when multiple images arrive successively with inter-arrival times of a few seconds.
