In Bayesian pattern recognition research, static classifiers have featured prominently in the literature. A static classifier is based on a static model of the input statistics and thereby assumes input ergodicity, which is rarely realistic in practice. Classical Bayesian approaches attempt to circumvent the limitations of static classifiers, which include brittleness and narrow coverage, by training extensively on a data set assumed to cover more than the subtense of the expected input. Such assumptions are unrealistic for more complex pattern classification tasks, for example, object detection via pattern classification applied to the output of computer vision filters. In contrast, we have developed a two-step process that can render the majority of static classifiers adaptive, so that the tracking of input nonergodicities is supported. First, we developed operations that dynamically insert (resp. delete) training patterns into (resp. from) the classifier's pattern database, without requiring that the classifier's internal representation of its training database be completely recomputed. Second, we developed and applied a pattern replacement algorithm that uses these insertion/deletion operations. This algorithm is designed to optimize the pattern database for a given set of performance measures, thereby supporting closed-loop, performance-directed optimization. This paper presents theory and algorithmic approaches for the efficient computation of adaptive linear and nonlinear pattern recognition operators that use our pattern insertion/deletion technology, in particular, tabular nearest-neighbor encoding (TNE) and lattice associative memories (LAMs). Of particular interest is the classification of nonergodic data streams corrupted by noise with time-varying statistics.
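For lattice auto-associative memories, the insertion operation described above has a particularly simple incremental form: folding a new pattern into the min-memory W is an elementwise lattice minimum with that pattern's outer lattice product, so the memory never needs to be recomputed from the full pattern database (deletion is the harder direction and motivates the additional operations developed here). The following is a minimal illustrative NumPy sketch, not the paper's implementation; function names and the toy patterns are assumptions.

```python
import numpy as np

def lam_insert(W, x):
    """Fold one new pattern x into the min-memory W incrementally:
    W_new[i, j] = min(W_old[i, j], x[i] - x[j]).
    No recomputation over the stored pattern database is required."""
    D = x[:, None] - x[None, :]      # outer lattice product of x with itself
    return np.minimum(W, D)

def lam_recall(W, x):
    """Max-plus recall: y[i] = max_j (W[i, j] + x[j])."""
    return np.max(W + x[None, :], axis=1)

# Build the memory one pattern at a time from three toy exemplars.
patterns = [np.array([1.0, 4.0, 2.0]),
            np.array([3.0, 0.0, 5.0]),
            np.array([2.0, 2.0, 2.0])]
W = np.full((3, 3), np.inf)          # lattice identity for elementwise min
for p in patterns:
    W = lam_insert(W, p)

# Every stored pattern is a fixed point of max-plus recall.
for p in patterns:
    assert np.allclose(lam_recall(W, p), p)
```

The fixed-point property follows because W[i, i] = 0 after any insertion, while W[i, j] + x[j] never exceeds x[i] for a stored pattern x; this is what makes the memory usable as a classifier front end for uncorrupted inputs.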
The TNE- and LAM-based classifiers discussed herein have been successfully applied to object classification in hyperspectral remote sensing and target recognition applications. The authors' recent research on adaptive TNE and adaptive LAMs is overviewed, with experimental results that demonstrate utility for a wide variety of pattern classification applications. Performance results are presented in terms of measured computational cost, noise tolerance, and classification accuracy.
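The efficiency of TNE comes from replacing a runtime nearest-neighbor search with a precomputed lookup table over a quantized input space. The sketch below is a hypothetical minimal illustration of that idea, assuming uniform quantization and Euclidean nearest-neighbor labeling; the actual TNE table construction in the work overviewed here differs in detail, and all names are illustrative.

```python
import numpy as np

def build_table(prototypes, labels, lo, hi, bins):
    """Precompute a nearest-neighbor label for every cell of a uniform
    grid over [lo, hi]^d, so classification becomes an O(1) table lookup.
    (Illustrative only; not the published TNE construction.)"""
    d = prototypes.shape[1]
    edges = np.linspace(lo, hi, bins)
    grid = np.stack(np.meshgrid(*([edges] * d), indexing="ij"), axis=-1)
    flat = grid.reshape(-1, d)
    dist = np.linalg.norm(flat[:, None, :] - prototypes[None, :, :], axis=2)
    return labels[np.argmin(dist, axis=1)].reshape([bins] * d)

def classify(table, x, lo, hi, bins):
    """Quantize the input to its grid cell and read the stored label."""
    idx = np.clip(((x - lo) / (hi - lo) * (bins - 1)).round().astype(int),
                  0, bins - 1)
    return table[tuple(idx)]

# Two toy prototypes in 2-D, labeled 0 and 1.
protos = np.array([[0.0, 0.0], [1.0, 1.0]])
labs = np.array([0, 1])
T = build_table(protos, labs, 0.0, 1.0, bins=5)
```

Under this scheme, inserting or deleting a training pattern only requires updating the table cells whose nearest neighbor changes, which is the kind of local, incremental update the adaptive framework exploits.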