Perfect metrics
The authors describe an experiment in the construction of perfect metrics for minimum-distance classification of character images. A perfect metric is one that, with high probability, is zero for correct classifications and non-zero for incorrect classifications. Such metrics promise excellent reject behavior in addition to good rank ordering. The approach is to infer from the training data faithful but concise representations of the empirical class-conditional distributions. In doing this, the authors abandon many of the usual simplifying assumptions about the distributions, e.g., that they are simply connected, unimodal, convex, or parametric (e.g., Gaussian). The method requires unusually large and representative training sets, which the authors provide through pseudorandom generation of training samples using a realistic model of printing and imaging distortions. The authors illustrate the method on a challenging recognition problem: 3755 character classes of machine-print Chinese, in four typefaces, over a range of text sizes. In a test on over three million images, the perfect-metric classifier achieved better than 99% top-choice accuracy and is shown to be superior to a conventional parametric classifier.
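As a rough illustration of the idea, the sketch below implements minimum-distance classification with a reject rule: a sample is assigned to the class whose representation it is closest to, and rejected if no class metric is (near) zero. The abstract does not specify how the concise class-conditional representations are built, so the per-class prototype sets and the `reject_threshold` parameter here are illustrative assumptions, not the authors' method.

```python
import numpy as np

def classify(x, class_prototypes, reject_threshold=0.0):
    """Return (label, distance) for feature vector x, or (None, distance) to reject.

    class_prototypes: dict mapping class label -> array of shape (n_i, d).
    A 'perfect' metric would be (near) zero for the correct class and
    non-zero for all others, so a non-zero minimum suggests rejection.
    """
    best_label, best_dist = None, np.inf
    for label, protos in class_prototypes.items():
        # Distance of x to the closest stored prototype of this class
        # (a stand-in for the paper's concise class-conditional representation).
        d = np.min(np.linalg.norm(protos - x, axis=1))
        if d < best_dist:
            best_label, best_dist = label, d
    if best_dist > reject_threshold:
        return None, best_dist  # reject: no class metric is (near) zero
    return best_label, best_dist

# Toy usage with two hypothetical classes in a 2-D feature space.
prototypes = {
    "A": np.array([[0.0, 0.0], [0.1, 0.0]]),
    "B": np.array([[1.0, 1.0]]),
}
print(classify(np.array([0.05, 0.0]), prototypes, reject_threshold=0.2))  # classified as "A"
print(classify(np.array([5.0, 5.0]), prototypes, reject_threshold=0.2))   # rejected (None)
```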