Partial likelihood for signal processing

We present partial likelihood (PL) as an effective means of developing nonlinear techniques for signal processing. Posing signal processing problems in a likelihood setting offers several advantages, such as access to powerful statistical tools and straightforward incorporation of model order/complexity selection through appropriate information-theoretic criteria. However, likelihood formulations in most time-series applications require a mechanism to discount the dependence structure of the data. We address how PL bypasses this requirement and note that it may coincide with conditional likelihood in a number of cases. We show that PL theory can also be used to establish the fundamental information-theoretic connection, the equivalence of likelihood maximization and relative entropy minimization, without assuming independent observations, an assumption that is unrealistic for most signal processing applications. We show that this equivalence holds for a basic class of probability models, the exponential family, which includes many important structures that can be used as nonlinear filters. We conclude with examples of the application of PL theory.
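The sketch below is not from the paper; it is a minimal illustration, under simplified assumptions (a binary antipodal source, a hypothetical two-tap FIR channel h = [1.0, 0.5], and a single logistic node standing in for the nonlinear filter), of how maximizing the log partial likelihood of an exponential-family model conditioned on past observations reduces to the cross-entropy (relative-entropy) minimization discussed in the abstract. The window length, learning rate, and noise level are illustrative choices, not values from the paper.

```python
# Minimal sketch: maximizing the log partial likelihood of a logistic
# (exponential-family) model used as a nonlinear channel equalizer.
# The channel taps, window length, and training setup are illustrative
# assumptions, not the configuration used in the paper.
import numpy as np

rng = np.random.default_rng(0)

# --- synthetic dependent data: binary symbols through an ISI channel ---
n = 2000
symbols = rng.choice([-1.0, 1.0], size=n)             # transmitted bits (antipodal)
h = np.array([1.0, 0.5])                               # hypothetical FIR channel
received = np.convolve(symbols, h)[:n] + 0.3 * rng.standard_normal(n)

# Tap-delay-line features: each decision is conditioned on past
# observations F_t, which is what partial likelihood requires.
order = 3
X = np.column_stack([np.roll(received, k) for k in range(order)])[order:]
y = (symbols[order:] > 0).astype(float)                # labels in {0, 1}

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# --- gradient ascent on the log partial likelihood ---
# log PL(w) = sum_t [ y_t log p_t + (1 - y_t) log(1 - p_t) ],
# with p_t = sigmoid(w . x_t + b); maximizing it is equivalent to
# minimizing the relative entropy between the empirical and model
# conditional distributions, the equivalence noted in the abstract.
w = np.zeros(order)
b = 0.0
lr = 0.05
for _ in range(300):
    p = sigmoid(X @ w + b)
    w += lr * (X.T @ (y - p)) / len(y)
    b += lr * np.mean(y - p)

decisions = sigmoid(X @ w + b) > 0.5
print("training symbol error rate:", np.mean(decisions != y.astype(bool)))
```

Note that no independence assumption is made on the received samples: the likelihood is factored over the conditional terms given the past, which is exactly the mechanism by which PL handles dependent data.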
