Geometric Topology for Information Acquisition by Means of Smart Sensor Web

The geometric topology of one point per event, written in a higher-dimensional μ-space of data (e.g., the 6 W's: who, where, when, what, how, and why), can aid the design of information acquisition (IA) systems. The measurement intensity of each W's sensor, or the number of words used to describe a specific W attribute, sets the length of the corresponding vector dimension. N concurrent reports of the same event then become a set of N points scattered over μ-space. To discover the statistically independent components, an unsupervised (unbiased) artificial neural network (ANN) methodology called Independent Component Analysis (ICA) can reveal a new subspace, the feature space. The major and minor axes of this subspace correspond to highly precise and efficient combinations of the old attributes (e.g., 2-D feature domains consisting of "where-who-when" and "what-how-why" could be good choices for Internet search indices). One thus realizes that communicating an event is not just a matter of the "where" address: "who" and "when" are equally important attributes. In principle, the number of new sensors can be reduced (e.g., from the 6 W's to 2 features), provided the features are physically realizable.

In the combined 6N-dimensional Γ-space, a single point represents all N concurrent measurements, and the flow of such points generates the event behavior in time. The time flow over the reduced 2N-dimensional feature space generates invariant features called knowledge. For surveillance against terrorists, legacy electrical power line communication (PLC) offers a useful relay for the last mile of mobile communications in a Surveillance Sensor Web (SSW) employing ANN: no "where" addressing is needed for switching, thanks to smart coding and decoding of "who-when." After reviewing Auto-Regression (AR), we generalize AR to a supervised ANN implementation of Principal Component Analysis (PCA) (Appendix A) and then to an unsupervised-learning ANN for ICA (Appendix B).
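The μ-space-to-feature-space reduction described above can be sketched numerically. The following is a minimal illustration, not the paper's implementation: N synthetic reports are generated in a 6-D μ-space by mixing two hypothetical independent sources, whitened by PCA, and then unmixed with a FastICA-style fixed-point iteration. All data, dimensions, and iteration counts here are assumptions chosen for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: N concurrent reports of one event, each a point in
# 6-D mu-space (who, where, when, what, how, why). Two latent independent
# sources are mixed into the six observed attributes.
N = 2000
sources = np.stack([rng.laplace(size=N), rng.uniform(-1, 1, size=N)])  # (2, N)
A = rng.normal(size=(6, 2))          # mixing matrix into the 6 W-attributes
X = A @ sources                      # (6, N) observed mu-space points

# Center and whiten (the PCA step): keep the 2 principal directions.
X = X - X.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(X, full_matrices=False)
K = (U[:, :2] / s[:2]).T * np.sqrt(N)   # (2, 6) whitening matrix
Z = K @ X                               # whitened data, cov(Z) ~ I

# FastICA fixed-point iteration with tanh contrast, symmetric decorrelation.
W = rng.normal(size=(2, 2))
for _ in range(200):
    G = np.tanh(W @ Z)
    W_new = (G @ Z.T) / N - np.diag((1 - G**2).mean(axis=1)) @ W
    # symmetric decorrelation: W <- (W W^T)^(-1/2) W
    e, V = np.linalg.eigh(W_new @ W_new.T)
    W = V @ np.diag(e**-0.5) @ V.T @ W_new

features = W @ Z   # 2-D feature-space coordinates of the N reports
```

Up to sign and permutation, the two rows of `features` recover the independent sources, illustrating how 6 measured attributes can collapse onto 2 statistically independent feature axes.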
This is possible non-statistically because the classical closed-system information theory (CIT) of maximum Shannon entropy S must be generalized to an open brain information theory (BIT) with non-zero energy exchange E, governed by the minimum Helmholtz free energy H = E - T0·S at isothermal equilibrium (T0 = 37°C). For such an open BIT system, we prove the Lyapunov convergence theorem. We compute the ICA features of image textures in order to measure the information content of the ICA classifier.
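The open-system picture can be illustrated with a toy free-energy minimization. In this hedged sketch (the energy levels, temperature value, and relaxation dynamics are assumptions, not the paper's construction), H(p) = E - T0·S acts as a Lyapunov function: it decreases monotonically as a probability vector relaxes from the maximum-entropy (CIT) state toward the Boltzmann equilibrium that minimizes H:

```python
import numpy as np

eps = np.array([1.0, 2.0, 4.0])   # toy energy levels (arbitrary units)
T0 = 1.5                          # fixed temperature (stands in for 37 C)

def helmholtz(p):
    S = -np.sum(p * np.log(p))    # Shannon entropy
    E = np.dot(p, eps)            # mean energy
    return E - T0 * S             # Helmholtz free energy H = E - T0*S

# Relax toward equilibrium with a multiplicative (mirror-descent) update;
# along this dynamics H(p) plays the role of a Lyapunov function.
p = np.full(3, 1 / 3)             # maximum-entropy starting state
history = [helmholtz(p)]
for _ in range(100):
    grad = eps + T0 * (np.log(p) + 1.0)   # dH/dp_i
    p = p * np.exp(-0.5 * grad / T0)
    p /= p.sum()
    history.append(helmholtz(p))

# Analytic minimizer of H: the Boltzmann distribution p_i ~ exp(-eps_i/T0).
boltzmann = np.exp(-eps / T0)
boltzmann /= boltzmann.sum()
```

Each update moves p halfway toward the Boltzmann distribution in log-space, so `history` is non-increasing and `p` converges to the analytic minimizer, which is the Lyapunov-style convergence the abstract asserts for the open BIT system.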
