A survey of feature space reduction methods for context-aware processing in IoBT networks

Military use of the Internet of Things in a battlefield environment aims to combine the information collected from a system of heterogeneous sensors and actuators into a cohesive model of the relevant battlefield, so that intelligent agents can make risk-aware decisions or take appropriate actions and collectively give warfighters an edge. To perform inference and reasoning under uncertainty efficiently, the most important and relevant features, regardless of modality, must be identified for each context, classification task, and classification approach. Doing so reduces the computational cost of building a specific model, can increase the model's accuracy, and may also allow the model to generalize. However, the dynamic and adversarial nature of the battlefield means that the availability and reliability of sensors vary over time. Adding some redundancy to the set of features used to train an ensemble of classifiers may therefore improve model robustness and reduce uncertainty. One way to achieve this is to model the feature space so that the likely importance of a given set of features can be estimated as the context, classification task, or approach varies. To efficiently characterize the shape of a given feature space and to locate clusters of features in a locally distributed fashion, we survey methods for selecting important features and for describing or exploring a given feature space.
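To ground the two families of methods surveyed, the sketch below pairs a filter-style selector, which ranks features by their mutual information with the class label, with a manifold-learning step (locally linear embedding) that exposes the local shape of the reduced feature space. This is a minimal sketch, assuming scikit-learn and a synthetic stand-in for a multimodal sensor feature matrix; the feature counts and the k and n_neighbors values are illustrative assumptions, not drawn from any of the surveyed systems.

```python
# Minimal sketch: filter-style feature selection via mutual information,
# followed by a locally linear embedding of the reduced feature space.
# The dataset and all parameter values here are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.manifold import LocallyLinearEmbedding

# Synthetic stand-in for a multimodal sensor feature matrix:
# 500 observations, 40 features, only 8 of them informative.
X, y = make_classification(n_samples=500, n_features=40,
                           n_informative=8, n_redundant=6,
                           random_state=0)

# Filter step: score each feature by its mutual information with the
# class label and keep the k highest-scoring features.
selector = SelectKBest(score_func=mutual_info_classif, k=10)
X_sel = selector.fit_transform(X, y)
kept = np.argsort(selector.scores_)[::-1][:10]
print("selected feature indices:", sorted(kept.tolist()))

# Manifold step: embed the selected features in two dimensions to
# expose the local structure (clusters) of the feature space.
lle = LocallyLinearEmbedding(n_neighbors=10, n_components=2,
                             random_state=0)
X_emb = lle.fit_transform(X_sel)
print("embedding shape:", X_emb.shape)  # (500, 2)
```

A filter method is shown here because it is classifier-agnostic and cheap enough to re-run as the sensing context changes; wrapper or embedded methods could replace the filter step when the selected features must be tuned to a specific classifier.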
