Feature Relevance Network-Based Transfer Learning for Indoor Location Estimation

We present a new machine learning framework for indoor location estimation. In many cases, locations can be estimated with traditional positioning methods and conventional machine learning approaches based on signalling devices, e.g., access points (APs). When the environment changes, however, such traditional methods break down because the data distribution shifts between training and deployment. To circumvent this difficulty, we introduce a feature relevance network-based method that focuses on the interrelatedness among features. Feature relevance networks are connected graphs representing the concurrency (co-occurrence) of signalling devices such as APs. In this relevance network, a test instance and the prototype of a location are expanded until convergence, and the expansion cost serves as the distance between the test instance and the prototype. Unlike other methods, our model is nonparametric and makes no assumptions about signal distributions. The proposed method is applied to the 2007 IEEE International Conference on Data Mining (ICDM) Data Mining Contest Task #2 (transfer learning), a typical situation in which the training and test datasets were gathered during different periods. Using the proposed method, we achieve an estimation accuracy of 0.3238, which is better than the best result of the contest.
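As a rough, hypothetical sketch of this idea (not the authors' exact algorithm), the Python snippet below builds an AP co-occurrence graph from training scans and scores a test instance against a location prototype by the cost of expanding both over that graph. The function names, the spreading rule, and the L1 expansion cost are illustrative assumptions only.

    from collections import defaultdict

    def build_relevance_network(scans):
        """AP co-occurrence graph: nodes are APs, edge weights count how often
        two APs are heard together in the same scan (hypothetical construction)."""
        edges = defaultdict(float)
        for scan in scans:                      # scan: set of AP identifiers in one reading
            aps = sorted(scan)
            for i, a in enumerate(aps):
                for b in aps[i + 1:]:
                    edges[(a, b)] += 1.0
                    edges[(b, a)] += 1.0
        return edges

    def expand(seed_aps, edges, steps=3):
        """Expand a seed set of APs over the network: activation spreads to
        neighbouring APs for a fixed number of steps (stand-in for 'until convergence')."""
        activation = defaultdict(float)
        for ap in seed_aps:
            activation[ap] = 1.0
        for _ in range(steps):
            spread = defaultdict(float)
            for (a, b), w in edges.items():
                spread[b] += 0.5 * w * activation[a]
            total = sum(spread.values()) or 1.0
            for ap, v in spread.items():
                activation[ap] = max(activation[ap], v / total)
        return activation

    def expansion_distance(test_aps, prototype_aps, edges):
        """Distance between a test instance and a location prototype, taken here
        as the L1 gap between their expanded activation patterns (an assumption)."""
        t = expand(test_aps, edges)
        p = expand(prototype_aps, edges)
        return sum(abs(t[ap] - p[ap]) for ap in set(t) | set(p))

In use, one would compute expansion_distance for the test scan against each location's prototype and predict the location with the smallest cost; because the distance depends only on graph structure, no parametric model of signal strength is assumed.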
