Conditional Ranking on Relational Data

In domains such as bioinformatics, information retrieval and social network analysis, learning tasks arise in which the goal is to infer a ranking of objects conditioned on a particular target object. We present a general kernel framework for learning such conditional rankings from various types of relational data, in which rankings can be conditioned on unseen data objects. Conditional ranking from symmetric or reciprocal relations can be treated as two important special cases within this framework. Furthermore, we propose an efficient algorithm for conditional ranking that optimizes a squared ranking loss function. Experiments on synthetic and real-world data show that this approach delivers state-of-the-art predictive performance at low computational cost. Moreover, we show empirically that incorporating domain knowledge about the underlying relations into the model can improve generalization performance.
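The core idea can be sketched in a few lines of code. The snippet below is a minimal illustration, not the paper's actual method: it represents each (conditioning object, candidate object) pair by a simple joint feature map (an elementwise product, standing in for the Kronecker-product feature map commonly used with pairwise kernels), and fits a linear scoring function by minimizing a squared ranking loss over candidate pairs that share the same conditioning object. All names, the feature map, and the regularization constant are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy relational data: conditioning ("query") objects U and candidate
# objects V, each described by a d-dimensional feature vector.
n_queries, n_cands, d = 20, 10, 5
U = rng.normal(size=(n_queries, d))
V = rng.normal(size=(n_cands, d))
w_true = rng.normal(size=d)

def phi(u, v):
    # Joint feature map for the pair (u, v); an elementwise product is
    # used here as a simple stand-in (an assumption for illustration).
    return u * v

# Ground-truth relation scores, perturbed with a little noise.
scores = np.array([[phi(u, v) @ w_true for v in V] for u in U])
scores += 0.1 * rng.normal(size=scores.shape)

# Squared ranking loss: for every pair of candidates under the same
# conditioning object, regress the feature difference onto the score
# difference. Ridge regression on these differences is solvable in
# closed form.
diffs, targets = [], []
for q in range(n_queries):
    for i in range(n_cands):
        for j in range(i + 1, n_cands):
            diffs.append(phi(U[q], V[i]) - phi(U[q], V[j]))
            targets.append(scores[q, i] - scores[q, j])
D = np.array(diffs)
t = np.array(targets)

lam = 1.0  # regularization strength (assumed hyperparameter)
w = np.linalg.solve(D.T @ D + lam * np.eye(d), D.T @ t)

# Conditional ranking for an unseen conditioning object: score every
# candidate against u_new and sort best-first.
u_new = rng.normal(size=d)
pred = np.array([phi(u_new, v) @ w for v in V])
ranking = np.argsort(-pred)
print(ranking)
```

Because the loss is a sum of squared pairwise differences, the learned weights have a closed form; the paper's contribution lies in making this kind of computation efficient for kernels over relational data, which the sketch above does not attempt.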
