Neu-IR: The SIGIR 2016 Workshop on Neural Information Retrieval

In recent years, deep neural networks have yielded significant performance improvements on speech recognition and computer vision tasks, and have led to exciting breakthroughs in novel application areas such as automatic voice translation, image captioning, and conversational agents. Although deep neural networks have demonstrated good performance on natural language processing (NLP) tasks (e.g., language modelling and machine translation), their performance on information retrieval (IR) tasks has received relatively little scrutiny. Recent work in this area has mainly focused on word embeddings and neural models for short text similarity. The scarcity of positive results in IR is partly because IR tasks such as ranking are fundamentally different from NLP tasks, but also because the IR and neural network communities are only beginning to focus on applying these techniques to core information retrieval problems. Given that deep learning has made such a big impact, first on speech processing and computer vision and now, increasingly, on computational linguistics, it seems clear that deep learning will have a major impact on information retrieval, and that this is an ideal time for a workshop in this area. Neu-IR (pronounced "new IR") will be a forum for new research relating to deep learning and other neural network based approaches to IR. The purpose is to provide an opportunity for people to present new work and early results, compare notes on neural network toolkits, share best practices, and discuss the main challenges facing this line of research.
