Solving Verbal Questions in IQ Test by Knowledge-Powered Word Embedding

Verbal comprehension questions appear frequently in Intelligence Quotient (IQ) tests; they measure a human's verbal ability, including the understanding of words with multiple senses, of synonyms and antonyms, and of analogies among words. In this work, we explore whether such tests can be solved automatically by deep learning technologies for text data. We found the task quite challenging: simply applying existing technologies such as word embedding could not achieve good performance, due to the multiple senses of words and the complex relations among words. To tackle these challenges, we propose a novel framework that automatically solves verbal IQ questions by leveraging an improved word embedding, which jointly considers the multi-sense nature of words and the relational knowledge among words. Experimental results show that the proposed framework not only outperforms existing methods for solving verbal comprehension questions but also exceeds the average performance of the Amazon Mechanical Turk workers involved in our study.
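As a minimal illustration of the baseline the abstract alludes to (not the paper's full framework), an analogy question of the form "a : b :: c : ?" can be attacked with the classic vector-offset trick over word embeddings, picking the answer candidate closest to b - a + c. The toy 3-dimensional vectors below are made up for demonstration; a real system would use embeddings trained on a large corpus.

```python
# Sketch: solving a verbal analogy question with the vector-offset
# heuristic over word embeddings. The embeddings here are hypothetical
# toy vectors, chosen only so the geometry of the trick is visible.
import numpy as np

embeddings = {
    "king":  np.array([0.8, 0.7, 0.1]),
    "queen": np.array([0.8, 0.1, 0.7]),
    "man":   np.array([0.3, 0.9, 0.1]),
    "woman": np.array([0.3, 0.2, 0.8]),
    "apple": np.array([0.1, 0.1, 0.1]),
}

def cosine(u, v):
    # Cosine similarity between two vectors.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def solve_analogy(a, b, c, candidates):
    # "a : b :: c : ?"  ->  candidate closest to b - a + c.
    target = embeddings[b] - embeddings[a] + embeddings[c]
    return max(candidates, key=lambda w: cosine(embeddings[w], target))

print(solve_analogy("king", "queen", "man", ["woman", "apple"]))  # -> woman
```

A single-sense embedding like this is exactly what the abstract argues is insufficient for IQ tests: words with multiple senses get one conflated vector, which is part of the motivation for the multi-sense, knowledge-powered embedding the paper proposes.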
