Generative Model for NLP Applications based on Component Extraction

Abstract People around the world speak many different languages, but a computer system or any other computerized machine understands only a single language: binary (1s and 0s). The system or process that converts human language into a computer-understandable form is known as Natural Language Processing (NLP). Although various diversified models have been suggested so far, a generative predictive model that can optimize itself depending on the nature of the problem being addressed is still an open area of research. This paper presents a Generative Model for NLP Applications based on significant components extracted from case studies. The generative model is a single platform for diversified areas of NLP that can address specific problems such as reading text, hearing speech, interpreting it, measuring sentiment, and determining which parts are important. This is achieved by a process of elimination once the relevant components are identified. The single platform provides the same model for generating and reproducing optimized solutions while addressing different issues.
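As a rough illustration of the single-platform idea described in the abstract, the following Python sketch shows a hypothetical component registry and a process-of-elimination step that keeps only the components a given problem needs. All names (Component, select_components, build_pipeline, the registry entries) are assumptions made for illustration and are not taken from the paper's implementation.

```python
# Minimal sketch (assumed names, not the paper's implementation) of a
# single-platform NLP pipeline that retains only the components relevant
# to a given problem and eliminates the rest.

from dataclasses import dataclass
from typing import Callable, List, Set


@dataclass
class Component:
    name: str
    handles: Set[str]               # capabilities this component provides
    run: Callable[[str], str]       # the component's processing step


def select_components(components: List[Component], required: Set[str]) -> List[Component]:
    """Process of elimination: drop every component whose capabilities
    are not needed for the problem being addressed."""
    return [c for c in components if c.handles & required]


# Hypothetical components of the shared platform.
REGISTRY = [
    Component("reader", {"read_text"}, lambda text: text.strip()),
    Component("sentiment", {"sentiment"}, lambda text: f"sentiment({text[:20]}...)"),
    Component("summarizer", {"importance"}, lambda text: text.split(".")[0]),
]


def build_pipeline(problem_needs: Set[str]) -> Callable[[str], List[str]]:
    """Generate a problem-specific pipeline from the same shared platform."""
    selected = select_components(REGISTRY, problem_needs)

    def pipeline(text: str) -> List[str]:
        return [c.run(text) for c in selected]

    return pipeline


if __name__ == "__main__":
    # A sentiment-oriented task only needs reading and sentiment scoring;
    # the summarization component is eliminated.
    run = build_pipeline({"read_text", "sentiment"})
    print(run("The tutor listens to the child read aloud. It adapts to errors."))
```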
