Children generally understand visual representations better than narration alone. This is advantageous for learning school lessons, as it engages children and enhances their imaginative skills. By combining natural language processing techniques with computer graphics, it is possible to bridge the gap between these two fields; doing so not only eliminates the manual labor currently involved but can also yield efficient and effective system frameworks that form a foundation for more complex applications. In this paper we present an architecture for a natural language processing (NLP) engine that can be used for 3D scene generation: the input, given in textual form, is processed by each module of the NLP engine. The text is restricted by a constraint-based grammar (CBG), which minimizes ambiguity and simplifies the noun fragmentation process. The output of the NLP engine is a sentence that satisfies these custom grammatical rules.
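To make the pipeline concrete, the sketch below shows one possible, purely illustrative realization of such an engine: an input sentence is accepted only if it matches a small constraint-based grammar, and a conforming sentence is then fragmented into nouns and a spatial relation for a scene-generation back end. The pattern set, function names, and output format are assumptions for illustration, not the paper's actual grammar or modules.

```python
import re

# A toy constraint-based grammar (assumed for illustration): an input sentence
# is accepted only if it matches one of a small set of allowed patterns, e.g.
# "the <noun> is <relation> the <noun>".
ALLOWED_PATTERNS = [
    re.compile(r"^the (\w+) is (on|under|beside|behind) the (\w+)$", re.IGNORECASE),
]

def matches_cbg(sentence: str):
    """Return the regex match if the sentence satisfies the constraint-based
    grammar, otherwise None (the sentence is rejected as too ambiguous)."""
    for pattern in ALLOWED_PATTERNS:
        match = pattern.match(sentence.strip())
        if match:
            return match
    return None

def fragment_nouns(match) -> dict:
    """Split a grammar-conforming sentence into noun fragments plus a spatial
    relation, ready to be handed to a scene-generation back end."""
    subject, relation, obj = match.groups()
    return {"subject": subject.lower(),
            "relation": relation.lower(),
            "object": obj.lower()}

if __name__ == "__main__":
    text = "The ball is on the table"
    match = matches_cbg(text)
    if match:
        # e.g. {'subject': 'ball', 'relation': 'on', 'object': 'table'}
        print(fragment_nouns(match))
    else:
        print("Rejected: input does not satisfy the constraint-based grammar")
```

A real engine would replace the single regular expression with the full CBG and add the remaining NLP modules, but the accept-then-fragment flow is the same.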