Text to 3D Scene Generation

Creating 3D graphics is a difficult and time-consuming process. We see the need for a new paradigm in which the creation of 3D graphics is both effortless and immediate. We therefore propose a text-to-3D scene generation system that incorporates user interaction. The user provides natural language text as input, and the system identifies explicit constraints on the objects that should appear in the scene. From these explicit constraints, the system applies various priors to infer implicit constraints on the objects, and it also identifies the scene type from the combined constraints. A candidate scene is then generated and continuously refined through user interaction, and the final scene is rendered as output.
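
To make the pipeline concrete, the sketch below illustrates the first two stages (explicit constraint extraction and prior-based implicit constraints) in Python. It is a minimal illustration under assumed names: the `SceneConstraint` class, the toy `SUPPORT_PRIORS` table, and all function names are hypothetical stand-ins, not part of any released implementation.

```python
from dataclasses import dataclass

# Small closed vocabulary so the toy parser stays self-contained (assumption).
KNOWN_OBJECTS = {"plate", "table", "lamp", "desk", "chair"}

# Toy support prior: where an object usually rests when the text does not say.
SUPPORT_PRIORS = {"plate": ("table", "on_top_of"),
                  "lamp": ("desk", "on_top_of")}


@dataclass
class SceneConstraint:
    subject: str    # object the constraint applies to, e.g. "plate"
    relation: str   # spatial relation, e.g. "on_top_of"
    reference: str  # reference object, e.g. "table"
    source: str     # "explicit" (stated in text) or "implicit" (from priors)


def tokenize(text: str) -> list[str]:
    stop = {"a", "an", "the", "is", "are", "there", "and"}
    return [w for w in text.lower().replace(".", "").split() if w not in stop]


def extract_explicit_constraints(tokens: list[str]) -> list[SceneConstraint]:
    """Stand-in for the parsing step: pick up simple 'X on Y' patterns."""
    out = []
    for i, w in enumerate(tokens):
        if w == "on" and 0 < i < len(tokens) - 1:
            out.append(SceneConstraint(tokens[i - 1], "on_top_of",
                                       tokens[i + 1], "explicit"))
    return out


def infer_implicit_constraints(objects, explicit):
    """Add constraints the text leaves unstated, using the support priors."""
    out = []
    for obj in objects:
        if obj in SUPPORT_PRIORS and not any(c.subject == obj for c in explicit):
            ref, rel = SUPPORT_PRIORS[obj]
            out.append(SceneConstraint(obj, rel, ref, "implicit"))
    return out


if __name__ == "__main__":
    text = "There is a lamp and a plate on the table."
    tokens = tokenize(text)
    objects = [w for w in tokens if w in KNOWN_OBJECTS]
    constraints = extract_explicit_constraints(tokens)
    constraints += infer_implicit_constraints(objects, constraints)
    for c in constraints:
        print(f"{c.subject} {c.relation} {c.reference} ({c.source})")
    # plate on_top_of table (explicit)
    # lamp  on_top_of desk  (implicit)
```

In the full system, the resulting constraint set would drive object placement for a candidate scene, which is then shown to the user; edits from the interactive loop update the constraints and the scene is re-generated until the final render.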