Modeling Shapes and Graphics Concepts in an Ontology

This work presents the development of a graphics ontology for natural language interfaces. In the first phase, the ontology was developed in a standard way, based on documentation and textbooks for graphics systems as well as existing ontologies. In the second phase, we collected sets of natural language instructions for creating and modifying graphic images from human subjects. In these sentences, people describe actions to create or modify images; graphic objects and shapes; and features of shapes, such as size, colour, and orientation. When analyzing these sentences, we found that some concepts associated with shape features needed to be added to or modified in the ontology. The ontology was then integrated with a natural language interface and a graphics generation module, yielding the Lang2Graph system. Lang2Graph accepts verbal instructions in the graphics domain as input and creates corresponding images as output. We tested the system using a subset of the collected sentences as input and determined the correctness of the output images with two methods: an objective, feature-based measurement of the goodness of fit of the created image, and a corresponding subjective evaluation by human users. In both tests, the system achieved an accuracy of approximately 80% or higher.
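To make the modeled concepts concrete, the following is a minimal illustrative sketch (not the paper's actual ontology representation) of the kinds of entities the abstract describes: shapes carrying features such as size, colour, and orientation, together with create and modify actions over a scene. All class and method names here are hypothetical.

```python
# Illustrative sketch only: a toy object model of the abstract's concepts.
# Shapes have features (size, colour, orientation); a Scene supports the
# two action types people used in the collected instructions: create, modify.
from dataclasses import dataclass


@dataclass
class Shape:
    name: str
    size: float = 1.0          # relative size
    colour: str = "black"
    orientation: float = 0.0   # rotation in degrees


class Scene:
    """Holds named shapes and applies create/modify instructions."""

    def __init__(self):
        self.shapes = {}

    def create(self, name, **features):
        # "Create" action: add a new shape with the given features.
        self.shapes[name] = Shape(name, **features)
        return self.shapes[name]

    def modify(self, name, **features):
        # "Modify" action: update features of an existing shape.
        shape = self.shapes[name]
        for feature, value in features.items():
            setattr(shape, feature, value)
        return shape


scene = Scene()
scene.create("circle1", colour="red", size=2.0)
scene.modify("circle1", orientation=45.0)
```

In such a sketch, a sentence like "make the red circle bigger" would map to a `modify` call that changes the `size` feature, which is the kind of mapping the ontology is meant to support.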