Calligraphic Interfaces and Geometric Reconstruction

Analysing the CAD market over the last few years, we can conclude that there has been little evolution in the development of user interfaces. Nowadays, all commercial applications follow the WIMP (Window, Icon, Menu, Pointing device) paradigm. This explains why CAD software still has a minimal impact on the conceptual design phase, where the "paper and pencil" approach remains the basic support for expressing the designer's creativity. In this context, the research group REGEO (http://www.tec.uji.es/regeo) has developed in recent years several algorithms in the field of geometric reconstruction. In this paper we present the group's current work on geometric reconstruction and calligraphic interfaces.

We believe that a system aimed at supporting the preliminary design phases must be designed to conform to user habits, not to force users to operate in an environment that does not exploit their sketching skills. It seems contradictory to develop a user interface that ignores an ability so common among designers. For this reason we consider it very interesting to combine our previous work on geometric reconstruction with the emerging field of calligraphic interfaces.

In recent years we have developed a software application called REFER, oriented to building a 3D model from an axonometric projection taken as input. At this moment the reconstructor is robust enough to deal with simple polyhedral geometry. Its input is a "perfect" drawing generated by means of a 2D CAD system in DXF format. We call it perfect because all the vertices are correctly defined and the edges are straight lines connecting vertices. REFER uses an optimisation approach to solve the reconstruction problem, and its output is a surface model.

To provide REFER with a friendly interface we have adopted a sketch-based one. The user communicates with the application through a stylus and a graphics tablet or LCD tablet, simply sketching an axonometric perspective drawing that serves as input to the reconstructor. We present a new REFER module designed to capture sketches that, after proper pre-processing, are used as input to the reconstruction phase. We also present a second module devoted to exporting the reconstructed object in VRML 2.0 format (ISO 14772). In this way, we obtain a self-contained application including a sketching interface, a reconstruction engine and a VRML 2.0 export module.
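To make the export stage concrete, the following is a minimal sketch of how a reconstructed surface model (a list of 3D vertices plus planar faces given as vertex indices) could be written as a VRML 2.0 (ISO 14772) IndexedFaceSet. The function name and data layout are assumptions for illustration only; this is not REFER's actual export code.

# Minimal, illustrative VRML 2.0 exporter for a surface model.
# "vertices" is a list of (x, y, z) tuples; "faces" is a list of
# vertex-index tuples. These names are assumptions of this sketch.
def export_vrml(path, vertices, faces):
    with open(path, "w") as f:
        f.write("#VRML V2.0 utf8\n")                     # mandatory header line
        f.write("Shape {\n  geometry IndexedFaceSet {\n")
        f.write("    solid FALSE\n")                     # render both sides of each face
        f.write("    coord Coordinate {\n      point [\n")
        for x, y, z in vertices:
            f.write(f"        {x:.4f} {y:.4f} {z:.4f},\n")
        f.write("      ]\n    }\n")
        f.write("    coordIndex [\n")
        for face in faces:
            # each face is a run of vertex indices terminated by -1
            f.write("      " + " ".join(str(i) for i in face) + " -1,\n")
        f.write("    ]\n  }\n}\n")

# Example: a unit cube described by eight vertices and six quadrilateral faces.
cube_vertices = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),
                 (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]
cube_faces = [(0, 1, 2, 3), (4, 5, 6, 7), (0, 1, 5, 4),
              (1, 2, 6, 5), (2, 3, 7, 6), (3, 0, 4, 7)]
export_vrml("cube.wrl", cube_vertices, cube_faces)

Each face in coordIndex is a sequence of vertex indices closed by -1, as required by the VRML 2.0 specification, so any VRML 2.0 browser should display the resulting cube.wrl file.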
1. GENERAL PRESENTATION

Analysing current CAD systems, we can conclude that they have evolved greatly during the last decades. Today, feature-based solid modelling and constraint-based modelling are widely implemented concepts. But user interfaces have evolved slowly, from the old UNIX-like command-line interfaces to the WIMP (Window, Icon, Menu, Pointing device) paradigm. This kind of interface is still not very friendly in the first stages of the design process, where the engineer expresses his creativity with paper and pencil [1][2][3]. The earliest CAD systems used devices like light pens for data entry, but nowadays the mouse is the dominant pointing device. Graphics tablets are still used, but, by means of templates, they follow the same "point and click" philosophy. During the last decades, however, different research lines have been explored to improve man-machine interaction in CAD systems, with the first references dating from the late 60's and the beginning of the 70's [4][5][6]. One of these research lines concerns using an electronic version of the paper and pencil tools.

The most similar devices for this purpose are a stylus and a graphics tablet or LCD tablet. During the early 90's Microsoft tried to promote "Windows for Pen Computing", an operating system built on top of standard Windows with a set of extensions to support a user interface based on interaction with a stylus and an LCD tablet. It failed because of the small processing power of those "pen computers". But present computer processing power makes it feasible to implement alternative input modalities. New interface technologies are emerging that use sketching, drawing and pen input to support man-machine communication. The term "calligraphic interface" covers this new kind of user interface. Examples of new devices supporting these technologies are the popular PDAs and the "Tablet PC" prototype Microsoft showed last November at the COMDEX fair. Most of the applications developed under the "pen computing" umbrella use gestures [7][8] as the basic command input; in addition, handwritten and sketched input is supported by these systems. For example, in early 2001 IBM presented the TransNote laptop computer, which captures handwritten ideas on paper and transfers them to the computer, where they can be organized and searched.

In this context, there is a research line devoted to using sketch input to build 3D geometric models. There are two main trends. One is based on interacting with the user by means of gestures that are recognised as modelling commands. The second approach is based on geometric reconstruction starting from a projection of the object. A third, hybrid approach combines both gesture input and geometric reconstruction. Examples of the first type of system, which we will call "gesture based", are:

• SKETCH [9], focused on architectural design, where the modelling commands are gestures. For example, three lines concurrent at a point define a block primitive. Positive volumes are drawn from top to bottom, and negative ones from bottom to top. SKETCH-N-MAKE [10] works in a similar way but is oriented towards machining simple parts.
• Quick-Sketch [11], a tool oriented to mechanical design. It provides a constraint-based drawing environment and builds 3D models by means of a set of command gestures.
• Teddy [12], which allows modelling freeform surfaces with a very simple interface. The procedure consists in drawing the object silhouette, and then the application provides a polygonal mesh adapted to that silhouette. The application is Java based and runs in an Internet browser.
• GIDeS [13], which builds 3D models from a perspective projection or from multiple orthographic projections. The system provides a gesture alphabet for building a reduced set of geometric primitives. It also provides dynamic gesture recognition to confirm design intent.

The second research line, which we could call "geometric reconstruction", takes an axonometric projection as input and provides a 3D geometric model as output. There are two main approaches to carry out this process. One is based on the Huffman-Clowes labelling algorithm [14][15]. The second approach uses an optimisation formulation [18] based on human perception; it takes into account the mental processes, simplifications and assumptions we make when looking at a perspective drawing. The optimisation procedure often yields unfeasible solutions that are mathematically correct but visually incorrect. It must be pointed out that, from the point of view of geometry, it has always been well known that the full recovery of a 3D geometric model from a single projection of it is not possible. Nevertheless, in the field of psychology it is equally well known that humans seem to have no problem identifying the 3D models depicted in 2D images. What is more, there seems to be a broad consensus about which "correct" and "single" model all humans see in every picture. The reason is that humans, when "reading" drawings, perform implicit recovery actions. According to the Gestalt school, this is because human perception follows some common characteristics, called "principles of organisation". Consequently, visual perception rules must also be considered to face the reconstruction problem. These rules are implemented by means of a mathematical formulation that is solved through an optimisation process. Several reconstruction engines of this kind have been developed by authors such as Marill, Leclerc, Fischler, Lipson and Shpitalni [16][17][19].
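To illustrate the kind of formulation these engines solve, the following minimal sketch inflates a flat line drawing by searching for vertex depths that minimise the standard deviation of the angles between edges meeting at each vertex, one of the perception-inspired regularities proposed by Marill. It is only an illustration built on NumPy and SciPy; the example data, the fixed reference vertex and the function names are assumptions and do not reproduce REFER's objective function.

# Illustrative optimisation-based inflation of a 2D line drawing
# (Marill-style "minimum standard deviation of angles" regularity).
import numpy as np
from itertools import combinations
from scipy.optimize import minimize

# 2D projection of a simple corner: three edges meeting at vertex 0
# (assumed example data; in REFER these come from the pre-processed sketch).
xy = np.array([[0.0, 0.0], [1.0, -0.5], [-1.0, -0.5], [0.0, 1.0]])
edges = [(0, 1), (0, 2), (0, 3)]

def angle_std(z):
    """Standard deviation of the angles between edges sharing a vertex."""
    pts = np.column_stack([xy, z])          # candidate 3D positions
    angles = []
    for v in range(len(pts)):
        incident = [b if a == v else a for (a, b) in edges if v in (a, b)]
        for w1, w2 in combinations(incident, 2):
            u, t = pts[w1] - pts[v], pts[w2] - pts[v]
            cosang = np.dot(u, t) / (np.linalg.norm(u) * np.linalg.norm(t))
            angles.append(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return np.std(angles)

def objective(z_free):
    # Fix the depth of vertex 0 to remove the global translation ambiguity.
    return angle_std(np.concatenate(([0.0], z_free)))

result = minimize(objective, x0=0.1 * np.ones(len(xy) - 1), method="Nelder-Mead")
print("recovered depths:", np.round(np.concatenate(([0.0], result.x)), 3))

Practical engines such as those cited above combine several regularities (face planarity, corner orthogonality, parallelism of lines, and so on) into a weighted objective function, since a single criterion like the one sketched here admits degenerate minima.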
Finally, there is a third way that combines the gesture and reconstruction approaches, following a hybrid technique. The most interesting systems are:

• Digital Clay [20], which works with polyhedral objects. It uses a calligraphic interface for data input which, after proper pre-processing, is fed to a reconstruction engine based on the Huffman-Clowes labelling algorithm; the reconstructed object is then exported to VRML format.
• Stilton [21], which is oriented to architectural design. It implements a calligraphic interface directly on a VRML browser. Its reconstruction kernel uses the optimisation approach and operates with genetic algorithms.

Our application fits in this third group, integrating an optimisation-based reconstruction engine and a sketch-based input; a minimal sketch of the pre-processing step that links the two is given below.

[Figure: sketch input → REFER reconstruction engine → VRML output]
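The pre-processing step mentioned above turns the raw stylus strokes into the kind of "perfect" 2D drawing the reconstructor expects: each stroke is approximated by a straight segment and nearby endpoints are merged into shared vertices. The following is a minimal, illustrative beautification sketch; the tolerance, function names and data layout are assumptions and do not correspond to REFER's actual module.

# Illustrative pre-processing: turn raw strokes (arrays of sampled stylus
# points) into straight edges with shared vertices. Names and the merge
# tolerance are assumptions of this sketch.
import numpy as np

def strokes_to_graph(strokes, merge_tol=10.0):
    """strokes: list of (N, 2) arrays of tablet samples. Returns (vertices, edges)."""
    endpoints, raw_edges = [], []
    for stroke in strokes:
        p0, p1 = np.asarray(stroke[0], float), np.asarray(stroke[-1], float)
        raw_edges.append((len(endpoints), len(endpoints) + 1))  # endpoint indices
        endpoints.extend([p0, p1])

    # Cluster endpoints lying within merge_tol of each other into one vertex.
    vertices, remap = [], {}
    for i, p in enumerate(endpoints):
        for j, v in enumerate(vertices):
            if np.linalg.norm(p - v) <= merge_tol:
                remap[i] = j
                break
        else:
            remap[i] = len(vertices)
            vertices.append(p)

    edges = sorted({tuple(sorted((remap[a], remap[b]))) for a, b in raw_edges})
    return np.array(vertices), edges

# Example: three shaky strokes sketching a corner; the endpoints near (0, 0)
# are merged into a single shared vertex.
strokes = [np.array([[2.0, 1.0], [60.0, 40.0]]),
           np.array([[-1.0, 3.0], [-55.0, 42.0]]),
           np.array([[0.0, -2.0], [1.0, -80.0]])]
verts, edges = strokes_to_graph(strokes)
print(len(verts), "vertices,", edges)

A real module would also segment strokes at corners and test them for straightness before this step; here each stroke is simply replaced by the segment joining its first and last samples.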

[1] Eric Schweikhardt and Mark Gross. Digital Clay: Deriving Digital Models from Freehand Sketches, 1998.

[2] David Craig, et al. The importance of drawing in the mechanical design process, 1990, Comput. Graph.

[3] James A. Landay, et al. Visual similarity of pen gestures, 2000, CHI.

[4] W. Wang, et al. A Survey of 3D Solid Reconstruction from 2D Projection Line Drawings, 1993, Comput. Graph. Forum.

[5] Joaquim A. Jorge, et al. Towards Calligraphic Interfaces: Sketching 3D Scenes with Gestures and Context Icons, 2000, WSCG.

[6] Ivan E. Sutherland. Sketchpad: A Man-Machine Graphical Communication System, 1963, Outstanding Dissertations in the Computer Sciences.

[7] Christopher F. Herot. Graphical input through machine recognition of sketches, 1976, SIGGRAPH '76.

[8] Dean Rubine, et al. Combining gestures and direct manipulation, 1992, CHI.

[9] Alasdair Turner, et al. Sketching space, 2000, Comput. Graph.

[10] S. Sutherland. Seeing things, 1989, Nature.

[11] Satoshi Matsuoka, et al. Teddy: A Sketching Interface for 3D Freeform Design, 1999, SIGGRAPH Courses.

[12] V. Goel. Sketches of Thought, 1995.

[13] Nicholas Negroponte, et al. Recent advances in sketch recognition, 1973, AFIPS National Computer Conference.

[14] Universitat Jaume I. Geometrical Reconstruction from Single Line Drawings Using Optimization-Based Approaches, 1999.

[15] David R. Nadeau, et al. VRML 2.0 Sourcebook, 1995.

[16] Gershon Elber, et al. Inferring 3D models from freehand sketches and constraints, 1997, Comput. Aided Des.

[17] Elaine Cohen, et al. SKETCH-N-MAKE: Automated Machining of CAD Sketches, 1998.