Creative Sketching Partner: A Co-Creative Sketching Tool to Inspire Design Creativity

The Creative Sketching Partner is an AI-based co-creative sketching tool that supports the conceptual design process. The AI partner presents sketches of varying visual and conceptual similarity based on the designer's sketch. The goal of the partner is to present a sketch that inspires the user to explore more of the design space and to reduce design fixation, i.e., becoming stuck on one design or a class of designs during the design process. The system is meant to help designers achieve a conceptual shift during their design process by presenting similar designs or images from different domains. Users can control the parameters of the algorithm by specifying how visually and conceptually similar the system's sketch should be to their own.

The Creative Sketching Partner

Sketching is a critically important component of design creativity: it facilitates thinking and reflection, and it enables designers to share their ideas with other stakeholders. Designers often iterate rapidly through their initial sketches in the early stages of design, allowing them to ideate and explore the conceptual space of the design. However, designers sometimes face design fixation (Purcell and Gero 1996), i.e., becoming stuck on one design or a class of designs. To inspire design creativity and overcome design fixation, we have developed the Creative Sketching Partner (CSP), a co-creative sketching tool that analyzes the user's sketch and retrieves, from a large database of sketches, a sketch that has some visual and conceptual similarity to it. The goal is for the CSP to enable designers to experience conceptual shifts in the design process by presenting sketches that bear some visual and conceptual resemblance to the initial sketch. Exposure to a conceptual shift stimulus may trigger analogical reasoning. Analogy is a common activity in design: a source object is identified that can be mapped onto the current design (the "target"), and some properties of that source are transferred to the target to help solve the design problem (Grace, Gero, and Saunders 2015).

Co-creative sketching systems are an active area of research in the computational creativity community. One example is the Drawing Apprentice, a co-creative drawing partner that collaborates with users in real time (Davis et al. 2015). That system uses sketch recognition to identify objects drawn by the user and selects a complementary object to display on the screen. Instead of selecting a sketch from the same conceptual category, as the Drawing Apprentice does, the CSP uses a computational model of conceptual shifts (Karimi et al. 2018) to determine an appropriate target sketch from a dataset.

Figure 1: The Creative Sketching Partner interface and an example sketch from a user.

Conceptual Shift Algorithm

The AI model for determining conceptual shifts has two components: visual similarity and conceptual similarity. Visual similarity entails identifying a sketch that shares some structural characteristics with the user's sketch, whereas conceptual similarity identifies a concept that has some semantic relationship to it. The visual similarity module computes the distances between the cluster centroids of distinct categories and maps the user's input to the most similar sketches from other categories. First, a Convolutional Neural Network-Long Short-Term Memory (CNN-LSTM) deep learning model (Carbune 2017) is trained from scratch on the QuickDraw! (QD) dataset of 345 categories of human-drawn sketches (Jongejan et al. 2016). Each sketch is represented by the output of the last LSTM layer, yielding 256 values per sketch. We use the resulting feature vectors for the sketches in each category to create clusters of visually similar sketches. This provides a feature vector representation for calculating the novelty of the user's initial sketch relative to sketches in the QD dataset using visual similarity.
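As an illustration of this step, the sketch below clusters precomputed 256-dimensional CNN-LSTM features within each category and ranks the other categories by cosine distance to their nearest cluster centroid. The function names, data layout, and number of clusters are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch of the visual-similarity module. Assumes the
# CNN-LSTM encoder has already produced a 256-dimensional feature
# vector per sketch; names and data layout here are hypothetical.
from scipy.spatial.distance import cosine
from sklearn.cluster import KMeans

def cluster_categories(features_by_category, n_clusters=5):
    """Cluster each category's sketch features and keep the centroids."""
    centroids = {}
    for category, feats in features_by_category.items():
        # feats: array of shape (n_sketches, 256)
        km = KMeans(n_clusters=n_clusters, n_init=10).fit(feats)
        centroids[category] = km.cluster_centers_
    return centroids

def rank_categories(user_feature, centroids, k=10):
    """Rank categories by the cosine distance between the user's
    sketch feature and each category's nearest cluster centroid."""
    scored = sorted(
        (min(cosine(user_feature, c) for c in cents), category)
        for category, cents in centroids.items()
    )
    return scored[:k]  # (distance, category) pairs, most similar first
```

Comparing against cluster centroids rather than individual sketches keeps the search over 345 categories cheap and matches the centroid-based description above.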
The conceptual similarity module takes the pairs of selected category names from the previous step and computes their semantic similarity. This module uses a word embedding model (Mikolov 2016) trained on the Google News corpus of 3 million distinct words. The visual similarity module provides a set of candidate sketches to the conceptual similarity module, and we extract word2vec word embedding features (Mikolov 2016) for their category names. The similarity between the category of the source sketch and that of a selected target sketch is computed as 1 - d_c, where d_c is the cosine distance between the feature vectors of the category names. A larger value indicates that the two sketch categories are more likely to appear in the same context, whereas a smaller value indicates that they are less associated with each other.
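The 1 - d_c computation maps directly onto a pretrained word2vec model. A minimal sketch, assuming the publicly released Google News vectors loaded via gensim (the file name is illustrative):

```python
# Illustrative sketch of the conceptual-similarity step. Assumes a
# pretrained word2vec model such as the Google News vectors; the
# file name below is a placeholder.
from gensim.models import KeyedVectors
from scipy.spatial.distance import cosine

vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True)

def conceptual_similarity(source_category, target_category):
    """Similarity = 1 - d_c, where d_c is the cosine distance
    between the embeddings of the two category names."""
    d_c = cosine(vectors[source_category], vectors[target_category])
    return 1.0 - d_c

# Higher values mean the categories tend to occur in the same contexts.
print(conceptual_similarity("cat", "dog"))
```

Since 1 minus cosine distance equals cosine similarity, gensim's KeyedVectors.similarity would return the same quantity directly; the explicit form above mirrors the formula in the text.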
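The excerpt says users can specify how visually and conceptually similar the returned sketch should be, but it does not give the rule for combining the two scores. One plausible reading, sketched below with hypothetical names, is to pick the candidate whose two similarities best match the user's chosen targets:

```python
# Hypothetical combination of the two modules: choose the candidate
# whose visual and conceptual similarities are closest to the
# user-controlled targets. This is an assumption, not the paper's rule.
def select_target(candidates, target_visual, target_conceptual):
    """candidates: iterable of (sketch_id, visual_sim, conceptual_sim)
    tuples with similarities in [0, 1]; targets are the user's settings."""
    def mismatch(candidate):
        _, visual, conceptual = candidate
        return abs(visual - target_visual) + abs(conceptual - target_conceptual)
    return min(candidates, key=mismatch)
```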