Toward an Information Visualization Workspace: Combining Multiple Means of Expression

New user interface challenges are arising because people need to explore and perform many diverse tasks involving large quantities of abstract information. Visualizing information is one approach to these challenges. But visualization must involve much more than just enabling people to "see" information. People must also manipulate it to focus on what is relevant and reorganize it to create new information. They must also communicate and share information in collaborative settings and act directly to perform their tasks based on this information. These goals suggest the need for information visualization workspaces with new interaction approaches. We present several systems (Visage, SAGE, and SDM) that comprise such a workspace, along with a suite of user interface techniques for creating and manipulating integrative visualizations. Our work in this area revealed the need for interfaces that enable people to communicate with systems in multiple complementary ways. We discuss four dimensions for analyzing user interfaces that reveal the combination of design approaches needed for visualizations to support information analysis tasks effectively. We discuss the results of our attempts to provide multiple forms of expression using direct manipulation and propose areas where multimodal techniques are likely to be more effective.

INTRODUCTION

There is a growing need for people to make use of information-intensive systems as organizations invest substantially to build and maintain databases to support their activities. This is true in almost every domain, including areas such as management (e.g., hospital, school, factory, project), research and analysis (e.g., marketing, sales, investment, accounting), event tracking (e.g., product inventory and distribution), and planning (e.g., transportation and communication resources). As people attempt to make use of these information resources, challenging user interface design problems are emerging. These result not only from the abstract nature and large quantities of information people must consider, but also from the diversity of tasks they must perform in the course of working with this information. Sometimes a user's task is to retrieve individual facts to answer focused questions (e.g., the quantity of parts in a warehouse). More often, users' tasks are exploratory and iterative: the results examined at each step determine the questions to pursue next. The character of these tasks is reflected in the terms researchers have used to refer to this work: exploratory data analysis (Tukey, 1977), data mining (Holsheimer & Siebes, 1994), data archaeology (Brachman et al., 1993), and data exploration (Goldstein et al., 1994). Usually, exploration is just the beginning: people must also communicate and act on information. Therefore, effective environments will provide user interface mechanisms that support multiple information analysis tasks.
For example, the process of understanding traffic accident data might include tasks such as:

• searching for, examining, and narrowing the scope of information to relevant subsets (e.g., only those accidents occurring in large cities in 1995);
• controlling the level of detail of data (e.g., at the aggregate level, finding the total insurance cost for all accidents, and at the most detailed level, knowing the costs for individual accidents);
• reorganizing, grouping, and transforming data to create new information (e.g., organizing accident data by the age of drivers involved or the types of injuries incurred);
• computing new attributes derived from others (e.g., computing total accident costs from medical expenses, lost wages, and repairs);
• detecting important relationships and patterns (e.g., correlations between time of day, driver experience, location, and severity of accident, or exceptions to trends);
• communicating the results of analyses to other people (e.g., in group presentations);
• acting on this information through computer tools for executing orders (e.g., insurance management systems that schedule assignments to case workers who must interview accident victims).
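To make these operations concrete, the sketch below (purely illustrative and not part of Visage, SAGE, or SDM; the table fields and values are invented for this example) expresses several of the tasks above — narrowing scope, deriving attributes, changing the level of detail, grouping, and checking for relationships — as ordinary operations over a small in-memory accident table.

```python
# Illustrative only: a toy accident table and some of the analysis tasks listed
# above, expressed as plain Python operations. Field names and values are invented.
from statistics import correlation  # available in Python 3.10+

accidents = [
    {"city": "Pittsburgh", "city_pop": 370_000,   "year": 1995, "driver_age": 19,
     "medical": 12_000, "lost_wages": 3_000, "repairs": 4_500},
    {"city": "Mars",       "city_pop": 1_700,     "year": 1995, "driver_age": 44,
     "medical": 0,      "lost_wages": 0,     "repairs": 800},
    {"city": "Chicago",    "city_pop": 2_700_000, "year": 1994, "driver_age": 31,
     "medical": 55_000, "lost_wages": 9_000, "repairs": 12_000},
]

# Compute a derived attribute: total cost per accident.
for a in accidents:
    a["total_cost"] = a["medical"] + a["lost_wages"] + a["repairs"]

# Narrow the scope: only accidents occurring in large cities in 1995.
large_city_1995 = [a for a in accidents
                   if a["city_pop"] > 100_000 and a["year"] == 1995]

# Control the level of detail: one aggregate figure vs. per-accident costs.
aggregate_cost = sum(a["total_cost"] for a in accidents)
per_accident_cost = {a["city"]: a["total_cost"] for a in accidents}

# Reorganize the data: group accidents into ten-year driver age bands.
by_age_band: dict[int, list[dict]] = {}
for a in accidents:
    by_age_band.setdefault(a["driver_age"] // 10 * 10, []).append(a)

# Look for relationships: correlation between driver age and total cost.
r = correlation([a["driver_age"] for a in accidents],
                [a["total_cost"] for a in accidents])

print(len(large_city_1995), aggregate_cost, sorted(by_age_band), round(r, 2))
```

In the workspace described in this paper, of course, the same operations are carried out by directly manipulating visual objects rather than by writing scripts.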
Information visualization has become a popular research area for addressing these problems. The design goal has been to create concrete, external, manipulable objects that enable people to perform tasks whose abstractness, complexity, or magnitude make them very difficult to perform otherwise (e.g., when the information is represented textually). Typically, the goal of visualization research has been to enable people to perceive relationships and manipulate data sets using more efficient perceptual and motor operations instead of performing many more demanding cognitive operations (Casner, 1991). Some of this work has focused on new graphical representations and direct manipulation interaction techniques (e.g., Chuah et al., 1995b; Goldstein et al., 1994; Rao & Card, 1994). Other work has assembled techniques to produce focused tools or applications (e.g., Eick & Steffen, 1992; Plaisant et al., 1996; Spence et al., 1995). Still other work has addressed the problem of enabling users to create visualizations dynamically, as they are needed for analysis (e.g., Gray, Waite, & Draper, 1990; Roth et al., 1994). However, little research has been done on the broader problem of supporting the wide range of information search, analysis, communication, and system control tasks within a single user interface environment. This includes understanding how to create a consistent information workspace where people can make seamless transitions in their use of multiple visualizations, tools, and applications.

The first goal of this paper is to present our work on the design features we have explored in creating a workspace called Visage, including our related research on visualization and interaction techniques that are being integrated within the Visage environment. The paper will discuss:

1. Visage design features that coordinate user interfaces for multiple visualization, analysis, and communication applications;
2. SAGE, a tool with which users automatically and interactively create visualizations for use in the Visage workspace; and
3. selective dynamic manipulation, a set of techniques we are prototyping in a system called SDM for interacting with visualizations to modify their appearance and bring out important properties of the information they represent.

A central element of our approach is to provide people with multiple means for expressing their intentions while performing tasks. By multiple means of expression, we are referring to interaction techniques that are effective for different purposes but often useful in combination because they support representations of different kinds of goals, actions, and objects. Multimodal interfaces are a good example of interfaces that provide multiple, complementary ways for people to convey their intentions. Oviatt (1996) studied people interacting with information in geographic displays and found that multimodal interfaces (combining speech and pen input) are more effective than interfaces using a single input modality. Speech input is better for retrieving objects that can be identified by referring expressions (e.g., by their names or by descriptions of groups), especially when they are not already visible. In contrast, pen input is more effective than speech for specifying spatial regions on a map and for pointing to objects that are not easily named or described linguistically. Tasks involving both kinds of actions are performed most effectively with multimodal interfaces.

This research demonstrates the importance of characterizing the different means by which people most effectively express their intentions in information-intensive workspaces. Such a characterization would help determine when different input channels are effective individually and in combination. More generally, it would enable us to understand when multiple interface techniques should be combined, whether or not they use different input channels. Although current work on information visualization and exploration has been unimodal (i.e., using direct manipulation), it has sometimes addressed the need for interfaces that support multiple means of expression by combining multiple interface styles and techniques. There are also numerous opportunities for enhancing the usability of these systems with multimodal interfaces (i.e., by adding speech).

The second goal of this paper is to explore some dimensions for characterizing different means of expression that are particularly relevant to the design of information visualization environments. We will use these dimensions to discuss both the successes and shortcomings of direct manipulation interfaces we have developed using individual and combined means of expression. We will also point out opportunities where multimodal approaches (particularly speech combined with direct manipulation) are likely to be successful.

In the next section, we discuss several dimensions for analyzing user interfaces that characterize how they enable users to express their intentions in information visualization environments. We will use these dimensions to evaluate three areas of research we have been pursuing. Section 3 presents our work on Visage, an information workspace. Section 4 presents our work on SAGE, a visualization system that has been integrated with Visage to provide graphical depictions of data. Section 5 presents our work on a prototype called SDM for manipulating the appearance of visualizations interactively.
At the end of each section, we discuss how these user interfaces provide multiple means of expression.

REFERENCES

[1] Ben Shneiderman, et al. Visual information seeking: tight coupling of dynamic query filters with starfield displays, 1994, CHI '94.

[2] Stephen G. Eick, et al. Visualizing code profiling line oriented statistics, 1992, Proceedings Visualization '92.

[3] Stephen M. Casner, et al. Task-analytic approach to the automated design of graphic presentations, 1991, ACM Trans. Graph.

[4] Jade Goldstein-Stewart, et al. Interactive graphic design using automatic presentation knowledge, 1994, CHI '94.

[5] Steven F. Roth, et al. Data characterization for intelligent graphics presentation, 1990, CHI '90.

[6] Steven F. Roth, et al. Automating the presentation of information, 1991, Proceedings of the Seventh IEEE Conference on Artificial Intelligence Applications.

[7] Joe Marks, et al. A formal specification scheme for network diagrams that facilitates automated design, 1991, J. Vis. Lang. Comput.

[8] Jock D. Mackinlay, et al. Cone Trees: animated 3D visualizations of hierarchical information, 1991, CHI '91.

[9] Arno Siebes, et al. Data mining: the search for knowledge in databases, 1994.

[10] Richard A. Becker, et al. Brushing scatterplots, 1987.

[11] Steven F. Roth, et al. SageBook: searching data-graphics by content, 1995, CHI '95.

[12] Steven F. Roth, et al. Sketching, searching, and customizing visualizations: a content-based approach to design retrieval, 1997.

[13] P. R. Cohen, et al. The role of voice input for human-machine communication, 1995, Proceedings of the National Academy of Sciences of the United States of America.

[14] John W. Tukey. Exploratory Data Analysis, 1977, Addison-Wesley.

[15] Glenn E. Krasner, et al. A cookbook for using the model-view controller user interface paradigm in Smalltalk-80, 1988.

[16] Ben Shneiderman, et al. Tree-maps: a space-filling approach to the visualization of hierarchical information structures, 1991, Proceedings Visualization '91.

[17] Edward Rolf Tufte. The visual display of quantitative information, 1985.

[18] Ben Shneiderman, et al. LifeLines: visualizing personal histories, 1996, CHI '96.

[19] Ramana Rao, et al. The table lens: merging graphical and symbolic representations in an interactive focus + context visualization for tabular information, 1994, CHI '94.

[20] Joseph M. Ballay. Designing Workscape: an interdisciplinary experience, 1994, CHI '94.

[21] George G. Robertson, et al. The WebBook and the Web Forager: an information workspace for the World-Wide Web, 1996, CHI '96.

[22] Steven F. Roth, et al. SDM: selective dynamic manipulation of visualizations, 1995, UIST '95.

[23] Andreas Buja, et al. Painting multiple views of complex objects, 1990, OOPSLA/ECOOP '90.

[24] Jade Goldstein-Stewart, et al. A framework for knowledge-based interactive data exploration, 1994, J. Vis. Lang. Comput.

[25] Robert Spence, et al. Visualisation for functional design, 1995, Proceedings Visualization '95.

[26] Christopher Ahlberg, et al. IVEE: an environment for automatic creation of dynamic queries applications, 1995, CHI '95.

[27] Sharon L. Oviatt, et al. Multimodal interfaces for dynamic interactive maps, 1996, CHI '96.

[28] Jock D. Mackinlay, et al. Automating the design of graphical presentations of relational information, 1986, ACM Trans. Graph.

[29] James D. Hollan, et al. Pad++: a zooming graphical interface for exploring alternate interface physics, 1994, UIST '94.

[30] Steven F. Roth, et al. Exploring information with Visage, 1996, CHI '96.

[31] Neff Walker, et al. A classification of visual representations, 1994, CACM.

[32] John F. Hughes, et al. An architecture for an extensible 3D interface toolkit, 1994, UIST '94.

[33] Tom Meyer, et al. 3D widgets for exploratory scientific visualization, 1994, UIST '94.