“Automation Suffices for 80% of Visualization”

In the late 1800s, telephone exchanges were manually operated and could process only a few callers a minute. As the volume of calls grew, a single operator could not handle the demand, and manual exchanges gave way to automated ones. Today, operators still connect some calls, usually when the caller needs additional information (or money), but the vast majority are handled by automated systems. History is littered with examples of systems that became automated as technology improved.

This panel questions whether we, the visualization community, are on the right track by concentrating our research and development on interactive visualization tools and systems. After all, research programs like the Department of Energy’s Accelerated Strategic Computing Initiative (ASCI) run computer simulations that produce terabytes of data every day. This raises the following questions:

* Is it feasible to analyze terabyte data sets using interactive techniques?
* Has visualization reached a level of maturity where most of the tasks can be automated?

Interactive visualization would be essential to those scientists who pursue unfettered exploration of unfamiliar data, the scientists who discover new phenomena in their simulations that they never suspected were there, the scientists who like to try new tools that other people have created for their use. As many of us have experienced first-hand, these scientists exist in the realm of science fiction and PBS specials, not in real life.

There are two primary applications of computer graphics in scientific computing: debugging and presentation. Tom Crockett (ICASE) champions the paradigm of visualization as a 3D print statement that lets you quickly hunt down an offending segment of code. An interactive debugger is great for finding errors, but most people use one only as a last resort: automatically generated compiler messages catch a large fraction of simple bugs, and print statements reveal most of the rest.
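As a concrete illustration of the 3D-print-statement idea, the sketch below writes a mid-plane slice of a 3D field to an image file at selected timesteps, so a misbehaving region of a solver can be spotted by eye rather than stepped through in a debugger. This is a minimal sketch, not Crockett's actual tooling: the field, the solver loop, and the file naming are hypothetical, and it assumes NumPy and Matplotlib are available.

    import numpy as np
    import matplotlib.pyplot as plt

    def viz_print(field, step, z=None):
        """Dump a z-slice of a 3D array to disk, like a 3D print statement."""
        if z is None:
            z = field.shape[2] // 2  # default to the mid-plane slice
        plt.imshow(field[:, :, z], origin="lower", cmap="viridis")
        plt.colorbar(label="value")
        plt.title("step %d, z-slice %d" % (step, z))
        plt.savefig("debug_step_%05d.png" % step)
        plt.close()

    # Hypothetical usage inside a simulation loop being debugged:
    # for step in range(n_steps):
    #     advance(field)            # the solver under suspicion
    #     if step % 10 == 0:
    #         viz_print(field, step)

Like a print statement, this is non-interactive by design: the images are generated automatically as the simulation runs and inspected afterward, which is exactly the kind of task the panel asks whether automation can cover.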