A Usability Test of Two Browser-based Embedded Help Systems

INTRODUCTION

MDL Information Systems is a supplier of chemical databases and associated database-searching software for PhD research chemists and biologists working in the pharmaceutical, biotech, and similar industries. The company's "core competency" is that chemists can draw a chemical structure in one of our chemical-drawing editors and then use that drawing (with additional data constraints, if required) as a query to a database.

The challenge for MDL's Technical Communications group is to enable our busy audience of chemists and biologists to figure out how to create effective and efficient search queries for these databases. A database may contain hundreds of thousands of chemical structures, one or more of which, if discovered in the database, could be worth millions of dollars to the searching company as a promising lead in developing a particular drug.

The big problem with database-searching applications is that the user receives little feedback. Consider, for example, novice users starting out in Microsoft Word who want to right-justify a paragraph of text. Their efforts, successful or not, are immediately apparent on the screen: the paragraph is either correctly justified or it isn't. In contrast, a well-constructed and a poorly constructed search query run against a large database may each retrieve 5,000 records. How is the chemist to know whether the search query was effective and efficient? That is, how does the chemist know that the query retrieved all, and only, the relevant records?

Several years ago, we would have said that traditional online help (WinHelp invoked from the Help menu) was the solution. We thought that our rational and logical user chemists would find all the user assistance they needed at their fingertips and so would feel confident that they had created an effective and efficient search query. There would be no hunting for a manual, and WinHelp had good search capabilities. Accordingly, for ISIS, our flagship database-searching application at the time, we carefully built a large, accurate, and comprehensive help system. However, as noted in a previous article, "Fear and loathing of the Help menu" (Grayling 1998), things didn't turn out even remotely as we expected. Our users didn't behave in the way we anticipated, as summarized in the following section.