Expert System Tool Evaluation
Publisher Summary This chapter presents a framework of evaluation criteria and a methodology for selecting an expert system tool. Evaluating and choosing a tool requires matching the tool to its intended use, including all aspects of the problem domain, the problem itself, and the anticipated project. Because of the evolving and inconsistent terminology in this new field, comparing features of different tools is of limited utility and limited longevity. Instead, the capabilities provided by these features must be analyzed, evaluated, and compared. The framework shows how to use specific assessment techniques to apply specific metrics to specific capabilities of a tool for a specific application in a specific context. The maturation of expert systems is reflected in the growing importance of issues such as integration, database access, portability, fielding, maintainability, robustness, reliability, concurrent access, performance, user interface, debugging support, and documentation. Though the difficulty of comparing and selecting tools may be daunting to a developer faced with a decision, this difficulty is largely a result of the richness of the field and the bewildering pace at which new ideas are being incorporated into tools. The evaluation approach is offered not as a final answer to a fixed problem, but as a strategy for dealing with a dynamic problem whose complexity reflects the health of a research area whose impact on software engineering is only beginning to be felt.
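The idea of applying specific metrics to specific capabilities in a specific context can be illustrated with a minimal weighted-scoring sketch. The criteria names, weights, and scores below are hypothetical examples keyed to the issues the chapter lists (integration, portability, maintainability, and so on), not the chapter's actual metrics:

```python
# Illustrative context-specific weights (higher = more important for
# this hypothetical project); not taken from the chapter.
CRITERIA_WEIGHTS = {
    "integration": 3,
    "database_access": 2,
    "portability": 2,
    "maintainability": 3,
    "performance": 2,
    "user_interface": 1,
}

def weighted_score(capabilities, weights=CRITERIA_WEIGHTS):
    """Weighted average of per-capability scores (0-10) for one tool."""
    total_weight = sum(weights.values())
    return sum(capabilities.get(c, 0.0) * w for c, w in weights.items()) / total_weight

def rank_tools(tools):
    """Rank candidate tools by weighted score, best first."""
    scored = ((name, weighted_score(caps)) for name, caps in tools.items())
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```

The point of the sketch is that the ranking is only meaningful relative to one application and context: changing the weights (the "context") can reorder the tools even though their capability scores are unchanged, which is why the chapter argues against context-free feature comparisons.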