Automatic tools for testing expert systems

In the last several years, validating and verifying (V&V) expert systems has received increasing attention. Validation addresses the question: is the system knowledge adequate? Domain validation ensures the accuracy and completeness of the knowledge base, while procedural validation establishes the accuracy and reliability of system output [3]. Verification, on the other hand, establishes structural correctness and process effectiveness by testing the logic of the knowledge base. Testing executes a piece of software with the goal of finding errors: structural testing exercises a set of test cases on as many paths as possible, although it does not guarantee that every path is tested, while functional testing validates problem specifications by comparing system output with known results. Both functional and structural testing are necessary to build reliable systems [1].

Current tools and techniques for testing expert systems include test case generation, face validation, the Turing test, field tests, subsystem validation, and sensitivity analysis. Since no single testing technique captures all errors, developers must apply a combination of different methods [2]. Of these techniques, test case generation remains the most popular. Some limitations of manual test case generation can be overcome by automatic test case generators: software that automatically identifies a set of input-output pairs, where the input identifies the path(s), conditions, and condition values to be tested, and the output identifies the results associated with that input. Other approaches and tools for automating the testing process include the Expert Systems Validation Associate (EVA) [4], dependency charts, decision tables, graphs or Petri nets, and exploration of dynamic and temporal relationships between rules [8, 9].
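To make the notion of automatically generated input-output pairs concrete, the sketch below (our own illustration, not one of the cited tools) enumerates every combination of condition values in a toy two-rule knowledge base and pairs each combination with the conclusions the rules produce. The Rule class, the RULES knowledge base, and the fire and generate_test_cases functions are all illustrative assumptions.

    # A minimal sketch of automatic test case generation for a
    # rule-based system; all names here are hypothetical.
    from dataclasses import dataclass
    from itertools import product

    @dataclass
    class Rule:
        conditions: dict[str, bool]  # condition name -> required value
        conclusion: str              # output when all conditions hold

    # Hypothetical two-rule knowledge base.
    RULES = [
        Rule({"fever": True, "rash": True}, "suspect_measles"),
        Rule({"fever": True, "rash": False}, "suspect_flu"),
    ]

    def fire(rules, facts):
        """Return the conclusion of every rule whose conditions
        all match the given facts."""
        return [r.conclusion for r in rules
                if all(facts.get(c) == v for c, v in r.conditions.items())]

    def generate_test_cases(rules):
        """Enumerate every combination of condition values (each
        combination selects one path through the rule base) and pair
        it with the output the rules produce: an input-output pair."""
        conditions = sorted({c for r in rules for c in r.conditions})
        for values in product([True, False], repeat=len(conditions)):
            facts = dict(zip(conditions, values))
            yield facts, fire(rules, facts)

    for inputs, expected in generate_test_cases(RULES):
        print(inputs, "->", expected)

Exhaustive enumeration exercises every path in this toy rule base, including combinations that fire no rule; for realistic knowledge bases the path space grows combinatorially, which is why, as noted above, structural testing in practice covers as many paths as possible without guaranteeing that each one is tested.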