Model-based testing and validation on artificial intelligence systems

In this paper, we discuss how viewing an artificial intelligence (AI) system as a model leads to specific criteria for testing methodologies, including how certain mathematical techniques for testing AI systems can serve as criteria for judging an AI system's adequacy when no other models are available. We give an example of an error caused by widespread rule interactions; such errors are key to understanding why the independent-rule assumption fails, and therefore why AI systems must be modeled. We examine how testing can be applied both to individual system components and to the system as a whole. We also propose criteria for assembling a set of test cases and discuss the difficulty of determining whether an AI system's performance on a given set of test cases is acceptable. Finally, we present the results of applying this approach to a real case.
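To make the independent-rule assumption concrete, the following sketch (a hypothetical illustration, not taken from the paper) shows two rules that each satisfy their individual specifications, yet produce an unintended result when chained, because the first rule's output moves the input into the second rule's firing region:

```python
# Hypothetical illustration: two rules that pass their per-rule tests
# but interact to produce an outcome neither specification anticipated.

def rule_discount(order):
    # Rule 1: orders over 100 receive a 10% discount.
    if order["total"] > 100:
        order["total"] *= 0.9
    return order

def rule_surcharge(order):
    # Rule 2: orders under 110 incur a flat 5-unit handling surcharge.
    if order["total"] < 110:
        order["total"] += 5
    return order

# Tested independently, each rule behaves exactly as specified.
assert rule_discount({"total": 200})["total"] == 180.0
assert rule_surcharge({"total": 50})["total"] == 55

# Chained, they interact: the discount drops a 120-unit order below the
# surcharge threshold, so a surcharge fires that a per-rule analysis
# (120 is above 110, so no surcharge) would never predict.
order = rule_surcharge(rule_discount({"total": 120}))
print(order["total"])  # 113.0, not the 108.0 predicted rule-by-rule
```

Because neither rule is wrong in isolation, no amount of per-rule testing exposes the error; only a model of the system's combined behavior does, which is the paper's motivation for model-based testing.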
