Automated Generation of Test Trajectories for Embedded Flight Control Systems

Automated generation of test cases is a prerequisite for fast testing. Whereas research in automated test data generation has addressed the creation of individual test points, test trajectory generation has attracted limited attention. In simple terms, a test trajectory is a series of data points, with each (possibly multidimensional) point depending on the value(s) of previous point(s). Many embedded systems use data trajectories as inputs, including closed-loop process controllers, robotic manipulators, nuclear monitoring systems, and flight control systems. For these systems, testers can either handcraft test trajectories, reuse input trajectories from older versions of the system, or collect test data in a high-fidelity system simulator. While these are valid approaches, they are expensive and time-consuming, especially if the assessment goals require many tests. We developed a framework for expanding a small, conventionally developed set of test trajectories into a large set suitable, for example, for system safety assurance. Statistical regression is the core of this framework. The regression analysis builds a relationship between controllable independent variables and closely correlated dependent variables, which represent test trajectories. By perturbing the independent variables, new test trajectories are generated automatically. Our approach has been applied in the safety assessment of a fault-tolerant flight control system. Linear regression, multiple linear regression, and autoregressive techniques are compared. The performance metrics include the speed of test generation and the percentage of "acceptable" trajectories, as measured by domain-specific reasonableness checks.
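
As a concrete illustration of the regression idea described above, the sketch below fits an autoregressive model to a single seed trajectory and then replays the model with random perturbations to synthesize new candidate trajectories. This is only a minimal reading of the abstract, not the authors' implementation: the function names (`fit_ar`, `generate_trajectory`), the AR order, and the noise-based perturbation scheme are assumptions introduced here for illustration, and the domain-specific reasonableness checks used to filter candidates are not shown.

```python
# Minimal sketch (not the authors' implementation) of regression-based
# trajectory expansion, assuming an autoregressive (AR) model of the
# trajectory is an adequate fit and that perturbation can be modeled as
# additive noise scaled by the fit's residual spread.
import numpy as np

def fit_ar(seed_trajectory, p=3):
    """Fit AR(p) coefficients to a seed trajectory by least squares."""
    y = np.asarray(seed_trajectory, dtype=float)
    # Lagged design matrix: column k holds the (k+1)-step-delayed samples.
    X = np.column_stack([y[p - k - 1: len(y) - k - 1] for k in range(p)])
    targets = y[p:]
    coeffs, *_ = np.linalg.lstsq(X, targets, rcond=None)
    residual_std = np.std(targets - X @ coeffs)
    return coeffs, residual_std

def generate_trajectory(seed_trajectory, coeffs, residual_std,
                        length=500, perturbation=1.0, rng=None):
    """Replay the fitted AR model with random perturbations to produce
    a new trajectory; each point depends on the previous p points."""
    rng = np.random.default_rng(rng)
    p = len(coeffs)
    traj = list(np.asarray(seed_trajectory[:p], dtype=float))
    for _ in range(length - p):
        history = np.array(traj[-p:][::-1])  # most recent sample first
        noise = rng.normal(0.0, perturbation * residual_std)
        traj.append(float(history @ coeffs + noise))
    return np.array(traj)

# Usage: expand one handcrafted trajectory into many candidates, then apply
# domain-specific reasonableness checks (not shown) to keep acceptable ones.
seed = np.sin(np.linspace(0, 20, 400)) + 0.05 * np.random.randn(400)
coeffs, resid = fit_ar(seed, p=3)
candidates = [generate_trajectory(seed, coeffs, resid, rng=i) for i in range(10)]
```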
