Annotation Inference for Safety Certification of Automatically Generated Code (Extended Abstract)

Automated code generation is an enabling technology for model-based software development and promises many benefits, including higher quality and reduced turn-around times. However, the key to realizing these benefits is generator correctness: nothing is gained from replacing manual coding errors with automatic coding errors. Since verifying the generator itself remains impractical, we instead certify each generated program individually. In this paper, we describe a technique for this based on a generic post-generation annotation inference algorithm. We exploit both the highly idiomatic structure of automatically generated code and the restriction to specific safety properties. Since generated code constitutes only a limited subset of all possible programs, the "eureka" insights (e.g., creative loop invariants) that annotation inference requires in general are rarely needed in our case. Since safety properties are simpler than full functional correctness, the required annotations are also simpler and more regular. We can thus use patterns to describe all code constructs that require annotations, and templates to describe the annotations themselves. We use techniques similar to aspect-oriented programming to add the annotations to the generated code: the patterns correspond to (static) pointcut descriptors, while the introduced annotations correspond to advice. The annotation inference algorithm can run completely separately from the generator and is generic with respect to the safety property, although we use initialization safety as a running example here. It has been implemented and applied to certify initialization safety for code generated by AutoBayes and AutoFilter.
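
To make the pattern/template mechanism concrete, the sketch below illustrates the idea in Python under simplifying assumptions: the toy AST classes, the INIT encoding, and the invariant syntax are hypothetical illustrations, not the actual pattern language or certification framework. A pattern plays the role of a static pointcut that recognizes an idiomatic array-initialization loop as a generator might emit it, and a template plays the role of advice that attaches the loop invariant needed for a safety proof of that idiom:

```python
# Minimal sketch of pattern-based annotation inference for initialization
# safety. All names (ForLoop, Assign, INIT, the invariant syntax) are
# hypothetical illustrations, not the actual certification framework.
from dataclasses import dataclass, field

@dataclass
class Assign:
    lhs: str                      # e.g. "a[i]"
    rhs: str                      # e.g. "0.0"

@dataclass
class ForLoop:
    var: str                      # loop counter, e.g. "i"
    bound: str                    # upper bound, e.g. "n"
    body: list = field(default_factory=list)
    annotations: list = field(default_factory=list)  # inferred invariants

# Pattern = (static) pointcut: recognizes the generator idiom
# "for var in 0..bound: arr[var] = <expr>" and returns the array name.
def match_init_loop(node):
    if isinstance(node, ForLoop) and len(node.body) == 1:
        stmt = node.body[0]
        suffix = f"[{node.var}]"
        if isinstance(stmt, Assign) and stmt.lhs.endswith(suffix):
            return stmt.lhs[: -len(suffix)]
    return None

# Template = advice: the loop invariant needed for a Hoare-style
# initialization-safety proof of this idiom.
def init_invariant(array, var):
    return f"forall k: 0 <= k < {var} ==> {array}_init[k] == INIT"

# Weaving: wherever the pointcut matches, attach the advice.
def infer_annotations(program):
    for node in program:
        array = match_init_loop(node)
        if array is not None:
            node.annotations.append(init_invariant(array, node.var))
    return program

if __name__ == "__main__":
    prog = [ForLoop(var="i", bound="n", body=[Assign("a[i]", "0.0")])]
    print(infer_annotations(prog)[0].annotations)
    # -> ['forall k: 0 <= k < i ==> a_init[k] == INIT']
```

Because the patterns only have to recognize the small, fixed set of idioms that the generator actually emits, no general-purpose invariant discovery is needed; this is what makes the inference tractable where it is undecidable in general.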