Editorial: how to get your paper rejected from STVR

This issue contains three deep and compelling papers on modeling software and generating tests. The first, An improved Pareto distribution for modeling the fault data of open source software, by Luan and Huang, presents a new model, based on the traditional Pareto distribution, that accurately describes the distribution of faults in open-source software. (Recommended by Min Xie.) The second, Extending model checkers for hybrid system verification: The case study of SPIN, by Gallardo and Panizo, studies an unusual class of systems: hybrid systems. The authors have developed an extension to model checking that allows engineers to accurately model the behavior of this complex type of software. (Recommended by Paul Ammann.) The third, Search-based testing using constraint-based mutation, by Malburg and Fraser, addresses the key problem of test value generation. They propose and evaluate a hybrid form of test value generation that combines search-based and constraint-based techniques. (Recommended by Mark Harman.)

This editorial is based on a talk I gave at the ICST PhD symposium in April 2014. I had fun giving the talk and hope you enjoy reading this summary.

First, I want to make it clear that I am highly qualified to give advice on getting papers rejected. I have well over 100 rejections to my name and may well be the most rejected software testing researcher of all time. As examples, let me share some quotes from reviewers: