How to revise a research paper

This issue contains three outstanding papers: two that contain strong theory and show promise for immediate practical application, and one that can inform a new generation of researchers. The first, A Lightweight Framework for Dynamic GUI Data Verification Based on Scripts, by Mateo, Ruiz and Pérez, presents a way to integrate verification into a GUI during execution. The runtime verifier reads verification rules from files created by the engineers and checks the state of the GUI for violations while it runs (recommended by Peter Mueller). The second, Model-Based Security Testing: A Taxonomy and Systematic Classification, by Felderer, Zech, Breu, Büchler and Pretschner, surveys and summarizes 119 papers on model-based security testing. This paper should become the first entry point for anybody doing research in the area (recommended by Bogdan Korel). The third paper, Generating Effective Test Cases Based on Satisfiability Modulo Theory Solvers for Service-Oriented Workflow Applications, by Wang, Xing, Yang, Song and Zhang, addresses the technically difficult problem of testing service-oriented applications developed with WS-BPEL. Many execution paths in WS-BPEL applications are infeasible. This paper tackles that problem and shows how to generate tests by finding test paths from embedded constraints (recommended by Bogdan Korel).

A well-crafted process for revising a journal submission is crucial to eventual acceptance. Although the initial reaction to the reviews may be negative, it is very important to be proactive and positive. Researchers, even world-renowned ones, will always be criticized, fairly or not. We must be able to respond to criticism in positive ways. ‘The reviewers were blind and close-minded’, a common complaint, may be valid; however, expressing it does not help achieve the goal of publishing the paper. Authors cannot make reviewers or editors smarter. This is yet another situation where we must strive to change the things we can and accept the things we cannot. In this editorial, I walk through the process that I have used to revise journal papers for two and a half decades.

I start my revision process with three initial steps. First, I look at the decision. If it is an ‘accept’, ‘minor revision’ or ‘major revision’, I celebrate. I view a major revision as an ‘accept after lots of work’. Second, later that day or the next, I read the reviews. Even a decision of minor revision may contain things that are bothersome, and a reaction of ‘how could the reviewer be so blind?’ is common. Third, several days later, I return to the reviews for a deep, detailed analysis of what they said.

Being proactive is essential. If the reviewers misunderstood, how can the author change the writing so that reviewers will understand the second time? If the reviewers were not satisfied, can the work be better motivated? If the reviewers did not believe the work truly solved the problem, can the problem be restated? Like software, no paper is ever perfect. Like testers, reviewers have the job of helping the authors improve the paper.

Recently a co-author and I received reviews asking for a major revision. The reviewers asked us to throw out the previous empirical study and start again. (As an editor, I would define that as a reject, but that is another story [1].) The reviews were strange, as if the reviewers had read the wrong paper. They reflected neither the paper’s goals nor its results. Three reviewers completely misunderstood the paper!
We finally found a key review comment that helped us realize that we had buried our important goal inside a subsection of the experimental design ... in a formula! Our title, abstract, introduction and research questions all sent the reviewers in the wrong direction. That is an extreme case, but true. And it illustrates the main point of response letters. Take responsibility! After all, authors want a paper accepted, but reviewers do not care. They simply want to

[1] Offutt AJ. Editorial: Standards for reviewing papers. Software Testing, Verification and Reliability, 2007.