Non-expert reviews considered helpful
This issue has three innovative papers. In a belated implementation of a decision made at a meeting of STVR's editorial board, this issue will start naming the reviewing editor who took charge of each paper. So when you read 'recommended by' later, that identifies the reviewing editor who found reviewers, evaluated the paper and the reviews, and made a recommendation to the co-editors-in-chief. This is immensely more work than it may seem on the surface, and we are happy to acknowledge the reviewing editors' hard work.

The first paper in this issue, A testing strategy for abstract classes, by Clarke, Power, Babich and King, reports on a landmark advance in object-oriented testing. The authors have invented a way to test abstract classes without having to instantiate them, thereby giving users more confidence when they create inheritance hierarchies. This is recommended by Atif Memon. The second, Test data regeneration: generating new test data from existing test data, by Yoo and Harman, presents an innovative approach to the always challenging problem of automatic test data generation. Their idea is called regeneration, where existing tests are used as a basis for new tests. This is recommended by Paul Strooper. The third, Fuzzy Bayesian system reliability assessment based on prior two-parameter exponential distribution under different loss functions, by Gholizadeh, Shirazi and Gildeh, presents a new approach to reliability assessment that uses fuzzy parameters, fuzzy random variables and fuzzy prior distributions. This is recommended by Tor Stalhane.

I want to talk more about reviewing in this editorial. Most reviews come from experts on the topic of the paper. When trying to assess the value of a research paper, reviewing editors quite naturally tend to look to the top experts on the topic and ask two or three of them to review the paper. However, for a research paper to have significant impact, it must be accessible beyond those few experts. Thus, it is also valuable to get opinions from non-experts, scientists who are more representative of the intended audience. Getting one of three reviews from a non-expert can help assess the broad accessibility of a research paper.

Reviewing as a non-expert requires different techniques from reviewing as an expert. Generally speaking, a non-expert should be able to read and understand at least the broad points of a research paper. For example, ideally, everybody with some knowledge of testing should be able to understand all introductions and conclusions in testing papers published in STVR, and most should be able to understand the empirical sections. If a non-expert cannot follow the details of the algorithms or techniques, that is okay. So a non-expert reviewer should comment on how well he or she understood the paper, whether there were any obvious flaws, and whether it was easy to separate the parts that could be understood from the parts that could not. Non-experts may also be particularly well suited to assess how hard it will be for the research ideas to move into practice. A non-expert probably cannot assess originality and significance, but the other reviewers should be able to fulfil that role. So if you are asked to review a paper that is a bit outside your area, alert the reviewing editor, but also be prepared to write a non-expert review. Reviewing editors, in turn, should weigh reviews from non-experts differently and make sure that at least some reviewers have the knowledge to understand all the technical details.