How can we recognize facade journals?

This issue contains two terrific papers on testing. "A hybrid approach to testing for nonfunctional faults in embedded systems using genetic algorithms," by Tingting Yu, Witawas Srisa-an, Myra B. Cohen, and Gregg Rothermel, investigates challenging aspects of testing for nonfunctional faults in embedded software. (Recommended by Gordon Fraser.) "Approaches for computing test-case-aware covering arrays," by Ugur Koc and Cemal Yilmaz, presents several novel approaches to correctly compute test-case-aware covering arrays. (Recommended by Mauro Pezzè.)

This editorial continues a series about what I call "facade journals." In the 28(5) issue, I pointed out that peer reviews are the most important mechanism research journals use to ensure the scientific quality of published papers [1]. Then in the 28(6) issue, I defined "facade journals" to be journals that publish papers that look like research but that do not truly advance human knowledge [2]. In this editorial, I ask how we can recognize them. Answering that question is not only harder than we might think, it has gotten harder over time.

Like facades on physical buildings, facade journals are hard to recognize. Facade journals used to be published online while scientific journals were published on paper, but we lost that discriminator when excellent scientific journals started publishing online. Facade journals used to have acceptance decisions made by the editor-in-chief alone, without reviews or an editorial board. Now facade journals have editorial boards and reviewers.

The most important distinction now is the quality of the reviews [3]. But reviews are anonymous and confidential, and thus not subject to public scrutiny. This is true of all journals, including scientific journals such as STVR and the IEEE Transactions, making it hard to recognize facade journals from the outside. Reviews for facade journals do, however, differ from those for scientific journals: reviewers are expected to accept with few or no comments. Most facade journals have only two grounds for rejection. First, the paper is not written in English (bad English is okay, by the way). Second, facade journal editors have realized that particularly high acceptance rates are red flags, so they recruit people to submit nonsense papers specifically so those papers can be rejected, lowering the acceptance rate.

Another measure of journals is the impact factor, which counts citations. The journal impact factor is based on citations to papers published in the past two years, so long-term impact is ignored. Even though the impact factor is problematic at best (and nonsense in my opinion [4]), it is widely used. It is reasonable to assume that the impact factor for facade journals would be extremely low. However, this is now being faked as well. One of the more common "minor revision" suggestions is to add citations to papers published in that journal, artificially inflating the journal's "impact" factor. (As a side benefit, it also inflates the authors' h-index values.)
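For readers who have not seen it, the standard two-year impact factor for a year $Y$ is computed roughly as follows; the exact rules for what counts as a "citable item" belong to the indexing service, so take this as a sketch rather than the official definition:

\[
\mathrm{JIF}_Y \;=\; \frac{\text{citations received in year } Y \text{ to items published in years } Y-1 \text{ and } Y-2}{\text{number of citable items published in years } Y-1 \text{ and } Y-2}
\]

Only the two most recent years appear anywhere in the formula, which is why long-term impact is invisible to it, and why the coerced self-citations just described inflate the numerator directly.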
Some helpful scientists have put together lists of journals that do not publish real research. Perhaps the best known list was compiled by Jeffrey Beall [5], who published a blog criticizing "predatory open access journals." He focused largely on the "pay to publish" model, which he described as inherently corrupting. Whether the authors paid publication fees was a good discriminator at one time, but that line is being blurred now as the economics of publishing changes. Some journals are trying to create a legitimate pay-to-publish model that avoids the corruption of the past. Other journals keep papers behind a paywall unless the authors pay a fee to make them open. And most conferences now refuse to publish papers unless an author pays a full registration fee. These are both forms of pay-to-publish, although different from Beall's model.

[1] A. Jefferson Offutt, "What is the value of the peer-reviewing system?", Software Testing, Verification and Reliability, 2018.