Defining Predatory Journals
By Michael Seadle, published on 1 August 2018
The Süddeutsche Zeitung recently published an article in which a fictitious author, “R. Funden” (equivalent in English to “I. Maginary”), wrote a fake study on “Die kombinierten Effekte von Essigsäureethylesterextrakten in Bienenharz auf das Absterben menschlicher Darmkrebsstellen” (English: “The combined effects of ethyl acetate extracts in bee resin on the death of human colon cancer sites”). (Bauer et al., 2018, p. 12) The Journal of Integrative Oncology accepted the article, claiming that a reviewer wanted the label on a graphic improved and asking whether an ethics commission had approved the study; once those minor changes were made, the article was accepted and the author was asked to pay 1892 € within a week for publication. (Bauer et al., 2018, p. 14)
OMICS is the publisher of The Journal of Integrative Oncology, and the US Federal Trade Commission “filed a complaint against the academic journal publisher OMICS Group and two of its subsidiaries, saying the publisher deceives scholars and misrepresents the editorial rigor of its journals.” (Straumsheim, 2016). Beall’s List of Predatory Journals and Publishers includes OMICS.
What is a predatory journal? In a recent interview in Die Zeit, Stefan Hornbostel said: “nur sind die predatory journals, also Raubzeitschriften, wie sie im Fachjargon heißen, nicht immer leicht zu erkennen.” (English: “it is just that predatory journals, or Raubzeitschriften as they are called in the specialist jargon, are not always easy to recognise.”) (Spiewak, 2018) Blacklists represent an attempt to expose predatory journals without necessarily defining them.
Cabell’s Scholarly Analytics lists the characteristics it uses to create a “blacklist” of journals. The actual list is available only with a subscription, but the criteria are public and include various forms of integrity violations (e.g., a fake ISSN), peer review problems (e.g., including scholars on the editorial board without their knowledge), website problems (e.g., “Poor grammar and/or spelling”), poor publication practices (false indexing claims), and a range of other issues including no preservation plan and blocking web crawlers. (Cabells, 2018) All of the criteria address serious publication problems, and many can be established empirically. The criteria are good, but do not constitute a systematic definition of a predatory journal.
Universities have an interest in a definition that lets them identify predatory journals in order to warn faculty, staff, and doctoral students against them, and universities need clear criteria in order to avoid the lawsuits that some publishers have launched against their critics. False claims qualify as an important element in the definition, but many false claims are just symptoms of the need to obscure the lack of any real quality control.
In general, quality control in academic journals is a function of the peer-review process. This is not to say that peer review is the only reasonable method, but it is a gatekeeper. Faked peer review, as in the case of R. Funden, is plainly problematic. There are, however, cases where a well-known editor may ask well-known authors to write an article without sending it through full peer review. Standard peer review is not the only quality criterion, and not all peer review processes serve as gatekeepers.
Open Peer Review
F1000 is an interesting example because it uses a post-publication open peer review process where: “Expert referees are selected and invited, and their reports and names are published alongside the article, together with the authors’ responses and comments from registered users.” (F1000, 2018a) F1000 charges $1000 for publication “following successful completion of our pre-publication checks”. (F1000, 2018b)
Open peer review is certainly legitimate as a precursor to publication, but peer review after publication is different unless there is a mechanism to remove the article in cases where the reviews are negative or the reviewers unqualified. The language that F1000 uses is somewhat ambiguous: “Authors are encouraged to publish revised versions of their article. All versions of an article are linked and independently citable. Articles that pass peer review are indexed in external databases such as PubMed, Scopus and Google Scholar.” (F1000, 2018a) Most scholars seem to regard F1000 as legitimate, perhaps in part because the founder, Vitek Tracz, also founded BioMed Central.
Those who work with F1000 often cite its close ties to the Wellcome Trust, which uses the F1000 platform and model for articles, but the Wellcome Trust sets slightly different conditions: “Articles which pass the peer review requirements of PubMed Central (“PMC”) for deposit are deposited in PMC. … if such an article has not been sent to PMC within 2 weeks of passing the peer review requirements of PMC for deposit, F1000 will refund 100% of the APC.” (Wellcome Trust, 2018) The chief difference, in other words, is that the Wellcome Trust returns the money if the paper does not meet the standards for a more prestigious location. It is not clear that F1000 will make such refunds.
Predators and Readers’ Judgment
One of the ideas behind open peer review has been that the community would judge a work on its quality, and that poor scholarship would ultimately be ignored. It is not clear that this is empirically true, especially not for students.
A quick test in Google Scholar finds 94 articles from the Journal of Integrative Oncology (the journal that accepted the fake R. Funden article), all with multiple citations. The other articles could, of course, be legitimate, but the Google Scholar results could also be seen as evidence that a journal’s poor reputation is no barrier to citation. An article by Cobey et al. (2018) in F1000 that has no peer reviews is clearly labeled as such, including in the recommended citation “[version 1; referees: awaiting peer review].” Nonetheless it is easily found in a standard Google search. The gatekeeper function is there, but weak.
Gatekeeping in the Definition
It seems reasonable to expect that a definition of predatory journals addresses weaknesses in the gatekeeping that prevents articles without clear scholarly merit from becoming part of the public record and (potentially) the scholarly discourse. Peer review is not, however, a simple black-and-white process where good articles are accepted and bad articles are thrown out.
Establishing an empirically verifiable definition of predatory publishing would ideally involve some form of metrics to show that a genuine peer review process took place and that the process met minimal quality standards. Defining these criteria is itself far from easy and will likely require collaboration between publishers and the scholarly community.
Melanie Rügenhagen (MA) and Vera Hillebrand (MA) assisted with the article.
Anonymous. 2018. “Beall’s List of Predatory Journals and Publishers.” Available online.
Cabells Scholarly Analytics. 2018. “Cabell’s Blacklist Violations.” Available online.
Cobey, Kelly D., Manoj M. Lalu, Becky Skidmore, Nadera Ahmadzai, Agnes Grudniewicz, and David Moher. 2018. “What Is a Predatory Journal? A Scoping Review.” [version 1; referees: awaiting peer review]. F1000Research 7 (1001). Available online.
F1000. 2018a. “How It Works: Our Publishing Processes.” Available online.
F1000. 2018b. “How to Publish: Article Processing Charges.” Available online.
Straumsheim, Carl. 2016. “Feds Target ‘Predatory’ Publishers.” Inside Higher Ed, 29 August 2016. Available online.
Wellcome Trust. 2018. “Wellcome Open Research: How to Publish.” Available online.