Additional desk research examined social media and online forums used by the
researcher community (e.g. scirev.org, letpub.com, pubpeer.com, Twitter)
for discussion of the journal and year of each case study. In four cases we
identified significant negative comments about aspects of peer review at
the respective journal, concerning either the process or the outcomes.
In three of our cases we also identified a decline in the number of
companies in an industry connected with the journal; we suspect that this
industry decline may correlate with a decline in research output. A
respondent in one of our other cases identified a decline in research
funding in the journal’s field as a further factor.
Limitations
The analysis of Impact Factor looked at the absolute number of submissions,
rather than the percentage increase, so the average increase is a rough
figure and not generalizable across all journals. Future analysis could
repeat these analyses stratified by subject area, or by other factors, to
improve the accuracy of the coefficient. Future analysis could also look at
relative ISI subject ranking, rather than absolute Impact Factor.
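As a rough illustration of what such a re-analysis might look like, the
sketch below (in Python, using pandas and scipy) computes percentage changes
rather than absolute counts and repeats the correlation within each subject
area. The file name and column names are hypothetical, not our actual
dataset.
\begin{verbatim}
# Sketch only: hypothetical input with columns
# journal, subject, year, submissions, impact_factor.
import pandas as pd
from scipy import stats

df = pd.read_csv("journal_metrics.csv")
df = df.sort_values(["journal", "year"])

# Percentage change, rather than absolute change, in each metric.
df["subs_change"] = df.groupby("journal")["submissions"].pct_change()
df["if_change"] = df.groupby("journal")["impact_factor"].pct_change()

# Repeat the correlation stratified by subject area.
for subject, grp in df.dropna().groupby("subject"):
    r, p = stats.pearsonr(grp["if_change"], grp["subs_change"])
    print(f"{subject}: r={r:.2f}, p={p:.3f}, n={len(grp)}")
\end{verbatim}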
For each year of analysis there is a very small sample of journals with
retractions (often under 30 journals), which may reduce the reliability of
the t-test results. The analysis also does not distinguish between a journal
publishing one retraction in a single year and a journal publishing fifteen.
Some journals issued retractions every year, or most years, so for those
journals no retraction-free period exists against which to compare.
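To illustrate the small-sample concern, the following sketch (with entirely
made-up year-on-year submission changes) pairs Welch’s t-test with a
non-parametric Mann-Whitney U test, which is often preferred when group
sizes fall below 30:
\begin{verbatim}
# Sketch only: made-up year-on-year submission changes for journals
# with and without a retraction in a given year.
from scipy import stats

with_retraction = [-120, -45, -60, 15, -30, -80, -10, -55]      # n = 8
without_retraction = [25, 40, -10, 60, 5, 30, 45, 20, 10, 35]   # n = 10

# Welch's t-test does not assume equal variances.
t, p_t = stats.ttest_ind(with_retraction, without_retraction,
                         equal_var=False)

# Mann-Whitney U makes no normality assumption.
u, p_u = stats.mannwhitneyu(with_retraction, without_retraction,
                            alternative="two-sided")

print(f"Welch t-test:   t = {t:.2f}, p = {p_t:.3f}")
print(f"Mann-Whitney U: U = {u:.1f}, p = {p_u:.3f}")
\end{verbatim}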
The Impact Factor is released midway through the calendar year, whereas
retractions might be published at any point in the year. Future
analysis could look at submissions by month, perhaps focusing on the
12-month period after the release of the IF or publication of the
retraction.
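A monthly analysis of this kind could align each journal’s submissions to
the relevant event date. A minimal sketch, assuming hypothetical monthly
submission counts:
\begin{verbatim}
# Sketch only: total submissions in the 12 months after an event
# (an IF release or a retraction), given hypothetical monthly data
# with columns month, submissions.
import pandas as pd

monthly = pd.read_csv("monthly_submissions.csv",
                      parse_dates=["month"], index_col="month")

def total_after(event_date, series, months=12):
    """Sum of submissions in the `months` months following event_date."""
    start = pd.Timestamp(event_date)
    end = start + pd.DateOffset(months=months)
    return series[(series.index >= start) & (series.index < end)].sum()

post_if_release = total_after("2018-06-30", monthly["submissions"])
post_retraction = total_after("2018-11-15", monthly["submissions"])
print(post_if_release, post_retraction)
\end{verbatim}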
Because the sample includes journals that started publishing within the
timeframe (or shortly before), efforts were made to exclude the first year
of submissions data. However, the growth in submissions to a new journal
may be uneven rather than linear, which could still distort the results.
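For completeness, the exclusion step could look like the following sketch
(again with hypothetical file and column names), dropping each journal’s
first recorded year:
\begin{verbatim}
# Sketch only: drop each journal's first year of submissions data,
# since early growth for a new journal may be uneven.
import pandas as pd

df = pd.read_csv("journal_metrics.csv")  # hypothetical input
first_year = df.groupby("journal")["year"].transform("min")
df = df[df["year"] > first_year]
\end{verbatim}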
Discussion
There are many factors involved in the number of submissions a journal
receives and many of them are beyond the control of the editor or the
publisher. The number of research papers being written will be dependent
in large part on the amount of research funding available and the number
of relevant untapped research areas in that field. (One respondent to
our cases noted how the technique covered by the relevant journal is now
“commonplace”, leading to a decline in new research.) Even where
research funding is unchanged, submission numbers would still fluctuate,
depending on the incidence of negative or null results (which are
commonly not submitted for publication). Therefore, submission numbers
are unlikely to rise evenly. Because there are multiple submission
options for authors, including both general and specialized journals,
there is competition between journals for submissions.
From our data, we identified two factors that have a significant impact
on submission numbers. The finding that changes in Impact Factor
correlate with changes in submissions is not surprising, as it fits with
prior research about authors’ selection process. Because Impact Factor
is used by various bodies, including universities and funders, to assess
published research, authors will inevitably seek to be published in
journals with the highest Impact Factor. One of our respondents noted,
“it has been unfortunate that some countries and grant agencies set
arbitrary limits of what IF journals will or will not count in certain
metrics.”
Our second finding is less expected – that retractions correlate with
declining submissions. This effect was noted in the two years following
a retraction; given this (relatively) long effect time we cannot rule
out other factors. Prior research indicated that journal reputation is
an important factor for authors when choosing where to submit. Our case
study research found that negative comments from researchers about a
journal’s peer review occurred in years with significant declines in
submissions.
Though an isolated case, the decline of submissions following
editor-in-chief misconduct seems indicative of a correlation between
journal reputation and submissions. To distinguish these reputational
effects from Impact Factor, we shall categorize them as “peer review
reputation” (PRR). We propose that PRR is an important selection
criterion for authors and that significant negative PRR will lead to a
decrease in submission numbers. We believe this is the reason for the
correlation between retractions and submission numbers. (Previous
researchers have used retractions as a metric for peer review quality;
see Horbach & Halffman, 2019.)
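One way to probe the two-year window described above would be a
lagged-indicator regression. The sketch below, using statsmodels on
hypothetical journal-year data, is illustrative only and is not the
analysis we performed:
\begin{verbatim}
# Sketch only: regress submission changes on lagged retraction
# indicators, given hypothetical columns journal, year,
# submissions, retraction (0/1).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("journal_years.csv")
df = df.sort_values(["journal", "year"])

g = df.groupby("journal")
df["retr_lag1"] = g["retraction"].shift(1)  # retraction 1 year earlier
df["retr_lag2"] = g["retraction"].shift(2)  # retraction 2 years earlier
df["subs_change"] = g["submissions"].pct_change()

model = smf.ols("subs_change ~ retr_lag1 + retr_lag2",
                data=df.dropna())
print(model.fit().summary())
\end{verbatim}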
We do not believe that the correlation between retractions and
submission numbers should be considered an incentive for editors or
publishers to not publish retractions. Firstly, publishers have a
responsibility to maintain the scientific and scholarly literature,
which includes the publication of retractions, corrections, and errata.
Secondly, whilst retractions correlate with submission numbers, there
are many instances in our data where there was no effect, so it is too
simplistic to conclude that retractions cause declines in
submissions. Though retractions may be a quantifier of PRR, they are not
the sum total of it. Thirdly, as our case studies indicate, negative PRR
is transmitted by many means, including word of mouth, which would be
impossible to quantify. Lastly, we suspect that not publishing a
retraction, where one was justified, would ultimately lead to a greater
decline in PRR than publishing the retraction.
We have not attempted to compare Impact Factors and retractions (our proxy
of PRR) to determine which is the more significant “risk factor” for a
decrease in submissions. Our
analysis of these two factors involved different tests and different
types of analysis, which are not easily comparable. Also, the number of
retractions, whilst statistically significant, was relatively small.
Changes in Impact Factor correlate with both increases and decreases in
submission numbers, whilst retractions correlate only with decreases in
submissions. Retractions are also a binary measure; either a retraction
was published, or it wasn’t. The cases where multiple retractions were
published by the same journal in the same year were too few to test for
any correlation between number of retractions and number of submissions
– we do not expect such a relationship. Furthermore, it may be that
trying to rank Impact Factor against PRR would be meaningless in any
case.
It is worth noting the previous finding that high IF journals published
more retractions \citep*{Fang_2011}. Those authors speculate
that the correlation may be due to the high IF increasing the incentive
for authors to manipulate their results in order to get published, or
may be due to higher IF journals coming under more intense scrutiny.
Others have proposed that high IF journals favour papers reporting novel
and unexpected results, which are disproportionately likely to be
retracted later \citep*{Agrawal_2012}. Regardless of the cause, this
correlation between IF and retractions means that the correlation
between these factors and submissions will not be a simple one; it may
be impossible to truly isolate each factor in an analysis.
We found no correlation with submissions for factors such as acceptance
rate and turnaround times. One might expect authors to care about the
expected speed of dissemination and the likelihood of being published.
We suspect the reason that there was no correlation is that these
metrics are not routinely made available to authors. Editors and
publishers might consider publicizing acceptance rates and turnaround
times, and then assessing the impact on submissions. \citet{Pepermans_2015}
found that, for high-quality papers only, framing the metric as an
acceptance rate, rather than a rejection rate, did lead to authors being
more likely to submit.
We recommend that editors and publishers consider both PRR and IF when
analyzing submission numbers. Given that PRR is likely to be a
significant factor in determining journal choice, publishers may want to
find metrics other than retractions by which to measure it. Journals
need sustained or increasing submissions to remain viable, so editors
and publishers should aim to maintain a journal’s PRR, as well as its
IF, to continue to attract submissions. Underinvesting in the practices
and processes around peer review may lead to smaller submission numbers
and thereby smaller revenues in the longer term.
Tables