Research on the academic review process may help to improve research productivity. The article presents a model of the review process in a top journal, in which authors know their paper’s quality whereas referees obtain a noisy signal about quality. Increased signal noisiness, lower submission costs and more published papers all reduce the average quality of published papers in the journal. The model allows analyzing how the submission cost, the accuracy of referees and the number of published papers affect additional equilibrium characteristics. Implications of the model for journal policies are also discussed.

The article presents a model of the academic review process which assumes that authors are fully informed about their article’s quality, whereas referees and editors receive only noisy signals (as opposed to the more common assumption in the literature that editors observe article quality perfectly during the review process).^{[1]} This means that editors may extract information about an article’s quality from the author’s submission decision. There are several reasons why the assumption that authors are better informed than referees (and consequently than editors) is reasonable. One is that if referees could evaluate papers perfectly, multiple referees of the same paper would always reach the same conclusion, and journals would therefore never need more than one referee per submission. Neither pattern is consistent with actual refereeing practice. In addition, the evidence that single-blind refereeing produces different results from double-blind refereeing (e.g., Blank 1991) should not arise if referees could evaluate a paper accurately based on its content alone. The many examples of papers in top journals that later receive very few citations (or none at all), and some examples of highly important articles that were initially rejected (Gans and Shepherd 1994), further support the claim that the referees’ signal is noisy. Moreover, when we consider the referee’s signal as it is transmitted to the journal’s editor (which is the relevant signal here), additional issues can create noise or discrepancy between the paper’s true quality and what the referee conveys to the editor. Frey, Eichenberger, and Frey (2009), for example, drawing on many years of editing the journal Kyklos, suggest that papers may contain new ideas that undermine views the referees have advocated for a long time, so referees may knowingly or unknowingly exhibit a significant conservative bias.
They also argue that the well-known scholars among the referees are often overburdened and ask their graduate students to draft the referee report, resulting in a strong bias toward standard economics and an avoidance of the risk of supporting an unorthodox contribution.

The qualitative results of the paper should hold even if the author does not have perfect information about the paper’s quality, as long as his signal is more accurate than the referee’s. ^{[2]} It makes sense that authors have better information about their paper’s quality than referees, for several reasons. First, authors can ask colleagues to read the paper and give them a sincere evaluation of its quality before submitting it. ^{[3]} If each colleague is on average as accurate in assessing the paper’s quality as a referee, then combining the opinions of several colleagues obviously gives a more accurate signal about quality than that of a single referee, and on top of that the author has his own private information about the paper’s quality. Also, authors are often experts in the narrow area to which the paper belongs, whereas referees may be experts in the broader area but less knowledgeable than the authors about the specific narrow literature most closely related to the paper. Finally, authors often spend hundreds or even thousands of hours on their papers, thinking carefully about the topic, analyzing possible directions in a theoretical paper, trying different regressions in an empirical paper, and so on. They should be in a better position to judge the paper’s quality accurately than a referee who spends only a few hours reading it.

The model yields several results, some of which go in the opposite direction to those of other models. Increased signal noisiness, lower submission costs and more published papers all reduce the average quality of published papers in the journal. The model also allows analyzing how the submission cost, the accuracy of referees and the number of published papers affect additional characteristics of the equilibrium. Of particular interest is the result that increasing the submission cost raises the average quality of published papers, and thus may be viewed as a beneficial strategy for the journal. This occurs even though the model does not impose restrictions on the use of referees, so raising submission costs is not a way to cope with limited refereeing resources (as in Leslie 2005). The positive impact of higher submission costs on the journal’s quality has potential implications for journal policies, and runs in the opposite direction to the results of previous models. If refereeing accuracy and submission costs are relatively high, authors hesitate to submit papers that are below the acceptance threshold, and the editor makes relatively few mistakes in accepting papers. When refereeing accuracy and submission costs are lower, more authors with weak papers submit, and editors end up making less accurate acceptance decisions. This effect is absent when the editor can assess article quality perfectly (i.e., when refereeing accuracy is at its maximum). The results show that this inaccurate-assessment effect leads the editor to prefer higher submission costs and more accurate referees.

The next section reviews the literature, and Section 3 presents the model. Section 4 characterizes the equilibrium, and Section 5 analyzes and discusses some comparative statics results. Section 6 provides a numerical analysis that presents the computed equilibria for various parameter values. Section 7 analyzes and discusses the case where the top papers are accepted for sure despite the noisy review process. The last section concludes and discusses the implications of the model for journal policies.

The academic review process is an essential part of academic publishing, which in turn is a vital part of academic life and of disseminating research results to the academic community and the public. The details of the review process can affect the incentives researchers face and how they allocate their time to different tasks, and thus impact their research productivity. For example, the review process can determine how researchers allocate their time between working on new ideas and polishing existing papers (Ellison 2002a). The review process also determines how quickly research results are published. Evidence shows that characteristics of the review process change over time, suggesting that it is possible to shape the process; for example, Ellison (2002b) documents a slowdown in submit–accept times (from submission to final acceptance, including time for revisions), and Azar (2007) finds a slowdown in journals’ first-response time (from submission to the journal’s response on the initial submission). In addition, certain characteristics of the academic environment change over time. For example, there is evidence of increasing competition among economists over time (Sutter and Kocher 2001) and of geographical changes in the concentration of authors in top journals (Kocher and Sutter 2001). The development of the Internet and open access publishing are additional examples of changes in the academic publishing environment over the last decades. Some of these changes have implications for the academic review process. McCabe and Snyder (2005), for example, develop a model to analyze the relationship between open access and journal quality, and Azar (2008) builds a model that shows how changes in the publishing environment can affect the equilibrium through changing social norms related to refereeing.

As opposed to many markets where market forces yield efficiency, academic research and publishing do not operate in a regular market with a system of strong incentives pushing toward efficiency. Some journals are not-for-profit; authors of academic articles are not paid by the journal; referees are often unpaid; salaries in some places are not tied very strongly to research productivity and in other places they have an upper bound; many researchers have tenure, and therefore universities have limited ability to replace low-productivity researchers; and journal editors are usually not paid according to the journal’s performance. All these reasons lead to a system that lacks market forces that could yield efficiency without further intervention. It is therefore vital that journal editors make smart decisions about how to shape the review process. To do so, research on academic publishing, and in particular on the academic review process, is crucial, and yet we see too little of such research.

Some literature on the academic review process does exist, however, and includes several important articles. Some research uses empirical data to address various questions. Laband (1990), for example, examines the value-added from the review process. Using empirical data, he analyzes the relationship between citations of published papers and comments provided by reviewers and editors. He finds that referees’ comments have a positive impact on subsequent citation of papers, whereas comments made by editors show no such impact. Laband therefore suggests that value-adding by editors seems to result mostly from efficient matching of papers with reviewers. Blank (1991) compares the use of single-blind versus double-blind review using an experiment in the *American Economic Review* and finds that referees are more critical when they do not know the author’s identity. She also reports that double-blind reviewing does not affect authors from top-ranked universities, colleges, and low-ranked universities, but negatively affects authors from near-top-ranked universities and from non-academic institutions. Laband and Piette (1994) address the issue of journal editors’ favoritism toward papers written by colleagues and former students. Using citation analysis they find evidence that editors sometimes publish subpar papers by colleagues and former students, but that on balance the editors’ use of connections allows them to capture high-impact papers for publication in their journals. Medoff (2003) studies a related issue and finds that articles published by authors with connections or personal ties to the publishing journal’s editors (especially authors who serve on the journal’s editorial board) are of higher quality than articles by authors without such connections. Bergstrom (2001) discusses the provision of academics’ free labor to journals and the high prices for subscriptions that some of them charge despite enjoying free labor.
He advocates several possible strategies to overcome the problems caused by high prices of for-profit journals, such as expanding non-profit journals, supporting new electronic journals, and punishing overpriced journals.

Somewhat closer to the focus of the current paper are several recent theoretical models. Leslie (2005) presents a model of the review process, in which the referee knows the quality of a paper accurately, but the author only observes a noisy signal about the quality of his paper. Leslie finds that increasing the submission cost (by raising the submission fee or the first-response time) can be detrimental to the journal because it reduces the journal’s success rate (the proportion of the population of top papers that the top journal publishes). In addition, he finds that as noise increases (the signal received by the author about his paper’s quality becoming less accurate), the expected standard of accepted papers increases. Besancenot and Vranceanu (2008) develop a model of the market for academic publications in business and economics, using a game between researchers and schools’ deans with the editorial selection process also playing a role. They analyze the impact of publication incentives and show that large bonuses might result in bad consequences, such as a drop in the quality of major journals, a decline in the number of top-tier publications by leading institutions and a fall in top researchers’ compensation. Heintzelman and Nocetti (2009) analyze the decision problem of how to optimally submit manuscripts for publication. When probabilities of submission outcomes are exogenous parameters, authors can find the optimal submission path through the use of journal scores based only on the journals’ characteristics and the author’s degree of impatience. Cotton (2013) analyzes a model where editors can choose both submission fees and response times, and shows that when authors differ in their willingness to pay submission fees or to wait for the journal’s response, journal quality is maximized by choosing a moderate fee and a moderate response time.

The model focuses on the review process in a top journal and how authors respond to it. The population of potential submissions is normalized to have a mass of 1. Papers vary in their quality, denoted by *q*, which is uniformly distributed between 0 and 1. Authors know their papers’ quality but the journal does not, and therefore it uses referees. For the various reasons mentioned in the previous section, the referee is less informed than the authors about the paper’s quality. In particular, the referee observes a noisy signal *z* about the paper’s quality, equal to *q* + ε, where ε is distributed uniformly over the range [−*g*, *g*]. The expectation of ε is therefore 0, meaning that on average the referee is correct. The parameter *g* is a measure of how noisy the referee’s signal is. Because *q* ∈ [0, 1], the most extreme case of referee noise about quality that will be considered here is *g*=1. When *g*=1, even the worst paper (*q*=0) has some chance of being perceived as an excellent paper (*q*=1) and vice versa. Therefore it is assumed that *g* ∈ (0, 1], although in reality the value of *g* is likely to be much smaller than 1.

The journal has space to publish a mass *n* of papers, where *n* ∈ (0, 1). The editor wants to exhaust this space and publish exactly a mass *n*, and given this constraint, to maximize the journal’s quality. This is consistent with the behavior of journals – we do not see journals that publish only one paper a year in order to maximize their average quality; rather, they publish a constant number of issues every year and try to maximize the quality of the published papers. ^{[4]} These objectives imply that the editor decides on a threshold quality (denoted by *r*), and a paper is accepted only if the referee’s signal of its quality is at least *r*. The level of *r* is set by the editor such that in equilibrium, given authors’ decisions, the journal publishes exactly a mass *n* of papers. For simplicity, the process leading to the acceptance decision (e.g., a revise-and-resubmit decision followed by another version of the paper, etc.) is ignored and only the final outcome is modeled.

An author whose paper is rejected, or who does not submit to the top journal at all, can submit the paper elsewhere. The paper may also be rejected by another journal, can be submitted to further journals, and will end up either published somewhere or not. The exact nature of this complicated process for papers that are not accepted by the top journal is beyond the scope of the current model. What is important here is that this outside option of not publishing in the top journal yields a certain expected utility level, which is assumed to be zero. Acceptance by the top journal, however, obviously gives a higher utility than this outside option. The extra utility from acceptance in the top journal compared to the outside option of submitting elsewhere is normalized to equal 1. ^{[5]}

The author bears a submission cost of *c*, which is a result of several ingredients. First, he might have to pay a submission fee. Second, submitting a paper to a journal takes some effort – navigating through the submission system, possibly making changes to the paper to match the journal’s style, etc. Third, it takes a few months on average to hear back from a journal, and because promotion and salary depend on the author’s publications, this delay is costly.

The game is a simultaneous game. The values of *c*, *g* and *n* are exogenous and given by the academic environment. The two players are the authors and the editor. The authors observe the quality of their paper, *q*, and decide whether to submit it to the top journal. The editor decides, based on the noisy signal about the paper’s quality, *z*, whether to accept the paper for publication. The Nash equilibrium is a combination of authors’ and editor’s strategies such that each player responds optimally to the other’s strategy, and the mass of submitted papers that are accepted equals the space the editor wants to fill, *n*. As will be discussed below, the equilibrium strategies can be defined by minimum thresholds: the authors submit the paper if *q* ≥ *q**, and the editor accepts the paper if its quality signal is at least a certain level, *z* ≥ *r*. Once these strategies are determined, the rest of the game unfolds automatically without any intervention by the players: papers with quality of at least *q** are submitted, the editor obtains a noisy signal about each paper, and only those with signals of at least *r* are accepted and published, whereas the rest are rejected.

In this section I assume that there is sufficient noise in the refereeing process, and that the standards in the top journal are sufficiently high, that every paper has some positive probability of being rejected in equilibrium. At least in economics, this seems a reasonable assumption when we discuss the top journal. ^{[6]} This means that for all levels of *q* we have *q* < *g* + *r*, which in turn implies that *g* > 1 − *r*. To guarantee this, this section limits attention to the case where *g*(1 − *c*^{2}) > *n*. The analysis shows that this condition is both necessary and sufficient for even the best papers to be uncertain of acceptance in equilibrium (i.e., for *q* < *g* + *r*). The assumption that every paper has a chance (even if a small one) of being rejected by the top journal, in addition to being reasonable in the real environment of economics journals, also simplifies the analysis.

Given a level of *r* (the threshold of *z* at or above which the editor accepts the paper), utility maximization by the authors implies that they submit to the journal if and only if their expected utility from doing so is non-negative. (I assume that authors who are indifferent between submitting their paper or not choose to submit.) Because their gain in utility from acceptance (versus the outside option of submitting elsewhere) equals 1, the condition that ensures that an author with a paper of quality *q* wants to submit to the journal is *P*(acceptance) − *c* ≥ 0, where *P*(acceptance) is the probability of acceptance given the paper’s quality and the values of *r* and *g*. It is equal to the probability that *q* + ε ≥ *r*, or *P*(ε ≥ *r* − *q*). This can be divided into three cases. If *q* ≥ *r* + *g*, then even for the worst possible signal the referee may receive (*q* − *g*) the paper is accepted, because this signal exceeds *r*. However, the assumption mentioned above, *g* > 1 − *r*, implies that we never have *q* ≥ *r* + *g*. If *r* − *g* ≤ *q* < *r* + *g*, the probability of acceptance, given that ε is distributed uniformly between −*g* and *g*, is equal to (*g* − *r* + *q*)/(2*g*). If *q* < *r* − *g*, the probability of acceptance is zero. Consequently, the author submits to the journal if and only if (*g* − *r* + *q*)/(2*g*) − *c* ≥ 0. Rearranging gives the threshold paper quality *q**: papers of quality *q** and above are submitted to the journal, and those with lower quality are not, where *q** = 2*gc* + *r* − *g*.
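The author’s decision rule above can be sketched in a few lines of code (an illustration of the text’s formulas; the value of *r* used here is an arbitrary example, since *r* is only pinned down by the editor in equilibrium):

```python
# Sketch of the author's submission rule from the text, taking the editor's
# acceptance threshold r as given (in the model r is determined in equilibrium).
def p_accept(q, r, g):
    """Probability that a paper of quality q clears the signal threshold r,
    with referee noise uniform on [-g, g]: (g - r + q)/(2g), clipped to [0, 1]
    (zero for q < r - g, per the text)."""
    return min(max((g - r + q) / (2 * g), 0.0), 1.0)

def submission_threshold(r, g, c):
    """Lowest quality at which P(acceptance) - c >= 0: q* = 2gc + r - g."""
    return 2 * g * c + r - g

# Illustrative numbers: with g = 0.1, c = 0.05 and an example r = 1.07,
# q* = 2*0.1*0.05 + 1.07 - 0.1 = 0.98.
r, g, c = 1.07, 0.1, 0.05
q_star = submission_threshold(r, g, c)
# At q = q*, the acceptance probability exactly equals the submission cost c,
# so the marginal author is indifferent between submitting and not.
assert abs(p_accept(q_star, r, g) - c) < 1e-9
```

The marginal author’s indifference (acceptance probability equal to *c* at *q* = *q**) is exactly the rearrangement that yields *q** = 2*gc* + *r* − *g*.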

Given the authors’ behavior, we can now find which *r* the editor sets in equilibrium. The editor wants exactly *n* papers accepted and wants to maximize their average quality. The editor, however, does not know the true quality of each paper, *q*, and can therefore condition his acceptance decision only on the information he has, which is the noisy signal *z* = *q* + ε. Lemma 1 shows that, even after accounting for the equilibrium behavior of authors, the expected true quality of a submitted paper is increasing in the signal *z*:

**Lemma 1**. ^{[7]} The expected value of *q* conditional on *z*, given a threshold value of authors’ submission *q**, is E(*q*|*z*) = [max(*q**, *z* − *g*) + min(*z* + *g*, 1)]/2, and is an increasing function of *z*.
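As a sanity check of my own (not from the paper), Lemma 1’s formula can be verified by simulation: with *q* uniform on [*q**, 1] and noise uniform on [−*g*, *g*], the posterior of *q* given *z* is uniform on the feasible interval, so E(*q*|*z*) is that interval’s midpoint. The values *q** = 0.98 and *g* = 0.1 below are illustrative, not equilibrium outcomes:

```python
import random

# Monte Carlo check of Lemma 1's formula
# E(q|z) = [max(q*, z - g) + min(z + g, 1)] / 2,
# using illustrative values q* = 0.98, g = 0.1.
def e_q_given_z(z, q_star, g):
    return (max(q_star, z - g) + min(z + g, 1.0)) / 2.0

random.seed(0)
q_star, g, z0 = 0.98, 0.1, 1.0
qs = []
for _ in range(400_000):
    q = random.uniform(q_star, 1.0)   # submitted papers: q uniform on [q*, 1]
    z = q + random.uniform(-g, g)     # referee's noisy signal
    if abs(z - z0) < 0.005:           # condition on signals near z0
        qs.append(q)
mc = sum(qs) / len(qs)
assert abs(mc - e_q_given_z(z0, q_star, g)) < 0.005
```

The simulated conditional mean matches the closed-form midpoint, and the formula is (weakly) increasing in *z* as the lemma states.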

Given this result, an editor who wants to accept exactly *n* papers in a manner that maximizes average quality should accept the *n* papers with the highest signals and reject the rest. This means that the optimal strategy of the editor is to set a threshold for acceptance *r* such that any paper with a signal above *r* is accepted, any paper with a lower signal is rejected and in equilibrium this threshold indeed results in exactly *n* papers being accepted. Proposition 1 presents the equilibrium level of *r*:

**Proposition 1**. In equilibrium, *r* = 1 + *g* − 2(*ng* + *g*^{2}*c*^{2})^{0.5}. It follows that *q** = 2*gc* + 1 − 2(*ng* + *g*^{2}*c*^{2})^{0.5} and that the assumption *g* > 1 − *r* is equivalent to *g*(1 − *c*^{2}) > *n*.
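A quick numerical check (mine, not part of the paper) confirms that the closed-form *r* of Proposition 1 fills exactly the mass *n*: integrating the acceptance probability (*g* − *r* + *q*)/(2*g*) over submitters *q* ∈ [*q**, 1] recovers *n*.

```python
# Verify that Proposition 1's r yields exactly a mass n of accepted papers.
def equilibrium(c, g, n):
    r = 1 + g - 2 * (n * g + g**2 * c**2) ** 0.5   # Proposition 1
    q_star = 2 * g * c + r - g                      # authors' threshold
    return r, q_star

def accepted_mass(c, g, n, steps=100_000):
    """Midpoint-rule integral of the acceptance probability over [q*, 1];
    exact up to float error since the integrand is linear in q."""
    r, q_star = equilibrium(c, g, n)
    total = 0.0
    for i in range(steps):
        q = q_star + (1 - q_star) * (i + 0.5) / steps
        total += (g - r + q) / (2 * g)
    return total * (1 - q_star) / steps

for c, g, n in [(0.05, 0.1, 0.002), (0.01, 0.03, 0.01), (0.25, 0.2, 0.025)]:
    assert g * (1 - c**2) > n          # the section's maintained assumption
    assert abs(accepted_mass(c, g, n) - n) < 1e-6
```

The three parameter triples are taken from the ranges used in the paper’s numerical analysis; any triple satisfying *g*(1 − *c*²) > *n* passes the same check.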

The value of *q** can be used to define a few additional variables of interest. Because every paper with quality of at least *q** is submitted and *q* is uniformly distributed on [0, 1], the number of submitted papers is given by *S* = 1 − *q**. The acceptance rate is then *A* = *n*/*S*, and because the quality of submitted papers is uniform on [*q**, 1], their average quality is *T* = (1 + *q**)/2.

**Proposition 2**. The average quality of published articles, denoted by *Y*, is equal to *Y* = [(1 − *q**^{3})/3 − (*r* − *g*)(1 − *q**^{2})/2]/(2*gn*).

It is interesting to examine how the equilibrium variables change as a result of changes in the parameters *c*, *g* and *n*. Table 1 presents a summary of the signs of the derivatives of *q**, *r*, *A*, *S*, *T* and *Y* with respect to the parameters *c*, *g* and *n*. These signs are based on the following constraints on parameter values: 0 < *c* < 0.5; 0 < *g* ≤ 1; 0 < *n* < 1; *g*(1 − *c*^{2}) > *n*.

Table 1

| Variable | w.r.t. *c* (submission cost) | w.r.t. *g* (noise in referee’s signal) | w.r.t. *n* (number of published papers) |
|---|---|---|---|
| *q** (threshold quality for submission) | + | − | − |
| *r* (threshold referee signal for acceptance) | − | ± | − |
| *A* (acceptance rate) | + | − | + |
| *S* (number of submissions) | − | + | + |
| *T* (average quality of submissions) | + | − | − |
| *Y* (average quality of published papers) | + | − | − |
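The signs in Table 1 can also be checked numerically by finite differences at an interior parameter point. The sketch below is my own check, not the paper’s code: the closed forms for *S*, *A* and *T* follow from the uniform quality distribution, and *Y* is my reconstruction of the average-quality expression (quality integrated against the acceptance probability).

```python
# Finite-difference check of the Table 1 signs at c = 0.05, g = 0.1, n = 0.002.
def stats(c, g, n):
    r = 1 + g - 2 * (n * g + g**2 * c**2) ** 0.5   # Proposition 1
    q_star = 2 * g * c + r - g                      # submission threshold
    S = 1 - q_star                                  # mass of submissions
    A = n / S                                       # acceptance rate
    T = (1 + q_star) / 2                            # avg quality of submissions
    b = r - g
    # Y = (1/(2gn)) * integral over [q*, 1] of q * (q - b) dq
    Y = ((1 - q_star**3) / 3 - b * (1 - q_star**2) / 2) / (2 * g * n)
    return {"q*": q_star, "r": r, "A": A, "S": S, "T": T, "Y": Y}

def sign_of_derivative(var, param, base, h=1e-6):
    up, dn = dict(base), dict(base)
    up[param] += h
    dn[param] -= h
    return "+" if stats(**up)[var] - stats(**dn)[var] > 0 else "-"

base = {"c": 0.05, "g": 0.1, "n": 0.002}
expected = {   # signs from Table 1 (r w.r.t. g is ambiguous, so it is skipped)
    "q*": {"c": "+", "g": "-", "n": "-"},
    "r":  {"c": "-", "n": "-"},
    "A":  {"c": "+", "g": "-", "n": "+"},
    "S":  {"c": "-", "g": "+", "n": "+"},
    "T":  {"c": "+", "g": "-", "n": "-"},
    "Y":  {"c": "+", "g": "-", "n": "-"},
}
for var, by_param in expected.items():
    for param, sign in by_param.items():
        assert sign_of_derivative(var, param, base) == sign, (var, param)
```

Note that the derivative of *r* with respect to *g* is excluded, matching the ± entry in the table.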

To understand the intuition behind these comparative statics, consider first an increase in the submission cost. Such an increase makes submitting to the journal less profitable, and previously borderline submitters (those close to indifferent between submitting to the top journal and to the next best alternative) now choose not to submit. Because we are dealing with the top journal, these are the authors with the lowest quality papers among the previous submitters. This means that the threshold quality for submission increases, as does the average quality of submissions, while the number of submissions declines. If the journal retained the former signal threshold for acceptance, *r* (recall that the journal only receives a noisy signal about quality), it would accept and publish fewer papers given the smaller number of submissions. However, the journal still wants to publish the same number of papers *n*, and therefore it has to reduce the acceptance threshold *r*. Interestingly, even though this threshold declines, the average quality of published papers increases. The reason is that the journal deterred submissions of some of the lowest quality papers, and because these still had a chance of publication due to referee mistakes, deterring them raises the average quality of accepted papers despite the decline in the acceptance threshold. Finally, fewer submissions with the same number of accepted papers results in a higher acceptance rate.

Next, consider an increase in *g*. The referee’s quality signal is noisier, giving mediocre papers a better chance of acceptance, and therefore lowering *q** and the average quality of submissions, increasing the number of submissions and lowering the acceptance rate. Because submissions are of lower average quality and the referees make more mistakes, the average quality of published papers declines.

Finally, suppose the journal wants to publish more papers every year. To do so, it needs to lower the bar for acceptance, which increases the acceptance rate. It also encourages submissions of lower quality papers, so *q** and the average quality of submissions decline, and the number of submissions increases. Naturally, the average quality of published papers declines as a result.

Table 2 reports the equilibrium values for several combinations of the parameters *c*, *g* and *n*. Regarding the value of *n*, notice that if a discipline has *N* journals that together publish the entire body of research of that discipline, and the top journal includes a number of articles similar to the average journal, then it publishes a fraction 1/*N* of the mass of papers in this discipline. So if most disciplines have between 40 and 500 journals, reasonable values for *n* are 0.002–0.025.

Table 2

| | *c*=0.01, *g*=0.03 | *c*=0.05, *g*=0.03 | *c*=0.25, *g*=0.03 | *c*=0.01, *g*=0.1 | *c*=0.05, *g*=0.1 | *c*=0.25, *g*=0.1 | *c*=0.01, *g*=0.2 | *c*=0.05, *g*=0.2 | *c*=0.25, *g*=0.2 |
|---|---|---|---|---|---|---|---|---|---|
| ***n*=0.002** | | | | | | | | | |
| *q** | 0.985 | 0.987 | 0.993 | 0.974 | 0.980 | 0.993 | 0.964 | 0.975 | 0.992 |
| *r* | 1.015 | 1.014 | 1.008 | 1.072 | 1.070 | 1.043 | 1.160 | 1.155 | 1.092 |
| *A* | 0.134 | 0.156 | 0.305 | 0.076 | 0.100 | 0.269 | 0.055 | 0.081 | 0.260 |
| *S* | 0.015 | 0.013 | 0.007 | 0.026 | 0.020 | 0.007 | 0.036 | 0.025 | 0.008 |
| *T* | 0.993 | 0.994 | 0.997 | 0.987 | 0.990 | 0.996 | 0.982 | 0.988 | 0.996 |
| *Y* | 0.995 | 0.995 | 0.997 | 0.991 | 0.992 | 0.996 | 0.987 | 0.989 | 0.996 |
| ***n*=0.01** | | | | | | | | | |
| *q** | 0.966 | 0.968 | 0.977 | 0.939 | 0.946 | 0.969 | 0.914 | 0.928 | 0.966 |
| *r* | 0.995 | 0.995 | 0.992 | 1.037 | 1.036 | 1.019 | 1.110 | 1.108 | 1.066 |
| *A* | 0.294 | 0.315 | 0.440 | 0.163 | 0.185 | 0.327 | 0.117 | 0.140 | 0.293 |
| *S* | 0.034 | 0.032 | 0.023 | 0.061 | 0.054 | 0.031 | 0.086 | 0.072 | 0.034 |
| *T* | 0.983 | 0.984 | 0.989 | 0.969 | 0.973 | 0.985 | 0.957 | 0.964 | 0.983 |
| *Y* | 0.988 | 0.989 | 0.990 | 0.979 | 0.980 | 0.986 | 0.970 | 0.972 | 0.984 |
| ***n*=0.025** | | | | | | | | | |
| *q** | 0.946 | 0.948 | 0.958 | 0.902 | 0.910 | 0.938 | 0.863 | 0.877 | 0.927 |
| *r* | 0.975 | 0.975 | 0.973 | 1.000 | 1.000 | 0.988 | 1.059 | 1.057 | 1.027 |
| *A* | 0.461 | 0.482 | 0.598 | 0.255 | 0.276 | 0.405 | 0.182 | 0.204 | 0.342 |
| *S* | 0.054 | 0.052 | 0.042 | 0.098 | 0.090 | 0.062 | 0.137 | 0.123 | 0.073 |
| *T* | 0.973 | 0.974 | 0.979 | 0.951 | 0.955 | 0.969 | 0.931 | 0.939 | 0.963 |
| *Y* | 0.982 | 0.982 | 0.983 | 0.967 | 0.967 | 0.973 | 0.953 | 0.954 | 0.967 |

The value of *g* represents the most extreme case of a referee’s mistake, and more generally how noisy the referee’s signal is. In a discipline with 100 journals, each publishing roughly the same number of papers annually, if journals had perfect information about article quality then the top journal would publish papers with quality levels 0.99–1, the tenth journal papers with quality levels 0.90–0.91, and so on. In equilibrium the actual papers published in each journal differ a little from this division due to referees’ mistakes. Still, this benchmark helps to understand what different values of *g* represent. For example, *g*=0.1 in this case implies that the most extreme mistake a referee can make is to perceive a paper whose true quality fits the top journal as belonging only to the 11th best journal in terms of quality, or vice versa. In economics, the Journal Citation Reports (JCR) and the Social Science Citation Index (SSCI) cover about 250 journals, and many more (generally of lower quality) are published but not covered there. If we assume that there are about 500 economics journals (or a little more, but with the lower-ranked journals publishing fewer articles on average than the more prominent ones), then quality levels of 0.99–1 correspond to a top-5 journal, and quality levels of 0.8–0.9 correspond to a journal ranked about 51st–100th. This implies that in economics, values of *g* between 0.1 and 0.2 correspond to the case in which the most extreme referee mistake is to think that a paper whose true quality is of top-5 level has the quality of a journal ranked 51st–100th (e.g., *Economica*, *Oxford Economic Papers*, or the *Scandinavian Journal of Economics*), or vice versa. The values of *g* chosen for the analysis in Table 2 are 0.03, 0.1 and 0.2, representing several levels within the reasonable parameter range.

For the values of *c*, recall that we normalized the extra utility from a publication in the top journal compared to the outside options to equal 1. Studies of salary levels of economics professors show that one publication in a top journal can increase salary by about 3% (Sauer 1988; Moore, Newman, and Turnbull 2001; Azar 2005). ^{[8]} This means a few thousand dollars annually. The total present value depends crucially on how many years the person has until retirement, his discount rate, etc., but is probably a five-digit dollar number. However, we also have to take into account the value of the outside option, that is, of submitting to other top journals, or to second-tier journals, etc. Obviously a paper that had a reasonable chance in the top journal also has chances to get accepted in these other outlets, and the above studies suggest that publication in these journals also increases salary. The extra benefit from publishing in the top journal compared to the other options is therefore probably a four- or five-digit dollar number, and in the model this amount is normalized to 1.

Turning to the submission cost, recall that it includes three ingredients. First, the submission fee. Many economics journals do not charge a submission fee at all, and this is even more common in other social science journals (e.g., psychology and management). Economics journals that charge a submission fee usually charge up to $200 (Azar 2005). Some disciplines (accounting and finance in particular) tend to have higher submission fees than economics, up to $550 in the *Journal of Financial Economics* (Azar 2005).

The second cost is the value of time and effort required to format the paper for the journal and go through the technicalities of submission. In economics most journals do not require extensive unique formatting in the initial submission stage. In psychology many journals insist on receiving the paper according to strict formatting rules, but on the other hand the formatting rules are more homogeneous than in economics and often mostly require the APA style. Overall, in most cases the submission to an additional journal of an existing paper can take anything up to a few hours.

The third and often highest submission cost is the delay it causes in the publication of the article. Because publishing an article in a good journal gives a large monetary benefit as discussed above, the few months it usually takes to get a decision from a journal (in many economics journals it takes about 3–6 months; see Azar 2007) create a big cost because they delay the increased salary. Obviously one cannot avoid this delay entirely because to get the rewards from publication one needs to submit a paper and wait for a response. Nevertheless, this is a cost that should be incorporated in the total submission cost. For example, a person who submitted to the top journal, received a rejection and then the paper was accepted at another journal (for simplicity let us ignore the common revise and resubmit stage) would have his publication a few months later than someone who gave up the submission to the top journal and sent to the other journal immediately and had his paper accepted. Given that the salary can increase by a few thousand dollars every year if the paper is accepted to a top journal, a delay of such an increase by a few months creates a cost that can reach a few thousand dollars. For example, in the case of a 6-month delay in a paper that eventually gets published in an excellent journal and gives a 3% salary increase to someone with about $130,000 annual salary, the cost of the delay is about $2,000. On the other hand, some papers receive a quick desk reject, and others get published eventually in mediocre journals that hardly affect the salary, thus having a small delay cost. Overall the three ingredients of the submission cost can range from very little up to a little above $2,000 in most cases. Because the extra benefit from publishing in the top journal was estimated to be a four- or five-digit dollar number and was normalized to 1 in the model, the submission cost *c* in the model probably should be up to about 0.25 at most. 
Consequently, the table presents the equilibria for the *c* values of 0.01, 0.05 and 0.25.

Obviously, the model simplifies reality in many ways and the actual equilibrium values are not an accurate representation of the relevant values. Nevertheless, it is a useful exercise to look at some values, what they mean and whether they are reasonable or completely wrong. If we take the case of the economics discipline, the closest value in the table in terms of *n* is 0.002 (i.e., about 500 discipline journals). Let us take for this exercise the middle values of *c* and *g*, namely *c*=0.05 and *g*=0.1. In the absence of referee mistakes, the top journal publishes papers with *q*>0.998 (assuming it publishes the same number of articles as the average journal). However, referee mistakes cause authors with lower-quality papers to submit as well, with the hope that the referee will overestimate their quality and they will be accepted. In particular, the model suggests that *q**=0.98. This means that authors whose papers belong, in terms of quality, to the top 10 journals try the top journal first. The average quality of published papers is 0.992, a little above the average quality of submitted papers (0.990) and below the average quality that would be possible if referees made no mistakes at all (0.999, because only papers with *q* between 0.998 and 1 would be published). The journal’s acceptance rate is 10%. If referees are more accurate (*g*=0.03), the journal’s average quality increases to 0.995.
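The numbers in this example can be reproduced from the model's closed-form expressions for this regime (*r* = 1 + *g* − 2(*ng* + *g*^{2}*c*^{2})^{0.5} and *q** = 2*gc* + *r* − *g*). The sketch below is mine and the function names are illustrative, not part of the paper.

```python
import math

def equilibrium(n, c, g):
    """Equilibrium acceptance threshold r and submission threshold q*
    for the regime with no guaranteed acceptance (g > 1 - r)."""
    r = 1 + g - 2 * math.sqrt(n * g + g**2 * c**2)
    q_star = 2 * g * c + r - g
    return r, q_star

def avg_published_quality(n, c, g):
    """(1/n) * integral over [q*, 1] of q * (g - r + q) / (2g) dq."""
    r, q_star = equilibrium(n, c, g)
    a = g - r
    F = lambda q: (a * q**2 / 2 + q**3 / 3) / (2 * g)  # antiderivative
    return (F(1.0) - F(q_star)) / n

n, c, g = 0.002, 0.05, 0.1
r, q_star = equilibrium(n, c, g)       # q* = 0.98, as in the text
acceptance_rate = n / (1 - q_star)     # 10% of submissions are accepted
avg_submitted = (q_star + 1) / 2       # 0.990: mean of a uniform on [q*, 1]
```

Running the same function with *g*=0.03 reproduces the 0.995 average quality reported at the end of the paragraph.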

In Section 4 we assumed that every paper has some positive probability of getting rejected in equilibrium, which implies that *g*>1 − *r*. In this section I adopt the alternative assumption that the review process is sufficiently accurate and the number of papers published in the top journal is sufficiently large that the papers with the top quality levels get accepted with certainty and face no risk of rejection. This means that *g*<1 − *r*. As the proof of Proposition 3 shows, this is equivalent to *g*(1 − *c*^{2})<*n*. In that case there are quality levels *q*≤1 for which *q*≥*g**+**r*. Then, even with the most negative value of *ε* (which is equal to −*g*), the signal obtained by the editor still exceeds the threshold for acceptance: *z=q* − *g≥r*.

As before, the condition that ensures that an author with a paper of quality *q* submits his paper is *P*(acceptance) − *c*≥0. *P*(acceptance) is equal to the probability that *q* + ε *≥ r*, or *P*(ε≥*r**−**q*). This can be divided into three cases. If *q*≥*r**+**g*, then even for the worst possible signal the referee may receive (*q**−**g*) the paper will be accepted because this signal exceeds *r*. Before, the assumption of *g*>1 − *r* implied that we never had *q*≥*r**+**g*, but now there are quality levels such that *q*≥*r**+**g*. If *r**−**g ≤ q<r**+**g*, the probability of acceptance is equal to (*g**−**r**+**q*)/2*g*. If *q<r**−**g*, the probability of acceptance is zero. Consequently, the author submits to the journal if and only if (*g**−**r**+**q*)/2*g* − *c*≥0. Rearranging gives the threshold paper quality *q** above which papers are submitted: *q**=2*gc**+**r**−**g*.
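The three cases above can be written as a single piecewise function, and the indifference of the marginal author pins down the threshold: at *q** = 2*gc* + *r* − *g* the acceptance probability equals the submission cost *c*. A minimal sketch, with illustrative parameter values:

```python
def p_accept(q, r, g):
    """Probability that q + epsilon >= r when epsilon ~ Uniform[-g, g]."""
    if q >= r + g:
        return 1.0                    # even the worst signal q - g clears r
    if q >= r - g:
        return (g - r + q) / (2 * g)  # interior case
    return 0.0                        # even the best signal q + g falls short

# The marginal author at q* = 2gc + r - g is exactly indifferent:
# his acceptance probability equals the submission cost c.
r, g, c = 1.07, 0.1, 0.05  # illustrative values only
q_star = 2 * g * c + r - g
```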

As before, the editor behaves optimally by choosing the papers with the highest signals, and the authors behave optimally by following the threshold *q** identified above. The equilibrium is obtained when the editor’s choice of the value of *r* and the resulting authors’ behavior are such that the number of papers submitted and accepted equals *n*. Proposition 3 characterizes the equilibrium:

**Proposition 3**. In equilibrium, *r =* 1 − *n* − *gc*^{2}. It follows that *q**=2*gc**+**r**−**g*=1 − *n* − *g*(1 − *c*)^{2}.

Notice that the derivative of *q** with respect to *c* equals 2*g*(1 − *c*), which is positive for all *c<*1. This means that increasing the submission cost *c* raises the quality threshold above which authors submit their papers, *q**, so the editor gets on average better submissions. With a noisy review process and a fixed number of published papers, this also implies that the average quality of published papers increases. These results are qualitatively similar to those obtained above when we analyzed the case of an equilibrium when no author has a guaranteed acceptance, further supporting the robustness of these results. In addition, notice that the derivative of *q** with respect to *c* is increasing in *g*, suggesting that the improvement in submitted papers’ quality that results from increased submission cost is more prominent when the noisiness of the review process (captured by *g*) is higher.

It is interesting to point out what happens when *c* reaches its maximal value of 1 (above that level no one will submit, because the benefit from publication in the top journal was normalized to equal 1). Then in equilibrium we get *r =* 1 − *n* − *gc*^{2} *=* 1 − *n* − *g*, and consequently *q**=2*gc**+**r**−**g*=1 − *n*. Because *q**=*r**+**g*, every submitted paper is accepted with certainty: exactly the *n* best papers (those with *q*≥1 − *n*) are submitted, and all of them are published.
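This limiting case can be verified directly from the equilibrium expressions of Proposition 3 (*r* = 1 − *n* − *gc*^{2} and *q** = 2*gc* + *r* − *g*); the parameter values in the sketch below are illustrative.

```python
# At the maximal submission cost c = 1 the equilibrium collapses to
# q* = 1 - n: exactly the best n papers are submitted, and because
# q* = r + g every submitted paper is accepted with certainty.
n, g, c = 0.002, 0.1, 1.0  # illustrative values only
r = 1 - n - g * c**2
q_star = 2 * g * c + r - g
```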

For the authors this equilibrium with *c*=1 is very bad. Authors who do not submit can be ignored because their utility is assumed to be zero. In other equilibria, the marginal submitting author (the one with paper quality *q**) is just indifferent between submitting or not (implying that his utility from submitting is zero) and other submitting authors get a strictly positive utility (they bear the same cost as the marginal submitting author but have a higher expected benefit because their chances to get their paper accepted are higher). With *c*=1, however, all the submitting authors are indifferent between submitting or not, and get zero utility from submitting. Thus, no author obtains a positive utility in this equilibrium.

The editor, on the other hand, does extremely well in this equilibrium. He wants to publish *n* papers, and he succeeds in publishing the very best *n* papers that exist. This is the best outcome he could hope for. In the other equilibria, by contrast, some lower-quality papers are submitted and accepted due to the noisy review process. Other characteristics of this equilibrium may also be attractive for the editor. For example, if the submission cost consists of a submission fee that goes to the journal and can increase the editor’s income or otherwise be used by the editor for purposes he is interested in, he might want to maximize the submission fees collected. The total benefits from publication in the model equal *n* papers times 1 per published paper, i.e., *n*. Because each submitting author must obtain a non-negative expected utility in order to agree to submit, the total costs cannot exceed *n* either. Here the editor creates a cost of 1 for *n* authors, i.e., he maximizes the total cost that the journal imposes, which can benefit him as explained above. In other words, with a lower submission cost but more submitting authors we cannot obtain a higher total cost than in this equilibrium. Moreover, the situation regarding submission fees is probably even more promising for the editor. The total cost includes, in addition to submission fees, the cost of the effort of submitting the paper and the cost of the delay in publishing caused by the review process (a cost the author bears even if the paper is then rejected). The editor has an interest in keeping these costs low, because doing so allows him to increase the submission fee, from which he gains, whereas he does not benefit from these other components of the submission cost. These costs per submitting author may be fixed (assuming that the editorial delay does not change between different equilibria ^{[9]}), but their impact on the total cost depends on how many authors submit their papers. 
Because the editor has an interest in reducing these costs, the equilibrium in which only the best *n* papers are submitted minimizes these additional submission costs and maximizes the amount that can be obtained from submission fees. The editor may also benefit from fewer submissions because they reduce the editorial burden, yet another attractive characteristic of this equilibrium from his perspective. One possible drawback, however, is that journal prestige is often affected by rejection rates, and a journal with a 0% rejection rate may lose its prestige and sooner or later no longer be considered the top journal, or at least the benefit from publishing in it relative to the best alternative may fall below 1.

Research on the academic review process is an important endeavor, as it may help improve the process and the productivity of scholars. Unfortunately, not much research has been done in this area. One exception is the model of Leslie (2005), which assumes that referees know the quality of a paper accurately, whereas the author only observes a noisy signal about the quality of his paper. The current paper assumes instead that authors are more informed about quality than the referees – authors know the quality whereas referees only receive a noisy signal about it. Several reasons, discussed in the introduction, justify this alternative assumption. It is not clear whether a model with a noisy referee signal as presented here should produce qualitatively different results from a model with a noisy author signal. Authors may submit a mediocre-quality paper to a good journal either because they are uninformed about the true quality and think the paper is better than it actually is, or because the referee occasionally makes mistakes and may recommend acceptance even for their mediocre paper. Therefore, the results may be similar regardless of which agent is the less informed. The analysis shows, however, that the assumption regarding who is more informed about the paper’s quality (the author or the referee) can be important. In particular, in Leslie’s model more noise results in a higher expected quality of published papers. ^{[10]} Here, however, more noise reduces the average quality of published papers. In addition, in Leslie’s model a higher submission cost reduces the journal’s success rate (the proportion of the population of top papers that the top journal publishes). ^{[11]} Here a higher submission cost is beneficial to the journal because it increases the average quality of published papers when the number of published papers remains constant. 
This happens even though the model does not restrict the use of referees and therefore increasing submission costs is not a way to manage limited resources of refereeing (as in Leslie 2005). Finally, in Leslie’s model more noise increases the number of submissions initially but then decreases it (and when *p**>0.5 the number of submissions always declines), whereas here more noise only increases the number of submissions.

The model also innovates with respect to the existing literature in other ways, which provide rich comparative statics and increased ability to compare the model to empirical observations on journals in different disciplines. As with any model, it simplifies the real world substantially. As with most papers, especially in such areas that are under-researched, there are many directions for future research. For example, heterogeneity of authors, more journals being considered simultaneously and heterogeneity in the journals’ scope can all be added to produce more sophisticated models that may reveal additional interesting observations. Nevertheless, this relatively simple and parsimonious model still captures several important dimensions in the decisions of authors and journals about the review process.

The analysis above can be used to evaluate which policies the top journal might want to pursue. Assuming that one of the main objectives of the journal is to maximize its average quality and publish the best research it can, it seems that the journal has an incentive to raise the submission cost (but not above the level that deters submissions altogether), and to lower the noisiness of the referee’s signal and the number of published papers. However, each of these directions also has potential disadvantages.

Higher submission costs can increase the information contained in the authors’ submission decisions, especially in the model presented in this paper, where the authors are more informed about paper quality than the editor. Therefore increasing submission costs can help improve the journal’s quality (in terms of the average quality of published papers). In the section that deals with the case where some papers are accepted with certainty, the result about the submission cost is particularly striking, because it shows that with a maximal submission cost the editor can achieve the very best outcome possible from his perspective – only the very top papers will be submitted, and all will be accepted.

Increasing the submission cost (e.g., charging a higher submission fee, making less effort to reduce the journal’s first-response time or, more cynically, increasing the effort required to submit to the journal) may work well for a journal that is clearly the top journal, as in the model. In a situation where it is less clear which journal is the top one, however, increasing the submission fee may cause not only the lower-quality submissions but also some excellent papers to turn to competing journals. Also, recall that the submission cost is measured relative to the benefit from a publication in the top journal, which varies substantially between authors (because different institutions reward publications differently, authors have varying periods until retirement, etc.) ^{[12]}, so it is difficult to determine the optimal level of the submission cost. On the other hand, increasing the submission cost and reducing the number of submissions can reduce the burden on editors and referees, which is another advantage from the journal’s perspective.

Reducing the number of published articles can be implemented by the journal relatively easily and in many cases can indeed increase the average quality of the papers it publishes. However, it runs counter to another aim of the journal, which is to make an impact. If a journal publishes only a dozen papers a year, its impact on the discipline is smaller than when it publishes sixty papers annually, even if the lower number allows it to cut the lowest-quality papers and thus increase average quality. Moreover, the average quality of the top journal is in any case extremely high, and due to the noisiness of the review process it is not easy to eliminate only the lowest-quality papers by reducing the number of articles. In addition, reducing the number of articles might reduce the willingness of libraries and readers to pay for the journal, thus going against the journal’s interest in generating revenues.

Can the journal reduce the noisiness of the quality signal it receives about papers and thus improve its quality? There are some ways in which the journal can do so. First, it can employ better referees. This is not always easy to do, because the more senior and knowledgeable referees are often also busier, and possibly a less senior person who can dedicate more time to the paper might provide a better assessment of the paper. To the extent that the journal can achieve better feedback from referees, however, it is probably a good idea for the journal to do so. The top journal in particular might have access to the best referees. Also, paying referees might help to improve the willingness of the best referees to review for the journal (especially if they have to choose to which journals’ refereeing requests they prefer to say yes) and thus reduce the noisiness of the quality signal.

Another way to reduce the noisiness of the quality signal the journal obtains is to employ more referees per paper. More referees reduce the chances that a low-quality paper will be accepted by mistake, as well as the probability that an excellent paper will be rejected, because each referee obtains a different signal and the editor who then sees all these signals can have better information. ^{[13]} Employing more referees per paper, however, has obvious costs, because referees’ time is precious and the journal wants to save on this scarce resource as much as possible. A possible solution might be to employ two referees, and if the reports received show that additional feedback on the paper will be particularly helpful, to ask for an additional report only then. The downside to this solution is that it makes the review process longer.

I thank two anonymous referees for helpful comments.

**Proof of Lemma 1**. Because both the quality level *q* and the noise *ε* are drawn from uniform distributions, we should first identify for each possible signal *z* the minimal and maximal values of *q* from which it can come, given the distributions and the equilibrium behavior of authors. Because *z*=*q* + ε and *ε* is between −*g* and *g*, we know that *q*≥*z* − *g* and *q*≤*z* + *g*. The distribution of *q* implies that *q*≤1. The equilibrium behavior of authors means that the true quality of a submitted paper must be at least *q** (since *q**≥0, this also captures the requirement that the true quality is positive). Combining these observations, the lower bound for *q* given the value of *z* is *max* (*q**, *z*−*g*) and the upper bound for *q* is *min* (*z**+**g*, 1). The uniformity of the distributions of *q* and *ε* implies that the expected value of *q* conditional on *z* is the middle of the range of *q*-values from which *z* can result (the average of the minimum and maximum values of *q* from which the signal *z* can come). Thus, we obtain the following expected value of *q* conditional on the observed signal *z*: *E*(*q|z*) *=* [*max* (*q**, *z* − *g*) *+ min* (*z**+**g*, 1)]/2. The two components of this function, *max* (*q**, *z* − *g*) and *min* (*z**+**g*, 1), are both either strictly or weakly increasing in *z*, and therefore their sum is also an increasing function of *z*.
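The posterior-mean formula from the proof and its monotonicity can be checked numerically; the threshold and noise values in this sketch are illustrative.

```python
def expected_quality(z, q_star, g):
    """E(q | z) = [max(q*, z - g) + min(z + g, 1)] / 2, from Lemma 1."""
    lo = max(q_star, z - g)  # lowest quality consistent with signal z
    hi = min(z + g, 1.0)     # highest quality consistent with signal z
    return (lo + hi) / 2

# E(q | z) is weakly increasing in z, so ranking papers by their
# signals also ranks them by expected quality.
q_star, g = 0.98, 0.1  # illustrative values only
signals = [q_star - g + i * 0.001 for i in range(221)]  # grid over [q* - g, 1 + g]
values = [expected_quality(z, q_star, g) for z in signals]
is_monotone = all(a <= b + 1e-12 for a, b in zip(values, values[1:]))
```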

**Proof of Proposition 1**. Recall that by assumption *g*>1 − *r*. The expected number of accepted papers is equal to the following:

∫_{*q**}^{1}[(*g**−**r**+**q*)/2*g*]d*q* = [(1 + *g**−**r*)^{2}*−* 4*g*^{2}*c*^{2}]/4*g* = *n*,

where the lower limit uses *g**−**r**+**q***=2*gc*. Solving for *r* gives *r =* 1 + *g**−* 2(*ng**+**g*^{2}*c*^{2})^{0.5}.

The assumption *g*>1 − *r* is now equivalent to 1 + 2*g**−* 2(*ng**+**g*^{2}*c*^{2})^{0.5}>1, or *g>*(*ng**+**g*^{2}*c*^{2})^{0.5}, which then gives after squaring both sides *g*^{2}*>ng**+**g*^{2}*c*^{2}, or equivalently *g*(1 − *c*^{2})>*n*. Because *n* is generally very small this inequality is easily satisfied for reasonable parameter values. ^{[14]}
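As a numerical sanity check (a sketch of mine, with illustrative parameter triples satisfying *g*(1 − *c*^{2}) > *n*), the equilibrium *r* of Proposition 1 indeed makes the mass of accepted papers equal to *n*:

```python
import math

def accepted_mass(n, c, g):
    """Mass of accepted papers under r = 1 + g - 2*sqrt(n*g + g^2*c^2):
    integral over [q*, 1] of (g - r + q)/(2g) dq
    = [(1 + g - r)^2 - (2*g*c)^2] / (4*g)."""
    r = 1 + g - 2 * math.sqrt(n * g + g**2 * c**2)
    return ((1 + g - r)**2 - (2 * g * c)**2) / (4 * g)

# Each triple satisfies g*(1 - c^2) > n, the condition of this regime.
for n, c, g in [(0.002, 0.05, 0.1), (0.001, 0.25, 0.03), (0.002, 0.01, 0.2)]:
    assert math.isclose(accepted_mass(n, c, g), n, rel_tol=1e-9)
```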

**Proof of Proposition 2**. The average quality of published articles is obtained by multiplying the probability of acceptance by the article quality, integrating over the range of submitted papers taking into account the distribution of *q*, and dividing the result by the number of published articles. This gives

(1/*n*)∫_{*q**}^{1}*q*[(*g**−**r**+**q*)/2*g*]d*q*.

Substituting *r =* 1 + *g**−* 2(*ng**+**g*^{2}*c*^{2})^{0.5}, solving the integral and simplifying then gives the expression in Proposition 2.

**Proof of Proposition 3**. Recall that here by assumption *g*<1 − *r* and consequently there are two ranges of *q*-values that result in accepted papers. For *q*≥*g**+**r* acceptance is guaranteed. This means that we have 1 − *g* − *r* accepted papers from this range of *q*-values. For *q*-values between *q**=2*gc**+**r**−**g* and *g**+**r* acceptance is not guaranteed but is possible, with the acceptance probability being equal to (*g**−**r**+**q*)/2*g* as explained before. The total number of accepted papers should equal *n* in equilibrium and is therefore given by

*n* = (1 − *g**−**r*) + ∫_{*q**}^{*g*+*r*}[(*g**−**r**+**q*)/2*g*]d*q* = (1 − *g**−**r*) + *g*(1 − *c*^{2}). Solving for *r* gives *r =* 1 − *n**−**gc*^{2}.
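The accounting in this proof can be verified directly; the sketch and parameter values below are illustrative and satisfy *g*(1 − *c*^{2}) < *n*, the condition of this regime.

```python
def accepted_count(n, c, g):
    """Total accepted papers under r = 1 - n - g*c^2:
    (1 - g - r) papers accepted with certainty, plus the integral over
    [q*, g + r] of (g - r + q)/(2g) dq, which equals g*(1 - c^2)."""
    r = 1 - n - g * c**2
    certain = 1 - g - r     # q >= g + r: accepted for sure
    risky = g * (1 - c**2)  # closed form of the integral
    return certain + risky

# g*(1 - c^2) = 0.000995 < n = 0.002, so this regime applies.
n, c, g = 0.002, 0.99, 0.05
```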

Akerlof, G. A. 1970. “The Market for ‘Lemons’: Quality Uncertainty and the Market Mechanism.” *Quarterly Journal of Economics* 84(3):488–500.

Azar, O. H. 2005. “The Review Process in Economics: Is It Too Fast?” *Southern Economic Journal* 72(2):482–91.

Azar, O. H. 2007. “The Slowdown in First-Response Times of Economics Journals: Can It Be Beneficial?” *Economic Inquiry* 45(1):179–87.

Azar, O. H. 2008. “Evolution of Social Norms with Heterogeneous Preferences: A General Model and an Application to the Academic Review Process.” *Journal of Economic Behavior and Organization* 65(3–4):420–35.

Bergstrom, T. C. 2001. “Free Labor for Costly Journals?” *Journal of Economic Perspectives* 15(3):183–98.

Besancenot, D., and R. Vranceanu. 2008. “Can Incentives for Research Harm Research? A Business Schools’ Tale.” *Journal of Socio-Economics* 37(3):1248–65.

Blank, R. 1991. “The Effects of Double-Blind Versus Single-Blind Reviewing: Experimental Evidence From the *American Economic Review*.” *American Economic Review* 81(5):1041–67.

Cotton, C. 2013. “Submission Fees and Response Times in Academic Publishing.” *American Economic Review* 103(1):501–9.

Ellison, G. 2002a. “Evolving Standards for Academic Publishing: A q-R Theory.” *Journal of Political Economy* 110(5):994–1034.

Ellison, G. 2002b. “The Slowdown of the Economics Publishing Process.” *Journal of Political Economy* 110(5):947–93.

Ellison, G. 2011. “Is Peer Review in Decline?” *Economic Inquiry* 49(3):635–57.

Frey, B. S., R. Eichenberger, and R. L. Frey. 2009. “Editorial Ruminations: Publishing Kyklos.” *Kyklos* 62(2):151–60.

Gans, J. S., and G. B. Shepherd. 1994. “How Are the Mighty Fallen: Rejected Classic Articles by Leading Economists.” *Journal of Economic Perspectives* 8(1):165–80.

Heintzelman, M., and D. Nocetti. 2009. “Where Should We Submit Our Manuscript? An Analysis of Journal Submission Strategies.” *The B.E. Journal of Economic Analysis & Policy* 9(1):1–26, Advances: Article 39.

Kocher, M. G., and M. Sutter. 2001. “The Institutional Concentration of Authors in Top Journals of Economics during the Last Two Decades.” *The Economic Journal* 111(472):F405–F421.

Laband, D. N. 1990. “Is There Value-Added from the Review Process in Economics?: Preliminary Evidence from Authors.” *Quarterly Journal of Economics* 105(2):341–52.

Laband, D. N., and M. J. Piette. 1994. “Favoritism Versus Search for Good Papers: Empirical Evidence Regarding the Behavior of Journal Editors.” *Journal of Political Economy* 102(1):194–203.

Laband, D. N., and R. D. Tollison. 2003. “Good Colleagues.” *Journal of Economic Behavior and Organization* 52(4):505–12.

Leslie, D. 2005. “Are Delays in Academic Publishing Necessary?” *American Economic Review* 95(1):407–13.

McCabe, M. J., and C. M. Snyder. 2005. “Open Access and Academic Journal Quality.” *American Economic Review* 95(2):453–8.

Medoff, M. H. 2003. “Editorial Favouritism in Economics?” *Southern Economic Journal* 70(2):425–34.

Moore, W. J., R. J. Newman, and G. K. Turnbull. 2001. “Reputational Capital and Academic Pay.” *Economic Inquiry* 39(4):663–71.

Oswald, A. J. 2007. “An Examination of the Reliability of Prestigious Scholarly Journals: Evidence and Implications for Decision-Makers.” *Economica* 74(293):21–31.

Sauer, R. D. 1988. “Estimates of the Returns to Quality and Coauthorship in Economic Academia.” *Journal of Political Economy* 96(4):855–66.

Shepherd, G. B. 1995. *Rejected: Leading Economists Ponder the Publication Process*. Sun Lakes, AZ: Thomas Horton and Daughters.

Sutter, M., and M. G. Kocher. 2001. “Power Laws of Research Output. Evidence for Journals of Economics.” *Scientometrics* 51(2):405–14.

©2015 by De Gruyter