I think preprints are great. A big part of why preprints are great is because they aren’t journal articles. As such, I’m going to start out by talking about the problems with journals and with peer review, and then swing back around to talk about how preprints help us solve some of these problems. I also think journals are still valuable and I don’t want them to go away, and so I view preprints as a valuable complement rather than as a replacement. Here we go.
Problems with Journals and Peer Review
Journals are slooooooow
Journals can take a long time to render a decision on a manuscript, and often enough that decision is a request for revisions, forcing the process to begin anew. Time to decision varies a lot by field. Nature once rejected a paper of mine in less than 12 hours. An economics journal, on the other hand, once took 11 months to reach its first decision (R&R), and another 7 months to reach its second decision (conditional accept). That’s a long time in the mill. Further, even after a paper is accepted, it can take months or even years for it to actually appear in the journal. That can be a long time to wait for your work to see the light of day.
There may be good reasons why a journal takes a long time to review your paper. It may be difficult for the editor to find reviewers, and empirical papers now often contain substantially more data than they used to (Powell 2016). On the other hand, some reasons aren’t so good. Reviewers may accept an assignment only to blow it off, or editors may be unwilling to exercise agency in reaching a decision when reviews are split. I’m a big fan of how the journal The American Naturalist asks its reviewers to follow the Golden Rule of Reviewing: review for others as you would have others review for you (McPeek et al. 2009). I wish more journals promoted this rule.
Peer review stifles innovation and mutes style
There are countless tales of seminal, field-defining papers getting routed from journal to journal because they defied expectations held by editors and reviewers of what a paper should look like. This means that important work can wallow in the labyrinths of review for ages before seeing the light of day.
In addition, because functioning peer review promotes maximal clarity for a broad, professional audience, more idiosyncratic verbiage is often eliminated. I understand the reasons for this. Colloquialisms can make translation difficult. But they can also carry important information to fluent speakers. There is a tradeoff here, and I fully acknowledge that both clarity and accessibility are extremely important for science communication. I also think that sometimes you gotta write with panache. Translation is often difficult, but that doesn’t mean hard-to-translate things shouldn’t be said (Hofstadter 1996). Compounds in natural foods may have unknown functions but can act independently or synergistically with known ingredients. So it is with language.
Journals make research hard to access or hard to disseminate
Many journal articles are behind paywalls, making them inaccessible to anyone without access to the library of a major research university, unless you want to pay $40 to read a single paper. The website Sci-Hub illegally shares paywalled papers, and has predictably stirred up controversy about intellectual property and access to publicly funded research, leading some governments to block it (Chawla 2017). Open Access journals make all papers freely accessible but charge authors publication fees. This shifts the burden rather than removing it, and also incentivizes unethical behavior from such journals, such as laxer acceptance policies.
Journal prestige contributes to institutional pathology
Publishing in certain journals is highly competitive. Certain high-impact journals receive undue attention, leading to warped incentives to publish in those outlets. I once spoke to someone in a molecular biology lab at a prestigious university who said that her department wouldn’t even consider job candidates who hadn’t published in Nature, Science, or Cell. And researchers respond accordingly. When I was a postdoc, another postdoc in my group walked into my office and proposed that we try to come up with the sort of research project that was likely to get us a Nature publication. To emphasize this: the proposal was that our choice of research questions and methods should be guided not by our interests or by their intrinsic importance, but by our perception of what made for a “Nature paper” (I turned down his kind offer). Even worse, universities in some countries, including China, Chile, and Venezuela, often give explicit cash payouts to researchers who publish in high-impact journals (MIT Technology Review 2017, Franzoni et al. 2011).
The obsession with journal papers clutters the literature with trivial or incorrect results. It can create incentives for fraud, such as the outright fabrication of data (Fanelli 2009), or a fascinating instance in which some pharmacologists submitted fake email addresses for suggested reviewers so they could surreptitiously review their own paper (Cohen et al. 2016). Even without fraud, incentives to publish can select for whatever practices maximize publications, which in turn can lead to an increase in false discovery rates as bad practices are rewarded and adopted (Smaldino & McElreath 2016). Incentives affect the journals as well. Journal editors may employ dubious strategies to increase their journal’s impact factor, such as encouraging authors to heavily cite articles in their journal (Research Policy 2016). They do this both to attract better submissions and because editing a high-impact journal carries more prestige. Jerry Davis, the erstwhile editor of Administrative Science Quarterly, wrote on this subject:
I was quite surprised to discover [while browsing Web of Knowledge’s Journal Citation Report] unexceptional articles in unheralded journals that had been cited more often than Watson and Crick’s paper reporting the discovery of the structure of DNA. The citing articles often included text along the lines of ‘‘Management is an important topic’’ followed by parenthetical citations to a dozen recent articles in the offending journal. (Davis 2014)
Obviously, this is all totally bananas.
What’s Awesome About Preprints
Preprints let you get your work out earlier. To some extent, that’s all there is to it. You avoid the long waits of journal review and publication timelines. You get to disseminate your paper the way you think it should be written. And because all preprint servers are freely accessible for both authors and readers, you avoid any concerns about either paywalls or publishing costs.
A side effect I have seen in at least some people is that they stop worrying so much about publishing in prestigious outlets. Once you accept that your paper can be seen and judged on its own merits, publication months or years later ends up being somewhat anticlimactic. Consider the oldest online preprint server, arXiv (pronounced “archive,” because the “X” looks like the Greek letter “chi,” and physicists are hilarious), which publishes work primarily in physics and computer science. Most physicists and computer scientists are used to uploading their work to arXiv.org, and the ones I know tend not to care too much about where their papers end up being published.
Pre-publication feedback is helpful
One of the oft-touted advantages of posting preprints is getting helpful feedback on your paper before you submit it to a journal. This sounds great. That said, I’ve rarely gotten unsolicited feedback on a preprint, and apparently only about 10% of preprints draw comments, at least on bioRxiv (Kaiser 2017), so I’m not totally convinced by this argument. I do find that solicited feedback is amazingly helpful. All my best papers have benefitted from having lots of eyes on them before publication to point out logical flaws or suggest extensions or related literatures. If commenting on preprints becomes more common, that can only benefit their authors.
Preprints let science be creative
I think of science as fundamentally creative work. I want to figure out how the world works for my own edification, and I want to share it with others to create a community of people who are working together to understand the world. I’m aware that can come across as very lah-di-dah, and there are of course quotidian demands and pressing pragmatic problems to solve and the occasional materialistic dreams of fame and glory. If I were really in it for the money or fame, though, I’d be doing something else. If you don’t relate to this perspective, that’s fine. But to whatever extent science is really a creative endeavor, preprints allow for a more direct line between artist and audience.
In his book Steal Like An Artist, Austin Kleon (2012) writes that the secret to success is “do good work and share it with people”:
It’s a two-step process. Step one, “do good work,” is incredibly hard. There are no shortcuts. Make stuff every day. Know you’re going to suck for a while. Fail. Get better. Step two, “share it with people,” was really hard up until about ten years ago or so. Now, it’s very simple: “Put your stuff on the Internet.”
I think the parallels to science are pretty clear. The worry some people have is that, if you put your ideas online, someone will steal them before you publish. To this, I say: you did publish. You published online. Preprint servers have a timestamp. Some have a DOI. Everyone can see this. Moreover, have some humility. The physicist Howard Aiken is often quoted as saying “Don’t worry about people stealing your ideas. If your ideas are any good, you’ll have to ram them down people’s throats.”
Where to start with preprints
When I was a graduate student, there were two major repositories for preprints. Physics, math, and computer science papers were on arXiv. Social science papers, especially economics and political science, were on SSRN (before it was bought by Elsevier). Over the last few years, discipline-specific preprint servers have been sprouting like weeds: bioRxiv, PsyArXiv, SocArXiv, engrXiv, AgriXiv, PeerJ Preprints. Thankfully, the good people at the Center for Open Science have created an index through which you can search all of these repositories at once. Check it out. It’s awesome.
While we’re on the subject of all these new preprint servers, it is worth registering one complaint about the way they’ve sprung up. You may have noticed that each discipline appears to now have its own preprint server. It’s wonderful that scientists of all stripes can now find relevant preprints and a place to put their own. However, the division of preprint servers into traditional disciplinary silos is one of the great missed opportunities of the open science movement. As the computational scientist Peter Sheridan Dodds recently commented, “It’s like having 43 Wikipedias.” My hope is that as preprints gain popularity in more circles, servers will eventually merge.
Remaining Concerns About Preprints
Many of the old concerns have been addressed. Increasingly, granting agencies are allowing proposals to cite preprints. Top journals are no longer disqualifying papers that have been previously posted as preprints. Some concerns remain. Tenure and promotion committees need to find a way to acknowledge preprints. At the University of California, where I work, preprints don’t count at all toward merit raises or promotion. Of course, we really shouldn’t be counting papers at all, but that’s another story, and acknowledging preprints may be a step toward considering the work itself rather than proxies like the number of papers in fancy journals.
Why Not Preprints?
I believe the question is no longer “Why should you post a preprint?” The question has become “Why wouldn’t you?” I mean it. What are you trying to accomplish with your science? Are you confident in your results? Do you want other people to actually see your work? Then post a preprint. Perhaps you feel like you don’t want the work to be seen before it’s critiqued by a few anonymous reviewers. There will be cases for which this is a legitimate concern. On the other hand, I have seen plenty of papers go out for peer review that were never meant for mass consumption, in which the authors appeared to expect the reviewers to tell them how to finish their paper. Posting a preprint forces you to get your paper in shape, to produce something that you’re willing to release to the public, before you send it to reviewers. Light is the best disinfectant.
Why Not Only Preprints?
At this point, I feel the need to close with a defense of journals. There are some who contend that we should move to a post-journal world, where all review is post-publication review, and all papers will be found on what we now call preprint servers. I have some sympathy for this view, but I’m against it. Good journals build up their reputation among a community of researchers. They get those reputations not just for rigorous, thoughtful, and respectful peer review, but also for publishing quality papers that fit a certain view of the world. I don’t think this is a trivial contribution. Vast numbers of papers on myriad topics come out each year. I think we need some way to sift through them all.
Certain journals create unique experiences. In 2014 I was lucky enough to publish a target article in the journal Behavioral and Brain Sciences (Smaldino 2014). Once the paper was accepted for publication, following review by four referees, the journal solicited peer commentaries to be published alongside the target article. In the end, 28 commentaries were published, along with my response. This experience was a rare opportunity to have a public dialogue about my work with the global scientific community, and was monumentally important for my career (the experience of reading blistering critiques of my work from top scientists in my field while I was still a postdoc was something like trial by fire). I contend that this sort of thing requires an institutional infrastructure beyond what preprints alone can offer. More generally, going through review at a good journal can force us to make our work more rigorous, clearer, or more logically coherent. The sociologist Charles Perrow (1986) argued that the rigorous critique of mainline journals provides important pushback that can turn an interesting idea into a fully fleshed out, well-supported theory. I can personally think of many times when a reviewer or editor pushed me to do better.
The best defense of journals I know of is Davis (2014). It’s a great paper, and I recommend reading the whole thing. Here’s the heart of the argument:
At their best, journals accomplish three things: certifying, convening, and curating. Certifying is what the review process does, validating articles as having made it through a vetting process (however organized). Convening means that specific journals are able to bring together interested and engaged scholars in a way that the abstract endeavor of organizational scholarship cannot. The membership of the editorial board reflects a journal’s ability to attract the voluntary and mostly anonymous labor of outstanding scholars. Ideally, scholars will regard a journal as a community (but not a club). Curating suggests that what is published in a particular journal is likely to be worth reading. In a field in which 8,000 or more papers are published every year, it is helpful to have the assurance that papers in a specific journal will be worth your time.
Both preprints and traditional journals are important. Journals are valuable for their curation and for marking the “sort of paper” a paper is. This is legitimately useful information. Preprints are valuable because that kind of information is not always essential to finding good papers, and because important work needs to find an audience as soon as it is ready for eyes.
A number of useful articles and blog posts have recently been written about preprints, some of which take a less favorable view than I have here. Here are some of them.
Paul Smaldino is an Assistant Professor in Cognitive and Information Sciences at the University of California, Merced.