Director’s Blog: Accessing and Assessing Science: From PLOS to DORA
By Thomas Insel
When the Public Library of Science (PLOS) launched its first open access journal in 2003, it was big news. Although many physics and computer science journals had already been free and open for several years, biomedical research had followed a business model dictated by the publishing industry, with access limited to subscribers. Subscriptions often cost more than $1,000 per year per journal, and frequently far more. Most biomedical academic research, funded by taxpayers through the NIH, was not accessible to the taxpayers who paid for it.
After PLOS published PLOS Biology as an online journal with immediate access to anyone on the internet late in 2003, I began wearing a bright blue PLOS tee shirt around Bethesda. At a neighborhood party, an editor of the journal Science (which was not enthusiastic about open access) asked me if she could borrow my shirt for an editorial meeting to wake up her staff. On the spot, for the first and possibly last time, I literally gave someone the shirt off my back.
What seemed disruptive and provocative a decade ago is now the new normal. According to a report in Science (a journal that still delays free access by 12 months), a recent European Commission study found that by 2011 half of all scientific papers were freely available to the public.1 Biomedical research actually does better than physics or chemistry, with 61 percent of papers freely available within a year. This means that anyone with access to the internet has access to the published results of scientific research. Perhaps that should not seem surprising. After all, most published research is funded by public money, authors donate their papers, and peer review is provided by volunteers. Why shouldn’t the results be freely available? The traditional answer has been that scientific publishing is a business with expenses like any other publishing business; publishers feel that their process adds value to the papers.2 The advent of electronic journals has, of course, changed the cost and now the culture of scientific publishing.
But it is too early to declare victory in the quest for public access. For many journals there is still a lag of 6 or 12 months before papers become freely available. The good news is that there is a central repository for papers once they are public: NIH established PubMed Central (PMC) as a required repository for all published research supported by NIH funding. This does not fix the lag in public access, but it provides one-stop shopping. A second issue is that even when the public has access to the papers, the results are written in technical language not easily understood by readers without a specialized science background. Fortunately, many clinical journals, such as JAMA and Pediatrics, now provide plain-language explanations of major findings from their published papers. Finally, the real test of access, for scientists, will be not only viewing the paper but accessing the original data. A new journal, eLife, has been launched with the requirement that accepted papers, when published, be accompanied by the relevant raw data. eLife is really more than an electronic journal; it is an experiment in communicating science to accelerate discovery. It deserves careful attention from anyone interested in the dissemination of scientific information and the changing culture of publishing.
These issues aside, we have come a long way in the decade since PLOS launched its first online issue. In fact, one might wonder if the problem now is too much access, not too little. There are now thousands of biomedical research journals, many only published electronically. How can anyone know what to read? How can anyone keep up? The answer for too many scientists and science administrators is what we call the “CNS syndrome,” where “CNS” stands for Cell, Nature, and Science. For many, these three journals and a few of their offshoots are the holy grail of publishing. They are indeed the highest ranking journals based on impact factor—a measure of the frequency with which the articles in a given journal have been cited elsewhere. The impact factor has taken on magical powers in science. While papers in these journals are frequently newsworthy and sometimes groundbreaking, these highest impact factor journals do not have a monopoly on outstanding science.
An unfortunate consequence of the deluge of data and papers is that recognition or promotion in science may have less to do with what a scientist has done and more to do with where the work is published. Promotion committees or grant reviewers simply assume that a CNS paper is a proxy for excellence, without reading the work or knowing how each scientist contributed. As a result, the pressure to publish in a CNS journal has become intense, while equally outstanding work in other journals is devalued or overlooked altogether. Over the past few months, more than 9,000 scientists and well over 300 scientific organizations have signed on to the San Francisco Declaration on Research Assessment (DORA), which advocates halting the use of impact factors as proxies for scientific merit. DORA hopes to begin a cultural change, an “impact factor rebellion,” to shift the focus from a single metric determining the goals of scientific publishing to a more thoughtful assessment of scientific value based on the details of the work, the innovation of the approach, and the replication of the findings. After all, shouldn’t we care more about what is published than where? These days, I am looking for a DORA tee shirt to wear at NIH.
1 Kaiser J. Scientific publishing. Half of all papers now free in some form, study claims. Science. 2013 Aug 23;341(6148):830. doi: 10.1126/science.341.6148.830.
2 Van Noorden R. Open access: The true cost of science publishing. Nature. 2013 Mar 28;495(7442):426-9. doi: 10.1038/495426a.