A Vision for the Future of Scholarly Publishing

In many ways, the Research Works Act has been a blessing (see the excellent link round-up here). It has taken the moderately complacent but always grousing scientific community and whipped our feelings about the current state and cost of scientific publishing into a white-hot fury. Ideas are bandied about, critiques are given, and people are beginning to take action.

So what is the way forward? Certainly we are not getting away from journals in the near term. Or ever, really, as they are fabulous final curated repositories of scientific results. They are the end point and gold standard. And I think we’re all coming to the conclusion that a PLoS-like model is a great way to go. Science must end up in an open access repository at the end of the day.

But a final resting place aside, what should the future look like so that research results can be disseminated rapidly and openly? How can we fold in peer review as part of the process, as it is one of the hallmarks of scientific quality control?

So I’ve been dreaming. A vision of the future of scientific publishing. What if arXiv, reddit, PLoS, pubcreds, slashdot’s commenting system, figshare, DataONE, and Web 2.0 had a baby? This led to an idea – a concept – a proposal.

So, here’s my vision of the future. It’s not the only vision, and there is substantial room for discussion, but, it’s a start… Consider this a SciFi musing on scholarly publishing.

I sit down with my morning cup of coffee and log into SciX. I am presented with several options on the main screen:

Read Papers
Submit a Paper
Revise a Paper
Review the Reviewers

So, what happens when I click on each of those? Let’s follow each one, one by one.

OK, so, I click on “Read Papers”. I’m taken to a screen that shows different scientific disciplines and a search box. I click on my discipline of interest, as I’m just browsing. I am presented with a list of subdisciplines, a search box, and a list of papers below that with several sorting options (# of reviews, submission date, review score, etc.). I click on my subdiscipline, and am presented with just a list of papers which I can sort by various criteria and, again, a search box. I sort so that I can see the latest submissions and find one that piques my interest. I click on it, read it, maybe even view some of its supplemental videos or such. I have a strong opinion about it – I think it’s good, but has a few flaws that need to be corrected before it should be accepted by the community. The paper already has one review filed. I read it, and it’s OK, but it misses some key things. So I click review.

I write a brief review, just like a normal paper. I click that my review should remain anonymous. I also need to select one option from the following list:

This paper is fraudulent/not a paper (flag for review).
This paper is not acceptable in its current form (reject).
This paper is good, but requires some large revisions (major revisions).
This paper is acceptable, but requires some changes (minor revisions).
This paper is acceptable as is.

As I think there are some serious flaws, but like the work, I select the major revisions option.

At that moment, it just so happens that I get an email from SciX. The email contains the latest papers in my chosen disciplines and sub-disciplines that have received at least two more acceptable reviews than rejection reviews. I.e., reject is a score of -1, major revisions is a score of 0, and any acceptable score is +1.
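
That digest rule is concrete enough to sketch. A minimal sketch in Python, assuming the score values named above; since SciX is hypothetical, the function and constant names are my own invention:

```python
# Scores taken from the text: reject = -1, major revisions = 0,
# and either "acceptable" option = +1. The names here are illustrative.
REVIEW_SCORES = {
    "reject": -1,
    "major_revisions": 0,
    "minor_revisions": +1,
    "acceptable_as_is": +1,
}

def digest_worthy(reviews):
    """A paper enters the email digest once it has at least two more
    acceptable reviews than rejections, i.e. a net score of +2."""
    return sum(REVIEW_SCORES[r] for r in reviews) >= 2
```

Two acceptances plus one major-revisions review would just make the cut; an acceptance balanced by a rejection would not.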

OK, great. Let’s move on. Let’s say I wanted to submit a paper. The submission process is the same as on any other site except, today when I click on submit, a screen comes up that denies me the ability to submit. It says that I currently do not have the required review-to-submission ratio of 3:1. I’m all good on reviewing reviews (more on that later), but I need to file more original reviews myself! I grumble about it, bemoaning my days in grad school when I only had to keep up with 1:1, but it’s right, so I go and file one more review before I come back and submit my paper in the usual way – except for that embedded video. After submitting, I link to the DOI on my website and CV.
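
A sketch of how that gate might be enforced, under my own assumption that the pending submission itself counts toward the ratio; none of these names come from a real system:

```python
def may_submit(reviews_filed: int, papers_submitted: int, ratio: int = 3) -> bool:
    """Deny submission unless the author holds a review-to-submission
    ratio of at least ratio:1, counting the paper being submitted."""
    return reviews_filed >= ratio * (papers_submitted + 1)
```

So with two prior submissions, a ninth filed review is what re-opens the submit button.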

OK, it’s several weeks later. I have received two major revisions reviews on my paper. I’ve done the revisions, and thought carefully about the reviewers’ responses. So I go back to SciX and click on Revise a Paper. I upload the revised version. I then upload an individual response to each reviewer. They are both anonymous, so I don’t know who they are. However, once I hit ‘resubmit’, they are sent an email. It tells them that I have responded to their reviews and submitted a revised version. They have two weeks to look at the revision and the responses. If they do nothing, the paper will be marked as “acceptable as is”. If they wish, though, they can go and submit a secondary review. This review also includes an option of “Did not respond to my review at all.” If both reviewers select this for more than two rounds of resubmission, my paper is booted and I have to start over again.
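
The resubmission clock described above can be sketched as a per-round check. The two-week window and the “did not respond” boot come from the text; the state names, and the choice to default to acceptance only once the window closes, are my assumptions:

```python
from datetime import datetime, timedelta

REVIEW_WINDOW = timedelta(weeks=2)  # reviewers get two weeks per round

def resolve_round(resubmitted_at, secondary_reviews, now=None):
    """Return the paper's state after one resubmission round.

    secondary_reviews holds the reviews filed this round; the string
    "did_not_respond" mirrors the reviewers' boot option.
    """
    now = now or datetime.now()
    if not secondary_reviews:
        # No reviewer acted: after two weeks, silence defaults to acceptance.
        if now - resubmitted_at >= REVIEW_WINDOW:
            return "acceptable_as_is"
        return "awaiting_reviews"
    if all(r == "did_not_respond" for r in secondary_reviews):
        return "strike"  # strikes in consecutive rounds boot the paper
    return "in_review"
```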

A few weeks later, both reviews come back positive, and my paper is included in the next email out. I have also received two additional ‘acceptable’ reviews of the paper with no comments attached. I list the score on my CV. I also hit “submit to journal” and select PLoS One. The system generates the proper submission, and attaches the review history. All I do is fill in a cover letter.

I have another paper to submit, but, I know I’m down on my number of “reviewing the reviewers”, so, I click on that option. I am brought to a screen with five reviews. For each review, I also have the title and abstract of the paper. For each review, I am asked to select one of the following options:

Review is fraudulent/someone padding their bank/unrelated to paper (flag)
Review is cursory. Email reviewer for more detailed review.
Review is acceptable, but laced with inappropriate invective. Count as half-review.
Review is fully acceptable.
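
One way the four verdicts above might translate into review credit for the moderated reviewer; only the half-weight is stated in the text, the other numbers are my guesses:

```python
# Credit per moderation verdict; only the 0.5 weight comes from the text.
REVIEW_CREDIT = {
    "flag": 0.0,        # fraudulent / padding / unrelated: no credit
    "cursory": 0.0,     # reviewer is emailed to expand the review first
    "invective": 0.5,   # content fine but hostile: counts as half a review
    "acceptable": 1.0,
}

def review_credit(verdicts):
    """Total review credit earned from a list of moderation verdicts."""
    return sum(REVIEW_CREDIT[v] for v in verdicts)
```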

I quickly go through and click acceptable for all of them, except one that merely says “No.” with “Reject” selected. Clearly, not fair to the author or the community.

With this finished, I go and submit my next short paper. It’s a brief note, but one that I feel is important to get out into the literature.

After all of this, I go to my profile page. I check my list of papers, note their current scores and number of downloads and citations, and update those on my website and CV.

So, that’s it. That’s my simple vision. Open access and transparency from hitting the ‘publish’ button to reading and writing reviews. And a reputation based economy so that papers are only marked accepted when the weight of reviews say so, but anyone can still look at them and the comments that others have made about them.


UPDATE I’m tremendously excited about F1000’s announcement of their new F1000 Research which is being discussed across the interwebs. I fear that their model of post-publication peer review will end up suffering the same fate as PLoS One, though – comments on highly controversial or touted articles, but most of the rest going without comment or notice. The above vision solves that problem.

See also (things I have found after writing the above):
Gowers’s How might we get to a new model of mathematical publishing?
Gowers’s more modest proposal
This excellent thread at Math 2.0
Nikolaus Kriegeskorte’s excellent The Future of Scientific Publishing

16 thoughts on “A Vision for the Future of Scholarly Publishing”

  1. As Jason Priem said, recursive reviewing FTW. I’d add some of Stack Overflow’s mechanics to the mix as well.

    It’s not 100% of the way there yet, but at least for the “Read Papers” part of your workflow, you can do some of that with Mendeley. You can search by keyword & discipline & results are ranked using the number of readers as an input. You can also restrict results to only open access papers, so you know you’ll actually be able to read what you find.

    Please take a look and let me know if you think it has a place in your vision for the future.

  2. I haven’t really read what others have proposed for the future workflow of peer review, but to me your system seems to have quite a bit of potential. I could easily imagine myself going along the steps as you described, and would probably enjoy it more than the current model with traditional publishers. (As a side note: I found your post via Mr. Gunn’s Google+ link.)

  3. Indeed, the voting mechanic idea came, in part, from stackoverflow. I like the idea, though, of sorting by number of readers, or by number of readers in, say, the past week. I’ll add that to the list!

  4. I think this is just excellent. I could easily see myself going through the steps you’ve outlined. And I think it would really pick up. The important thing is to get the publishers on board, but that seems to have been no problem for F1000 Research. In order to get people to submit papers to SciX, most would like to know that journals wouldn’t count a submission to SciX as prior publication.


  5. Reputation counts for something. Slashdot’s karma system is a good example.
    Anonymous reviews won’t be as valuable, while major, known trusted reviewers carry more weight.
    Same goes for authors, to some extent, although work should be judged on merit, not reputation. There’s a danger that unknown authors would find it hard to get noticed (the volume of submissions is huge).
    Authors will want to solicit reviews by known names. People want to be in high-impact journals now, based largely on reputation. Not everyone will get on the front page, so of course search is important. Non-anonymous review by respected people can boost your ranking.
    One of the big problems today is fraud. I expect that fraud will be much harder to get away with. Instead of getting safely published in a dusty book from a paywalled journal, work will stay out in the open forever, and if fraud is discovered that author will get a black bar across his/her name.
    Today all people care about is how many publications you have, not their ranking. That promotes this kind of fraud.

  6. Hrm. While I think reviews should remain anonymous, the karma-system seems interesting – I think linking that to the reviewing of the reviewers seems like a wonderful idea. So, while no clear identifying information about a reviewer is given, one can at least see the moderated score of their abilities as a reviewer – say an average and a total number of reviews or something along those lines. With respect to soliciting reviews, once one posts something on this SciX, well, there’s no reason that they cannot contact ‘reviewers’ themselves! Let people who you think should be reading and reviewing your work know that you’ve put something up. We do this already to some extent when we recommend reviewers for our manuscripts to editors. Now, of course this raises the flag of conflict of interest. Which, really most COIs can be tracked with some simple biographical information (grad program, professional appointments, etc) and by tracking co-authorship. There are some nitty-gritty details to work out there, but I see no reason why it couldn’t work.

    But thanks for bringing in the public signifier of reputation. As long as it is based on assessment of reviews within the system, it might be a very helpful metric for people to use in a number of different ways!

  7. Pingback: Academic publishing – are the winds of change starting to blow? | Infectious Thoughts

  8. Just because Wikipedia isn’t the publishing solution you are looking for, doesn’t mean that the framework provided by the software that Wikipedia runs on isn’t ideal for exactly what you are trying to accomplish – not to mention well understood and supported by a much larger community.

    If you want to begin to design such a website, that is a great place to start, especially given the lack of licensing issues with regards to the software – it’s also open as far as I know.

    With regards to Wiki and publication reputations in general, the first change I would make would be a little box that an author/reviewer could check to decide between “I proofread the entire section for errors” and “I am fixing this one little error in a section that may have many more” – this is one reason Wikipedia is nothing at all like peer review in the first place. That said, the access controls and talk page/revision tracking/citation linking infrastructure of the Wiki software is still perfect for this.

  9. Pingback: Open Haus: The Future of Scholarly Publishing « i'm a chordata! urochordata!

  10. I can see you are not a mathematician…

    Thoroughly refereeing a paper in mathematics that has serious potential to be published is anywhere between 3 hours (for a very short, relatively unimportant paper) and over 1000 hours (Wiles’s paper on FLT – the work was spread over several people) of work. I’ve probably averaged around 20 hours for each paper I’ve refereed.

  11. Indeed, I am an ecologist. Paper review for me – typically half a day at minimum, and more depending on the content and depth of analyses. It is indeed a different culture, but, I think the issue of how the future of publishing should be structured is common between both of our disciplines, and we likely have much to learn from one another.

  12. You say that your idea solves the problem of “most of the rest going without comment or notice”, but it seems that it solves that problem chiefly by getting in the user’s way and nagging them about contributing. Unless such a site were the only option for reading and writing papers (not likely to be the case, and I’m sure you would agree undesirable), where is the motivation to obey the software’s nagging?

  13. Scott – it’s less of a nag and more of a, “If you want to use this service, you have to be part of the community.” It’s actually based on an idea that has been suggested in EEB to ensure that people don’t try and cheat the reviewing system. See here for a full discussion.

    Realistically, the only time the ‘nag’ should happen is if you haven’t been reading and reviewing other people’s work. Ideally, you will never get such a notice, because you are not only using the site, but participating as well.

  14. FYI, on further thought and discussions offline, the reputation economy (a la stackoverflow and crossvalidated) is seeming like a better and better model. So, while minimum requirements may stay in place, a reputation economy is going to be key to making this work, and providing the carrot that will make people feel good about their participation here. It’s worked for all of the Stack Exchange sites, and I think it will here, too.

  15. Pingback: PeerJ launches open access into a new realm | i'm a chordata! urochordata!

  16. Pingback: Some introductory reading material | OpenPub
