Open Haus: The Future of Scholarly Publishing

Today at the National Center for Ecological Analysis and Synthesis we’re having an open discussion about the future of scholarly publishing. I may post some notes from it later, but Stephanie Pau and I have compiled a list of thought questions and helpful links to help folks prep. And, as it was an interesting gathering of linkage, I thought it might be useful more broadly. So, below is the text of my email and the useful links. I realize that I am blogging my email. Is this a new low? Perhaps. Enjoy!

Hello, all! For today’s open house, we’ll be talking about the growing hubbub regarding academic publishing and the relationship between scientists and publishers. Is it time for a change? What is the way forward?

This is a big topic, and not all of you may be aware of the things that have gone down over the past few months that are bringing this to a head. As such, Steph and I have put together a list of links that give you an introduction to all of this, as well as some questions to bear in mind while perusing them. Don’t worry, they’re all pretty quick reads – short blog posts or a brilliant Xtranormal video.

Also exciting, we’re going to have a few guests who are involved in scholarly publishing from campus (and maybe beyond).

Marty Einhorn (KITP) for perspective on http://arxiv.org and the physics community’s take on this
Josh Schimel (EEMB) who is on the ESA publications committee
Chuck Bazerman (Education) a consultant for http://hypothes.is/

See you at 4pm in the lounge. And, as the weather has been gorgeous, if we want to continue the conversation after 5, Tony Ray’s?

Open House – Is it time for a change in scientific publishing?

Questions to Ponder

1. Do we need to change our model of scientific publishing? Why?

2. What needs to change?

3. Are scientists/ecologists ready for a change? Are we too conservative or slow to adopt change in general, open access in particular? I.e., good in theory, doesn’t work in practice? (http://www.nytimes.com/2012/01/17/science/open-science-challenges-journal-tradition-with-web-collaboration.html?ref=science)

4. What are the differences between commercial (e.g., Elsevier) and non-profit journals (e.g., ESA) that affect the exchange of scientific information (http://bjoern.brembs.net/comment-n820.html)? There does not appear to be a difference in quality as measured by the number of citations (http://octavia.zoology.washington.edu/publishing/ecology_citationprice.html)

5. Do commercial journals offer us something that non-profit journals do not? Prestige? What about the differences between the ESA and PLoS (high costs placed on the author) models?

6. If the exchange of information is better served by open access, should we refuse to review for commercial journals?

7. Is there a difference between exchange of information for the sake of the discipline and personal academic success?

Resources on Open Access
http://oad.simmons.edu/oadwiki/Social_media_sites_about_OA

The Issue at Hand
An Introduction with some Humor
http://www.youtube.com/watch?v=GMIY_4t-DR0

The Research Works Act (H.R. 3699) & Responses from Scientists
http://thomas.loc.gov/cgi-bin/query/z?c112:H.R.3699:

http://www.michaeleisen.org/blog/?p=807

http://blogs.scientificamerican.com/evo-eco-lab/2012/01/06/scientists-fight-for-access/

http://osc.hul.harvard.edu/stp-rfi-response-january-2012

http://www.monbiot.com/2011/08/29/the-lairds-of-learning/

http://www.guardian.co.uk/commentisfree/2011/aug/29/academic-publishers-murdoch-socialist

ESA’s statement on Open Access back in Jan (that some on Ecolog-L were not too happy about)
http://www.esa.org/esablog/ecology-in-policy/esa-policy-news-january-13-2/

A Pledge to Not Publish in Elsevier Journals (e.g., TREE) with a lot of folk signing on
http://thecostofknowledge.com/

Further Meditations
Academic Spring?
http://www.slate.com/articles/technology/future_tense/2012/02/federal_research_public_access_act_the_research_works_act_and_the_open_access_movement_.html

Comments from Michael Hochberg
https://sites.google.com/site/perspectivesinpublishing/our-mission

Do publishers add value? Nature says yes.

http://www.nature.com/nature/journal/v481/n7382/full/481409a.html

http://www.michaeleisen.org/blog/?p=873

Publishers need us more than we need them
http://dougbelshaw.com/blog/2012/01/21/you-need-us-more-than-we-need-you/#.TyAbbuNSSWU

http://dougbelshaw.com/blog/2012/01/22/journals-academic-and-the-ivory-tower/#.TyAbieNSSWU

Oh, just go and read Michael Eisen’s blog already. I mean, he co-founded PLoS!
http://www.michaeleisen.org/blog/

Solutions?

Federal Research Public Access Act, or, Scientists Strike Back. #FRPAA
http://cyber.law.harvard.edu/hoap/Notes_on_the_Federal_Research_Public_Access_Act

http://www.michaeleisen.org/blog/?p=925

http://newsbreaks.infotoday.com/NewsBreaks/Bill-Introduced-for-Open-Access-to-Federally-Funded-ResearchFRPAA-Revisited-67078.asp

Beyond Academic Journals
http://dougbelshaw.com/blog/2012/01/25/beyond-academic-journals/

What Math and Physics have been doing for years
http://arxiv.org/

Nature Precedings
http://precedings.nature.com/

Faculty of 1000’s new open access publication:
http://blogs.nature.com/news/2012/01/f1000-launches-fast-open-science-publishing-for-biology-and-medicine.html

Something Wholly New?
Gratuitous self-link

http://futureofscipub.wordpress.com/

People and hashtags to follow on twitter

#openaccess

#frpaa

@mrgunn

@openscience

@mbeisen (Michael Eisen)

@phylogenomics (Jonathan Eisen)

@researchremix

@FakeElsevier

A Vision for the Future of Scholarly Publishing

In many ways, the Research Works Act has been a blessing (see an excellent link round-up here). It has taken the moderately complacent but always grousing scientific community and whipped our feelings about the current state and cost of scientific publishing into a white-hot fury. Ideas are bandied about, critiques given, and people begin to take action.

So what is the way forward? Certainly we are not getting away from journals in the near term. Or ever, really, as they are fabulous final curated repositories of scientific results. They are the end point and gold standard. And I think we’re all coming to the conclusion that a PLoS-like model is a great way to go. Science must end up in an open access repository at the end of the day.

But a final resting place aside, what should the future look like so that research results can be disseminated rapidly and openly? How can we fold in peer review as a part of the process, as it is one of the hallmarks of scientific quality control?

So I’ve been dreaming. A vision of the future of scientific publishing. What if arXiv, reddit, PLoS, pubcreds, slashdot’s commenting system, figshare, DataONE, and Web 2.0 had a baby? This led to an idea – a concept – a proposal.

So, here’s my vision of the future. It’s not the only vision, and there is substantial room for discussion, but, it’s a start… Consider this a SciFi musing on scholarly publishing.

I sit down with my morning cup of coffee and log into SciX. I am presented with several options on the main screen:

Read Papers
Submit a Paper
Revise a Paper
Review the Reviewers

So, what happens when I click on each of those? Let’s follow each one, one by one.

OK, so, I click on “Read a Paper”. I’m taken to a screen that shows different scientific disciplines and a search box. I click on my discipline of interest, as I’m just browsing. I am presented with a list of subdisciplines, a search box, and a list of papers below that with several sorting options (# of reviews, submission date, review score, etc.) I click on my subdiscipline, and am presented with just a list of papers which I can sort by various criteria and, again, a search box. I sort so that I can see the latest submissions and find one that piques my interest. I click on it, read it, maybe even view some of its supplemental videos or such. I have a strong opinion about it – I think it’s good, but has a few flaws that need to be corrected before it should be accepted by the community. The paper already has one review filed. I read it, and it’s OK, but it misses some key things. So I click review.

I write a brief review, just like a normal paper. I click that my review should remain anonymous. I also need to select one option from the following list:

This paper is fraudulent/not a paper (flag for review).
This paper is not acceptable in its current form (reject).
This paper is good, but requires some large revisions (major revisions).
This paper is acceptable, but requires some changes (minor revisions).
This paper is acceptable as is.

As I think there are some serious flaws, but like the work, I select the major revisions option.

At that moment, it just so happens that I get an email from SciX. The email contains the latest papers in my chosen disciplines and sub-disciplines that have received at least two more acceptable reviews than rejection reviews. I.e., reject is a score of -1, major revisions is a score of 0, and any acceptable score is +1.
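To make that scoring concrete, here’s a toy sketch in R of one reading of the rule – the function and the review labels are mine, purely for illustration, as nothing like this system actually exists:

### toy digest rule: reject = -1, major revisions = 0, either flavor of
### "acceptable" = +1; a paper makes the email once its summed score hits +2
review_score <- function(reviews) {
  scores <- c(reject = -1, major = 0, minor = 1, accept = 1)
  sum(scores[reviews])
}

reviews <- c("accept", "minor", "reject", "accept")  ### a made-up review history
review_score(reviews)        ### 2
review_score(reviews) >= 2   ### TRUE: out it goes in the next digest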

OK, great. Let’s move on. Let’s say I wanted to submit a paper. The submission process is the same as on any other site except that, today, when I click on submit, a screen comes up that denies me the ability to submit. It says that I do not currently have a review-to-submission ratio of 3:1. I’m all good on reviewing reviews (more on that later), but I need to file more original reviews myself! I grumble about it, bemoaning my days in grad school when I only had to keep up with 1:1, but it’s right, so I go and file one more review before I come back and submit my paper in the usual way – except for that embedded video. After submitting, I link to the DOI on my website and CV.
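That submission gate could be equally simple – again, a toy sketch of one possible reading of the 3:1 rule, with invented names:

### one reading of the gate: counting the paper being submitted now, you need
### at least three filed reviews for every submission on your record
can_submit <- function(n_reviews, n_submissions) {
  n_reviews >= 3 * (n_submissions + 1)
}

can_submit(n_reviews = 8, n_submissions = 2)   ### FALSE - go file another review
can_submit(n_reviews = 9, n_submissions = 2)   ### TRUE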

OK, it’s several weeks later. I have received two major revisions reviews on my paper. I’ve done the revisions and thought carefully about the reviewers’ responses. So I go back to SciX and click on Revise a Paper. I upload the revised version. I then upload an individual response to each reviewer. They are both anonymous, so I don’t know who they are. However, once I hit ‘resubmit’, they are sent an email. It tells them that I have responded to their reviews and submitted a revised version. They have two weeks to look at the revision and the responses. If they do nothing, the paper will be marked as “acceptable as is”. If they wish, though, they can go and submit a secondary review. This review also includes an option of “Did not respond to my review at all.” If both reviewers select this for more than two rounds of resubmission, my paper is booted and I have to start over again.

A few weeks later, both reviews come back positive, and my paper is included in the next email out. I have also received two additional ‘acceptable’ reviews of the paper with no comments attached. I list the score on my CV. I also hit “submit to journal” and select PLoS One. The system generates the proper submission, and attaches the review history. All I do is fill in a cover letter.

I have another paper to submit, but, I know I’m down on my number of “reviewing the reviewers”, so, I click on that option. I am brought to a screen with five reviews. For each review, I also have the title and abstract of the paper. For each review, I am asked to select one of the following options:

Review is fraudulent/someone padding their bank/unrelated to paper (flag)
Review is cursory. Email reviewer for more detailed review.
Review is acceptable, but laced with inappropriate invective. Count as half-review.
Review is fully acceptable.

I quickly go through and click acceptable for all of them, except one that merely says “No.” with “Reject” selected. Clearly, not fair to the author or the community.

With this finished, I go and submit my next short paper. It’s a brief note, but one that I feel is important to get out into the literature.

After all of this, I go to my profile page. I check my list of papers, note their current scores and number of downloads and citations, and update those on my website and CV.

So, that’s it. That’s my simple vision. Open access and transparency from hitting the ‘publish’ button to reading and writing reviews. And a reputation-based economy, so that papers are only marked accepted when the weight of reviews says so, but anyone can still look at them and the comments that others have made about them.

Thoughts?

UPDATE I’m tremendously excited about F1000’s announcement of their new F1000 Research, which is being discussed across the interwebs. I fear that their model of post-publication peer review will end up suffering the same fate as PLoS One, though – comments on highly controversial or touted articles, but most of the rest going without comment or notice. The above vision solves that problem.

See also (things I have found after writing the above):
Gowers’s How might we get to a new model of mathematical publishing?
Gowers’s more modest proposal
This excellent thread at Math 2.0
Nikolaus Kriegeskorte’s excellent The Future of Scientific Publishing

A Need to Understand Climate Change’s Indirect Effects

We know that warming, storms, drought, acidification, and the myriad of other effects of climate change will impact natural ecosystems. Most of our studies have concentrated on direct effects, though. For example, if you change temperature, you alter herbivore grazing rates. But what about indirect effects? For example, I’ve found that an increased frequency of intense storms may remove kelp, which will have an indirect effect on the structure of kelp forest food webs.

So, I did a little experiment. I went to Web of Knowledge and searched the following term: “climate change” AND “impact”. I got 21,310 entries. Then I searched again using this query: “climate change” AND “impact” AND “indirect effect”.

The search returned 35 entries.

Surely, this must be a mistake. So instead of “indirect effect” I went with just “indirect”: 506 entries. Better. If I took out the word “impact”, the count went up to 1,202. So, at maximum, 5.6%.

OK, maybe this was because I was looking at EVERYTHING. So I filtered it down to just Environmental Sciences and Ecology. “climate change” AND “impact”: 9,248. “climate change” AND “impact” AND “indirect”: 173. Removing “impact” got me to 689. Only 7.5%.
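For the curious, those percentages are just the ratios of the search counts – a quick R check using the numbers reported above:

### counts from the Web of Knowledge searches described above
total <- c(everything = 21310, eco_only = 9248)   ### "climate change" AND "impact"
indirect <- c(everything = 1202, eco_only = 689)  ### "climate change" AND "indirect"
round(100 * indirect / total, 1)                  ### 5.6 and 7.5 percent, at most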

I’m guessing there are other, more careful ways of filtering but, either way, I’m pretty surprised that, even at this point, the study of the indirect effects of climate change still accounts for so little of our knowledge. Pretty interesting – although I’m heartened by the fact that this literature seems to be increasing exponentially.

Should We Eat Fish? The Online Discussion

One of the main reasons science blogs excite me is the possibility of communication between scientists. It allows for a medium that scientists can use to hash out ideas and do so publicly. This has the added advantage that those who are not big names in a field or somesuch can listen in, as it were, or even step in and participate.

The discussion of science between scientists here, online, also has the added advantage of allowing the public to see inside of scientific debates and discussions in realtime. How do we think? What conclusions can we reach when we talk online rather than through the slow-moving medium of the peer reviewed literature?

I think we have been lucky to have just witnessed a great example of how scientists talking through a problem in an online milieu can lead to an interesting conversation – one worth seeing from both the inside and the outside. It concerns the ever-contentious issue of overfishing and the eating of (delicious) fish.

I can’t really do better than Emmett at summing the whole thing up, but, I’ll give a rough timeline of what’s been talked about online so far. Enjoy the conversation.

It all started… well, it actually got kicked off by three posts in The Nature Conservancy’s Cool Green Science.

These posts and the corresponding comments are well worth a read. Then came an NYT editorial that really got things roiling – and this is where things got interesting.

Scientists! Of! Fishiness! Online!

It’s very cool stuff that is a must-read, I think. If you read nothing else, really, go see Forum on fish, food, and people right now. Enjoy!

“Privatizing” the Reviewer Commons?

Let’s face it. The current journal system is slowly breaking down – in Ecology if not in other disciplines as well. The number of submissions is going up exponentially. At the same time, journals are finding it harder and harder to find reviewers. Statistics such as editors contacting 10 reviewers to find 3 are not uncommon. People don’t respond, they take a long time to review, or just take a long time and THEN don’t respond, leading to a need for still more reviewers to be found (this has held up 2 of my pubs for 3+ extra months). The consequences are inevitable. I’ve heard (and experienced) more and more stories of people submitting to journals for which their work is perfectly suited, only to have their submissions rejected without review for trivial reasons, if any. (I know the plural of anecdote is not data – see refs in the article below for a more rigorous discussion.)

Even if an article is reviewed, once rejected, it begins the revision cycle afresh at a new journal, starting the entire reviewer finding-and-binding process over again, yielding considerable redundancy of effort. This is slowing the pace of science, and the pace of our careers – a huge cost for young scientists.

How do we solve the tragedy of the reviewing commons?

Jeremy Fox and Owen Petchey lay out an intriguing suggestion (or see here for a free pdf) and couple it with a petition. If you’re convinced by their article, go sign it now.

In essence, they want to “privatize” the reviewer commons. They propose the creation of a public online Pubcred bank. To submit a paper, one pays three credits. Every review completed earns one credit back. This enforces a minimum 3:1 review:submission ratio, which we should all be maintaining anyway. Along with this, they propose that reviews be passed from journal to journal if a paper is rejected. Authors cannot hide from comments, hoping to roll the dice and get past critical reviewers. This lessens the workload for everyone and boosts science.

There are of course a million details to be worked out – what about new authors (they propose an allowable overdraft), multi-authored papers (split the cost), bad reviews (no credits for you!), etc.? Fox and Petchey lay out a delightfully thoughtful and detailed response to all of these (although I’m sure more will crop up – nothing is perfect).
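To make the bookkeeping concrete, here’s a cartoon in R of how such a ledger might work – the numbers and function names are mine, not anything from Fox and Petchey’s actual proposal:

### toy Pubcred ledger: each review earns 1 credit, each submission costs 3,
### split evenly among coauthors, with a small overdraft for new authors
pubcred_balance <- function(reviews_done, papers_submitted, n_coauthors = 1) {
  reviews_done - (3 * papers_submitted) / n_coauthors
}

can_afford_submission <- function(balance, overdraft = 3) {
  (balance - 3) >= -overdraft
}

b <- pubcred_balance(reviews_done = 4, papers_submitted = 2, n_coauthors = 2)
b                         ### 1 credit in the bank
can_afford_submission(b)  ### TRUE, but only thanks to the overdraft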

I think a Pubcred system is absolutely essential to the forward progress of modern science, and I whole-heartedly support this proposal (and signed the petition). At the same time, I think there is a second problem worth thinking about that is related to the proliferation of articles.

Namely, the review and re-review cycle. We all start by submitting to the highest impact journal that we think will take our articles. This can lead to a cycle of review and re-review that takes time and energy from reviewers, and can be gamed by authors who do not revise before resubmitting (who among us has not seen this happen?).

For this reason, at a minimum, the sharing of rejection reviews from journal-to-journal and authors being forced to respond is *ESSENTIAL* to the Pubcred system working. On the other hand, Pubcreds are going to require a large co-ordinating effort between journals – many of whom are published by different organizations. If we are going to go to this trouble already, one wonders if a system where authors submit articles to a common reviewing pool, and journals select articles after review and revision (asking for any additional revisions as needed) as proposed by Stefano Allesina might be even more efficient.

Then again, let’s come back to the real world. Such a system would require a sea-change in the world of academic publishing, and I don’t think we’re there yet. The Pubcred bank will require its own journal compliance hurdles in the first place, and a need for multiple publishers to agree and co-ordinate their actions. No small feat. Given its technical simplicity and huge benefits to journals, this task will hopefully be minor. Implementing Pubcreds gets us a good part of the way there, and begins to tackle what is rapidly becoming a large problem lurking in the background. It won’t solve everything (or maybe it will!), but it should certainly staunch the current tide of problems.

So please, read the article, and if you agree, go sign the petition already!

Update: For more thoughtful discussion see this post at Jabberwocky Ecology and a thoughtful response by Fox and Petchey.

Fox, J., & Petchey, O. (2010). Pubcreds: Fixing the Peer Review Process by “Privatizing” the Reviewer Commons. Bulletin of the Ecological Society of America, 91(3), 325-333. DOI: 10.1890/0012-9623-91.3.325

Allesina, S. (2009). Accelerating the pace of discovery by changing the peer review algorithm. arXiv: 0911.0344v1

Do Not Log-Transform Count Data, Bitches!

OK, so, the title of this article is actually Do not log-transform count data, but, as @ascidacea mentioned, you just can’t resist adding the “bitches” to the end.

Onwards.

If you’re like me, when you learned experimental stats, you were taught to worship at the throne of the Normal Distribution. Always check your data and make sure it is normally distributed! Or, make sure that whatever lines you fit to it have normally distributed error around them! Normal! Normal normal normal!

And if you violate normality – say, you have count data with no negative values, and a normal linear regression would create situations where negative values are possible (e.g., what does it mean if you predict negative kelp! ah, the old dreaded nega-kelp), then no worries. Just log transform your data. Or square root. Or log(x+1). Or SOMETHING to linearize it before fitting a line and ensure the sacrament of normality is preserved.

This has led to decades of thoughtless transformation of count data without any real thought as to the consequences by in-the-field ecologists.

But statistics has had a better answer for decades – generalized linear models (glm for R nerds, gzlm for SAS goombas who use proc genmod. What? I’m biased!) whereby one specifies a nonlinear function with a corresponding non-normal error distribution. The canonical book on this was first published ’round 1983. Sure, one has to think more about the particular model and error distribution they specify, but, if you’re not thinking about these things in the first place, why are you doing science?

“But, hey!” you might say, “Glms and transformed count data should produce the same results, no?”

From first principles, Jensen’s inequality says no – the mean of log(y) is not the log of the mean of y, so consider the consequences for error of the transformation approach of log(y) = ax + b + error versus the glm approach y = e^(ax+b) + error. More importantly, the error distributions from generalized linear models may often be far far faaar more appropriate to the data you have at hand. For example, count data are discrete, and hence, a normal distribution will never be quite right. Better to use a poisson or a negative binomial.

But, “Sheesh!”, one might say, “Come on! How different can these models be? I mean, I’m going to get roughly the same answer, right?”

O’Hara and Kotze’s paper takes this question and runs with it. They simulate count data from negative binomial distributions and look at the results from generalized linear models with negative binomial or quasi-poisson error terms (see here for the difference) versus a slew of transformations.

Intriguingly, they find that glms (with either distribution) always perform well, while each transformation performs poorly at some or all values.

[Figure: Estimated root mean-squared error from six different models. Curves from the quasi-poisson model are the same as for the negative binomial. Note that the glm lines (black solid) all hang out around 0, as opposed to the transformed fits.]

More intriguing to me are the results regarding bias. Bias is the deviation between a fitted parameter and its true value. Basically, it’s a measure of how wrong your answer is. Again, here they find almost no bias in the glms, but bias all over the charts for transformed fits.

[Figure: Estimated mean biases from six different models, applied to data simulated from a negative binomial distribution. A low bias means that the method will, on average, return the ‘true’ value. Note that the bias for transformed fits is all over the place, but with a glm, bias is always minimal.]
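If you want to see the flavor of this for yourself, here’s a quick, hedged re-creation in R – not O’Hara and Kotze’s actual code or settings, just a single simulated data set where the true slope on the log scale is 0.5, so you can compare what each approach recovers:

library(MASS)  ### for glm.nb() and rnegbin()
set.seed(1)

### simulate overdispersed counts with a true log-scale slope of 0.5
x <- runif(1000, 0, 2)
y <- rnegbin(1000, mu = exp(1 + 0.5 * x), theta = 1)

### slope recovered by the transform-then-lm approach
coef(lm(log(y + 1) ~ x))["x"]

### slope recovered by a negative binomial glm
coef(glm.nb(y ~ x))["x"]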

They sum it up nicely:

For count data, our results suggest that transformations perform poorly. An additional problem with regression of transformed variables is that it can lead to impossible predictions, such as negative numbers of individuals. Instead statistical procedures designed to deal with counts should be used, i.e. methods for fitting Poisson or negative binomial models to data. The development of statistical and computational methods over the last 40 years has made it easier to fit these sorts of models, and the procedures for doing this are available in any serious statistics package.

Or, more succinctly, “Do not log-transform count data, bitches!”

“But how?!” I’m sure some of you are saying. Well, after checking into some of the relevant literature, it’s quite straightforward.

Given the ease of implementing glms in languages like R (one uses the glm function, checks diagnostics of residuals to ensure compliance with model assumptions, then can use likelihood ratio testing akin to anova with, well, the Anova function), this is something easily within the grasp of the everyday ecologist. Heck, you can even do posthocs with multcomp, although if you want to correct your p-values (and there are reasons to believe you shouldn’t), you need to carefully consider the correction type.
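Before the multcomp example below, here’s a minimal sketch of that basic glm workflow on made-up count data – every object name here is invented purely for illustration:

library(car)  ### for the Anova function
set.seed(42)

### fake count data: three treatments with different mean counts
treatment <- gl(3, 20, labels = c("control", "low", "high"))
counts <- rpois(60, lambda = c(2, 4, 8)[treatment])

### fit a poisson glm, then check it
fit <- glm(counts ~ treatment, family = poisson)
plot(fit)   ### eyeball the residual diagnostics
Anova(fit)  ### likelihood ratio tests by default, akin to an anova table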

For example, consider this data from survivorship on the Titanic (what, it’s in the multcomp documentation!) – although, granted, it’s looking at proportion survivorship, but, still, you’ll see how the code works:

library(multcomp)
### example from the multcomp docs: Titanic survivorship (proportion) data
data(Titanic)
mod <- glm(Survived ~ Class, data = as.data.frame(Titanic), weights = Freq, family = binomial)

### specify all pair-wise comparisons among levels of variable "Class"
### Note, Tukey means the type of contrast matrix.  See ?contrMat
glht.mod <- glht(mod, mcp(Class = "Tukey"))

### summarize information
###applying the false discovery rate adjustment
###you know, if that's how you roll
summary(glht.mod, test=adjusted("fdr"))

There are then a variety of ways to plot or otherwise view glht output.

So, those are the nerdy details. In sum, though, the next time you see someone doing analyses with count data using simple linear regression or ANOVA with a log, sqrt, arcsine sqrt, or any other transformation, jump on them like a live grenade. Then, once the confusion has worn off, give them a copy of this paper. They'll thank you, once they're finished swearing.

O’Hara, R., & Kotze, D. (2010). Do not log-transform count data. Methods in Ecology and Evolution, 1(2), 118-122. DOI: 10.1111/j.2041-210X.2010.00021.x