How I Use Different Social Media Platforms for Science

OK, so, you’re a scientist interested in jumping into the world of online social networking – for interacting with colleagues, for fun and profit, and for bringing science to the world at large. Fantastic!

So, you log on, and are suddenly confronted by a dizzying array of sites you could use to communicate. Facebook, Instagram, Twitter, Google+, Myspace (yes, it’s back), Blogspot, WordPress, FriendFeed, Tumblr… I could go on for a while. And then you may also have concerns about mixing the personal and the professional.

So, which should you use? And why? Before going on, read Bik and Goldstein. Just, trust me. In particular, their Box 1, Box 2, and Fig 2. That paper is gold.

For me, I’ll be honest, I only use three of the above – and each for a distinct purpose. Other social media mavens out there might have different takes, or find that different platforms have better mileage for them – and I’d love to hear what you’re using and why! But this is my take on the platforms I use and how I handle things like privacy and the kinds of things I’ll say in each.

0) A Personal Webpage – This one seems obvious, but I find so many instances of people *not* having at least a professional web presence beyond a slice of hypertext on someone else’s webpage (e.g., your advisor’s, or the organization you work for) that it bears repeating. This is the face you present to the world, and often the first hit on a search for you. Make it count!

If you’re not HTML-savvy and don’t know where to start, services like Weebly and WordPress not only provide tools to build a site, but reams of wonderful templates. They’ve also got some fancier options for a small fee. Or host your own on a hosting service like Hostmonster, as I do. Such services often have tools for easily installing content-rich professional websites (often using things like WordPress) with just a few clicks.

1) Twitter – To me, this is the ur-site for social media and science. It’s quick, easy, has a ton of tools (HootSuite, TweetDeck, etc.) to make the information firehose that it contains easy to sort (hashtags and lists are fabulous), and provides a great way to dip into the stream of scientific conversation very easily. It’s also fully public. What you say there will tell a lot of people about who you are. It really is your choice who you want to be. So, it is where my professional scientific persona lives. Which is pretty much just me, but with some guardrails on, as it were. The brevity of posts is also a huge benefit: it’s a low barrier to entry, and a low barrier to interaction. And there are a ton of ecologists, evolutionary biologists, and marine biologists on it.

2) Blogs – This is where one can go long-form and really lay out some thoughts or a meaty, juicy piece of what they’re doing. There’s no set form, structure, or rules, really. Basically, I view my blog as an intellectual sandbox. And it’s excellent practice for writing. Heck, I’ve even knitted together significant pieces of papers from blog posts. Basically, I view blogs as the place for good, well-thought-out, detailed interaction and communication to take place. It’s where you can show all of who you are and how you think. It’s where you can try to connect with audiences – public and scientific – using the broadest, most informative brush. It’s not a substitute for the peer-reviewed literature, but rather a place where the scientific ebb and flow of ideas can find a home when we’re not all at a meeting or some such.

3) Google+ – I’m still not sure about this one. I LOVE hangouts, and am going to be trying some experiments with them in the future. They’ve basically replaced Skype for me, and I’ve found that G+’s groups and communities are terribly convenient for organizing and posting to groups of folk working on a project. But as a primary source of social media presence… you can post longer things than you can on Twitter? I think the multimedia capabilities and hangouts are key to what G+ has to offer, and, so, that’s what I use it for!

4) Facebook – you notice that I haven’t mentioned Facebook up to this point? Curious, no? Maybe it’s because of historical reasons (remembering a long line of social media sites – Friendster, Orkut, Myspace, etc.), maybe it’s because of the higher degree of immediate interaction, maybe it’s because my mom is on it (hi, mom!) – I’m not totally sure why, but I, at least, use Facebook for personal purposes only. I mean, I pipe my Twitter feed into it, and enjoy the conversation that occurs off of it. But I generally only add folk who I have met personally or have a personal connection with of some sort. Basically, folk I’m willing to let in to see who I am a little less guardedly – my not-so-professional online persona, if you will (there are a lot of cat photos, I admit). This is not true of all fields. For example, my wife is in theater, and theater is all about being social. Thus, Facebook becomes a professional space.

This is not to say that Facebook cannot evolve into a professional space. Actually, my favorite use of it lately has been seeing fellow scientists with whom I have a professional relationship share some of their inside thoughts about their own careers, their daily struggles, and a good bit of camaraderie and commiseration.

Oh, last, a word of caution to those of you not yet acquainted with this fact – everything you say on the internet can be associated with you forever. It will come up when you least expect it. What you say online shapes how folk perceive you. Even things that you think are completely 100% private… not always so much (particularly if Facebook randomly changes its privacy settings). This is not to say that people are not forgiving of context – they are, or should be, delightfully so – but, you know, think before you hit post.

In fact, all of this brings me to a point I make a lot in public, and, I should perhaps post here so that I can have it in digital print: If you are not curating your online identity, someone or something else is doing it for you. By someone I don’t mean some specific person (usually), but, rather, a combination of the crowd and information sifting algorithms. So, want to leave a good impression? Be known as a person interested in topic X? Only you have that power. And with great power…

So, feel free to use the above as a general guide, or discover that, in internet terms, I’m a fuddy-duddy and there are better ways of using the social networking tools at your disposal. Or, heck, I’m sure there are tools waiting out there somewhere on the horizon that can enhance the scientific conversation even more!

Updated 10/2013 with some links to sites to help you build professional websites

Giving Honest Resources to Prospective Students

A question for the peanut gallery –

I’m starting to get a trickle of the interested-in-grad-school student emails. So far the few I’ve gotten have been great. So, I’m revising my prospective students page after noticing a few things in year one. The first was that some students didn’t realize I wanted to know their interests beyond ‘marine ecology’. I wanted a research question – any question, no matter how broad – as having a question does not guarantee it’s what you’re going to follow in your graduate career. It was just to see how students think and what I might expect from them. Now I’ve made that a wee bit more explicit, along with the division between potential master’s and PhD students on the page. This is all OK.

OK – honesty aside, as who knows if one of the potential mentors I interviewed with is reading this – one of my most potentially embarrassing moments as a prospective PhD student was my first meeting with her. I had just taken a cross-country flight or two, and then went to dinner with her. The first thing she asked me was, “What are you interested in?” And, indeed, I answered, after far too long a pause, “Marine ecology.” Long pause. Then she was kind enough to gently say, “OK, what about marine ecology?…” After I screwed my head back on a bit straighter, we had a lovely conversation about ideas, work, etc., and I passed out as soon as my head hit the pillow that night. So, you know, everyone flounders a bit. Particularly when extremely jet-lagged and getting an adrenaline rush from peering into the future.

OK, so, my question – One thing I’ve been pondering, though, is putting a piece on the page as a heads-up about the prospects of a PhD or master’s student. In particular, in reading Jacquelyn’s recent piece and the excellent discussion therein, and in thinking about the answers to my question “Why this degree?” from prospectives last year, I’ve realized that most students just haven’t thought about it, or have unrealistic expectations. My question is, what should we be telling them? This page – the prospective students information – is perhaps the venue they will scan most closely, and hence one of the best places to give students some knowledge about what this degree can do for them and what the long-term challenges will be once they enter the employment market. I’m trying to be brief, and provide a few helpful, if sobering, links to start further reading. So – do you think this is a good idea? And if so, does the tone veer too far one way or another? I’ve tried to be gentle, if cautionary.

Or, maybe I won’t put this up at all…

— Here’s the new section

Why a graduate degree?

Why are you interested in a graduate degree? If you have no research experience aside from working as a lab tech, a master’s degree might be what you’re looking for. A master’s is also excellent in its own right, as a wide variety of career options are open to you with one. Do some homework – ask yourself how this degree will help you achieve your professional goals. For a PhD, I know the default answer is always “So I can be a university professor.” That’s great, and I look forward to working together to help you achieve that goal. Do some reading, though, and make sure you know what you’re jumping into (and be sure to read the comment thread at that link). The job market for academics is never great. This is not to discourage you, but take a breath before diving in and think about long-term goals. Know also that a PhD is amazing training for a wide variety of careers as well. So, think about your long-term goals and why a PhD is the right road for you. You may also be interested in checking out this book.

Why I’m Teaching Computational Data Analysis for Biology

This is a cross-post from the blog I’ve set up for my course. As my first class at UMB, I’m teaching An Introduction to Computational Data Analysis for Biology – basically mixing the teaching of statistics and basic programming. It’s something I’ve thought about teaching for a long time – although watching the rubber meet the road has been fascinating.

As part of the course, I’m adapting an exercise that I learned while taking English courses – in particular from a course on Dante’s Divine Comedy. I ask that students write 1 page weekly to demonstrate that they are having a meaningful interaction with the material. I give them a few pages from this book as a prompt, but really they can write about anything. One student will post on the blog per week (and I’m encouraging them to use the blog for posting other materials as well – we shall see, it’s an experiment). After they post, I hope that it will start a conversation, at least amongst participants in the class. I also think this post might pair well with some of Brian McGill’s comments on statistical machismo to show you a brief sketch of my own evolution as a data analyst.

I’ll be honest, I’m excited. I’m excited to be teaching Computational Data Analysis to a fresh crop of graduate students. I’m excited to try and take what I have learned over the past decade of work in science, and share that knowledge. I am excited to share lessons learned and help others benefit from the strange explorations I’ve had into the wild world of data.

I’m ready to get beyond the cookbook approach to data. When I began learning data analysis, way back in an undergraduate field course, it was all ANOVA all the time (with brief diversions to regression or ANCOVA). There was some point-and-click software that made it easy, so long as you knew the right recipe for the shape of your data. The more complex the situation, the more creative you had to be in getting an accurate sample, and then in determining the right incantation of sums of squares to get a meaningful test statistic. And woe betide you if the p value from your research was 0.051.

I think I enjoyed this because it was fitting a puzzle together. That, and I love to cook, so, who doesn’t want to follow a good recipe?

Still, there was something that always nagged me. This approach – which I encountered again and again – seemed stale. The body of analysis was beautiful, but it seemed divorced from the data sets that I saw starting to arrive on the horizon – data sets that were so large, or chock-full of so many different variables, that something seemed amiss.

The answer rippled over me in waves. First, comments from an editor – Ram Myers – on a paper of mine began to lift the veil. I had done all of my analyses as taught (and indeed even as used in a class): ANOVA, regression, multiple comparisons, etc., in the classic vein. Ram asked why, particularly given that the biological processes that generated my data should in no way have generated anything with a normal – or even log-normal – distribution. While he agreed that the approximation was good enough, he made me go back and jump off the cliff into the world of generalized linear models. It was bracing. But he walked me through it – over the phone, even.

So, a new recipe, yes? But it seemed like something more was looming.

Then, an expiration of a JMP site license with one week left on a paper revision left me bereft. The only free tool I could turn to that seemed to do what I wanted it to do was R.

Wonderful, messy, idiosyncratic R.

I jumped in and learned the bare minimum of what I needed to know to do my analysis…and lost myself.

I had taken computer science in college, and even written the backends of a number of websites in Perl (also wonderful, messy, and idiosyncratic). What I enjoyed most about programming was that you could not hide from how you manipulated information. Programming has a functional aspect at its core, where an input must be translated into a meaningful output according to the rules that you craft.

Working with R, I was crafting rules to generate meaningful statistical output. But what were those rules but my assumptions about how nature worked? The fundamentals of what I was doing all along – fitting a line to data with an error distribution that should be based in biology, not arbitrary assumptions – were laid all the more bare. Some grumblingly lovely help from statistical denizens on the R help boards brought this into sharp focus.

So, I was ready when, for whatever reason, fate thrust me into a series of workshops on Bayesian statistics, AIC analysis, hierarchical modeling, time series analysis, data visualization, meta-analysis, and last – Structural Equation Modeling.

I was delighted to learn more and more of how statistical analysis had grown beyond what I had been taught. I drank deeply of it. I know, that’s pretty nerdy, but, there you have it.

The new techniques all shared a common core – they were engines of inference about biological processes. How I, as the analyst, made assumptions about how the world worked was up to me. Once I had a model of how my system worked in mind – sketched out, filled with notes on error distributions, interactions, and more – I could sit back and think about what inferential tools would give me the clearest answers I needed.

Instead of finding the one right recipe in a giant cookbook, I had moved to choosing the right tools out of a toolbox – and then using the tools of computer science (optimizing algorithms, thinking about efficient data storage, etc.) to let those tools bring data and biological models together.

It’s exciting. And that’s the core philosophy I’m trying to convey in this semester. (N.B. the spellchecker tried to change convey to convert – there’s something there).

Think about biology. Think about a line. Think about a probability distribution. Put them together, and find out what stories your data can tell you about the world.
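To make that "line plus probability distribution" idea concrete, here is a minimal sketch in Python. Everything in it is invented for illustration – the "snails vs. algal cover" scenario, the parameter values, and the data are all hypothetical, and a real analysis would use a proper GLM routine (e.g., R’s `glm(..., family = poisson)`) rather than a grid search. The point is only the shape of the reasoning: biology says counts, counts say Poisson, and the "line" lives on the log scale.

```python
import math
import random

random.seed(42)

# Hypothetical biology: snail counts per plot as a function of algal cover.
# Counts can't be negative or fractional, so a Poisson error distribution is
# a more natural model than normal residuals around a straight line.

def rpois(lam):
    """Draw one Poisson random variate (Knuth's algorithm; stdlib only)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

cover = [x / 10 for x in range(50)]   # algal cover, 0.0 to 4.9 (made up)
true_a, true_b = 0.5, 0.4             # invented "true" parameters
snails = [rpois(math.exp(true_a + true_b * x)) for x in cover]

def loglik(a, b):
    """Poisson log-likelihood for a log-linear model: log(mean) = a + b*x."""
    total = 0.0
    for x, y in zip(cover, snails):
        mu = math.exp(a + b * x)      # log link keeps the mean positive
        total += y * math.log(mu) - mu - math.lgamma(y + 1)
    return total

# Crude fit: grid-search the likelihood surface instead of an optimizer,
# just to show that the "recipe" is nothing but maximizing a likelihood.
grid = [v / 20 for v in range(-20, 31)]   # -1.0 to 1.5 in steps of 0.05
best_a, best_b = max(((a, b) for a in grid for b in grid),
                     key=lambda ab: loglik(*ab))

print(f"estimated a = {best_a:.2f}, b = {best_b:.2f}")
```

The fitted line should land near the invented truth, but the deeper point is that the error distribution was a biological choice, not a software default.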

Does Synthesis Ecology Exist as a Scientific Discipline?

Does Synthesis Ecology exist? Is it a discipline? If so, what is it? If not, why not?

As a part of the Trends in Ecological Analysis and Synthesis symposium here at NCEAS, several postdocs past and present, organized by Jennifer “Firestarter” Balch, got together and sent this survey to the last 15 years of NCEAS postdocs. The survey asks what current and former NCEAS postdocs thought were the most important contributions in Synthesis Ecology and what they thought were the most exciting future directions in Synthesis Ecology.

And then a small storm erupted.

While Jennifer adapted a definition of Synthesis Ecology from the NCEAS mission statement (“Synthesis Ecology is the integration and analysis of existing data, concepts, or theories to find emergent patterns and principles that address major fundamental questions in ecology and allied fields.”), no one – even amongst the postdocs – could agree whether or not Synthesis Ecology existed as a Thing. Was it a discipline? Was it a technique? Would you feel comfortable calling yourself a Synthesis Ecologist? What is it?

Even amongst the authors on the analysis of the survey, there was little agreement. We sat down one morning, a group of current and former NCEAS postdocs, and tried to hash this issue out. Amusingly, the room was divided, largely along generational lines, as to whether it was or was not a field. We argued it around for a while, posing different definitions and finding little agreement.

Really, there are more questions and points of reflection than answers. Here are some relevant points that I pulled from our conversation. They’re what I latched on to, and even these were argued over amongst the participants in the group – so, no answers here.

  • What is a Field Of Science? The definition I threw out that everyone seemed comfortable with was that a field is a unique way of asking and answering questions about the world. The confluence of Asking and Answering is key. A methodology is just a way of answering.
  • Does a Field need to have a unique theory associated with it? Or not?
  • By analogy, how is Genomics a field? Why is Genomics not just a technique or methodology within Genetics? Similarly, Geography has had this debate about Geographic Information Science, which has, indeed, emerged as its own field. Also along the same line, Molecular Biology – a field we are all well familiar with – has gone through the same set of questioning.
  • One objection was that Synthesis Ecology doesn’t have a single field system – it is a collection of techniques that answer larger questions. And yet, is that not similar to Theoretical Ecology? How is one a discipline and the other not?
  • If it is a field, a defining emergent characteristic MUST be the crossing of disciplinary boundaries – either within ecology or outside of ecology.

So, I wish I could say I had an answer for you.

OK, that’s a cop-out – I do have my own answer (not reflective of the group! In fact, I hope they have some pointed answers and counterpoints to this!). Yes, I do think Synthesis Ecology is a field. Synthesis Ecology is the field of ecology defined by the combination of heterogeneous streams of data and concepts to ask and answer questions, underpinned by ecological theory and/or application, that cannot be addressed by any single investigation or dataset.

OK, after pondering THAT and the above points and thinking about the pieces you’ve read over the last 15 years, I open this discussion to you: Is Synthesis Ecology in and of itself a field? And please, be polite!

Update: See also Karen McLeod’s excellent post, Beyond crunching data: The power of ideas

Can We Reduce the Carbon Cost of Scientific Mega-Meetings?

I admit it. I love big scientific meetings. There’s something about the intense intellectual hubbub of thousands of my field’s greatest minds gathered in one place for a few days of showing off the latest, greatest, flashiest work that just fills me with joy. Also a need to sleep for a week afterwards, due to my brain going at a Matrix-like pace to keep up with all of the new and interesting information while spouting off ideas, critiques, beginning collaborations, and constantly questing to understand the growing shape of the research fields that interest me. It’s quite simply an intellectual smörgåsbord. But like all such dining experiences, there is a cost. A cost I’ve been wrestling with in this new piece in Ethnobiology Letters with my collaborator Alexandra Ponette-González.

It’s a carbon cost. A cost for climate change.

Simply put, there are a lot of people at these Mega-Meetings. A LOT. And they are rarely local. Most of us fly in – from across the country, from another country, or even another continent. Those flights put out CO2 emissions – a lot of them. Heck, even driving the full distance to some of these meetings would have a high emissions profile given the distances. And it makes you stop and wonder – we ecologists who are so environmentally conscious, what is the carbon cost of our engagement in big Mega-Meetings? Could we be doing better? How?

A map of the locations of the last several ESA meetings and the 2010 AGU meeting (triangles, with carbon cost next to them) as compared to the distribution of attendees (circles proportional to the number of attendees from that area over all meetings). Costs are in per capita metric tons.

A few years ago, this issue came up at the DISCCRS conference – an annual interdisciplinary gathering of early career climate researchers that is truly amazing. During the coffee afterwards I got to talking with a fellow attendee, and we began brainstorming. How could the big scientific societies of the world – the ESAs, AGUs, or, heck, maybe even the AAASs – still conduct their vital business of intellectual discourse while reducing their carbon footprint from meeting travel?

Travel is the key – if attendees, even the same number of attendees, didn’t have to travel so far or rely on air travel, it’s possible that we could dramatically lower carbon costs. Merely limiting the number of meetings or restricting the number of possible attendees seemed draconian and impractical. Carbon offsets have proven to be unreliable. Telecommuting to meetings limits the real value of live social interaction (so far). It seemed like there wasn’t a good solution. But then we began to think about a second kind of meeting that some, but not all, of us attend.

I’m talking about meetings that are smaller, cozier, with researchers rarely from more than a few states away. Grad students have piled into cars, trucks, vans, llama-powered motor-scooters, and more to make the pilgrimage for the meeting’s weekend of showing their stuff and finding new colleagues, collaborators, and mentors. These are the meetings where you form deep relationships that you come back to year after year – relationships that slowly bear great intellectual fruit. Meetings like The Western Society of Naturalists, for example.

True, Mega-Meetings are quite different from these smaller more local meetings – like the big flash of molecular gastronomy to the simple elegant nourishment of slow food – elBulli to Chez Panisse. Therein, however, lies their intrinsic value – a value that attendees of only Mega-meetings may actually be missing.

So we began to ponder – what if societies alternated between Mega-Meetings and a large number of smaller, more regional meetings? Could this be a possible solution? Intellectually, sure, I’m sure some would still argue against it – but the whole debate would be moot if the carbon savings turned out to be trivial. So we sat down over the next few months and did the computational equivalent of some back-of-the-envelope calculations of carbon as currently emitted versus carbon emitted under several different scenarios of meeting distributions.

And then we sat back, pretty surprised.

Assuming that pretty much everyone drives, but that no one carpools (or uses llamas), carbon savings under our most pessimistic set of assumptions were around 50%. That’s right, halving the carbon emissions.

Granted, this is back-of-the-envelope, but, the idea is pretty compelling. And yes, there are other costs – administrative, logistic, etc. But thinking from a carbon perspective alone, this result is pretty stunning. Not only are there large carbon benefits, but local meetings confer other benefits – contribution to regional economies, better ties to regional organizations and NGOs, and quite likely a higher degree of participation from graduate students (and lower attendance barriers to undergraduates and the community).
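To show the shape of that back-of-the-envelope comparison, here is a purely illustrative Python sketch. To be clear: the cities, attendee counts, and emissions factor below are invented placeholders, not the data or parameters from our paper, and real per-passenger flight emissions depend on aircraft, routing, and layovers. The sketch only demonstrates the logic: sum great-circle travel distances to each attendee’s nearest meeting, and compare one national meeting against several regional ones.

```python
import math

# Illustrative only: none of these numbers come from the actual analysis.

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometers."""
    r = 6371.0  # Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

KG_CO2_PER_PASSENGER_KM = 0.2  # assumed rough air-travel factor, NOT from the paper

# (lat, lon, attendees) for a handful of hypothetical home cities
homes = [
    (42.36, -71.06, 300),   # Boston
    (34.05, -118.24, 250),  # Los Angeles
    (41.88, -87.63, 200),   # Chicago
    (29.76, -95.37, 150),   # Houston
    (47.61, -122.33, 100),  # Seattle
]

def total_emissions_t(meetings):
    """Metric tons of CO2 if everyone flies round-trip to their NEAREST meeting."""
    total_kg = 0.0
    for lat, lon, n in homes:
        nearest = min(haversine_km(lat, lon, mlat, mlon)
                      for mlat, mlon in meetings)
        total_kg += 2 * nearest * n * KG_CO2_PER_PASSENGER_KM  # round trip
    return total_kg / 1000.0

mega = [(39.74, -104.99)]                       # one national meeting (Denver)
regional = [(40.71, -74.01), (37.77, -122.42),
            (41.88, -87.63), (33.75, -84.39)]   # NYC, SF, Chicago, Atlanta

print(f"mega-meeting:      {total_emissions_t(mega):.0f} t CO2")
print(f"regional meetings: {total_emissions_t(regional):.0f} t CO2")
```

Even with these made-up inputs, the regional scenario comes out well below the single national meeting – the same qualitative result as the real calculation, because moving meetings closer to where attendees live shrinks every flight.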

We also considered other alternatives – lowering carbon costs by taking the distribution of members into account, reducing international participation, etc. But when we floated these ideas to others, their drawbacks seemed to be ones that most people would not, at least currently, accept.

So, this local-regional alternation seems to be something worth thinking about. Would you be willing to participate in an alternative society structure – one where meetings alternated between being large and international and then small and regional? What would be lost for you? What would be gained? Would it be too much of an additional burden on organizers? Would that burden be justified by carbon savings?

Also of note, we had a hard time getting this published. We had a lot of wonderful comments from editors and reviewers who were very positive about this work, but then would say, “Oh, but, you know, we just don’t have a venue for this.” (sometimes followed weeks later by editorials stating “THIS IS A PROBLEM! WHERE ARE THE SOLUTIONS?” which we thought curious) We tried multiple generalist and specialist journals, journals for societies and by regular publishers. I’d like to thank Ethnobiology Letters for going out on a limb and publishing this, as conversations like this need to be had in the peer reviewed literature.

Ponette-González, Alexandra G., & Jarrett E. Byrnes (2011). Sustainable science? Reducing the carbon impact of scientific mega-meetings. Ethnobiology Letters, 2, 65–71.

A Grand Experiment: Can Crowdfunding Work for Science?

There has been a steady stream of articles over the years stating that now is the time for science to embrace crowdfunding. It is the wave of the future, they state. It will link people to science directly. Abandon the shackles of the NSF and NIH, ye meek and mild-mannered scientists. The internet is your salvation.

And along with this hoopla comes a small handful of science crowdfunding sites, but few success stories. There has been no large-scale, well-advertised attempt at seeing whether crowdfunding really is a viable model for science.

Well, it’s time we change that.

Announcing the SciFund Challenge!

My co-conspirator, Jai Ranganathan of Curiouser and Curiouser, and I have decided to spearhead a large effort to recruit scientists to try to crowdfund a piece of their science. We want to see: just how is crowdfunding for science different from normal funding? Can we harness the power of our social networks, the online science blogosphere, and maybe the larger public to fund science?

Crowdfunding hedgehog wants crowds to fund him.

Honestly, we have no idea. But it’s worth trying, and, hopefully, we can do some interesting post-mortem analyses once the project wraps up. So go check out our project blog including a call to arms, how this will work, and our sign-up form.

Finding Truth in a Messy World

*Note: this was derived from a combination of emails between myself and my former PhD advisor. See if you can pick out who is arguing what and where. It’s fun – well, for some of you, anyway.

How do we know the world?

This is a seemingly simple and vast question – one with no answer. And yet, it is at the core of every single scientific endeavor. We make choices. We frame it in pretty language – that we are designing specific tests of mechanistic hypotheses in order to better parameterize our quantitative or qualitative models of natural processes.

Well, that language does no one any favors. (Or, as one colleague of mine is fond of saying, “Horseshit!”)

We are grappling with the simple issue of how we can gain real true knowledge of the world around us. Now that’s a meaty quandary.

As a PhD student, you begin to wrestle with this problem intuitively, but using the pat answers given to you by the mentors around you. As you grow up, as it were, one day you have to take ownership of it. And it can spawn some interesting discussions. Two papers, seemingly with little to do with each other recently provided their authors takes on this age-old issue, albeit under two different guises. And while they are talking about different pieces of Ecology, I’ve witnessed this argument playing across vast swaths of the natural and social sciences. So, substitute in your discipline of choice in the discussion of these two papers.

The first, Macroecology: Does it ignore or can it encourage further ecological syntheses based on spatially local experimental manipulations? by Bob Paine, is a harsh rebuke of the field of Macroecology. Paine argues that we can only know the world by kicking the can, as it were. Small-scale manipulative experiments are the essential building blocks of knowledge. Macroecology, he argues, seeks to substitute large-scale observational analyses of unclear patterns and trends, with vague ideas behind them, for detailed mechanistic insights.

But what about those detailed models? Or even their application to large-scale experiments? Ellison and Dennis argue in their recent piece, Paths to statistical fluency for ecologists, that it’s time for ecologists to grow up statistically. We’re still grounded in the agricultural statistics of the 1940s and ’50s, they say. Statistics has evolved so much. It’s time for us to both get more statistically savvy and require a strong mathematical and statistical education for every incoming ecology student. This is true not only for big observational analyses, but also so that we can design and properly analyze better experiments.

These papers and the different camps they represent have a lot to say to one another, really. At first, they may seem to be in disagreement, but there’s an intriguing core underneath both.

An overly zealous devotee of the Ellison and Dennis logic would likely read Paine and argue that his is an argument that is stuck. That the low-hanging fruit of strong interactions easily detectable by simple experiments have largely been, well, picked. However, we have made a mistake – one that has haunted us from forest to fisheries management. The world is a complex place. One can look at one particular mechanism functioning within a simplified world in a small little corner of the globe and derive some truth. But in the real world, processes modify each other or change in strength – often in sharp and unpredictable ways – across vast swaths of space and time, in ways experiments alone can never detect. Similarly, there are vitally important patterns and processes in nature that cannot be tested in a square-meter plot. Statistical models informed by nature may be the only way to tease apart natural variation and divine real meaning. But, as ecologists, we lack the skills and training to do this correctly – or, worse, to be able to tell when our fellow colleagues are misapplying modern statistics. This must change.

Conversely, a Paine partisan might read the Ellison and Dennis paper and argue that many of our most important insights have come from basic studies where the result is easily observed and convincingly related without fancy statistics. To play devil’s advocate (and make sure to see Paine’s note regarding his own soul in his acknowledgements – it cracked me up), one could write a rebuttal to Ellison and Dennis that argues the converse: complex statistical analysis is a crutch. We need to focus our work on elucidating patterns and processes that are plainly observable, as these are likely to be the strongest and most important drivers in natural systems. And if they’re not that strong, doesn’t a fancy statistical result showing only that they are distinguishable from expectations or a null distribution merely serve to allow publication of essentially meaningless results?

This is not an argument against statistical fluency, mind you, but for many there is an inverse relationship between the complexity of the statistics used and the believability of the results. If ecology is interested in influencing management decisions, policy, etc., complex statistical analyses need to be reduced to simple depictions or the results won’t be usable. Will training our students in math/stats classes designed for engineers really help that goal?

I mean, who DOESN’T want incoming students to be more quantitatively literate than we are? But is the program Ellison and Dennis propose the way to go? Or will it extend graduate school by another year and likely deter many good students (“I want to DO ecology, not sit in a classroom and take math classes pitched at engineering students!”)?

Clearly, there is an in between – a sweet spot.

The fallacy of Ellison and Dennis’s argument for training rests in what happened to me in my first quarter of Alan Hastings’s excellent Mathematical Methods in Population Biology class. I suddenly understood calculus. In blinding clarity, calculus and linear algebra jumped off the page, and said “Hello! Aren’t we awesome!” It was a similar clarity that gripped me while reading Ben Bolker’s book – a clarity I do not get from reading most of the primary/non-ecological sources when I seek statistical wisdom.

To force students to take engineering/maths classes to understand these things is but a stopgap. The real goal should be to have Ecologists teaching applied math/stats/comp-sci classes that meld these concepts with ecology and the practical analysis of data. For example, I am super-excited about this new book. Imagine combining that with Bolker’s text, or even Gelman and Hill. What wonderful team-teaching opportunities there are! And not just for grad education – what unique classes we could design that build the statistical and quantitative literacy of undergraduates. These are concepts that are necessary far outside the realm of just ecology, but by teaching them in an ecological context, we can give them an entrance point that they would not otherwise find.

Or, at least, that’s my utopian/give-me-a-job view.

This same chain of logic holds for the Paine paper. Microecology is great. But even he admits at the end of the paper that the REALLY strong kind of inference comes from small-scale experiments replicated across large scales and/or from coupling small-scale experiments with observational data. The former is what I’ve read Bruce Menge and others call the ‘Comparative Experimental’ approach, and Jon Witman define as ‘Experimental Macroecology’. The latter is similar, but actually requires the techniques Ellison and Dennis would have us use.

This is a slim point of agreement, but, this combined approach really is very compelling. Kick the system. But never trust your results unless they are true in the world – using the best inference possible. If they are not, ask why not? Kick the system again – but do it everywhere. And if you’re going to do such an experiment right, here in this crazy complex world, you will require more sophisticated machinery than we have been taught to use.

It all leads to some funny places – like my desk. If you were to look at my hellaciously disorganized desk right now, on one side you’d find papers detailing Conditional Likelihood (a non-parametric version of maximum likelihood) and some scribbles of pseudo-code on its relevance and how to implement it for complex SEM models. On the other side is a copy of Morris, Abbott and Haderlie sitting next to a copy of Leighton’s thesis on consumption rates and nutritional demands of purple urchins and abalone.

And damn if I don’t feel like that will all get me closer to Truth with a capital T in my own work. At the same time, this is not the way for everyone. It takes all kinds – from the most math/stats-y of us to those who live and die by their weed-whacker. And when all of these kinds of scientists agree, well, that’s when you know you’ve got something worth paying attention to.

Paine, R. (2010). Macroecology: Does it ignore or can it encourage further ecological syntheses based on spatially local experimental manipulations? The American Naturalist, 176(4), 385-393. DOI: 10.1086/656273

Ellison, A., & Dennis, B. (2010). Paths to statistical fluency for ecologists. Frontiers in Ecology and the Environment, 8(7), 362-370. DOI: 10.1890/080209

“Privatizing” the Reviewer Commons?

This post was chosen as an Editor’s Selection for ResearchBlogging.org.

Let’s face it. The current journal system is slowly breaking down – in Ecology, if not in other disciplines as well. The number of submissions is going up exponentially. At the same time, journals are finding it harder and harder to find reviewers. Stories of editors contacting 10 potential reviewers just to secure 3 are not uncommon. People don’t respond, they take a long time to review, or they take a long time and THEN don’t respond, forcing still more reviewers to be found (this has held up 2 of my pubs for 3+ extra months). The consequences are inevitable. I’ve heard (and experienced) more and more stories of people submitting to journals for which their work is perfectly suited, only to be rejected without review for trivial reasons, if any. (I know the plural of anecdote is not data – see the refs in the article below for a more rigorous discussion.)

Even if an article is reviewed, once rejected, it begins the revision cycle afresh at a new journal, starting the entire reviewer finding-and-binding process over again, yielding considerable redundancy of effort. This is slowing the pace of science, and the pace of our careers – a huge cost for young scientists.

How do we solve the tragedy of the reviewing commons?

Jeremy Fox and Owen Petchey lay out an intriguing suggestion (or see here for the free pdf) and couple it with a petition. If you’re convinced by their article, go sign it now.

In essence, they want to “privatize” the reviewer commons. They propose the creation of a public online Pubcred bank. To submit a paper, an author pays three credits; for every review completed, a reviewer earns one credit. This enforces the minimum 3:1 submit:review ratio we should all be maintaining anyway, since each submission typically generates three reviews. Along with this, they propose that reviews be passed from journal to journal if a paper is rejected. Authors cannot hide from comments, hoping to roll the dice again and get past critical reviewers. This lessens the workload for everyone and boosts science.

There are, of course, a million details to be worked out – what about new authors (they propose an allowable overdraft), multi-authored papers (split the cost), bad reviews (no credits for you!), etc.? Fox and Petchey lay out a delightfully thoughtful and detailed response to each of these (although I’m sure more will crop up – nothing is perfect).
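To make the bookkeeping concrete, here is a minimal sketch of how the credit rules described above might work. This is purely illustrative: the class name, the overdraft limit, and the exact numbers are my own assumptions, not part of the actual Fox and Petchey proposal.

```python
# Illustrative sketch of Pubcred-style accounting. All names and the
# specific overdraft limit are hypothetical, chosen only for this example.

SUBMISSION_COST = 3   # credits paid per submission (roughly one per reviewer)
REVIEW_PAYMENT = 1    # credits earned per completed, good-faith review
OVERDRAFT_LIMIT = -6  # allowance so new authors can submit before reviewing


class PubcredAccount:
    def __init__(self):
        self.balance = 0.0

    def complete_review(self, acceptable=True):
        # Fox and Petchey suggest withholding credit for bad reviews.
        if acceptable:
            self.balance += REVIEW_PAYMENT

    def submit_paper(self, n_authors=1):
        # Multi-authored papers split the cost among the authors.
        share = SUBMISSION_COST / n_authors
        if self.balance - share < OVERDRAFT_LIMIT:
            raise ValueError("insufficient Pubcreds: review before submitting")
        self.balance -= share


# Example: a new author dips into the overdraft, then pays it back by reviewing.
author = PubcredAccount()
author.submit_paper()         # balance: -3.0, within the overdraft
for _ in range(6):
    author.complete_review()  # balance climbs back to 3.0
author.submit_paper()         # balance: 0.0
```

The point of the overdraft is visible in the example: without it, a brand-new author could never submit a first paper, since reviewing is the only way to earn credits.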

I think a Pubcred system is absolutely essential to the forward progress of modern science, and I whole-heartedly support this proposal (and signed the petition). At the same time, I think there is a second problem worth thinking about that is related to the proliferation of articles.

Namely, the review and re-review cycle. We all start by submitting to the highest-impact journal we think will take our article. This can lead to a cycle of review and re-review that drains time and energy from reviewers, and it can be gamed by authors who do not revise before resubmitting (who among us has not seen this happen?).

For this reason, at a minimum, the sharing of rejection reviews from journal to journal – with authors required to respond to them – is *ESSENTIAL* to the Pubcred system working. On the other hand, Pubcreds will require a large co-ordinating effort among journals, many of which are published by different organizations. If we are going to go to that trouble anyway, one wonders whether a system where authors submit articles to a common reviewing pool, and journals select articles after review and revision (asking for any additional revisions as needed), as proposed by Stefano Allesina, might be even more efficient.

Then again, let’s come back to the real world. Such a system would require a sea change in the world of academic publishing, and I don’t think we’re there yet. The Pubcred bank will face its own journal-compliance hurdles, requiring multiple publishers to agree and co-ordinate their actions – no small feat, although given its technical simplicity and huge benefits to journals, the task will hopefully be minor. Implementing Pubcreds gets us a good part of the way there and begins to tackle what is rapidly becoming a large problem lurking in the background. It won’t solve everything (or maybe it will!), but it should certainly staunch the current tide of problems.

So please, read the article, and if you agree, go sign the petition already!

Update: For more thoughtful discussion see this post at Jabberwocky Ecology and a thoughtful response by Fox and Petchey.

Fox, J., & Petchey, O. (2010). Pubcreds: Fixing the peer review process by “privatizing” the reviewer commons. Bulletin of the Ecological Society of America, 91(3), 325-333. DOI: 10.1890/0012-9623-91.3.325

Allesina, S. (2009). Accelerating the pace of discovery by changing the peer review algorithm. arXiv: 0911.0344v1

Who Am I? Job Hunting Edition

As I am in the thick of job-hunting season, I’m writing a few applications. One of the most interesting parts of the whole process is the research statement. It’s interesting because it forces one to be really self-reflective and ask, “Who am I, and why am I doing this?” The first paragraph in particular is a sort of declaration. Hopefully this will be the last one of these I write, but who am I kidding? So, for posterity (and perhaps to keep track of how I grow as a scientist), here’s my first paragraph from job apps in the fall of ’08:

As an ecologist, I seek to understand the complex interaction of different ecological forces in communities. Ecology is replete with paradigms that seemingly conflict. Do extinctions allow invasion, or are invaders driving native species extinct? Does diversity drive ecosystem function, or does the functioning of an ecosystem drive diversity? Do top-down or bottom-up processes regulate food webs? My primary interest is tackling these questions head-on and showing that they are false dichotomies. Rather, conflicting processes are two sides of the same coin. I seek to explore how they interact, and provide a more holistic view of the natural world. Furthermore, my research seeks to provide a clearer view of how ecosystems function and respond in the face of multiple human impacts. When managers focus on any one human impact in isolation, they can reach incorrect conclusions. My hope is that my research can be used to understand the complex nature of biological responses to management and restoration actions.