*Note: this was derived from a combination of emails between my former PhD advisor and me. See if you can pick out who is arguing what and where. It’s fun – well, for some of you, anyway.

How do we know the world?

This is a seemingly simple and vast question – one with no answer. And yet, it is at the core of every single scientific endeavor. We make choices. We frame it in pretty language – that we are designing specific tests of mechanistic hypotheses in order to better parameterize our quantitative or qualitative models of natural processes.

Well, that language does no one any favors. (Or, as one colleague of mine is fond of saying, “Horseshit!”)

We are grappling with the simple issue of how we can gain real true knowledge of the world around us. Now that’s a meaty quandary.

As a PhD student, you begin to wrestle with this problem intuitively, but using the pat answers given to you by the mentors around you. As you grow up, as it were, one day you have to take ownership of it. And it can spawn some interesting discussions. Two papers, seemingly with little to do with each other, recently provided their authors’ takes on this age-old issue, albeit under two different guises. And while they are talking about different pieces of ecology, I’ve witnessed this argument playing out across vast swaths of the natural and social sciences. So, substitute in your discipline of choice in the discussion of these two papers.

The first, *Macroecology: Does it ignore or can it encourage further Ecological syntheses based on spatially local experimental manipulations?* by Bob Paine, is a harsh rebuke of the field of macroecology. Paine argues that we can only know the world by kicking the can, as it were. Small-scale manipulative experiments are the essential building blocks of knowledge. Macroecology, he argues, seeks to substitute large-scale observational analyses of unclear patterns and trends, with vague ideas behind them, for detailed mechanistic insights.

But what about those detailed models? Or their application to large-scale experiments? Ellison and Dennis argue in their recent piece, *Paths to statistical fluency for ecologists*, that it’s time for ecologists to grow up statistically. We’re still grounded in the agricultural statistics of the 1940s and ’50s, they say, and statistics has evolved enormously since then. It’s time for us both to get more statistically savvy and to require a strong mathematical and statistical education for every incoming ecology student. This is true not only for big observational analyses, but also so that we can design and properly analyze better experiments.

These papers and the different camps they represent have a lot to say to one another, really. At first, they may seem to be in disagreement, but there’s an intriguing core underneath both.

An overly zealous devotee of the Ellison and Dennis logic would likely read Paine and argue that his is an argument that is stuck. That the low-hanging fruit of strong interactions easily detectable by simple experiments has largely been, well, picked. Worse, we have made a mistake – one that has haunted us from forest to fisheries management. The world is a complex place. One can look at one particular mechanism functioning within a simplified world in one small corner of it and derive some truth. But in the real world, processes modify each other or change in strength – often in sharp and unpredictable ways – across vast swaths of space and time, in ways experiments alone can never detect. Similarly, there are vitally important patterns and processes in nature that cannot be tested in a square-meter plot. Statistical models informed by nature may be the only way to tease apart natural variation and divine real meaning. But, as ecologists, we lack the skills and training to do this correctly – or, worse, to tell when our fellow colleagues are misapplying modern statistics. This must change.

Conversely, a Paine partisan might read the Ellison and Dennis paper and argue that many of our most important insights have come from basic studies where the result is easily observed and convincingly related without fancy statistics. To play devil’s advocate (and make sure to see Paine’s note regarding his own soul in his acknowledgements – it cracked me up), one could write a rebuttal to Ellison and Dennis that argued the converse: complex statistical analysis is a crutch. We need to focus our work on elucidating patterns and processes that are plainly observable. These are likely to be the strongest and most important drivers in natural systems. And if they’re not that strong, doesn’t a fancy statistical result showing that they are distinguishable from expectations or from a null distribution only serve to allow the publication of essentially meaningless results?

This is not an argument against statistical fluency, mind you, but for many there is an inverse relationship between the complexity of the statistics used and the believability of the results. If ecology is interested in influencing management decisions, policy, etc., complex statistical analyses need to be reduced to simple depictions or the results won’t be usable. Will training our students in math/stats classes designed for engineers really help that goal?
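To make that “significant but meaningless” worry concrete, here’s a toy back-of-the-envelope sketch (my own illustration – the numbers and the `t_stat` helper are hypothetical, not from either paper). For a fixed, biologically trivial standardized effect, the two-sample t statistic grows with sample size, so enough replication will push even a trivial difference past the usual 1.96 threshold:

```python
import math

def t_stat(d, n):
    """Two-sample t statistic for a standardized effect size d
    (difference in means / SD) with n observations per group.
    The standard error of the difference is sqrt(2/n) in SD units."""
    return d / math.sqrt(2.0 / n)

def n_for_significance(d, crit=1.96):
    """Smallest per-group n at which effect d crosses the critical value."""
    return math.ceil(2.0 * (crit / d) ** 2)

# A biologically trivial effect: 0.02 standard deviations.
tiny = 0.02
print(t_stat(tiny, 50))          # ~0.1  -- invisible in a small experiment
print(t_stat(tiny, 200_000))     # ~6.3  -- "highly significant"
print(n_for_significance(tiny))  # 19208 samples per group will do it
```

The arithmetic is fine; whether a 0.02-SD difference matters ecologically is exactly the question the Paine partisan is asking.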

I mean, who DOESN’T want incoming students to be more quantitatively literate than we are? But is the program by Ellison and Dennis the way to go? Or will their proposed curriculum extend graduate school by another year and likely deter many good students from applying (“I want to DO ecology, not sit in a classroom taking math classes pitched at engineering students!”)?

Clearly, there is an in between – a sweet spot.

The fallacy of Ellison and Dennis’s argument for training rests in what happened to me in my first quarter of Alan Hastings’s excellent *Mathematical Methods in Population Biology* class. I suddenly understood calculus. In blinding clarity, calculus and linear algebra jumped off the page and said, “Hello! Aren’t we awesome!” It was a similar clarity that gripped me while reading Ben Bolker’s book – a clarity I do not get from reading most of the primary, non-ecological sources when I seek statistical wisdom.

To force students to take engineering/maths classes to understand these things is but a stopgap. The real goal should be to have ecologists teaching applied math/stats/comp-sci classes that meld these concepts with ecology and the practical analysis of data. For example, I am super-excited about this new book. Imagine combining that with Bolker’s text, or even Gelman and Hill. What wonderful team-teaching opportunities there are! And not just for grad education – what unique classes we could design that build the statistical and quantitative literacy of undergraduates. These are concepts that are necessary far outside the realm of ecology, but by teaching them in an ecological context, we can give students an entry point they would not otherwise find.

Or, at least, that’s my utopian/give-me-a-job view.

This same chain of logic is true of the Paine paper. Microecology is great. But even he admits at the end of the paper that the REALLY strong kind of inference is derived from small-scale experiments replicated across large scales and/or coupling small-scale experiments with observational data. The former is what I’ve read Bruce Menge and others call the ‘comparative experimental’ approach, and what Jon Witman defines as ‘experimental macroecology’. The latter is similar but actually requires the techniques Ellison and Dennis would have us use.

This is a slim point of agreement, but this combined approach really is very compelling. Kick the system. But never trust your results unless they hold true in the world – using the best inference possible. If they do not, ask why. Kick the system again – but do it everywhere. And if you’re going to do such an experiment right, here in this crazy, complex world, you will need more sophisticated machinery than we have been taught to use.

It all leads to some funny places – like my desk. If you were to look at my hellaciously disorganized desk right now, on one side you’d find papers detailing conditional likelihood (a non-parametric version of maximum likelihood) and some scribbles of pseudo-code on its relevance and how to implement it for complex structural equation models. On the other side is a copy of Morris, Abbott and Haderlie sitting next to a copy of Leighton’s thesis on consumption rates and nutritional demands of purple urchins and abalone.

And damn if I don’t feel like that will all get me closer to Truth with a capital T in my own work. At the same time, this is not the way for everyone. It takes all kinds – from the most math/stats-y of us to those who live and die by their weed-whacker. And when all of these kinds of scientists agree, well, that’s when you know you’ve got something worth paying attention to.

Paine, R. (2010). Macroecology: Does It Ignore or Can It Encourage Further Ecological Syntheses Based on Spatially Local Experimental Manipulations? The American Naturalist, 176(4), 385-393. DOI: 10.1086/656273

Ellison, A., & Dennis, B. (2010). Paths to statistical fluency for ecologists. Frontiers in Ecology and the Environment, 8(7), 362-370. DOI: 10.1890/080209