# A Probabilistic Look at Predators, Prey, and Extinctions

This is part of a larger series of open notebook posts about how food web structure modifies the effects of predator extinctions. For an introduction and list of other posts, see here.

To begin tackling the question of “How does the structure of a food web influence the consequences of extinction?”, let’s begin by thinking about how changes in the number of predator species in a simple 2-level predator-prey food web can influence 1) the probability of prey being eaten and 2) the amount of energy transferred to the predator group. Looking back at our “Master Food Web”, we’re zooming in on, say, the consequences of extinction in group E for group C.

Let's zoom in on one part of the general food web

I’m going to begin by thinking about whether we can solve this problem if we know the specific structure of a food web. Later on, I’ll start talking about working from food web network properties such as degree distribution.

The first thing to realize is that we’re talking about probabilities. What’s the probability of a prey item being eaten? What’s the probability of energy getting to the predator trophic level? It’s this probabilistic thinking that’s the centerpiece of the framework I’m going to lay out.

## A Probabilistic Framework for Food Webs and Predation

Let’s establish some ground rules for this framework. For an arbitrary topology (i.e., network structure), we can calculate the probability of an individual prey species being eaten quite simply. If there are any links between a prey item and any predator, that probability is 1. If there are no links, it is 0. Let’s call this p(eaten). Control of a group of prey species can therefore be described as the average value of p(eaten) across the whole group of prey. If everyone has a predator, it’s 1. If no one has a predator, it’s 0. If half of the species have a predator, it’s 0.5. And one should be able to calculate variance, etc., from that information.
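
To make this concrete, here’s a minimal sketch in R (my own illustration, with a made-up web, not part of the framework itself): given a binary adjacency matrix with predators as rows and prey as columns, the group-level p(eaten) is just the fraction of prey with at least one incoming link.

```r
#Given a binary adjacency matrix (predators as rows, prey as columns),
#a prey item's p(eaten) is 1 if it has any incoming link, 0 otherwise.
#The group-level value is the average across all prey.
pEatenFromWeb <- function(adj) {
  mean(colSums(adj) > 0)
}

#A made-up web: predator 1 eats prey 1; predator 2 eats both prey
web <- rbind(c(1, 0),
             c(1, 1))
pEatenFromWeb(web)  #every prey has a predator, so p(eaten) = 1
```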

And just like that, we have a new network metric: p(eaten), which, really, is p(connected to the network by an incoming edge) if you want to get technical.

Before we jump into numbers, let’s look at some examples of how this p(eaten) metric works. First, a simple system with 2 predators, 2 prey. One predator eats one prey. One predator eats two prey. What’s the average p(eaten) if 1 predator goes extinct?

The above figure shows 2 different possible configurations. When the generalist is knocked out, one prey item escapes consumption. p(eaten) for that extinction scenario is 1/2. When the specialist is knocked out, neither prey item escapes consumption. p(eaten) is 1. So, what’s the average probability that a prey item will be eaten if 1 predator is going extinct? It’s just the average of results from the two scenarios: 0.75.
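
If you’d like to check that arithmetic yourself, here’s a quick brute-force sketch in R (my own check, not part of the formal framework): enumerate every possible set of E extinct predators and average p(eaten) across the scenarios.

```r
#Enumerate every way E predators can go extinct and average p(eaten)
#over the extinction scenarios.
avgPEaten <- function(adj, E) {
  Sp <- nrow(adj)
  scenarios <- combn(Sp, E)  #each column = one set of extinct predators
  mean(apply(scenarios, 2, function(extinct) {
    survivors <- adj[-extinct, , drop = FALSE]
    mean(colSums(survivors) > 0)  #p(eaten) for this scenario
  }))
}

web <- rbind(c(1, 0),   #specialist: eats prey 1 only
             c(1, 1))   #generalist: eats both prey
avgPEaten(web, E = 1)   #(1/2 + 1)/2 = 0.75, as in the text
```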

To hammer this home, consider the figure below for a three predator, three prey food web with two specialists and one generalist. I’ve drawn all possible configurations for both the 1 predator extinction and the 2 predator extinction scenarios. Underneath each scenario is its p(eaten) and in bold to the left is the average p(eaten) for that number of extinctions.

## A Hypergeometric Approach to Predator-Prey Relationships

Whoah. You said Hypergeometric.

OK, so, the framework should be fairly clear at this point. Now, for that arbitrary topology, what will be the probability that a prey item will be eaten if some number, E, of predators go extinct? First, we can calculate this for a single prey item – let’s call it prey item i – if we know the number of predators who eat it: the prey in-degree, Di.

Let’s say there are Sp predators. Some number of them, Di, eat a single prey. We want to know: if E predators go extinct, what is the probability that none of the predators that remain eat our prey item? This is p(not eaten). And then 1-p(not eaten) = p(eaten) for our little prey item.

OK, so, given all of the possible combinations of predator extinctions, what is p(not eaten)? How do we find it?

This is actually a special case of the hypergeometric distribution. Remember it from intro probability and statistics? No? OK. So. Briefly, let’s say you are drawing balls from an urn. Some are white, some are black. What’s the probability, if you draw X balls, that N of them will be black? This can be expressed as dh(N; # black, # white, X), since we’re using the distribution’s probability density function. If you want to get into the details of it, see here.
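
Here’s that urn in R, with a made-up urn of 4 black and 6 white balls (see ?dhyper – R’s argument order matches the dh(N; # black, # white, X) notation above):

```r
#Drawing X = 3 balls from an urn with 4 black and 6 white balls,
#what's the chance exactly N = 2 of them are black?
#This equals choose(4,2)*choose(6,1)/choose(10,3) = 0.3
dhyper(2, 4, 6, 3)
```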

In the case of our prey item, N=0 – no predators are left that eat the prey. The number of draws is the number of species left after extinction: Sp-E. Black balls are predators of our prey item (Di), white balls are those that are, well, not. So, the probability that an individual prey item will not be eaten given E extinctions is

p(eaten | E) = 1-dh(0; Di, Sp-Di, Sp-E)   (1)

To determine the average probability that a prey item will be eaten, we just average this over all prey.

And if you want to get all gory with this, we can actually write down a function for the probability of being eaten given E extinctions:

$p(eaten \mid E) = 1 - \overline{ \frac{ \binom{S_{p}-D_{i}}{S_{p}-E} }{ \binom{S_{p}}{S_{p}-E} } }$

N.B. In putting this together, I’ve also realized that we can use a slightly more intuitive general equation (but the gory details version is uglier). That is, rather than thinking about the probability of having 0 predators left after E extinctions, we can ask: what is the probability that all Di predators of a prey item are removed by extinction? This is still p(not eaten). And it leads to the following very similar, and potentially more understandable, equation. Any preferences in the peanut gallery?

p(eaten | E) = 1-dh(Di; Di, Sp-Di, E)   (2)
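
A quick numerical sanity check in R (with arbitrary made-up values of Sp, Di, and E) confirms that equations (1) and (2) give the same answer:

```r
#Made-up values: 10 predators, a prey with in-degree 3, 4 extinctions
Sp <- 10; Di <- 3; E <- 4
eq1 <- 1 - dhyper(0, Di, Sp - Di, Sp - E)  #eq 1: no predators of i among survivors
eq2 <- 1 - dhyper(Di, Di, Sp - Di, E)      #eq 2: all Di predators among the extinct
stopifnot(abs(eq1 - eq2) < 1e-12)          #the two formulations agree
```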

## A Last Note on Energy Transfer from Prey to Predators

The nice thing about this averaged function is that it is simultaneously the average probability that the prey trophic level will be under control by predators and the average probability that energy entering the food web via prey will make its way up to the predator trophic level. Basically, p(prey eaten) = p(energy gets to predator).

## A Final Note of Wonder and some R Code

So, the thing that amazes me the most about the results above is that it all hinges on the prey. One actually doesn’t need to know anything about the diet of individual predators. Instead, one only needs to know how many things eat each prey item. This makes the framework easy to code up computationally, and easy to simulate, as instead of coming up with all possible adjacency matrices, one can just look at all possible combinations of Di given some number of total predators. To demonstrate how this can be nice, here’s the R code I use to calculate p(eaten):

```r
#Sp is the diversity of the predators
#E is the number of extinctions
#prey.vec is the in-degree (# of predators) of each prey species

pEaten <- function(Sp, E, prey.vec) {
  #see ?dhyper for more on hypergeometric distributions in R
  1 - mean(dhyper(prey.vec, prey.vec, Sp - prey.vec, E))
}

#If you find the 0 predators remaining formulation more intuitive
pEaten2 <- function(Sp, E, prey.vec) {
  1 - mean(dhyper(0, prey.vec, Sp - prey.vec, Sp - E))
}
```


Great, and thus end-eth the big information boom. This is the kind of thinking that will underlie everything as I move forward, so read and digest this. If there is something that isn't clear, let me know. Once you start to think about the probability of connections, I think it becomes a good bit more transparent. I'll talk validation and generalization to statistical network probabilities in my next post.

# Food Web Network Structure and Extinction: The Start of an Open Notebook

So, we know that species are going extinct at a pretty stunning rate, mostly due to human activities. The natural question is, will this affect the function of the natural world? You may well say ‘Duh! Of course!’ as a first instinctive pass, but the issue isn’t so clear cut – will species that survive simply take up the slack? What’s the value of a ‘species’ anyway?

Starting in the late ’90s, the field of diversity-function research has tackled this topic, largely using manipulations of plant species number. And the results are pretty conclusive – when you change plant diversity, you affect how the natural world works.

But note I said plants.

A bunch of us in the early to mid ‘aughts wondered if changes in the number of top predator species, or herbivores, or intermediate predators – species other than plants and algae – might alter the way the natural world worked in an analogous way. Emmett Duffy outlined a number of reasons we could expect changes in diversity at different trophic levels to produce either the same results as changes in plant diversity – or maybe not!

So we went out and did the experiments, and found – well, sometimes diversity affected function one way, sometimes another. It all seemed to depend on something about each individual experiment. I was involved in this by examining whether losses of predator diversity affected the impact of herbivores on their plant or algal prey – so-called trophic cascades.

And in looking at the relationship between predator diversity and trophic cascades we really did see every kind of result one could imagine.

Fortunately, there seemed to be some predictability here, which can be seen both by comparing different experiments and by looking at some of Deborah Finke’s awesome work in a variety of systems: one could predict the effect of changing levels of diversity if one knew the relative number of omnivores, specialists, or the amount of within-trophic level predation (so-called intraguild predation). But these insights were all pretty qualitative. There’s no real quantitative guide here as to when diversity will do what.

Leaving this, I went off and did some work looking at how climate change may alter the network structure of food webs. And promptly did a palm-to-the-forehead. Food web network ecology has done a brilliant job of deriving metrics to describe the structure of, well, food webs. And the structure of food webs seems to influence the effects of species going extinct on trophic cascades or any other function one would care to measure.

Clearly, these two fields needed to come together.

So this is what I’m doing for my postdoc here at NCEAS. I am slowly but surely trying to figure out how to link food web network theory with biodiversity-ecosystem function.

A general food web to consider for other entries in this series.

What’s my goal? Simple. Look at the food web to the left. In it, different trophic levels have different colors. But, heck, even within a trophic level, we may split things into finer trophic groups based on their diet and types of interactions – something we do all the time qualitatively (e.g., by saying there’s a brown food web of detritivores and a green food web of consumers of living tissue) and can even now do quantitatively.

What I want to be able to do is say, let’s take this food web. If we know some structural properties of the web, can we then predict the effects of a change in diversity within any trophic group on the flow of energy and control of consumption within the web?

For example, if some number of species go extinct in F, will consumption of A increase or decrease? If some number of species in C go extinct, will G accumulate more or less energy?

I realize this doesn’t take some important things into account – interaction strengths or the abundance of each individual species. I think the former can be folded in later. I’d also note that I am trying a very different approach than qualitative modeling and think that problems of indeterminacy in predictions may be dealt with by using a probabilistic framework from the start. With respect to abundances – I’m hoping the results can translate into predictions of biomass, but, we shall see…

In the coming weeks, I’m going to try and open up my lab notebook, and lay out the theory I’m developing to answer these sorts of questions. I’ll be honest, I’m doing this for myself as much as anything. I have a lab notebook full of scribbles – some blind alleys, some promising leads. Writing it out will force me to focus my arguments and spot weaknesses (or have you spot weaknesses)! I’ve got some of this nailed, and some of it I may stumble around a bit with. And, heck, I’m always game to hear thoughts from the peanut gallery.

As I answer different pieces of the puzzle, I’ll put links to them in this entry. So, let’s start this open notebook and see where it goes!

# Seeing Through the Measurement Error

I am part of an incredibly exciting project – the #SciFund Challenge. #SciFund is an attempt to have scientists link their work to the general public through crowdfunding. As I’m one of the organizers, I thought I should have some skin in the game. But what skin?

Well, people are pitching some incredibly sexy projects – tracking puffin migrations, coral reef conservation, snake-squirrel interactions (WITH ROBOSQUIRRELS!), mathematical modeling of movements like Occupy Wall Street, and many many more. It’s some super sexy science stuff.

So what is my project going to address? Measurement error.

WOOOOOOOOO MEASUREMENT ERROR!

But wait, before you roll your eyes at me, this is REALLY IMPORTANT. Seriously!

It can change everything we know about a system!

I’m working with a 30 year data set from the Channel Islands National Park. 30 years of divers going out and counting everything in those forests to see what’s there. They’ve witnessed some amazing change – El Niños, changes in fishing pressure, changes in urbanization on the coast, and more. It’s perhaps the best long-term large-scale full-community subtidal data set in existence (and if there are better, um, send ‘em my way because I want to work with them!)

But 30 years – that’s a lot of different divers working on this data set under a ton of different environmental conditions. Doing basic sampling on SCUBA is arduous, and given the sometimes crazy environmental conditions, there is probably some small amount of variation in the data due to processes other than biology. To demonstrate this to a general non-statistical audience, I created the following video. Enjoy watching me in my science film debut…oh dear.

OK, my little scientific audience. You might look at this and think, meh, 30 years of data, won’t any measurement error due to those kinds of conditions or differences in the crew going out to do these counts just average out? With so much data, it shouldn’t be important! Jarrett just wanted an excuse to make a silly science video!

And that’s where you may well be wrong (well, about the data part, anyway). I’ve been working with this data for a long time, and one of my foci has been to try and tease out the signals of community processes, like the relative importance of predation and grazing versus nutrients and habitat provision. Your basic top-down bottom-up kind of thing. While early models showed, yep, they’re both important, and here’s how and why, some rather strident reviewer comments came back and forced me to rethink the models, adding in a great deal more complexity even to the simplest one.

And this is where measurement error became important. Measurement error can obscure the signal of important processes in complex models. A process may be there, may be important in your data, but if you’re not properly controlling for measurement error it can hide real biological patterns.
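
If you want to see this for yourself, here’s a toy R simulation (entirely made-up numbers, not the Channel Islands data): noise in how we measure a predictor pulls its estimated coefficient toward zero, a phenomenon known as attenuation.

```r
#Toy example of attenuation: a true grazer -> kelp effect of -0.5
#gets shrunk toward zero once we only observe noisy grazer counts.
set.seed(42)
grazers <- rnorm(1000)                         #"true" grazer abundance
kelp <- -0.5 * grazers + rnorm(1000, sd = 0.5) #kelp responds to true abundance
grazers_obs <- grazers + rnorm(1000, sd = 1)   #noisy diver counts

coef(lm(kelp ~ grazers))["grazers"]            #close to the true -0.5
coef(lm(kelp ~ grazers_obs))["grazers_obs"]    #attenuated toward 0
```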

For example, below is a slice of one model done with two different analyses. I’m looking at whether there are any relationships between predators, grazers, and kelp. On the left hand side, we have the results from the fit model without using calibration data to quantify measurement error. While it appears that there is a negative relationship between grazers and kelp, there is no detectable relationship between predators and grazers (hence the dashed line – it ain’t different from 0).

This is because there is so much extra variation in records of grazer abundances due to measurement error that we cannot see the predator -> grazer relationship.

Now let’s consider the model on the right. Here, I’ve assumed that 10% of the variation in the data is due to measurement error (i.e., an R2 of 0.9 between observed and actual grazer abundances). So, I have “calibration” data. This error rate is made up, just to show the consequences of folding the error in to our analysis.

Just folding in this very small amount of measurement error, we get a change in the results of the model. We can now see a negative relationship between predators and grazers.

I need this calibration data to ensure that the results I’m seeing in my analyses of this incredible 30 year kelp forest data set are real, and not due to spurious measurement error. So I’m hoping wonderful folk like you (or people you know – seriously, forward http://scifund.rockethub.com around to everyone you know! RIGHT NOW!) will see the video, read the project description, and pitch in to help support kelp forest research.

If we’re going to use a 30 year data set to understand kelp forests and environmental change, we want to do it right, so, let’s figure out how much variation in the data is real, and how much is mere measurement error. It’s not hard, and the benefits to marine research are huge.

# The Story Behind the Paper: Climate Change and Kelp Forest Food Webs

Woohoo!

So, what have I been doing for the past few years of my life?

In brief summary: Kelp. Food webs. Climate change. A potent combination.

And if you want the punchline without digging into the rest of this article, it is this: the diverse, complex food webs of southern California kelp forest will likely be greatly simplified if climate change leads to big storms every year.

I could give you all of the gory details of the paper, how I arrived at that conclusion, the science-y nuggets, etc. Instead, I thought I’d give you the slightly longer, more human, but more meander-y version of how this paper was created – the story of this particular story. It’s not something we always talk about in science. Science isn’t always a nice linear process. What we set out to do is not always what ends up happening. In the end, though, we want a nice linear story that leaves the juicy bits of exploration out. And the process behind this paper was, indeed, intellectually juicy.

So, how does a paper on kelp forests, food webs, and climate change, come to be?

Like all projects in my life, this was not one I was expecting to do. It’s kind of a theme for me. I came to work at the Santa Barbara Coastal Long-Term Ecological Research site to work on potential feedbacks between the diversity of life on the seafloor and grazing pressure by urchins. Essentially, I wanted to test a theoretical model that I’d put together with colleagues in a 2007 paper. I was convinced that feedbacks between diversity and function were going to be the next big thing. And I still think they’re pretty neat. The experiment was pretty fun, led me to revisit some old concepts in new ways, and ultimately produced some great data which is going to be submitted soon.

Five year running average of extreme (i.e., storm-driven) winter non-tidal residual wave heights from the San Francisco Tide Gauge. Starting ~1945, we see an increase. Figure adapted from Bromirski et al. 2002 J. Climate.

But early on in all of this, the project PI called me to his office and laid out the following.

The SBC LTER has a buttload (metric, not Imperial) of data. They’ve been sampling thirty-five 80 m2 transects at reefs along the Santa Barbara Channel coastline every summer for nearly a decade, counting the heck out of everything. At the same time, we’ve noticed two interesting things in the climate literature: 1) climate change projections say that the strongest storm of the winter should get bigger in the future, and 2) if you look at the data, the largest winter waves in California have gotten bigger over the last 50 years.

We know that big waves rip out kelp, with big consequences for life on the seafloor.

We don’t know what will happen if kelp is ripped out every year, or lost altogether.

The LTER had started a project to simulate this annual storm disturbance – big 200m2 plots where we went through and trimmed the giant kelp every winter with hedge clippers. But, coming up on year 2, no one had an idea of how to put the whole story together.

Hello, opportunity, how’s it going?

My facebook network. My mom is actually fairly well connected - but mostly to my High School cluster.

I’d long been fascinated with a network approach to studying communities. Basically, you can visualize life on the seafloor by thinking about Facebook. No, really! There are a ton of tools out there you can use to visualize your network of friends, with the links between them showing friendships, and you at the top. There’s a ton of information there. Who are the hubs? Who connects disparate groups of friends? How does the size of, say, your group of college friends influence the number of other random groups of friends you’ve accumulated through life?

And who has your mom friended, anyway?

OK, now, instead of friendship, think of your friends as eating each other. And you’re at the top!

OK, ok, now replace the people in your network with different species. And you’re a shark. Or giant squid. (or theoretical giant mutant Pycnopodia). Voilà! Food web! And all of that structure and complexity has real meaning describing the stability and function of a community of organisms. (well, ok, the function part is what I’m tackling in my current postdoc at NCEAS)

So, I went back to my PI and said, OK, hey, let’s take a look at how changing the annual frequency of storms can shift the network structure of kelp forest food webs. It would be an indirect effect, so we can use my favorite tool, Structural Equation Modeling, for the analysis. And we can even bring in two awesome other bits of data – transect-level wave height projections from the Coastal Data Information Program and awesome new measurements of kelp beds right after storm season taken by satellite (and developed by Kyle Cavanaugh – an awesome grad student at UCSB who uses Landsat images to count kelp).

He said, sure! But, I might want to see about that food web. You see, no one has actually put together a full kelp forest food web. Or even one for the 250 species that we sample. So, can this be done? Really?

And thus began a 6 month quest. Of living and dying by Google Scholar. Of talking to experts. Of driving up and down the coast to marine labs, riffling through their libraries of unpublished masters theses, or appendices to undergraduate student reports. Just to find out who, indeed, eats whom.

It’s that kind of basic natural history that is necessary to inform sexy fun theory-based analyses. And without it, the sexy-fun-stats-nerdery is really meaningless.

I emerged with a nice solid web, a good sense of uncertainty, and a decent idea of how I’d put these models together. The next part was shockingly simple. Use plyr to smoosh our data with a master kelp-forest food web to get individual transect-level food webs (e.g., what is the structure of a place with two seaweeds, an urchin, and a lobster, versus a full-blown hyper-diverse kelp forest?). And then use Structural Equation Modeling to fit models that looked like so:
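
In case the smooshing idea is opaque, here’s a cartoon version in R (hypothetical species and links, not the real master web): a transect’s food web is just the master web filtered to the links where both predator and prey were observed on that transect.

```r
#A toy master web as an edge list of who-eats-whom
master.web <- data.frame(pred = c("lobster", "urchin", "urchin"),
                         prey = c("urchin", "kelp", "turf"))

#Species actually counted on one transect (no turf here)
transect.spp <- c("lobster", "urchin", "kelp")

#Keep only links where both ends were observed on the transect
subweb <- subset(master.web, pred %in% transect.spp & prey %in% transect.spp)
nrow(subweb)  #2 links survive on this transect
```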

Path diagram of an SEM showing how waves indirectly influence the species richness and linkage density of a kelp forest food web.

This is all very interesting, and one can contrast the strength of different indirect pathways by which waves influence kelp. It was not immediately intuitive, however, what this means for the future of kelp forests under a climate change scenario with annual large storms. Fortunately, as I was fitting Structural Equation Models, which are really just a system of linear equations, I could turn my models into dynamic simulations.

Yes yes. Lots of coding.

I then used these simulations to make predictions about how increasing the annual frequency of large storms will affect the network structure of kelp forest food webs. I could reproduce the table of results for you, and discuss each individual result, but I think the following image more or less sums it up.

Basically, frequent large storms will simplify food webs in the end. What’s interesting, though, is that just one storm after years of calm – our current scenario – may actually increase complexity. Everything gets a little mixed up as sunlight streams in and lets suppressed algae establish a beachhead, even while top predators may decrease in diversity. And, shockingly, results from our large-scale field experiment – at least the first two years – appeared to match this pattern beautifully.

And thus, blammo! Publication!

Well, OK, no. Honestly, after the initial results it took a few meetings with my co-authors to get everyone on the same page. In no small part this was due to the atypical methods I was using. Also, while the final paper has about nine or ten different measures of community complexity in it, my initial analyses looked at about thirty. I had some winnowing to do in order to establish a good story. A good clear story is king. What is a scientific paper, after all, other than good storytelling backed up by data and then confused with jargon?

After we all got on the same page (and I had tried my story out via a few talks and posters), I wrote it up for Science. Because, well, why not? Even so, it took multiple rounds of revision before submission. And sweet sweet rejection. Thus followed attempts to submit to three other journals (What? I wanted to try and be in one of the glossy magazines! I’m thinkin’ about my career here, and other postdocs will back me up on this!) and a major re-write for the format of each. Then, finally, I realized that GCB really was a logical and perfect fit for the piece, and the reviews I got were most helpful in clarifying the last few pieces.

In the end, I’m a pretty proud Papa on this one. I think it’s a nice solid piece of science. It’s got a massive chunk of natural history in it, filling what I see as a key gap. It uses some fancy-pants statistics – and allowed me to go on a deep statistical odyssey in learning the ins and outs of some arcane parts of SEM, such that I’m now an SEM package developer in R. And it coupled the analysis of a smokingly hot large-scale observational dataset (go-go LTER power!) with an intense and awesome mongo-effort field experiment (Clint, Shannon, and Christine, you guys are underwater animals!) It’s basically everything I want in a paper. And yet, it all coalesces around a single story:

The diverse, complex food webs of Southern California kelp forest will likely be greatly simplified if climate change leads to big storms every year.

BYRNES, J., REED, D., CARDINALE, B., CAVANAUGH, K., HOLBROOK, S., & SCHMITT, R. (2011). Climate-driven increases in storm frequency simplify kelp forest food webs Global Change Biology, 17 (8), 2513-2524 DOI: 10.1111/j.1365-2486.2011.02409.x

# My Dissertation in Under 7 Minutes

I recently attended the DISCCRS symposium for recent PhDs of a wide variety of disciplines whose work (past or present) deals with climate change. The week-long meeting was phenomenal, seeding me with thoughts, ideas, and basically making me feel quite good about the work I’m doing (if also very pessimistic about how society is dealing with Climate Change). Perhaps one of the most interesting exercises of the whole thing was something we had to do as a sort of getting-to-know-you. We had to present our dissertation in 7 minutes.

That’s right. 6 years of blood, sweat, and tears in 7 minutes. Oh, and for a totally non-specialist audience.

I thought this was an amazing challenge. Granted, I only ended up really presenting on 3 of my chapters (see papers below). But I really liked the results.

Apologies for the sound quality – the acoustics of the room were pretty bad. Props to The Urban Matador for some sound editing. And malaprops to me for not enunciating or projecting as much as I usually do. Bad scientist. Bad! Bad!

Also, please note that this is with a particular emphasis on how our work relates to climate change.

Byrnes, J., Reynolds, P., & Stachowicz, J. (2007). Invasions and Extinctions Reshape Coastal Marine Food Webs PLoS ONE, 2 (3) DOI: 10.1371/journal.pone.0000295

Byrnes, J., Stachowicz, J., Hultgren, K., Randall Hughes, A., Olyarnik, S., & Thornber, C. (2005). Predator diversity strengthens trophic cascades in kelp forests by modifying herbivore behaviour Ecology Letters, 9 (1), 61-71 DOI: 10.1111/j.1461-0248.2005.00842.x

Byrnes, J., & Stachowicz, J. (2009). The consequences of consumer diversity loss: different answers from different experimental designs Ecology, 90 (10), 2879-2888 DOI: 10.1890/08-1073.1

# I’ve Got the Power!

There is nothing like the pain of taking a first look at fresh precious new data from a carefully designed experiment that took months of planning and construction, 8 divers working full days, and a lot of back-and-forth with colleagues – and finding absolutely no patterns of interest.

Yup, it’s a real takes-your-breath-away-hand-me-a-beer kind of moment. Fortunately, I was on my way to a 4th of July dinner with my family up in the mountains of Colorado, so I wasn’t quite going to commit hara-kiri on the spot. Still, the moment of vertigo and nausea was somewhat impressive.

Because, let’s admit it, as much as science is about repeated failure until you get an experiment right (I mean, the word is “experiment”, not “interesting-meaningful-data-generating-exercise”), it still never feels good.

So, round one of my experiment to test for feedbacks between sessile species diversity and urchin grazing showed bupkis. Not only was there no richness effect, but, well, there wasn’t even a clear effect that urchins mattered. High density urchin treatments lost about as much cover and diversity of algae and sessile inverts as treatments with low or no urchins in them. What had gone wrong? And could this project be salvaged?

Yeah, I can see a pattern here. Sure...

Rather than immediately leaping to a brilliant biological conclusion (that was frankly not in my head) I decided to jump into something called Power Analysis. Now, to most experimental scientists, power analysis is that thing you were told to do in your stats class to tell you how big your sample size should be, but, you know, sometimes you do it, but not always. Because, really, it is often taught as a series of formulae with no seeming rhyme nor reason (although you know it has something to do with statistical significance). Most folk use a few canned packages for ANOVA or regression, but the underlying mechanics seem obscure. And hence, somewhat fear-provoking.

Well, here’s the dirty little secret of power analysis. Really, at the end of the day, it’s just running simulations of a model of what you THINK is going on, adding in some error, and then seeing how often your experimental design of choice will give you easily interpretable (or even correct) results.

The basic mechanics are this. Have some underlying deterministic model. In my case, a model that described how the cover and diversity of sessile species should change given an initial diversity, cover, and number of urchins. Then, add in random error – either process error in each effect itself, or just noise from the environment (e.g., some normal error term with mean 0 and a standard deviation). Use this to get made-up data for your experiment. Run your stats and get parameter estimates, p values, etc. Then, do the same thing all over again. Over many simulations of a particular design, you can get average coefficient estimates, the number of times you get a significant result, etc. “Power” for a given effect is the number of “significant” results (where you define whether you’re going with p=0.05 or whatnot) divided by the number of simulations.
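
Here’s a stripped-down version of that recipe in R (with hypothetical effect sizes and error, not the estimates from my experiment): simulate data under an assumed urchin effect, fit a linear model, and count how often p < 0.05.

```r
#Power by simulation: assume a true urchin effect on percent cover,
#generate noisy data, fit the model, and tally significant results.
simPower <- function(n_reps, effect, sd_noise, n_sims = 500, alpha = 0.05) {
  urchins <- rep(c(0, 16), each = n_reps)  #low and high density treatments
  pvals <- replicate(n_sims, {
    cover <- 50 + effect * urchins + rnorm(length(urchins), sd = sd_noise)
    summary(lm(cover ~ urchins))$coefficients["urchins", "Pr(>|t|)"]
  })
  mean(pvals < alpha)  #power = fraction of significant runs
}

set.seed(1)
simPower(n_reps = 5, effect = -0.5, sd_noise = 15)  #weak effect: low power
```

From here, sweeping over n_reps (more cages) or over the treatment densities is just a loop around simPower.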

So that’s just what I did. I had coefficient estimates (albeit nonsignificant ones). So, what would I need to detect them? I started by playing with the number of replicates. What if I had just put out more cages? Nope. The number of replicates I’d need to get the power into the 0.8 range alone (and you want as close to 1 as possible) was mildly obscene.

So, what about the number of urchins? What if, instead of having 0 and 16 as my lowest and highest densities, I pushed the maximum much higher?

Bingo.

Shown above are two plots. On the left, you see the p value for the species richness * urchin density interaction for each simulated run at a variety of maximum urchin densities. On the right, you see the power (# of simulations where p<0.05 / total number of simulations). Note that, for the densities I worked with, power is only around 0.3. And if I kicked the maximum density up to 50, well, things get much nicer.
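The density scan behind the right-hand panel can be sketched the same way: let the simulated effect grow with the maximum urchin density and recompute power at each level. The linear slope, noise level, and replicate count here are all hypothetical, chosen only to show the shape of the curve, not the actual fitted model:

```python
import random
import statistics

def power_vs_max_density(dmax, slope=0.04, sd=1.0, n=5, nsim=2000, seed=1):
    """Power to detect an urchin effect when the treatment spans 0..dmax urchins,
    assuming (hypothetically) the effect on cover scales linearly with density."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(nsim):
        zero = [rng.gauss(0.0, sd) for _ in range(n)]           # 0-urchin cages
        high = [rng.gauss(slope * dmax, sd) for _ in range(n)]  # dmax-urchin cages
        se = (statistics.variance(zero) / n
              + statistics.variance(high) / n) ** 0.5
        t = (statistics.mean(high) - statistics.mean(zero)) / se
        hits += abs(t) > 2.0  # crude stand-in for p < 0.05
    return hits / nsim

# power climbs as the top treatment density climbs
for dmax in (16, 30, 50):
    print(dmax, round(power_vs_max_density(dmax), 2))
```

The point of the exercise is visible even in this toy version: the same number of cages buys far more power when the strongest treatment is strong enough to generate a detectable effect.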

As soon as I saw this graph, the biology of the Southern California Bight walked up behind me and whapped me over the head. Of course. I had been using a maximum density that corresponded to your average urchin barren: the density of urchins that can hang out and graze down any new growth. But that's not the density of urchins that CREATED the barren in the first place. Sure enough, looking at our long-term data, this new value of 50 corresponded to a typical front of urchins that would move through an area and lay waste to the kelp forest around it.

Which is what I had been interested in the first place.

So, round two is up and running now. The cages are stocked with obscenely high densities of the little purple spikey dudes. And we shall see what happens! I am quite excited, and hopeful. Because, I've got the Power!

(and so should you!)

# Mapping the Sasquatch

I love modeling! I love modeling! Modeling will solve everything!

Let’s model the spatial distribution of Bigfoot!

WAIT, WHAT?!

Figure 1 from the paper. Foot symbols denote sightings of Sasquatch footprints; circles denote visual/auditory sightings only. I ask, how does one know what Bigfoot sounds like?

Yes, it sounds silly, but in the current issue of the Journal of Biogeography, Lozier et al. give us their stunning contribution, Predicting the distribution of Sasquatch in western North America: anything goes with ecological niche modelling. Finally, all will be revealed. And for those wondering:

Sasquatch belongs to a large primate lineage descended from the extinct Asian species Gigantopithecus blacki, but see Milinkovitch et al. (2004) and Coltman & Davis (2005) for phylogenetic analyses indicating possible membership in the ungulate clade.

They do this to prove a point – that Ecological Niche Models for determining species ranges are amazing, invaluable conservation tools, really. But if the taxonomy of the data that goes into them is shoddy (like, say, calling a Black Bear a Sasquatch), the results will be, well, interesting.

They took data on sightings (see Fig. 1 above) from… the Bigfoot Field Research Organization, and then used the latest and greatest in Ecological Niche Modeling to determine, given environmental parameters, just where Bigfoot lives. And, under current climate change scenarios, where might we find Sasquatch in the future?

So cryptozoologists take note! Here is a veritable treasure trove of information as to where to place your next tripwire camera!

Where will bigfoot be in the future after climate change? Panel A shows current Sasquatch Distribution. Panel B shows its projected distribution under climate change.

In fairness, the authors use this dubious analysis to point out that, when we have a record of species occurrences that seems tidy and orderly, we often don’t question its taxonomic validity. The output of these models, vital to some conservation efforts, will only be as good as their input. Indeed, in this case, the authors find striking overlap with the (far less frequently observed) Black Bear (yes, people report sightings of Sasquatch more often than sightings of Black Bears). It’s a real problem, and the assessment of data uncertainty is a pressing issue for any method that attempts to draw inference from sparse data.

But, really, in the end, this is an Ig-Nobel award winner in the making. Bravo.

Lozier, J., Aniello, P., & Hickerson, M. (2009). Predicting the distribution of Sasquatch in western North America: anything goes with ecological niche modelling. Journal of Biogeography. DOI: 10.1111/j.1365-2699.2009.02152.x

# Going Topless with Urchins

There’s nothing so satisfying as pulling back and seeing your brand new experiment out there in the water.

It’s been a crazy week or three getting this up and running, but now my first big postdoctoral experiment is soaking in the water, with urchins grazing away.

I’m testing some ideas regarding how diversity mediates the impact of disturbance by urchin grazing, and vice-versa – how disturbance by grazing can alter diversity. In essence, I’m testing a model of a community feedback process based on a framework whipped together by Randall Hughes, myself, and a few other fabulous co-authors.

But even though your ideas may be high-up and lofty, they always meet some interesting realities on the ground. Reality point 1 – my god, we built a lot of large cages.

This is about 1/4 of the cages before deployment. The rest were in the water. Thank god for cheerful undergrad labor (fueled by brownies made from scratch – the key is to underbake them, and use a combination of eggs and egg yolks for extra gooey-ness). They look like such simple cheap affairs – some garden fencing, some PVC, some netting around the bottom… and then there’s about 1 ton worth of chain and half a ton of rebar stuffed into them. Subtidal work: unless it’s heavy, the waves will sweep it away.

Reality 2 – sometimes, you’ve gotta do it topless. Yes, the cages have no tops. This would seem the height of insanity if you want to keep something INSIDE. However, urchins appear to not like bendy flexy things. Sure, they’ll crawl up to the tops, but then they get to that wave strip at the margin, and freak out and freeze up. I’ve watched it. It’s kinda odd. And those cages that did have a top on them? That top, even if it’s mesh, creates a LOT of lift. So, a small wave washes by, and suddenly the cage top becomes an airplane wing. Unless you’ve added a huge amount of weight to your cage (see above), you may well never be able to find your cage again.

Reality 3 – nature is variable. Well, duh. See the two cages with two very different species compositions, some providing more or less biomass. I mean, the whole premise of this experiment was to use a natural gradient in species diversity as a treatment. But sometimes adding or subtracting one species can make a huge difference. Sampling (Reality 4 – ID-ing to the species level in the field on SCUBA gets pretty tedious after one hour, let alone 4 or 5) was pretty interesting, showing that large differences were indeed generated by position on the reef, local topography, and the like, as well as by whether, say, tiny sea cucumbers had colonized a plot, whether the plot was full of lush Pterygophora, or the presence of the squat thick gorgonian Muricea.

Reality 5 – hungry urchins are hungry. And devious. Upon addition of urchins to plots, they zoomed over to any brown algae (particularly the aforementioned Pterygophora or any juvenile giant kelp) and began munching in earnest. Some ran for the sides of the cages (and a few managed to squeeze out – Reality 6: the best laid plans of underwater mice and men… I’ll be doing some replacements this week with larger urchins). But the instant voracious consumption was really quite impressive.

I’m pretty stoked, and deeply curious as to how this will turn out. I’m sure there will be cursing, frustration, and bizarre results in the future, but for now, SCIENCE! Love it!

# Can you see the matrix?

Lately, I’ve been dreaming of webs.

I’ve been asking myself, how do we visualize the hidden complexity of the natural world? This is not an idle question, but draws on some of my current research. It is vital to how we think about ecosystems when we attempt to preserve and restore them. It is inherently beautiful, in and of itself.