Here a Tau, there a Tau… Plotting Quantile Regressions

I’ve ended up digging into quantile regression a bit lately (see this excellent gentle introduction to quantile regression for ecologists [pdf] for what it is and some great reasons to use it; see also here and here). In R this is done via the quantreg package, which is pretty nice and has some great plotting diagnostics. But what it doesn’t have out of the box is a way to simply plot your data and then overlay quantile regression lines at different levels of tau.
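To build a little intuition for what tau means, here’s a toy base-R sketch (my own illustration, not quantreg’s actual algorithm): quantile regression minimizes the “check” (pinball) loss, and for an intercept-only model, minimizing that loss at a given tau simply recovers the corresponding sample quantile.

```r
# Toy sketch (not quantreg's fitting algorithm): minimizing the check
# (pinball) loss at a given tau recovers that sample quantile when the
# model is intercept-only.
check_loss <- function(u, tau) sum(u * (tau - (u < 0)))

set.seed(1)
y <- rexp(1000)
tau <- 0.9

# one-dimensional minimization over the intercept
fit <- optimize(function(b) check_loss(y - b, tau), interval = range(y))

fit$minimum  # lands very close to quantile(y, 0.9)
```

The check loss penalizes over- and under-prediction asymmetrically (tau vs 1 - tau), which is exactly why different taus trace out different parts of the response distribution.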

The documentation has a nice example of how to do it, but it’s long, tedious code, and I had to quickly whip up a few plots for different models.

So, meh, I took the tedious code and wrapped it into a quickie function, which I drop here for your delectation. Unless you have some better, fancier way to do it (which I’d love to see, especially for ggplot…)

Here’s the function:

quantRegLines <- function(rq_obj, lincol = "red", ...) {
  #get the taus
  taus <- rq_obj$tau
  #get x
  x <- rq_obj$x[, 2] #assumes an intercept plus a single predictor
  xx <- seq(min(x, na.rm = TRUE), max(x, na.rm = TRUE), 1)
  #calculate y over all taus
  f <- coef(rq_obj)
  yy <- cbind(1, xx) %*% f
  if (length(lincol) == 1) lincol <- rep(lincol, length(taus))
  #plot all lines
  for (i in 1:length(taus)) {
    lines(xx, yy[, i], col = lincol[i], ...)
  }
}

And an example use.

taus <- c(.05, .1, .25, .75, .9, .95)
rq_fit <- rq(foodexp ~ income, tau = taus)

plot(income, foodexp, xlab = "Household Income",
     ylab = "Food Expenditure",
     pch = 19, col = alpha("black", 0.5)) #alpha() is from the scales package

quantRegLines(rq_fit)

Oh, and I set it up to make pretty colors in plots, too.

plot(income, foodexp, xlab = "Household Income", 
    ylab = "Food Expenditure", 
    pch = 19, col = alpha("black", 0.5))

quantRegLines(rq_fit, rainbow(6))
legend(4000, 1000, taus, rainbow(6), title = "Tau")
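On the ggplot front: ggplot2’s geom_quantile() will do much of this directly, fitting rq() internally via quantreg. A sketch, assuming the income/foodexp columns above are from quantreg’s classic engel data:

```r
# ggplot2 alternative sketch: geom_quantile() fits rq() under the hood.
# Assumes income/foodexp are the columns of quantreg's engel data.
library(ggplot2)
library(quantreg)
data(engel)

taus <- c(.05, .1, .25, .75, .9, .95)

p <- ggplot(engel, aes(x = income, y = foodexp)) +
  geom_point(alpha = 0.5) +
  geom_quantile(quantiles = taus,
                aes(colour = after_stat(factor(quantile)))) +
  labs(x = "Household Income", y = "Food Expenditure", colour = "Tau")

# print(p) to draw the plot
```

One line per tau, colored and legended for free.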

All of this is in a repo over at github (natch), so, fork and play.

More on Bacteria and Groups

Continuing with bacterial group-a-palooza

I followed Ed’s suggestions and tried both a binomial distribution and a Poisson distribution for abundance, such that the probability of the density of one species s in one group g in one plot r, where there are S_g species in group g, is

A_{rgs} \sim \mathrm{Poisson}\left(\frac{A_{rg}}{S_g}\right)
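To make that likelihood concrete, here’s a base-R sketch of how the Poisson grouping log-likelihood (and an AIC from it) could be computed. The function names and toy data are my own illustration, not the code from the linked repo:

```r
# Sketch of the Poisson grouping likelihood above (base R only; my own
# illustration, not the code from the repo).
loglik_grouping <- function(A, groups) {
  # A: sites x species abundance matrix; groups: one group id per species
  ll <- 0
  for (g in unique(groups)) {
    sp <- which(groups == g)
    S_g <- length(sp)
    A_rg <- rowSums(A[, sp, drop = FALSE])  # total abundance of group g per site
    # expected per-species abundance is A_rg / S_g
    ll <- ll + sum(dpois(A[, sp, drop = FALSE], A_rg / S_g, log = TRUE))
  }
  ll
}

# AIC for a grouping with k free parameters
aic_grouping <- function(A, groups, k) -2 * loglik_grouping(A, groups) + 2 * k

# Toy example: 2 plots, 2 species lumped into one group
A <- matrix(c(1, 2, 3, 4), nrow = 2)
loglik_grouping(A, groups = c(1, 1))
```

Note that the fully disaggregated grouping (every species its own group) sets each lambda to the observed abundance, so its log-likelihood is always the maximum; the AIC penalty for extra parameters is what lets coarser groupings win.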

In the analysis I’m doing, interestingly, the results do change a bit, such that the original network-only results are confirmed.

I am having one funny thing, though, which I can’t lock down. Namely, the no-group option always has the lowest AIC once I include abundances – and this is true both for binomial and Poisson distributions. Not sure what that is about. I’ve put the code for all of this here and made a sample script below. This doesn’t reproduce the behavior, but, still. Not quite sure what this blip is about.

For the sample script, we have five species and three possible grouping structures. It looks like this, where red nodes are species or groups and blue nodes are sites:

[Figure: the three grouping structures, with species/groups in red and sites in blue]

And the data looks like this

  low med high  1   2   3
1   1   1    1 50   0   0
2   2   1    1 45   0   0
3   3   2    2  0 100   1
4   4   2    2  0 112   7
5   5   3    2  0  12 110

So, here’s the code:

And the results:

> aicdf
     k LLNet LLBinomNet  LLPoisNet   AICpois  AICbinom AICnet
low  5     0    0.00000  -20.54409  71.08818  60.00000     30
med  3     0  -18.68966  -23.54655  65.09310  73.37931     18
high 2     0 -253.52264 -170.73361 353.46723 531.04527     12

We see that the two different estimations disagree, with the binomial favoring disaggregation and the Poisson favoring moderate aggregation. Interesting. Also, the naive network-only approach favors complete aggregation. Interesting. Thoughts?

Filtering Out Exogenous Pairs of Variables from a Basis Set

Sometimes in an SEM for which you're calculating a test of D-separation, you want all exogenous variables to covary. If you have a large model with a number of exogenous variables, coding that into your basis set can be a pain, and you can spend a lot of time filtering out elements that aren't part of your basis set, particularly with the ggm library. Here's a solution: a function I'm calling filterExoFromBasiSet.

#Takes a basis set list from basiSet in ggm and a vector of variable names

filterExoFromBasiSet <- function(set, exo) {
    pairSet <- t(sapply(set, function(alist) cbind(alist[1], alist[2])))
    colA <- which(pairSet[, 1] %in% exo)
    colB <- which(pairSet[, 2] %in% exo)
    both <- c(colA, colB)
    #indices appearing in both columns are pairs of exogenous variables
    both <- unique(both[which(duplicated(both))])
    if (length(both) > 0) set <- set[-both]
    set
}


How does it work? Let's say we have the following model:

y1 <- x1 + x2

Now, we should have no basis set. But…


modA <- DAG(y1 ~ x1 + x2)
basiSet(modA)
## [[1]]
## [1] "x2" "x1"

Oops – there's a basis set! Now, instead, let's filter it

basisA <- basiSet(modA)
filterExoFromBasiSet(basisA, c("x1", "x2"))
## list()

Yup, we get back an empty list.

This function can come in handy. For example, let's say we're testing a model with an exogenous variable that does not connect to an endogenous variable, such as

y <- x1
x2 (which is exogenous)

Now –

modB <- DAG(y ~ x1, 
               x2 ~ x2)

basisB <- basiSet(modB)
filterExoFromBasiSet(basisB, c("x1", "x2"))
## [[1]]
## [1] "x2" "y"  "x1"

So, we have the correct basis set with only one element.

What about if we also have an endogenous variable that has no paths to it?

modC <- DAG(y1 ~ x1, 
               x2 ~ x2, 
               y2 ~ y2)

basisC <- basiSet(modC)

filterExoFromBasiSet(basisC, c("x1", "x2"))
## [[1]]
## [1] "y2" "x2"
## [[2]]
## [1] "y2" "x1"
## [[3]]
## [1] "y2" "y1" "x1"
## [[4]]
## [1] "x2" "y1" "x1"

This yields the correct four-element basis set.

Extracting p-values from different fit R objects

Let's say you want to extract a p-value from a linear or generalized linear model (mixed or not!) and save it as a variable for future use. This is something you might want to do if, say, you were calculating Fisher's C from an equation-level Structural Equation Model. Here's how to extract the effect of a variable from several different types of fit model. We'll start with a data set containing x, y, z, and a block effect (we'll see why in a moment).

x <- rep(1:10, 2)
y <- rnorm(20, x, 3)
block <- c(rep("a", 10), rep("b", 10))

mydata <- data.frame(x = x, y = y, block = block, z = rnorm(20))

Now, how would you extract the p-value for the parameter fit for z from a linear model object? Simply put, use the t-table from the lm object's summary

alm <- lm(y ~ x + z, data = mydata)

##             Estimate Std. Error t value Pr(>|t|)
## (Intercept)   1.1833     1.3496  0.8768 0.392840
## x             0.7416     0.2190  3.3869 0.003506
## z            -0.4021     0.8376 -0.4801 0.637251

# Note that this is a matrix.  
# The third row, fourth column is the p value
# you want, so...

p.lm <- summary(alm)$coefficients[3, 4]

## [1] 0.6373

That's a linear model, what about a generalized linear model?

aglm <- glm(y ~ x + z, data = mydata)

##             Estimate Std. Error t value Pr(>|t|)
## (Intercept)   1.1833     1.3496  0.8768 0.392840
## x             0.7416     0.2190  3.3869 0.003506
## z            -0.4021     0.8376 -0.4801 0.637251

# Again, is a matrix.  
# The third row, fourth column is the p value you
# want, so...

p.glm <- summary(aglm)$coefficients[3, 4]

## [1] 0.6373

That was a generalized linear model. What about nonlinear least squares?

anls <- nls(y ~ a * x + b * z, data = mydata, 
     start = list(a = 1, b = 1))

##   Estimate Std. Error t value  Pr(>|t|)
## a   0.9118     0.1007   9.050 4.055e-08
## b  -0.4651     0.8291  -0.561 5.817e-01

# Again, is a matrix.  
# The second row, fourth column is the p value you
# want, so...

p.nls <- summary(anls)$coefficients[2, 4]

## [1] 0.5817

Great. Now, what if we were running a mixed model? First, let's look at the nlme package. Here, the relevant part of the summary object is the tTable

alme <- lme(y ~ x + z, random = ~1 | block, data = mydata)

##               Value Std.Error DF t-value  p-value
## (Intercept)  1.1833    1.3496 16  0.8768 0.393592
## x            0.7416    0.2190 16  3.3869 0.003763
## z           -0.4021    0.8376 16 -0.4801 0.637630

# Again, is a matrix.  
# But now the third row, fifth column is the p value
# you want, so...

p.lme <- summary(alme)$tTable[3, 5]

## [1] 0.6376

Last, what about lme4? For a linear lmer object, you cannot get a p-value. But if this is a generalized linear mixed model, you are good to go (as in Shipley 2009). Let's try that here.


almer <- lmer(y ~ x + z + (1 | block), data = mydata)

# no p-values! The fixed-effects table has only
# Estimate, Std. Error, and t value columns

# but, for a generalized linear mixed model
# (and yes, I know this is a
# bad model but, you know, demonstration!)

aglmer <- lmer(y + 5 ~ x + z + (1 | block), 
        data = mydata, family = poisson(link = "log"))

##             Estimate Std. Error z value  Pr(>|z|)
## (Intercept)  1.90813    0.16542  11.535 8.812e-31
## x            0.07247    0.02471   2.933 3.362e-03
## z           -0.03193    0.09046  -0.353 7.241e-01

# matrix again!  Third row, fourth column
# (in current lme4, fit with glmer() and use coef(summary(aglmer))[3, 4])
p.glmer <- summary(aglmer)@coefs[3, 4]

## [1] 0.7241
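If you're doing this across many models (say, for Fisher's C), it may be worth wrapping the extraction into a helper. Here's a hedged little function (my own, covering only the object classes shown above; the class checks and column positions come straight from the summaries printed in this post):

```r
# A small convenience wrapper for the extractions above (my own helper,
# covering only the object classes shown in this post).
getP <- function(fit, term) {
  s <- summary(fit)
  if (inherits(fit, "lme")) {
    return(s$tTable[term, "p-value"])   # nlme objects use tTable
  }
  tab <- coef(s)                        # lm, glm, and nls summaries
  tab[term, ncol(tab)]                  # the p value is the last column
}

# e.g., getP(alm, "z") should match summary(alm)$coefficients[3, 4]
```

Indexing by term name rather than row number also protects you when the order of predictors changes between models.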

A Quick Note on Weighting with nlme

I’ve been doing a lot of meta-analytic things lately. More on that anon. But one quick thing that came up was variance weighting with mixed models in R, and after a few web searches, I wanted to post this, more as a note-to-self and others than anything. Now, in a simple linear model, weighting by variance or sample size is straightforward.

#weight by the inverse of the variance
lm(y ~ x, data = dat, weights = 1/v)

#or by sample size
lm(y ~ x, data = dat, weights = n)

You can use the same sort of weights argument with lmer. But, what about if you’re using nlme? There are reasons to do so. Things change a bit, as nlme uses a wide array of weighting functions for the variance to give it some wonderful flexibility – indeed, it’s a reason to use nlme in the first place! But, for such a simple case, to get the equivalent of the above, here’s the tricky little difference. I’m using gls, generalized least squares, but this should work for lme as well.

gls(y ~ x, data=dat, weights = ~v)

#sample size
gls(y ~ x, data = dat, weights = ~1/n)
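As a sanity check on that equivalence, you can fit both to simulated data and compare; the one-sided formula ~v is shorthand for nlme's varFixed(~v), i.e., residual variance proportional to v.

```r
# Checking the lm/gls weighting equivalence with simulated data (my own
# toy example). weights = ~v in gls is shorthand for varFixed(~v).
library(nlme)

set.seed(1)
dat <- data.frame(x = 1:20, v = runif(20, 0.5, 2))
dat$y <- 2 * dat$x + rnorm(20, sd = sqrt(dat$v))

fit_lm  <- lm(y ~ x, data = dat, weights = 1/v)
fit_gls <- gls(y ~ x, data = dat, weights = ~v)

# the coefficient estimates agree
coef(fit_lm)
coef(fit_gls)
```

The flip between 1/v and ~v is exactly the gotcha the note is about: lm wants weights proportional to precision, while the nlme variance functions describe the variance itself.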

OK, end note to self. Thanks to John Griffin for prompting this.

Missing my Statsy Goodness? Check out #SciFund!

I know, I know, I have been kinda lame about posting here lately. But that’s because my posting muscle has been focused on the new analyses for what makes a successful #SciFund proposal. I’ve been posting them at the #SciFund blog under the Analysis tag – so check it out. There’s some fun stats, and you get to watch me be a social scientist for a minute. Viva la interdisciplinarity!

Running R2WinBUGS on a Mac Running OSX

I have long used JAGS to do all of my Bayesian work on my mac. Early on, I tried to figure out how to install WinBUGS and OpenBUGS and their accompanying R libraries on my mac, but, to no avail. I just had too hard of a time getting them running and gave up.

But, it would seem that some things have changed with Wine lately, and it is now possible to not only get WinBUGS itself running nicely on a mac, but to also get R2WinBUGS to run as well. Or at least, so I have discovered after an absolutely heroic (if I do say so myself) effort to get it all running (this was to help out some students I’m teaching who wanted to be able to do the same exercises as their windows colleagues). So, I present the steps that I’ve worked out. I do not promise this will work for everyone – and in fact, if it fails at some point, I want to know about it so that perhaps we can fix it so that more people can get WinBUGS up and running.

Or just run JAGS (step 1: install the latest version; step 2: install rjags in R. Modify your code slightly. Run it. Be happy.)

So, this tutorial works to get the whole WinBUGS shebang running. Note that it hinges on installing the latest development version of Wine, not the stable version (at least as of 1/17/12). If you have previously installed wine using macports, good on you. Now uninstall it with “sudo port uninstall wine”. Otherwise, you will not be able to do this.

Away we go!

1) Have the free version of Xcode installed. You may have to sign up for an Apple developer account. Whee! You’re a developer now!

2) Have X11 Installed from your system install disc.

3) Install MacPorts from the package installer. See also here for more information. Afterwards, open the terminal and type

echo export PATH=/opt/local/bin:/opt/local/sbin:$PATH$'\n'export MANPATH=/opt/local/man:$MANPATH | sudo tee -a /etc/profile

You will be asked for your password. Don’t worry that it doesn’t display anything as you type. Press enter when you’ve finished typing your password.

4) Open your terminal and type

sudo port install wine-devel

5) Go have a cup of coffee, check Facebook, or whatever you do while the install chugs away.

6) Download WinBUGS 1.4.x from here. Also download the immortality key and the patch.

7) Open your terminal, and type

cd Downloads
wine WinBUGS14.exe

Note, if you have changed your download directory, you will need to type in the path to the directory where you download files now (e.g., Desktop).

8) Follow the instructions to install WinBUGS into c:\Program Files.

9) Run WinBUGS via the terminal as follows:

wine ~/.wine/drive_c/Program\ Files/WinBUGS14/WinBUGS14.exe

10) After first running WinBUGS, install the immortality key. Close WinBUGS. Open it again as above and install the patch. Close it. Open it again and WinBUGS away!

11) To now use R2WinBUGS, fire up R and install the R2WinBUGS library.

12) R2WinBUGS should now work normally, with one exception. When you use the bugs function, you will need to supply the following additional argument:

bugs.directory = '/Users/YOURUSERNAME/.wine/drive_c/Program Files/WinBUGS14'

filling in your username where indicated. If you don’t know it, in the terminal type

ls /Users

No, ~ will not work for those of you used to it. Don’t ask me why.

Ecological SEMs and Composite Variables: What, Why, and How

I’m a HUGE fan of Structural Equation Modeling. For those of you unfamiliar with the technique, it’s awesome for three main reasons.

  1. It’s a method of teasing apart direct and indirect interactions in your data.
  2. It allows you to assess the importance of underlying latent variables that you cannot measure, but for which you have measured indicators.
  3. As it’s formally presented, with path diagrams showing connections between variables, it’s SUPER easy to link conceptual models with your data. See Grace et al. 2010 for a handy guide to this.

Also, there is a quite simple and intuitive R package for fitting SEMs, lavaan (LAtent VAriable Analysis). Disclaimer, I just hopped on board as a lavaan developer (yay!). I’ve also recently started a small project to find cool examples of SEM in the Ecological literature, and then using the provided information, post the models coded up in lavaan so that others can see how to put some of these models together.

As Ecologists, we often use latent variables to incorporate known measurement error of a quantity – i.e., a latent variable with a single indicator and fixed variance. We’re often not interested in the full power of latent variables – latents with multiple indicators. Typically, this is because we’ve actually measured everything we want to measure. We’re not like political scientists who have to quantify fuzzy things like Democracy, or Authoritarianism, or Gastronomicism. (note, I want to live in a political system driven by gastronomy – a gastronomocracy!)

However, we’re still fascinated by the idea of bundling different variables together into a single causal effect, and maybe evaluating the relative contribution of each of those variables within a model. In SEM, this is known as the creation of a Composite Variable. This composite is still an unmeasured quantity – like a latent variable – but with no error variance, and with “indicators” actually driving the variable, rather than having the unmeasured variable causing the expression of its indicators.

Let me give you an example. Let’s say we want to measure the effect of nutrients on diatom species richness in a stream. You’re particularly concerned about nitrogen. However, you can’t bring water samples back to the lab, so, you’re relying on some moderately accurate nitrogen color strips, the biomass of algae (more algae = more nitrogen!), and your lab tech, Stu, who claims he can taste nitrogen in water (and has been proved to be stunningly accurate in the past). In this case, you have a latent variable. The true nitrogen content of the water is causing the readings by these three different indicators.

A composite variable is a different beast. Let’s say we have the same scenario. But, now you have really good measurements of nitrogen. In fact, you have good measurements of both ammonium (NH4) and nitrate (NO3). You want to estimate a “nitrogen effect”, but, you know that both of these different forms of N will contribute to the effect in a slightly different way. You could just construct a model with effects going from both NO3 and NH4 to species richness. If you want to represent the total “Nitrogen Effect” in your model, however, and evaluate the effect of each form of nitrogen on its total effect, you would create a composite. The differences become clear when looking at the path diagram of each case.

Here, I’m adopting the custom of observed variables in squares, latent variables in ovals, and composite variables in hexagons. Note that, as indicators of nitrogen in the latent variable model, each observed indicator has some additional variation due to factors other than nitrogen – δi. There is no such error in the composite variable model. Also, I’m showing that the error in the Nitrogen Effect in the composite variable model is indeed set to 0. There are sometimes reasons where that shouldn’t be 0, but that’s another topic for another time.

This may still seem abstract to you. So, let’s look at an example in practice. One way we often use composites is to bring together a linear and nonlinear effect of a single variable. For example, we know that often nutrient supply rates have a humped shape effect on species richness – i.e., the highest richness happens at intermediate supply rates. One nice example of that is in a paper by Cardinale et al. in 2009 looking at relationships between manipulated nutrient supply, species richness, and algal productivity. To capture that relationship with a composite variable, one would have a ‘nitrogen effect’ affected by N and N2. This nitrogen effect would then affect local species richness.

So, how would you code this model up in lavaan, and then evaluate it?

Well, hey, the data from this paper are freely available, so, let’s use this as an example. For a full replication of the model presented in the paper see here. However, Cardinale et al. didn’t use any composite variables, so, let’s create a model of our own capturing the Nitrogen-Richness relationship while also accounting for local species richness being influenced by regional species richness.

In the figure above, we have the relationship between resource supply rate and local species richness on an agar plate to the left. Separate lines are for separate streams. The black line is the average fit with the supplied equation. On the right, we have a path diagram representing this relationship, as well as the influence of regional species richness.

So, we have a path diagram. Now comes the tricky part: coding. One thing about the current version of lavaan (0.4-8) is that it does not have a way to represent composite variables. This will change in the future (believe me), but it may take a while, so let me walk you through the tricks of incorporating composite variables now. Basically, there are five steps.

  1. Define the variable as a regression, where the composite is determined by its causal variables. Also, fix one of the effects to 1. This gives your composite variable a scale.
  2. Specify that the composite has an error variance of 0.
  3. Now treat the composite as a latent variable. Its indicators are its response variables. This may seem odd. However, it’s all just ways of specifying causal pathways – an indicator pathway and a regression pathway have the same meaning in terms of causality. The software just needs something specified so that it doesn’t go looking for our composite variable in our data. Hence, defining it as a latent variable whose indicators are endogenous responses. I actually find this helpful, as it also makes me think more carefully about what a composite variable is, and how too many responses may make my model not identified.
  4. Lastly, because we don’t want to fix the effect of our composite on its response to 1, we’ll have to include an argument in the fitting function that makes it not force the first latent variable loading to be set to 1. We’ll also have to specify that we then want the variance of the response to latent variables freely estimated. Yeah, I know. Note: this all can play havoc when you have both latent and composite variables, so be careful. See here for an example.
  5. Everything else – other regression relationships, showing that nonlinearities are derived quantities, etc.

OK, that’s a lot. How’s it work in practice? Below is the code to fit the model in the path diagram. I’ve labeled the steps in comments, and, included the regional ~ local richness relationship as well as the relationship showing that logN2 was derived from logN. Note, this is a centered squared variable. And, yes, all nitrogen values have been log transformed here.

#simple SA model with N and regional SR using a composite
#Variables: logN = log nutrient supply rate, logNcen2 = log supply rate squared
# SA = Species richness on a patch of Agar, SR = stream-wide richness
compositeModel <- '
	#1) define the composite, scale to logN
	Nitrogen ~ 1*logN + logNcen2 #loading on the significant path!

	#2) Specify 0 error variance
	Nitrogen ~~ 0*Nitrogen

	#3) now, because we need to represent this as a latent variable
	#show how species richness is an _indicator_ of nitrogen
	Nitrogen =~ SA

	#4) BUT, make sure the variance of SA is estimated
	SA ~~ SA

	#Regional Richness also has an effect
	SA ~ SR

	#And account for the derivation of the square term from the linear term
	logNcen2 ~ logN
	'

#4, continued) std.lv = TRUE so that the Nitrogen-SA relationship isn't fixed to 1
compositeFit <- sem(compositeModel, data = cards, std.lv = TRUE)

Great! It should fit just fine. I'm not going to focus on the regional relationship, as it is predictable and positive. Moreover, when we look at the results, two things immediately pop out at us about the effect of nutrient supply rate.

                   Estimate  Std.err  Z-value  P(>|z|)
Latent variables:
  Nitrogen =~
    SA                0.362    0.438    0.827    0.408

  Nitrogen ~
    logN              1.000
    logNcen2         -1.311    1.373   -0.955    0.340

Wait, what? The Nitrogen effect was not detectably different from 0? Nor was there a nonlinear effect? What's going on here?

What's going on is that the scale of the composite is affecting our results. We've set it to 1. Whenever you are fixing scales, you should always check and see, what would happen if you changed which path was set to 1. So, we can simply set the scale to the nonlinear variable, refit the model, and see if this makes a difference. If it doesn't, then that means there is no nitrogen effect at all!

So, change

Nitrogen ~ 1*logN + logNcen2

to

Nitrogen ~ logN + 1*logNcen2

And, now let's see the results…..

                   Estimate  Std.err  Z-value  P(>|z|)
Latent variables:
  Nitrogen =~
    SA               -0.474    0.239   -1.989    0.047

  Nitrogen ~
    logN             -0.763    0.799   -0.955    0.340
    logNcen2          1.000

Ah HA! Not only is the linear effect not different from 0, but now we see that fixing the nonlinear effect allows the nutrient signal to come through.

But wait, you may say, that effect is negative? Well, remember that the scale of the nitrogen effect is the same as the nonlinear scale. And, a positive hump-shaped relationship will have a negative squared term. So, given how we've setup the model, yes, that should be negative.
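A quick simulated check of that claim (base R, my own toy example): fit a hump-shaped relationship with a centered squared term, and the coefficient on that term comes out negative.

```r
# Simulated check: a hump-shaped relationship yields a negative
# coefficient on the centered squared term (base R toy example).
set.seed(2)
x   <- runif(200, 0, 10)
y   <- -(x - 5)^2 + rnorm(200)   # hump with a peak at x = 5
xc2 <- (x - mean(x))^2           # center first, then square
fit <- lm(y ~ x + xc2)
coef(fit)["xc2"]                 # negative, reflecting the hump
```

So a negative sign on the squared-term pathway is exactly what a positive, hump-shaped nutrient effect should produce.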

*whew!* That was a lot. And this for a very simple model involving composites and nonlinearities. I thought I'd throw that out there, as it's a common use of composites, and interpreting nonlinearities in SEMs is always a little tricky and worth bending your brain around. Other uses of composites include summing up a number of linear quantities, a composite for the application of treatments, and more. But this should give you a good sense of what they are, how to code them in lavaan, and how to use them in the future.

For a more in depth treatment of this topic, and latent variables versus composites, I urge you to check out this excellent piece by Jim Grace and Ken Bollen. Happy model fitting!