Repeated measures ANOVA is a test closely related to one-way ANOVA, as it checks for differences between the means of two or more groups. The essential difference is that the groups are dependent (i.e. related): they contain data or measurements originating from the same subjects. What differs in terms of design may be when the measurements were taken (measurements repeated over time: before, during, and after an experiment, or every day during a given period, etc.) or the circumstances or conditions under which the measurements were performed (for example, measurements taken 3 times, each time following application of a new drug). In this test, the null hypothesis H0 states that the means of all groups are equal.

Unlike one-way ANOVA, where one fits a linear model with lm(), here you fit a linear mixed effect model with the function lme() found in the package nlme. However, we need to tell the function that some of the measurements are not replicates but originate from the same individual. We do so with the argument random=~1|subjects in lme().

Let’s take an example where 5 rats are weighed 4 times at 4-week intervals (weeks 8 to 20). Here is the code for the dataframe:

# response variable
rat.weight <- c(164,164,158,159,155,220,230,226,227,222,261,275,264,280,272,306,326,320,330,312)

# predictor variable 
time.point <- as.factor(c(rep("week08",5), rep("week12",5), rep("week16",5), rep("week20",5)))

# individual ID
rat.ID <- as.factor(rep(c("rat1","rat2","rat3", "rat4", "rat5"),4))

# dataframe
my.dataframe <- data.frame(rat.ID,time.point,rat.weight)



Let’s visualize the data with a multiple line chart:
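One possible way to draw such a chart is with the package ggplot2 (this is only a suggestion, one option among several):

# one line per rat, showing its weight across the four time points
library(ggplot2)
ggplot(my.dataframe, aes(x = time.point, y = rat.weight, group = rat.ID, color = rat.ID)) +
  geom_line() +
  geom_point()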

Assumptions

Assumptions are:

- the measurements within each group (here, at each time point) come from a population with a roughly normal distribution;
- sphericity: the variances of the differences between all pairs of groups (time points) are approximately equal;
- the subjects are independent of each other, even though the measurements taken on the same subject are related.


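If you want a quick check of the normality assumption, one simple option is to run a Shapiro-Wilk test on each group separately (keeping in mind that the sample size here is only 5 per group):

# Shapiro-Wilk test of normality applied to each time point separately
tapply(my.dataframe$rat.weight, my.dataframe$time.point, shapiro.test)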

Running the test

Since lme() is part of the package nlme, you need to install and load it. Here is how to do it:

install.packages("nlme")
library(nlme)



The following code fits the linear mixed effect model. The syntax is lme(response ~ predictor, random=~1|subjects, data = dataframe), where response is the response variable, predictor is the predictor variable or factor (which categorizes the observations), subjects is the variable that identifies the individuals, and dataframe is the name of the dataframe that contains the data. We first fit the model with lme() and store it in the object model. We then compute and display the table for the analysis using anova():

model <- lme(rat.weight ~ time.point, random=~1|rat.ID, data = my.dataframe)
anova(model)
##             numDF denDF   F-value p-value
## (Intercept)     1    12 11442.465  <.0001
## time.point      3    12   793.995  <.0001

The output shows an F-value close to 794 and a p-value under 0.0001 for time.point. The null hypothesis can be rejected: there is a significant difference between the means of the groups. Unsurprisingly, this difference is due to the factor time.point, meaning that the rats gained weight over time (as expected under normal conditions). However, the table gives no indication about differences between individuals.
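If you are curious about this between-individual variability, which is absorbed by the random effect, you may inspect the variance components and the estimated random intercepts (an optional check, not part of the ANOVA table above):

# variance components of the random intercept and of the residuals
VarCorr(model)

# estimated random intercept for each rat
ranef(model)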

But the ANOVA table does not tell us which group means are significantly different from each other.

Indeed, this test tells you that the predictor variable has an effect, but it does not tell you which groups are significantly different from the others. For that, we need post hoc tests; multiple comparisons can also be run on linear mixed effect models, as sketched below.
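One possible way to run such multiple comparisons (among other options) is with the package emmeans, which supports lme models. The following sketch computes Tukey-adjusted pairwise comparisons between time points:

install.packages("emmeans")
library(emmeans)

# estimated marginal means per time point and Tukey-adjusted pairwise comparisons
emmeans(model, pairwise ~ time.point)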



Alternative

If you need an alternative to the repeated measures ANOVA because, for example, you suspect a violation of the assumption of normality, you may use a non-parametric test called the Friedman rank sum test (or simply Friedman’s test).
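As a minimal sketch, Friedman’s test can be run on the same dataframe with the base R function friedman.test(), using the formula response ~ groups | blocks:

# Friedman rank sum test: rat.weight across time points, with rats as blocks
friedman.test(rat.weight ~ time.point | rat.ID, data = my.dataframe)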