The function pairwise.t.test() performs multiple t-tests across a set of groups, using all possible pairwise comparisons. This comes in handy as a follow-up to a one-way ANOVA, a factorial ANOVA, a Kruskal-Wallis test with more than 2 groups, and so on.

Here is a short list of the p-value adjustment methods that pairwise.t.test() can apply (quoted from the documentation of p.adjust):

The adjustment methods include the Bonferroni correction (“bonferroni”) in which the p-values are multiplied by the number of comparisons. Less conservative corrections are also included by Holm (1979) (“holm”), Hochberg (1988) (“hochberg”), Hommel (1988) (“hommel”), Benjamini & Hochberg (1995) (“BH” or its alias “fdr”), and Benjamini & Yekutieli (2001) (“BY”), respectively.
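If you want to check which adjustment methods are available in your R installation, a quick check (not part of the original example) is to print the built-in vector p.adjust.methods:

# list the p-value adjustment methods accepted by p.adjust() and pairwise.t.test()
p.adjust.methods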

Let’s reuse the example introduced here (one-way ANOVA). To create the corresponding dataframe, use the following code:

# response variable
size <- c(25,22,28,24,26,24,22,21,23,25,26,30,25,24,21,27,28,23,25,24,20,22,24,23,22,24,20,19,21,22)

# predictor variable
location <- as.factor(c(rep("ForestA",10), rep("ForestB",10), rep("ForestC",10)))

# dataframe
my.dataframe <- data.frame(size,location)
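For context, here is a minimal sketch of the one-way ANOVA that such pairwise comparisons typically follow, using the dataframe created above (this step is assumed, not shown in the original example):

# one-way ANOVA testing the overall effect of location on size
my.anova <- aov(size ~ location, data = my.dataframe)
summary(my.anova)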



Running the test

The syntax is pairwise.t.test(response, predictor, p.adjust.method= ), where response is the response variable and predictor is the predictor (grouping) variable. In addition, pairwise.t.test() lets you apply the method of your choice for correcting/adjusting p-values to control type I errors. Simply pass the name or abbreviation of the method to the argument p.adjust.method=. The available methods are none (no correction), bonferroni, holm, hochberg, hommel, BH and BY. Choosing the right correction for your analysis depends a lot on your experimental design.

Now let’s look at 3 variants of post hoc pairwise t-tests: the first one without correction, the second one with the Bonferroni correction, and the last one with the Hochberg correction:

pairwise.t.test(size, location, p.adjust.method="none")
## 
##  Pairwise comparisons using t tests with pooled SD 
## 
## data:  size and location 
## 
##         ForestA ForestB
## ForestB 0.18995 -      
## ForestC 0.02470 0.00092
## 
## P value adjustment method: none
pairwise.t.test(size, location, p.adjust.method="bonferroni")
## 
##  Pairwise comparisons using t tests with pooled SD 
## 
## data:  size and location 
## 
##         ForestA ForestB
## ForestB 0.5699  -      
## ForestC 0.0741  0.0027 
## 
## P value adjustment method: bonferroni
pairwise.t.test(size, location, p.adjust.method="hochberg")
## 
##  Pairwise comparisons using t tests with pooled SD 
## 
## data:  size and location 
## 
##         ForestA ForestB
## ForestB 0.1900  -      
## ForestC 0.0494  0.0027 
## 
## P value adjustment method: hochberg

The three outputs are shown above. Each one displays the same type of table of pairwise comparisons, and the last line reports the adjustment method. The results differ markedly across the 3 cases: with no correction or with the Hochberg correction, two p-values fall below 0.05, whereas only one p-value is below 0.05 with the Bonferroni correction. This difference can be decisive for interpreting the outcome of your experiment and may substantially change the conclusions of your research. It is therefore important to weigh the pros and cons when deciding which adjustment method to apply to your dataset.
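To see where the corrected tables come from, here is a minimal sketch applying p.adjust() by hand to the uncorrected p-values reported in the first table above; up to rounding, the results should match the bonferroni and hochberg tables:

# uncorrected p-values taken from the first pairwise table
raw.p <- c(0.18995, 0.02470, 0.00092)

# Bonferroni: each p-value is multiplied by the number of comparisons (capped at 1)
p.adjust(raw.p, method = "bonferroni")

# Hochberg: a less conservative step-up procedure
p.adjust(raw.p, method = "hochberg")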

Alternative

When you need to perform pairwise comparisons as a follow-up to a Kruskal-Wallis test with more than 2 groups, i.e. when the assumption of normality is not met, you may use the function pairwise.wilcox.test(), which performs multiple Wilcoxon tests on all possible pairwise comparisons. pairwise.wilcox.test() works in the same way as pairwise.t.test(), with the same arguments and syntax:

pairwise.wilcox.test(size, location, p.adjust.method="none")
## 
##  Pairwise comparisons using Wilcoxon rank sum test 
## 
## data:  size and location 
## 
##         ForestA ForestB
## ForestB 0.2685  -      
## ForestC 0.0239  0.0034 
## 
## P value adjustment method: none
pairwise.wilcox.test(size, location, p.adjust.method="bonferroni")
## 
##  Pairwise comparisons using Wilcoxon rank sum test 
## 
## data:  size and location 
## 
##         ForestA ForestB
## ForestB 0.805   -      
## ForestC 0.072   0.010  
## 
## P value adjustment method: bonferroni
pairwise.wilcox.test(size, location, p.adjust.method="hochberg")
## 
##  Pairwise comparisons using Wilcoxon rank sum test 
## 
## data:  size and location 
## 
##         ForestA ForestB
## ForestB 0.268   -      
## ForestC 0.048   0.010  
## 
## P value adjustment method: hochberg

Again, the p-values vary widely depending on the method used for correction (if any).
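Finally, note that both functions return a list-like object (of class "pairwise.htest") rather than just printed text. If you want to reuse the p-values programmatically, a minimal sketch using the objects defined above:

# store the result and extract the matrix of adjusted p-values
res <- pairwise.wilcox.test(size, location, p.adjust.method = "hochberg")
res$p.value

# flag the comparisons that are significant at the 5% level
res$p.value < 0.05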