To compare the respective variances of two samples, you may use **Fisher’s F test**, also known as the F test of equality of variances. This test checks whether the variances of the two samples are equal, assuming that both samples are drawn from normally distributed populations.

In brief, Fisher’s F test calculates the ratio between the larger variance and the smaller variance. The ratio is then compared to a critical value which depends on the degrees of freedom and on the chosen value for **α** (usually 0.05). If the ratio is greater than the critical value, then the null hypothesis (`H0`, stating that the variances are equal) is rejected and you may declare that the variances are different.
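To illustrate the decision rule, the two-sided critical value can be computed in R with `qf()`. The sample sizes here are assumptions for the sake of the example: two samples of 10 observations each, i.e. 9 degrees of freedom per sample, with α = 0.05:

```r
# Two-sided critical value for Fisher's F test, assuming
# 9 degrees of freedom in each sample (n = 10 per group)
alpha <- 0.05
df1 <- 9
df2 <- 9
qf(1 - alpha / 2, df1, df2)
## approximately 4.03
```

A ratio (larger variance over smaller variance) above this critical value leads to rejection of `H0`.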

The function in R that runs Fisher’s F test is `var.test(data1, data2)`, where `data1` and `data2` are the vectors containing the data from the two samples. Let’s use an example to illustrate the use of `var.test()`. First, let’s visualize these two samples in a boxplot:
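The data vectors themselves are not listed in this section; as a hypothetical sketch, two samples with similar means but very different spreads could be generated and plotted like this (the `rnorm()` parameters are illustrative assumptions, not the original data):

```r
set.seed(42)                              # for reproducibility
data1 <- rnorm(10, mean = 5, sd = 0.8)    # narrow spread
data2 <- rnorm(10, mean = 5, sd = 4)      # wide spread
boxplot(data1, data2, names = c("data1", "data2"))
```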

The boxplot shows samples with means rather close to each other, but with dramatically different spreads. Let’s now look at their variances, here calculated with `var()`:

`var(data1)`

`## [1] 0.6790178`

`var(data2)`

`## [1] 17.94452`

Unsurprisingly, the variances differ a lot. But we need to run Fisher’s F test to confirm it. Since this test assumes that the samples are normally distributed, let’s first run the Shapiro-Wilk normality test.

`shapiro.test(data1)`

```
##
## Shapiro-Wilk normality test
##
## data: data1
## W = 0.96234, p-value = 0.8122
```

`shapiro.test(data2)`

```
##
## Shapiro-Wilk normality test
##
## data: data2
## W = 0.92624, p-value = 0.4119
```

In both cases, the p-value is high: there is no evidence that these samples *deviate from normality*. Thus, we can finally proceed with Fisher’s F test:

`var.test(data1,data2)`

```
##
## F test to compare two variances
##
## data: data1 and data2
## F = 0.03784, num df = 9, denom df = 9, p-value = 4.02e-05
## alternative hypothesis: true ratio of variances is not equal to 1
## 95 percent confidence interval:
## 0.009398881 0.152342968
## sample estimates:
## ratio of variances
## 0.03783984
```

The F value (ratio of variances) is displayed, along with a p-value. Here the p-value is very low, thus the null hypothesis `H0` (stating that the variances are equal) is rejected.
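Note that `var.test()` returns an object of class `htest`, so the individual components of the result can also be extracted programmatically (shown here with hypothetical vectors, since the original data are not listed in this section):

```r
x <- rnorm(10)            # hypothetical sample 1
y <- rnorm(10, sd = 4)    # hypothetical sample 2
res <- var.test(x, y)
res$statistic   # the F value (ratio of variances)
res$p.value     # the p-value
res$conf.int    # the 95 percent confidence interval for the ratio
```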

Note that you may reverse the order of the samples in the function and consequently obtain a different F value:

`var.test(data2,data1)`

```
##
## F test to compare two variances
##
## data: data2 and data1
## F = 26.427, num df = 9, denom df = 9, p-value = 4.02e-05
## alternative hypothesis: true ratio of variances is not equal to 1
## 95 percent confidence interval:
## 6.564136 106.395649
## sample estimates:
## ratio of variances
## 26.42717
```

However, the p-value remains the same, and so does the conclusion: `H0` is rejected, the variances are significantly different.
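In fact, the two F values are simply reciprocals of each other, as a quick check confirms:

```r
1 / 0.03783984   # F value from var.test(data1, data2)
## approximately 26.427, the F value from var.test(data2, data1)
```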

Note: Fisher’s F test is not the only test for equality of variances, but it is very commonly used. You could also use Bartlett’s test or the Fligner-Killeen test of homogeneity of variances, though these are more often used to compare more than two groups (as in ANOVA). Bartlett’s test, like Fisher’s F test, is rather sensitive to departures from normality; thus, normality of distribution should be checked beforehand. The Fligner-Killeen test, however, is non-parametric; it is thus a good alternative when normality is in doubt. The corresponding functions in R are `bartlett.test()` and `fligner.test()`.
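As a brief sketch of their usage (with made-up vectors; both functions also accept a `formula` interface for grouped data):

```r
x <- c(4.1, 5.0, 4.6, 5.3, 4.8)    # hypothetical sample 1
y <- c(1.2, 9.5, 3.3, 7.8, 5.1)    # hypothetical sample 2
bartlett.test(list(x, y))   # parametric, sensitive to non-normality
fligner.test(list(x, y))    # non-parametric alternative
```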