## What Is the Bonferroni Test?

The Bonferroni test is a type of multiple comparison test used in statistical analysis. When a hypothesis test involves many comparisons, some results will eventually appear statistically significant purely by chance, even when no real effect exists.

For example, if a particular test, such as a linear regression, yields a correct result 99% of the time, running the same regression on 100 different samples will likely produce at least one false positive. The Bonferroni test attempts to prevent data from incorrectly appearing statistically significant by adjusting the significance threshold used in multiple-comparison testing.

### Key Takeaways

- The Bonferroni test is a statistical test used to reduce the chance of false positives when multiple comparisons are performed.
- In particular, Bonferroni designed an adjustment to prevent data from incorrectly appearing to be statistically significant.
- An important limitation of the Bonferroni correction is that it is conservative and may lead analysts to miss genuinely significant results.

## Understanding the Bonferroni Test

The Bonferroni test, also known as the "Bonferroni correction" or "Bonferroni adjustment," holds that the significance threshold for each individual test should equal the desired overall alpha divided by the number of tests performed; a result counts as significant only if its p-value falls below that reduced threshold.

The Bonferroni test is a multiple-comparison correction used when several dependent or independent statistical tests are being performed simultaneously. The reason is that while a given alpha value may be appropriate for each individual comparison, it is not appropriate for the set of all comparisons. In order to eliminate multiple spurious positives, the alpha value needs to be lowered to account for the number of comparisons being performed.
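The adjustment described above can be sketched in a few lines. The p-values below are hypothetical, chosen only to illustrate how dividing alpha by the number of tests tightens the threshold each comparison must meet.

```python
# Bonferroni adjustment: minimal sketch with hypothetical p-values.
alpha = 0.05                           # desired family-wise significance level
p_values = [0.001, 0.02, 0.04, 0.30]   # one p-value per comparison (illustrative)

m = len(p_values)
adjusted_alpha = alpha / m             # Bonferroni-corrected per-test threshold

significant = [p < adjusted_alpha for p in p_values]
print(adjusted_alpha)   # 0.0125
print(significant)      # [True, False, False, False]
```

Note that at the unadjusted 0.05 level, three of the four p-values would have counted as significant; after the correction, only the smallest one survives.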

The test is named for the Italian mathematician who developed it, Carlo Emilio Bonferroni (1892–1960). Other types of multiple comparison tests include Scheffé's test and the Tukey-Kramer method test. A criticism of the Bonferroni test is that it is too conservative and may fail to catch some significant findings.

In statistics, a null hypothesis is essentially the belief that there is no statistical difference between two data sets being compared. Hypothesis testing involves testing a statistical sample to confirm or reject a null hypothesis. The test is performed by taking a random sample of a population or group. While the null hypothesis is tested, the alternative hypothesis is also tested, and the two hypotheses are mutually exclusive.

However, with any testing of a null hypothesis, there's the expectation that a false-positive result could occur. This is formally called a Type I error, and as a result, an error rate that reflects the likelihood of a Type I error is assigned to the test. In other words, a certain percentage of the results will likely yield a false positive.

## Using Bonferroni Correction

For example, an error rate of 5% might typically be assigned to a statistical test, meaning that 5% of the time there will likely be a false positive. This 5% error rate is called the alpha level. However, when many comparisons are made in an analysis, the individual error rates compound, so the probability of at least one false positive across the whole set of tests grows well beyond 5%.
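To see how quickly the family-wise error rate grows, the probability of at least one false positive across m independent tests run at alpha = 0.05 is 1 − (1 − 0.05)^m, a standard calculation (the independence of the tests is an assumption of this illustration):

```python
# Family-wise error rate: chance of at least one false positive
# across m independent tests, each run at alpha = 0.05.
alpha = 0.05
for m in (1, 5, 20):
    fwer = 1 - (1 - alpha) ** m
    print(m, round(fwer, 3))   # 1 -> 0.05, 5 -> 0.226, 20 -> 0.642
```

With 20 comparisons, the chance of at least one spurious "significant" result is already about 64%, which is the problem the Bonferroni correction is designed to control.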

Bonferroni designed his method to correct for the increased error rates in hypothesis testing with multiple comparisons. Bonferroni's adjustment is calculated by dividing the alpha value by the number of tests. Using the 5% error rate from our example, two tests would yield a per-test threshold of 0.025 (.05/2), while four tests would have a threshold of .0125 (.05/4). Notice that the per-test error rate decreases as the number of comparisons increases.
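The arithmetic above generalizes directly: the per-test threshold is simply alpha divided by the number of comparisons, so it shrinks as the family of tests grows.

```python
# Per-test significance threshold shrinks as the number of comparisons grows.
alpha = 0.05
thresholds = {m: alpha / m for m in (1, 2, 4, 10)}
print(thresholds)   # {1: 0.05, 2: 0.025, 4: 0.0125, 10: 0.005}
```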