## What is 'Statistical Significance'

Statistical significance means that a result from testing or experimenting is not likely to occur randomly or by chance, but is instead likely to be attributable to a specific cause. Statistical significance can be strong or weak, and it is important to disciplines that rely heavily on analyzing data and research, such as finance, investing, medicine, physics and biology.

When analyzing a data set and running the necessary tests to discern whether one or more variables have an effect on an outcome, statistical significance helps show that the results of the test are real and not merely due to chance. Problems arise in tests of statistical significance because researchers are usually working with samples of larger populations. As a result, the samples must be representative of the population, and the data contained in the sample must not be biased in any way.

Statistical significance does not always imply practical significance: a result can be statistically significant yet too small or too costly to act on in a real-world business situation. In addition, statistical significance can be misinterpreted when researchers do not use language carefully in reporting their results. Another problem is that past data, and the results derived from it, whether statistically significant or not, may not reflect ongoing or future conditions. In investing, this may manifest itself in a pricing model breaking down during times of financial crisis, as correlations change and variables do not interact as usual. Statistical significance can also help an investor discern whether one asset pricing model is better than another.

## Calculating Statistical Significance

The calculation of statistical significance (significance testing) is subject to a certain degree of error. The researcher must define in advance the acceptable probability of a sampling error, which exists in any test that does not include the entire population. Sample size is an important component of statistical significance in that larger samples are less prone to flukes. Only random, representative samples should be used in significance testing. The threshold at which one accepts that an event is statistically significant is known as the significance level. Researchers compute a p-value from the test statistic to discern whether the result falls below the significance level; if it does, the result is statistically significant. In tests that compare group averages, the p-value is a function of the means, standard deviations, and sizes of the data samples.
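The mechanics above can be sketched with a stdlib-only Python example. This is a simplified two-sample z-test on hypothetical returns for two investment strategies (invented data for illustration); it uses the normal approximation via `math.erf`, whereas a t-test would be more appropriate for samples this small.

```python
import math
from statistics import mean, stdev

def two_sample_p_value(sample_a, sample_b):
    """Approximate two-sided p-value for a difference in sample means.

    Uses a z-test built from the sample means, standard deviations,
    and sizes; a normal approximation, not an exact small-sample t-test.
    """
    n_a, n_b = len(sample_a), len(sample_b)
    # Standard error of the difference in means
    se = math.sqrt(stdev(sample_a) ** 2 / n_a + stdev(sample_b) ** 2 / n_b)
    z = (mean(sample_a) - mean(sample_b)) / se
    # Two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Hypothetical daily returns (%) for two strategies
strategy_a = [0.8, 1.2, 0.9, 1.1, 1.0, 0.7, 1.3, 0.9, 1.0, 1.1]
strategy_b = [0.2, 0.4, 0.3, 0.5, 0.1, 0.3, 0.2, 0.4, 0.3, 0.2]

p = two_sample_p_value(strategy_a, strategy_b)
print(f"p-value: {p:.6f}")
print("statistically significant at the 5% level" if p < 0.05
      else "not statistically significant at the 5% level")
```

Note how the p-value depends only on the sample summaries (means, standard deviations, sizes), as described above, and how a larger sample shrinks the standard error, making the same difference in means easier to distinguish from chance.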

The p-value indicates the probability of obtaining a result at least as extreme as the one observed, assuming the result arose by chance or sampling error alone. In other words, the p-value indicates the risk that there is no actual difference or relationship. The p-value must fall below the significance level for the results to be considered statistically significant. The complement of the significance level, calculated as 1 minus the significance level, is the confidence level. The confidence level indicates the degree of confidence that the statistical result did not occur by chance or by sampling error. The customary confidence level in many statistical tests is 95%, which corresponds to a customary significance level of 5%: a p-value below 0.05 is deemed significant.