Your investment advisor proposes a monthly income investment plan that promises a variable return each month. You will invest in it only if you are assured of an average $180 monthly income. Your advisor also tells you that for the past 300 months, the scheme has had investment returns with an average value of $190 and a standard deviation of $75. Should you invest in this scheme? Hypothesis testing comes to the aid of such decision-making.
This article assumes readers' familiarity with the normal distribution (its table and formula), the p-value and related basics of statistics.
What Is Hypothesis Testing?
Hypothesis or significance testing is a mathematical model for testing a claim, idea or hypothesis about a parameter of interest in a given population set, using data measured in a sample set. Calculations are performed on selected samples to gather more decisive information about the characteristics of the entire population, which enables a systematic way to test claims or ideas about the entire dataset.
Here is a simple example: A school principal reports that students in her school score an average of 7 out of 10 in exams. To test this “hypothesis,” we record the marks of, say, 30 students (the sample) from the entire student population of the school (say, 300) and calculate the mean of that sample. We can then compare the (calculated) sample mean to the (reported) population mean and attempt to confirm the hypothesis.
To take another example, the annual return of a particular mutual fund is 8%. Assume that mutual fund has been in existence for 20 years. We take a random sample of annual returns of the mutual fund for, say, five years (sample) and calculate its mean. We then compare the (calculated) sample mean to the (claimed) population mean to verify the hypothesis.
Different methodologies exist for hypothesis testing, but the same four basic steps are involved:
Step 1: Define the Hypothesis
Usually, the reported value (or the claim statistics) is stated as the hypothesis and presumed to be true. For the above examples, the hypothesis will be:
 Example A: Students in the school score an average of 7 out of 10 in exams.
 Example B: Annual return of the mutual fund is 8% per annum.
This stated description constitutes the “Null Hypothesis (H_{0})” and is assumed to be true – the way a defendant in a jury trial is presumed innocent until proven guilty by the evidence presented in court. Similarly, hypothesis testing starts by stating and assuming a “null hypothesis,” and then the process determines whether the assumption is likely to be true or false.
The important point to note is that we are testing the null hypothesis because there is an element of doubt about its validity. Whatever information is against the stated null hypothesis is captured in the Alternative Hypothesis (H_{1}). For the above examples, the alternative hypothesis will be:
 Students score an average that is not equal to 7.
 The annual return of the mutual fund is not equal to 8% per annum.
In other words, the alternative hypothesis is a direct contradiction of the null hypothesis.
As in a trial, the jury assumes the defendant's innocence (null hypothesis). The prosecutor has to prove otherwise (alternative hypothesis). Similarly, the researcher has to gather evidence against the null hypothesis. If the prosecutor fails to prove the alternative hypothesis, the jury has to let the defendant go (basing the decision on the null hypothesis). Similarly, if the researcher fails to support the alternative hypothesis (or simply does nothing), then the null hypothesis is assumed to be true.
Step 2: Set the Criteria
The decision-making criteria have to be based on certain parameters of the datasets, and this is where the connection to the normal distribution comes into the picture.
As per the standard statistics postulate about sampling distribution, “For any sample size n, the sampling distribution of X̅ is normal if the population X from which the sample is drawn is normally distributed.” Hence, the probabilities of all other possible sample means that one could select are normally distributed.
For example, suppose we want to determine whether the average daily return of any stock listed on the XYZ stock market around New Year's Day is greater than 2%.
H_{0}: Null Hypothesis: mean = 2%
H_{1}: Alternative Hypothesis: mean > 2% (this is what we want to prove)
Take the sample (say of 50 stocks out of total 500) and compute the mean of the sample.
For a normal distribution, 95% of the values lie within two standard deviations of the population mean. Hence, this normal distribution and central limit assumption for the sample dataset allow us to establish 5% as a significance level. It makes sense because, under this assumption, there is less than a 5% probability (100% minus 95%) of getting outliers that are beyond two standard deviations from the population mean. Depending upon the nature of the datasets, other significance levels can be taken at 1%, 5% or 10%. For financial calculations (including behavioral finance), 5% is the generally accepted limit. If we find any calculation that goes beyond the usual two standard deviations, then we have a strong case of outliers to reject the null hypothesis.
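As an illustration, the critical z-values corresponding to these common significance levels can be computed programmatically. The sketch below uses only Python's standard-library NormalDist, so no external packages are assumed:

```python
from statistics import NormalDist

z = NormalDist()  # standard normal distribution (mean 0, std 1)

# One-tailed critical z-values for the common significance levels
for alpha in (0.01, 0.05, 0.10):
    print(f"alpha = {alpha:.2f} -> critical z = {z.inv_cdf(1 - alpha):.3f}")
# alpha = 0.05 gives the familiar critical value of about 1.645
```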
In the above example, if the mean of the sample is much larger than 2% (say 3.5%), then we reject the null hypothesis. The alternative hypothesis (mean >2%) is accepted, which confirms that the average daily return of the stocks is indeed above 2%.
However, if the mean of the sample is not significantly greater than 2% (and remains at, say, around 2.2%), then we CANNOT reject the null hypothesis. The challenge is how to decide on such close-range cases. To draw a conclusion from selected samples and results, a level of significance has to be determined, which enables a conclusion to be made about the null hypothesis. The alternative hypothesis enables establishing the level of significance, or the “critical value” concept, for deciding on such close-range cases.
According to the textbook standard definition, “A critical value is a cutoff value that defines the boundaries beyond which less than 5% of sample means can be obtained if the null hypothesis is true. Sample means obtained beyond a critical value will result in a decision to reject the null hypothesis." In the above example, if we have defined the critical value as 2.1%, and the calculated mean comes to 2.2%, then we reject the null hypothesis. A critical value establishes a clear demarcation about acceptance or rejection.
Step 3: Calculate the Statistic
This step involves calculating the required figure(s), known as test statistics (such as the mean, z-score and p-value), for the selected sample. (We'll get to these in a later section.)
Step 4: Reach a Conclusion
With the computed value(s), decide on the null hypothesis. If the probability of getting the observed sample mean is less than 5%, then the conclusion is to reject the null hypothesis. Otherwise, retain the null hypothesis.
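The four steps can be sketched as one small function. Note that the sample figures used below for the school example (a sample mean of 7.4 and a standard deviation of 1.2) are hypothetical values chosen purely for illustration:

```python
from math import sqrt
from statistics import NormalDist

def one_tailed_z_test(sample_mean, pop_mean, pop_std, n, alpha=0.05):
    """Steps 3 and 4: compute the test statistic, then decide on H0.

    Tests H0: mean = pop_mean against H1: mean > pop_mean.
    Returns (z, p_value, reject_h0).
    """
    z = (sample_mean - pop_mean) / (pop_std / sqrt(n))  # Step 3: test statistic
    p_value = 1 - NormalDist().cdf(z)                   # right-tail probability
    return z, p_value, p_value < alpha                  # Step 4: decision

# School example: 30 students averaging 7.4 against the reported mean of 7.
# The sample mean of 7.4 and standard deviation of 1.2 are hypothetical.
z, p, reject = one_tailed_z_test(7.4, 7.0, 1.2, 30)
print(f"z = {z:.3f}, p = {p:.4f}, reject H0: {reject}")
```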
Types of Errors
There can be four possible outcomes in sample-based decision-making, with regard to the correct applicability to the entire population:

| | Decision to Retain | Decision to Reject |
|---|---|---|
| Applies to entire population | Correct | Incorrect (Type I error, alpha) |
| Does not apply to entire population | Incorrect (Type II error, beta) | Correct |
The “Correct” cases are the ones where the decisions taken on the samples are truly applicable to the entire population. The cases of errors arise when one decides to retain (or reject) the null hypothesis based on the sample calculations, but that decision does not really apply for the entire population. These cases constitute Type I (alpha) and Type II (beta) errors, as indicated in the table above.
Selecting the correct critical value allows limiting Type I (alpha) errors to an acceptable range.
Alpha denotes the error at the chosen level of significance and is determined by the researcher. To maintain the standard 5% significance level for probability calculations, alpha is retained at 0.05.
According to the applicable decisionmaking benchmarks and definitions:
 “This (alpha) criterion is usually set at 0.05 (α = 0.05), and we compare the alpha level to the p-value. When the probability of a Type I error is less than 5% (p < 0.05), we decide to reject the null hypothesis; otherwise, we retain the null hypothesis.”
 The technical term used for this probability is the p-value. It is defined as “the probability of obtaining a sample outcome, given that the value stated in the null hypothesis is true. The p-value for obtaining a sample outcome is compared to the level of significance.”
 A Type II error, or beta error, is defined as “the probability of incorrectly retaining the null hypothesis, when in fact it is not applicable to the entire population.”
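A quick simulation can make the Type I error concrete: if we repeatedly sample from a population where the null hypothesis is actually true and test at alpha = 0.05, roughly 5% of those tests will (wrongly) reject it. A minimal sketch in Python, assuming a normally distributed population:

```python
import random
from math import sqrt

# Simulate repeated two-tailed z-tests on samples drawn from a population
# where H0 is actually true (mean = 0, std = 1). The fraction of (wrongful)
# rejections should approximate the chosen alpha of 5%.
random.seed(42)
n, trials, z_crit = 30, 10_000, 1.96
rejections = 0
for _ in range(trials):
    sample = [random.gauss(0, 1) for _ in range(n)]
    z = (sum(sample) / n) / (1 / sqrt(n))  # test statistic under H0
    if abs(z) > z_crit:
        rejections += 1  # a Type I error: H0 was true but got rejected
print(f"Observed Type I error rate: {rejections / trials:.3f}")  # close to 0.05
```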
A few more examples will demonstrate this and other calculations.
Example 1
A monthly income investment scheme exists that promises variable monthly returns. An investor will invest in it only if he is assured of an average $180 monthly income. He has a sample of 300 months’ returns, which has a mean of $190 and a standard deviation of $75. Should he invest in this scheme?
Let’s set up the problem. The investor will invest in the scheme if he is assured of the desired $180 average return.
H_{0}: Null Hypothesis: mean = 180
H_{1}: Alternative Hypothesis: mean > 180
Method 1: Critical Value Approach
Identify a critical value X_{L} for the sample mean that is large enough to reject the null hypothesis – i.e., reject the null hypothesis if the sample mean >= critical value X_{L}.
P(Type I alpha error) = P(reject H_{0} given that H_{0} is true)
= P(sample mean >= X_{L} given that mean = 180) = alpha
This occurs when the sample mean exceeds the critical limit X_{L}.
Taking alpha = 0.05 (i.e., a 5% significance level), Z_{0.05} = 1.645 (from the Z-table or normal distribution table).
=> X_{L} = 180 + 1.645 * (75 / sqrt(300)) = 187.12
Since the sample mean (190) is greater than the critical value (187.12), the null hypothesis is rejected, and the conclusion is that average monthly return is indeed greater than $180, so the investor can consider investing in this scheme.
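A minimal sketch of this critical value calculation in Python (standard library only):

```python
from math import sqrt
from statistics import NormalDist

# Example 1, Method 1: critical value for the sample mean
pop_mean, pop_std, n = 180, 75, 300
z_crit = NormalDist().inv_cdf(0.95)          # 1.645 at the 5% significance level
x_l = pop_mean + z_crit * pop_std / sqrt(n)  # critical sample mean X_L
print(f"Critical value X_L = {x_l:.2f}")     # ~187.12
print("Reject H0" if 190 >= x_l else "Retain H0")  # observed sample mean is 190
```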
Method 2: Using Standardized Test Statistics
One can also use standardized value z.
Test Statistic, Z = (sample mean – population mean) / (stddev / sqrt(no. of samples))
Z = (190 – 180) / (75 / sqrt(300)) = 2.309
The rejection region at the 5% significance level is Z > Z_{0.05} = 1.645.
Since Z = 2.309 is greater than 1.645, the null hypothesis can be rejected, with the same conclusion as above.
Method 3: Pvalue Calculation
We aim to identify P (sample mean >= 190, when mean = 180).
= P(Z >= (190 – 180) / (75 / sqrt(300)))
= P(Z >= 2.309) = 0.0105 = 1.05%
Referring to the following table for interpreting p-values, this result provides strong evidence that the average monthly return is indeed higher than $180:
| p-value | Inference |
|---|---|
| less than 1% | Confirmed evidence supporting the alternative hypothesis |
| between 1% and 5% | Strong evidence supporting the alternative hypothesis |
| between 5% and 10% | Weak evidence supporting the alternative hypothesis |
| greater than 10% | No evidence supporting the alternative hypothesis |
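All three methods lead to the same decision. As a quick check, the test statistic and p-value from Methods 2 and 3 can be reproduced in a few lines of Python:

```python
from math import sqrt
from statistics import NormalDist

# Example 1, Methods 2 and 3: standardized test statistic and its p-value
z = (190 - 180) / (75 / sqrt(300))   # test statistic, ~2.309
p_value = 1 - NormalDist().cdf(z)    # right-tail probability of observing z
print(f"z = {z:.3f}, p-value = {p_value:.4f}")
print("Reject H0" if z > 1.645 else "Retain H0")  # critical value comparison
```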
Example 2
A new stockbroker (XYZ) claims that his brokerage fees are lower than those of your current stockbroker (ABC). Data available from an independent research firm indicates that the mean and stddev of the brokerage bills of all ABC clients are $18 and $6, respectively.
A sample of 100 clients of ABC is taken and brokerage charges are calculated with the new rates of XYZ broker. If the mean of the sample is $18.75 and stddev is the same ($6), can any inference be made about the difference in the average brokerage bill between ABC and XYZ broker?
H_{0}: Null Hypothesis: mean = 18
H_{1}: Alternative Hypothesis: mean ≠ 18 (This is what we want to prove.)
Rejection region: Z <= -Z_{0.025} or Z >= Z_{0.025} (assuming a 5% significance level, split as 2.5% on each side).
Z = (sample mean – mean) / (stddev / sqrt (no. of samples))
= (18.75 – 18) / (6/(sqrt(100)) = 1.25
This calculated Z value falls between the two limits defined by:
 -Z_{0.025} = -1.96 and Z_{0.025} = 1.96.
This concludes that there is insufficient evidence to infer that there is any difference between the rates of your existing broker and the new broker.
Alternatively, the p-value = P(Z < -1.25) + P(Z > 1.25)
= 2 * 0.1056 = 0.2112 = 21.12%, which is greater than 0.05 or 5%, leading to the same conclusion.
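The same two-tailed calculation can be sketched in Python:

```python
from math import sqrt
from statistics import NormalDist

# Example 2: two-tailed test of whether XYZ's mean brokerage bill differs from $18
z = (18.75 - 18) / (6 / sqrt(100))            # test statistic = 1.25
p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-tailed p-value, ~0.21
print(f"z = {z:.2f}, p-value = {p_value:.4f}")
print("Reject H0" if p_value < 0.05 else "Retain H0")
```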
Criticism Points for the Hypothesis Testing Method:
 A statistical method based on assumptions
 Error-prone, as detailed in terms of alpha and beta errors
 Interpretation of the p-value can be ambiguous, leading to confusing results
The Bottom Line
Hypothesis testing allows a mathematical model to validate a claim or idea with a certain confidence level. However, like the majority of statistical tools and models, it is bound by a few limitations. The use of this model for making financial decisions should be considered with a critical eye, keeping all dependencies in mind. Alternative methods such as Bayesian inference are also worth exploring for similar analysis.