Standard Error of the Mean vs. Standard Deviation: What's the Difference?

Standard Error of the Mean vs. Standard Deviation: An Overview

Standard deviation (SD) measures the amount of variability, or dispersion, of individual data values around the mean. SD is a frequently cited statistic in many applications, from math and statistics to finance and investing.

Standard error of the mean (SEM) measures how far the sample mean (average) of the data is likely to be from the true population mean. For any sample larger than one, the SEM is always smaller than the SD.

Standard deviation and standard error are both used in statistical studies, including those in finance, medicine, biology, engineering, and psychology. In these studies, the SD and the estimated SEM are used to present the characteristics of sample data and explain statistical analysis results.

However, even researchers occasionally confuse the SD and the SEM. They should remember that the SD and the SEM involve different statistical inferences, each with its own meaning. SD describes the dispersion of individual data values around the sample mean. In other words, SD indicates how well the mean represents the sample data.

The SEM, by contrast, involves statistical inference based on the sampling distribution: it is the SD of the theoretical distribution of sample means (the sampling distribution).

Key Takeaways

  • Standard deviation (SD) measures the dispersion of a dataset relative to its mean.
  • SD is used frequently in statistics, and in finance is often used as a proxy for the volatility or riskiness of an investment.
  • The standard error of the mean (SEM) measures how much discrepancy is likely in a sample’s mean compared with the population mean.
  • The SEM takes the SD and divides it by the square root of the sample size.
  • The SEM is always smaller than the SD (except for a sample of one, when the two are equal).


Calculating SD and SEM

$$
\begin{aligned}
&\text{standard deviation } \sigma = \sqrt{ \frac{ \sum_{i=1}^{n} \left( x_i - \bar{x} \right)^2 }{ n - 1 } } \\
&\text{variance} = \sigma^2 \\
&\text{standard error } \left( \sigma_{\bar{x}} \right) = \frac{\sigma}{\sqrt{n}} \\
&\textbf{where:} \\
&\bar{x} = \text{the sample's mean} \\
&n = \text{the sample size}
\end{aligned}
$$

Standard Deviation

The formula for the SD requires a few steps:

  1. First, square the difference between each data point and the sample mean, then sum those squared differences.
  2. Next, divide that sum by the sample size minus one, which is the variance.
  3. Finally, take the square root of the variance to get the SD.
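The three steps above can be sketched in Python. The sample values here are hypothetical, chosen only to illustrate the arithmetic:

```python
import math

# Hypothetical sample data (assumed for illustration)
data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
n = len(data)
mean = sum(data) / n

# Step 1: sum of squared differences from the sample mean
squared_diffs = sum((x - mean) ** 2 for x in data)

# Step 2: divide by n - 1 to get the sample variance
variance = squared_diffs / (n - 1)

# Step 3: the square root of the variance is the SD
sd = math.sqrt(variance)

print(round(sd, 4))
```

The `n - 1` denominator (Bessel's correction) matches the formula above and is what Python's built-in `statistics.stdev` uses as well.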

Standard Error of the Mean

SEM is calculated simply by taking the standard deviation and dividing it by the square root of the sample size.
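That one-step calculation looks like this in Python, using an assumed sample of 25 values:

```python
import math
import statistics

# Hypothetical sample of 25 observations (assumed data)
sample = [float(x) for x in range(1, 26)]

n = len(sample)
sd = statistics.stdev(sample)   # sample SD (n - 1 denominator)
sem = sd / math.sqrt(n)         # SEM = SD divided by sqrt(sample size)

print(sd, sem)
```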

Standard error gives the accuracy of a sample mean by measuring the sample-to-sample variability of the sample means. The SEM describes how precise the mean of the sample is as an estimate of the true mean of the population.

As the sample size grows, the SEM decreases even though the SD does not: with more data, the sample mean estimates the true mean of the population with greater precision.

Increasing the sample size does not systematically make the SD larger or smaller; the sample SD simply becomes a more accurate estimate of the population SD.

A sampling distribution is a probability distribution of a sample statistic taken from a greater population. Researchers typically use sample data to estimate the population data, and the sampling distribution explains how the sample mean will vary from sample to sample. The standard error of the mean is the standard deviation of the sampling distribution of the mean.
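This relationship can be checked with a small simulation: draw many samples from a hypothetical population, compute each sample's mean, and compare the SD of those means with the theoretical SEM. The population parameters and sample counts below are assumptions chosen for illustration:

```python
import random
import statistics

random.seed(0)

n = 50          # size of each sample
trials = 2000   # number of samples drawn

# Draw many samples from a hypothetical normal population (mu=0, sigma=1)
# and record each sample's mean
sample_means = [
    statistics.fmean(random.gauss(0, 1) for _ in range(n))
    for _ in range(trials)
]

# The SD of the sample means approximates the theoretical SEM, sigma / sqrt(n)
empirical_sem = statistics.stdev(sample_means)
theoretical_sem = 1 / n ** 0.5

print(empirical_sem, theoretical_sem)
```

The two printed values should be close, illustrating that the SEM is the SD of the sampling distribution of the mean.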

Standard Error and Standard Deviation in Finance

In finance, the SEM of an asset's daily return measures the accuracy of the sample mean as an estimate of the long-run (persistent) mean daily return of the asset.

On the other hand, the SD of the return measures deviations of individual returns from the mean. Thus, SD is a measure of volatility and can be used as a risk measure for an investment.

Assets with greater day-to-day price movements have a higher SD than assets with lesser day-to-day movements. Assuming a normal distribution, around 68% of daily price changes are within one SD of the mean, with around 95% of daily price changes within two SDs of the mean.
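The 68% and 95% figures can be verified by simulating normally distributed daily returns. The mean and SD below are hypothetical, not drawn from any real asset:

```python
import random

random.seed(1)

# Simulate hypothetical daily returns: normal, mean 0.05%, SD 1%
mu, sigma = 0.0005, 0.01
returns = [random.gauss(mu, sigma) for _ in range(100_000)]

# Fraction of returns falling within one and two SDs of the mean
within_1sd = sum(abs(r - mu) <= sigma for r in returns) / len(returns)
within_2sd = sum(abs(r - mu) <= 2 * sigma for r in returns) / len(returns)

print(within_1sd, within_2sd)   # roughly 0.68 and 0.95
```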

How Are Standard Deviation and Standard Error of the Mean Different?

Standard deviation measures the variability of individual data points around the mean. Standard error of the mean measures how precisely the sample mean estimates the population mean.

Is the Standard Error Equal to the Standard Deviation?

No, the standard deviation (SD) will always be larger than the standard error (SE). This is because the standard error divides the standard deviation by the square root of the sample size.

If the sample size is one, they will be the same, but a sample size of one is rarely useful.

How Can You Compute the SE From the SD?

If you have the standard error (SE) and want to compute the standard deviation (SD) from it, simply multiply it by the square root of the sample size.
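A quick sketch of that conversion, using assumed values for the SE and sample size:

```python
import math

# Hypothetical reported values: SE of 0.5 from a sample of 36 observations
se, n = 0.5, 36

# Recover the SD by reversing the SEM formula: SD = SE * sqrt(n)
sd = se * math.sqrt(n)

print(sd)   # 3.0
```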

Why Do We Use Standard Error Instead of Standard Deviation?

Standard error is more commonly used when constructing confidence intervals or evaluating statistical significance, because it measures the precision of the sample mean rather than the spread of the raw data.

What Is the Empirical Rule, and How Does It Relate to Standard Deviation?

A normal distribution is also known as a standard bell curve, since it looks like a bell in graph form. According to the empirical rule, or the 68-95-99.7 rule, 68% of all data observed under a normal distribution will fall within one standard deviation of the mean. Similarly, 95% falls within two standard deviations and 99.7% within three.

The Bottom Line

Investors and analysts measure standard deviation as a way to estimate the potential volatility of a stock or other investment. It helps determine the level of risk involved for the investor. When reading an analyst's report, the level of riskiness of an investment may be labeled "standard deviation."

Standard error of the mean is an indication of the likely accuracy of a number. The larger the sample size, the more accurate the number should be.
