What Is Autoregressive Conditional Heteroskedasticity (ARCH)?

Autoregressive conditional heteroskedasticity (ARCH) is a statistical model used to analyze volatility in time series and to forecast future volatility. In the financial world, ARCH modeling is used to estimate risk by providing a model of volatility that more closely resembles real markets. ARCH modeling captures the fact that periods of high volatility tend to be followed by more high volatility and periods of low volatility by more low volatility.

In practice, this means that volatility or variance tends to cluster, which is useful to investors when considering the risk of holding an asset over different time periods. The ARCH concept was developed by economist Robert F. Engle in 1982. ARCH immediately improved financial modeling, work for which Engle won the 2003 Nobel Memorial Prize in Economic Sciences.

Key Takeaways

  • Autoregressive conditional heteroskedasticity (ARCH) models measure volatility and forecast it into the future.
  • ARCH models are dynamic, meaning they respond to changes in the data.
  • ARCH models are used by financial institutions to model asset risks over different holding periods.
  • There are many different types of ARCH models that alter the weightings to provide different views of the same data set.

Understanding Autoregressive Conditional Heteroskedasticity (ARCH)

The autoregressive conditional heteroskedasticity (ARCH) model was designed to improve econometric models by replacing assumptions of constant volatility with conditional volatility. Engle and others working on ARCH models recognized that past financial data influences future data; that is the meaning of "autoregressive." The "conditional heteroskedasticity" portion of ARCH refers to the observable fact that volatility in financial markets is not constant: all financial series, whether stock market values, oil prices, exchange rates, or GDP, go through periods of high and low volatility. Economists have always known that the amount of volatility changes, but they often held it constant for a given period because they lacked a better option when modeling markets.
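The idea can be sketched in a few lines of code. In the simplest version, ARCH(1), today's conditional variance is a constant plus a weight on yesterday's squared return, so a large shock yesterday raises the chance of a large move today. The parameter values below (omega, alpha) are made up for illustration and are not from the article:

```python
# A minimal ARCH(1) sketch with hypothetical parameters omega and alpha.
# Conditional variance of today's return depends on yesterday's squared return:
#     sigma_t^2 = omega + alpha * r_{t-1}^2
import math
import random

def simulate_arch1(n, omega=0.1, alpha=0.5, seed=42):
    """Simulate n returns from an ARCH(1) process with Gaussian shocks."""
    rng = random.Random(seed)
    returns, r_prev = [], 0.0
    for _ in range(n):
        cond_var = omega + alpha * r_prev ** 2       # conditional variance
        r_prev = math.sqrt(cond_var) * rng.gauss(0.0, 1.0)
        returns.append(r_prev)
    return returns

def autocorr(xs, lag=1):
    """Sample autocorrelation of a series at the given lag."""
    mean = sum(xs) / len(xs)
    num = sum((xs[i] - mean) * (xs[i - lag] - mean) for i in range(lag, len(xs)))
    den = sum((x - mean) ** 2 for x in xs)
    return num / den

returns = simulate_arch1(5000)
sq = [r * r for r in returns]
# Squared returns are positively autocorrelated (volatility clusters),
# while the returns themselves are approximately uncorrelated.
print(autocorr(sq, 1), autocorr(returns, 1))
```

Running this shows the signature of conditional heteroskedasticity described above: the returns look unpredictable, but their squared values are correlated over time, meaning calm and turbulent periods each tend to persist.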

ARCH provided a model that economists could use instead of a constant or average for volatility. ARCH models can also recognize and forecast beyond the volatility clusters that appear in the market during periods of financial crisis or other black swan events. For example, volatility for the S&P 500 was unusually low for an extended period during the bull market from 2003 to 2007, before spiking to record levels during the market correction of 2008. This uneven and extreme variation is difficult for models built around a single, constant standard deviation to handle. ARCH models, however, allow the variance to change over time, which corrects the statistical problems that arise from this type of pattern in the data. Moreover, volatility clustering is most pronounced in relatively high-frequency data, such as daily or intraday returns, which makes ARCH models well suited to financial data. As a result, ARCH models have become mainstays for modeling financial markets that exhibit volatility (which is really all financial markets in the long run).

The Ongoing Evolution of ARCH Models

According to Engle's 2003 Nobel lecture, he developed ARCH in response to Milton Friedman's conjecture that it was uncertainty about what the rate of inflation would be, rather than the actual rate of inflation, that harms an economy. Once the model was built, it proved to be invaluable for forecasting all manner of volatility. ARCH has spawned many related models that are also widely used in research and in finance, including GARCH, EGARCH, STARCH, and others.

These variant models often change the weightings and conditioning assumptions in order to achieve more accurate forecasting ranges. For example, EGARCH, or exponential GARCH, gives greater weight to negative returns in a data series, since these have been shown to create more volatility. Put another way, volatility in a price chart increases more after a large drop than after a large rise of the same size. Most ARCH model variants fit their weightings to past data by maximum likelihood estimation. The result is a dynamic model that can forecast near-term volatility with increasing accuracy, which is why so many financial institutions use these models.
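The maximum likelihood step mentioned above can be sketched as follows. This is an illustrative toy, assuming an ARCH(1) model with Gaussian shocks and made-up true parameters; it recovers the weightings by a crude grid search, whereas a production fit would use a numerical optimizer (for example, via the `arch` Python package):

```python
# Toy maximum-likelihood fit of an ARCH(1) model to simulated data.
# The "true" parameters (omega=0.2, alpha=0.4) are hypothetical.
import math
import random

def simulate_arch1(n, omega, alpha, seed=7):
    """Simulate n returns from an ARCH(1) process with Gaussian shocks."""
    rng = random.Random(seed)
    returns, r_prev = [], 0.0
    for _ in range(n):
        var = omega + alpha * r_prev ** 2
        r_prev = math.sqrt(var) * rng.gauss(0.0, 1.0)
        returns.append(r_prev)
    return returns

def neg_log_likelihood(returns, omega, alpha):
    """Gaussian negative log-likelihood of an ARCH(1) model."""
    nll, r_prev = 0.0, 0.0
    for r in returns:
        var = omega + alpha * r_prev ** 2            # conditional variance
        nll += 0.5 * (math.log(2 * math.pi * var) + r * r / var)
        r_prev = r
    return nll

returns = simulate_arch1(4000, omega=0.2, alpha=0.4)

# Crude grid search over (omega, alpha) in steps of 0.05; a real fit
# would use a numerical optimizer, but the principle is identical:
# pick the weightings that make the observed data most likely.
best = min(
    ((o / 20, a / 20) for o in range(1, 20) for a in range(0, 19)),
    key=lambda p: neg_log_likelihood(returns, p[0], p[1]),
)
print(best)  # the estimate should land near the true (0.2, 0.4)
```

Because the likelihood rewards parameter values under which calm stretches get low predicted variance and turbulent stretches get high predicted variance, the fitted weightings adapt to the data, which is what makes these models dynamic.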