What Is Value at Risk (VaR)?
Value at risk (VaR) is a statistic that quantifies the extent of possible financial losses within a firm, portfolio, or position over a specific time frame. This metric is most commonly used by investment and commercial banks to determine the extent and probabilities of potential losses in their institutional portfolios.
Risk managers use VaR to measure and control the level of risk exposure. One can apply VaR calculations to specific positions or whole portfolios or use them to measure firm-wide risk exposure.
- Value at risk (VaR) is a way to quantify the risk of potential losses for a firm or an investment.
- This metric can be computed in several ways, including the historical, variance-covariance, and Monte Carlo methods.
- Investment banks commonly apply VaR modeling to firm-wide risk due to the potential for independent trading desks to unintentionally expose the firm to highly correlated assets.
Understanding Value at Risk (VaR)
VaR modeling determines the potential for loss in the entity being assessed and the probability that the defined loss will occur. One measures VaR by assessing the amount of potential loss, the probability of occurrence for the amount of loss, and the timeframe.
A financial firm, for example, may determine that an asset has a 3% one-month VaR of 2%, representing a 3% chance of the asset declining in value by 2% during the one-month time frame. Put another way, a decline of that size would be expected in roughly one month out of every 33.
Using a firm-wide VaR assessment allows for the determination of the cumulative risks from aggregated positions held by different trading desks and departments within the institution. Using the data provided by VaR modeling, financial institutions can determine whether they have sufficient capital reserves in place to cover losses or whether higher-than-acceptable risks require them to reduce concentrated holdings.
There are three main ways of computing VaR. The first is the historical method, which looks at one's prior returns history and orders them from worst losses to greatest gains—following from the premise that past returns experience will inform future outcomes.
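The historical method can be sketched in a few lines. This is a minimal illustration, not a production risk model: the return series here is synthetic stand-in data, and the function name `historical_var` is our own label for the percentile lookup the paragraph describes.

```python
import numpy as np

# Hypothetical daily returns standing in for a real price history.
rng = np.random.default_rng(42)
returns = rng.normal(loc=0.0005, scale=0.01, size=1000)

def historical_var(returns, confidence=0.95):
    """Historical VaR: sort observed returns and read the loss at the
    (1 - confidence) percentile, expressed as a positive loss fraction."""
    return -np.percentile(returns, 100 * (1 - confidence))

var_95 = historical_var(returns, confidence=0.95)
print(f"1-day 95% historical VaR: {var_95:.2%}")
```

Because the method only reorders past observations, it makes no distributional assumption, but it can say nothing about losses larger than any the sample happens to contain.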
The second is the variance-covariance method. Rather than assuming the past will inform the future, this method instead assumes that gains and losses are normally distributed. This way, potential losses can be framed in terms of standard deviation events from the mean.
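Under the normality assumption, the variance-covariance VaR reduces to a mean, a standard deviation, and a z-score. A minimal sketch (again with synthetic returns and an illustrative function name):

```python
from statistics import NormalDist

import numpy as np

# Hypothetical daily returns standing in for a real price history.
rng = np.random.default_rng(42)
returns = rng.normal(loc=0.0005, scale=0.01, size=1000)

def parametric_var(returns, confidence=0.95):
    """Variance-covariance VaR: assume returns are normally distributed and
    frame the loss as a standard-deviation move below the mean."""
    mu = returns.mean()
    sigma = returns.std(ddof=1)
    z = NormalDist().inv_cdf(1 - confidence)  # about -1.645 at 95% confidence
    return -(mu + z * sigma)  # expressed as a positive loss fraction

var_95 = parametric_var(returns, confidence=0.95)
print(f"1-day 95% parametric VaR: {var_95:.2%}")
```

The convenience of a closed-form answer comes at the cost noted later in this article: a normal curve assigns very little probability to the extreme tail events that matter most.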
A final approach to VaR is to conduct a Monte Carlo simulation. This technique uses computational models to simulate projected returns over hundreds or thousands of possible iterations. It then reads off the loss at a chosen probability level, say the worst 5% of outcomes, to reveal the potential impact.
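The simulation approach can be sketched as follows. The normal return model, the drift and volatility figures, and the function name are all illustrative assumptions; a real implementation would substitute whatever stochastic model the desk considers appropriate.

```python
import numpy as np

def monte_carlo_var(mu, sigma, confidence=0.95, n_sims=100_000, seed=0):
    """Monte Carlo VaR: simulate many possible returns, then read the loss
    at the chosen tail probability, expressed as a positive loss fraction."""
    rng = np.random.default_rng(seed)
    simulated = rng.normal(mu, sigma, n_sims)  # one simulated return per iteration
    return -np.percentile(simulated, 100 * (1 - confidence))

# Hypothetical daily mean return and volatility, for illustration only.
var_95 = monte_carlo_var(mu=0.0005, sigma=0.01, confidence=0.95)
print(f"1-day 95% Monte Carlo VaR: {var_95:.2%}")
```

With a simple normal model the simulated answer converges to the variance-covariance result; the method earns its keep when the return model is too complex for a closed-form percentile.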
Example of Problems with Value at Risk (VaR) Calculations
There is no standard protocol for the statistics used to determine asset, portfolio, or firm-wide risk. Statistics pulled arbitrarily from a period of low volatility, for example, may understate the potential for risk events to occur and the magnitude of those events. Risk may be further understated using normal distribution probabilities, which rarely account for extreme or black-swan events.
Another problem is that VaR states the minimum loss expected at a given confidence level, not the worst possible outcome. For example, a one-day VaR at 95% confidence with a 20% loss threshold represents an expectation of losing at least 20% on one of every 20 days, on average. In this calculation, a loss of 50% on such a day still validates the risk assessment.
The financial crisis of 2008 exposed these problems, as relatively benign VaR calculations understated the potential occurrence of risk events posed by portfolios of subprime mortgages. Risk magnitude was also underestimated, which resulted in extreme leverage ratios within subprime portfolios. As a result, the underestimations of occurrence and risk magnitude left institutions unable to cover billions of dollars in losses as subprime mortgage values collapsed.