Value at risk, often referred to as VaR, measures the potential loss of an investment or a portfolio of investments over a given time period. Financial institutions use VaR to gauge the cash reserves they need to cover potential portfolio losses. VaR has become a supplement to the more traditional risk measure of volatility.

A VaR measurement has three components: a time frame, a confidence level, and a loss amount (sometimes expressed as a loss percentage). Together these frame the VaR risk-assessment question: at a 95% or 99% confidence level, what is the maximum amount (or percentage) the investment can be expected to lose over the next day, week, month, or year?

There are three methods for calculating VaR. The historical method applies statistical analysis to actual past returns, ranking them and reading the loss at the chosen confidence level directly from the data. The variance-covariance method also uses statistical analysis, but assumes returns are normally distributed in the classic bell-shaped curve defined by the average return and its standard deviation. The Monte Carlo method develops a model of future investment prices, simulates many possible outcomes, and applies statistical analysis to the simulated data to determine the worst-case loss at the chosen confidence level.
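To make the three methods concrete, here is a minimal sketch of each in Python. The function names, the synthetic return series, and the choice of a normal model for the Monte Carlo simulation are illustrative assumptions, not a prescribed implementation; real portfolios would use observed return histories and a pricing model appropriate to the assets.

```python
import numpy as np
from statistics import NormalDist

def historical_var(returns, confidence=0.95):
    """Historical method: rank past returns and read the loss at the
    (1 - confidence) quantile directly from the data."""
    return -np.percentile(returns, 100 * (1 - confidence))

def parametric_var(returns, confidence=0.95):
    """Variance-covariance method: assume returns are normally distributed,
    characterized by their mean and standard deviation."""
    mu = returns.mean()
    sigma = returns.std(ddof=1)
    z = NormalDist().inv_cdf(1 - confidence)  # e.g. about -1.645 at 95%
    return -(mu + sigma * z)

def monte_carlo_var(mu, sigma, confidence=0.95, n_sims=100_000, seed=0):
    """Monte Carlo method: simulate many future returns from a model
    (here, an assumed normal model), then take the empirical quantile."""
    rng = np.random.default_rng(seed)
    simulated = rng.normal(mu, sigma, n_sims)
    return -np.percentile(simulated, 100 * (1 - confidence))

# Illustrative synthetic daily returns (mean 0.05%, volatility 1% per day).
rng = np.random.default_rng(42)
daily_returns = rng.normal(0.0005, 0.01, 50_000)

print(f"Historical 95% VaR:  {historical_var(daily_returns):.4%}")
print(f"Parametric 95% VaR:  {parametric_var(daily_returns):.4%}")
print(f"Monte Carlo 95% VaR: {monte_carlo_var(0.0005, 0.01):.4%}")
```

Each function returns VaR as a positive loss fraction: a value of 0.016 at 95% confidence means that, on 95% of days, the daily loss should not exceed 1.6%. Because the synthetic data here are drawn from a normal distribution, all three estimates agree closely; on real return series, which tend to have fatter tails than the normal curve, the historical and Monte Carlo estimates can differ noticeably from the variance-covariance one.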