The minimum number of simulations that should be run for a reasonably accurate value at risk (VaR) assessment is generally considered to be 1,000, but the industry standard is to run a minimum of 10,000 simulations.

The Monte Carlo method for assessing VaR is a variation of the historical returns method, one that relies on random number generation. The main advantage of this method is that it typically considers a much broader range of possible results than the historical method does, providing a more accurate assessment of total risk. Advocates of the historical method argue that actual historical results provide a more realistic assessment of probable risk levels, even though they may not encompass all possible scenarios.

VaR is a risk management assessment tool that was developed to augment the traditional risk measure of volatility. The problem with volatility measures is that they do not usually distinguish between good volatility and bad volatility; volatility is not truly a risk if it acts to increase the value of an investment. VaR instead focuses risk assessment on a single question: what is the maximum loss, or temporary drawdown, that can reasonably be expected to occur? For example, while it is theoretically possible to experience a 100% loss on a purchase of stock shares in General Motors, that is not a realistic possibility. VaR has become a widely used method for risk assessment in major financial service and investment firms.

VaR measures potential losses of either an individual asset or an entire portfolio of investments over a given period of time and with a specified level of confidence. The level of confidence is essentially a probability measure. For example, if the VaR calculation of an investment asset is $1,000 for a period of one month with a 95% level of confidence, that means that there is only a 5% probability of experiencing a loss greater than $1,000 within the time frame of one month. VaR calculations can specify any level of confidence, but they are most typically run for confidence levels of 90%, 95% or 99%.
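To make that interpretation concrete, the sketch below converts an assumed mean and standard deviation of returns into a dollar VaR under a simplifying normal-distribution assumption; the function name and the input figures are hypothetical.

```python
def parametric_var(position_value, mean_return, stdev_return, z_score=1.645):
    """Dollar VaR for one period, assuming normally distributed returns.
    A z-score of 1.645 corresponds to a 95% confidence level."""
    # The loss level exceeded with probability (1 - confidence).
    return position_value * (z_score * stdev_return - mean_return)

# Hypothetical position: $20,000, with a 1% mean and 5% standard
# deviation of monthly returns.
var_95 = parametric_var(20_000, 0.01, 0.05)  # roughly a $1,445 loss threshold
```

Under these assumed figures, there would be only a 5% probability of losing more than about $1,445 in a month.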

The three primary methods used for calculating VaR are the historical method, the variance-covariance method and the Monte Carlo simulation method. The historical method uses actual historical returns on an investment asset as its input, reordering them from the worst loss outcomes to the best profit outcomes. The result usually resembles a typical statistical bell curve, showing higher probability for the more frequently occurring returns and the lowest probability for the least common returns.
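The reordering step can be sketched in a few lines; the return series and position size below are made-up numbers for illustration.

```python
def historical_var(returns, position_value, confidence=0.95):
    """Historical VaR: order observed returns from worst to best and
    read off the loss at the (1 - confidence) percentile."""
    ordered = sorted(returns)  # worst losses first
    cutoff = int(round((1 - confidence) * len(ordered)))
    return -ordered[cutoff] * position_value  # loss as a positive dollar amount

# Twenty hypothetical monthly returns for a $10,000 position.
monthly_returns = [-0.12, -0.08, -0.05, -0.03, -0.01, 0.00, 0.01, 0.02,
                   0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 0.08, 0.09,
                   0.10, 0.11, 0.12, 0.15]
var_95 = historical_var(monthly_returns, 10_000)  # second-worst month: an $800 loss
```

With only twenty observations the 95th-percentile cutoff lands on the second-worst month, which illustrates why the historical method needs a long return series to be reliable.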

Instead of actual historical returns, the Monte Carlo method uses a random number generator to produce a range of possible investment return outcomes. A potential weakness of the method is that the particular set of randomly generated numbers in any single run can skew the overall results, which is why it is recommended to run at least 1,000 simulations. Each simulation produces different results, but a higher number of simulations yields a smaller average variation between runs.
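A rough sketch of the idea, using only Python's standard library: simulated returns are drawn from an assumed normal distribution, and the run-to-run variation of the resulting VaR figure is compared at two simulation counts. The distribution parameters and position size are assumptions for illustration.

```python
import random
import statistics

def monte_carlo_var(position_value, mean_return, stdev_return,
                    confidence=0.95, n_sims=10_000, seed=None):
    """Monte Carlo VaR sketch: draw simulated returns from an assumed
    normal distribution, then read the loss at the chosen percentile."""
    rng = random.Random(seed)
    simulated = sorted(rng.gauss(mean_return, stdev_return)
                       for _ in range(n_sims))
    cutoff = int(round((1 - confidence) * n_sims))
    return -simulated[cutoff] * position_value

# VaR for a hypothetical $10,000 position, repeated over ten runs
# at two different simulation counts.
small_runs = [monte_carlo_var(10_000, 0.01, 0.05, n_sims=1_000, seed=s)
              for s in range(10)]
large_runs = [monte_carlo_var(10_000, 0.01, 0.05, n_sims=50_000, seed=s)
              for s in range(10)]

# The spread between runs shrinks as the simulation count grows.
spread_small = statistics.stdev(small_runs)
spread_large = statistics.stdev(large_runs)
```

The 1,000-simulation runs disagree with one another noticeably more than the 50,000-simulation runs do, which is the practical reason for the higher industry-standard simulation counts mentioned above.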