What Is Risk Analysis?
The term risk analysis refers to the assessment process that identifies the potential for any adverse events that may negatively affect organizations and the environment. Risk analysis is commonly performed by corporations (banks, construction groups, health care, etc.), governments, and nonprofits. Conducting a risk analysis can help organizations determine whether they should undertake a project or approve a financial application, and what actions they may need to take to protect their interests. This type of analysis facilitates a balance between risks and risk reduction. Risk analysts often work in tandem with forecasting professionals to minimize future unforeseen negative effects.
- Risk analysis seeks to identify, measure, and mitigate various risk exposures or hazards facing a business, investment, or project.
- Quantitative risk analysis uses mathematical models and simulations to assign numerical values to risk.
- Qualitative risk analysis relies on a person's subjective judgment to build a theoretical model of risk for a given scenario.
- Risk analysis is often both an art and a science.
Understanding Risk Analysis
Risk assessment enables corporations, governments, and investors to assess the probability that an adverse event might negatively impact a business, economy, project, or investment. Assessing risk is essential for determining how worthwhile a specific project or investment is and the best process(es) to mitigate those risks. Risk analysis provides different approaches that can be used to assess the risk and reward tradeoff of a potential investment opportunity.
A risk analyst starts by identifying what could potentially go wrong. These negatives must be weighed against a probability metric that measures the likelihood of the event occurring.
Finally, risk analysis attempts to estimate the extent of the impact that will be made if the event happens. Many risks that are identified, such as market risk, credit risk, currency risk, and so on, can be reduced through hedging or by purchasing insurance.
Almost all large businesses require at least some form of risk analysis. For example, commercial banks need to properly hedge the foreign exchange exposure of overseas loans, while large department stores must factor in the possibility of reduced revenues due to a global recession. It is important to note that risk analysis allows professionals to identify and mitigate risks, but not to avoid them completely.
Types of Risk Analysis
Risk analysis can be quantitative or qualitative.
Quantitative Risk Analysis
Under quantitative risk analysis, a risk model is built using simulation or deterministic statistics to assign numerical values to risk. Inputs, which are mostly assumptions and random variables, are fed into the risk model.
For any given range of inputs, the model generates a range of outputs or outcomes. Risk managers analyze the model's output using graphs, scenario analysis, and/or sensitivity analysis in order to decide how to mitigate and deal with the risks.
A Monte Carlo simulation can be used to generate a range of possible outcomes of a decision made or action taken. The simulation is a quantitative technique that calculates results for the random input variables repeatedly, using a different set of input values each time. The resulting outcome from each input is recorded, and the final result of the model is a probability distribution of all possible outcomes.
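As a rough illustration, a Monte Carlo simulation of this kind can be sketched in a few lines of Python. The profit model below, its input distributions, and every parameter value are hypothetical assumptions chosen only for demonstration, not figures from any real business:

```python
import random
import statistics

def simulate_profit():
    """One trial of a hypothetical profit model with uncertain inputs."""
    unit_price = random.gauss(10.0, 1.0)    # assumed: normally distributed selling price
    unit_cost = random.gauss(6.0, 0.5)      # assumed: normally distributed unit cost
    units_sold = random.randint(800, 1200)  # assumed: uniformly distributed demand
    return (unit_price - unit_cost) * units_sold

random.seed(42)  # fixed seed so the run is reproducible
outcomes = [simulate_profit() for _ in range(10_000)]

# Summarize the resulting probability distribution of outcomes.
mean = statistics.mean(outcomes)
stdev = statistics.stdev(outcomes)
print(f"mean profit:        {mean:,.0f}")
print(f"standard deviation: {stdev:,.0f}")
```

Each call to `simulate_profit` draws a fresh set of input values, and the 10,000 recorded results approximate the distribution of all possible outcomes that the paragraph above describes.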
The outcomes can be summarized on a distribution graph showing some measures of central tendency such as the mean and median, and assessing the variability of the data through standard deviation and variance. The outcomes can also be assessed using risk management tools such as scenario analysis and sensitivity tables. A scenario analysis shows the best, middle, and worst outcome of any event. Separating the different outcomes from best to worst provides a reasonable spread of insight for a risk manager.
For example, an American company that operates on a global scale might want to know how its bottom line would fare if the currencies of select countries strengthen. A sensitivity table shows how outcomes vary when one or more random variables or assumptions are changed.
Elsewhere, a portfolio manager might use a sensitivity table to assess how changes to the different values of each security in a portfolio will impact the variance of the portfolio. Other types of risk management tools include decision trees and break-even analysis.
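The portfolio case can be illustrated with a minimal sensitivity table in Python for a hypothetical two-asset portfolio. The volatilities and correlation below are assumed values picked only for demonstration:

```python
def portfolio_variance(w, vol_a=0.20, vol_b=0.10, corr=0.3):
    """Variance of a two-asset portfolio with weight w in asset A.

    vol_a, vol_b, and corr are assumed (illustrative) annualized
    volatilities and correlation, not real market figures.
    """
    return ((w * vol_a) ** 2 + ((1 - w) * vol_b) ** 2
            + 2 * w * (1 - w) * corr * vol_a * vol_b)

# Sensitivity table: how portfolio variance responds as the
# weight in asset A is shifted from 0% to 100%.
print(f"{'weight A':>8} | {'portfolio variance':>18}")
for w in [0.0, 0.25, 0.5, 0.75, 1.0]:
    print(f"{w:>8.2f} | {portfolio_variance(w):>18.4f}")
```

Reading down the table shows the manager how sensitive total portfolio variance is to a single allocation decision, which is exactly the role a sensitivity table plays in the analysis above.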
Qualitative Risk Analysis
Qualitative risk analysis is an analytical method that does not rely on numerical or quantitative ratings to identify and evaluate risks. Instead, qualitative analysis involves a written definition of the uncertainties, an evaluation of the extent of the impact (if the risk ensues), and countermeasure plans in the case of a negative event occurring.
Examples of qualitative risk tools include SWOT analysis, cause and effect diagrams, decision matrix, game theory, etc. A firm that wants to measure the impact of a security breach on its servers may use a qualitative risk technique to help prepare it for any lost income that may occur from a data breach.
While most investors are concerned about downside risk, mathematically, risk is the variance to both the downside and the upside.
Example of Risk Analysis: Value at Risk (VaR)
Value at risk (VaR) is a statistic that measures and quantifies the level of financial risk within a firm, portfolio, or position over a specific time frame. This metric is most commonly used by investment and commercial banks to determine the extent and occurrence ratio of potential losses in their institutional portfolios. Risk managers use VaR to measure and control the level of risk exposure. One can apply VaR calculations to specific positions or whole portfolios or to measure firm-wide risk exposure.
VaR is calculated by ranking historical returns from worst to best, with the assumption that past returns will recur, especially where risk is concerned. As a historical example, let's look at the Nasdaq 100 ETF, which trades under the symbol QQQ (sometimes called the "cubes") and which started trading in March 1999. If we calculate each daily return, we produce a rich data set of more than 1,400 points. The worst returns are generally visualized on the left, while the best returns are placed on the right.
On more than 250 days, the ETF's daily return fell between 0% and 1%. In January 2000, the ETF returned 12.4%. But there were also points at which the ETF posted losses. At its worst, the ETF ran daily losses of 4% to 8%. This period is referred to as the ETF's worst 5%. Based on these historic returns, we can assume with 95% certainty that the ETF's worst daily loss won't exceed 4%. So if we invest $100, we can say with 95% certainty that our losses in a single day won't go beyond $4.
One important thing to keep in mind is that VaR doesn't provide analysts with absolute certainty. Instead, it's an estimate based on probabilities. The confidence level rises if we narrow the focus to the worst 1% of returns: the Nasdaq 100 ETF's losses of 7% to 8% represent the worst 1% of its performance. We can thus say with 99% certainty that a $100 investment will lose us at most $7 in a single day.
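The historical VaR calculation described above can be sketched in Python. The daily returns below are randomly generated stand-ins for a real price history, so the printed figures will not match the article's 4% and 7% thresholds:

```python
import random

# Hypothetical daily returns standing in for a real return history;
# in practice you would compute these from actual closing prices.
random.seed(7)
daily_returns = [random.gauss(0.0005, 0.015) for _ in range(1400)]

def historical_var(returns, confidence=0.95):
    """Historical VaR: the loss not exceeded at the given confidence level."""
    ranked = sorted(returns)                      # worst returns first
    cutoff = int((1 - confidence) * len(ranked))  # index of the tail percentile
    return -ranked[cutoff]                        # report the loss as a positive number

var_95 = historical_var(daily_returns, 0.95)
var_99 = historical_var(daily_returns, 0.99)
print(f"95% one-day VaR: {var_95:.2%}")
print(f"99% one-day VaR: {var_99:.2%}")
```

Sorting the returns and reading off the 5% (or 1%) cutoff mirrors the worst-to-best ranking described above: with 95% confidence, a one-day loss should not exceed `var_95` of the amount invested.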
Limitations of Risk Analysis
Risk is a probabilistic measure and so can never tell you for sure what your precise risk exposure is at a given time, only what the distribution of possible losses is likely to be if and when they occur. There are also no standard methods for calculating and analyzing risk; even VaR can be approached in several different ways. Risk is often assumed to follow normal distribution probabilities, which in reality rarely occur and cannot account for extreme or "black swan" events.
The financial crisis of 2008, for example, exposed these problems as relatively benign VaR calculations greatly understated the potential occurrence of risk events posed by portfolios of subprime mortgages.
Risk magnitude was also underestimated, which resulted in extreme leverage ratios within subprime portfolios. As a result, the underestimations of occurrence and risk magnitude left institutions unable to cover billions of dollars in losses as subprime mortgage values collapsed.