Foggy Day

Value at risk can only quantify what you can see.


Value at risk represents the potential impact of risk on the value of a project or portfolio of projects.  Risk is monitored at specific points in the project life cycle.  Monitoring includes an evaluation of the potential cost impact of each risk that has not been fully remediated, weighted by its probability of occurrence.  Where the cost impact of a risk is above the program's risk tolerance, specific remediation plans are established to reduce the estimated impact.  The value at risk metric provides the team with a tool for prioritizing risks and risk management activities.


In its simplest form the equation for value at risk (for IT projects) is:

Value at Risk = Probability of Risk * Estimated Cost Impact of Un-remediated Risk

A more precise view of value at risk would reflect the time value of money using the following formula:

Value at Risk = Probability * Net Present Value of Estimated Cost Impact of Un-remediated Risk

The formula could be made to reflect the variability of the cost impact over time; however, on projects of less than a year this is not usually necessary.
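The two formulas above can be sketched in code.  The following is a minimal Python sketch, assuming a single annual discount rate and an impact expected at a known point in the future; the function and parameter names are illustrative, not part of any standard:

```python
def value_at_risk(probability, cost_impact):
    """Simple value at risk: probability-weighted cost of an un-remediated risk."""
    return probability * cost_impact

def npv_value_at_risk(probability, cost_impact, annual_rate, years_until_impact):
    """Value at risk reflecting the time value of money.

    The estimated cost impact is discounted back to present value
    before being weighted by the probability of occurrence.
    """
    present_value = cost_impact / (1 + annual_rate) ** years_until_impact
    return probability * present_value

# A hypothetical risk: a 25% chance of a $200,000 impact expected in
# two years, discounted at 5% per year.
simple = value_at_risk(0.25, 200_000)                   # 50000.0
discounted = npv_value_at_risk(0.25, 200_000, 0.05, 2)  # smaller, once discounted
```

Note that for projects of less than a year, as the text says, the simple form is usually sufficient because discounting changes the result very little.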


The value at risk metric has three primary uses.  All uses fall in the category of risk management.

The first use is perhaps the most important: quantifying risks and linking them to the value of the project makes the potential impact of each risk, and of the overall portfolio of risks, easily understandable.  Expressing the impact of risks in dollars, euros, rupees, pounds or any other currency, and maintaining the analysis as the project evolves, puts risk in a language everyone on the project can understand.

The second use of this technique is as a tool for prioritizing risks so that resources can be targeted at the risks with the greatest weighted potential to affect value.
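Prioritization amounts to sorting risks by their weighted potential impact.  A short sketch, with hypothetical risk names and figures invented purely for illustration:

```python
# Each risk: (name, probability of occurrence, estimated cost impact)
risks = [
    ("Key vendor slips delivery", 0.30, 150_000),
    ("Data migration defects",    0.50,  40_000),
    ("Integration rework",        0.10, 400_000),
]

# Weight each risk by its probability, then rank highest value at risk first.
ranked = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)

for name, probability, impact in ranked:
    print(f"{name}: {probability * impact:,.0f}")
```

Here the vendor risk tops the list (0.30 × $150,000 = $45,000) even though the integration risk has the largest raw impact, which is exactly the point of weighting by probability.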

The third use is as a monitoring tool and, when combined with risk tolerance guides, as a tool to precipitate action.  For example, when used as a project or program metric, the value at risk can be monitored and reviewed at specific points of the life cycle.  Where the value at risk is above the program's or organization's risk tolerance, specific remediation plans can be established to reduce the estimated impact.
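The monitoring use can be sketched as a simple tolerance check performed at each review point.  The threshold and figures below are hypothetical:

```python
RISK_TOLERANCE = 25_000  # hypothetical program-level tolerance, in dollars

def needs_remediation_plan(probability, cost_impact, tolerance=RISK_TOLERANCE):
    """Flag a risk whose value at risk exceeds the program's risk tolerance."""
    return probability * cost_impact > tolerance

# Reviewed at a life-cycle checkpoint: a 40% chance of a $100,000 impact.
flagged = needs_remediation_plan(0.40, 100_000)  # True: 40,000 > 25,000
```

In practice the tolerance would come from the program or organization's risk guidelines rather than a constant in code, but the comparison is the same.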


There are several criticisms of value at risk; most of them focus on the step of quantifying risks.

The first criticism is that numbers do not cover everything; that is, not all risks can be easily quantified.  Risks driven by factors such as morale, economic and political volatility, emerging technologies and external market innovation are typically considered intangible risks.  I would suggest that almost anything can be measured (see Douglas Hubbard's book How to Measure Anything).  Intangible risks might be difficult to define in concrete, dollar terms, but rather than giving up, recognize that quantification is possible; it simply requires a greater degree of subjectivity, intuition and monitoring vigilance.

The second criticism of value at risk (or any risk quantification technique) is that it is hard to predict the unpredictable.  From Wikipedia, “The Black Swan Theory or Theory of Black Swan Events is a metaphor that encapsulates the concept that, the event is a surprise (to the observer) and has a major impact. After the fact, the event is rationalized by hindsight.”  The criticism is valid; value at risk can only quantify that which is knowable.  Risk management techniques are not one-time events.  Everyone involved or interested in the project must be constantly aware of the world in and around the project; when new risks begin to emerge, they need to be identified and evaluated.

A third criticism is that new risks are difficult to quantify because we have no historical data or experience with which to evaluate them.  The corollary to this criticism is that the analysis puts too much weight on the past, because it assumes that the past is an accurate predictor of the future.  Again, both are valid criticisms, but rather than arguments against trying to quantify risk, I would suggest that both are arguments for quantification, for collecting current data and for constant monitoring.

A final criticism is based on the mathematics of the value at risk formula.  A risk with a high probability and a low loss can have the same valuation as a risk with a low probability and a high potential loss.  This criticism is valid and related to the “numbers don’t cover everything” criticism noted earlier.  Even so, it is manageable: analysis of individual risks should never be reduced to a simple ranking.
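A quick numeric example makes the point.  The figures are invented, but they show how two very different risk profiles can produce the same value at risk:

```python
frequent_small = 0.90 * 10_000   # high probability, low loss
rare_large     = 0.09 * 100_000  # low probability, high loss

# Both work out to a value at risk of $9,000, yet the two risks
# would feel very different to the project team and may call for
# very different remediation strategies.
assert abs(frequent_small - rare_large) < 1e-6
```

This is why the ranking should be a starting point for discussion, not a substitute for judgment.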

Related or Variant Measures:

  • Risk matrix
  • Risk impact evaluations


As with the criticisms, there are a few potential issues with measuring risks; most are driven by human nature, so forewarned is forearmed.

The first issue is that when leveraging quantitative risk analysis there can be a temptation to over-interpret the data.  Psychologists have long known that humans have a great facility for recognizing patterns, even when they do not exist.  Combining the use of a diverse team with transparency across the entire risk analysis life cycle reduces the potential for succumbing to over-analysis and trips down blind alleys.

The second issue is that, since risk measurement and risk management are not a one-time affair, risks need to be monitored, assessed and avoided (and remediated when it makes sense) over the entire life of a project or program.  Reassessing the impact of a risk on the anticipated delivery value of the project requires time and effort, which some would argue is overhead not focused on delivering functionality.  The problem is that if risks transform into issues, it might have been better had the functionality not been delivered at all (an extreme case that hopefully never occurs).  Rather than an issue, this is a statement of fact: because we live in a dynamic environment, evolving risks require continual adjustment of your risk analysis, and value at risk is a tool that makes the output of the monitoring process visible and easily understandable.

The final issue is that estimating risk probabilities is difficult.  Human nature tends to drive perception into a binary on/off position.  Hans Peter Duerr, a successor to Werner Heisenberg (the discoverer of the famous Uncertainty Principle, a foundation of quantum science), once opined,

We want it to be either yes or no. But the truth is always somewhere on the way between yes and no.

Group consensus techniques like Delphi are useful for ensuring multiple points of view are involved in estimating probabilities, which helps keep binary perceptions at bay.