Nassim Taleb is not a fan of Value at Risk (VaR)
“You’re worse off relying on misleading information than on not having any information at all. If you give a pilot an altimeter that is sometimes defective he will crash the plane. Give him nothing and he will look out the window. Technology is only safe if it is flawless.”
This post is neither an all-out defense nor a vilification of VaR, but Taleb has made his position pretty clear! I will have a separate post trying to put some of Taleb’s strongest points forward. I’m not yet convinced it is more dangerous than useful.
Background to VaR
Value at Risk (VaR) attracts significant criticism in risk management circles. Many of these criticisms are valid – but are they targeting VaR itself, or just its most basic, flawed (and, unfortunately, common) implementation?
A key criticism is that “VaR intrinsically and dangerously underestimates tails”. Let’s unpack that. What makes VaR dangerous is a familiar bundle of simplifying assumptions: normal distributions, limited historical periods, parametric methods, and assumed independence. This combination systematically understates tail risks and creates false confidence in risk estimates.
But VaR doesn’t require these simplifying assumptions. Consider:
- Using empirical distributions (typically via bootstrapping) that capture actual observed tail behavior, or at least fitting distributions that better match higher moments (a minimal sketch follows this list)
- Including longer historical periods that incorporate significant stress events. (Estimating a 10-day VaR from just a year of history seems bizarre.)
- Applying Extreme Value Theory techniques for modeling beyond observed data
- Carefully understanding independence and dependence, including where dependence is created through the use of rolling periods. Estimating confidence intervals around your estimates is also useful.
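To make the first point concrete, here is a minimal sketch of historical-simulation VaR with a bootstrap confidence interval around it. The return series is simulated placeholder data and the function names are my own, not from any standard library.

```python
import numpy as np

rng = np.random.default_rng(42)

# Placeholder data: fat-tailed daily P&L (Student-t) standing in for a real history.
returns = 0.01 * rng.standard_t(df=4, size=2500)

def empirical_var(pnl, level=0.99):
    """Historical-simulation VaR: the loss at the chosen percentile of observed P&L."""
    return -np.quantile(pnl, 1 - level)

def bootstrap_var(pnl, level=0.99, n_boot=2000):
    """Resample the observed P&L with replacement to get a distribution of VaR
    estimates, giving a rough confidence interval around the point estimate."""
    estimates = np.array([
        empirical_var(rng.choice(pnl, size=len(pnl), replace=True), level)
        for _ in range(n_boot)
    ])
    return estimates.mean(), np.percentile(estimates, [2.5, 97.5])

point = empirical_var(returns)
boot_mean, (lo, hi) = bootstrap_var(returns)
print(f"99% 1-day VaR (historical simulation): {point:.4f}")
print(f"Bootstrap mean {boot_mean:.4f}, 95% CI [{lo:.4f}, {hi:.4f}]")
```

The width of that interval is itself informative: a tail percentile estimated from a couple of thousand observations is a noisy number, and treating it as precise is one of the ways false confidence creeps in.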
Other common criticisms include that VaR isn’t sub-additive and that it doesn’t consider the shape of risks or losses beyond the selected percentile.
Alternative measures like Tail VaR (TVaR, also called Conditional Tail Expectation or Expected Shortfall) are increasingly popular, including being incorporated into regulatory requirements. TVaR does require greater specification of the tail, but I see this as a feature, not a bug.
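For comparison, a minimal sketch of empirical VaR versus TVaR on two simulated P&L series with similar spread but different tails (placeholder data, illustrative only). The VaR numbers end up fairly close; TVaR is where the fat tail shows.

```python
import numpy as np

def var_and_tvar(pnl, level=0.99):
    """VaR is the loss at the chosen percentile; TVaR (Expected Shortfall) is the
    average of the worst (1 - level) share of outcomes."""
    losses = np.sort(-np.asarray(pnl))
    var = np.quantile(losses, level)
    n_tail = max(1, int(round((1 - level) * len(losses))))
    tvar = losses[-n_tail:].mean()
    return var, tvar

rng = np.random.default_rng(0)
normal_pnl = rng.normal(0.0, 0.01, size=100_000)
fat_pnl = 0.0058 * rng.standard_t(df=3, size=100_000)  # scaled so the std is also ~0.01

for name, pnl in [("normal", normal_pnl), ("fat-tailed t(3)", fat_pnl)]:
    var, tvar = var_and_tvar(pnl)
    print(f"{name:>15}: 99% VaR {var:.4f}, 99% TVaR {tvar:.4f}")
```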
The key questions for practitioners:
- What explicit and implicit assumptions are you making? Are you aware of the implicit ones (careful, they bite)?
- How do your distributional assumptions compare to empirical data?
- What history are you capturing, and what stress periods might you be missing?
- How are you modeling tail behavior beyond your observed data? (See the Extreme Value Theory sketch after this list.)
- What dependencies might break down in stress scenarios?
- Are you evaluating the stability and potential error in your estimates?
- What does your complementary stress and scenario testing process tell you about your risk measures?
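On the question of modeling tail behavior beyond your observed data, here is a minimal Extreme Value Theory sketch: a peaks-over-threshold fit of a Generalised Pareto Distribution using scipy. The loss series is a simulated placeholder and the 95% threshold is an arbitrary choice; in practice, threshold selection and fit diagnostics deserve real care.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(1)
returns = 0.01 * rng.standard_t(df=4, size=2500)  # placeholder P&L history
losses = -returns

# Peaks-over-threshold: fit a Generalised Pareto Distribution to exceedances
# over a high threshold (here the empirical 95th percentile of losses).
threshold = np.quantile(losses, 0.95)
exceedances = losses[losses > threshold] - threshold
shape, _, scale = genpareto.fit(exceedances, floc=0)

# Extrapolate a high quantile: P(L > threshold + y) = p_exceed * (1 - GPD_cdf(y)),
# solved here for the 99.5% loss level.
p_exceed = len(exceedances) / len(losses)
target = 0.995
y = genpareto.ppf(1 - (1 - target) / p_exceed, shape, loc=0, scale=scale)
print(f"EVT-based 99.5% loss estimate:  {threshold + y:.4f}")
print(f"Empirical 99.5% loss estimate:  {np.quantile(losses, target):.4f}")
```

The EVT estimate uses the shape of the whole upper tail rather than a single order statistic, which matters most exactly where the empirical quantile runs out of data.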
Some Deeper Problems
Having discussed basic VaR implementation issues, let’s explore more fundamental challenges – ones that even “better” implementations struggle with.
Gaming and Metric Manipulation
“When a measure becomes a target, it ceases to be a good measure” ~ Goodhart’s Law
Risk measures becoming targets fundamentally changes behavior. This isn’t always about deliberate manipulation – it’s about rational responses to incentives that can make the financial system less stable:
- Pegged currencies show deceptively low historical volatility while fundamental pressures build. Traders take on this risk because the VaR measure used understates it.
- Risk not captured by the measure becomes systematically under-priced. The opportunities where the risk measure understates the risk are exactly the ones the incentives push people to exploit!
- Short positions in written out-of-the-money (OTM) options are the classic example: small regular profits mask rare catastrophic losses, especially if no such loss shows up in the historical period used to calibrate VaR (option risk management can and often does go beyond simple VaR, though). See the sketch after this list.
- Complex products are designed to exploit specific weaknesses in risk measures
- “Risk-free arbitrage” often means risk has been moved somewhere the metrics don’t capture
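A stylised sketch of that short-option pattern (the premium, loss size and crash probability are invented for illustration, not market data). Because the crash probability sits just beyond the 99th percentile, even the “true” 99% VaR looks benign, a 250-day calibration window has a decent chance of containing no crash at all, and only a tail measure such as TVaR picks the risk up.

```python
import numpy as np

rng = np.random.default_rng(7)

# Stylised short OTM option book (illustrative numbers only): collect a small
# premium most days, take a large loss on the rare day the option finishes in the money.
premium, crash_loss, crash_prob = 0.10, 25.0, 0.004

crashes = rng.random(250_000) < crash_prob
pnl = np.where(crashes, -crash_loss, premium)       # long simulation ~ the "true" distribution
losses = np.sort(-pnl)

var_99 = np.quantile(losses, 0.99)                  # still a small *gain*: the crash sits beyond the percentile
tvar_99 = losses[-int(0.01 * len(losses)):].mean()  # average of the worst 1% of outcomes
print(f"99% VaR : {var_99:6.2f}")
print(f"99% TVaR: {tvar_99:6.2f}")

# And a 250-day historical window often contains no crash at all:
print(f"P(no crash in 250 days) = {(1 - crash_prob) ** 250:.2f}")
```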
Dynamic Estimation Challenges
Markets exhibit complex behaviors that make reliable estimation difficult:
- Volatility clustering and regime changes in both means and volatilities make naive distribution fitting and VaR estimation less accurate
- GARCH and similar models can help but require careful specification (a sketch follows this list)
- Longer data periods capture more regimes but with potential loss of relevance
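As one illustration of the specification point, a minimal sketch of a conditional-volatility VaR using a GARCH(1,1) fit from the third-party arch package (assumed to be installed). The return series is a simulated placeholder, so the fitted parameters are meaningless here; the point is the shape of the workflow, which differs from fitting one static distribution to the whole history.

```python
import numpy as np
from scipy.stats import norm
from arch import arch_model  # third-party package: pip install arch

rng = np.random.default_rng(3)
# Placeholder return series, scaled to percent (arch tends to prefer data in that range).
returns = 100 * 0.01 * rng.standard_t(df=5, size=2000)

# GARCH(1,1) with a constant mean: today's variance depends on yesterday's shock and variance.
model = arch_model(returns, mean="Constant", vol="GARCH", p=1, q=1)
result = model.fit(disp="off")

# One-day-ahead conditional volatility forecast, then a normal-quantile 99% VaR.
forecast = result.forecast(horizon=1)
sigma_next = float(np.sqrt(forecast.variance.values[-1, 0]))
mu = float(result.params["mu"])

var_99 = -(mu + sigma_next * norm.ppf(0.01)) / 100  # back to decimal returns
print(f"Conditional 99% 1-day VaR: {var_99:.4f}")
```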
Systemic Risk through Standardisation
When regulators standardise risk measurement:
- Institutions adopt similar risk management approaches
- Similar triggers/limits create correlated responses
- Market participants react similarly to breaches
- Diversification benefits (and liquidity!) disappear exactly when needed most
- The system becomes more fragile precisely because everyone is using the same risk measures
Expected vs Unexpected Loss
When measuring risk, the distinction between expected and unexpected losses is fundamental. Expected losses should be handled through pricing and provisions – you don’t hold capital against losses you expect.
Capital exists to protect against unexpected adverse outcomes. By “unexpected” I mean the difference between the loss level considered and the mean or expected value.
So should Value at Risk measure:
- Total potential losses from current value, or
- Deviations from expected outcomes (unexpected losses)?
When deriving or applying VaR, we must explicitly consider the treatment of expected values. This affects both the calculation and interpretation of your risk measure.
Consider two examples:
Example 1: Equity Risk: With 100 invested, your 99.5th percentile worst outcome over a year might be an ending value of 60. But if you expect to earn 10, is your VaR 40 or 50? Both are valid measures, but mean very different things for capital adequacy.
Example 2: Insurance Claims: If you expect 100m of claims but face 99.5th percentile potential claims of 150m, is your VaR 50m (above expected) or 150m (total)? Given insurance pricing should allow for expected claims, capital needs to focus on the unexpected component.
One reason this is often overlooked is that for short horizons (e.g. daily or even 10-day VaR) the mean or expected return is usually small enough to ignore in practice. This changes as the time period becomes longer. The mean generally grows linearly with time, while the standard deviation and many risk measures, provided the time periods are not perfectly dependent, grow less than linearly, stereotypically with sqrt(time) if the periods are independent (not an assumption to make loosely though!).
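A small numerical sketch of the two conventions and of how the gap between them grows with the horizon. The daily mean and volatility are made-up numbers, and the normal quantile is used only to show the scaling, not as an endorsement of normal VaR.

```python
import numpy as np
from scipy.stats import norm

# Illustrative daily P&L parameters (assumptions, not calibrated to anything).
mu_daily, sigma_daily, level = 0.0004, 0.01, 0.995
z = norm.ppf(1 - level)  # ~ -2.58

for days in (1, 10, 250):
    mu_T = mu_daily * days                 # mean grows linearly with the horizon
    sigma_T = sigma_daily * np.sqrt(days)  # std grows with sqrt(time) if days are independent
    q = mu_T + z * sigma_T                 # the 0.5th percentile outcome over the horizon
    var_total = -q                         # loss measured from today's value
    var_unexpected = -(q - mu_T)           # loss measured from the expected outcome
    print(f"{days:>3}d: VaR(total) = {var_total:.4f}, "
          f"VaR(unexpected) = {var_unexpected:.4f}, gap = mean = {mu_T:.4f}")
```

At one day the two definitions differ by a rounding error; at one year the gap is the whole expected return, which is exactly the ambiguity in the equity example above.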
Tail VaR (TVaR) handles this more elegantly. By taking the average of losses beyond your threshold, TVaR naturally incorporates the relationship between expected and unexpected components. The tail mean relative to the distribution mean becomes an inherent part of the measure rather than a definitional choice.
This isn’t just theoretical precision:
- Capital should protect against unexpected losses
- Provisions/pricing handle expected losses
- Risk measurement needs to align with this framework
- Different time horizons need consistent treatment, or at least everyone should be aware of why approximations are allowed and when they break down.
The key is being explicit about your treatment of expected values and ensuring consistency between your risk measures and their intended use.
A conclusion?
So is VaR useful or not? I still believe it can be, but it definitely presents dangers. Better understanding is the first step.