Why Most Cyber Risk Models Fail Before They Begin
The case for quantitative thinking in cyber risk

The post Why Most Cyber Risk Models Fail Before They Begin appeared first on Towards Data Science.

Quantitative risk modeling remains the exception in practice: PwC’s 2025 Global Digital Trust Insights Survey found that only 15% of organizations use quantitative risk modeling to a significant extent.
This article explores why traditional cyber risk models fall short and how lightweight statistical tools, such as probabilistic modeling, offer a better way forward.
The Two Schools of Cyber Risk Modeling
Information security professionals primarily use two different approaches to modeling risk during the risk assessment process: qualitative and quantitative.
Qualitative Risk Modeling
Imagine two teams assessing the same risk. One assigns it 4/5 for likelihood and 5/5 for impact; the other, 3/5 and 4/5. Both plot it on a matrix. But neither can answer the CFO’s question: “How likely is this to actually happen, and how much would it cost us?”
A qualitative approach assigns subjective risk values derived primarily from the assessor’s intuition. It generally classifies the likelihood and impact of a risk on an ordinal scale, such as 1-5.
The risks are then plotted in a risk matrix to understand where they fall on this ordinal scale.
Often, the two ordinal scales are multiplied together to help prioritize the most important risks based on probability and impact. At a glance, this seems reasonable as the commonly used definition for risk in information security is:
\[\text{Risk} = \text{Likelihood } \times \text{Impact}\]
From a statistical standpoint, however, qualitative risk modeling has some pretty important pitfalls.
The first is the use of ordinal scales. While assigning numbers to the ordinal scale gives the appearance of some mathematical backing to the modeling, this is a mere illusion.
Ordinal scales are simply labels — there is no defined distance between them. The distance between a risk with an impact of “2” and an impact of “3” is not quantifiable. Changing the labels on the ordinal scale to “A”, “B”, “C”, “D”, and “E” makes no difference.
This in turn means our formula for risk is flawed when using qualitative modeling. A likelihood of “B” multiplied by an impact of “C” is impossible to compute.
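A short sketch makes the problem concrete. The risk names and scores below are illustrative, not real data, but they show how multiplying ordinal labels collapses very different risks into the same number:

```python
# Why multiplying ordinal scores is misleading: the scores are
# labels, not measurements, so their products conflate risks that
# demand completely different responses. (Illustrative values.)

risks = {
    "Rare but catastrophic breach": (1, 5),   # (likelihood, impact)
    "Frequent minor phishing loss": (5, 1),
    "Moderate ransomware event":    (2, 3),
}

for name, (likelihood, impact) in risks.items():
    print(f"{name}: score = {likelihood * impact}")

# The rare-catastrophic and frequent-minor risks both score 5,
# even though one is an existential threat and the other a
# routine nuisance -- the ordinal product hides that distinction.
```

A 1×5 and a 5×1 risk land on the same cell of the matrix, yet no sensible budget would treat them the same way.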
The other key pitfall is modeling uncertainty. When we model cyber risks, we are modeling future events that are not certain. In fact, there is a range of outcomes that could occur.
Distilling cyber risks into single-point estimates (such as “20/25” or “High”) loses the important distinction between “a most likely annual loss of $1 million” and “a 5% chance of a loss of $10 million or more”.
Quantitative Risk Modeling
Imagine a team assessing a risk. They estimate a range of outcomes, from $100K to $10M. Running a Monte Carlo simulation, they derive a 10% chance of exceeding $1M in annual losses and an expected loss of $480K. Now when the CFO asks, “How likely is this to happen, and what would it cost?”, the team can respond with data, not just intuition.
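A minimal sketch of such a simulation is below. The parameters (a 30% annual event probability and a lognormal severity roughly spanning $100K-$10M) are assumptions chosen for illustration, not figures from the scenario above:

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials = 100_000

# Assumptions (illustrative): the loss event occurs in a given
# year with 30% probability, and severity is lognormal with a
# median around $800K, spanning roughly $100K to $10M.
p_event = 0.30
severity = rng.lognormal(mean=np.log(800_000), sigma=1.2, size=n_trials)
occurs = rng.random(n_trials) < p_event

# Annual loss is zero in years where the event does not occur.
annual_loss = np.where(occurs, severity, 0.0)

expected_loss = annual_loss.mean()
p_exceed_1m = (annual_loss > 1_000_000).mean()

print(f"Expected annual loss: ${expected_loss:,.0f}")
print(f"P(annual loss > $1M): {p_exceed_1m:.1%}")
```

Each simulated trial is one hypothetical year; aggregating 100,000 of them turns two subjective inputs (frequency and severity) into a full distribution of outcomes.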
This approach shifts the conversation from vague risk labels to probabilities and potential financial impact, a language executives understand.
If you have a background in statistics, one concept in particular should stand out here:
Likelihood.
Cyber risk modeling is, at its core, an attempt to quantify the likelihood of certain events occurring and the impact if they do. This opens the door to a variety of statistical tools, such as Monte Carlo Simulation, that can model uncertainty far more effectively than ordinal scales ever could.
Quantitative risk modeling uses statistical models to assign dollar values to loss and model the likelihood of these loss events occurring, capturing the future uncertainty.
While qualitative analysis might occasionally approximate the most likely outcome, it fails to capture the full range of uncertainty, such as rare but impactful events, known as “long tail risk”.
The loss exceedance curve plots the likelihood of exceeding a certain annual loss amount on the y-axis, and the various loss amounts on the x-axis, resulting in a downward sloping line.
Reading percentiles off the loss exceedance curve, such as the 5th percentile, mean, and 95th percentile, gives a 90% interval for the possible annual losses of a risk.
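The loss exceedance curve can be computed directly from simulated annual losses. The lognormal loss distribution below is an illustrative assumption; any simulated loss sample would work the same way:

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated annual losses (illustrative lognormal assumption).
losses = rng.lognormal(mean=np.log(500_000), sigma=1.0, size=50_000)

# Loss exceedance curve: P(annual loss > x) over a grid of
# thresholds x. Plotting exceedance against thresholds yields
# the downward-sloping curve described above.
thresholds = np.linspace(0, 5_000_000, 101)
exceedance = [(losses > t).mean() for t in thresholds]

# Read off the 90% interval from the 5th and 95th percentiles.
lo, hi = np.percentile(losses, [5, 95])
print(f"90% of simulated years fall between ${lo:,.0f} and ${hi:,.0f}")
print(f"P(loss > $1M) = {(losses > 1_000_000).mean():.1%}")
```

Because exceedance probabilities can only fall as the threshold rises, the curve is monotonically decreasing, which is what makes it easy to read “chance of exceeding $X” straight off the chart.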
A qualitative single-point estimate may land near the most likely outcome, depending on the accuracy of the assessor’s judgment, but only the quantitative approach expresses how far actual losses can stray from it.
Looking Outside Cyber Risk
To improve our risk models in information security, we need only look outward at the techniques used in other domains, where risk modeling has matured across finance, insurance, aerospace safety, and supply chain management.
Finance teams model and manage portfolio risk with probabilistic and Bayesian methods. Insurance teams price risk with mature actuarial models. The aerospace industry models the likelihood of system failures with probabilistic risk assessment. And supply chain teams model disruption risk with probabilistic simulations.
The tools exist. The math is well understood. Other industries have paved the way. Now it’s cybersecurity’s turn to embrace quantitative risk modeling to drive better decisions.
Key Takeaways
| Qualitative | Quantitative |
| --- | --- |
| Ordinal scales (1-5) | Probabilistic modeling |
| Subjective intuition | Statistical rigor |
| Single-point scores | Risk distributions |
| Heatmaps & color codes | Loss exceedance curves |
| Ignores rare but severe events | Captures long-tail risk |