Modern Risk Management – Presentable, Not Useful

As someone interested in the history, theory and practical reality of risk management, I find it frustrating to watch what seems like a growing inclination to quickly declare anything remotely difficult to measure as unmeasurable.

Unmeasurable is a convenient label that, once applied, allows many to end what should be a serious conversation about what it means to measure something – risk in particular.

Instead, it allows practitioners to quietly move the risk into the “soft” phenomenon bucket, where it is stripped of structure and subjected to vague claims about culture, leadership, intuition and, most unnervingly… judgement.

We are seeing, in real time, attempts to disguise what can only be considered professional abdication as sophisticated thinking.

Even in a technologically driven, data-rich world, where the “unmeasurable” frontier is clearly and constantly shrinking, the stubborn refusal to accept that there are very few genuine unmeasurables not only persists but has become almost formulaic.

For virtually anything that is genuinely hard to measure, some would have us simply conclude it is unmeasurable. For anything inherently complex that does not yield to lazy analysis, we are pushed to rebrand it as qualitative. When scenarios are characterized by ambiguity or uncertainty, we are encouraged to use checklists and heatmaps as proxies for deep thought.

Taken together, whether deliberate or not, this feels more and more like a systematic retreat from doing the actual work of understanding risk and what it means to manage it.

The fact that this has become the norm in professional risk circles, among mainstream voices – many of whom are credentialed and institutionally endorsed – makes it even more concerning.

To avoid the analytical rigour required – and quite often explicitly demanded – by regulators, boards and leadership teams, people in risk management roles conveniently grant anything difficult the status of unknowable or impossible and satisfy themselves with the comfort of abstraction.

This post is about how that happened, why it persists, and what it would take to do something about it.

The Choice We Made

The systems we are attempting to govern have become substantially more complex over the past two decades. Digital infrastructure today is deeply interconnected, supply chains span jurisdictions, and companies think nothing of partnering with organizations they have never met. On the other side of the equation, threat actors operate with a level of coordination and capability that would have seemed implausible to earlier generations of security practitioners.

In this complex environment, failure modes have multiplied almost exponentially, and risk management’s response has, in general terms, been to attempt to flatten that complexity.

As a profession, we have replaced careful reasoning about how losses actually unfold with artefacts designed to represent that reasoning. In a field once characterized by economic thinking about exposure, we now have scoring systems that produce obscure numbers without units, and we have replaced systems analysis with convenient, cookie-cutter category trees.

We have prioritized outputs that look structured even when the underlying thinking is fragile and, in many cases, indefensible.

But this was not inevitable. We chose this. At some point, the discipline, in large part, collectively decided that manageability was more important than accuracy, and that producing outputs an organization could easily absorb was more important than producing outputs that reflected reality.

And while those tradeoffs make sense in some contexts, over time they have stopped being deliberate tradeoffs and have become the norm.

Now, risk artefacts have become the doctrine, and approximations of risk and analytical rigour have become the standard. We have trained organizations and countless entrants to the field that “good” risk management looks like the ability to produce familiar outputs, regardless of whether those outputs are actually informing anything.

The Consultant-esque Model

Close to the heart of the problem is a version of risk management that dominates most large organizations today. It stands currently undefined as far as I can tell, but we can call it the consultant-esque model. I use this term not as an insult to consultants specifically, but because it describes how the dominant practices entered organizations and what they are deliberately optimized for.

In the consultant-esque model, risk is a combination of the following:

  • A taxonomy, or something to be categorised. You are told to identify it, name it, place it in the right part of the register, and then the classification becomes the output.
  • A scoring exercise where the identified risks are assigned likelihood and impact ratings, usually on qualitative scales and usually by the same person who identified them. The “risk score” is then treated as a measurement, when it is not.
  • A governance artefact. The key outputs of this model – the risk register, heat map and dashboard – exist primarily to demonstrate to boards and regulators that a process was followed. They have been designed to be seen, not used.
  • An evidence generator. In a system where the primary product is documentation, priority is placed on ensuring assessments are completed, controls are mapped, approvals are obtained and emails with the necessary attachments are sent. The entire risk management process has become, in essence, a production line meant to convey assurance.

This model won approval and endorsement due to its administrative convenience. Outside of the simplest use cases it is not particularly insightful, but it did not need to be given how easy it was to teach, standardise, audit and defend. Its ability to scale across organizations without requiring the people operating it to genuinely understand the risk landscape they are governing helped too. Ultimately, it was able to persist because it reduced the need for what is now commonly seen as uncomfortable judgement.

What is unfortunate is that most people operating inside this model are competent professionals simply doing what the role asks of them. But given the structural problem – organizational incentives optimize for presentability and convenience rather than insight and decision utility – the entrenched status quo will be difficult to change.

The Failures It Produces

The consultant-esque model has produced a predictable set of analytical failures. Over time I’ve written about a few of these individually, but for the first time I’ll be addressing them together.

Inherent risk. The practice of assessing exposure by imagining controls do not exist (or all somehow fail at the same time) has been made to sound not just rigorous but prudent. In practice, this approach describes a world that does not and could not exist. Real systems do not operate without controls; they fail in ways that are deeply influenced by which controls existed and how they degraded. Inherent risk does not provide a pre-control baseline. It is a fictional construct that tells us almost nothing about actual exposure, and the persistence of the concept says more about institutional conformity and governance aesthetics than it does about analytical value. The case against it is valid and holds up under scrutiny.

Positive risk. Rebranding opportunity as “risk” creates the impression that upside is something the risk function governs or manages. In practice, a team built around loss avoidance has no direct influence over benefit realization: it has no investment mandate, no operational authority, and no meaningful success metrics. Asking such a team to manage opportunity does not somehow expand its remit. Instead, it dilutes focus, for reasons that go deeper than semantics.

Checklist-driven assurance. In virtually every environment where evidence of activity is treated as evidence of capability, incentives flip and teams quickly learn to produce documentation rather than actually manage risk. This isn’t simply a case of incompetence or a lack of skill. Instead, what we have are incentive structures, feedback loops, and reporting dynamics that actively reward the appearance of security over its substance. This is an issue with deep organizational roots, one that I’ve previously explored here.

The risk score problem. The elevation of ordinal scales to the status of measurement is possibly the most pervasive failure in modern risk management. In elevating systems that produce numbers without underlying units (e.g. currency, time, lives), we have in effect created a measurement-styled object that does not actually measure anything. With ordinal scales, you cannot meaningfully add, multiply or average the resulting values [1]. An impact of “4” is not twice as bad as a “2”, yet these risk “scores” are routinely aggregated and presented on executive dashboards as if they represent a tangible volume of risk. Compounding the problem, ordinal scores create the appearance of consensus: a group may agree on a particular score, say a “4”, but walk away each thinking of very different outcomes (e.g. one week of downtime, $80K in lost sales, 7–9% customer churn).
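
To make the arithmetic problem concrete, here is a minimal sketch in Python – all labels and dollar figures are invented – of how ordinal scores hide disagreement and manufacture meaningless aggregates:

```python
# Illustrative only: the scores and dollar figures below are invented.
# Three assessors "agree" on an impact of 4, yet each has a very different
# loss in mind -- the ordinal label hides the disagreement entirely.
implied_meanings = {
    "assessor_a": {"score": 4, "meaning": "one week of downtime"},
    "assessor_b": {"score": 4, "meaning": "$80K in lost sales"},
    "assessor_c": {"score": 4, "meaning": "7-9% customer churn"},
}

# Ordinal arithmetic looks like measurement but isn't: a "4" is not twice
# a "2", so sums and averages of scores carry no unit and no meaning.
risk_scores = {"ransomware": 4, "phishing": 2}
print("average score:", sum(risk_scores.values()) / len(risk_scores))  # 3.0 of what?

# Contrast with unit-bearing estimates (hypothetical annualized dollar losses):
annualized_loss = {"ransomware": 500_000, "phishing": 60_000}
print("exposure ratio:", annualized_loss["ransomware"] / annualized_loss["phishing"])
# The score ratio (4/2 = 2x) and the exposure ratio (~8.3x) tell very
# different stories; only one of them has units a decision can act on.
```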

Each of these failures shares the same underlying structure. A tool or concept designed to approximate something useful was elevated to doctrine, abstracted away from the context that gave it meaning, and then treated as a substitute for the kind of analysis it was only ever meant to support.

Why It Persists

The persistence of flawed abstractions isn’t difficult to understand when you consider what they do for the organizations that use them.

Abstractions scale.

A methodology that requires deep contextual knowledge and difficult judgment cannot be replicated uniformly across a large organization. A methodology that can be reduced to a template, taught in a two-day course, and audited against a checklist – or better yet, a “leading practice” – can. The consultant-esque model wins on operational tractability, not analytical quality.

Abstractions protect hierarchy.

For many organizations, making risk a scoring exercise allows those scores to move up the org chart stripped of the reasoning behind them. Senior leaders are presented with dashboards and discouraged from looking for, or engaging with, the underlying analysis. This protects decision makers from uncomfortable details and insulates the risk function from being held accountable for specific judgements.

Abstractions survive audit.

External reviewers prioritize process compliance in their assessments. More often than not, the focus is on whether the register was completed, whether approvals were obtained, and whether the framework was applied consistently. The analytical soundness of the outputs is rarely assessed, and as a result, a well-documented risk program with low predictive validity can pass the same audit as a well-documented risk program that actually informs decisions.

Abstractions reduce discomfort.

Risk analysis, when done correctly, is designed to identify and communicate genuine uncertainty. The process of understanding, decomposing and analyzing a risk forces explicit statements about what we don’t know, what assumptions we’re making, and how wrong those assumptions could be. That can be uncomfortable, particularly for the analyst who has to make the estimate and for the audience who has to act on it. Abstractions, on the other hand, offer the appearance of certainty without requiring anyone to commit to it.

Unfortunately for those over-reliant on abstractions, real analysis does none of these things. It can be difficult to scale, difficult to audit, and uncomfortable to sit with. For these reasons, the profession has – rationally, even if not wisely – organized itself around the easier path.

What Other Disciplines Know

Not all disciplines that deal with complex, uncertain, and high-stakes environments have made the same choices.

Safety and reliability engineering does not pretend catastrophic failures are easily modeled. Rather, its practitioners use fault trees, bow-tie analysis, and probabilistic risk assessment to reason explicitly about failure modes and their interactions. They accept model uncertainty while still committing to estimates, because the goal is not the misguided attempt to eliminate uncertainty but to structure it well enough to make better decisions.
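
For flavour, here is a toy fault-tree calculation in Python. The probabilities are invented and the basic events are assumed independent – a sketch of the style of reasoning, not a claim about real PRA practice:

```python
# Toy fault tree: a "service outage" occurs if the primary path fails AND
# the failover fails. All probabilities are invented, and events are assumed
# independent -- real PRA work spends most of its effort questioning both.

def p_or(*probs):
    """P(at least one of several independent events occurs)."""
    result = 1.0
    for p in probs:
        result *= (1.0 - p)
    return 1.0 - result

def p_and(*probs):
    """P(all independent events occur)."""
    result = 1.0
    for p in probs:
        result *= p
    return result

# Basic events (illustrative annual probabilities):
primary_fails = p_or(0.05, 0.02)    # hardware fault OR misconfiguration
failover_fails = p_or(0.10, 0.03)   # untested runbook OR capacity shortfall

top_event = p_and(primary_fails, failover_fails)
print(f"P(service outage) ~ {top_event:.4f}")
# The number matters less than the structure: every input is visible,
# challengeable, and tied to a specific failure mode.
```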

Similarly, civil and structural engineering routinely works with imperfect information. Loads are modelled as distributions, and uncertainty is embedded in safety factors. The discipline has proven itself comfortable analyzing failure modes even when the precise probability of failure can’t be specified, and where judgment fills the gaps, that judgment is explicit, documented, and challengeable.

Actuarial science and the global insurance industry routinely manage long-tailed, infrequent, and hard-to-measure loss events. They do so through explicit models, documented assumptions, sensitivity analysis, and an honest treatment of the limits of historical data. Actuaries have never claimed their models are precise; the argument from the actuarial community is that a reliable model should be defensible.

Across these disciplines and others, the common thread is not a fixation on certainty. Rather, it is a pointed refusal to use uncertainty as an excuse to avoid reasoning or analysis, and a deliberate effort to build systems that support analysis in the presence of uncertainty.

Corporate risk management, particularly in cyber and operational risk, has refused to make that choice. We have consistently used complexity and uncertainty as reasons to retreat into abstraction, even while other disciplines have treated those same conditions as the reason for requiring rigorous methodology.

What Better Looks Like

This isn’t an argument for blind faith in quantification. There is no perfect method, and any serious quant will acknowledge that inputs can be biased, models can be made to lie, and precision not only can be faked but often is. The problem with the consultant-esque model, however, isn’t that it avoids numbers, and simply moving to a quantitative approach would not address the issue at hand. The problem is that the consultant-esque model has, to a concerning degree, allowed the profession to avoid thinking.

What we need, and would undoubtedly benefit from, is a clear set of principles rather than a replacement framework.

Loss-focused definitions: Risk is the probable frequency and probable magnitude of future loss [2]. It is not a category, a score, or a governance status. Every analysis should, at a minimum, begin with a concrete answer to: what actually happens if this risk materialises, and to whom?
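
As a minimal sketch of that definition in the spirit of FAIR, the following simulates annual loss as frequency times magnitude. The Poisson and lognormal parameters are invented purely for illustration:

```python
import math
import random

def poisson(lam: float) -> int:
    """Sample an event count using Knuth's algorithm."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

random.seed(42)
TRIALS = 100_000
FREQ = 0.8                     # assumed average loss events per year
MAG_MU, MAG_SIGMA = 11.0, 1.2  # assumed lognormal magnitude (median ~ $60K)

annual_losses = sorted(
    sum(random.lognormvariate(MAG_MU, MAG_SIGMA) for _ in range(poisson(FREQ)))
    for _ in range(TRIALS)
)
mean_loss = sum(annual_losses) / TRIALS
p95 = annual_losses[int(0.95 * TRIALS)]
print(f"expected annual loss ~ ${mean_loss:,.0f}")
print(f"95th percentile year ~ ${p95:,.0f}")
# Frequency and magnitude are separate, inspectable assumptions -- not a
# single opaque score.
```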

Context-aware prioritization: The impact magnitude of a risk cannot be assessed without knowing the environment within which it operates. To say a vulnerability is high or medium is to continue the abstraction. Context tells us whether it is more or less consequential given the system it affects, the data it touches, and the controls that sit around it. Any attempt to make risk-based decisions, particularly around prioritization, without this context is risk theatre.

Explicit assumptions: Every risk analysis and risk model involves assumptions about factors such as frequency, impact, control effectiveness, and human behaviour. Making these assumptions visible is not a shortcoming; it is the precondition for any serious conversation about the risks they relate to. Hidden assumptions cannot be challenged, only explicit ones can, and we elevate analysis, reporting and decision making by being transparent about them.

Distributional thinking: Single-point estimates are overprecise and almost always wrong. Questions such as “what is the likelihood?” do not serve us nearly as well as “what is the range, and where does the weight sit?” Communicating uncertainty honestly, as ranges without false precision, makes estimates more useful to the people who have to act on them.
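
One common way to operationalize this, borrowed from calibrated-estimation practice, is to convert a subject-matter expert’s 90% range into a lognormal distribution. The bounds below are invented:

```python
import math
import random

# Turning an expert's 90% confidence range into a distribution instead of a
# single point. The bounds are invented; 1.645 is the z-score that places
# 90% of a normal distribution between the lower and upper bound.
lo, hi = 20_000, 500_000  # "90% confident the loss falls in this range"
mu = (math.log(lo) + math.log(hi)) / 2
sigma = (math.log(hi) - math.log(lo)) / (2 * 1.645)

random.seed(1)
samples = sorted(random.lognormvariate(mu, sigma) for _ in range(50_000))
median = samples[len(samples) // 2]
p95 = samples[int(0.95 * len(samples))]
print(f"median ~ ${median:,.0f}, 95th percentile ~ ${p95:,.0f}")
# "Where does the weight sit?" is now answerable; a single-point estimate
# could never carry this information.
```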

Scenario reasoning: Before generalizing to a risk score, we need to build meaningful stories. What specific sequence of events leads to the loss? What actor does what, using which path, to cause which outcome? Well-crafted scenarios and robust scenario thinking expose assumptions that loose risk statements and aggregate ratings are designed to hide, and they generate insights that generic risk checklists cannot.
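
A hypothetical structure for capturing such a scenario is sketched below. Every field value is illustrative, but each field forces an explicit claim that a loose risk statement like “cyber risk: high” lets everyone skip:

```python
from dataclasses import dataclass, field

@dataclass
class LossScenario:
    actor: str              # who initiates the sequence of events
    path: str               # the route taken into the system
    outcome: str            # the concrete loss that materialises
    affected_party: str     # to whom the loss accrues
    assumptions: list[str] = field(default_factory=list)

# All values below are invented for illustration.
scenario = LossScenario(
    actor="financially motivated ransomware group",
    path="phished credentials -> VPN without MFA -> domain admin",
    outcome="five-day order-processing outage",
    affected_party="e-commerce revenue and fulfilment partners",
    assumptions=[
        "offline backups restore within five days",
        "attacker does not exfiltrate customer data",
    ],
)
print(scenario.outcome, "| assumes:", "; ".join(scenario.assumptions))
```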

Models as tools for disagreement: Models should not be seen as the answer to risk questions but as a structured way of having better arguments about what we do not know. The value of even a crude quantitative model is never the number it produces but the explicit structure it carries – structure that makes disagreements about inputs, assumptions, and scope visible and resolvable.
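
A deliberately crude example: two analysts disagree about exposure, and even a point-estimate model is enough to show where the disagreement actually lives. All figures are invented:

```python
# A crude model used not to "answer" the risk question but to locate a
# disagreement. The figures are invented for illustration.

def expected_annual_loss(events_per_year, avg_loss, control_effectiveness):
    """Simple point model: residual expected annual loss after controls."""
    return events_per_year * avg_loss * (1 - control_effectiveness)

# Both analysts agree on frequency and magnitude...
analyst_a = expected_annual_loss(2.0, 150_000, control_effectiveness=0.90)
analyst_b = expected_annual_loss(2.0, 150_000, control_effectiveness=0.50)

print(f"A: ${analyst_a:,.0f}  B: ${analyst_b:,.0f}")
# ...so the argument is really about control effectiveness -- a specific,
# testable claim rather than a vague dispute over a score.
```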

Culture expressed through decisions, not slogans: The term “risk culture” does an inordinate amount of heavy lifting; it is constantly invoked but almost never actually analyzed. Culture is undeniably behavioural, evidenced in the pattern of decisions made under pressure, the things that get escalated and the things that don’t, and the way incentives actually operate day to day. To assume that human behaviour, and by extension culture, cannot be measured, modeled or analyzed beyond a high/medium/low scale is flawed thinking.

The Cost of Comfort

Over time, despite what the name suggests, the risk management profession has not meaningfully made risk more manageable. Instead, by mistaking presentation for control, we have made it more presentable (and palatable).

We see the consequences of popular but fundamentally flawed approaches every time a failure becomes visible. Unfortunately, hindsight is 20/20, and outside of these cyclical incidents, many ignore the warning signs.

Time and again, organizations that were demonstrably compliant are shown to have had no accurate picture of their risk exposure. The most common risk management artefacts – risk registers and heat maps – despite being reviewed by the board and executive leadership, contained nothing that could have predicted what actually happened.

When we pay close attention, we see that these failures are not anomalies. They are the expected outputs of a system that has been optimized for defensibility rather than understanding. A regulatory, risk or audit function organized around evidence generation and collection will generate and collect evidence. It will not necessarily generate insight.

The hard truth is that comfort is expensive. The consultant-esque model feels safe because it is, by design, simple, vague and administratively defensible. But defensibility and preparedness are not the same thing, and an organization that can explain what it did in the right procedural terms still has no guarantee that it understood what it was governing.

Ultimately, difficulty is not impossibility, and regardless of how convenient pretending otherwise has become, that is worth remembering.

References

  1. D. Hubbard and D. Evans, Problems with scoring methods and ordinal scales in risk assessment
  2. FAIR Terminology 101 – Risk, Threat Event Frequency and Vulnerability

T. Cox, What’s Wrong with Risk Matrices?
