Cyber Risk as a Special Case in Operational Risk

The historical regulatory placement of cyber risk within the operational risk framework was a reasonable decision. I start there because the argument I make below isn’t that this classification choice was a mistake.

At some point in the past, regulators and standard-setters needed a home for an emerging and poorly understood risk type, and operational risk offered something valuable: an established taxonomy, a recognised supervisory logic, and a framework that sophisticated institutions already understood. Cyber risk benefited from this structure. But reasonable decisions made in one context do not always travel well into new ones.

Fifteen years of escalating cyber incidents, rapidly evolving threat actors, and the deepening digital interdependence of financial systems have changed what we’re dealing with. While the operational risk container is still there and still has utility, what we’re putting into it no longer fits the same way.

The framework was built for a different kind of risk

Operational risk, as the Basel framework defines it, is the risk of loss resulting from inadequate or failed internal processes, people, and systems, or from external events. The definition is deliberately broad. It is broad enough to absorb quite a lot, but its underlying logic was shaped around risks that are, in important ways, relatively knowable. You can study historical loss data. You can observe failure rates. You can design controls against reasonably stable threat profiles and expect those controls to hold their value over time.

But cyber risk doesn’t behave this way, and the differences aren’t cosmetic.

The tail risk problem is the one that deserves the most scrutiny. Operational risk management, as practised, is reasonably good at handling frequency. High-volume, low-severity events such as transaction errors, processing failures, and reconciliation breaks generate enough data that institutions can estimate their likelihood with some confidence. The statistical toolbox works.

But cyber risk is characterised by a very different loss distribution: low-frequency, potentially catastrophic events that sit precisely in the part of the distribution where historical data is thin and the statistical toolbox struggles. The 2016 Bangladesh Bank heist. NotPetya. The Colonial Pipeline attack. These weren’t foreseeable from base rates. They emerged from conditions that hadn’t existed before. A framework calibrated on operational loss history will systematically underestimate the tail.
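To make the thin-data problem concrete, the toy simulation below draws short loss histories from a heavy-tailed lognormal severity and compares the empirical 99th percentile each history yields against the distribution’s true 99th percentile. All parameters are illustrative assumptions, not calibrated to any real loss data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative assumption: a lognormal with a large sigma stands in for
# heavy-tailed cyber losses; these parameters are invented, not calibrated.
mu, sigma = 12.0, 2.5
true_p99 = np.exp(mu + sigma * 2.326)  # analytic 99th percentile

# An institution observing a short loss history estimates the tail empirically.
n_histories = 10_000
history_len = 50  # roughly a few years of recorded loss events
samples = rng.lognormal(mu, sigma, size=(n_histories, history_len))
empirical_p99 = np.quantile(samples, 0.99, axis=1)

# With thin data, the empirical estimate usually falls short of the truth.
share_underestimating = np.mean(empirical_p99 < true_p99)
print(f"True 99th percentile:            {true_p99:,.0f}")
print(f"Median empirical estimate:       {np.median(empirical_p99):,.0f}")
print(f"Histories that underestimate it: {share_underestimating:.0%}")
```

With heavy tails, a few dozen observed events rarely contain the extremes that drive the true percentile, so the empirical estimate typically lands well below it; that is the systematic underestimation described above.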

The attacker dynamic changes the problem

There’s a deeper issue that the operational risk framing doesn’t fully accommodate, and it’s this: operational risk is largely a function of internal factors. Processes fail. People make errors. Systems have weaknesses. These are things that, with sufficient investment and management attention, can be reduced over time. The exposure profile is, broadly, under the institution’s influence.

Cyber risk introduces an adversarial dimension that changes the problem in a fundamental way. The threat isn’t static; it actively adapts to your defences. A control that works today against a known technique may be deliberately circumvented tomorrow. Attackers study their targets, learn from failed attempts, trade intelligence, and iterate. Defenders operate in a landscape that their adversary is continuously reshaping.

This asymmetry has real consequences for how we think about risk exposure. In a standard operational risk model, stronger controls produce lower exposure, and that relationship is reasonably stable. In an adversarial environment, that relationship is conditional on the adversary’s next move.

A well-controlled institution remains exposed to threats that don’t yet exist in their current form. That’s not a failure of the institution. It’s a property of the problem.
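One way to see the asymmetry is a deliberately crude toy model, with control strengths invented purely for illustration: if an attacker can probe for and target the weakest control, the institution’s effective exposure is set by its minimum control strength, not its average.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy model, not an empirical claim: a defender maintains several
# attack-surface controls; an adaptive attacker targets the weakest one,
# while non-adversarial failures hit controls indiscriminately.
n_controls = 10
control_strength = rng.uniform(0.85, 0.99, size=n_controls)  # P(block attempt)

p_breach_random = 1 - control_strength.mean()   # non-adversarial baseline
p_breach_adaptive = 1 - control_strength.min()  # adversary picks the weakest

print(f"Average control strength:     {control_strength.mean():.2f}")
print(f"P(breach), random failure:    {p_breach_random:.2%}")
print(f"P(breach), adaptive attacker: {p_breach_adaptive:.2%}")
```

Raising the average strength of the control environment does little here; against an adaptive attacker, exposure tracks the single weakest point.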

What gets lost in the classification

None of this means operational risk tools are useless for cyber. Loss event tracking, scenario analysis, and control assessment all have a place. The problem isn’t the tools themselves. Rather, it’s what happens when the classification leads us to treat cyber risk as simply another operational risk sub-category to be managed through the same lens, measured by the same metrics, and reported through the same structures.

What gets lost is the ability to characterise the risk with the precision that consequential decisions require. When a board asks whether the institution’s cyber risk exposure is within appetite, an operational risk register doesn’t answer that question. When a regulator wants to understand whether an institution’s cyber posture is adequate relative to the threat environment it operates in, a heat map doesn’t answer that question either. These aren’t failures of execution; they’re failures of instrument.

A supplementary lens, not a replacement

The operational risk framework will likely remain the regulatory container for cyber risk for the foreseeable future, and that’s fine. The argument here isn’t for dismantling what exists, but for being clear-eyed about what it can and can’t do.

What’s needed alongside it is an approach capable of handling the tail, accounting for the adversarial dynamic, and producing outputs that can actually inform decisions. We need to move beyond an approach that simply documents that a risk exists. Cyber risk quantification, built around probabilistic estimation of loss event frequency and magnitude, is designed for precisely this. It doesn’t eliminate uncertainty. It makes uncertainty legible, bounded, and actionable in a way that qualitative categorisation cannot.
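As a sketch of what such quantification can look like in practice, the Monte Carlo simulation below combines a frequency estimate with a magnitude distribution to produce an annual loss distribution with decision-relevant outputs. The distributional choices and every parameter are illustrative assumptions, not calibrated estimates; FAIR-style models follow a broadly similar frequency-times-magnitude structure:

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative assumptions, not calibrated estimates:
# loss event frequency ~ Poisson, per-event loss magnitude ~ lognormal.
annual_freq = 0.8                # expected cyber loss events per year
loss_mu, loss_sigma = 13.0, 1.8  # log-scale severity parameters

n_years = 50_000
event_counts = rng.poisson(annual_freq, size=n_years)
annual_losses = np.array([
    rng.lognormal(loss_mu, loss_sigma, size=k).sum() for k in event_counts
])

# Outputs a board can act on: expected loss and tail exceedance, not a heat map.
expected_loss = annual_losses.mean()
p_exceed_10m = np.mean(annual_losses > 10_000_000)
var_99 = np.quantile(annual_losses, 0.99)
print(f"Expected annual loss:        {expected_loss:,.0f}")
print(f"P(annual loss > 10m):        {p_exceed_10m:.1%}")
print(f"99th percentile annual loss: {var_99:,.0f}")
```

The point is the shape of the output: a probability of exceeding a threshold the board cares about, and a tail percentile that can be compared against appetite, rather than a qualitative rating.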

Treating cyber as operational risk isn’t wrong. But if that’s where the analysis stops, we’re leaving the hardest part of the problem unaddressed.

