Whose Risk Is It? Why Cyber Risk Supervision Needs a Different Question

All supervisory approaches naturally reflect an underlying “why”. The choice and design of an approach, whether explicit or not, encodes assumptions about what matters, what’s at stake, and who needs protection.

For all the emphasis placed on cyber risk, that “why” is often left unspoken, even as it continues to quietly shape the entire supervisory exercise. It informs the tools used, the issues that trigger concern, and the conclusions drawn at the end of an examination.

At its core, this “why” depends on a prior judgment about where risk truly sits. Which, in turn, raises a more fundamental question: whose risk is a regulator really there to assess?

A review of most supervisory frameworks suggests the implicit answer is the institution’s. Examiners review internal risk assessments, evaluate control environments, and test the adequacy of the institution’s own cyber risk management practices. This produces a regulatory judgment that is largely derivative: does the institution understand its cyber risk exposure, and is it managing it appropriately?

There is some merit to this approach. But I’d argue it is largely disconnected from what supervisors themselves say the supervision function is meant to achieve. And in the cyber risk domain specifically, that disconnect has consequences that reach further than most practitioners recognize.

Two questions that look similar but aren’t

Somewhere along the way, cyber risk supervision inherited an assumption: that the regulator’s job is to assess an institution’s cyber risk. While this may seem like a reasonable position, I’d argue that it is the wrong framing.

The first question is: what is this institution’s cyber risk exposure? This is the question the firm itself should be asking, and should be investing in the capability to answer. It requires understanding the threat landscape the institution operates in, the adequacy of its controls against that landscape, the potential magnitude of losses from plausible events, and how that exposure sits relative to the institution’s own risk appetite. This is an internal risk management question. It belongs to the firm.

The second question is: does this institution’s cyber risk exposure threaten the objectives I am responsible for protecting? This is the regulator’s question. It asks something more specific and, in important ways, more bounded. A prudential regulator’s mandate goes beyond an institution’s well-being in the abstract sense. It extends to the stability of the financial system, the protection of depositors, and the integrity of markets. A bank, insurer or asset manager can carry significant cyber risk exposure and still not threaten those objectives. Conversely, a firm whose exposure is modest in isolation may be systemically significant enough that the same event carries outsized consequences.

The difference between these two questions may look inconsequential, but it is meaningful, and the gap between them is where I believe most cyber risk supervision starts to go wrong.

A scenario worth considering

Consider a mid-sized bank operating in a small jurisdiction alongside two or three institutions of similar scale and interconnectedness. The bank’s internal cyber risk assessment rates its overall exposure as ‘medium’. Controls are documented, a risk committee meets quarterly, and no significant incidents have occurred in recent years. An examiner reviewing that assessment and leveraging the same inputs that produced it might reasonably conclude the institution is managing its cyber risk adequately. By the standards of most current supervisory frameworks, they would not be wrong.

But the regulator’s question is different.

Given the institution’s role in the jurisdiction’s payments infrastructure, a ransomware event taking its core banking system offline for 72 hours would not just threaten the bank. It would also threaten depositor access, settlement continuity, and potentially public confidence in the broader financial system. In this scenario, whether the institution’s internal risk rating is accurate becomes almost irrelevant. The question regulators need to interrogate is whether this institution’s failure mode, at a plausible probability and magnitude, threatens what they are responsible for protecting.

What conflation produces

Approaching cyber risk supervision as if the goal is to validate the institution’s internal assessment not only narrows the supervisory lens but also inverts the purpose of the exercise.

First, tools tend to be incorrectly scoped. We see this in examination procedures borrowed almost verbatim from the risk management methodology used by supervised institutions and by those who normally audit and assess them. Instruments such as heat maps, control maturity ratings, and qualitative risk registers naturally feature as a result. These tools, designed to help management think about risk exposure, were not built to determine whether a firm’s cyber posture is adequate relative to a regulator’s prudential objectives. Using tools designed for the first question while believing you’re answering the second is a category error.

Second, findings arising from the prevailing approaches say more about process than about actual risk exposure. Assessing whether a firm has a documented cyber risk policy, a functioning risk committee, and a completed risk assessment tells you something, but less than it might seem in the regulatory context. While process compliance and actual risk exposure are theoretically related, they are often disconnected in practice.

An institution can have excellent documentation and a genuinely dangerous cyber risk profile. Supervisory tools heavily calibrated around process and not exposure are not designed to address this.

Third, and most consequentially, regulators may become dependent on the institution’s own framing of the risk. If an institution underestimates its exposure, whether through genuine methodological limitation, optimism bias, or deliberate presentation, the supervisory assessment inherits that underestimation. The independent judgment that is central to the idea of external supervision can be compromised before the examination begins.

My research found that regulatory respondents were strongly oriented toward institution-level safety and soundness as the primary frame for cyber risk. While coherent, it skews heavily towards the institution’s question and not the regulator’s. Distinctly prudential objectives such as systemic stability, contagion risk, and sectoral resilience were less prominently emphasized. Given that this framing shapes what gets measured, escalated and addressed, the gap here isn’t simply academic.

Asking the regulator’s question properly

Regulators asking their own question properly has the potential to materially shift not only the process of cyber supervision but also its results.

The starting point is being explicit about what the prudential objectives actually are and what would threaten them. This should not be seen as an abstract thought exercise. It requires enough specificity to derive what kind of cyber events, affecting which institutions, at what magnitude, would constitute a genuine threat to those objectives. This is a prior analytical step that most supervisory cyber risk frameworks diminish, delay, or skip entirely, in favor of moving straight to the examination without establishing what they’re examining for.
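To make this prior analytical step concrete, one could imagine objectives and thresholds written down explicitly before any examination begins. The sketch below is purely illustrative: the class, function, and threshold values are invented for this post, not drawn from any supervisory framework, but it mirrors the 72-hour ransomware scenario above.

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    institution: str
    in_payments_infrastructure: bool  # systemically significant role in the jurisdiction
    outage_hours: float               # duration of a plausible disruptive event
    estimated_loss: float             # plausible loss magnitude, in currency units

# Illustrative thresholds only; real ones require jurisdiction-specific analysis.
OUTAGE_THRESHOLD_HOURS = 24.0
LOSS_THRESHOLD = 50e6

def threatens_prudential_objectives(s: Scenario) -> bool:
    """Crude screen: does this failure mode threaten what the regulator protects
    (depositor access, settlement continuity), regardless of the institution's
    own internal risk rating?"""
    if s.in_payments_infrastructure and s.outage_hours > OUTAGE_THRESHOLD_HOURS:
        return True
    return s.estimated_loss > LOSS_THRESHOLD

# The article's scenario: a mid-sized bank with a 72-hour core banking outage.
# Its internal rating may be 'medium', but it crosses the regulator's threshold.
ransomware = Scenario("mid-sized bank", True, 72.0, 20e6)
```

The point of writing it down this way is not the code itself but the discipline it forces: each threshold is an explicit claim about what would threaten a prudential objective, and can be debated before the examination rather than inferred after it.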

With the regulator’s question in mind, the next focus should be on building an assessment that, while informed by the firm’s view, is sufficiently independent of it. A firm’s internal assessment will always be a useful data point, one that should not be dismissed, but regulators benefit from an analytical approach capable of reaching conclusions an institution’s assessment might miss. In part, this requires the ability to characterise exposure in terms that are comparable across institutions, that can be aggregated where systemic risk is the concern, and that produce outputs useful for supervisory decision-making rather than just for internal governance.

This is precisely the kind of analytical problem cyber risk quantification was built to address: it produces probabilistic estimates of loss event frequency and magnitude that can actually be interrogated, compared, stress-tested, and used to inform decisions. Despite their popularity, qualitative assessments built around methodological tools designed for institutions cannot do this reliably. They can describe risk-related information, but they fail when it comes to actually measuring risk exposure.
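A minimal sketch of what such quantification looks like in practice: simulate annual loss as a random number of events (Poisson frequency) each with a lognormal severity, then read off the mean and a tail percentile. All parameter values below are invented for illustration; a real exercise would calibrate them from threat and loss data.

```python
import math
import random
import statistics

def simulate_annual_losses(event_rate, loss_mu, loss_sigma, trials=20000, seed=7):
    """Monte Carlo sketch of annual cyber loss exposure.

    event_rate        -- expected loss events per year (Poisson frequency)
    loss_mu, loss_sigma -- parameters of a lognormal severity distribution
    Returns a list of simulated annual loss totals.
    """
    rng = random.Random(seed)

    def poisson(lam):
        # Knuth's algorithm: suitable for small event rates.
        threshold = math.exp(-lam)
        k, p = 0, 1.0
        while True:
            k += 1
            p *= rng.random()
            if p <= threshold:
                return k - 1

    totals = []
    for _ in range(trials):
        n_events = poisson(event_rate)
        totals.append(sum(rng.lognormvariate(loss_mu, loss_sigma)
                          for _ in range(n_events)))
    return totals

# Illustrative parameters: ~0.5 expected events/year, median severity e^15 (~$3.3m).
losses = simulate_annual_losses(event_rate=0.5, loss_mu=15.0, loss_sigma=1.0)
expected_annual_loss = statistics.mean(losses)
p99 = sorted(losses)[int(0.99 * len(losses))]  # tail loss a supervisor might compare to capital
```

Outputs of this shape, unlike a heat-map cell, can be compared across institutions, aggregated for a systemic view, and stress-tested by varying the assumptions.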

The harder implication

Asking the right question naturally changes what regulators need to be capable of doing. For cyber risk specifically, this is where the argument may get uncomfortable, particularly given widely acknowledged talent shortages.

But if we consider the regulator’s question to be genuinely distinct from the institution’s, and I think the case for that is clear, then regulators need independent analytical capacity to answer it. Reviewing an institution’s analysis and opining on it will not be enough.

The actual capacity to form an independent view of whether an institution’s cyber risk exposure threatens prudential objectives is what is needed. This is a more demanding capability than most supervisory frameworks currently possess or have been resourced to develop.

Building that capacity will be neither trivial nor fast. But fortunately it starts in a place that costs nothing: with regulators being explicit about what question supervision is actually trying to answer, and whether the tools currently in use are capable of answering it.
