Beyond the Checklist: What Cybersecurity Can Learn from Occupational Safety

Cybersecurity has matured technically, but as a discipline we remain behaviorally underdeveloped.

You can see it in our ritualistic approach to managing risk where checklists take precedence over critical thinking.

That said, I don’t believe checklists are inherently bad. The issue is that we’ve turned them into a crutch: they now do little more than provide a comforting illusion of control while leaving real risks largely unaddressed.

Across various industries, IT and risk teams are spending time chasing perfect compliance with frameworks, mechanically prioritizing CVSS scores, and mistaking vendor questionnaires for actual third-party assurance.

So it’s not much of a surprise that when a breach inevitably happens, we find ourselves asking the same question… “What did we miss?”

The uncomfortable truth is that we’re now so focused on not missing anything on the checklist that we’ve largely missed what matters.

Over time, we’ve become masters at demonstrating compliance while simultaneously sacrificing our ability to understand risk in context.

We’re now able to produce metrics, dashboards and audit evidence on demand, but we struggle to see how the next breach may unfold. Then when it does occur, we struggle to explain what we could have done differently and how it could have been avoided.

This isn’t a new problem. Occupational, aviation and other safety disciplines once fell into the same trap, one where checklists and box-ticking dominated.

That lasted until harsh reality, often delivered through tragedies such as the BP Deepwater Horizon disaster and the Air France Flight 447 crash, forced a change. The lesson learned was simple but profound.

Completing the checklist isn’t the same as being safe.

Over time, with this insight, safety evolved. It moved from compliance theater to a culture of risk awareness and accountability. From my peers in the OSH space, I’ve seen firsthand how significant a behavioral leap it is when rote execution is replaced by adaptive thinking. The cybersecurity field needs to make a similar jump.

In the sections that follow, I explore how occupational safety escaped checklist fixation and what it will take for cybersecurity to do the same. Because unless we move beyond the checklist, we’re really just depending on luck while pretending to manage risk.

The Role of Checklists in Risk Management

Checklists have their place. In fact, I’m convinced that when used correctly, they can be useful tools.

In safety disciplines, they enable consistency by ensuring procedures are followed, steps aren’t skipped, and critical controls are verified. They reduce human error and provide a common framework for repeatable tasks.

For auditors, inspectors and regulators, they serve a simple purpose: proof. They are one possible means to show that an organization is following the rules.

Their use is widespread: across industries, checklists govern everything from equipment inspections to lockout/tagout procedures to pre- and post-shift safety checks.

But checklist-oriented approaches are not without costs or limitations. When used as the foundation of a security or risk management program, rather than as an enabler, they can lull organizations into a false sense of security.

The Law of the Instrument reminds us that when all you have is a hammer, everything looks like a nail. In cybersecurity, we’ve made the checklist that hammer and it’s dulled our ability to think critically about risk.

The Hidden Traps of Checklist Thinking

One of the most common problems with checklists is the way they can substitute activity for assurance. If we’re not careful, the act of checking boxes can feel like progress, even when the underlying risks remain unaddressed.

The fact is, for many, compliance becomes the goal instead of safety, or, in cybersecurity’s case, actual risk reduction.

When teams fixate on perfectly completing checklists, they’re often more concerned with covering bases than with controlling hazards. This naturally leads to situations where unseen dangers persist simply because they weren’t on the list.

This is in essence the second major trap… being unprepared for the unexpected.

In environments that are heavily checklist-driven, workers, analysts, engineers, or auditors are conditioned to follow scripts. They learn to execute, not to think. Then when novel situations inevitably arise, they hesitate, waiting for permission or a process instead of exercising judgment.

For many in the cybersecurity field, every decision must align with a framework or pass a compliance gate. This is how creativity and critical thinking atrophy. Management in particular, but teams too, stop asking, “What’s the real risk here?” and start asking, “What does the policy say?”

This naturally leads to the normalization of deviance. When checklists feel burdensome or disconnected from reality, people become comfortable with shortcuts. They sign items off without verification, give things a superficial look, and treat controls as ceremonial. Checklist completion morphs into a paper exercise that hides more risk than it prevents.

But unarguably the most significant flaw is structural.

Checklist heavy programs drive organizations to optimize for evidence rather than capability. We focus on generating artifacts for auditors instead of building systems that actually understand, adapt, and improve. We learn what to do, but not why it matters or how to tailor the approach to our specific environment.

In the end, checklist dependence creates the illusion of maturity while quietly eroding it from within.

A Few Ways the Cybersecurity Field Abuses Checklists

In the cybersecurity field, there are a number of ways the checklist-oriented approach shows up. Below are just a few that make the overarching point.

Compliance with Frameworks as a Poor Proxy for “Good” Security

An inordinately large number of organizations have built their entire security programs solely around frameworks such as NIST CSF, ISO 27001, the CIS Controls, etc.

But frameworks are not the problem; our over-dependence on them is.

Rather than using a framework as a guide, we make it the entire security program, and as an industry we know this is not an exaggeration. There are teams who will literally download the CSV layout of their preferred framework, go down the list checking off the items they satisfy, plan as best they can to meet all the others, and collect evidence for the auditors along the way.

There’s little, if any, consideration for a control’s necessity in the environment, its ability to influence the organization’s risk exposure, or its cost versus potential benefit. The focus is on “did we implement a particular control?” rather than “are we actually secure?”

Vulnerability Findings and Patch Advisories as Checklists

This may come across as controversial given how important patch and vulnerability management are for managing cyber risk. But it is an uncomfortable truth that security teams focus heavily on whittling these lists down as close to zero as possible, and are often left with a false sense of security while real exposure remains unresolved.

When we prioritize items purely based on technical scores (e.g. CVSS) and do not factor in business impact, we are taking a checklist approach.

In environments with limited resources, this leads to misallocation. Administrators who have been trained to start at the perceived top are almost certain to focus first on a vulnerability rated 9.8 that affects a non-critical server, delaying or never committing effort to a vulnerability rated 6.9 that affects a customer-facing web portal handling sensitive data.
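This misallocation is easy to see in a toy triage model. The sketch below is a minimal illustration in Python; the field names and weights are hypothetical, chosen purely for demonstration and not drawn from CVSS or any real tool. It weights a technical score by business context, which reorders exactly the two findings described above:

```python
# Illustrative sketch of context-weighted vulnerability triage.
# All field names and weights are hypothetical, chosen only to
# demonstrate the idea; they are not from CVSS or any real tool.

def triage_score(vuln: dict) -> float:
    """Combine a technical severity score with business context."""
    score = vuln["cvss"]
    # Scale by how much the affected asset matters to the business.
    score *= {"low": 0.5, "medium": 1.0, "high": 1.5}[vuln["asset_criticality"]]
    if vuln["internet_facing"]:
        score *= 1.4  # reachable by external attackers
    if vuln["handles_sensitive_data"]:
        score *= 1.3  # breach impact is higher
    return round(score, 1)

findings = [
    # The 9.8 on a non-critical, isolated server...
    {"id": "CVE-A", "cvss": 9.8, "asset_criticality": "low",
     "internet_facing": False, "handles_sensitive_data": False},
    # ...versus the 6.9 on a customer-facing portal with sensitive data.
    {"id": "CVE-B", "cvss": 6.9, "asset_criticality": "high",
     "internet_facing": True, "handles_sensitive_data": True},
]

# Context, not raw CVSS, now drives the ordering.
for f in sorted(findings, key=triage_score, reverse=True):
    print(f["id"], triage_score(f))
```

The specific multipliers don’t matter; the point is that once context enters the calculation, the 6.9 on the exposed portal outranks the 9.8 on the isolated server.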

The same applies to patch advisories where patching all the “critical” and “high” rated vulnerabilities looks good on management reports and signals productivity.

Unfortunately, this does not necessarily equate to risk reduction, as flawed security architectures and gaps in monitoring for attacker tactics, techniques and procedures can still leave us exposed.

Vendor and Security Questionnaires, Internal or Regulatory Self-Assessments

Third-party risk management has also devolved into a compliance ritual. Internal risk functions have buried it under endless checklists, which are now mostly yes/no questions with just enough open-ended options to maintain the illusion of rigor.

These checklists, deployed as due diligence tools to assess potential and existing vendors, have become instruments of risk deflection. They have been turned into paperwork that serves no other purpose than to signal control rather than enforce it, leaving organizations more exposed than they realize.

It’s unnerving the extent to which the exercise has become pure compliance theater.

Vendors, overwhelmed by the volume of questionnaires, will literally recycle prepackaged responses, outright fabricate answers, or simply ignore them. On the recipient side, we become so eager to check the box, we do little more than skim the results, confirm all the right boxes are ticked and rarely question whether the controls actually work.

In many cases, a vendor can check all the boxes but still have critical, exploitable vulnerabilities and poorly managed systems, while recipients walk away without understanding the potential security gaps to which they are exposed.

Similar issues exist with cyber self-assessments. The only major difference is that the flow of information is reversed. To demonstrate compliance to customers and regulators, organizations periodically perform and publish self-assessments against standards and regulations.

Given the incentive to obtain a passing score, it is not uncommon for self-assessments to be completed based on a subjective interpretation of requirements. Teams rig the exercise to ensure compliance is met rather than to provide an honest appraisal of security gaps.

Further, as the recipient is often unable or unqualified to validate the outputs of these self-assessments, companies can do the bare minimum to satisfy the same subjective interpretations.

Moving from Checklists to Culture

If cybersecurity is to evolve, the next leap won’t come from new frameworks or better tooling. It will come from changing how we think about risk, responsibility and assurance.

There needs to be a shift from compliance-driven execution to context-driven awareness, and from merely proving we did the work to proving we did what matters, and why.

Frameworks as Starting Points, Not Checklists

Checklists still serve a purpose and the goal is not to discard them but to use them differently. High reliability organizations and the occupational safety field consistently show the value in using frameworks as minimum baselines rather than as final targets.

Their use should prompt questions such as “What hazards or risks exist in our environment?” rather than “Did we fully implement CIS Critical Security Controls Implementation Group 2?”

In practice, this would require us to:

  • Use compliance frameworks to spark conversations about context specific risks rather than to generate work orders.
  • Empower teams to challenge irrelevant controls and explain their reasoning, rather than forcing blind adoption or documenting it as an exception.
  • Have leaders ask critical questions like “What is missing?” rather than “How compliant are we?”

There needs to be a cultural shift from “demonstrate we did the thing” to “demonstrate we understand our risks.” This requires leadership that rewards thoughtful risk analysis over compliance theater, and auditors who assess quality of reasoning rather than presence of controls.

Context-Aware Prioritization Over Scores

Safety disciplines learned long ago that prioritizing hazards by generic severity scores misses context. A forklift driving through a rarely-used storage area is not the same as one moving through a crowded production floor.

It’s the same in firefighting. Responders don’t rush first to the biggest flames; they go where the risk to life and potential for escalation is greatest. A small fire near a gas line gets top priority over a larger blaze in an empty field.

Cybersecurity triage should work the same way: context, not surface-level severity, should dictate response.

With that mindset, a cultural approach to vulnerability management would:

  • Empower teams to apply context rather than blindly following scores. “This is rated critical but we deferred as it’s on an isolated system with no sensitive data” should be seen as sound logic rather than a compliance violation.
  • Create space for engineering judgment. Senior analysts and engineers often know which vulnerabilities actually matter based on attack paths and system architecture. Good culture elevates this insight rather than replacing it with automated scoring.
  • Focus conversations on attacker intent, and not just numeric ratings. “What could an attacker actually do?” needs to replace “What’s the CVSS score?” and this shift requires more mature threat modeling, not just a casual review of CVE details.
  • Recognize and reward thoughtful risk decisions, not just patch velocity. If a team deprioritizes a high severity vulnerability with clear reasoning and is later proven correct, that should be recognized as good judgment and not noncompliance.

The cultural challenge here is that contextual reasoning is harder to audit and doesn’t scale cleanly. Teams have to build competence and be trusted to think while leadership has to tolerate ambiguity. That’s in essence the only way to move from a control model to a capability model.

Relationships Over Questionnaires

In safety-driven industries, trust between clients/operators and suppliers is built on relationships, not paperwork. Rather than ritualistically completing questionnaires, both sides put effort into understanding each other’s operations, risks and realities. Cybersecurity needs to take the same path.

For third-party security, a cultural approach would:

  • Replace or supplement questionnaires with substantive conversations. “Tell us about recent security incidents and what you’ve learned” will reveal more about security maturity than 200 cookie cutter yes/no questions. “Walk us through how you handle access control for customer data” will expose practices in a way documented policies won’t.
  • Build ongoing relationships rather than point-in-time assessments. Just as safety culture involves regular facility visits and open communication with suppliers, cybersecurity relationships should involve continuous information sharing. When a vendor has an incident, do they tell us? That’s a cultural indicator.
  • Focus on third party security culture, not their security artifacts. Do they have blameless postmortems? Do engineers feel empowered to raise security concerns? Is security integrated into development or bolted on? These cultural factors can predict future security better than our current checkbox compliance approaches.
  • Acknowledge mutual responsibility. In safety, when a contractor gets hurt on site, it’s treated as everyone’s problem. Similarly, if a vendor gets breached and it affects us, blaming them for not checking the right boxes misses the point as we chose to depend on them.

This is why regulatory moves like the SEC’s cyber disclosure requirements are a step in the right direction. While it’s still a mandate, it’s pushing for the kind of transparency we actually need. Rather than simply being another checklist, it sets the cultural expectation that material risks should be visible, discussed, and owned at the highest levels.

The irony is that some organizations are already trying to turn this into another compliance ritual, treating disclosure as a template exercise rather than a chance to demonstrate awareness and accountability.

But if approached with the right intent, it could do for cybersecurity what incident reporting did for occupational safety: normalize transparency as a sign of maturity, not weakness.

The hard part is that questionnaires and disclosures scale and relationships don’t. Most organizations manage dozens if not hundreds of vendors. A cultural approach means accepting that you can’t deeply assess everyone.

It takes maturity to accept that we can only focus deep attention on the vendors that truly shape our risk profile, while applying lighter-touch mechanisms elsewhere. It’s less efficient on paper but far more effective in practice.

Beyond the Checklist

Cybersecurity doesn’t need more checklists, frameworks, or dashboards. We’ve largely mastered those. What we need is the courage to behave differently.

Occupational safety did not evolve because it invented new forms or templates. It evolved because practitioners widely changed their mindsets. The entire field moved from obsessing over rules and procedures to enhancing awareness and accountability.

We have the same opportunity now.

If we keep treating security as a compliance or paperwork exercise, we’ll stay in the loop of surprise, blame, and repeat. If we start rewarding curiosity, context, and transparency, and if we train people to think, and let them, instead of just complying, we can build organizations that actually understand and manage their risk.

The next era of cybersecurity maturity won’t be defined by who checks every box. It’ll be defined by who knows which boxes matter, who can explain why, and who’s willing to act on that understanding before the next crisis forces the issue.
