Implementation Playbook
From CVSS-only to risk-based prioritisation
The playbook runs in five phases:

  • Phase 1: Asset Scoring
  • Phase 2: Threat Intel
  • Phase 3: Tiers & SLAs
  • Phase 4: Remediation
  • Phase 5: Operationalise

This is the implementation playbook. It assumes your leadership has already bought in.

If they haven't, start with The Business Case for Risk-Based Vulnerability Prioritization.

You've been told to move from CVSS-sorted patching to risk-based prioritisation. The methodology is sound, the data sources are free, and you don't need new tooling.

What you need is a structured process to score your assets, enrich your vulnerability data, and change how your team triages. This guide walks through that process in five phases.

Before you start, make sure you have:

  • A vulnerability scanner producing CVE-level findings across your environment
  • An asset inventory — even a rough one. You need to know what's running and where
  • Access to CISA KEV and EPSS data. Most enterprise scanners already surface this. If yours doesn't, both are free via API
  • A business stakeholder who can tell you which systems matter. This is the hardest prerequisite. Without it, you'll guess at asset criticality and get it wrong

If any of these are missing, fix that first. Everything downstream depends on them.


Phase 1: Asset Criticality Scoring

This is where most implementations fail. Not because it's technically difficult — because security teams try to do it alone.

You cannot score asset criticality from the security team's perspective.

A database server means nothing without knowing what business service it supports. Your payment processing database and your dev test database are both PostgreSQL on Linux.

One is a 5/5, the other is a 1/5.

Start with services, not servers

Don't try to score 10,000 assets individually.

Identify your business services first — payment processing, customer portal, HR platform, ERP, email — and score those. The assets underneath inherit the score.

Most organisations have 30–60 business services that matter. That's your scoring list.

How to score

For each business service, score four criteria on a 1–5 scale:

  • Revenue impact — what does a 24-hour outage cost financially?
  • Data sensitivity — what's the worst-case data exposure?
  • Business dependency — how many teams are blocked without it?
  • Operational continuity — what does downtime cost per hour?

Use the worksheet below to score your services.

Asset Criticality Scoring Worksheet

Score a business service; the assets underneath inherit the rating. Four criteria, each on a 1–5 scale:

  • Revenue impact (what's the financial hit from a 24-hour outage?)
    1 None · 2 Minor · 3 Partial loss · 4 Significant loss · 5 Revenue stops
  • Data sensitivity (what's the worst-case data exposure?)
    1 Public content · 2 Internal data · 3 Employee PII · 4 Customer PII · 5 Payment / Health / Regulated
  • Business dependency (how many teams are blocked without it?)
    1 None · 2 One team · 3 Multiple teams · 4 Function impaired · 5 Operations stop
  • Operational continuity (what does downtime cost per hour?)
    1 Negligible · 2 Workarounds exist · 3 Degraded service · 4 Significant cost · 5 Catastrophic

Score all four criteria to get the service criticality and a suggested SLA.

Every asset supporting that service — app servers, databases, load balancers, API gateways — gets the service's score. If an asset supports multiple services, it inherits the highest.
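As a sketch of how scoring and inheritance work in practice (Python here; taking a service's criticality as the highest of its four criteria is an assumption, since the worksheet doesn't fix an aggregation rule):

```python
# Sketch: service scoring and asset inheritance.
# Assumption: a service's criticality is the MAX of its four criteria;
# adjust the aggregation rule to suit your organisation.

def service_criticality(revenue: int, data: int, dependency: int, continuity: int) -> int:
    """Each criterion is scored 1-5; the service takes the highest."""
    scores = (revenue, data, dependency, continuity)
    assert all(1 <= s <= 5 for s in scores), "criteria are scored 1-5"
    return max(scores)

def asset_criticality(asset_services: list[str], service_scores: dict[str, int]) -> int:
    """An asset supporting multiple services inherits the highest score."""
    return max(service_scores[s] for s in asset_services)

# Hypothetical example data
services = {
    "payment-processing": service_criticality(5, 5, 4, 5),  # 5
    "dev-test": service_criticality(1, 1, 1, 1),            # 1
}
# The same PostgreSQL box scores differently depending on what it supports:
print(asset_criticality(["payment-processing"], services))  # 5
print(asset_criticality(["dev-test"], services))            # 1
```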

Getting the scores

The business owns these scores, not security.

Pre-populate a list of services you already know about, and have the business unit owners validate and score them.

When everything gets rated a 5, force-rank: "If this and the payment gateway both go down, which gets fixed first?"

Shared services like Active Directory, DNS, and email — score those with IT ops.

Once agreed, tag the scores in your vulnerability management platform using a consistent taxonomy like criticality:5, criticality:4 — that's what drives your workflow downstream.

Common mistakes

  • Trying to score every asset individually instead of scoring services and inheriting
  • Skipping the business conversation and guessing from the security team
  • Over-engineering the taxonomy with 15 categories instead of the four criteria: revenue impact, data sensitivity, business dependency, and operational continuity
  • Not getting sign-off — if the business doesn't agree with the scores, they'll fight your SLAs later

Phase 2: Integrating Threat Intelligence

You have asset criticality scores. Now you need to know which vulnerabilities are actually being exploited.

Two data sources matter here:

CISA KEV — a catalogue of vulnerabilities confirmed exploited in the wild. Treat every KEV entry as urgent regardless of CVSS score.

EPSS — a machine learning model that predicts the probability of a vulnerability being exploited in the next 30 days. Scores range from 0% to 100%.

Most enterprise scanners already surface both. If yours does, this phase is about validating the data is flowing and understanding how to use it. If it doesn't, both are available via free public APIs.
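If you need to pull the data yourself, both feeds are plain JSON over HTTPS. A minimal sketch, assuming the current public endpoint URLs (verify them against the CISA and FIRST sites if a request fails):

```python
# Sketch: pulling KEV and EPSS from their free public endpoints.
# URLs are assumptions based on the current public feeds and may move.
import json
import urllib.request

KEV_URL = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"
EPSS_URL = "https://api.first.org/data/v1/epss?cve={cve}"

def kev_cve_ids(kev_feed: dict) -> set[str]:
    """Extract the set of confirmed-exploited CVE IDs from the KEV JSON feed."""
    return {v["cveID"] for v in kev_feed["vulnerabilities"]}

def epss_score(epss_response: dict) -> float:
    """Extract the 0.0-1.0 exploitation probability from an EPSS API response."""
    data = epss_response["data"]
    return float(data[0]["epss"]) if data else 0.0

def fetch_json(url: str) -> dict:
    with urllib.request.urlopen(url, timeout=30) as resp:
        return json.load(resp)

# Usage (hits the network):
#   kev = kev_cve_ids(fetch_json(KEV_URL))
#   prob = epss_score(fetch_json(EPSS_URL.format(cve="CVE-2021-44228")))
```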

How these fit together

CVSS tells you severity.

CISA KEV tells you what's confirmed and exploited.

EPSS tells you what's likely to be exploited soon.

None of them tell you whether the vulnerable asset matters to your business — that's what Phase 1 gave you.

The combination of CVSS, CISA KEV, EPSS, and Asset Criticality is your risk score:

Risk Scoring Calculator

Four inputs combine into a risk tier:

  • CVSS score, from your scanner (e.g. 7.8, High)
  • CISA KEV status, from threat intel: is this CVE on the KEV list?
  • EPSS score, from threat intel (e.g. 15%, moderate)
  • Asset criticality, from your Phase 1 scoring

  Condition                             Risk Tier   SLA
  KEV = Yes + asset criticality 4–5     Critical    24 hours
  KEV = Yes + asset criticality 2–3     High        7 days
  KEV = Yes + asset criticality 1       Medium      30 days
  EPSS > 40% + asset criticality 4–5    Critical    24 hours
  EPSS > 40% + asset criticality 1–3    High        7 days
  EPSS 10–40% + asset criticality 4–5   High        7 days
  EPSS 10–40% + asset criticality 1–3   Medium      30 days
  EPSS < 10% + asset criticality 4–5    Medium      30 days
  EPSS < 10% + asset criticality 1–3    Low         90 days
CVSS accelerator: CVSS 8.0+ on asset criticality 4–5 bumps the tier up by one level (Medium → High, High → Critical). CVSS doesn't drive the tier, but high severity on a critical asset shortens the SLA.
On lower-criticality assets, CVSS acts as a tiebreaker within tiers — a CVSS 9.8 on an isolated dev sandbox with no exploitation activity is still Low.
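The tier rules above translate directly into code. A sketch in Python; branch order (KEV rows take precedence over EPSS rows) and boundary handling are assumptions where the table is silent:

```python
# Sketch: the tier rules from the table above, plus the CVSS accelerator.
# KEV branches take precedence over EPSS, mirroring the table's row order.

SLAS = {"Critical": "24 hours", "High": "7 days", "Medium": "30 days", "Low": "90 days"}
TIERS = ["Low", "Medium", "High", "Critical"]

def risk_tier(kev: bool, epss: float, criticality: int, cvss: float) -> tuple[str, str]:
    if kev:
        tier = "Critical" if criticality >= 4 else "High" if criticality >= 2 else "Medium"
    elif epss > 0.40:
        tier = "Critical" if criticality >= 4 else "High"
    elif epss >= 0.10:
        tier = "High" if criticality >= 4 else "Medium"
    else:
        tier = "Medium" if criticality >= 4 else "Low"
    # CVSS accelerator: 8.0+ on a criticality 4-5 asset bumps the tier one level.
    if cvss >= 8.0 and criticality >= 4 and tier in ("Medium", "High"):
        tier = TIERS[TIERS.index(tier) + 1]
    return tier, SLAS[tier]

# Actively exploited RCE on the payment gateway:
print(risk_tier(kev=True, epss=0.97, criticality=5, cvss=9.8))   # ('Critical', '24 hours')
# CVSS 9.8 on an isolated dev sandbox, no exploitation activity:
print(risk_tier(kev=False, epss=0.02, criticality=1, cvss=9.8))  # ('Low', '90 days')
```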

The risk score from the calculator above feeds the tiers and SLAs in Phase 3.


Phase 3: Risk Tiers and SLAs

Now turn that risk score into a system your team can work with.

Define four tiers

The risk scoring calculator in Phase 2 maps vulnerabilities and assets into four tiers. Each tier needs a clear SLA — timelines your team can meet.

  • CRITICAL, 24-hour SLA. Trigger: KEV or EPSS > 40% on critical assets. Example: actively exploited RCE on your internet-facing payment gateway.
  • HIGH, 7-day SLA. Trigger: KEV on mid-tier assets, or high EPSS on lower assets. Example: KEV vulnerability on your internal CRM holding customer PII.
  • MEDIUM, 30-day SLA. Trigger: moderate EPSS, or low EPSS on critical assets. Example: privilege escalation on an internal HR system with employee PII.
  • LOW, 90-day SLA. Trigger: low EPSS, low criticality. Example: CVSS 9.8 on an isolated dev sandbox with no known exploit.

Start conservative. If your current Mean Time To Remediate (MTTR) for critical vulnerabilities is 38 days, setting a 24-hour SLA on day one is setting your team up to fail.

Begin with what's achievable, measure compliance for 30 days, then tighten.

Handling exceptions

Not every vulnerability can be remediated within its SLA. The system breaks down if there's no legitimate way to say "we accept this risk for now."

Define a simple exception process:

  • Who can approve (security lead for 30-day extensions, CISO for anything longer)
  • Maximum duration before re-review
  • What gets documented (the risk, the reason, the compensating controls, the expiry date)

Exceptions without expiry dates become permanent. Every exception gets a review date.
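A small script can keep review dates honest. A sketch with illustrative field names (not tied to any particular tool):

```python
# Sketch: flag risk acceptances past their review date.
# Field names are illustrative, not from any particular tool.
from dataclasses import dataclass
from datetime import date

@dataclass
class RiskAcceptance:
    cve: str
    approver: str
    compensating_controls: str
    review_date: date

def overdue(exceptions: list[RiskAcceptance], today: date) -> list[RiskAcceptance]:
    """Exceptions whose review date has passed and need re-review."""
    return [e for e in exceptions if e.review_date < today]

backlog = [  # hypothetical sample entries
    RiskAcceptance("CVE-2024-0001", "security lead", "network segmentation", date(2024, 1, 15)),
    RiskAcceptance("CVE-2024-0002", "CISO", "WAF virtual patch", date(2026, 6, 1)),
]
for e in overdue(backlog, today=date(2025, 1, 1)):
    print(f"{e.cve}: review was due {e.review_date}, approved by {e.approver}")
```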

Run a pilot

Start with one business unit — ideally one with a mix of critical and low-criticality assets. Classify their backlog, apply SLAs, remediate Critical and High findings, and measure MTTR against the new targets.

The pilot gives you two things: proof that the model works, and the real-world data you need to adjust thresholds before going wider.


Phase 4: Remediation Workflow

Knowing what to fix first is half the problem. The other half is how your team actually works through it.

Patching isn't the only option

Not every vulnerability has a patch available on day one. And not every patch can be deployed immediately — change windows, dependencies, uptime requirements. Your team needs a decision path:

Remediation Paths

Path 1: Patch it (preferred)
When: a patch is available and you can deploy within the SLA window.
Do: deploy the patch through your standard change process.

Path 2: Can't patch, so reduce the risk (mitigate)
When: deployment is blocked, no patch exists, or the system is legacy.
Do: harden, isolate, restrict access, or segment. Document the control and set a review date.

Path 3: Accept with documentation (last resort)
When: no patch, no viable mitigation. The business accepts the risk.
Do: document the risk, the reason, the approver, and set a review date. Exceptions without expiry dates become permanent.

Verification

A vulnerability is considered remediated when a re-scan confirms it's gone.

Build verification into the workflow — either automated re-scans on a schedule or manual verification for Critical and High findings within 48 hours of the reported fix.

Ticket automation

If your scanner integrates with your ticketing system (ServiceNow, Jira), automate ticket creation for all findings. The risk tier sets the ticket priority and SLA. This removes the manual triage step that bottlenecks many teams.
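As a sketch of the tier-to-priority mapping, here is a Jira-style create-issue payload. The project key, issue type, and priority names are assumptions; adapt them to your instance's scheme:

```python
# Sketch: map a risk tier onto a Jira-style "create issue" payload.
# Field layout follows Jira's REST create-issue shape; the project key,
# issue type, and priority names below are assumptions.
import json

TIER_TO_PRIORITY = {"Critical": "Highest", "High": "High", "Medium": "Medium", "Low": "Low"}
TIER_TO_SLA_DAYS = {"Critical": 1, "High": 7, "Medium": 30, "Low": 90}

def ticket_payload(cve: str, asset: str, tier: str, project_key: str = "VULN") -> dict:
    return {
        "fields": {
            "project": {"key": project_key},
            "issuetype": {"name": "Task"},
            "summary": f"[{tier}] {cve} on {asset}",
            "priority": {"name": TIER_TO_PRIORITY[tier]},
            "description": f"Remediate within {TIER_TO_SLA_DAYS[tier]} day(s) per SLA.",
        }
    }

print(json.dumps(ticket_payload("CVE-2021-44228", "payment-gw-01", "Critical"), indent=2))
```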

How it all connects
  1. A vulnerability lands in your backlog with CVSS, KEV status, and EPSS score already attached — your scanner handles that.
  2. The asset it sits on already has a criticality score from Phase 1.
  3. Those four data points produce a risk tier, which sets the SLA.
  4. Your team picks a remediation path and executes within that window.

Phase 5: Operationalise and Measure

Everything up to this point is a pilot. This phase makes it permanent.

Metrics that matter

Track four things:

  • Mean Time to Remediate — measured separately by tier: Critical, High, Medium, Low
  • SLA compliance rate — percentage of findings remediated within the target window
  • Risk tier distribution over time — are your Critical and High findings reducing?
  • Exception health — how many risk acceptances are past their review date?
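The first two metrics are straightforward to compute from closed findings. A sketch with hypothetical records (pull the equivalent fields from your scanner export):

```python
# Sketch: MTTR by tier and SLA compliance from closed findings.
# Record fields are illustrative sample data.
from datetime import date

SLA_DAYS = {"Critical": 1, "High": 7, "Medium": 30, "Low": 90}

findings = [  # (tier, opened, closed) -- hypothetical sample data
    ("Critical", date(2025, 3, 1), date(2025, 3, 2)),
    ("Critical", date(2025, 3, 5), date(2025, 3, 9)),
    ("High",     date(2025, 3, 1), date(2025, 3, 6)),
]

def mttr_by_tier(records):
    """Mean days from open to close, per tier, for tiers with data."""
    out = {}
    for tier in SLA_DAYS:
        ages = [(closed - opened).days for t, opened, closed in records if t == tier]
        if ages:
            out[tier] = sum(ages) / len(ages)
    return out

def sla_compliance(records):
    """Percentage of findings closed within their tier's SLA window."""
    met = [(closed - opened).days <= SLA_DAYS[t] for t, opened, closed in records]
    return 100 * sum(met) / len(met)

print(mttr_by_tier(findings))               # {'Critical': 2.5, 'High': 5.0}
print(f"{sla_compliance(findings):.0f}%")   # 67%
```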

Reporting

Your CISO needs a monthly view: MTTR trends, SLA compliance, exploitable exposure on critical assets.

The board needs a quarterly view: is our risk posture improving? Critical/High findings trend over time answers that.

The 90-day review

At the 90-day mark, you'll have enough operational data to know what's working and what isn't.

Three things tend to need adjustment:

  • SLA targets that turned out to be unrealistic
  • Scoring thresholds that aren't catching the right things
  • Asset criticality scores that have gone stale because the business changed underneath them

Revisit these regularly.


Common Pitfalls

Boiling the ocean — Instead of scoring every asset before you start, score your top 20% by revenue, run the workflow on those, expand later.

Security scores assets alone — If the business didn't agree to the criticality scores, they'll fight the SLAs. Get sign-off in Phase 1.

EPSS as gospel — It's a probability model, not a certainty. A 5% EPSS score doesn't mean safe. It means less likely.

SLAs nobody can meet — Start with what's achievable and tighten quarterly. Unrealistic targets just get ignored.

No exception process — If there's no legitimate way to defer a fix, teams find workarounds.

No baseline measurement — If you didn't capture MTTR and backlog size before starting, you can't prove the improvement. Measure before you change anything.



Final Thoughts

No framework survives first contact with your environment unchanged. The thresholds will need tuning, the business will push back on scores, and your first set of SLAs will probably be wrong. That's expected.

The value isn't in getting every parameter right on day one. It's in having a structured model that connects vulnerability data to business risk.

Start with your top 20% of assets. Run the full workflow on one business unit. Measure everything from day one so you have a baseline to improve against. In 90 days you'll either have the data to prove it works, or the data to adjust until it does. Both are progress.

The exec post made the case for why this matters. This playbook gave you the implementation. The rest is execution.