Stop Demonizing CVSS: Fix the Real Problem

The New “Usual” Suspects
If you read the newest risk-based vulnerability management literature, it appears we have a new favorite punching bag: the Common Vulnerability Scoring System (CVSS). You seemingly can’t throw a rock into the “vuln-o-sphere” without hitting someone dunking on CVSS or the National Vulnerability Database (NVD).
The argument goes something like this: “Exploitation rates are up, ransomware is surging, and vulnerabilities are multiplying like rabbits. Ergo, how we prioritize, and specifically CVSS, must be the problem to address.”
Wrong. Blaming CVSS for the state of vulnerability management is like blaming the bathroom scale for your weight. The tool isn’t the problem; it’s how we use it. In this case, it’s lazy thinking, and it’s dangerous. There are some real problems with CVSS and NVD, but we can’t let those very real concerns get in the way of the larger issues at play.
My CVSS Journey
Let’s start by going back to 2017. I was a doe-eyed grad student, certain I’d crack the code and replace CVSS (v3.0 at the time!). Armed with all the hubris of someone new to the field, I believed I could design a better system. It wasn’t hard to see why people thought CVSS was broken.
But after thousands of hours of research and experimentation, I arrived at a simple truth: CVSS works well enough when used correctly. The problem isn’t the algorithm itself; it’s how we, as an industry, operationalize it. Or more accurately, how we don’t. The primary problems we face are difficult operational problems, and the current industry trends are to avoid solving these harder challenges.
Why CVSS Gets a Bad Rep
Before jumping in too far, let’s break down some of the grievances against CVSS. There are valid concerns here, but ultimately the problems with CVSS are operational in nature, not with the algorithm itself.
More on that later. Let’s look at some of the grievances:
- “It Overrates Vulnerabilities.” The primary CVSS score organizations use is the base score. This score measures severity, not risk. Expecting CVSS to give you a one-size-fits-all risk score is like expecting a location’s Yelp stars to predict how your date there will go. You have to provide some of your own context. Let’s face it, my Tinder history would probably have been very different if this weren’t true! CVSS is meant to be extended: its temporal and environmental metric groups can take it from measuring pure severity to producing an actual risk rating, but these are rarely used. Organizations need to overlay their own business context. If they don’t, that’s an operational issue, not a prioritization algorithm issue.
- “It Doesn’t Consider Business Context.” Again, the issue here is with implementation. CVSS does consider business context, but organizations often ignore these features and rely on the base CVSS score to measure a vulnerability’s risk. It’s not surprising that, when used this way, CVSS can be viewed as lacking.
- “NVD Can’t Keep Up.” True, the NVD is slow. But here’s the kicker: vendors and Coordinated Vulnerability Disclosure (CVD) teams can calculate scores themselves. They just don’t. Why? Because it’s not mandatory, and everyone’s busy pointing fingers at everyone else.
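The “overlay your own business context” point can be sketched in a few lines. To be clear, this is an illustrative toy, not the official CVSS environmental formula (which re-derives the score from modified base metrics and security requirements); the criticality multiplier and exposure bump here are invented for the example.

```python
# Illustrative only: a simplified business-context overlay, NOT the official
# CVSS v3.1 environmental scoring. The weights below are hypothetical.

def contextual_priority(base_score: float,
                        asset_criticality: float,
                        internet_facing: bool) -> float:
    """Scale a CVSS base (severity) score by local context to get a
    rough risk-ranking value, clamped to the usual 0-10 range."""
    score = base_score * asset_criticality   # e.g. 0.5 = lab box, 1.5 = crown jewel
    if internet_facing:
        score += 1.0                         # hypothetical exposure bump
    return max(0.0, min(10.0, score))

# Same base severity, very different priorities once context is applied:
print(contextual_priority(7.5, 0.5, False))  # internal lab host: drops well below 7.5
print(contextual_priority(7.5, 1.5, True))   # internet-facing crown jewel: clamps to 10.0
```

The exact math matters far less than the habit: severity comes from the standard, context comes from you.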
Why Our Problem Isn’t CVSS
Calculating CVSS accurately requires effort from the end organization. But that’s true regardless of which prioritization scheme you choose. While the grievances above may sound like dealbreakers, and they are genuinely big problems, they are symptoms of deeper problems in the industry, not the root cause of our inability to keep up with attackers.
Once you have a prioritization algorithm in place, running data through it is easy. The real challenge lies in gathering accurate, standardized, and complete data, at scale. Our problem isn’t that prioritization tools like CVSS or even newer approaches like EPSS don’t work. It’s that we’re feeding these algorithms incomplete or bad data, expecting miraculous results.
Garbage in, garbage out.
If we want better results, we have to understand the data we’re feeding the algorithm and eliminate the garbage.
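Understanding and eliminating the garbage is mundane work, which is exactly why it gets skipped. A minimal sketch of what it looks like in practice: validate and normalize findings from multiple scanners before any prioritization algorithm ever sees them. The field names and records here are hypothetical.

```python
# Sketch of the "garbage in, garbage out" point: reject malformed findings
# BEFORE they reach the prioritization algorithm. Field names are invented.

def normalize(findings):
    clean, rejected = [], []
    for f in findings:
        cve = (f.get("cve") or "").strip().upper()
        score = f.get("cvss_base")
        if not cve.startswith("CVE-") or score is None or not (0 <= score <= 10):
            rejected.append(f)   # garbage: don't feed it to the algorithm
            continue
        clean.append({"cve": cve, "cvss_base": float(score),
                      "asset": f.get("asset", "unknown")})
    return clean, rejected

raw = [
    {"cve": "cve-2024-12345", "cvss_base": 9.8, "asset": "web-01"},
    {"cve": "", "cvss_base": 7.0},                # scanner gave no CVE id
    {"cve": "CVE-2023-0001", "cvss_base": None},  # missing score entirely
]
clean, rejected = normalize(raw)
print(len(clean), len(rejected))  # 1 2
```

No scoring algorithm, CVSS or otherwise, can recover from the two rejected records; the fix happens upstream of the math.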
Asset Management
Let’s talk about asset management for a moment, the boring yet critical backbone of any prioritization effort. We love to blame scoring systems for our problems, but without a complete and accurate inventory of assets, even the best algorithms will fail. You can’t protect what you don’t know you have, and you can’t prioritize what you can’t fully understand.
Most organizations are still struggling to map their infrastructure comprehensively. Shadow IT, unmanaged assets, and poor visibility are all drivers. No algorithm can fix this for you—no matter how sophisticated.
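The core of the asset-management problem fits in a toy example. Comparing what scanners actually see against the official inventory surfaces both shadow IT and blind spots; the hostnames below are invented for illustration.

```python
# "You can't protect what you don't know you have": a toy coverage check
# comparing scanner-observed assets against the official inventory (CMDB).
# Hostnames are hypothetical.

cmdb = {"web-01", "db-01", "mail-01"}          # what we THINK we have
scanned = {"web-01", "db-01", "dev-legacy-7"}  # what scanners actually saw

shadow_it = scanned - cmdb     # live on the network, missing from inventory
blind_spots = cmdb - scanned   # inventoried, but no scanner covers them

print(sorted(shadow_it))    # ['dev-legacy-7']
print(sorted(blind_spots))  # ['mail-01']
```

Every vulnerability on `dev-legacy-7` is invisible to prioritization, and `mail-01` never even enters the queue. No scoring algorithm touches either problem.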
Data Standardization: The Real Kryptonite
The CVSS algorithm can handle a lot, but it can’t account for the wild inconsistencies in how vulnerability data is collected, submitted, and shared. The real challenge is getting everyone—from vendors to vulnerability coordinators—to agree on standards for reporting vulnerabilities. Right now, submitting CVEs is largely voluntary, and many vendors simply don’t bother.
This leaves critical gaps in the vulnerability ecosystem, with third-party security firms and scanning tools stepping in to fill the void. While helpful, this patchwork approach creates fragmentation, forcing organizations (and NVD) to rely on inconsistent data sources. The new CISA Vulnrichment initiative is one program designed to help address this issue. Only time will tell how it works out. I’m not hopeful.
The Industry’s Bad Habits
Our industry has decided, whether purposefully or through incentives, to take a fragmented, goldfish-memory, avoidant approach to vulnerability remediation. In our experience, the focus tends to fall on two areas:
The Cult of Prioritization
- Vendor Proprietary Scores. Code word: RBVM or Risk-based Vulnerability Management. Vendors sell proprietary risk scores as their main product. But these are often black-box algorithms that leave organizations guessing. Worse, they’re often just CVSS base scores with a sprinkle of vague “business context” on top. Hundreds of millions, probably billions, of dollars have been invested in RBVM solutions and numerous scanners, but we keep running into the same old issues.
- New academic and end-user prioritization algorithms. Vendor scores don’t do the trick, so many non-vendor organizations are pushing their own approaches to prioritization. Some approaches, like the Exploit Prediction Scoring System (EPSS), are great, but they have a narrow focus. EPSS predicts the likelihood of exploitation in the near future, not the overall risk posed by a vulnerability. It’s a useful tool—not a replacement for CVSS. JP Morgan recently released its own scoring system and is pushing for broader adoption. And, of course, various Nucleus customers utilize custom risk scoring algorithms in their Nucleus deployments.
We’re obsessed with finding the “perfect” prioritization algorithm, ignoring that diminishing returns kick in fast. We’re wasting valuable time tuning an algorithm that we could be spending preventing real risks to organizations. That should be the focus.
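The “useful tool, not a replacement” relationship between EPSS and CVSS can be shown concretely. EPSS answers “how likely is exploitation soon?” while CVSS answers “how bad would it be?”; a sensible triage rule uses both. The thresholds below (0.1 probability, 7.0 severity) are arbitrary examples, not published standards.

```python
# Combining an EPSS probability with a CVSS base score in a simple triage
# rule. Thresholds are illustrative, not standards.

def triage(cvss_base: float, epss_prob: float) -> str:
    likely = epss_prob >= 0.1    # exploitation plausibly imminent
    severe = cvss_base >= 7.0    # high impact if it happens
    if likely and severe:
        return "fix now"
    if likely or severe:
        return "schedule"
    return "backlog"

print(triage(9.8, 0.97))   # fix now
print(triage(9.8, 0.01))   # schedule: severe, but unlikely to be exploited soon
print(triage(3.1, 0.002))  # backlog
```

This is roughly the ceiling of useful algorithmic cleverness; everything past it is tuning at the margins while the remediation backlog sits untouched.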
More F***ing Discovery Tools
I don’t think the fabled Library of Alexandria could hold all the marketing materials published today by vendors offering new vulnerability scanning solutions. Even more than the SIEM market, VM scanning is the perfect example of keeping up with the Joneses.
“You just have to see all the cool vulnerabilities my new Silicon Valley-branded, chrome-plated scanner discovered!”
It isn’t uncommon for organizations to feel like they need multiple scanning tools, sometimes 15 or more. Did I mention they all have their own risk-based scoring system?
We do this because we aren’t confident in the priorities we have. We’re hoping that with more information we’ll make some critical discovery or find the needle in the haystack. Even worse, we’re susceptible to buying products on promises, because most organizations aren’t interested in being VM experts. Who can blame them? Unfortunately, this exacerbates the problem: we keep finding more risks to resolve while continuing to avoid the harder challenge of actually fixing the vulnerabilities.
The Path Forward
If we’re serious about fixing vulnerability management, we need to stop reinventing the wheel and start patching the cracks in the foundation. Here’s a non-comprehensive list to get us started:
- Standardize the Algorithm. Just like encryption has industry standards for different purposes, vulnerability scoring needs to have universal algorithms that are used correctly. CVSS is the closest thing we’ve got. Vendors should adapt to it—not the other way around. Then the question becomes “How do we make the CVSS score work for us?” rather than complaining that the scoring methodology needs to change.
- Fix the Root Data Problem. The real challenge isn’t CVSS; it’s collecting the data needed to accurately calculate a risk score. We need better, faster, more consistent data, both at the top level where CVSS sits today and at the organization level so the algorithm can be adapted properly. This is where federal intervention could make a real difference: automated scoring, improved standardization, and mandatory submissions should be the new norm. It’s a common refrain in the vuln-o-sphere that the number of vulnerabilities is increasing, but the reality is that reporting CVEs is largely an opt-in process, and vendors in the supply chain do not consistently report vulnerabilities. That means we’re missing a ton of coverage in the vulnerability ecosystem, and the gap is currently being filled by third parties instead of the vendors developing the products.
- Make It Easy. Organizations want an “easy button,” and incentives should point them toward one. They shouldn’t have to reverse-engineer vendor scores or build risk models from scratch. Simplify the process, and adoption will follow.
- Operationalize, Don’t Optimize. Stop overcomplicating prioritization. You can make a lot of progress with minimal analysis. It’s not about squeezing an extra 1% out of prioritization algorithms; it’s about using existing tools to reduce risk. CVSS v3 and v4 are good enough to get you a long way, if used as intended. We need cybersecurity vendors to solve the problem of getting the data TO the algorithm, not to keep creating their own disjointed algorithms. We need to align industry incentives toward outcomes for the collective good.
Final Thought: Stop Fighting the Wrong Battle
CVSS isn’t the enemy. Many organizations use it today because it’s the simplest standard that works reasonably well across different discovery tools. But because they can’t gather the proper data, they fall back on base scores, which on their own are indeed flawed. The answer is not to throw out the standard we have and go back to the mid-90s approach, where nobody could effectively communicate.
We are all in this together. Going in the direction of pay-to-play for vulnerability data is not the answer. We must preserve the standard we have.
CVSS is a tool. Yes, it is flawed, but it’s also essential. Demonizing it distracts from the real issues. Ultimately, we’re not losing the vulnerability management war because of CVSS. We’re losing because we’re busy fighting the wrong battles.
Let’s get off our high horse, fix the data, and start using the tools we already have correctly. I think we’ll be surprised at what we can accomplish if we work together. Otherwise, before long we’ll find ourselves not just buying scanning tools but also buying a bunch of vulnerability databases and wading through a sea of scattered information. That doesn’t sound particularly efficient or effective to me.
See Nucleus in Action
Discover how unified, risk-based automation can transform your vulnerability management.