What is risk-based vulnerability management?
Risk-based vulnerability management prioritizes remediation based on the risk each vulnerability actually poses to an organization. Under this approach, organizations weigh the business criticality of an asset together with the risk of the vulnerability itself to decide what to remediate first.
The challenge of prioritizing without threat intelligence
Without threat intelligence, nearly 60% of vulnerabilities rate as high or critical, with about 12% earning the dreaded critical rating. The conventional advice to focus on highs and criticals does little to reduce workload, because it assumes severity scores follow a bell curve or a uniform distribution, and neither is the case. When I analyzed the signatures from one of the major vulnerability scanners in early 2022, I found that 70% of its signatures rated high or critical. Only 3% rated low.
And the news gets worse. Not only are high or critical vulnerabilities more numerous than mediums and lows, but those vulnerabilities occur more frequently in the real world as well. When I worked at an MSSP and measured my customers, I found that more like 80 percent of the vulnerabilities they encountered rated a high or critical.
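You can check this distribution in your own environment. The sketch below buckets raw CVSS v3.x scores into the standard qualitative severity ranges and reports the fraction of findings in each bucket; the sample scores at the bottom are purely illustrative, not data from any real scanner.

```python
from collections import Counter

# Standard CVSS v3.x qualitative severity ranges.
def severity(score: float) -> str:
    if score >= 9.0:
        return "critical"
    if score >= 7.0:
        return "high"
    if score >= 4.0:
        return "medium"
    if score > 0.0:
        return "low"
    return "none"

def severity_distribution(scores):
    """Return the fraction of findings in each severity bucket."""
    counts = Counter(severity(s) for s in scores)
    total = len(scores)
    return {sev: counts.get(sev, 0) / total
            for sev in ("critical", "high", "medium", "low", "none")}

# Illustrative scores only:
dist = severity_distribution([9.8, 7.5, 7.5, 8.1, 9.1, 5.3, 6.5, 7.2, 4.3, 2.1])
print(dist)
```

Run this against an export of your scanner's findings and you can see for yourself how top-heavy the distribution is.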
Of course, your results depend on what software you have in your environment, but I spent a sizeable chunk of my career in a weird world where 80% of vulnerabilities are above average.
Measuring a Vulnerability's Impact Over Time
While the CVSS equation includes temporal metrics to adjust the score based on threat intelligence, nobody recalculates scores for everything. Scoring is generally a one-and-done deal, and the result measures the potential for a given vulnerability to hurt you.
But it’s like a scouting report. And not everything lives up to its potential.
Let’s take a lesson from my youth. When I was a kid, I hoarded baseball cards of Bo Jackson and Gregg Jefferies. According to everything I was reading, they were going to be the greatest baseball players of their generation, and maybe someday we’d talk about them the way earlier generations talked about Willie Mays or Ted Williams.
For various reasons, neither of them lived up to their seemingly limitless potential. Injuries were certainly part of it. They both had some great moments. But whatever the reason, instead of being 11s on a scale of 1 to 10, they both ended up being closer to 5.1. Above average, but just barely.
Jefferies and Jackson had a contemporary named Mike Piazza. He was drafted in the 62nd round, and only as a favor. The 62nd round was a throwaway round — it’s unusual for anyone drafted that late to make the majors at all. He didn’t project to be a star. If he worked really hard and got lucky, he projected to be a useful backup, at best. He exceeded everyone’s expectations. Even if you’re not a baseball fan, the name Mike Piazza may sound familiar. He played 16 seasons, made 12 All Star teams, made guest appearances on TV shows, was elected to the Hall of Fame in 2016, and is not only the greatest catcher of his generation, but advanced statistics and metrics say he was the fifth best catcher ever, edging out Yankees Hall of Famer Yogi Berra.
The problem with CVSS and methodologies like it is that they put too much focus on vulnerabilities that never live up to their potential, like Bo Jackson and Gregg Jefferies, while leading you to ignore an overachieving vulnerability like Mike Piazza.
And whether you’re talking infosec, baseball, or anything else in life, prioritizing potential over results doesn’t lead anywhere good.
Challenges with CVSS Vulnerability Scoring
The most famous example of an overachieving vulnerability is Heartbleed. It’s easy to forget that Heartbleed was only a CVSS 5. But Heartbleed was the vulnerability of 2014, if not the decade. Heartbleed was so extreme we looked past its CVSS score and sounded the alarms when it hit the newswires.
The reason for that was simple: it broke SSL. It exposed a hole in CVSS, because CVSS treats all information disclosure the same. Since it's no secret what kind of information a given web server processes, attackers could simply look for servers that solicited the kind of information they wanted. I can only speak to the organization I worked for at the time, but we treated Heartbleed with more urgency than the typical CVSS 10. It got the 1,000-on-a-scale-of-1-to-10 treatment, at least on our public-facing systems.
Risk-based vulnerability management takes these kinds of factors into consideration, elevating vulnerabilities like Heartbleed from medium to critical. But just as importantly, it demotes underachievers. Critical Java vulnerabilities come along every quarter. Organizations repeatedly fail to treat those with the same priority they gave Heartbleed and Log4J, and by and large they get away with it. Otherwise, you would expect at least one high-profile breach every quarter.
Java isn't the only example. Most organizations practice risk-based vulnerability management whether they realize it or not. They fix those Hall of Fame criticals one way or another. They may not get it done as quickly as they would like, but by and large they get it done. And yet, they typically have a backlog of faux criticals they haven't fixed.
The question is, how does an organization use data rather than gut feeling to operationalize risk-based vulnerability management?
Which vulnerabilities should be prioritized?
Every kid wants to be an astronaut. One of my classmates wanted to be an astronaut and looked into it. He’s a data scientist now, so needless to say, he didn’t end up being an astronaut. He said one percent of military recruits get to be pilots, and one percent of pilots get to be astronauts.
That one percent rule seems to apply well to much of life. About one percent of high school athletes get to play professionally, and about one percent of those become Hall of Famers.
If more than one new vulnerability in a year rates a critical, we're having a bad year as an industry. And fewer than 10 a month on average deserve a high rating. That's a manageable workload.
The value of risk-based vulnerability management
I spent several years pushing patches in a DoD environment. At the time, I didn't understand the due dates, and my management certainly didn't understand them either. Looking back now, there must have been some risk calculation in their equation, and I think if we had followed it, I wouldn't have burned out. I fixed 800,000 vulnerabilities in about four years, and my MTTR was well below 30 days.
You can’t give 110% all the time, and even my methodology to cope wasn’t enough in the end. If I could have resolved the lower-severity vulnerabilities at a more relaxed pace, I would have lasted longer.
I have seen analysts state that in their observations, most businesses have no problem fixing 25% of their vulnerabilities, so we just have to tell them to fix the right ones. That rests on the incorrect assumption that all fixes deploy at an equal success rate, which my experience violently disagrees with. But even if the vulnerabilities that can actually hurt you end up having fixes that are notoriously difficult to deploy, when you're talking two percent, it's manageable.
And if fixing only two percent of your vulnerabilities sounds sketchy, I have good news for you. Most years, between 20 and 32 percent of vulnerabilities fall into the medium category. So you can raise the bar to fixing all of your criticals, highs, and mediums, still fix less than half of what you would under a non-risk-based approach, and do substantially more to lower the probability of a breach while not wearing out your remediation teams.
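The arithmetic is simple enough to sketch. The percentages below are the rough figures from this article, used purely for illustration; your own environment's numbers will differ.

```python
# Illustrative workload comparison. The fractions are the article's
# rough figures, not measurements from any particular environment.
def findings_to_fix(total: int, fraction: float) -> int:
    return round(total * fraction)

total = 10_000                               # hypothetical finding count
cvss_driven  = findings_to_fix(total, 0.60)  # fix every CVSS high/critical
risk_based   = findings_to_fix(total, 0.02)  # fix only the true threats
with_mediums = findings_to_fix(total, 0.30)  # criticals + highs + mediums

print(cvss_driven, risk_based, with_mediums)  # 6000 200 3000
```

Even the generous version, including mediums, is half the CVSS-driven workload.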
How to Operationalize Your Vulnerability Management Program with Risk-Based Vulnerability Management
When I consult with young vulnerability management programs, the complaint I most frequently hear is that they don’t know where to start.
To transform your vulnerability management program, the first thing you want to do is connect your vulnerability management tools to Nucleus. Once you’ve ingested some data, navigate to Vulnerabilities and click on Active. Click the widget that says Critical Risk Rating. These are the vulnerabilities that Mandiant Threat Intelligence deems critical. Log4J is one of them.
I wouldn’t be at all surprised if you find you have one or more of these vulnerabilities in your environment. Every month, if not every week, some new vulnerability gets discovered, gets a bunch of publicity, and distracts us from whatever we were already working on. And before you have that finished, another one comes out. Don’t let the hype machine set your priorities. Log4J was an example of a vulnerability that went from being newly discovered to being your biggest problem almost immediately. But that was very unusual.
The key to a functioning risk-based vulnerability management program, or better yet, a true threat and vulnerability management program, is not letting high-publicity vulnerabilities distract you from existing vulnerabilities that pose a greater threat to your organization. Keep your focus on the items most likely to do your organization harm, whether they are new or many years old.
Start with those and work on getting them resolved. You’ll want to reach a point where you can fix new criticals very quickly after they come in, but it’s OK if you’re not there yet. You’re getting started.
Meanwhile, deploy this month’s updates as they come out. The majority of new vulnerabilities won’t have high risk ratings yet, but I always found it was easier to just deploy all of them than to pick and choose. Deploying new updates will knock them out before they can become a threat, and will also help cut into your backlog, since some of them will supersede older updates.
Once you’ve conquered your criticals, start working on fixing highs. Then proceed to mediums. We don’t have a widget for the other Mandiant severities, but you can use the filter to find highs and mediums. You can also use an automation rule to set due dates. For handling vulnerabilities at scale over time, I recommend putting in automation rules to set due dates based on your information security policy.
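To make the due-date idea concrete, here is a minimal sketch of what such an automation rule computes. The severity names and day counts are hypothetical; in practice they would come from your own information security policy, and the actual rule is configured in the product rather than in code.

```python
from datetime import date, timedelta

# Hypothetical SLA mapping: days allowed to remediate per severity.
# Your information security policy defines the real values.
SLA_DAYS = {"critical": 7, "high": 30, "medium": 90, "low": 180}

def due_date(severity: str, discovered: date) -> date:
    """Return the remediation due date implied by policy."""
    return discovered + timedelta(days=SLA_DAYS[severity.lower()])

print(due_date("critical", date(2022, 6, 1)))  # 2022-06-08
```

Once a rule like this runs on every new finding, due dates stop being a judgment call and become a property of the data.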
It's easier said than done, but vulnerability management is a discipline above all else. Practice that discipline month over month, and you'll see your risk trend downward toward an acceptable level. And once you reach that acceptable level, you'll find maintaining it is easier than reaching it.