  • September 13, 2024
  • Tally Netzer

4 Simple Steps to Implement Risk-Based Vulnerability Management

Imagine if your fire alarm went off every time you burned your toast or lit candles on a birthday cake. After a few false alarms, you'd probably start ignoring it, or even turn the sensor off just to get some peace. This is what many information security teams experience with vulnerability alerts. Given the sheer volume of alerts that security systems keep sending, it's easy to see how a high-risk alert, one that could lead to a devastating breach, can go unnoticed or be ignored altogether.

Today, most companies rely on a dozen or more vulnerability scanners to scan their diverse attack surfaces and alert security teams. Detecting these vulnerabilities is crucial, especially since Mandiant research found that exploited vulnerabilities were the top known initial intrusion vector in 2023. However, these scanners operate in silos and generate millions of findings a day, overwhelming security teams with the volume of alerts. What's worse, only about 6% of these vulnerabilities are ever exploited in the wild, according to the 2024 EPSS report. This flood of data makes it incredibly difficult to prioritize genuine risks, turning vulnerability management into a big data problem as much as a cybersecurity one.

The good news? A well-established, risk-based approach can dramatically improve an enterprise’s ability to identify and prioritize high-risk vulnerabilities. Unlike traditional vulnerability management methods that rely on CVSS scores (which provide a theoretical severity), risk-based vulnerability management (RBVM) combines multiple risk factors to generate a risk score. This approach helps security teams measure the likelihood and impact of potential incidents, considering that every organization’s business and operational context is different. But how easy is it to accurately measure risk?  

The Challenges of Traditional Vulnerability Management 

In practice, most enterprise vulnerability management programs face significant challenges due to data overload. It becomes nearly impossible to identify the truly critical vulnerabilities in all the noise. Many security teams use manual processes or homegrown scripts to pull data from various siloed scanning tools: network scanners, application security tools, cloud-native security tools, and other sources, each operating in its own 'language.' Information is scattered across these tools, leading to a fragmented view of the organization's security posture and making it very difficult for the team to pinpoint potential exploits.

Even worse, after prioritizing critical vulnerabilities, security teams still need to identify the remediation owner. The complex enterprise IT landscape makes it difficult to assign responsibility, as security teams often lack insights into system owners. According to Gartner, the most commonly reported challenge in vulnerability management programs is having different teams assigned to vulnerability management and patch management (43%), followed by inadequate visibility into the remediation process (32%). 

Figure: The most commonly reported vulnerability management challenges (Source: Gartner, State of Vulnerability Management Programs in 2023)

Such confusion not only slows down the response time, but also erodes trust between security and remediation teams and increases the risk of missing critical threats. Instead, companies should strive for a more integrated risk-based approach that prioritizes risks, sets appropriate SLAs, and effectively assigns tickets to the right remediation owners. Let’s explore this step by step.  

Step 1: Achieving Full Visibility with Unified RBVM  

The first step in rolling out risk-based vulnerability management is to bring all your risk data together and make sure it is consistent across sources. Think about it: your company might be using multiple scanners, like CrowdStrike Falcon, Microsoft Defender, and others, each integrating a different risk framework. How can these be reconciled?

The solution is to pull in all that data from every source and then standardize it. By normalizing the information, you can align risk scoring, regardless of the detecting tool. This way, you’re not comparing apples to oranges. Instead, you get a clear, unified view of your risks, so you can effectively prioritize and act where it matters most.  
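To make this concrete, here is a minimal sketch of what normalization might look like in Python. The scanner names, payload fields, and severity mappings are illustrative assumptions, not the actual output format of any particular tool.

```python
from dataclasses import dataclass

# A common schema for findings, regardless of which scanner produced them.
# Field names and the example payloads below are illustrative only.
@dataclass
class NormalizedFinding:
    scanner: str
    asset_hint: str        # hostname, image digest, cloud resource ID, etc.
    vulnerability_id: str  # CVE ID or a scanner-specific weakness ID
    severity: float        # normalized to a 0-10 scale

# Map each tool's severity vocabulary onto one shared scale.
SEVERITY_WORDS = {"low": 3.0, "medium": 5.0, "high": 8.0, "critical": 10.0}

def normalize_endpoint_finding(raw: dict) -> NormalizedFinding:
    """Adapter for a hypothetical endpoint scanner that reports word severities."""
    return NormalizedFinding(
        scanner="endpoint-scanner",
        asset_hint=raw["hostname"].lower(),
        vulnerability_id=raw["cve"],
        severity=SEVERITY_WORDS[raw["severity"].lower()],
    )

def normalize_cloud_finding(raw: dict) -> NormalizedFinding:
    """Adapter for a hypothetical cloud scanner that already uses a 0-10 score."""
    return NormalizedFinding(
        scanner="cloud-scanner",
        asset_hint=raw["resource_id"],
        vulnerability_id=raw["finding_id"],
        severity=float(raw["score"]),
    )

# Every source funnels into one list with one vocabulary.
findings = [
    normalize_endpoint_finding({"hostname": "WEB-01", "cve": "CVE-2024-0001", "severity": "High"}),
    normalize_cloud_finding({"resource_id": "i-0abc123", "finding_id": "CVE-2024-0002", "score": 9.8}),
]
print(findings)
```

The point is the funnel: every source is translated into one schema and one severity scale before any prioritization happens.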

Before we move to the next step, let's take a moment to clarify our terminology; a minimal data-model sketch follows the list below.

  • Asset: Server, application, component, cloud instance, and more. 
  • Vulnerability: Any CVE, misconfiguration, or security weakness. 
  • Risk: The impact and likelihood of exploit of a specific vulnerability on a specific asset. 
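To make these terms concrete, here is one minimal way to model them in code; the classes and fields below are illustrative assumptions rather than a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    asset_id: str                              # canonical, deduplicated identifier
    name: str                                  # e.g. "payments-api" or "WEB-01"
    kind: str                                  # server, application, container image, cloud instance...
    tags: dict = field(default_factory=dict)   # business context lives here

@dataclass
class Vulnerability:
    vuln_id: str                               # CVE ID, misconfiguration rule, or weakness ID
    title: str
    cvss: float                                # theoretical severity, used as one input only

@dataclass
class Risk:
    asset: Asset
    vulnerability: Vulnerability
    likelihood: float                          # probability of exploitation (0-1)
    impact: float                              # business impact if exploited (0-10)

    @property
    def score(self) -> float:
        # One common convention: risk = likelihood x impact.
        return self.likelihood * self.impact
```

Treating risk as a property of an asset-vulnerability pair, rather than of the vulnerability alone, is what allows the same CVE to score differently on different assets.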

Since assets are fundamental to risks, effective asset management is crucial to RBVM. Let’s now explore why it is important to avoid duplicate assets.  

Step 2: Correlating and Deduplicating Assets 

The first part of unifying vulnerability data is identifying your IT assets. This involves extracting all relevant data to create a comprehensive inventory of the assets in your organization and keeping that inventory consistent over time. However, security scanners are often inconsistent, or lack persistence, when it comes to asset IDs. For example:

  • The same container image instance has different IDs as it is copied to multiple code repositories and runtime environments.   
  • The same asset is denoted using different IDs across two or more scanners. 
  • The ID of an asset changes over time within the same security scanner. 

The challenge in each of the examples above is to correlate and deduplicate assets across different scanners, environments, and points in time. If you don't deduplicate these assets, you end up with an inflated number of alerts, especially as you add more scanners. In the case of container images, you may also fail to fix the root of the problem and keep copying the vulnerable image from one repository or environment to the next. Hence, it is important to eliminate duplicates so that each asset is represented once, consistently.
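One common way to approach this, sketched below, is to derive a correlation key from relatively stable attributes (an image digest, a cloud resource ID, a normalized hostname) instead of trusting each scanner's own asset ID. The field names and precedence order here are assumptions for illustration.

```python
def correlation_key(asset: dict) -> str:
    """Derive a stable key for an asset record reported by any scanner.

    Prefer identifiers that survive copies and rescans (an image digest,
    a cloud resource ID), and fall back to a normalized hostname.
    The field names here are illustrative, not a real scanner's schema.
    """
    if asset.get("image_digest"):
        return f"image:{asset['image_digest']}"
    if asset.get("cloud_resource_id"):
        return f"cloud:{asset['cloud_resource_id']}"
    return f"host:{asset['hostname'].strip().lower()}"

def deduplicate(assets: list[dict]) -> dict[str, dict]:
    """Merge records that describe the same asset into one canonical entry."""
    canonical: dict[str, dict] = {}
    for record in assets:
        key = correlation_key(record)
        merged = canonical.setdefault(key, {"sources": []})
        merged.update({k: v for k, v in record.items() if k != "scanner"})
        merged["sources"].append(record["scanner"])  # keep provenance
    return canonical

# Two scanners reporting the same host under different IDs collapse to one asset.
inventory = deduplicate([
    {"scanner": "endpoint-scanner", "hostname": "WEB-01", "scanner_asset_id": "a-111"},
    {"scanner": "network-scanner", "hostname": "web-01", "scanner_asset_id": "z-999"},
])
print(len(inventory))  # 1
```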

Once you have a clean and accurate list of assets, the second part is to correlate the identified vulnerabilities, such as Common Vulnerabilities and Exposures (CVEs), that may have been discovered by multiple scanners.
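The same technique applies to findings: once every asset has a canonical ID, findings from different scanners that describe the same CVE on the same asset can be merged into a single record. A minimal sketch, again with illustrative field names:

```python
def correlate_findings(findings: list[dict]) -> list[dict]:
    """Collapse duplicate findings keyed by (canonical asset ID, CVE ID).

    Each merged record keeps the list of scanners that reported it and the
    highest severity seen, so nothing detected by only one tool is lost.
    """
    merged: dict[tuple[str, str], dict] = {}
    for f in findings:
        key = (f["asset_id"], f["cve"])
        entry = merged.setdefault(key, {"asset_id": f["asset_id"], "cve": f["cve"],
                                        "severity": 0.0, "reported_by": []})
        entry["severity"] = max(entry["severity"], f["severity"])
        entry["reported_by"].append(f["scanner"])
    return list(merged.values())

# Three raw findings, but only two unique (asset, CVE) pairs remain after correlation.
unique = correlate_findings([
    {"asset_id": "host:web-01", "cve": "CVE-2024-0001", "severity": 8.0, "scanner": "endpoint-scanner"},
    {"asset_id": "host:web-01", "cve": "CVE-2024-0001", "severity": 7.5, "scanner": "network-scanner"},
    {"asset_id": "host:db-02", "cve": "CVE-2024-0002", "severity": 9.8, "scanner": "cloud-scanner"},
])
print(len(unique))  # 2
```

With assets and findings correlated and deduplicated, the next question is: how do we prioritize the highest risks?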

Step 3: Scoring Risks with Business and Asset Context 

The third step is to accurately determine the risk of each vulnerability instance by weighing contextual factors. The same vulnerability on two different assets can have very different risk profiles, determined mainly by four business context factors:

  1. Business criticality: How important is this asset to my organization? Is this a CRM application? An online order form? A payment gateway?  
  2. Data sensitivity: Does the asset contain sensitive data or process/store personally identifiable information that could be at risk? 
  3. Internet exposure: Is the asset public-facing or internet-facing? Assets that are exposed to the internet are typically at greater risk. 
  4. Compliance scope: Does the asset fall under the scope of a compliance framework? For example, a payment processor that must comply with PCI DSS would typically be at a higher risk than one that is not required to comply with industry regulations.  

This approach is superior to CVSS scores, which are generic to the industry and only provide a theoretical number to measure severity (technical risk) without considering the context of the specific asset and organization. 
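As a rough sketch of how these four factors can adjust a base severity score, consider the following; the multipliers and the cap at 10 are arbitrary assumptions for illustration, and a real program would tune them to its own environment.

```python
def contextual_risk_score(base_severity: float, asset: dict) -> float:
    """Adjust a base severity (e.g. CVSS, 0-10) with business and asset context.

    The multipliers below are illustrative assumptions, not an industry standard:
    each context factor nudges the score up, and the result is capped at 10.
    """
    multiplier = 1.0
    if asset.get("business_criticality") == "high":
        multiplier += 0.4   # revenue-critical systems (payment gateway, CRM, ...)
    if asset.get("holds_sensitive_data"):
        multiplier += 0.3   # PII or other regulated data at stake
    if asset.get("internet_facing"):
        multiplier += 0.3   # larger exposure to opportunistic attacks
    if asset.get("compliance_scope"):
        multiplier += 0.2   # e.g. in scope for PCI DSS
    return round(min(base_severity * multiplier, 10.0), 1)

# The same CVE scores very differently on an exposed payment gateway
# than on an isolated internal test server.
payment_gateway = {"business_criticality": "high", "holds_sensitive_data": True,
                   "internet_facing": True, "compliance_scope": "PCI DSS"}
test_server = {"business_criticality": "low"}

print(contextual_risk_score(7.5, payment_gateway))  # 10.0 (capped)
print(contextual_risk_score(7.5, test_server))      # 7.5
```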

Step 4: Evaluating Your Risk with Vulnerability Intelligence 

Once all assets are mapped with corresponding vulnerabilities and risk levels, it’s time to add threat intelligence and relevance to each risk. This helps create a deeper, more tailored risk model that allows you to analyze relevant threats and account for trends and changes in the threat landscape. One of the most effective ways to add these insights is using threat intelligence feeds, including premium sources such as Mandiant and open-source ones like the Exploit Prediction Scoring System (EPSS). 

Threat intelligence feeds provide data that answers the following questions (a minimal enrichment sketch follows the list):

  • Is there an exploit available? (Referencing databases like CISA’s Known Exploited Vulnerabilities) 
  • Is this vulnerability currently being exploited in the wild? 
  • Who are the threat actors targeting this vulnerability? 
  • What is the likelihood of an attack? 
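As one concrete example, both EPSS and the CISA Known Exploited Vulnerabilities (KEV) catalog publish free, public data that can be pulled programmatically. The sketch below queries the public EPSS API and the KEV JSON feed; the URLs and response fields reflect the public documentation at the time of writing and may change.

```python
import requests  # third-party: pip install requests

EPSS_API = "https://api.first.org/data/v1/epss"           # public FIRST.org EPSS API
KEV_FEED = ("https://www.cisa.gov/sites/default/files/feeds/"
            "known_exploited_vulnerabilities.json")        # CISA KEV catalog (JSON)

def enrich_with_threat_intel(cve_ids: list[str]) -> dict[str, dict]:
    """Attach exploit-prediction and known-exploitation data to each CVE.

    URLs and field names follow the public docs at the time of writing;
    verify them against FIRST.org and CISA before relying on this sketch.
    """
    # EPSS: estimated probability (0-1) that the CVE will be exploited soon.
    epss_resp = requests.get(EPSS_API, params={"cve": ",".join(cve_ids)}, timeout=30)
    epss_scores = {row["cve"]: float(row["epss"]) for row in epss_resp.json().get("data", [])}

    # KEV: the set of CVEs CISA has confirmed as exploited in the wild.
    kev_resp = requests.get(KEV_FEED, timeout=30)
    kev_cves = {item["cveID"] for item in kev_resp.json().get("vulnerabilities", [])}

    return {
        cve: {
            "epss": epss_scores.get(cve, 0.0),       # likelihood of exploitation
            "known_exploited": cve in kev_cves,      # actively exploited in the wild
        }
        for cve in cve_ids
    }

if __name__ == "__main__":
    print(enrich_with_threat_intel(["CVE-2021-44228", "CVE-2024-0001"]))
```

A vulnerability that appears in KEV or carries a high EPSS score is a strong candidate for the top of the remediation queue.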

Keep in mind that not all vulnerabilities are created equal. Some might be rare but very dangerous, while others may be common yet unsuitable for targeted attacks. Take the Rowhammer attack, for example. It exploits a weakness in RAM by repeatedly accessing specific memory rows, causing bit flips through electrical interference. While this can result in random data corruption, it lacks the precision to target specific data or systems. It is an opportunistic attack, and generally less critical because it cannot be directed at specific targets.

Combining predictions with real-time threat intelligence gives you a deeper, more thorough framework to prioritize risks.  

Adopting a Risk-Based Security Approach That Works 

Implementing risk-based vulnerability management requires a strategic approach, planning, and disciplined execution. Nucleus is committed to unifying vulnerability management with customized risk scoring, integrated threat intelligence, and automated prioritization workflows in one platform. 

The Nucleus platform provides comprehensive visibility across your security tools, correlates and deduplicates assets, provides business context to score risks, and incorporates vulnerability intelligence. By following this approach, organizations can overcome the ‘volume problem’ facing security teams everywhere, discovering and resolving the riskiest threats to their security.