Here at Nucleus, we often get questions from our customers about different vulnerability scanners and which are the best for the job — so much so that earlier this year, we put together our list of Top 5 Network Vulnerability Scanners and our tips for evaluating them for your unique program.
However, it’s important to note that employing a vulnerability scanner is only step one in an effective vulnerability management program.
Think of it like cooking up a delicious meal – bringing all the ingredients together in one place is only the first step. There are still actions to be taken, ingredients to blend, and dishes to be served. The same is true of vulnerability scan data. The true, tangible outcome of vulnerability management comes from turning scan data into something actionable, measuring the success rate, and repeating that process month over month to make progress over a long period of time… just like steadily perfecting your favorite recipe over the years.
Unfortunately, too many vulnerability analysts get lost in the overwhelming data produced by that first step alone.
The Overwhelming Challenge of Vulnerability Scan Data
Any vulnerability scanner can produce a list of findings and can likely provide them in more than one format. However, the true challenge comes in acting on the large amounts of data that these scanners output.
If you have a large number of systems on your network, your scanner is likely going to produce more data than you can load into Excel and analyze with pivot tables in a reasonable amount of time. In large enterprises with tens of thousands or hundreds of thousands of systems, that data could be millions of rows. And your scanner may not be able to break the file into smaller pieces for you, leaving you to find some other method to read and analyze it.
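To make the scale problem concrete, here is a minimal sketch of streaming a scan export row by row instead of loading it all at once. The file name and column names are hypothetical stand-ins for whatever your scanner actually emits:

```python
import csv
from collections import Counter

# Write a tiny hypothetical scanner export for illustration;
# real exports can run to millions of rows.
with open("scan_export.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["asset", "cve", "severity"])
    writer.writerows([
        ("host-1", "CVE-2021-44228", "critical"),
        ("host-2", "CVE-2021-44228", "critical"),
        ("host-2", "CVE-2019-0708", "high"),
        ("host-3", "CVE-2017-0144", "medium"),
    ])

# Stream the file one row at a time rather than loading it into memory,
# tallying findings per severity as we go. This works the same whether
# the file has four rows or four million.
severity_counts = Counter()
with open("scan_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        severity_counts[row["severity"]] += 1

print(dict(severity_counts))
```

Streaming like this sidesteps Excel's row limits, but it also shows why hand-rolled scripts become a treadmill: every new metric means another pass over the data.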
For example, we’ve had customers try to load all their data into a business intelligence tool, only to find out that the tool couldn’t calculate the basic metrics or KPIs they needed with the speed and/or reliability they expected.
Unfortunately, this leaves teams manually slicing up CSV files to calculate basic statistics like counts of exploitable and non-exploitable vulnerabilities and mean time to remediate (MTTR). In this situation, most teams find themselves simply scanning and tracking what changed between scans, with no time left for actual vulnerability management. Worse, hand-generating reports is so slow that a new report rarely reflects the most recent remediation activity, making it difficult for teams to know which data is accurate.
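Even the "basic" statistics take real work by hand. As a rough illustration of what teams end up computing manually, here is a toy MTTR calculation over a few invented findings (the field names are hypothetical, not any particular scanner's schema):

```python
from datetime import date

# Hypothetical findings; in practice these come from diffing scan exports.
findings = [
    {"cve": "CVE-2021-44228", "exploitable": True,
     "first_seen": date(2023, 1, 3), "fixed_on": date(2023, 1, 10)},
    {"cve": "CVE-2019-0708", "exploitable": True,
     "first_seen": date(2023, 1, 5), "fixed_on": date(2023, 1, 26)},
    {"cve": "CVE-2017-0144", "exploitable": False,
     "first_seen": date(2023, 1, 7), "fixed_on": None},  # still open
]

# Count exploitable vs. non-exploitable findings.
exploitable = sum(1 for f in findings if f["exploitable"])

# MTTR: mean days from first detection to fix, over remediated findings only.
remediated = [f for f in findings if f["fixed_on"] is not None]
mttr_days = sum((f["fixed_on"] - f["first_seen"]).days
                for f in remediated) / len(remediated)

print(exploitable, mttr_days)
```

Trivial at three rows; at three million rows, recomputing this after every scan by hand is exactly the treadmill described above.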
But while it’s easy to assume that brute force and head count are enough to tackle the problem of an overwhelming amount of scan data, that’s not the best approach. If you have 150 million vulnerabilities, it doesn’t matter how many you fix each week, because you’re never going to fix them faster than new vulnerabilities come in. Rather than the sledgehammer approach, you need precision.
Taking a Risk-Based Approach to Vulnerability Management
The need for high-performance data analysis is key to any enterprise vulnerability management program, and that starts with being precise about the vulnerabilities that you are tackling first. Here are four ways to get started with bringing a risk-based approach into your vulnerability management program.
1. Normalize all your data in one single place
Security analysts need quick access to asset and vulnerability data so they can assess risk, take action, and automate vulnerability management workflows. A risk-based vulnerability management platform like Nucleus not only helps you aggregate and consolidate all your asset and vulnerability data into a single platform, but also gives you real-time access to all your vulnerability data across 100+ integrations, which you can then use to de-duplicate and normalize your data. This provides you with a better and more complete view of vulnerability risk across your enterprise.
By taking this kind of risk-based approach to vulnerability management, the basic metrics that you used to spend weeks calculating in massive spreadsheets can now be recalculated every time a new scan is ingested.
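The normalization and de-duplication step can be sketched like this. It's a toy pass over two invented scanner formats (all field names are hypothetical, and a real platform does far more), but it shows the core idea: map every scanner's records onto one schema, then collapse duplicates on a shared key:

```python
# Findings from two hypothetical scanners that use different field names.
scanner_a = [
    {"host": "web-01", "cve": "CVE-2021-44228", "sev": "Critical"},
]
scanner_b = [
    {"ip_host": "web-01", "vuln_id": "CVE-2021-44228", "severity": "CRITICAL"},
    {"ip_host": "db-01", "vuln_id": "CVE-2019-0708", "severity": "HIGH"},
]

def normalize(finding, asset_key, cve_key, sev_key):
    """Map a scanner-specific record onto one common schema."""
    return {
        "asset": finding[asset_key],
        "cve": finding[cve_key],
        "severity": finding[sev_key].lower(),
    }

# De-duplicate on (asset, CVE): the same flaw reported by both
# scanners collapses into a single normalized finding.
merged = {}
for f in scanner_a:
    n = normalize(f, "host", "cve", "sev")
    merged[(n["asset"], n["cve"])] = n
for f in scanner_b:
    n = normalize(f, "ip_host", "vuln_id", "severity")
    merged[(n["asset"], n["cve"])] = n

print(len(merged))  # unique findings, fewer than the raw row count
```

Three raw rows become two unique findings, and every metric downstream is computed once per real vulnerability instead of once per scanner report.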
2. Manage vulnerability exceptions, mitigations, and fix actions
Sometimes you need to factor asset context into the vulnerability decisions that you make, so you need a tool that makes it possible to handle exceptions and mitigations directly, as well as track them. Nucleus allows you to deprioritize any vulnerability and leave a note stating why you did, so auditors can gain visibility into risk exceptions. This is also helpful when you and your remediation teams investigate false positives, because you can upload and attach evidence directly to the finding; when the auditor comes along once a year asking the tough questions, you can quickly pull up any paperwork you need and show them. Anything your remediators find helpful, you (or they) can put directly in the tool. That way, if the finding ever surfaces again, your team can follow the same remediation procedure that worked last time.
3. Manage risk acceptances
Risk acceptance is always easier to tackle when you can make informed, intelligent decisions about the risks you ask leadership to accept. Layering in industry-leading threat intelligence from sources like Mandiant allows you to make informed decisions based on their vulnerability intelligence and analysis. To make this even easier, this leading threat intelligence is included within the Nucleus platform, where each vulnerability is clearly labeled as critical, high, medium, or low. Plus, when there is threat actor activity around a vulnerability, it names which groups are using it and what they are doing with it, so you no longer have to rely on your gut feeling about what kind of risk you are accepting.
Writing up risk acceptances becomes much easier when you collect the business case for accepting the risk from the appropriate stakeholder. Plus, as a bonus, your remediators have one less distraction on their list.
4. Layer threat intelligence into your program
Mandiant threat intelligence isn’t just for writing better risk acceptances. You can also use it to divide the workload in a much more realistic and sustainable fashion than the traditional method of using CVSS.
So, rather than telling teams they have 14 days to fix every critical vulnerability, and 30 days to fix every high-severity vulnerability (and then wondering why they can’t fix 4 million vulnerabilities out of your backlog of 8 million in a single month), you can instead create security policies based on professional vulnerability intelligence. This helps you prioritize the 7-8% of your vulnerabilities that threat actors are likely to use and start hitting your due dates 85% of the time instead of missing them 85% of the time.
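As a rough illustration of that policy shift (the intel fields here are invented, not the Mandiant feed's actual schema), an intel-driven SLA might look like this: the small actively-exploited slice gets a tight due date, and everything else gets a realistic one:

```python
from datetime import date, timedelta

# Hypothetical backlog annotated with threat intelligence.
# Note that two of the "critical-by-CVSS" items are not actively exploited.
backlog = [
    {"cve": "CVE-2021-44228", "cvss": 10.0, "actively_exploited": True},
    {"cve": "CVE-2023-0001", "cvss": 9.8, "actively_exploited": False},
    {"cve": "CVE-2019-0708", "cvss": 9.8, "actively_exploited": True},
    {"cve": "CVE-2022-0002", "cvss": 7.5, "actively_exploited": False},
]

today = date(2024, 1, 1)

def due_date(vuln):
    # Tight SLA only for the slice threat actors are actually using;
    # a longer, sustainable window for everything else.
    days = 14 if vuln["actively_exploited"] else 90
    return today + timedelta(days=days)

# The actionable slice is what remediators work first.
urgent = [v["cve"] for v in backlog if v["actively_exploited"]]
print(urgent)
```

Under a pure CVSS policy, all three 9.8+ findings would land in the same 14-day bucket; under the intel-driven policy, only the two that attackers are actually using do, which is the difference between hitting due dates and drowning in them.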
Vulnerability management is all about making security decisions that hold up to scrutiny and managing your vulnerability debt. When you use the four steps above in your vulnerability management program, you can come away with an actionable list of vulnerabilities that is almost always far smaller than you expect… and, best of all, far more manageable.
Few IT organizations have enough resources to handle more than about 10% of the vulnerabilities in their environment. That means they have a very limited remediation budget. However, effective risk-based vulnerability management is all about spending that remediation budget as wisely as possible. 10% doesn’t sound like very much, but if you’re able to target it carefully, it’s usually enough to reduce an organization’s risk significantly.