• December 23, 2022
  • Kevin Swartz

Back in October 2022, Nucleus had the opportunity to speak at the mWISE Conference in Washington D.C. – an event dedicated to connecting security practitioners from around the globe to share insights and knowledge, and network with each other. During this event, our CEO and Co-Founder, Stephen Carter, led a talk on Managing Risk of Open Source Libraries using Mandiant Vulnerability Intelligence, with a special introduction by Kevin Mandia himself. Below is a recording of the conversation, in case you missed it, along with the full transcript. Enjoy!

Introduction

Kevin Mandia: Every once in a while I think you meet a founder and you meet a company and they’re doing the right thing at the right time. My name’s Kevin Mandia, CEO of Mandiant, and all I do is show up to breaches and figure out what happened and what to do about it, and what we’re getting all the time from our customers when it comes to security and shields up is how do we protect ourselves. Everybody’s running Qualys, Rapid7, Tenable, or other scanning tools, and they’re going, “All right, we have 182,000 vulns on our network globally. Who do I call to patch them and how do I manage this process?” Those companies will say, “Well, we’ve automated the process.” They haven’t automated the process and they haven’t prioritized for you, and we are constantly trying to help customers figure out how to close the front door, which is the vulnerabilities facing the internet.

That’s where Nucleus comes in, that extra layer of logic where you can take all the disparate scan data and figure it out. I’m trying to do your whole presentation for you before you get up here, but the reason I want to do this is a lot of people think there’s not a need for it. There absolutely is a need to overlay logic, business logic, on top of a bunch of disparate scan data to marry up: here are the assets that matter most to you, here are all your scan data vulnerabilities, and then you can prioritize what to do about it, whether that’s a patch or other compensating controls.

Then the second thing I’ll leave you with, which doesn’t come up at security companies that much, is on the side I do a lot of work with some venture capital companies, and all I am is like the founder guy, because I started a company that I self-funded and sold it twice, which is, I would argue, bad execution. But the reality is, being a self-funded company in today’s cyber world is virtually impossible. But I get to meet a lot of founders, spend an hour with them, and figure out which ones are going to make it, which ones are real, and whether the genesis of their company marries up to what the company is doing.

Steve, you and this company, the genesis of it all aligns. When you hear his background and what he did and then what he built, it all makes sense. You’re not getting a presentation, no offense to the Harvard Business School grads out there who say, “I went to business school and I’m going to be an entrepreneur someday,” and, “Oh, I’m going to cure this cybersecurity problem.” They jump in and you go, “What’s the genesis of your career?” “Business school.” I just go, “Oh, okay. They’re not solving this problem.” You need to be a doctor to cure cancer, in my opinion. You need to be an eye surgeon to figure out how to cure an eye problem. I think you need to know a little bit about cybersecurity before you start a company about it. With that, I want to introduce Steve Carter, who I’ve met and we’ve partnered with, and I’m a believer in what they’re doing, and I appreciate you being here. Steve, over to you.

How to Manage Risk of Open Source Libraries using Mandiant Vulnerability Intelligence

Stephen Carter: Well, thank you so much, Kevin. I think that was by far the best introduction I’ve ever had. It’s great to be here.

What I want to talk about today is managing the risk of open-source libraries using vulnerability intelligence. I chose this topic for a couple of reasons. The first reason is that this is a pain point that a lot of our customers have, something we talk about a lot at Nucleus. But the second reason is that vulnerability intelligence historically has lived more on the operational side of the house, right? It’s security analysts, it’s SOC analysts, and those folks that are primarily the consumers of intelligence. With this use case, we’re shifting intelligence left, we’re bringing it into engineering environments, into development environments. That’s really exciting stuff to me to see that happening, so that’s really why I chose this topic today.

Now, before I jump into things, I’m going to cover some background really quick. I know all you guys are professionals, you’ve all seen these terms before, but all of these terms tend to mean something slightly different depending on who you talk to, so I want to make sure we level set here and that when I say them, you guys at least know what I’m talking about.

Understanding Vulnerability Management Terms

Just to start with, vulnerability management. This is just the never-ending cycle of continuously discovering vulnerabilities, triaging vulnerabilities, and patching vulnerabilities. Pretty straightforward, simple concept. To Kevin’s point, if you ask practitioners in this space, guys doing vulnerability management in the trenches in large enterprises, they’ll tell you that vulnerability management represents some of the hardest cybersecurity challenges in the organization. That’s for a lot of different reasons; I could probably do a whole talk just on the list of reasons.

But what we found is that it’s primarily because the scope of vulnerability management has increased so much in the last, I’d say, five to 10 years. It used to be that vulnerability management was really focused on just network security, on traditional computing devices. It was scan and patch, Qualys, Tenable, Rapid7. Today the scope of vulnerability management is much larger. It includes applications. It includes code, it includes OT, it includes cloud resources, firmware, and so the scope’s much larger, the problems are much more difficult to solve, the processes are more challenging. That’s where we’re at today with vulnerability management.

With risk-based vulnerability management, I really think of it as the same as vulnerability management, but all the decisions that you’re making are through this lens of risk. That’s an important distinction because if you talk to vulnerability management folks, most of them today are still prioritizing vulnerabilities based on what the tools are telling them to fix first, right, based on severity, and severity is not risk. You’re going to hear me probably repeat that about a dozen more times today. Severity is not risk. Impact is not risk. When we talk about risk-based vulnerability management, we’re implying that we’re doing vulnerability management, but we’re making all these decisions about what to fix, when to fix, our SLAs, our policies, based on risk.

Now, that’s a good segue to the third point, vulnerability intelligence. This is going to be all the information that you need to make those risk decisions about vulnerabilities. I view vulnerability intelligence as just a slice of threat intelligence, a small subset of threat intelligence. That subset really correlates to all the different vulnerabilities and CVEs out there and tells you all the information you need to make those risk decisions.

The Risk of Open-Source Libraries

All right, so when we’re talking with prospects and customers about managing the risk of open-source libraries in particular, these are two of the problems that come up really often. These are things we hear on a regular basis. The first one is that the security teams can’t get dev teams and product teams to update open-source libraries quickly enough. This is a speed issue.

Generally in this case, the organizations haven’t fully adopted DevSecOps or DevOps practices. They’re maybe in a transition period. Sometimes there’s some dysfunction between application security teams and IT security teams, or AppSec teams and product teams. But generally, they’re in this process of maturing, and security teams are saying, “Hey, look, we’re accumulating all this risk. We’re accumulating vulnerabilities faster than we’re able to work them, so we’ve got this vulnerability debt, or risk debt, that we’re accumulating over time.” That’s when companies generally find us and we try to help solve that problem. But the good news is that this is actually less and less of a problem. From what we’ve seen in the last couple of years especially, mid and large enterprises are doing a much better job of DevSecOps, and of DevOps in general. They have shifted left, so it’s less and less of a problem.

The second problem is really what we’re hearing more and more about, which is that dev teams are spending far too much time updating open-source libraries. This is really, in most cases, a scenario where the organization has done a pretty good job of DevSecOps. They’re fairly mature, they’ve got all the right tools to discover vulnerabilities. They’ve got SLAs in place, their dev teams are actually patching things, fixing things, meeting SLAs. But what we’re hearing from product teams is, “Hey, we’re spending so much time updating these libraries and fixing these vulnerabilities that it’s impacted our velocity, it’s impacted our output. We’re not able to build the features we want to build or fix other types of bugs and other security bugs because we’re spending so much time on this problem.” We’re seeing that more and more.

Now, you might say, “Hey, look. Nucleus is talking to people with problems and challenges in vulnerability management all the time.” Our sample size is pretty small. We don’t have every large enterprise as a customer. Hopefully one day we will. But what I want to do here is paint the picture a little better at the macro level and look across all large enterprises. These are a bunch of different research reports that have come out from different vendors, all in the last six months, and I think they do a pretty good job of painting the picture.

I guess I’ll start on the top left and just zigzag back and forth. 96% of organizations are using open-source libraries in their code. I don’t think this is very controversial, right? If you guys spend time in enterprises, large enterprises especially, everyone is writing custom code. Every large enterprise has products they either sell or develop for internal use. Open-source is everywhere, basically.

The next point: the average enterprise has 464 custom applications deployed today. Now, in this study, they looked at enterprises of all sizes, starting at, I guess, a thousand employees. At the floor there, I think there were 22 custom applications for an enterprise of a thousand people, and the number just went up from there. But that’s a huge number. That was surprising to me because I know, at least within Nucleus, how hard it is just to maintain the security and update libraries and patches for one product. We have one product at Nucleus and it’s a big job, so 22 seems like a lot. 464 is kind of mind-blowing.

Now, the next point is that the average custom application has 128 open-source dependencies. You probably see where I’m going here. If you do the math, 128 times 464 is something like 60,000 dependencies. Now, obviously a lot of these dependencies are going to be reused across multiple apps. But even if we take just a small percentage of that 60,000, say 10%, we’re talking 6,000 open-source dependencies, all of which you have to monitor and scan for vulnerabilities, triage, and patch, and they’re spread throughout the enterprise across lots of different apps.
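A quick sanity check on that math, as a short sketch (the 10% reuse factor is the illustrative guess from the talk, not a figure from the study):

```python
# Back-of-the-envelope math from the figures above.
apps_per_enterprise = 464   # average custom applications per enterprise
deps_per_app = 128          # average open-source dependencies per application

total_deps = apps_per_enterprise * deps_per_app
print(f"Naive total: {total_deps:,}")   # 59,392 -- "something like 60,000"

# Dependencies are heavily reused across apps; even if only ~10% are unique:
unique_deps = total_deps // 10
print(f"Unique (~10%): {unique_deps:,} libraries to monitor, triage, and patch")
```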

Now, this next point, 650% increase year over year in attacks aimed at open-source suppliers. This is supply chain attacks. This is a pretty disturbing figure because what it tells me is that attackers are taking notice here. They see the increased use of open-source, and being opportunistic like hackers are, they’re going to spend their time where they think they can get the most bang for their buck with the least amount of effort. As a defender, I’m looking at this saying, “Okay, given the increased focus on open-source supply chain attacks, how am I going to defend against that in my organization? What do I need to do to prepare? What do I need to start thinking about today to get out in front of this?”

That brings me to the last point. Only 49% of organizations have a policy that speaks to open-source libraries and the risks and vulnerabilities associated with them. Most organizations do have a vulnerability management policy, especially mid and large-size enterprises. But in most cases, those policies were written years ago when the scope of the vulnerability management program was very small, and they don’t do a good job of accounting for all the new things, open-source included, that now have to fall under the umbrella of your vulnerability management program. That’s a big problem today.

Managing the Risk of Open-Source

Where to begin? When we get started with clients, one of the first things we do, we’ve got a discovery process where we ask them, “What are you doing today to manage the risk of open-source?” Most organizations today are doing a pretty good job with the front part of risk management in the context of open-source, in that they’re doing a good job with discovery. We’ll talk to product teams, we’ll talk to product security teams, and they’ll tell you, “We’ve got scanning tools everywhere.” Software composition analysis tools are very common, so they’re scanning all their code. Every time a developer checks in code, they’re going to scan it for open-source libraries and look for vulnerabilities. Every time they build a container image, they’re going to scan that image for open-source library vulnerabilities. In production, they’re running Kubernetes, they’re running containers, and they’re scanning those things on a regular basis. That part, most organizations are getting right.

When we drill down a little more and say, “Okay, great. What happens when all these findings are discovered by all these different tools?”, what they say is, “Yeah, we’ve got alerts that go out with all the different findings, reports that go out automatically.” Sometimes we see organizations automatically generate pull requests to update the libraries. That’s very common. Sometimes organizations are actually blocking in certain parts of the CI/CD pipeline when certain vulnerabilities are discovered. All of this is great stuff. If you’re not doing it today, you should think about doing these things. But when we ask, “How do you prioritize those vulnerabilities, those findings, as they’re coming in from all these tools?”, this is really where things get interesting, and they just say, “Yeah, well, we’re taking what the scanning tools are giving us, and we’re stack ranking those by the severity the tools have mapped these vulnerabilities to.”
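To make that concrete, here’s a minimal sketch of the stack-ranking approach being described; the CVE IDs, library names, and scores are hypothetical, not the output of any particular SCA tool:

```python
# Hypothetical SCA findings; "cvss" is the base score the tool reported.
findings = [
    {"cve": "CVE-2022-0001", "library": "example-parser", "cvss": 7.5},
    {"cve": "CVE-2022-0002", "library": "example-crypto", "cvss": 9.8},
    {"cve": "CVE-2022-0003", "library": "example-logging", "cvss": 8.1},
]

# Highest CVSS first: the ordering most tools give you out of the box.
# Note that nothing here asks whether the vulnerability is actually being
# exploited, which is exactly the problem with severity-only ranking.
for f in sorted(findings, key=lambda f: f["cvss"], reverse=True):
    print(f'{f["cve"]}  {f["library"]:<16} CVSS {f["cvss"]}')
```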

As you can see on this slide, I put some squares around the CVSS scores. I’m going to pick on CVSS a lot today. But in this case, these are fairly recent screenshots, and most of these tools today, SCA tools, and tools that are identifying vulns in open-source, they’re still using CVSS scores for prioritization. They’re still mapping labels of critical and high severity using CVSS scores. This is a big problem because CVSS is severity and impact, not risk.

Which brings me to this slide. Just what I said with the top bullet: CVSS does not equal risk, and 30 to 60% of vulnerabilities are considered high or critical severity according to CVSS scores. Now, that range is so big because it depends on which version of CVSS you’re using. But if you’re using the most recent version of CVSS, which is 3.1, I believe, it’s going to be on the higher end; over 50% of vulnerabilities are considered high or critical.
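For reference, those high and critical labels come from the qualitative severity bands defined in the CVSS v3.1 specification, which map to scores like this:

```python
def cvss_v3_severity(score: float) -> str:
    """Map a CVSS v3.x base score to its qualitative severity band."""
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

# Anything scoring 7.0 or above lands in the High/Critical bucket discussed here.
assert cvss_v3_severity(7.5) == "High"
assert cvss_v3_severity(9.8) == "Critical"
```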

This is bad for obvious reasons, right? If everything’s a fire, then nothing’s really a fire. But it’s really bad when you consider the volume of new vulnerabilities coming out each day, each month, each year. This year so far, I believe there are around 20,000 new vulnerabilities that have been added to the National Vulnerability Database. Last year, I think there were around 20,000 in total. This year, I think we’re on target to reach around 25,000. It’s going to be a new record. Just to rewind the clock six years or so, to 2016 for context: there were 6,500 vulnerabilities discovered and added to the National Vulnerability Database that year, so we’re talking about a 4X increase in just six years, and there are no signs that that trend is going to stop anytime soon. We expect it to keep going up and up. When you think about the 60% in the context of that, this is a problem that’s just getting worse every day, every month, every year.

The final bullet there: 75% of high and critical severity vulnerabilities according to CVSS are just never exploited. This is kind of a bad and a good story at the same time. The bad news is that most of your development teams and product teams are wasting a lot of time patching vulnerabilities that are never going to be exploited, that present no risk at all to your enterprise. They’re high severity but no risk. The good news is that there’s an opportunity here to really separate the signal from the noise and focus on the vulnerabilities that will be exploited, that do present risk, and free up a lot of your development teams’ and product teams’ time, and hopefully increase their output and velocity.

You might say, “Well, we don’t have a crystal ball. We can’t know exactly precisely which vulnerabilities will be exploited.” Fair enough, but we do have ways today to identify the vulnerabilities that are being exploited in the wild today with a high degree of confidence, and we do have ways to predict which vulnerabilities are going to be exploited in the next 30 days or so with a very high degree of confidence. Using those things combined, we can get pretty close to the crystal ball.
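The talk doesn’t name a specific prediction source here, but one public example of that kind of 30-day exploitation forecast is FIRST’s EPSS. A minimal sketch of looking up a score from its public API, with the endpoint and response shape per FIRST’s published documentation:

```python
import json
import urllib.request

def epss_score(cve_id: str) -> float:
    """Fetch the EPSS probability that a CVE will be exploited in the next 30 days."""
    url = f"https://api.first.org/data/v1/epss?cve={cve_id}"
    with urllib.request.urlopen(url) as resp:
        payload = json.load(resp)
    return float(payload["data"][0]["epss"])

# Log4Shell, for example, scores near the top of the distribution.
print(epss_score("CVE-2021-44228"))
```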

Using Intelligence-Led Prioritization in Vulnerability Management

All right, so that leads me to intelligence-led prioritization. Some people call this risk-based prioritization. The concept here is really simple: rather than relying on CVSS scores and severity and impact, we’re going to rely on vulnerability intelligence to help us make all those decisions about what to patch, what to update, and when to update. To do this well, it probably goes without saying that you need a great source of vulnerability intelligence. There are a lot of vulnerability intelligence products out there. Some are free, most are paid, and they’re not cheap. But there are a handful, really just a handful in the world, that I would personally trust for decision-making and vulnerability prioritization in my organization. Mandiant is one of them. You guys might recognize this screenshot from Mandiant Advantage. We believe that Mandiant is one of the best, if not the best, source of vulnerability intelligence in the world. I promise no one paid me to say that. Kevin didn’t pay me. In fact, it’s quite the opposite: we actually pay Mandiant to incorporate their intelligence in our platform for the benefit of our customers.

Here you can see on the right-hand side, this is just a sampling of some of the information that you get from a great vulnerability intelligence feed. It’s everything, from pretty simple vulnerability intelligence like, “Is this being exploited in the wild? Yes or no? To what degree?”, all the way through to pretty deep intelligence around attribution of specific exploitation activity to specific threat actors and threat groups. The more mature your processes get, the more mature your organization gets, the more you’re going to want that deeper level of intelligence that Mandiant gives you.

One of the things we like a lot about Mandiant, and that’s really unique, is they have an analyst manually assess each vulnerability and rate its risk as critical, high, medium, or low. That’s what the table at the bottom in blue represents. That’s really unique because most vuln intel feeds are just purely automated. They’re ingesting data from lots of different sources, running some AI against it, and giving you a feed. Mandiant does this expert analysis, which is unique and very powerful.

As you can see here at the bottom, the number of vulnerabilities that Mandiant rates as critical risk is tiny, something like 0.01%. I think there might be 15 or 20 in all time. Jared probably knows. It’s a very small number. Looking at high risk, it’s only 3% of all vulnerabilities. So combined, roughly 3%, compared to the 60% that CVSS rates critical or high. You can see we’re again separating the signal from the noise and really homing in on the vulnerabilities that present real risk, that are really being exploited in the wild. That’s really useful when it comes to operationalizing this vulnerability intelligence, because it’s great to have all of this information, but what do you do with it? How do you apply it? How do you incorporate it into your tools and your processes?

Operationalizing Threat Intelligence

Speaking of operationalization, a couple of different ways to operationalize threat intelligence. One way that you guys are probably familiar with is just vulnerability risk scores. Everybody loves risk scores. Generally, the concept here is we’re going to take vulnerability intelligence, we’re going to take some asset context, we’re going to mash it all together with some fancy math and spit out a risk score, a number that represents the level of risk for each vulnerability. It might be on a scale of one to 10, might be on a scale of one to a hundred or a thousand, but that’s the general concept.
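As a toy illustration of that mashing-together (the weights, inputs, and scale here are invented for the example, not Nucleus’s or any other vendor’s actual formula):

```python
def risk_score(cvss: float, exploited_in_wild: bool, internet_facing: bool,
               business_criticality: int) -> int:
    """Toy 0-1000 risk score from severity, threat intel, and asset context."""
    score = cvss * 40                         # severity contributes up to 400
    score += 300 if exploited_in_wild else 0  # threat intelligence signal
    score += 200 if internet_facing else 0    # asset exposure
    score += business_criticality * 33        # criticality 0-3 adds up to ~100
    return min(int(score), 1000)

print(risk_score(9.8, True, True, 3))   # 991 -- but what separates 991 from 989?
```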

Risk scores are great for several things, but they’re not great for everything. Risk scores, if you want to measure and monitor risk, baseline your risk for the organization and then monitor it and track it over time, they’re really good for that. That’s a really important thing to do because you have to prove that you’re actually making progress. You’re spending a lot of money, you’re implementing a lot of changes, and you have to be able to show management, leadership that the organization is on track, and improving the program, so risk scores are really good for that. Executives love a nice chart that shows your risk going down over time. The board loves that stuff. Hopefully it’s going down, so no one gets fired.

But I’ll say that risk scores are not always great for decision-making in the context of vulnerability management. What we find is that the bigger the organization gets, the more vulnerabilities you have. You can have hundreds of thousands, you can have millions of vulnerabilities. These risk scores are generally a black box, in that there’s a lot of ambiguity in the scores, and the folks consuming the scores can’t always explain what the difference is between a risk score of 999 and 997. How do you make that distinction if you just have a number?

The funny thing is most vendors that have risk scores in their products can’t do a good job of explaining that either. Nucleus has a risk score in our product. I think we do a fair job of explaining it, but most vendors, outside of their engineering team, struggle to explain their risk scores. Because of that lack of precision, risk scores are not a great way to prioritize vulnerabilities, in a large enterprise especially. In smaller enterprises it can work, but in a large enterprise it becomes difficult.

That brings me to decision trees. Has anyone here heard of decision trees, by chance? Okay, a couple of people. All right. Decision trees, if you’re not familiar, you can think of them kind of like workflow diagrams, where you’ve got the diamonds that represent decisions, you’ve got a lot of logic in the workflow, and the ultimate outcomes are at the end of the diagram: “Here’s what you’re going to do at the very end of the diagram.” Decision trees are very similar to that.

There is a concept called SSVC. It stands for Stakeholder-Specific Vulnerability Categorization. The concept was written about a few years ago, back in 2019, I believe, by the Software Engineering Institute at Carnegie Mellon. They wrote a great 70-something-page paper. I highly recommend that everyone here check it out and read it. But the TLDR in my mind, and I’ll probably oversimplify this a bit, is that risk scores for decision-making, specifically in the context of vulnerability management, aren’t great, as I pointed out in the last slide, but that decision trees can be an excellent tool for making decisions about the actions you want to take when specific vulnerabilities are discovered in your environment.

This is what we recommend to folks. To get started with decision trees, what we generally recommend is to start thinking in general about the levels of risk in your organization, and the levels of vulnerability risk in particular. The idea is if you can say, “I’ve got four levels of risk. I’ve got critical, high, moderate, low,” you want to think about the criteria and the conditions under which a vulnerability could be mapped to that level of risk. It might be that the vulnerability is exploited in the wild. It might be that the vulnerability is exploited by an attacker that is known to attack your organization or your industry or sector. There could be a lot of conditions under which you want to pull the fire alarm and have a war room and do everything you can to get this thing patched immediately at the highest level.

But then you’re going to have a lot of levels below that. You want to think about, as a vulnerability management program and within the context of DevSecOps, what those levels are going to be and what criteria the vulnerabilities should match in order to take these actions and have this response. Then finally, you want to capture all of that in the form of a decision tree, and as you’ll see in a second, it’s a graphical representation of all of this logic and decision-making. In that tree, you can also capture things like when you want to accept risk, the conditions under which you can accept the risk for a vulnerability, or when you want to implement compensating controls.

This is really nice. A picture’s worth a thousand words, right? It’s really nice to align all the stakeholders in the organization as to what the policy is and how we’re handling vulnerabilities as they’re discovered in the enterprise. It’s much better than a Word document with a long bulleted list of basically Boolean logic. It’s a picture. I’ll go ahead and show you that now. This is a decision tree on the right-hand side. In this case, what we’re doing is describing the vulnerabilities that we want to take an action to hotfix when they’re discovered in our environment. By “hotfix,” I mean the dev team, the product team, they’re going to take a timeout, they’re going to swarm and figure out how to update this specific open-source library, how to patch it, how to get that into production as quickly as possible. This is the fire alarm: “Everyone, stop what you’re doing. We’re not going home until this is done.” That’s what these two scenarios describe.

You can see, and I’ll just pick on scenario one here, we’re saying when a vulnerability is discovered, it’s got a Mandiant risk rating of critical, and the application is exposed to the internet, that’s when we want to take this hotfix action. Now, keep in mind, a very, very tiny number of vulnerabilities are ever rated Mandiant critical, so something like this shouldn’t happen very often, right? It might happen maybe once a year. Maybe not even that, hopefully.

Same with scenario two. Here we go down a level: the Mandiant risk rating is high, the app’s exposed to the internet, and the app is either high or moderate criticality, “criticality” meaning it’s an important application or service to the business. If this thing goes down, we’re losing money, for example. Those are the conditions that need to be met for a hotfix. Keep in mind, this is just an example of a decision tree, a very simple example, I should say, really just to illustrate how you can combine vulnerability intelligence and asset context together to make decisions about what to do.

I should say, I haven’t really mentioned asset context too much so far in this presentation. It is extremely important in the context of risk-based vulnerability management. Threat intelligence, I would say is number one, vulnerability intelligence in particular. But asset context is a close second, “asset context” meaning, is this app exposed to the internet? Is the app hosting sensitive data, maybe customer data? Is the app important to the business? Does it provide a service that’s important to the business? These are all things that are very important in the context of decision trees and just risk-based vulnerability management in general.

All right, so here we’ve got three scenarios that describe the vulnerabilities that we want to update soon, but they’re not urgent, right? I’m going to assume that most of you are using Agile and Scrum and things like that, so you’ve got sprints. They’re probably every week, might be every two weeks. These are the types of vulnerabilities that you want to update in the next sprint, but we don’t have to stay all night to fix them when they’re discovered. You can see on the left side, I know this is an eye chart for most of you, but all the vulnerabilities we’re describing here have a Mandiant risk rating of either critical or high, while the application context has been expanded and relaxed a little bit.

If you look at scenario three, for example, Mandiant risk rating of critical, the app exposure is internal, with the idea being that since the application, the service is not exposed to the internet, we’ve got a little more time to roll a patch or an update to a library, so we don’t have to do it immediately. We’ll add an issue, we’ll add a task to the top of the backlog, and then hopefully in the next sprint, that’ll get picked up within a week or two.

All right. This completes the decision tree. You can see on the right side, and I should have pointed out that I’ve been building this up on the right side, this is the completed version of the tree that shows all three levels of risk, essentially. In this case, we’ve got four scenarios that describe vulnerabilities we just want to add to the backlog. We’re saying, “Look, we’ve decided as an organization, or maybe as a product team, that these are vulnerabilities we do want to fix. We don’t necessarily need to fix them today, or even next week. We just want to add them to the backlog, but make sure they get fixed sometime,” hopefully in the next couple of months, if your backlog isn’t too big.
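To tie the example together, here’s a sketch of that completed tree expressed as code. The field names are invented for illustration, and the branch conditions are my simplified reading of the example scenarios on the slides:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    mandiant_rating: str   # "critical", "high", "medium", or "low"
    app_exposure: str      # "internet" or "internal"
    app_criticality: str   # "high", "moderate", or "low"

def decide(f: Finding) -> str:
    # Scenario 1: Mandiant critical + internet-facing app -> pull the fire alarm.
    if f.mandiant_rating == "critical" and f.app_exposure == "internet":
        return "hotfix"
    # Scenario 2: Mandiant high + internet-facing + high/moderate criticality app.
    if (f.mandiant_rating == "high" and f.app_exposure == "internet"
            and f.app_criticality in ("high", "moderate")):
        return "hotfix"
    # Middle scenarios: critical/high rating but relaxed asset context -> next sprint.
    if f.mandiant_rating in ("critical", "high"):
        return "next sprint"
    # Remaining scenarios: still worth fixing, so queue it up.
    if f.mandiant_rating == "medium":
        return "backlog"
    return "accept risk / monitor"

print(decide(Finding("critical", "internet", "high")))   # hotfix
print(decide(Finding("critical", "internal", "high")))   # next sprint
```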

Speaking of the backlog, I talk a lot about adding things to the backlog. One of the things we’ve found really helpful, if you’re not doing it already and you don’t have a dev team that’s focused purely on security, is to make sure you’ve got some amount of work or story points reserved in your sprints for security. It could be for updating open-source libraries, building security features, or fixing other security bugs found in code. Just having that block of time reserved in every sprint will make sure that as you’re adding these things to the backlog, they actually get done.

The Big Takeaways

If you remember anything from this presentation, just a few points here. First, the volume of open-source library vulnerabilities is increasing year over year, really month over month, pretty significantly. This is becoming more and more of a problem for development and product teams. If it’s not a problem you’re aware of in your organization yet, it probably will be pretty soon.

If you’re prioritizing vulnerabilities using CVSS scores, essentially using vulnerability severities instead of risk, you’re fighting a losing battle. Again, it’s just a matter of time before you get to the point where your dev teams just can’t keep up, where they’re bogged down so much with library updates and patches that they can’t actually work on their product and develop their product. If it’s not an issue today, it will become an issue.

Then finally, risk-based decision-making, specifically using vulnerability intelligence, asset context, and decision trees, is really the best approach to managing the risk of open-source libraries.

Want to learn more about how Nucleus uses Mandiant Vulnerability Intelligence to help manage your vulnerabilities? Click here to get in touch.