Nucleus Shortcuts: Asset Metadata Automation Deep Dive

The following interview is from our most recent episode of Nucleus Shortcuts, a video series we post on YouTube where we chat with cybersecurity experts about topics in the vulnerability management industry. Click here to subscribe so you don’t miss future episodes!

Adam: Hey everyone! I’m your host of Shortcuts, Adam Dudley, and today we’re going to discuss how to transform your vulnerability management program with the power of asset metadata and automation. This topic is exciting because it’s critical to moving a vulnerability management program from dysfunctional to functional and successful, and sometimes from impossible to possible. When there are human resource limitations, you need automation to get vulnerability management done, which is a very complicated process. Our expert guest on this topic is an experienced and deeply technically savvy field security engineer. He’s also a Signal Sciences alumnus and has a side hustle programming lighting and effects for music shows, which I think is very cool. Please welcome Aaron Attarzadeh, or A2 as we call him at Nucleus!

Adam: Can you give us a quick take on what asset metadata is and why it’s important in the context of vulnerability management process automation?

Aaron: Asset metadata ranges from things like who owns the asset, who supports the asset, and the different applications deployed on that asset, to whether that asset is running or has been deactivated. Moving into highly hybrid cloud and on-prem environments these days, you have a mislinkage of data where you really don’t know if an asset is running or not. As we move into more ephemeral environments with the introduction of Kubernetes and containers, you have environments that are constantly going up and down every single day. So ensuring you have this metadata means you can properly allocate vulnerabilities to the respective teams, you can ensure the correct vulnerabilities are in fact being remediated by the correct people, and you’re not chasing down an asset that is no longer even active and wasting precious time during a P1 issue. This asset metadata can realistically come from anywhere: CMDBs, spreadsheets, and also the scanners where your teams have spent so much time putting together asset grouping rules, in services like CrowdStrike, Tenable, and Qualys. People have tried in the past, but the issue is you have all this dispersed metadata across all these different environments, and keeping those different fields up to date is a challenging task.

Adam: What I heard there is that asset metadata are various characteristics on different types of assets. They might be application-type assets or cloud resources or even devices. When I think of asset metadata, I think of properties on an asset: additional information that you can use to have more context about that asset.

Aaron: Yeah. Take Kubernetes, for example. You have a team that deploys the image, a team that maintains the instance itself, and a team that writes the code, right? It may all boil up to a single EC2 instance that’s vulnerable to a given vulnerability, and it may pop up in your VM solution that, hey, this EC2 instance is vulnerable. But unless you have the asset metadata that shows who wrote the code, who deployed the image, and who built the CI/CD pipeline, you won’t know who to actually send that vulnerability to. So at Nucleus, our goal is to consolidate this and automate based off of vulnerability type, vulnerability source, data source, et cetera, to ensure those vulnerabilities are getting to the respective people.
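The layered ownership Aaron describes can be sketched in a few lines of Python. This is an illustrative toy, not Nucleus’ actual API; the instance ID, team names, and field names are all hypothetical:

```python
# Sketch: route a finding to the right owner based on which layer of the
# stack the vulnerability lives in, using metadata attached to the asset.

ASSET_METADATA = {
    "i-0abc123": {                     # hypothetical EC2 instance ID
        "code_owner": "payments-dev",  # team that writes the code
        "image_owner": "platform-eng", # team that builds/deploys the image
        "pipeline_owner": "devops",    # team that owns the CI/CD pipeline
    }
}

LAYER_TO_FIELD = {
    "application": "code_owner",
    "image": "image_owner",
    "pipeline": "pipeline_owner",
}

def route_finding(asset_id: str, finding_layer: str) -> str:
    """Pick an owning team for a finding; fall back to a triage queue."""
    meta = ASSET_METADATA.get(asset_id, {})
    field = LAYER_TO_FIELD.get(finding_layer)
    return meta.get(field, "triage-queue") if field else "triage-queue"

print(route_finding("i-0abc123", "image"))    # platform-eng
print(route_finding("i-0abc123", "network"))  # triage-queue (no mapping)
```

Without the per-layer metadata, every one of these findings would land on whoever nominally owns the EC2 instance, which is exactly the mis-assignment problem being described.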

Adam: Would you please share some ideas on what vulnerability management problems leveraging asset metadata and automation really solves?

Aaron: I’m working with a few customers right now who have a number of on-prem environments and on-prem instances. These on-prem instances are just single VMs deployed on top of a hypervisor, but the team that manages the underlying hypervisor, the team that manages the VM itself, and the team that manages the network those VMs are actually talking through are all different, right? Utilizing our ServiceNow CMDB connector, we’ve been pulling relationship data about all the different applications and teams involved with an asset and bringing that into Nucleus, then leveraging findings processing rules inside of Nucleus. We’re able to key on the different types of vulnerabilities and then autonomously assign those vulnerabilities to the respective teams within ITSM, because we have this ITSM app that’s already built into the store.

So, we’re able to write custom business rules that dynamically assign the correct vulnerabilities to the correct teams, even when you only see one VM in ServiceNow based off the CMDB information. If you go up an additional level and look at the relationship table, all that relationship metadata is coming into Nucleus. That’s one quick way we’re doing that. Building on top of that, the assignment of those vulnerabilities is one thing, but there’s also setting the risk of the asset, right? You have all this metadata coming in: where it’s deployed, what it’s touching, the sensitivity of the data, whether it’s external or internal, which we’re getting from all the different firewalls, all the CMDB information, even spreadsheet information that people can push into Nucleus. We’re able to normalize that data inside of Nucleus and then autonomously map it to risk attributes on the asset. We have business criticality, data sensitivity, publicly facing, and whether that asset is in scope for compliance. So we’re autonomously mapping those metadata fields from all these different sources to risk attributes, and Nucleus is setting a risk score that’s highly customizable based off of your own business metrics.
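The mapping from normalized metadata to a risk score might look something like this. The weights here are invented for illustration; the actual scoring is, as Aaron says, customizable per business:

```python
# Sketch: roll normalized risk attributes up into an asset risk score.
# Weights are arbitrary example values, not Nucleus' real scoring model.

WEIGHTS = {
    "business_criticality": {"critical": 400, "high": 250, "medium": 100, "low": 25},
    "data_sensitivity": {"restricted": 300, "internal": 100, "public": 0},
    "publicly_facing": {True: 200, False: 0},
    "compliance_scope": {True: 100, False: 0},
}

def risk_score(attrs: dict) -> int:
    """Sum the weight of each recognized risk attribute on the asset."""
    return sum(WEIGHTS[k][v] for k, v in attrs.items() if k in WEIGHTS)

prod_web = {"business_criticality": "critical", "data_sensitivity": "restricted",
            "publicly_facing": True, "compliance_scope": True}
test_pi = {"business_criticality": "low", "data_sensitivity": "public",
           "publicly_facing": False, "compliance_scope": False}

print(risk_score(prod_web))  # 1000 -> production, public, sensitive data
print(risk_score(test_pi))   # 25   -> desk-side test bench
```

The point is that the same vulnerability can produce wildly different scores depending on what the metadata says about the asset it sits on, which is what drives the SLA example that follows.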

That risk score is then used to calculate SLA dates. So, for example, let’s say you have a Raspberry Pi sitting on your desk as a test bench, and it has the exact same image as an instance that’s in prod or staging. That Raspberry Pi is going to have the exact same vulnerabilities as that image deployed on an EC2 instance, right? But the difference is the criticality of the asset: for the Raspberry Pi it’s nothing, it’s a Pi sitting on your desk, versus the EC2 instance, which is publicly facing, in production, and highly available. So now you have these two vulnerabilities sitting inside of Nucleus: one of them is risk a thousand and one of them is risk a hundred. We’re setting the SLAs based off the risk value of what’s most critical to you, not just the severity of the vulnerability. That way we’re not cluttering up anyone’s feed and we’re not upsetting anyone. To me, that’s the most important part of Nucleus: ensuring we’re getting the information to the right people, we’re not cluttering people’s feeds, and we’re not giving them false positives. We’re ensuring they have the correct information they need to remediate those vulnerabilities in a timely manner.

Adam: At the risk of sounding cliché, we’re helping our customers surface the signal from the noise so they’re not distracted by a bunch of irrelevant information and vulnerabilities that really aren’t as critical as they might think.

So now let’s actually take a look at some examples in Nucleus and see, in a demo environment, how Nucleus customers are using asset metadata to create fine-grained automation.

Aaron: When we come into Nucleus, if we jump into the asset management window and look at the right-hand column here under source, you’ll see that for some assets there are three logos listed. The first thing Nucleus does on ingestion is correlate assets from different data sources together, right? You have assets that are seen by CrowdStrike, Tenable, the CMDB, and Qualys. Those are all the same asset. So rather than having four separate assets in Nucleus, we correlate them and create a single asset record. Then if we click on a single asset and scroll down, you have all the metadata from the various sources. Right now we just clicked on a CrowdStrike asset, but as you gradually add more data sources, you will have additional metadata listed here.
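The ingestion-time correlation Aaron is pointing at can be sketched roughly like this. It is a simplification (real correlation keys on more than hostname, e.g. IPs, MACs, cloud IDs), and the records are made up:

```python
# Sketch: merge asset records from multiple scanners into one record,
# keyed on a shared identifier (hostname here, for simplicity).

records = [
    {"source": "CrowdStrike", "hostname": "web-01", "os": "Ubuntu 22.04"},
    {"source": "Tenable",     "hostname": "web-01", "last_seen": "2024-05-01"},
    {"source": "Qualys",      "hostname": "web-01", "ip": "10.0.0.5"},
    {"source": "Tenable",     "hostname": "db-01",  "os": "RHEL 9"},
]

def correlate(records):
    merged = {}
    for rec in records:
        asset = merged.setdefault(rec["hostname"], {"sources": []})
        asset["sources"].append(rec["source"])
        for key, value in rec.items():
            if key not in ("source", "hostname"):
                asset[key] = value  # each source layers on more metadata
    return merged

assets = correlate(records)
print(len(assets))                  # 2 assets instead of 4 raw records
print(assets["web-01"]["sources"])  # ['CrowdStrike', 'Tenable', 'Qualys']
```

The payoff is the single asset record carrying the union of every source’s metadata, which is what the later automation rules key on.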

On top of that, leveraging our AWS connector, you can add live AWS information from all the running instances and containers into Nucleus, updating in real time. From instance uptime, security groups, and tags to ownership IDs and account information, all of that can be leveraged and dropped onto assets as well. Now, if we take a step back, we’ll see here that we have risk attributes. These risk attributes are completely customizable based off of the metadata you give us. Either you give us metadata or, within Nucleus, we can manually set these as well. Those risk attributes, tied to the active vulnerabilities, are what give you your risk score. If we jump into automation, we can talk a little bit about how this information is being set.

If we click on add rule, you have a breadth of different asset metadata fields you can key on, from the metadata of the asset, the OS, and the risk attributes to the hostname and IP information. All of this can be grouped in a single rule, so you don’t have to create rules per individual data source. You can say Qualys, CrowdStrike, Tenable, and Rapid7 all in one rule: if it hits on this criteria, I want to go into my actions menu, and I want to set business criticality to critical. On top of that, let’s say you already have this information in a spreadsheet, or in your CMDB, or in a tool of some sort. You can leverage dynamic fields to set this information on the fly. Every single time we ingest, we’re ensuring that an asset reported to us as critical is being kept as critical.

Adam: Correct me if I’m wrong, but if the information changes in the CMDB, for example, then that dynamic field will automatically update as well, correct?

Aaron: Correct. As assets are being downgraded or upgraded, those values are changing. So, correct: we’re making sure that critical asset stays critical, or if it’s being deprioritized to a medium or a low because it’s been taken out of prod, you can deprioritize it dynamically so you’re not being bogged down with critical vulnerabilities.

Adam: Does that apply to ownership as well? So, if ownership changes, or an ownership property changes in the CMDB or whatever the source of the asset metadata is, then Nucleus is automatically going to update that, correct?

Aaron: Exactly. All metadata fields, ownership fields, and risk attributes on an asset are dynamically updated on ingestion, every single time. Some of the rules we have in place right now to prioritize a vulnerability leverage Mandiant, but you can group Mandiant information together with the vulnerability information that’s ingested from the scanner. Again, all data is normalized here, so you can see that I’m not keying specifically on CrowdStrike vulnerabilities, Qualys vulnerabilities, or Tenable vulnerabilities; I’m just keying on the name of the vulnerability.

You can also group Mandiant information on top of that, as well as EPSS or CISA BOD data, and then leverage things like risk attributes, right? So I can key on business criticality down here and prioritize based off business criticality. Any of the metadata fields you see on an asset, you can key on in these prioritization rules.

Adam: That allows for very detailed rule creation, right? I mean, this is super flexible. It can suit pretty much any environment, any way the customer wants to design these.

Aaron: Oh yeah. I’ve had customers that go really, really granular with it, bringing their own scoring matrix into Nucleus. I’ve also had customers that leverage four or five Mandiant prioritization rules and take their 20 different rules down to five. And that’s just to prioritize the vulnerability, right? That’s not to actually assign the SLA and the speed of remediation. If we come down to actions, we can jump to the findings. You can create a rule that says, “Hey, if this is a Windows-style vulnerability, I want to assign it to the Windows team.” But what if there are 20 different Windows teams? What if there are 20 different image teams? All that asset metadata coming from your CMDB, spreadsheets, X, Y, and Z can be dynamically used to assign to the respective team without you actually keying on which team.

You simply write a query that says any vulnerability that is a Windows-style vulnerability, I want to assign dynamically to the Windows team. But this field, Windows team, is a different team for every single asset, okay? All this is doing is keying on a metadata field on the asset called Windows team, but the actual value of that field can be the Windows team in Arizona, the Windows team in New York, or the Windows team in Florida, right? So you’re only writing a rule once, but dynamically assigning to the hundreds of teams we’re seeing on assets. This is kind of the aha moment for a lot of customers, because in ITSM, one of the pain points is you have to create that mapping of what goes where. We have that information, and then we can set the criteria here. We support regex-style queries, wildcard queries, and contains queries, where you can say if it contains PHP, Java, Golang, or Python, it goes to my images team. Images can be 40 different teams, and we can just write the asset’s images team field. Now you’ve assigned those findings to said 40 teams and you’re only managing one rule.
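The write-once, resolve-per-asset pattern here is worth making concrete. A minimal sketch, assuming a made-up rule shape and team names (again, not Nucleus’ actual rule engine):

```python
# Sketch: one rule whose match criterion is a contains/regex test on the
# finding name, and whose assignee is a dynamic metadata field resolved
# against each asset at match time.
import re

RULE = {
    "criteria": re.compile(r"php|java|golang|python", re.IGNORECASE),
    "assign_to_field": "images_team",  # dynamic field, varies per asset
}

def assign(finding_name, asset_meta):
    """Return the resolved team if the rule matches, else None."""
    if RULE["criteria"].search(finding_name):
        return asset_meta.get(RULE["assign_to_field"], "unassigned")
    return None

# The same single rule routes to different concrete teams:
print(assign("Outdated Java runtime", {"images_team": "images-nyc"}))  # images-nyc
print(assign("Outdated Java runtime", {"images_team": "images-az"}))   # images-az
print(assign("OpenSSL advisory", {"images_team": "images-nyc"}))       # None
```

The rule never names a team; the team comes from the asset. That is why one rule can stand in for the 40-way mapping you would otherwise maintain by hand in ITSM.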

Let’s talk about how to actually set SLAs. Down here you’ll see I have two rules: one setting SLAs for assets and vulnerabilities with risk scores greater than 750, and another that assigns SLAs for risk scores from 500 to 749. What this is doing is not looking at just the severity of a vulnerability, and not just the severity of an asset; it’s looking at both combined. So it’s saying this critical on this asset gets an SLA of three days.

Versus this critical on a Raspberry Pi with a risk score of a hundred, which gets an SLA of 90 days. So now we’re hitting on both asset criticality and vulnerability criticality tied together. Rather than managing 40 different rules to specify all the different keys, you just have five or six rules that cover the range of what your organization’s internal SLAs should look like.
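The handful of risk-banded SLA rules Aaron describes reduce to a small lookup. The band thresholds and day counts below are the examples from the demo, not prescribed values:

```python
# Sketch: derive an SLA from the combined asset+vulnerability risk score
# rather than from vulnerability severity alone.

SLA_BANDS = [  # (min_score, max_score, sla_days) -- example thresholds
    (750, 1000, 3),
    (500, 749, 14),
    (0, 499, 90),
]

def sla_days(score: int) -> int:
    for low, high, days in SLA_BANDS:
        if low <= score <= high:
            return days
    raise ValueError(f"risk score {score} outside expected range")

print(sla_days(1000))  # 3  -> critical on the production EC2 instance
print(sla_days(100))   # 90 -> same critical on the desk-side Raspberry Pi
```

Because the score already folds in asset criticality, the same vulnerability naturally lands in different SLA bands on different assets, with only a few rules to maintain.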

Adam: The topic of ownership comes up a lot in vulnerability management. A big win for asset metadata is being able to define and consolidate that ownership in one central location, because that’s where vulnerability management really gets complicated and tricky: when you don’t know that ownership.

Aaron: Vulnerability management is not an easy task, right? When we think of prioritizing and assigning vulnerabilities as we progress from monolithic environments to microservice architectures, assets are becoming more ephemeral and more horizontally dispersed. But the main thing is, you actually have more information about assets than you really think you do, if you sit down and think about who owns the security group, who owns this IP range, X, Y, and Z. And you don’t need to know everything right out of the gate. Within Nucleus, you can segment certain rules to certain asset groups, and you can try those rules out with a handful of assets and confirm, “Hey, this is exactly how I want this to work.” Then, with the click of a button, you can scale it out to more and more assets. So: easier remediation, easy to get up and running.

Adam: Thank you so much for joining us today on Nucleus Shortcuts, and we’ll see you soon for the next episode!