Vulnerability Management Process Automation

Vulnerability management process automation is one of the last steps of a vulnerability management program. All of the necessary information and processes must be in place in order to make automatic, informed decisions.

Automation is critical to how organizations grow their vulnerability management program at scale, enabling them to react to changes more intelligently. It requires that all of that information live in a single place and be accessible for making those decisions.

In this episode of Shortcuts, Adam Dudley and Aaron Unterberger discuss leveraging finding and asset metadata to automate vulnerability management processes.

Episode Raw Transcript

Adam Dudley:

Let’s get right into it. Would you please kick us off with a brief take? From a vulnerability management perspective and what our customers are trying to achieve, why is automation such a big part of what we do here at Nucleus?

Aaron Unterberger:

Really the challenge of vulnerability management is becoming more and more complicated, more and more complex. Technology is expanding. I would say risk is increasing, mostly because we’re starting to move more and more to computer-based process and systems, and so the impact is much greater.

Then also you have growing organizations, growing teams. You’ve got technologies that by nature are more distributed, moving towards microservices. Shift left, as well, is expanding what’s in scope. Everything is growing.

The way that organizations are able to adapt to the pace of change is essentially through automation. Fundamentally, automation is one of the last steps of your vulnerability management program: all of the necessary information and processes have to be in place so that you have what you need to make automatic, informed decisions.

Automation is critical to how organizations can grow at scale. Not having to throw bodies at the problem, but being able to react to changes more intelligently. It really requires that all of that information is in a single place and that it's accessible to make those decisions as well.

Adam Dudley:

Got it. You mentioned something that I've talked about on a recent episode, which is throwing bodies at the problem. Given the current state of the cybersecurity talent pool, it's becoming impossible to throw more bodies at the problem, because there are fewer skilled people available to do cybersecurity.

I read an article recently where I think 60% to 80% of CIOs put automation as a very high priority in their organization's security program, or in IT in general.

As you said, everything's expanding, more tools, more data, everything's getting more complex, so it becomes necessary to automate a lot of these things. The environments are so complex that you just don't have enough people to handle them all. Right?

Aaron Unterberger:

Yeah. That’s exactly right. The talent pool is small. The problem is growing, and so you have these conflicting forces, where how do you do more with less? Automation is a critical piece in that.

Adam Dudley:

Getting to the meat of our topic: what types of vulnerability management processes can users automate with Nucleus?

Aaron Unterberger:

I like to think about the vulnerability management life cycle. There are the scanning and discovery phases. Oftentimes these are scheduled within the scanners themselves, and oftentimes there are very deliberate scan windows.

You don't want an unplanned scan to bring down a production system, or to trigger an IDS, or cause any type of disruption. There's thoughtful coordination of the scans themselves. Then, in coordination with that, bringing the results into a unified plane: ingesting the scans after they've been run.

We do automatic ingestion of scans, and so what we see our customers doing is coordinating, “When are we getting new scans? Then when do we want to pull in those new scans?” You don’t want to scan on a Tuesday, but Monday night pull in scans, because then you’ll have seven day-old data. It’s stale data.

You want to coordinate that to keep in lockstep with how you’re automating and scheduling your scans themselves. There’s also automated workflows that folks will do around asset management.

This is a really big next step of the vulnerability management life cycle, where as you’re ingesting data, the workflows, prioritization, remediation, reporting, all of that builds on asset management, and so automating how you categorize assets properly.

This often involves, I’m looking at a number of different systems. I’ve got a CMDB, but it only covers part of my assets. I’ve got some really great information in AWS, so I reference that as well. Maybe one of my scanners has some good tags.

Then we’ve got some spreadsheets over here that other teams use for maintaining, code teams. Maybe we’ve got some code owner files in other repos. Data is scattered in a lot of different places. Being able to manage assets is really about getting that information into one place, and then being able to pick and choose where the source of truth is for the different actions that you’re doing.

How critical are your assets? Where do those assets sit within the organization? To whom do they belong, or what teams support them? And then doing all of that automatically, at scale.
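The source-of-truth merge Aaron describes can be sketched in a few lines. This is an illustrative example only, not Nucleus's actual API; the source names, fields, and precedence table here are all hypothetical:

```python
# Hypothetical sketch: merging asset metadata scattered across a CMDB,
# cloud tags, and scanner tags, with a per-field "source of truth" order.
CMDB = {"web-01": {"owner": "platform-team", "criticality": "high"}}
AWS_TAGS = {"web-01": {"environment": "production"},
            "db-01": {"environment": "production"}}
SCANNER_TAGS = {"db-01": {"owner": "data-team"}}

# Field-level precedence: for each field, earlier sources win.
PRECEDENCE = {
    "owner": [CMDB, SCANNER_TAGS],
    "criticality": [CMDB],
    "environment": [AWS_TAGS],
}

def unify(asset_id: str) -> dict:
    """Build one asset record, picking each field from its trusted source."""
    record = {}
    for field, sources in PRECEDENCE.items():
        for source in sources:
            value = source.get(asset_id, {}).get(field)
            if value is not None:
                record[field] = value
                break
    return record
```

The point of the precedence table is that you can trust the CMDB for ownership but fall back to scanner tags where the CMDB has gaps, which mirrors the "pick and choose where the source of truth is" idea above.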

Adam Dudley:

Nucleus is both syncing the asset data from the different tools and unifying it in one place, which is the Nucleus console. It's also providing the ability to do things with those assets as they come into Nucleus, whether that's organizing them in a certain way or, as you said, letting Nucleus know how critical these assets are to the business.

Aaron Unterberger:

Classifying. Yeah.

Adam Dudley:

That piece is a big step in automating a vulnerability management program.

Aaron Unterberger:

Yeah. There are other parts of the workflow as well. You're also processing findings. I could run a scan on my home network and get hundreds of findings. Organizations are overwhelmed with the sheer volume of findings, so they're applying automated workflows and actions to those findings.

We’ve got 10,000 criticals and highs, but when we look at threat intelligence we’ve only got 40 widely exploited, or we only have a few zero days, or we only have a few CISA BOD 22-01 findings. Focus on the greatest risk first and work our way down.
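That "greatest risk first" ordering can be sketched as a simple sort key. This is a hedged illustration, not Nucleus's prioritization engine; the fields and weights are assumptions:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve: str
    severity: str     # "critical", "high", "medium", ...
    exploited: bool   # widely exploited in the wild, per threat intel
    on_kev: bool      # listed in CISA's KEV catalog (BOD 22-01)

def priority(f: Finding) -> int:
    """Lower number = fix first. Threat intelligence outranks raw severity."""
    if f.on_kev:
        return 0
    if f.exploited:
        return 1
    return {"critical": 2, "high": 3}.get(f.severity, 4)

findings = [
    Finding("CVE-2024-0001", "high", exploited=False, on_kev=True),
    Finding("CVE-2024-0002", "critical", exploited=False, on_kev=False),
    Finding("CVE-2024-0003", "medium", exploited=True, on_kev=False),
]
worklist = sorted(findings, key=priority)
```

Note how the KEV-listed "high" and the exploited "medium" jump ahead of the unexploited "critical", which is exactly the 10,000-down-to-40 effect described above.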

They’re performing operations automatically on findings and how they prioritize, set SLAs, how they’re assigning ownership and tracking the workflow and life cycle. Similarly, they’re automating ticketing and remediations, routing to the right teams, orchestrating life cycle over time.

You don’t want to have to go in and update, and say, “Hey, well we actually patched this finding on this asset. Can we update that ticket?” You want that to happen automatically, so you can focus on the fix rather than just getting data to move from one system to another.

Reporting is another area where we’ll see automations. Being able to generate and schedule reports for audiences. Because oftentimes reviews, actions and workflows, they’re periodic and repeating. Every week I want to know at the beginning of the week what’s out of SLA compliance.

Every month I want to look at how my risk numbers are doing. Every quarter, maybe I report to my C-suite on how our trends are, trend analysis and comparison of different teams and team performance.

Automating all of those end-to-end is really where organizations find that they can scale at the speed that they need, and at the speed of the enterprise.

Adam Dudley:

Got it. At the foundation of this automation, and why the way we do automation is so valuable, a piece of it is that in Nucleus you can leverage a lot of the vulnerability and asset metadata that we're pulling in from the different tools. You can use this data to create very fine-grained rules.

Those fine-grained rules are really necessary to orchestrating automation at the enterprise scale. Is that right?

Aaron Unterberger:

Yeah. Actually it’s a couple of things there. It’s having access to all the metadata at all times. It’s also not this ephemeral piece of data that’s coming through during ingestion, at the time that the connector executes.

It’s this is part of the overall data model, and it’s accessible whether you’re creating a ticket automation that’s running miles down the road after we’ve ingested, we’ve set SLAs, we’ve assigned asset ownership, finding ownership. We’ve done all of that work.

We still want to be able to say, "Well, hey, if this belongs to this application team, and we're pulling that from Snyk, then I want to be able to use that in Nucleus to route to the proper Jira project."

Being able to access metadata as part of the data model, not just an ephemeral piece of data that comes and goes with the connector. Then you also have the notion of fine-grain logic. Being able to string together conditional operators, if this, then that, and/or, and chaining together actions.

Also, referencing actions based on other variables. Basically a level of indirection, where there's only one rule, and it points to a value that we update and maintain in one place. That value guides the rule as it executes.

Really flexible rule building to where you have robust logical operators and logical conditions that you can construct, and then also flexibility in where data can come from, whether it’s on the action side or on the criteria side.
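The "if this, then that" chaining and the level of indirection Aaron mentions can be sketched as below. A minimal, hypothetical example, not Nucleus's rule engine; the predicate combinators, field names, and the team-to-Jira mapping are all invented for illustration:

```python
# Hypothetical rule-building sketch: predicates over a finding record,
# combinators to chain them, and a single maintained lookup table that
# the rule references indirectly.
def AND(*preds): return lambda f: all(p(f) for p in preds)
def OR(*preds):  return lambda f: any(p(f) for p in preds)
def field_is(name, value): return lambda f: f.get(name) == value

# Level of indirection: update this one mapping, and every rule that
# references it follows, without editing the rules themselves.
TEAM_TO_JIRA = {"app-team": "APP", "data-team": "DATA"}

def route_ticket(finding: dict):
    """Route critical, actively-risky findings to the owning team's project."""
    rule = AND(field_is("severity", "critical"),
               OR(field_is("exploited", True), field_is("on_kev", True)))
    if rule(finding):
        return TEAM_TO_JIRA.get(finding.get("owner"))
    return None
```

Because the rule only points at `TEAM_TO_JIRA`, reassigning a team's Jira project is a one-line change to the mapping rather than a rewrite of every routing rule.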

Adam Dudley:

All clear. That's awesome. Well, thanks so much, Aaron. There's really a lot of flexibility in how users can create rules in Nucleus to suit their environment, to suit their process and how they do things, and also to suit the different tools they might be using across the enterprise.

Aaron Unterberger:

Yeah.

Adam Dudley:

Before we wrap this episode today, is there a specific takeaway you'd like people to leave with from this conversation, about how Nucleus does automation, specifically when it comes to the metadata we expose?

Aaron Unterberger:

If you’re finding that you’re struggling with managing your vulnerabilities at scale, if you’re finding that you’re overwhelmed with vulnerabilities and the sheer volume, or what’s being presented to you as risk, know that these are challenges that we help our customers with all the time.

We're doing so by getting all that information in one place, including enriching it with things like threat intelligence, and then giving you the ability to easily construct and maintain automated workflows. That way you can focus on fixing, versus just stitching together a Frankenstein approach to vulnerability management.

If you see those problems, take a look, we’d love to show you what we can do. Definitely feel like we can help with solving some of those challenges.

Adam Dudley:

I know you get to see some amazing Frankenstein vulnerability funnels that enterprises have so ingeniously built, and now are suffering the pain of having to maintain.

Aaron Unterberger:

Yeah. Where the engineer in me is like, “Wow. That’s so cool. You’ve built all of this complex code. That looks like so much fun.” Then the manager in me is like, “Oh my God. This is what we’re managing. What about the actual vulnerabilities?”

Adam Dudley:

How much time is spent managing that Frankenstein funnel versus actually doing the work of vulnerability management, right?

Aaron Unterberger:

Mm-hmm.

Adam Dudley:

Oh man. Well, that's a wrap. Aaron, thank you so much for joining us today. We'll share any relevant links in the show notes. We'll see you soon, next time on Nucleus Shortcuts. Thanks, Aaron.

Aaron Unterberger:

Thanks for the time.