Nucleus Cofounder Scott Kuffer demos Nucleus and talks enterprise vulnerability management on Risky Biz Product Demos with Patrick Gray.

Risky Biz Product Podcast Demo Transcript

Hey everyone, and welcome to this Risky Business product demo. These are the video demos we do which give our sponsors an opportunity to show you the technology that they make, because, yeah, sometimes seeing is more important than just talking about it on a podcast. And joining me today is Scott Kuffer, who is a co-founder and the COO of Nucleus Security. Nucleus essentially make a console, a web application, that ingests vulnerability scan output from a variety of different technologies and allows you to get all of that information in the one place. That could be your Qualys and Nessus style scanners, web application scanners, whatever. And the idea is you bring it all into one place, slice it and dice it, and then give people an opportunity to access all of that data in one place. Scott, how did I actually do explaining your technology there?

You did pretty good, Patrick, but at the same time you’ve been talking about it for a couple of years on the podcast now, so I expect you’re an old hand at this.

There you go. So, I mean, that is the basic description though, isn’t it? You ingest all of this information from scanners across the enterprise, and these are scanners that hit different parts of the stack as well, so web app scanners, other stuff that looks for typical CVEs, and then you can say, “Well, this division of the company has these vulnerabilities,” and you can assign owners essentially to bugs, right?

Right. If you boil it down, it really was meant to solve all of that weird in-between state, between a scan happening and the patches actually running, or even the code being pushed to fix different vulnerabilities. In our estimation, there are close to a hundred different weird sub-processes that need to happen in order to make vulnerabilities disappear. And nobody really manages any of those, and it’s across different teams. And so nobody really knows what to do with vulnerabilities once they’ve actually been discovered. And even just asking basic questions of your data is very difficult, especially when you start looking at complex environments and thinking at scale. It just becomes a real challenge to actually do vulnerability management.

We all know what it means, right? Hey, we have to scan for stuff and assess our vulnerabilities and assess our technology stack, not just CVEs, but configuration vulnerabilities and cloud vulnerabilities and all those fun things that are new, but we have to actually get it fixed. And how we do that is a really difficult challenge, and that was really the root of what we’re trying to solve. Being able to do that at scale through automation is, I’ll say, our bread and butter, right? And being able to do that for complex environments.

Well, and I think really what you were trying to replace is the spreadsheet hell that people tend to wrangle to try to do this stuff in organizations. So yeah, let’s have a look at it. And as you mentioned, the more complex the environment, the more useful a technology like this is. When you’ve got maybe five to 10, 15 different types of scanning technology in your enterprise, it’s going to make sense to try to aggregate all of that into one place. So yeah, let’s take a look at it. I’m guessing you’re going to show us the console that you get once you’ve plugged all of the scanners into it.

Yes, that’s correct. Obviously, I can spend plenty of time talking about how we get data in and out of the platform, and I’m happy to do that really quickly, but I mean, that’s table stakes, right? If we can’t do that, there’s no point even doing the rest of it. And being able to do that at scale is also a challenge, right? Because you’re looking at limitations, not just on the Nucleus side. When Nessus was first created, it was not designed to export 50 million vulnerabilities a day into other systems for management, right? And so there’s a lot of work that goes into just being able to extract data at scale across all of your different asset management, vulnerability scanners, network, SaaS, et cetera, right? And so the first thing, before anything else, there’s an administrative layer of just getting everything set up, right?

It’s a pro and a con for a system like this that it’s very flexible, where you can connect whatever you want into the system, right? You can go in and hook up ServiceNow as much as you want, from, at the very basic level, just a connector to pull in CMDB information, to a ServiceNow app that can interact with the super custom, crazy fun ServiceNow environments that everybody seems to have, to just pointing us at CrowdStrike and your EDR agent stuff, and being able to pull that in and correlate it with everything.

Because CrowdStrike has that Spotlight thing, right? Which now generates a lot of… It’s interesting. I mean, essentially, what they do is they’ve got access to all the files on the host so they can identify vulnerabilities, and yeah, save you having to run a separate agent to do vuln scanning on endpoints.

Absolutely. And actually that’s a pattern that we’re starting to see, and this is something we laugh about a lot, right? Because when I first started, everything was host-based, right? It was all agent-based vulnerability scanning. And then everybody was like, “Oh no, agents are too much of a pain. We want to go to full-on network scanning.” And then Qualys and Nessus really took off. And now here we are a decade later and it’s like, “Hey, we actually want agent-based scanning again.”

I mean, if you can build it into the agent, then why not, right? If the agent’s already there. So that seems to be the… I think the way that it works is they just take a file hash of everything on the host and then send it somewhere central and do the crunching there. But I’m not 100% sure on that. Anyway, it’s beside the point. Let’s talk about Nucleus.

That’s right. So I could wax poetic all day about this stuff. Connecting the tools is the one piece, right? Being able to extract data at scale, just pointing us in the direction of your scanning tools and then getting that ingesting at scale, is the big thing. The next step is we take all of that information and we start building an asset inventory, right? And this is actually one of those dirty little secrets of vulnerability management: nobody can really do it well because nobody can do asset management well. And asset management is almost as important to the process as the actual vulnerability scan data itself, if not more so, because if you don’t have the context and you’re not de-duplicating across assets, then what are you doing, right?

You’re going through PDF reports. And the joke that we always make is that no matter how much money you spend on vulnerability management, you’re probably still using Excel somewhere. But the first thing that we do is we build all of your assets, we build essentially an asset inventory, and this is across not just the vulnerability data but obviously the asset inventory sources as well; we’re hooked into things like AWS. And we’re also de-duplicating across all of those, right? So you could see, for example, this container image was seen by Prisma Cloud and Snyk, right? And we’re doing that across all the different types of tools. And then when you dig into the tool itself, not into the tool, sorry, into the asset, I’m a little rusty, we go in and we actually look at all of the data associated with a particular asset.

And this is where the asset management takes itself to another level, which is where we say, “Hey, we have this asset record and it is being scanned and being managed in multiple different tools.” And so we’re actually going to take all of the metadata about that asset from all these different places. So in this case, this particular asset, which is just a VM, is in AWS, it’s in CrowdStrike, and we want to be able to manage and report and automate at scale based on any of these different fields, right? And so just very basic things like, “Hey, is this the same asset as in a Nessus scan?”, being able to tie that into the same thing, and then being able to build analytics on top of that, so you have confidence that the data is accurate, right?
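To make that cross-tool merge idea concrete, here is a minimal illustrative sketch, not Nucleus’s actual matching logic: records from different scanners are collapsed into one inventory entry whenever they share an identity field. All field names here (“hostname”, “ip”, “cloud_instance_id”, “tool”) are hypothetical.

```python
# Illustrative sketch only: merge asset records from multiple scanners into
# one inventory entry by matching shared identity fields.

def identity_keys(record):
    """Yield every identity key a record can be matched on."""
    for field in ("hostname", "ip", "cloud_instance_id"):
        value = record.get(field)
        if value:
            yield (field, str(value).lower())

def merge_assets(records):
    """Group records that share any identity key, keeping per-tool metadata."""
    merged, index = [], {}
    for rec in records:
        target = next((index[k] for k in identity_keys(rec) if k in index), None)
        if target is None:
            target = {"sources": {}, "keys": set()}
            merged.append(target)
        target["sources"][rec["tool"]] = rec   # retain tool-specific metadata
        target["keys"].update(identity_keys(rec))
        for k in target["keys"]:
            index[k] = target
    # a production version would also union groups bridged by a later record
    return merged

inventory = merge_assets([
    {"tool": "nessus", "hostname": "web01", "ip": "10.0.0.5"},
    {"tool": "crowdstrike", "hostname": "WEB01", "cloud_instance_id": "i-0abc"},
    {"tool": "aws", "cloud_instance_id": "i-0abc", "ip": "10.0.0.5"},
])
print(len(inventory))   # 1: all three records collapse into one asset
```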

So that’s step one. And we do that across all different types of assets. We classify assets based on the data coming in, and then from there, we allow you to start grouping things together. And this is where the real power of the platform starts to come in, because we have this concept that… And this is something that blew our minds: this was probably the most widely deployed feature, the fastest-adopted of anything that we ever did, and it’s just a nested hierarchy for representing your organization. I think I actually talked about this on one of the podcasts one time; I think I called it org-based vulnerability management, or org chart vulnerability management, or something like that. So the concept there really is just: we have all these assets, and these assets actually make up larger business objects.

Those business objects could be applications, and then those applications could be owned by teams. Those teams could be within divisions, divisions could be under managing directors, managing directors within departments. And I want to be able to represent all of that in one place, and I want all of that mapped automatically using the automation framework that’s built into Nucleus. And so the example here is we’re looking at this accounting app, right? This accounting app is made up of a bunch of different things. I can see all of the details around it, I can track SLAs and scan coverage against that particular application, compare that application against different applications. People love gamification and being able to track at these summary levels, as well as the detail-oriented levels. But then at the same time, if my name is Scott Smith and I’m a Managing Director, or maybe I’m Gabe Johnson.
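A minimal sketch of that nested hierarchy, assuming hypothetical names and numbers; the point is just that findings roll up the tree, so a managing director sees the totals for their own subtree only:

```python
# Illustrative sketch only: a nested org hierarchy with vulnerability counts
# rolled up per node.
from dataclasses import dataclass, field

@dataclass
class OrgNode:
    name: str
    open_vulns: int = 0                      # findings attached directly here
    children: list = field(default_factory=list)

    def rollup(self) -> int:
        """Total open findings for this node and everything beneath it."""
        return self.open_vulns + sum(c.rollup() for c in self.children)

accounting = OrgNode("Accounting App", open_vulns=42)
payroll = OrgNode("Payroll App", open_vulns=7)
scott = OrgNode("Scott Smith (Managing Director)",
                children=[OrgNode("App Team A", children=[accounting]),
                          OrgNode("App Team B", children=[payroll])])

# What this managing director sees at login: only their slice of the org.
print(scott.name, "->", scott.rollup(), "open findings")   # 49
```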

When I log into Nucleus, or into the platform, I want to be able to just see the stuff that I own, right? So everything within Scott Smith I can see when I log into Nucleus, and that applies to the API, it applies to… We integrate with things like AWS S3 to push all of the data to wherever you want. So we have a lot of customers that pump data into Snowflake, and they can build dashboards based on any cross-section of their organization that they want. And all of the vulnerability data is associated with it, as well as that-

They’re like a hair away from being able to crap out a PPT presentation shaming the divisional heads who are doing a bad job. This is what we like.

Yes.

Hey look, user shaming, I’m not into user shaming. Manager shaming, whole other thing, that’s fine.

For sure. Yeah. And it’s actually funny, because we have some customers that are just a big enterprise global information security team, and they just want to be able to know, how is this business unit doing, right? This team is using Qualys, this other team is using Nessus. They don’t know anything about anybody, and so they just want to be able to have-

They want to get some insight and see where they’ve got problems, right? I understand why, if it’s a large company and it’s a security-management-driven purchasing decision, I understand why that’s a popular feature to get going with.

It is. And it’s funny, because that’s a byproduct, right? That wasn’t even really the set of problems that we were trying to solve. Initially, we were trying to solve the vulnerability analyst’s problems. I think the first time I came on the podcast, I talked about how this was a working hands tool, right? Not the prettiest situation we’ve got going on here, but very important, right? A lot like a plumber, jeans pulled down and everything, right? It’s the same way. But what normally happens, for a normal user, when they first log into the platform, this is where they’re going to get dropped. Now, I should also mention we’re a completely multi-tenant environment, and so we have a global level above even what we were looking at.

And so when I start talking about big enterprises or MSPs wanting to manage multiple different data pools, especially doing that across entire global organizations… Imagine you’re a giant manufacturing company, I’m just going to pick a random one, let’s say Toyota, and you’ve got Toyota North America, you’ve got Toyota South Africa, Toyota Europe. All of those are actually completely different businesses, but you need to manage that risk holistically and present that to the board, and you need to be able to do that across different data pools and different jurisdictions and data sovereignty regimes. So when I talk about big, complex problems, that’s the kind of stuff that I’m talking about. But for most normal users that’s abstracted away, and this is what they’re going to look at: they go into a project dashboard, and great, I have a whole bunch of stuff boiled down for me, such as all of the scan data that we’ve ever uploaded, the SLAs, because you’ve got to have those SLAs, those specific top high-risk vulnerabilities that I care about.

And then being able to have all of the tools available to me to do what it is that I need. So the next most common set of use cases is where we start looking at the vulnerabilities themselves, right? The active vulnerabilities page is really the next central place where analysts are going to work out of, and what we’re doing here is actually pretty interesting. We take all of the vulnerabilities across all of your different scanning tools, so we had those, I said 50 different, sorry, that was an exaggeration, more like the 10 different scanning tools that were showing on the project dashboard. And we have that all normalized into something we call unique vulnerabilities, de-duplicated down basically by unique finding, right? So if I have 50,000 cross-site scripting vulnerabilities, I just want one cross-site scripting vulnerability that has 50,000 instances, right? That’s the idea, because you don’t want a table with millions and millions of things; you want to be able to start grouping and categorizing.
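The grouping being described boils down to normalizing findings into a key and collecting instances under it. A minimal sketch, with a hypothetical normalization key; real cross-scanner normalization is considerably harder than lowercasing a title:

```python
# Illustrative sketch only: collapse raw findings into "unique
# vulnerabilities" with instance lists, so 50,000 XSS rows become one row
# with 50,000 instances.
from collections import defaultdict

raw_findings = [
    {"name": "Cross-Site Scripting", "asset": "web01", "scanner": "burp"},
    {"name": "cross-site scripting", "asset": "web02", "scanner": "zap"},
    {"name": "OpenSSL Out-of-date", "asset": "web01", "scanner": "nessus"},
]

unique = defaultdict(list)
for f in raw_findings:
    key = f["name"].strip().lower()   # normalize the title across scanners
    unique[key].append(f)             # each raw row becomes an instance

for name, instances in unique.items():
    print(f"{name}: {len(instances)} instance(s)")
# cross-site scripting: 2 instance(s)
# openssl out-of-date: 1 instance(s)
```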

Your finger is going to get sore from doing the scrolling on the mouse wheel, right?

Yeah. It is absolutely true. We actually-

Thinking of your users’ fingers? How considerate.

Yes. We used to have an infinite scroll on this table, actually, and then we decided to go with paging, even though we hated paging, because it was just too much scrolling and the little scroll bar got way too small. But what we’re doing here is we’re looking across not just CVEs but any type of finding, right? So this could be configuration vulnerabilities, this could be cloud CSPM stuff, we’ve got compliance-type findings as well. So you can manage all different types of findings in one spot. And then the next step is we’re actually enriching all of this data with Mandiant threat intel. I had mentioned this at one point, and I know we used to disagree a little bit on the validity of having threat intel built into it. But I think-

No, but I mean, that’s actionable threat intel, right? Because, for those of you watching this who don’t know, Mandiant actually do offer a threat intel feed that essentially tells you which CVEs are actively being exploited and by whom, right? And when you’ve got all of this data in one place, that’s certainly going to help you prioritize stuff. Especially if it is a global corporation, it might allow, yeah, a CISO might have a look at this info and say, “Well, clearly we’ve got to fix that one urgently.” So, no, I certainly think that’s… I’ll allow it, let’s put it that way.

I will say, I agree on the problem with abstracting it away, right? A lot of vendors say, “Hey, we’re going to create a risk score out of all of those attributes,” but we worked out a really great partnership with Mandiant where they basically said you can take Mandiant and just embed it directly into the platform. And so we have all of those fields available for analysts, and you can start using these fields to make decisions. And then when you start tying that in with our automation, you can make decisions basically over and over again, at scale, where you could say, “Hey, I actually want to prioritize something differently based on the different fields available.” And so, for example, I’ll click into, say, Apache Log4j here. I’m going to pick on that one just because it looks the coolest with that intel stuff.

But this is meant to be the workbench for the analyst to be able to track and triage and do all of the tracking process that you have to do after a vulnerability is discovered, right? What’s the state of the vulnerability? Where does it exist? What are the specific details that caused it to trigger in the scanning tool? What do we want to do with that instance of that vulnerability? Who owns it? Do we want to set due dates on it? All of that kind of color, all the manual processes that you would normally have to do, that a lot of people are using spreadsheets for, or some random custom database, or… We’ve seen some really crazy stuff, where they’ve built middleware into Archer and then pulled that out into a SQL database.

It gets pretty crazy when you start talking to some of these bigger customers. But when you start looking at the threat intel piece, this is where it gets pretty exciting on my side, because we have all this data available to you. Obviously, you’ve got the NVD information, because a lot of people like to use CVSS version two or version three or whatever it is that they like to use. But we also pull in the Mandiant data. We integrate with this feed called EPSS, which is maintained by FIRST; it’s an open source prioritization score. And then obviously… they’ve rebranded it now to the CISA Known Exploited Vulnerabilities list, but the CISA BOD was the original directive that it came out with, which is, “Hey, is this vulnerability part of that CISA list? And is there a due date?”

And especially if you’re a government organization or critical infrastructure, you like to align yourself with that CISA list. So you’ve got everything that you care about, and then all of the fields that you could possibly ever want to be able to triage off of. So if I’m an analyst, or if I want to start building automation into my vulnerability prioritization pipeline, it’s like, “Hey, maybe I actually am tracking certain APTs that I care about, and I just want to raise the risk of all vulnerabilities that are being attacked or being used in campaigns associated with certain APT crews.” Just a basic example, but you could use anything, right? All of the vulnerable products that you have, any associated malware, exploit consequences; the Mandiant analysis is all here if you want to read it.
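For context, both of the public feeds just mentioned are easy to query yourself. A minimal sketch of enriching a single CVE with its EPSS score and CISA KEV membership; the endpoints and response shapes below are as publicly documented at the time of writing, so verify them before relying on this, and note this is not how Nucleus ingests these feeds:

```python
# Illustrative sketch only: enrich a CVE with the public EPSS score (FIRST)
# and CISA KEV catalog membership.
import requests

EPSS_URL = "https://api.first.org/data/v1/epss"
KEV_URL = ("https://www.cisa.gov/sites/default/files/feeds/"
           "known_exploited_vulnerabilities.json")

def enrich(cve_id: str) -> dict:
    epss = requests.get(EPSS_URL, params={"cve": cve_id}, timeout=30).json()
    score = epss["data"][0]["epss"] if epss.get("data") else None

    kev = requests.get(KEV_URL, timeout=30).json()
    entry = next((v for v in kev.get("vulnerabilities", [])
                  if v.get("cveID") == cve_id), None)

    return {"cve": cve_id,
            "epss": score,                        # probability of exploitation
            "in_kev": entry is not None,          # on CISA's KEV list?
            "kev_due_date": entry.get("dueDate") if entry else None}

print(enrich("CVE-2021-44228"))   # Log4Shell: high EPSS, on the KEV list
```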

And obviously we’re very fortunate to have this partnership with Mandiant, but you can also bring your own threat intel feed as well. So, if Mandiant isn’t good enough, because you have done your own analysis, or there are limitations as an organization that you want to work around with different threat feeds, you can bring your own. For example, you can bring Recorded Future, and we show all of the risk rules of theirs that this vulnerability triggers. And it’s actually really interesting to see how different feeds rate different vulnerabilities, so we’re actually going to be publishing some research on that later.

That’ll be fun, right? Because I imagine there’s just some cases where it looks like they’re completely at odds and it’s arbitrary, right?

It is fun. And in Nucleus, all you have to do is basically sort by the Mandiant risk column and the Recorded Future column, and you could be like, “Hey, what’s top risk for Mandiant and what’s lowest risk for Recorded Future?” and just see what you get back. But yes, it’ll be interesting. We’re undergoing that research project now, and there are some interesting findings and-

Talk about them on the podcast.

Sweet. So that’s the vulnerability section, right? The vulnerability section is the source of truth for all of the vulnerabilities currently in your environment. Obviously, if you take actions on those vulnerabilities, they’re going to get remediated, so it’s just being able to track: what have you fixed over time, over a specific period of time, filtered by specific asset groups? And then again, building queries, building searches based on what it is that you care about, then saving those searches so that you don’t have to keep redoing them, and then being able to say, “Hey, I want the output of that search sent to me on a schedule.” So let’s just say every week I want an update of all the new critical vulnerabilities that are on publicly facing assets, that Mandiant rated as critical, just whatever cross-

I mean, that sounds like it’s worth knowing about?

It is. Yeah. Right. So that’s the idea: we are trying to get to the point where you could be a little bit more proactive and start building logic into your process, and then just have that work for you in the background. And then as your teams are doing their job, you’re going to essentially get credit, and all the tracking and all the monitoring is just going to happen, rather than having to go out and have it be a separate process. Because right now, your remediation happens, right? And there’s a multi-step process to make that happen. And then you have to get that feedback, and then you have to track that this event happened. So, we want to do a patch, right? So we have to go raise a ticket in ServiceNow.

And then we have to monitor that the ticket in ServiceNow has been closed, and then I have to track somewhere that a patch was applied. Now I have to go re-scan it, right? Even doing basic things like that becomes very difficult. And so we’re trying to get to a point where it’s a collaboration, and it’s really a vehicle for change in the environment. So rather than just focusing on, “Hey, our SLA for critical vulnerabilities is seven days,” well, what’s a realistic timeline for what we’re trying to do based on our environmental profile? We’re trying to make it a much more automated and smart system, rather than the free-for-all that it is most of the time.
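The weekly report described a moment ago is essentially a saved search run on a schedule. A minimal sketch of such a search as a reusable predicate; the field names and the surrounding scheduling and delivery are hypothetical, not Nucleus’s API:

```python
# Illustrative sketch only: a saved search as a reusable predicate, run on a
# schedule to report new critical findings on public-facing assets.
from datetime import datetime, timedelta

def weekly_new_criticals(findings):
    """New-in-the-last-7-days criticals on externally facing assets."""
    cutoff = datetime.utcnow() - timedelta(days=7)
    return [f for f in findings
            if f["first_seen"] >= cutoff
            and f["vendor_severity"] == "critical"
            and f["asset"]["public_facing"]]

findings = [
    {"first_seen": datetime.utcnow(), "vendor_severity": "critical",
     "asset": {"name": "web01", "public_facing": True}},
    {"first_seen": datetime.utcnow() - timedelta(days=30),
     "vendor_severity": "critical",
     "asset": {"name": "db01", "public_facing": False}},
]
for f in weekly_new_criticals(findings):   # in practice, emailed on a schedule
    print(f["asset"]["name"])              # -> web01
```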

I think the interesting thing about a product like this, right, is that probably 80% of the value is the fact that you’re just bringing the data into one place in the first place, right? And then from there it’s building features. I’m guessing that’s been the journey for you and your team, right?

Yes. It’s been about four years of building aggregation and de-duplication, and then the rest of the time just building, “Oh, we want this widget,” and, “Oh, we wish that we had this dropdown and we want to search this way.” But yeah, the second big part, I would say… so the aggregation is a big piece of it, but instead of 80%, I’d say it’s probably 50%. And then the other 30% to get us to that 80 is actually just organizing the data and being able to keep it up to date. Because it’s great to say, “Oh, well, we can add our assets to an asset group,” but if it doesn’t stay up to date, it really doesn’t matter, because it goes stale within minutes if you’re lucky. And so the way that we get around that is through automation.

So we provide the tracking and the organization capabilities, and then the automation. When we talk about automation, it’s more about automating that stuff as opposed to automating patches, because nobody really wants a vulnerability management guy to go in and circumvent your change management process and your patch management process just because he said it was important. Much as we love the vulnerability management guys, they don’t know more about the system than the patch management people, right? So the way that we do that is through a rules-based system that essentially is meant to reflect a vulnerability processing pipeline. We start by ingesting data from your asset inventory and your vulnerability information, and then we essentially say, “Well, what type of asset information do you care about?” And so we’re actually allowing you to have access to all of the different metadata from all the different systems that you could possibly ever care about, and then to match on any information that you care about, whether that’s through regex or any sort of matching that you could possibly ever want.
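A minimal sketch of that kind of rules pass, matching asset metadata with regex and stamping on attributes; the rule shape and field names are hypothetical, not Nucleus’s actual rule format:

```python
# Illustrative sketch only: a rules pass that matches asset metadata and
# applies ownership / risk attributes.
import re

rules = [
    {"match": {"field": "hostname", "regex": r"^web\d+"},
     "set": {"team": "Web Platform", "public_facing": True}},
    {"match": {"field": "cloud_account", "regex": r"prod"},
     "set": {"criticality": "high"}},
]

def apply_rules(asset: dict, rules: list) -> dict:
    for rule in rules:
        value = str(asset.get(rule["match"]["field"], ""))
        if re.search(rule["match"]["regex"], value):
            asset.update(rule["set"])      # stamp ownership / risk attributes
    return asset

print(apply_rules({"hostname": "web03", "cloud_account": "prod-main"}, rules))
# {'hostname': 'web03', 'cloud_account': 'prod-main',
#  'team': 'Web Platform', 'public_facing': True, 'criticality': 'high'}
```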

And then once you’ve matched, essentially saying, “Hey, what is the set of data that we actually want to match on?”, what do we want to do with those assets? Most of the time you’re going to be defining ownership of assets and risk attributes for those assets, so that you can make better risk decisions, right? So being able to say, “Hey, this asset actually is public facing, or not public facing.” And then at the same time, we’re going to go in and actually just add an asset to a group, but we want to do that dynamically, right? Because if you wanted to add an asset to a group over and over again, you’d have to redefine every single group, which would just be a giant nightmare. And so what we actually have built is a dynamic templating language, like the Mailchimp language thing, but what it essentially… because-

So this is obviously a feature request that… Because people were having to do it manually before and complained, and then you had to build this, and it’s cool, but this is so obviously the result of someone’s pain.

Oh yeah, for sure. Right. So just being able to say, “Hey, we actually have 3,000 teams and we just want to assign the asset to the team from ServiceNow, right? And then as they move in ServiceNow, we want them to auto-move in Nucleus.” So it’s very simple kind of stuff, but it allows you to build out those hierarchies, where you can actually say, “Hey, we actually have a business application, whatever business application, in ServiceNow, and then underneath the business application we actually have other fields that we care about.” So I can go in and say, “Just whatever, we’ll call it BU,” and I actually can keep that entire nested hierarchy up to date with one rule. And so it makes-

And the interns of the world sigh in relief that they don’t have to do data entry around this stuff anymore, basically.

That’s right. Well, the interns and the underpaid vulnerability analysts that were being told that was what 90% of their job was. But it solved some real problems. And I know that this is the part that’s… I think it’s funny on one hand, but it’s also part of why Steve and Nick and I had such frustration with this problem not being solved. It seems so obvious that there are ways to make this job easier, especially when you start dealing with millions and millions of vulnerabilities and you’ve got five people on your vulnerability management team, or one person on your VM team. It really blew our minds that this didn’t exist yet, because we’re not really… I mean, I’m not going to say that we’re redefining the world or anything, right? We’re literally just automating all of the manual tracking.

Trying to make using what you have easier, right? Which has been a bit of a blind spot for the industry for a while. I mean, I think there’s more stuff up and coming now, but it is pretty staggering, the degree to which this wasn’t really much of an industry sector five years ago.

It is.

Especially when you’re talking about those big companies, and you’ve got to recognize that companies do wind up with a mix of vulnerability scanning technology. They acquire other companies; the core company might be a Tenable shop, and they acquire a place that uses Qualys, right? The fact that the industry has been a little bit light on tech that glues those two things together is, yeah, a bit insane. So I agree.

Yeah. It’s really insane when you think about it. Say I’m a giant software organization, let’s just say an Oracle, right? I’ve got 15 business units and they all operate somewhat independently from each other, and Log4j hits the news. Let’s just say a vulnerability hits the news, and they’re like, “Hey, are we vulnerable?” They have no clue at all. And it could take six months to find out if they’re even vulnerable, because they have to go ask every business unit, and every business unit head is like, “I have no idea.” And so they have to go ask their vulnerability management teams, and then their VM teams are like, “Well, what are all of our scanning tools that could find Log4j?” Well, it’s actually Snyk, any of your SCA tools; you could find it in OWASP Dependency-Check, you can find it in AWS Inspector, you can find it in Tenable, right?

There were so many different ways to find some of it, but not all of it. Even Rumble had a way of identifying quite a lot of software that was vulnerable to Log4j, and they’re not even a vuln scanner, right? It was a wild ride, I imagine. So being able to pull all of the data from all of those tools into one place makes a lot of sense.

Yeah. It’s pretty wild. But the part that gets me really excited is actually this part of all the stuff that we do, because it’s essentially just a really simple rules engine where you go in and say, “Well, what do you want to actually do with vulnerabilities?” So we have all of those threat fields available, right? All of the Mandiant data’s available, your Recorded Future data’s available, EPSS, et cetera. And I could literally just go in and say, “Hey, I want to be notified if a Mandiant critical-risk vulnerability shows up and it’s on an externally facing asset.” I have all of those asset fields, just like anything else, and I can auto-assign them to any of the teams that are responsible for them.
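A minimal sketch of one such rule, assuming hypothetical field names: alert when a Mandiant-rated-critical finding lands on an externally facing asset, and give exposed assets a tighter due date, which is the use case that comes up next.

```python
# Illustrative sketch only: a triage rule that raises an alert and shortens
# the due date when a critical-rated finding sits on an exposed asset.
from datetime import date, timedelta

def triage(finding: dict) -> dict:
    external = finding["asset"]["public_facing"]
    critical = finding.get("mandiant_risk") == "critical"
    finding["due_date"] = date.today() + timedelta(
        days=7 if (external and critical) else 30)   # tighter SLA if exposed
    finding["notify"] = external and critical
    return finding

f = triage({"cve": "CVE-2021-44228", "mandiant_risk": "critical",
            "asset": {"name": "web01", "public_facing": True}})
print(f["due_date"], f["notify"])   # due in 7 days, alert fired
```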

But, for the sake of argument, take an externally facing asset: what do I actually want to do with its vulnerabilities so that I can track them, right? Setting a due date, say. If it’s an externally facing asset, maybe I set the due date lower than for a non-externally facing asset, right? A very basic use case that’s really difficult to do across an entire organization. Or being able to auto-mark vulnerabilities as false positives if certain criteria are met, or even just being able to say, “Hey, who-”

When you keep seeing the same false positive, that’s one where you can just go and use that rule to hammer it away, right?

Exactly. But most commonly people are using the assignment piece, so that way they can actually go in and just say, “Hey, what vulnerability is assigned to whom?” And then clicky button, and you’re on your way to a report to say, “Hey, give me a comparison across all of my teams: who is doing well over the last 30 days or the last 60 days, who’s going up, who’s going down,” at a risk level, because everybody wants everything abstracted to a risk layer, as well as my raw vulnerability numbers, as well as my SLA numbers, right? It’s like your three different [inaudible 00:29:10].

Then you can do stuff like say, “Well, is the team not doing a good job? Or are there differences in their stack? Is it a difficult-to-manage stack, as it turns out?” Maybe we should think about that. You can start making some decisions based on that info.

Exactly. And then once you start making those decisions, it becomes a lot clearer too, right? A lot of people are just making decisions in the dark without really understanding what they’re doing. They’re saying, “Oh, industry best practice is an SLA of 14 days,” so they just make that their SLA, and then they go and look, and we see vulnerabilities from 1999 still open in certain environments, right? And it’s like, “Well, your patch management team, which is where you were pulling that data from, says you’re at 99% patch compliance.” And what was happening is maybe they’re not uninstalling the old versions of the software; they’re just installing new versions of the software and saying, “Oh, okay, we’re actually good.” But then you go and look at the data and you’re like, “Well, it’s so clearly not good, right?”

Whatever, 376 days to fix vulnerabilities. It allows you to have data, to have a real conversation with your executives, right? Well, is our SLA unreasonable, or is it a thing where we need to actually invest more to meet it? Do you actually want to meet that SLA? And if so, we need to make some real changes. So it gives you the data that you need to take to your leadership, to do what you need to do, and then hopefully you can get some insight out of the data as well. But then the ticketing piece is a whole other piece of Nucleus, right? We’re not so arrogant as to think that we can manage every single workflow for every big organization out there in the world. And so what we do is-

Essentially, you’re like a ticketing way station though, right? You get to normalize vuln data, and then, as a result, you’re spitting out uniform tickets and saving you from having to do those integrations with every scanner.

There’s that-

Which you can’t even really do, because you can’t really assign owners, and, you know, there are layers missing from it.

There are. And the main use case that we see, and probably the most beneficial one, is that different vulnerabilities on the same assets are actually owned by different people, and oftentimes are being managed in different tools, so-

Well, there’s the person who manages Apache and then there’s the person who manages the web app, right?

Exactly. And they do it on the same asset. So being able to just reflect that there are different owners on the same asset for different vulnerabilities is the core use case that we help with, right? You don’t have to build that routing into Jira and ServiceNow and all these other tools, and then have ServiceNow interact with Jira, because otherwise you pull it, push it into ServiceNow, then route it internally in ServiceNow, then push it to Jira, and then back up. Instead it’s like, “Hey, we have one spot, and if it’s Apache, we actually want it to go to ServiceNow, to this team, assigned to the business owner. And if it’s this type of vulnerability, say it’s at the application layer, we actually want it to go to Jira.” And then we can manage all of our SLAs based on that as well.
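A minimal sketch of that routing decision, with hypothetical routing keys and destinations; the Apache-versus-web-app split is the one from the example above:

```python
# Illustrative sketch only: route findings to different ticketing systems
# based on component and vulnerability layer.
def route(finding: dict) -> dict:
    if finding["component"] == "apache":
        return {"system": "servicenow", "assignee": "infra-team"}
    if finding["layer"] == "application":
        return {"system": "jira", "assignee": "app-team"}
    return {"system": "servicenow", "assignee": "triage-queue"}

print(route({"component": "apache", "layer": "infrastructure"}))
# {'system': 'servicenow', 'assignee': 'infra-team'}
print(route({"component": "login-page", "layer": "application"}))
# {'system': 'jira', 'assignee': 'app-team'}
```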

And so it all just fits together in this nice little package, where now what happens is you get a new vulnerability, or a new set of 50 million vulnerabilities, that comes in, and you say, “Well, what should happen in this case?” Well, we don’t want to create 50 million tickets, right? Because nobody’s going to do anything with that. But we can go in and we can prioritize, we can analyze all the data, we can triage it. And then, oh, by the way, you can do all of that through automation; you can essentially just predefine how that should work. And then once all of the chips land and you’re like, “We’ve gone through all of that processing…”

Well, now we can take our remediative action, and then everything else is just monitoring, right? As tickets get updated, we just pull in the states of those tickets, and we automate based on changes to the tickets: “Hey, we know that we have to rescan this set of vulnerabilities because the tickets were closed but the vulnerabilities are still open; click a button, generate that, send that off to the assessment team, and then you’re on your way.” And it only gets… I mean, these are just the core building blocks, but then you start adding CSPM and pen testing, and even just pushing in random stuff like Tanium or weird home-built scanners…
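That closed-ticket-but-still-vulnerable check is a simple reconciliation between ticket state and scan state. A minimal sketch, with hypothetical status values:

```python
# Illustrative sketch only: a ticket marked done while the scanner still
# sees the vulnerability gets queued for re-scan / re-validation.
def needs_rescan(finding: dict) -> bool:
    return (finding["ticket_status"] == "closed"
            and finding["scan_status"] == "open")

backlog = [
    {"id": 1, "ticket_status": "closed", "scan_status": "open"},
    {"id": 2, "ticket_status": "closed", "scan_status": "fixed"},
]
print([f["id"] for f in backlog if needs_rescan(f)])   # -> [1]
```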

You guys were ahead on the pen testers actually using a platform like this to report remediation steps, right? Because I remember when you first shipped that, I spoke to some pen testers I knew, and they were like, “But it’s hard for a system like that to know the context of vulnerabilities.” And it’s like, “It’s where it’s going, right?” Instead of having a pen test report as a document, there are even companies now that do pen test reports that are more hands-on, more designed to spit out tickets and things like that. So yeah, definitely ahead on that. It seems to be where it’s going.

Absolutely. I mean, we actually have some great partnerships with tools that do that, so AttackForge is a great example. They-

Yeah. That’s who I was thinking of actually with the pen test reporting.

Yeah. AttackForge are great. We partnered with them a while ago on a couple of mutual customers. So that’s pretty sweet. But yeah, they have a plugin where essentially you just click a button and it pushes all your final pen test results up into Nucleus.

The whole thing is that, instead of a Word document that nobody reads, you can actually break it up into actionable chunks and then, yeah, use something like this to get those things resolved.

Yeah. I think we did a webinar with them at one point, actually. So, a little shout-out to-

Anyway, this isn’t an AttackForge-

This is not.

Demo. This is a Nucleus Demo.

Great. Yeah. But we like to give shout-outs to the partners, right? So, in a nutshell, at a high level, that’s really what Nucleus is all about. I could talk for literal hours about all the different individual use cases, but at a high level, we’re aggregating all of that data, we’ve got it all in one spot, and we’re building an asset inventory out of that. That asset context allows you to make decisions about vulnerabilities, and you also have the added context of the threat intel to make additional, better decisions about your vulnerabilities. And then you use all of that data… Basically, our marketing team built a triangle, right? Your assets, vulnerabilities, and threats. I’ve got to do the shout-out to the marketing team.

And the idea is that once you have that data, it allows you to start automating a whole bunch of different decision-making processes, such as: how do we want to report? How do we want to triage? How do we want to track? What do we want to look at that’s assigned to different teams at different layers? We’ve also got your risk profiles, right? So there’s a risk-based vulnerability management layer, because the industry analysts and the market love this, right? I’ve got 50 million vulnerabilities; what do I fix? We’ve got about 10,000 criticals. Of those 10,000 criticals, how do I subdivide and stack-rank those into a consumable list, and how do we figure out which ones are the highest-risk vulnerabilities? So we do that as well. But all of these use cases are really just enabled by that underlying principle: all this data already exists, and the problem is not a data generation problem.

We don’t need more scanning tools; we need ways to manage all the data at scale so that we can keep up as a business, right? Especially when we’re looking towards cloud and agile development, we’re going to start seeing a lot more challenges like, “Oh, well, we’re scanning with Nessus, but all of my assets are ephemeral.” Well, guess what? Nessus can’t really handle ephemeral assets super well. And we actually find that we de-duplicate up to 50% or 60% even within just a single scanning tool.

So even if you’re just using Qualys or Tenable.io, literally you upload that data to Nucleus and we’re saying, “Oh, look, these are all the same thing; it just got spun up and spun down, or it’s an externally facing versus internally facing network port. And so, hey, guess what? Those are the same asset, and you have a clearer picture of the data.” So we’re designed for the management of the data, whereas all of the vulnerability scanners are not really designed for the management of the data, much as they call themselves vulnerability managers.

No, I know it.

I’m with you on that. And I think the thing here, too, is you’re not pretending that this is some light-touch, set-and-forget tool, right, where everything just automatically automates. This is the tool where, if you’ve got a vulnerability management team at the very top of the global vulnerability management structure, this is where those people will live, basically.

Absolutely. And we’ve actually been very surprised. We have 500 or 600 daily active users in some of our customers, and I know for a fact they don’t have 600 vulnerability analysts. So a lot of those are remediation teams, and a lot of those are developers or product owners that are embedded with development teams to figure out what to prioritize.

That’s a lot. I mean, I’m not surprised you’re surprised: 500 or 600 active users in one company.

It’s a lot.

It’s a lot.

Yeah. That’s something that… It’s a tool that me and two of my friends built, and now 600 people from one company are using it in a day. It’s wild.

Yeah. All right. Well, Scott, I think that’s actually a pretty good place to leave the demo. I think you’ve given us a really nice overview of the key benefits and all of that, and shown us the pretty console; that’s demo 101, got to see the pretty console. A pleasure to chat to you, and yeah, I’ll be talking to you again on the Risky Business Podcast soon. Cheers.

Sounds good, Pat. Thanks for having me.