Who Owns Vulnerabilities?
The question of who owns vulnerabilities is more than a philosophical one. Unresolved ownership is perhaps the leading cause of vulnerabilities lingering in organizations much longer than they should. Here’s how to solve the mystery of who owns those vulnerabilities that never go away.
Why Vulnerability Ownership Is Important
It’s a well-known problem in physical security that you don’t ever want to say, “Somebody help me.” Asking the generic somebody for help is the same as asking nobody. The “bystander effect” creeps in: everyone assumes someone else will volunteer, and as a result, no one volunteers. But if you point at a random person and say, “You. Help me,” that person probably will, without hesitation.
The same thing happens in vulnerability management. If you don’t assign the vulnerability to anyone, it won’t get fixed. Or if it does, it’s by accident.
But ownership is also important when it comes to vulnerabilities you can’t fix. When you can’t fix a vulnerability, you need a risk acceptance. But who signs the risk acceptance? In my days as a mid-level vulnerability analyst, I wrote up dozens, if not hundreds, of risk acceptances. But when we sent them off to a VP for signature, more often than not the VP would ask why it was going to them, as opposed to someone (anyone!) else.
Sometimes we sent it to the wrong VP. Given the nature of the document, that’s something you want to get right. It’s not just a signature you want; ultimately you want someone to own the problem, so you can work together to fix it. Because when you can’t patch a system, there are usually other problems going on. Some people find the language of the risk acceptance threatening as well. They’re accepting responsibility if nonspecific “bad things” happen, which sounds like a potential career-limiting move. So if they can sidestep the issue, they have every motivation to try; no one wants to sign something that could set back their career.
Let’s talk about how to determine ownership first, and then how to make that risk acceptance less threatening. Nucleus didn’t exist at the time, at least not in a form my then-employer could buy, so I didn’t have that advantage. But here’s how Nucleus can help you get it right.
How to Find Out Who Owns the Vulnerable System
Every company has a collection of systems nobody wants to fess up to owning. The reasons vary, but the longer those systems fester, the more dangerous they become.
The system’s ownership information belongs in your CMDB. You likely have other systems of record that contain clues as well, and ideally, all of those systems are syncing with Nucleus. You may also have some static raw data, and hopefully you imported that into Nucleus too. So the first place to look is in your additional asset metadata. Pull up the asset in Nucleus, click Overview, and scroll down. The older the system, the more likely the information is to be incomplete or conflicting, but Nucleus gathers it all in one place.
What if the system fell through the cracks and there’s no ownership metadata in Nucleus? If it’s a Windows system, you can use the vulnerability data itself to gather ownership clues. In Nucleus, click Vulnerabilities and look through the informational findings for one with a title like “Last successful user login.” Click on the finding, then click Instances, and look at the section labeled Output. The username of the last person to log onto the machine should be among the details.
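If you’d rather script this lookup than click through the UI, the same clues are reachable through the Nucleus REST API. The sketch below is a minimal illustration in Python; the endpoint paths, authentication header, field names, and response shapes are assumptions, so check the API documentation for your own Nucleus instance before relying on any of them.

```python
"""Sketch: pull ownership clues for one asset out of Nucleus via its REST API.
The endpoint paths, header name, and field names below are illustrative
assumptions. Verify them against your instance's API documentation.
"""
import os

import requests

NUCLEUS_URL = os.environ["NUCLEUS_URL"]      # e.g. https://example.nucleussec.com
API_KEY = os.environ["NUCLEUS_API_KEY"]      # API key issued from your instance
PROJECT_ID = "12345"                         # hypothetical project ID
ASSET_ID = "67890"                           # hypothetical asset ID

headers = {"x-apikey": API_KEY}              # header name is an assumption

# 1. Ownership clues from the asset's additional metadata (what the Overview tab shows).
asset = requests.get(
    f"{NUCLEUS_URL}/nucleus/api/projects/{PROJECT_ID}/assets/{ASSET_ID}",
    headers=headers,
    timeout=30,
).json()
for key in ("asset_owner", "business_owner", "support_team", "cmdb_owner"):
    if asset.get(key):                       # field names are assumptions
        print(f"{key}: {asset[key]}")

# 2. Ownership clues from informational findings, e.g. the last user to log on.
findings = requests.get(
    f"{NUCLEUS_URL}/nucleus/api/projects/{PROJECT_ID}/assets/{ASSET_ID}/findings",
    headers=headers,
    timeout=30,
).json()                                     # assumes the response is a list of findings
for finding in findings:
    if "last successful user login" in finding.get("finding_name", "").lower():
        print("Login clue:", finding.get("finding_output", ""))
```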
Between the additional asset metadata and the informational finding, you have some leads. Start talking to those people about the system.
Questions to Ask
The person probably won’t want responsibility for the vulnerabilities on the system. So the first question to ask is, “Who would be responsible for fixing this system if it broke?” That’s the same person who should be responsible for patching it.
Of course, whoever that person is will probably tell you they aren’t allowed to update the system. I once asked a Unix administrator why he had 25 systems that hadn’t been rebooted in at least five years. He let out a long sigh and said, “Yeah, those are the systems I’m not allowed to touch.” The question then is, “Who says you’re not allowed to touch that system? Or, if the system broke, who would come to you and tell you to fix it?” That person is who you need to talk to about a risk acceptance. If that person isn’t a VP, the VP they report to is the one who needs to sign the risk acceptance.
What Goes in the Risk Acceptance
Many organizations simply treat risk acceptances as a get-out-of-patching-free card. That’s not how I was taught to write them. We didn’t call it a risk acceptance, but I wrote my first one in 2006, while working on a DoD contract. Sun Microsystems released an update for Java, we had a mission-critical application written in Java, and I couldn’t meet the patching deadline. So I filled out a standard form. I had to state the problem: in my case, a vulnerability existed in Sun’s Java Runtime Environment (JRE), and Computer Associates had not yet certified that its product was compatible with the new version. I also had to attach something called a plan of action and milestones (POA&M). In my case it was pretty simple. It looked something like this:
- Open a case with Computer Associates regarding its incompatibility with the new JRE.
- Get a weekly update on the case until Computer Associates certifies compatibility or releases an update certified to run under the new JRE.
- Deploy the new update(s) in a test environment. If there are no issues, proceed. If issues are noted, go back to step 1.
- Deploy the new update, if needed, along with the new JRE, following Computer Associates’ instructions precisely, during the next available maintenance window.
And that’s really it. Since we did this every time Sun released an updated JRE, the wait was never all that long. I don’t remember exactly how long my requests ran, and I couldn’t tell you even if I did, but it was less than a year.
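If you track risk acceptances in your own tooling rather than on a form, it can help to capture the same ingredients as structured data: the problem statement, the POA&M steps, the signer, and a review date that keeps the acceptance from living forever. The sketch below is just one way to model it; the field names are my own, not a standard or a Nucleus schema, and the dates are illustrative.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List


@dataclass
class RiskAcceptance:
    """One risk acceptance: the problem, the plan, the owner, and when to revisit it."""
    asset: str                     # the system the acceptance covers
    problem: str                   # what can't be fixed yet, and why
    poam: List[str]                # plan of action and milestones
    owner: str                     # the VP (or O-6 / GS-15 equivalent) who signs
    signed_on: date
    review_by: date                # there is no such thing as a "forever risk acceptance"
    compensating_controls: List[str] = field(default_factory=list)


# Roughly the 2006 JRE example from above, with illustrative dates.
java_jre = RiskAcceptance(
    asset="mission-critical Java application server",
    problem="New JRE released; the vendor has not yet certified compatibility.",
    poam=[
        "Open a case with the vendor regarding the incompatibility.",
        "Get a weekly update until compatibility is certified.",
        "Deploy in a test environment; if issues appear, reopen the case.",
        "Deploy to production during the next maintenance window.",
    ],
    owner="Program VP (or O-6 / GS-15 equivalent)",
    signed_on=date(2006, 6, 1),
    review_by=date(2007, 6, 1),
)

print(f"Revisit the {java_jre.asset} risk acceptance by {java_jre.review_by}")
```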
I had to get a signature from an Air Force Colonel or equivalent (an O-6 in any military branch or a GS-15 would do). But as long as the Colonel understood what they were signing, they never refused. Of course, this problem was easier because all of the affected software was under maintenance. We did sometimes have end-of-life software, but it was always under extended support, so we continued to get and deploy updates until we could upgrade everything to the latest supported versions.
The Lesson for the Private Sector
In the private sector, almost any company of significant age has at least one system everyone is afraid to touch. It’s usually an older system no one understands, and whoever set it up probably left the company decades ago and, in extreme cases, may no longer even be alive. The system is probably end-of-life, and there’s every reason to believe both the hardware and the software are unsupported. So the software publisher won’t help if there’s a software problem, and if it’s a hardware problem, the hardware manufacturer may not help either.
Your best source of replacement parts or installation media may be eBay, and your only option for support may be a vintage computer forum.
And of course there’s some business critical process that relies on this digital house of cards. Otherwise the thing would be in a dumpster instead of in production.
This isn’t a hypothetical situation, either. I actually encountered this and had to figure out how to solve it.
Every situation like this exists because someone did a cost-benefit analysis and determined the best course of action was leaving the system as-is. And it probably seemed like a prudent decision at the time. The problem is, the decision predated a formal risk acceptance process, so it happened off the books and no one ever revisited it, and every year the decision got worse. Give a situation 20 years, and it can start to border on the ridiculous.
This is why we revisit risk acceptances every year, to make sure the risk is still acceptable. There is no such thing as a “forever risk acceptance.”
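One way to keep those annual reviews from slipping is to record a review date alongside each acceptance and flag anything past due, whether in a spreadsheet, a ticketing system, or a few lines of script. A minimal sketch, with placeholder system names and dates:

```python
from datetime import date

# Placeholder risk acceptances: (system name, date the next review is due).
acceptances = [
    ("legacy-erp-01", date(2023, 4, 1)),
    ("hr-reporting-db", date(2026, 1, 15)),
]

# Flag anything whose review date has already passed.
overdue = [(system, due) for system, due in acceptances if due < date.today()]
for system, due in overdue:
    print(f"Risk acceptance for {system} was due for review on {due:%Y-%m-%d}. Revisit it.")
```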
The system administrator frequently wants the system gone too, because they aren’t looking forward to the day they have to rebuild it with parts from eBay while posing as a hobbyist on a vintage computer forum to get the help they need. In this particular case, a system administrator shared with me that a system in this condition was running on a failed RAID array.
So I wrote up the risk acceptance, and made sure I included that little detail, that the system was one drive failure away from decommissioning itself.
The vice president called me up in a matter of hours. After asking the usual questions, he said he still didn’t see the problem. “If the drive fails, we’ll just reboot the system.”
“Sir, that’s the problem,” I said. “If the drive fails, there’s nothing left for the system to boot from. So then we’d be looking at having to ship the drives and the controller off to a drive recovery specialist, pay them about $1,000, and then they’ll send us back a DVD with the data on it, and then we’ll have to find working drives to load it onto and figure out how to make a usable system out of it.”
“It sounds like you’ve had to do this before?” he asked.
“Unfortunately.”
I didn’t fault him for not knowing that you can’t reboot your way into recovering from a disk crash. His job was to make as much money for the company as possible. Providing computer expertise? The company had people for that. I happened to be one of them.
Once he understood the situation his predecessors accepted had spiraled slightly out of control, he didn’t sign the formal risk acceptance. It took about two weeks, but his system administrators got that business process migrated to a current, supported server-grade operating system running on current, supported server-grade hardware. To celebrate, one of the system administrators took the old system out to a field in rural Missouri, where he and his favorite college professor did an experiment with it involving explosives. At least that’s what he told me.
Note that it wasn’t the vulnerabilities that ultimately forced this system into retirement. I tried that, and it didn’t work. But where you find vulnerabilities, you often find other questionable practices. It was the totality of the questionable decisions that drove the VP to action.
The happy ending doesn’t always come that swiftly, and even less frequently ends with a bang. But when an organization’s risks are all on the books and get reviewed annually, you can whittle away at these legacy systems nobody’s allowed to touch unless they break. It takes patience. But it can prevent something much worse from happening.
It starts with figuring out who owns the vulnerabilities and presenting a clear picture of the risk to that person in a professional manner.
See Nucleus in Action
Discover how unified, risk-based automation can transform your vulnerability management.