I love it when data surprises me.
No, really. The best way for the infosec industry to improve is through the rigorous collection and analysis of data, and when you do that long enough, you find out things that challenge the conventional wisdom. An example of this wisdom—something that everyone inherently believes and that it’s heresy to dispute—is that everything should be patched, all the time, immediately. Because not all threat models are equal among organizations, defaulting to “patch everything all the time” is a one-size-fits-all risk conclusion that doesn’t serve anyone well.
Kenna Security and the Cyentia Institute took a serious look at the combination of data about vulnerabilities, published exploits, and detected exploits in the wild in the paper Prioritization to Prediction: Analyzing Vulnerability Remediation Strategies. The authors asked if it was possible to create a model for prioritizing patching in a way that optimized both efficiency (patching the right vulnerabilities that will probably be exploited) and coverage (patching all of the right vulnerabilities).
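To make those two terms concrete: roughly speaking, efficiency behaves like precision (of everything you patched, how much actually needed it) and coverage like recall (of everything that needed patching, how much you got to). As a minimal illustration, not taken from the paper and using made-up CVE identifiers, the calculation looks something like this:

```python
# Illustrative only: toy sets of CVE IDs (hypothetical), not data from the paper.
patched   = {"CVE-2018-0001", "CVE-2018-0002", "CVE-2018-0003", "CVE-2018-0004"}
exploited = {"CVE-2018-0002", "CVE-2018-0004", "CVE-2018-0005"}

hits = patched & exploited                 # patched vulnerabilities that were actually exploited

efficiency = len(hits) / len(patched)      # did we spend effort on the right things?
coverage   = len(hits) / len(exploited)    # did we cover everything that mattered?

print(f"efficiency = {efficiency:.0%}, coverage = {coverage:.0%}")
# efficiency = 50%, coverage = 67%
```

The tension between the two is the whole point: patching everything maximizes coverage at terrible efficiency, while patching only a handful of sure things does the reverse.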
The paper is eminently readable, which isn’t surprising, because it comes from the dynamic duo of Wade Baker and Jay Jacobs, two of the creators of the also eminently readable Verizon Data Breach Investigations Report. But I also appreciate the fact that the research wasn’t conducted to prove or disprove a given hypothesis, such as “Enterprises aren’t patching fast enough, and the consequences are dire.” When you ask an open-ended question like “What kind of patching model works best?” then you leave yourself room for whatever the data says.
As my colleague Rich Smith points out, “Exploring what ‘best’ means for different cases is a useful exercise that encourages both security and the business to understand more concretely what they’re trying to achieve, rather than just blind adherence to currently popular security mantras.”
Another positive aspect of this research is that the authors collected data not just on vulnerabilities that had exploits published for them, but on vulnerabilities that were also being actively exploited in the wild. This data came from multiple sources, including the SANS Internet Storm Center, Dell Secureworks’ CTU, AlienVault’s OSSIM metadata, and Reversing Labs’ metadata. Although the public availability of an exploit does not automatically mean a vulnerability will be exploited in the wild, according to this paper it does increase the chances of active exploitation sevenfold.
(Note: this still doesn’t mean that everyone will be affected by that active exploitation, a point that is often missed in armchair risk analysis. Targeted attacks are different from automated, indiscriminate ones, and organizations need to know what difference that makes to their own likelihood of being hit.)
The authors also make a great point: the existence of a published exploit for a given vulnerability tells you that exploit writers are likely to target similar vulnerabilities in the future.
But let’s get to the bombshell data, which is my favorite part. Some of the points to ponder in this paper:
- From a coverage and efficiency standpoint, several of the favorite prioritization models in use today (such as remediating based on CVSS score or on vendor/product) are no better than remediating at random. The best strategy that Kenna and Cyentia have identified so far is to combine a number of those attributes: CVSS scores, products, categorical references, and keywords in the CVE descriptions (there’s a rough sketch of the idea after this list). In other words, we don’t have a single method of prioritization that works well, but combining what we do have gets us closer to the goal of optimizing efficiency and coverage.
- If you’re going to patch, do it quickly: once a vulnerability is published, if it’s going to get exploit code associated with it, it’ll be within two weeks on average. This means that organizations badly need an automated way of analyzing and prioritizing these patches; doing it manually (assuming anyone has time to do it) takes too long.
- Going by the numbers alone, not patching at all would still earn you an efficiency score of 77%. This will doubtless make a lot of security professionals clutch their chests and reach for the nitroglycerin, but it also explains why so many organizations don’t have a problem with breaches until one finally comes along. Even if they’re not patching, they’re functioning just fine for the most part.
- The majority of registered vulnerabilities are never exploited in the wild. It isn’t until a catastrophic event such as NotPetya comes along that the lack of patching or of tight security configurations exacts a price. Up to that point, the affected companies were achieving peak business efficiency by not patching, because patching can also create risk and expense. Even in this case, it doesn’t prove that they needed to patch everything; they just needed to patch the vulnerabilities that NotPetya exploited. Knowing which ones those were, in advance, would have worked just as well, which is why a prediction model would be so valuable.
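The paper builds an actual statistical model over those attributes; I won’t reproduce it here, but as a rough illustration of what “combining signals” can mean in practice, here’s a naive weighted-scoring sketch. None of the feature lists, weights, or CVE entries below come from the paper; they’re made up for illustration.

```python
# A naive, illustrative scoring function (not the Kenna/Cyentia model).
# Feature names, weights, keyword lists, and CVE entries are hypothetical.

HIGH_RISK_PRODUCTS = {"struts", "weblogic", "drupal"}            # assumed watchlist
HIGH_RISK_KEYWORDS = {"remote code execution", "unauthenticated", "wormable"}
HIGH_RISK_REFS     = {"exploit-db", "metasploit"}                # categorical reference tags

def priority_score(vuln: dict) -> float:
    """Combine several weak signals into one ranking score."""
    score = vuln.get("cvss", 0.0)                                # start with the CVSS base score
    if vuln.get("product", "").lower() in HIGH_RISK_PRODUCTS:
        score += 2.0
    desc = vuln.get("description", "").lower()
    score += sum(1.5 for kw in HIGH_RISK_KEYWORDS if kw in desc)
    refs = {r.lower() for r in vuln.get("references", [])}
    score += 3.0 * len(refs & HIGH_RISK_REFS)                    # published exploit code is a strong signal
    return score

vulns = [
    {"id": "CVE-2018-1111", "cvss": 9.8, "product": "ExampleOS",
     "description": "Stack overflow allows remote code execution.", "references": []},
    {"id": "CVE-2018-2222", "cvss": 6.5, "product": "Struts",
     "description": "Unauthenticated remote code execution.", "references": ["exploit-db"]},
]

for v in sorted(vulns, key=priority_score, reverse=True):
    print(v["id"], round(priority_score(v), 1))
```

Even this toy ranking makes the point: a vulnerability with a middling CVSS score but a published exploit and a frequently targeted product can outrank one that scores 9.8 on severity alone.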
In other words, counter to the prevailing wisdom, not all organizations are breached all the time. Saving operational costs, predicting which security remediations actually have an impact on reducing the likely risk (not all risk), and prioritizing the truly essential controls are what the successful CISO has to do every day.
It’s probably at this point in the blog post that people will be getting upset. I’m certainly not advocating that organizations avoid patching.
But we need to take a rational look at the issue, not a reflexive “more is always better” approach, because if we keep climbing the update curve, we’ll end up patching all applications at the speed of light, which is not good for stability. (Rapid, automatic updating is also a potential attack vector in and of itself, so it’s worth considering a more deliberately designed process.) This work on remediation models by Kenna and Cyentia provides a valuable service in that it gets us closer to being able to say something other than the broadly unhelpful “patch your s[tuff].”
Data will come to our rescue.
Header image credit: rawpixel on Unsplash