Verizon Study Continues to Demolish Myths

I just read Patching Conundrum by Verizon's Russ Cooper. Wow, keep going, guys. As before, I recommend reading the whole post. Below are my favorite excerpts:

Our data shows that in only 18% of cases in the hacking category (see Figure 11) did the attack have anything to do with a “patchable” vulnerability. Further analysis in the study (Figure 12) showed that 90% of those attacks would have been prevented had patches been applied that were six months in age or older! Significantly, patching more frequently than monthly would have mitigated no additional cases.

Given average current patching strategies, it would appear that strategies to patch faster are perhaps less important than strategies to apply patches more comprehensively...

To summarize the findings in our “Control Effectiveness Study”, companies who did a great job of patching (or AV updates) did not have statistically significant less hacking or malicious code experience than companies who said they did an average job of patching or AV updates. And companies who did other simpler countermeasures, like lightweight standard configurations, had very strong correlations with reduced risk. The Verizon Business 2008 Data Breach Investigations Report supports very similar conclusions.
(emphasis added)

It gets even better.

In summary, the Sasser worm study analysis found that companies who had succeeded at “patching fast” were significantly worse off than “average” companies in the same study. This seemed to be because, as a group, these companies tended toward less use of broad, generic countermeasures. They also thought they had patched everyone, when in reality they hadn’t. You might say they spent more of their energy and money on patching and less on routing, ACLs, standard configurations, user response training, and similar “broad and fundamental” controls...

A control like patching, which has very simple and predictable behavior when used on individual computers, (i.e., home computers) seems to have more complex control effectiveness behavior when used in a community of computers (as in our enterprises).
(emphasis added)

So patching quickly doesn't seem to matter, and those who rely on quick patching end up worse off than those with broader security programs. I can believe this. How often do you hear "We're patched and we have anti-virus -- we're good!"?

Also, I can't emphasize enough how pleased I was to see the report reinforce my thoughts that Of Course Insiders Cause Fewer Security Incidents.

Comments

Anonymous said…
The Verizon report is a treasure trove of useful information and analysis. One thing that occurs to me, though, is that the sample is somewhat biased. The data breach report only includes incidents for which the victimized company decided to call Verizon for help. I imagine that a good number of incidents were handled in house by these same entities and VZ never saw them. The question is in what direction this altered the results. For example, were patching problems more or less likely to warrant a VZ investigation? (In which case, VZ's numbers would be correspondingly higher or lower.)
Anonymous said…
@angel one: Agreed, although sample bias here extends even further. The sample set is rich and we need *much* more of this data, so "Thank You Verizon!"; however, it's important to know what you are reading. The (impressively large) sample is those customers who a) knew they had a breach, and b) came to Verizon to investigate it. I'll bet the first factor (knowledge of breach) offers more bias than the second.
Anonymous said…
@angel one,

We know that in this sample a smaller percentage of the total is attributed to inside attacks. Companies may decide to deal internally with breaches of trust by "one of their own" to limit bad publicity, so such incidents may indeed be excluded from the sample behind these reports.

I am not sure that we can extrapolate this percentage to the total number of attacks. For all we know, a greater percentage of undetected attacks may be internal.
Anonymous said…
@rob
> For all we know, a greater percentage of undetected attacks may be internal

Actually, the DLP vendors have already been measuring those stats for years. The volume of undetected insider-driven data breaches is very large and almost completely unreported. DLP systems detect breach patterns that most people don't know they should be looking for.

This Verizon study is great because it uses a statistically significant sample size, but its statement of methodology indicates they did not run baseline scans for insider breaches at locations comparable to the sites where a breach had already been reported and an investigation was underway.

@richard bejtlich: I like your blog and your work, but this Verizon study does not reach the "mythbuster" conclusions you attribute to it. First, the study clearly indicates that weighting the breach count using a severity indicator like "record count" would place insider threat in LAST place as a risk factor. Second, the study explicitly asserts that lack of knowledge about where internal data is stored is a primary causative factor in breaches. Insider threat platforms (DLP, DAM) are very good at addressing such problems, and count these 'internal exposure' problems as a primary value proposition. Both you and Hoff (and a few journalists) have jumped to the strange conclusion that the insider threat problem is now shown - by this study - to be a "myth". Are you sure you still want to assert this conclusion?
@kevin: The myth is that insiders account for 80% of all intrusions, which has been repeated ad nauseam by pundits with zero evidence to support it. Yet again, another study shows that is false. What else needs to be said? Sure, insiders can cause the most damage, but they are nowhere near as prevalent as people speaking with zero evidence claim.
Sam Pabon said…
Just a quick comment:

It seems that most people (not all) assume that the "Insider Threat" is always an employee of some sort with "physical" access to a specific environment ...

It should be taken into consideration that an "Insider Threat" can also originate from a compromised host that happens to be internal to the target environment, or that has some sort of trust relationship(s) established.

However, the human counterpart to this "Insider Threat" is neither an employee, nor necessarily located on the same continent ;>P

Just ask anyone that works at any of the Service CERTs.
@Sam: Someone outside the organization who compromises an asset inside a target doesn't suddenly become an "insider threat".
Unknown said…
I hate the 80% myth, not necessarily because it is wrong, but because it is never clearly defined.

80% of all successful attacks?
80% of all damage from attacks?
80% of all attempts? or potentials?
80% of all attacks detected and that we hear about?

There are far more external threats and attempts, but I can stomach that a majority of the damaging attacks may come from insiders (maybe because it is easier to monetize after the lawyers get involved). Still, it's all vague and unsupported, but an interesting discussion can arise from what anyone means by the 80% internal/external myth.
Anonymous said…
Richard,

Perhaps the reason people speak "with zero evidence" about insider attacks is that, until recently, it has been nearly impossible to detect them.

There have been anonymous surveys in which a large majority of IT staff admitted to privacy and security policy violations, such as accessing other people's personnel files, simply because they were curious or just because they could. Even though this may not result in monetary losses, if an employee turned vengeful, the stakes could change instantly. Add the rest of the staff to the mix, and the damage, whether intentional or not, could be nickel and diming an enterprise into lost dollars on the bottom line without it even knowing.

Are enterprises going to commit resources to monitoring employee actions? Probably not until they have reason to, and what would flag that? They also have to walk a fine line with respect to privacy violations.

Since any employee, even the most trusted one, can potentially be compromised, who or what is protecting a company against those who are supposed to be protecting it against such things?
Anonymous said…
Richard,

I don't think you've explained how this Verizon report supports your argument against the "80% myth."

I'm not saying that you're wrong, but I think the report only loosely supports your position.

The reasoning for this is that insider threats originate on the "inside" of a victim company's managed network; therefore, there is rarely a reason to involve the telco carrier unless there is a managed services relationship that extends beyond traditional WAN connectivity, or possibly some highly sophisticated transmission method for the leaked data (side channel, TOR, etc.). Also, there is a perceived benefit to the victim in limiting the involvement of outside entities wherever possible, which would further reduce the chances that Verizon would be involved in an insider investigation. The unlikelihood of Verizon being involved in an insider investigation suggests that this report is probably not the best source of data on the pervasiveness of insider threats.

Regardless of whether insider attacks are as pervasive as the "myth" claims, the potential impact of insiders who threaten the security of an organization, whether it be through malice or incompetence, should be addressed as a top-drawer issue. Here is a CERT site with some interesting research that not only dispels the "80% myth" (under most circumstances) better than the Verizon report, but also provides some needed perspective on the threat.
jbmoore said…
You've been tagged Richard (http://jbmoore61.blogspot.com/2008/06/meme-of-seven.html).

Kind Regards and Thanks for the Informative postings of late,

John
Anonymous said…
I'd like to correct my previous comment. Here is a good post from someone apparently on the Verizon team. This was posted on Bruce Schneier's blog:
http://www.schneier.com/blog/archives/2008/06/it_attacks_insi.html

I'm one of the authors of the report and I'd like to comment on a few questions and statements I've seen so far. As to why a telecom is issuing this type of report, it comes from the security solutions group within Verizon Business, which was formerly Cybertrust. We are the principal investigators on a large proportion of publicly disclosed data breaches.

Secondly, the insider vs. outsider topic, which has grabbed everyone's attention. We looked at three sources of data breaches - external, internal, and partner. Sure, some people consider trusted partners as insiders, but we thought distinguishing between the two might be helpful for many reasons. Since risk is the product of likelihood and impact, we sought to measure each separately (keep in mind we're talking data breaches, not attacks or general security incidents). Outsiders were the most likely, followed by partners, then insiders. Investigators don't typically measure the total financial impact of a breach, but they do measure the size (in terms of # of records compromised), so we used that as our pseudo measure of impact. Insider breaches were typically much larger (median # of records) than outsider breaches, with partners somewhere in the middle. When you multiply likelihood and impact, partners represented the greatest risk within our caseload.

Finally, we do think such analysis is helpful in prioritizing efforts to reduce risk. For instance, we often found partner-facing controls to be non-existent. Perhaps organizations that have been neglecting such risks will divert some resources to controlling them after reading these results. Thanks for the comments - I'm glad the report is being discussed.

Posted by: WHBaker at June 25, 2008 9:17 AM

So it's not just attacks reported to the telco; it's actually a consulting group within Verizon that used to be Cybertrust. I still think that arguments supporting the 80% claim (or something like it) can be made if you look at all incidents instead of what security professionals narrowly call a "breach." Consider lost backup tapes, sysadmin snooping, running unauthorized infosec tools, stealing/sharing passwords, etc., and it's not hard to see how insiders easily represent your greatest security risk, even if technically they cannot launch as many attacks as botnets and automated XSS attacks.
Anonymous said…
@Richard. Thanks for hosting this blog and getting a lively discussion going about our study. A few comments:

To the general point about whether our insider numbers can be broadly applied: we are not saying insiders do not pose a risk. It's important to note the source of these statistics. They are drawn from data compromise cases routed into Verizon Business for immediate incident response and computer forensic support. Increasingly today – when there is an inside job resulting in the compromise of sensitive information – the fraud patterns, customer complaints of identity fraud, or other compelling allegations of a security breach clearly suggest an internal source. For example – all customer accounts showing fraud at a given retailer tie back to cash register number 3 at store 449, and Sally was working the register while each of those legitimate transactions took place. This is just a basic example, but it illustrates why security services providers like Verizon Business see fewer internal breaches than breaches originating outside the customer enterprise. When there are already easily projectable financial damages, coupled with a compelling indication of source and cause, there is less frequently a need for a security services provider to prove or disprove a security breach. Instead these types of cases tend to go directly to law enforcement.

When a company like Verizon Business does see these internal cases, our role is typically to perform third-party due diligence to determine the likelihood that the compromised data may lead to fraud (and if so, what types), and to ensure that the matter is fully contained. These requirements are commonly driven by regulatory pressures – either at the federal level or within certain industries.

@angel one. No study can be conducted without some amount of bias, and we explained on page 6 of the report that ours is not unbiased.

@Kevin. Obviously nobody contracted us without being aware of some malfeasance, but many did not become aware through anything they, themselves, discovered. Police, credit card issuers, individuals, etc. can all cause investigations to get underway. So a company does not need to have "knowledge of breach" to be identified as breached.

@rob lewis. To your point as to whether more or less undetected attacks are internal, for all we know all undiscovered caves are full of gold and leprechauns.

@sam. It should be really hard to compromise my data from outside of my enterprise. In other words, by your definition of "Insider Threat", every intrusion is an inside job. Where we could track the source, we did, and when we did, we categorized cases accordingly; so if an inside host was compromised by an outside criminal, that was classified as external. If that outside criminal was an employee remotely connecting (legitimately or not), then it was classified as an insider.

@ntokb3. As far as "breach" being a narrow term, I personally think it's more factual than using figures that might include lost data tapes and their ilk. As for your assertion that insiders are your greatest security risk, I'd disagree. For there to be risk, there must be threat. For something to be your greatest security risk, the threat must also be extremely high. This is simply not the case for insiders in our caseload. When looking at the overall assault a typical company is under, far less of it comes from insiders than from others. See our pseudo-risk numbers on page 11. Remember, risk is threat x vulnerability x cost. A CFO could order the transfer of the entire company's funds to a private account (cost = maximum), but it so rarely happens (threat < .01) that the risk is smaller than many others. Further, the raw (unmitigated) risk should rarely be considered (except when determining the mitigators), so just because there are lots of employees doesn't necessarily vastly increase the threat (because, simply, there are lots of mitigators too!)

Cheers,
Russ Cooper
Director RIT Publishing
Verizon Business
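
As an editorial aside: the pseudo-risk weighting described in the Verizon authors' comments above (likelihood of a breach source multiplied by a record-count proxy for impact) is easy to reproduce. Below is a minimal Python sketch using made-up likelihoods and median record counts, chosen only so the ranking mirrors the qualitative result they describe; none of these numbers come from the report.

# Hypothetical figures for illustration only -- not data from the Verizon report.
# source: (likelihood a breach comes from this source, median records compromised)
sources = {
    "external": (0.70, 25_000),
    "partner":  (0.40, 150_000),
    "internal": (0.15, 300_000),
}

# Pseudo-risk = likelihood x impact, the weighting described in the comments above.
pseudo_risk = {name: likelihood * impact
               for name, (likelihood, impact) in sources.items()}

# Print sources from highest to lowest pseudo-risk.
for name, score in sorted(pseudo_risk.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:>8}: pseudo-risk = {score:,.0f}")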
