Tuesday, August 29, 2006

Again, External Threat Is More Prevalent

I almost fell out of my chair when word of the following story reached my Bloglines account: Study: Rethink the Outsider Threat. I published my thoughts on the prevalence of external threats in my first book, and I reiterated those thoughts recently. Now I appear to have some outside help. From the article:

The report took data from the Department of Justice Computer Crime and Intellectual Property Section's network intrusion and data-theft prosecutions between 1999 and 2006. (See How Much Does a Hack Cost?) Phoenix Technologies commissioned the report, but the data came from DOJ cases...

Outside attackers committed 79 percent of the crimes where user accounts were infiltrated[,] and former employees were the perpetrators in 21 percent of these types of breaches. And overall, 57 percent of attackers had no relationship with the victim organizations, 22 percent were former employees, 14 [percent] were current employees, and 7 percent had a customer or supplier relationship or similar "connection" to the victimized organization.
(comma added, emphasis added)

Where's the 80% myth now? Gone, except in the minds of people who cling to it. I don't expect to see it disappear overnight. Please, if you want to repeat the 80% myth, at least cite a source. (You won't be able to find anything authoritative, just reports citing each other in a circular manner.)


Colin Percival said...

Outside attackers committed 79 percent of the crimes where user accounts were infiltrated.

Note that this provides a very biased measure of the relative danger of internal vs. external attacks: I think it's fairly safe to guess, even without authoritative sources, that the majority of internal attacks don't involve infiltrating user accounts, but instead are accomplished by legitimate users abusing the privileges with which they were provided.

Richard Bejtlich said...

Hi Colin,

I'm sure we could debate just what "where user accounts were infiltrated" means, too.

wpn said...

Colin makes an excellent point. The very term "infiltrated" is pretty biased towards the implication of an external attacker ...

Anonymous said...

Hi Richard, finally some actual data points, but I keep thinking about the denominator for these statistics: prosecutions between 1999 and 2006. Add all the cases that never see a courtroom, and I suspect the reality is even more tilted towards the external threat.

Insider threats seem relatively easy to prosecute, given that you know their name, address, SSN, etc., and, most importantly, they are within reach of law enforcement. Isn't it surprising, then, that they constitute such a small fraction of the DoJ prosecution data? On the other hand, insider threats, as I believe you said, are threats you can actually eliminate. Perhaps because they can be eliminated (i.e., fired), companies may leave it at that and avoid a public courtroom battle.

Then again, the real intrusions missing from this equation are the ones Maj Gen Lord describes. Talk about an external threat, the kind that siphons off 10-20TB of your data! How many external threats originate in countries that don't play nice with LEO? I suspect these would add quite a few more to the external threat column.

Jay Schwitzgebel said...

What a great resource this study will be for awareness efforts! I was pleased to see stats that don't marginalize the external threat. However, I also suspect that the radical shift from "conventional wisdom" is at least partially explained by a difference in what constitutes a threat. By that, I mean that this study seems to define a threat as something that might result in a crime. Perhaps that's too narrow, as I believe the oft-cited "80%" stats also include internal users wandering inadvertently into areas where they shouldn't be. Even if that sort of "wandering" is less than innocent, I can see event scenarios that would be disruptive and even harmful that may not be criminal. If you throw those in, what happens to the stats?

With regard to the article, there's one sentence I can't make sense of as is. The third bullet says, "84 percent of computer crimes could have been prevented if the computer that was broken into had been verified as an authorized device." IMHO, it seems that it should actually read, "84 percent of computer crimes could have been prevented if the computer that was broken into had verified the source as an authorized device." That makes much more sense to me. Am I missing something?

It's been some years, Rich, since we've been in touch, but I stumbled on your blog only just yesterday and have added it to my Bloglines - thanks!

LonerVamp said...

Ever since my college days in statistics, I've been a skeptic when it comes to using stats to back an argument. It is actually fun to take a particular data set or study and "move" numbers around to back completely opposite arguments. It's one of the beauties and dangers of statistics (and rhetoric).

I really believe the 80% myth problem is twofold.

First, few begin to define what the myth is and what those terms mean. You've mentioned that the myth is "80% of all security incidents are caused by insiders." Are these incidents that are successful, or do they include all the noise that bounces off the firewalls from automated worm attempts? Is this based on cost, such as whether an internal incident costs more than a successful external incident? Did someone mention threat, and how would they define threat? Would something that takes advantage of a configuration problem be an internal issue or an external issue? What about a virus a user has to execute to activate? Does this include businesses with only 5 employees (insiders) but a large Internet presence in terms of traffic? What about a company with 50,000 insiders plus extranets? What about college campuses?

Before making assumptions on things like this, I really need the statement laid out, the terms defined, and the other assumptions made explicit. I don't mean to pick on your definition, but nearly every time I see the 80% statement made or attacked, it is not well qualified at all. We could all be in total agreement, but we see things just differently enough that we get blinded by arguing semantics. In the meantime, the fact remains that there are both insider and exsider (hehe) threats and incidents that need attention.

Second, I don't think 80% is necessarily meant to be an empirically backed measure all the time, but rather something used as part of either common sense or an illustration. This is similar to the whole 80%/20% notion that 80% of your problems are caused by 20% of your users, or 80% of your security incidents can be fixed with only 20% of your budget. I really think a lot of people use it in that fashion. It might not be journalistic or scientific, but it is still used.

In my experience, I have felt the effects of very, very, very few successful external incidents (and yes, maybe I just don't catch them; I'll accept a level of error). I do, however, see far more attempts in the noise outside the firewalls from automated worms, automated script scans, etc. On the other hand (I think this gives me three hands now...), I have felt many effects from poor user education, poor user habits, implementation mistakes, internal misunderstandings, or just mistaken risk acceptance, etc.

Because of this feeling, I think a lot of people combine that feeling with the typical 80%/20% idea and just explain that 80% of our incidents are from insiders. I certainly feel that, but I've certainly not taken the time to gather the evidence for it.

In the end, much like statistics, without definitions I can argue that the only external incidents that count toward any measurement are those by highly skilled attackers I wouldn't have been able to stop anyway. Everything else might be attributable to insiders or insider mistakes. Conversely, I could also say that the only insider incidents are those of a consciously malicious nature that cost the company some dollar value and did not involve anything from the external world (i.e., if there were no Internet and no outside physical world, the incident would still have happened). In either case, I've dramatically changed the measures to suit my approach.
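The point above can be made concrete with a toy calculation (the counts here are entirely hypothetical, invented just to show how inclusion criteria move the headline number):

```python
# Hypothetical incident log: (source, category) pairs. Counts are made up.
incidents = (
    [("external", "skilled intrusion")] * 10
    + [("external", "automated worm noise")] * 70
    + [("internal", "privilege abuse")] * 15
    + [("internal", "user mistake")] * 25
)

def pct_internal(rows):
    """Percentage of incidents attributed to insiders, rounded to a whole number."""
    internal = sum(1 for src, _ in rows if src == "internal")
    return round(100 * internal / len(rows))

# Count everything, automated worm noise included: insiders look like a minority.
print(pct_internal(incidents))  # 33

# Exclude automated noise and honest mistakes, as some definitions implicitly do,
# and the same data now makes insiders the majority.
filtered = [r for r in incidents if r[1] in ("skilled intrusion", "privilege abuse")]
print(pct_internal(filtered))  # 60
```

Same data set, two defensible definitions of "incident," and the internal share swings from 33% to 60%.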

Anyway, sorry to vomit out about that, and I certainly do not mean any of this in anything other than a friendly discussion tone. I truly enjoy the challenge you present in defining terms and dispelling myths (you've made me a convert in using terms like "threat" appropriately as much as possible). :)

LonerVamp said...

Wow, please delete one of those, I've obviously got rabies or something today...

Jay Schwitzgebel said...

Vamp - thanks. Not sure if the "You" you're addressing is me or not, but you stated and supported my position much better than I did. Stats are dangerous things to rely on unless terms and parameters are well-defined (which they *so* aren't here). Cheers!

LonerVamp said...

Hrmm...I swore I accidentally posted the same thing twice, hehe. Oh well. I meant "you" as pretty general, but that works. :)

Anonymous said...

Firstly, I'd like to say thank you. Thank you for discussing the research at all. As one of the people involved in determining what would be put into the report, I wanted to point out a couple of things. 1) Yes, this research could be flawed. All research can be flawed. Our intent was to find a raw data set that was as close to truth and reality as possible; in other words, NOT A SURVEY. The flawed portion is that it covers only the cases that were prosecuted (in the U.S.), and let me tell you, these things take years to prosecute, so there aren't that many. Also, when you consider that some of the cases go back to 1999, you're looking at less sophisticated attacks. What most people don't know about this report is that it wasn't originally done to release to the public. It was done because Phoenix needed to figure out what had really been happening. The CSI/FBI report is interesting, but they themselves admit it reflects only what people will admit to, and the standard industry analyst firms will just survey their customers, resulting in similar responses (I'm guessing) to the CSI/FBI report.

2) Internal vs. external was not the debate the research was attempting to prove/disprove. As one of you mentioned above, the likelihood that an internal breach would end up in the court system is certainly less than just firing the person and maybe suing them in civil court for some sort of remuneration. What Phoenix was ultimately trying to figure out was: if you identify the devices that access your network, can you decrease unauthorized access and thereby the possibility of data theft? I think they got their answers, and the report ended up being pretty interesting. Once something interesting gets around, the PR people start salivating, and the rest is history. Now, that being said, anyone who wants to can go through the DOJ's website and do a few Lexis/Nexis searches to slice and dice this data any way they want. The report certainly raises questions that aren't answered.

Because, as I've admitted, I was involved in this project, I'm enjoying the conversation and would love to have one question answered for myself: if you were trying to figure out where the threats were coming from and how to prevent them, what would you do, and where would you go to get the raw data? And I don't mean any of this in a defensive or derogatory way; this was a challenging project and we did the best we could, but we're only a few people, and the ideas of many are certainly more influential.

Thanks for the opportunity to discuss.

Anonymous said...

As Adam of Emergent Chaos said, and as was mentioned above, prosecution of insiders might cause too much brand damage. I recommend The Insider by Dan Verton to study this issue. The author suggests that insider losses are generally hidden losses that are low-tech, committed by authorized users, and motivated by revenge or greed, something policies and training won't stop. The estimated losses in the US alone are staggering.

Shouldn't all unauthorized use (or abuse) of enterprise resources be included in this thinking, not just infiltrations that lead to investigations and prosecutions?