Vulnerabilities in Perspective

It's been nine days since Dan Kaminsky publicized his DNS discovery. Since then, we've seen a BlackBerry vulnerability that can be exploited by a malicious .pdf, a Linux kernel flaw that can be remotely exploited to gain root access, Kris Kaspersky promising to present Remote Code Execution Through Intel CPU Bugs this fall, and David Litchfield reporting "a flaw that, when exploited, allows an unauthenticated attacker on the Internet to gain full control of a backend Oracle database server via the front end web server." That sounds like a pretty bad week!

It's bad if you think of R only in terms of V and forget about T and A. What do I mean? Remember the simplistic risk equation, which says Risk = Vulnerability X Threat X Asset value. Those vulnerabilities are all fairly big V's, some bigger than others depending on the intruder's goal. However, R depends on the values of T and A. If there's no T, then R is zero.
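To see the multiplicative effect concretely, here is a minimal sketch of that equation; the scores are hypothetical, chosen only to illustrate the math:

```python
# A minimal sketch of the simplistic risk equation above.
# All scores are hypothetical, chosen only to illustrate the math.

def risk(vulnerability: float, threat: float, asset_value: float) -> float:
    """Risk = Vulnerability X Threat X Asset value."""
    return vulnerability * threat * asset_value

# A severe vulnerability (big V) on a valuable system (big A)
# with moderate attacker interest:
print(risk(vulnerability=0.9, threat=0.5, asset_value=10.0))  # 4.5

# The same V and A with no observed threat: R drops to zero.
print(risk(vulnerability=0.9, threat=0.0, asset_value=10.0))  # 0.0
```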

Verizon Business understood this in their post DNS Vulnerability Is Important, but There’s No Reason to Panic:

Cache poisoning attacks are almost as old as the DNS system itself. Enterprises already protect and monitor their DNS systems to prevent and detect cache-poisoning attacks. There has been no increase in reports of cache poisoning attacks and no reports of attacks on this specific vulnerability...

The Internet is not at risk. Even if we started seeing attacks immediately, the reader, Verizon Business, and security and network professionals the world over exist to make systems work and beat the outlaws. We’re problem-solvers. If, or when, this becomes a practical versus theoretical problem, we’ll put our heads together and solve it. We shouldn’t lose our heads now.

However, this doesn’t mean we discount the potential severity of this vulnerability. We just believe it deserves a place on our To-Do lists. We do not, at this point, need to work nights and weekends, skip meals or break dates any more than we already do. And while important, this isn’t enough of an excuse to escape next Monday’s budget meeting.

It also doesn’t mean we believe someone would be silly to have already patched and to be very concerned about this issue. Every enterprise must make their own risk management decisions. This is our recommendation to our customers. In February of 2002, we advised customers to fix their SNMP instances due to the BER issue discovered by Oulu University, but there have been no widespread attacks on those vulnerabilities for nearly six years now. We were overly cautious. We also said the Debian RNG issue was unlikely to be the target of near-term attacks and recommended routine maintenance or 90 days to update. So far, it appears we are right on target.

There has been no increase in reports of cache poisoning attempts, and none that try to exploit this vulnerability. As such, the threat and the risk are unchanged.


I think the mention of the 2002 SNMP fiasco is spot on. A lot of us had to deal with people running around thinking the end of the world had arrived because everything runs SNMP, and everything is vulnerable. It turns out hardly anything happened at all, and we were watching for it.

Halvar Flake was also right when he said:

I personally think we've seen much worse problems than this in living memory. I'd argue that the Debian Debacle was an order of magnitude (or two) worse, and I'd argue that OpenSSH bugs a few years back were worse.

Looking ahead, I thought this comment on the Kaspersky CPU attacks, from the post CPU Bug Attacks: Are they really necessary?, was interesting:

But every year, at every security conference, there are really interesting presentations and a lot of experienced people talking about theoretically serious threats. But this doesn't necessarily mean that an exposed PoC will become a serious threat in the wild. Many of these PoCs require high levels of skill (which most malware authors do not have) to actually make them work in other contexts.

And, I'm sorry to say this, but being in the security industry, my thoughts are: do malware writers really need to develop highly complex stuff to get millions of PCs infected? The answer is most likely not.


I think that insight applies to the current DNS problems. Are those seeking to exploit vulnerable machines so desperate that they need to leverage this new DNS technique (whatever it is)? Probably not.

At the end of the day, those of us working in production networks have to make choices about how we prioritize our actions. Evidence-based decision-making is superior to reacting to the latest sensationalist news story. If our monitoring efforts demonstrate the prevalence of one attack vector over another, and our systems are vulnerable, and those systems are very valuable, then we can make decisions about what gets patched or mitigated first.
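As a rough sketch of what that prioritization might look like under the same R = V X T X A model (the systems and scores below are hypothetical):

```python
# Hypothetical systems scored for Vulnerability, observed Threat
# activity (from monitoring), and Asset value, then ranked by risk.

systems = [
    # (name, V, T, A)
    ("dns-resolver", 0.8, 0.1, 9.0),   # big V, little observed T
    ("web-frontend", 0.6, 0.9, 7.0),   # actively probed, valuable
    ("dev-test-box", 0.9, 0.9, 1.0),   # attacked, but low value
]

# Patch or mitigate in descending order of estimated risk.
for name, v, t, a in sorted(systems, key=lambda s: s[1] * s[2] * s[3],
                            reverse=True):
    print(f"{name}: R = {v * t * a:.2f}")
```

In this toy ranking the actively attacked web frontend outranks the DNS resolver with the scarier-sounding vulnerability, which is the point: monitoring evidence, not headlines, drives the order.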

Comments

Unknown said…
You are right where you say "At the end of the day, those of us working in production networks have to make choices about how we prioritize our actions."

Sometimes that security bug is not so important compared to a normal bug in the application that prevents customers from using it for, say, 4 hours. Those 4 hours can really be hell: the customers keep calling on the telephone, your boss is behind your shoulder pressuring you to solve the issue (and typically something worse is happening).

Anyway, I believe it is a good thing when security bugs are spread across websites, blogs, and so on. That way you can say, "OK, today I have no time, but patching the system is tomorrow's job." Pushing so much importance onto a security bug makes you feel like, "If I don't patch it as soon as possible, I will have my DNS cache poisoned" (or my SSH server in the hands of some attacker, etc.), and then you patch it!

An attacker must have skill to write an exploit, and you say that often he does not! Besides, an attacker has to know what to exploit. If the bug is big, almost every DNS admin will have it patched within a few days; who did not patch? How can I find that DNS server? What is behind that DNS server? What can I poison, and how many people are going to be affected?
I believe it is easier to send email around the globe, with some malware and blue pills inside, but please continue research on security and keep saying that there is a new "worst in history" security bug.
Anonymous said…
And when a patch hurts and disrupts a network, as in the case of http://blogs.technet.com/sbs/archive/2008/07/17/some-services-may-fail-to-start-or-may-not-work-properly-after-installing-ms08-037-951746-and-951748.aspx, one has to balance the risk of the threat of the vulnerability with the risk of the threat of patch disruption.

Patches bring change to a network and should not be applied blindly, either.
Anonymous said…
While I agree with a large portion of this post, one issue was not addressed. It's not the low-level malware writers that keep me up at night. It is the "low & slow" professional hacker that worries me.

I look at the disclosures two ways:
1) An "unknown" tool of a professional hacker has been discovered (even though they may already have been using it) Giving the vendor a chance to fix it, no matter how long it takes them. This give me a chance to take a foothold away from the attacker.
2) The released information just gave a professional hacker another tool of which to whittle away at my defenses. Full disclosure or not if they are good, they are probably smart enough to develop an attack at what information they get. (albeit, I'm not against most forms of disclosure) Again, the vendor can come up with a fix allowing me to take away a foothold for the attacker.

Again, it's not the malware writer or script kiddie I lose sleep over. The guys who have the talent to utilize such information are what keep me up at night.
Sanju said…
"Evidence-based decision-making is superior to reacting to the latest sensationalist news story."

You have hit a home run on this one. Many people forget the issue at hand and get carried away by sensational stories that may or may not be affecting them.
mubix said…
In reference to R = V * T * A, how does one assess the 'Threat' of a 0-day exploit, or this DNS bug? How does one take a proactive approach? The equation breaks down if T is an unknown. So I believe that some sensationalism is required when V and A are large values and T is an unknown.

Also, some organizations' executives, especially government types, need some magic dust thrown at them to get things approved and moving along.

As with Y2K, sensationalism helped push things to the point where, on that day in history, nobody noticed even a blip or pause.

So yes, within the community it should be a level-headed discussion, but getting it public enough for people to take action is something that the straight facts can't always accomplish.
Anonymous said…
Along with the question of a "T" being unknown, if intruders are smart and unpredictable (see the Tao), how can one ever assign "T" a value of 0?

Is "Evidence-based decision-making" related to your discussion of Indicators and Warnings in the Tao, Richard ?
