Ptacek v. Lindstrom
There's a major battle over vulnerability and exploit disclosure occurring between Thomas Ptacek and Pete Lindstrom. I've linked the first post in each side of the debate. I don't know which one should be Godzilla or Mechagodzilla, but I liked the photo at left.
I think each side makes some valid points. I agree with Tom that vulnerability disclosure has resulted in the elimination of many security problems. I agree with Pete that, in some sense, nothing has really improved, as victims are still being compromised. In the end I lean more towards Tom; clueful people have a better chance of defending their networks, and at least of knowing what is happening if their preventative measures fail. Remember that ten years ago there was no Snort, no Ethereal, no Nessus. Fifteen years ago there was no Argus, and no FreeBSD! Would you believe that Tcpdump is over eighteen years old, though?
Tom does make an excellent point regarding cryptanalysis: why is it OK to analyze and break crypto algorithms, but supposedly not security software? Could it be that the people who really need strong crypto, like .gov and .mil types, know that bad guys are always trying to break the good guys' crypto?
If we are to believe Pete, we would not recognize this fact. Because Pete doesn't have first-hand knowledge of the sorts of research that occurs "in the shadows," he is quick to poke fun at people like Adam Shostack who say "We've always known that there's lots of exploit code for unannounced vulnerabilities out there." Pete and friends, there are people who have developed techniques months, and in some cases, years, before they appear in mailing lists or Black Hat talks.
With regard to discussions on specific new vulnerabilities and exploits, all I can tell you is "those who say don't know, and those who know can't say."
Comments
:)
Pete Lindstrom
The problem, Richard, is that the goal of security vendors should be to create a network environment where one does not have to be "clued" to defend one's network. Unfortunately, the reality is that this would be counterproductive to the economic sustainability of these same companies. If you buy a lock that never breaks, where's your incentive to buy new locks? No security vendor in its right mind would market a product that didn't have to be supported.
(1) "...those who know can't say."
Like many, I worked at the No Such Agency. Even if you have the lucrative TS/SCI Lifestyle, the "crypies" have a death grip on their techniques and play dumb most of the time.
(2) The post which states, "If you buy a lock that never breaks, where's your incentive to buy new locks?" Hackers need us, and we need them. Yin-Yang, you know the old story. We (security professionals) have no purpose without them. I mean let's face it, we all would be sys admins without our security specialization, or even worse, help desk support!
As a consultant, I prided myself on having a thorough vulnerability assessment process. One of the things I did was interview people within the client company. One question I asked the IT management is, "who on your staff is designated to receive vendor-issued security alerts?"
Many times, the initial response to this question was (and still is) a blank stare.
Over 5 yrs ago, I did an assessment of the NW3C. I asked the staff when I was on-site in WV who received the Microsoft security bulletins. I was told that no one did...no one was designated to do so, as they were too complicated and difficult to understand. That evening, back in my hotel room, I had two emails from MS...security bulletins. In less than 30 seconds, I reviewed each of them and determined whether or not they affected the client's infrastructure.
My point is this...regardless of how well-thought-out the vulnerability disclosure process may be, there will *always* be people who don't even bother to pay attention. And don't bother blaming the admins...sure, they could take it upon themselves to be proactive...but until IT management starts making security skills development a requirement for retention and promotion/bonuses, little is likely to change.
H. Carvey
"Windows Forensics and Incident Recovery"
http://www.windows-ir.com
http://windowsir.blogspot.com
It's hard not to agree with Tom; he's smarter than you and me put together. But there's a fact we're overlooking here: there's no such thing as a one-sided coin. The other side of the disclosure coin is "How many security problems have been exacerbated by the disclosure of their respective vulnerabilities?", and unfortunately the (realistic) answer to that question is that more have than haven't. Sad but true; life goes on.
Now we address what arises as the fundamental paradox of modern (and historic) information security. From an asset protection standpoint, being vulnerable to exploitable software flaws is bad for companies. Yet companies have little incentive to patch holes that are not being widely exploited, mainly because if they're not being widely exploited, the company has no knowledge of them. Software vendors, in turn, have very little reason to patch vulnerabilities absent external pressure from their clients, who will only know about the flaws if they're published. But if a vulnerability is published, the rate of exploitation will drastically increase, and the vicious circle starts all over again.
Will certain high profile, large market cap, companies with the ability to muster the implementation of their security policy benefit from the release of vulnerability information, and corresponding software patches? Absolutely.
But now the important question that plagues the center of the Full Disclosure/Disclosure/Non-Disclosure argument (and there are three legs to this debate, whether anyone wants to make it black and white or not): Is it worth releasing this information for the financial advantage of the few who can properly roll the fix out with an enterprise patch management system, at the expense of the tens if not hundreds of thousands of users who will be left, out of ignorance, with their pants down and their proverbial dicks flapping in the wind? It all depends, like everything else, on which side of the fence you're sitting (Corporate User vs. End User, Security Researcher vs. Hacker, Stock Holder vs. Consumer). Let all things be relative to the originating point of the observer's observation. Einstein was indeed a genius.
Leaving end users high and dry, in favor of satisfying the paranoia of the Fortune 50 companies that want to be assured they aren't vulnerable to the BotW (Bug of the Week), while they remain vulnerable to several that won't make BotW for weeks, months, hell maybe years, is in my eyes not only irresponsible, but highly, gratuitously, and self-servingly unethical.
But in the immortal words of Dennis Miller, that's just my opinion, I could be wrong.
Although I do have a feeling that Tom was referring more to the "outing" of specific exploitation techniques, the result being a reduction in the overall number of instances of those specific types of vulnerabilities, which is definitely 100% a Good Thing(tm). You just can't have the benefits without the drawbacks that the process creates. And I'm not so sure the ends justify the means in this case.