OSVDB on Problems with Identifying Vulnerabilities

An OSVDB post titled "If you can't, how can we?" described a problem I had not previously considered regarding identifying vulnerabilities. ("VDB" refers to Vulnerability Database.)

Steve Christey w/ CVE recently posted that trying to keep up with Linux Kernel issues was getting to be a burden. Issues that may or may not be security related, even Kernel devs don’t fully know... Lately, Mozilla advisories are getting worse as they clump a dozen issues with "evidence of memory corruption" into a single advisory, that gets lumped into a single CVE. Doesn’t matter that they can be exploited separately or that some may not be exploitable at all. Reading the bugzilla entries that cover the issues is headache-inducing as their own devs frequently don’t understand the extent of the issues. Oh, if they make the bugzilla entry public. If the Linux Kernel devs and Mozilla browser wonks cannot figure out the extent of the issue, how are VDBs supposed to?...

VDBs deal with thousands of vulnerabilities a year, ranging from PHP applications to Oracle to Windows services to SCADA software to cellular telephones. We’re expected to have a basic understanding of ‘vulnerabilities’, but this isn’t 1995. Software and vulnerabilities have evolved over the years. They have moved from straight-forward overflows (before buffer vs stack vs heap vs underflow) and one type of XSS to a wide variety of issues that are far from trivial to exploit. For fifteen years, it has been a balancing act for VDBs when including Denial of Service (DOS) vulnerabilities because the details are often sparse and it is not clear if an unprivileged user can reasonably affect availability. Jump to today where the software developers cannot, or will not tell the masses what the real issue is...

It is important that VDBs continue to track these issues, and it is great that we have more insight and contact with the development teams of various projects. However, this insight and contact has paved the way for a new set of problems that over-tax an already burdened effort. MITRE receives almost 5 million dollars a year from the U.S. government to fund the C*E effort, including CVE [Based on FOIA information]. If they cannot keep up with these vulnerabilities, how do their "competitors", especially free / open source ones [5], have a chance?

Projects like the Linux Kernel are familiar with CVE entries. Many Linux distributions are CVE Numbering Authorities, and can assign a CVE entry to a particular vulnerability. It’s time that you (collectively) properly document and explain vulnerabilities so that VDBs don’t have to do the source code analysis, patch reversals or play 20 questions with the development team. Provide a clear understanding of what the vulnerability is so that we may properly document it, and customers can then judge the severity of the issue and act on it accordingly.

I think many of us just take for granted that assigning vulnerability identifiers is easy. Discovering the vulnerability is supposed to be the hard part. This is disturbing, because it means that the people with the most at stake -- the asset owners -- don't know how to assess risk. If you think about the risk equation, lack of knowledge of vulnerabilities just augments the problems of not knowing what you're protecting (assets) or who wants to exploit them (threats).
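To make that point concrete, here is a minimal sketch of one common multiplicative formulation of the risk equation (risk as a function of threat, vulnerability, and asset value). This formulation and the function name are my illustrative assumptions, not something specified in the post; the point it demonstrates is that an unknown vulnerability factor leaves the whole estimate undefined.

```python
def risk(threat, vulnerability, asset_value):
    """Toy multiplicative risk score: risk = threat * vulnerability * asset value.

    This is one common textbook formulation, assumed for illustration.
    If any factor is unknown (None), no meaningful estimate is possible --
    which is exactly the situation when vendors cannot say what a
    vulnerability actually is.
    """
    if None in (threat, vulnerability, asset_value):
        return None  # cannot assess risk with an unknown factor
    return threat * vulnerability * asset_value


# All three factors known: a usable (if crude) relative score.
print(risk(2, 3, 10))

# Vulnerability extent unknown -- the estimate collapses entirely.
print(risk(2, None, 10))
```

However crude the arithmetic, the structural point holds: asset owners who cannot characterize the vulnerability factor cannot rank or act on their risk, no matter how well they know their assets and threats.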

It's really a problem of incentives. The group with the strongest incentive to fully comprehend a vulnerability is the group that seeks to exploit it. Once they understand the vulnerability, they have a strong incentive to tell no one else, so they can benefit, financially or otherwise, from their asymmetric knowledge.

I am not a fan of government regulation or intervention, but it sounds like this incentive misalignment may require one or the other or both.

Richard Bejtlich is teaching new classes in Las Vegas in 2009. Regular Las Vegas registration ends 1 July.


Anonymous said…
Hi Richard.

I will leave comments regarding CVE vs. the Linux Kernel devs and Mozilla at the appropriate places, but I feel compelled to comment on your suggestion of government regulation.

This has bad news written all over it! It would create more problems than it could ever solve.

Once again the same questions come up:
- Whose government?
- What happens if the regulations are not met by a researcher in a non-regulated country?
- Process pain vs loopholes (providing incorrect details for less effort)


