Does Anything Really "End" In Digital Security?

Adam Shostack wrote an interesting post last week titled Smashing the Future for Fun and Profit. He said in part:

15 years ago Aleph One published “Smashing the Stack for Fun and Profit.” In it, he took a set of bugs and made them into a class, and the co-evolution of that class and defenses against it have in many ways defined Black Hat. Many of the most exciting and cited talks put forth new ways to reliably gain execution by corrupting memory, and others bypassed defenses put in place to make such exploitation harder or less useful. That memory corruption class of bugs isn’t over, but the era ruled by the vulnerability is coming to an end.

Now, I'm not a programmer, and I don't play one at Mandiant. However, Adam's last sentence in the excerpt caught my attention. My observation over the period since Aleph One's historic paper was published is this: we don't seem to "solve" any security problems. Accordingly, no "era" ever seems to end!

Is this true? To get some insight into whether my sense of history is correct, I consulted the Open Source Vulnerability Database (OSVDB) and ran queries like the following:

Query for all vulnerabilities of attack type "input manipulation," with "buffer overflow" in the text, from time 1 Aug 96 to 1 Aug 97

I chose these August-to-August periods to mark the years since Aleph One's paper was published in August 1996.
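To make the windowing concrete, here is a minimal sketch (in Python, which the post itself doesn't use); the labeling convention is my assumption, matching the table of results, where the window running 1 Aug 96 to 1 Aug 97 is labeled 1997:

```python
from datetime import date

# Sixteen August-to-August query windows, starting with the month
# Aleph One's paper was published (August 1996). Each window is
# labeled by the year in which it ends.
windows = {y + 1: (date(y, 8, 1), date(y + 1, 8, 1)) for y in range(1996, 2012)}

print(windows[1997])  # the first window: 1 Aug 96 to 1 Aug 97
```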

The results were:

Year Vulns
1997 11
1998 10
1999 6
2000 48
2001 41
2002 43
2003 94
2004 127
2005 86
2006 27
2007 29
2008 39
2009 36
2010 48
2011 44
2012 45
As a chart, they looked like this:
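The original chart image hasn't survived here, but the table's counts can be re-rendered as a simple ASCII bar chart; this is my sketch, with the data copied from the table above:

```python
# Buffer overflow vulnerability counts from the OSVDB queries above,
# keyed by the August-to-August period's ending year.
counts = {
    1997: 11, 1998: 10, 1999: 6, 2000: 48, 2001: 41, 2002: 43,
    2003: 94, 2004: 127, 2005: 86, 2006: 27, 2007: 29, 2008: 39,
    2009: 36, 2010: 48, 2011: 44, 2012: 45,
}

# One '#' per two vulnerabilities keeps the chart terminal-width friendly.
for year, n in sorted(counts.items()):
    print(f"{year}  {'#' * (n // 2):<64}{n}")
```

The 2004 spike and the steady non-zero tail through 2012 are visible even in this rough form.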

I find these results interesting, though I accept I could have run the query incorrectly by choosing the wrong terms. If I got in the ballpark of the correct query, however, it seems we are not eliminating buffer overflows as a vulnerability class.

I suppose one could argue about where researchers are finding these vulnerabilities, but they are still present in software worth reporting to OSVDB, and apparently trending upward.

My bottom line is to remember that security appears to be a game of and, not a game of or. We keep adding new problems, and we tend not to retire old ones.


Matt said…
Interesting graph... Although, I imagine there are more open source applications now than back then. Is there any way to measure buffer overflows against the volume of code out there at any one time? I suspect the graph might actually trend downwards on that measure...
clint said…
More importantly, how many of those were exploitable and how many of those actually had exploits produced? Vulnerabilities by themselves are not that big of a deal. If they are exploitable and actually have exploits, then that's what really matters.
Adam said…
Interesting analysis, thanks for following up on my point! As you know, I love data, and enjoy it anytime someone brings data to contest my points.

I think that counting vulns may miss a subtle point I was making--it's not only the decline of vulns discovered, but the rise of memory defenses like ASLR in production code. At the same time, the products most relevant when that spike in vulns occurred are being phased out. For example, in 2012 there's just less XP in use, and fewer of the Linux distros deployed in 2004 are still running. So a vuln discovered in 2004 was still potentially easy to exploit if unpatched in 2006, but that system is now likely retired. Meanwhile, a vuln discovered in 2011 may be hard to exploit in 2011 because of the memory defenses now in place.

Getting to your actual numbers, I think it would be interesting to examine them in relation to the amount of code being produced in languages that risk memory corruption (perhaps using GitHub, SourceForge, or Google Code as a proxy?).

I think it would be even more interesting to compare reported vulns to exploit attempts or even successful exploits, but those are tricky numbers to come by.

Regardless, thanks for giving thought to testing my claims.
