Does Anything Really "End" In Digital Security?
The following excerpt comes from a recent post by Adam Shostack:

"15 years ago Aleph One published 'Smashing the Stack for Fun and Profit.' In it, he took a set of bugs and made them into a class, and the co-evolution of that class and defenses against it have in many ways defined Black Hat. Many of the most exciting and cited talks put forth new ways to reliably gain execution by corrupting memory, and others bypassed defenses put in place to make such exploitation harder or less useful. That memory corruption class of bugs isn't over, but the era ruled by the vulnerability is coming to an end."
Now, I'm not a programmer, and I don't play one at Mandiant. However, Adam's last sentence in that excerpt caught my attention. My observation over the years since Aleph One's historic paper appeared is this: we don't seem to "solve" any security problems. Accordingly, no "era" ever seems to end!
Is this true? To get some insight into whether my sense of history is correct, I consulted the Open Source Vulnerability Database (OSVDB) and ran queries like the following:
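In rough terms, each query searched OSVDB for entries matching "buffer overflow" disclosed between August of one year and August of the next. A minimal sketch of that counting logic, assuming a hypothetical CSV export (osvdb_export.csv) with "disclosure_date" and "title" columns, which is not the real OSVDB search interface:

```python
# A minimal sketch of the counting logic behind the OSVDB queries,
# assuming a hypothetical CSV export (osvdb_export.csv) with
# "disclosure_date" (YYYY-MM-DD) and "title" columns. The real queries
# were run through the OSVDB web search form, not a local export.
import csv
from collections import Counter
from datetime import date

counts = Counter()
with open("osvdb_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        if "buffer overflow" not in row["title"].lower():
            continue
        d = date.fromisoformat(row["disclosure_date"])
        # Bucket each entry into an August-to-August window, labeled by
        # the year in which the window ends (Aug 1996-Jul 1997 -> 1997).
        label = d.year + 1 if d.month >= 8 else d.year
        counts[label] += 1

for year in sorted(counts):
    print(year, counts[year])
```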
I chose these August-to-August periods to capture time as it has passed since Aleph One's paper was published in August 1996.
The results were:
Year  Vulns
1997     11
1998     10
1999      6
2000     48
2001     41
2002     43
2003     94
2004    127
2005     86
2006     27
2007     29
2008     39
2009     36
2010     48
2011     44
2012     45

As a chart, they looked like this:

[Chart: buffer overflow entries in OSVDB per August-to-August year, 1997-2012]
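For reference, a short Python sketch (using matplotlib) that reproduces the chart from the table above:

```python
# Reproduce the chart: OSVDB buffer overflow entries per
# August-to-August year, using the values from the table above.
import matplotlib.pyplot as plt

years = list(range(1997, 2013))
vulns = [11, 10, 6, 48, 41, 43, 94, 127, 86, 27, 29, 39, 36, 48, 44, 45]

plt.bar(years, vulns)
plt.xlabel("August-to-August year")
plt.ylabel("Buffer overflow vulnerabilities in OSVDB")
plt.title("OSVDB buffer overflow entries, 1997-2012")
plt.show()
```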
I find these results interesting, and I accept that I could have run the query incorrectly by selecting the wrong terms. If I got in the ballpark of the correct query, though, it seems we are not eliminating buffer overflows as a vulnerability class.
I suppose one could argue about where researchers are finding the vulnerabilities, but they're still there in software worth reporting to OSVDB, and apparently trending upward.
My bottom line is to remember that security appears to be a game of "and," not a game of "or." We just add problems; we tend not to substitute one for another.
Comments
I think that counting vulns may miss a subtle point I was making: what matters is not only the decline in vulns discovered, but the rise of memory defenses like ASLR in production code. At the same time, the products that were most relevant when that spike in vulns occurred are being phased out. For example, in 2012 there are simply fewer XP systems, or Linux distros deployed in 2004, still in service. So a vuln discovered in 2004 was still potentially easy to exploit if left unpatched in 2006, but that system is now likely retired. Meanwhile, a vuln discovered in 2011 may be hard to exploit in 2011 because of the memory defenses now in place.
Getting to your actual numbers, I think it would be interesting to examine them in relation to the amount of code being produced in languages that risk memory corruption (perhaps using GitHub, SourceForge, or Google Code as a proxy; see the sketch below).
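Something like this against GitHub's public search API might be a starting point; it's only a crude proxy, since it counts repositories rather than lines of code, and GitHub only dates to 2008:

```python
# A crude proxy for "how much memory-unsafe code is being written":
# count GitHub repositories whose primary language is C, per year.
# Unauthenticated search is limited to 10 requests/minute, hence the sleep.
import time
import requests

for year in range(2008, 2013):  # GitHub launched in 2008
    params = {"q": f"language:c created:{year}-01-01..{year}-12-31"}
    r = requests.get("https://api.github.com/search/repositories", params=params)
    r.raise_for_status()
    print(year, r.json()["total_count"])
    time.sleep(10)
```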
I think it would be even more interesting to compare reported vulns to exploit attempts or even successful exploits, but those are tricky numbers to come by.
Regardless, thanks for giving thought to testing my claims.