Response to Is Vulnerability Research Ethical?

One of my favorite sections in Information Security Magazine is the "face-off" between Bruce Schneier and Marcus Ranum. Often they agree, but offer different looks at the same issue. In the latest installment, "Face-Off: Is vulnerability research ethical?", they are clearly on opposite sides.

Bruce sees value in vulnerability research, because he believes that the ability to break a system is a precondition for designing a more secure system:

[W]hen someone shows me a security design by someone I don't know, my first question is, "What has the designer broken?" Anyone can design a security system that he cannot break. So when someone announces, "Here's my security system, and I can't break it," your first reaction should be, "Who are you?" If he's someone who has broken dozens of similar systems, his system is worth looking at. If he's never broken anything, the chance is zero that it will be any good.

This is a classic cryptographic mindset, and to a certain degree I agree with it. From my own NSM perspective, one problem I might encounter is the discovery of covert channels. If I don't understand how to evade my own monitoring mechanisms, how will I detect an intruder doing exactly that? However, I don't think being a ninja "breaker" makes one a ninja "builder." My "fourth Wise Man," Dr. Gene Spafford, agrees in his post What Did You Really Expect?:

[S]omeone with a history of breaking into systems, who had “reformed” and acted as a security consultant, was arrested for new criminal behavior...

Firms that hire “reformed” hackers to audit or guard their systems are not acting prudently any more than if they hired a “reformed” pedophile to babysit their kids. First of all, the ability to hack into a system involves a skill set that is not identical to that required to design a secure system or to perform an audit. Considering how weak many systems are, and how many attack tools are available, “hackers” have not necessarily been particularly skilled. (The same is true of “experts” who discover attacks and weaknesses in existing systems and then publish exploits, by the way — that behavior does not establish the bona fides for real expertise. If anything, it establishes a disregard for the community it endangers.)

More importantly, people who demonstrate a questionable level of trustworthiness and judgement at any point by committing criminal acts present a risk later on...
(emphasis added)

So, in some ways I agree with Bruce, but I think Gene's argument carries more weight. Read his whole post for more.

Marcus' take is different, and I find one of his arguments particularly compelling:

Bruce argues that searching out vulnerabilities and exposing them is going to help improve the quality of software, but it obviously has not--the last 20 years of software development (don't call it "engineering," please!) absolutely refutes this position...

The biggest mistake people make about the vulnerability game is falling for the ideology that "exposing the problem will help." I can prove to you how wrong that is, simply by pointing to Web 2.0 as an example.

Has what we've learned about writing software the last 20 years been expressed in the design of Web 2.0? Of course not! It can't even be said to have a "design." If showing people what vulnerabilities can do were going to somehow encourage software developers to be more careful about programming, Web 2.0 would not be happening...

If Bruce's argument is that vulnerability "research" helps teach us how to make better software, it would carry some weight if software were getting better rather than more expensive and complex. In fact, the latter is happening--and it scares me.
(emphasis added)

I agree with 95% of this argument. The 5% I would change is this: identifying vulnerabilities does address problems in code that has already shipped. I think history has demonstrated that products ship with vulnerabilities and always will, and that the vast majority of developers lack the will, skill, resources, business environment, and/or incentives to learn from the past.

Marcus unintentionally demonstrates that analog security is threat-centric (i.e., the real world focuses on threats), not vulnerability-centric, because vulnerability-centric security perpetually fails.

Comments

Anonymous said…
I wish I'd seen Marcus's comments earlier -- good point.

Basically, what it comes down to is that we are buying soft widgets at the big box store, then poking at them with sharp sticks to see when they fall over. The company then sends us a patch for that particular sharp-stick poke. And the people continue poking. The people with the best luck poking with the sticks are anointed as "Expert Stick Pokers" by the crowd. Some don't have much luck with their sticks (too thin), so they proclaim that the widget looks pretty sturdy to them -- and besides, the combination of crowd and suppliers finds and produces fixes pretty darn quick.

But almost nobody in the crowd recognizes that it was a cheap widget, it never claimed to be immune to sharp sticks, and even chimpanzees could have some success if they picked up one of the long sticks. Hardened widgets used to be available, but they were expensive and non-portable, so people stopped buying them. Now there is a large market in light, portable widgets -- that continue to fall over, and sometimes hurt people in the process.

Nonetheless, the crowd goes back to the store when a new widget is released -- now with blinking lights, chrome, and 897 new soft spots that shouldn't be poked with sticks. And while they're there, they'll buy a roll of aluminum foil to wrap the widget in to keep it safe from sticks -- at least from soft ones. And the government leads the pack because their procurement orders specified a widget of precisely that shape at the lowest possible cost.

So, nearly everyone can buy widgets (and aluminum foil), they all grumble about their quality, and they ooh and aah at the exploits of the stick wielders. And almost no one -- government or industry -- bothers to invest in how to build stronger widgets, because, after all, we're used to the current ones, punctures and all.

It's definitely not sound engineering or sound business practice.

Keep on with the good blogging!

Spaf, thanks for your comment. To summarize -- you think we could learn from our mistakes and make less vulnerable products, but it's not done because that approach is cost-prohibitive?

I agree with that for certain classes of problems, but for others it seems developers are just not adopting simple, cheap practices -- like disabling unnecessary services.
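
To make that last point concrete, here is a minimal sketch of the kind of cheap practice I mean. It assumes a host with Python and the psutil library available (neither appears in the discussion above; they are illustrative assumptions only) and simply enumerates listening sockets so an administrator can decide which services are unnecessary and turn them off.

```python
# Minimal sketch: list listening inet sockets so unnecessary services can be
# identified and then disabled. Assumes the psutil library is installed and
# that the script runs with enough privilege to see other processes' sockets.
import psutil


def listening_services():
    """Return sorted (address, port, process name) tuples for listening sockets."""
    results = set()
    for conn in psutil.net_connections(kind="inet"):
        if conn.status != psutil.CONN_LISTEN:
            continue
        try:
            name = psutil.Process(conn.pid).name() if conn.pid else "unknown"
        except psutil.NoSuchProcess:
            name = "unknown"
        results.add((conn.laddr.ip, conn.laddr.port, name))
    return sorted(results, key=lambda r: r[1])


if __name__ == "__main__":
    for ip, port, name in listening_services():
        print(f"{ip}:{port}\t{name}")
```

Anything in that output that the host does not actually need to serve is a candidate for being disabled -- a far cheaper exercise than waiting for someone to poke it with a stick.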
