Excellent Schneier Article on Selling Security

Bruce Schneier wrote an excellent article titled How to Sell Security. This is my favorite section:

How does Prospect Theory explain the difficulty of selling the prevention of a security breach? It's a choice between a small sure loss -- the cost of the security product -- and a large risky loss: for example, the results of an attack on one's network... [A]ll things being equal, buyers would rather take the chance that the attack won't happen than suffer the sure loss that comes from purchasing the security product.

Security sellers know this, even if they don't understand why, and are continually trying to frame their products in positive results. That's why you see slogans with the basic message, "We take care of security so you can focus on your business," or carefully crafted ROI models that demonstrate how profitable a security purchase can be. But these never seem to work. Security is fundamentally a negative sell.

One solution is to stoke fear. Fear is a primal emotion, far older than our ability to calculate trade-offs. And when people are truly scared, they're willing to do almost anything to make that feeling go away...

Though effective, fear mongering is not very ethical. The better solution is not to sell security directly, but to include it as part of a more general product or service... Vendors need to build security into the products and services that customers actually want. CIOs should include security as an integral part of everything they budget for...

Security is inherently about avoiding a negative, so you can never ignore the cognitive bias embedded so deeply in the human brain. But if you understand it, you have a better chance of overcoming it.

That neatly summarizes the greatest challenge facing our industry. The problem is compounded by the fact that the further up the corporate ladder one rises, the more likely a manager is to "take the chance that the attack won't happen." How many of you have listened to CEOs and other business leaders talk about the need to "take risks," "take a big swing," and so on?

I would add that many customers assume that security is already integrated, when it's not. Furthermore, many customers assume that incidents happen to "someone else," because they are "special," and never to them.

I would be interested in knowing what the risk literature says about people who don't put their own assets at risk, but who put others' assets at risk -- like financial sector traders. Does Bruce's summary -- all things being equal, we tend to be risk-averse when it comes to gains and risk-seeking when it comes to losses -- apply when other people's assets are being put in jeopardy? (Or is that a universal business problem?)
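The choice Bruce describes can be made concrete with a small sketch of the Kahneman-Tversky value function from Prospect Theory. The parameters below (alpha = 0.88, lambda = 2.25) are the commonly cited estimates from their work; the dollar figures and breach probability are invented for illustration, and probability weighting is omitted for simplicity.

```python
# Illustrative sketch of the Prospect Theory "negative sell" problem.
# Parameters are the commonly cited Kahneman-Tversky estimates;
# all dollar figures below are hypothetical.

ALPHA = 0.88   # diminishing sensitivity to gains and losses
LAMBDA = 2.25  # loss aversion coefficient

def value(x):
    """Prospect Theory value of an outcome x, relative to the status quo."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * ((-x) ** ALPHA)

# The buyer's choice: a small sure loss (buying the security product)
# versus a large risky loss (a breach that may never happen).
product_cost = 10_000   # sure loss
breach_cost = 200_000   # loss if the attack succeeds
breach_prob = 0.05      # expected monetary loss is also $10,000

sure_option = value(-product_cost)
gamble_option = breach_prob * value(-breach_cost)

print(f"sure loss value: {sure_option:,.0f}")
print(f"gamble value:    {gamble_option:,.0f}")
# Even though the expected dollar losses are identical, the gamble
# carries a less negative prospect value, so the buyer skips the
# product and takes the chance that the attack won't happen.
```

Because the value function is convex in the loss domain, the risky large loss "hurts" less in prospect than the certain small one, which is exactly why security is a negative sell even when the ROI math comes out even.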


Porter said…

I bet there is already statistical evidence of this. I wonder what you would find if you compared the results of two poker tournaments: one where the players use their own money, and another where they use fake money. I would imagine that more risks are taken when using the fake money than when using your own.

Granted, there are other variables at work, but this could give some useful results.

- CP
Anonymous said…
Bruce Schneier has got it the wrong way around: you first "wait" for an attack, then pitch the fix ;-)
marklar said…
The assumption that security is already integrated is indeed a common one; the adage "trust but verify" applies here too.
Anonymous said…
I love this blog.. nice
Richard, the answer to your question,

"I would be interested in knowing what the risk literature says about people who don't put their own assets at risk, but who put other's assets at risk",

is partly contained in work by Ross Anderson. http://www.cl.cam.ac.uk/~rja14/#Econ

"Over the last few years, it's become clear that many systems fail not for technical reasons so much as from misplaced incentives - often the people who could protect them are not the people who suffer the costs of failure."

"Prospect Theory" and the "Economics of Information Security" therefore work together to create an even worse situation:

Bruce is saying that you will not protect your own assets, and Ross is saying you may not even consider protecting others' assets, because it may not be your problem when security fails.

Now, if you also consider the Psychology of Security....


So I think this is not only a universal business problem, but inherent in humanity.
Anonymous said…
I think part of the problem stems from a lack of good metrics and measurements. With risky financial transactions, there's more of a history of numbers (usually tied directly to dollars) that you can use to estimate the cost of risk. People may underestimate or overestimate the risk, but at least you can come up with a justifiable model. With security, this is much harder. I see lots of dollar numbers, but rarely does the methodology behind those numbers look good.

If I buy an orange future at $50, and then there's a bumper crop and I can only sell it for $20, I know I've lost $30. Now someone tell me how much you lost with your last security incident. Does that include employee-hours worked? Loss of future business? Loss of reputation?
