Dan Geer was kind enough to send me a copy of his new book Economics and Strategies of Data Security, published by his employer, Verdasys. The book is exceptionally well written and packed with the sorts of insights that make Dan one of my Three Wise Men. I'd like to present a few excerpts here, partially for my own easy reference but also because they might be useful to you. I recommend that anyone who reacts violently to these ideas try reading the book. It will take only an hour or two and you can vet your response against the full text, in context, and not these snippets.
In theory, there is no difference between theory and practice, but, in practice, there is. (prior to introduction)
That's why I dislike speculation on the effectiveness of security measures and prefer collecting evidence and performing tests.
[These changes to our computing models imply] that data must either become self-protecting (massive amounts of encryption and the conversion of passive data objects into miniature program fragments), or the endpoints where services and users come together will have to be very well instrumented indeed. We find the latter more plausible... [W]e can no longer locate the whole in whole, only the parts remain locatable and barely that. The sum -- that which is greater than the parts -- comes together evanescently and on demand, and therefore it is at the point of use that any protections have to be done. (pp 18-19)
I agree. Note that moving from the idea of "protecting data" to implementing it requires making some choices. I like the emphasis on visibility too.
Since the total workload for information security professionals is proportional to the cumulative sum of all attack vectors yet invented, but the total work factor for the attack side is proportional to the cost of creating a new attack tool, the professionalization of the attack class punctures the existing security equilibrium from a moderately symmetric one to a highly asymmetric one where the advantage is structurally more favorable to the attackers. (pp 30-31, emphasis added)
This is why defenders are losing, especially as we are tasked to do "more with less" as "multitalented specialists."
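Geer's asymmetry argument can be sketched numerically: the defense must cover every attack vector ever invented, while each attacker only pays the one-time cost of the next tool. A toy model (all constants are hypothetical illustration, not from the book):

```python
# Toy model of Geer's defender/attacker work asymmetry.
# All numbers are hypothetical illustration only.

def defender_workload(vectors_invented: int, cost_per_vector: float = 1.0) -> float:
    """Defenders must cover the cumulative sum of all attack vectors yet invented."""
    return vectors_invented * cost_per_vector

def attacker_workload(new_tool_cost: float = 1.0) -> float:
    """An attacker only pays the cost of creating the next new attack tool."""
    return new_tool_cost

for year, vectors in enumerate([10, 50, 200, 1000], start=1):
    ratio = defender_workload(vectors) / attacker_workload()
    print(f"year {year}: {vectors} known vectors -> defender/attacker work ratio {ratio:.0f}x")
```

The point of the sketch is only that the ratio grows without bound as vectors accumulate, which is the structural advantage Geer describes.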
To protect intellectual property you must model the attacker as an insider and prevention must be your only goal because secret data is never unrevealed thus any loss of it is never mitigable. (p 32)
I think this falls in the "nice in theory" category. Thinking of the attacker as an insider makes sense if you define insider as someone who has assumed trusted status by virtue of their unauthorized access to enterprise resources. I don't think you can equate an external party with an insider unless that external party is a former employee or working with an employee, thereby leveraging true insider knowledge and not just network position.
As far as prevention being the only goal, prevention eventually fails. So, that's obviously everyone's goal, but it has never been attainable and never will be. Getting closer to it is best, I agree.
This change in accounting standards, if it transpires, would make a very important statement to those of us who worry with data security, viz., that to be an accounted-for asset means there is a value that must be associated with it and that as a balance sheet item the Boards of Directors of all listed firms would answer to misuse of the corporate asset that data represents... [O]nce data has a value and once that value appears on the balance sheet, then the interplay between the Boards of Directors and the CEOs of this world will include, amongst the other valuations to be protected and to be grown, the valuation of data... For the business side, the most important realization is that data is rising as a fraction of total corporate wealth. (pp 43-44, 147)
This section's discussion of data as a goodwill asset could really change the rules of the game. I recommend reading it closely.
[To secure an enterprise] let's presume that we are not starting from a known state. In such a situation, we likely do not know how data moves and our first action would be to start recording how data moves across the board so as to build a model of data movement in lieu of a model of total data state... This focus on the anomalous is precisely what you do when you don't know everything but you do know something... A data surveillance regime set to "record" only, i.e., not to intervene but merely to watch, is a first step...
[R]eal knowledge of how data moves is very much not the norm, and thus the first priority for the firm is to get a handle on what is normal. Of course, knowing what is of value, such as through a formal data classification exercise, is the gold standard, and that should be the preferred long-term outcome...
[W]hen you know nothing, permit-all is the only option.
When you know something, default-permit is what you can and should do.
When you know everything, default-deny becomes possible, and only then.
Because of the special characteristics of data, if you don't watch it then you won't have it. (pp 47-4, 51)
That is pure gold. Monitor first, like Bruce said? Of course.
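Geer's permit-all / default-permit / default-deny progression maps naturally onto a policy stance keyed to how complete your model of normal data movement is. A minimal sketch (the function and enum names are mine, not the book's):

```python
from enum import Enum

class Policy(Enum):
    PERMIT_ALL = "permit-all"          # know nothing: record only, block nothing
    DEFAULT_PERMIT = "default-permit"  # know something: block known-bad, allow the rest
    DEFAULT_DENY = "default-deny"      # know everything: allow known-good only

def choose_policy(fraction_of_flows_understood: float) -> Policy:
    """Pick a stance from how complete your model of data movement is."""
    if fraction_of_flows_understood == 0.0:
        return Policy.PERMIT_ALL
    if fraction_of_flows_understood < 1.0:
        return Policy.DEFAULT_PERMIT
    return Policy.DEFAULT_DENY

print(choose_policy(0.0).value)  # just starting: watch and record
print(choose_policy(0.6).value)  # partial knowledge
print(choose_policy(1.0).value)  # complete knowledge, e.g. after data classification
```

The strict equality tests are deliberate: default-deny is possible "only then," when knowledge is complete, which in practice makes it a long-term goal rather than a starting point.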
[P]roving a negative is impossible except in the case where all possible alternatives are known and each is examined... "Prove a negative" in this context means to be able to show a skeptical party that such and such a thing did not happen... As a matter of science, to prove that something did not happen you must have every place where it could happen under surveillance...
The way it will pay you is that it, and it alone, will enable you to say "I can prove that X did not happen because I have records of everything that did happen, and X is not in there." (p 53, 76, 77-78)
This reminds me of my last point in Are You Secure? Prove It.
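The "prove a negative" claim can be made concrete: the assertion "X did not happen" is only sound when every place X could have happened was recorded; with partial coverage, absence from the record proves nothing. A sketch (the data structures here are hypothetical):

```python
def prove_did_not_happen(event, records, monitored_locations, all_locations):
    """Return True only when we can *prove* the event did not happen:
    every possible location must be under surveillance, and the event
    must be absent from the complete record. With partial coverage we
    can only answer "unknown" (None), never "did not happen"."""
    if monitored_locations < all_locations:  # proper subset: incomplete coverage
        return None
    return event not in records

all_sites = {"endpoint", "fileserver", "mail"}

# Full coverage: absence from the record is evidence.
print(prove_did_not_happen("exfil", {"login", "copy"}, all_sites, all_sites))  # True

# Partial coverage: absence proves nothing.
print(prove_did_not_happen("exfil", {"login"}, {"endpoint"}, all_sites))  # None
```

Returning None rather than False in the partial-coverage case is the whole point: "not observed" and "did not happen" are different claims, and only total surveillance lets you make the second.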
[I]f you want to get ahead of the threat you have to either invest more than the opponent does, you have to be a fortune teller, or you have to understand that when you are losing a game you cannot afford to lose, then you have to change the rules. (p 71)
Funny, I don't think IBM qualifies for any of those three.
[T]he cost of protection... is the tax you elect to pay in the absence of an event, and the cost of cleaning up failures that may occur if you elect not to pay the tax... "How much is everyone else spending?"... [I]f you are an outlier in that distribution then someone will ask why you are... [I]t is often better to look at data security not as a tax but as an investment... Nobody thinks of an investment and imagines perfection; no, they imagine (hope) that for a certain outlay there will be a corresponding return. Investment is a risk management practice that taxation never is; it trades one downside for another, and it is about odds. (pp 79-81)
Note Dan is not saying "investing" in security makes money. He says the uncertainty of the outlay is the controlling factor.
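The tax-versus-investment framing is about odds: a tax is a fixed loss, while an investment trades a certain outlay against an uncertain reduction in expected loss. A toy expected-value comparison (all figures are hypothetical):

```python
def expected_loss(p_incident: float, incident_cost: float) -> float:
    """Expected annual loss from an incident with the given probability and cost."""
    return p_incident * incident_cost

# Hypothetical: a $100k/yr security spend cuts annual incident
# probability from 10% to 2% against a $5M incident.
baseline = expected_loss(0.10, 5_000_000)            # spend nothing
with_spend = 100_000 + expected_loss(0.02, 5_000_000)  # outlay plus residual risk

print(f"expected annual cost without spend: ${baseline:,.0f}")
print(f"expected annual cost with spend:    ${with_spend:,.0f}")
# The spend is justified in expectation when the reduction in expected
# loss exceeds the outlay -- odds and trade-offs, not perfection.
```

Nothing in this arithmetic promises the incident will not occur; it only shows one downside (certain outlay) being traded for another (residual expected loss), which is the sense in which Geer calls investment a risk management practice.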
[T]he issue in data security is that our failure modes are not the random bad luck of physical breakage or discoordination between parts of the enterprise. Our opponents are sentient, and that -- sentient opponents -- makes all the difference... If a product does not have sentient opponents, then it is not a security product. (p 81)
Intelligent adversary...Intelligent adversary...Intelligent adversary...
[E]conomics favor an accountability model focused on the monitoring of information use rather than the gatekeeping of information access... Security that gets in the way is security that is circumvented, but an accountability system lets things go forward that must go forward. (pp 108-109)
This does not exactly square with "prevention as the only goal," so I agree with it.
If you like these excerpts, you'll like Dan's book!