Economist on the Peril of Models

Anyone who has been watching financial television stations in the US has seen commentary on the state of our markets with respect to subprime mortgages. I'd like to cite the 21 July 2007 issue of the Economist to make a point that resonates with digital security.

Both [Bear Stearns] funds had invested heavily in securities backed by subprime mortgages... On July 17th it admitted that there “is effectively no value left” in one of the funds, and “very little value left” in the other.

Such brutal clarity is, however, a rarity in the world of complex derivatives. Investors may now know what the two Bear Stearns funds are worth. But accountants are still unsure how to put a value on the instruments that got them into trouble.

This reminds me of a data breach -- the same instant, brutal clarity.

Traditionally, a company's accounts would record the value of an asset at its historic cost (ie, the price the company paid for it). Under so-called “fair value” accounting, however, book-keepers can now record the value of an asset at its market price (ie, the price the company could get for it).

But many complex derivatives, such as mortgage-backed securities, do not trade smoothly and frequently in arm's length markets. This makes it impossible for book-keepers to “mark” them to market. Instead they resort to “mark-to-model” accounting, entering values based on the output of a computer.

Note the reference to "complex[ity]" and to book-keepers basing their valuations on models built by other people, who in turn made their own assumptions. Is this starting to sound like risk analysis to you, too?
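The "mark-to-model" idea in the quote above can be sketched in a few lines. This is a hypothetical illustration, not any real pricing model; the function names, cash flows, discount rates, and default rates are all assumed for the example. The point is that with no market quote, the book value is whatever the model's guessed inputs say it is.

```python
# Hypothetical sketch of mark-to-market vs. mark-to-model valuation.
# All names and numbers are illustrative assumptions, not real pricing logic.

def mark_to_market(market_quote):
    """Value the asset at its observed market price."""
    return market_quote

def mark_to_model(cash_flows, discount_rate, default_rate):
    """Value the asset from assumed inputs when no market quote exists."""
    value = 0.0
    for year, cf in enumerate(cash_flows, start=1):
        # Each year's cash flow is haircut by an assumed default rate...
        expected_cf = cf * (1 - default_rate) ** year
        # ...and discounted back to today at an assumed rate.
        value += expected_cf / (1 + discount_rate) ** year
    return value

# The same instrument gets very different book values
# depending on whose assumptions feed the model:
cash_flows = [100, 100, 100]
optimistic = mark_to_model(cash_flows, discount_rate=0.05, default_rate=0.02)
pessimistic = mark_to_model(cash_flows, discount_rate=0.08, default_rate=0.15)
print(round(optimistic, 2), round(pessimistic, 2))
```

Two book-keepers running the "same" model with slightly different guesses can report values tens of percent apart, and neither number is checkable against an arm's-length trade.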

Unfortunately, the market does not always resemble the model... Models are supposed to show the price an asset would fetch in a sale. But in an illiquid market, a big sale can itself drive down prices. This can sometimes create a sizeable difference between “mark-to-model” valuations and true market prices.

That is not the only problem with fair-value accounting. According to Richard Herring, a finance professor at the Wharton School, “models are easy to manipulate”...

Unfortunately, the alternatives to fair-value accounting can be worse. Historic cost may be harder to manipulate than the results of a model. But as Bob Herz, chairman of America's Financial Accounting Standards Board, points out, it too is “replete with all sorts of guesses”, such as depreciation rates...

"Models are easy to manipulate" and alternatives are also "replete with all sorts of guesses." This sounds exactly like risk analysis now.

Fair value is perhaps most worrying for auditors, who are often blamed for faulty accounts. Faced with murky models, the best they can do is examine assumptions and ensure disclosure.

This means that the role of the auditor becomes that of an outside expert who makes a new set of subjective decisions, perhaps challenging the assumptions of those who made subjective decisions when creating their model. The auditor's advantage, however, is that he/she has insight into the workings of many similar companies, and could compare "best practices" against the specific company being audited.

Incidentally, I would love to know how the "CISO of a major Wall Street bank" who criticized Dan Geer as mentioned in Are the Questions Sound? feels now about his precious financial models. Somehow I doubt his bonus will be as big as it was last year, if his company is even solvent by year's end.


Anonymous said…
"'Models are easy to manipulate' and alternatives are also 'replete with all sorts of guesses.' This sounds exactly like risk analysis now."

C'mon Richard, you're much too smart to cherry pick like this.

"he/she has insight into the workings of many similar companies, and could compare 'best practices' against the specific company being audited."

Again, aren't "best practices" simply the attempt by a lazy security professional to transfer the risk of being wrong about their analysis to someone else's analysis? After all, why do we implement the controls and processes in a best practice list? Because someone else has found that it reduces risk to some degree in their own crude analysis, right?

Let me offer this: "Best practices" themselves are replete with all sorts of guesses and assumptions as well. Guesses about what the threat community has looked like for you, or may look like in the future. Assumptions about your risk tolerance, your capability to manage the best practices on an ongoing basis, the frequency with which certain threat communities will create threat events, and, most importantly, the assumption that you won't stupidly game best practices the same way you insinuate that you can "game" risk analysis.

FWIW - you can't really "game" Bayesian inference any more than you can "game" the scientific method.
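The commenter's point about Bayesian inference can be made concrete with a textbook Bayes' rule update. The scenario and every number here are assumed for illustration: a prior belief that a host is compromised, revised by an IDS alert. Once the inputs are stated, the arithmetic is fixed by the rule, not by the analyst's preference.

```python
# Minimal Bayes' rule update (all numbers are illustrative assumptions).

prior = 0.01           # assumed base rate: 1% of hosts compromised
p_alert_if_bad = 0.9   # assumed detection rate of the IDS
p_alert_if_ok = 0.05   # assumed false-positive rate of the IDS

# P(alert) = P(alert | compromised) P(compromised)
#          + P(alert | clean) P(clean)
evidence = p_alert_if_bad * prior + p_alert_if_ok * (1 - prior)

# P(compromised | alert) by Bayes' rule
posterior = p_alert_if_bad * prior / evidence

print(round(posterior, 3))  # prints 0.154
```

Note what the machinery forces you to confront: even with a 90% detection rate, a low base rate means most alerts are false positives. You can argue about the inputs, but you cannot quietly fudge the update step.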

Look, we can sit back and aim our scorn and laughter at folks who use crude risk assessment methods such as 800-30 and silly risk analysis modeling like "risk = threat x vulnerability x impact." But at some point someone will develop (or already has developed?) a good framework for risk expression that will start to answer that CISO's questions (or teach him to ask the right questions), and then they will have a significant competitive advantage over those who are still telling us that bad models are bad, so we should stop using the scientific method and go back to interpreting the liver spots in our security pigeons.
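The "risk = threat x vulnerability x impact" model the commenter mocks takes only a few lines to write down. The 1-5 ordinal scales are an assumed convention for this sketch, not part of any standard. The output looks precise, but it inherits every guess made for the three inputs.

```python
# The crude "risk = threat x vulnerability x impact" scoring model,
# with illustrative 1-5 ordinal scales (an assumed convention).

def risk_score(threat, vulnerability, impact):
    """Multiply three guessed ordinal ratings into one 'risk' number."""
    for v in (threat, vulnerability, impact):
        if not 1 <= v <= 5:
            raise ValueError("inputs are assumed on a 1-5 scale")
    return threat * vulnerability * impact

# Two analysts guessing one notch apart get very different "risk":
print(risk_score(3, 3, 3))  # 27
print(risk_score(4, 4, 4))  # 64
```

A one-step disagreement on each input more than doubles the score, which is exactly the kind of model fragility the thread is arguing about: the formula is trivially "gameable" through its inputs, even though the multiplication itself is sound.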

And you know what? Eventually, that good framework (or model) will be replaced by a better one. And that model replaced by an even better one and so on and so forth... That's how science works - why should we be any different? Why should we cling to these, our dark ages?
