Saturday, April 21, 2012

Clowns Base Key Financial Rate on Feelings, Not Data

If you've been reading this blog for a while, you know I don't think very highly of mathematical valuations of "risk." I think even less highly of the clowns in the financial sector who call security professionals "stupid" because we can't match their "five digit accuracy" for risk valuation. We all know how well those "five digit" models worked out. (And as you can see from the last link, I was calling their bluff in 2007, before the markets imploded.)

Catching up on last week's Economist this morning I found another example of financial buffoonery that boggles the mind. The article is online: Inter-bank interest rates; Cleaning up LIBOR -- A benchmark which matters to everyone needs fixing:

It is among the most important prices in finance. So allegations that LIBOR (the London inter-bank offered rate) has been manipulated are a serious worry.

LIBOR is meant to be a measure of banks’ own borrowing costs, and is used as the foundation for a host of other interest rates. Everyone is affected by LIBOR: it influences the payments made on mortgages and personal loans, and those received on investments and pensions.

Given its importance, the way LIBOR is calculated is astonishingly flimsy. LIBOR rates are needed, every day, for 15 different borrowing maturities in ten different currencies. But hard data on banks’ borrowing costs are not available every day, and this is the root of the LIBOR problem.

The British Bankers’ Association (BBA), responsible for LIBOR, gets around it by asking banks, each day, what they feel they should pay to borrow.

So LIBOR rates—and the returns on $360 trillion of financial contracts related to them, five times global GDP—are based on best guesses rather than hard data.
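For context on how such "best guesses" become a single published number: the fixing at the time was a trimmed mean of the panel's submissions, dropping the top and bottom quartiles and averaging the middle half. The sketch below illustrates that mechanism only; the panel rates are invented, not real submissions.

```python
def trimmed_mean_fixing(submissions):
    """Compute a LIBOR-style fixing: sort the panel's submitted rates,
    discard the top and bottom quartiles, and average the middle half."""
    rates = sorted(submissions)
    n = len(rates)
    trim = n // 4  # number of submissions to drop from each end
    middle = rates[trim:n - trim]
    return sum(middle) / len(middle)

# Hypothetical panel of 16 submitted rates (percent); values are invented.
panel = [0.52, 0.55, 0.55, 0.56, 0.57, 0.57, 0.58, 0.58,
         0.59, 0.59, 0.60, 0.60, 0.61, 0.62, 0.65, 0.70]
print(trimmed_mean_fixing(panel))  # averages the middle eight submissions
```

Note what the trimming does and doesn't fix: it blunts a single wild submission, but if the whole panel "feels" the same way, the fixing moves with the feelings.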

Let that sink in, and forget what you learned in business school or economics classes. LIBOR isn't based on actual rates; it's based on feelings!

The next part of the article talks about suspicions that banks manipulate this broken process to the advantage of the financial sector.

The remainder offers recommendations for improvement:

[T]he BBA should revamp LIBOR to ensure it is simple, transparent and accountable. These principles suggest LIBOR should be based on actual inter-bank lending, with any gaps filled in with the help of statistical techniques. Banks’ own guesses should be used as a last resort, not the first.

And regulators should collect data that could help spot LIBOR cheats: banks should be required to submit information on other banks’ borrowing costs, as well as their own. Regulators could cross-check submissions against hard data on banking-sector risk, and publicly report LIBOR abusers.
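The cross-check the article recommends is, at its simplest, outlier detection: flag submissions that sit far from the rest of the panel and compare them against hard data. A minimal sketch of that idea, with an invented deviation threshold and invented bank names and rates:

```python
from statistics import median

def flag_outliers(submissions, threshold=0.10):
    """Flag banks whose submitted rate deviates from the panel median by
    more than `threshold` percentage points -- candidates for a closer
    look against transaction data. The threshold value is invented."""
    mid = median(submissions.values())
    return sorted(bank for bank, rate in submissions.items()
                  if abs(rate - mid) > threshold)

# Hypothetical submissions (percent); all names and numbers are invented.
submissions = {"Bank A": 0.57, "Bank B": 0.58, "Bank C": 0.59,
               "Bank D": 0.58, "Bank E": 0.44, "Bank F": 0.72}
print(flag_outliers(submissions))  # banks submitting far from the median
```

A flagged submission isn't proof of cheating, of course; it's a prompt to demand the hard borrowing data the rate should have been based on in the first place.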

Keep this system in mind the next time a so-called "master of the universe" offers a lecture on measuring risk in digital security.

Wednesday, April 04, 2012

Salvaging Poorly Worded Statistics

Today I joined a panel held at FOSE chaired by Mischel Kwon and featuring Amit Yoran. One of the attendees asked the following:

At another session I heard that "80% of all breaches are preventable." What do you think about that?

I gave a brief answer explaining why that statement isn't very useful. In this post I'll expand on that answer.

The first problem is the "80%." 80% of what? What is the sample set? Are the victims in the retail and hospitality sectors or the telecommunications and aerospace industries? Speaking in general terms, different sorts of organizations are at different levels of maturity, capability, and resourcefulness when it comes to digital security.

In the spirit of salvaging this poorly worded statistic, let's assume (rightly or wrongly) that the sample set involves the retail and hospitality sectors.

The second problem is the term "breach." What is a breach? Is it the compromise of a single computer? (What's compromise? Does it mean executing malicious code, or login via stolen credentials, or...?) What is the duration of the incident? There are dozens of questions that could be asked here.

To salvage this part, let's assume "breach" means "an incident involving execution of unauthorized code by an unauthorized intruder" on a single computer.

The third problem is the word "preventable." "Prevention" as a concept is becoming less useful by the second. Think about how an intruder might try to execute malicious code against a victim. Imagine a fully automated attack that happens when a victim visits a malicious Web site. An exploit kit could throw a dozen or more exploits against a browser and applications until one works. Are they all non-zero-day, or are some zero-day? Again, many questions beckon.

To salvage the end of the original statement, let's translate "preventable" into "exploitation of a vulnerability for which a patch had been publicly available for at least seven days."

Our new statement now reads something like "In the retail and hospitality sectors, 80% of the incidents where an unauthorized intruder successfully executed unauthorized code on a single computer exploited a vulnerability for which a patch had been publicly available for at least seven days."
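The refined statement is only computable if every qualifier maps to a field you can actually measure. A minimal sketch of what that computation looks like; all records, field names, and numbers below are invented for illustration:

```python
# Hypothetical incident records; every value here is invented.
incidents = [
    {"sector": "retail",      "code_executed": True,  "patch_age_days": 30},
    {"sector": "retail",      "code_executed": True,  "patch_age_days": 90},
    {"sector": "hospitality", "code_executed": True,  "patch_age_days": 14},
    {"sector": "retail",      "code_executed": True,  "patch_age_days": 0},    # zero-day
    {"sector": "aerospace",   "code_executed": True,  "patch_age_days": 60},   # outside sample set
    {"sector": "retail",      "code_executed": False, "patch_age_days": 45},   # no code execution
    {"sector": "hospitality", "code_executed": True,  "patch_age_days": 200},
]

# Sample set: only the sectors and breach definition we settled on above.
sample = [i for i in incidents
          if i["sector"] in ("retail", "hospitality") and i["code_executed"]]

# "Preventable" per our translation: patch public for at least seven days.
patchable = [i for i in sample if i["patch_age_days"] >= 7]

print(f"{100 * len(patchable) / len(sample):.0f}% of {len(sample)} qualifying incidents")
```

Notice that the headline number changes the moment you alter any filter: drop the aerospace exclusion or redefine "breach," and "80%" becomes a different figure. That sensitivity is exactly why the unqualified soundbite misleads.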

Isn't that catchy! That's why we hear shortcuts like the original statement, which are basically worthless. Unfortunately, they end up driving listeners toward poor conceptual and operational models.

The wordy but accurate statement says nothing about preventability, which is key. The reason is that a determined adversary, when confronted by a fully patched target, may decide to escalate to using a zero-day or other technique for which patches are irrelevant.