Clowns Base Key Financial Rate on Feelings, Not Data

If you've been reading this blog for a while, you know I don't think very highly of mathematical valuations of "risk." I think even less highly of the clowns in the financial sector who call security professionals "stupid" because we can't match their "five digit accuracy" for risk valuation. We all know how well those "five digit" models worked out. (And as you see from the last link, I was calling their bluff in 2007 before the markets imploded.)

Catching up on last week's Economist this morning, I found another example of financial buffoonery that boggles the mind. The article is online: Inter-bank interest rates; Cleaning up LIBOR -- A benchmark which matters to everyone needs fixing:

It is among the most important prices in finance. So allegations that LIBOR (the London inter-bank offered rate) has been manipulated are a serious worry.

LIBOR is meant to be a measure of banks’ own borrowing costs, and is used as the foundation for a host of other interest rates. Everyone is affected by LIBOR: it influences the payments made on mortgages and personal loans, and those received on investments and pensions.

Given its importance, the way LIBOR is calculated is astonishingly flimsy. LIBOR rates are needed, every day, for 15 different borrowing maturities in ten different currencies. But hard data on banks’ borrowing costs are not available every day, and this is the root of the LIBOR problem.

The British Bankers’ Association (BBA), responsible for LIBOR, gets around it by asking banks, each day, what they feel they should pay to borrow.

So LIBOR rates—and the returns on $360 trillion of financial contracts related to them, five times global GDP—are based on best guesses rather than hard data.

Let that sink in and forget about what you learned in business school or economics classes. LIBOR isn't based on actual rates; it's based on feelings!
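To make the mechanics concrete, here is a minimal sketch of how a survey-based fixing like this gets turned into a single published number. The BBA's methodology at the time reportedly discarded the top and bottom quartiles of submissions and averaged the rest; the panel data and function names below are invented for illustration, not the BBA's actual process or code.

```python
# Hypothetical illustration: turning daily panel submissions into one
# benchmark rate via an interquartile trimmed mean. The trimming rule is
# assumed; the submissions are invented.

def trimmed_mean_fixing(submissions):
    """Discard the top and bottom quartiles of submissions, average the rest."""
    rates = sorted(submissions)
    quartile = len(rates) // 4
    kept = rates[quartile:len(rates) - quartile]
    return sum(kept) / len(kept)

# Each "submission" is just a bank's answer to "what do you feel you should
# pay to borrow?" -- an estimate, not an observed transaction.
panel_guesses = [2.45, 2.47, 2.46, 2.50, 2.44, 2.48, 2.46, 2.49,
                 2.45, 2.47, 2.46, 2.52, 2.43, 2.47, 2.46, 2.48]

print(round(trimmed_mean_fixing(panel_guesses), 4))
```

Trimming the extremes blunts any single bank's influence, but it does nothing to change the nature of the inputs: the published rate is still an average of guesses.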

The next part of the article talks about suspicions that banks manipulate this broken process to the advantage of the financial sector.

The remainder offers recommendations for improvement:

[T]he BBA should revamp LIBOR to ensure it is simple, transparent and accountable. These principles suggest LIBOR should be based on actual inter-bank lending, with any gaps filled in with the help of statistical techniques. Banks’ own guesses should be used as a last resort, not the first.

And regulators should collect data that could help spot LIBOR cheats: banks should be required to submit information on other banks’ borrowing costs, as well as their own. Regulators could cross-check submissions against hard data on banking-sector risk, and publicly report LIBOR abusers.
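The cross-checking idea in that last paragraph is easy to picture. Below is a hypothetical sketch: each bank submits its own borrowing cost plus estimates of its peers' costs, and a regulator flags self-reports that sit well below what peers observe. The bank names, figures, and the 10-basis-point threshold are all assumptions for illustration.

```python
# Hypothetical sketch of cross-checking LIBOR submissions against peers'
# reports. All names, data, and thresholds are invented.

from statistics import median

def flag_suspicious(self_reports, peer_reports, threshold_bps=10):
    """Return banks whose own submission undercuts the peer median
    by more than `threshold_bps` basis points."""
    flagged = []
    for bank, own_rate in self_reports.items():
        peer_view = median(peer_reports[bank])   # what others think the bank pays
        gap_bps = (peer_view - own_rate) * 100   # 1 percentage point = 100 bps
        if gap_bps > threshold_bps:
            flagged.append((bank, round(gap_bps, 1)))
    return flagged

self_reports = {"Bank A": 2.40, "Bank B": 2.75}
peer_reports = {"Bank A": [2.65, 2.70, 2.68],    # peers see A paying far more
                "Bank B": [2.74, 2.76, 2.73]}

print(flag_suspicious(self_reports, peer_reports))  # [('Bank A', 28.0)]
```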

Keep this system in mind the next time a so-called "master of the universe" offers a lecture on measuring risk in digital security.

Comments

Tom C said…
I agree with your thinking on mathematical valuations of “risk,” but I would extend that thinking to qualitative evaluations of risk as well (this view seems to be heresy to most people I talk to about this) – especially if you accept that at some point all systems can be compromised. To rate the risk to a system or data as being high, medium/moderate, or low is absurd, and I’ve yet to see an instance where it was useful before a system had been compromised. The truth is people are horrible at measuring risk. That’s why people drive over the speed limit, buy lottery tickets, etc.

Not to go too far down a rabbit hole, but if you work in an industry where IT security is (to use a term I’ve heard you use before) controls centric, risk assessment becomes even less meaningful/useful. How many systems were “approved to operate” and later compromised? The only “risk assessment” going on here is: What happens to my career if the system is compromised, and the compromise is actually detected, after I approve the system to operate? Much more useful questions that I have rarely seen asked include: What is the value of the data over its useful lifetime? How much value does the data lose over time after it has been compromised (whether in its confidentiality, integrity, or availability)? Then you can figure out how long you can tolerate not knowing if the data is compromised.
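[A minimal sketch of the questions raised in that comment, assuming a simple exponential-decay model of data value after compromise. The model, half-life, and dollar figures are hypothetical, not a method endorsed in the post.]

```python
# Hypothetical sketch: if compromised data loses value with a known half-life,
# how long can an owner tolerate not knowing about the compromise?

import math

def remaining_value(initial_value, half_life_days, days_since_compromise):
    """Value left after a compromise, assuming exponential decay."""
    return initial_value * 0.5 ** (days_since_compromise / half_life_days)

def tolerable_detection_delay(initial_value, half_life_days, acceptable_loss):
    """Days of undetected compromise before losses exceed the acceptable amount."""
    fraction_kept = 1 - acceptable_loss / initial_value
    return half_life_days * math.log(fraction_kept, 0.5)

# Example: data worth $1M that loses half its value every 30 days once stolen,
# where the owner will accept at most $250K of loss.
print(round(tolerable_detection_delay(1_000_000, 30, 250_000), 1))  # ~12.4 days
```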
I think there are some numbers in information security that are measurable and should be measured, like:

- number of virus infections per year
- number of security events/incidents reported (you want these to go up at a steady rate)
- number of laptops lost

These are useful metrics because they live in Mediocristan. And number-crunching methods like Applied Information Economics could be useful (a red flag is raised when it references MPT, but it is not all rubbish).

Let's stop measuring things that live in Extremistan. Funny: if an organisation is under constant siege, advanced hacks could move from Extremistan to Mediocristan because of their frequency :)
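[The Mediocristan/Extremistan distinction above is Taleb's. A hypothetical illustration of why it matters for metrics: running averages of thin-tailed counts settle down quickly, while averages of heavy-tailed losses keep jumping as the sample grows. The distributions and parameters below are invented for illustration.]

```python
# Hypothetical comparison of running means for a thin-tailed metric
# (e.g. lost laptops per month) versus a heavy-tailed one (e.g. breach losses).

import random

random.seed(1)

def running_means(samples):
    total, means = 0.0, []
    for i, x in enumerate(samples, start=1):
        total += x
        means.append(total / i)
    return means

mediocristan = [random.gauss(12, 3) for _ in range(10_000)]        # thin-tailed counts
extremistan = [random.paretovariate(1.1) for _ in range(10_000)]   # heavy-tailed losses

med_means = running_means(mediocristan)
ext_means = running_means(extremistan)

for n in (100, 1_000, 10_000):
    # The thin-tailed average stabilises near 12; the heavy-tailed one keeps moving.
    print(n, round(med_means[n - 1], 2), round(ext_means[n - 1], 2))
```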
