Excerpts from Ross Anderson / Tyler Moore Paper

I got a chance to read a new paper by one of my three wise men (Ross Anderson) and his colleague (Tyler Moore): Information Security Economics - and Beyond. The following are my favorite sections.

Over the last few years, people have realised that security failure is caused by bad incentives at least as often as by bad design. Systems are particularly prone to failure when the person guarding them does not suffer the full cost of failure...

[R]isks cannot be managed better until they can be measured better. Most users cannot tell good security from bad, so developers are not compensated for efforts to strengthen their code. Some evaluation schemes are so badly managed that ‘approved’ products are less secure than random ones. Insurance is also problematic; the local and global correlations exhibited by different attack types largely determine what sort of insurance markets are feasible. Cyber-risk markets are thus generally uncompetitive, underdeveloped or specialised...

One of the observations that sparked interest in information security economics came from banking. In the USA, banks are generally liable for the costs of card fraud; when a customer disputes a transaction, the bank must either show she is trying to cheat it, or refund her money. In the UK, the banks had a much easier ride: they generally got away with claiming that their systems were ‘secure’, and telling customers who complained that they must be mistaken or lying. “Lucky bankers,” one might think; yet UK banks spent more on security and suffered more fraud. This may have been what economists call a moral-hazard effect: UK bank staff knew that customer complaints would not be taken seriously, so they became lazy and careless, leading to an epidemic of fraud.

In 1997, Ayres and Levitt analysed the Lojack car-theft prevention system and found that once a threshold of car owners in a city had installed it, auto theft plummeted, as the stolen car trade became too hazardous. This is a classic example of an externality, a side-effect of an economic transaction that may have positive or negative effects on third parties. Camp and Wolfram built on this in 2000 to analyze information security vulnerabilities as negative externalities, like air pollution: someone who connects an insecure PC to the Internet does not face the full economic costs of that, any more than someone burning a coal fire. They proposed trading vulnerability credits in the same way as carbon credits...

Asymmetric information plays a large role in information security. Moore showed that we can classify many problems as hidden-information or hidden-action problems. The classic case of hidden information is the ‘market for lemons’. Akerlof won a Nobel prize for the following simple yet profound insight: suppose that there are 100 used cars for sale in a town: 50 well-maintained cars worth $2000 each, and 50 ‘lemons’ worth $1000. The sellers know which is which, but the buyers don’t. What is the market price of a used car? You might think $1500; but at that price no good cars will be offered for sale. So the market price will be close to $1000.
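
As an aside, the lemons arithmetic is easy to play with. Here's a toy Python sketch (my own illustration, not from the paper) of why the price collapses: any seller whose car is worth more than the going price withdraws it, so buyers keep marking the price down until only lemons trade.

```python
# Toy sketch of Akerlof's lemons dynamic (my illustration, using the
# paper's numbers): sellers withdraw any car worth more than the offered
# price, so the equilibrium price falls toward the lemon value.

cars = [2000] * 50 + [1000] * 50  # true values, known only to sellers

def market_price(cars, start_price=1500, step=50):
    price = start_price
    while True:
        offered = [v for v in cars if v <= price]   # sellers only sell if price >= value
        if not offered:
            return None
        avg_value = sum(offered) / len(offered)     # what a rational buyer expects to get
        if price <= avg_value:
            return price                            # buyers are willing to pay; market clears
        price -= step                               # otherwise buyers mark the price down

print(market_price(cars))  # converges to 1000: only lemons trade
```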

Hidden information, about product quality, is one reason poor security products predominate. When users can’t tell good from bad, they might as well buy a cheap antivirus product for $10 as a better one for $20, and we may expect a race to the bottom on price.

Hidden-action problems arise when two parties wish to transact, but one party’s unobservable actions can impact the outcome. The classic example is insurance, where a policyholder may behave recklessly without the insurance company observing this...

[W]hy do so many vulnerabilities exist in the first place? A useful analogy might come from considering large software project failures: it has been known for years that perhaps 30% of large development projects fail, and this figure does not seem to change despite improvements in tools and training: people just build much bigger disasters nowadays than they did in the 1970s. This suggests that project failure is not fundamentally about technical risk but about the surrounding socio-economic factors (a point to which we will return later).

Similarly, when considering security, software writers have better tools and training than ten years ago, and are capable of creating more secure software, yet the economics of the software industry provide them with little incentive to do so.

In many markets, the attitude of ‘ship it Tuesday and get it right by version 3’ is perfectly rational behaviour. Many software markets have dominant firms thanks to the combination of high fixed and low marginal costs, network externalities and client lock-in noted above, so winning market races is all-important. In such races, competitors must appeal to complementers, such as application developers, for whom security gets in the way; and security tends to be a lemons market anyway. So platform vendors start off with too little security, and such security as they provide tends to be designed so that the compliance costs are dumped on the end users. Once a dominant position has been established, the vendor may add more security than is needed, but engineered in such a way as to maximise customer lock-in.

In some cases, security is even worse than a lemons market: even the vendor does not know how secure its software is. So buyers have no reason to pay more for protection, and vendors are disinclined to invest in it.

How can this be tackled? Economics has suggested two novel approaches to software security metrics: vulnerability markets and insurance...

Several variations on vulnerability markets have been proposed. Böhme has argued that software derivatives might be better. Contracts for software would be issued in pairs: the first pays a fixed value if no vulnerability is found in a program by a specific date, and the second pays another value if one is found. If these contracts can be traded, then their price should reflect the consensus on software quality. Software vendors, software company investors, and insurance companies could use such derivatives to hedge risks. A third possibility, due to Ozment, is to design a vulnerability market as an auction...
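
To make the derivative idea concrete, here's a quick sketch (my own, not Böhme's actual construction) of how the traded prices of a contract pair would translate into a market-implied probability that no vulnerability is found, ignoring interest and transaction costs:

```python
# Toy sketch (my illustration): a pair of contracts on some program, each
# paying 100 at expiry -- one leg pays if no vulnerability has been reported
# by the date, the other pays if one has.  Ignoring interest and transaction
# costs, the relative prices of the two legs give the market's implied
# probability of each outcome.

def implied_probabilities(price_no_vuln: float, price_vuln: float):
    """Return (P[no vulnerability], P[vulnerability]) implied by the leg prices."""
    total = price_no_vuln + price_vuln
    return price_no_vuln / total, price_vuln / total

# Hypothetical quotes: the 'no vulnerability' leg trades at 70, the other at 30.
p_clean, p_flawed = implied_probabilities(70.0, 30.0)
print(f"market-implied chance of no vulnerability: {p_clean:.0%}")   # 70%
print(f"market-implied chance of a vulnerability:  {p_flawed:.0%}")  # 30%
```

A vendor or insurer worried about the ‘vulnerability found’ outcome could buy that leg as a hedge, which is the risk-transfer use the authors describe.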

An alternative approach is insurance. Underwriters often use expert assessors to look at a client firm’s IT infrastructure and management; this provides data to both the insured and the insurer. Over the long run, insurers learn to value risks more accurately. Right now, however, the cyber-insurance market is both underdeveloped and underutilised. One reason, according to Böhme and Kataria, is the interdependence of risk, which takes both local and global forms. Firms’ IT infrastructure is connected to other entities – so their efforts may be undermined by failures elsewhere.

Cyber-attacks often exploit a vulnerability in a program used by many firms. Interdependence can make some cyber-risks unattractive to insurers – particularly those risks that are globally rather than locally correlated, such as worm and virus attacks, and systemic risks such as Y2K.
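
A back-of-the-envelope simulation (my own toy model, not the authors') shows why correlation matters so much to an insurer: independent breaches and a globally correlated worm can carry the same expected loss, yet the correlated case occasionally hits the whole book of business in a single year.

```python
# Toy model (my own assumptions) of independent vs. globally correlated
# cyber-risk: both cases have the same expected annual loss, but the
# correlated case (a worm that hits every policyholder at once) produces
# rare, enormous aggregate claims that are hard to underwrite.

import random

def worst_year(n_firms=1000, p=0.01, loss=100_000, correlated=False, trials=5_000):
    worst = 0
    for _ in range(trials):
        if correlated:
            # one global event either hits everyone or no one
            total = n_firms * loss if random.random() < p else 0
        else:
            # each firm is breached independently
            total = sum(loss for _ in range(n_firms) if random.random() < p)
        worst = max(worst, total)
    return worst

random.seed(1)
print("worst year, independent risks:", worst_year(correlated=False))
print("worst year, correlated risks: ", worst_year(correlated=True))
```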

Many writers have called for software risks to be transferred to the vendors; but if this were the law, it is unlikely that Microsoft would be able to buy insurance. So far, vendors have succeeded in dumping most software risks; but this outcome is also far from being socially optimal. Even at the level of customer firms, correlated risk makes firms under-invest in both security technology and cyber-insurance. Cyber-insurance markets may in any case lack the volume and liquidity to become efficient.
(emphasis added)

If you made it this far, here's my small contribution to this paper: what about breach derivatives? To paraphrase the paper, contracts for companies would be issued in pairs: the first pays a fixed value if no breach is reported by a company by a specific date, and the second pays another value if one is reported. If these contracts can be traded, then their price should reflect the consensus on company security.
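
With some hypothetical quotes, the read-out would look like this (a toy sketch; the companies and prices are invented):

```python
# Toy illustration of the breach-derivative idea (hypothetical companies and
# prices): each company has a pair of contracts paying 100 at expiry -- one if
# no breach is reported by the date, one if a breach is reported.  Sorting by
# the implied breach probability gives the market's consensus security ranking.

quotes = {
    # company: (price of 'no breach reported' leg, price of 'breach reported' leg)
    "AcmeBank":   (82.0, 18.0),
    "WidgetCorp": (55.0, 45.0),
    "RetailCo":   (30.0, 70.0),
}

def implied_breach_probability(no_breach_price: float, breach_price: float) -> float:
    return breach_price / (no_breach_price + breach_price)

for company, (p_no, p_yes) in sorted(
        quotes.items(), key=lambda kv: implied_breach_probability(*kv[1])):
    print(f"{company:10s} implied breach probability "
          f"{implied_breach_probability(p_no, p_yes):.0%}")
```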

I understand the incentives for companies to stay quiet about breaches, but this market could encourage people to report. I imagine it could also encourage intruders to compromise a company intentionally, as the authors mention:

One criticism of all market-based approaches is that they might increase the number of identified vulnerabilities by motivating more people to search for flaws.

What do you think?

Comments

Anonymous said…
One of the papers delivered at WEIS in Minneapolis in 2004 spoke to your derivatives question (http://www.dtc.umn.edu/weis2004/adkins.pdf).
I confess to finding it challenging to follow (Barbie and I have a similar aptitude for math, if not fashion), but clearly the idea is one which is at least potentially sensible.
Unknown said…
I summed this argument up in my blog a while back. I call it the Plastic Swimming Pool Theory of Security - "When one person pisses in a swimming pool it affects everyone"

I have the theory which points out the problem. I don't have a solution though. I don't think that a "carbon credits" copy would work. It would be too difficult and time-consuming to work out what patches a person is missing and then to try to get their banking details so you can charge them maybe 28c or $1.26.

One way this could be handled nicely is if ISPs had IPSs that would block their customers' network access if strange traffic was detected. This would have to be handled very delicately.
Anonymous said…
You'd have to account for unethical programmers gaming the system, i.e. 1. develop software with deliberately added vulnerabilities, 2. report them through someone else, 3. Profit.
