So what about WEIS? I attended because of the "big brains" in the audience. Seriously, how often do you get Dan Geer, Ross Anderson, Whit Diffie, Bruce Schneier, Hal Varian, etc., in the same room? I should have taken a picture. Dumb security groupie.
I'll share a few thoughts.
- Tracey Vispoli from Chubb Insurance spoke about cyber insurance. Wow, what an interesting perspective. She said the industry has "no expected loss data" and "no financial impact data." Put that in your pipe and smoke it, Annualized Loss Expectancy (ALE) fans! So how does Chubb price risk without any data, in order to sell policies? Easy -- price them high and see what happens. This is what the industry did when legislators started creating laws on employment discrimination. Companies wanted insurance, so the industry made them pay through the nose. Later, to compete, insurers dropped rates -- but too low. When they started losing money they jacked the rates back up. Eventually insurers accumulated some data, but only after years of offering the service in the marketplace. That floored me, but it makes sense now.
- Again on insurance, Tracey said the industry insures for incidents whose impact can be concretely and quickly measured. What does that mean? Insurance against economic espionage, national security incidents, and related events is unlikely because you can't really measure the impact, at least in the short term!
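As an aside on the ALE jab above: the textbook formula multiplies Single Loss Expectancy (SLE) by Annualized Rate of Occurrence (ARO). A minimal sketch below, with entirely hypothetical numbers, shows why Tracey's point stings -- without expected loss data, neither input can be estimated with any confidence.

```python
# Annualized Loss Expectancy: ALE = SLE * ARO.
# Both inputs require the historical loss data Chubb says doesn't exist.
def ale(single_loss_expectancy: float, annualized_rate_of_occurrence: float) -> float:
    """Return the expected annual loss from one risk scenario."""
    return single_loss_expectancy * annualized_rate_of_occurrence

# Hypothetical illustration: a $250,000 breach expected once every four years.
print(ale(250_000, 0.25))  # 62500.0
```

The arithmetic is trivial; the hard part is defending the two inputs, which is exactly the gap the insurers are wrestling with.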
- After spending two days with academics, I'd like to add to Allan Schiffman's famous phrase "Amateurs study cryptography; professionals study economics":
Amateurs study cryptography; professionals study economics. Operators work in the real world.
Seriously, I think economics will help mitigate many security problems, but some researchers need to visit living, breathing enterprise environments before publishing papers. I won't name names, but if you're writing a paper that relies on raw IDS alerts to measure "attacks on open source software," you need to spend some time in a SOC or CIRT to see what analysts think of that kind of "evidence."
- It seems researchers have a suite of academic tools (math, statistics, functions, models, game theory, simulations, previous research, etc.) and they look for data to which they can apply those tools. They formulate a hypothesis, and at that point the applicability of the approach is probably out the window. In several talks I noticed early on that the real topic was the implementation of an analytical technique, with the underlying problem left several slides back. This seemed a little weird, but it makes sense in the context of researchers doing what they know how to do -- identify an issue, develop a hypothesis, collect data, etc.
Overall I found the experience very interesting, but I'm not sure whether I'll try to return next year.