Analog Security is Threat-Centric
If you were to pass the dark alley in the image at left, I doubt you would want to enter it. You could imagine all sorts of nasty encounters that might deprive you of property, limb, or life. Yet, few people can imagine the sorts of danger they encounter when using a public PC terminal, or connecting to a wireless access point, or visiting a malicious Web site with a vulnerable browser.
This is the problem with envisaging risk that I discussed earlier this week. Furthermore, security in the analog world is much more threat-centric. If I'm walking near or in a dark alley, and I see a shady character, I sense risk. I don't walk down the street checking myself for vulnerabilities while ignoring the threats watching me. ("Exposed neck? Could get hurt there. Bare hands? Might get burnt by acid." Etc...)
The digital security model seems like that of an unarmed combatant in a war zone. Survivability is determined solely by vulnerability exposure, the attractiveness of one's assets to a threat, and any countermeasures that might disrupt threats.
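To make those survivability factors concrete, here is a minimal sketch using the common multiplicative risk model. The function name, parameters, and weighting are my illustration of the factors listed above, not a formula from this post.

def digital_risk(threat_activity: float,
                 vulnerability_exposure: float,
                 asset_value: float,
                 countermeasure_effectiveness: float) -> float:
    """Higher threat activity, exposure, and asset value raise risk;
    effective countermeasures (0.0 to 1.0) reduce it."""
    return (threat_activity * vulnerability_exposure * asset_value
            * (1.0 - countermeasure_effectiveness))

# Example: a valuable, exposed asset facing an active threat,
# partially protected by countermeasures.
print(digital_risk(threat_activity=0.8,
                   vulnerability_exposure=0.6,
                   asset_value=0.9,
                   countermeasure_effectiveness=0.5))

Note that nothing in this model gives the defender any way to act against the threat itself, which is the point of the unarmed-combatant analogy.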
In the analog world, one can employ a variety of tactics to improve survivability. Avoiding risky areas is the easiest, but let's assume one has to enter dangerous locations. A potential victim could arm himself, whether with a weapon or with martial arts training. He could travel in groups, hire a bodyguard, or enlist the police's aid.
The term "hack-back" crops up in the digital scenario. This is really not a useful approach, because hacking the system attacking you does absolutely nothing to address the real threat -- the criminal at the keyboard.
In the analog world, consider the consequences for "hacking back." If you shoot an assailant, you'll have to explain yourself to the police or potentially a court of law. You probably can't shoot someone for simply being on your property, but you can if they threaten or try to harm you.
On a related note, we need some means to estimate threat level in a systematic, repeatable manner. When I say "threat" I mean threat, not vulnerability. Something like a system of distributed honeypots with distinct configurations might be helpful. Time-to-exploit for a given patch set might be tracked. I know the Honeynet Project periodically issues reports on how long it takes to 0wn a box, but it might be neat to see this in a regular, formal manner.
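As a rough illustration of what such a systematic threat estimate might look like, here is a small sketch that aggregates hypothetical time-to-compromise observations from distributed honeypots, grouped by configuration. The data, configuration names, and field layout are all assumptions for illustration, not a real Honeynet Project feed or format.

from collections import defaultdict
from statistics import median

# Each record: (honeypot configuration / patch set, hours until first compromise)
observations = [
    ("win2k-sp4-unpatched", 4.0),
    ("win2k-sp4-unpatched", 2.5),
    ("winxp-sp2-patched", 72.0),
    ("linux-2.6-default", 30.0),
    ("linux-2.6-default", 18.5),
]

by_config = defaultdict(list)
for config, hours in observations:
    by_config[config].append(hours)

# Median time-to-compromise per configuration; a shorter time implies a
# more active threat environment against that patch set.
for config, hours_list in sorted(by_config.items()):
    print(f"{config}: median time-to-compromise {median(hours_list):.1f} hours "
          f"({len(hours_list)} observations)")

Published on a regular schedule, a metric like this would measure threat activity directly, rather than inferring it from vulnerability counts.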
Comments
More unfortunately, it's very hard to come up with a common and practical definition of trespassing in the network world (the rule of law fails here), and short of living in a William Gibson novel, we have no way to shoot intruders (countermeasures fail here).
My point is not that we should kill network intruders, but that in the analog world, societies often accept the concept of self-defence where police protection is not possible. Currently, we have no real way to model this in the digital realm.