On a similar note, I was considering the idea of intrusion tolerance recently, loosely defined as a system's ability to continue functioning properly despite being compromised. A pioneer in the field describes the concept thus:
> Classical security-related work has on the other hand privileged, with few exceptions, intrusion prevention... [With intrusion tolerance, i]nstead of trying to prevent every single intrusion, these are allowed, but tolerated: the system triggers mechanisms that prevent the intrusion from generating a system security failure.
It occurred to me recently that, in one sense, we have already fielded intrusion-tolerant systems. Any computer operated, owned, or managed by a person who doesn't care about its integrity is an intrusion-tolerant system.
People tolerate the intrusion for various reasons, such as:
- "I don't think any threats are attacking me."
- "I don't see my system or information being disclosed/degraded/denied."
- "I don't have anything valuable on my system."
All of those beliefs are false, but intrusion-tolerant systems (meaning the human plus the hardware and software) tolerate intrusions anyway. What's worse, modern threats understand these parameters and seek to work within them, rather than doing something stupid, like opening and closing the CD-ROM tray or wasting bandwidth, that would tip off the human by interfering with the operation of the system.
Richard Bejtlich is teaching new classes in DC and Europe in 2009. Register by 1 Jan and 1 Feb, respectively, for the best rates.