Friday, December 05, 2014

Nothing Is Perfectly Secure

Recently a blog reader asked for my help. He said his colleagues have been arguing in favor of building perfectly secure systems. He had replied that you still need the capability to detect and respond to intrusions. The reader wanted to know my thoughts.

I believe that building perfectly secure systems is impossible. No one has ever been able to do it, and no one ever will.

Preventing intrusions is a laudable goal, but I think security is only as sound as one's ability to validate that the system is trustworthy. Trusted != trustworthy.

Even if you only wanted to make sure your "secure" system remains trustworthy, you need to monitor it.

Since history has shown everything can be compromised, your monitoring will likely reveal an intrusion.

Therefore, you will need a detection and a response capability.

If you reject the notion that your "secure" system will be compromised, and thereby reject the need for incident response, you still need a detection capability to validate trustworthiness.
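As a concrete illustration of that last point, here is a minimal sketch of one kind of detection capability: comparing a system's current state against a trusted baseline to validate that it is still trustworthy. This is my own hypothetical example, not anything from the post; the function names and the choice of file-integrity checking (in the spirit of tools like Tripwire) are assumptions for illustration only.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def baseline(root: Path) -> dict[str, str]:
    """Record a trusted snapshot: relative path -> digest."""
    return {str(p.relative_to(root)): sha256_of(p)
            for p in root.rglob("*") if p.is_file()}

def detect_changes(root: Path, trusted: dict[str, str]) -> list[str]:
    """Report files that changed or appeared since the baseline.

    An empty result validates the baseline; a non-empty result is the
    detection event that demands a response.
    """
    current = baseline(root)
    changed = [p for p, d in trusted.items() if current.get(p) != d]
    added = [p for p in current if p not in trusted]
    return sorted(changed + added)
```

Even if you believe the monitored system can never be compromised, running a check like this is the only way to demonstrate, rather than assume, that it remains in its trusted state.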

What do you think?


Venious said...

I agree; trust but verify. History has proven that detection and response are 100% required. Response also needs to include a plan of action for the inevitable data loss/leakage, which likely goes well beyond a technical response.

mwlucas said...

Any system that relies on "perfection" or "not making mistakes" is inherently exploitable.

Everything is made by human beings. And human beings fail, even when you have teams of careful people double-checking each other's work.

To say you'll never fail is classic Greek-legend hubris.

redpriest said...

As Dan Geer would say... "The absence of unmitigable surprise".

Anonymous said...

You should absolutely strive for a high level of security, but they are not arguing for that. They are arguing that monitoring and testing are not necessary.

They are using a belief in what they think makes a "secure system" in order to justify the argument against monitoring. That belief is a matter of faith which is contradicted by the facts.

You can use the same logic against any kind of QA or stress testing in the world. Can you imagine what would happen if Boeing decided that everything they made was so well-engineered that none of it required quality control or any kind of monitoring?

The same argument has been used to hide failures and excuse bad decisions throughout history, and it has always failed.

Anonymous said...

What does 'secure' mean? Even if you have a system that can't be used in any way other than what is designed (one definition of security) you are still subject to intentional or unintentional design flaws, or just a clever use of the existing functions. 'Secure' is really a human judgement, not one machines are likely to be able to make. Even if everything else is perfect, it can't be 'perfectly secure' since an authorized person can still go in and copy all the data and ship it off to someone else. I guess the old saying is true, perfectly secure means powered off, sealed in concrete, then dropped into the sun.

Anonymous said...

I'm sure a few people thought they had built "perfectly secure" systems a few years ago using the latest SSL libraries, and look how that turned out.
A "fully patched" or "secure" system is so only at that fleeting moment in time. As technology continues to compress space and time, that "fully patched and secure" moment grows shorter every day.

Anonymous said...

Schneier's Law: Anyone can invent a security system that he himself cannot break.

I'm not sure how I'd feel about having to work with someone who doesn't understand that. The job security probably isn't worth the headaches.