Are You Secure? Prove It.
Are you secure? Prove it. These five words form the core of my recent thinking on the digital security scene. Let me expand "secure" to mean the definition I provided in my first book: security is the process of maintaining an acceptable level of perceived risk. I defined risk as the probability of suffering harm or loss. You could expand my five-word question into: are you operating a process that maintains an acceptable level of perceived risk?
Let's review some of the answers you might hear to this question. I'll give an opinion regarding the utility of the answer as well.
For the purpose of this exercise let's assume it is possible to answer "yes" to this question. In other words, we just don't answer "no." We could all make arguments as to why it's impossible to be secure, but does that really mean there is no acceptable level of perceived risk in which you could operate? I doubt it.
So, are you secure? Prove it.
1. Yes. Then, crickets (i.e., silence for you non-imaginative folks). This is completely unacceptable. The failure to provide any kind of proof is security by belief. We want security by fact.
2. Yes, we have products X, Y, Z, etc. deployed. This is better, but it's another expression of belief and not fact. The only fact here is that technologies can be abused, subverted, and broken. Technologies can be simultaneously effective against one attack model and completely worthless against another.
3. Yes, we are compliant with regulation X. Regulatory compliance is usually a check-box paperwork exercise whose controls lag the attack models of the day by one to five years, if not more. Calling a compliant enterprise secure is like calling an ocean liner safe because it left dry dock with lifeboats and life jackets. If regulatory compliance is more than a paperwork self-survey, we approach the realm of real evidence. However, I have not seen any compliance assessments which measure anything of operational relevance.
4. Yes, we have logs indicating we prevented attacks X, Y, and Z. This is getting close to the right answer, but it's still inadequate. For the first time we have some real evidence (logs), but these will probably not provide the whole picture. Sure, the logs indicate what was stopped, but what about activities that were allowed? Were they all normal, or were some malicious but unrecognized by the preventative mechanism?
5. Yes, we do not have any indications that our systems are acting outside their expected usage patterns. Some would call this rationale the definition of security. Whether or not this answer is acceptable depends on the nature of the indications. If you have no indications because you are not monitoring anything, then this excuse is hollow. If you have no indications and you comprehensively track the state of an asset, then we are making real progress. That leads to the penultimate answer, which is very close to ideal.
6. Yes, we do not have any indications that our systems are acting outside their expected usage patterns, and we thoroughly collect, analyze, and escalate a variety of network-, host-, and memory-based evidence for signs of violations. This is really close to the correct answer. The absence of indications of intrusion is only significant if you have some assurance that you've properly instrumented and understood the asset. You must have trustworthy monitoring systems in order to trust that an asset is "secure." If this is really close, why isn't it correct?
7. Yes, we do not have any indications that our systems are acting outside their expected usage patterns; we thoroughly collect, analyze, and escalate a variety of network-, host-, and memory-based evidence for signs of violations; and we regularly test our detection and response people, processes, and tools against external adversary simulations that match or exceed the capabilities and intentions of the parties attacking our enterprise (i.e., the threat). Here you see why number 6 was insufficient: if you assumed number 6 was enough, you forgot to ensure that your operations were up to the task of detecting and responding to intrusions. Periodically you must benchmark your perceived effectiveness against a neutral third party in an operational exercise (a "red team" event). A final assumption inherent in all seven answers is that you know the assets you are trying to secure, which is no mean feat.
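The seven answers above read naturally as a maturity scale. As a rough sketch (the level names, numeric scoring, and the belief/fact cutoff are my own illustration, not part of the post):

```python
# Hypothetical sketch: the seven answers modeled as an assurance scale.
# Names and ordering follow the post; the enum itself is illustrative.
from enum import IntEnum

class AssuranceLevel(IntEnum):
    BELIEF_ONLY = 1        # "Yes." Then, crickets.
    PRODUCTS_DEPLOYED = 2  # belief in tools, not facts
    COMPLIANT = 3          # paperwork that lags real attack models
    PREVENTION_LOGS = 4    # evidence of what was blocked, nothing else
    NO_INDICATIONS = 5     # absence of indications, possibly unmonitored
    INSTRUMENTED = 6       # comprehensive monitoring backs the claim
    ADVERSARY_TESTED = 7   # monitoring validated by red-team exercises

def proof_quality(level: AssuranceLevel) -> str:
    """Classify an answer as security by belief or by fact,
    per the post's distinction (illustrative cutoff)."""
    return "fact" if level >= AssuranceLevel.INSTRUMENTED else "belief"

print(proof_quality(AssuranceLevel.COMPLIANT))         # belief
print(proof_quality(AssuranceLevel.ADVERSARY_TESTED))  # fact
```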
Incidentally, this post explains why deploying a so-called IPS does nothing for ensuring "security." Of course, you can demonstrate that it blocked attacks X, Y, and Z. But, how can you be sure it didn't miss something?
If you want to spend the least amount of money to take the biggest step towards Magnificent Number 7, you should implement Network Security Monitoring.
Comments
I really enjoyed this post. Where I see a relation to past discussions you and I have had at times is that at each increasing increment you are not only trying to add qualitative justifications to back up the probability of your belief statement concerning "secure", but you are also measuring your uncertainty (which is a probability theory term - a more "engineering friendly" term for the concept might be "variance").
So you are making a probability (and belief) statement about your state of "secure" here. You're using a more and more complex network of prior information (and seemingly more rational one the more you progress) to arrive at and defend the probabilities you attach to that belief statement. You are essentially attempting to create a Bayesian network.
Where early iterations start to fall short is a lack of frequency. Note that frequency becomes a prior you begin to add in at level 4 and progress up to periodic benchmarking in step 7.
Very cool stuff indeed.
'nuff said.
/Hoff
How would you go about performing #7 without some type of SEM? Ideally, you would combine SEM with NSM, which is what I plan on doing. Any suggestions? I've read through several of your posts regarding CS-MARS, etc. and I can understand how SEMs don't give you enough information to act upon alerts as they are alert-centric and usually don't provide you with session data or full content data, but at least they can point you in the right direction of further investigation. They provide you with what Daniel from the OSSEC project calls a LIDS (log-based intrusion detection system) and then do the job of correlating them from numerous devices. So how would you do the above (#7) without some sort of SEM?
PS. I've read a little about NSM and currently have a copy of the Tao and Extrusion Detection being shipped to me. Thanks.
Good post!
Can you please explain a bit more what you mean by an "external adversary" in #7. You say a bit later that you should test your environment against a "neutral third party". For example, does this party have a lot of prior knowledge of your environment (like an insider), or does the party start completely cold?
Regards
Johann
eg = exempli gratia = "for example"
ie = id est = "that is"
Raj
Now, how to design a poster of this blog post and make the text look like the silhouette of Richard?
Sguil does already correlate alert data, and even more! You have access to your session data and even full-content!
In our environment we still have a SEM running, but at the moment I use Sguil as my main application.
In the (near) future, I probably will be removing the SEM completely.
Robin
Really nice! Good job!
Fair enough -- although I only used those terms twice, and one was incorrect. Fixed now.
Any insights/comments on the SEM discussion above? Thanks.
Respectfully - what's the focus on 'escalate' in #6 & #7? Or, to spin this a different way, what's the definition of 'escalate'? To me, escalate means just making sure the proper level of people and resources are applied to the issue at hand. Which, to me, is what happens when something is properly 'analyzed.' So, if something is properly 'analyzed,' does it need to be 'escalated'? Respectfully, I just see 'escalation' as a subset of proper 'analysis,' which I suspect can't be what you mean.
As always, thanks,
Sean
Johann,
I recommend the external/third party review as a way to introduce some level of assurance that your detection and response team is aware of current attack models and capable of handling them. I used to perform this external benchmarking as a consultant. I intend to solicit similar outside tests against my own operation to ensure I am not ignoring anything significant.
First, remember that Sguil != NSM. Furthermore, no tool is NSM. The core of NSM consists of data types (alert, statistical, session, full content). If your SEM provides them, great. If not, you need to augment it. Sguil is the same. Sguil lacks statistical data, so many people augment with MRTG, Ntop, etc.
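To make the four NSM data types named above concrete, here is a minimal sketch; the field names are my own illustration, not any tool's actual schema:

```python
# Illustrative models of the four NSM data types: alert, session,
# full content, and statistical. Field choices are assumptions.
from dataclasses import dataclass

@dataclass
class AlertData:           # an IDS judgment: "this looks malicious"
    signature: str
    src_ip: str
    dst_ip: str

@dataclass
class SessionData:         # flow summary: who talked to whom, how much
    src_ip: str
    dst_ip: str
    dst_port: int
    bytes_transferred: int

@dataclass
class FullContentData:     # complete packet capture of the conversation
    pcap_path: str

@dataclass
class StatisticalData:     # aggregate trends, e.g. protocol breakdown
    protocol: str
    percent_of_traffic: float

# An analyst pivots from an alert to the session and full content
# behind it to judge whether the alert is a true positive.
alert = AlertData("hypothetical exploit attempt", "10.0.0.5", "192.0.2.10")
session = SessionData(alert.src_ip, alert.dst_ip, 443, 15_200)
capture = FullContentData("/nsm/pcap/2024-01-01.pcap")
```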
Second, remember that the N in NSM stands for network. Network is only one source of security monitoring data. The host consists of platform, OS, application, and data sources, usually as logs. These are all helpful and often critical resources. I begin instrumenting any enterprise at the network but I continue with logs. The third data source resides in memory, and it is usually collected during live response as part of an incident response.
Again, I agree that Sguil is not NSM ;)
* HIPS data
* System logs
* Application logs
?
I'm not trying to invent anything here. I'm trying to get this properly conceptualized as I move forward with this (which requires that I sell it first).
I want to make sure I can intelligently discuss the issue without creating confusion. So really the question is similar to the one above -- what's the overlap between SEM and NSM?
Here are my raw, likely disjointed thoughts:
The idea of NSM is taking different information types that can be helpful for security and getting them so that they're useful. That's great, and a given SEM implementation may not give us all of that. A given implementation of anything is usually too small to give us all of anything.
So what if we simply expand NSM to include all available types of events. That would mean more than just network events -- system, application, etc. So that would just be SM.
So would NSM simply be a subset of SM, which SEM vendors are trying to give us (but are failing at)?
So here's the potential statement:
NSM is the process that works best, but is limited to network-oriented analysis. The grail is to have the NSM process but with access to all available types of information, with the ability to pivot seamlessly through the different information sources. This would be "Security Monitoring", no?
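That "pivot seamlessly" idea might be sketched as keyed lookups across the different data stores. Everything here (store names, record contents, the source IP key) is invented for illustration:

```python
# Hypothetical sketch: pivoting between monitoring data sources
# keyed on a common attribute (here, a source IP address).
alerts    = [{"src": "10.0.0.5", "sig": "suspicious POST"}]
sessions  = [{"src": "10.0.0.5", "dst": "192.0.2.10", "bytes": 48000}]
host_logs = [{"src": "10.0.0.5", "event": "new service installed"}]

def pivot(src_ip: str) -> dict:
    """Gather every record about one address across all sources,
    so the analyst can move from alert to session to host evidence."""
    return {
        "alerts":   [a for a in alerts    if a["src"] == src_ip],
        "sessions": [s for s in sessions  if s["src"] == src_ip],
        "host":     [h for h in host_logs if h["src"] == src_ip],
    }

view = pivot("10.0.0.5")
```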
It's a lame name, to be sure, but the point is simply that NSM would be a subset of it.
Anyway...just thinking...as I try to build my own program...
Thoughts and scolding welcome...
Although my definition for NSM (from my books, and before) is very broad, I've always intended it to concentrate on network traffic, specifically full content, session, statistical, and alert. Having said that, read Pervasive Security Monitoring.