Extending Security Event Correlation
Last year at this time I wrote a series of posts on security event correlation. I offered the following definition in the final post:
Security event correlation is the process of applying criteria to data inputs, generally of a conditional ("if-then") nature, in order to generate actionable data outputs.
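That definition can be illustrated with a minimal sketch. The event schema and the brute-force rule here are my own assumptions, not any particular SIM product's logic; the point is only the conditional "if-then" shape of the processing.

```python
# A minimal sketch of "if-then" security event correlation.
# Event fields ("src_ip", "type") are hypothetical, not a real SIM schema.

def correlate(events):
    """Emit an actionable alert when several failed logins from a source
    are followed by a successful login from that same source."""
    failures = {}
    alerts = []
    for event in events:
        src = event["src_ip"]
        if event["type"] == "login_failure":
            failures[src] = failures.get(src, 0) + 1
        elif event["type"] == "login_success" and failures.get(src, 0) >= 3:
            alerts.append(f"possible brute force from {src}")
    return alerts

events = [
    {"src_ip": "10.0.0.5", "type": "login_failure"},
    {"src_ip": "10.0.0.5", "type": "login_failure"},
    {"src_ip": "10.0.0.5", "type": "login_failure"},
    {"src_ip": "10.0.0.5", "type": "login_success"},
]
print(correlate(events))  # -> ['possible brute force from 10.0.0.5']
```

The criteria (three failures, then a success) applied to data inputs (the events) produce an actionable data output (the alert).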
Since then I have found that products and people still claim this as a goal, but for the most part achieving it remains elusive.
Please also see that last post for what SEC is not, i.e., SEC is not simply collection (of data sources), normalization (of data sources), prioritization (of events), suppression (via thresholding), accumulation (via simple incrementing counters), centralization (of policies), summarization (via reports), administration (of software), or delegation (of tasks).
So is SEC anything else? Based on some operational uses I have seen, I think I can safely introduce an extension to "true" SEC: applying information from one or more data sources to develop context for another data source. What does that mean?
One example I saw recently (and this is not particularly new, but it is definitely useful) involves NetWitness 9.0. Its new NetWitness Identity function adds user names collected from Active Directory to the metadata available while investigating network traffic. Analysts can choose to review sessions based on user names rather than just source IP addresses.
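The general pattern can be sketched as a simple enrichment step. The IP-to-user mapping and session fields below are hypothetical stand-ins, not NetWitness's actual data model:

```python
# A sketch of the "context" extension: enriching network session records
# with user names drawn from a directory. All field names are assumptions.

ip_to_user = {                 # e.g. harvested from Active Directory logon events
    "192.168.1.20": "alice",
    "192.168.1.33": "bob",
}

sessions = [
    {"src_ip": "192.168.1.20", "dst_ip": "203.0.113.7", "bytes": 4096},
    {"src_ip": "192.168.1.99", "dst_ip": "203.0.113.7", "bytes": 512},
]

def enrich(session):
    """Attach a user name as metadata so analysts can pivot on identity."""
    out = dict(session)
    out["user"] = ip_to_user.get(session["src_ip"], "unknown")
    return out

enriched = [enrich(s) for s in sessions]
# Analysts can now review sessions by user rather than raw source IP:
alice_sessions = [s for s in enriched if s["user"] == "alice"]
```

One data source (directory logons) develops context for another (network sessions), with no conditional alerting logic involved.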
This is certainly not an "if-then" proposition, as sold by SIM vendors, but the value of this approach is clear. I hope my use of the word "context" doesn't apply too much historical security baggage to this conversation. I'm not talking about making IDS alerts more useful by knowing the qualities of the target of a server-side attack, for example. Rather, to take the case of a server-side attack scenario, imagine replacing the source IP with the country "Bulgaria" and the target IP with "Web server hosting Application X" or similar. It's a different way for an analyst to think about an investigation.
Comments
I agree with your addition in this post, Richard. I agree with your definition overall.
I believe that the list of what SEC is NOT might be misleading to the person who doesn't read every word. The important phrase is "not simply", for all of those things can play into effective SEC.
In my opinion, the point at which those things in the NOT category become useful is the point when they cross the threshold between data and knowledge.
Sometimes correlating known good events with suspicious events can be useful too. For example, having a pile of data-flow documents is just a pile of data. Taking that information and using it as a background against which to hold real-time IDS alerts has produced huge wins for me.
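That approach might be sketched as follows; this is my assumed reconstruction, not the commenter's actual tooling. Documented data flows become a known-good baseline, and IDS alerts are checked against it:

```python
# A sketch of baselining: judge IDS alerts against documented,
# known-good data flows. Networks, ports, and alert fields are assumptions.
import ipaddress

# Known-good paths from data-flow documents: (src_net, dst_net, dst_port)
baseline = [
    (ipaddress.ip_network("10.1.0.0/16"), ipaddress.ip_network("10.2.0.0/16"), 1433),
]

def is_expected(src, dst, port):
    """True if an alert's flow matches a documented, known-good path."""
    s, d = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    return any(s in sn and d in dn and port == p for sn, dn, p in baseline)

alerts = [
    {"src": "10.1.0.5", "dst": "10.2.0.9", "port": 1433, "sig": "SQL probe"},
    {"src": "10.9.0.7", "dst": "10.2.0.9", "port": 1433, "sig": "SQL probe"},
]
# Alerts on undocumented flows deserve more analyst attention:
suspicious = [a for a in alerts if not is_expected(a["src"], a["dst"], a["port"])]
```

The same signature fires twice, but only the alert on an undocumented path rises to the top.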
I'm a student of Sun Tzu, and I feel that this sort of work falls under the "know yourself" portion of his teachings.
Too many IT departments do not know themselves. This is why we are losing battles.
Ken Walling, CISSP, GREM
aka Metajunkie
I don't get it. So you say tcpdump -n turns off tcpdump's built in SEC functionality? ;-)
Josh
Risk = Threat x Vulnerability x Asset
In this regard, anything that helps establish context and value of the asset to the organization will help with a more informed and helpful response tactic.
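As a toy illustration of that formula (my assumption about how it might be applied, not the commenter's actual model), the same alert scores very differently once asset context is known:

```python
# Toy scoring for Risk = Threat x Vulnerability x Asset.
# Scales (0-10) and example values are invented for illustration.

def risk(threat, vulnerability, asset_value):
    """All factors on a 0-10 scale; a higher product means respond sooner."""
    return threat * vulnerability * asset_value

# Identical threat and vulnerability, different asset context:
print(risk(7, 5, 9))  # payroll server -> 315
print(risk(7, 5, 2))  # lobby kiosk   -> 70
```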
In a more practical example: while working for a large enterprise, using a SIM product combined with a centralized log repository, we discovered UDP packets on random destination ports leaving our zones, originating from regular desktop/laptop client IP blocks and headed for geographic regions that were mostly Eastern Europe and former Soviet Bloc countries. This was one of the earliest detections of the new Storm worm that we were aware of, before it really became public knowledge.
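That detection can be sketched as a geographic-context correlation. The client networks, country codes, and GeoIP mapping below are invented stand-ins, not the commenter's actual environment:

```python
# A sketch of the Storm worm detection: outbound UDP from desktop client
# blocks to watched regions. All networks and mappings are hypothetical.
import ipaddress

client_nets = [ipaddress.ip_network("10.50.0.0/16")]   # desktop/laptop blocks
watch_regions = {"BG", "RO", "RU"}                     # Eastern Europe, ex-Soviet Bloc

geoip = {"198.51.100.4": "BG", "203.0.113.8": "US"}    # stand-in GeoIP lookup

def suspicious_flow(flow):
    """Flag outbound UDP from client space to a watched region."""
    src = ipaddress.ip_address(flow["src_ip"])
    return (flow["proto"] == "udp"
            and any(src in net for net in client_nets)
            and geoip.get(flow["dst_ip"]) in watch_regions)

flows = [
    {"src_ip": "10.50.1.7", "dst_ip": "198.51.100.4", "proto": "udp", "dst_port": 49231},
    {"src_ip": "10.50.1.7", "dst_ip": "203.0.113.8", "proto": "tcp", "dst_port": 443},
]
hits = [f for f in flows if suspicious_flow(f)]
```

Here again the win comes from context (client address blocks, destination geography) applied to raw flow data, not from a single-event signature.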