Friday, January 23, 2015

Is an Alert Review Time of Less than Five Hours Enough?

This week, FireEye released a report titled The Numbers Game: How Many Alerts are too Many to Handle? FireEye hired IDC to survey "over 500 large enterprises in North America, Latin America, Europe, and Asia" and asked director-level and higher IT security practitioners a variety of questions about how they manage alerts from security tools. In my opinion, the following graphic was the most interesting:

As you can see in the far right column, 75% of respondents report reviewing critical alerts in "less than 5 hours." I'm not sure if that is really "less than 6 hours," because the next value is "6-12 hours." In any case, is it sufficient for organizations to have this level of performance for critical alerts?

In my last large enterprise job, as director of incident response for General Electric, our CIO demanded 1 hour or less for critical alerts, from time of discovery to time of threat mitigation. This means we had to do more than review the alert; we had to review it and pass it to a business unit in time for them to do something to contain the affected asset.
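The key detail in that requirement is what the clock measures: time of discovery to time of threat mitigation, not merely time to review. A minimal sketch, purely illustrative (not GE's actual tooling, and all names and timestamps are assumptions), of measuring critical incidents against such a one-hour rule:

```python
from datetime import datetime, timedelta

# Hypothetical sketch: the clock runs from alert discovery to the moment
# the business unit contains the affected asset, not to the moment the
# alert is merely reviewed.
SLA = timedelta(hours=1)

def within_sla(discovered: datetime, mitigated: datetime) -> bool:
    """True if containment happened within the one-hour window."""
    return (mitigated - discovered) <= SLA

# Illustrative incident records: (discovery time, mitigation time).
incidents = [
    (datetime(2015, 1, 23, 9, 0), datetime(2015, 1, 23, 9, 45)),   # 45 min: met
    (datetime(2015, 1, 23, 9, 0), datetime(2015, 1, 23, 11, 30)),  # 2.5 h: missed
]

met = sum(within_sla(d, m) for d, m in incidents)
print(f"{met}/{len(incidents)} critical incidents met the 1-hour SLA")
```

Measuring against mitigation rather than review is what makes the metric demanding: the CIRT's own speed is necessary but not sufficient, because the business unit must also act within the window.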

The strategy behind this requirement was one of fast detection and response to limit the damage posed by an intrusion. (Sound familiar?)

Also, is it sufficient to have fast response for only critical alerts? My assessment is no. Alert-centric response, which I call "matching" in The Practice of Network Security Monitoring, is only part of the operational campaign model for a high-performing CIRT. The other part is hunting.

Furthermore, it is dangerous to assume that alert severity ratings are accurate. A low or moderate alert may turn out to be more important than a critical one. Who classified the alert? Who wrote it? There are a lot of questions to be answered.

I'm in the process of doing research for my PhD in the Department of War Studies at King's College London. I'm not sure if my data or research will be able to answer questions like this, but I plan to investigate.

What do you think?


Anonymous said...

Building the expectation with senior leadership that all critical alerts will be "mitigated" within one hour seems like it could put you in a difficult situation if you have a serious breach rather than generic malware, a simple incident, or a misconfiguration. That leads me to the following questions:

What are you calling mitigation? Removing the box from the network?

What happens if your alert identified a compromise that has been present for weeks or months? Do you just perform mitigation using the information at hand within the first hour? I don't think whack-a-mole is a good strategy for IR, and there are definitely cases I have seen where it put networks at far greater risk (scorched earth).

Richard Bejtlich said...

Hi Anonymous,

Good question. I've addressed this many times before, but not in this post. The "one hour rule" had an exception: we chose not to follow it if we could not determine how the intruder gained access to the network. For example, if we discovered an intrusion, could tell that the beacon was caused by a phishing email, and had just identified the foothold, then we would contain immediately. On the other hand, if we found a beacon but had no idea how the intruder gained access to the network, we would recognize that the beacon and the compromised box were our only "link" to the intrusion. Cutting them off immediately would be a bad idea.

I cover this topic in chapter 9 of The Practice of Network Security Monitoring.

Armando said...

It strikes me that this becomes a task of managing the signal-to-noise ratio in the critical alert channel. The threat landscape is constantly evolving, and the nature of alerts should change in response. However, when *responding* to a critical alert, there is little time to assess it from a broader perspective. Perhaps it would help to have a weekly or monthly audit where alerts can be reviewed and reclassified according to current criticality. This could also present an opportunity to tune those alerts that seem too 'noisy'.
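The periodic audit Armando describes could be driven by simple per-signature statistics. A minimal sketch, assuming hypothetical signature names, severities, and thresholds (none of which come from the post or any real tool): compare how often each alert fired against how often it was confirmed as a true positive, then flag candidates for reclassification.

```python
from dataclasses import dataclass

# Hypothetical per-signature statistics gathered over the review window.
@dataclass
class AlertStats:
    signature: str       # alert rule name (illustrative)
    severity: str        # current rating: "critical", "moderate", or "low"
    fired: int           # total times the alert fired
    true_positive: int   # firings confirmed as real incidents

def audit(alerts, min_precision=0.25):
    """Return (signature, recommendation) pairs for the periodic review.

    Thresholds are assumptions: a "critical" alert that is right less than
    a quarter of the time is noisy; a lower-severity alert that is right
    at least 90% of the time (with enough firings) may deserve an upgrade.
    """
    findings = []
    for a in alerts:
        precision = a.true_positive / a.fired if a.fired else 0.0
        if a.severity == "critical" and precision < min_precision:
            findings.append((a.signature, "too noisy for critical; tune or downgrade"))
        elif a.severity != "critical" and precision >= 0.9 and a.true_positive >= 5:
            findings.append((a.signature, "consistently accurate; consider upgrading"))
    return findings

stats = [
    AlertStats("beacon-dns-tunnel", "critical", fired=40, true_positive=2),
    AlertStats("psexec-lateral", "moderate", fired=6, true_positive=6),
]
for sig, rec in audit(stats):
    print(f"{sig}: {rec}")
```

The point is not the specific thresholds but that the audit decision becomes reviewable: the weekly or monthly meeting starts from evidence about each signature's recent behavior rather than from its original, possibly stale, classification.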