My Investigative Process Using NSM
(Please: I don't want to hear people complain that I'm using data from a cable line with one public target IP address; I'm not at liberty to disclose data from my client networks just to satisfy the perceived need for bigger targets. The investigative methodology is the same. Ok?)

As shown in the figure at left, I'm using Sguil for my analysis. I'm not going to screen-capture the whole investigation, but I will show the data involved. I begin this investigation with an alert generated by the Bleeding Edge Threats ruleset. This is a text-based representation of the alert from Sguil.
Count:1 Event#1.78890 2007-01-15 03:17:36
BLEEDING-EDGE DROP Dshield Block Listed Source
184.108.40.206 -> 220.127.116.11
IPVer=4 hlen=5 tos=32 dlen=48 ID=39892 flags=2 offset=0 ttl=104 chksum=57066
Protocol: 6 sport=1272 -> dport=4899
Seq=2987955435 Ack=0 Off=7 Res=0 Flags=******S* Win=65535 urp=35863 chksum=0
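The fields in that record (source and destination addresses, ports, protocol) are what drive the rest of the investigation. As a purely illustrative sketch -- not anything Sguil does internally -- here is how the key fields could be pulled out of such a text alert in Python:

```python
import re

# The alert record shown above, as plain text.
alert_text = """Count:1 Event#1.78890 2007-01-15 03:17:36
BLEEDING-EDGE DROP Dshield Block Listed Source
184.108.40.206 -> 220.127.116.11
IPVer=4 hlen=5 tos=32 dlen=48 ID=39892 flags=2 offset=0 ttl=104 chksum=57066
Protocol: 6 sport=1272 -> dport=4899"""

# Grab the "src -> dst" line and the source/destination ports.
addrs = re.search(r"^(\d+\.\d+\.\d+\.\d+) -> (\d+\.\d+\.\d+\.\d+)$",
                  alert_text, re.MULTILINE)
ports = re.search(r"sport=(\d+) -> dport=(\d+)", alert_text)

src, dst = addrs.groups()
print(src, dst, ports.group(1), ports.group(2))
# -> 184.108.40.206 220.127.116.11 1272 4899
```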
I'm not using Sguil or Snort to drop anything, but I want to know why this alert fired. Using Sguil I can see exactly how Snort decided to fire it.
alert ip [18.104.22.168/24,22.214.171.124/24,126.96.36.199/24,188.8.131.52/24,184.108.40.206/24,
any -> $HOME_NET any (msg:"BLEEDING-EDGE DROP Dshield Block Listed Source - BLOCKING";
reference:url,feeds.dshield.org/block.txt; threshold: type limit, track by_src, seconds 3600,
count 1; sid:2403000; rev:319; fwsam: src, 72 hours;)
/nsm/rules/cel433/bleeding-dshield-BLOCK.rules: Line 32
Wow, that's a big list of IPs. The source IP in this case (184.108.40.206) falls within one of the class C netblocks in the Snort rule, so Snort fired. Who is the source? Again, Sguil shows me, without launching any new windows or Web tabs.
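The matching logic behind this rule is plain netblock membership. A minimal sketch of that test, assuming Python's ipaddress module -- this is an illustration, not Snort's actual matching engine:

```python
import ipaddress

# Two of the /24 netblocks from the rule above (the real list is far longer).
# strict=False lets us write them with host bits set, as the rule does.
blocked_nets = [ipaddress.ip_network(n, strict=False) for n in (
    "18.104.22.168/24",
    "184.108.40.206/24",
)]

def is_block_listed(src_ip: str) -> bool:
    """Return True if src_ip falls inside any block-listed /24."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in blocked_nets)

print(is_block_listed("184.108.40.206"))  # True: same class C as the alert
print(is_block_listed("10.0.0.1"))        # False: not on the block list
```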
% [whois.apnic.net node-2]
% Whois data copyright terms http://www.apnic.net/db/dbcopyright.html
inetnum: 18.104.22.168 - 22.214.171.124
person: Nguyen Manh Hai
address: Vietel Corporation
address: 47 Huynh Thuc Khang, Dong Da District, Hanoi City
changed: firstname.lastname@example.org 20040825
Vietnam -- probably not someone I visited and no one from whom I would expect traffic.
So I know the source IP, and I know exactly why this alert appeared -- something not to be taken for granted with other systems. The question now is, should I care? If I were restricted to using an alert-centric system (as described earlier), my main option would now be to query for more alerts. I'll spare the description and say this is the only alert from this source. In the alert-centric world, that's it -- end of investigation.
At this point my log-centric friends might say "Check the logs!" That's a fine idea, but what if there aren't any logs? Does this mean the Vietnam box didn't do anything else, or that it did act but generated no logs? That's an important point.
In the NSM world I have two options: check session data, and check full content data. Let's check full content first by right-clicking and asking Sguil to fetch Libpcap data into Wireshark.
Here I could show another cool screen shot of Wireshark, but the data is the important element. Since Sguil copies it to a local directory I specify, I'll just re-read it with Tshark.
1 2007-01-14 22:17:36.162428 184.108.40.206 -> 220.127.116.11 TCP 1272 > 4899
[SYN] Seq=0 Len=0 MSS=1460
2 2007-01-14 22:17:36.162779 220.127.116.11 -> 184.108.40.206 TCP 4899 > 1272
[RST, ACK] Seq=0 Ack=1 Win=0 Len=0
3 2007-01-14 22:17:37.230040 184.108.40.206 -> 220.127.116.11 TCP 1272 > 4899
[SYN] Seq=0 Len=0 MSS=1460
4 2007-01-14 22:17:37.230393 220.127.116.11 -> 184.108.40.206 TCP 4899 > 1272
[RST, ACK] Seq=0 Ack=1 Win=0 Len=0
Not very exciting: two SYNs, each answered by a RST ACK. As we could have guessed from the original alert, TCP port 4899 activity is really old, and since I know my network (or at least I think I do), I know I'm not offering 4899 TCP to anyone. But can you say the same for all the systems you administer -- or watch -- or don't watch? This full content data, specific to the alert but collected independently of it, tells us we don't need to worry about this specific event.
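The pattern in those four packets -- a SYN answered by a RST ACK, with no SYN ACK ever seen -- is what tells us the service isn't offered. A hypothetical sketch of that reasoning in Python, using flag sets transcribed from the Tshark summary:

```python
# Each tuple is (direction, TCP flags) for one packet in the flow,
# transcribed from the Tshark output above.
packets = [
    ("client->server", {"SYN"}),
    ("server->client", {"RST", "ACK"}),
    ("client->server", {"SYN"}),
    ("server->client", {"RST", "ACK"}),
]

def classify(pkts):
    """Crude outcome classifier for a single TCP flow."""
    saw_syn = any("SYN" in f and d == "client->server" for d, f in pkts)
    saw_synack = any({"SYN", "ACK"} <= f and d == "server->client" for d, f in pkts)
    saw_rst = any("RST" in f and d == "server->client" for d, f in pkts)
    if saw_syn and saw_rst and not saw_synack:
        return "connection refused (no service listening on that port)"
    if saw_syn and saw_synack:
        return "handshake completed"
    return "indeterminate"

print(classify(packets))
# -> connection refused (no service listening on that port)
```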
This next idea is crucial: just because we have no other alerts from a source, it does not mean this event is all the activity that host performed. With this IP-based Snort alert, we can have some assurance that no other activity occurred, because Snort will tell us whenever it sees packets from the 184.108.40.206 network. If we weren't using an IP-based alert, we could query session data, collected independently of alert and full content data, to see what else the attacking host -- or target host -- did.
(Yes, these sections are heavy on the bolding and even underlining, because after five years of writing about this methodology a lot of people still don't appreciate the real-world problems faced by people investigating network incidents.)
I already said we're confident nothing else happened from the attacker here, because Snort is triggering on its specific netblock. That sort of alert is a minuscule fraction of the entire rule base. Normally I would find other activity by querying session data and getting results like this from Sguil and SANCP.
Sensor:cel433 Session ID:5020091160069307011
Start Time:2007-01-15 03:17:36 End Time:2007-01-15 03:17:37
184.108.40.206:1272 -> 220.127.116.11:4899
Source Packets:2 Bytes:0
Dest Packets:2 Bytes:0
As you can see, Sguil and SANCP have summarized the four packets shown earlier. There's nothing else. Now I am sure the intruder did not do anything else, at least from 184.108.40.206, within the time frame I queried. For added confidence I could query on a time range involving the target to look for suspicious activity to or from other IPs, in the event the intruder switched source IPs.
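Session data like this is just packets condensed into flow records. A rough sketch of the idea, using hypothetical packet tuples matching the exchange above -- this is an illustration of the concept, not SANCP's implementation:

```python
from collections import defaultdict

# Each packet: (src, sport, dst, dport, payload_bytes), matching the
# four-packet exchange shown earlier. Payloads are empty (Len=0).
packets = [
    ("184.108.40.206", 1272, "220.127.116.11", 4899, 0),
    ("220.127.116.11", 4899, "184.108.40.206", 1272, 0),
    ("184.108.40.206", 1272, "220.127.116.11", 4899, 0),
    ("220.127.116.11", 4899, "184.108.40.206", 1272, 0),
]

def summarize(pkts):
    """Group packets into bidirectional flows keyed on the sorted endpoint pair."""
    flows = defaultdict(lambda: {"src_pkts": 0, "src_bytes": 0,
                                 "dst_pkts": 0, "dst_bytes": 0})
    for src, sport, dst, dport, nbytes in pkts:
        key = tuple(sorted([(src, sport), (dst, dport)]))
        rec = flows[key]
        # Whichever endpoint sent first is treated as the flow's source.
        rec.setdefault("initiator", (src, sport))
        side = "src" if (src, sport) == rec["initiator"] else "dst"
        rec[side + "_pkts"] += 1
        rec[side + "_bytes"] += nbytes
    return flows

for rec in summarize(packets).values():
    print(f"Source Packets:{rec['src_pkts']} Bytes:{rec['src_bytes']}")
    print(f"Dest Packets:{rec['dst_pkts']} Bytes:{rec['dst_bytes']}")
```

The four packets collapse into a single record with two packets and zero payload bytes in each direction, mirroring the Sguil/SANCP output above.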
I have plenty of other cases to pursue, but I'll stop here. What do you think? I expect to hear from people who say "That takes too long," "It's too manual," etc. I picked a trivial example with a well-defined alert and a tiny scope so I could avoid spending most of Saturday night writing this blog post. Think for a moment about how this methodology, and this sort of data, would apply to more complex cases, and I think you'll give better feedback. Thank you!