Great Papers from Honeynet Project

If you haven't seen them yet, Know Your Enemy: Behind the Scenes of Malicious Web Servers and Know Your Enemy: Malicious Web Servers are two great papers by the Honeynet Project. You might want to see Web Server Botnets and Server Farms as Attack Platforms by Gadi Evron as background. You'll notice people like e0n are using NSM to combat bots. I have not seen any IRC-controlled SIP/VoIP attack bots and botnets yet. If you think your IPS will save you against bots, keep in mind the time it takes to update some of them. I also recommend reading The World's Biggest Botnets by Kelly Jackson Higgins.

Comments

Anonymous said…
Hi Richard,
Thanks for the heads-up on the blog post on using NSM for fighting bots. But as bots become more sophisticated and use HTTPS or other encrypted channels, how do you see NSM helping out (besides giving you the amount of traffic between the C&C server and the compromised host)? Thanks
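Even when the channel is encrypted, NSM session data still shows which internal hosts talk to a suspected C&C address, how often, and how much. Below is a minimal sketch of pulling that kind of summary from a capture with scapy; the pcap name and the C&C address are placeholders.

```python
# Minimal sketch: summarize payload volume to and from a suspected C&C host.
# "capture.pcap" and the C&C address are hypothetical placeholders.
from collections import defaultdict
from scapy.all import rdpcap, IP, TCP

SUSPECT_CNC = "203.0.113.10"  # placeholder C&C address

byte_counts = defaultdict(int)  # (src, dst) -> observed payload bytes
for pkt in rdpcap("capture.pcap"):
    if IP in pkt and TCP in pkt:
        src, dst = pkt[IP].src, pkt[IP].dst
        if SUSPECT_CNC in (src, dst):
            byte_counts[(src, dst)] += len(pkt[TCP].payload)

for (src, dst), total in sorted(byte_counts.items()):
    print(f"{src} -> {dst}: {total} payload bytes")
```

The byte counts alone won't identify the malware, but that connection metadata survives encryption.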
e0n said…
Just my two cents, but NSM also allows you to audit binary downloads (EXEs and DLLs) and extract those executables from network traffic via tools like "tcpxtract". The analyst can then determine if the binary is hostile and, if possible, determine its characteristics (e.g., C&C sites), which aids in further detection and response. Bleeding Edge Threats has great sigs looking for these downloads, most significantly downloads on unusual ports or from suspicious sites. These alerts, mixed with any known C&C IP alerts, would strongly indicate the presence of an infection or compromise, regardless of whether the bot uses encryption. This level of analysis would not be possible without NSM.
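As a rough illustration of the carving step (this is not tcpxtract itself, and it does no TCP stream reassembly, so any binary split across packets is missed), here is a per-packet sketch in scapy; the capture filename is a placeholder.

```python
# Rough sketch of carving Windows executables (MZ header) out of captured
# traffic, similar in spirit to what tcpxtract automates.
from scapy.all import rdpcap, Raw

packets = rdpcap("capture.pcap")  # placeholder capture file
found = 0
for pkt in packets:
    if Raw in pkt:
        payload = bytes(pkt[Raw].load)
        offset = payload.find(b"MZ")
        # The DOS stub string is a cheap check to cut false hits on random "MZ" bytes.
        if offset != -1 and b"This program cannot be run in DOS mode" in payload[offset:]:
            found += 1
            with open(f"carved_{found}.bin", "wb") as f:
                f.write(payload[offset:])

print(f"Wrote {found} candidate PE fragments")
```

Real carving tools reassemble the stream first; this sketch only shows why full-content capture makes the extraction possible at all.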

BTW...Thank you Richard for the ACK.
Regarding "Know Your Enemy: Behind the Scenes of Malicious Web Servers", I have seen the IP and browser identifier tracking in the wild so it is nice to see an actual example of how they're doing it on the backend. The times I've seen it, altering the user-agent has been enough to get the malicious file so it matches the behavior described in the writeup.

Won't this technique drastically limit the number of systems that can be exploited? Say you're on a network that has many internal addresses translated to one public source address; then the only differentiation will be the user-agent. If you also have a standard browser build on the desktop, the number of distinct user-agents will be fairly limited. For each source IP address, an exploit is only attempted against the first unique user-agent seen, if the malicious site is using this technique. Am I misunderstanding? If not, I wonder whether this is meant to deter malware collection and analysis.
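For what it's worth, here is a sketch of the server-side logic as the paper describes it, assuming the server keys on the (source IP, User-Agent) pair; all names and addresses are illustrative.

```python
# Sketch of the tracking behavior: record each (source IP, User-Agent) pair
# and only serve the exploit the first time that pair is seen.
seen_pairs = set()

def handle_request(src_ip: str, user_agent: str) -> str:
    key = (src_ip, user_agent)
    if key in seen_pairs:
        return "benign page"   # repeat visitor: nothing hostile served
    seen_pairs.add(key)
    return "exploit page"      # first visit from this IP/UA combination

# Behind NAT, every desktop shares one public source IP; with a standard
# browser build the User-Agent is identical too, so only the first machine
# presenting that combination ever receives the exploit.
print(handle_request("198.51.100.7", "Mozilla/4.0 (compatible; MSIE 6.0)"))
print(handle_request("198.51.100.7", "Mozilla/4.0 (compatible; MSIE 6.0)"))
```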
