Today I read a new Cisco advisory containing these words:
Cisco routers and switches running Cisco IOS® or Cisco IOS XR software may be vulnerable to a remotely exploitable crafted IP option Denial of Service (DoS) attack. Exploitation of the vulnerability may potentially allow for arbitrary code execution. The vulnerability may be exploited after processing an Internet Control Message Protocol (ICMP) packet, Protocol Independent Multicast version 2 (PIMv2) packet, Pragmatic General Multicast (PGM) packet, or URL Rendezvous Directory (URD) packet containing a specific crafted IP option in the packet's IP header...
A crafted packet addressed directly to a vulnerable device running Cisco IOS software may result in the device reloading or may allow execution of arbitrary code.
This is the sort of "magic packet" that's an attacker's silver bullet. Send an ICMP echo with the right IP option to a router interface and whammo -- you could 0wn the router. Who would notice? Most people consider their border router an appliance that connects to an ISP, and only begin watching traffic behind the router. I've been worried about this scenario since I posted Ethernet to your ISP in 2005 and read Hacking Exposed: Cisco Networks in 2006.
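To make the "crafted IP option" concrete: IP options extend the IPv4 header past its fixed 20 bytes, and the IHL field grows to match, which is exactly the parsing path such packets exercise. The advisory does not disclose which option triggers the flaw, so the sketch below uses the benign, well-known Router Alert option purely to show the mechanics. It is a minimal standard-library Python illustration, not an exploit; the checksum is left at zero and addresses are documentation-range placeholders.

```python
import struct

def ipv4_header_with_option(src, dst, proto, option_bytes):
    """Build a minimal IPv4 header carrying one IP option.

    Options sit between the fixed 20-byte header and the payload;
    IHL (header length in 32-bit words) grows to cover them.
    Checksum is left at zero for illustration only.
    """
    # Pad options to a multiple of 4 bytes with End-of-Options (0x00).
    pad = (-len(option_bytes)) % 4
    opts = option_bytes + b"\x00" * pad
    ihl = 5 + len(opts) // 4            # header length in 32-bit words
    ver_ihl = (4 << 4) | ihl            # version 4 in the high nibble
    total_len = ihl * 4                 # header only; no payload here
    header = struct.pack(
        "!BBHHHBBH4s4s",
        ver_ihl, 0, total_len, 0, 0, 64, proto, 0,
        bytes(map(int, src.split("."))),
        bytes(map(int, dst.split("."))),
    )
    return header + opts

# Router Alert (type 0x94, length 4, value 0): a harmless option used
# here only to show how the header stretches past 20 bytes.
pkt = ipv4_header_with_option("192.0.2.1", "192.0.2.254", 1,
                              b"\x94\x04\x00\x00")
```

The point is that a single option type/length/value sequence in the header, not the payload, is what the vulnerable parsing code touches, so payload-focused inspection would never see anything unusual.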
A router, like a printer, is just a computer in a different box. We should design architectures such that all nodes within our control can be independently and passively observed by a trusted platform. We should not rely on a potential target to reliably report its security status without some form of external validation, such as traffic collected and analyzed by network security monitoring processes.
This Cisco advisory reminds me of a case I encountered at Foundstone. Shortly after I joined the company in early 2002, the Apache chunked encoding vulnerability became known in certain circles. As the new forensics guy on the incident response team, I was asked how I could tell if Foundstone's Apache Web server was hacked. I replied that I would first check session data collected by a trusted sensor, looking for unusual patterns to and from the Web server, going as far back as the data and my time would allow.
Foundstone, being an assessment company, didn't share my opinions on NSM. No such data was available, so the host integrity assessment descended into the untrustworthy and often fruitless attempt to find artifacts of compromise on the Web server itself. Of course the Web server could not be taken down for forensic imaging, so we performed a live response and looked for anything unusual.
This case was one of the easier ones in the sense that all that was at stake was a single server. We didn't have a farm of Web servers, any of which could have been hacked.
Whenever I am trying to scope an incident I always turn to network data first and host data second. The network data helps decide which hosts merit additional individual scrutiny at the host-centric level. Individually analyzing a decent-sized group of potential victims, whether deeply or even quickly, makes no sense and is not time-efficient.
Because most security shops do not bother to collect NSM data, their first response activities usually involve a system administrator perusing the victim file system, manually checking potentially altered or erased logs on the target, and inspecting potentially obfuscated processes in memory controlled by a rootkit. While I believe it can be important to collect live response data and even image and inspect a hard drive, I never start an incident response with those activities.
Remember: network first. I don't care if the intruder uses an encrypted channel. As long as he has utilized the network in a mildly suspicious manner, I will have some validation that the system is not behaving as expected. This presumes system owners know what is normal, preferably by comparing a history of network data against more recent, potentially suspect, activity.
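The comparison of historical against recent network activity can be sketched very simply. The following is a hypothetical illustration, not any particular NSM tool: given session records as (internal host, peer, port) tuples, it ranks hosts by how many of their recent peer/port pairs never appeared during the baseline period. All host names and flows below are invented for the example.

```python
from collections import Counter

def rank_hosts_by_oddity(baseline_flows, recent_flows):
    """Score each internal host by the number of recent peer/port
    pairs that never appeared in the baseline period."""
    seen = {flow for flow in baseline_flows}
    odd = Counter()
    for host, peer, port in recent_flows:
        if (host, peer, port) not in seen:
            odd[host] += 1
    # Hosts with the most never-before-seen flows surface first.
    return odd.most_common()

# Invented example data (documentation-range addresses).
baseline = [("10.0.0.5", "198.51.100.7", 443),
            ("10.0.0.8", "198.51.100.7", 443)]
recent = baseline + [("10.0.0.5", "203.0.113.9", 6667)]
scores = rank_hosts_by_oddity(baseline, recent)
# 10.0.0.5 surfaces first: one peer/port pair absent from the baseline.
```

Even this crude approach works against encrypted channels, because it judges who talked to whom, not what they said; the real discipline is in keeping the baseline data in the first place.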
Copyright 2007 Richard Bejtlich