Saturday, July 30, 2005

Notes for USENIX Security Students

In a few hours I will be teaching Network Security Monitoring with Open Source Tools at USENIX Security in Baltimore, MD. I have two items of interest for my students concerning their slides.

First, the default Tethereal ring buffer syntax has changed. My first book, and the Tethereal slide, use this syntax:

tethereal -n -i <interface> -s <snaplen> -a duration:3600 -b 24 -w <filename>

The new syntax requires a filesize whenever -b (ring buffer mode) is invoked, like so:

tethereal -n -i <interface> -s <snaplen> -a filesize:1000000 -a duration:3600 -b 24 -w <filename>

Also, there is a slide missing before the Trafshow screen shot. It should look like this.

ISS Pursues Lynn Presentation Copies

It looks like I spoke too soon about the Lynn affair being closed. ISS is now pursuing Web sites posting Mike Lynn's presentation. For example, Rick Forno has removed his copy of the Lynn slides after receiving a cease-and-desist letter from lawyers representing ISS. The document (.pdf), by DLA Piper Rudnick Gray Cary US LLP attorney Andrew P. Valentine, features this piece of exceptional grammar:

"The posting is located on your [Forno's] website... and relates to a presentation that ISS decided not go give [sic] at the Black Hat 2005 USA Conference in Las Vegas, Nevada."

The letter also states:

"On Wednesday, ISS and Cisco sued Mr. Lynn and Black Hat for claims of copyright infringement, misappropriation of trade secrets, and breach of employment agreement in connection with improper distribution of the material. On Thursday, Judge Jeffrey White of the United States District Court for the Northern District of California issued a permanent injunction preventing further distribution of the material...

We also understand that the unlawful distribution of this information is the subject of a federal investigation."

I wonder how ISS and Cisco will handle Web sites located outside of the US, like the disLEXia 3000 blog? Maximillian Dornseif makes several good points in his blog, including asking whether the slides Lynn showed at Black Hat are the same as those now circulating on the Web.

The "stipulated permanent injunction" faxed to Rick Forno contains an interesting paragraph:

"Lynn... acknowledges that ISS did not authorize him to present [his talk] and which he had notified ISS he would not present. In particular, ISS had directed no presentation or live demonstration would be made which included disassembled Cisco code and the 'pointers'. (ISS and Cisco stipulate that they had prepared an alternative presentation designed to discuss Internet security, including the flaw which Lynn had identified, but without revealing Cisco code or pointers which might help enable third parties to exploit the flaw, but were informed they would not be allowed to present that presentation at the conference)."

I assume Jeff Moss was the party that would not allow ISS and Cisco to present at Black Hat.

It also looks as though Cisco employee John Noh wrote the injunction.

Update: Check out the photographs of slides from Mike's talk posted at Tom's Networking.

Friday, July 29, 2005

New Cisco Advisory and Statements

I guess we can wrap up the Cisco and ISS vs. Mike Lynn and Black Hat saga by mentioning the new Cisco security advisory released today: IPv6 Crafted Packet Vulnerability, which states:

"(IOS®) Software is vulnerable to a Denial of Service (DoS) and potentially an arbitrary code execution attack from a specifically crafted IPv6 packet. The packet must be sent from a local network segment. Only devices that have been explicitly configured to process IPv6 traffic are affected. Upon successful exploitation, the device may reload or be open to further exploitation."

Assuming these details are correct -- and who knows now? -- this is not an earth-shattering discovery. However, this may have been a sample vulnerability Mike demonstrated to explain his technique. He may have picked this vulnerability because he thought it would not affect much of the Internet, but he needed to let people know that his technique was already in use by malicious parties.

Cisco's main security page addresses the Lynn affair directly as well.

This Reuters article quotes Jeff Moss:

"Jeff Moss, president of Black Hat, predicted the ruling would have a dampening effect on security enthusiasts.

"'People will say, "Why would we tell the public about this if we're going to be sued? We're just going to post this anonymously,"' he said. 'Who is going to tell Cisco about a problem now?'"

Who indeed. Good work, Cisco. You've just alienated anyone who would consider quietly approaching you with vulnerability details. You've probably also stirred up an army of independent researchers who will look for new holes in IOS.

The real tragedy is the vulnerability of all the enterprises running Cisco gear, including all of my clients. It's time for me to figure out better ways to monitor Cisco equipment for signs of compromise. The protected domain or boundary does not start inside your border router -- it must now include that router, as it remains at risk of direct attack. How long before the first router-based worm, I wonder?

Mike Lynn Presentation Online

Rick Forno has posted a .pdf of Mike Lynn's presentation. So much for the removal of pages from the Black Hat books by Cisco goons! This is a pathetic charade that public relations personnel and lawyers should study in the future. Cisco and ISS have handled this in exactly the wrong way. Did they ever think they could suppress information at a hacker convention, of all places? Bruce Schneier has weighed in as well:

"Despite their thuggish behavior, this has been a public-relations disaster for Cisco. Now it doesn't matter what they say -- we won't believe them. We know that the public-relations department handles their security vulnerabilities, and not the engineering department. We know that they think squelching information and muzzling researchers is more important than informing the public. They could have shown that they put their customers first, but instead they demonstrated that short-sighted corporate interests are more important than being a responsible corporate citizen."

One of the comments on Bruce's blog says

"Mike's roommate just let me know that the FBI is investigating Mike and is currently seizing his stuff. Also, no one has any information on his whereabouts.

Posted by: jim at July 29, 2005 10:29 AM"

Update: FBI involvement confirmed: Wired reports Whistleblower Faces FBI Probe:

"The FBI is investigating a computer security researcher for criminal conduct after he revealed that critical systems supporting the internet and many networks have a serious software flaw that could allow someone to crash or take control of the routers.

Mike Lynn, a former researcher at Internet Security Systems, said he was tipped off late Thursday night that the FBI was investigating him for violating trade secrets belonging to his former employer, ISS...

Lynn's lawyer, Jennifer Granick, confirmed that the FBI told her it was investigating her client.

Granick said, however, that she thought the agency was simply following through on a complaint it received when Cisco and ISS filed their lawsuit against Lynn and that it didn't come after her client reached his settlement. She didn't know the nature of the complaint but said it was probably something to do with intellectual property and that it most likely came from Cisco or ISS.

'The investigation has to do with the presentation,' she said, 'but what crime that could possibly be is unknown because they haven’t found any (evidence against him).'

She hadn't spoken with the U.S. attorney in charge of the investigation but said she thought it was possible that the investigation would wind down soon for lack of evidence, now that Lynn had reached an agreement with Cisco and ISS.

'There's no arrest warrant for (Lynn) and there are no charges filed and no case pending,' Granick said. 'There may never be. But they got a complaint and as a result they were doing some investigation.'
Lynn said that if the case was not dropped, he thought it unlikely that the FBI would try to arrest him this weekend.

'I think they got burned with the Dmitry Sklyarov case,' he said."

Mike Lynn Settles

It appears Black Hat presenter Mike Lynn has avoided personal disaster, according to Brian Krebs:

"Under the terms of a permanent injunction signed by a federal judge this afternoon, Lynn will be forever barred from discussing the details about his research into the vulnerabilities he claimed to have discovered in the widely used Cisco hardware."

I recommend reading the rest of Brian Krebs' story for details.

I saw this NANOG post refer to a FrSIRT advisory, but the relevant FrSIRT page has been removed (though not without trace).

In case anyone has forgotten, I remember attending the presentation by FX at Black Hat USA 2003 involving heap-based overflows in Cisco IOS. It was an extension of work he presented at Black Hat USA 2002. His Ultimaratio page has more info, and he published a Phrack article and an exploit for Cisco IOS 11.x.

Maybe Mike Lynn's mistake was working for a security company (ISS) and with a vendor (Cisco) and being a US citizen? Nothing bad happened to FX before, during, or after his presentation. Were FX's vulnerabilities considered too old to be a problem, but Mike's too recent?

I've been poking around at the Cisco Web site, and I noticed that in April and May they began a massive removal of old IOS images. This product bulletin 2863, Cisco IOS Software Center Update: Effective April 2005 (.pdf) outlines the process, and this Cisco IOS Software Center Update Q&A (.pdf) answers questions on the clean-up. While this could have been planned well before Mike Lynn notified Cisco of his discoveries, it's also possible Cisco took steps to remove vulnerable IOS images because of his findings. Either way, removing old vulnerable images is an excellent idea.

Update: I found a few interesting NANOG posts by James Baldwin, who is in Las Vegas and spoke to Mike Lynn. According to Mr. Baldwin, "Lynn did not have NDA access to the Cisco source." Lynn "developed this information based on publicly available IOS images. There were no illegal acts committed in gaining this information nor was any proprietary information provided for its development." Cisco had initially approved the talk. "My [Baldwin's] understanding is that this has been fixed and no current IOS images were vulnerable to the techniques he was describing. ISS, Lynn, and Cisco had been working together for months on this issue before the talk."

Finally, Baldwin writes: "There was no source or proof of concept code released, and duplicating the information would only provide you a method to increase the severity of other potential exploits. It does not create any new exploits. Moreover, the fix for this was already released, and you have not been able to download a vulnerable version of the software for months; however, there was no indication from Cisco regarding the severity of the required upgrade. That is to say, they knew in April that arbitrary code execution was possible on routers, they had it fixed by May, and we're hearing about it now, and if Cisco had its way we might still not be hearing about it."

I guess that explains the IOS clean-up. Lynn might have shown a way to develop exploits based on analyzing differences in IOS images. According to Cisco's End User License Agreement:

"Customer shall have no right, and Customer specifically agrees not to:...

(iii) reverse engineer or decompile, decrypt, disassemble or otherwise reduce the Software to human-readable form, except to the extent otherwise expressly permitted under applicable law notwithstanding this restriction;"

Maybe Cisco is enforcing its EULA in the most draconian method it can imagine?
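If Lynn's method really did involve comparing patched and unpatched IOS images, the core idea, stripped of all the hard parts, is just locating where two binaries differ. A toy sketch of that concept follows; the byte strings are invented, and real image analysis involves disassembly, not a raw byte comparison:

```python
# Toy illustration of the image-diffing idea. The byte strings below are
# hypothetical; a real patched/unpatched comparison would operate on whole
# firmware images and then disassemble the regions that changed.
def diff_offsets(old: bytes, new: bytes) -> list[int]:
    """Return the offsets at which two equal-length blobs differ."""
    return [i for i, (a, b) in enumerate(zip(old, new)) if a != b]

if __name__ == "__main__":
    old_image = b"\x90\x90\xeb\xfe\x00"   # hypothetical "vulnerable" bytes
    new_image = b"\x90\x90\x90\x90\x00"   # hypothetical "patched" bytes
    print(diff_offsets(old_image, new_image))  # [2, 3]
```

The offsets that change between releases point an analyst at exactly the code a vendor touched, which is why silent fixes rarely stay silent.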

Thursday, July 28, 2005

Snort 2.4 Released

Snort 2.4.0 has been released. Here are the release notes. The obvious change in this release is the removal of all rules from the snort-2.4.0.tar.gz tarball. The rules are available separately. Marty assures me that the rule download page will have rules available for non-subscriber and non-registered Snort users by close of business today.

Update: All rules are available -- even those for unregistered users. Nice work Sourcefire.

Distributed Traffic Collection with Pf Dup-To

The following is another excerpt from my upcoming book titled Extrusion Detection: Security Monitoring for Internal Intrusions. I learned yesterday that it should be available the last week in November, around the 26th.

We’ve seen network taps that make copies of traffic for use by multiple monitoring systems. These copies are all exactly the same, however. There is no way using the taps just described to send port 80 TCP traffic to one sensor, and all other traffic to another sensor. Commercial solutions like the Top Layer IDS Balancer provide the capability to sit inline and copy traffic to specified output interfaces, based on rules defined by an administrator. Is there a way to perform a similar function using commodity hardware? Of course!

The Pf firewall introduced in Chapter 2 offers the dup-to keyword. This function allows us to take traffic that matches a Pf rule and copy it to a specified interface. Figure 4-17 demonstrates the simplest deployment of this sort of system.

Figure 4-17. Simple Pf Dup-To Deployment

First we must build a Pf bridge to pass and copy traffic. Here is the /etc/pf.conf.dup-to file we will use.


pass out on $int_if dup-to ($l80_if $l80_ad) proto tcp from any port 80 to any
pass in on $int_if dup-to ($l80_if $l80_ad) proto tcp from any to any port 80

pass out on $int_if dup-to ($lot_if $lot_ad) proto tcp from any port !=80 to any
pass in on $int_if dup-to ($lot_if $lot_ad) proto tcp from any to any port !=80

pass out on $int_if dup-to ($lot_if $lot_ad) proto udp from any to any
pass in on $int_if dup-to ($lot_if $lot_ad) proto udp from any to any

pass out on $int_if dup-to ($lot_if $lot_ad) proto icmp from any to any
pass in on $int_if dup-to ($lot_if $lot_ad) proto icmp from any to any

To understand this configuration file, we should add some implementation details to our simple Pf dup-to diagram. Figure 4-18 adds those details.

Figure 4-18. Simple Pf Dup-To Implementation Details

First consider the interfaces involved on the Pf bridge:

  • Interface sf0 is closest to the intranet. It is completely passive, with no IP address.

  • Interface sf1 is closest to the Internet. It is also completely passive, with no IP address.

  • Interface sf2 will receive copies of port 80 TCP traffic sent to it by Pf. It bears an arbitrary IP address. Access to this interface by other hosts should be denied by firewall rules, not shown here.

  • Interface sf3 will receive copies of all non-port 80 TCP traffic, as well as UDP and ICMP, sent to it by Pf. (For the purposes of this simple deployment, we are not considering other IP protocols.) It bears an arbitrary IP address. Access to this interface by other hosts should be denied by firewall rules, not shown here.

Now consider the two sensors.

  • Sensor 1 uses its interface sf2 to capture traffic sent to it from the Pf bridge. It bears an arbitrary IP address. Access to this interface by other hosts should be denied by firewall rules, not shown here.

  • Sensor 2 uses its interface sf3 to capture traffic sent to it from the Pf bridge. It bears an arbitrary IP address. Access to this interface by other hosts should be denied by firewall rules, not shown here.

One would have hoped the Pf dup-to function could send traffic to directly connected interfaces without the involvement of any IP addresses. Unfortunately, my testing revealed that assigning IP addresses to interfaces on both sides of the link is required. I used OpenBSD 3.7, but future versions may not have this requirement.

With this background, we can begin to understand the /etc/pf.conf.dup-to file.

  • The first set of declarations defines macros for the interfaces and IP addresses used in the scenario.

  • The first set of pass commands tells Pf to send port 80 TCP traffic to the packet capture interface on sensor 1. Two rules are needed: one for inbound traffic and one for outbound traffic.

  • The second set of pass commands tells Pf to send all non-port 80 TCP traffic to the packet capture interface on sensor 2. Again, two rules are needed.

  • The third and fourth sets of pass commands send UDP and ICMP traffic to sensor 2 as well.

Before testing this deployment, ensure Pf is running, and that all interfaces are appropriately configured and enabled. To test our distributed collection system, we retrieve the Google home page using Wget.

$ wget http://www.google.com/
=> `index.html'
Connecting to www.google.com:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]

[ <=> ] 1,983 --.--K/s

10:19:51 (8.96 MB/s) - `index.html' saved [1983]

Here is what sensor 1 sees on its interface sf2:

10:18:58.122543 IP >
S 101608113:101608113(0) win 32768

10:18:58.151066 IP >
S 2859013924:2859013924(0) ack 101608114 win 8190
10:18:58.151545 IP >
. ack 1 win 33580
10:18:58.153027 IP >
P 1:112(111) ack 1 win 33580
10:18:58.184169 IP >
. ack 112 win 8079
10:18:58.185384 IP >
. ack 112 win 5720
10:18:58.189840 IP >
. 1:1431(1430) ack 112 win 5720
10:18:58.190344 IP >
P 1431:2277(846) ack 112 win 5720
10:18:58.190483 IP >
F 2277:2277(0) ack 112 win 5720
10:18:58.192706 IP >
. ack 2277 win 32734
10:18:58.192958 IP >
. ack 2278 win 32734
10:18:58.204719 IP >
F 112:112(0) ack 2278 win 33580
10:18:58.232685 IP >
. ack 113 win 5720

Here is what sensor 2 sees on its interface sf3.

10:18:58.089226 IP >
64302+ A? (32)
10:18:58.113853 IP >
64302 3/13/13 CNAME,
A, A (503)

As we planned, sensor 1 only saw port 80 TCP traffic, while sensor 2 saw everything else. In this case, “everything else” meant a DNS request for Google's address. So why build a distributed collection system? This section presented a very simple deployment scenario, but you can begin to imagine the possibilities. Network security monitoring (NSM) advocates collecting alert, full content, session, and statistical data. That can be a great amount of strain on a single sensor, even if only full content data is collected.

By building a distributed collection system, NSM data can be forwarded to independent systems built specially for the tasks at hand. In our example, we offloaded heavy Web surfing activity to one sensor, and sent all other traffic to a separate sensor.

We also split the traffic passing function from the traffic recording function. The Pf bridge in Figure 4-18 is not performing any disk input/output (IO) operations. The kernel is handling packet forwarding, which can be done very quickly. The independent sensors are accepting traffic split out by the Pf bridge. The sensors can be built to perform fast disk IO. This sort of sensor load-balancing provides a way to apply additional hardware to difficult packet collection environments.
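The split the Pf bridge performs can be sketched in ordinary code for illustration. Everything here (the Packet type, the sensor names) is hypothetical; Pf does this work in the kernel, not in Python:

```python
# Illustrative sketch of the dup-to traffic split: port 80 TCP goes to one
# sensor, everything else to the other. The Packet type and sensor names
# are hypothetical stand-ins for the Pf rules shown earlier.
from dataclasses import dataclass

@dataclass
class Packet:
    proto: str   # "tcp", "udp", or "icmp"
    sport: int   # source port (0 for ICMP)
    dport: int   # destination port (0 for ICMP)

def choose_sensor(pkt: Packet) -> str:
    """Mirror the pf.conf logic: port 80 TCP to sensor1, all else to sensor2."""
    if pkt.proto == "tcp" and (pkt.sport == 80 or pkt.dport == 80):
        return "sensor1"
    return "sensor2"

if __name__ == "__main__":
    web = Packet("tcp", 49152, 80)   # outbound Web request
    dns = Packet("udp", 49153, 53)   # DNS lookup
    print(choose_sensor(web))  # sensor1
    print(choose_sensor(dns))  # sensor2
```

The point of the sketch is that the classification itself is cheap; the expensive work (disk IO, analysis) lands on whichever sensor receives the copy.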

Free Michael Lynn

Ex-ISS X-Force researcher Mike Lynn is in a world of hurt right now. Yesterday he delivered a briefing at Black Hat on Cisco security flaws. Lynn decided to resign from ISS instead of complying with the wishes of his employer and Cisco to keep his discoveries quiet. For a lot more detail, I strongly recommend reading the Brian Krebs Security Fix blog hosted by the Washington Post. Krebs is in Las Vegas and has spoken with Lynn, who "has been served with a temporary restraining order designed to prevent him from discussing any more details about the flaw...[and] is scheduled to appear in federal district court at 8:00 a.m. Thursday." (!) I think it's time to start a Free Michael Lynn campaign to pay for his legal bills.

Update: Within this Slashdot thread is a comment by someone claiming to be Mike Lynn. Here is Cisco's statement. Also, SecurityFocus has a good article with this statement:

"Lynn outlined a way to take control of an IOS-based router, using a buffer overflow or a heap overflow, two types of memory vulnerabilities. He demonstrated the attack using a vulnerability that Cisco fixed in April. While that flaw is patched, he stressed that the attack can be used with any serious buffer overrun or heap overflow, adding that running code on a router is a serious threat."

What is the problem, then? Maybe this?

"During his presentation, Lynn outlined an eight step process using any known, but unpatched flaw, to compromise a Cisco IOS-based router. While he did not publish any vulnerabilities, Lynn said that finding new flaws would not be hard."

Good grief. The outcome of this situation could be very important for the future of security research.

Update 2: Here is the relevant text on reverse engineering from the Digital Millennium Copyright Act.

Wednesday, July 27, 2005

Snort "Not Eligible" for Zero Day Initiative

I recently wrote about TippingPoint's Zero Day Initiative (ZDI), a pay-for-vulnerabilities program. Thank you to the poster (whom I will keep anonymous) for notifying me of this article Vendors Compete for Hacker Zero Days by Kevin Murphy. It features this quote:

"[C]ompetitors will have to sign agreements to the effect that they will not irresponsibly disclose the information, and that any data they provide to their own customers cannot be easily reverse engineered into an attack, he [3Com’s David Endler] said.

"'Some technology based on Snort would not be eligible because Snort by its nature is open,' Endler said, referring to the open-source IDS software. 'But there are products based on Snort that are closed. We’ll have to take it on a case-by-case basis.'"

This means Sourcefire will never be able to learn of ZDI vulnerabilities. Any registered Snort user can download Sourcefire VRT rules and see everything except rules younger than five days old. VRT subscribers have access to the latest rules immediately.

It sounds to me like the only "technology based on Snort" that would be "eligible" would be sensors provided by a managed security services provider, or sensors sold without access to the console and rule sets. Such vendors could add ZDI-inspired rules but never let users see them.

I never thought for a minute TippingPoint would do anything to help Sourcefire, as they are two major competitors in the (misnamed) IPS market.

Tuesday, July 26, 2005

Public Network Security Operations Class

I am happy to announce the first public Network Security Operations class is tentatively scheduled for the last week in September, starting Tuesday 27 September and ending Friday 30 September. The class is tentatively scheduled to be held at Nortel PEC in Fairfax, VA. I plan to offer 13 seats to the public, at a cost of $2995 per seat.

The course offers four sections, one per day:

  1. Network Security Monitoring: theory, tools, and techniques to detect sophisticated intruders

  2. Network Incident Response: network-centric means to contain and remediate intrusions

  3. Network Forensics: collect, protect, analyze, and present network-based evidence to prosecute or repel intruders

  4. Live Fire Exercises: apply the preceding three days of skills in an all-day, all-lab environment

More information is contained in either the color .pdf or the grayscale .pdf flyers.

Once I have confirmed the location and time, I will post those details at the TaoSecurity training page.

Interested parties should email me: richard at taosecurity dot com. Once I have set up payment processing, those who pay first will receive seats first. Those who only email their intent to attend the class will be contacted as alternates if paying students fail to follow through for some reason. Therefore, it is worthwhile to signal your intention to attend the class, if you are so interested.

I look forward to seeing you in Fairfax in September!

Unable to Specify Interface for TCP Portmapper

I'm crushed. Today while working on a FreeBSD system with multiple interfaces, I noticed the portmapper (rpcbind) listening where I didn't think it should be.

# sockstat -4 | grep rpcbind
root rpcbind 354 10 udp4 *:*
root rpcbind 354 11 udp4 *:*
root rpcbind 354 12 udp4 *:1007 *:*
root rpcbind 354 13 tcp4 *:111 *:*

The UDP version was listening on the interface I expected. What was the TCP version doing listening on all interfaces? Also, what was port 1007 UDP doing?

I checked my /etc/rc.conf file to see if I had messed up the syntax.


That looked ok to me. I double-checked with /etc/defaults/rc.conf.

# grep "^rpcbind" /etc/defaults/rc.conf
rpcbind_enable="NO" # Run the portmapper service (YES/NO).
rpcbind_program="/usr/sbin/rpcbind" # path to rpcbind, if you want a different one.
rpcbind_flags="" # Flags to rpcbind (if enabled).

I finally looked at the man page and clarified the -h switch.

-h Specify specific IP addresses to bind to for UDP requests. This
   option may be specified multiple times and is typically necessary
   when running on a multi-homed host. If no -h option is specified,
   rpcbind will bind to INADDR_ANY, which could lead to problems on a
   multi-homed host due to rpcbind returning a UDP packet from a
   different IP address than it was sent to. Note that when specifying
   IP addresses with -h, rpcbind will automatically add and
   if IPv6 is enabled, ::1 to the list.

OH NO. It only mentions UDP and not TCP. That's why I'm crushed. One of the characteristics I like about FreeBSD (and Unix in general) is the granular control over services enabled via simple text files. I should have been able to tell both UDP and TCP rpcbind versions to listen on a specified interface. That doesn't seem possible.

Now my only alternative is to firewall the interfaces where I do not want rpcbind to listen. That's a lousy solution in the Unix world. :(
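For what it's worth, a sketch of that firewalling workaround in Pf syntax might look like the fragment below. The interface name is a hypothetical placeholder for whichever interface should not expose the portmapper.

```
# Hypothetical Pf rules for /etc/pf.conf: block portmapper traffic on the
# interface where rpcbind should not be reachable. "xl1" is a placeholder.
block in quick on xl1 proto tcp from any to any port 111
block in quick on xl1 proto udp from any to any port 111
```

Load the ruleset with pfctl -f /etc/pf.conf once Pf is enabled. It works, but it is still treating a symptom the daemon's own configuration should have handled.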

Human Error Results in Being 0wn3d

Bill Brenner's article in the July 2005 Information Security magazine clued me in to a press release by the Computing Technology Industry Association (CompTIA). They announced the results of their third annual CompTIA Study on IT Security and the Workforce. From the press release:

"Human error, either alone or in combination with a technical malfunction, was blamed for four out of every five IT security breaches (79.3 percent), the study found. That figure is not statistically different from last year."

This study and the 2004 edition appear to be the source for other reports that claim 80% of security breaches are the result of human error. Note the CompTIA study says "Human error, either alone or in combination with a technical malfunction," is to blame.

Nevertheless, I am not surprised by this figure. I rarely perform an incident response for an organization that is beaten by a zero day exploit in the hands of an uber 31337 h@x0r. In most cases someone poorly configures a server, or doesn't patch it, or makes an honest mistake. The fact is many IT systems are complicated, and none are getting simpler. Administrators have too many responsibilities and too few resources. They are often directed by managers who have decided "business realities" outweigh security. The enterprise is not running a defensible network architecture and the level of network awareness is marginal or nonexistent. No network security or host integrity monitoring is done.

In such an environment, it is easy to see how a lapse in a firewall rule, a misapplied patch, or an incorrect application setting can provide the foothold a worm or attacker needs.

So what is my answer? No amount of preventative measures will ever stop all intrusions. I recommend applying as much protection as your resources will allow, and then monitor everywhere you can. If your monitoring doesn't help you identify a policy failure and/or intrusion, it will at least provide the evidence needed to remediate the problem, and then better prevent and/or detect the incident in the future.

Update: I found this Infonetics Research press release that stated the following:

"In Infonetics Research’s latest study, The Costs of Enterprise Downtime, North American Vertical Markets 2005, 230 companies with more than 1,000 employees each from five vertical markets—finance, healthcare, transportation/logistics, manufacturing, and retail—were surveyed about their network downtime...

'The finance and manufacturing verticals are bleeding the most, with the average financial institution experiencing 1,180 hours of downtime per year, costing them 16% of their annual revenue, or $222 million, and manufacturers are losing an average of 9% of their annual revenue,' said Jeff Wilson, principal analyst of Infonetics Research and author of the study...

Human error is the cause of at least a fifth of the downtime costs for all five verticals, and almost a third for financial institutions; this can only be fixed by adding and improving IT processes...

Security downtime is not a major issue anywhere, though it reaches 8% of costs within financial organizations."

New RSS Feed

My old RSS feed is reporting "Bandwidth Limit Exceeded. The server is temporarily unable to service your request due to the site owner reaching his/her bandwidth limit. Please try again later." Those looking for a new RSS feed can use the new feed address. I will try to get the new icon on the blog template when Blogger cooperates.

SC Magazine IPS Reviews

Recently I received the new SC Magazine and noticed a new Group Test addressing so-called intrusion prevention systems. The reviewer was Christopher Moody, but I was unable to get any sort of background information on him. He has written most of the recent SC Magazine Group Tests, however. As you can read in the story, or in this press release, the Sourcefire IS-2000 won SC Magazine's "Best Buy" award. From the review:

"Its high level of protection and simple rule writing using the Snort engine make it a good standalone product. But it is when it is used as part of the 3D System that it really takes off. Sourcefire’s Defense Center provides excellent centralized management and reporting, and its Real-time Network analysis appliance gives a wider look at the network to help secure it."

The Top Layer IPS 5500 Attack Mitigator was the SC Magazine Recommended product, even though it had a "small attack signature database compared to other products." Review readers will notice that all of the heavyweight IPS vendors were listed, including TippingPoint and ISS. In addition to Sourcefire, three (perhaps four) other products were Snort-based: Countersnipe, Barbedwire, and V-Secure. (I suspect XSGuard is Snort-based too, but I have no proof.) Did you notice that none of those three are part of Sourcefire's Certified Snort Integrator program? That means they are not allowed to apply VRT rule updates to their products.

Overall I do not have that much confidence in the quality of the review. I trust someone like Greg Shipley who seems to ask the right questions and back them up with real tests. See his recent firewall round-up as an example; at least they mention testing methodologies. I suspect Mr. Moody was limited by page space, but he could have provided more detail on the SC Magazine Web site. I do think that Snort + RNA is incredibly powerful, and I doubt there is a better solution available. I just don't think SC Magazine makes its judgements in a manner I find most helpful.

On a related note, the Open Source Snort Rules Consortium (OSSRC) is online; consider joining.

Monday, July 25, 2005

Thoughts on Web Application Security Consortium

Rather than post to his own blog, Aaron Higbee decided to bait me with a link to the Web Application Security Consortium's Web Security Threat Classification guide. Uh oh, there's that magic word -- "threat." Immediately I suspected this document's use of the word "threat" in the title might be problematic, as I doubted it would be a classification of the parties with the capabilities and intentions to exploit vulnerabilities in assets.

The document description states "The Web Security Threat Classification is a cooperative effort to clarify and organize the threats to the security of a web site. The members of the Web Application Security Consortium have created this project to develop and promote industry standard terminology for describing these issues. Application developers, security professionals, software vendors, and compliance auditors will have the ability to access a consistent language for web security related issues."

That sounds to me like an open invitation for a debate on their language!

Before you get too cross with me, note that I was pleasantly surprised to see most of the content correctly framed as "attacks." Other content was not labelled correctly. Here are a few examples, which I will let you assess before I suggest an alternative way of looking at these issues.

  1. A Brute Force attack is an automated process of trial and error used to guess a person's username, password, credit-card number or cryptographic key.

  2. Insufficient Authentication occurs when a web site permits an attacker to access sensitive content or functionality without having to properly authenticate.

  3. Weak Password Recovery Validation is when a web site permits an attacker to illegally obtain, change or recover another user's password.

  4. Credential/Session Prediction is a method of hijacking or impersonating a web site user.

Can you guess which terms are vulnerabilities, and which are attacks? The attacks are easy; they are 1 and 4. The document authors correctly call number 1 an "attack" in its description. The vulnerabilities are items 2 and 3. These are easy to spot, too. The document authors didn't want to call these conditions vulnerabilities, so they bailed out by using the phrases "occurs when" and "is when". This is a sure sign that the term is not an attack, but is something else. Here are the other vulnerabilities in the document, listed as "attacks":

  1. Insufficient Authorization is when a web site permits access to sensitive content or functionality that should require increased access control restrictions.

  2. Insufficient Session Expiration is when a web site permits an attacker to reuse old session credentials or session IDs for authorization.

  3. Automatic directory listing/indexing is a web server function that lists all of the files within a requested directory if the normal base file is not present.

  4. Information Leakage is when a web site reveals sensitive data, such as developer comments or error messages, which may aid an attacker in exploiting the system.

  5. Insufficient Anti-automation is when a web site permits an attacker to automate a process that should only be performed manually.

  6. Insufficient Process Validation is when a web site permits an attacker to bypass or circumvent the intended flow control of an application.

So, that's a great list. Add those six items and the earlier two and you have eight total vulnerabilities. At this point I would recommend breaking the document into two sections: (1) Vulnerabilities and (2) Attacks. The document itself should be renamed the Web Vulnerability and Attack Classification Guide.

If you think for a moment, you can even pair attacks properly labelled in the document with the relabelled vulnerabilities above. Consider item two in this second list -- Insufficient Session Expiration. Its extended description says

"Insufficient Session Expiration is when a web site permits an attacker to reuse old session credentials or session IDs for authorization. Insufficient Session Expiration increases a web site's exposure to attacks that steal or impersonate other users."

Notice the term "exposure"? That's a vulnerability that can be exploited by the following attacks listed in the guide:

  • Session Fixation is an attack technique that forces a user's session ID to an explicit value. [me: like old session credentials?]

  • Credential/Session Prediction is a method of hijacking or impersonating a web site user. [me: by predicting old session credentials?]

  • Abuse of Functionality is an attack technique that uses a web site's own features and functionality to consume, defraud, or circumvents access controls mechanisms. [me: like insufficient session expiration?]
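To make one pairing concrete, here is a toy Python sketch (my own hypothetical class and names, not anything from the WASC document) contrasting a session store that exhibits Insufficient Session Expiration with one that enforces a lifetime:

```python
import time

class SessionStore:
    """Toy session store; a ttl of None reproduces Insufficient Session Expiration."""

    def __init__(self, ttl=None):
        self.ttl = ttl          # lifetime in seconds; None means never expire
        self.created = {}       # session_id -> creation timestamp

    def create(self, session_id):
        self.created[session_id] = time.time()

    def is_valid(self, session_id):
        start = self.created.get(session_id)
        if start is None:
            return False
        if self.ttl is None:
            # The vulnerability: an old or stolen session ID works forever,
            # giving Session Fixation and Credential/Session Prediction
            # attacks an unlimited window.
            return True
        return (time.time() - start) < self.ttl
```

A stolen ID replayed against the vulnerable store succeeds indefinitely; against the expiring store, it fails once the ttl elapses. The vulnerability and the attacks that exploit it are clearly different things.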

So there you go Aaron. I've probably angered twenty-odd more security professionals by debating their security classification program. (At least I know Mike Shema on that list, and he owes me for leaving to attend a concert during an incident response!) I'm hoping that this quote from Pete Lindstrom (who recently emailed with a link to his blog) is accurate:

"'The need for consistent technical terminology for web related security issues has a significant impact on risk assessment and remediation activities,' Pete Lindstrom, research director of Spire Security, said in a statement Monday. 'Establishing a standard approach for identifying these issues, along with a common terminology that everyone can adhere to, will help to thwart the increasing security risks associated with Web applications.'"

I would also say the Web Application Security Consortium's Distributed Open Proxy Honeypots project (aka "proxypot") is an awesome idea. Again, I have no issue with the technical work by these groups, as their efforts far outweigh anything I have done in those fields. However, I would be very happy to see industry-wide consistency in core security terms.

Lancope's Take on NetFlow

Earlier this year I had a chance to try a Lancope Stealthwatch appliance. Recently Adam Powers from Lancope weighed in on the focus-ids list with ways NetFlow records can be best utilized for security purposes. This is part of a thread started by Andy Cuff (aka Talisker). To hear more from Lancope, check out their WebEx Wednesday at 11 AM eastern.

David Sames started a second interesting focus-ids thread about IDS evaluation. The thread evolved into a discussion of the functions of various security devices. After a great post by Devdas Bhagat, I joined the fray. You can even see a vendor say I make "wrongheaded argument[s]". Oooh, scary. :)

Thoughts on TippingPoint Zero Day Initiative Program

Through the accursed Slashdot I learned of TippingPoint's Zero Day Initiative program. (Incidentally, I just figured out that Slashdot is like Saturday Night Live: we all remember it being a lot better years ago, it stinks now, yet we still watch.) This CNet story by Joris Evers cites TippingPoint's rationale for the program:

"'We want to reward and encourage independent security research, promote and ensure responsible disclosure of vulnerabilities and provide 3Com customers with the world's best security protection,' David Endler, director of security research at TippingPoint, said in an interview."

This program is similar to the iDEFENSE Vulnerability Contributor Program launched in 2002 amidst much fanfare. This April 2003 interview with iDEFENSE VPC Manager Sunil James is also enlightening. Part of the VCP is a retention reward program that paid a $3,000 bonus to the Danish CIRT and $1,000 to l0rd_yup for vulnerabilities reported to iDEFENSE in the first quarter of 2005. Some iDEFENSE advisories give anonymous credit to vulnerability discoverers, like Sophos Anti-Virus Zip File Handling DoS Vulnerability, while others name their sources, like Lord Yup in Microsoft Word 2000 and Word 2002 Font Parsing Buffer Overflow Vulnerability. In some cases iDEFENSE Labs finds the hole, as in Adobe Acrobat Reader UnixAppOpenFilePerform() Buffer Overflow Vulnerability.

Thus far I have not heard much discussion about iDEFENSE's program, although the payout to vulnerability researchers seems dwarfed by the value iDEFENSE earns from their work. Nor have I heard many condemnations of the pay-for-bugs model, or of anyone suing iDEFENSE over a vulnerability produced through the VCP.

Looking at some of the details of the TippingPoint Zero Day Initiative, I found this item in their FAQ amusing:

"Since 3Com and TippingPoint customers are protected prior to the disclosure, are they aware of the vulnerability?

In order to maintain the secrecy of a researcher's vulnerability discovery until a product vendor can develop a patch, 3Com and TippingPoint customers are only provided a generic description of the filter provided but are not informed of the vulnerability. Once details are made public in coordination with the product vendor, TippingPoint's Digital Vaccine® service for the Intrusion Prevention System provides an updated description so that customers can identify the appropriate filters that were protecting them. In other words, 3Com and TippingPoint will be protected from the vulnerability in advance, but they will not be able to tell from the description what the vulnerability is."

Anyone who reads this blog knows I think this sort of "protection through secrecy" is ridiculous. If I can't figure out how a product is making its decision to "protect" me, I will try to avoid it. I certainly wouldn't want it blocking traffic on my behalf. What about anti-virus software, you ask? I don't run it on my servers!

This is also funny:

"Why are you giving advance notice of the vulnerability information you've bought to other security vendors, including competitors?

We are sharing with other security vendors in an effort to do the most good with the information we have acquired. We feel we can still maintain a competitive advantage with respect to our customers while facilitating the protection of a customer base larger than our own.

What types of security vendors are eligible for the advanced notice?

In order to qualify for advanced notice, the security vendors must be in a position to remediate or provide protection of vulnerabilities with their solution, while not revealing details of the vulnerability itself to customers. The security vendor's product must also be resistant to discovery of the vulnerability through trivial reverse engineering. An example of such a vendor would be an Intrusion Prevention System, Intrusion Detection System, Vulnerability Scanner or Vulnerability Management System vendor."

I am eager to see what vendors can live up to these requirements. Snort rules won't, and neither will Nessus NASL scripts.

I am uneasy about programs like this. Consider modifying Mr. Endler's statement in this manner:

"We want to reward and encourage independent security research, promote and ensure responsible creation of viruses and provide our customers with the world's best security protection."

A virus is malware launched by a threat; it's not a vulnerability. Publication of a vulnerability does not explicitly mean publication of new code to be used by threats. Still, it's not that difficult to move from vulnerability disclosure to exploit creation.

TippingPoint is basically paying researchers to justify the vendor's existence. No vulnerabilities = no need to buy a TippingPoint IPS. More vulnerabilities means more opportunities for threats to craft exploit code, and that justifies buying more IPSs.

How is this different from the Mozilla Bug Bounty program, you might ask? When Mozilla pays researchers to report vulnerabilities in Mozilla code, Mozilla is effectively outsourcing its security quality assurance program. This is done to improve the quality of the software released by Mozilla. When TippingPoint pays researchers to report vulnerabilities in anyone's software, and then keeps those vulnerabilities to itself (followed by limited disclosure), TippingPoint is justifying its product's existence.

You might also wonder what I think of Microsoft's $250,000 bonus to those who expose virus writers. I have no problems with such a program, and I see it as another way to remove threats from the streets.

Sourcefire Certified Snort Integrator Program

Did you see Sourcefire's press release on its Certified Snort Integrator Program? If you're not in this program, and you use Snort to provide services or products to third parties, you can't deploy or sell sensors with Sourcefire VRT rule sets. The only exception involves major release versions of Snort, e.g., 2.3.0 or 2.4.0, each of which is packaged with the latest rules at the day of release.

The press release says "charter members of the program include: Astaro, BRConnection, Catbird Networks, Counterpane Internet Security, e-Cop, Netreo, NTT DATA CORPORATION (Japan), ProtectPoint, SecurePipe, StillSecure, VarioSecure Networks, VeriSign, Voyant Strategies and WatchGuard."

If you own a Snort-based appliance or contract with a third party to provide Snort-based services, is your vendor on this list? If not, ask your vendor why not, and how they intend to keep their rules up-to-date. If you run Snort on your own to protect your own enterprise, this new program does not affect you. You can still register to become a VRT rules user for free and get the latest rules five days after they are provided to VRT subscribers.

1000th Post

This is the 1000th TaoSecurity Blog post. Thankfully, after being broken for months, Blogger fixed the post tracking counter in time for me to notice this milestone.

I started the blog on 8 January 2003 as a place to post word of new book reviews. I haven't read a new book since May, because I have been extremely busy launching my new company TaoSecurity. I plan to resume reading books very shortly, probably starting with Extreme Exploits.

The blog has now evolved into a place where I record tips on using FreeBSD and other operating systems and applications. I also post thoughts on network security monitoring and related security topics. I constantly refer back to posts here to remember how I configured a program or what my thoughts were on a certain subject. I detest keeping bookmarks, so I try to store anything of value here. A bookmark has no context and says nothing about how or why I recorded it. In brief, this blog helps me keep a grip on developments in the tech world.

Looking ahead, I have two new projects in store: the TaoSecurity Podcast and a resource I'm calling You can expect to read more about these in the fourth quarter of this year. I appreciate everyone who reads this blog and I especially enjoy reading your comments and emails. Our next milestone is the blog's third birthday in January, so I hope to see you then!

FreeBSD Status Report Second Quarter 2005

The latest FreeBSD Status Report makes for interesting reading. Many of the ongoing tasks are Google Summer of Code projects. Nmap author Fyodor posted that Google is spending $2 million to fund projects this summer, including ten for Nmap itself. I highly commend Google for devoting a small portion of its market capitalization to these coding efforts.

  • Emily Boyd will redesign Previous work includes the PostgreSQL Web site. Previews are posted here.

  • Dario Freni is reengineering and rewriting FreeSBIE to include it in the source tree.

  • Andre Oppermann is trying to raise enough money to fund three full months of dedicated development on improving the TCP/IP stack. I will contact the FreeBSD Foundation to see if they will accept tax-deductible donations on his behalf.

  • Chris Jones is working on making gvinum ready for prime time.

  • Andrew Thompson has an OpenBSD-like if_bridge interface ready for FreeBSD 6.0.

  • Andrew Turner is integrating into FreeBSD the BSD Installer originally used in DragonFly BSD and then FreeSBIE.

  • Brian Wilson is working on journalling for UFS. This Slashdot post examines different journalling options.

There are also many improvements in SMP and TrustedBSD that will appear in FreeBSD 6.0, probably in September.

Friday, July 22, 2005

Ron Gula Podcast

I finally got a chance to listen to a new podcast with Ron Gula. Sondra Schneider from Security University interviewed Ron. The podcast lasts about 26 minutes and discusses Ron's experience as an NSA red team aggressor and his work at BBN.

I specifically liked Ron's discussion of the difference between access control and monitoring. He said making a firewall change affects customer service level agreements; hence, firewalls were part of operations as they had direct impact on moving packets. Monitoring was typically not an operational function, because it was passive and was not access control. Ron said IPSs need to be treated as part of operations (they are a firewall, after all) because they block traffic.

Ron also pointed out confusion between credit card theft and identity theft. Some people consider the two events to be the same. This is not the case, since recovering from a stolen credit card is much easier.

Here is Ron's bio, for those of you not familiar with him.

Ron Gula - President and Chief Technical Officer, Tenable Network Security

Mr. Gula was the original author of the Dragon IDS and CTO of Network Security Wizards, which was acquired by Enterasys Networks. At Enterasys, Mr. Gula was Vice President of IDS Products and worked with many top financial, government, security service providers and commercial companies to help deploy and monitor large IDS installations. Mr. Gula was also the Director of Risk Mitigation for US Internetworking and was responsible for intrusion detection and vulnerability detection for one of the first application service providers. Mr. Gula worked for BBN and GTE Internetworking where he conducted security assessments as a consultant, helped to develop one of the first commercial network honeypots and helped develop security policies for large carrier-class networks. Mr. Gula began his career in information security while working at the National Security Agency conducting penetration tests of government networks and performing advanced vulnerability research. Mr. Gula has a BS from Clarkson University and an MSEE from University of Southern Illinois. Ron Gula was the recipient of the 2004 Techno Security Conference "Industry Professional of the Year" award.

FreeBSD Quality

The topic of the quality of FreeBSD has recently appeared in several places. Earlier this week SecurityFocus reported on the results of a study by Coverity. From Coverity's 27 June 2005 press release:

Coverity "released software defect and security vulnerability results for FreeBSD 6.0... [and] found 306 software defects in FreeBSD's 1.2 million lines of code, or an average of 0.25 defects per 1,000 lines of code."
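The reported density is easy to verify (illustrative arithmetic only):

```python
defects = 306
lines_of_code = 1_200_000

# Defect density is usually quoted per thousand lines of code (KLOC)
density_per_kloc = defects / (lines_of_code / 1000)

# 306 / 1200 = 0.255, which matches the reported "0.25 defects
# per 1,000 lines of code" once rounded
print(density_per_kloc)
```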

That is interesting, considering they did the study well over a month ago, before 6.0 was even in BETA status. Also:

"FreeBSD security is getting better very quickly - over the course of a year, FreeBSD's code size doubled, while the total number of defects went down by 50%."

The SecurityFocus story made this observation:

"Not all the potential flaws found by analysis tools are security holes. For FreeBSD, while 306 problems were flagged by Coverity's software, only 5 issues could be triggered by user input. The software classified another 12 vulnerabilities as buffer overruns, another potentially serious security issue. The FreeBSD project has analyzed the flaws and fixed the issues."

For some commentary on the study, check out this thread.

Over on the freebsd-stable mailing list, Alexey Yakimovich started a Quality of FreeBSD thread by complaining about ATA errors. I thought Robert Watson's reply provided very useful insights into the problems of operating system development and testing.

Thursday, July 21, 2005

Visa and AmEx Pull the Plug on CardSystems

Thanks to Richard Stiennon for informing me that Visa and American Express will no longer allow CardSystems Solutions to process their credit cards. I am stunned, but in a good way. If companies begin to take security seriously, I will be very pleased. If this turns into a rationale to justify the current "compliance = security" mindset, then nothing will change and more organizations will be compromised.

The CardSystems news page reported yesterday that John Perry, President and CEO of CardSystems, "look[s] forward to the opportunity to share CardSystems' story with the [Congressional] Subcommittee." I found the press release by the House Financial Services Subcommittee on Oversight and Investigations saying the hearing is today at 10 am.

BSD Certification Group Publishes Survey Results

Yesterday the BSD Certification Group published the results of their task analysis survey. The 147 page report is available here.

I found these excerpts interesting:

The survey saw an "often expressed desire to see the eventual certifications emphasize advanced achievement and mastery of Unix knowledge in general and BSD usage in particular. Yet, desires that the certification be difficult to obtain were balanced by the concern to not neglect younger, enter [sic] level candidates or those more experienced who are coming to BSD from other computing platforms."

"A proposition that specific knowledge of all BSDs be required was rejected by most in favor of emphasis on general Unix concepts, with an understanding of how and why BSD is unique. 'Linux vs. BSD' style topics were commonly rejected. A focus on BSD similarities instead of BSD differences was more often expressed. Interestingly, the least preference was for coverage of only a single BSD."

Among all respondents, almost 78% used FreeBSD, almost 41% used OpenBSD, almost 16% used NetBSD, and almost 4% used DragonFly BSD. The "Other BSD" group was bigger than DragonFly at almost 7%! Many Darwin and Mac OS users reported in as "other."

A huge portion of the survey involves reproducing survey taker comments. I am not sure how the document itself was created, but I cannot use Adobe Reader to highlight and copy text into documents like this blog. The .pdf is also not searchable.

The survey represents a massive amount of work by the BSD Certification Group and I thank them for their efforts! You can read the press release here.

Tuesday, July 19, 2005

Excerpt from Network Forensics Chapter

A crucial component of using trusted tools and techniques is ensuring that the network evidence collected by a sensor can be read and analyzed in another environment. This may seem like an obvious point, but consider my recent dismay when I tried to analyze the following trace supposedly captured in Libpcap format. I started by using the Capinfos command packaged with Ethereal. On a regular trace, Capinfos lists output like the following.

bourque:/home/analyst$ capinfos goodtrace
File name: goodtrace
File type: libpcap (tcpdump, Ethereal, etc.)
Number of packets: 1194
File size: 93506 bytes
Data size: 213308 bytes
Capture duration: 342.141581 seconds
Start time: Thu Jun 23 14:55:18 2005
End time: Thu Jun 23 15:01:01 2005
Data rate: 623.45 bytes/s
Data rate: 4987.60 bits/s
Average packet size: 178.65 bytes

On the trace in question, Capinfos produced this odd output.

bourque:/home/analyst$ capinfos bad2.tcpdump.052705
capinfos: An error occurred after reading 1 packets from
"bad2.tcpdump.052705": File contains a record that's not valid.
(pcap: File has 1701147252-byte packet, bigger than
maximum of 65535)

That’s disturbing. Something is wrong with this trace. Tcpdump can’t read anything beyond one packet.

bourque:/home/analyst$ tcpdump -n -r bad2.tcpdump.052705
reading from file bad2.tcpdump.052705, link-type EN10MB
16:57:20.259256 IP >
P 2134566659:2134567683(1024) ack 3376746668 win 24840
tcpdump: pcap_loop: bogus savefile header

Something is wrong with the bad2.tcpdump.052705 trace. Perhaps I could use the Editcap program, also bundled with Ethereal, to convert it from its present form to something recognizable by Tcpdump?

bourque:/home/analyst$ editcap -v bad2.tcpdump.052705
File bad2.tcpdump.052705 is a Nokia libpcap (tcpdump) capture file.

That looks promising. The bad2.tcpdump.052705 file may be a trace captured from a Nokia version of Libpcap. Editcap reports it understands the following five variations on the Libpcap format:

  • libpcap - libpcap (tcpdump, Ethereal, etc.)

  • rh6_1libpcap - RedHat Linux 6.1 libpcap (tcpdump)

  • suse6_3libpcap - SuSE Linux 6.3 libpcap (tcpdump)

  • modlibpcap - modified libpcap (tcpdump)

  • nokialibpcap - Nokia libpcap (tcpdump)

The four options after the first represent various vendor tweaks to the Libpcap library. They are nothing but a source of headaches for network investigators and security vendors. However, perhaps we can use Editcap to convert from the reported Nokia variant to a standard Libpcap format.

bourque:/home/analyst$ editcap bad2.tcpdump.052705
editcap: An error occurred while reading "bad2.tcpdump.052705":
File contains a record that's not valid.
(pcap: File has 1701147252-byte packet, bigger than maximum
of 65535)

Unfortunately, that process fails. In a last-ditch attempt to read this data, we move the trace to a Windows system equipped with Tethereal and the WinPcap library.

D:\>Tethereal -n -r bad2.tcpdump.052705
1 0.000000 e0:a6:08:00:45:00 -> 50:c8:00:0d:65:18 LLC I,
N(R)=0, N(S)=32; DSAP b2 Group, SSAP SNA Command
tethereal: "bad2.tcpdump.052705" appears to be damaged or corrupt.
(pcap: File has 1701147252-byte packet, bigger than maximum
of 65535)

Again, we are unable to read this trace. This is an extreme example of collecting traffic on one system and not being able to read it elsewhere. All is not completely lost, however. The raw file itself is still available. Using a hex viewer like hd, we can see the raw file contents.

bourque:/home/analyst$ hd bad2.tcpdump.052705 > bad2.tcpdump.052705.hd

bourque:/home/analyst$ less bad2.tcpdump.052705.hd

4d 49 4d 45 2d 76 65 72 73 69 6f 6e 3a 20 31 2e MIME-version: 1.
30 0d 0d 0a 58 2d 4d 49 4d 45 4f 4c 45 3a 20 50 0...X-MIMEOLE: P
72 6f 64 75 63 65 64 20 42 79 20 4d 69 63 72 6f roduced By Micro
73 6f 66 74 20 4d 69 6d 65 4f 4c 45 20 56 36 2e soft MimeOLE V6.
30 30 2e 32 39 30 30 2e 32 31 38 30 0d 0d 0a 58 00.2900.2180...X
2d 4d 61 69 6c 65 72 3a 20 4d 69 63 72 6f 73 6f -Mailer: Microso
66 74 20 4f 75 74 6c 6f 6f 6b 2c 20 42 75 69 6c ft Outlook, Buil
64 20 31 30 2e 30 2e 36 36 32 36 0d 0d 0a 43 6f d 10.0.6626...Co
6e 74 65 6e 74 2d 74 79 70 65 3a 20 6d 75 6c 74 ntent-type: mult
69 70 61 72 74 2f 72 65 70 6f 72 74 3b 0d 0d 0a ipart/report;...
20 62 6f 75 6e 64 61 72 79 3d 22 2d 2d 2d 2d 3d boundary="----=
5f 4e 65 78 74 50 61 72 74 5f 30 30 30 5f 30 30 _NextPart_000_00

Although Tcpdump and related Libpcap tools cannot read the trace due to the presence of corruption or a non-standard format, the ASCII content can still be read.
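Why does an email message masquerading as a trace produce a "1701147252-byte packet" complaint? A libpcap reader trusts the four-byte incl_len field in each per-packet record header; when those bytes are actually ASCII text, the decoded "length" is absurd. A hypothetical minimal parser (my sketch, not the actual Tcpdump source) illustrates the check:

```python
import struct

def read_first_record(data):
    """Parse the first libpcap record header and apply the snaplen
    sanity check that yields the 'bigger than maximum of 65535' error."""
    magic = data[:4]
    if magic == b"\xd4\xc3\xb2\xa1":
        endian = "<"            # little-endian capture file
    elif magic == b"\xa1\xb2\xc3\xd4":
        endian = ">"            # big-endian capture file
    else:
        raise ValueError("bad magic number; not a libpcap file")
    # Record header follows the 24-byte global header:
    # ts_sec, ts_usec, incl_len (bytes saved), orig_len (bytes on the wire)
    ts_sec, ts_usec, incl_len, orig_len = struct.unpack(
        endian + "IIII", data[24:40])
    if incl_len > 65535:
        # ASCII text interpreted as a length produces absurd values
        # like the 1701147252 reported above
        raise ValueError("File has %d-byte packet, bigger than maximum "
                         "of 65535" % incl_len)
    return incl_len
```

Vendor variants such as the Nokia format alter this record header layout, which is one reason different tools can disagree about the same file.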

It would have helped to know more about the system and tools used to capture the bad2.tcpdump.052705 trace when trying to decode it. Therefore, I recommend saving data like the following and bundling it with any traces shared with other investigators. On a Unix system, record the Uname, Tcpdump, and Tethereal versions.

bourque:/home/analyst$ uname -a
5.4-RELEASE #0: Thu Jun 23 14:29:51 EDT 2005 i386

bourque:/home/analyst$ tcpdump -V
tcpdump version 3.8.3
libpcap version 0.8.3

bourque:/home/analyst$ tethereal -v
tethereal 0.10.11
Compiled with GLib 1.2.10, with libpcap 0.8.3, with libz 1.2.2,
without libpcre, without UCD-SNMP or Net-SNMP, without ADNS.
NOTE: this build doesn't support the "matches" operator for
Ethereal filter syntax.
Running with libpcap version 0.8.3 on FreeBSD 5.4-RELEASE.

On Windows, use the Sysinternals program Psinfo to record system information, and then obtain version numbers from any installed packet capture programs like Tethereal.


PsInfo v1.63 - Local and remote system information viewer
Copyright (C) 2001-2004 Mark Russinovich
Sysinternals -

System information for \\ORR:
Uptime: 0 days 0 hours 45 minutes 49 seconds
Kernel version: Microsoft Windows 2000, Uniprocessor
Product type: Professional
Product version: 5.0
Service pack: 4
Kernel build number: 2195
Registered organization: TaoSecurity
Registered owner: Richard Bejtlich
Install date: 5/15/2005, 3:08:21 PM
Activation status: Not applicable
IE version: 6.0000
System root: C:\WINNT
Processors: 1
Processor speed: 750 MHz
Processor type: Intel Pentium III
Physical memory: 384 MB
Video driver: ATI Mobility M3

D:\>"c:\Program Files\Ethereal\Tethereal" -v
tethereal 0.10.11
Compiled with GLib 2.4.7, with WinPcap (version unknown), with
libz 1.2.2,with libpcre 4.4, with Net-SNMP 5.1.2, with ADNS.
Running with WinPcap (3.0) on Windows 2000 Service Pack 4,
build 2195.

Saving this information to a text file in the directory where network packets are recorded may save another investigator time and frustration when analyzing traces. This information also provides more solid footing when defending the nature of the collection process to a legal body or human resources panel.
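Collecting those details can be automated. The following is a hypothetical helper (my sketch, with an assumed file name, not something from the chapter) that records OS and capture-tool version information next to a trace:

```python
import platform
import subprocess

def record_capture_environment(path="capture-environment.txt"):
    """Save OS details and capture-tool version banners to a text file
    so another analyst can later reconstruct the collection environment."""
    lines = [platform.platform(), " ".join(platform.uname())]
    for tool in (["tcpdump", "-V"], ["tethereal", "-v"]):
        try:
            # tcpdump prints its version banner on stderr, so keep both streams
            result = subprocess.run(tool, capture_output=True, text=True)
            lines.append((result.stdout + result.stderr).strip())
        except FileNotFoundError:
            lines.append("%s: not installed on this system" % tool[0])
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")
    return path
```

Run it in the directory holding your traces and ship the resulting text file along with the packet captures.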

Monday, July 18, 2005

Scary New Dangers in Cyberspace

I sometimes watch TV, and I happened to catch a story on ABC World News Tonight called "Your Computer's Stealth Identity Thief." I listened carefully and learned about something scary called a "keylogger." I even saw some cool shots of Symantec's cyber ninjas tapping away on their uber-31337 keyboards. I really paid attention to the tips to help protect [my]self against key logging, spyware, and other computer viruses like "Do not click OK on pop-up windows without first reading them thoroughly." The next time I see a pop-up that says "It's ok, I won't 0wn j00," I'll feel better!

Obviously I am jaded by stories about old technology. For Pete's sake, Bugbear from mid-2003 had a keylogger built in. I'm sure there are even older examples out there.

Worse, none of the "tips" mention the steps that would really make a difference, listed here from least to most impact on user habits:

  • Patch your system.

  • Don't browse the Web or read email as administrator or root.

  • Use an alternative Web browser and mail client.

  • Don't run Windows.

Instead we're told to "Use a firewall to help prevent any unauthorized computer activity." Good grief.

News from Visa on Payment Card Industry Standards

Today I got an email from Visa about their participation in the Payment Card Industry standards. They wrote:

"A key component of PCI Data Security Standard implementation success is merchant and service provider compliance. When Standard requirements are enforced, they can provide a well-aimed defense against data exposure and compromise. This is why on-site PCI validation assessments performed by Visa-approved Qualified Data Security Companies (QDSC) have become increasingly critical in today’s environment. The proficiency with which a QDSC conducts an assessment can have a tremendous impact on the consistent and proper application of PCI measures, and controls. Given this very important fact, Visa is modifying its process to qualify security companies that choose to take on the role of a QDSC...

At a high level, to meet the new qualification requirements, security companies must: (a) apply as a firm for qualification in the program; (b) provide documentation of financial stability, technical capability, and industry experience; (c) qualify individual employees to perform the assessments; and (d) execute an agreement with Visa governing performance.

We are now accepting applications for PCI Qualified Data Security Companies. Those new and existing companies that wish to begin or continue participating need to qualify through this new process and submit the new qualification application by August 18, 2005."

The Visa CISP assessors (check the URL -- it says "accessors") page lists 30 companies currently certified by Visa as Qualified Data Security Companies (QDSCs).

Does anyone want to share thoughts on this program?

Stiennon on Enforcement

Richard Stiennon's blog makes a great point today. He says

"The entire IT security market is focused on protections. This is great as more and more protections by default are deployed. But I believe that enforcement actions must be taken as well. There is some sign that cooperation between enforcement agencies in the UK, Israel, and Russia have been effective. The most important was the breaking up of a ring of cyber-extortionists in 2003 that dramatically slowed the number of DDOS incidents.

As it will be a while before prosperity finds its way to every corner of the globe it is imperative that law enforcement agencies start working together to track down and jail cyber criminals now."

He is completely correct. Remember the risk equation: Risk = Threat x Vulnerability x Cost (of asset). We security practitioners (and our clients) can really only influence the vulnerability factor, and we usually can't decrease the value of an asset either. Only those in law enforcement or the military can take direct action against threats. No amount of countermeasures can remove all vulnerabilities and keep a determined adversary from exploiting a target, so making the threat go to zero is the only way to make risk go to zero.
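The multiplicative nature of the equation is easy to see in a short sketch (the numbers below are purely illustrative, not measurements of anything):

```python
# Risk = Threat x Vulnerability x Cost, treated as a simple multiplicative model.
# All values are illustrative only.

def risk(threat: float, vulnerability: float, cost: float) -> float:
    """Any single factor at zero drives the whole product to zero."""
    return threat * vulnerability * cost

# Practitioners can usually reduce only the vulnerability factor.
baseline = risk(threat=1.0, vulnerability=0.5, cost=100_000)   # 50000.0
hardened = risk(threat=1.0, vulnerability=0.25, cost=100_000)  # 25000.0, reduced but nonzero
# Only removing the threat entirely makes risk zero.
no_threat = risk(threat=0.0, vulnerability=0.5, cost=100_000)  # 0.0
```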

Stiennon also points out a fascinating Privacy Rights Clearinghouse chronology of data breaches since the ChoicePoint incident.

Sunday, July 17, 2005

Draft of Extrusion Detection Submitted for Copyediting

I am happy to report that I just submitted the final draft of my next book Extrusion Detection: Security Monitoring for Internal Intrusions to my publisher, Addison-Wesley. The new book is a sequel to The Tao of Network Security Monitoring: Beyond Intrusion Detection. I think readers will find the new book very interesting. Thus far my reviewers have provided positive feedback.

For those interested in the mechanics of book writing: I thought of the idea last summer, just after my first book arrived. I signed a contract in November, then began writing in January. My first due date was 1 April for half the book in draft form, followed by the rest of the book in draft form by 1 June. I've been working on addressing reviewer feedback since late June, and now the book is ready for copyediting.

The chapter-level table of contents is listed next.

  1. Network Security Monitoring Revisited

  2. Defensible Network Architecture

  3. Extrusion Detection Illustrated

  4. Enterprise Network Instrumentation

  5. Layer 3 Network Access Control (by Ken Meyers)

  6. Traffic Threat Assessment

  7. Network Incident Response

  8. Network Forensics

  9. Traffic Threat Assessment Case Study

  10. Malicious Bots (by Mike Heiser)

Furthermore, there are these elements:

  • Foreword by Marcus Ranum

  • Preface

  • Appendix A. Collecting Session Data in an Emergency

  • Appendix B. Minimal Snort Installation Guide

  • Appendix C. Survey of Enumeration Methods (by Ron Gula)

  • Appendix D. Open Source Host Enumeration (by Rohyt Belani)

I'm estimating the book will be between 450 and 500 pages, but I usually err on the low side. Expect to see the book on shelves in December 2005 or January 2006. I'll probably provide excerpts as publication approaches as well.

You can also get a thorough look at material from the new book at day two of my class at USENIX Security in two weeks. If I am accepted to USENIX LISA in December, I hope to teach three days. The third day will also be based on Extrusion.

Friday, July 15, 2005

FreeBSD 6.0-BETA1 Available

The availability of FreeBSD 6.0-BETA1 was just announced. I am excited to see this release approaching. Here are a few excerpts from the release announcement thread that may be of interest.

  • Colin Percival: "The FreeBSD Security Team will support FreeBSD 5.x until at least the end of September 2007."

  • Colin Percival: "If I was deploying a new server today, I'd install FreeBSD 5.4. If I were planning on installing a new server next month, I'd install FreeBSD 6.0-BETA-whatever-number-we're-up-to-by-then."

  • Scott Long: "There will be a 5.5 release this fall and possibly a 5.6 a few months after that. Per the standard procedure, the security team will support the branch for 2 years after the final release. There will likely be other developers who have an interest in backporting changes to RELENG_5 for some time to come, just as has been done with RELENG_4. So the earliest that RELENG_5 will be de-supported is late 2007."

  • Scott Long: "Part of the purpose of moving quickly on to RELENG_6 is so that the migration work for users from 5.x to 6.x is very small. 6.x is really just an evolutionary step from 5.x, not the life-altering revolutionary step that 4.x->5.x was. It should be quite easy to deploy and maintain 5.x and 6.x machines side-by-side and migrate them as the need arises. We don't want people to be stranded on RELENG_5 like they were with RELENG_4. 6.x offers everything of 5.x, but with better performance and (hopefully) better stability. If you're thinking about evaluating 5.x, give 6.0 a try also."

The release schedule shows a 15 August release announcement, but I expect to see 6.0 RELEASE in mid-September. I am sure the FreeBSD release team will want 6.0 to be a solid product, and the todo list still contains multiple important outstanding issues.

New Libpcap and Tcpdump Available

Yesterday Libpcap 0.9.3 and Tcpdump 3.9.3 were released. The changelog lists "Support for sending packets" as a new feature. This is the biggest release since 0.8.3/3.8.3 in March of last year. I hope to see the FreeBSD ports tree updated to include these new versions, although eventually they will be imported into the base system.

Thursday, July 14, 2005

Network Trace Archival and Retrieval

I don't pay close enough attention to the Pcap mailing lists. While doing research on WinPcap, I learned of a new project hosted at the WinPcap site called Network Trace Archival and Retrieval (NTAR). The Web site says "the main objective of NTAR is to provide an extensible way to store and retrieve network traces to mass storage."

I found this post in which NTAR developer Gianluca Varenni describes NTAR as "a working prototype of a library that reads and writes the PCAP-NG format." PCAP-NG refers to the PCAP Next Generation Dump File Format, documented in an expired IETF Internet-Draft.
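For the curious, a trace file's first four bytes are enough to tell the classic format from the new one. The helper function below is my own sketch; the magic values are the documented classic libpcap magic number and the pcap-ng Section Header Block type:

```python
import struct

# Classic libpcap files start with magic 0xa1b2c3d4 (or byte-swapped
# 0xd4c3b2a1, depending on the writer's endianness); pcap-ng files start
# with a Section Header Block whose block type is 0x0a0d0d0a, a value
# chosen to read the same in either byte order.

def trace_format(first4: bytes) -> str:
    (magic,) = struct.unpack("<I", first4)
    if magic in (0xA1B2C3D4, 0xD4C3B2A1):
        return "pcap"
    if magic == 0x0A0D0D0A:
        return "pcap-ng"
    return "unknown"

print(trace_format(bytes.fromhex("d4c3b2a1")))  # pcap
print(trace_format(bytes.fromhex("0a0d0d0a")))  # pcap-ng
```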

If you would like to learn more about NTAR, check out the NTAR-workers mailing list. Searches of the tcpdump-workers mailing list show references to PCAP-NG going back to February 2005, while a search of the Ethereal-dev mailing list turns up a mention as early as October 2003!

Auditors in Charge, but 0wn3d Anyway

I read in the latest SC Magazine this comment from Lloyd Hession, CSO of Radianz.

"'What is really happening is the head of security is losing control over the security agenda, which is being co-opted by audit and this umbrella of controls...

The ability to decide which security projects get funded is being taken out of the security officer's hands...

This focus on regulatory issues is causing a loss of control over the security agenda, which is being pushed and dictated by the audit and controls group and meeting the requirements of the regulation."

I see this focus on "controls" as more of the "prevention first and foremost" strategy that ignores the importance of detection and response. I had this reaction when I saw Dr. Ron Ross of NIST speak at a recent ISSA meeting. The NIST documents seem to focus on prevention through controls, and then they stop.

The unfortunate truth is that prevention eventually fails, as readers of the blog and my books know. While researching the Institute of Internal Auditors Web site, I came across this article which supports my theory. Here are the findings of Does Risk Management Curb Security Incidents? in brief.

  • Are organizations that have conducted an information security risk assessment less, more, or equally likely to have a documented information security policy? More likely.

  • Are organizations that have a documented information security policy less, more, or equally likely to implement system security measures? More likely.

  • Are organizations that have a documented information security policy less, more, or equally likely to implement information security compliance measures? More likely.

  • Are organizations that have a documented information security policy less, more, or equally likely to have an information security awareness program? More likely.

  • Are organizations that employ information security compliance measures, an information security awareness program, and system security measures less, more, or equally likely to experience security incidents? An analysis of variance (ANOVA) test failed to support the hypothesis (H5) that businesses that employ such programs and measures suffered fewer security incidents.

This is pathetically amusing. So, perform a risk assessment, document security policy, be compliant, teach awareness, and still be 0wn3d. It seems to me that, at the very least, some attention needs to be paid to the detection and response functions. Otherwise, a lot of money will continue to be spent on prevention, and organizations won't be any more "secure."

PS: The reference cited by the IIA article is available here. I originally visited the IIA to learn more about their Global Technology Audit Guides.

Net Optics Seminar on Passive Monitoring Access

I just received word that Net Optics will be hosting a free seminar titled Fundamentals of Passive Monitoring Access. It will start at 0830 on Wednesday 3 August 2005 at the Hilton Santa Clara in Santa Clara, CA. You will notice the seminar description uses terms like pervasive network awareness and defensible network, which I described when I spoke at Net Optics in May. I am scheduled to speak again at a Net Optics event in September in California. I will post details when available.

Verisign to Acquire iDEFENSE

The 45 survivors at iDEFENSE must be breathing a sigh of relief. Verisign will buy iDEFENSE for $40 million. That is $100 million less than the cost to acquire Guardent in December 2003. Verisign has over 3,500 employees according to its fact sheet, and it seems to be making ever bigger advances into the security market. I would be interested in hearing from any iDEFENSE insiders (anonymously here) what they think of this acquisition.

Wednesday, July 13, 2005

How Do You Read TaoSecurity Blog?

Would anyone care to mention how they read this blog? I ask because an owner of a site that aggregates blog postings thoughtfully asked my permission to include TaoSecurity Blog content on his site. I said I preferred to not have this blog's content aggregated and posted elsewhere. I prefer readers to visit this site directly or use the provided XML or RSS links. What are your thoughts?

How to Misuse an Intrusion Detection System

I was dismayed to see the following thread in the bleeding-sigs mailing list recently. Essentially someone suggested using PCRE to look for this content on Web pages and email:

(jihad |al Qaida|allah|destroy|kill americans|death|attack|infidels)

(washington|london|new york)
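A quick sketch with Python's re module (the patterns copied verbatim from the thread, compiled case-insensitively for illustration) shows how easily innocuous traffic matches both:

```python
import re

# The two proposed keyword patterns, exactly as suggested on bleeding-sigs.
pat1 = re.compile(r"(jihad |al Qaida|allah|destroy|kill americans|death|attack|infidels)", re.I)
pat2 = re.compile(r"(washington|london|new york)", re.I)

# An ordinary news headline trips both patterns at once.
headline = "Heart attack deaths decline in New York, study finds"
print(bool(pat1.search(headline)), bool(pat2.search(headline)))  # True True
```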

Here is part of my reply to the Bleeding-Sigs thread.

These rules are completely inappropriate.

First, there is no digital security aspect of these rules, so the "provider exception" of the wiretap act is likely nullified. Without obtaining consent from the end users (and thereby protection under the "consent exception"), that means the IDS is conducting a wiretap. The administrator could go to jail, or at least expose himself and his organization to a lawsuit from an intercepted party.

Second, the manner in which most people deploy Snort would not yield much insight regarding why these rules triggered. At best a normal Snort user would get a packet containing content that caused Snort to alert. That might be enough to determine no real "terrorism" is involved, but it might also be enough to begin an "investigation" that stands on dubious grounds due to my first point.

Third, does anyone think real terrorists use any of the words listed in the rules? If anyone does, they have no experience with the counter-terrorism world.

An IDS should be used to provide indicators of security incidents. Otherwise, it becomes difficult to justify its operation, legally and ethically.

Unfortunately, I saw both rules (at least commented out) in the latest bleeding ruleset.

What do you think?

New Desktop Computing Variant from ClearCube

Clued in by Slashdot, I learned of this ZDNet article on ClearCube. This company sells "blade desktops." Users have a device ClearCube calls a "user port" on their desk. Remotely connected to the user port by Cat 5, fiber, or IP is a "PC blade" mounted in a "cage" sitting in a server room or data center. Smart management software allows administrators to switch a user port from one blade desktop to another if one fails.

The following diagram explains the same concepts in a single figure.

Regular blog readers may remember my enthusiasm for thin clients like the Sun Ray and wonder how I view these blade desktops. For casual users who surf the Web, read email, and use office software, blade desktops are overkill. I think the Sun Ray is a better solution. For those who need Microsoft products, I imagine a solution incorporating VMWare would be appropriate.

I see blade desktops as a possible way to provide dedicated hardware to power users. For example, in my last job our engineers did not think a thin client would work for them. Each software engineer needed a dedicated PC with tons of RAM, a fast CPU, and big hard drives to run their own instances of VMWare and other software. A few of them using thin clients connected to a single server would quickly consume too many resources. Instead, they could each get a blade desktop.

Removing the actual PC from the work space eliminates physical security threats and makes it more difficult to steal data, assuming the "user port" USB ports could be disabled.

Tuesday, July 12, 2005

Chip Andrews Webcasts on SQL Server Security

Chip Andrews, co-founder of Special Ops Security, will present his fourth Webcast on SQL Server security tomorrow morning. I hadn't heard about these previously, but I am sure they are excellent. Chip also wrote the excellent SQL security book pictured at left, SQL Server Security.