Sunday, July 30, 2006

Notes for TCP/IP Weapons School Students

This note is intended for students in my TCP/IP Weapons School class at USENIX Security 2006.

These are the tools that will be discussed. Remember, this is a class on TCP/IP -- tools are not the primary focus. However, I needed something to generate interesting traffic.

The traces we will analyze are available for download. You will need to have Ethereal, Wireshark, or a similar protocol analyzer installed to review them. Tcpdump might be somewhat limited for this class, but you can at least inspect packets with it.
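If tcpdump is all you have, paging through a saved trace might look like this sketch (the helper name and trace filename are placeholders of my own, not class material):

```shell
# Sketch: summarize a saved capture with tcpdump. "trace.pcap" is a placeholder.
inspect_trace() {
    trace="$1"
    # Fail early if the capture file is missing or unreadable.
    [ -r "$trace" ] || { echo "cannot read $trace" >&2; return 1; }
    # -n: skip DNS lookups; -c 20: show only the first 20 packets; -r: read from file.
    tcpdump -n -c 20 -r "$trace"
}

# Example: inspect_trace trace.pcap
```

Add `-X` to see hex and ASCII payloads, which is about as close as tcpdump gets to a protocol analyzer's detail pane.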

Network Security Operations: LA Edition: 6-7 Sep 06, Glendale, CA

Thanks to the Information Systems Security Association (ISSA) Los Angeles chapter, TaoSecurity is pleased to present an exclusive two-day class: Network Security Operations: LA Edition. NSO:LA will be held 6-7 September 2006, in Glendale, CA. Topics for this hands-on, technical class include:

  • Network Security Monitoring: Case studies, theory, network access options, statistical data, session data, full content data, and hybrid data

  • Network Incident Response: Theory, preparation for network IR, detecting and
    investigating intrusions, and first response

  • Network Forensics: Case studies, theory, and collecting/preserving/analyzing/presenting network traffic as evidence

Students who bring a laptop running the free VMware Server product will receive a custom-built virtual machine for the hands-on labs.

Registration fees:

  • By Monday 21 August: Non-ISSA member: $1395; ISSA member: $1255 (10% off)

  • After Monday 21 August: Non-ISSA member: $1595; ISSA member: $1435 (10% off)

To register for the class, complete this form (.pdf) and return it to TaoSecurity by emailing training [at] taosecurity [dot] com or faxing it to 703.637.1249. Acceptable payment methods include corporate check, PayPal, or requesting an electronic invoice payable via credit card and processed by PayPal.

Friday, July 28, 2006

SPI Dynamics JavaScript Scanner

Ok, this is a little weird. Thanks to SecurityMonkey I just tried the SPI Dynamics JavaScript Scanner. From that page:

Imagine visiting a blog on a social site, or checking your email on a portal like Yahoo’s Webmail. While you are reading the Web page JavaScript code is downloaded and executed by your Web browser. It scans your entire home network, detects and determines your Linksys router model number, and then sends commands to the router to turn on wireless networking and turn off all encryption. Now imagine that this happens to 1 million people across the United States in less than 24 hours.

This scenario is no longer one of fiction.

I recommend reading the white paper (.pdf). I tried out the proof of concept on Windows 2000 as a non-admin user running the latest Firefox. Here's what I got. Now all three hosts exist, but due to known issues none are correctly detected. Still, this is a cool idea. Note that I ran the page while using a Web proxy, so all of the requests went through that device.

Anyone Going to DoD Cybercrime?

Is anyone going to the DoD Cybercrime conference in St. Louis, MO, 21-26 January 2007? I didn't think so. St. Louis, in January? What happened to Palm Harbor, FL? I spoke there in 2005 and 2006. I have friends living near Palm Harbor, but none in St. Louis.

DoD Cybercrime is also one of the few conferences (RSA comes to mind) that pays no expenses for speakers; it even charges speakers for attendance! At least RSA picks up the conference fee.

I don't think I'll be going to DoD Cybercrime this year. I think a sign that other people are staying away is the extension of the Call for Papers to 7 Aug 06.

Tenable and Nessus Blog Launched

You might want to check out Ron Gula's new Tenable Network Security and Nessus Blog. I just added it to my Bloglines collection, which has increased to over 140 feeds. I love being able to let Bloglines check these sites for news.

Slow Time with FreeBSD 6.1 guest on VMware Server 1.0.0 build-28343

I was prepared to release a new FreeBSD 6.1-based Sguil virtual machine today, but I ran into an old problem. The VMware Server Release Notes say "Full support for 32-bit and 64-bit FreeBSD 6.0 as guest operating systems." I expected that meant the timing problems that had forced me to use FreeBSD 5.x were no longer a problem with FreeBSD 6.x.

Well, today I built a FreeBSD 6.1 guest VM on a Windows XP SP2 host running VMware Server 1.0.0 build-28343. It turns out the guest OS runs at about half speed. I am apparently not the only person with this problem; a #snort-gui regular mentioned running ntpdate every 3 seconds (!!) to mitigate this problem.

I posted this VMware forum question to see if anyone responds with similar experiences. If you are running FreeBSD 6.x on VMware Server, how are you handling time problems?
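For what it's worth, the workarounds I've seen mentioned boil down to lowering the guest's timer interrupt rate and periodically resyncing the clock. A sketch, assuming a FreeBSD guest with a reachable NTP server (the server name is a placeholder):

```
# /boot/loader.conf -- lower the kernel timer frequency in the guest
kern.hz="100"

# /etc/crontab -- resync the clock every five minutes
# (far saner than every 3 seconds; "pool.ntp.org" is an example server)
*/5	*	*	*	*	root	/usr/sbin/ntpdate -s pool.ntp.org
```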

Review of Counter Hack Reloaded Posted

The review site's loss is your gain. I just tried to submit the following as my 200th technical review. I read Counter Hack Reloaded by Ed Skoudis and Tom Liston, but when I tried to submit the review, it was refused because I had already reviewed Counter Hack.

Man, that bugs me. The second edition could have been garbage, and no one who reviewed the first edition could say so! I'm not going to create a fake account simply to review the book again.

I was able to review the third edition of Anti-Hacker Toolkit without any trouble.

As you might expect, I loved Counter Hack Reloaded. It would get five stars if I were allowed to post the review.

Still the best single technical introductory volume for security pros

I read and reviewed the first edition of Counter Hack (CH) almost five years ago, and I put that book on my list of top 10 books of the last 10 years. Counter Hack Reloaded (CHL) is an excellent update to the original book, and it remains the single best technical introductory volume for all security professionals. If you're looking to start a digital security career, CHL is the book you must read and remember.

CHL is a thorough update of CH. The old book was 564 pages. The new book, using the nicer fonts and layout seen in newer Pearson imprints, is 748 pages -- but thinner, due to a different paper type. Both books have 13 chapters covering the same topics, but several have been substantially expanded. Ch 7 in CH is 66 pages; the same chapter in CHL is 98. This does not mean that new pages were simply added to old ones. Rather, obsolete discussions have been replaced with current material. For example, 10 pages on BO2K in CH are replaced by a single screenshot and a URL, making room for talk of Hacker Defender, AFX, Adore-Ng, and FU.

I like CHL because it covers just about all the subjects I would expect of someone with operational security knowledge. Chapters on Windows, Unix, reconnaissance, scanning, application/OS-based attacks, network-based attacks, denial of service, maintaining access, and covering tracks are written clearly and to the appropriate depth. CHL isn't "Hacking Exposed," however; attacks are not demonstrated with syntax and relevant output. CHL instead concentrates on the underlying vulnerabilities or exposures that make exploitation possible.

A few updates are specifically worth mentioning. CHL adds sections on 802.11 wireless security, Google hacking, and recent attacks. I was pleased to see the revised explanation of stack overflows in Ch 7, along with new details on heap overflows. I have one suggestion for future editions: by convention, most coders talk of the stack growing "down" and the heap growing "up." CHL's diagrams are upside-down with respect to this convention, and should be changed.

CHL is a special book, and for that reason I saved it for my 200th technical book review. Congratulations to authors Ed Skoudis and Tom Liston for a job well done.

The Face of Another Threat

Kim Zetter wrote a great piece for Wired called Confessions of a Cybermule. It's the story of a criminal who converted stolen credit card numbers into actual cards, then withdrew money at ATMs. In the words of the article:

They are the mules of electronic fraud, filling a vital role at the intersection of the virtual and the real: converting stolen account information into cold, hard cash.

That's a central challenge for digital criminals. The criminal, who in the story uses the nick John Dillinger, started out converting credit cards into cash this way:

Dillinger got several stolen credit-card numbers and spent two months traveling California with a partner, buying high-end laptops and reselling them. He'd never had disposable income, and got a rush from entering a store with a credit card stamped with someone else's account and walking out with expensive products.

Later Dillinger created fake cards for use at ATMs:

[A] spammer collected hundreds of account numbers, then distributed them to Dillinger and other "cashers" who encoded them onto blank plastic cards with an MSR206 and fanned out to hit ATMs. In two days, Dillinger says he collected $20,000 using the counterfeit cards and stolen PINs.

He wired the money, minus his take, to the Russian via Western Union. The operation lasted only a couple of weeks, though, before Western Union started blocking the money transfers.

This guy was a small fish:

U.S. Postal Inspector Greg Crabb confirmed that Dillinger was involved in cashing, though he and other investigators Wired News spoke with consider him relatively small-time compared to other cashers who made hundreds of thousands of dollars. This could explain why authorities didn't arrest Dillinger in 2004 when the Secret Service nabbed dozens of carders and identity thieves in a yearlong sting operation that targeted Shadowcrew and other carding sites.

This is why addressing the threat is important:

Dillinger said weeks before he was arrested that he was tired of cashing. "It's hard. It's scary," he said. "I don't want to get arrested. You go to ATMs, your picture's being took. You always have to look over your shoulder. Even when you're done with it, for the rest of your life pretty much you've got to look over your shoulder."

If law enforcement had more resources to identify, arrest, and prosecute these threats, we'd have less cybercrime.

Thursday, July 27, 2006

Another Sign C&A is Really Broken

I just read an exceptionally interesting post at the ClearNet Security Blog. It explains the Certification and Accreditation (C&A) process implemented by the US Department of Veterans Affairs. Yes, those are the same guys who lost that laptop with my Air Force records. Consider this blog excerpt:

Right about this time the second bomb shell went off.... The guy up front promptly says that all test results we collect are to be given to the VA. This makes sense as it is their computers and they are entitled to our analyzed results right? Wrong! The guy corrects himself and says that the results are not to be analyzed by the auditors but by VA personnel. at this point I am not touching a computer nor am I analyzing the results for risk or what is wrong. Something seems very broken about this process at this point.

Please read the whole post for the entire story. I hope CNS continues to share their experiences.

Run Your Own Server Podcast

Adam Glenn from Run Your Own Server interviewed me last week. You can listen to the audio here. I like the very NPR-like conclusion to the show. An interview with another site should be posted shortly, and I have a few more on the way.

Wednesday, July 26, 2006

The State of the Security Book Market

At left is the juggernaut of the security book market -- Hacking Exposed. I mention this book because it came up in a discussion I had with someone in the publishing community today. She reported that the state of the security book market is somewhat weak. She worried that Hacking Exposed (published in late 1999) might have created a "bubble" in the security book market, and the bubble is now deflating.

I interpreted her comment to mean that publishers have flooded bookshelves with too many security books over the last 7 years. Publishers were chasing readership figures that were inflated by false expectations caused by Hacking Exposed.

Over the last 6 or 7 years I've read and reviewed almost exactly 200 technical titles, the majority of which are security books. That's a huge number, with at least half of those books being titles I thought would be good to read. You can begin to imagine the number of titles I've missed when I tell you that I concentrate on reading books from Pearson (Addison-Wesley, PHPTR, etc.), Osborne/McGraw-Hill, Wiley, O'Reilly and friends (Syngress, No Starch, etc.), and recently Apress. I basically never touch Auerbach and several other publishing houses.

If you look at my Wish List you'll see a large selection of mainly security titles that I would like to read, or at least look at before making a decision. Recently there seems to have been a lull in books arriving at my doorstep, which is great considering the depth of my reading list. I'm making progress again, and you can expect another review -- my 200th technical book -- shortly.

What is your opinion of the security book market? Here are a few questions.

  1. What subjects would you like to see discussed? Hot topics at the moment seem to be forensics, reverse engineering, and rootkits.

  2. How many security books do you purchase per year?

  3. About how much do you consider paying for a book? What price is too expensive?

  4. Do you have a favorite publisher? Why?

  5. What is the biggest problem with security books today?

If you're wondering, these are my questions. The publishing person referenced earlier has nothing to do with these questions. I'm just curious.

Finally, if you find my reviews helpful, please vote them as helpful when you read them. I get no financial compensation either way, but I do keep notes while reading and I try to deliver something useful when done. Seeing my helpful vote count jump from the current 3376 for 207 reviews (8 are nontechnical) might motivate me to update my Listmania Lists. :) Thank you!

No PCI Express NICs in PCI Express Graphics Slots?

I own a Shuttle SB81P that has a 32-bit, 33 MHz PCI slot and a 16x PCI Express slot. Earlier I asked if anyone was using the Intel PRO/1000 PT Dual Port Server Adapter, since I wanted to use that NIC in the PCI Express slot.

It turns out that I cannot use that NIC in my Shuttle. I got a sense that it might not work when I noticed the Shuttle documentation called the 16x slot a "PCI Express Graphics (PEG)" slot.

I inserted the 4x NIC into the 16x slot, but I could never get the Shuttle to recognize it. I even followed helpful advice from this VMware thread pointing me to Intel's ibautil.exe, which is a DOS utility that probes for Intel cards (among other tasks). It didn't see the PCI Express NIC.

I eventually took the NIC to my friend Hank at NetWitness, and we put the NIC into the PCI Express slot of a Dell 850 server. I booted the server with a FreeBSD 6.1 install CD, and then started a shell. Sure enough, FreeBSD detected em0 and em1 -- two new Intel Pro NICs.

This is probably a stretch, but is anyone using PCI Express NICs in Shuttles? I'm considering buying a new Shuttle, if I can find one with at least one PCI Express slot that accepts PCI Express NICs. It seems Shuttles like the SB95P V2 have a 16x slot (probably for PEG/video) and a 1x slot, which doesn't help me with the 4x Intel Pro NIC.

Alternatively, is anyone using PCI Express NICs in any PCI Express slots that are presumably for graphics cards?

Tuesday, July 25, 2006

Keith Jones Podcast on Real Digital Forensics

Keith Jones was interviewed about our book Real Digital Forensics. The site conducting the interview is Let's Talk Computers. You can reach the audio in Real Audio or Windows Media format here.

You can tell this interviewer has been around the block. He actually broadcasts on real AM and FM radio. The whole interview is about 13 minutes long and very informative.

I Just Joined USENIX

I'm speaking at the USENIX Security conference next week, delivering two days of material from one of my training classes. I've been teaching at USENIX for two years, beginning with USENIX Security 2004.

When attending USENIX conferences I always get a few free copies of ;login: magazine. I've written about this magazine before. One of the best aspects of it is the conference proceedings section. It's a great way to read summaries of academic papers on security and system administration topics.

Because I'm working on submitting an application to pursue a PhD in computer security, I decided I needed to get serious about what's happening on the academic side of security. (By the way, I've started an NSM Research blog to capture thoughts and links to papers as I progress. I don't suggest reading it, since it's mainly for me. Anything worthwhile I will publish here. I detest keeping bookmarks, so blogging makes more sense.)

One way to get serious about security academics is to subscribe to ;login:, and I did that by joining USENIX. It's $115 per year, but I consider that a small investment in order to gain access to this great academic resource. It's true that ;login: issues are made public one year after they are published, but being one year behind won't help me when I prepare my research proposal.

A second reason I joined is that I continue to teach at USENIX, and I figured it was time to show I support the organization!

Are any of you USENIX members?

Review of Anti-Hacker Toolkit, 3rd Ed Posted

What is that? It can't be a new book review, can it? It's true, I'm working through my reading list before my wish list gets any longer. It's been over two months since my last review, but I plan to post reviews throughout the rest of 2006.

My 199th technical review covers Osborne's Anti-Hacker Toolkit, 3rd Ed. I'm friends with a few of the people who have worked on the editions of this book over the years, primarily Mike Shema and Keith Jones. Keith is no longer working on the book, but Mike is actively involved. From my four-star review:

I reviewed the first edition "Anti-Hacker Tool Kit" (AHT:1E) in August 2002, and the second edition (AHT:2E) in June 2004. AHT:3E was published in February 2006. I continue to like AHT, because it addresses many of the tools an operational security professional should know how to use. I'll point out the differences between AHT:2E and AHT:3E, then offer some suggestions for AHT:4E.

Updated FreeBSD Forensics

This morning I was reading the third edition of Anti-Hacker Toolkit. I realized no one had updated the section "Vnode: Transforming a Regular File into a Device on FreeBSD." Keith Jones wrote that section four years ago when he co-authored the first edition of AHT. That part of AHT shows how to mount a hard drive image as a file, such that the hard drive image can be examined in a forensically safe manner.

If you follow the advice in the book and try to run vnconfig, you get this error:

orr:/home/richard$ vnconfig
ERROR: vnconfig(8) has been discontinued
Please use mdconfig(8).

Fair enough. Let's see what we need to do to use mdconfig.

I used the jbr_bank/forensic_duplication/JBRWWW.dd.gz hard drive image from Real Digital Forensics by Keith Jones, Curt Rose, and myself. If you want that image or any other files from the book, you'll need the DVD that ships with it.

After gunzipping the archive, I used mdconfig to create a vnode.

orr:/nsm/rdf$ sudo mdconfig -a -t vnode -f JBRWWW.dd
orr:/nsm/rdf$ sudo mdconfig -l -u md0
md0 vnode 4.0G /nsm/rdf/JBRWWW.dd

I now have JBRWWW.dd attached to device md0. Let's see what it is.

orr:/nsm/rdf$ sudo fdisk /dev/md0
******* Working on device /dev/md0 *******
parameters extracted from in-core disklabel are:
cylinders=524 heads=255 sectors/track=63 (16065 blks/cyl)

parameters to be used for BIOS calculations are:
cylinders=524 heads=255 sectors/track=63 (16065 blks/cyl)

Media sector size is 512
Warning: BIOS sector numbering starts with sector 1
Information from DOS bootblock is:
The data for partition 1 is:
sysid 7 (0x07),(OS/2 HPFS, NTFS, QNX-2 (16 bit) or Advanced UNIX)
start 63, size 8401932 (4102 Meg), flag 80 (active)
beg: cyl 0/ head 1/ sector 1;
end: cyl 522/ head 254/ sector 63
The data for partition 2 is:

The data for partition 3 is:

The data for partition 4 is:

That looks like an NTFS partition. Time to mount it.

orr:/nsm/rdf$ sudo mount_ntfs -o ro /dev/md0s1 /mnt
orr:/nsm/rdf$ mount
/dev/ad0s2a on / (ufs, local)
devfs on /dev (devfs, local)
/dev/ad0s2f on /home (ufs, local, soft-updates)
/dev/ad0s2g on /nsm (ufs, local, soft-updates)
/dev/ad0s2h on /tmp (ufs, local, soft-updates)
/dev/ad0s2d on /usr (ufs, local, soft-updates)
/dev/ad0s2e on /var (ufs, local, soft-updates)
/dev/ad0s3 on /data (msdosfs, local)
/dev/acd0 on /cdrom (udf, local, read-only)
/dev/md0s1 on /mnt (ntfs, local, read-only)

So far so good. What do we see on the drive?

orr:/nsm/rdf$ ls /mnt
$AttrDef IO.SYS
$BadClus Inetpub
$Extend Program Files
$LogFile System Volume Information
$Secure arcldr.exe
$UpCase arcsetup.exe
$Volume boot.ini
CONFIG.SYS pagefile.sys
Documents and Settings update.exe

That looks like a Microsoft Windows NTFS drive to me.

When done, I clean up.

orr:/nsm/rdf$ sudo umount /mnt
orr:/nsm/rdf$ sudo mdconfig -d -u md0

You can use this same technique with .iso images too.

orr:/data/iso$ sudo mdconfig -a -t vnode -f boot.iso
orr:/data/iso$ sudo mdconfig -l -u md0
md0 vnode 38M /data/iso/boot.iso
orr:/data/iso$ sudo mount -t cd9660 /dev/md0 /mnt
orr:/data/iso$ mount
/dev/ad0s2a on / (ufs, local)
devfs on /dev (devfs, local)
/dev/ad0s2f on /home (ufs, local, soft-updates)
/dev/ad0s2g on /nsm (ufs, local, soft-updates)
/dev/ad0s2h on /tmp (ufs, local, soft-updates)
/dev/ad0s2d on /usr (ufs, local, soft-updates)
/dev/ad0s2e on /var (ufs, local, soft-updates)
/dev/ad0s3 on /data (msdosfs, local)
/dev/acd0 on /cdrom (udf, local, read-only)
/dev/md0 on /mnt (cd9660, local, read-only)
orr:/data/iso$ ls /mnt
TRANS.TBL etc images ppc
orr:/data/iso$ sudo umount /mnt
orr:/data/iso$ sudo mdconfig -d -u md0

I think that's neat.
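The whole attach-and-mount sequence is easy to wrap in a helper. Here's a rough sketch with names of my own choosing; it assumes FreeBSD's mdconfig and mount tools and root privileges, and it is only a sketch of the steps shown above:

```shell
# Sketch: attach a disk or CD image read-only via mdconfig and mount it.
# Assumes FreeBSD; run as root. Function name and defaults are illustrative.
mount_image() {
    img="$1"; fstype="$2"; mntpt="${3:-/mnt}"
    [ -r "$img" ] || { echo "cannot read $img" >&2; return 1; }
    # mdconfig -a prints the new unit name, e.g. "md0".
    unit=$(mdconfig -a -t vnode -f "$img") || return 1
    # Mount read-only so the evidence image is never modified.
    case "$fstype" in
        ntfs)   mount_ntfs -o ro "/dev/${unit}s1" "$mntpt" ;;
        cd9660) mount -t cd9660 "/dev/${unit}" "$mntpt" ;;
        *)      mount -r -t "$fstype" "/dev/${unit}" "$mntpt" ;;
    esac
}
```

Cleanup is the reverse of setup: umount the mount point, then detach with mdconfig -d -u on the unit.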

Monday, July 24, 2006

ISSA-NoVA Summer Social

The ISSA-NoVA Summer Social will be held Thursday 17 August 2006 at the American Tap Room, 1811 Library St., Reston Town Center, in Reston, VA. There is no speaker, just a chance for members to chat for a few hours. I will probably be there but I have not yet RSVP'd. Last year I talked with Transzorp most of the meeting, but he's in CA working for Google now. If you're going, please reply here.

Participating in e-Symposium Wednesday

I was asked to participate in an ISSA e-Symposium titled Emerging Threats and Response. Readers of this blog will guess I may have a field day with this one. I'm part of a round table called Migration to IP: Convergence or Collision? For those of you with extra coin, registration costs £50 / €75 / $90. Wow, I might be in the wrong business! Has anyone heard of these talks before?

Update: As pointed out in the comments, it's free for ISSA members to register.

NYCBSDCon 28-29 October 2006

I'm not having any luck with schedules and BSD conferences this year. I missed the third BSDCan and I'm going to miss the second NYCBSDCon on 28-29 October in Davis Auditorium, Columbia University, New York City. This will undoubtedly be a great conference, since NYCBUG is hosting it. Remember, they built the site based on a suggestion I made. Awesome.

Blogger Now Offers RSS and Atom

I just realized that Blogger provides native RSS and Atom feeds. Previously only Atom was provided. For consolidation purposes I plan to cancel the older third-party RSS feeds, including FeedBurner, soon, so please switch your RSS readers to the Blogger RSS feeds. (I may not be able to cancel one of them, since it looks like an automatic set-up.) Thank you.

Help Freshports

Do you use Dan Langille's Freshports site to track FreeBSD ports? Dan is asking for donations to help buy hardware for the site. I just sent $25 USD. Will you help? Thank you.

Saturday, July 22, 2006

SANS Log Management Summit

Last week I paid for and attended the SANS Log Management Summit. I'd like to share a few thoughts about what I saw. First, I think Alan Paller did a great job as host. He kept the presentations moving and unflinchingly kept to his schedule. Talks started at 8 am, period. I thought his "yellow card" system for questions worked very well. (If you wanted to ask a question, you wrote it on a yellow card. SANS staff collected the cards then handed them to the speaker or Alan, who answered the question.) The system prevented the "speeches" one usually sees in large crowds with open microphones.

Alan started the conference by presenting his "faces of cybercrime" presentation, based on his testimony (.pdf) in late 2005. He reminded the audience of the advice to learn hacking given by soon-to-be-executed Bali bomber Imam Samudra. Alan claimed at least one organized crime group has moved two hackers to Africa and forced them to compromise targets "20 hours per day, 7 days per week," "for food." He reminded us of China's military doctrine of asymmetric warfare and repeated his earlier statements about Titan Rain.

With regard to new information, Alan named three ways to help fight back against cybercriminals.

  1. Respond faster.

  2. Change metrics.

  3. Shift some responsibility to suppliers and integrators.

I like this approach. For a few years Alan was beating the drum for #3, and for the last year he's been working on #2. I like #1 a lot, since I am an incident responder.

With regard to metrics, Alan likes the term "attack-based metrics," and uses phrases like "measuring what we need to do," "how are we being compromised," and "how can we defend ourselves." He noted the Air Force is "lead" for this approach. Alan said measuring report writing (e.g., FISMA) is a waste of time. Some example metrics he noted were:

  • Can an incident of successful spear phishing be detected in 30 minutes or less?

  • What percentage of employees fall victim to a spear phishing test?

Alan mentioned using privileged user monitoring as a means to counter insiders, although he also said "The insider threat is baloney," at least until an outsider becomes an insider. Alan concluded his talk by sounding optimistic about SCADA procurement standards. I have no dog in that fight, but I recommend reading Dale Peterson's SCADA Blog for all things SCADA.

Lawyer Ben Wright spoke next about log management and legal issues. I really wanted to see this talk. Ben said logs can indicate control, thereby preventing claims of negligence and offering evidence to resolve disputes. His most interesting point was that records of log review are more important than the logs themselves. In other words, in the legal eye, it's better to make a note every time you review your logs than it is to retain those logs. Email is far more important to retain, since firms are fined millions for failing to keep email.

Ben noted that HIPAA Security Rule 45 CFR 164.308(a)(1)(ii)(D) mentions logs, as does NIST SP 800-66 and the PCI Standard of January 2005, part 10.6. However, review of logs, not retention of logs, is critical. Ben explained negligence law, where the standard is "reasonableness." He said that if a company writes a policy stating "We will do X," and they fail to perform X, it's easy for a jury to find the company negligent. That means lawyers will recommend companies write policies saying "We may do X." The audience had a hard time handling that idea, since it's a lawyer's point of view and not that of an auditor or security person.

Ben provided three suggestions regarding log management.

  1. Policy should stress preferences, not statements saying "We will do X."

  2. Keep records of the fact you reviewed logs.

  3. Only a company's full audit committee should know about all monitoring methods -- neither employees nor the CEO should know what is watched or stored.

Ben liked promoting "mystery" in the workplace to keep people on the straight and narrow. It sounds like a great deterrence tool, but a little draconian for me.
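Ben's second suggestion is easy to act on; even a trivial append-only record of each review would do. A sketch, where the file path and fields are my own invention, not anything Ben prescribed:

```shell
# Sketch: append a timestamped record each time logs are reviewed.
# Per Ben's point, the record of review matters more than the logs themselves.
review_record="./log-review-record.txt"   # in practice, a protected store

note_review() {
    reviewer="$1"; scope="$2"
    printf '%s reviewed %s at %s\n' \
        "$reviewer" "$scope" "$(date -u '+%Y-%m-%d %H:%M UTC')" >> "$review_record"
}

note_review analyst1 "perimeter firewall logs"
```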

Next followed the first of several presentations by users of vendor log management solutions. Here I should mention that companies formally represented at the Summit were ArcSight, Network Intelligence, LogLogic, Prism Microsystems, and SenSage. Yes, that's it -- no Tenable or other big players. More importantly, the vendors chose all of the customers who presented their product experiences. Not surprisingly, all gave glowing opinions. This was really disappointing. The only opposing point of view came from Stephen Northcutt at the Summit's end, who reported a majority of log management users are dissatisfied with their solutions! Although TriGeo did not speak on any panels, some customers reported using their product.

I won't mention individual user reports by name, since most weren't that helpful. Here's a clue that they lacked the detail I (and other attendees) expected: when Alan has to ask, at the end of a presentation, "So what product do you use?", you know the briefer didn't share the details that attendees wanted to hear.

Here are a few data points though, collected from all of the customer reports and hence out-of-order chronologically. Chad Mead from JPMorgan Chase said his shop, with 210,000 desktops, 40,000 servers, 400 NIDS, 900 firewalls, 81 mainframe LPARs, and over 1 million network ports, produces over 150,000 events per second. He operates two security management centers with five people operating the log management solution and 65 analyzing logs. Wow, that's what I like to hear! His security team owns the logging infrastructure, and the device owners own the log feeds. This was a common refrain.

Mark Olsen from CareGroup Healthcare System said all emergency room records are transmitted electronically "to Atlanta," by which I guess he meant to the CDC. This is a measure to identify Bird Flu outbreaks. That sounds like something from the X-Files. He also said that while HIPAA enforcement actions thus far have been few and far between ("19,000 violations in 2005, 7 selected for prosecution"), expect that to change in 2007.

Chris Calabrese said a word about the Security Issues in Network Event Logging (syslog) IETF working group. Mike Poor said his company deploys LaBrea Tar Pits on Soekris boxes inside companies to watch for unexpected traffic. Jay Leak from Nokia justified his log management project by realizing the waste caused by his company making over 1,000 requests for log data each year, with each request taking over 4 hours and half of those never being resolved. Keith Fricke said "IPS is completely misnamed." He uses his IPS to block outbound malicious traffic from compromised internal systems! What's misnamed about that -- he's trying to "prevent" someone else from being compromised.

Chris Brenton and Mike Poor next unveiled the SANS Top 5 Essential Log Reports (.pdf). This appears to have gotten zero news coverage, which I don't understand. Here they are, meaning what you should look for while reviewing logs:

  1. Attempts to Gain Access through Existing Accounts

  2. Failed File or Resource Access Attempts

  3. Unauthorized Changes to Users, Groups and Services

  4. Systems Most Vulnerable to Attack

  5. Suspicious or Unauthorized Network Traffic Patterns

I found it funny that I wrote a whole book (Extrusion Detection) about #5, but the term "extrusion detection" isn't mentioned in the SANS .pdf. They did mention a mailing list, which I should probably start reading.

Day one concluded with a "vendor shoot-out," where the five vendors I named earlier made pitches and argued with each other. It seemed more hostile than the Real World Intrusion Detection Workshop I attended four years ago with Bamm Visscher, right before I left Ball Aerospace to join Foundstone (sorry Bamm!).

I liked that LogLogic's Anton Chuvakin (I know you're reading) prefers to collect everything from a log source and let the centralized solution handle presenting useful information. He said "you never know what might be important," which is the foundation for my NSM approach. During a nice "lunch and learn" Anton also said the biggest obstacle to building one's own log management solution is keeping pace with changing log formats. I had never imagined that problem.
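Anton's "send everything, filter centrally" stance is roughly what a stock syslog forwarder already does; a minimal sketch of the idea, assuming a traditional syslogd and a hypothetical central collector named loghost:

```
# /etc/syslog.conf -- forward all facilities at all levels to the collector.
# "loghost" is a placeholder; selection and filtering happen centrally.
*.*	@loghost
```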

One really astute question-asker wondered why three vendors showed Lehman Brothers as a client on each of their presentations. Each vendor stated their case, with Network Intelligence saying their product did the real log collecting, after which it forwarded a feed to ArcSight.

Day two started with Mike Poor discussing network early warning systems (NEWS). He said the famous Dutch botnet wasn't 1.5 million victims strong -- it was more like 5.1+ million systems. Wow. Mike reminded us of the Dabber worm which attacked Sasser victims. He said DShield collects logs from 40,000 sensors watching 500,000 IPs. Mike spent some time discussing DNS cache poisoning and SANS' role.

By now you might be wondering, "where is the news on NEWS?" (Oh, too funny.) To be honest, I didn't hear much of anything new. In fact, there wasn't much "early warning" to speak of. If you deploy the systems Mike mentioned in his talk, you aren't learning of an attack before it happens -- you're learning afterwards. I suppose if you are at the front of the victim list and you share what you know, you're the NEWS for others! Mike did name the Chinese Honeynet Project as the source of some interesting tools. I might try those.

I've already noted the parts of the second day worth repeating, so that ends day 2.

Day 3 consisted of classes by Randy Franklin Smith on Windows Event Logs and Chris Brenton on building your own solution. In short, Randy is a Windows EVL guru and Chris is a great instructor. These two classes probably saved the entire three days for me, since they at least had some of the detail I expected from the previous two days. Randy's class really emphasized that understanding Windows EVL is an art in itself. It takes a lot of work to make sense of those logs. Randy said the appointment of Eric Fitzgerald as a sort of Windows EVL czar will help unify the system, at the expense of changing everything once Vista appears. Chris reminded me to try programs like Simple Event Correlator, Privateye, and Syslog-NG.
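Simple Event Correlator (SEC) works from plain-text rule files. As a hypothetical illustration (the sshd pattern, threshold, and window are my own, not from the class), a rule that flags repeated login failures might look like:

```
# Hypothetical sec.conf rule: alert once when five SSH failures
# from the same source arrive within sixty seconds.
type=SingleWithThreshold
ptype=RegExp
pattern=sshd\[\d+\]: Failed password for \S+ from ([\d.]+)
desc=Repeated SSH login failures from $1
action=write - ALERT: possible brute force from $1
window=60
thresh=5
```

The correlation logic (thresholds, time windows, paired events) lives entirely in rules like this, which is what makes SEC attractive for a build-your-own solution.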

Overall, I think I got my money's worth from the Summit. I do not do log management as a primary task, so I was exposed to a whole new world of challenges. I met some interesting people and I got to attend the CDX briefing and Vendor Expo. Both yielded contacts that might result in future blog posts.

I predict that three years from now people will still be disgusted with their log management and security incident management "solutions," and will be looking for "the next big thing." It's already happened with firewalls, IDS, IPS, and now LM/SIM.

What did you think of the Summit?

Friday, July 21, 2006

NoVA Sec Founded

Inspired by Matasano's ChiSec, I decided to start NoVA Sec. Here's the deal. We find a place to meet, we pick a time, and we talk security tech.

I do not want to hear the terms CISSP, FISMA, DITSCAP, C&A, or any related subjects. If you are a security type in the northern Virginia area -- and you perform operational security work -- we want to meet you. If you read, write, audit, or enforce regulations, you won't like this group.

I am working on finding a location. I would like to hold our first meeting in August. If you have any suggestions, please post them as comments to this post at the NoVA Sec Blog. Thank you.

Call for Def Con Dunk Tank Volunteers

I am not attending Black Hat or Def Con this year. However, Russ Rogers asked me to spread the word on the following event:

Defcon will once again be running the Defcon Dunktank as a fund raiser for the fine folks at the EFF. This email is a call for volunteers that are willing to sit in the dunktank for 30 minutes and let random attendees attempt to dunk them. The money is for a good cause, the water is nice and cool in the hot desert, and you'll be richer and sexier simply for volunteering your time!

Please let me know if you would be willing to sit in the tank. I need to put together a schedule, so if you have a specific time slot you'd like, please let me know that as well. It would be most useful if you can provide multiple slots so I have some room to work.

Please pass this email to everyone you can possibly think of. I'd like to get as many speakers, hackers, and others to sit in the tank at some point during Defcon. And the water is likely to be much cleaner than that in the hotel pool!

Email Russ if you'd like to volunteer: russ [at] defcon dot org

Building an Internet Server with FreeBSD 6

I received a review copy of Bryan Hong's new book Building an Internet Server with FreeBSD 6. This is the first book I've received from Lulu Press. I am surprised by the physical quality of this book. It looks just as good as any softcover book you might find at a store, and you can purchase it through online sellers. Based on the page count and form factor, I estimate it cost Bryan about $6.45 per copy to publish the book (181 pages, 100 copies). I am not sure what it might cost to get the book listed at Lulu or Amazon, and I am not sure if Bryan ships every book himself.

Services like Lulu are a great idea if you don't want to publish with a formal publisher. I personally enjoy working with Addison-Wesley. Why?

  1. AW's production team is top-notch. I think every book I publish is formatted and printed in just the right manner. They convert my lousy PowerPoint scribblings into real artwork. They know how to lay out the pages properly. The text is easy to read. Doing all that work myself would result in a lower quality product that takes more time to write.

  2. AW's editing process is excellent. They challenge authors to write the right books. They circulate proposals and drafts among peers who more or less provide helpful feedback. Their copyeditors are usually grueling taskmasters who ensure the book reads well. (I prefer writing to copyediting any day -- oh, the pain, the pain.)

  3. AW's marketing never stops. They get space on bookshelves. They set up shop at conferences and sponsor USENIX. They create and print flyers for me to hand out. They work with magazines and news sites to get content to potential readers. I literally have a team trying to sell books on my behalf. They even ship a few freebie books here and there as give-aways when I speak.

  4. AW fights pirates. This is an argument for letting the publisher hold copyright. If I held copyright for my books, I would have to personally fight every pirate who copies and distributes my books. AW has a team that works to shut down sites distributing pirated books.

  5. I've had three great publishing experiences with AW. I hope they will consider publishing my work in the future.

I'm sure a dozen or more of you want to jump all over my piracy comment. Here are my thoughts on book pirates: stop kicking a guy when he's down. It takes months, usually years, to write a book. Once published, the content may not be relevant in three years (or less, depending on the book). You might still be listening to the Beatles in 2014 (50 years after their heyday), but you won't be reading my first book 10 years after it was first published!

My understanding is that the best-selling security book of all time, Hacking Exposed, has sold over 500,000 copies since it was published in 1999. That's about 72,000 copies a year (6,000 per month -- easy math). With three lead authors, royalties are split three ways. I can't publicly say how much royalties I would expect on a book like that, but I'm guessing the average security consultant salary would easily exceed the yearly royalty amount.

So where does the "kicking a guy when he's down" come into play? Guess how many books must be sold to consider a title "a hit" in the security space. 500,000 is considered the king. So what is it -- 200,000, 100,000, 50,000? Try 10,000 or less. Imagine spending a year or more, full-time, writing a book, and then getting a third or less of your normal salary -- over several years after the book is published. Remember -- that's for a "hit"! Most technical books sell 5,000 copies or less.

In other words, nobody gets rich writing technical books, and hardly anyone could afford to live off royalties. Seeing one's book distributed on p2p networks is just a final insult.

Of course being a published author has advantages beyond royalties, but how many of you are willing to devote such a large chunk of your time to an endeavor that might result in gain somewhere down the line? In conclusion, most authors write for the opportunity to share what they know and advance the state of the practice.

Wednesday, July 19, 2006

IT-Harvest Launches IT Security Database

I read on Richard Stiennon's blog that his new company IT-Harvest has launched its Knowledge Base, with "830 security vendors, 1,600 security products, and 2,200 security people in the data base." The company plans to charge for access to the database but offer its research for free. I look forward to seeing the fruits of this work.

New Layout

I used the latest version of the "andreas01" design by Andreas Viklund to create the new blog layout. I used an older version of this CSS design on another of my sites, so I will probably upgrade that one when I have time.

I have to say I had no real trouble using CSS, unlike Mr. Dvorak. Needless to say my two sites are much simpler than his, and I would dare say far less gaudy!

Thanks to Royce for mentioning the HTML Validator Extension -- I plan to use it to fix warnings and any errors.

Breaking News: UBS Intruder Guilty

Keith Jones just emailed me saying Roger Duronio, the UBS intruder, was convicted of one count of Securities Fraud and one count of Fraud and Related Activity In Connection with Computers. He was found not guilty of two counts of Mail Fraud. Congratulations, Keith!

Update: Here's the newest Information Week article by Sharon Gaudin.

Israeli Incident Response Report

Incident responders from Beyond Security published an interesting report (.pdf) explaining their involvement in a recent defacement of an Israeli Web site. I read the report but was surprised to not see any mention of shutting down access to the Web site upon discovering the intrusion. There was no question of compromise -- the image above shows what happened to the Web site. Consider the following excerpt from the report.

[T]he web site in question was defaced by Team Evil and action had to be taken immediately. There was no time to perform a full forensic investigation. What the attacked organization required was a real-time forensic analysis of the attack in order to contain damage and respond accordingly, with the following operational goals in mind:

1. Stop the continuing damage being inflicted as soon as possible by kicking out the attackers who were damaging the site while analysis was done.

2. Prevent further access from the attackers.

3. Determine what hole the attackers used to get in, and seal it.

While these goals are sequential, they had to be done simultaneously to be successful as the attackers were at the same time performing counter-measures and attacking back.

It was a fight between the attackers who were already in the system, and the incident response personnel on the ground with the help of the local system administrator.

"No time" for a forensic investigation? Try shutting down the Web server's switch port. It sounds like the intruders were active while the IR team worked:

While examining the second web GUI tool we noticed that there was currently a user trying to use the exploit. At this stage we no longer had a system administrator present, nor access to the attacked machine. (emphasis in original)

Again, shut down the switch port. Is this the "uptime argument"? Who needs uptime when the public is visiting a defaced Web server?
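On most managed switches, that containment step is a one-minute change. For example, on a Cisco IOS-style device (the interface name here is hypothetical):

```
switch# configure terminal
switch(config)# interface FastEthernet0/5
switch(config-if)# shutdown
switch(config-if)# end
```

The intruders lose access instantly, the evidence on the host is preserved, and the responders can work without being watched.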

No wonder the intruders were active -- the defenders were visiting a potentially hostile Web site:

We soon located the tool on the web page... Looking at their site provided another clue.

If the victims were visiting an intruder's Web site during the incident response, that's an easy way to tip off the attackers that defense is taking place.

Finally, observe how the IR team finally tried to take control of the victim:

Left with no other alternative and the organization's approval, we used the intruders' web GUI tool to retrieve the MySQL password and used it to get into the forum database and escalate the permission of a user under our control to an administrator status (probably like the intruders themselves have done...).

Use an intruder's tool to remove the intruder? Wow.

IR is certainly a fluid experience, but I think some basic rules were violated during this scenario. Still, I'm very happy to see Beyond Security share its story. The report itself contains a ton of technical details and I highly recommend everyone read it. It's been a while since I've read anything like it.

ISSA NoVA Meeting Thursday

This Thursday is the next ISSA NoVA meeting. It will be held at the Microsoft Technology Center in Reston, VA. The social hour starts at 1730 and the meeting starts at 1830. A government civilian is the speaker. :) RSVP by noon today.

Tuesday, July 18, 2006

Redesigned TaoSecurity Web Site

I was so motivated by my TaoSecurity Blog redesign that I decided to revamp the main TaoSecurity Web site too. I used a template from Open Source Web Design created by Andreas Viklund. Now that the site is up I will spend some time fixing the problems found by the validator. I will work on my other sites when I have some free time. Remember, I am a security guy -- not a Webmaster.

New TaoSecurity Layout

I've decided to try a new layout and enable Title and Links explicitly in the posts. Over the past year or more I've received multiple messages about feed issues and so on, so maybe this will help.

Monday, July 17, 2006

HD Moore Continues to Rock

What do you get when you combine creativity, deep technical and programming knowledge, and the ability to rapidly execute? The answer is HD Moore. Bamm (Sguil author) and I had the good fortune to have lunch with HD in 2001 in San Antonio, and he made quite an impression on us.

Thanks to this Offensive Computing post, I just learned of HD's new Malware Search Engine. You can read this eWeek interview for motivations behind the project. All of the code will fit into three browser panes. Read this page for examples of how to use it. I wonder if some ignorant policy maker will see this site as a problem and try to shut it down? Browserfun is still operational and July will end soon.

OpenPacket Update

I just posted news at the OpenPacket Blog. I made an initial announcement about OpenPacket last year. In short, this project is going nowhere unless I get some help with development or financing, due to my lack of Web development skill and time. I appreciate any comments you might post on the OpenPacket Blog.

Update: Please visit the OpenPacket Blog for fresh updates. I created devel and users mailing lists, and two people have already volunteered development help. Wow!

How Do You Fit Into the Security Community?

I've spent some time beefing up my Bloglines feeds. As I look for people with ideas that could be useful, I'm reminded of the vast differences among those who would all presumably claim to be "security professionals." I am acutely aware of these differences when I visit security conferences, and I wrote about this phenomenon after attending USENIX 2003, Black Hat 2003, and SANS NIAL 2003 within a span of 30 days.

At the risk of being attacked for promoting stereotypes or hurting feelings, I decided to share a few thoughts on this subject. What group describes you?

  • Academics: This group consists of undergraduates, graduates, PhD candidates, and faculty. They tend to frequent USENIX conferences where they will be talking about the latest security protocol. They have ties to government organizations because that is the source of grant money. They write papers, mostly speak in front of other academics, and take deep looks at improving security technologies in formal and peer-reviewed ways. Academics obviously have formal training and they tend to have tinkered with security before joining this group.

  • Policemen: (Police women also fit here!) Policemen enforce the law. They like to talk about who they have "busted." They seem to often assume duties for which they are not prepared. They are overwhelmed by the amount of work they face, even though they are one of the few groups who can eliminate threats. Their organizations usually consider their work to be secondary to "real law enforcement." Sometimes their bosses don't even read email. Policemen tend to struggle to understand technology because they usually come from traditional police backgrounds, and their workload ensures no free time to tinker. Policemen often concentrate on host-based forensics, and they attend HTCIA and InfraGard conferences.

  • Government civilians and contractors: Government civilians and contractors obsess over certifications. They are most likely to be talking about CISSP, PMP, ISSAP, ISSEP, GIAC, and so on at ISSA meetings. They often perform certification and accreditation and don't understand why those processes are broken. Some of them are trying very hard to fix their agencies, but they struggle with political infighting and bureaucratic inertia. This group likes SANS and CSI conferences.

  • Warfighters: This is the uniform-wearing military. This group is youngish and skilled. Many of them would fit into the "hacker" category (see below) but they are definitely on the white hat side. They are sharp because their infrastructures are under constant assault. Unfortunately the military personnel system generally offers no career path to develop their skills and interests. This group tends to leave the military for the commercial or government worlds just when they are becoming real experts. Warfighters attend their own closed conferences but they also try to learn from their opponents at offensive-minded conferences like Black Hat or CanSec.

  • Hackers: To some degree all of the groups here would want to consider themselves "hackers," with the exception of policemen and some government civilians. (Being a hacker is supposed to be cool, but some consider it to be bad.) In reality, you know a hacker when you talk to him or her. Hackers tend to have extremely deep technical knowledge in very specific areas. A hacker might write his own compiler or debugger, but not follow sound system administration practices. (For example, a hacker might think it's ok to put all system files in a single partition on a production server.) Hackers are the source of real public innovation in attack methodologies and they are extremely creative and unpredictable. Hackers are more likely to speak at conferences like Black Hat or CanSec, but they seem to be migrating to smaller or private gatherings. Hackers are some of the youngest members of the security community, but as they build families and get older they migrate to another group. When young they are either in high school or college. Upon graduation (from either place), hackers usually work as consultants. Sometimes they work directly for governments or the military.

  • Consultants/Corporates: This group includes those who work for security companies, and those who provide security services within non-security companies. Consultants and corporates are a very diverse group, drawing upon most of the earlier categories. Many corporates have general IT backgrounds and "end up" in security because they staff a one- or two-person IT shop. If they are serious about providing good services, and their employers agree, they tend to specialize in one or two areas. (Companies who expect consultants and corporates to be experts in everything should expect disaster.) This group is second to the government civilians and contractors in pursuing certifications, because they think clients will value them or their employers will reward them.

  • Developers: The last group creates security products, but I prefer to concentrate on those who participate in the design process. (Code monkeys who implement without consideration for underlying security principles aren't really security people.) Security developers are usually former members of the other groups, since serving two roles is too tough. Developers have decided they want to solve a problem encountered in their previous lives. They are very skilled in their work area, with depth of knowledge rivalling the hackers. Some developers are older hackers.

Did I miss anyone?

Keep in mind that some people may fit in one category while working in another category. For example, I know many "hackers" who are government contractors during their day jobs. Many consultants are like government civilians or contractors. Also note I do not consider any of these people to be the adversary. I will not be discussing threats.

I wanted to record these thoughts, because you can probably imagine the diversity of opinion suggested by this list. I have some ties to each of these groups, and they approach problems from very different angles. I have no way of knowing the sorts of people who read my blog, but in some ways I'm guessing few hackers, developers, or policemen read it. I could be wrong though.

I would be interested in hearing your thoughts, especially if you can help refine/define these categories. This is not some sort of formal taxonomy, just some ideas.

Beta Test Argus 3.0 and Tcpreplay 3.0

If you're a packet monkey like me, you probably use tools like Argus and Tcpreplay.

Carter Bullard is preparing to release Argus 3.0 soon, which includes a lot of community feedback. You can try the latest release candidates here. I helped with testing by providing access to a box running FreeBSD 6.1 amd64.

Similarly, Aaron Turner just released a new beta version of Tcpreplay. I ran into a problem with Tcpedit on FreeBSD 6.1 i386 when running 'make'.

Try downloading and testing these beta versions and provide feedback to the authors. Thank you!

Sunday, July 16, 2006

One Thought on State Department Incidents

I have absolutely no special knowledge of this event. All I know I've learned from stories like this. The following caught my eye:

The department also temporarily disabled a technology known as secure sockets layer, used to transmit encrypted information over the Internet.

Hackers can exploit weaknesses in this technology to break into computers, and they can use the same technology to transmit stolen information covertly off a victim's network.

Many diplomats were unable to access their online bank accounts using government computers because most financial institutions require the security technology to be turned on. Cooper said the department has since fixed that problem.

So DoS (heh, pun intended) disabled outbound HTTPS? It sounds to me like the intruders used a HTTPS covert channel (not so covert, actually) to communicate with their victims. I think we are getting to the point where encrypted outbound HTTP will have to terminate on a proxy server that permits inspection or at least logging of the HTTP action. The proxy will then establish its own connection with the remote HTTPS server.

Yes, (1) this breaks end-to-end HTTPS; (2) users will probably have to accept an unexpected SSL certificate; and (3) this will undermine any training they may have received about avoiding sites that present suspicious certificates. However, to have any shot at identifying these connections in a timely manner, inspection of clear text at the network perimeter is a must. (What perimeter? It's the line between what you own/control and what you don't.) Not being able to do this probably hurt the DoS.
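The proxy idea reduces to an inspection point: after terminating the client's SSL session, the proxy sees clear text and can log the request line and Host header before opening its own encrypted connection to the real server. Here is a minimal sketch of just that logging step; the function name and the sample request bytes are my own illustration, not any particular proxy's API.

```python
# Sketch of the inspection step an SSL-terminating proxy performs:
# after decrypting the client's session, record the request line and
# Host header before re-encrypting toward the real server.
def log_https_request(raw_request: bytes) -> str:
    """Return a one-line audit record for a decrypted HTTP request."""
    head = raw_request.split(b"\r\n\r\n", 1)[0].decode("ascii", "replace")
    lines = head.split("\r\n")
    request_line = lines[0]
    host = next((l.split(":", 1)[1].strip() for l in lines[1:]
                 if l.lower().startswith("host:")), "unknown")
    return f"{host} {request_line}"

record = log_https_request(
    b"POST /upload HTTP/1.1\r\nHost: suspicious.example\r\nContent-Length: 4\r\n\r\ndata")
print(record)  # prints "suspicious.example POST /upload HTTP/1.1"
```

Even a record this small -- which host, which method, which resource -- is enough to spot a covert channel that raw packet counts would never reveal.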

Speaking at Net Optics Think Tank on 26 September

Almost exactly one year after my last appearance, I will be speaking at the next Net Optics Think Tank on 26 September 2006 in Fairfax, VA. I haven't figured out exactly what I will be covering yet. I might talk about some material from my TCP/IP Weapons School class and how it relates to recent incidents like the Freenode event. It looks like I will be speaking during lunch from 1215 to 1315.

Saturday, July 15, 2006

More Notes from TechnoSecurity 2006

I found another page of notes I took at Techno Security 2006. These were from Marcus Ranum's talk, and I listen to Marcus. He observed that small vendors tend to sell products designed for sophisticated users, while large companies tend to sell products for unsophisticated users. Which market is bigger? The unsophisticates vastly outnumber the sophisticates. Therefore, start-ups usually chase a very small market and tend to be weak.

Marcus said "security ROI is dead" and "legislation has made security a cost." He predicted "we will be competing with legal for money (or working for them) in the next five to ten years." To hammer the point Marcus then said "there never was a security ROI." Amen.

For a way forward, Marcus offered two paths. Path A sees multi-level security rising from the ashes. Marcus claimed this is not likely, although papers like The Path to Multi-Level Security in Red Hat Enterprise Linux (.pdf) might beg to differ.

Path B involves the death of general purpose computing. Everyone will own appliances, perhaps even disposable ones like cell phones. All data will be on a backend somewhere. It's a return to mainframe computing that reverses what Marcus called the "Satanic bargain" of general purpose computing. What's the bargain that was made in order to rid the world of mainframes? "Everyone becomes a system administrator." Clearly that has not worked. Marcus said "distributed data equals distributed vulnerability," and the recent public laptop thefts make that clear.

Marcus told his audience to watch for a day when they can no longer buy software. Instead, people will rent and lease "capabilities," not applications. We're already doing this with anti-virus, intrusion detection and layer 7 firewalls, etc. What's next?

Comments on SANS CDX Briefing

One of the benefits of paying for this week's SANS Log Management Summit was attending a briefing last week on the latest Cyber Defense Exercise conducted by the NSA. SANS organized a panel with a USAFA cadet, a USNA midshipman, a USMA-grad Army 2LT, and several NSA or ex-NSA representatives, along with their boss, Tony Sager. Although I've known of CDX for several years, this was my first real insight into how these exercises are conducted.

The NSA organizer, or "white cell leader," is Bruce Rogers. He explained that competitions can be conducted either as capture-the-flag style events or purely defensive affairs. CDX is purely defensive. When I asked Mr. Rogers if he had spoken to any organizers of other cyber competitions, like those of Def Con or ShmooCon, he said no. Mr. Rogers has 20 white controllers overseeing the exercise, which includes 6 targets (the six defending teams -- USAFA, USNA, USMA, USMMA, AFIT, and NPS).

The attackers are split into two groups. The first group consists of "tainters". These are 13 NSA personnel, who included one high school-age intern and one college-age intern. The tainters spent about 80 man-hours building and then misconfiguring, rootkitting, and otherwise tampering with virtual machines delivered to the CDX participants. The participants had two weeks to analyze these VMs for vulnerabilities and exploitation, after which they had to activate and then defend them. These compromised servers are supposed to be similar to "host nation machines" that military personnel might find themselves operating.

I was initially shocked by this news. Who in their right mind would trust host nation equipment for sensitive operations? Wouldn't it be best to rip everything out and start fresh with clean, trusted media? After some thought, I decided that this tainting phase was more realistic than I initially believed. Unless one joins a very small company, no new security or IT employee is ever allowed to begin work at a new job by rebuilding all infrastructure. When you join a new company, you're stuck with all of the garbage they give you.

The second group of attackers is the traditional red team. This group consists of real red teams from across the services, such as the USAF 92nd Information Warfare Aggressor Squadron. The red team hammered the 6 target networks for 4 straight days. The target networks were hosted on live network links at the respective schools and were connected back to NSA via VPN. No simulated non-malicious traffic was carried to or from the target networks. Everything on the wire was considered malicious since the red team was creating it. This is highly unrealistic, but partially driven by the bandwidth available to some of the teams. At least one school hosted its target network on an ISDN line.

Each of the military participants said a few words about their teams and experiences. Three themes stood out. First, the team size varied widely. USAFA's team had 9 people. USMA's team had 35-40. (USAFA won.) Second, most of the teams admitted having little or no security training. I was amazed. Who signs up for a hack-fest without having security experience? Third, the networks designed by each of the teams varied widely. USAFA emphasized simplicity. USNA concentrated upon prevention, and never regained control once their servers were compromised. ("Prevention eventually fails." -- Tao) USMA's network was exceedingly complex, but they tried to watch outbound traffic for signs of compromise (e.g., extrusion detection). No team was allowed to block traffic from malicious IPs.

All of the target networks ended up being 0wn3d. USAFA didn't notice a rogue Apache module that resulted in a Web site defacement. USMA missed a default password on their router and lost control of it. The red team said that the best team only found 15% of the vulnerabilities created by the "tainters." Wow. By the way, the tainters did not tell the red team what they did to the VMs. The tainters dropped some clues as the exercise progressed, but the red team mostly used standard penetration techniques.

These were the lessons learned from the 2006 CDX.

Top 9 Exploited Vulnerabilities

  1. Microsoft Windows LSASS Buffer Overflow Vulnerability

  2. Microsoft DCOM

  3. LM Hash versus NTLM Authentication Protocol

  4. Use of Weak Passwords

  5. Use of the Same Password on Multiple Systems

  6. Microsoft Windows Default Administrative Shares

  7. Rich Text Format / HTML Email

  8. Access to System Executables

  9. Use of Unnecessary Services / Accounts

Student Best Practices

  1. Know the Network and Keep it Simple: Each additional device is another avenue of attack. The entire team must understand the network. Troubleshooting is easier with a simple design.

  2. Deny by Default Policy: Only allow what is absolutely necessary. It's easier than blocking known bads.

  3. Remove Unnecessary Services, Software, and User Accounts: What is the role of the computer? Remove unnecessary software completely.

  4. Plan for Contingencies: All networks will eventually have a problem.
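The deny-by-default practice (#2 above) maps directly onto packet filter syntax. A minimal, hypothetical pf.conf sketch (the interface name and allowed ports are my own examples, not from the exercise):

```
# Hypothetical pf.conf: block everything, then allow only what is needed.
ext_if = "em0"
block all
pass in on $ext_if proto tcp to ($ext_if) port { 22, 80 } keep state
pass out on $ext_if keep state
```

Two pass rules and one block rule are far easier to audit, and to troubleshoot, than a list of known-bad signatures.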

Finally, two of the panel members (I remember USAFA cadet Michael Tanner told this story) participated in CDX and the National Collegiate Cyber Defense Competition. Rob Lemos wrote about it for SecurityFocus. Cadet Tanner said the national competition was completely different from the military CDX. The military CDX allowed participants to protect and host their VMs using a variety of technologies. USAFA used mainly OpenBSD. AFIT used all Windows. Other teams used other technologies. At the national competition, participants were given a ton of commercial equipment (all from sponsors, no doubt) and then found themselves hacked to pieces five minutes into the exercise. Apparently they were given no opportunity to do anything with the equipment prior to the exercise starting?

Overall, I found the session to be extremely informative. I'd like to thank Alan Paller from SANS for organizing this event and I appreciate the participants sharing their experiences. If you want more details, I found some papers on both exercises posted at The Colloquium for Information Systems Security Education.

Three Pre-Reviews

Three generous publishers sent me three books to review this week. The first is Apress' Pro Nagios 2.0 by James Turnbull. This is the second book on Nagios on my reading list. I plan to deploy Nagios on my test network to gain a better understanding of how it works. I will use both books and compare and contrast them once I've finished each.

The second book is O'Reilly's IPv6 Essentials, 2nd Ed by Silvia Hagen. I did not read the first edition, because by the time I gained interest in IPv6 newer books were published. For example, I really liked Apress' Running IPv6 and O'Reilly's IPv6 Network Administration. I plan to deploy an IPv6 testbed soon, so I will use this new book to help that project. I'll compare the new book to the two older texts.

I'm hesitant to mention this last book, because I don't plan to read it. (I only review books that I read.) I don't plan to read Syngress' Dictionary of Information Security by Robert Slade. If you peruse reviews of this author's other books, they are uniformly bad. I am surprised that Mr. Slade managed to get luminaries like Fred Cohen, Peter Neumann, and Gene Spafford to contribute forewords to this book.

If someone is going to write a "dictionary," they should take it seriously. This comment on the back of the book encouraged me not to read it: "Don't be fooled by the refreshing lack of pomposity and the occasional jokey entry." A "jokey entry" in a book by someone who claims to be "facilitating the ISC(2) CBK review seminar"?

I'll also save you the trouble of seeing if I have some sort of personal problem with Mr. Slade by pointing you to his negative review of Real Digital Forensics, a book I co-authored, along with two of the world's best forensics experts. (These are people who have testified in court.) I think he hammered RDF because I refused to review his "forensics" book Software Forensics. I think this comment by reviewer Eric Kent says it best: Software Forensics "is a book by a person who clearly has no real world experience in the world of digital forensic investigations." Ouch.

Friday, July 14, 2006

2006 CSI-FBI Study Confirms Insider Threat Post

Earlier this week I said Of Course Insiders Cause Fewer Security Incidents. I'm taking heat over at Matasano, but I've got some fresh facts to back me up, after getting a pointer from this Dark Reading story. In short, the 2006 CSI-FBI study is now available, and it confirms my proposition.

The study shows there is a new significance for the number "80%". Check out this chart from page 13. If you add up the two left columns, it shows that a clear majority of survey respondents -- 61% -- believe that "none" or 20% or less of their losses come from insiders. In other words, for the clear majority of survey respondents, 80% or more of their losses come from external threats.

Why is this? Apparently it's caused by the number one cause of dollar losses -- "virus contamination." "Unauthorized access to information" is a fairly close second, compared to the positions of the other dollar losses. It's interesting that "system penetration by outsider" accounts for a fraction of losses -- but maybe external attackers gaining "unauthorized access to information" is the problem? It's probably not insiders gaining "unauthorized access to information," since (1) most insiders already have access -- they just misuse it; and (2) if insiders were such a problem, they would account for a bigger proportion of losses.

Again, again -- I agree that insiders have the potential to cause the biggest amount of damage. Look at the havoc caused by rogue CEOs and CFOs. However, the numbers aren't lying -- external problems are bigger. We have ways of dealing with rogue insiders that completely dwarf our options for handling outsiders.

At some point my critics will recognize the reason I am so vocal about the difference between threats and vulnerabilities. For the most part, we can address vulnerabilities. Secure coding, security infrastructure, configuration management, training, etc. all help reduce the number of vulnerabilities (or at least have the potential to). After all, we control what we deploy (or at least we should!).

Threats are completely different. Threats are people, people living in basements in Eastern Europe, China, Africa, wherever -- outside the reach of law enforcement and the military. While we patch and configure away old vulnerabilities, these same external intruders never stop.

Behind every virus, Trojan, worm, and botnet is a person -- a person -- yet we continue to worry about the latest hole in the Linux kernel. The only groups who can make any difference in this battle against bad people are the police and military. Unfortunately, the police and military infrastructure is currently not capable of addressing these threats.

We are not going to code, configure, or patch our way out of this problem, because the real issue is the threat, not the vulnerability. Vulnerabilities come and go; at present, threats continue until they get bored.

Tuesday, July 11, 2006

TCP/IP Weapons School Will Rock

Are you attending TCP/IP Weapons School at USENIX Security 2006 in Vancouver on 31 July and 1 August 2006? If yes, these are the topics I will cover:

  • Hardware and Network Design

    • Bridges

    • Hubs

    • Switches

    • Routers

    • Duplex and Domains

    • Layer-X Switches

    • Middleboxes

    • Local Area Networks

    • xANs, VPNs, and WLANs

    • VLANs

  • Layer 1

    • What is Layer 1?

    • Ethernet

    • Raw Ethernet (Nemesis)

    • UTP

    • Ethernet over UTP

    • Fiber Optics

    • Ethernet over Fiber Optics

    • Ethernet Emulation over FireWire

    • IP over FireWire

    • IP over Wireless

  • Layer 1 Attack

    • Rogue Access Point

  • Layer 2

    • What is Layer 2?

    • Ethernet Revisited

    • Revisiting What is Layer 2?

    • Test Network Layout

    • Packet Delivery on the LAN

    • Ethernet Interfaces

    • ARP Basics

    • ARP Request/Reply

    • ARP Cache

    • Arping

    • Arpdig

    • Arpwatch

    • Dynamic Trunking Protocol

  • Layer 2 Attacks

    • Test LAN Reference

    • Changing MAC Addresses

    • MAC Flooding (Macof)

    • ARP Denial of Service (Arp-sk)

    • Port Stealing (Ettercap)

    • Layer 2 Man-In-The-Middle (Ettercap)

    • Dynamic Trunking Protocol Attack (Yersinia)

  • Layer 3

    • What is Layer 3?

    • Internet Protocol

    • Raw IP (Nemesis)

    • IP Options (Fragtest)

    • IP Time-To-Live (Traceroute)

    • Internet Control Message Protocol (Sing)

    • ICMP Error Messages (Gnetcat)

  • Layer 3 Attacks

    • IP Spoofing

    • Gont ICMP Attacks

    • ICMP Shell

I am really excited about this class. If you read the class description posted at USENIX, you'll notice it covers layers 1-7. After creating 312 slides for a two-day class, I realized I needed to stop at layer 3. I originally envisioned this class as a four-day affair, and once I develop material for layers 4-7 I can see it becoming a new four-day class.

One of the reasons I think this class will be special is that I generated Libpcap traces of all of the interesting traffic discussed in the class. Students can load them into Wireshark and follow along as we learn what they mean.

Developing the class was absolutely grueling (well, not like digging a ditch), but still fun. I had never used Yersinia to fake a trunk line and get access to VLAN traffic on a Cisco switch, but it's in the class now.

The USENIX class description recommends students bring some version of VMware to class so they can run a VM I will provide. I will indeed provide a FreeBSD VM including all of the tools I used on FreeBSD. I'll probably also include a Debian VM for those tools that didn't run on FreeBSD. However, you will not be able to duplicate all of the attacks I ran while developing this class. VMware is nice, but it cannot simulate conditions in a real hardware lab, especially when mucking around with layer 2.

If you have any questions, please post them here.

I am probably going to offer this same two-day class at USENIX LISA on 3-4 December 2006 in Washington, DC. I am contemplating offering additional material independent of USENIX, perhaps before the conference (which runs 3-8 December) or after the conference. That means Saturday 2 December or Saturday 9 December. These would be paid events separate from USENIX. If you would have any interest in attending training while you are in town, email me (richard at taosecurity dot com) with your ideas.

Of Course Insiders Cause Fewer Security Incidents

Today's SANS NewsBites points to this eWeek article, which in turn summarizes this Computer Associates press release. It claims "more than 84% [of survey respondents] experienced a security incident over the past 12 months and that the number of breaches continues to rise."

The SANS editor piqued my interest with this comment: "(Honan): It is interesting to note that this survey highlights the external threat is becoming more prevalent than the internal one." (emphasis added)

"Becoming more prevalent?" This is Mr. Honan's answer to this part of the CA story: "Of the organizations which experienced a security breach, 38% suffered an internal breach of security." That means 62% experienced an external breach, or perhaps less if one could not determine the source of the breach.

I highlight "becoming more prevalent" because it indicates the speaker (like countless others) fell for the "80% myth," which is a statement claiming that 80% of all security incidents are caused by insiders. I document in Tao the history of this myth. I challenge anyone who believes the 80% myth to trace it back to some definitive source. If you do you will find it leads nowhere reputable.

If the 80% myth were true, security would be a fairly easy problem to solve. The biggest problem I see with modern digital security is the inability to remove threats from the risk equation. In other words, victims of security incidents lack the personal power to eliminate threats; only the police or military can really remove threats from the picture. Since the police are ill-equipped and overwhelmed, and the military is similarly not well-positioned to eliminate threats, attackers continue to assault with impunity.

However, if the majority (the vast majority, if you believe the 80% myth) of threats are internal, this completely changes the situation. To immediately and irrevocably alter the risk equation, all an employer or organization needs to do is identify and fire or remove the internal bad apples. Problem solved. "Oh, that's too hard," I'm going to hear. Maybe, but compare that option (which happens every day) to identifying, apprehending, prosecuting, and jailing a Romanian.

Since organizations have the tools to largely remove the insider threat, but security incidents continue to be a problem, the insider threat must be dwarfed by the size of the outsider threat community. However, as I've said elsewhere, insiders will always be better informed and positioned to cause the most damage to their victims. They know where to hurt, how to hurt, and may already have all the access they need to hurt their victim.

The bottom line is that the number of external attackers far exceeds the number of internal attackers.

Friday, July 07, 2006

Control-Compliant vs Field-Assessed Security

Last month's ISSA-NoVA meeting featured Dennis Heretick, CISO of the US Department of Justice. Mr. Heretick seemed like a sincere, devoted government employee, so I hope no one interprets the following remarks as a personal attack. Instead, I'd like to comment on the security mindset prevalent in the US government. Mr. Heretick's talk sharpened my thoughts on this matter.

Imagine a football (American-style) team that wants to measure their success during a particular season. Team management decides to measure the height and weight of each player. They time how fast the player runs the 40 yard dash. They note the college from which each player graduated. They collect many other statistics as well, then spend time debating which ones best indicate how successful the football team is. Should the center weigh over 300 pounds? Should the wide receivers have a shoe size of 11 or greater? Should players from the north-west be on the starting line-up? All of this seems perfectly rational to this team.

An outsider looks at the situation and says: "Check the scoreboard! You're down 42-7 and you have a 1-6 record. You guys are losers!"

In my opinion, this summarizes the mindset of US government information security managers.

Here are some examples from Mr. Heretick's talk. He showed a "dashboard" with various "metrics" that supposedly indicate improved DoJ security. The dashboard listed items like:

  • IRP Reporting: meaning Incident Response Plan reporting, i.e., does the DoJ unit have an incident response plan? This says nothing about the quality of the IRP.

  • IRP Exercised: has the DoJ unit exercised its IRP? This says nothing about the effectiveness of the IRT in the exercise.

  • CP Developed: meaning Contingency Plan developed, i.e., does the DoJ unit have a contingency plan should disaster strike? This also says nothing about the quality of the CP.

  • CP Exercised: has the DoJ unit exercised its CP? Same story as the IRP.

Imagine a dashboard, then, with all "green" for these items. They say absolutely nothing about the "score of the game."

How should the score be measured then? Here are a few ideas, which are neither mutually exclusive nor exceedingly well-thought-out:

  • Days since last compromise of type X: This is similar to a manufacturing plant's "days since an accident" report or a highway's "days since a fatality" report. For some sites this number may stay zero if the organization is always compromised. The higher the number, the better.

  • System-days compromised: This looks at the number of systems compromised, and for how many days, during a specified period. The lower, the better.

  • Time for a pen testing team of [low/high] skill with [internal/external] access to obtain unauthorized [unstealthy/stealthy] access to a specified asset using [public/custom] tools and [complete/zero] target knowledge: This is from my earlier penetration testing story.

These are just a few ideas, but the common theme is they relate to the actual question management should care about: are we compromised, and how easy is it for us to be compromised?
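To make the "system-days compromised" metric concrete, here is a minimal sketch in Python. The incident records and field layout are hypothetical; a real implementation would pull from an incident tracking system:

```python
from datetime import date

# Hypothetical incident records: (system name, compromised on, contained on).
incidents = [
    ("web1", date(2006, 6, 2), date(2006, 6, 5)),   # 3 days compromised
    ("mail", date(2006, 6, 10), date(2006, 6, 11)), # 1 day compromised
]

def system_days_compromised(incidents):
    """Sum the days each system spent compromised during the reporting
    period. The lower the total, the better the 'score of the game'."""
    return sum((contained - compromised).days
               for _, compromised, contained in incidents)

print(system_days_compromised(incidents))  # prints 4
```

Unlike a compliance checkbox, this number moves when detection and response actually improve, because faster containment shrinks each incident's contribution.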

I explained my football analogy to Mr. Heretick and asked if he would adopt it. He replied that my metrics would discourage DoJ units from reporting incidents, and that reporting incidents was more important to him than anything else. This is ridiculous, and it indicates to me that organizations like this (and probably the whole government) need independent, Inspector General-style units that roam freely to assess networks and discover intruders.

In short, the style of "security" advocated by government managers seems to be "control-compliant." I prefer "field-assessed" security, although I would be happy to replace that term with something more descriptive. In the latest SANS NewsBites (link will work shortly) Alan Paller used the term "attack-based metrics," saying the following about the VA laptop fiasco: "if the VA security policies are imprecise and untestable, if the VA doesn't monitor attack-based metrics, and if there are no repercussions for employees who ignore the important policies, then this move [giving authority to CISOs] will have no impact at all."

PS: Mr. Heretick shared an interesting risk equation model. He uses the following to measure risk.

  • Vulnerability is measured by assessing exploitability (0-5), along with countermeasure effectiveness (0-2). Total vulnerability is exploitability minus countermeasures.

  • Threat is measured by assessing capability (1-2), history (1-2), gain (1-2), attributability (1-2), and detectability (1-2). Total threat is capability plus history plus gain minus attributability minus detectability.

  • Significance (i.e., impact or cost) is measured by assessing loss of life (0 or 4), sensitivity (0 or 4), operational impact (0 or 2), and equipment loss (0 or 2). Total significance is loss of life plus sensitivity plus operational impact plus equipment loss.

  • Total risk is vulnerability times threat times significance, with < 6 very low, 6-18 low, 19-54 medium, 55-75 high, and >75 very high.
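The model is simple enough to express directly. This Python sketch is my reading of the scoring described above; the function and parameter names are my own, not Mr. Heretick's:

```python
def risk_score(exploitability, countermeasures,
               capability, history, gain, attributability, detectability,
               loss_of_life, sensitivity, op_impact, equipment_loss):
    """Compute total risk per the model described above (my reading).
    Ranges: exploitability 0-5, countermeasures 0-2, each threat
    factor 1-2, significance factors 0/4 or 0/2."""
    vulnerability = exploitability - countermeasures
    threat = capability + history + gain - attributability - detectability
    significance = loss_of_life + sensitivity + op_impact + equipment_loss
    return vulnerability * threat * significance

def risk_band(score):
    """Map a total risk score to the bands given in the talk."""
    if score < 6:
        return "very low"
    if score <= 18:
        return "low"
    if score <= 54:
        return "medium"
    if score <= 75:
        return "high"
    return "very high"

# Example: exploitability 4 with 1 point of countermeasures (v=3), a
# capable attacker with some history and gain but low attributability
# and detectability (t=3), hitting a sensitive system (s=6).
score = risk_score(4, 1, 2, 1, 2, 1, 1, 0, 4, 2, 0)
print(score, risk_band(score))  # prints 54 medium
```

Note that as written, high attributability and detectability can drive the threat term to zero or below, which zeroes (or inverts) the product; the talk did not address that edge case, so I have left it as stated.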