Saturday, December 30, 2006

Favorite Books I Read and Reviewed in 2006

2006 was my most productive reading and reviewing year yet. I read and reviewed 17 books in 2000, 42 in 2001, 24 in 2002, 33 in 2003, 33 in 2004, and 26 in 2005. This year I read and reviewed 52 books. I was determined to make as big a dent as possible in the huge stack of books sent to me by publishers and blog readers, and I made a lot of progress.

My ratings yielded the following:

  • 1 star: 0 books
  • 2 stars: 1 book
  • 3 stars: 9 books
  • 4 stars: 29 books
  • 5 stars: 13 books
Because I don't try to read every book, I'm glad my ratings are skewed towards the higher end. I don't intentionally read books I expect to be bad.

I thought I would list the 13 books that I gave five stars, starting with my favorite and working down.

  1. 802.11 Wireless Networks: The Definitive Guide, 2nd Ed by Matthew S. Gast: A first-rate technical book that dispels myths by speaking authoritatively and comprehensively.
  2. Running IPv6 by Iljitsch van Beijnum: A close second, this book nicely describes IPv6 in a practical manner.
  3. Protect Your Windows Network by Jesper M. Johansson and Steve Riley: Yes, really -- a "Windows" book! This book is amazing because the security principles within apply to any platform.
  4. The Debian System by Martin F. Krafft: I would love to see a book like this written for FreeBSD.
  5. PGP & GPG by Michael Lucas: This book should be given to anyone who needs to use PGP or GPG, before they create their first key!
  6. IPv6 Essentials, 2nd Ed by Sylvia Hagen: This book is the perfect companion for the previous IPv6 book, because this title is mostly IPv6 formats and theory.
  7. Software Security by Gary McGraw: Of the six books I read this year on building secure software, this was my favorite and the only five-star recipient.
  8. Hacking Exposed: Web Applications, 2nd Ed by Mike Shema, Joel Scambray, and Caleb Sima: I liked this book because it is a thorough update of the 1st Ed, and it covers the subject very well. It still won't win over all you HE-bashers out there. (You know who you are.)
  9. Apache Security by Ivan Ristic: This is the best book on Apache security, and a good introduction to Web attacks as well.
  10. Phishing Exposed by Lance James: I liked this book because it seemed to extend the boundaries of knowledge regarding phishing, and not just rehash old attacks.
  11. File System Forensic Analysis by Brian Carrier: If you do any sort of host-centric forensics, this book is a must-have.
  12. Pro Nagios 2.0 by James Turnbull: The best Nagios book, thus far.
  13. Skype Me! by Michael Gough: Wow, I gave a Skype book five stars? It was very well-written.
So, congratulations to Matthew Gast for being my favorite author of 2006!

I have more than 30 books sitting on my shelf waiting to be read now, and another 40-plus books on my Amazon.com Wish List. I've assigned priority values to the Wish List based on projected publication date. In other words, books that are already on shelves or due soon are rated "Highest." Books arriving next year, for example, are rated "Lowest."

If you find my reviews helpful, please rate them as such at Amazon.com. I look forward to hitting the 4000 mark for "Helpful Votes" in 2007. I hit 1500 three years ago and 3000 at the beginning of 2006. Since I am not paid for my reviews I appreciate any indication that they are helpful. Thank you.

Friday, December 29, 2006

Prereview: Inside the Machine

Thank you to Patricia at No Starch for sending me two copies of Jon Stokes' Inside the Machine. I was drawn to this book by an Amazon.com review which said this:

This book is an introduction to computers that fills the gap between classic and challenging books like Hennessy and Patterson's, and the large number of "How Your Computer Works" books that are too basic for engineers.

I like the fact that the book covers a variety of microprocessor types. Comparison is a great teaching method. I didn't know who Jon Stokes was, but you can follow that link to read about his motivation for writing the book. I plan to read and review the new book next month.

I've Been Blog-Tagged

It would be nice if the Tag in this situation were a watch, but it turns out Martin McKeay has blog-tagged me. I'm supposed to mention five items you probably don't know about me, and then name five of my fellow bloggers. Here goes.

  1. I'm a 1994 graduate of the US Air Force Academy. I graduated third of 1024 cadets, with degrees in history and political science, and minors in French and German. However, my whole life I wanted to attend my backyard school, the Massachusetts Institute of Technology (MIT). I was accepted to USAFA first, and when the letter arrived there seemed to be no question about where I should attend. Admission to USAFA requires acceptance by the school (not easy for a nearly-blind non-flyer like me) and Congressional appointment (thanks Ed Markey and Ted Kennedy -- I can't believe I said that). So, USAFA was the "long shot" and it seemed like the opportunity of a lifetime. I still wonder if I should have attended MIT (on an Air Force ROTC scholarship, which I also received). At least I can claim Academy heritage, like my favorite historical figure US Grant.

  2. I ran cross country, indoor, and outdoor track in high school. I lettered in all three and was captain of the indoor track team my senior year. I credit joining the cross country team my junior year with fundamentally altering the course of my life, and I am thankful for that experience. I am very proud of the "Most Improved" award I received my senior year, which acknowledged my move from the back of the JV team to the third-fastest runner on the varsity team in one year.

  3. I studied martial arts seriously from 1994 through 2002. I did some Karate and Judo before then and some Jeet Kune Do and Kung Fu after that span, but nothing as hard core as that eight year period. During that time I studied with Michael Macaris (Kung Fu), Troy Baker (Tae Kwon Do and Modern Arnis), and Curtis Abernathy (American Kenpo). I did try Wing Tsun before American Kenpo, but that disaster means I won't tell you the name of the instructor. One day I would like to return to studying martial arts, although a shoulder injury makes me reluctant.

  4. I am a non-practicing Amateur Radio ("ham") operator. My dad is one and my grandfather (his dad) was also. I earned my Tech license to try packet radio, which is basically a dead art.

  5. I am an Eagle Scout. For my project I organized a road race to raise money for a scholarship fund in honor of a high school friend who died of leukemia.


Here are my five tag victims:

  1. Dru Lavigne

  2. Anton Chuvakin

  3. Harlan Carvey

  4. CS Lee

  5. Richard Stiennon


I think that's a nice mix of people with technical blogs.

Snort Report 1 Posted

SearchSecurityChannel.com (SSC) has posted my first Snort Report. This is a new monthly series I'm writing for SSC that is starting at ground zero with Snort and working towards greater levels of complexity.

I thought it would be helpful to begin by explaining how to install Snort in a manner that allows easy testing of new versions while running older versions. I also discuss the modes Snort supports. Next month I describe the snort.conf file and show how to get Snort to perform useful work in IDS mode without using a single rule.
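The side-by-side layout described above can be sketched in a few lines of shell. This is an illustrative pattern only -- the version numbers, paths, and use of a temporary directory are my own assumptions, not commands from the Snort Report itself:

```shell
#!/bin/sh
# Sketch: install each Snort version under its own prefix, then point a
# stable symlink at whichever version is in production. PREFIX and the
# version numbers here are illustrative; a real install would use
# /usr/local and populate each tree via "make install".
PREFIX=${PREFIX:-$(mktemp -d)}

# In a real build you would run, per version:
#   ./configure --prefix="$PREFIX/snort-2.6.1" && make && make install
mkdir -p "$PREFIX/snort-2.6.0/bin" "$PREFIX/snort-2.6.1/bin"

# Switch (or roll back) versions atomically by repointing one symlink.
# Startup scripts reference only $PREFIX/snort/bin/snort, so the older
# install stays intact and available for testing.
ln -sfn "$PREFIX/snort-2.6.1" "$PREFIX/snort"
echo "active: $(basename "$(readlink "$PREFIX/snort")")"
```

The payoff of this layout is that upgrading never overwrites a working install: testing a new release is one symlink change, and so is reverting.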

Is there some aspect of Snort you'd like to know more about? I may not have all the answers tumbling around in my head, but I can do research and ask some of the best Snort minds around if necessary.

Lessons from Analog Security

As a security person I try to take notice of security measures in non-digital settings. These are a few I noticed this week.

  • When visiting a jewelry store, I saw a sign that said the following: "Our insurance policy does not permit us to remove more than one item at a time from this display case." This sign was attached to a case containing the store's most valuable jewelry. This is an example of limiting exposure by restricting access to one asset at a time. In a more generic sense, the digital version might involve following guidelines imposed by an insurance company. Perhaps they would require WPA2 for wireless networks, etc.

  • I received a check from a client. Underneath the signature line I read "Two signatures required for amounts over $75,000." This is an example of dual accountability. It requires someone writing fraudulent checks to have an accomplice. The digital version involves requiring two privileged users acting together to accomplish a particularly sensitive task.

  • At many stores I saw video cameras directly above the cash register. While these might be useful for recording thieves, they are probably in place to deter employees from stealing. The digital version is comprehensive host- and network-centric monitoring.

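The two-signature rule lends itself to a direct digital translation. The following is a hypothetical sketch of dual accountability -- the user names, file, and threshold are invented for illustration -- where a sensitive action proceeds only after two distinct privileged users approve it:

```shell
#!/bin/sh
# Hypothetical sketch of dual accountability: a sensitive action is
# released only after two *different* privileged users approve it,
# mirroring the two-signature rule on the check.
APPROVALS=$(mktemp)

approve() {            # record an approval by the named user
    echo "$1" >> "$APPROVALS"
}

authorized() {         # true once at least two distinct users have approved
    [ "$(sort -u "$APPROVALS" | wc -l | tr -d ' ')" -ge 2 ]
}

approve alice
authorized && echo released || echo "held: need a second approver"
approve alice          # the same user approving twice does not count
authorized && echo released || echo "held: need a second approver"
approve bob
authorized && echo released || echo released="" "held: need a second approver"
```

The key detail, as with the check, is counting distinct approvers rather than total approvals; an accomplice is required for fraud to succeed.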

I think one of the fundamental problems of digital security is the inability to translate historically sound analog security practices into digital forms. Traditional computer scientists are not security experts. Traditional security experts are usually not computer scientists. Addressing this gap would be beneficial to both communities.

Can you think of other examples of security measures in the analog world that could be applied to the digital world?

Thursday, December 28, 2006

Pervasive Network Awareness via Interop SpyNet

In my 2005 book Extrusion Detection (p. 27) I defined the term pervasive network awareness (PNA):

A truly defensible network permits security administrators to achieve pervasive network awareness. Pervasive network awareness is the ability to collect the network-based information -- from the viewpoint of any node on the network -- required to make decisions.

Today while perusing Webcasts at Gigamon University, I listened to a Gigamon presentation on a "data access network" (so-called "DAN") built as the Interop SpyNet, shown earlier.
This is exactly an implementation of PNA. The Interop network and security admins can monitor the InteropNet and see traffic anywhere they like. This Interop Blog post provides a portal into discussions of the SpyNet, including history showing the idea stretches back to 1996. This shows that PNA is a good idea, and like many good ideas, not even new!

At some point I would like to see a SpyNet in person. I will be in Australia for Interop Las Vegas, but I will look into visiting New York in October.

It would be nice to see this approach built into all networks. I believe the reason it is not is that the InteropNet is a clean slate each year. If you're allowed to build a network from scratch using the latest and greatest tools and techniques, then you can see developments like this in action. Networks that have grown "organically" over a decade are likely to have plenty of dark streets and dangerous alleys where monitoring is dicey or impossible.

Update: I should mention that I dislike the term "data access network" (DAN). What could be more generic? What they should have said was "traffic access network" (TAN). Now we're describing the nature of the solution.

FreeBSD Developments

I wanted to quickly highlight two FreeBSD developments.

First, FreeBSD 6.2 RC2 is available. Assuming nothing serious happens, expect FreeBSD 6.2 RELEASE in about two weeks. This post explains the various .iso images. This post explains real weaknesses in the FreeBSD installation documentation, from the standpoint of a person not familiar with FreeBSD.

Second, Dru Lavigne explained how the new modular X.org works:

xorg 7.x is modular. In practical terms, this means that every driver, font, and application has its own port/package. To spell it out more clearly: my full installation of xorg 6.9 comprised 11 packages; a complete install of xorg 7.2 comprised just over 300.

I think it will be cool to only have to install a dozen or so ports in order to run X, instead of 300+. (Right now the equivalent is installing everything and then using only a small portion of the code.)

On a related FreeBSD note, I just subscribed to the RSS feed from Planet BSD. I can't believe what I've been missing.

Wednesday, December 27, 2006

Solera DataEcho

I came across this press release from Solera Networks on their open source DataEcho application. DataEcho is a Windows program that captures live traffic or reads traces in Libpcap format. It's best used for interpreting Web traffic, as shown in this screen capture of a visit to www.bejtlich.net recorded in Wireshark and fed to DataEcho.



My Web site doesn't render that well because it uses CSS, but you can see how DataEcho breaks down the Web traffic. This is a similar view from Wireshark, sorted on the last column.



Besides DataEcho, I found a SourceForge project page for a Solera-related "tEthereal Network Forensic Console", which says:

Management Console to reconstruct emails, web sessions, VOIP sessions, FTP, and all known supported Internet Protocols for Network Forensics. ***UPDATE*** Project release scheduled.

That looks interesting, but no files are available. I have been exchanging emails with Solera CEO Terry Haas, so I hope to find out more about this company's projects.

How Many Spies?

This is a follow-up to Incorrect Insider Threat Perceptions. I think security managers are worrying too much about insider threats compared to outsider threats. Let's assume, however, that I wanted to spend some time on the insider threat problem. How would I handle it?

First, I would not seek vulnerability-centric solutions. I would not even really seek technological solutions. Instead, I would focus on the threats themselves. Insider threats are humans. They are parties with the capability and intention to exploit a vulnerability in an asset. You absolutely cannot stop all insider threats with technical solutions. You can't even stop most insider threats with technical solutions.

You should focus on non-technical solutions. (Ok, step two is technical.)

  1. Personnel screening: Know who you are hiring. The more sensitive the position, the deeper the check. The more sensitive the position, the greater the need for periodic reexamination of a person's threat likelihood. This is common for anyone with a federal security clearance, for example.

  2. Conduct legal monitoring: Make it clear to employees that they are subject to monitoring. The more sensitive the position, the greater the monitoring. Web surfing, email, IM, etc. are subject to inspection and retention within the boundaries of applicable laws.

  3. Develop and publish security policies: Tell employees what is proper and improper. Let them know the penalties for breaching the policy. Make them re-sign the policies annually.

  4. Discipline, fire, or prosecute offenders: Depending on the scope of an infraction, take the appropriate action. Regulations without enforcement (cough - HIPAA - cough) are worthless.

  5. Deterrence: Tell employees all of the above regularly. It is important for employees who might dance with the dark side to fully understand the consequences of their misdeeds.


At the end of the day, you should ask yourself how many spies there are in your organization. Consider the hurdles an insider threat must leap in order to carry out an attack and escape justice.

  • He must pass your background check, either by having a clean record or presenting an airtight fake record.

  • He must provide a false name and mailing address to frustrate attempts to catch him.

  • He must evade detection by your internal audit systems.

  • He must have an escape plan to leave the organization and resurface elsewhere.


I could continue, but compare those difficulties to the ones facing a remote intruder in Russia who conducts a successful client-side attack on your company. Now which attack is more likely -- the insider or the outsider?

Incorrect Insider Threat Perceptions

Search my blog for "insider threat" and you'll find plenty of previous posts. I wanted to include this post in my earlier holiday reading article, but I figured it was important enough to stand alone. I'm donning my flameproof suit for this one.

The cover story for the December 2006 Information Security magazine, Protect What's Precious by Marcia Savage, clued me in to what's wrong with security management and their perceptions. This is how the article starts:

As IT director at a small manufacturer of specialized yacht equipment, Michael Bartlett worries about protecting the firm's intellectual property from outsiders. But increasingly, he's anxious about the threat posed by trusted insiders.

His agenda for 2007 is straightforward: beef up internal security.

"So far, we've been concentrating on the perimeter and the firewall, and protecting ourselves from the outside world," says Bartlett of Quantum Marine Engineering of Florida. "As the company is growing, we need to take better steps to protect our data inside."

Bartlett voices a common concern for many readers who participated in Information Security's 2007 Priorities Survey. For years, organizations' security efforts focused on shoring up network perimeters. These days, the focus has expanded to protecting sensitive corporate data from insiders--trusted employees and business partners--who might either maliciously steal or inadvertently leak information.


That sounds reasonable. As I see it, however, this shift to focus on the insider threat risks missing threats that are far more abundant.

First things first: the insider threat is not new. Check out the lead line from a security story:

You've heard it time and time again: Insiders constitute the greatest threat to your organization's security. But what can you do about it?

That's the lead from a July 2000 Information Security article called "Managing the Threat from Within".

Let's think about this for a moment. InfoSecMag in Dec 2006 mentioned that "organizations' security efforts focused on shoring up network perimeters," so turning inwards seems like a good idea. Wasn't looking inwards a good idea already in 2000? I'm probably not communicating my point very well, so here is another excerpt from the same Dec 2006 article:

Glen Carson, information security officer for California's Victim Compensation and Government Claims Board, says the problem stems more from a lack of user education than poor authentication.

His priority is education: explaining to the 350 users in his agency why data security is important and how it will help them in the long run.

"We recently completed a third-party security assessment and got a good test of our exterior shell, but internally our controls were lacking," he says.


I wonder if that "good test of our exterior shell" included client-side exploitation? I doubt it. Do you see where I am going?

Here's one other excerpt.

Mass-mailing worms may have gone the way of the boot-sector virus, but that doesn't mean security managers don't have malware on their radar...

Yet there hasn't been a major outbreak since the Sasser worm in 2004, so what's all the fuss? Security managers will tell you that the lack of activity says a lot about the maturation of prevention technologies, advances in automated patch management tools, effectiveness of user awareness campaigns, and overall layered defense strategies.


Ok, are you laughing now? The reason why we're not seeing massive worms is that there's no money to be made in it. Everything is targeted these days. Even InfoSecMag admits it:

It's no secret that hacker motivations have changed from notoriety to money. Many of today's worms carry key-logging trojans that make off with your company's most precious assets. Attacks are targeted, often facilitated by insiders. Rather than relying on social engineering to move infected email attachments from network to network, hackers are exploiting holes in browsers, using Javascript attacks to hijack Web sessions and steal data.

Exactly (minus the "facilitated by insiders" part -- says who, and why bother when remote client-side attacks are so easy?)

Here's my point: why are security managers so worried about Eva the Engineer or Stan the Secretary when Renfro the Romanian is stealing data right now? I read somewhere (I can't cite it now) that something like 70 million hosts on the Internet may be under illegitimate control. It may make sense to speak of the number of hosts not compromised instead of those that are compromised. In 2004 the authors of the great book Rootkit claimed all of the Fortune 500 was 0wned. Why do we think it's any different now?

It's possible that taking steps to control trusted insiders will also slow down outsiders who have gained a foothold inside the enterprise. However, I don't see too many people clamping down on privileged users, and guess who powerful outsiders will be acting as when they compromise a site?

Of course we should care about insiders, and the insider threat is the only threat you can control. Outsiders are far more likely to cause an incident because, especially with the rise of client-side attacks, they are constantly interacting with your users. The larger the number of users you support, the greater the number of targets an outsider can exploit. Sure, more employees means more insider threats, but let's put this in perspective!

The fact that you offer a minimal external Internet profile does not mean you're "safe" from outsiders and that you can now shift to inside threats. The outsiders are deadlier now than they've ever been. They are in your networks and acting quietly to preserve their positions. Give Eva and Stan a break and don't forget Renfro. He's already in your company.

Holiday Reading Round-up

During some holiday downtime I managed to catch up on some reading. Recently I mentioned the ISO/IEC 27001 standard. The November 2006 ISSA Journal featured an article by Taiye Lambo of eFortresses, an ISO/IEC 27001 consultancy. From what I read it seems ISO/IEC 27001 is a good option for organizations leaning towards related ISO standards like ISO 9000.

After posting NAC Is Fighting the Last War, I read another ISSA Journal article titled Beyond NAC: The value of post-admission control in LAN security by Jeff Prince of ConSentry. Jeff uses both "Network Admission Control" and "Network Access Control" to describe NAC, although I believe he meant to use the former throughout the article. Jeff discusses the importance of controlling a user's activity once he is allowed onto the LAN, hence the "post-admission" aspect. This function will eventually find its way into everyone's switches, so I wouldn't rush out to buy separate new gear. I think post-admission NAC is a cool idea, but I would be surprised to see operators spending the time necessary to define policies and traffic flows properly.

Thanks to those of you who responded to my post Smart Cards Everywhere. It turns out the 23 November 2006 issue of NWC featured Analysis: Physical/Logical Security Convergence by Jeff Forristal. The article mentioned solutions from AMAG, CoreStreet, Gemalto, and Intercede.

Although I haven't talked about these topics in detail before, I found Jeffrey Young's article Enterprise WANs to be helpful. I had never heard of Virtual Network Operators like Vanco before. I think it would be neat to get involved with the security aspects of these sorts of carrier-level issues. (Please email me (taosecurity [at] gmail [dot] com) if you think I could help your team!)

The November 2006 issue of Information Security featured two interesting articles. The first featured a face-off between Bruce Schneier and Marcus Ranum on the effectiveness of federal security regulations. I thought Bruce's "characteristics of good regulations" were worth memorizing.

He said:

  1. They're targeted at a specific externality.

  2. The penalties are large enough to make the alternative more attractive.

  3. They put the entity able to fix a security problem in charge of the problem.


The same issue also featured a great story by Sandra Kay Miller on tapping fiber links. The paper Optical Taps (.zipped .doc) from Oyster Optics is good reading.

The December 2006 issue of Information Security featured several helpful articles too. All of you HIPAAns out there must read HIPAA-ocrisy by Joseph Granneman. Essentially, management has decided to ignore HIPAA because there's only been one HIPAA conviction in the three years since HIPAA "enforcement" started.

Finally, guru Dan Geer wrote a great piece called Playing for Keeps in ACM Queue. It's highly recommended as a survey of the sorry state of our security industry.


Copyright 2006 Richard Bejtlich

Starting Out in Digital Security

Today I received an email which said in part:

I'm brand new to the IT Security world, and I figure you'd be a great person to get career advice from. I'm 30 and in the process of making a career change from executive recruiting to IT Security. I'm enrolled in DeVry's CIS program, and my emphasis will be in either Computer Forensics or Information Systems Security. My question is, knowing that even entry-level IT jobs require some kind of IT experience, how does someone such as myself, who has no prior experience, break into this exciting industry? My plan is to earn some of the basic certifications by the time I graduate (A+, Network+, Security+). What else should I be doing? What introductory books and resources can you recommend?

I thought I'd discussed this sort of question before, but all I found was my post on No Shortcuts to Security Knowledge and Thoughts on Military Service. I believe I cover this topic in chapter 13 of Tao.

To those who are also interested in this question, I recommend reading both of those posts first and then returning to this post. I'll do my best to provide some additional useful advice here.

Here are seven ways you can make yourself more attractive to security-minded employers.

  1. Represent yourself authentically. It's tough when starting out to recognize the size of the digital security world. It's taken me nearly ten years to grasp the scope of the field. You'll be successful if you can clearly identify just what you (think) you know, and what you definitely do not. You will not do anyone favors if you claim to be even somewhat proficient in all or nearly all aspects of digital security. It's extremely important to want to work in security for love of the field, and not the potential paycheck.

  2. Stop using Microsoft Windows as your primary desktop. This is not an anti-Microsoft rant. The reality is the vast majority of the world uses Windows. When you stop using Windows, you move yourself into a smaller group that needs to think and troubleshoot. Some see this as a problem, while others see it as a learning opportunity. If you are completely new, start with one of the easy Linux distros. As you feel adventurous try one of the BSDs. (Mac OS X doesn't really count as a non-Windows platform for the purposes of this point.) This does not mean you will never use Windows again. I dual-boot Windows and FreeBSD on my laptop.

  3. Attend meetings of a local security group. Ideally you would have a group like NoVA Sec nearby, but you're more likely to have an ISSA chapter in your city. In either case, attend some meetings. Get immersed in the discussions that occur in those settings. Ask questions.

  4. Read books and subscribe to free magazines. You should start with the books on my Listmania Lists. Subscribe to Information Security, SC Magazine, NWC, and Cisco's IP Journal. I wouldn't bother with 2600. It costs money and more often than not you'll read about "hacking" point of sale terminals and the like.

  5. Create a home lab. No real security "pro" has only a single laptop/desktop connected to a DSL/cable modem. Most every security person I know maintains some sort of lab. If you are resource-constrained, install VMware Server and build a small virtual lab. Experiment with as many operating systems as you can.

  6. Familiarize yourself with open source security tools. Fyodor's Sectools.org is a good starting point. As you meet people and read, you'll learn of new techniques and tools to try.

  7. Practice security wherever you are, and leverage that experience. So many people are in security positions but do not recognize it. If you are a network administrator, you have security potential and responsibilities. If you are a system administrator, you have a platform to secure. If you are a developer, you should practice secure coding. If you set up a home lab, you need to operate it securely. It is both a blessing and a curse that anyone with a computing device is an administrator and a security practitioner. Whatever your background, consider how it might apply to security. For example, former software developers might become involved in application testing and/or source code review, instead of securing carrier networks.


Once you follow this advice, where can you work? A search for jobs with "network security" at Monster.com or similar job sites reveals plenty of opportunities. If you are just starting out, I recommend getting a job where you are a cog in the machine and not the whole machine. In other words, you are probably setting yourself up for failure if you land a job as an organization's sole security person -- and you are brand new. You won't know where to start and you'll have no one on site to mentor you.

It's best to pick a niche first, know that niche well, and then branch out as time passes. It also pays to know where you (want to) fit in the security community.

I appreciate anyone else's advice for this question-asker.

Monday, December 25, 2006

Christmas Wish: VMware FreeBSD Host Support

I noticed this BSD News story mentioned a long-running VMTN thread showing requests for FreeBSD to be supported as a VMware host OS. This means you could run VMware on FreeBSD, instead of Windows, Linux, or (soon) Mac OS X.

If you share this interest, please post to the VMTN thread and let your desire be known. Thank you.

Friday, December 22, 2006

Application Security Monitoring

I found the following quote by Microsoft's Ray Ozzie, in The Web 2.0 World According to Ozzie, to be fascinating:

"In terms of managing trust boundaries, one of the huge challenges that enterprises are going to have is...managing trust between components of composite applications...

"We believe there should be significant auditing within service components—such that when you do expose a partner to certain enterprise data...you have a complete record of the kinds of things that their app did."
(emphasis added)

I think Mr. Ozzie is advocating application security monitoring, a cousin of network security monitoring. If Mr. Ozzie is being as clever as I think he might be, he's realizing that it's going to be nearly impossible to run Web services and the like "securely." We're going to have to rely on monitoring and response since prevention will be far too complex. Resistance will be tried, but will be -- you guessed it -- futile.

TIME on Risk

TIME magazine's cover story a few weeks ago was Why We Worry About The Things We Shouldn't... ...And Ignore The Things We Should. There's no direct relationship to digital security, but I found it interesting to read about risk perceptions in the analog world.

Wireshark Substitute Encourages Defensible Software

Thanks to nikns in #snort-gui for pointing me towards this 23rd Chaos Communication Congress talk on an alternative to Wireshark created by Andreas Bogk and Hannes Mehnert. This blog post explains the rationale behind this new tool, which is still in its infancy and nowhere near as feature-complete as Wireshark. Two implementations exist. Here is a screenshot of GUI-sniffer:



Here is a screenshot of Network Night Vision:



These applications are written in the Dylan programming language, which is new to me. There's a lang/dylan FreeBSD port, but as you can see I just tried running the Windows binaries.

The authors have written a paper (.pdf) that describes the project in detail. From the first part of the paper:

The security industry is in a paradox situation: many security appliances and analysis tools, be it IDS systems, virus scanners, firewalls or others, suffer from the same weaknesses as the systems they try to protect. What makes them vulnerable is the vast amount of structured data they need to understand to do their job, and the bugs that invariably manifest in parsers for complex protocols if written in unsafe programming languages.

Since we noticed a lack of a decent secure framework for handling network packets, we have designed and implemented major parts of a TCP/IP stack in the high level programming language “Dylan”, focusing on security, performance and code reuse.

Dylan is a high level language that provides a number of features to detect and prevent data reference failures, one of the most common sources of vulnerabilities in C software.

Bounds checks for array accesses are inserted where needed by the compiler. Also a garbage collector is used, avoiding the need to care about manual memory management, and preventing bugs from early frees or double frees. Dylan is strongly typed, so bypassing the type system by doing casts and pointer arithmetic is not possible.

Even though it is as easy to use as common scripting languages, Dylan programs are compiled to machine code. It bridges the world between dynamic and static typing by doing optimistic type inferencing: bindings can be type annotated, and types of expressions can be computed at compile time. This often eliminates type checks or function dispatch in the code.


I am not in a position to critique the programming language used or the authors' implementation. However, I think building "defensible software," or software that has the best chance possible of resisting intrusions, is a great idea. It's the software equivalent of my "defensible network architecture" idea, which describes how to build an enterprise with the best chance possible of resisting intrusions.

I will probably add this tool and approach to my classes. When I teach network forensics I stress the importance of handling malicious traffic that might seek to compromise analysis tools like Wireshark or Snort. Thus far the anti-forensics movement seems to have concentrated on denying host-centric forensics, but exploits have always been available for subverting network inspection tools.

Incidentally, there are a ton of interesting talks at the CCC this year.

Zone-H Explains Defacement

Web site defacement mirror Zone-H posted a revealing report on the recent defacement of their own site. The intrusion resulted from a combination of human and technical failures.

The moral of the story is that anyone can be compromised, because the attacker has the initiative. The attacker is usually more motivated and has more time and resources than the defender. In a world where anyone can be compromised, there is no excuse for not monitoring and preparing for incident response. Every digital resource is a future victim.

The "solution" to intrusions is analog: arresting the intruders. It is not technical.

Thursday, December 21, 2006

NAC Is Fighting the Last War

My post on the IETF Network Endpoint Assessment Working Group elicited a comment that suggested I expand on my thoughts, namely that Cisco Network Admission Control (NAC) / Microsoft Network Access Protection (NAP) / Trusted Network Connect (TNC) "are all fighting the last war." Let's see what the comment poster's own company has to say about NAC.

(Please note that although I use NAC in the text that follows [as used by my sources], I could just as easily say NAP or TNC or NEA. I only single out Cisco because they are investing so much effort into NAC.)

Network Admission Control (NAC), a set of technologies and solutions built on an industry initiative led by Cisco, uses the network infrastructure to enforce security policy compliance on all devices seeking to access network computing resources, thereby limiting damage from emerging security threats. Customers using NAC can allow network access only to compliant and trusted endpoint devices (PCs, servers, and PDAs, for example) and can restrict the access of noncompliant devices.

On the surface, that sounds reasonable. But let's think about the problem we are really trying to solve. Why does one seek to "enforce security policy compliance on all devices seeking to access network computing resources"? The answer here is "limiting damage from emerging security threats." What threat might that be? If you spend any time reading NAC and related marketing, you see a common theme:

[NAC] Ensures endpoints (laptops, PCs, PDAs, servers, etc.) conform to security policy... [NAC] Proactively protects against worms, viruses, spyware, and malware; focuses operations on prevention, not reaction...

Again, from this Q&A document:

NAC helps ensure that all hosts comply with the latest corporate security policies, such as antivirus, security software, and operating system patch, prior to obtaining normal network access...

Cisco NAC is able to control and reduce large-scale security events such as virus outbreaks...

Proactive protection against worms and viruses: Cisco NAC reduces and prevents large-scale infrastructure disruptions caused by vulnerability-based exploits...

It enables customers to use their existing network investments, including antivirus and other security and management software.


Basically, NAC and friends are anti-virus/worm technologies. They're a reaction to the biggest problem the industry faced five years ago: viruses and worms like Code Red and Nimda. That malware exploited vulnerabilities for which patches had been available for weeks or months. "If only hosts connecting to our network were patched!" was the complaint. NAC was developed as a "solution," specifically for the Blaster worm of 2003.

Let's assume NAC was instantly available and deployed in 2001. NAC still doesn't solve what any serious security person should consider the real problem: disclosure/theft, corruption, or denied access to a business' data. Malware can be a means to any of those ends, or it can be an end unto itself, simply spreading for the joy of it.

What does NAC have to say about the half-dozen or more publicly exposed zero-days discovered in 2006? Zippo. So what if you're fully patched and NAC lets you in? You could still be owned and then spread that ownage to the rest of the company.

Malware isn't the only way to break the CIA triad, however. NAC has nothing to say about exploiting poorly configured targets that may be fully patched. It does not address transitive trust. It fails when a system is compromised using stolen credentials. Worst of all, NAC is completely worthless when facing rogue or malicious users operating completely compliant endpoints.

I'm sure some NAC advocates will say my focus on malware misses the boat, perhaps citing the following:

Q. Can Cisco NAC provide additional enforcement beyond antivirus, firewall, and security patches?

A. Yes. Cisco NAC is designed to deliver broad policy compliance enforcement capabilities. Even though antivirus, firewall, and security patches are popular examples of security policy requirements, Cisco NAC can enforce other requirements as well. For example, if organizations want to protect the confidential data residing on their mobile users' laptops from physical loss or theft, they can use Cisco NAC to enforce encryption requirements.


That's great, but it doesn't address any of the issues I previously covered.

I've got a cheaper and easier way to ensure I'm not plugging an unpatched Windows host into someone's network. I run FreeBSD on my laptop.

Now, not all NAC is bad, as Richard Stiennon says. First he criticizes NAC as I have done:

I have written often enough about the absurdity of deploying the infrastructure to check the state of a device... My advice is to do NAC and run as fast as you can from system-health monitoring and quarantine...

Here is the twist:

[A]pplying network-level controls to user access... is an idea whose time has come...

I first heard about the concept of the user-defined network from John Roese, CTO of Enterasys Networks, about five years ago. The idea was that there would be a detailed policy for every user that would define a custom network enforced with virtual LANs (VLAN), that gave access only to resources needed to do the user's job.

Not only would an engineer, say, not be able to log into the payroll server but she could not even see it on the network. This subdividing of the network would limit the exposure to risk from the devices connected to it. Infected laptops could not spread their malware to others. Malicious employees would be hampered in their ability to scan the network for targets. Visitors would be limited to Internet access only.


That's a form of segmentation enforced on a per-user level. That's a neat idea, but I wonder just how willing administrators are to go through the trouble. I think 802.1X might be enough for most people, although the rise of virtual systems all sharing a single switch port makes that difficult.

RS has written and podcasted a LOT about NAC -- read/listen here.

So, in brief -- if you want a way to make sure your NAC-compatible devices are patched to some level you want, in order to reduce their vulnerability to malware, in the hope that running a certain version of Windows makes a difference, then use NAC to enforce compliance. Otherwise, don't waste your time or money.

Smart Cards Everywhere?

One of my clients wants to know if it's possible to implement something like the DoD Common Access Card (CAC, not "CAC card") in a commercial setting. In other words, you use a single card for building access, PC access, etc. Is anyone using something like that in their organization?

Thoughts on SAS 70 and Other Standards

I'm not an auditor or CPA, thank goodness. I first heard of SAS 70 (Statement on Auditing Standards No. 70, Service Organizations) when I visited Symantec in October. Last week, however, one of my clients asked what I knew about SAS 70. I knew Symantec used its SAS 70 results as a way to avoid having every Symantec managed security service client perform its own audit of Symantec. My client wanted to know if his company might also benefit from getting a SAS 70 audit.

I found an exceptionally helpful CSO Online article by Michael Fitzgerald about SAS 70. I'd like to share some insights from it.

A spokeswoman for the body that created SAS 70 doesn't actually recommend it for security purposes. "It isn't a measure of security, it's a measure of financial controls," says Judith Sherinsky, a technical manager on the audit and test standards team at the American Institute of Certified Public Accountants (AICPA), which created SAS 70...

For security audits, Sherinsky recommends a different AICPA standard: SysTrust, an attestation engagement that includes criteria for system security. SysTrust was developed to help CPAs gauge whether systems meet the following criteria: availability, security, processing integrity, online privacy and confidentiality.


That's extraordinary. It means all the customers who get a SAS 70 audit from a service provider aren't getting any real security assurances. It sounds like Common Criteria.

The Sarbanes-Oxley Act is essentially a mandate to establish internal controls so that corporate executives can't fudge their numbers. Sarbox requires that companies verify the accuracy of their financial statements, and establishes SAS 70 Type 2 audits as a way to verify that third-party providers meet those needs...

A SAS 70 audit does not rate a company's security controls against a particular set of defined best practices. In a SAS 70 audit, the service organization being audited must first prepare a written description of its goals and objectives. The auditor then examines the service organization's description and says whether the auditor believes those goals are fairly stated, whether the controls are suitably designed to achieve the control objectives that the organization has stated for itself, whether the controls have been placed in operations (as opposed to existing only on paper), and in a Type 2 engagement, whether these controls are operating effectively.

The fact that a company has conducted a SAS 70 audit does not necessarily mean its systems are secure. In fact, a SAS 70 may confirm that a particular system is not secure, by design.

"You can have control objectives to make any statement management may want to make," says Robert Aanerud, chief risk officer and principal consultant at security consultancy HotSkills. In effect, he says, management could decide that the company is OK with bad access control, and the auditor (who must be a CPA) then needs to ensure that access control is at least bad. The SAS 70 opinion would essentially say that, yes, the company has achieved its stated control objectives.
(emphasis added)

It sounds like reading the SAS 70 report is important!

Unfortunately, consultants say many companies are skipping the hard work and treating SAS 70 as a security rubber stamp. Sharon O'Bryan, head of O'Bryan Advisory Services, says she's aware of companies taking SAS 70 reports for potential service providers, sticking them someplace and never reading them...

Service providers say they're being asked more and more often for SAS 70 audits, often instead of governance standards like Cobit or ISO 17799. That's even true for companies that handle security functions, traditionally more oriented toward granular best-practice tests than the broad audit test of SAS 70.

Michael Scher, general counsel and compliance architect at Nexum, a security product and service provider, says his company is preparing to undergo its first SAS 70 audit. "It's an efficiency-type move," Scher says. It will save his company the trouble of having to be audited by every potential client, or generate reams of documentation in answer to questions.


If SAS 70 is so bad for security, is there an alternative? The AICPA quote earlier in the article mentions SysTrust. AICPA provides a comparison brochure for SAS 70 vs SysTrust. It includes the following:

SAS 70 intended purpose: To provide user auditors with information about controls at the service organization that may affect assertions in the user organizations' financial statements. This generally enables a user auditor to perform an audit.

Trust Services intended purpose: To provide assurance that an organization's system's controls meet one or more of the Trust Services principles and related criteria. Areas addressed by the Principles include: security, online privacy, availability, confidentiality and processing integrity.


It seems SAS 70 is not at all what the customers think it is. Alternatively, they know they are not getting any security assurances, but just want a rubber stamp. Apparently this is more make-work for CPAs, since SAS 70 work must be done by a CPA.

Speaking of work for CPAs, their What Skills Do I Need to Provide SysTrust Services? site is hilarious in a sick way:

Application of Current Skills and Knowledge. CPAs have the ethical standards and principles needed to evaluate and provide assurance on the reliability of systems. CPAs have skills in evaluating evidence, determining the effectiveness of internal controls, and reporting to third parties on the results of the work performed.

New Skills and Knowledge May Be Required. CPAs in public accounting and CPAs in industry may require additional competencies in addition to those from the traditional accounting, audit, and tax arena in order to provide services related to SysTrust. In order to deliver SysTrust-related advice or assurance, CPAs may need to use automated techniques.


So, in brief: CPAs are ethical but are going to rely on "automated techniques" to validate security effectiveness. Right...

What about the various ISO standards? Here Wikipedia is helpful:

ISO/IEC 27001 is an information security standard published in October 2005 by the International Organization for Standardization and the International Electrotechnical Commission. Its complete name is Information technology -- Security techniques -- Information security management systems -- Requirements. The current standard replaced BS 7799-2:2002, which has now been withdrawn.

ISO/IEC 27001:2005 specifies the requirements for establishing, implementing, operating, monitoring, reviewing, maintaining and improving a documented Information Security Management System (ISMS). It specifies requirements for the management of the implementation of security controls. It is intended to be used in conjunction with ISO 17799:2005, a security Code of Practice, which offers a list of specific security controls to select from.

This is also the first standard in a proposed series of standards which will be assigned numbers within the ISO 27000 series. Others are anticipated to include a re-publication of ISO 17799, a standard for information security measurement and metrics, and potentially a version of the current BS7799-3 standard.

Prior to the release of the ISO 27001 standard, organizations could only be certified against the British Standard Institute's BS7799-2 standard. Now organizations can obtain the ISO 27001 certification, as the BS7799-2 certification is being phased out, and the standard itself has been withdrawn...

It should also be noted that this certification scheme now aligns with other ISO schemes, such as those for ISO 9001 and ISO 14001.


This blog entry has some opinion on ISO 27001 too.

Do any of you recommend a certain standard to show your company is implementing effective security practices?

Wednesday, December 20, 2006

Port-based Alerts Are a Bad Idea

For my 1700th post (as reported by the new Blogger infrastructure) I thought I would report on an issue I'm looking at in Sguil right now.

I have 1586 alerts like the following aggregated in my Sguil console. This is the text representation.

Count:1 Event#1.130182 2006-12-15 15:57:32
DOS MSDTC attempt
a.b.c.d -> e.f.g.h
IPVer=4 hlen=5 tos=0 dlen=1388 ID=16858 flags=0 offset=0 ttl=55
chksum=38030
Protocol: 6 sport=10000 -> dport=3372

Seq=3640110148 Ack=536397245 Off=5 Res=0 Flags=***A**** Win=65535 urp=15810
chksum=0
Payload:
35 69 86 C2 00 00 04 1E B1 B6 7E 19 FC 4A 28 87 5i........~..J(.
18 7C 2E F4 12 68 F1 79 66 F6 D0 17 D0 26 5A 48 .|...h.yf....&ZH
C6 0A 54 AB 58 42 9A F4 83 7A 85 3F E3 40 AD CB ..T.XB...z.?.@..
AF 1C 03 EE DE CD 38 94 1E 1F 55 8C 99 E8 1A 4E ......8...U....N
...truncated...

They were caused by this rule:

alert tcp $EXTERNAL_NET any -> $HOME_NET 3372 (msg:"DOS MSDTC attempt";
flow:to_server,established; dsize:>1023; reference:bugtraq,4006;
reference:cve,2002-0224; reference:nessus,10939; classtype:attempted-dos;
sid:1408; rev:11;)

Basically this says any packet with more than 1023 bytes of application data to port 3372 TCP will generate an alert, although the flow keyword says the packet should be from a client to a server.

I think this is really traffic from a client (e.f.g.h) using source port 3372 TCP to a server (a.b.c.d) using destination port 10000 TCP, perhaps Veritas backup?
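The rule's port-and-size test is easy to reproduce in miniature. Here is a hedged Python sketch (the function name is mine, the payload size approximates the dlen=1388 packet above, and the flow/stream tracking is ignored) showing how return traffic from a backup server to a client that happened to pick 3372 as its ephemeral port satisfies the rule:

```python
def sid_1408_matches(dport: int, payload_len: int) -> bool:
    """Sketch of SID 1408's crude test: any packet to TCP port 3372
    carrying more than 1023 bytes of application data matches,
    regardless of what service actually lives behind that port."""
    return dport == 3372 and payload_len > 1023

# Server a.b.c.d:10000 answering a client that happened to choose
# 3372 as its ephemeral source port still satisfies the rule.
print(sid_1408_matches(3372, 1348))   # True -> false positive
print(sid_1408_matches(10000, 1348))  # False -> the real service port
```

Nothing ties port 3372 to MSDTC except convention, which is exactly why port-based rules misfire.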

One way to validate that idea is to find other sessions involving a.b.c.d and e.f.g.h, where a.b.c.d is also offering port 10000 TCP. Sure enough, when I check Sguil and look at sessions involving those IPs, I find exactly that.

If you really need to see what is happening, you look for full content data and find the actual three-way handshake:

10:47:15.932338 IP e.f.g.h.3372 > a.b.c.d.10000: S 536218454:536218454(0) win 65535
10:47:15.959516 IP a.b.c.d.10000 > e.f.g.h.3372: S 3638950519:3638950519(0)
ack 536218455 win 65535
10:47:15.970914 IP e.f.g.h.3372 > a.b.c.d.10000: . ack 1 win 65535
10:47:16.886497 IP e.f.g.h.3372 > a.b.c.d.10000: . 1:871(870) ack 1 win 65535
...truncated...

I used to cringe when our Air Force ASIM sensors fired alerts on port 1524, causing hours of wasted analyst time. Thankfully our tools for inspecting these sorts of alerts have improved, although the rationale behind rules like this is still weak.

Switched to New Blogger

I switched today to the new Blogger infrastructure. A few of my students from USENIX LISA were Google employees. They encouraged me to switch. I did try to do so last week, but I received an error saying my blog had too many postings (or something to that effect). Today, however, I was able to move all my blogs to the new system. Let me know if you see any problems. Thank you.

December 2006 (IN)SECURE Magazine

The December 2006 (.pdf) issue of (IN)SECURE Magazine is available. Interesting articles include Web 2.0 Defense with AJAX Fingerprinting and Filtering by Shreeraj Shah, and another "virtual trust" article by Ken Belva and Sam DeKay.

IETF Network Endpoint Assessment Working Group

Dark Reading posted an article on the new Network Endpoint Assessment (nea) IETF working group. The description says, in part:

Network Endpoint Assessment (NEA) architectures have been implemented in the industry to assess the "posture" of endpoint devices for the purposes of monitoring compliance to an organization's posture policy and optionally restricting access until the endpoint has been updated to satisfy the posture requirements. An endpoint that does not comply with posture policy may be vulnerable to a number of known threats that may exist on the network. The intent of NEA is to facilitate corrective actions to address these known vulnerabilities before a host is exposed to potential attack. Note that an endpoint that is deemed compliant may still be vulnerable to threats that may exist on the network. The network may thus continue to be exposed to such threats as well as the range of other threats not addressed by maintaining endpoint compliance.

I have a feeling these Cisco Network Admission Control (NAC) / Microsoft Network Access Protection (NAP) / Trusted Network Connect (TNC) plans are all fighting the last war. Others have criticized NEA, and I tend to agree with their conclusions. I suspect "business realities" are going to prevent security people from restricting access to NEA-noncompliant devices. At some point NEA will be another part of configuration management anyway.

Tuesday, December 19, 2006

Thoughts on Check Point Acquisition of NFR

Earlier this year I covered Check Point's attempt to purchase Sourcefire. Well, Check Point bought another vendor -- NFR -- for $20 million. Talk about market valuation; Sourcefire's sale price was $225 million. NFR is also down to 22 employees, according to the press release. Although the FAQ says

Check Point intends to continue to sell, support, and develop an independent NFR Security product line.

I doubt that will last. It makes more sense for Check Point to integrate the technology into its firewalls and discard the separate box than to keep selling a standalone product.

At this point it seems we're left with the following IDS/IPS vendors:

Let's see how that relates to the idea that all network security functions will collapse to switches. The first four sell switches, so I expect them to lead that drive. The fifth (ISS) is owned by IBM, which is more interested in services these days. I expect IBM will discontinue or sell off that product line, following Symantec's lead, to focus on services.

I don't think McAfee's prospects are good. I think Microsoft will eventually crowd out the anti-virus/anti-malware/anti-spyware/NAC/host defense market. All host-centric security will collapse into the operating system. That knocks out a huge chunk of McAfee's product line. This is really going out on a limb, but I could see McAfee being sold off in pieces, with Microsoft acquiring host-centric assets, Cisco or another switch vendor buying Intrushield, and IBM acquiring the services part.

Where does this leave Sourcefire? If they eventually do go public, I think they will still end up being purchased by someone -- maybe Cisco. At some point Cisco will realize their IDS is not that great, and they will buy better technology. The Feds will see Cisco as a perfectly acceptable suitor and will approve the deal.

Returning to Check Point, they will probably be acquired by a switch vendor at some point too.

Did I miss anyone? I don't count all the vendors repackaging Snort.

Saturday, December 16, 2006

Two Prereviews

Two publishers were kind enough to send new books last week. I plan to read and review both early next year. The first is McGraw-Hill/Osborne's Hacking Exposed: VoIP by David Endler and Mark Collier. The best Hacking Exposed books introduce a new technology, then demonstrate ways to break it that a reader can duplicate. I like seeing new HE books on specific issues, rather than having everything rolled into a single book. The second is Syngress' Wireshark & Ethereal Network Protocol Analyzer Toolkit by Angela Orebaugh and friends. This looks like an updated edition of 2004's Ethereal Packet Sniffing, which I really liked. Jose Nazario's review gave it four stars, partly due to editing problems. I plan to read this book and let you know what I think.

Duronio Postscript: 97 Months

In June and July this year I devoted several posts to covering the Duronio intrusion, where my friend Keith Jones served as prosecution expert witness. Keith called this week to tell me Roger Duronio was sentenced to 97 months -- eight years and one month -- in prison for his crimes. Great work Keith!

Pointer to Snort 3.0 Briefing Summary

Saad Kadhi kindly pointed me to this blog post which summarizes a talk given by Marty Roesch. Saad describes Marty's plans for Snort 3.0, and I recommend taking a look.

Saturday, December 09, 2006

Matasano Is Right About Agents

I've been exceptionally busy teaching all week at USENIX LISA, so blogging has been pushed aside. However, I literally read the Matasano Blog first, of all the Bloglines feeds I watch. This evening I read their great post Matasano Security Recommendation #001: Avoid Agents. They really mean "Minimize Agents," as noted in their summary:

Enterprise security teams should seek to minimize their exposure to endpoint agent vulnerabilities, by:

1. Minimizing the number of machines that run agent software.
2. Minimizing the number of different agents supported in the enterprise as a whole.


I absolutely agree with these statements. One of the first signs that you are dealing with a clueless security manager is the requirement to run anti-virus on every system. I shared the pain of such a foolish idea yesterday with a student who is struggling to meet such a mandate. He must deploy anti-virus on his Unix-like servers (I forget what OS -- something not common, however), and he's not allowed to use any open source solution. He's ended up with the only vendor in the world who sells a so-called "AV" solution for his platform, and it's absolutely a waste of money.

Worse, as is the case any time you add code to a platform, you are adding vulnerabilities. Write the following on your security policy management clue-bat: Running AV is not cost-free. In other words, running AV on any system may introduce vulnerabilities that were not present before. Try perusing the results of querying Secunia or OSVDB to see lists of AV products with security problems -- some of them allowing privilege escalation and compromise.

The only problem I have with the Matasano approach is the slide I posted above. Agents or Enterprise Management Applications are never "threats." They may offer vulnerabilities which can be exploited by threats, but agents themselves are not a threat.


Copyright 2006 Richard Bejtlich

Tuesday, December 05, 2006

Bejtlich Book Signing Thursday 1230 in DC

I will attend a book signing event at USENIX LISA 06 at the Wardman Park Marriott Hotel in Washington DC from 1230-1330 on Thursday 7 December. Representatives from Reiters will be selling books there as part of the conference expo from 1000-1400 on Thursday. Please stop by to say hello if you'd like a book signed.

I'll return to LISA on Friday to teach Network Security Monitoring with Open Source Tools. You can still sign up onsite if you'd like to attend. Thank you.

TCP/IP Weapons School Part 1 Wrap-Up

I'd like to address a few issues that arose during class Sunday and Monday.

First, someone asked about interoperability between the various Ethernet frame types. Page 75 of the excellent Troubleshooting Campus Networks states

Two stations cannot communicate unless they share a common frame format, which is sometimes beneficial. For example, if you have two networks on a physical medium that you wish to keep separate for security reasons, you can configure the networks for different frame types and they won't communicate with each other.

I don't agree with the "security" aspect, since a station on a SPAN port can still see the traffic through promiscuous sniffing. Still, now you know that a host using Ethernet II framing can't talk to one using 802.3 LLC SNAP, for example.

One of you asked how a host knows the length of an Ethernet II frame if the frame doesn't carry a length field like 802.3. This FAQ claims:

How is the length of an Ethernet II frame calculated?

The length of an Ethernet II frame is not present in the frame itself. It depends on the Ethernet network interface used. When the interface sends a frame to the network device driver, it supplies the length of the received frame.


The IP header length field only specifies the length of the IP header. TCP's data offset field points to the start of application data, which tells us the length of the TCP header. UDP carries an explicit length field covering its header and data.
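As a sketch of how those fields fit together, here is minimal Python that extracts the header lengths from a raw IPv4 packet (field offsets only, no validation; the fabricated test packet is mine):

```python
import struct

def header_lengths(pkt: bytes):
    """Return (IP header length, transport length info) for a raw
    IPv4 packet. The IHL (low nibble of byte 0) counts 32-bit words;
    so does TCP's data offset (high nibble of byte 12 of the TCP
    header); UDP's length field covers its header plus data."""
    ihl = (pkt[0] & 0x0F) * 4   # IP header length in bytes
    proto = pkt[9]              # 6 = TCP, 17 = UDP
    if proto == 6:
        return ihl, (pkt[ihl + 12] >> 4) * 4  # TCP header length
    if proto == 17:
        return ihl, struct.unpack("!H", pkt[ihl + 4:ihl + 6])[0]  # UDP length
    return ihl, None

# Fabricated 20-byte IPv4 header (IHL=5, protocol 6) followed by a
# 20-byte TCP header (data offset=5).
ip = bytes([0x45]) + bytes(8) + bytes([6]) + bytes(10)
tcp = bytes(12) + bytes([0x50]) + bytes(7)
print(header_lengths(ip + tcp))  # (20, 20)
```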

One of you asked how multicast traffic is handled by switches. According to page 97:

Bridges and switches forward broadcast and multicast frames out all ports, unless configured to do otherwise. The forwarding of broadcast and multicast frames can result in performance problems in large, flat (switched or bridged) networks.

Cisco deals with multicast using CGMP and IGMP Snooping:

Multicast traffic becomes flooded because a switch usually learns MAC addresses by looking into the source address field of all the frames it receives. A multicast MAC address is never used as source address for a packet. Such addresses do not appear in the MAC address table, and the switch has no method for learning them.

The first solution to this issue is to configure static MAC addresses for each group and each client. This solution works well, however, it is neither scalable nor dynamic....

The second solution is to use CGMP, which is a Cisco proprietary protocol that runs between the multicast router and the switch. CGMP enables the Cisco multicast router to understand IGMP messages sent by hosts, and informs the switch about the information contained in the IGMP packet.

The last (and most efficient) solution is to use IGMP snooping. With IGMP snooping, the switch intercepts IGMP messages from the host itself and updates its MAC table accordingly. Advanced hardware is required to support IGMP snooping.


One of you asked about multicast MAC addresses. Cisco says:

Multicast IP addresses are Class D IP addresses. Therefore, all IP addresses from 224.0.0.0 to 239.255.255.255 are multicast IP addresses. They are also referred to as Group Destination Addresses (GDA).

For each GDA there is an associated MAC address. This MAC address is formed by 01-00-5e, followed by the last 23 bits of the GDA translated into hex, as shown below.

- 230.20.20.20 corresponds to MAC 01-00-5e-14-14-14.
- 224.10.10.10 corresponds to MAC 01-00-5e-0a-0a-0a.

Consequently, this is not a one-to-one mapping, but a one-to-many mapping, as shown below.

- 224.10.10.10 corresponds to MAC 01-00-5e-0a-0a-0a.
- 226.10.10.10 corresponds to MAC 01-00-5e-0a-0a-0a as well.
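That 23-bit mapping is easy to compute. A hedged Python sketch (the function name is mine):

```python
import ipaddress

def multicast_mac(group_ip: str) -> str:
    """Map a Class D IP to its Ethernet MAC: 01-00-5e followed by
    the low 23 bits of the address. The five high-order bits after
    the fixed multicast prefix are discarded, which is what makes
    the mapping one-to-many."""
    low23 = int(ipaddress.IPv4Address(group_ip)) & 0x7FFFFF
    return "01-00-5e-%02x-%02x-%02x" % (
        (low23 >> 16) & 0xFF, (low23 >> 8) & 0xFF, low23 & 0xFF)

print(multicast_mac("230.20.20.20"))  # 01-00-5e-14-14-14
print(multicast_mac("224.10.10.10"))  # 01-00-5e-0a-0a-0a
print(multicast_mac("226.10.10.10"))  # 01-00-5e-0a-0a-0a as well
```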


Someone asked about detecting ARP poisoning with Snort. The snort.conf includes the following:

# arpspoof
#----------------------------------------
# Experimental ARP detection code from Jeff Nathan, detects ARP attacks,
# unicast ARP requests, and specific ARP mapping monitoring. To make use of
# this preprocessor you must specify the IP and hardware address of hosts on
# the same layer 2 segment as you. Specify one host IP MAC combo per line.
# Also takes a "-unicast" option to turn on unicast ARP request detection.
# Arpspoof uses Generator ID 112 and uses the following SIDS for that GID:

# SID Event description
# ----- -------------------
# 1 Unicast ARP request
# 2 Etherframe ARP mismatch (src)
# 3 Etherframe ARP mismatch (dst)
# 4 ARP cache overwrite attack

#preprocessor arpspoof
#preprocessor arpspoof_detect_host: 192.168.40.1 f0:0f:00:f0:0f:00

I've never tried it, but I might now.

If you have any other questions, please post them as comments here. Thank you.

I still have a few open seats left for part 2 of the course on Saturday 9 Dec 06 and Sunday 10 Dec 06, which covers the topics addressed in this class outline. We will cover layers 4 through 7. The registration form is here. Part 2 is held at the Marriott Wardman Park Hotel as well, in the Harding Room. Students who are already registered will be hearing from me shortly. Basically you'll need Ethereal or Wireshark to decode the traces we'll examine.


Copyright 2006 Richard Bejtlich

Saturday, December 02, 2006

Two Prereviews

Two publishers were kind enough to send new books last week. I plan to read and review both early next year. The first is Apress' Beginning C, 4th Ed by Ivor Horton. What, learn C? I don't expect or plan to become any C wizard by reading this and a few other books. Rather, I'd like to be able to understand code I come across, or perhaps make small modifications to otherwise useful programs. For any original programming in 2007, I expect to use Python. Second is Syngress' FISMA Certification & Accreditation Handbook by Laura Taylor. Talk about moving from something useful (C) to something not (FISMA). Still, this seems to be the only book on the subject, and FISMA is always a big discussion item at my local beltway bandit ISSA meetings. I hope this book will let me better understand the FISMA racket and why it's a waste of money. Of course, the book will not use those terms, but I will report what I find when I review it early next year.

Notes for TCP/IP Weapons School Part 1 Students

This note is intended for students in days one and two of TCP/IP Weapons School on 3-4 December 2006 at USENIX LISA in Washington, DC.

These are the tools that will be discussed. Remember, this is a class on TCP/IP -- tools are not the primary focus. However, I needed something to generate interesting traffic.

The traces we will analyze are available at www.taosecurity.com/taosecurity_tws_v1_traces.zip. You will need to have Ethereal, Wireshark, or a similar protocol analyzer installed to review the traces. Tcpdump might be somewhat limited for this class but you can at least inspect packets with it.

There are still a few seats available for TCP/IP Weapons School Part 2, which covers a little more layer 3 material and then moves on to layers 4, 5, 6, and probably 7. I will post a summary of that class' contents soon. If you want to register for Part 2, please visit my training page for details, or just email me: training [at] taosecurity [dot] com. Thank you.