Thursday, April 27, 2006

Risk Mitigation

If you've been following the last few days of posts, you know I've been thinking about security at a more general level. I've been wondering how we can mitigate risk in a digital world where the following features are appearing in nearly every digital device.

Think about digital devices in your possession and see if you agree with this characterization of their development. Digital devices are increasingly:

  • Autonomous: This means they act on their own, often without user confirmation. They are self-updating (downloading patches, firmware) and self-configuring (think zeroconf in IPv6). Users could potentially alter this behavior, but probably not without breaking functionality.

  • Powerful: A cell phone is becoming as robust as a laptop. Almost any platform will be able to offer a shell to those who can solicit it. There is no way to prevent this development -- and would we really want to?

  • Ubiquitous: Embedded devices are everywhere. You cannot buy a car without one. I expect my next big home appliance to have network connectivity. Users can't do much about some of these developments.

  • Connected: Everything will be assigned an IPv4 (or, soon, an IPv6) address. Distance is seldom a problem. Every digital maniac is a few hops away.

  • Complex: I am scared by the thought of running Windows Mobile on my next phone. Can I avoid it? Probably not. How many lines of code are running on that mini-PC -- I mean "phone" -- I'll be using?

In my opinion, this digital world is increasingly resembling the analog one. In fact, those five attributes could describe people as easily as complex machines!

The key factor in this new world will not be static vulnerabilities, but dynamic threats. The number of opportunities for threats to play havoc will vastly dwarf the chances for defenders to address vulnerabilities.

Think about how we deal with security in a typical city. I call it the "local police model."

  • Police can never prevent all crimes, although they can try.

  • Police more often respond to crimes. They proceed to track and jail criminals.

  • By prosecuting criminals, the justice system removes threats.

  • No one spends time or money putting bars on windows or replacing door locks in the average suburban neighborhood.

  • Crime still happens, but society survives as long as the level of crime is acceptable.

Why did a police model arise? Back in the cave man days, we lived in tribes. If you didn't belong to my tribe, I could beat you back with my club. As societies evolved, communication and ties between tribes prevented this simple model from working. More sophisticated threats with ingenious attacks (e.g., white collar crime) took advantage of these social ties.

Guess what -- this is where we are now in the digital world. Once upon a time you might have been able to restrict access based on trusted IPs. Then you had to shut down ports that couldn't be shared. Now we do business with everyone, and I can't be sure whether the Microsoft SMB/CIFS traffic I'm exchanging with a business partner is normal or malicious when I use a standard access control device.

A threat-centric approach to security has served the analog world well enough. I think that is the only way to move forward as the digital world becomes as complex as the analog.

One more thought: The number of assets continues to rise. The number of vulnerabilities in those assets continues to rise. The number of threats continues to rise. The ability of security experts to apply countermeasures cannot keep pace with this world. Is it time for autonomous agents to work on behalf of "the good guys?" I am beginning to agree with Dave Aitel's idea of nematodes that act on behalf of human agents.

It is becoming increasingly difficult for humans to even understand the digital environment. The only real way to know exploitation is not possible is for exploitation to be tried and then found to fail. Nematode agents may roam the network constantly testing intrusion scenarios and reporting their progress. Perhaps next-generation detection devices will monitor nematode activity. When they see another agent that is not a registered nematode exploit a target, that will be the sign that an intrusion has occurred.
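The detection logic for that last idea could be sketched along these lines. This is purely illustrative: the event format, registry, and agent names are invented, and a real system would need to authenticate agents cryptographically rather than trust an identifier.

```python
# Hypothetical sketch: flag exploit activity by agents that are not
# registered nematodes. All names and the event format are invented.

REGISTERED_NEMATODES = {"nematode-001", "nematode-002"}

def classify_event(event):
    """Return 'authorized-test' for registered nematodes, 'intrusion' otherwise."""
    if event.get("action") != "exploit":
        return "benign"
    if event.get("agent_id") in REGISTERED_NEMATODES:
        return "authorized-test"
    return "intrusion"

events = [
    {"agent_id": "nematode-001", "action": "exploit", "target": "10.1.1.5"},
    {"agent_id": "unknown", "action": "exploit", "target": "10.1.1.6"},
]
print([classify_event(e) for e in events])  # ['authorized-test', 'intrusion']
```

The interesting design question is the registry itself: an intruder who can forge a nematode identity defeats the whole scheme, so registration would have to be as hard to spoof as the exploit is to launch.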

Analog Security is Threat-Centric

If you were to pass the dark alley in the image at left, I doubt you would want to enter it. You could imagine all sorts of nasty encounters that might deprive you of property, limb, or life. Yet, few people can imagine the sorts of danger they encounter when using a public PC terminal, or connecting to a wireless access point, or visiting a malicious Web site with a vulnerable browser.

This is the problem with envisaging risk that I discussed earlier this week. Furthermore, security in the analog world is very much threat-centric. If I'm walking near or in a dark alley and I see a shady character, I sense risk. I don't walk down the street checking myself for vulnerabilities while ignoring the threats watching me. ("Exposed neck? Could get hurt there. Bare hands? Might get burnt by acid." Etc.)

The digital security model seems like that of an unarmed combatant in a war zone. Survivability is determined solely by vulnerability exposure, the attractiveness of one's assets to a threat, and any countermeasures that might disrupt threats.

In the analog world, one can employ a variety of tactics to improve survivability. Avoiding risky areas is the easiest, but let's assume one has to enter dangerous locations. A potential victim could arm himself, either using a weapon or martial arts. He could travel in groups, hire a bodyguard, or enlist the police's aid.

The term "hack-back" crops up in the digital scenario. This is really not a useful approach, because hacking the system attacking you does absolutely nothing to address the real threat -- the criminal at the keyboard.

In the analog world, consider the consequences for "hacking back." If you shoot an assailant, you'll have to explain yourself to the police or potentially a court of law. You probably can't shoot someone for simply being on your property, but you can if they threaten or try to harm you.

On a related note, we need some means to estimate threat level in a systematic, repeatable manner. When I say "threat" I mean threat, not vulnerability. Something like a system of distributed honeypots with distinct configurations might be helpful. Time-to-exploit for a given patch set might be tracked. I know the Honeynet Project periodically issues reports on how long it takes to 0wn a box, but it might be neat to see this in a regular, formal manner.
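As a minimal sketch of what that measurement might look like: each honeypot in the distributed set records when it was deployed and when it was first exploited, and the system averages survival time per patch set. The data model and numbers below are invented for illustration.

```python
# Hypothetical sketch: track time-to-exploit per honeypot configuration.
# Times are hours since deployment; all data is invented.

def time_to_exploit(deploy_time, first_exploit_time):
    """Hours a honeypot survived before its first compromise."""
    return first_exploit_time - deploy_time

def summarize(honeypots):
    """Average survival time per patch set across distributed honeypots."""
    by_patch_set = {}
    for hp in honeypots:
        tte = time_to_exploit(hp["deployed"], hp["exploited"])
        by_patch_set.setdefault(hp["patch_set"], []).append(tte)
    return {ps: sum(v) / len(v) for ps, v in by_patch_set.items()}

honeypots = [
    {"patch_set": "unpatched", "deployed": 0, "exploited": 4},
    {"patch_set": "unpatched", "deployed": 0, "exploited": 8},
    {"patch_set": "current",   "deployed": 0, "exploited": 72},
]
print(summarize(honeypots))  # {'unpatched': 6.0, 'current': 72.0}
```

Tracked over time, a shrinking average for a given patch set would be exactly the sort of systematic, repeatable threat-level indicator I have in mind.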

Why Prevention Can Never Completely Replace Detection

So-called intrusion prevention systems (IPS) are all the rage. Since the 2003 Gartner report declaring intrusion detection systems (IDS) dead, the IPS has been seen as the "natural evolution" of IDS technology. If you can detect an attack, goes a popular line of reasoning, why can't (or shouldn't) you stop it? Here are a few thoughts on this issue.

People who make this argument assume that prevention is an activity with zero cost or downside. The reality is that the prevention action might just as easily stop legitimate traffic. Someone has to decide what level of interruption is acceptable. For many enterprises -- especially those where interruption equals lost revenue -- IPS is a non-starter. (Shoot, I've dealt with companies that tolerated known intrusions for years because they didn't want to "impact" the network!)

If you're not allowed to interrupt traffic, what is the remaining course of action? The answer is inspection, followed by manual analysis and response. If a human decides the problem is severe enough to warrant interruption, then a preventative measure is deployed.

In some places, prevention is too difficult or costly. I would like to know how one could use a network-based control mechanism to stop a host A on switch X from exploiting host B on switch X. Unless the switch itself enforces security controls, there is no way to prevent this activity. However, a sensor on switch X's SPAN port could detect and report this malicious activity.
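On many Cisco switches, for example, feeding such a sensor might be configured roughly as follows. Treat this as a sketch: the interface names are placeholders and the exact syntax varies by platform and IOS version.

```
! Hypothetical SPAN session: mirror traffic from two host ports to a
! sensor attached to port 24. Interface names are placeholders.
monitor session 1 source interface FastEthernet0/1 both
monitor session 1 source interface FastEthernet0/2 both
monitor session 1 destination interface FastEthernet0/24
```

The sensor on the destination port sees host A's traffic to host B even though that traffic never crosses a router or firewall.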

Note that I think we will see this sort of access control move into switches. It's another question whether anyone will activate these features.

I think traffic inspection is best used at boundaries between trusted systems. Enforcement systems make sense at boundaries between trusted and untrusted systems. Note that if you don't trust individual hosts inside your organization (for whatever reason), you should enforce control on a per-host basis within the access switch.

Thoughts on Patching

As I continue through my list of security notes, I thought I would share a few ideas here. I recorded these while seeing Ron Gula discuss vulnerability management at RMISC.

Many people recommend automated patching, at least for desktops. In the enterprise, some people believe patches should be tested prior to rollout. That sounds like automated patching must be disabled. I'm wondering if anyone has implemented delayed automated patching. In other words, automatic updates are enabled, but with a two or three day delay.

Those two or three days give the enterprise security group time to test the patch. If everything is ok, they let the automated patch proceed. If the patch breaks something critical, they instruct the desktops to not install the patch until further orders. I think this approach strikes a good balance since I would prefer to have automated patch installation be the default tactic, not manual installation.
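The decision logic is simple enough to sketch. The hold period, function names, and data model below are invented for illustration; a real deployment would hang this off whatever patch management system the enterprise already runs.

```python
# Hypothetical sketch of "delayed automated patching": a patch installs
# automatically after a hold period unless the security group blocks it.

HOLD_DAYS = 3  # illustrative testing window

def patch_action(days_since_release, blocked):
    """Decide what the desktop agent should do with a pending patch."""
    if blocked:
        return "hold"     # security group found a problem during testing
    if days_since_release >= HOLD_DAYS:
        return "install"  # hold period expired without objection
    return "wait"         # still inside the testing window

print(patch_action(1, blocked=False))  # wait
print(patch_action(3, blocked=False))  # install
print(patch_action(5, blocked=True))   # hold
```

The key property is that installation is the default: doing nothing results in a patched desktop, and the security group only has to act when a patch breaks something.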

Determining which systems are vulnerable suggests a continuum of assessment tactics.

  • At the most unobtrusive level we have a "paper review" of an inventory of systems and their reported patch levels.

  • Next comes passive assessment of traffic to and from clients and servers.

  • Traditional vulnerability scanning, without logging in to the target, is the next step up in obtrusiveness.

  • Logging in to a host with credentials is another option.

  • Installing an agent on the host is a medium-impact approach.

  • Exploiting the host is the final way to definitively see if a host is vulnerable.
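To make the passive end of that continuum concrete, here is a minimal sketch: infer whether a host looks vulnerable from a service banner observed in traffic, without ever touching the host. The banner strings, product name, and version threshold are invented; this is not any particular scanner's logic.

```python
# Hypothetical sketch of passive assessment: parse a server banner seen
# on the wire and compare its version against a known-fixed release.

def parse_version(banner):
    """Split 'Server: ExampleHTTPd/2.0.54' into a product and version tuple."""
    product, _, version = banner.split()[-1].partition("/")
    return product, tuple(int(part) for part in version.split("."))

def looks_vulnerable(banner, fixed_in=(2, 0, 55)):
    """Flag versions older than the (invented) release that fixed a flaw."""
    _, version = parse_version(banner)
    return version < fixed_in

print(looks_vulnerable("Server: ExampleHTTPd/2.0.54"))  # True
print(looks_vulnerable("Server: ExampleHTTPd/2.2.0"))   # False
```

Of course this only observes what hosts volunteer, which is exactly why the continuum continues through credentialed logins, agents, and finally exploitation.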

On a related note, Ron mentioned that the costs of demonstrating compliance far exceed those of maintaining compliance. This is sad. Ron also noted he believes auditors should work for the CFO and not the CIO. I agree.

Wednesday, April 26, 2006

Return on Security Investment

Just today I mentioned that there is no such thing as return on security investment (ROSI). I was saying this two years ago. As I was reviewing my notes, I remembered one true case of ROSI: the film Road House. If you've never seen it, you're in for a treat. It's amazing that this masterpiece is only separated by four years from Swayze's other classic, Red Dawn. (Best quote from Red Dawn: A member of an elite paramilitary organization: "Eagle Scouts.")

In Road House, Swayze plays a "cooler" -- a bouncer who cleans up unruly bars. He's hired to remove the riff raff from the "Double Deuce," a bar so rough the band is protected by a chicken wire fence! I personally would have hired Jackie Chan, but that's a story for another day. Swayze's character indeed fights his way through a variety of local toughs, in the process allowing classier and richer patrons to frequent the Double Deuce. The owner clearly sees a ROSI; the money he pays Swayze is certainly less than the amount he now receives from a more upscale establishment.

Is there a lesson to be drawn for the digital security world? Notice the focus on threats. The Double Deuce owner didn't hire Swayze to build higher walls or cover windows with iron bars. Instead of addressing vulnerabilities, he sought threat removal. This is not a process the average company can implement; usually law enforcement and intelligence agencies have this power.

I have heard the term "friendly force presence" being used within certain military circles. This seems to refer to keeping assessment teams on the lookout for indications of the adversary on our networks. This certainly works in the physical world, but it may be difficult to translate into the virtual one.

One example: when I visited Ottawa recently, I stopped at a McDonald's to get a quick meal. The place was teeming with teenagers, most of whom were just lounging around. I considered leaving because the place was so full. I saw a manager appear a few minutes after I arrived, and with him came a uniformed police officer. The officer had a word with one or two of the larger teens and suddenly the restaurant started to empty. Within five minutes hardly anyone was left, and no one under the age of 18. It was amazing.

Two Good IEEE Security and Privacy Articles

One of my favorite aspects of attending USENIX conferences is receiving free copies of magazines like IEEE Security and Privacy. The March/April 2005 issue (ok, I'm way behind when I use the freebie method) features two articles that might be interesting to security folks. First, if you want a good summary of trusted computing, read Protecting Client Privacy with Trusted Computing at the Server (.pdf). To get insights on the differences between computer science and computer engineering, try Turing is from Mars, Shannon is from Venus (.pdf). Since Dartmouth faculty wrote both articles, they're published free through Dartmouth's Web site.

GAO Hammers Common Criteria

I've written about Common Criteria before. If you also think CC is a waste of money, read GAO: Common Criteria Is Not Common Enough by Michael Arnone. It summarizes and comments upon a report by the Government Accountability Office titled INFORMATION ASSURANCE: National Partnership Offers Benefits, but Faces Considerable Challenges. Mr. Arnone writes:

GAO also criticized the National Information Assurance Partnership (NIAP) for not providing metrics or evidence that the Common Criteria actually improves product security. In addition, the Common Criteria process takes so long to complete that agencies often find that the products they need are not on the list of certified offerings or that only older versions have been accredited, GAO’s report states...

Pescatore said GAO’s call for increased education and awareness of NIAP’s function is overblown. Large vendors already know the process well and can afford millions of dollars for tailor-made product evaluations, he said.

Any education efforts should target smaller vendors — with $10 million to $50 million a year in annual revenue — that don’t know about the NIAP process, don’t know how expensive it is and have trouble affording it, Pescatore said. NIAP must do more than educate, he added. It must provide subsidies or reduce prices so smaller vendors can participate, he said.

It sounds like Common Criteria is becoming nothing but a hurdle to keep smaller companies from providing products to government agencies.

Forensics Warnings from CIO Magazine

The April 2006 issue of CIO Magazine features an article called CSI for the Enterprise?. It addresses the rise of electronic data discovery (eDiscovery in some quarters) tools. For a management magazine, the article makes several useful points:

Beware the Forensics Label

Many salespeople attach the label "forensics" to their security and compliance analysis tools, and that can be very misleading. In law enforcement circles, "forensics" means a well-defined set of discovery and investigative processes that hold up in court for civil or criminal proceedings. An enterprise that relies on these tools' records or analysis in, for example, a wrongful termination suit, is probably in for an unpleasant surprise. "It may not hold up in court," says Schwalm, a former Secret Service agent. "Very few vendors have an idea of what the requirements [are for proof, from a legal perspective]. They're really providing just a paper trail. You should challenge what the vendor means by ‘forensics capability,'" he adds.

One gotcha of using EDD tools for legal purposes is proving the inviolability of the data. Tools that keep or aggregate event logs may not provide access control that lets the enterprise prove that the underlying data is unaltered and accurate.

This issue is particularly critical because most vendors pitch their EDD tools as a way of detecting internal threats. Yet an insider is in the best position to access and alter data to cover his tracks or deflect blame to someone else, making truly secure access control and data management policies a must to even consider relying on EDD tools in a legal case. To thwart insider manipulations, critical functions such as setting up new vendors or changing payment destinations should require multiple levels of approval. "One person shouldn't be minding the whole store," says 2Checkout's Denman.

A related concern is being able to go back to the original raw data, since most EDD tools alter the original data to put it into a searchable database and to make formats from different types of monitoring appliances consistent. Such regularization is necessary to analyze the records, but to be legally effective, there must be a defensible way to show that it didn't distort the original data, says Gartner's Litan.
(emphasis added)

Amen. Since we're talking about centralized logs, has anyone tried Splunk? This "Google for system logs" seems like a really neat idea.

For a vendor's take on the importance of electronic data discovery tools, see Why the eDiscovery Revolution is Important to InfoSec (.pdf, ISSA membership required), a good article by John Patzakis of Guidance Software. It's basically a cost-avoidance argument, like everything else should be with security. (There is no return on security investment.) John states:

In 2006, companies will spend $2 billion on eDiscovery services, and that figure is expected to climb to $3 billion in 2007... corporate information security can play a very key role in solving this critical problem by dramatically reducing costs and improving compliance... According to standard price lists from top eDiscovery providers, a company can expect to pay $11,000 to $15,000 for the processing of a single hard drive...

A key reason for these high costs involves the traditional role of outside counsels who represent the company and typically oversee and manage the eDiscovery process on a per-case basis. These law firms habitually rely on their own consultants to handle the eDiscovery needs of the case at hand, and both the law firm and their consultants typically approach the issue as a case-specific litigation support project. Thus, the focus is on addressing the immediate case, and not on solving the end client’s long-term problems by establishing a systematic methodology...

When a company transitions from outsourced eDiscovery to establishing a largely in-house process, the cost savings are dramatic. Many major organizations are now saving tens of millions of dollars in “hard” out-of-pocket costs annually when they turn to their internal resources and enterprise class computer investigation technology to collect, search and process computer data for eDiscovery.

In other words, buy Encase and do it yourself! Learn how to use Encase at the 2006 Computer and Enterprise Investigations Conference in Lake Las Vegas, NV. I am speaking there on Thursday, 4 May 2006 from 1400-1530 on Network Forensics.

Disaster Stories Help Envisage Risks

The April 2006 issue of Information Security Magazine features an article titled Security Survivor All-Stars. It profiles people at five locations -- LexisNexis, U Cal-Berkeley, ChoicePoint, CardSystems, and Georgia Technology Authority -- who suffered recent and well-publicized intrusions. My guess is that InfoSecMag managed to arrange these interviews by putting a "happy face spin" on the story: "We know your organization was a security mess, but let's look on the bright side and call you an all-star!" Although the article is light on details, I recommend reading these disaster stories. They help make security incidents more real to management.

ChoicePoint is one of the companies profiled. That story really bothers me. To know why, read The Five Most Shocking Things About the ChoicePoint Debacle and The Never-Ending ChoicePoint Story by Sarah D. Scalet. I noticed the InfoSecMag did not interview ChoicePoint chairman and CEO Derek V. Smith, author of The Risk Revolution: Threats Facing America & Technology’s Promise for a Safer Tomorrow and A Survival Guide in the Information Age (both published prior to the ChoicePoint debacle).

InfoSecMag also avoided interviewing former ChoicePoint CISO Rich Baich, author of Winning as a CISO. No, I am not making this up. This is the same Mr. Baich about whom Ms. Scalet wrote the following. Baich is speaking:

"Look, I'm the chief information security officer. Fraud doesn't relate to me." He indicated that he would be doing the CISO community a service by explaining to the media why fraud was not an information security issue. (The company later denied his request to grant the interview.)

The feds, however, are acting as if it's an information security issue. ChoicePoint has indicated that the Federal Trade Commission is "conducting an inquiry into our compliance with federal laws governing consumer information security and related issues."

In this interview with TechTarget, Baich says:

It's created a media frenzy; this has been mislabeled a hack and a security breach. That's such a negative impression that suggests we failed to provide adequate protection. Fraud happens every day. Hacks don't.

Wow, this guy is out of touch. Instead of having difficulty finding work, now he's on the speaking circuit as a Managing Director with PriceWaterhouseCoopers. And why is he still a CISSP? This is an excellent example of problems with the CISSP -- no one loses their certification.

For a stark contrast, peruse the Maryland Real Estate Commission - Disciplinary Actions site. You can read about the real estate workers who lost their licenses for malpractice. It is sad to think that information security is treated less seriously than selling real estate.

By the way -- everyone who wants an overview of risk management frameworks should read Alphabet Soup by Shon Harris in the same InfoSecMag issue.

Risk and Metrics

I ran across some thought-provoking articles in the April 2006 CIO Magazine. The editor's introduction summarizes a major problem with calculating IT spending:

As sophisticated as the technology and its countless uses have become, all too often the benchmark used to determine the proper level of an enterprise’s IT spending is alarmingly simplistic: the percentage of overall revenue for which IT accounts...

Benchmarking IT spending as a percentage of revenue is a truly useless metric. Unfortunately, according to Koch [mentioned next], it remains the most popular way to evaluate IT spending, and also unfortunately (as most of you already know), it doesn’t say anything about how effective or productive your spending is. Even more unfortunately, benchmarking by percentage of revenue casts IT in the role of a cost to be controlled, defining success simply as lowering the percentage over time.

This is a really amazing insight. How many of you see progress in security management through the eyes of reducing spending to zero? The "Koch" mention refers to the article The Metrics Trap...And How to Avoid It by Christopher Koch. As you might guess, there is no simple way to solve this problem. Koch's article includes gems like the following, though:

Joe Drouin... found that [his] company was spending less on IT as a percent of overall revenue than the industry average, which was about 1.5 to 2 percent.

Not one to look a gift horse in the mouth, Drouin played the metric for everything it was worth, highlighting it in every PowerPoint presentation he could during his first year as CIO...

At one point, the CEO, who believed that inexpensive IT was good IT, joked that he expected to see Drouin and his staff outfitted with T-shirts that had the percentage stamped across their chests in big, block numbers...

In this zero sum game, success is defined simply as lowering the percentage over time. "It's not clear how low it should go," says Drouin. "Joking with the CEO, I said, 'In your mind it should be zero.' We had a good laugh, but at what point do we decide it's at the right level and you don't drive it down further?"

That CEO's attitude disgusts me. Would you expect him to do the same for the human resources department? It doesn't bring in any customer revenue. How about finance and accounting? Now that creative bookkeeping can put the CEO in jail, that isn't a department that brings in customer revenue either. Yet neither "cost center" is expected to reduce its percentage of overall revenue to zero.

At least as far as security goes, the inability to see the value of security spending relates to management's inability to perceive the risk of being exposed and vulnerable. I came across this insight in a recent issue of the Economist, featuring the article The New Paternalism (subscription probably required):

This acute sensitivity to losses is not the only bias behaviouralists have discovered. People also have great difficulty understanding risks. The weight a person gives to a scenario—flood, fire, winning the lottery—should depend on its likelihood. In fact, it depends on how easily it can be envisaged. People will pay more for air-travel insurance against "terrorist acts" than against death from "all possible causes."

Canny governments can work with the grain of this psychology. The grisly campaigns against smoking aim to put the dangers firmly in people's minds; to turn a statistical risk into a visceral image. They have been effective, perhaps too effective. There is some evidence that people now overestimate the risks of smoking.
(emphasis added)

In other words, management cannot imagine the destruction caused by security incidents. It is impossible for them to envisage an incident causing their company to lose market share, intellectual property, or its ability to provide services. As a result, they base their decisions on laws, regulations, and what their peers are doing.

This explains the resources poured into worm defense a few years ago. When management's own computers are affected, when they see worm reporting on CNN, when a worm is the discussion over lunch -- they start to take the problem seriously. When a stealthy intruder has lodged himself inside a company, management has no clue how to handle the situation. In fact, most management has no clue how to handle existing rogue employees now. They turn to platitudes like "we trust our employees" because they can't fathom why someone would turn against their beloved company. After all, management has been treated really well!

I don't think spending-related metrics are of much use. Performance-related metrics are the only ones which I think have some value. Drilling network security operations teams (preventers, intrusion detectors, incident responders, etc.) to see if they stop, identify, and remove controlled threat simulators (vulnerability assessors, pen testers and red teams) is the best way to see if your money is being well spent.
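Such a drill program could report something as simple as a detection rate and a mean time-to-detect. The sketch below uses an invented record format and invented scenarios, purely to show the shape of the metric.

```python
# Hypothetical sketch of a performance metric for a security operations
# team: what fraction of controlled threat simulations (red team runs)
# were detected, and how quickly? Drill records are invented.

def score_drills(drills):
    """Return (detection rate, mean minutes-to-detect over detected runs)."""
    detected = [d for d in drills if d["detected"]]
    rate = len(detected) / len(drills)
    mean_ttd = sum(d["minutes_to_detect"] for d in detected) / len(detected)
    return rate, mean_ttd

drills = [
    {"scenario": "phishing",  "detected": True,  "minutes_to_detect": 30},
    {"scenario": "ssh-brute", "detected": True,  "minutes_to_detect": 10},
    {"scenario": "web-sqli",  "detected": False, "minutes_to_detect": None},
    {"scenario": "insider",   "detected": True,  "minutes_to_detect": 20},
]
print(score_drills(drills))  # (0.75, 20.0)
```

Unlike a spending percentage, these numbers measure what the money actually buys: the ability to stop, identify, and remove threats.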

Tuesday, April 25, 2006

Insights from Dr. Dobbs

I've been flying a fair amount recently, so that means I've been reading various articles and the like. I want to make note of those I found interesting.

The March 2006 issue of Dr. Dobb's Journal featured a cool article on Performance Analysis and Multicore Processors. I found the first section the most helpful, since it differentiates between multithreading and hyperthreading. I remember when the FreeBSD development team was criticized for devoting so many resources to SMP. Now it seems SMP will be everywhere.

In the same issue Ed Nisley writes about Crash Handling. I call out this article for this quote:

Mechanical and civil engineers must consider how their projects can fail, right at the start of the design process, by calculating the stress applied to each component and the strength required to withstand it. Electrical engineers apply similar calculations to their circuits by considering voltage, current, and thermal ratings. In each case, engineers determine the project's expected behavior based on well-known properties of the bulk material and the finished components.

Software design isn't engineering simply because it does not deal with physical materials that have known properties. Software components don't exhibit a graded response to increasing stress: A single, trivial, data-dependent error can cause complete, instantaneous failure with no prior warning. Programs simply don't have a well-defined safe operating area.

If that's true, we're finished. I'm afraid it may be, but also in the same DDJ issue we read these comments on Project Management by Gregory V. Wilson:

[W]hile Stepanek [Bejtlich: author of a book reviewed by Wilson] refers to many of the classic works in software engineering, he seems to have missed most of what's appeared in the primary literature in the last 10 years. New journals such as Empirical Software Engineering have both reflected, and encouraged, new studies of what actually works and doesn't—studies that are methodologically sounder than many of their predecessors—and I think they ought to be required reading for anyone writing about software engineering today. I share Stepanek's sense of frustration, but think that software engineering isn't as special as it is often convenient for software engineers to believe.

I agree with that statement. I find too many people use the term "art" to make themselves feel special. If they took the time to document their approaches (analysis, administration, coding, pen testing, reverse engineering, etc.) they would find their processes repeatable and far less "special." Science is boring; art is cool. Science can also be automated, which could mean replacing humans. While I always believe humans are the best operators, hiding behind "art" is cowardly.

Ethereal 1.0 Looms

Thanks to Anthony Spina for pointing out that Ethereal 0.99 was released yesterday. Jumping from 0.10.14 in late December to 0.99 now indicates to me that 1.0 will finally appear any day now.

The release notes mention a new tool -- dumpcap. Dumpcap is a pure packet capture application, unlike Tcpdump or Tethereal. Those two programs are also protocol analyzers, and at least in the case of Tethereal that means larger memory footprints. I tried the Windows version of Dumpcap.

First, let's see the options Dumpcap offers, and start it.
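A representative invocation, mirroring the Tethereal syntax I use below, looks something like this. The flags follow Dumpcap's documented style (-i interface, -c packet count, -w output file, -b ring buffer), but treat the exact ring buffer values as illustrative rather than a recommendation:

```
dumpcap -i 3 -c 10 -w d:\tmp\dumpcap1.lpc
dumpcap -i 3 -b filesize:1024 -b files:2 -w d:\tmp\ring.lpc
```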

Notice that Dumpcap is a simple capture application, but it also offers the ring buffer support I love in Tethereal. Nice work.

Here is Dumpcap's memory allocation on Windows during the preceding capture.

Here are Tethereal's options.

I start Tethereal using syntax similar to Dumpcap. Note Tethereal supports disabling name resolution with -n, while Dumpcap offers no name resolution options.

tethereal -n -i 3 -c 10 -w d:\tmp\tethereal1.lpc

Here is Tethereal's memory allocation on Windows during the preceding capture.

As you can see, Tethereal's memory footprint is five times that of Dumpcap.

I look forward to trying Dumpcap on FreeBSD.

Monday, April 24, 2006

ENIRA Partners with Lancope

I've wanted to say something about ENIRA for several months now, but I've been under a non-disclosure agreement. This morning, however, I noticed this press release which quotes me.

What's the fuss? ENIRA is a nearby company (in northern Virginia) that sells a Network Response System. It's essentially an incident containment appliance that isolates hosts when directed to do so. It's neither an IDS nor firewall -- layer 3, 4, 7 (IPS), or otherwise. ENIRA learns your network topology by accessing infrastructure devices (switches, routers, firewalls, etc.) and implements a containment policy when told to isolate a host or segment.

The isolation mechanism makes the best possible choices, based on any policies and restrictions you have provided. It keeps track of its actions and acts like a "network engineer in a box." I think this is a great network-centric incident response product. Lancope is going to use it to implement short-term incident containment when StealthWatch identifies suspicious or malicious activity.

Saturday, April 22, 2006

Three New Pre-Reviews

Several publishers were kind enough to send me review copies of three books last week. The first is Securing Storage: A Practical Guide to SAN and NAS Security by Himanshu Dwivedi. I have very little practical experience with SAN and NAS, and less with security for those technologies. I hope this book can get me up to speed on those topics.

The second book is Practical VoIP Security by Thomas Porter. VoIP is being deployed everywhere, and I doubt security is being taken as a serious consideration. In many cases, VoIP traffic is being carried on the same network that transports data. I hope this book will examine these issues and offer real strategies for secure VoIP operation.

The third book is PGP & GPG by Michael Lucas. Besides being a BSD expert, Michael is an amazing author. This is his fourth book. I expect this title to provide an accessible discussion of email encryption.

You can keep tabs on my reading schedule through my reading page. I track new books on my Wish List. Finally, you can expect to see books I read appear in my reviews.

Friday, April 21, 2006

Future Public Training Dates

Most of my training is private. I wanted to let you know of a few public one-day or more classes I will be providing in the coming months. I will teach a one day course on Network Security Monitoring with Open Source Tools at the USENIX 2006 Annual Technical Conference in Boston, MA on Friday, 2 June 2006. This is the course to attend if you want to learn the essential components of network security monitoring. We will use tools on my Sguil VM in this class.

I am happy to report that USENIX accepted a proposal for a new class as well. I will teach a brand new, two day course called TCP/IP Weapons School at USENIX Security 2006 in Vancouver, BC on 31 July and 1 August 2006. Are you a junior security analyst or an administrator who wants to learn more about TCP/IP? Are you afraid to be bored in routine TCP/IP classes? TCP/IP Weapons School is the class you need to take!

USENIX LISA will take place 3-8 December 2006 in Washington, DC. I plan to propose either or both of the following new classes for that conference: (1) Enterprise Network Instrumentation and (2) Detecting Intrusions with Snort and Sguil. The first class will be one day, and will cover issues related to accessing network traffic for security monitoring and prevention purposes. The second class will also be one day, and will describe how to install, operate, and optimize a Sguil-based incident detection system. I may be asked to present other material; we'll see.

I will be unable to travel during the last four months of the year. (Luckily USENIX LISA will be in my back yard.) If you are considering approaching me to teach a class for your organization, please contact me as soon as possible. My schedule is rapidly filling, as are the seats in my only public Network Security Operations class this year -- 13-16 June 2006 in Fairfax, Virginia. Discounted registration ends soon.

Check out my events page for details on other teaching and speaking engagements.

Tuesday, April 18, 2006

Best Comment of the Year

If you don't read the comments for this blog, you missed the best response of the year, attached to my earlier McAfee story. T. Arthur points out the irony of a Hacking Exposed author pointing the finger at rootkit.com. Apparently Hacking Exposed is "the best selling computer security book ever, with more than 500,000 copies sold."

Does that mean Stu and friends created half a million more threats? Are they responsible for all the script kiddies running attacks they learned about in HE? If you follow McAfee's logic, the answer is yes. If you follow mine, the answer is no.

Dealing With Sguil Partition Issue

I operate several Sguil sensors in production environments for clients. At one location I have a single box deployment where the Sguil sensor, server, and database occupy a single FreeBSD platform. This wasn't the original configuration, but I am making do with what I was given.

Here is the current df -h output.
# df -h
Filesystem Size Used Avail Capacity Mounted on
/dev/aacd0s2a 989M 76M 834M 8% /
devfs 1.0K 1.0K 0B 100% /dev
/dev/aacd0s2f 989M 106K 910M 0% /home
/dev/aacd0s2h 436G 363G 38G 90% /nsm
/dev/aacd0s2e 989M 562M 348M 62% /tmp
/dev/aacd0s2d 5.9G 986M 4.4G 18% /usr
/dev/aacd0s2g 4.8G 3.8G 639M 86% /var

As you can see, /var is approaching the 90% mark. /nsm is already there, but Sguil's script rotates the full content files stored there. Notice I keep all /nsm data in its own partition, to avoid catastrophe if a runaway program were to begin filling the drive. I've found myself on many Red Hat boxes that have only a / (root) and swap partition, which is an invitation to disaster.
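
Watching partitions creep toward full invites a little automation: scan df output and warn when any filesystem crosses a capacity threshold. This is a minimal sketch; check_capacity is my own name and the 90 percent default is an assumption, and note that pseudo-filesystems like devfs will also trip it.

```shell
# check_capacity: read df(1)-style output on stdin and warn about
# filesystems at or above a percentage threshold (default 90).
check_capacity() {
    awk -v t="${1:-90}" '
    NR > 1 {
        cap = $5                       # Capacity column, e.g. "86%"
        sub(/%$/, "", cap)
        if (cap + 0 >= t)
            printf "WARNING: %s at %s%% (%s)\n", $1, cap, $6
    }'
}

# Example: df -h | check_capacity 85
```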

Because everything runs on this system, it stores the sguildb MySQL database in the /var partition. You'll see /var is at 86%. What's taking up all the space?

/var/db/mysql# du -ch .
214K ./mysql
2.0K ./test
3.7G ./sguildb
3.8G .
3.8G total

Let's see the 20 biggest files in the sguildb directory.

/var/db/mysql/sguildb# ls -alhS | head -n 20
total 3929694
-rw-rw---- 1 mysql wheel 77M Apr 4 19:17 sancp_scn-sensor-colo_20060324.MYI
-rw-rw---- 1 mysql wheel 69M Apr 4 19:17 sancp_scn-sensor-colo_20060323.MYI
-rw-rw---- 1 mysql wheel 63M Apr 4 19:17 sancp_scn-sensor-colo_20060330.MYI
-rw-rw---- 1 mysql wheel 62M Mar 9 14:00 data_scn-sensor-colo_20060309.MYD
-rw-rw---- 1 mysql wheel 61M Apr 16 22:59 sancp_scn-sensor-colo_20060322.MYI
-rw-rw---- 1 mysql wheel 59M Apr 4 19:17 sancp_scn-sensor-colo_20060331.MYI
-rw-rw---- 1 mysql wheel 59M Mar 22 19:00 sancp_scn-sensor-colo_20060314.MYI
-rw-rw---- 1 mysql wheel 58M Apr 16 19:57 sancp_scn-sensor-colo_20060404.MYI
-rw-rw---- 1 mysql wheel 58M Mar 12 07:02 data_scn-sensor-colo_20060310.MYD
-rw-rw---- 1 mysql wheel 57M Mar 22 19:00 sancp_scn-sensor-colo_20060309.MYI
-rw-rw---- 1 mysql wheel 57M Apr 18 18:53 sancp_scn-sensor-colo_20060412.MYI
-rw-rw---- 1 mysql wheel 57M Apr 18 18:53 sancp_scn-sensor-colo_20060410.MYI
-rw-rw---- 1 mysql wheel 57M Apr 4 19:17 sancp_scn-sensor-colo_20060403.MYI
-rw-rw---- 1 mysql wheel 56M Mar 22 19:00 sancp_scn-sensor-colo_20060317.MYI
-rw-rw---- 1 mysql wheel 56M Apr 4 19:17 sancp_scn-sensor-colo_20060328.MYI
-rw-rw---- 1 mysql wheel 56M Apr 18 18:53 sancp_scn-sensor-colo_20060411.MYI
-rw-rw---- 1 mysql wheel 55M Apr 4 19:17 sancp_scn-sensor-colo_20060329.MYI
-rw-rw---- 1 mysql wheel 55M Apr 4 19:17 sancp_scn-sensor-colo_20060327.MYI
-rw-rw---- 1 mysql wheel 55M Apr 18 18:53 sancp_scn-sensor-colo_20060407.MYI

The sancp tables hold session data.

# mysql -u sguil -p sguildb -e "describe sancp"
Enter password:
| Field | Type | Null | Key | Default | Extra |
| sid | int(10) unsigned | NO | MUL | | |
| sancpid | bigint(20) unsigned | NO | | | |
| start_time | datetime | NO | MUL | | |
| end_time | datetime | NO | | | |
| duration | int(10) unsigned | NO | | | |
| ip_proto | tinyint(3) unsigned | NO | | | |
| src_ip | int(10) unsigned | YES | MUL | NULL | |
| src_port | smallint(5) unsigned | YES | MUL | NULL | |
| dst_ip | int(10) unsigned | YES | MUL | NULL | |
| dst_port | smallint(5) unsigned | YES | MUL | NULL | |
| src_pkts | int(10) unsigned | NO | | | |
| src_bytes | int(10) unsigned | NO | | | |
| dst_pkts | int(10) unsigned | NO | | | |
| dst_bytes | int(10) unsigned | NO | | | |
| src_flags | tinyint(3) unsigned | NO | | | |
| dst_flags | tinyint(3) unsigned | NO | | | |

The data tables contain packet payloads.

# mysql -u sguil -p sguildb -e "describe data"
Enter password:
| Field | Type | Null | Key | Default | Extra |
| sid | int(10) unsigned | NO | MUL | | |
| cid | int(10) unsigned | NO | | | |
| data_payload | text | YES | | NULL | |

At this point it looks like I should back up some of the session data elsewhere, as it is taking up the most space.

First I make sure sensor_agent.tcl, sguild, and Barnyard are not running. I leave Snort and SANCP running, so the sensor continues to collect alert data, session data, and full content data.

Next I shut down MySQL.

# /usr/local/etc/rc.d/mysql-server stop
Stopping mysql.
Waiting for PIDS: 76074, 76074.

I create a directory in the partition with the most free space that isn't going to be used any time soon.

# mkdir /usr/db_backups

Now I see how much space the SANCP tables occupy.

/var/db/mysql/sguildb# du -ch *sancp*
3.3G total

Yes, they are the culprit. I have files reaching back to 8 March. Let's see how much space I can reclaim if I move all of March's entries.

/var/db/mysql/sguildb# du -ch *sancp*200603*
1.9G total

That will work fine. Now to tar up the files elsewhere.

/var/db/mysql/sguildb# tar -czvf /usr/db_backups/sancp_200603.tar.gz *sancp*200603*
a sancp_scn-sensor-colo_20060308.MYD
a sancp_scn-sensor-colo_20060308.MYI
a sancp_scn-sensor-colo_20060308.frm
a sancp_scn-sensor-colo_20060331.MYD
a sancp_scn-sensor-colo_20060331.MYI
a sancp_scn-sensor-colo_20060331.frm

/var/db/mysql/sguildb# ls -alh /usr/db_backups/
total 434788
drwxr-xr-x 2 root wheel 512B Apr 18 19:16 .
drwxr-xr-x 20 root wheel 512B Apr 18 19:11 ..
-rw-r--r-- 1 root wheel 424M Apr 18 19:20 sancp_200603.tar.gz

Now to delete the files, along with the sancp.MRG and sancp.frm files. They will be rebuilt automatically.

/var/db/mysql/sguildb# rm *sancp*200603*
/var/db/mysql/sguildb# rm sancp.MRG
/var/db/mysql/sguildb# rm sancp.frm

# df -h
Filesystem Size Used Avail Capacity Mounted on
/dev/aacd0s2a 989M 76M 834M 8% /
devfs 1.0K 1.0K 0B 100% /dev
/dev/aacd0s2f 989M 106K 910M 0% /home
/dev/aacd0s2h 436G 363G 38G 91% /nsm
/dev/aacd0s2e 989M 562M 348M 62% /tmp
/dev/aacd0s2d 5.9G 1.4G 4.0G 26% /usr
/dev/aacd0s2g 4.8G 1.9G 2.5G 43% /var

Great -- /var has plenty of room to grow now. However, I don't have ready access to session data from March. If I need it, though, I can restore it.
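
Since I'll face this again next month, the archive procedure above can be captured in a small script. This is a sketch under my own assumptions; the sancp_archive name and argument layout are illustrative, and it must only run while mysqld is stopped, as described above.

```shell
# sancp_archive: tar up one month of SANCP MyISAM files from a MySQL
# data directory, then remove them along with the merge-table
# definition files, which MySQL rebuilds automatically.
# Run only while mysqld is stopped.
# Usage: sancp_archive YYYYMM datadir backupdir
sancp_archive() {
    month=$1; datadir=$2; backup=$3
    mkdir -p "$backup"
    ( cd "$datadir" &&
      tar -czf "$backup/sancp_$month.tar.gz" ./*sancp*"$month"* &&
      rm -f ./*sancp*"$month"* sancp.MRG sancp.frm )
}

# Example (with mysqld stopped):
#   sancp_archive 200603 /var/db/mysql/sguildb /usr/db_backups
```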

Now restart MySQL, then the Sguil components.

# /usr/local/etc/rc.d/mysql-server start
Starting mysql.
# su - sguil
sguil@scn-sensor-colo$ ./
pid(77008) Loading access list: ./sguild.access
pid(77008) Sensor access list set to ALLOW ANY.
pid(77008) Client access list set to ALLOW ANY.
pid(77008) Email Configuration:
pid(77008) Config file: ./
pid(77008) Enabled: No
sguil@scn-sensor-colo$ ./
sguil@scn-sensor-colo$ ./
Barnyard Version 0.2.0 (Build 32)

A quick log in to Sguil shows that everything is working.

How Could I Have Missed This

It took this Slashdot thread to connect me with one of the greatest pieces of music produced in this century:

Symantec Revolution

If you believe that, you deserve to listen to all 3:10 of it.

This is right up there with the Ballmer videos, except there's only audio.

Update: It gets better. Here's Check Point's anthem. I like the Symantec one better.

McAfee Points Its Finger in the Wrong Direction Again

I just read Does Open Source Encourage Rootkits? and the associated McAfee report. In the article we have this quote:

Rootkits are becoming more prevalent and difficult to detect, and security vendor McAfee says the blame falls squarely on the open source community.

In its "Rootkits" report being published today, McAfee says the number of rootkits it has collected as malware samples has jumped ninefold this quarter compared with the same quarter a year ago. Almost all the rootkits McAfee has identified are intended to hide other code (such as spyware or bots) or conceal processes running in Windows systems.

"The predominant reason for the growth in use of stealthy code is because of sites like rootkit.com," says Stuart McClure, senior vice president of global threats at McAfee.

Let's start debunking this argument with the easiest parts of this quote. First, is Stuart McClure in charge of parties with the capabilities and intentions to exploit a target (i.e., threats)? Probably not. SVP of Global Threats is a weird title, reminiscent of other problems McAfee/Foundstone has with defining threats properly.

Second, there's nothing new about Windows rootkits. I referenced this SecurityFocus article three years ago. The problem is McAfee is late to the game.

Third, the main reason McAfee has any shot at detecting the latest rootkits is that they can look at the code published at rootkit.com. Here's what is happening at McAfee AVERT:

  1. Rootkits are deployed, based on code not publicly available. They are tough to detect. AVERT doesn't see them.

  2. Rootkits like NT Rootkit, Hacker Defender, and FU are published at rootkit.com.

  3. AVERT looks at these rootkits, gets clued in, and starts looking for them elsewhere.

  4. AVERT publishes a report saying it sees this code everywhere and blames rootkit.com and "open source" for the world's problems.

For shame. Let's face the truth -- for years the underground has been using techniques revealed in code at rootkit.com. I saw rootkits on Solaris eight years ago that are better than almost everything published today. Sites like rootkit.com have helped defenders because they give us a clue as to what the bad guys are already doing. Rootkits expose the broken host protection model offered by vendors like McAfee. AVERT should be glad they can learn something from rootkit.com. Without it, a window into the underground would be closed.

Update: Here is Greg Hoglund's response.

Monday, April 17, 2006

Cool News Taps from Net Optics

You know I am always on the prowl for new networking gear to perform network security monitoring. In fact, I may write a whole new book about the subject, pulling enterprise network instrumentation coverage from future editions of The Tao and other books and concentrating it in a single volume.

In the spirit of sharing information on new gear, I am happy to let you know about two cool new products from Net Optics. The first is the 10/100 Teeny Tap, pictured above. This is a fully-functional, dual-power, dual output traditional 10/100 Mbps tap. It's functionally equivalent to the 10/100 Ethernet Tap.

The second neat product is the iTap Gigabit Dual Port Aggregator. This is a Gigabit tap that provides two outputs, each a combination of the two TX input streams. This tap is similar to the Gigabit Dual Port Aggregator, with several major differences I noted last month. I ran some traffic through this tap today and really liked seeing the traffic load on the LCD screen. I did not get a chance to try the remote management features, but I plan to soon.

I took some photos of the Teeny Tap sitting above a traditional Ethernet tap, on top of the copper iTap. (There will also be a fiber iTap.) You can see just how tiny the Teeny Tap is. You can also buy the Teeny Tap online. It ships with a smart black canvas carrying case that looks like a digital camera container. It is big enough to hold the tap and two power supplies, along with some cables. I intend to take one with me on all of my engagements.

Thank you to Net Optics for sharing these with me. Who in your organization could use a Teeny Tap? I'm sure any consultants who travel as frequently as I do with a laptop bag would love to replace their existing setup with a Teeny Tap. As for the iTap, expect to see more of these everywhere -- the statistical functions are awesome.

Profiling Sensors with Bpfstat

In the TaoSecurity lab I have three physical boxes that perform monitoring duties. I wanted to see how each of them performed full content data collection.

Note: I do not consider what I am about to blog as any sort of thorough or comprehensive test. In fact, I expect some of you to flail about in anger that I didn't take into account your favorite testing methodologies!

I would be happy to hear constructive feedback. I am aware that anything resembling a test brings out some of the worst flame wars known to man. With those caveats aside, let's move on!

These are rough specifications for each system.

  • bourque: Celeron 633 MHz midtower with 320 MB RAM, 9541 MB Quantum Fireball HDD, 4310 MB Quantum Fireball HDD, Adaptec ANA-62044 PCI quad NIC

  • hacom: Via Nehemia 1 GHz small form factor PC with 512 MB RAM, 238 MB HDD, and three Intel Pro/1000 Gigabit adapters

  • shuttle: Intel PIV 3.2 GHz small form factor PC with 2 GB RAM, 2x74 GB HDDs, integrated Broadcom BCM5751 Gigabit Ethernet and Intel Pro/1000 MT Dual Gigabit adapter

Each sensor runs FreeBSD 6.0 with binary security updates applied.

How does one verify that a sensor is logging full content data appropriately? I decided to have each sensor listen to the two outputs of a traditional Fast Ethernet tap, provided by Net Optics. The tap watched a link between a FreeBSD FTP client and a Debian Linux FTP server. I had the FTP client download an 87 MB .zip file (my first Sguil image, in fact) from the server while the sensor watched.

I bonded the two interfaces monitoring the tapped link so that a single ngeth0 interface would see both sides of the FTP data transfer. I ran Tcpdump on the sensor like so:

tcpdump -n -i ngeth0 -s 1515 -w testX.lpc

Before starting the FTP transfer, I started Bpfstat on the sensor to watch for packet drops.

Here's how the FTP transfer looked.

ftp> get
local: remote:
227 Entering Passive Mode (172,16,1,1,128,85)
150 Opening BINARY mode data connection for '' (91706415 bytes).
100% |*************************************| 89557 KB 9.72 MB/s 00:00 ETA
226 Transfer complete.
91706415 bytes received in 00:08 (9.72 MB/s)

As you can see, I'm transferring at around 78 Mbps; this is a limitation of the hardware. I was able to run Iperf at over 90 Mbps, but in that test no data was being saved to disk.
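
The arithmetic behind that figure is simple: megabytes per second times eight gives megabits per second, protocol overhead aside. A throwaway helper (the mbps name is my own) makes the conversion explicit:

```shell
# mbps: convert a transfer rate in megabytes per second to megabits
# per second. 9.72 MB/s * 8 bits/byte ~= 77.8 Mbps on the wire.
mbps() {
    awk -v r="$1" 'BEGIN { printf "%.1f\n", r * 8 }'
}

# Example: mbps 9.72 prints 77.8
```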

After transferring the file via FTP from server to client, I used Tcpflow on the sensor to reassemble the FTP data stream carrying the 87 MB .zip file.

The original 87 MB file was 91706415 bytes. Every time I reassembled the FTP data session, I got files 91706415 bytes in size. When I ran an MD5 hash of each reconstructed file, however, none of them matched the original 87 MB .zip. That meant I was dropping packets somewhere.
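
A matching file size is weak evidence on its own: a reassembler that seeks to each segment's sequence-number offset leaves zero-filled gaps, so the size can match while the content differs. Here is a small sketch of the check; verify_copy is my own name, and it uses the portable cksum(1) rather than MD5 so it runs anywhere.

```shell
# verify_copy: compare a reconstructed file against the original.
# Size alone can match even when packets were lost, so compare
# checksums as well.
verify_copy() {
    orig=$1; copy=$2
    s1=$(wc -c < "$orig"); s2=$(wc -c < "$copy")
    c1=$(cksum < "$orig" | awk '{print $1}')
    c2=$(cksum < "$copy" | awk '{print $1}')
    if [ "$s1" -eq "$s2" ] && [ "$c1" = "$c2" ]; then
        echo "match"
    else
        echo "MISMATCH (size $s1 vs $s2)"
    fi
}

# Example: verify_copy original.zip reconstructed.zip
```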

To identify the bottleneck, I decided to use Bpfstat.

Here are the results when run on sensor bourque:

bourque:/root# bpfstat -i 5 -I ngeth0
pid netif flags recv drop match sblen hblen command
993 ngeth0 p--s- 11 0 11 0 0 tcpdump
993 ngeth0 p--s- 39 0 39 0 0 tcpdump
993 ngeth0 p--s- 21540 15475 21540 32740 31480 tcpdump
993 ngeth0 p--s- 75819 53392 75819 32740 32740 tcpdump
993 ngeth0 p--s- 95851 67142 95851 0 0 tcpdump

Wow, that's a lot of dropped packets. Here are the results for sensor hacom:

hacom:/root# bpfstat -i 5 -I ngeth0
pid netif flags recv drop match sblen hblen command
635 ngeth0 p--s- 41322 217 41322 32740 31480 tcpdump
635 ngeth0 p--s- 94843 420 94843 14124 0 tcpdump

That's a bit better. Let's look at shuttle, the most robust sensor available.

shuttle:/root# bpfstat -i 5 -I ngeth0
pid netif flags recv drop match sblen hblen command
689 ngeth0 p--s- 0 0 0 0 0 tcpdump
689 ngeth0 p--s- 7 0 7 0 0 tcpdump
689 ngeth0 p--s- 39 0 39 0 0 tcpdump
689 ngeth0 p--s- 23810 0 23810 17356 0 tcpdump
689 ngeth0 p--s- 77414 19 77414 15656 0 tcpdump
689 ngeth0 p--s- 95851 19 95851 0 0 tcpdump

That's excellent, but Bpfstat still reports dropping packets. That means I will not be able to reconstruct the FTP data session using this equipment.
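
Since Bpfstat's recv and drop counters are cumulative, the last line of a run yields the overall drop rate. A sketch that extracts it (drop_rate is my own helper; the column positions are taken from the listings above):

```shell
# drop_rate: compute the packet drop percentage from bpfstat output.
# The recv and drop columns (4 and 5) are cumulative, so the final
# sample of a capture gives the totals.
drop_rate() {
    awk '
    $1 ~ /^[0-9]+$/ { recv = $4; drop = $5 }   # keep the last data line
    END {
        if (recv > 0)
            printf "%d/%d dropped (%.1f%%)\n", drop, recv, 100 * drop / recv
    }'
}

# Example: bpfstat -i 5 -I ngeth0 | drop_rate
```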

None of this hardware is similar, but you can see how a progression from a slower CPU, less RAM, and less respected NICs to a faster CPU, more RAM, and Intel NICs results in better performance. All of these systems used 32-bit, 33 MHz PCI buses for add-on cards, so I would expect PCI-X or PCI Express to improve performance.

If you're thinking that the ngeth0 bonded system might have degraded performance, that wasn't the case. Here is Bpfstat output for one of the transfers, when watching only the data from FTP server to client.

shuttle:/root# bpfstat -i 5 -I em1
pid netif flags recv drop match sblen hblen command
803 em1 p--s- 0 0 0 0 0 tcpdump
803 em1 p--s- 14106 0 14106 16852 0 tcpdump
803 em1 p--s- 49470 35 49470 27576 0 tcpdump
803 em1 p--s- 63341 35 63341 0 0 tcpdump

It appears to have dropped more traffic than the ngeth0 system. I had similar results on the other boxes.

Friday, April 14, 2006

FreeBSD Status Report First Quarter 2006

The FreeBSD Status Report First Quarter 2006 has been posted. Notable items include Colin Percival meeting his fundraising goal -- thank you! Remember that BSDCan 2006 takes place 12-13 May in Ottawa. I will be elsewhere that week and unable to attend.

The Status Report lists lots of cool developments that are worth perusing. I noticed the End-of-life security schedule says FreeBSD 5.4 will no longer be supported after 30 May 2006.

Thursday, April 13, 2006

Share Pictures of Your Network Gear

I'm creating a class describing how to access network traffic in order to conduct network security monitoring. I'd like to know if anyone would mind sharing photos of their network closets, with descriptions of the gear in the rack and their network diagram. I'm looking to learn how you get connectivity from your ISP, where that link goes, and what your core, distribution, and access layers look like. I don't need to know about your desktops or whatever. I really just want students to get a look at a network closet and the sorts of connectors, cables, and rack gear they might expect to find.

I will not name any names. I'd just like to provide some real-world photos for students. If you can help, please email your photos by Friday, along with short descriptions of what's shown. Even pictures taken with camera phones are fine. Thank you very much!

Installing FreeBSD Java Binaries

I just posted about the new FreeBSD Java packages. I figured I would try them out and show how the process works. It's been a while since I last described installing Java, back when compiling from source was required.

After downloading the binary for FreeBSD 6.0, I tried to install it.

orr:/tmp# ls -al diablo-jdk-freebsd6-
-rw-r--r-- 1 richard wheel 54624741 Apr 13 07:30 diablo-jdk-freebsd6-
orr:/tmp# pkg_add -v diablo-jdk-freebsd6-
Requested space: 218498964 bytes, free space: 4397770752 bytes in /var/tmp/instmp.FMG03P
Package 'diablo-jdk-' depends on 'xorg-libraries-6.8.2' with
'x11/xorg-libraries' origin.
- already installed.
Package 'diablo-jdk-' depends on 'javavmwrapper-2.0_5' with
'java/javavmwrapper' origin.
pkg_add: could not find package javavmwrapper-2.0_5 !
pkg_add: 1 package addition(s) failed

That didn't work. Let me add the package it requires.

orr:/tmp# setenv PACKAGESITE
orr:/tmp# pkg_add -vr javavmwrapper
looking up
connecting to
setting passive mode
opening data connection
initiating transfer
x man/man1/checkvms.1.gz
x man/man1/javavm.1.gz
x man/man1/registervm.1.gz
x man/man1/unregistervm.1.gz
x man/man5/javavms.5.gz
x bin/classpath
x bin/javavm
x bin/registervm
x bin/unregistervm
x bin/checkvms
tar command returns 0 status
Running pre-install for javavmwrapper-2.0_6..
extract: Package name is javavmwrapper-2.0_6
extract: CWD to /usr/local
extract: /usr/local/man/man1/checkvms.1.gz
extract: /usr/local/man/man1/javavm.1.gz
extract: /usr/local/man/man1/registervm.1.gz
extract: /usr/local/man/man1/unregistervm.1.gz
extract: /usr/local/man/man5/javavms.5.gz
extract: /usr/local/bin/classpath
extract: /usr/local/bin/javavm
extract: /usr/local/bin/registervm
extract: /usr/local/bin/unregistervm
extract: /usr/local/bin/checkvms
extract: CWD to .
Running mtree for javavmwrapper-2.0_6..
mtree -U -f +MTREE_DIRS -d -e -p /usr/local >/dev/null
Running post-install for javavmwrapper-2.0_6..
Attempting to record package into /var/db/pkg/javavmwrapper-2.0_6..
Package javavmwrapper-2.0_6 registered in /var/db/pkg/javavmwrapper-2.0_6

Now I'll add the package again.

orr:/tmp# pkg_add -v diablo-jdk-freebsd6-
Requested space: 218498964 bytes, free space: 4397742080 bytes in /var/tmp/instmp.PLZjeE
Package 'diablo-jdk-' depends on 'xorg-libraries-6.8.2' with 'x11/xorg-libraries' origin.
- already installed.
Package 'diablo-jdk-' depends on 'javavmwrapper-2.0_5' with 'java/javavmwrapper' origin.
- already installed.
Running pre-install for diablo-jdk-

Diablo Caffe Version 1.5.0-0 ("Software")


You may install this Software only if you are currently a licensee
of FreeBSD (including substantially similar versions of FreeBSD) for
your own internal use only with your copy(ies) of FreeBSD (including
substantially similar versions of FreeBSD). If you are an OEM - a person
who will bundle the Software with other software before distributing the
bundled product to end users - you must read and "accept" the provisions
of the OEM License Agreement. You must read the License Agreement and
enter "YES" below to continue your install. By doing so, you agree to
be bound by all of the terms of this License Agreement.
Do you agree to the above license terms? [yes or no]
extract: Package name is diablo-jdk-
extract: CWD to /usr/local
extract: /usr/local/diablo-jdk1.5.0/COPYRIGHT
extract: /usr/local/diablo-jdk1.5.0/LICENSE
extract: /usr/local/diablo-jdk1.5.0/README.html
extract: /usr/local/diablo-jdk1.5.0/THIRDPARTYLICENSEREADME.txt
extract: /usr/local/diablo-jdk1.5.0/bin/ControlPanel
extract: /usr/local/diablo-jdk1.5.0/bin/HtmlConverter
extract: /usr/local/diablo-jdk1.5.0/bin/appletviewer
extract: /usr/local/diablo-jdk1.5.0/bin/apt
extract: /usr/local/diablo-jdk1.5.0/bin/extcheck
extract: /usr/local/diablo-jdk1.5.0/bin/idlj
extract: /usr/local/diablo-jdk1.5.0/bin/jar
extract: /usr/local/diablo-jdk1.5.0/bin/jarsigner
extract: /usr/local/diablo-jdk1.5.0/bin/java
extract: /usr/local/diablo-jdk1.5.0/sample/nio/server/
extract: /usr/local/diablo-jdk1.5.0/
extract: execute '/usr/local/bin/registervm
"/usr/local/diablo-jdk1.5.0/bin/java # DiabloCaffe${JDK_VERSION}"'
extract: CWD to .
Running mtree for diablo-jdk-
mtree -U -f +MTREE_DIRS -d -e -p /usr/local >/dev/null
Running post-install for diablo-jdk-
Attempting to record package into /var/db/pkg/diablo-jdk-
Trying to record dependency on package 'xorg-libraries-6.8.2' with 'x11/xorg-libraries' origin.
pkg_add: warning: package 'diablo-jdk-' requires 'xorg-libraries-6.8.2',
but 'xorg-libraries-6.9.0' is installed
Trying to record dependency on package 'javavmwrapper-2.0_5' with 'java/javavmwrapper' origin.
pkg_add: warning: package 'diablo-jdk-' requires 'javavmwrapper-2.0_5', but
'javavmwrapper-2.0_6' is installed
Package diablo-jdk- registered in /var/db/pkg/diablo-jdk-

To see if it worked, I tried launching the Java program Metacoretex. Since I could get the program to run, it looks like the Java binaries are operating properly.

FreeBSD News

I have a few news items of interest to FreeBSD users. First, FreeBSD 6.1-RC1 is now available. The schedule has not been updated, but I'm hoping to see the new release before or during the first week in May. I bet the developers will try to get it out the door before the end of this month, though.

If you use Java on FreeBSD, you'll be happy to hear that Java JRE 1.5 and JDK are available as binaries, courtesy of the FreeBSD Foundation. Securing the license to make this happen cost $35,000. This is how our donations help open source software.

Tuesday, April 11, 2006

Review of The Definitive Guide to MySQL, 3rd Ed Posted

Amazon.com just posted my three star review of The Definitive Guide to MySQL, 3rd Ed. From the review:

I read and reviewed MySQL Press' MySQL Tutorial by Luke Welling and Laura Thomson two years ago. I thought Tutorial was a great, concise (267 pages including index) MySQL overview. I hoped The Definitive Guide to MySQL 5, 3rd Ed (DG, 748 pages) would extend my understanding of MySQL beyond the coverage in the Tutorial. Unfortunately, I found the Tutorial did a better job addressing important information than the DG. While there is some good information in the DG, I recommend staying with books published by MySQL Press.

I currently have 205 reviews at Amazon.com. Eight of those are non-tech books, leaving 197 technology titles. That means that, after my next two reviews, the third will be my 200th technology book review.

Tips on MySQL Accounts in Sguil VM

In an otherwise unremarkable book on MySQL, I found good advice on database accounts and authentication.

Here is what the accounts look like in the Sguil VM I just released.

taosecurity:/home/analyst$ mysql -u root -p
Enter password:
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 6 to server version: 5.0.18

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> use mysql;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> select user, host, password from user;
| user | host | password |
| root | localhost | *7561F5295A1A35CB8E0A7C46921994D383947FA5 |
| root | | |
| | | |
| | localhost | |
| sguil | localhost | *B31EC1E21F3433DC60AF70BEFD8F40A01D6B77E3 |
5 rows in set (0.02 sec)

Hmm. Although I set passwords for the root and sguil users, I only did so for logins from localhost. There are also two anonymous users with no password at all.

Let's fix the root passwords first.

mysql> update user set password = password('r00t') where user = 'root';
Query OK, 1 row affected (0.02 sec)
Rows matched: 2 Changed: 1 Warnings: 0

mysql> select user, host, password from user;
| user | host | password |
| root | localhost | *7561F5295A1A35CB8E0A7C46921994D383947FA5 |
| root | | *7561F5295A1A35CB8E0A7C46921994D383947FA5 |
| | | |
| | localhost | |
| sguil | localhost | *B31EC1E21F3433DC60AF70BEFD8F40A01D6B77E3 |
5 rows in set (0.04 sec)

Now I delete the unneeded accounts.

mysql> delete from user where user = '';
Query OK, 2 rows affected (0.02 sec)

mysql> select user, host, password from user;
| user | host | password |
| root | localhost | *7561F5295A1A35CB8E0A7C46921994D383947FA5 |
| root | | *7561F5295A1A35CB8E0A7C46921994D383947FA5 |
| sguil | localhost | *B31EC1E21F3433DC60AF70BEFD8F40A01D6B77E3 |
3 rows in set (0.01 sec)

Because these changes modify the grant tables directly with UPDATE and DELETE, remember to run FLUSH PRIVILEGES (or restart mysqld) so MySQL reloads them. I plan to incorporate these changes in future VMs.

I learned one other cool tip from that database book. I can check to see how I am connected to the database using the status command.

mysql> status;
mysql Ver 14.12 Distrib 5.0.18, for portbld-freebsd5.4 (i386) using 4.3

Connection id: 6
Current database: mysql
Current user: root@localhost
SSL: Not in use
Current pager: more
Using outfile: ''
Using delimiter: ;
Server version: 5.0.18
Protocol version: 10
Connection: Localhost via UNIX socket
Server characterset: latin1
Db characterset: latin1
Client characterset: latin1
Conn. characterset: latin1
UNIX socket: /tmp/mysql.sock
Uptime: 22 hours 10 min 25 sec

Threads: 3 Questions: 2991 Slow queries: 0 Opens: 11 Flush tables: 1
Open tables: 45 Queries per second avg: 0.037

You can see that I am using a Unix socket when connected to localhost.

I can still connect to localhost, but force a TCP connection using the following syntax.

taosecurity:/home/analyst$ mysql -u root -p --protocol=tcp
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 7 to server version: 5.0.18

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> status;
mysql Ver 14.12 Distrib 5.0.18, for portbld-freebsd5.4 (i386) using 4.3

Connection id: 7
Current database:
Current user: root@localhost
SSL: Not in use
Current pager: more
Using outfile: ''
Using delimiter: ;
Server version: 5.0.18
Protocol version: 10
Connection: localhost via TCP/IP
Server characterset: latin1
Db characterset: latin1
Client characterset: latin1
Conn. characterset: latin1
TCP port: 3306
Uptime: 22 hours 11 min 19 sec

Threads: 3 Questions: 2996 Slow queries: 0 Opens: 0 Flush tables: 1
Open tables: 45 Queries per second avg: 0.038

That's not earth-shattering, but I thought it was interesting.

Monday, April 10, 2006

Bug in Latest VMware Server Beta Affects Sguil VM

A bug in the latest VMware Server Beta (22874) affects my newest Sguil VM. I like to deploy the VM so that the management interface, lnc0, is bridged to /dev/vmnet0, and the sniffing interface, lnc1, is bridged to /dev/vmnet2. On Linux this means /dev/vmnet0 corresponds to eth0 and /dev/vmnet2 corresponds to eth1.

You can see in the screen capture at right that my second interface is listed as a "custom" network associated with "VMnet2".

When I tried starting my VM today, I got an error message saying VMnet2 was not available. After some searching I found the following thread discussing the same problem.

The solution is simple. Rather than accept the listing that VMware provides, replace VMnet2 with /dev/vmnet2. The screen capture at left shows this configuration.

Now the VM boots without any problem. Remember to alter permissions on /dev/vmnet2 if you want to use it for promiscuous sniffing. Change permissions when the VM is not booted.

Saturday, April 08, 2006

Simple Bandwidth Measurement

If you read my first book you know I prefer small applications that run in Unix terminals to more complicated programs. I decided to get a sense of the bandwidth being monitored at several sensors deployed at client sites. I did not want to install MRTG or Ntop to answer simple questions like "What is the maximum bandwidth seen by the sensor?" or "What is an average amount of traffic seen?"

I decided to try bwm-ng. It's in the FreeBSD ports tree as bwm-ng. (Don't think I'm abandoning FreeBSD for Debian. Nothing can beat FreeBSD's package system in terms of number and variety of applications and up-to-date versions.)

Start bwm-ng by telling it the interface you want monitored.

# bwm-ng -I em2

The default screen looks like this.

bwm-ng v0.5 (probing every 0.500s), press 'h' for help
input: getifaddrs type: rate
| iface Rx Tx Total
em2: 8.27 KB/s 0.00 KB/s 8.27 KB/s
total: 8.27 KB/s 0.00 KB/s 8.27 KB/s

This screen shows the instantaneous traffic rate as measured by bwm-ng in KBps. Instantaneous rates aren't that helpful. To learn more options, I hit the 'h' key.

+- bwm-ng v0.5 - Keybindings ------------------------------------------------+
|                                                                            |
|  'h'  show this help                                                       |
|  'q'  exit                                                                 |
|  '+'  increases timeout by 100ms                                           |
|  '-'  decreases timeout by 100ms                                           |
|  'd'  switch KB and auto assign Byte/KB/MB/GB                              |
|  'a'  cycle: show all interfaces, only those which are up,                 |
|       only up and not hidden                                               |
|  's'  sum hidden ifaces to total aswell or not                             |
|  'n'  cycle: input methods                                                 |
|  'u'  cycle: bytes,bits,packets,errors                                     |
|  't'  cycle: current rate, max, sum since start, average for last 30s      |
|                                                                            |
+- press any key to continue... ---------------------------------------------+

The 't' option looks helpful. If I hit the 't' key three times, I end up with the following display.

bwm-ng v0.5 (probing every 0.500s), press 'h' for help
input: getifaddrs type: avg (30s)
/ iface Rx Tx Total
em2: 9.70 KB/s 0.00 KB/s 9.70 KB/s
total: 9.70 KB/s 0.00 KB/s 9.70 KB/s

Now I have a 30 second average. I prefer to see bits, not bytes, so I hit the 'u' key once.

bwm-ng v0.5 (probing every 0.500s), press 'h' for help
input: getifaddrs type: avg (30s)
- iface Rx Tx Total
em2: 91.68 Kb/s 0.00 Kb/s 91.68 Kb/s
total: 91.68 Kb/s 0.00 Kb/s 91.68 Kb/s

Now I have a 30 second average measured in Kbps.
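The 'u' toggle doesn't re-measure anything; it simply re-scales the same counters, with bits = bytes x 8. (The underlying rate drifted slightly between my keypresses, which is why 9.70 KB/s above doesn't multiply exactly into 91.68 Kb/s.) A one-liner sketch of the conversion:

```shell
# 91.68 Kb/s expressed in bits corresponds to 11.46 KB/s in bytes
awk 'BEGIN { kbps = 91.68; printf "%.2f KB/s\n", kbps / 8 }'
# prints: 11.46 KB/s
```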

For a sensor, the maximum traffic measured is very important. If I leave bwm-ng running for a while (perhaps in a screen(1) session), I can see surges. To have bwm-ng show me those maximum events, I can hit the 't' key to cycle through to the max report.

If I hit the 'd' key, bwm-ng will switch from Kilo units to something it considers more appropriate.

bwm-ng v0.5 (probing every 0.500s), press 'h' for help
input: getifaddrs type: max
/ iface Rx Tx Total
em2: 4.69 Mb/s 0.00 b/s 4.69 Mb/s
total: 4.69 Mb/s 0.00 b/s 4.69 Mb/s

Here we see this interface topped out at 4.69 Mbps.

This is the sort of data I need to determine if my sensor can handle this sort of load. The longer I leave bwm-ng running, the more I will know about this site's traffic characteristics.
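A quick back-of-the-envelope check helps with sizing: if that 4.69 Mb/s peak were somehow sustained all day, the sensor would need to keep up with roughly 49 GB of traffic per day (using 1 MB = 1024 KB, matching bwm-ng's Kilo scaling):

```shell
# worst case: peak rate sustained for a full 86400-second day
awk 'BEGIN { mbps = 4.69; printf "%.1f GB/day\n", mbps / 8 * 86400 / 1024 }'
# prints: 49.5 GB/day
```

The real daily volume will be far lower, since the peak is not sustained, but this gives an upper bound for disk planning.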

If you read bwm-ng's man page you'll see you can also run the program as a daemon and output measurements to .csv and other formats.

Remember that you can also use bpfstat on FreeBSD 6 and higher to get BPF performance data from the kernel. Here I measure every 10 seconds. Notice that the drop figures aren't changing.

# bpfstat -i 10 -I em2
pid netif flags recv drop match sblen hblen command
91593 em2 p--s- 156908 0 156908 1012 0 snort
18669 em2 p--s- 73065540 47 73065540 928 0 snort
33252 em2 p--s- 253633385 429 253633385 424 0 sancp
91593 em2 p--s- 157501 0 157501 750 0 snort
18669 em2 p--s- 73066133 47 73066133 662 0 snort
33252 em2 p--s- 253633978 429 253633978 326 0 sancp
91593 em2 p--s- 158625 0 158625 11355 0 snort
18669 em2 p--s- 73067257 47 73067257 10051 0 snort
33252 em2 p--s- 253635102 429 253635102 2927 0 sancp
91593 em2 p--s- 161417 0 161417 11838 0 snort
18669 em2 p--s- 73070049 47 73070049 11838 0 snort
33252 em2 p--s- 253637894 429 253637894 6530 0 sancp
91593 em2 p--s- 162303 0 162303 166 0 snort
18669 em2 p--s- 73070935 47 73070935 166 0 snort
33252 em2 p--s- 253638780 429 253638780 414 0 sancp
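The interesting ratio in that output is drop divided by recv. For the second Snort instance above, 47 drops against roughly 73 million received packets is a vanishingly small loss rate; a sketch of the arithmetic:

```shell
# drop rate for pid 18669, using the final bpfstat sample above
awk 'BEGIN { recv = 73070935; drop = 47; printf "%.6f%%\n", drop / recv * 100 }'
# prints: 0.000064%
```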

Friday, April 07, 2006

Virtualization is the New Web Browser

I read the first post by the president of VMware, Diane Greene. She discusses a subject that has been gnawing at my brain since I heard that Microsoft began offering Virtual Server as a free download. Ms. Greene makes two points. First, she promotes VMware's Virtual Machine Disk Format (VMDK) as an open alternative to Microsoft's Virtual Hard Disk Image Format Specification (VHD). I would obviously like to see an open standard prevail against a closed one.

Second, she addresses "the question of whether virtualization should be tightly integrated into the operating system or instead a separate wholly independent layer." As you might guess, she wants separation: "Tight integration comes at the unfortunate cost of giving up bias-free choice of operating system and thus software stack (i.e. OS and application program)."

This is the Web browser-in-the-OS argument all over again. Microsoft said last year that "Microsoft will build virtualization capabilities into the Windows platform based on Windows hypervisor technology, planned for availability in the next product wave of the Windows operating system, code-named Windows 'Longhorn.' This integrated hypervisor technology in the Windows operating system will be designed to provide customers with a high-performance virtualization solution for Windows and heterogeneous environments."

Everyone's doing it. Check out Red Hat's initiatives, which include integrating Xen into Fedora Core 5. FreeBSD is also bringing Xen into its source tree, probably for FreeBSD 7.0 but maybe 6.2.

You might also consider virtualization to be like a TCP/IP stack. I remember installing Trumpet Winsock on Windows 3.1 so I could dial in to the Internet in the early 1990s. Now every OS ships with a TCP/IP stack.

With word that Microsoft Virtual Server 2007 is delayed until 2007, like Vista and Office 2007, the only question is Microsoft's ability to execute. I consider VMware to be the leader in the virtualization space. Will there come a day, however, when I'll just use the built-in virtualization technology on Windows instead of adding a third-party product? Maybe. Then again, I'm posting these ideas in Firefox -- not IE.

Converted FreeBSD SMP System to Debian

I decided my Dell PowerEdge 2300 needed to switch from FreeBSD to Debian. I wanted to try using this SMP system to run VMware Server Beta, which runs on Windows or Linux. I'd like to record two notes about how I got this system running Debian with the 2.4 kernel.

First, the Dell PowerEdge 2300 uses a Megaraid RAID system that is not supported by the 2.6 kernel that ships with Debian. I couldn't get the 2.4 version of the installation process to recognize the RAID either, meaning Debian didn't see a hard drive on which to install itself. I found sites like Debian on Dell Servers and considered using custom .isos for installation. Luckily I found a much simpler solution.

During the installation, after the hardware check failed to find my hard drive, I ran the following commands.

cd /lib/modules/2.4.27-2-386/kernel/drivers/scsi
insmod megaraid.o

That allowed the Megaraid to be recognized, after which a hardware re-check found the RAID and permitted installing the OS.
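To keep that fix from being a one-boot wonder, Debian's convention is to list the module name in /etc/modules (one per line) so it loads at every boot. A sketch, run here against a scratch copy so nothing real is touched; on the installed system the target would be /etc/modules itself:

```shell
# work on a scratch copy; on the real system, edit /etc/modules directly
cp /etc/modules modules.scratch 2>/dev/null || : > modules.scratch
# add megaraid only if it is not already listed
grep -qx megaraid modules.scratch || echo megaraid >> modules.scratch
# confirm the entry is present (prints the matching line)
grep -x megaraid modules.scratch
```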

I had to re-run the install several times. At one point I was getting coredumps during package installation. I originally made progress using the 'expert' installation, which allowed me to select an SMP kernel. At my last reboot I didn't select that option and instead used the standard 'linux' install. That put a non-SMP kernel on my system. I was able to apt-get my way to an SMP kernel, so that didn't cause much trouble.

Second, as far as installing VMware Server Beta went, I used the newest 22874 version and followed my own instructions. (That is why I blog.) I tested the new VMware Server using my latest Sguil VM. After running bunzip2 and tar, I remembered to 'chmod 777 *.vmx' to allow the VM to run in the VMware Server Console. I can report that the Sguil VM runs fine in this new setup.

Specifications for my Next Laptop

I've been running Windows 2000 and FreeBSD on my ThinkPad A20p for six years, and I've been considering replacements. That machine offered various features for which I had waited many months, such as a graphics card with 16 MB RAM, mini-PCI architecture for onboard Fast Ethernet, etc.

Now I find myself considering the features I would like to see in my next-generation laptop. While I don't have any specific vendor or model in mind, here are the features I want:

I don't see anything on Intel's roadmap that offers these capabilities yet, but The Register indicates units will ship around March 2007. Mike's Hardware provides some tips on Merom models as well.

That should give vendors enough time to include Windows Vista. I think I will run the 64-bit Enterprise version. I plan to dual-boot with FreeBSD, but I will also use some version of VMware. I might run the new, free VMware Server, as long as it supports the same snapshot features found in VMware Workstation.