Posts

Showing posts from February, 2009

Using Responsible Person Records for Asset Management

Today while spending some time at the book store with my family, I decided to peruse a copy of Craig Hunt's TCP/IP Network Administration. It covers BIND software for DNS. I've been thinking about my post Asset Management Assistance via Custom DNS Records. In the book I noticed the following: a "Responsible Person" record? That sounds perfect. I found that RFC 1183, published in 1990, introduced these records. I decided to try setting them up on a VM running FreeBSD 7.1 and BIND 9. The VM had IP 172.16.99.130 with gateway 172.16.99.2. I followed the example in Building a Server with FreeBSD 7. First I made changes to named.conf as shown in this diff:

# diff /var/named/etc/namedb/named.conf /var/named/etc/namedb/named.conf.orig
132c132
< // zone "16.172.in-addr.arpa" { type master; file "master/empty.db"; };
---
> zone "16.172.in-addr.arpa" { type master; file "master/empty.db"; };
274,290d273
< zone "example.com"
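Per RFC 1183, an RP record pairs a responsible party's mailbox (written as a domain name, with the @ replaced by a dot) with a pointer to a TXT record holding free-form contact details. A minimal sketch of what the zone file entries might look like follows; the names and phone number are hypothetical, not taken from the post:

```
; RP syntax: owner  IN  RP  <mailbox-as-domain-name>  <txt-record-name>
; The first field encodes hostmaster@example.com with the @ replaced by a dot.
host1           IN  RP   hostmaster.example.com.  ops.example.com.
ops             IN  TXT  "Network Operations, +1 555 0100"
```

A query such as `dig host1.example.com. RP` would then return the responsible-party data, and a follow-up TXT query on ops.example.com. would return the contact string.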

Sample Lab from TCP/IP Weapons School 2.0 Posted

Several of you have asked me to explain the difference between TCP/IP Weapons School (TWS), which I first taught at USENIX Security 2006, and TCP/IP Weapons School 2.0 (TWS2), which I first taught at Black Hat DC 2009 Training last week. This post will explain the differences, with an added bonus. I have retired TWS, the class I taught from 2006-2008. I am only teaching TWS2 for the foreseeable future. TWS2 is a brand-new class. I did not reuse any material from TWS, my older Network Security Operations class, or anything else. TWS2 offers zero slides. Students receive three handouts and a DVD. The handouts include an 84-page investigation guide, a 25-page student workbook, and a 120-page teacher's guide. The DVD contains a virtual machine with all the tools and evidence needed to complete the labs, along with the network and memory evidence as stand-alone files. TWS2 is heavily lab-focused. I've been teaching professionally since 2002, and I've r

Inputs vs Outputs, or Why Controls Are Not Sufficient

I have a feeling my post Consensus Audit Guidelines Are Still Controls is not going to be popular in certain circles. While tidying the house this evening I came across my 2007 edition of the Economist's Pocket World in Figures. Flipping through the pages I found many examples of inputs (think "control-compliant") vs outputs (think "field-assessed"). I'd like to share some of them with you in an attempt to better communicate the ideas in my last post.

Business creativity and research
Input(s): Total expenditures on research and development, % of GDP
Output(s): Number of patents granted (per X people)

Education
Input(s): Education spending, % of GDP; school enrolment
Output(s): Literacy rate

Life expectancy, health, and related categories
Input(s): Health spending, % of GDP; population per doctor; number of hospital beds per citizen; (also add in air quality, drinking and smoking rates, etc.)
Output(s): Death rates; infant mortality; and s

Consensus Audit Guidelines Are Still Controls

Blog readers know that I think FISMA Is a Joke, FISMA Is a Jobs Program, and if you fought FISMA Dogfights you would always die in a burning pile of aerial debris. Now we have the Consensus Audit Guidelines (CAG) published by SANS. You can ask two questions: 1) is this what we need? and 2) is it at least a step in the right direction? Answering the first question is easy. You can look at the graphic I posted to see that CAG is largely another set of controls. In other words, this is more control-compliant "security," not field-assessed security. Wait, you might ask, doesn't the CAG say this? "What makes this document effective is that it reflects knowledge of actual attacks and defines controls that would have stopped those attacks from being successful. To construct the document, we have called upon the people who have first-hand knowledge about how the attacks are being carried out." That excerpt means that CAG defines defensive activities that are believed

Asset Management Assistance via Custom DNS Records

In my post Black Hat DC 2009 Wrap-Up, Day 2 I mentioned enjoying Dan Kaminsky's talk. His thoughts on the scalability of DNS made an impression on me. I thought about the way the Team Cymru Malware Hash Registry returns custom DNS responses for malware researchers, for example. In this post I am interested in knowing if any blog readers have encountered problems similar to the ones I will describe next, and if yes, did you / could you use DNS to help mitigate them? When conducting security operations to detect and respond to incidents, my team follows the CAER approach. Escalation is always an issue, because it requires identifying a responsible party. If you operate a defensible network it will be inventoried and claimed, but getting to that point is difficult. The problem is this: you have an IP address, but how do you determine the owner? Ideally you have access to a massive internal asset database, but the problems of maintaining such a system can be daunting. Th
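One way to follow the Malware Hash Registry pattern would be to publish ownership data in TXT records and parse the answers at escalation time. The key=value payload convention below is purely hypothetical (the post does not define one); this is only a sketch of the parsing step, assuming the TXT string has already been retrieved by some DNS client:

```python
# Hypothetical TXT payload convention: "owner=netops;contact=jdoe@example.com;dept=IT"
# Nothing here performs a real DNS query; the payload is assumed already retrieved.
def parse_owner_txt(txt: str) -> dict:
    """Split a semicolon-delimited key=value TXT payload into a dict."""
    fields = {}
    for part in txt.split(";"):
        key, sep, value = part.partition("=")
        if sep:  # keep only well-formed key=value pairs
            fields[key.strip()] = value.strip()
    return fields

record = parse_owner_txt("owner=netops;contact=jdoe@example.com;dept=IT")
# record["contact"] identifies whom to page during escalation
```

The design choice matters more than the code: the authoritative zone becomes the asset database, so whoever maintains DNS for a subnet also maintains its ownership data.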

HD Moore on the Necessity of Disclosure

HD Moore posted a great defense of full disclosure in his article The Best Defense is Information on the latest Adobe vulnerability. The strongest case for information disclosure is when the benefit of releasing the information outweighs the possible risks. In this case, like many others, the bad guys already won. Exploits are already being used in the wild and the fact that the rest of the world is just now taking notice doesn't mean that these are new vulnerabilities. At this point, the best strategy is to raise awareness, distribute the relevant information, and apply pressure on the vendor to release a patch. Adobe has scheduled the patch for March 11th. If you believe that Symantec notified them on February 12th, this is almost a full month from news of a live exploit to a vendor response. If the vendor involved was Microsoft, the press would be tearing them apart right now. What part of "your customers are being exploited" do they not understand? Richard Bej

Buck Surdu and Greg Conti Ask "Is It Time for a Cyberwarfare Branch?"

The latest issue of the Information Assurance Technology Analysis Center's IANewsletter features "Army, Navy, Air Force, and Cyber -- Is It Time for a Cyberwarfare Branch of [the] Military?" by COL John "Buck" Surdu and LTC Gregory Conti. I found these excerpts enlightening. The Army, Navy, and Air Force all maintain cyberwarfare components, but these organizations exist as ill-fitting appendages that attempt to operate in inhospitable cultures where technical expertise is not recognized, cultivated, or completely understood. The services have developed effective systems to build traditional leadership and management skills. They are quite good at creating the best infantrymen, pilots, ship captains, tank commanders, and artillerymen, but they do little to recognize and develop technical expertise. As a result, the Army, Navy, and Air Force hemorrhage technical talent, leaving the Nation’s military forces and our country under-prepared for both the ongoing c

More Information on CNCI

In response to my post Black Hat DC 2009 Wrap-Up, Day 1, a commenter shared a link to a Fairfax Chamber of Commerce briefing by Boeing on the Comprehensive National Cybersecurity Initiative (CNCI) that I last mentioned in FCW on Comprehensive National Cybersecurity Initiative. I've extracted a few slides below to highlight several points. The first slide I share shows abbreviated definitions for Computer Network Defense, Computer Network Exploitation, and Computer Network Attack. These mirror what I cited in China Cyberwar, or Not? in late 2007. The second slide supports what I said in my Predictions for 2008 post: Expect greater military involvement in defending private sector networks. Notice DNI and DoJ are said to be "authorized to conduct domestic intrusion detection," and DNI and DoD are allowed "involvement with domestic networks." The three-phased approach is displayed next. Note mentions of deployment of sensors, counter-intrusion plans

VirtualBSD: FreeBSD 7.1 Desktop in a VM

Want to try FreeBSD 7.1 in a comfortable, graphical desktop, via a VMware VM? If your answer is yes, visit www.virtualbsd.info and download their 1.5 GB VM. I tried it last night and got it working with VMware Server 1.0.8 by making the following adjustments. Edit VirtualBSD.vmx to say

#virtualHW.version = "6"
virtualHW.version = "4"

and VirtualBSD.vmdk to say

#ddb.virtualHWVersion = "6"
ddb.virtualHWVersion = "4"

and you will be able to use the VM on VMware Server 1.0.8. Richard Bejtlich is teaching new classes in Europe in 2009. Register by 1 Mar for the best rates.

Black Hat Briefings Justify Supporting Retrospective Security Analysis

One of the tenets of Network Security Monitoring, as repeated in Network Monitoring: How Far?, is to collect as much data as you can, given legal, political, and technical means (and constraints), because that approach gives you the best chance to detect and respond to intrusions. The Black Hat Briefings always remind me that such an approach makes sense. I left the talks with a set of techniques I can now use to mine my logs and related data sources for evidence of past attacks. Consider these examples: Given a set of memory dumps from compromised machines, search them using the Snorting Memory techniques for activity missed when those dumps were first collected. Review Web proxy logs for the presence of IDN in URIs. Query old BGP announcements for signs of past MITM attacks. You get the idea. The key concept is that none of us are smart enough to know how a certain set of advanced threats are exploiting us right now, or how they exploited us in the past. Once we get a
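The memory-dump example reduces to a simple idea: keep the evidence, and re-search it whenever you learn a new indicator. A toy sketch of that retrospective pass follows; the indicator names and byte patterns are hypothetical, not taken from any talk:

```python
# Toy retrospective scan: search previously collected memory dumps for
# byte patterns (indicators) learned only after the dumps were taken.
def scan_dump(data: bytes, indicators: dict) -> list:
    """Return the names of indicators whose byte pattern appears in the dump."""
    return [name for name, pattern in indicators.items() if pattern in data]

# Hypothetical indicators learned at a briefing, applied to an old dump
iocs = {"webshell-uri": b"/evil.php", "c2-hostname": b"bad.example.net"}
old_dump = b"...GET /evil.php HTTP/1.1 Host: www.example.com..."
hits = scan_dump(old_dump, iocs)  # -> ["webshell-uri"]
```

A real scan would use tools built for the job, but the point stands: the query can only run retrospectively if the data was retained in the first place.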

Black Hat DC 2009 Wrap-Up, Day 2

This is a follow-up to Black Hat DC 2009 Wrap-Up, Day 1. I started day two with Dan Kaminsky. I really enjoyed his talk. I am not sure how much of it was presented last year, since I missed his presentation in Las Vegas. However, I found his comparison of DNS vs SSL infrastructures illuminating. The root name servers are stable, dependable, centrally coordinated, and guaranteed to be around in ten years. We know which root name servers to trust, and we can add new hosts to our domains without requesting permission from a central authority. Contrast that with certificate authorities. They have problems, cannot all be trusted, and come and go as their owning companies change. We do not always know which CAs to trust, but we must continuously consult them whenever we change infrastructure. Dan asked "are we blaming business people when really our engineering is poor?" I thought that was a really interesting question. Imagine that instead of being a security engineer,

Black Hat DC 2009 Wrap-Up, Day 1

I taught the first edition of TCP/IP Weapons School 2.0 at Black Hat DC 2009 Training in Arlington, VA last week to 31 students. Thanks to Steve Andres from Special Ops Security and Joe Klein from Command Information for helping as teaching assistants, and to Ping Look and the whole Black Hat staff for making the class successful. I believe the class went well and I am looking forward to teaching at Black Hat Europe 2009 Training in April. Very soon I will post a sample lab from the class on this blog so you can get a feel for it, since it is completely new and totally slide-free. I hope to blog a little more now that the class is done. I spent the vast majority of my free time over the last three months preparing the new class, finishing the coursework only three days before the class and printing the books and burning the DVDs myself. I expect preparations for Amsterdam and eventually Las Vegas to be easier. After my training I attended Black Hat DC 2009 Briefi

Thoughts on Air Force Blocking Internet Access

Last year I wrote This Network Is Maintained as a Weapon System, in response to a story on Air Force blocks of blogging sites. Yesterday I read Air Force Unplugs Bases' Internet Connections by Noah Shachtman: Recently, internet access was cut off at Maxwell Air Force Base in Alabama, because personnel at the facility "hadn't demonstrated — in our view at the headquarters — their capacity to manage their network in a way that didn't make everyone else vulnerable," [said] Air Force Chief of Staff Gen. Norton Schwartz. I absolutely love this. While in the AFCERT I marvelled at the Marine Corps' willingness to take the same action when one of their sites did not implement appropriate defensive measures. Let's briefly describe what needs to be in place for such an action to occur. Monitored. Those who wish to make a blocking decision must have some evidence to support their action. The network subject to cutoff must be monitored so that authoriti

Back from Bro Workshop

Last week I attended the Bro Hands-On Workshop 2009. Bro is an open source network intrusion detection and traffic characterization program with a lineage stretching to the mid-1990s. I finally met Vern Paxson in person, which was great. I've known who Vern was for about 10 years but never met him or heard him speak. I first covered Bro in The Tao of Network Security Monitoring in 2004 with help from Chris Manders. About two years ago I posted Bro Basics and Bro Basics Follow-Up here. I haven't used Bro in production but after learning more about it in the workshop I would be comfortable using some of Bro's default features. I'm not going to say anything right now about using Bro. I did integrate Bro analysis into most of the cases in my all-new TCP/IP Weapons School 2.0 class at Black Hat this year. If TechTarget clears me for writing again in 2009 I will probably write some Bro articles for Traffic Talk. Richard Bejtlich is teaching new classes in E

Last Day to Register Online for TCP/IP Weapons School 2.0 in DC

Black Hat was kind enough to invite me back to teach a new 2-day course at Black Hat DC 2009 Training on 16-17 February 2009 at the Hyatt Regency Crystal City in Arlington, VA. This class, completely new for 2009, is called TCP/IP Weapons School 2.0. This is my only scheduled class on the east coast of the United States in 2009. The short description says: This hands-on, lab-centric class by Richard Bejtlich focuses on collection, detection, escalation, and response for digital intrusions. Is your network safe from intruders? Do you know how to find out? Do you know what to do when you learn the truth? If you need answers to these questions, TCP/IP Weapons School 2.0 (TWS2) is the Black Hat course for you. This vendor-neutral, open source software-friendly, reality-driven two-day event will teach students the investigative mindset not found in classes that focus solely on tools. TWS2 is hands-on, lab-centric, and grounded in the latest strategies and tactics that work against

New Online Packet Repository

As of a few weeks ago I am no longer involved with OpenPacket.org. One of the reasons is a great new online packet repository sponsored and run by Mu Dynamics called Pcapr. I've had an account there for a few months, but it looks like the site is now open to the general public. Check it out -- there are a lot of cool features already. Richard Bejtlich is teaching new classes in DC and Europe in 2009. Register by 1 Jan and 1 Feb, respectively, for the best rates.

Benefits of Removing Administrator Access in Windows

I think most security people advocate removing administrator rights for normal Windows users, but I enjoy reading even a cursory analysis of this "best practice" as published by BeyondTrust and reported by ComputerWorld. From the press release: BeyondTrust’s findings show that among the 2008 Microsoft vulnerabilities given a "critical" severity rating, 92 percent shared the same best practice advice from Microsoft to mitigate the vulnerability: "Users whose accounts are configured to have fewer user rights on the system could be less impacted than users who operate with administrative user rights." This language, found in the "Mitigating Factors" portion of Microsoft’s security bulletins, also appears as a recommendation for reducing the threat from nearly 70 percent of all vulnerabilities reported in 2008. Other key findings from BeyondTrust’s report show that removing administrator rights will better protect companies against the exploit

More on Weaknesses of Models

I read the following in the Economist: Edmund Phelps, who won the Nobel prize for economics in 2006, is highly critical of today’s financial services. "Risk-assessment and risk-management models were never well founded," he says. "There was a mystique to the idea that market participants knew the price to put on this or that risk. But it is impossible to imagine that such a complex system could be understood in such detail and with such amazing correctness... the requirements for information... have gone beyond our abilities to gather it." This is absolutely the problem I mentioned in Are the Questions Sound? and Wall Street Clowns and Their Models. Phelps could easily be describing information security models. Richard Bejtlich is teaching new classes in DC and Europe in 2009. Register by 1 Jan and 1 Feb, respectively, for the best rates.

Notes on Installing Sguil Using FreeBSD 7.1 Packages

It's been a while since I've looked at the Sguil ports for FreeBSD, so I decided to see how they work. In this post I will talk about installing a Sguil sensor and server on a single FreeBSD 7.1 test VM using packages shipped with FreeBSD 7.1. To start, the system had no packages installed. After running pkg_add -vr sguil-sensor, I watched what was added to the system. I'm only going to document what I found interesting. The sguil-sensor-0.7.0_2 package installed the following into /usr/local:

x bin/sguil-sensor/log_packets.sh
x bin/sguil-sensor/example_agent.tcl
x bin/sguil-sensor/pcap_agent.tcl
x bin/sguil-sensor/snort_agent.tcl
x etc/sguil-sensor/example_agent.conf-sample
x etc/sguil-sensor/pcap_agent.conf-sample
x etc/sguil-sensor/snort_agent.conf-sample
x etc/sguil-sensor/log_packets.conf-sample
x share/doc/sguil-sensor <- multiple files, omitted here
x etc/rc.d/example_agent
x etc/rc.d/pcap_agent
x etc/rc.d/snort_agent

Note that you have to co
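Since the package drops rc.d scripts into /usr/local/etc/rc.d, they presumably follow the standard FreeBSD rcvar convention, in which case enabling the agents at boot would look something like the sketch below. The knob names are an assumption, not confirmed by the post; check each script's rcvar line before relying on them:

```
# /etc/rc.conf -- hypothetical knobs, assuming standard rcvar naming;
# verify with: grep rcvar /usr/local/etc/rc.d/pcap_agent
pcap_agent_enable="YES"
snort_agent_enable="YES"
```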

Data Leakage Protection Thoughts

"Data Leakage Protection" (DLP) appears to be the hot product everybody wants. I was asked to add to the SearchSecurity section I wrote two years ago, but I'm not really interested. I mentioned "extrusion" over five years ago in What Is Extrusion Detection? This InformationWeek story had an interesting take: What constitutes DLP? Any piece of backup software, disk encryption software, firewall, network access control appliance, virus scanner, security event and incident management appliance, network behavior analysis appliance--you name it--can be loosely defined as a product that facilitates DLP. For the purposes of this Rolling Review, we will define enterprise DLP offerings as those that take a holistic, multitiered approach to stopping data loss, including the ability to apply policies and quarantine information as it rests on a PC (data in use), as it rests on network file systems (data at rest), and as it traverses the LAN or leaves the corporate b

Humans, Not Computers, Are Intrusion Tolerant

Several years ago I mentioned the human firewall project as an example of a security awareness-centric defensive measure. I thought it ironic that the project was dead by the time I looked into it. On a similar note, I was considering the idea of intrusion tolerance recently, loosely defined as having a system continue to function properly despite being compromised. A pioneer in the field describes the concept thus: Classical security-related work has on the other hand privileged, with few exceptions, intrusion prevention... [With intrusion tolerance, i]nstead of trying to prevent every single intrusion, these are allowed, but tolerated: the system triggers mechanisms that prevent the intrusion from generating a system security failure. It occurred to me recently that, in one sense, we have already fielded intrusion tolerant systems. Any computer operated, owned, or managed by a person who doesn't care about its integrity is an intrusion tolerant system. People tolera