Friday, June 29, 2007

Bejtlich Teaching at Forensec Canada 2007

I just wrapped up teaching at GFIRST, and the number of events left on my TaoSecurity training page is dwindling. My last scheduled event open to the general public will take place at Forensec Canada 2007 in Regina, SK on 15-16 September 2007. This is a great opportunity to attend some excellent forensics training, since the conference (17-18 September) follows my class, and MANDIANT's Incident Response Management class wraps up the event on 19-20 September. Each class holds only 12 students.

I am teaching TCP/IP Weapons School, covering layers 2-7 in two days. This is the same class as the one I am teaching at Black Hat USA 2007 in Las Vegas. One of my two Black Hat training sessions is already full, and the second is close to full, as indicated by its yellow coloring on the registration page.

Those of you who attended TCP/IP Weapons School layers 2-3 in Santa Clara last week may want to join me at USENIX Security 2007 in Boston on 6-7 August. I will be teaching layers 4-7 there in-depth for two days. That is the last time I will teach that course.

I am only teaching Network Security Operations twice more -- in Cincinnati and Chicago in August. Please see my TaoSecurity training page for details. Both of those classes are filling too.

Update: I'm afraid I won't be able to present this class.

Saturday, June 23, 2007

Three Reviews Posted

I'm happy to announce three new reviews, partially due to my flights between Washington Dulles and San Jose for USENIX 2007. The first is two stars (yes, unfortunately) for Practical Packet Analysis by Chris Sanders. From the review:

To use "American Idol" lingo, you've already read reviews by Randy Jackson and Paula Abdul. It's time for the truth from Simon Cowell -- Practical Packet Analysis (PPA) is a disaster. I am not biased against books for beginners; see my five star review of Computer Networking by Jeanna Matthews. I am not biased against author Chris Sanders; he seems like a nice guy who is trying to write a helpful book. I am not a misguided newbie; I've written three books involving traffic analysis. I did not skim the book; I read all of it on a flight from San Jose to Washington Dulles. I do not dislike publisher No Starch; I just wrote a five star review for Designing BSD Rootkits by Joseph Kong.

PPA is written for beginners, or at least it should be intended for beginners given its subject matter. It appears the author is also a beginner, or worse, someone who has not learned fundamental networking concepts. This situation results in a book that will mislead readers who are not equipped to recognize the numerous technical and conceptual problems in the text. This review will highlight several to make my point. These are not all of the problems in the book.

Read the review to see all of the examples.

The second is Designing BSD Rootkits by Joseph Kong. From the review:

I loved Designing BSD Rootkits (DBR) by Joseph Kong, and I'm not even a kernel hacker. Rather, I'm an incident responder and FreeBSD administrator. This book is directly on target and does not waste the reader's time. If you understand C and want to learn how to manipulate the FreeBSD kernel, Designing BSD Rootkits is for you. Peer into the depths of a powerful operating system and bend it to your will!

DBR covers much the same material found in the earlier Rootkits: Subverting the Windows Kernel by Greg Hoglund and James Butler, except Kong's book is all about FreeBSD. I actually read the Windows text first, but found Kong's more direct language and examples easier to follow than the Hoglund/Butler text. After reading DBR I have a stronger understanding of each of the main chapters' techniques, i.e., kernel modules, hooking, direct kernel object manipulation, kernel object hooking, run-time kernel memory patching, and detection mechanisms. I particularly liked the author showing his sample rootkit's effectiveness against Tripwire, simply to demonstrate his methods.

The third is Rootkits: Subverting the Windows Kernel by Greg Hoglund and James Butler. From the review:

I read Rootkits: Subverting the Windows Kernel last year, but waited until I read Joseph Kong's Designing BSD Rootkits before reviewing both books. In a head-to-head comparison, I thought Kong's book was easier to comprehend and directly covered the key techniques I wanted to see. If I could give this book 4 1/2 stars I would, but Amazon doesn't allow that luxury.

Hoglund and Butler should be commended for writing this book. It really does assemble the parts (meaning techniques and code) necessary to implement a Windows rootkit, at least prior to Windows Vista. My only concern is that, at times, the authors are not as clear as I hoped they might be. This is probably due to the fact that they are two of the best rootkit writers on the planet, so they probably do not remember what it was like to not understand "hooking" and other techniques.

Thank you to No Starch and Addison-Wesley for the review copies.

Friday, June 22, 2007

Internet Traffic Study

I found this press release from Ellacoya Networks to be interesting. HTTP is approximately 46% of all traffic on the network. P2P continues as a strong second place at 37% of total traffic. Newsgroups (9%), non-HTTP video streaming (3%), gaming (2%) and VoIP (1%) are the next most widely used applications.

Breaking down application types within HTTP, the data reveals that traditional Web page downloads (i.e. text and images) represent 45% of all Web traffic. Streaming video represents 36% and streaming audio 5% of all HTTP traffic. YouTube alone comprises approximately 20% of all HTTP traffic, or nearly 10% of all traffic on the Internet.
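The YouTube claim is easy to sanity-check: its share of all traffic should be its share of HTTP traffic times HTTP's share of the total. A quick calculation:

```python
# Sanity-check the Ellacoya figures: YouTube's share of total traffic
# equals its share of HTTP traffic times HTTP's share of the total.
http_share_of_total = 0.46    # HTTP as a fraction of all Internet traffic
youtube_share_of_http = 0.20  # YouTube as a fraction of HTTP traffic

youtube_share_of_total = youtube_share_of_http * http_share_of_total
print(f"YouTube is about {youtube_share_of_total:.1%} of all traffic")
# About 9.2%, consistent with the "nearly 10%" claim.
```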

There's some dispute regarding the HTTP vs P2P split in these numbers, but overall I found them surprising. I am most surprised by the high newsgroups count -- is alt.whatever really that significant?

Frame Check Sequence Recorded in STP

This evening I was preparing to teach day 2 of my TCP/IP Weapons School class at USENIX. I decided I wanted to get a trace of Spanning Tree Protocol (STP) so I connected back to a box in my lab and ran Tshark. When I brought the trace back to my desktop to view in Wireshark, I saw the following:

How/why did Tshark capture the FCS for this frame? I looked at other traffic (i.e., non-STP traffic) and did not see an FCS. The only other interesting aspect of this frame is that it is pure 802.3, not 802.3 with an LLC SNAP header, like this CDP frame:

I usually see 802.3 with an LLC SNAP header or just Ethernet II.

Does anyone have any ideas?
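One way to verify whether trailing bytes in a capture really are an FCS is to compute the IEEE CRC-32 over the rest of the frame yourself, since the Ethernet FCS is a standard CRC-32 appended least-significant byte first. A minimal sketch (the dummy frame below is hypothetical, not my captured STP frame):

```python
import zlib

def has_valid_fcs(frame: bytes) -> bool:
    """Return True if the last 4 bytes of a captured Ethernet frame are a
    valid Frame Check Sequence: the IEEE CRC-32 of the rest of the frame,
    transmitted least-significant byte first."""
    body, trailer = frame[:-4], frame[-4:]
    return zlib.crc32(body) == int.from_bytes(trailer, "little")

# Hypothetical example: build a dummy 60-byte frame body and append
# its own CRC-32, then confirm the check accepts it.
body = bytes(60)
frame = body + zlib.crc32(body).to_bytes(4, "little")
print(has_valid_fcs(frame))  # True
```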

Thursday, June 21, 2007

Open Source Initiative Stands Up

Thanks to this Slashdot article I learned of this blog post by Michael Tiemann, president of the Open Source Initiative. Essentially he writes:

Enough is enough. Open Source has grown up. Now it is time for us to stand up. I believe that when we do, the vendors who ignore our norms will suddenly recognize that they really do need to make a choice: to label their software correctly and honestly, or to license it with an OSI-approved license that matches their open source label.

This is great. I wrote Real Open Source in April and I am glad OSI is joining this battle. It will be interesting to see how they proceed. Perhaps they can start by "naming names," i.e., listing companies or projects claiming to be "open source" but not using an Open Source license. Incidentally, reading the Slashdot post is worthwhile, if only to see Bruce Perens respond to arguments opposing OSI's position.

Latest Plane Reading

Tuesday afternoon I flew from Washington Dulles to San Jose, to teach at USENIX 2007.

En route I read a few interesting articles that I'd like to mention.

  • When I saw NWC mention the Omni Virtual Network Service, I thought something cool might be on hand. Their Web site states:

    The migration to blade chassis-based virtual servers has created a new blind spot in the enterprise: the traffic between virtual servers in the same blade chassis. This “invisible traffic” never crosses any network segment where it can be easily captured. As a result, engineers have little or no visibility into the traffic among virtual servers...

    A new addition to the OmniAnalysis Platform, the Omni Virtual Network Service is a lightweight traffic-capture service that enables IT engineers to capture and analyze traffic on virtual servers...

    The Omni Virtual Network Service is a small, lightweight service that runs on any Windows XP or Windows 2003 virtual server.

    Oh... so Omni implemented remote capture, which I blogged about in 2003 as implemented in WinPcap, and which only works on Windows. Oh well.

    Incidentally, a quick check of VMware Server 1.0.2 build-39867 showed that when VM 1 pings VM 2 with all NICs in bridged mode, VM 3 cannot see the ICMP traffic. Does this mean VMware Server no longer acts like a hub, as I described a year ago? Watching the physical Linux interface of the host OS showed two copies of each packet, however.

  • The same issue of NWC mentioned the NetXen 10G Ethernet Expansion card, saying:

    The NetXen adapter offers dual-channel 10GbE connectivity at a cost of less than $550 per port, and provides bonus dual- or quad-gigabit ports, depending on the chip. But what makes the NetXen line really interesting is the investment protection it offers through its field-programmable and IO-virtualization capabilities. Already supporting RDMA, iSCSI and TCP/IP off-loading, the NetXen Protocol Processing Engine can be reprogrammed to handle changed or new protocols, like iSER and iWARP, through a simple driver update.

    The NetXen Website confirms this:

    The fully-programmable architecture of the Intelligent NIC® protects network equipment investments in the face of rapidly changing market needs and evolving protocols. It is the only solution on the market whose functionality can be changed completely in firmware.

    Are you thinking what I'm thinking? Say it with me: NIC rootkit -- or how about a NICkit?

  • Recently I've been blogging about CALEA. I found the diagrams in this Procera Networks marketing slick helped me understand some of the different approaches, like traditional CALEA (top diagram) vs Procera's approach (bottom diagram):

  • Speaking of CALEA, I got a chance to read a new paper by my favorite covert channel and traffic analysis guru Steven Murdoch -- Sampled Traffic Analysis by Internet-Exchange-Level Adversaries. Basically, there's a good chance that Tor users monitored at an Internet eXchange (IX) can be identified via sampled traffic analysis. Renting a botnet is still your best means to stay anonymous, apparently.

  • Finally, I also read Inadvertent Disclosure – Information Leaks in the Extended Enterprise (.pdf) by M. Eric Johnson and Scott Dynes. This very interesting paper described the authors' search for sensitive documents on P2P networks. My only problem was the dreadful repeated misuse of terms like threat, when risk was probably the right term to use. A sentence like this encapsulates much of my frustration:

    While these searches could be seen as benign, they would also uncover sensitive files and thus the expose [sic] vulnerabilities that could still represent a threat to the institution and its customers.

    Vulnerabilities never represent a threat to anyone. Almost all the places where the authors say "threat" they really mean risk. For example:

    We also characterize the threat of loss...

    That should read "We also characterize the risk of loss..."

    In this example an application is mischaracterized as a "threat."

    This next breed of file sharing systems has proven to be far more difficult to control and a much larger security threat.

    Applications which offer services are not threats. Applications may offer vulnerabilities which can be attacked and exploited by threats, but the application is not the threat itself -- the application is a target.

Expect more reports from the flight back to NoVA.

Tuesday, June 19, 2007

More on Enterprise Data Centralization

I'd like to respond to a few comments to my post Enterprise Data Centralization. The first paragraph includes the following:

However, I haven't written about a natural complement to thin client computing -- enterprise data centralization. In this world, the thin client is merely a window to a centralized data store (sufficiently implemented according to business continuity processes and methods like redundancy, etc.).

The bolded part is my answer to those who think my "centralization" plan means building the Mother of All Storage Servers/Networks. Please. Do you think I would really advocate that? The bolded part is my shorthand for saying I do NOT mean to build the Mother of All Storage Servers/Networks.

Instead, I envision something similar to the way Google operates. One of you used Google as an example of data decentralization. Sure, the data is decentralized at the level of bits on media, but it's exceptionally centralized where it matters -- the user interface. I can access all of my Google-related content through one portal. If my data needed to be explored for ediscovery purposes, all you need is my Google login. Easy. That's the kind of centralization I'm talking about.

That explanation should also calm those who think I'm building the Mother of All Targets; i.e., nuke the primary and secondary data centers and the whole company is dead. Again, you're thinking at the level of bits and media. I'm thinking in terms of a single interface to all company data.

Now you might be thinking that what I'm advocating isn't all that special. Consider this: do you have a single place to go for all of your company data? If you do, that is awesome. I doubt that it's the case for most of us, however. Unfortunately, we have to move in that direction if we wish to meet legal business requirements.

Christopher Hoff used the term "agile" several times in his good blog post. I think "agile" is going to be thrown out the window when corporate management is staring at $50,000 per day fines for not being able to produce relevant documents during ediscovery. When a company loses a multi-million dollar lawsuit because the judge issued an adverse inference jury instruction, I guarantee data will be centralized from then on.

The May 2007 ISSA Journal features a great article titled E-discovery: Implications of FRCP Changes on IT Risk Management by Bradley J. Schaufenbuel. It features this excerpt:

Adverse inference jury instruction: If electronic evidence is not produced in a timely manner, a judge may instruct the jury to assume that the missing evidence would have been adverse to the party that failed to produce it. This will greatly diminish this party’s chances of legal success.

Two highly visible examples include Zubulake v. UBS Warburg and Coleman v. Morgan Stanley. The defendant financial institutions in both lawsuits lost their cases due to their failure to adequately produce e-mail evidence, and the resulting assumption that evidence was willfully destroyed or withheld. Laura Zubulake, a former UBS employee, was awarded $29 million in 2005 in her sexual discrimination lawsuit.

And billionaire Ronald Perelman was awarded $1.45 billion in 2005 based on his claim that Morgan Stanley defrauded him in the 1998 sale of his company, camping goods manufacturer Coleman.

Email provides a good example of a place to start centralizing data. Look at the trouble the White House has created in the story House Report Shows White House Officials Sent Thousands of Official Emails Using Outside Accounts.

It's fine to be advocating Google Gears and all these other Web 2.0 applications and systems. There's one force in the universe that can slap all that down, and that's corporate lawyers. If you disagree, who do you think has a greater influence on the CEO: the CTO or the corporate lawyer? When the lawyer is backed by stories of lost cases, fines, and maybe jail time, what hope does a CTO with plans for "agility" have?

Incidentally, I wouldn't be promoting centralization if I thought it was impossible. Centralization was a word in the first sentence the GE CTO said to me during our first meeting.

Hired Gun No More

The June 2007 Information Security Magazine features a story called When to Call in the Hired Guns. The magazine includes a chart titled VAR Excellence (.pdf) that mentions TaoSecurity. The selection process seems to have no method to its madness; I only recognize a few of the other companies. Furthermore, I did not pay anything for the listing.

I don't like to see TaoSecurity listed as a "VAR" since I don't sell any products as a regular business offering. It's funny to see TaoSecurity listed in a chart like this, two weeks before I start working at GE.

Now, it would be nice if Information Security Magazine would reinstate my subscription. They dropped me last year, even though I write the Snort Report for a sister publication. I even know a former contributing editor (with his name printed at the beginning of the magazine) who is no longer getting a subscription!

Web-Centric Short-Term Incident Containment

You may have read Large Scale European Web Attack from Websense and other news sources. One or more Italian Web hosting companies have been compromised, and the contents of the Web sites they host have been modified.

Malicious IFRAMEs like the one below are being added to Web sites. These IFRAMEs link to malicious code hosted by a third party under the control of the intruder. When an innocent Web browser visits the compromised Web site, the browser is attacked by the contents of the IFRAME. This is not a new problem. I responded to an intrusion in 2003 that used the same technique. It's the reason why I discussed having the capability to use an extrusion method to modify traffic as it leaves a site. This is an example of Short Term Incident Containment. This technique does not remediate the compromised Web sites or Web servers, but it does help clean malicious traffic before it reaches Web browsers. I suggest using Netsed or Snort in inline mode to replace the malicious IFRAME with something benign. It would be helpful to have this sort of capability widely deployed as an application-level incident response tool.
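The core of this substitution idea can be sketched in a few lines. Inline replacement generally needs to preserve payload length so TCP sequencing and checksumming stay sane (Snort's inline replace option, for example, requires equal-length substitutions). The URL below is hypothetical:

```python
def neutralize_iframe(payload: bytes, malicious_src: bytes) -> bytes:
    """Overwrite a known-malicious IFRAME source with a same-length
    benign string, the way an inline rewriting device would, so the
    payload length (and TCP sequencing) is unchanged."""
    benign = b"x" * len(malicious_src)  # same length as the original
    return payload.replace(malicious_src, benign)

# Hypothetical malicious IFRAME as it might appear in a served page:
html = b'<iframe src="http://bad.example/x.js"></iframe>'
cleaned = neutralize_iframe(html, b"http://bad.example/x.js")
print(cleaned)  # the src is overwritten with a same-length benign string
```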

Note: I do not use Gentoo. However, Gentoo's package site nicely provides images for all the tools in their repository. When one is trying to write slides and add images that have some relationship with the material at hand, Gentoo's "package boxes" make nice additions.

As a preventative measure on the Web client side, Greg Castle's Whitetrash would help prevent the third-party content being loaded into the IFRAME.

On a somewhat related note, I am glad to see developments like that from Palo Alto Networks, described by Dark Reading. It's a firewall that makes blocking decisions based on application recognition, not port number. Parts of this functionality are available elsewhere, but eventually all network blocking and inspection products will make decisions using the same method.

Saturday, June 16, 2007

Enterprise Data Centralization

I've written about thin client computing for several years. However, I haven't written about a natural complement to thin client computing -- enterprise data centralization. In this world, the thin client is merely a window to a centralized data store (sufficiently implemented according to business continuity processes and methods like redundancy, etc.). That vision can be implemented today, albeit only where low-latency, uninterrupted connectivity with decent bandwidth is available.

Thanks to EDD Blog I just read an article that makes me think legal forces will drive the adoption of this strategy: Opinion: Data Governance Will Eclipse CIO Role by Jay Cline. He writes:

In response to the new U.S. Federal Rules on Civil Procedure regarding legal discovery, for example, several general counsels have ordered the establishment of centralized "litigation servers" that store copies of all of the companies’ electronic files. They think this is the only way to preserve and cheaply produce evidence for pending or foreseeable litigation. It’s a very small leap of logic for them to propose that all of their companies’ data, not just copies, should be centralized...

Data must soon become centralized, its use must be strictly controlled within legal parameters, and information must drive the business model. Companies that don’t put a single, C-level person in charge of making this happen will face two brutal realities: lawsuits driving up costs and eroding trust in the company, and competitive upstarts stealing revenues through more nimble use of centralized information.

The rest of the article talks about the role of CIOs, CTOs, "chief information strategists," etc., but I don't care about that. I care about the data centralization aspect.

For me, data centralization will be a major theme in my new job. At the very least, to meet ediscovery requirements, copies of all business information will need to be stored centrally. This strategy will give users of any computing platform the flexibility to create information locally, but that data will quickly find a second home (at the very least) in the central data store. Ideally (once bandwidth is ubiquitous) all business data will reside centrally, from creation to destruction (in accordance with data retention and data destruction policies). Furthermore, that data will be subjected to protections at the document level, not just at the application, OS, and platform level.

This strategy addresses many problems very nicely.

  1. Ediscovery: With at least copies of all data stored centrally, all relevant data can be searched and produced.

  2. Business Continuity: If your computing platform is destroyed, all your data (or at least a copy) is stored elsewhere.

  3. Incident Recovery: As I said in my Five Thoughts on Incident Response:

    Today, in 2007, I am still comfortable saying that existing hardware can usually be trusted, without evidence to the contrary, as a platform for reinstallation. This is one year after I saw John Heasman discuss PCI rootkits (.pdf)... John's talks indicate that the day is coming when even hardware that hosted a compromised OS will eventually not be trustworthy.

    One day I will advise clients to treat an incident zone as if a total physical loss has occurred and new platforms have to be available for hosting a reinstallation. If you doubt me now, wait for the post in a few years where I link back to this point. In brief, treat an incident like a disaster, not a nuisance. Otherwise, you will be perpetually compromised.

    With thin client computing and data centralization, incident recovery means discarding the old computing platform and starting with a fresh one.

  4. System Administration: We will avoid Marcus Ranum's "Infocalypse," preventing every man, woman, and child from becoming a Windows system administrator. Scarce IT staff will administer centralized systems and end users will no longer have the power or need to install software. The Personal Computer will be replaced by a window to the Business Computer, although the platform itself might be a consumer platform like a smartphone. (Of course, PCs will still be options outside business needs.)

  5. Information Lifecycle Management: ILM includes data classification, defense, retention, and destruction. With all data in a central location, it will be easier to classify it and apply classification-appropriate handling and defense tools and techniques. It is important to remember that not all data has the same value, and trying to protect it all with the same tools and techniques is too costly. (Did you know the US Postal Service will carry up to Secret classified data within the US, provided it is wrapped appropriately and sent via registered mail? The idea is that the risk of interception is worth the savings over having a courier transport Secret material.)

What data centralization and/or thin computing is your organization pursuing, and why?

Friday, June 15, 2007

DHS Einstein Demonstrates Value of Session Data

If you're looking for case studies to show management to justify collecting session data, check out Einstein keeps an eye on agency networks. I've known about this program for several years but waited until a high-profile story like this to mention it in my blog. Basically:

Since 2004, Einstein has monitored participating agencies’ network gateways for traffic patterns that indicate the presence of computer worms or other unwanted traffic. By collecting traffic information summaries at agency gateways, Einstein gives US-CERT analysts and participating agencies a big-picture view of bad activity on federal networks.

US-CERT’s security analysts use Einstein data to correlate cross-agency security incidents. Participating agencies can go to a secure Web portal to view their own network gateway data.

Einstein doesn’t eliminate the need for intrusion-detection systems on agencies’ networks, said Mike Witt, deputy director of US-CERT. But the 24-hour monitoring program does give individual agencies a view of activity in other parts of the federal network infrastructure that could affect their own networks...

Ten agencies participate in Einstein, and four or five others have indicated they plan to join by the end of the year. Witt said DHS officials hope to have most Cabinet-level agencies in the program by the end of 2008. DHS will try to expand participation to more of the midsize and small federal agencies later, he said.

“Einstein is not mandatory, so we have to do a sales job with agencies,” Witt said. Witt wouldn’t name the agencies that have signed up. In a public presentation last year, however, a DHS official identified eight participants. They were DHS, DOT, the departments of State, Treasury and Education, the Federal Trade Commission, the Securities and Exchange Commission, and the U.S. Agency for International Development. The Justice Department has since joined the program.

This is just the sort of project I'd like to roll out at my new job, possibly combining Argus with ArgusEye, or maybe just Sguil without Snort. The idea is to be an internal security awareness provider for business units, offering them better insights into their network activity while using that data to monitor for attacks and respond to incidents more effectively.

After a pilot program to demonstrate the value of the approach, I would consider more robust options like an internally-developed product or a commercial option. I know of at least one large customer of mine who read my first book and built their own session and full content capture appliance for about $50,000, rated up to OC-48 for full content collection.

Note that Einstein is session data only, and from what I hear some people find its capabilities and data format lacking -- hence the desire to run something else, pairing session data with full content. Session data is very helpful but never sufficient for real investigations.
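To make "session data" concrete: a session (flow) record is just a summary tuple like source, destination, port, and byte count, and the value comes from rolling those records up so unusual patterns stand out. A minimal sketch with made-up records (the addresses and counts are illustrative, not Einstein data):

```python
from collections import Counter

# Hypothetical session records: (src IP, dst IP, dst port, bytes).
sessions = [
    ("10.1.1.5", "192.0.2.10", 80, 52340),
    ("10.1.1.5", "192.0.2.10", 80, 48100),
    ("10.1.1.7", "198.51.100.3", 6667, 900),
    ("10.1.1.7", "198.51.100.3", 6667, 1200),
]

# Summarize bytes per (src, dst, port) -- the kind of roll-up that
# makes a pattern like steady IRC traffic to one host stand out.
totals = Counter()
for src, dst, port, nbytes in sessions:
    totals[(src, dst, port)] += nbytes

for key, nbytes in totals.most_common():
    print(key, nbytes)
```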

Hope for Air Force Cyberoperators

Last November I wrote about the Air Force Cyberspace Command. I said:

I'd like to see the new Cyberspace Command sponsor a new Air Force Specialty Code (AFSC) for information warriors. The current Intel or Comm paradigm isn't suitable.

Today I read Air Force moves to populate Cyberspace Command:

The Air Force is developing plans for a dedicated force to populate the ranks of the service’s new Cyberspace Command, its commanding general said today.

Lt. Gen. Robert Elder, commander of the 8th Air Force and chief of the new command, said the service will finish deliberations on a force structure for the command within a year and then start filling those positions.

Once service officials have laid out career paths and training guidelines for the jobs, Elder said, recruits will be able to join what he called the Air Force’s cyberforce just as they could opt to become fighter pilots or navigators.

I hope this "cyberforce" is an AFSC for "cyberoperators." These "cyber" terms make me cringe, but whatever they are called it's important to create a new AFSC for these people. I left the USAF in 2001 because I was the only company-grade intelligence officer in the Air Force with hands-on computer network defense skills. Because I wasn't a comm officer or engineer, the personnel weenies at Air Force Personnel Command had no career path for me. That's why I pulled the ejection handles and joined the civilian workforce, where I could control my own career path.

I'd still like to see an independent Cyber Force to centralize information warfare capabilities alongside the Army, Navy, Air Force, and Marines.

Thursday, June 14, 2007

Seats for Bejtlich at Black Hat 2007 Filling

I'll be teaching two sessions of TCP/IP Weapons School, Black Hat Edition at Black Hat in Las Vegas, 28-29 July and 30-31 July 2007. This is the same class, just offered twice. The second session is already wait-listed. The only remaining seats are available for the first session. Thank you.

Wednesday, June 13, 2007

Why Digital Security?

Today I received the following email:

Hi Richard,

(Sorry for my bad English, i speak French...)

I'm one of your blog readers and i have just a little question about your (Ex) job, Consultant in IT security...

I'm very interested by IT security and i want to get a degree in this. In France, we have to write "motivation letter" to show why we are interested by the diploma. That's why i write to you to know a few things that you do in your job, what is interesting and what is boring ??

I figured I would say a few words here and then let all of you blog readers post your ideas too.

  • Likes:

    • Constant learning

    • Defending victims from attackers -- some kind of desire for justice

    • Community that values learning (but not necessarily education -- there's a difference)

    • Working with new technology

    • Financially rewarding for those with valuable skills

  • Dislikes:

    • Constantly changing landscape requires specialization and potential loss of big picture

    • Most attackers remain at large, meaning as a whole "security" never improves

    • Learning is being increasingly rated by the string of letters after one's name

    • Family system administration, especially for user applications on Windows that I have never seen; "But you work with computers!"

    • Charlatans, especially with letters and/or security clearances, rotating around the Beltway making lots of money without delivering value beyond a "filled billet"

What do you think?

Two Pre-Reviews

I'd like to mention two books that publishers were kind enough to send me recently. I plan to read these during upcoming flights or as part of my new, structured reading regimen that will accompany my plans for the second half of 2007. The first book is Windows Forensic Analysis Including DVD Toolkit by Harlan Carvey. I expect to learn a lot about Windows forensics reading this book. I do not perform host-based forensics regularly so I think Harlan's experience will be appreciated. The second book is Practical Packet Analysis by Chris Sanders. I'm reading this book for the same reason I read Computer Networking by Jeanna Matthews -- I want to see if it is a good book for beginners. The content of Chris' book seems very simple, but it might be just the right book for people starting their network traffic inspection careers. Incidentally, if you like the approach of using Ethereal/Wireshark to look at traffic that the author explains, you should look at Jeanna's 2005 book.

Security Application Instrumentation

Last year I mentioned ModSecurity in relation to a book by its author. As mentioned on the project Web site, "ModSecurity is an open source web application firewall that runs as an Apache module." In a sense Apache is both defending itself and reporting on attacks against itself. I consider these features to be forms of security application instrumentation. In a related development, today I learned about PHPIDS:

PHPIDS (PHP-Intrusion Detection System) is a simple to use, well structured, fast and state-of-the-art security layer for your PHP based web application. The IDS neither strips, sanitizes nor filters any malicious input, it simply recognizes when an attacker tries to break your site and reacts in exactly the way you want it to. Based on a set of approved and heavily tested filter rules any attack is given a numerical impact rating which makes it easy to decide what kind of action should follow the hacking attempt. This could range from simple logging to sending out an emergency mail to the development team, displaying a warning message for the attacker or even ending the user’s session.

This sort of functionality needs to be built into every application. It is not sufficient (reasons to follow) but it is required.

We used to (and still do) talk about hosts defending themselves. I agree that hosts should be able to defend themselves, but that does not mean we should abandon network-level defenses (as the misguided Jericho Forum advocates).

Today we need to talk about applications defending themselves. When they are under attack they need to tell us, and when they are abused, subverted, or breached they would ideally also tell us.

In the future (now would be nice, but not practical yet) we'll need data to defend itself. That's a nice idea but the implementation isn't ready yet (or even fully conceptualized, I would argue).

Returning to applications: why is it necessary for an application to detect and prevent attacks against itself? Increasingly it is too difficult for third parties (think network infrastructure) to understand what applications are doing. If it's tough for inspection and prevention systems it's even tougher for humans. The best people to understand what's happening to an application are (presumably) the people who wrote it. (If an application's creator can't even understand what he/she developed, there's a sign not to deploy it!) Developers must share that knowledge via mechanisms that report on the state of the application, but in a security-minded manner that goes beyond the mainly performance and fault monitoring of today.

(Remember monitoring usually develops first for performance, then fault, then security, and finally compliance.)

So why isn't security application instrumentation sufficient? The problem is one should not place one's trust entirely in the hands of the target. One of Marcus Ranum's best pieces of wisdom for me was the distinction between "trusted" and "trustworthy." Just because you trust an application doesn't make it worthy of that trust. Just because you have no alternative but to "trust" an application doesn't make it trustworthy either. Trustworthy systems behave in the manner you expect and can be validated by systems outside of the influence of the target.

For most of my career my mechanism for determining whether systems are trustworthy has been network sensors. That's why they sit at the top of my TaoSecurity Enterprise Trust Pyramid. In a host- and application-centric world I might consider a second system with one-way direct memory access to a target to be the most trusted source of information on the target, followed by a host reporting its own memory, then other mechanisms including application state, logs, etc.

You can't entirely trust the target because it can be compromised and told to lie. Of course all elements of my trust pyramid (or any trust pyramid) can be compromised but the degree of difficulty (should) increase as isolation from the target is achieved.

I'll end this post with a plea to developers. Right now you're being taught (hopefully) "secure coding." I would like to see the next innovation be security application instrumentation, where you devise your application to report not only performance and fault logging, but also security and compliance logging. Ideally the application will be self-defending as well, perhaps offering less vulnerability exposure as attacks increase (being aware of DoS conditions of course).

Eventually we should all be wearing the LogLogic banner at right, because security will be more about analyzing and acting on instrumented applications and data and less about inspecting a security product's interpretation of attacks.

I am not trying to revoke my Response to Bruce Schneier Wired Story. SAI doesn't mean the end of the security industry. I saw in this story that Bruce Schneier is still on another planet:

In a lunch presentation, security expert Bruce Schneier of BT Counterpane also predicted a sea change. "Long term, I don't really see a need for a separate security market," he said, calling for greater integration of security technology into everyday hardware, software, and services.

"You don't buy a car and then go buy anti-lock brakes from the company that developed them," Schneier quipped. "The safety features are bought and built in by the company that makes the car." Major companies such as Microsoft and Cisco are paving the way for this approach by building more and more security features directly into their products, he noted.

"That doesn't mean that security becomes less important or that there won't be innovation," Schneier said. "But in 10 years, I don't think we'll be going to conferences like these, that focus only on security. Those issues will be handled as part of broader discussions of business and technology."

Schneier needs to study more history. I'll be at Black Hat or its equivalent in ten years, and he'll probably be there as another keynote!

Pete Lindstrom reminds me of my post that says car analogies fail unless the security concern is caused by an intelligent adversary. Inertia is not an intelligent adversary with certain threat advantages.

One final note on adversaries: first they DoS'd us (violating availability). Now they're stealing from us (violating confidentiality). When will they start modifying our data in ways that benefit them in financial and other ways (violating integrity)? We will not be able to stop all of it and we will need our applications and data to help tell us what is happening.

Incidentally, since I'm on the subject of logs I wanted to briefly say why I usually disagree with people who use the term "Tcpdump logs" or "Pcap logs." If you're storing full content network traffic, you are not "logging." You are collecting the actual data that was transferred on the wire. That is collection, not logging. If I copy and store every fax that's sent to a department, I'm not logging the faxes -- I am collecting them. A log would say:

1819 Wed 13 Jun 07 FAX RMB to ARB 3 pgs

or similar. In this sense, session data could be considered logging, since sessions are records of conversations and not the actual conversations.

That said, logs are great because a single good log message can be more informative than a ton of content. For example, I would much rather read a log that says file X was transferred via SMB from user RMB to user ARB, etc., than try to interpret the SMB traffic manually.

Tuesday, June 12, 2007

Threat Model vs Attack Model

This is just a brief post on terminology. Recently I've heard people discussing "threat models" and "attack models." When I reviewed Gary McGraw's excellent Software Security I said the following:

Gary is not afraid to point out the problems with other interpretations of the software security problem. I almost fell out of my chair when I read his critique on pp 140-7 and p 213 of Microsoft's improper use of terms like "threat" in their so-called "threat model." Gary is absolutely right to say Microsoft is performing "risk analysis," not "threat analysis." (I laughed when I read him describe Microsoft's "Threat Modeling" as "[t]he unfortunately titled book" on p 310.) I examine this issue deeper in my reviews of Microsoft's books.

In other words, what Microsoft calls "threat modeling" is actually a form of risk analysis. So what is a threat model?

Four years ago I wrote Threat Matrix Chart Clarifies Definition of "Threat", which showed the sorts of components one should analyze when doing threat modeling. I wrote:

It shows the five components used to judge a threat: existence, capability, history, intentions, and targeting.

That is how one models threats. It has nothing to do with the specifics of the attack. That is attack modeling.

Attack modeling concentrates on the nature of an attack, not the threats conducting them. I mentioned this in my review of Microsoft's Writing Secure Code, 2nd Ed:

[W]henever you read "threat trees," [in this misguided Microsoft book] think "attack trees" -- and remember Bruce Schneier worked hard on these but is apparently ignored by Microsoft.

That is still true -- Bruce Schneier's work on attack trees and attack modeling is correct in its terminology and its applications. Attack trees are a way to perform attack modeling. Attack modeling can be done separate from threat modeling, meaning one can develop an attack tree that any sufficient threat could execute.
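To make the distinction concrete, here is an abridged version of the classic "open safe" attack tree from Schneier's 1999 Dr. Dobb's article (reconstructed from memory, so treat the node labels as approximate):

```
Goal: Open safe
  1. Pick lock
  2. Learn combination
     2.1 Find written combination
     2.2 Get combination from target
         2.2.1 Threaten
         2.2.2 Blackmail
         2.2.3 Eavesdrop
         2.2.4 Bribe
  3. Cut open safe
  4. Install improperly
```

Notice that nothing in the tree depends on who the attacker is; any sufficiently capable threat could traverse it, which is exactly why attack modeling stands apart from threat modeling.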

This understanding also means most organizations will have more useful results performing attack modeling and not threat modeling, because most organizations (outside law enforcement and the intel community) lack any real threat knowledge. With the help of a pen testing team an organization can develop realistic attack models and therefore effective countermeasures. This is Ira Winkler's point when he says most organizations aren't equipped to deal with threats and instead they should mitigate vulnerabilities that any threat might attack.

This does not mean I am embracing vulnerability-centric security. I still believe threats are the primary security problem, but only those chartered and equipped to deter, apprehend, prosecute, and incarcerate threats should do so. The rest of us should focus our resources on what we can, but take every step to get law enforcement and the military to do the real work of threat removal.

I'm Not Dead

Several of you leaving comments, posting your own blog entries, and sending me email seem to think my job at General Electric means I am dead. I am not dead, God willing. Let me reprint the second-to-last paragraph from that post:

What about writing here, or articles, or books? My boss supports my blogging and writing. I have never made a practice of posting "Look what I found at this client!" and he does not expect me to start doing so at GE. You can expect to read more about the sorts of techniques I'm using to address security concerns but never incident specifics or any information which would compromise my relationship with GE. The same goes for articles and books. I plan to continue writing the Snort Report and eventually write the new works listed on my books page.

This blog has never been a site for "tell-all" activity. I don't discuss specifics about clients, or national security matters, or private information shared in a confidential manner. I started this blog when I worked at Foundstone, continued it at ManTech, and kept blogging with TaoSecurity. I intend to remain blogging, time- and interest-willing. Thank you.

One for Ken Belva

I mentioned Ken Belva's thoughts in Thoughts on Virtual Trust last year. If you don't know Ken's thoughts on "virtual trust" please read that post before continuing further. I refrained from pointing a finger at Ken's Apple DRM example after Steve Jobs posted his Thoughts on Music, where DRM won't apply to Apple music (thereby depriving Ken of one of his case studies and questioning his logic).

Now I'd really like an answer to this article: Retailers Fuming Over Card Data Security Rules; Claim PCI standard shifts burden to them, could alienate customers. Here are a few excerpts:

Several retailers last week bristled at having to comply with the Payment Card Industry (PCI) Data Security Standard, complaining that they carry an unfair burden in securing credit card data.

In interviews and speeches at the annual ERIexchange conference here, retail executives also complained that implementing the PCI standard is costly and could alienate customers...

Robert Fort, director of IT at Virgin Entertainment Group Inc. in Los Angeles... contended that meeting the requirements doesn’t boost a retailer’s bottom line. “There’s no direct return on investment,” he said. “It will not help us sell CDs.”
(emphasis added)

Ken -- what do you think about that? I would respond to the vendor by saying customers who can't trust vendors won't give the vendor their business. I might also use an argument that says vendors could be held liable for negligence. Those are two thoughts.

Monday, June 11, 2007

Cisco Router as DNS Server Demonstrates Functional Aggregation

Did you know that a sufficiently new Cisco router can be a DNS server? Apparently this functionality is not that new (dating from 2005), but I did not hear of it until I saw the article Cisco Router: The Swiss Army Knife of Network Services. I think this is a good example of what I may start calling "functional aggregation," whereby features previously provided on separate servers are collapsed to one box. I know others call that "convergence," but that term applies to so many topics (voice + video + data, etc.) that I'll use FA here. It doesn't matter anyway, because some marketing drone will invent a catchy name that everyone will end up using at some point.

One interesting aspect of this story is that it points to a simple blog post called Use your Cisco router as a primary DNS server that shows how easy it is to configure this feature. That post is then followed by a new article called Protecting the primary DNS server on your router, which explains how a router as DNS server can be overwhelmed faster than a separate, robust server. The comments to the second post also provide a justification for DNS on router functionality, namely it saves the cost of a dedicated DNS box if your router is underutilized.
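For flavor, here is a minimal configuration sketch based on my understanding of the IOS DNS server feature; the hostname and addresses are placeholders, not taken from the articles:

```
! Hedged sketch -- enable the router's DNS server function (IOS 12.3 or
! later; verify against your release notes). All names and addresses
! below are placeholders.
ip dns server                          ! turn on DNS server functionality
ip name-server           ! upstream resolver for forwarded queries
ip host printer.example.local  ! static record answered locally
```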

The danger not mentioned in those posts is that a DNS server is another potentially exploitable service. The greater the number of services exposed to the public on a system, the greater the likelihood for compromise. It's one of the reasons people have tried to run separate services on separate servers for years.

I think we'll see the following trends based on these sorts of developments.

  • The poorest businesses (in terms of budget, expertise, and time) will seek to not maintain any IT infrastructure at all, and will rely on outsourced services. FA means nothing to them because they don't maintain gear.

  • Moderately equipped businesses will adopt some FA solutions because they are "good enough" or "just good enough," given their constraints.

  • Well-equipped businesses whose staff can make the case for stand-alone functionality (e.g., separate DNS servers) will avoid FA solutions for critical infrastructure. Otherwise they will outsource or use FA to save money.

I think these arguments apply equally well to security services such as those found in so-called "unified" security appliances.

Bejtlich Joining General Electric as Director of Incident Response

Two years ago this month I left my corporate job to focus on being an independent consultant through TaoSecurity. Today I am pleased to announce a new professional development. Starting next month I will be joining General Electric as Director of Incident Response, based near Manassas, VA, working for GE's Chief Information Security Officer, Grady Summers at GE HQ in Fairfield, CT.

My new boss reads my blog and contacted me after reading my Security Responsibilities post five months ago. He has created the new Director position as a single corporate focal point for incident response, threat assessment, and ediscovery, working with GE's six business units and corporate HQ security staff. Grady reports to GE's Chief Technology Officer, Greg Simpson, and works closely with GE's Chief Security Officer, Brig Gen (USAF, ret) Frank Taylor. I will be building a team and I am pleased to have already met my first team member, a forensic investigator.

I am very excited about this new job. First, the scope of the challenge is enormous. GE is probably just bigger than the Air Force (my closest related employer), with 350,000 users. The company's revenues last year exceeded $160 billion and its market capitalization currently exceeds $380 billion. GE is number 6 on the 2007 Fortune 500. In brief, I don't think there's a way for me to get bored working to address GE's digital security concerns.

Second, I look forward to building and working with a team that has a defined, long-term objective. With few exceptions my consulting work has been short-duration engagements which don't allow me to develop security processes or implement products for the long term. I have been impressed by all of the security staff from GE I've met thus far, and encouraged by articles like Does GE Have the Best IT? and GE's repeated rank as the number one most admired company in America.

Third, I hope this new role will improve my family's quality of life. As an independent consultant I was constantly juggling marketing, public relations, business development, client relationships, accounting, invoicing, and other non-tech tasks while trying to deliver quality services to customers and stay current on threats, vulnerabilities, and assets. Knowing my new "customer" on a continuous basis means I can focus my energy on my corporate work and not consider every waking moment a reason to accomplish another TaoSecurity task. While the financial rewards of working independently probably exceeded those of working for a corporation, the personal cost of maintaining that business cycle is very high. I am also confident my travel requirements will be less for GE than they were for TaoSecurity.

What does this mean for TaoSecurity? Simply put, I will not be accepting any new consulting work or private teaching requests that cannot be accomplished by the end of this month. I am currently fulfilling existing obligations, some of which may extend beyond the end of the month. I am not joining GE because my independent work dried up; in fact, I've had to turn down four large engagements within the last week because they would have to occur after the end of this month.

If you're wondering about public training classes, I recommend you review my TaoSecurity training schedule. You'll see only the following are left:

That's it. I do not have any plans to teach again, although I have not ruled out the occasional conference presentation. There will definitely not be any private classes, and I imagine the only public venue for a half-, full-, or two-day class would be USENIX or perhaps Black Hat Training next year, if either is interested. The bottom line is that if you want to take one of these classes while I still offer them, please sign up as soon as possible.

What about writing here, or articles, or books? My boss supports my blogging and writing. I have never made a practice of posting "Look what I found at this client!" and he does not expect me to start doing so at GE. You can expect to read more about the sorts of techniques I'm using to address security concerns but never incident specifics or any information which would compromise my relationship with GE. The same goes for articles and books. I plan to continue writing the Snort Report and eventually write the new works listed on my books page.

Finally, I should note that both of my grandfathers retired from GE, so I have some personal history with the company. I'd like to thank Grady Summers and everyone at GE who has helped me join this great organization.

Sunday, June 10, 2007

Triple-Boot Thinkpad x60s

Many years ago I thought multibooting operating systems was quite the cool thing to do. This was before VMware when my budget was tighter and so was my living space. Recently with my new laptop configuration I moved to an all-Ubuntu setup, upon which I loaded VMware Server. VMware Server had Windows XP and FreeBSD 6.2 VMs at its disposal. I've spent nearly all my time in Ubuntu, never really needing to turn to Windows or FreeBSD for desktop work.

With the arrival of Ubuntu 7.04, I decided to try a new approach with my laptop. The OEM HDD was 60 GB, which is somewhat small given my use of VMs. Furthermore, I fairly regularly buy brand new hard drives when I make major operating system shifts. I think the best backup I could ever have is an entire old hard drive, and HDDs are cheap compared to the value of the data on them. Moving from 6.10 to 7.04 seemed like a good time to replace the 60 GB HDD with a Seagate Momentus 5400.3 ST9160821AS 160GB 5400 RPM 8MB Cache Serial ATA150.

I also decided to go back to a multiboot situation for those extraordinary circumstances when VMware just won't do. I foresee two situations which require something besides Linux. First, I've been unable to use Skype or other sound utilities on Ubuntu due to some weird sound driver issues. This compels me to reload Windows XP from the recovery CD in order to access the Windows sound drivers shipped by Lenovo. Second, I am attending Black Hat this summer, and I don't trust Windows or Linux to that crowd. Sure, FreeBSD is "just as vulnerable" but the majority of the attackers will be looking for Windows and Linux users. Booting into FreeBSD and staying there will reduce my exposure surface.

In order to triple-boot, I started by reinstalling Windows XP from the Lenovo recovery CD and DVD. Good grief, what a painful and long process. Sure, it worked, but it just looked ugly. Thankfully the media booted from a USB optical drive. I also have to remove all the vendor garbage installed on top of Windows. Ugh. At least Windows XP is available now.

Next, I installed Ubuntu 7.04 (desktop edition), again using the external optical drive. I used Gparted to create a partition for FreeBSD, then let Ubuntu take the remaining biggest chunk for itself. Ubuntu installed without a hitch -- very nice.

Finally, I installed FreeBSD. Being my favorite OS, I was ambitious. I decided to try the newest 7.0 CURRENT snapshot (200706), released within the last few days. Unfortunately, I couldn't get FreeBSD to install from the external optical drive. I decided to try PXE booting, but I couldn't get all the way through the installation. I then downshifted to 6.2 RELEASE and my life got easier. Here's what I set up.

I made my old Thinkpad a20p the PXE server. I created a /freebsd directory to hold the contents of the /boot directory on the 6.2 RELEASE CD-ROM, i.e.:

orr:/# ls -ald /freebsd
drwxr-xr-x 3 root wheel 512 Jun 10 20:18 /freebsd
orr:/# ls /freebsd/boot/
beastie.4th boot2 kernel loader.rc screen.4th
boot cdboot loader mbr support.4th
boot0 defaults loader.4th mfsroot
boot0sio device.hints loader.conf modules
boot1 frames.4th pxeboot

Notice the presence of mfsroot in that directory. That is not what ships on the CD -- mfsroot.gz is the original file:

orr:/# ls -al /cdrom/boot/mfsroot.gz
-r--r--r-- 1 root wheel 1063814 Jan 12 06:33 /cdrom/boot/mfsroot.gz

Use 'gzip -d mfsroot.gz' to create the mfsroot file needed by the installation process. Also, edit loader.conf to have the following:

orr:/# cat /freebsd/boot/loader.conf
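The output of that cat did not survive archiving. Based on the FreeBSD 6.x Handbook's PXE installation procedure (an assumption, not a recovered listing), the file likely contained knobs along these lines:

```
# Load the uncompressed mfsroot as the install-time root file system.
mfsroot_load="YES"
mfsroot_type="mfs_root"
mfsroot_name="/boot/mfsroot"
```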

Now I enabled TFTP and told it where to find what the installation needed:

#tftp dgram udp wait root /usr/libexec/tftpd tftpd -l -s /tftpboot
tftp dgram udp wait root /usr/libexec/tftpd tftpd -l /freebsd

Note what the original says and how I changed it. (Dropping the -s flag was probably not strictly necessary.) Be sure to start inetd by running 'inetd' as root.

PXE needs a DHCP server. I installed isc-dhcp3-server and created the following conf file:

orr:/# grep -v ^# /usr/local/etc/dhcpd.conf

option domain-name "";
option domain-name-servers;

default-lease-time 6000;
max-lease-time 72000;

ddns-update-style ad-hoc;

log-facility local7;

subnet netmask {
option routers;

host neely {
hardware ethernet 00:16:D3:23:7C:A7;
filename "boot/pxeboot";
option root-path "";
}
}

The PXE/DHCP server is and it's connected via crossover cable to, the x60s.
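The address values in the listing above were stripped when the post was archived and can't be recovered. For reference, a complete dhcpd.conf for a setup like this, using placeholder RFC 1918 addresses ( for the server, for the x60s), might look like:

```
# Hypothetical reconstruction -- every address below is a placeholder.
option domain-name-servers;

default-lease-time 6000;
max-lease-time 72000;

ddns-update-style ad-hoc;
log-facility local7;

subnet netmask {
    option routers;

    host neely {
        hardware ethernet 00:16:D3:23:7C:A7;
        fixed-address;      # assumption: static lease for the client
        filename "boot/pxeboot";        # fetched via TFTP from this server
        option root-path "";
    }
}
```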

I added these to /etc/rc.conf to enable DHCP.

dhcpd_enable="YES" # dhcpd enabled?
dhcpd_flags="-q" # command option(s)
dhcpd_conf="/usr/local/etc/dhcpd.conf" # configuration file
dhcpd_ifaces="fxp0" # ethernet interface(s)

fxp0 is the interface connected to the x60s.

Thus far the PXE client will be able to access the pxeboot program, but the installer needs NFS to continue the process. For that I created this /etc/exports file:

orr:/# cat /etc/exports
/freebsd -ro -network -mask
/cdrom -ro -network -mask
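The network and mask values in the listing above were stripped in archiving; with placeholder addressing for a crossover link, the file would read:

```
# Placeholder reconstruction -- the network values are assumptions.
/freebsd -ro -network -mask
/cdrom   -ro -network -mask
```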

These lines in /etc/rc.conf enabled inetd and NFS:
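The rc.conf lines themselves did not survive archiving. On FreeBSD 6.x, the standard knobs for inetd plus an NFS server would be (my reconstruction, not the original listing):

```
inetd_enable="YES"        # needed for the tftpd entry shown earlier
rpcbind_enable="YES"      # RPC portmapper, required by NFS
nfs_server_enable="YES"
mountd_flags="-r"
```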


It's a good idea to test what's exported.

orr:/# showmount -e
Exports list on

Initially I wanted to set up the a20p as a NATing gateway for the x60s, so the x60s could reach the Internet. I ended up just pointing the installer towards and using NFS to retrieve the installation sets. I installed the User distribution because I want to try the new modular Xorg 7.2 later. When done FreeBSD looked like this via df -h:

Filesystem Size Used Avail Capacity Mounted on
/dev/ad4s3a 1.9G 36M 1.7G 2% /
devfs 1.0K 1.0K 0B 100% /dev
/dev/ad4s3e 989M 22K 910M 0% /home
/dev/ad4s3g 2.9G 4.0K 2.7G 0% /nsm
/dev/ad4s3h 1.1G 12K 1.0G 0% /tmp
/dev/ad4s3d 9.7G 306M 8.6G 3% /usr
/dev/ad4s3f 2.9G 7.9M 2.7G 0% /var

The major setback for the x60s with FreeBSD is lack of native support for the wireless NIC. I plan to try the ClearChain Intel 3945ABG driver at some point. Right now I'm just using an old wireless NIC recognized as wi0.

To enable FreeBSD in Ubuntu's GRUB boot loader, I added this entry:

title FreeBSD
root (hd0,2,a)
kernel /boot/loader
chainloader +1

I based this on the following fdisk -l output from Linux.

Disk /dev/sda: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sda1 * 1 3060 24579418+ 7 HPFS/NTFS
/dev/sda2 18830 19457 5044410 6 FAT16
/dev/sda3 3061 6081 24266182+ a5 FreeBSD
/dev/sda4 6082 18829 102398310 5 Extended
/dev/sda5 * 6082 18305 98189248+ 83 Linux
/dev/sda6 18306 18829 4208998+ 82 Linux swap / Solaris

Partition table entries are not in disk order

Overall I'm pleased with this setup. I would have liked to try FreeBSD 7.0 CURRENT, but 6.2 will meet my needs. FreeBSD on the Lenovo Thinkpad X60s by M.C. Widerkrantz has some tips.

I plan to begin moving data to the new setup using an AZiO ENC211SU31 eSATA+USB 2.0 External 2.5" Hard Drive Enclosure that will hold the original 60 GB HDD.

Saturday, June 09, 2007

PowerLite S4 Multimedia Projector

This week I taught TCP/IP Weapons School, Layers 2-3 at Techno Security 2007 in Myrtle Beach, SC. I enjoyed teaching the class, especially since several students were repeat customers. Two were even alumni from classes I taught at Foundstone five years ago! Because the cost of renting a projector and screen from the hotel (and even from a rental vendor) seemed outrageous, I decided to buy my own. I purchased an Epson PowerLite S4 Multimedia Projector and Da-Lite 72263 Versatol Tripod Screen 70"x70" Matte White with Keystone Elim for use in the class. I was extremely pleased with both. In fact, right after I bought the Epson projector I saw it covered in a USA TODAY review, which helped validate my purchase.

If you're in the market for a projector and screen combination for less than $800 (or even $700 if you're not time-crunched, as I was) then I think you'll like these products.