Friday, October 27, 2006

Response to Daily Dave Thread

I don't subscribe to the Daily Dave (Aitel) mailing list, but I do keep a link to the archives on my interests page. Some of the offensive security world's superstars hang out on that list, so it makes for good reading.

The offensive side was out in force in yesterday's thread, where Dave's "lots of monkeys staring at a" post says:

My feeling is that IDS is 1980's technology and doesn't work anymore. This makes Sourcefire and Counterpane valuable because they let people fill the checkbox at the lowest possible cost, but if it's free for all IBM customers to throw an IDS in the mix then the price of that checkbox is going to get driven down as well.

First, it's kind of neat to see anyone speaking about "IDS" instead of "IPS" here. I think this reflects Dave's background working for everyone's favorite three-letter agency. The spooks and .mil types (like me) tend to be the last people to even think about detection these days.

Second, it seems to be popular to think of "IDS" as strictly a signature-based technology, as Gadi Evron believes:

IDS devices are signature based and try to detect bad behaviour using, erm, a sniffer or equivalent.

That hasn't been true for a while, even if you're talking about Snort. Sure, there are tons of signatures, but they're certainly not just for content matching. If you're thinking about Bro, signatures aren't really even the main issue -- protocol anomaly detection is.
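For instance, a Snort rule can fire on protocol, direction, and payload size without any content keyword at all. Here is an illustrative rule -- the SID and the specific threshold are invented for this example, not drawn from any shipping ruleset:

```
# Hypothetical rule: no content match anywhere. It keys on an
# oversized UDP DNS payload (classic pre-EDNS0 DNS tops out at
# 512 bytes), using only header and size criteria.
alert udp $EXTERNAL_NET any -> $HOME_NET 53 \
    (msg:"Oversized DNS query - possible tunneling"; \
    dsize:>512; classtype:attempted-recon; sid:1000001; rev:1;)
```

Nothing in that rule inspects content; it is pure header and size logic, which is exactly the kind of detection the "signatures are just pattern matching" critique ignores.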

Python demigod Dave posts another message that is a little worrisome:

Making IDS part of a defense in depth strategy is giving it some credit for actually providing defense, which it doesn't do. The people who win the IDS game are the people who spend the least money on it. This is why security outsourcing makes money - it's just as worthless as maintaining the IDS yourself, but it costs less. Likewise, Snort is a great IDS solution because it does nothing but it does it cheaper.

The technology curve is towards complex, encrypted, asynchronous protocols. The further into time you look, the worse the chances are that sniffing traffic is an answer to anything.

The market is slowly realizing this technology's time has past, but in the meantime lots of people are making giant bus-loads of cash. Good for them. But IDS technology isn't relevant to a security discussion in this day and age and it's not going to be anytime soon.

I will agree that many commercial managed security monitoring services are worthless, to the extent that they are ticket- and malware-oriented. However, the idea that Snort "does nothing" is just wrong. Hopefully Dave is just being inflammatory to spur discussion. Sure, Snort is not going to detect an arbitrary outbound encrypted covert channel using port 443. That doesn't mean Snort isn't useful for the hundreds of other attack patterns still seen in the wild.

Since the majority of the posters to this thread work on the offensive side, I doubt they have read any of my books. For example, reverse engineering guru Halvar Flake follows up with this insight:

I still agree with the concept of replacing an IDS with just a large quantity of tapes on which to archive all traffic. IDSs will never alert you to an attack-in-progress, and by just dumping everything onto a disk somewhere you can at least do a halfways-decent forensics job thereafter. Since everybody and his dog is doing cryptoshellcode these days you won't be all-knowing, but at least you should be able to properly identify which machine got owned first.

Welcome to network security monitoring, albeit at least a decade late. The fact that the criminal underground is using covert and encrypted channels now doesn't mean they weren't used 10-plus years ago, when smart people in the spook and .mil worlds needed a way to gain some sort of awareness of network activities by more dangerous adversaries.

Tom Ptacek, the most respected of the old-school IDS critics, isn't convinced:

I am waiting for someone to tell me the story about how an IDS saved their bacon. I'm not interested in the story about how it found the guy with the spyware infection or the bot installation; secops teams find those things all the time in their firewall logs and they don't freak out about it when they do.

The last times I manned a console full-time as a "SOC monkey," for the Air Force in 1998-2001 and at Ball Aerospace in 2001-2002, we found intrusions all the time. I expect several people in the #snort-gui channel where I idle also have stories to share. I'll have more to say on this later.

Tom continues:

This "signature" vs. "real intrusion detection" thing is a big red herring. Intrusion detection has been an active field of research for over 15 years now and apart from Tripwire I can't point to anything operationally valuable it has produced.

This sounds like the "Snort is worthless" argument Dave proposed. Finally:

Halvar, when you figure out how to parallelize enough striped tape I/O to keep up with a gigE connection, then, Halvar, then I will respect you.

This is another common argument. Almost every detection critic argues their pipes are too big to do any useful full content collection. Let's just say that is not a problem for everyone. Many, many organizations connect to the Internet using OC-3s (155 Mbps), fractional OC-3s, T-3s (45 Mbps) and below. Full content collection, especially at the frac OC-3 (say 60 Mbps) and lower, is no problem -- even for commodity hardware, if you use Intel NICs, a solid OS, and fast, large hard drives. Even if you drop some small percentage of the traffic, so what? What are the odds that you drop everything that is relevant to your investigation, all the time?
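A back-of-the-envelope calculation makes the point. This is a sketch; the 50% average utilization figure is an assumption for illustration, not a measurement:

```python
def full_content_storage_gb_per_day(link_mbps, utilization=0.5):
    """Rough daily storage needed to record a link at the given
    average utilization, ignoring capture-file overhead."""
    bytes_per_sec = link_mbps * 1_000_000 / 8 * utilization
    return bytes_per_sec * 86_400 / 1_000_000_000  # decimal GB

# A fractional OC-3 at 60 Mbps, averaging 50% utilized, needs
# roughly 324 GB per day -- a lot, but within reach of commodity
# disks even in 2006 terms.
print(round(full_content_storage_gb_per_day(60)))
```

Run the numbers for your own link speed before declaring full content collection impossible; for most organizations the answer is a disk-shopping problem, not a research problem.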

What if your pipes really are too big for full content collection, say in the core of the network? I would argue that's not the place to do full content collection, but let's say you are told to "do something" about detection in a high-bandwidth environment. That's where the other NSM data types come into play -- namely session data and statistical data. Can't save every packet, or you don't want to? Save sessions describing who talked to who, when, using what protocols and services, and how much data was transferred. That is absolute gold for traffic analysis, and it doesn't matter if it's encrypted. At the very least you can profile the traffic statistically.
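The flow summarization just described can be sketched in a few lines. The packet tuples below are invented for illustration; a real deployment would derive them from tools like Argus, SANCP, or NetFlow rather than hand-built lists:

```python
from collections import defaultdict

# Each packet: (src, dst, proto, dst_port, bytes). These values
# are hypothetical, standing in for records pulled off the wire.
packets = [
    ("10.1.1.5", "192.0.2.80",    "tcp", 443, 1500),
    ("10.1.1.5", "192.0.2.80",    "tcp", 443,  900),
    ("10.1.1.9", "198.51.100.25", "tcp",  25,  400),
]

# Collapse packets into session records keyed on the flow tuple.
sessions = defaultdict(lambda: {"packets": 0, "bytes": 0})
for src, dst, proto, dport, size in packets:
    key = (src, dst, proto, dport)
    sessions[key]["packets"] += 1
    sessions[key]["bytes"] += size

# Who talked to whom, over what service, and how much -- useful
# whether or not the payload was encrypted.
for (src, dst, proto, dport), stats in sessions.items():
    print(f"{src} -> {dst} {proto}/{dport}: "
          f"{stats['packets']} pkts, {stats['bytes']} bytes")
```

Note that nothing here depends on payload visibility; that is why session data survives encryption when content inspection does not.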

The root of the problem with this discussion is the narrow idea that a magic box can sit on an arbitrary network and tell you when something "bad" happens. That absolutely won't be possible, at least not for every imaginable "bad" case. The "IDS" has been pigeonholed in the same way the "firewall" has -- as a product and not a real system.

A standard "IDS" isn't an "intrusion detection system" at all; it's an attack indication system. Snort gives you a hint that something bad might be happening. You need the rest of your NSM data to determine what is going on. You can also start with non-alert NSM data (as described in this war story) and investigate intrusions.

Similarly, a firewall isn't necessarily stopping attacks; it should be enforcing an access control policy.

A real detection system identifies deviations from policy, and perhaps should be called a network policy violation detector. A real network policy enforcement system prevents policy violations. The point is that neither has to be boxed into an appliance and sold as a "NPVD" or "NPES". (As you can see, acronyms which tend to accurately describe a system's functionality are completely marketing-unfriendly.)

I'll conclude by saying that I agree with Dave about "monkeys" staring at screens. Many of those sorts of analysts are not doing NSM-centric work that would truly discover intrusions. Yes, the network is a tough place to detect. However, I've argued before that in an age of ubiquitous kernel-mode rootkits, NSM is needed more than ever. If you can't trust a rootkit-controlled host to tell you what's happening, why would you ignore the network? Sure, the traffic could be covert, encrypted, and so forth, but if the pattern of activity isn't normal you can verify that at least something suspicious is happening.

It's time for another book.

Thoughts on Sourcefire IPO

In the spirit of not trying to repeat what everyone else blogs, I'll keep this post on the Sourcefire IPO brief. The must-read post belongs to Mike Rothman -- great work Mike.

I'm excited by this development. I'll probably even buy some Sourcefire stock, just so I can attend the shareholders meeting. I've never owned stock in a friend's company, so this would be novel enough to justify the purchase.

However, in the long term I expect Sourcefire to be acquired anyway. I stand by my idea that all network security functions will collapse to the switch, something Richard Stiennon called Secure Network Fabric. This means Sourcefire either needs to sell switches that compete with Cisco (unlikely) or be bought by Cisco (possibly) or a Cisco competitor (probably).

Customers are growing increasingly disillusioned with buying more and more point products. If they simply perceive that existing equipment (switches and routers) can be upgraded to implement new security features, they'll pursue that path. Alternatively, they'll include the new functionality in the next switch/router technology refresh. At the most I see a "switch plus one" model, where no more than one stand-alone security device will support the core switch/router infrastructure. Everything that a switch/router cannot perform, security-wise, will be expected of the "firewall," which Marcus Ranum originally defined as a security system and not simply a product.

At some point a majority of hosts will be virtualized, and many network and host security measures will be performed by the hypervisor anyway.

Wednesday, October 25, 2006

Counterpane Bought: Investors Relax

Eighteen months after MCI bought MSSP NetSec, another telecom has bought another MSSP. This time, BT bought Counterpane. I had guessed earlier that Counterpane was desperate. At least the investors who poured four rounds of venture capital into Counterpane can realize some sort of return. The announcement concluded with this statement:

As at 31 December 2004 the audited gross assets of the business were $6.8m.

That doesn't sound very promising.

I expect a good amount of reorganization and removal of personnel. BT will want the low-level analysts to stay, but some will probably leave. The middle managers will want to stay, but BT will send them packing. Since Counterpane's brain trust has largely disappeared, they only need to keep Bruce Schneier as their "visibility guy" or "mantelpiece."

Good luck to them -- I imagine they will be morphed into protecting BT's cloud.

Update: After reading helpful comments and stories like this, it appears Counterpane's investors took a big loss if the company sold for around $40 million. According to the Counterpane series C VC funding press release:

The Goldman Sachs Group, Inc., and Morgan Stanley Dean Witter Private Equity, who all invested further in this round, bringing the total capital raised by Counterpane to $58 million.

Then add $20 million of series D VC funding and the total is $78 million. It looks like the "return on investment" I mentioned earlier was very negative.


Counterpane will run as a standalone operation until April 2007, before being integrated in to BT's Professional Services organisation.

Tuesday, October 24, 2006

Bejtlich Speaking on Tenable Webinar

Ron Gula of Tenable Security invited me to speak at an upcoming Tenable Webinar. You can register for the event now. It will take place at 1000 ET Friday 17 November 2006. We'll talk about network security problems facing the enterprise, my favorite security books and resources, and take questions live.

Monday, October 23, 2006

Bejtlich Speaking on Insider Threat

I will participate in the DE Communications Inside Job Webinar at 1100 ET on Thursday 9 November 2006. I plan to discuss why traditional externally-focused security techniques and tools are not well suited to deterring, detecting, and removing insider threats.

By insider threat I do not mean flawed services on desktops. I mean parties with the capabilities and intentions to exploit vulnerabilities in assets. I guarantee you will hear me say that the "80%" figure is a myth.

Even though I am appearing with at least one other speaker (Jerry Shenk), this is not a debate. It will be a few people discussing an important subject. I have a few other Webinars in the works and all should be free. Please join us if you have the time and bandwidth.

Update: Here's a press release. I'm glad they included this quote:

"Insiders do not account for the mythical 80% of security incidents, but their privileged access allows them to inflict devastating harm upon organizations. Security tools and tactics designed to combat the traditional external threat will not work as well, or at all, against insiders," commented Mr. Bejtlich.

Right on.

Sunday, October 22, 2006

Pre-Review of Four Books

Several publishers were kind enough to send me review copies of four new books. The first, which I requested, is Cisco Press' Storage Networking Protocol Fundamentals by James Long. I requested a copy of this book while starting to read a book on securing storage area networks and network attached storage. Basically, the book I was reading is a disaster. I decided this new Cisco Press book looked promising, so I plan to read it first and then turn to the security-specific SAN/NAS book. I'll review the two as a set later.

Next is Syngress' Hack the Stack: Using Snort and Ethereal to Master the 8 Layers of An Insecure Network by Michael Gregg and friends. This book was interesting to me because I am already teaching TCP/IP Weapons School (TWS), which teaches TCP/IP by examining security-related traffic at various OSI model layers. A quick look at this book makes it seem worth reading, but there is definitely room for a future book based on TWS.

Remember I am teaching days one and two of TWS through USENIX LISA and days three and four independently at the same hotel, after USENIX LISA. See the information at the bottom of this post for more details.

I am not sure if I will read the next two books. Prentice Hall shipped me Security in Computing, 4th Ed by Charles P. Pfleeger and Shari Lawrence Pfleeger. I've never read anything by either author. This book looks like a university text, so I may read it in tandem with Matt Bishop's Computer Security: Art and Science in preparation for academic study. The last book is Addison Wesley's Telecommunications Essentials, 2nd Ed by Lillian Goleniewski. I read and reviewed the first edition, which I liked as a thorough review of the telecom space. This makes me hesitant to devote reading time to this second edition. The review site might let me review it (unlike some other later-edition books) because I do not see my old review (or any reviews) listed with this new edition.

Right now I am in the middle of a massive reading push. I have several "free" hours each night between baby feedings, so I am working my way through a pile of books on software security. I haven't read a lot in this area, because I am not a professional programmer. About two years ago I did read, review, and enjoy Building Secure Software by Gary McGraw and John Viega. Thus far, Gary's latest book (Software Security: Building Security In) is my favorite, particularly for its proper use of terms like "threat" and its criticism of those who abuse it (e.g., Microsoft). I'll have far more to say about this in the reviews of these books, probably next week.

Thursday, October 19, 2006

Sign Up for Tenable Webinars

I'm not sure if you're aware of these, but Ron Gula of Tenable Security is conducting a series of Webinars on a variety of interesting network security topics. I watched Tuesday's edition on vulnerability management.

The Webinars are not a selling vehicle for Tenable products. Instead, Ron explains one or more aspects of the security scene. If you know Ron, you recognize that he knows network security better than almost anyone out there. The next Webinar is scheduled for today, and all are free.

Tuesday, October 17, 2006

Bloom's Hierarchy for Digital Security Learning

Twenty years ago, when some of my readers were busy being born, I was a high school freshman. My favorite instructor, Don Stavely, taught history. One of the educational devices he used was Bloom et al.'s Taxonomy of the Cognitive Domain, pictured at left. This hierarchy, which travels from bottom to top, is a way to describe a student's level of understanding of a given subject.

These descriptions from Purdue are helpful:

  • Knowledge entails the ability to recall or state information.

  • Comprehension entails the ability to give meaning to information.

  • Application entails the ability to use knowledge or principles in new or real-life situations.

  • Analysis entails the ability to break down complex information into simpler parts and to understand the relationships among the parts.

  • Synthesis entails the act of creating something that did not exist before by integrating information that had been learned at lower levels of the hierarchy.

  • Evaluation entails the ability to make judgments based on previous levels of learning to compare a product of some kind against a designated standard.

I find this to be a useful way to evaluate mastery of a given subject.

For example, I propose many people detest technical certifications because they perceive the candidates as simply working at the knowledge level.

I think many people were disappointed by the removal of the SANS practical requirement, because meeting that challenge required work at the synthesis level -- a very high mark indeed.

I keep this hierarchy in mind when I review books. If I am reading material related to network security monitoring, I can absolutely make judgments not only about accuracy but also about relevance and worth. That's an evaluation-level activity. On the other hand, books about reverse engineering malicious code might strain my ability to review at the comprehension or even the knowledge level when discussing assembly language.

If you're responsible for hiring people, you might consider using some of these ideas in your interviews. A security architect should demonstrate skills at the synthesis or evaluation levels, while those on the entry level should function at least at the knowledge level.

Thoughts on Gates Security Memo

While reading Gary McGraw's great book Software Security, I had a chance to re-read the famous Bill Gates security memo of January 2002. I wasn't blogging back then, so I didn't record my reaction to it. Almost five years later, the following excerpt struck me:

[E]ven more important than any of these new capabilities is the fact that it is designed from the ground up to deliver Trustworthy Computing. What I mean by this is that customers will always be able to rely on these systems to be available and to secure their information. Trustworthy Computing is computing that is as available, reliable and secure as electricity, water services and telephony.

Today, in the developed world, we do not worry about electricity and water services being available. With telephony, we rely both on its availability and its security for conducting highly confidential business transactions without worrying that information about who we call or what we say will be compromised. Computing falls well short of this, ranging from the individual user who isn't willing to add a new application because it might destabilize their system, to a corporation that moves slowly to embrace e-business because today's platforms don't make the grade.
(emphasis added)

Hold the phone (no pun intended). "[A]vailable, reliable and secure as electricity, water services and telephony"? You mean the solid electricity system that blacked out the northeast US in 2003? Or the two water pipes running into New York City that could be disrupted or poisoned? Or the telephone system owned by the Phone Masters in the late 1990s or spied upon by three-letter agencies in this decade?

I propose that the main reason that electricity, water, and telephony are (wrongly) considered "secure" is the smaller number of threats facing them. Networked digital resources are exposed to far greater numbers of threats than analog resources like electrical power plants, water treatment facilities, and telephone closets. This is changing as all of these analog resources are being controlled by IP-enabled systems with global reachability.

This makes me wonder if digital security is being held to a higher, possibly impossible, standard. Is there any other system in the world that could be accessed by any threat, at any time? This is not a wise-guy question -- I'd appreciate your thoughts on this. What sorts of man-made systems are relentlessly under attack by intelligent adversaries? I'm adding intelligence here to remove comparisons to diseases, weather, earthquakes, and so on.

The first system that came to mind was the modern casino. People are always trying to cheat, so the threat level is high. A variety of financial systems come to mind, although I'm trying to avoid systems with close ties to digital functionality. Physical security probably has a few useful lessons.

What do you think?

Enterprise Rights Management

The October 2006 Information Security Magazine features a great story titled Safe Exchanges. It discusses software it calls "enterprise rights management" (ERM):

Enterprise rights management is technology that allows corporations to continuously control and protect documents, email and other corporate content through the use of encryption and security policies that determine access rights.

I found this case study compelling:

Fenwick & West was an early adopter, choosing ERM software by startup SealedMedia, a company recently acquired by Stellent.

Kesner took advantage of SealedMedia's free 30-day trial, tested it with several clients and was wowed by the results. His law firm's clients use hundreds of data types, including Microsoft Office, Adobe Acrobat, accounting databases, architectural drawings and computer-aided design documents--all of which SealedMedia supports.

In addition to the software's broad support, he was impressed by its ease of use. For the firm's lawyers, clients and outsiders to access protected files, they download a small plug-in to their computers. When they try to open protected files on the extranet, the plug-in checks in with Fenwick & West's servers to make sure they have the right to access the documents. It takes about five minutes to get most users up and running.
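The check-before-open flow described in that case study can be sketched as follows. Every name here -- the rights table, the users, the document -- is hypothetical, invented for illustration; this is not SealedMedia's actual API:

```python
# Minimal sketch of an ERM plug-in's decision: the document ships
# encrypted, and the key is released only after a rights server
# approves this user for this document. The table below stands in
# for a policy server like the one the law firm operates.
RIGHTS_SERVER = {
    ("alice@client.example", "merger-draft.pdf"): "read",
    ("bob@client.example", "merger-draft.pdf"): None,  # revoked
}

def open_protected(user, doc):
    right = RIGHTS_SERVER.get((user, doc))
    if right is None:
        raise PermissionError(f"{user} may not open {doc}")
    # A real ERM client would fetch the decryption key here; the
    # plaintext never exists without a fresh policy check.
    return f"decrypted contents of {doc} ({right} access)"

print(open_protected("alice@client.example", "merger-draft.pdf"))
```

The key design point is that policy lives on the server, so rights can be changed or revoked even after the encrypted document has been distributed.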

We're seeing defenses collapse to the level of data, as described by luminaries like Dan Geer. So-called ERM software helps implement this defensive strategy.

ERM, or what might also be called Digital Rights Management (DRM), is no panacea, however. An intruder sitting on a company desktop can read all the documents that the legitimate user can read, at least when the documents are being displayed to the user. Documents cannot be considered "secure" when they must be rendered to users of vulnerable platforms.

I expect to see systems like ERM widely deployed, although I wonder how well they will be accepted when encryption products tend to stump most users. We don't see ubiquitous deployment of encrypted email or documents, even though the technology has been around for years. Perhaps moving the trust decision out of the hands of non-technical users (as must be the case with technologies like PGP/GPG) will help facilitate deployment?

Monday, October 16, 2006

Extrusion Detection Sightings

I've noticed the term extrusion detection appearing more frequently, usually tied to the latest buzzphrase -- "insider threat." The GSA-loving magazine Federal Computer Weekly recently mentioned the following:

Emerging tools known as extrusion-detection systems are helping government agencies and private companies detect whether sensitive information is leaving their organizations...

“Our goal is to monitor traffic from the inside going out,” said Daniel Hedrick, product manager at Vericept and a former intelligence officer in the Air Force. “If I see content going out the door, with or without the approval or the knowledge of the user, I will automatically encrypt it.”
(emphasis added)

Wow, that's something. So once this "content" is "encrypted," what does the intended recipient do with it? I'm hoping this is an example of a writer misreporting Mr. Hedrick's answers to questions.

I mildly dislike seeing terms become hyphenated (e.g., "extrusion-detection") for no reason. I strongly dislike people claiming to invent terms. Consider the following story:

Symantec Corp. says its latest products and partnerships will thwart online outlaws who attempt to raid company databases for sensitive information that can be used for a variety of fraud...

Symantec executives cited the growing number of data breaches and the resulting exposure of confidential information as the motivating factors behind the release of the tool. "It has a feature that I call extrusion detection, which alerts administrators when sensitive data is leaving the network," Steve Trilling, Symantec's vice president of research and advanced development, said in an interview recently. "And it operates on a copy of the network traffic, so it doesn't slow anything down."
(emphasis added)

Now I know who coined the phrase extrusion detection... not. As I wrote three years ago, Robert Moskowitz and Frank Knobbe have the best claims, dating back to November 1999.

Finally, this morning I stepped one toe into the audiobook world by recording an excerpt of my latest book Extrusion Detection within a joint Addison-Wesley and (free) project. I don't know why people pirate my books when more and more parts are appearing online in one form or another!

When this recording (about 10 minutes) is available, I'll post a notice here. If you find the idea interesting, please let me know.

Thanks also for the many kind comments about the birth of my daughter. My family (including me, obviously!) appreciates it greatly. I also like seeing the sites that blindly repost my content (without attribution) reporting the addition to my family. :)

Wednesday, October 11, 2006

More Reasons to Discuss Threats

The word "threat" is popular. What used to be Bleeding Edge Snort is now Bleeding Edge Threats. It's a great site but I think it should have avoided using the term "threat." I think "Bleeding Edge Security" would have been better, but apparently that's not cool enough?

I noticed that OWASP is trying to define various security terms as well. (Because OWASP means Open Web Application Security Project, I didn't say "OWASP project." Those who say "ATM machine," "NIC card," and "CAC card," please take note.) OWASP has Wiki pages for attack, vulnerability, countermeasure, and, yes, threat.

For an example of a project that is largely not falling for the threat hype, check out the Vulnerability Type Distributions in CVE published last week. It provides research results on publicly reported vulnerabilities.

It might be helpful to look at already published work when thinking about what these terms mean. Good sources include the following.

The CWE Classification Tree contains a section labelled "Motivation/Intent," with an "Intentional" subsection containing items like "Trojan Horse," "Trapdoor," "Logic/Time Bomb," and "Spyware." Note these are not intended to be considered weaknesses, in the sense of calling a "Trojan Horse" a "weakness." Rather, it seems the CWE considers the inclusion of such code to be a weakness in and of itself. This might be similar to an "Easter Egg."

While you're busy thinking of these security issues, you might want to download the latest release of Helix. I used it to try a recent version of Brian Carrier's Sleuthkit. I launched the Helix Live CD .iso within VMware, then used NFS on another system to export a dd image from Real Digital Forensics for browsing within Autopsy. I am sad to see the Sguil client is not in Helix anymore, though.

Tuesday, October 10, 2006

Pre-Review: Programming Python, 3rd Ed

I'd like to thank the fine folks at O'Reilly for sending me a review copy of Programming Python, 3rd Ed. I've added this book to my other set of programming books waiting to be read. I'll probably start with several titles from Apress, namely Beginning Python, Dive Into Python, and then end the Apress titles with Foundations of Python Network Programming, since network programming is my main interest.

I'll use O'Reilly's Programming Python, 3rd Ed and Python Cookbook, 2nd Ed as references. Two years ago I tried reading Learning Python, 2nd Ed but found it not that helpful as an introduction -- hence my interest in the new Apress titles.

Monday, October 09, 2006

Reviews of Digital Forensics Books Posted

I just posted three new reviews of digital forensics books. The first is File System Forensic Analysis by Brian Carrier. Here is a link to the five star review.

The second is Windows Forensics by Chad Steel. Here is a link to the four star review.

The third is EnCase Computer Forensics by Steve Bunting and William Wei. Here is a link to the three star review.

All three books share the same introduction.

I decided to read and review three digital forensics books in order to gauge their strengths and weaknesses: "File System Forensic Analysis" (FSFA) by Brian Carrier, "Windows Forensics" (WF) by Chad Steel, and "EnCase Computer Forensics" (ECF) by Steve Bunting and William Wei. All three books contain the word "forensics" in the title, but they are very different. If you want authoritative and deeply technical guidance on understanding file systems, read FSFA. If you want to focus on understanding Windows from an investigator's standpoint, read WF. If you want to know more about EnCase (and are willing to tolerate or ignore information about forensics itself), read ECF.

In the spirit of full disclosure I should mention I am co-author of a forensics book ("Real Digital Forensics") and Brian Carrier cites my book "The Tao of Network Security Monitoring" on p 10. I tried not to let those facts sway my reviews.

Sunday, October 08, 2006

Government Contracting Lists from FCW

As a consultant near the Beltway, it helps to understand the competition and potential partners. I found the following lists to be helpful. They appeared in the 4 Sep 06 print issue of FCW.

Saturday, October 07, 2006

Security Is Not Refrigeration

Analogies are not the best way to make an argument, but they help when debating abstract concepts like "virtual trust".

Consider the refrigerated train car at left. Refrigeration is definitely a "business enabler." Without refrigeration, food producers on the west coast couldn't sell their goods to consumers on the east coast. Refrigeration opened new markets and keeps them open.

However, refrigeration is not the business. Refrigeration is a means to an end -- namely selling food to hungry people. Refrigeration does not generate value; growing and selling food does. (Refrigeration is only the business for those that sell refrigerated train cars and supporting devices.)

You might think "security" is like refrigeration. Like refrigeration, security could be said to "enable" business. Like refrigeration, security does not generate value; selling a product or service through a "secure" channel does.

So why is "security" really not refrigeration? The enemy of refrigeration is heat. Heat is an aspect of nature. Heat is not intelligent. Heat does not adapt to overcome the refrigeration technology deployed against it. Heat does not choose its targets. One cannot deter or jail or kill heat.

The enemy of "security" is the intruder. The intruder is a threat, meaning a party with the capabilities and intentions to exploit a vulnerability in an asset. Threats are intelligent, they adapt, they persist, they choose, and they react to their environment. In fact, an environment which on Monday seems perfectly "secure" can be absolutely compromised on Wednesday by the release of an exploit in response to Tuesday's Microsoft vulnerability announcements.

Returning to the idea of "enablement" -- honestly, who cares? I'll name some other functions that enable business -- lawyers, human resources, facility staff. The bottom line is that "virtual trust" is an attempt to "align" (great CISO term) security with "business objectives," just as IT is trying to "align" with business objectives. The reason "IT alignment" has a chance to succeed in creating real business value is that IT is becoming, in itself, a vendor of goods and services. Unless a business is actually selling security -- like an MSSP -- security does not generate value.

Why is anyone even bothering to debate this? The answer is money. If your work is viewed as a "cost center," the ultimate goal is to remove your budget and fire you. If you're seen as an "enabler," you're at least seen as being relevant. If you can spin "enablement" into "revenue generation," that's even better! Spend $X on security and get $Y in return on investment! Unfortunately that is not possible.

Finally, I don't think anyone would consider me "anti-security." I'm not arguing that security is irrelevant. In fact, without security a business can be absolutely destroyed. However, you won't find me saying that security makes anyone money. Some argue that spending money on security prevents greater loss down the line, perhaps by containing an intrusion before it avalanches into an immense compromise. That's still loss prevention. Of course security "enables" business, but enablement doesn't generate revenue; it supports a revenue-generating product or service.
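
That loss-prevention framing can even be quantified. Here is a toy sketch using the classic annualized-loss-expectancy arithmetic, with invented numbers (none of this comes from the paper): security spending is justified as avoided loss, never as revenue.

```shell
# Toy loss-prevention arithmetic (all figures invented for illustration)
sle=100000          # single loss expectancy: dollars lost per incident
aro_before=3        # expected incidents per year without the control
aro_after=1         # expected incidents per year with the control
control_cost=50000  # annual cost of the security control
avoided=$(( sle * (aro_before - aro_after) - control_cost ))
echo "Net loss avoided: \$${avoided}"
```

Even in the best case, the number that comes out is a loss not suffered, not a dollar earned.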

This is probably my last word on this in a while. I need to turn back to my own business!

Thoughts from IATF Meeting

I try to attend meetings of the Information Assurance Technical Forum once a year. I last visited in 2003 and 2005. The following are some thoughts from the meeting I attended two weeks ago. They are not an attempt to authoritatively summarize or describe years of net-centric thought and work by the US Department of Defense. These are just a few thoughts based on the presentations I saw in an unclassified environment.

Prior to seeing this diagram I had heard a lot about "net-centric warfare," but I had no real grasp of the underlying concept. It seemed more like a buzzword. Now I understand the idea of getting information from any source to the people who need it, instead of, say, Air Force sensors sending data to Air Force decision-makers who feed Air Force assets.

Given the net-centric model, DoD needs to move away from a "System High" model of security to a "Transactional" model. In the System High world, you essentially define a perimeter by classification (e.g., "Secret"), build a network for that classification (e.g., SIPRNET), and let anyone with a Secret clearance see anything on SIPRNET. This is procedure-based security at its best (actually, its worst).

This figure shows how DoD wants to transition from the existing model to the new model. It's moving away from the current "baseline" in "increments," starting with Increment 1. Completion of Increment 1 is expected with the conclusion of the next five-year DoD plan, lasting 2008-2013 (!).

DoD's experience with coalition warfare in Iraq and Afghanistan is driving these changes. One example involved the need to establish a separate network for every coalition partner. In one so-called "coalition village," DoD had to build 14 separate networks (US-UK, US-Poland, US-Italy, etc.). This is obviously costly and time-consuming. In the new model, these coalition partners will share one network but have access regulated by the transactional security model. Virtualization and Digital Rights Management are going to be key for implementing these ideas.

This figure explains the layers present in the Enterprise Sensor Grid, and the idea that DoD is moving away from a strictly perimeter-based monitoring model to one that reaches all the way to end hosts.

One of the briefers offered a set of metrics that I found interesting.

  • % of Red Team simulated attacks that are successful

  • % of remote access to DoD information systems and DoD access to the Internet regulated by positive technical controls such as proxy services and screened subnets

  • % of DoD systems and applications that conform to the DoD Ports, Protocols, and Service Policy

  • % of standardized and approved IA and IA enabled COTS products available to protect DoD sensitive and classified data (vs. GOTS)

  • % of NIPRNet Traffic that is encrypted

I found the first metric to be especially interesting because the DoD uses it to measure the number of attacks they are missing. In other words, DoD (wisely) understands that it doesn't detect all attacks. They use Red Team exercises to estimate the number of attacks they don't detect. This is exactly the sort of metric I believe to be useful. DoD also measures (and wants to vastly decrease) the amount of time needed to detect and respond to intrusions. Again, they base their metrics on Red Team exercises.
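
As a toy illustration of that first metric, the missed-attack percentage falls straight out of the Red Team numbers. The figures below are invented for illustration, not DoD's:

```shell
# Estimate the percentage of attacks missed, based on a Red Team exercise
# (invented figures for illustration only)
simulated=50    # simulated attacks launched by the Red Team
detected=35     # attacks the defenders actually noticed
missed_pct=$(( (simulated - detected) * 100 / simulated ))
echo "Missed: ${missed_pct}%"
```

The value of the metric is that it measures what you did not see, which no count of detected incidents can do.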

I left the conference with a few concerns. First, I heard an emphasis on "pushing systems to DMZs." The idea is to move systems accessible to the public or coalition partners out of internal networks and into DMZs. While I agree that security zones are important, I wonder whether DoD is fighting the last war again. Server-side attacks were the predominant model of the 1990s, bleeding somewhat into this decade. For the last three or four years, however, client-side attacks have been all the rage. DoD is moving to persistent and then pervasive monitoring (which includes end hosts), but I hope they don't ignore client-side vulnerabilities. On the other hand, the rise of Web Services might reinvigorate DMZ-focused security.

After hearing one briefer joke about not understanding so-called "honey" technologies (honeypots, etc.), I worried that some of these decision-makers don't understand the attacks facing DoD networks. They certainly understand the threats; those groups change, but not as quickly as the tools and techniques they employ. Keeping up with the vulnerabilities, and with the ways to exploit them, is extremely challenging and does not map at all to five-year plans.

A second problem involves the so-called "black core." This is an encrypted network which will eventually dominate the DoD enterprise security model. This is similar to Microsoft's reported internal use of IPSec everywhere.

The black core emphasizes confidentiality, and to a certain degree integrity. It seems to sacrifice visibility. An IATF attendee made this point in a question. If just about everything is encrypted, what's the point of network-based monitoring? One of the speakers replied that flow-based monitoring (i.e., analyzing session data) will still be helpful, but sessions only take you so far. I wonder if DoD might implement some "visibility taps" that decrypt-inspect-encrypt at various points in the black core?

Speaking of encryption, crypto appeared to be a major factor in DoD plans. This isn't surprising, given the IATF's sponsor. Still, most security incidents don't happen because of failed crypto. Intrusions are often the result of improper configuration and operation. I didn't hear much about addressing those problems, yet I heard that DoD is cutting IA personnel and funding!

Third, IPv6 wasn't formally mentioned in any of the presentations. An attendee had to ask about it specifically. An audience member from the Office of the Secretary of Defense said IPv6 would work with DoD's plans, but I think the audience was skeptical.

Finally, DoD is relying heavily on vendors to build the equipment and technology needed to implement its vision. Ideas of "protecting data in transit" (PDIT) and "protecting data at rest" (PDAR) are great concepts, but implementation questions remain. DoD also seemed uninterested in data stored outside its own systems. An attendee asked how DoD plans to handle that concern, but it appears to have been largely ignored. This is a growing issue as we approach the end of this decade, and I predict it will be a major one in the next.

Incidentally, while driving to the IATF meeting I listened to a story on NPR about Boeing's plans for border security. One of the engineers in the story talked about the false positive problems caused by tumbleweed! Maybe I should consider investigating cross-discipline intrusion detection problems (i.e., digital, financial, homeland security, and so on)?


Thursday, October 05, 2006

Review of Web Application Security Books Posted

Just posted: my two reviews of books about Web application security. The first is Hacking Exposed: Web Applications, 2nd Edition by Joel Scambray, Mike Shema, and Caleb Sima. Here is a link to the five star review.

The second is Professional Pen Testing for Web Applications by Andres Andreu. Here is a link to the four star review.

Both reviews share the same introduction.

I recently received copies of Hacking Exposed: Web Applications, 2nd Ed (HE:WA2E) by Joel Scambray, Mike Shema, and Caleb Sima, and Professional Pen Testing for Web Applications (PPTFWA) by Andres Andreu. I read HE:WA2E first, then PPTFWA. Both are excellent books, but I expect potential readers want to know which is best for them. I could honestly recommend readers buy either (or both) books. Most people should start by reading HE:WA2E, and then fill in gaps by reading PPTFWA.

Update: A torrent for the Web App Honeypot is here. You can download the VMware image directly from Wrox here. The root password is Pa55w0rd.

Wednesday, October 04, 2006

Notes on Net Optics Think Tank

Last week I attended and spoke at the latest Net Optics Think Tank. I've presented for Net Optics twice before, but this was the first event held in northern Virginia.

The first half of the event consisted of two briefings. The first discussed tap technology. This was supposed to be a basic introduction but I learned quite a bit, especially with regards to fiber optics. Specifically, I learned of some cases where customers reverse cables when plugging in their taps, thereby causing lots of tough-to-troubleshoot problems. Furthermore, as customers move from Gigabit over fiber to 10 Gigabit over fiber, they are encountering cabling issues. Gigabit is much more forgiving than 10 Gig. At 10 Gig, you apparently have to pay close attention to the specifications, such as core size.

I learned that Net Optics is considering ways to "tag" or "label" packets collected by their link aggregator taps. When discussing matrix switches, it occurred to me that those devices are a great way to implement on-demand monitoring while keeping true to the tenets of Visible Ops. Rather than monkeying around with a switch SPAN port, risking making a problematic change, you tell the matrix switch which port you want to monitor. The switch is never touched.

The same idea applies to bypass switches. Net Optics (and their customers) basically convinced me that it's a bad idea to ship an appliance with a bypass switch embedded as a NIC in a security appliance. It's far better (if you have the rack space) to have a separate bypass switch. This allows you to completely power down and remove the "inline" security appliance with no effect on the network. This isn't possible with an integrated bypass NIC.

The second briefing covered the Net Optics iTap product line, which I covered several months ago. Dennis Carpio (pictured at left) gave that briefing. Basically Net Optics is moving this "intelligent Tap" functionality into all of their products. I told them I would like to see the tap inspect and classify the traffic it sees, namely by doing port independent protocol identification. I would also like to see the iTaps support 802.1X, IPv6, SNMPv3, and a HTTPS Web interface.

The iTap might also support filtering at the monitoring ports. This would reduce the load of a sensor on the tap. For example, you could tell the iTap not to pass ARP or non-IP traffic to the sensor. Besides continuing to add features to taps without adding cost, Net Optics is also reducing their size. They will be able to fit six taps into 1U. They're also moving to replacing fixed ports with SFPs.

During the second half of the day Net Optics shared ideas for future products. I'll keep this to myself, since this was not exactly meant for broadcast on the Internet. Basically, if you have a network traffic access requirement you're trying to meet, get in contact with me. I can put you in touch with the right people at Net Optics and they will be able to meet your demands. I am not getting any kind of referral fee -- I just trust the people at this company to do the right thing.

Expect to see more reporting on their gear as I get demo products to test.

Tell Intel What You Think

This thread clued me in to the problems OpenBSD is having getting documentation and firmware redistribution rights for Intel wireless NICs. Theo's letter is not what I would want an Intel decision-maker to read. However, Kenneth J Hendrickson's comment is exactly what I used as a template for an email to Intel's point of contact on this matter -- majid [dot] awad [at] intel [dot] com.

As a FreeBSD user, I recognize that drivers for Linux are not going to help me use my wireless cards. This Slashdot comment explains key points as well.

If you want to use Intel NICs with native drivers, send an email like the one Kenneth sent (but not a duplicate -- explain the situation in your own words). I just did.

Thoughts on Virtual Trust

I've said before that there is no return on security investment (ROSI). This argument appears to have morphed again in the form of a paper titled Creating Business Through Virtual Trust. A Technorati search will show you other comments on this idea. These are mine.

First, I agree with others who say "virtual trust" should not be "virtual" -- it's either "trust" or it's not. That's not a major point though.

Second, the thesis for the paper appears to be the following, as shown in the abstract.

Business is concerned with the creation of new entities and assets that generate cash. Information security, by contrast, is traditionally concerned with protecting these entities and assets. In this paper we examine a perspective which currently exists but is largely dormant in the information security field. We maintain that information security can be actively involved in the creation of business and that the skills required to create commercial activity must be added to the information security professional's intellectual tool set. We also present evidence to demonstrate that the capability of security to create business, which we designate by the term "virtual trust", may become a dominant paradigm for how to think about information security.

The authors provide this example:

Apple's iTunes employed Digital Rights Management (DRM) technologies to create a new product and, hence, a new revenue stream. Over 1 billion songs have been downloaded from iTunes. In the case of iTunes, DRM works by restricting the number of CPUs on which the .mp3 will play. The songs are also stored in a proprietary, encrypted format. These two factors, at minimum, erect a prohibitive barrier and thereby reduce the likelihood that an end user will trade songs. The various security mechanisms used by Apple's iTunes DRM created the Virtual Trust necessary to persuade the music industry that their rights will be protected digitally and be profitable.

I see nothing wrong with this statement. However, security is not making money in this example -- iTunes sales are making money. Imagine a world without DRM. Someone buys a song, then gives it to their friends. Apple and the music companies believe those extra copies are lost sales. What have we returned to? That's right -- a loss prevention model.

"Virtual Trust" is just another name for the Road House security model. Security is not making money for anyone in the bar Patrick Swayze patrols. Alcohol and food sales are making money.

Security may be a necessary condition for sales and a thousand other activities, but it doesn't make any money. Imagine this exchange between executives:

SecGuy: "Hey boss, I have a great idea for enabling business through virtual trust."

Boss: "What is it?"

SecGuy: "I'm going to secure a business initiative that will make millions!"

Boss: "What is the initiative?"

SecGuy: "Hmm, I don't know. But whatever it is I will secure it and enable business through virtual trust!"

Boss: "Sigh."

You can watch one of the authors of this paper post his thoughts on his blog.

Visit to Symantec Security Ops Center

Last week I was invited to visit the Symantec Security Operations Center (SOC) in Alexandria, VA. I had been there twice before, before they acquired Riptech and after. Jonah Paransky, Director of Product Management for MSS, answered many of my technical and business questions.

On this trip I learned that Symantec operates two 24x7x365 SOCs (in the USA and the UK), along with one in Europe, one in Japan, and other support centers. They do not collect and store security data at the SOC; instead, they have the data pouring into colocation facilities elsewhere.

Jonah said they see 3000-4000 "potential" incidents per day, of which about 100 are considered "hard kills." I couldn't tell if that meant actual compromise or not, but those 100 events per day prompt calls to customers.

We discussed the nature of their customer base. Symantec provides managed security services to many global 500 companies, some "with security staffs larger than Symantec's." I asked why such a company would bother with an MSSP. Jonah responded with these points:

  1. It's expensive to hire a team of analysts to inspect and react to security events on a 24x7x365 basis. My own experience says that requires hiring 12 people if you want 2 people on shift.

  2. Analysts watching a single company -- even a very large one -- get bored fast. This is true. If your company cares enough to staff a security operation like this, they are probably not bleeding like a .edu. (Ever notice the best papers come from .edu's and not .com's?)

  3. Symantec's global perspective, combined with local data, gives customers a sense of what is happening on the Internet and their networks. In other words, customers hire Symantec to provide a data feed that those large security staffs can interpret and work. Customers see "everything" that Symantec's analysts see. This is a change from the environment five years ago.
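
The staffing figure in the first point is easy to sanity-check. A back-of-envelope sketch, using my own assumptions (40-hour work weeks, headcount rounded up to cover leave, training, and turnover):

```shell
# Back-of-envelope staffing for 24x7 coverage with 2 analysts per shift
# (assumptions: 40-hour weeks; overhead for leave/training is my estimate)
coverage_hours=$(( 24 * 7 * 2 ))      # 336 analyst-hours of coverage per week
raw_heads=$(( coverage_hours / 40 ))  # 8 bodies before any overhead
echo "Raw headcount: ${raw_heads}; with overhead, roughly 12"
```

Eight is the floor with zero slack; once vacations, training, and attrition are factored in, twelve is a realistic number.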

Jonah noted that Symantec undergoes a Statement on Auditing Standards (SAS) No. 70 Type II audit every year. The process takes 2 months out of 12 (ouch). The end result, however, is a document that shows all of the processes Symantec follows, along with a measurement of Symantec's adherence to those processes. This is very valuable for customers, who previously would require Symantec to undergo on-demand audits prior to signing up for MSS. Now, Symantec hands them the SAS 70 Type II audit report and the customer is SOX-satisfied. Symantec also follows ISO 27001 standards.

Overall, I was impressed by what I saw. I was a little concerned by the emphasis on process over outcomes, however. While it's good to be audited for adherence to processes, it would be better to determine if those processes result in improved incident detection and response. I doubt the auditors attack Symantec clients to test the responsiveness of the analyst staff.

This thought came to mind during the visit, and also later when one of my customers called. They're planning to bring their monitoring services back in-house, as they fear their vendor is too malware-focused. This customer wants me to help them build an in-house network security monitoring, incident response, and forensics operation, to be completed by the end of next year. That should be a great project.

I'm considering the value of something like Symantec Early Warning Services for my customers who run their own MSS SOCs. An early warning service can provide indicators that might be absent in an operation focused on a single company.

Thank you to Jonah and the other folks from Symantec and their partners who answered my questions and let us tour their facility.

Chapter 3 from Extrusion Online

In addition to Chapter 18 from Tao, I noticed Chapter 3 from my third book, Extrusion Detection: Security Monitoring for Internal Intrusions is also online at

This book has been getting some attention because it starts with the premise that your internal network is compromised. Given that assumption, how do you detect, contain, and eradicate intruders on your network? The model applies well to insider and outsider threats.

I consider Extrusion to be a companion volume to Tao, and as such I recommend reading Tao first and then Extrusion. Real Digital Forensics is a book where network security monitoring, network incident response, and network forensics are integrated with host- and memory-centric security operations.

Bejtlich in Australia in May 2007

I mentioned earlier that I was invited to speak at the AusCERT Asia Pacific Information Technology Security Conference in Gold Coast, Australia. The conference takes place Sunday 20 May - Friday 25 May 2007.

I accepted the invitation, and I will probably deliver a short presentation and a longer (half-day or day-long) tutorial. After AusCERT, I plan to teach one- or two-day classes in Brisbane and/or Sydney. I will probably teach condensed versions of my training classes Network Security Operations and TCP/IP Weapons School.

As I develop the plans for all of these classes I will post details here and at If you would like me to keep you informed via email please write me: training [at] taosecurity [dot] com. Thank you.

Chapter 18 from Tao Online

With the launch of the new site, I can report that chapter 18 of my first book, The Tao of Network Security Monitoring: Beyond Intrusion Detection is now available online. Chapter 18 is "Tactics for Attacking Network Security Monitoring." It outlines technical means by which attackers may degrade or deny efforts to detect and respond to intrusions.

Keep an eye on I am working with the editor on a plan to contribute regular content for the site.

Recovering from Bad FreeBSD Packages

Recently I've encountered problems with some of the packages built by the FreeBSD team. In the case I described earlier, and were somehow damaged in the .tbz packages I installed on one of my systems. I recovered by using good copies from another system.

Yesterday I ran into the following error after I upgraded my packages.

orr:/home/richard$ firefox
/libexec/ /usr/local/lib/
Undefined symbol "gethostbyname_r"

orr:/home/richard$ thunderbird
/libexec/ /usr/local/lib/
Undefined symbol "gethostbyname_r"

Uh oh. Email I can live without, but it's difficult to troubleshoot a problem without a Web browser. I had to turn to another laptop running Windows (for shame) to search for clues. I found one post on a Chinese Website with the same errors, but nothing else.

I found the pkg-plist for the linux-firefox and linux-thunderbird ports contained this entry:


so it appeared the problem was in the packages for Firefox and Thunderbird.

I don't run either app on other computers, so I decided to try recovering by building new packages myself. I built them on another system, poweredge.

poweredge:/usr/ports/www/firefox# make package-recursive BATCH=1

I used "BATCH=1" to accept the defaults, thereby avoiding problems where the build process stops while waiting for me to select various options. I used package-recursive so the end result would include all packages needed for Firefox, in the event I needed to do a wholesale replacement of packages on my primary system.

When done I compared the libraries on the broken and package building systems.

orr:/home/richard$ ls -al /usr/local/lib/libpld*
-rw-r--r-- 1 root wheel 8960 Sep 24 02:59 /usr/local/lib/libplds4.a
lrwxr-xr-x 1 root wheel 13 Sep 24 02:59 /usr/local/lib/
-rwxr-xr-x 1 root wheel 184784 Sep 24 02:59 /usr/local/lib/

poweredge:/home/richard$ ls -al /usr/local/lib/libpld*
-rw-r--r-- 1 root wheel 8960 Oct 3 17:32 /usr/local/lib/libplds4.a
lrwxr-xr-x 1 root wheel 13 Oct 3 17:32 /usr/local/lib/
-rwxr-xr-x 1 root wheel 185062 Oct 3 17:32 /usr/local/lib/

The sizes are certainly different -- no need to hash them. I then copied over the 185062 file from poweredge and moved the 184784 version on orr out of the way. Sure enough, I was able to start Firefox and Thunderbird without any problems with the new in place.
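
When the sizes happen to match, a byte-for-byte comparison settles the question. A sketch using cmp(1), with hypothetical filenames standing in for the broken and rebuilt libraries:

```shell
# cmp(1) exits nonzero at the first differing byte; -s suppresses output.
# The filenames here are hypothetical stand-ins, not the actual libraries.
if cmp -s /usr/local/lib/libfoo.so /tmp/rebuilt/libfoo.so; then
    echo "identical"
else
    echo "files differ"
fi
```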

Tuesday, October 03, 2006

FreeBSD Update with IPv6

Is it possible to use FreeBSD Update with a host running FreeBSD in an IPv6 only scenario? It's not acceptable to leave it unpatched. The system in question is also extremely slow (P200, 32 MB RAM) so building via CVS is not a good option.

Maybe FreeBSD Update is hosted on a dual-stack (IPv4/IPv6) system?

p200:/root# freebsd-update fetch
Fetching updates signature...
fetch: Network is unreachable

Shoot. Well, I can reach a host (we'll call it "dualstack") that has both IPv4 and IPv6 addresses. dualstack can also reach my Squid proxy on the IPv4 network. I'll use SSH to port forward traffic needed by FreeBSD Update.

p200:/home/richard$ ssh -p 22022 -L 3128:squidproxy:3128 user@dualstack

In a new window I'll set the appropriate proxy environment variable.

p200:/root# setenv HTTP_PROXY http://localhost:3128
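
(The setenv syntax above is csh, the default root shell on FreeBSD. Under a Bourne-style shell the equivalent would be the following sketch; fetch(1), which freebsd-update uses under the hood, honors HTTP_PROXY from the environment.)

```shell
# Bourne-shell (sh) equivalent of the csh "setenv" command above
HTTP_PROXY=http://localhost:3128
export HTTP_PROXY
env | grep '^HTTP_PROXY='
```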

Now I run FreeBSD Update.

p200:/root# freebsd-update fetch
Fetching updates signature...
Fetching updates...
Fetching hash list signature...
Fetching hash list...
Examining local system...
Fetching updates...

It works very well. SSH port forwarding is only one solution to this problem, but it worked well enough here.

Essential FreeBSD Ports

In the spirit of documenting my FreeBSD system administration practices, I thought I would mention the FreeBSD ports I install on every system -- regardless of function. In the future you may see some of these migrate into the base installation, as is happening with Portsnap. Others are well-established but have stayed out of the base system for various reasons.

You can read a summary of many of these tools here as well.

Installing Screen Port with Remote FreeBSD Ports Tree

I don't like to keep ports trees on all of my FreeBSD systems. I prefer to install packages whenever possible. Upgrading those packages requires the ports tree, however. To use Portupgrade I NFS mount /usr/ports from a single system that keeps an up-to-date ports tree.

The major problem with this plan involves the sysutils/screen port. No package is created, and you can't build one yourself.

poweredge:/usr/ports/sysutils/screen# make package
===> screen-4.0.2_4 may not be packaged: Tends to loop using 100% CPU when used from
package - perhaps it hard-codes information about the build host.

Is there a way to build Screen without installing the ports tree?

First I tried just NFS mounting /usr/ports and trying to build the port. Here, poweredge is the box with the ports tree and mwmicro needs to run screen.

mwmicro:/root# mount
/dev/ad0s1a on / (ufs, local)
devfs on /dev (devfs, local)
/dev/ad0s1f on /home (ufs, local, soft-updates)
/dev/ad0s1g on /tmp (ufs, local, soft-updates)
/dev/ad0s1d on /usr (ufs, local, soft-updates)
/dev/ad0s1e on /var (ufs, local, soft-updates)
poweredge:/usr/ports on /usr/ports (nfs)

Note that poweredge already installed Screen using the ports tree.

mwmicro:/usr/ports/sysutils/screen# make
mwmicro:/usr/ports/sysutils/screen# make install
mwmicro:/usr/ports/sysutils/screen# which screen
screen: Command not found.

That didn't work. Why? It's because poweredge, the box exporting the ports tree, already installed Screen. There's a "work" directory already built. I can't issue a "make clean" command here.

mwmicro:/usr/ports/sysutils/screen# make clean
===> Cleaning for screen-4.0.2_4
===> /usr/ports/sysutils/screen/work not writable, skipping

Ok, maybe I can issue "make clean" on poweredge and continue?

poweredge:/usr/ports/sysutils/screen# make clean
===> Cleaning for screen-4.0.2_4

Now back to mwmicro:

mwmicro:/usr/ports/sysutils/screen# make
===> Vulnerability check disabled, database not found
===> Extracting for screen-4.0.2_4
=> MD5 Checksum OK for screen-4.0.2.tar.gz.
=> SHA256 Checksum OK for screen-4.0.2.tar.gz.
mkdir: /usr/ports/sysutils/screen/work: Permission denied
*** Error code 1

Stop in /usr/ports/sysutils/screen.

Oh, that's right. /usr/ports is mounted read-only. I probably don't want to overwrite the ports tree anyway by exporting it read-write. Luckily I read the FreeBSD Handbook after fzzzt in #freebsd suggested changing the work directory. I found the directive to change the location of the work directory and used it thus:

mwmicro:/usr/ports/sysutils/screen# make WRKDIRPREFIX=/tmp
===> Vulnerability check disabled, database not found
===> Extracting for screen-4.0.2_4
=> MD5 Checksum OK for screen-4.0.2.tar.gz.
=> SHA256 Checksum OK for screen-4.0.2.tar.gz.
===> Patching for screen-4.0.2_4
===> Applying FreeBSD patches for screen-4.0.2_4
===> Configuring for screen-4.0.2_4
configure: WARNING: you should use --build, --host, --target
this is screen version 4.0.2
checking for i386-portbld-freebsd6.1-gcc... cc
checking for C compiler default output... a.out
mwmicro:/usr/ports/sysutils/screen# make install WRKDIRPREFIX=/tmp
===> Installing for screen-4.0.2_4
===> Generating temporary packing list
mwmicro:/usr/ports/sysutils/screen# rehash
mwmicro:/usr/ports/sysutils/screen# which screen

Note that I used WRKDIRPREFIX=/tmp for both make and make install. Using that directive automatically made appropriate directories in /tmp:

mwmicro:/tmp/usr/ports/sysutils/screen/work# ls
.PLIST.flattened .configure_done.screen._usr_local
.PLIST.mktmp .extract_done.screen._usr_local
.PLIST.objdump .install_done.screen._usr_local
.PLIST.setuid .patch_done.screen._usr_local
.PLIST.writable screen-4.0.2

I plan to use this system whenever I need to build an application using the ports tree and cannot make a package to share on other systems.
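
If this becomes routine, the override can be made permanent on the NFS client instead of typed on every make invocation. A sketch, assuming /etc/make.conf is consulted for port builds as described in make.conf(5):

```shell
# /etc/make.conf on the NFS client -- every port build now uses local
# scratch space instead of the read-only NFS-mounted ports tree
WRKDIRPREFIX=/tmp
```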

Update: If you try to build from a remote ports tree but the distfile for the desired port hasn't been downloaded, use 'make fetch' on the NFS server:

poweredge:/usr/ports/sysutils/screen# make fetch
=> screen-4.0.3.tar.gz doesn't seem to exist in /usr/ports/distfiles/.
=> Attempting to fetch from
Service not available, closing control connection
=> Attempting to fetch from
screen-4.0.3.tar.gz 100% of 820 kB 52 kBps 00m00s

Then continue with the steps shown above.