Sunday, May 31, 2009

Information Security Incident Rating


I've been trying to describe to management how close various individual information assets (primarily computers -- desktops, laptops, etc.) are to the doomsday scenario of sensitive data exfiltrated by unauthorized parties. This isn't the only type of incident that worries me, but it's the one I decided to tackle first. I view this situation as a continuum, rather than a "risk" rating. I'm trying to summarize the state of affairs for an individual asset rather than "model risk."

In the far left column I've listed some terms that may be unfamiliar. The first three rows bear "Vuln" ratings. I list these because some of my businesses consider the discovery of a vulnerability in an asset to be an "incident" by itself. Traditional incident detectors and responders don't think this way, but I wanted to include this aspect of our problem set. For these first three rows, I consider these assets to exist without any discoverable or measurable adversary activity. In other words, assets of various levels of vulnerability are present, but no intruder is taking interest in them (as far as we can tell).

The next four rows (Cat 6, 3, 2, 1) should be familiar to those of you with military CIRT background. About 7 or 8 years ago I wrote this Category Descriptions document for Sguil. You'll remember Cat 6 as Reconnaissance, Cat 3 as Attempted Intrusion, Cat 2 as User Intrusion, and Cat 1 as Root/Admin Intrusion. I've mapped those "true incidents" here. These incidents indicate an intruder is taking interest in a system, to the degree that the intruder gains user or root level control of it. In the event the intruder doesn't need to gain control of the asset in order to steal data, you can simply jump to the appropriate description of the event in the final three rows.

The final three rows (Breach 3, 2, 1) are what you might consider "post exploitation" activities, or direct exploitation activities if no control of the asset is required in order to accomplish the adversary's data exfiltration mission. They loosely map to the reinforcement, consolidation, and pillage phases of compromise I outlined years ago. I've used the term "Breach" here to emphasize the seriousness of this aspect of an intrusion. (Gunter's recent post Botnet C&C Participation is a Corporate Data Breach reinforced my decision to use the term "breach" in situations like this.) Clearly Breach 3 is a severe problem. You might still be able to avoid catastrophe if you can contain the incident at this phase. However, intruders are likely to move quickly to the Breach 2 and 1 phases, at which point it's Game Over.

If there has to be an "impact 0" rating, I would consider that to be the absence of an information asset, i.e., it doesn't exist. Any asset whatsoever has value, so I don't see a 0 value for any existing systems.

At the other end of the spectrum, if we have to "crank it to 11," I would consider an 11 to be publication of incident details in a widely-read public forum like a major newspaper or online news site.

I use the term "impact" in this sense: what is the negative impact of having the individual asset in the state described? In other words, the negative impact of having an asset with impact 1 is very low. We would all like to have assets that require an intruder to apply substantial effort to compromise the asset and exfiltrate sensitive data. At the other end of the spectrum we have the "game over" impact -- the intruder has exfiltrated sensitive data or is suspected of exfiltrating sensitive data based on volume, etc. Even if you can't tell exactly what an intruder exfiltrated, if you see several GBs of data leaving a system that houses or accesses sensitive data, you can be fairly confident the intruder grabbed it.

I listed some sample colors for those who understand the world in those terms.

I've reproduced the text below for future copying and pasting.

  1. Vuln 3 / Impact 1 / Intruder must apply substantial effort to compromise asset and exfiltrate sensitive data

  2. Vuln 2 / Impact 2 / Intruder must apply moderate effort to compromise asset and exfiltrate sensitive data

  3. Vuln 1 / Impact 3 / Intruder must apply little effort to compromise asset and exfiltrate sensitive data

  4. Cat 6 / Impact 4 / Intruder is conducting reconnaissance against asset with access to sensitive data

  5. Cat 3 / Impact 5 / Intruder is attempting to exploit asset with access to sensitive data

  6. Cat 2 / Impact 6 / Intruder has compromised asset with access to sensitive data but requires privilege escalation

  7. Cat 1 / Impact 7 / Intruder has compromised asset with ready access to sensitive data

  8. Breach 3 / Impact 8 / Intruder has established command and control channel from asset with ready access to sensitive data

  9. Breach 2 / Impact 9 / Intruder has exfiltrated nonsensitive data or data that will facilitate access to sensitive data

  10. Breach 1 / Impact 10 / Intruder has exfiltrated sensitive data or is suspected of exfiltrating sensitive data based on volume, etc.
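
If you want the scale in machine-readable form as well, here is a minimal sketch encoding it as a Python lookup table. The categories and descriptions are copied from the list above; the structure itself is just one possible representation, not part of the rating system.

# Intrusion rating scale as a lookup table keyed by impact.
RATINGS = {
    1:  ("Vuln 3",   "Intruder must apply substantial effort to compromise asset and exfiltrate sensitive data"),
    2:  ("Vuln 2",   "Intruder must apply moderate effort to compromise asset and exfiltrate sensitive data"),
    3:  ("Vuln 1",   "Intruder must apply little effort to compromise asset and exfiltrate sensitive data"),
    4:  ("Cat 6",    "Intruder is conducting reconnaissance against asset with access to sensitive data"),
    5:  ("Cat 3",    "Intruder is attempting to exploit asset with access to sensitive data"),
    6:  ("Cat 2",    "Intruder has compromised asset with access to sensitive data but requires privilege escalation"),
    7:  ("Cat 1",    "Intruder has compromised asset with ready access to sensitive data"),
    8:  ("Breach 3", "Intruder has established command and control channel from asset with ready access to sensitive data"),
    9:  ("Breach 2", "Intruder has exfiltrated nonsensitive data or data that will facilitate access to sensitive data"),
    10: ("Breach 1", "Intruder has exfiltrated sensitive data or is suspected of exfiltrating sensitive data based on volume, etc."),
}

def describe(impact):
    """Render a rating as 'Category / Impact N / Description'."""
    category, description = RATINGS[impact]
    return "%s / Impact %d / %s" % (category, impact, description)

print(describe(8))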


What do you think of this rating system? I am curious to hear how others explain the seriousness of an incident to management.


Richard Bejtlich is teaching new classes in Las Vegas in 2009. Regular Las Vegas registration ends 1 July.

Update: Since writing this post, I've realized it is more important to think of these events as intrusions. The word "incident" applies to a broader set of events, including DDoS, lost or stolen devices, and the like. My use of the word "intruder" throughout the post indicates my real intention.

Saturday, May 30, 2009

President Obama's Real Speech on Cyber Security

I was very surprised to read REMARKS BY THE PRESIDENT ON SECURING OUR NATION'S CYBER INFRASTRUCTURE, delivered yesterday. TaoSecurity Blog had received a copy of the President's prepared remarks, but about 2/3 of the way through the live version the President went off-copy. For the sake of my readers I've published the material the President omitted.

...And last year we had a glimpse of the future face of war. As Russian tanks rolled into Georgia, cyber attacks crippled Georgian government websites. The terrorists that sowed so much death and destruction in Mumbai relied not only on guns and grenades but also on GPS and phones using voice-over-the-Internet.

[Here is where the Presidential train left the tracks.]

When considering cyber security, we must recognize that our problems are multi-dimensional.

The first dimension involves the information assets we are trying to protect. Cyber security requires protecting information inputs, information outputs, and information platforms. Inputs include data that has value before processing, such as personally identifiable information. Outputs include data that has value after processing, such as intellectual property. Information platforms are the computing devices that process data, such as computers and the networks that connect them.

The second dimension involves the custodians of information assets, which we collect into three broad groups. The first group includes Federal, state, and local governments, with their various departments and agencies. The second group includes corporate, nonprofit, university, and related elements of the private sector. The third group includes individual citizens.

The third dimension involves threats to our information assets, which we collect into three broad groups. The first group includes criminals who attack information assets primarily for financial gain. When the term criminal applies to terrorists we must also consider their desire to achieve political ends as well. The second group includes economic competitors, taking the form of companies acting independently or in concert with national governments. The third group includes nation-state actors and countries, who threaten information assets through espionage or direct attack.

These three dimensions -- the nature of information assets, the varied custodians of information assets, and the many threats to information assets -- prevent the centralization of cyber security in the portfolio of any single "cyber czar" or other government figurehead.

In addition to the three dimensions of cyber security, we must recognize certain environmental factors that weigh upon possible approaches.

First, traditional cyber security thinking has focused on vulnerabilities in the digital world. Many believe that addressing vulnerabilities through better coding or asset management would solve the cyber security problem. However, outside the digital world, vulnerabilities are all around us. Every human is vulnerable to being shot, yet none of us in this room is wearing a bullet-proof vest. Well, almost no one. [laughter] If you leave this building, you still won't wear a bullet-proof vest in public. Why is that? You're exposed, you're vulnerable, but what keeps you safe from threats to your well-being? The answer is that our government and its protective agencies -- police, the military, and so on -- focus more on threats than on vulnerabilities. We deter criminals and prosecute those who do us harm. Cyber security is no different. Behind every cyber attack is a human agent acting for personal, organizational, or national gain. However, too much effort is applied to addressing vulnerabilities, when the real problem has always been the threats who seek to exploit vulnerabilities.

Second, cyber security incidents are extremely opaque compared to their non-digital counterparts. If criminals shoot down an airliner, no one can ignore the disaster. Following the previous point, few people turn to the construction of the aircraft when such a heinous act occurs; rather, the perpetrators are hunted and brought to justice. However, when personally identifiable information is stolen from a company, the true victims -- the American citizens now at risk for identity theft -- may never know what happened. Many states have breach disclosure laws, but those laws do not require an explanation of the nature of the attack. As a result, no other organizations can learn how security controls failed at the victimized company.

Third, the costs of cyber security incidents are often not borne by those who should be protecting information assets from attack. This results in the misalignment of incentives. If a company processing personally identifiable information is breached, the majority of the cost is borne by the citizens whose identities are stolen. The company may pay for credit monitoring services, but that cost is insignificant compared to that borne by the citizen. If a software company ships a product riddled with bugs, it generally bears no cost whatsoever if intruders exploit that software once deployed by the customer. The marketplace tends to not punish vendors who sell vulnerable software because the benefits of the software are perceived to outweigh the costs. This makes sense when the customer is a company, and the breach results in stolen PII -- with costs again borne by the citizen, not the company.

These three environmental factors point to a need to change the mindset around cyber security, as well as the need for greater transparency and better alignment of incentives and costs with those who receive benefits from information assets.

Given this understanding of the problem, my administration will take the following actions regarding cyber security.

  1. We will make the Federal government an example for others to follow. We cannot expect any other party to take cyber security seriously if the Federal government doesn't lead by example. We will work to give the Federal government a defensible network architecture. We will finally recognize that, while important, controls are not the solution to our problems. Rather than being control-compliant, we will identify field-assessed metrics to measure our success.

  2. We will work with Congress to establish a national breach disclosure law, and we will require publicly traded companies to outline digital risks in their annual 10-K filings. Then, we will create a National Digital Security Board modeled on the National Transportation Safety Board. The NDSB will have the authority to investigate information security breaches reported by victim organizations. The NDSB will publish reports on its findings for the benefit of the public and other organizations, thereby increasing transparency in two respects. First, intrusions will have real costs beyond those directly associated with the incident, by bringing potentially poor security practices and software to the attention of the public. Second, other organizations will learn how to avoid the mistakes made by those who fall victim to intruders. In some circumstances national security interests may limit the audience for these findings. Those who consider this approach draconian should consider how NTSB reporting improves the safety of transportation over time.

  3. We will consult with the law enforcement community to determine what additional resources they need to deter and prosecute cyber criminals, and fund those requirements. We will be satisfied when a victim of cyber crime has the option to call the police for assistance, rather than rely on hiring their own forensic investigators. If cyber crime is a real crime, then victims should not be forced to outline digital dead bodies without official, expert assistance.

  4. We will vigorously encourage our law enforcement and intelligence services to work with private industry to combat cyber espionage and cyber attack. As with cyber crime, victims should not be expected to defend themselves against professional corporate cyber thieves or foreign cyber warfare experts. This will include funding and fast-tracking deployments of secure communications channels like SIPRNET, and granting security clearances to appropriate parties without specific government contracts, so that victimized organizations can securely communicate with our defense and intelligence communities.

  5. We will instruct the Secretary of Defense to examine the creation of a Cyber Force as an independent military branch. Just as we fight wars on land, at sea, and in the aerospace domain, we should promote warfighters thoroughly steeped in the intricacies of defense and attack in the cyberspace domain. We will also make it clear to our national adversaries that a cyber attack upon our national interests is equivalent to an attack in any other domain, and we will respond with the full range of diplomatic, information, military, and economic power at our disposal.

  6. We will drastically expand the Scholarship for Service or Cyber Corps program to include providing assistance to private sector actors and individual citizens who ask for help. Just as the Peace Corps provides physical assistance to developing countries, the Cyber Corps will provide digital assistance to those who apply for it.

  7. We will work with Congress to dramatically increase funding for applied cyber research. It is clear that the defensive models we have applied for the last thirty years need, at the very least, a serious review. Funding researchers who can thoughtfully consider different approaches is well worth the effort. This funding will include support for open source software projects that benefit the cyber community at large. We will also aggressively work to deploy more secure protocols to replace those whose threat model has collapsed as the computing environment has changed.


These seven steps are concrete actions that will have more impact than appointing a single person to try to "coordinate" cyber security across the multiple dimensions and environmental factors I described earlier. Thank you for your time. [applause]


Note: If you read this far I am sure you know this was not the President's "real speech." This is what I would have liked to have heard.


Richard Bejtlich is teaching new classes in Las Vegas in 2009. Regular Las Vegas registration ends 1 July.

Saturday, May 23, 2009

Defender's Dilemma vs Intruder's Dilemma

This is a follow-up to my post Response for Daily Dave. I realized I had a similar exchange three years ago, summarized in my post Response to Daily Dave Thread. Since I don't seem to be making much progress in this debate, I decided to render it in two slides.

First, I think everyone is familiar with the Defender's Dilemma.



The intruder only needs to exploit one of the victims in order to compromise the enterprise.

You might argue that this isn't true for some networks, but in most places if you gain a foothold it's quickly game over elsewhere.

What Dave and company don't seem to appreciate is that there is a similar problem for attackers. I call it the Intruder's Dilemma.



The defender only needs to detect one of the indicators of the intruder’s presence in order to initiate incident response within the enterprise.

What's interesting about this reality is that it applies to a single system or to a collection of systems. Even if the intruder only compromises a single system, the variety of indicators available makes it possible to detect the attacker. Knowing where and when to look, and what to look for, becomes the challenge. However, as the scope of the incident expands to other systems, the probability of discovery increases. So, perversely, the bigger the incident, the more likely someone is going to notice.
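
A toy model shows the arithmetic, under the simplifying (and admittedly unrealistic) assumption that each indicator has an independent chance p of being noticed: the probability of detecting at least one of n indicators is 1 - (1 - p)^n.

# Toy model: chance of detecting at least one of n independent indicators.
# Independence is a simplifying assumption; real indicators are correlated.
def detection_probability(p, n):
    return 1 - (1 - p) ** n

for n in (1, 10, 50, 100):
    print(n, round(detection_probability(0.05, n), 2))
# Prints 0.05, 0.4, 0.92, 0.99 -- even weak per-indicator odds
# add up quickly as the incident's footprint expands.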

Whether or not you can actually detect the intruder's presence depends on the amount of visibility you can achieve, and that is often outside the control of the security team because the security team doesn't own computing assets. However, this point of view can help you argue why you need the visibility to detect and respond to intrusions, even though you can't prevent them.


Richard Bejtlich is teaching new classes in Las Vegas in 2009. Regular Las Vegas registration ends 1 July.

Publication Notice: The Rootkit Arsenal

Bill Blunden was kind enough to send me a copy of his new book The Rootkit Arsenal. I plan to read it in a few months, due to my schedule and reading backlog. According to Bill, readers of the book will learn how to do the following:

  • Hook kernel structures on multi-processor systems

  • Use a kernel debugger to reverse system internals

  • Inject call gates to create a back door into Ring-0

  • Use detour patches to sidestep group policy

  • Modify privilege levels on Vista by altering kernel objects

  • Utilize bootkit technology

  • Defeat live incident response and post-mortem forensics

  • Implement code armoring to protect your deliverables

  • Establish covert channels using the WSK and NDIS 6.0


I am interested in the anti-forensics material, as you might imagine.

I first learned about Bill's work when he produced this presentation on rootkits. Slide 34 caught my attention:



That's pretty cool, but I am reminded of my post last summer on getting the job done. I wrote:

I have encountered plenty of roles where I am motivated and technically equipped, but without resources and power. I think that is the standard situation for incident responders, i.e., you don't have the evidence needed to determine scope and impact, and you don't have the authority to change the situation in your favor.

I think that is the main problem with incident detection and response, and probably computer security in general, these days.

Thanks again to Bill for the book, and be sure to check it out at Amazon.com.


Richard Bejtlich is teaching new classes in Las Vegas in 2009. Regular Las Vegas registration ends 1 July.

Response for Daily Dave

Recently on the Daily Dave mailing list, Dave Aitel posted the following:

...The other thing that keeps coming up is memory forensics. You can do a lot with it today to find trojan .sys's that hackers are using - but it has a low ceiling I think. Most rootkits "hide processes", or "hide sockets". But it's an insane thing to do in the kernel. If you're in the kernel, why do you need a process at all? For the GUI? What are we writing here, MFC trojans? There's not a ton of entropy in the kernel, but there's enough that the next generation of rootkits is going to be able to avoid memory forensics as a problem they even have to think about. The gradient here is against memory forensics tools - they have to do a ton of work to counteract every tiny thing a rootkit writer does.

With exploits it's similar. Conducting memory forensics on userspace in order to find traces of CANVAS shellcode is a losing game in even the medium run. Anything thorough enough to catch shellcode is going to have too many false positives to be useful. Doesn't mean there isn't work to be done here, but it's not a game changer.


Since I'm not 31337 enough to get my post through Dave's moderation, I'll just publish my reply here:

Dave and everyone,

I'm not the guy to defend memory forensics at the level of an Aaron Walters, but I can talk about the general approach. Dave, I think you're applying the same tunnel vision to this issue that you apply to so-called intrusion detection systems. (We talked about this a few years ago, maybe at lunch at Black Hat?)

Yes, you can get your exploit (and probably your C2) past most detection mechanisms (which means you can bypass the "prevention" mechanism too). However, are you going to be able to hide your presence on the system and network -- perfectly, continuously, perpetually? (Or at least as long as it takes to accomplish your mission?) The answer is no, and this is how professional defenders deal with this problem on operational networks.

Memory forensics is the same. At some point the intruder is likely to take some action that reveals his presence. If the proper instrumentation and retention systems are deployed, once you know what to look for you can find the intruder. I call this retrospective security analysis, and it's the only approach that's ever worked against the most advanced threats, analog or digital. [1] The better your visibility, threat intelligence, and security staff resources, the smaller the exposure window (compromise -> adversary mission completion). Keeping the window small is the best we can do; keeping it closed is impossible against advanced intruders.

Convincing developers and asset owners to support visibility remains a problem though.

Sincerely,

Richard

[1] http://taosecurity.blogspot.com/2009/02/black-hat-briefings-justify-supporting.html


I encounter Dave's attitude fairly often. What do you think?
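
To make the phrase "retrospective security analysis" from my reply concrete, here is a minimal sketch of the workflow, assuming you retained logs and later receive a new indicator. The retention path and the indicator (a C2 address) are hypothetical examples.

# Minimal sketch of retrospective analysis: when a new indicator arrives,
# search the history you already retained for past matches.
import glob

def retrospective_search(log_glob, indicator):
    """Yield (path, line) for every retained log line matching the indicator."""
    for path in glob.glob(log_glob):
        with open(path, errors="replace") as f:
            for line in f:
                if indicator in line:
                    yield path, line.rstrip()

for path, line in retrospective_search("/nsm/logs/*.log", "203.0.113.77"):
    print(path, line)

The point is not the code; it's that without the instrumentation and retention, there is nothing to search when the indicator finally arrives.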


Richard Bejtlich is teaching new classes in Las Vegas in 2009. Regular Las Vegas registration ends 1 July.

Thursday, May 21, 2009

Cheap IT Is Ultimately Expensive

I'm positive many of you are familiar with the idea that there are benefits to detecting software security defects early.

[Image reference: Software Security Engineering: A Guide for Project Managers.]

In other words, it is ultimately cheaper to design, code, sell, and support a more secure software product than a more insecure software product. Achieving this goal requires recognizing this advantage, investing in developers and processes that work, and dealing with exceptions (defects) as soon as possible through detection and response capabilities, even including customer-facing organizations (like PSIRTs).

I'm not aware of any studies supporting the following assertion, but I would be interested in feedback if you know any. I think it should be obvious that it's also cheaper to design, build, run, and support more secure computing assets than more insecure computing assets. In other words:

  • It is not cheaper to run legacy platforms, operating systems, and applications because "updates break things."

  • It is not cheaper to delay patching because of "business impact."

  • It is not cheaper to leave compromised systems operating within the enterprise because of the "productivity hit" taken when a system must be interrupted to enable security analysis.

  • It is not cheaper to try to manually identify and remove individual elements of malware and other persistence mechanisms, rather than rebuild from the ground up (and apply proper updates and configuration improvements to resist future compromise).

  • It is not cheaper to watch intellectual property escape the enterprise in order to prove that intruders are serious about stealing an organization's data.


Security doesn't make money; security is a loss prevention exercise. It's tough to justify security spending. However -- and these are the killers:

  • It's easy to show cost savings when experienced, professional system administrators are replaced by outsourced providers who are the lowest bidders.

  • It's easy to show the financial benefit of continuous availability of a revenue-producing system, or, conversely, easy to show the financial cost of downtime of a revenue-producing system.


Unfortunately, being seduced by those arguments ignores intrusion debt. One day the intrusion debt of poorly-run systems will be claimed by the intruders already inside the enterprise or those who are unleashed like an earthquake. Worse for you and me, the costs of dealing with the disaster are likely to be borne by the security team!

I thought of this vicious cycle when reading about the Sichuan earthquake in last week's Economist magazine:

In the days after the earthquake, senior officials vowed to investigate whether shoddy construction was to blame for the destruction of more than 7,000 classrooms in the disaster. But the issue was soon played down...

Mr Ai [investigating the disaster] says the refusal of central leaders to admit policy failures has exacerbated parents’ frustration. In the 1990s, he says, shoddy school buildings were erected across China because of the government’s drive to provide enough classrooms for all children to undergo nine years of compulsory education. Building costs were supposed to be shared by central and local authorities, but the latter often failed to chip in. This led to quality problems.


Ultimately, security is an IT problem, not a "security" problem. The faster asset owners realize this and are held responsible for the security of their systems, the less intrusion debt will mount and the greater the chance that enterprise assets will survive digital earthquakes. Cheap IT is ultimately expensive -- more expensive than proper investment in IT in the first place.


Richard Bejtlich is teaching new classes in Las Vegas in 2009. Regular Las Vegas registration ends 1 July.

Check Out Hakin9

I recently received copies of the last three issues of Hakin9 magazine. There are many good articles being published these days. One of my favorites appears in the 3/2009 issue, titled Automating Malware Analysis, by Tyler Hudak. Tyler is our team's reverse engineer and he authors The Security Shoggoth blog. Check out the magazine!


Richard Bejtlich is teaching new classes in Las Vegas in 2009. Regular Las Vegas registration ends 1 July.

Harlan Carvey on Talk Forensics

Earlier today I listened to the Talk Forensics podcast featuring Harlan Carvey. I thought it was interesting to hear a forensics expert discuss the sorts of cases he has been working. Harlan mentioned how he witnessed intruders integrate obfuscation techniques into their SQL injection attacks. These techniques successfully achieved their goals while introducing a secondary effect: their anti-forensic nature complicated analysis. Harlan mentioned how previously one could search Web server logs for SQL DECLARE statements, but after obfuscation was introduced the analyst had to be more diligent.
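
As a rough sketch of what that extra diligence looks like: a naive search for DECLARE catches the early, unobfuscated attacks, but obfuscated variants hid the statement inside a hex blob passed to CAST, so the analyst has to decode candidate hex strings before searching. The log path below is a hypothetical example.

# Rough sketch: flag web log entries containing a SQL DECLARE statement,
# whether in the clear or hidden inside a hex literal such as
# CAST(0x4445434C415245... AS VARCHAR), where the hex decodes to DECLARE.
import re

HEX_BLOB = re.compile(r"0x([0-9A-Fa-f]{20,})")

def suspicious(line):
    if "DECLARE" in line.upper():
        return True
    for blob in HEX_BLOB.findall(line):
        try:
            decoded = bytes.fromhex(blob).decode("ascii", errors="ignore")
        except ValueError:   # odd-length match is not valid hex bytes
            continue
        if "DECLARE" in decoded.upper():
            return True
    return False

with open("/var/log/httpd/access_log", errors="replace") as f:
    for line in f:
        if suspicious(line):
            print(line.rstrip())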

Harlan also mentioned that TaoSecurity Blog helped inspire him to start his Windows Incident Response blog, which is probably the best blog on the subject. Thanks Harlan! Also, I'm looking forward to Harlan's second edition of Windows Forensic Analysis. If you check the link you'll see that Syngress has introduced a new cover scheme, their first in probably 10 years. Finally, Harlan and I will be speaking at the SANS WhatWorks Summit in Forensics and Incident Response 2009, which will be the best collection of IR practitioners anywhere. One of my team, Ken Bradley, will also be speaking there.


Richard Bejtlich is teaching new classes in Las Vegas in 2009. Regular Las Vegas registration ends 1 July.

The Real Deal on Kylin

If you want the real deal on Kylin, the best public discussion is probably taking place at the Dark Visitor Blog. As you might expect of a blog that's run by people who actually speak Chinese and follow that country's scene, the story there is more believable than the sensationalism posted elsewhere.

I downloaded and tried installing KYLIN-2.1-1A.iso but didn't get far. It seems far newer versions are available if you know where to look.


Richard Bejtlich is teaching new classes in Las Vegas in 2009. Regular Las Vegas registration ends 1 July.

PSIRT Equals Getting Serious About Product Security

Last fall I wrote Tips for PSIRTs, pointing to a new CERT document giving advice for Product Security Incident Response Teams. Today I read Adobe shifts to Microsoft patching process, incident response plan by Robert Westervelt. The company maintains an Adobe Secure Software Engineering Team and an Adobe Product Security Incident Response Team. All of this is a sign that Adobe is getting serious about product security. It mirrors Microsoft's evolution, and I am glad to see it happening.

I'd like to be able to do a search for "Oracle PSIRT" or "Apple PSIRT" and get real results. The Google Online Security Blog isn't a real PSIRT, either. Just as you should have a CIRT if you use computers, you should have a PSIRT if you sell software.


Richard Bejtlich is teaching new classes in Las Vegas in 2009. Regular Las Vegas registration ends 1 July.

Monday, May 18, 2009

24th Air Force to be Headquartered at Lackland AFB

Congratulations to Lackland AFB in San Antonio, Texas for being chosen to host the headquarters for 24th Air Force, a "cyber numbered Air Force." Lackland is home to the AF ISR Agency (previously AIA), the AF Information Operations Center (previously AFIWC), and the 33rd Network Warfare Squadron (previously the 33 IOS, and before that the AFCERT).

It's been six years since I visited the place, but I think it's a great choice for the 24th.


Richard Bejtlich is teaching new classes in Las Vegas in 2009. Regular Las Vegas registration ends 1 July.

Sunday, May 10, 2009

Insider Threat Myth Documentation

In my first book The Tao of Network Security Monitoring, published in July 2004, I tried to trace the origin of the "80% myth". In the following section, reprinted from pages 31-34 and newly annotated, I document what this means for insider vs outsider threat. (This section is also posted here at Informit.com.)


OUTSIDERS VERSUS INSIDERS: WHAT IS NSM’S FOCUS?

This book is about network security monitoring. I use the term network to emphasize the book’s focus on traffic and incidents that occur over wires, radio waves, and other media. This book does not address intruders who steal data by copying it onto a USB memory stick or burning it to a CD-ROM. Although the focus for much of the book is on outsiders gaining unauthorized access, it pertains equally well to insiders who transfer information to remote locations. In fact, once an outsider has local access to an organization, he or she looks very much like an insider. [10]

Should this book (and NSM) pay more attention to insiders? One of the urban myths of the computer security field holds that 80% of all attacks originate from the inside. This “statistic” is quoted by anyone trying to sell a product that focuses on detecting attacks by insiders. An analysis of the most respected source of computer security statistics, the Computer Crime and Security Survey conducted annually by the Computer Security Institute (CSI) and the FBI, sheds some light on the source and interpretation of this figure. [11] [Bejtlich: I question saying "most respected" now, but I wrote that in 2004 before we had other reporting.]

The 2001 CSI/FBI study quoted a commentary by Dr. Eugene Schultz that first appeared in the Information Security Bulletin. Dr. Schultz was asked:

I keep hearing statistics that say that 80 percent of all attacks are from the inside. But then I read about all these Web defacements and distributed denial of service attacks, and it all doesn’t add up. Do most attacks really originate from the inside?

Dr. Schultz responded:

There is currently considerable confusion concerning where most attacks originate. Unfortunately, a lot of this confusion comes from the fact that some people keep quoting a 17-year-old FBI statistic that indicated that 80 percent of all attacks originated from the [inside]...

Should [we] ignore the insider threat in favor of the outsider threat? On the contrary. The insider threat remains the greatest single source of risk to organizations. Insider attacks generally have far greater negative impact to business interests and operations. Many externally initiated attacks can best be described as ankle-biter attacks launched by script kiddies.

But what I am also saying is that it is important to avoid underestimating the external threat. It is not only growing disproportionately, but is being fueled increasingly by organized crime and motives related to espionage. I urge all security professionals to conduct a first-hand inspection of their organization’s firewall logs before making a claim that most attacks come from the inside. Perhaps most successful attacks may come from the inside (especially if an organization’s firewalls are well configured and maintained), true, but that is different from saying that most attacks originate from the inside. [12]


Dr. Dorothy Denning, some of whose papers are discussed in Appendix B, confirmed Dr. Schultz’s conclusions. Looking at the threat, noted by the 2001 CSI/FBI study as “likely sources of attack,” Dr. Denning wrote in 2001:

For the first time, more respondents said that independent hackers were more likely to be the source of an attack than disgruntled or dishonest insiders (81% vs. 76%).

Perhaps the notion that insiders account for 80% of incidents no longer bears any truth whatsoever. [13]


The 2002 and 2003 CSI/FBI statistics for “likely sources of attack” continued this trend. At this point, remember that the statistic in play is “likely sources of attack,” namely the party that embodies a threat. In addition to disgruntled employees and independent hackers, other “likely sources of attack” counted by the CSI/FBI survey include foreign governments (28% in 2003), foreign corporations (25%), and U.S. competitors (40%).

Disgruntled employees are assumed to be insiders (i.e., people who can launch attacks from inside an organization) by definition. Independent hackers are assumed to not be insiders. But from where do attacks actually originate? What is the vector to the target? The CSI/FBI study asks respondents to rate “internal systems,” “remote dial-in,” and “Internet” as “frequent points of attack.” In 2003, 78% cited the Internet, while only 30% cited internal systems and 18% cited dial-in attacks. In 1999 the Internet was cited at 57% while internal systems rated 51%. These figures fly in the face of the 80% statistic.

A third figure hammers the idea that 80% of all attacks originate from the inside. The CSI/FBI study asks for the origin of incidents involving Web servers. For the past five years, incidents caused by insiders accounted for 7% or less of all Web intrusions. In 2003, outsiders accounted for 53%. About one-quarter of respondents said they “don’t know” the origin of their Web incidents, and 18% said “both” the inside and outside participated.

At this point the idea that insiders are to blame should be losing steam. Still, the 80% crowd can find solace in other parts of the 2003 CSI/FBI study. The study asks respondents to rate “types of attack or misuse detected in the last 12 months.” In 2003, 80% of participants cited “insider abuse of net access” as an “attack or misuse,” while only 36% confirmed “system penetration.” “Insider abuse of net access” apparently refers to inappropriate use of the Internet; as a separate statistic, “unauthorized access by insiders” merited a 45% rating.

If the insider advocates want to make their case, they should abandon the 80% statistic and focus on financial losses. The 2003 CSI/FBI study noted “theft of proprietary information” cost respondents over $70 million; “system penetration” cost a measly $2.8 million. One could assume that insiders accounted for this theft, but that might not be the case. The study noted “unauthorized access by insiders” cost respondents only $406,000 in losses. [14]

Regardless of your stance on the outsider versus insider issue, any activity that makes use of the network is a suitable focus for analysis using NSM. Any illicit action that generates a packet becomes an indicator for an NSM operation. One of the keys to devising a suitable NSM strategy for your organization is understanding certain tenets of detection, outlined next.

Footnotes for these pages:

10. Remember that “local access” does not necessarily equate to “sitting at a keyboard.” Local access usually means having interactive shell access on a target or the ability to have the victim execute commands of the intruder’s choosing.

11. You can find the CSI/FBI studies in .pdf format via Google searches. The newest edition can be downloaded from http://www.gocsi.com.

12. Read Dr. Schultz’s commentary in full at http://www.chi-publishing.com. Look for the editorial in Information Security Bulletin, volume 6, issue 2 (2001). Adding to the confusion, Dr. Schultz’s original text used “outside” instead of “inside,” as printed in this book. The wording of the question and the thesis of Dr. Schultz’s response clearly show he meant to say “inside” in this crucial sentence. [Looking back on this five years later, I am still confused by Dr. Schultz's meaning. If he really meant to say "some people keep quoting a 17-year-old FBI statistic that indicated that 80 percent of all attacks originated from the outside," then why not say "this 17-year-old FBI statistic is the opposite of your claim?"]

13. Dr. Dorothy Denning, as quoted in the 2001 CSI/FBI Study.

14. Foreshadowing the popularization of “cyberextortion” via denial of service, the 2003 CSI/FBI study reported “denial of service” cost over $65 million—second only to “theft of proprietary information” in the rankings.


My biggest regret reading this section involves trying to interpret Dr. Schultz's comments. If anyone can find a copy of an "FBI study" from approximately 1984 that discusses insider vs outsider threat, please let me know!

Reading this section now, I see the primary value as finding documentation that the "80% myth" refers to the idea that "80 percent of all attacks are from the inside." If you agree that an attack is not the same as an "incident," then you can see how Dr. Denning's comment about "the notion that insiders account for 80% of incidents" introduces more problems by talking about incidents and not attacks. If someone wants to throw "risk" in there, you now have a third meaning.

What I find sad is that so many people carelessly cite the "FBI" or "CSI" studies as supporting whatever "80%" claim they want, but if asked to point to the actual study they could never do so. In my first book I at least tried to document what was available at that time.


Richard Bejtlich is teaching new classes in Las Vegas in 2009. Regular Las Vegas registration ends 1 July.

Thursday, May 07, 2009

Highlights from 2009 Verizon Data Breach Report

Last year I posted Verizon Business Report Speaks Volumes, providing excerpts that resonated with me. Verizon released another edition last month, with plenty of commentary on their blog and elsewhere. I wanted to record a few highlights here for my own reference but also to counter arguments I continue to see elsewhere about the so-called prevalence of insider threats.

This is a polite way of trying to demolish the most deeply entrenched urban myth in security history.



This shows the 2009 results.



This is an historical way to look at breach source data.



The following chart is the one that insider threat proponents will try to use to justify their position. It shows that, on average, a breach caused by a single insider will result in many more records being stolen than one caused by an outsider. Incidentally, this is what I have said previously as well!



However, when looking at the problem in aggregate, outsiders cause more damage.



If the big red dot doesn't say it all, I don't know what will.

Verizon captures this scenario using a "pseudo-risk" calculation.



Pete Lindstrom makes an interesting point about this calculation, but I don't think it is necessarily without merit.

I'd like to briefly turn to the detection and response elements I found interesting.

The following shows someone from Verizon has been to the Best Single Day Class Ever. That big red dot shows "months" from compromise to discovery is dominant.



Detection methods continue to be pathetic.



This is probably because, although logs are collected, hardly anyone reviews them.



This is probably because only a third of companies have an IR team.



Most companies are probably relying on their anti-virus software to save them. This is too bad, because the explosion in customized malware means it probably won't.



All of this is why my TCP/IP Weapons School 2.0 class teaches students how to analyze data to detect and respond to intrusions, rather than rely on automated tools which fail.


Richard Bejtlich is teaching new classes in Las Vegas in 2009. Regular Las Vegas registration ends 1 July.

Logs from the Cloud

I received an email with the following notice today:

Amazon CloudFront Adds Access Logging Capability:

AWS today released access logs for Amazon CloudFront. Access logs are activity records that show you details about every request delivered through Amazon CloudFront. They contain a comprehensive set of information about requests for your content, including the object requested, the date and time of the request, the edge location serving the request, the client IP address, the referrer and the user agent. It’s easy to get started using access logs: you just specify the name of the Amazon S3 bucket you want to use to store the logs when you configure your Amazon CloudFront distribution. There are no fees for using the access logs, beyond normal Amazon S3 charges to write, store and retrieve the logs.



The Amazon Elastic MapReduce team has also built a sample application, CloudFront LogAnalyzer, that will analyze your Amazon CloudFront access logs. This tool lets you use the power of Amazon Elastic MapReduce to quickly turn Access Logs into the answers to the most commonly asked questions about your business. Additionally, several partners have also built solutions that help you analyze these access logs; you can find more information about these in the AWS Solutions Catalog.


Looking at the Developer Guide entry for Access Logs, we see the following sorts of data will be recorded:


The log files use the W3C extended log file format
(for more information, go to http://www.w3.org/TR/WD-logfile.html).

The files contain information for each record in the following order:

Date of the request (in UTC)
Time (when the server finished processing the request; in UTC)
Edge location that served the request
(a variable-length string with a minimum of 3 characters)
Bytes served
Client IP address (no hostname lookups occur)
HTTP access method
DNS name (either the CloudFront distribution name or your CNAME,
whichever the end user specified in the request)
URI stem (e.g., /images/daily-ad.jpg)
HTTP status code (e.g., 200)
Referrer
User agent

An entry might look like this:

#Version: 1.0
#Fields: date time x-edge-location sc-bytes c-ip cs-method cs(Host) cs-uri-stem sc-status
cs(Referer) cs(User-Agent)

02/01/2009 01:13:11 FRA2 182 10.10.10.10 GET d2819bc28.cloudfront.net /view/my/file.html 200
www.displaymyfiles.com Mozilla/4.0%20(compatible;%20MSIE%205.0b1;%20Mac_PowerPC)

02/01/2009 01:13:12 LAX1 2390282 12.12.12.12 GET www.singalong.com /soundtrack/happy.mp3 304
www.unknownsingers.com Mozilla/4.0%20(compatible;%20MSIE%207.0;%20Windows%20NT%205.1)

I think this is a good start, but I'll leave it to Cloudsecurity.org for expert commentary!
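
For anyone who wants to poke at these logs before turning to the partner tools, here is a minimal parsing sketch. It assumes one whitespace-delimited record per line in the field order shown above (the sample entries above are wrapped only for display) and a hypothetical local copy of a log retrieved from the S3 bucket.

# Minimal sketch: parse CloudFront access log records into dicts.
FIELDS = ["date", "time", "x-edge-location", "sc-bytes", "c-ip", "cs-method",
          "cs(Host)", "cs-uri-stem", "sc-status", "cs(Referer)", "cs(User-Agent)"]

def parse(logfile):
    for line in logfile:
        if line.startswith("#") or not line.strip():   # skip directives and blanks
            continue
        yield dict(zip(FIELDS, line.split()))

with open("access.log") as f:
    for record in parse(f):
        print(record["c-ip"], record["sc-status"], record["cs-uri-stem"])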


Richard Bejtlich is teaching new classes in Las Vegas in 2009. Regular Las Vegas registration ends 1 July.

Thoughts on Cyber Command

I've been blogging about various cyber command proposals for a few years, but right now there is some real movement at the combatant command level. Ellen Nakashima's article Cyber-Command May Help Protect Civilian Networks offers the latest details.

The Pentagon is considering whether to create a new cyber-command that would oversee government efforts to protect the military's computer networks and would also assist in protecting the civilian government networks, the head of the National Security Agency said yesterday [Tuesday].

The new command would be headquartered at Fort Meade, the NSA's director, Lt. Gen. Keith B. Alexander, told the House Armed Services terrorism subcommittee.

Alexander, who is a front-runner to assume control of the command if it is created, said its focus would be to better protect the U.S. military's computers by marrying the offensive and defensive capabilities of the military and the NSA.

Through the command, the NSA would also provide technical support to the Department of Homeland Security, which is in charge of protecting civilian networks and helps safeguard the energy grid and other critical infrastructure from cyber-attack, Alexander said.

He stressed that the NSA does not want to run or operate the civilian networks, but help Homeland Security improve its efforts...

As proposed by the Pentagon, the command would fall under the U.S. Strategic Command, which is tasked with defending against attacks on vital interests.


The highlighted sections reinforce number 2 of my Predictions for 2008 made in December 2007. A few months prior I argued that the US Needs Cyber NORAD.

The written testimonies are posted on the U.S. House of Representatives, House Armed Services Committee Web site.

The new Cyber Command will most likely be a subordinate unified command under US Strategic Command.

I'd like to briefly respond to Robert Graham's post Why Cyber Commands Fail. He says in part:

What the military wants is a hacker squad that they can give a specific objective, and have the hackers carry out that objective within a specific timeframe. For example, they might tell hackers to take out Iran's radar at midnight so that fighter jets can enter their airspace a few minutes later to bomb their nuclear plants. That's not going to work.

What you could do is tell hackers to go after Iran and do whatever they can to disrupt their nuclear developments. One hacker might find a way to shut down safety controls and cause a nuclear meltdown, another might jam the centrifuges, another might change the firmware on measuring equipment to incorrectly measure the concentration of U238.

Or, you could give the hackers six months to infiltrate Iran's computers, then come back with a list of options. Maybe disabling the radar system will be one of them, maybe not. But that's not the sort of thing the military is tasked to do - that's more an intelligence operation the CIA would be doing...

China and Russia understand this. They don't directly employ hackers or tell the hackers to accomplish certain goals. They let the hackers have free range to do whatever they want. If the hackers come across something interesting, such as plans for the Joint Strike Fighter, the government buys it, but no government official ever told the hackers specifically to steal those plans...

So how can the United States get in on this sort of asymmetric warfare action?

The first thing is that you have to stoke some sort of nationalism in the way that Russia and China do. I'm not sure this is in our character (especially under the current president), however, so we'd probably have to find some alternative. Instead of pro-USA nationalism we could instead focus on human rights activism. The government could spend a lot of time talking to the press about the sorts of human rights abuses that go on in Russia and China. Get our own USA hackers thinking about human rights as their own casus belli.

The second thing they need to do is create a climate where our own hackers can operate. I would gladly hack into Iranian computers, but I'm not sure how this fits into US law...

This would be similar to the "letters of marque and reprisal" used by governments during the 1700s. In those days, national navies were too small to patrol the entire ocean. Therefore, governments licensed privateers to prey upon a hostile nation's shipping. The privateers kept half the booty, and gave the other half to their respective government. This is essentially what China and Russia have done.

A third thing our military would need to do is train our hackers in the target language. Foreign hackers usually learn English, but American hackers rarely learn foreign languages, especially Russian, Chinese, or Farsi (Iranian). If we want to encourage our hackers to go after those countries in the same way they come after us, we need to encourage them to learn those languages...

The fourth thing our military would need to do is fix their horrid purchasing processes...

Note that I think the individuals who run our military are very, very smart. I've met several generals and colonels who understand this. The problem is that while individuals are smart, the organization is dumb as a rock. The organization crushes precisely the sort of creative thinking needed to have a successful "cyber" offensive capability.


Robert has a lot of good ideas here. In Air Force Cyber Panel I talked about a clash of models between the United States and places like China. On the one hand we have a military-industrial complex supported by a vast contracting force vs a country with a true "people's army," containing uniformed military, semi-military, and pure civilians who work with the others to achieve broadly common goals.

I don't think we will ever see any official support for the privateer concept. China doesn't even recognize their own people's involvement in hacking, since they frequently repeat the line that "China doesn't support hacking."

The major benefit I see from a Cyber Command is providing a career path and organizational support for military personnel. Until that exists many people who would want to be in the military doing cyber operations will reach a point where leaving their service is their best option.


Richard Bejtlich is teaching new classes in Las Vegas in 2009. Regular Las Vegas registration ends 1 July.

Wednesday, May 06, 2009

OSVDB on Problems with Identifying Vulnerabilities

This post titled If you can't, how can we? described a problem I had not previously considered regarding identifying vulnerabilities. ("VDB" refers to Vulnerability Database.)

Steve Christey w/ CVE recently posted that trying to keep up with Linux Kernel issues was getting to be a burden. Issues that may or may not be security related, even Kernel devs don’t fully know... Lately, Mozilla advisories are getting worse as they clump a dozen issues with "evidence of memory corruption" into a single advisory, that gets lumped into a single CVE. Doesn’t matter that they can be exploited separately or that some may not be exploitable at all. Reading the bugzilla entries that cover the issues is headache-inducing as their own devs frequently don’t understand the extent of the issues. Oh, if they make the bugzilla entry public. If the Linux Kernel devs and Mozilla browser wonks cannot figure out the extent of the issue, how are VDBs supposed to?...

VDBs deal with thousands of vulnerabilities a year, ranging from PHP applications to Oracle to Windows services to SCADA software to cellular telephones. We’re expected to have a basic understanding of ‘vulnerabilities’, but this isn’t 1995. Software and vulnerabilities have evolved over the years. They have moved from straight-forward overflows (before buffer vs stack vs heap vs underflow) and one type of XSS to a wide variety of issues that are far from trivial to exploit. For fifteen years, it has been a balancing act for VDBs when including Denial of Service (DOS) vulnerabilities because the details are often sparse and it is not clear if an unprivileged user can reasonably affect availability. Jump to today where the software developers cannot, or will not tell the masses what the real issue is...

It is important that VDBs continue to track these issues, and it is great that we have more insight and contact with the development teams of various projects. However, this insight and contact has paved the way for a new set of problems that over-tax an already burdened effort. MITRE receives almost 5 million dollars a year from the U.S. government to fund the C*E effort, including CVE [Based on FOIA information]. If they cannot keep up with these vulnerabilities, how do their "competitors", especially free / open source ones [5], have a chance?

Projects like the Linux Kernel are familiar with CVE entries. Many Linux distributions are CVE Numbering Authorities, and can assign a CVE entry to a particular vulnerability. It’s time that you (collectively) properly document and explain vulnerabilities so that VDBs don’t have to do the source code analysis, patch reversals or play 20 questions with the development team. Provide a clear understanding of what the vulnerability is so that we may properly document it, and customers can then judge the severity of issue and act on it accordingly.


I think many of us just take for granted that assigning vulnerability identifiers is easy. Discovering the vulnerability is supposed to be the hard part. This is disturbing, because it means that the people with the most at stake -- the asset owners -- don't know how to assess risk. If you think about the risk equation, lack of knowledge of vulnerabilities just augments the problems of not knowing what you're protecting (assets) or who wants to exploit them (threats).
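
For reference, the shorthand I have in mind is the classic formulation risk = threat x vulnerability x asset value (or impact). If your estimate of any one factor is effectively worthless, the product is too.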

It's really a problem of incentives. The group with the strongest incentive to fully comprehend the vulnerability is the group that seeks to exploit it. Once they understand the vulnerability they have a strong incentive to not tell anyone else so they can financially or otherwise benefit from their asymmetric knowledge.

I am not a fan of government regulation or intervention, but it sounds like this incentive misalignment may require one or the other or both.


Richard Bejtlich is teaching new classes in Las Vegas in 2009. Regular Las Vegas registration ends 1 July.

Lessons from CDX

In my post Thoughts on 2009 CDX I described my initial reaction to the Cyber Defense Exercise from the point of view of seeing the white and red cells in action. Thanks to this press release I learned the outcome of the event:

The National Security Agency/Central Security Service (NSA/CSS) is pleased to announce that the United States Military Academy at West Point has won the 2009 Cyber Defense Exercise (CDX) trophy for the third year in a row.

I found more detail here:

The USMA team won the exercise for the third year in a row -- West Point’s fifth win since the competition began in 2001. That means they successfully fended off the NSA hackers better than the U.S. Naval Academy, U.S. Air Force Academy, U.S. Coast Guard Academy, U.S. Merchant Marine Academy, the Naval Postgraduate School, the Air Force Institute of Technology and Royal Military College of Canada...

"We had large attacks against our e-mail and Web server from multiple (Internet protocol) addresses (all NSA Red Team), Firstie Josh Ewing, cadet public affairs officer for the team, said. "We were able to withstand their attacks and blocked over 200 IPs that they were using to attack the network."

All the while, the cadets were tasked with extra projects such as network forensics. The cadets’ scores from these extra tasks contributed to their win, Adams said.


Based on my discussions with people from the exercise, it is clear that West Point takes the CDX very seriously. As in previous years, West Point dedicated 30-40 cadets to the event. They appear to use the CDX as a capstone exercise for a computer security class. Based on manpower alone they dwarf the other participants; for example, the Coast Guard had a team of less than 10 (6-7?) from what I heard.

Thinking about this exercise caused me to try classifying the various stages through which a security team might evolve.

  1. Ignorance. "Security problem? What security problem?" No one at the organization realizes there is even an issue to worry about.

  2. Denial. "I hear others have security problems, but we don't." The organization thinks they are special enough that they don't share the vulnerabilities and exploitation suffered by others.

  3. Incompetence. "We have to do something!" The organization accepts there is a problem but is not equipped to do what is required. They may or may not realize they are not equipped to handle the problem.

  4. Heroics. "Stand back! I'll fix it!" The organization develops or hires staff who can make a difference for the first time. This is a dangerous phase, because the situation can improve but it is not sustainable.

  5. Capitalization. "Now I have some resources to address this problem." The heroes receive some funds to advance their cause, but funding alone is not sufficient.

  6. Institutionalization. "Our organization is integrating our security measures into the overall business operations." This is real progress. The organization is taking the security problems seriously and it's not just the security team's problem anymore.

  7. Specialization. "We're leveraging our unique expertise in X and Y to defend ourselves and contribute back to the security community." The organization has matured enough that it can take advantage of its own environment to defend itself, as well as bring lessons to others in the community.


Based on what I know of the West Point team, they seem to be at the Institutionalization phase. Contrast their approach and success with a team that might only be at the Heroics phase. Heroics can produce a win here and there, but Institutionalization will produce the sort of sustainable advantage we're seeing in the West Point team.

You may find these labels apply to your security teams too.



Risk Assessment, Physics Envy, and False Precision

In my last post I mentioned physics. Longtime blog readers might remember a thread from 2007 which ended with Final Question on FAIR, where I was debating the value of numerical outputs from so-called "risk assessments." Last weekend I attended the 2009 Berkshire Hathaway shareholder meeting courtesy of Gunnar Peterson. He mentioned two terms used by Berkshire's Charlie Munger that now explain the whole numerical risk assessment approach perfectly:

Physics Envy, resulting in false precision:

In October of 2003 Charlie Munger gave a lecture to the economics students at the University of California at Santa Barbara in which he discussed problems with the way that economics is taught in universities. One of the problems he described was based on what he called "Physics Envy." This, Charlie says, is "the craving for a false precision. The wanting of formula..."

The problem, Charlie goes on, is "that it's not going to happen by and large in economics. It's too complex a system. And the craving for that physics-style precision does nothing but get you in terrible trouble..."

When you combine Physics Envy with Charlie's "man with a hammer syndrome," the result is the tendency for people to overweight things that can be counted.

"This is terrible not only in economics, but practically everywhere else, including business; it's really terrible in business -- and that is you've got a complex system and it spews out a lot of wonderful numbers [that] enable you to measure some factors. But there are other factors that are terribly important. There's no precise numbering where you can put to these factors. You know they're important, you don't have the numbers. Well practically everybody just overweighs the stuff that can be numbered, because it yields to the statistical techniques they're taught in places like this, and doesn't mix in the hard-to-measure stuff that may be more important...

As Charlie says, this problem not only applies to the field of economics, but is a huge consideration in security analysis. Here it can give rise to the "man with a spreadsheet syndrome," which is loosely defined as, "Since I have this really neat spreadsheet, it must mean something..."

To the man with a spreadsheet this looks like a mathematical (hard science) problem, but the calculation of future cash flows is more art than hard science. It involves a lot of analysis that has nothing to do with numbers. In a great many cases (for me, probably most cases) it involves a lot of guessing. It is my opinion that most cash flow spreadsheets are a waste of time because most companies do not really have a predictable future cash flow.


You could literally replace every reference to financial issues with references to risk assessments and retain exactly the same meaning. What's worse, people who do so-called "risk assessments" are usually not even using real numbers, as would be the case with cash flow analysis!
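
To make the false precision concrete, here is a sketch of the sort of arithmetic these assessments produce. The formula is the standard annualized loss expectancy (ALE) calculation; every input below is a guess, which is exactly the problem:

    # Annualized Loss Expectancy = Single Loss Expectancy x Annual Rate of Occurrence.
    asset_value = 250_000           # dollars -- an estimate
    exposure_factor = 0.37          # fraction of value lost per incident -- a guess
    single_loss_expectancy = asset_value * exposure_factor

    annual_rate_of_occurrence = 1 / 3.2   # "once every 3.2 years" -- also a guess

    ale = single_loss_expectancy * annual_rate_of_occurrence
    print(f"ALE: ${ale:,.2f}")      # prints ALE: $28,906.25 -- precise to the penny

Guesses go in; a figure quoted to the penny comes out.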

Physics envy and the false precision it produces are two powerful ideas I intend to carry forward.



Dan Geer on Marcus Ranum's 5th Rearguard Security Podcast

Last week while flying home from the Midwest I listened to the fifth Rearguard Security podcast, featuring Dan Geer. If you like my blog you will enjoy the entire podcast. This was my favorite quote, from Dan:

"Internet security is quite possibly the most intellectually challenging profession on the planet... for two reasons... complexity... and rate of change [are] your enemy.

Take that, quantum physics!!

You might also like the line used to introduce the podcast:

The Rearguard Security podcast: where the elite meet to share a sense of defeat.



Thoughts on 2009 CDX

Last month Tony Sager was kind enough to invite me to visit NSA's Cyber Defense Exercise (CDX), an annual computer defense drill where cadets from the nation's military service academies defend training networks from red teams. I first mentioned CDX in 2003 and attended a great briefing on CDX, summarized in my 2006 post Comments on SANS CDX Briefing.

For this event I drove to Elkridge, MD and visited the defense contractor hosting the CDX white and red cells. The red team conducts adversary simulation against the cadet teams while the white cell runs the exercise and keeps score. NSA did a great job hosting visitors, ranging from lowly bloggers like yours truly, all the way up to multi-star generals and their staffs. I'd like to mention a few points which caught my attention.

  • This is the second year that the participants were given a budget. This means that making changes to the architecture they were defending, such as installing software or taking other actions, imposed costs. To me this makes enterprise defense much more realistic.

  • Three weeks prior to the exercise, the students receive the images they will be running during the event. This gives them three weeks to essentially conduct forensics against the systems to determine what is wrong with them. The NSA red team "taints" the systems prior to delivery, so they typically contain malware and other persistent backdoors that permit the red team to access and pillage the systems once the cadets deploy them in the exercise. This really tests the teams' forensic abilities, but it seems highly unrealistic.

  • The room I visited held approximately 30 red teamers, focusing their efforts against 9 or 10 target teams. That level of effort gives you a sense of the sort of adversary forces arrayed against real targets.

  • Points are lost when the teams fail to keep their services operational. The main services are Web/database, DNS, instant messaging, and email. While services are clearly important, the exercise doesn't test the sort of real-world scenarios we see, such as data exfiltration. Good threat agents don't disable any services. They steal while keeping everything running, like the good parasites they are. (A sketch of the sort of availability polling such scoring implies appears after this list.)
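
Scoring service uptime implies some form of automated polling. I don't know what the white cell actually ran, but a minimal sketch of a TCP-level liveness check against the scored services might look like this (hostnames and ports are my own placeholders):

    import socket

    # Placeholder hosts for the four scored service types. DNS is normally UDP,
    # but a TCP connect on port 53 still serves as a crude liveness probe.
    SERVICES = {
        "web": ("www.team.example", 80),
        "dns": ("ns1.team.example", 53),
        "mail": ("mail.team.example", 25),
        "im": ("im.team.example", 5222),   # XMPP client port
    }

    def is_up(host: str, port: int, timeout: float = 5.0) -> bool:
        """Return True if a TCP connection to host:port succeeds within timeout."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    for name, (host, port) in SERVICES.items():
        print(f"{name}: {'up' if is_up(host, port) else 'DOWN'}")

A real white cell would surely test application-level behavior, not just open ports, but even this sketch shows how constant service pressure shapes a defending team's choices.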


I'll save comments on who won, and why they might have won, for a future post. Thanks to Tony and those who kindly hosted me and took time from their schedules to do so!



Black Hat Class Outline Posted

Registration for my TCP/IP Weapons School 2.0 class at Black Hat USA 2009 remains active. Several people have asked for something they could show their managers to explain the course in one page, so I created a class outline in .pdf format. No, this is not a malicious .pdf!

I am also available to answer questions on the class, so please feel free to ask here. Based on the feedback from my DC and Amsterdam sessions earlier this year, students are enjoying the new lab-centric format which focuses on teaching hands-on skills and an investigative mindset. In Amsterdam I also used a new question-and-answer approach where I "batched" questions asked by the students during the labs, and then set aside separate time to just answer questions on whatever security topic the students wanted to discuss.

Remember I also posted a Sample Lab a few months ago to give one example of the format used by this new class.

After Black Hat USA I will not be training again until 2010. If you want to attend my class your best bet is to sign up before 1 July. "Late" and "Onsite" registration is a possibility after that, but it's more expensive and seats are not as easy to get as earlier in the process. Last year I trained almost 140 students in two classes. Thank you.



Review of Chained Exploits Posted

Amazon.com just posted my four star review of Chained Exploits by Andrew Whitaker, Keatron Evans, and Jack B. Voth. From the review:

I agree with some of the commentary by previous reviewers, but I think some of it is unduly harsh. I don't think it's strictly necessary for a book to contain brand-new security techniques in order to qualify for publication. Book publishing is not the same as releasing a white paper or briefing at Black Hat. However, books should strive *not* to cover ground published in other books, or even in well-written white papers. In that respect I think Chained Exploits strikes a good balance. The book's novelty lies in presenting complete, technical examples of a variety of "intrusion missions." While not necessarily groundbreaking for experienced offensive security people, Chained Exploits will be informative for broader technical audiences.

