Wednesday, October 31, 2007

A Plea to the Worthies

You may have seen stories like "Cybersecurity Experts Collaborate," with subtitles like "A think tank has tapped several heavyweight security experts to staff a commission that will advise the president." That story continues:

The Center for Strategic and International Studies (CSIS) wants the commission to come up with a list of recommendations that the new president who takes office in January 2009 "can pick up and run with right away," said James Lewis, director of the CSIS Technology and Public Policy Program. The commission, made up of 32 cybersecurity experts, plans to finish its work by the end of 2008.

I am fairly confident that nothing of value will come from this group, but there is one task that could completely reverse my opinion. Rather than wasting time on recommendations that will probably be ignored, how about taking a step in a direction that will have real impact: security metrics. That's right. Spend the first day (or two, if you are a slow reader or can't sit still for long periods) reading Andy Jaquith's book. Next, and this is the crucial part:

Figure out how to play and score the game before you pretend to think you can improve the score.

What does this mean? Here are a few ideas:

  • Propose definitions for security, risk, threat, vulnerability, insider threat, external threat, and all the other words we use but never agree upon. Hold hearings and invite real security people (not just digital security people) to express their views.

  • Propose some metrics and see how other operations define success. Hold hearings on the results of that process.

  • Apply metrics to some real organizations to gain a baseline set of numbers. Repeat the process at set intervals. Try to identify correlations and, if possible, causation; a sketch of the idea follows this list. Keep participating organizations anonymous if necessary, but use a real methodology, not the self-selected sampling of the CSI/FBI survey and others.
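
To make the third idea concrete, here is a minimal sketch of baselining and trending, using entirely hypothetical metric names and values; real definitions would come out of the hearings proposed above. It uses Python's statistics module (correlation requires Python 3.10 or later).

```python
from statistics import correlation, mean

# Quarterly observations for one (anonymized) organization:
# days to patch a critical vulnerability, and confirmed incidents.
patch_days = [45, 38, 30, 28]   # Q1 through Q4
incidents  = [12, 11,  9,  8]

baseline = mean(patch_days)
trend = patch_days[-1] - patch_days[0]
r = correlation(patch_days, incidents)

print(f"Baseline days-to-patch: {baseline:.1f}")
print(f"Change over the year:   {trend:+d} days")
print(f"Correlation with incident count: {r:+.2f}")
# A strong correlation hints at, but does not prove, causation --
# exactly the distinction the proposed process should explore.
```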


Do you see where I am going here? At the end of the process we could have a framework for seeing just what is happening. I defy anyone to tell me just how bad or good our digital security situation is right now. Some say the sky is falling, others say we're happy! happy!, others say we're just as secure as we need to be to continue limping along. It is a proper role for a panel of worthies to help figure out how the game is played and then what the score is. It is a waste of time to make recommendations before those basic steps have been taken.

Monday, October 29, 2007

Wake Up Corporate America

I am constantly hammered for downplaying the "inside threat" and focusing on external attackers. Several months ago I noted the Month of Owned Corporations as an example of enterprises demonstrating security failures exploited by outsiders. Thanks to Bots Rise in the Enterprise, it appears the external threat is finally getting more attention:

Who says bots are just for home PCs? Turns out bot infections in the enterprise may be more widespread than originally thought.

Botnet operators traditionally have recruited "soft" targets -- home users with little or no security -- and the assumption was that the more heavily fortressed enterprise was mostly immune. But incident response teams and security researchers on the front lines say they are witnessing significant bot activity in enterprises as well...

Rick Wesson, CEO of Support Intelligence, says the rate of botnet infection in the enterprise isn't necessarily increasing -- it just hasn't been explored in detail until recently. "What's changing is the perception. It's been underestimated, underreported, and underanalyzed," Wesson says. "Corporate America is in as bad shape as a user at home."

Wesson says his firm, which does security monitoring, instantly finds dozens of bot-infected client machines in an enterprise customer's network when it starts studying its traffic. "We find dozens of bot-compromised systems off the bat. The longer we stay in [there], the more we find."
(emphasis added)

Wake up, corporate America (and the world). When you open your eyes you're not going to like what you see, but dealing with the truth is better than pretending everything's okay.

Wednesday, October 24, 2007

Are You Secure? Prove It.

Are you secure? Prove it. These five words form the core of my recent thinking on the digital security scene. Let me expand "secure" to mean the definition I provided in my first book: Security is the process of maintaining an acceptable level of perceived risk. I defined risk as the probability of suffering harm or loss. You could expand my five-word question into "Are you operating a process that maintains an acceptable level of perceived risk?"

Let's review some of the answers you might hear to this question. I'll give an opinion regarding the utility of the answer as well.

For the purpose of this exercise let's assume it is possible to answer "yes" to this question. In other words, we don't simply answer "no." We could all make arguments as to why it's impossible to be secure, but does that really mean there is no acceptable level of perceived risk at which you could operate? I doubt it.

So, are you secure? Prove it.

  1. Yes. Then, crickets (i.e., silence, for you non-imaginative folks). This is completely unacceptable. The failure to provide any kind of proof is security by belief. We want security by fact.

  2. Yes, we have product X, Y, Z, etc. deployed. This is better, but it's another expression of belief and not fact. The only fact here is that technologies can be abused, subverted, and broken. Technologies can be simultaneously effective against one attack model and completely worthless against another.

  3. Yes, we are compliant with regulation X. Regulatory compliance is usually a check-box paperwork exercise whose controls lag the attack models of the day by one to five years, if not more. Calling a compliant enterprise secure is like calling an ocean liner safe because it left dry dock with lifeboats and life jackets. If regulatory compliance is more than a paperwork self-survey, we approach the realm of real evidence. However, I have not seen any compliance assessments that measure anything of operational relevance.

  4. Yes, we have logs indicating we prevented attacks X, Y, and Z. This is getting close to the right answer, but it's still inadequate. For the first time we have some real evidence (logs) but these will probably not provide the whole picture. Sure, logs indicate what was stopped, but what about activities that were allowed? Were they all normal, or were some malicious but unrecognized by the preventative mechanism?

  5. Yes, we do not have any indications that our systems are acting outside their expected usage patterns. Some would call this rationale the definition of security. Whether or not this answer is acceptable depends on the nature of the indications. If you have no indications because you are not monitoring anything, then this excuse is hollow. If you have no indications and you comprehensively track the state of an asset, then we are making real progress. That leads to the penultimate answer, which is very close to ideal.

  6. Yes, we do not have any indications that our systems are acting outside their expected usage patterns, and we thoroughly collect, analyze, and escalate a variety of network-, host-, and memory-based evidence for signs of violations. This is really close to the correct answer. The absence of indications of intrusion is only significant if you have some assurance that you've properly instrumented and understood the asset. You must have trustworthy monitoring systems in order to trust that an asset is "secure." If this is really close, why isn't it correct?

  7. Yes, we do not have any indications that our systems are acting outside their expected usage patterns, and we thoroughly collect, analyze, and escalate a variety of network-, host-, and memory-based evidence for signs of violations. We regularly test our detection and response people, processes, and tools against external adversary simulations that match or exceed the capabilities and intentions of the parties attacking our enterprise (i.e., the threat). Here you see the reason why number 6 was insufficient. If you assumed that number 6 was ok, you forgot to ensure that your operations were up to the task of detecting and responding to intrusions. Periodically you must benchmark your perceived effectiveness against a neutral third party in an operational exercise (a "red team" event). A final assumption inherent in all seven answers is that you know the assets you are trying to secure, which is no mean feat.
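
To make number 7 concrete, here is a minimal sketch of scoring a red team exercise. The adversary action names and detection results are hypothetical; a real exercise would map actions to your own taxonomy.

```python
# Hypothetical adversary actions executed during the exercise,
# and the subset the detection/response operation actually caught.
red_team_actions = {"phish-foothold", "lateral-smb", "priv-esc",
                    "staging-archive", "dns-exfil"}
detected_by_ops = {"phish-foothold", "lateral-smb"}

missed = red_team_actions - detected_by_ops
rate = len(red_team_actions & detected_by_ops) / len(red_team_actions)

print(f"Detection rate: {rate:.0%}")
print("Missed actions:", ", ".join(sorted(missed)))
# Answer 6 alone would have reported "no indications" for the three
# missed actions; only the exercise reveals the monitoring gap.
```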


Incidentally, this post explains why deploying a so-called IPS does nothing to ensure "security." Of course, you can demonstrate that it blocked attacks X, Y, and Z. But how can you be sure it didn't miss something?

If you want to spend the least amount of money to take the biggest step towards Magnificent Number 7, you should implement Network Security Monitoring.
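
To illustrate the gap between answers 4 and 6, here is a minimal sketch of the kind of question session data lets you ask: not "what did we block?" but "what did we allow, and was any of it suspicious?" The record layout, ports, and thresholds are hypothetical.

```python
# Session records the sensor saw the network *allow*; the field
# layout (src, dst, dst_port, bytes) is hypothetical.
allowed_sessions = [
    ("10.1.1.5",  "192.0.2.10",   80,    4_200),
    ("10.1.1.9",  "198.51.100.7", 6667, 950_000),
    ("10.1.1.12", "203.0.113.3",  443,   2_100),
]

SUSPICIOUS_PORTS = {6667}        # e.g., IRC from a server network
VOLUME_THRESHOLD = 500_000       # bytes; tune to your environment

for src, dst, port, nbytes in allowed_sessions:
    if port in SUSPICIOUS_PORTS or nbytes > VOLUME_THRESHOLD:
        print(f"allowed but suspicious: {src} -> {dst}:{port} ({nbytes} bytes)")
```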

Microsoft, Explain Threats to Microsoft

The Microsoft Malware Protection Center recently published its third Security Intelligence Report. The front page of the report says:

An in-depth perspective on software vulnerabilities and exploits, malicious code threats, and potentially unwanted software, focusing on the first half of 2007

Inside it continues:

This report provides an in-depth perspective on software vulnerabilities (both in Microsoft software and third-party software), software exploits (for which there is a related MSRC bulletin), malicious software, and potentially unwanted software. The lists below summarize the key points from each section of the report...

The number of disclosures of new software vulnerabilities across the industry continues to be in the thousands...


Contrast that proper use of the word vulnerabilities in those excerpts with the incorrect use of the word threat in the quotes I noted in Someone Please Explain Threats to Microsoft:

As you go about filling in the threat model threat list, it’s important to consider the consequences of entering threats and mitigations. While it can be easy to find threats, it is important to realize that all threats have real-world consequences for the development team...

When we’re threat modeling, we should ensure that we’ve identified as many of the potential threats as possible (even if you think they’re trivial). At a minimum, the threats we list that we chose to ignore will remain in the document to provide guidance for the future.


In that excerpt, all uses of the word threat should be replaced with the word vulnerability, with the possible exception of the term "threat modeling." In reality it should be "attack modeling," but in all other cases Microsoft is clearly talking about discovering holes/flaws/problems in their software, i.e., vulnerabilities.

So, it seems that the people who have the big security picture -- those who write the Microsoft Security Intelligence Reports -- know the difference between a threat and a vulnerability. The developers who focus on Microsoft's software -- those exercising the Microsoft Security Development Lifecycle -- are using "threat" when they should be saying "vulnerability."

It would be good for the SIR people to talk to the SDLC people. Without that coordination Microsoft's developers will continue to view the security problem incorrectly, and by extension, so will the customers who look to Microsoft for intellectual guidance.

On a related note, I was happy to see the latest SIR available as a .pdf.

FreeBSD 7.0 Developments

I am happy to announce that progress is being made towards the release of FreeBSD 7.0. This announcement says the release cycles for FreeBSD 7.0 and 6.3 have begun. The first 7.0-BETA1 ISO images, which you might want to test on a fresh system, have been published. According to the announcement, "Instructions on using FreeBSD Update to perform a binary upgrade from FreeBSD 6.x to 7.0-BETA1 will be provided via the freebsd-stable list when available."

The FreeBSD 7.0 release schedule is available, and it shows FreeBSD 7.0 scheduled for publication on 17 Dec 07. I would love to see that happen, but releases like this usually slip about a month. Still, given the time between now and December, it's possible 7.0 will arrive by the end of the year. It looks like the todo list is rather small.

While researching this story I found Bruce Mah's FreeBSD Release Documentation Snapshot Page. A large amount of documentation for each release is published there.

When 7.0 is available I will probably use it in production. I had no problem with 6.0 in production, which is a departure from my experience with 5.0 and 5.1. I didn't transition from the 4.x line in production until 5.2.1 was released.

Sunday, October 21, 2007

Counterintelligence and the Cyber Threat

Friday I attended an open symposium hosted by the Office of the National Counterintelligence Executive (ONCIX). It was titled Counterintelligence and the Cyber Threat and featured speakers and panels from government, law enforcement, industry, legal, and academic organizations. I attended as a representative of my company because our CSO, Frank Taylor, participated in the industry panel.

If you're not familiar with the term counterintelligence, let me reproduce a section from the ONCIX Web site:

Counterintelligence is the business of identifying and dealing with foreign intelligence threats to the United States. Its core concern is the intelligence services of foreign states and similar organizations of non-state actors, such as transnational terrorist groups. Counterintelligence has both a defensive mission — protecting the nation's secrets and assets against foreign intelligence penetration — and an offensive mission — finding out what foreign intelligence organizations are planning to better defeat their aims.

I also recommend reading the National Counterintelligence Strategy of the United States, 2007 (.pdf) which states:

Our adversaries -- foreign intelligence services, terrorists, foreign criminal enterprises and cyber intruders -- use overt, covert, and clandestine activities to exploit and undermine US national security interests. Counterintelligence is one of several instruments of national power that can thwart such activities, but its effectiveness depends in many respects on coordination with other elements of government and with the private sector.

During the Cold War, our nation's adversaries gained access to vital secrets of the most closely guarded institutions of our national security establishment and penetrated virtually all organizations of the US intelligence and defense communities. The resulting losses produced grave damage to our national security in terms of secrets compromised, intelligence sources degraded, and lives lost, and would have been catastrophic had we been at war.
(emphasis added)

Minor note 1: if we were not at war during the "Cold War," then why is it called a "War"? I believe the people who died fighting would call it a war.

Minor note 2: foreign intelligence services, terrorists, and foreign criminal enterprises are all specific parties. "Cyber intruders" are most often agents of one of those parties. Those who perform digital attacks but do not fall into one of those three categories are usually script kiddies or recreational hackers, and should not be explicitly mentioned as counterintelligence targets. My guess is the authors considered cyber-instantiated threats serious enough to mention explicitly, but they did not apply enough intellectual rigor to this sentence (just as with the Cold War passage).

Major note: does the section about penetrating virtually all organizations of the US intelligence and defense communities surprise you? When I attended Air Force intelligence school in 1996-1997, one of our first instructors said:

"Most, if not all of the classified material you will see in your career has already been compromised. However, we have to act as if it's not."

I remember thinking "What?!?" With hindsight, the more I hear about spies found inside government agencies, the more I understand that statement.

I found the symposium fascinating, so I'd like to share a few thoughts. Dr. Joel Brenner, the National Counterintelligence Executive, provided plenty of noteworthy comments. He said that counterintelligence is not security:

  • A security person sees a hole in a fence and wants to patch it.

  • A CI person sees a hole in a fence and wants to understand who created it, how it is being abused, and if it can be turned into an asset to use against the adversary.


Dr. Brenner said about 140 foreign intelligence surveillance organizations currently target the United States. Three strategic issues are at play:

  1. Threats to sovereign (US) networks, especially in the cyber domain. Dr. Brenner said, "There is growing acceptance that we face a cyber counterintelligence problem, not a security problem." I agree with this, and will have more to say about it in a future blog entry. He stressed the alteration attack (rather than the disclosure or destruction attacks) as being the major problem facing US networks.

  2. Acquisition risk, i.e., supply chain risks. Dr. Brenner said we need technically literate lawyers and policymakers to address these risks.

  3. Collaboration, or the lack thereof. Dr. Brenner noted that our current "cooperation model" is a function of our "classification model," resulting in an antiquated system that serves no one well.


One of the most interesting comments was this:

Industry talks risk management but they really do risk acceptance, not risk mitigation.

How true that is!

Chris Inglis, Deputy Director of the NSA and a fellow USAFA grad, used a term I liked with regard to fighting the cyber adversary. He said we need to outmaneuver the adversary, not solve security problems. I love this because it implies "security" can't be "solved," and it provides a reason to review maneuver warfare as a way to counter the adversary.

John McClurg, Vice President for security at Honeywell, described his "validated data" approach to obtaining business buy-in for security initiatives. He collects data to support a security program and presents it to managers as a means to justify his work. This sounds a lot like showing evidence that a business unit is owned or about to be owned. I like this idea and my work with NSM would help provide such data.

Scott O’Neal, Chief, Computer Intrusion Section, Cyber Division, FBI, said, "The adversary is clearly ahead of security. This is a fact we have to accept." This echoes statements I made earlier this year and at other times. The FBI addresses intrusions through three points of view: CT (counterterrorism), CI (counterintelligence), and criminal.

I'll have more to say on this subject in the months ahead.

Saturday, October 20, 2007

Russian Business Network

This week Brian Krebs of Security Fix wrote Shadowy Russian Firm Seen as Conduit for Cybercrime, Taking on the Russian Business Network, Mapping the Russian Business Network, and The Russian Business Network Responds. These are great articles that, at the very least, bring a true threat to a wider audience. This Slashdot post featured a helpful thread providing some technical details on the network itself. If you would like to try identifying some of the networks involved, my post Routing Enumeration might be helpful. Searches via RIPE could also be illuminating; a sketch of one approach appears below.
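
For the curious, the WHOIS protocol itself (RFC 3912) is simple enough to script: open TCP port 43, send one query line, and read until the server closes the connection. Here is a minimal Python sketch against whois.ripe.net; the netblock shown is a documentation placeholder, not a claim about RBN address space.

```python
import socket

def ripe_whois(query: str, server: str = "whois.ripe.net") -> str:
    """Send one WHOIS query (RFC 3912) and return the raw response."""
    with socket.create_connection((server, 43), timeout=10) as sock:
        sock.sendall((query + "\r\n").encode("ascii"))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

# 192.0.2.0/24 is a documentation placeholder; substitute a netblock
# you are actually investigating.
print(ripe_whois("192.0.2.0/24"))
```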

While researching this post I found a few other incredible resources. First, there's a blog -- rbnexploit.blogspot.com -- that started last month. It's exclusively about RBN. Second, I found Nicholas Albright's blog, which covers botnets. Third, there's an absolutely amazing series of articles by Scott Berinato. They are lengthy but definitely worth reading.

Wednesday, October 17, 2007

Review of LAN Switch Security Posted

Amazon.com just posted my three-star review of LAN Switch Security: What Hackers Know About Your Switches. From the review:

I really looked forward to reading LAN Switch Security (LSS), simply because it covered layer 2 issues. These days application security, rootkits, and similar topics get all the press, but the foundation of the network is still critical. Unfortunately, LSS disappointed me enough to warrant this three-star review. I'm afraid those before me who wrote five-star reviews 1) don't read enough other books or 2) don't set their expectations high enough.

The bottom line is that if you want to read a good Cisco security book, the best available is still Hacking Exposed: Cisco Networks.

Monday, October 15, 2007

CSI Annual 2007 Contest

I've been given a press pass to attend CSI 2007 in Washington, DC, 3-9 November 2007. In exchange for posting the following, I've also got a $100 discount for anyone using the code CSI2007.

CSI Annual Conference 2007
November 3-9, 2007
Hyatt Regency Crystal City
Arlington, Virginia
www.CSIAnnual.com

CSI 2007, held November 3-9 in Arlington, VA, delivers a business-focused overview of enterprise security. 2,000+ delegates, 80 exhibitors, and 100+ sessions/seminars convene to provide a roadmap for integrating policies and procedures with new tools and techniques.

Register now using code: CSI2007 and save $100 off the conference or get a Free Exhibition Pass at www.csiannual.com.


If you don't think seeing the previous text was worth a $100 discount to my readers, how about this: I have two free full conference passes (together worth over $3000), courtesy of CSI, to be awarded to blog readers.

How do I decide who should get them? I'm going to hold an essay contest. The two best essays (judged by me) that address any of the following categories will win:

  • Discuss why control-compliant approaches to security like FISMA are a disaster.

  • Explain why detection systems should be kept separate from policy enforcement systems; see Considering Convergence? for background.

  • Tell us how Network Security Monitoring helped you perform your detection and response mission for a real incident. (You can anonymize your employer or organization.)

  • Any of the other "hot button" topics you've read at this blog.


Please send an email to taosecurity [at] gmail [dot] com with your answer, or publish it at your blog. Winning entries will be published on this blog.

Entries for the contest must be received by me via email no later than 8 pm eastern time Tuesday 23 October 2007.

Thank you to CSI for providing these free passes and discounts.

Friday, October 12, 2007

Air Force Cyberspace Report

This week I attended Victory in Cyberspace, an event held at the National Press Club. It centered on the release of a report written by Dr. Rebecca Grant for the Air Force Association's Eaker Institute. The report is titled Victory in Cyberspace (.pdf). The panel (pictured at left) included Lt. Gen. Robert J. Elder, Lt Gen. (ret) John R. Baker, and Gen. (ret) John P. Jumper. Dr. Grant is seated at the far right.

As far as the event went, I found it interesting. If you are exceptionally motivated you can download the entire 90 min briefing in .wmv format here. I'd like to share a few thoughts.

First, I was impressed by all the speakers. Lt. Gen. Baker led AIA when I was a Captain there. At the same time Gen. Jumper led Air Combat Command, before becoming Chief of Staff. I learned Lt. Gen. Elder has a PhD in engineering.

Lt. Gen. Elder commented that cyberspace is a domain similar to the ocean, and he specifically drew parallels with the Navy. (This made me wonder why the Navy isn't taking the lead on defending cyberspace.) In order to use the ocean for commercial purposes, the domain must be controlled so ships are protected from harm. Cyberspace is similar, except that in addition to requiring control of the domain in order to use it, the domain must first be created. (No one needs to create an ocean.)

Control, however, does not mean "ownership." Elder specifically stated the Air Force does not plan to "own cyberspace;" cyberspace is more of a "strategic commons" like the ocean. Cyberspace is also not confined only to the Internet. A presentation by Dr. Lani Kass titled Cyberspace: A Warfighting Domain cites the classified National Military Strategy for Cyberspace Operations to define cyberspace as:

a domain characterized by the use of electronics and the electromagnetic spectrum to store, modify, and exchange data via networked systems and associated physical infrastructures.

(Speaking of the NMSCO, I read that a Joint document is en route, according to Joint Staff readies cyber operations plan.)

Elder's presentation featured plenty of military jargon, like the great "OODA loop" (observe, orient, decide, act) and a new "effects chain" (find, fix, target, engage). (That sounds like the OODA loop, doesn't it?)

One of Elder's major points, reflected in the report, is the Air Force's recognition that cyberspace (broadly meaning communications, I believe) is the foundation for all Air Force operations. I would argue that all of the services are equally dependent on cyberspace. That reminds me of the role of United States Transportation Command. It makes sense to me that cyberspace activities are currently part of United States Strategic Command.

USSTRATCOM accomplishes its cyber mission through the Joint Task Force - Global Network Operations (JTF-GNO, led by the commander of Defense Information Systems Agency), Joint Functional Component Command - Network Warfare (JFCC-NW, led by the director of National Security Agency), and Joint Information Operations Warfare Command (JIOWC, led by the commander of Air Force Intelligence, Surveillance, and Reconnaissance Agency).

If cyberspace is truly a warfighting domain (alongside land, sea, and aerospace), I don't see how one can argue against an independent Cyber Force. (I don't argue for a separate Space Force because I think the Air Force will eventually be the Aerospace Force.) Elder rejects the idea of an independent Cyber Force in Dr. Grant's report, but the Army had the same feeling about the Air Corps before 1947. We can separate the world into physical and virtual, or as the military likes to say, "kinetic" and "non-kinetic." I find it hard to believe that a cyber operator who reads and manipulates hex is going to find much in common with someone who kills people by exploding ordnance.

Elder mentioned some of the tasks the Air Force expects to perform to better secure its networks. These included a "cyber standardization and evaluation team," application assurance testing, software tamper detection via signatures and hashes, clusters of systems voting on proper outcomes, "cyber sidearms" in the form of tools on individual laptops, and a specific cyber Air Force Specialty Code (AFSC). If this had happened 10 years ago my career would have been very different and probably much longer!

Elder finished his talk describing how the US Code affects Air Force activities. For example, Title 10 (Armed Forces) restricts the work of the active duty military. Similar restrictions affect the intelligence community through Title 50 (War and Defense). However, because the Air National Guard operates under Title 32 (National Guard), it has more room to help the commercial sector and local governments with network defense. Elder said he would like to see Guard cyber units in every state, from the size of a squadron up to a wing. I thought this was a fairly exciting concept, since the Guard is likely to contain people with industry experience.

Lt. Gen. Baker and Gen. Jumper only spoke for a few minutes each. Jumper really hammered the acquisition community for providing the "block 40 upgrade to the block 30 capability" and thinking that helps the warfighter. He recommended writing Concepts of Operations before deciding what to buy. (Wow, sounds just like the commercial world; don't let vendors drive your security program!) Jumper said we need a "PhD-quality Weapons School," aggressor forces, and policy and doctrine modeled on offensive and defensive counter-air operations.

In the question phase, when asked why the bad guys are "so much better" than the good guys, Jumper replied, "Bad guys don't have policy constraints." I believe it was Baker who stated that the biggest problem he sees in industry is the attitude of "we don't think it [breaches] can happen to us," when in fact "it's happening every day."

As for the report itself, I realized the author did not have any experience in the topic of computer network defense, exploitation, or warfare. Having just watched two shows on Army and Marine snipers, I thought about how it must sound to a sniper when a non-sniper writes a report on sniper craft. Disappointingly, the Estonia "cyberwar" was presented as the galvanizing action that should stir everyone's pot. In describing the event, the report's author wrote:

The attackers also used illicitly linked computers around the globe to mount an enhanced onslaught. These attacks were conducted by networks of "bots" -- a bot being an automated program that accesses web sites and traverses the site by following links on its pages.

So, it appears we should pin the blame on Web crawlers. Sigh.

I also read about "Windows 1.0" being released in August 1995 and "Windows 2.0" in November 1995.

Apparently no one did a technical edit of this report. It clearly took a lot of work to write, however. There's plenty of history, references, and interviews. I would not have wanted to undertake this task, since I would have required a few years to get the history right.

I found this one item immensely interesting, so I'll close with it:

[One] difficulty is estimating the scope of the mission. "We are well past the $5 billion per year mark, and I don't know where the top end is," commented one STRATCOM official. "The $5 billion is mostly on defense. We buy huge amounts of software and people to run that, but it's totally ineffective against Tier III" cyber [advanced persistent] threats, this official noted. (emphasis added)

Thursday, October 11, 2007

Alternatives to "Expert Opinions"

If you read The Doomsday Clock you probably recognize I have a dim opinion of "expert opinion," especially by committee. At the risk of making a political statement, I rank expert opinion alongside central planning as some of the worst ways to make decisions -- at least where a large amount of complexity must be accommodated.

What is my alternative? I believe free markets are the best way to synthesize competing data points to produce an assessment. Does this sound familiar? If yes, you may be thinking of this 2003 story, The Case for Terrorism Futures:

Critics blasted policy-makers Tuesday for dropping a controversial plan to create a futures market to help predict terrorist strikes...

[S]upporters of the project point out that gathering intelligence is often a messy business, with payoffs to unsavory characters and the elimination of potential adversaries. The futures market, ugly as it may sound, doesn't involve any of those moral compromises, said Robin Hanson, one of the earlier promoters of the concept of trading floors for ideas and a PAM [Policy Analysis Market] project contributor. It's just a way of capturing people's collective wisdom...

Projects similar to PAM, like the Iowa Electronic Markets, which speculate on election results, have been surprisingly reliable indicators of what's going to happen next...

The price of orange juice futures has even been shown to accurately predict the weather...

Traders on the Hollywood Stock Exchange last year correctly picked 35 of the 40 Oscar nominees in the eight biggest categories, according to The New Yorker magazine...

"Market mechanisms are more accurate than asking people their opinions because they're putting their money or reputation on the line," said Ken Killitz of the Foresight Exchange, which speculates on everything from the future of human cloning to the possibility that Roman Catholic priests will be allowed to marry. "It gives people an incentive to reveal what they know..."

[E]xchanges "tend to predict events really well when no one person knows the answer -- when information is distributed among many people with different knowledge bases," said Joyce Berg, a University of Iowa professor who helped organize the political trading floors...

Markets also bring together people with information about a particular subject in a way blue-ribbon panels of experts can't, added Hanson.

"You get people that know things about a subject, but don't have the credentials to say so," he said. "You get people who live in these areas (of the Middle East)."

There's also "less of an ability to spin" in markets than in policy debates, Hanson noted. "So you get what people actually think, not what they say."


I love this idea. The fact that intellectual pygmies in the Senate defeated it is a real shame.

I found many interesting articles on this subject by Robin D. Hanson of George Mason University, and I also came across Oxford's Future of Humanity Institute, which offers a Global Catastrophic Risks program that is probably more interesting (but less marketing-savvy) than the Doomsday Clock.
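
Hanson's logarithmic market scoring rule (LMSR) shows the mechanics: a market maker always quotes prices that can be read as consensus probabilities, and traders pay to move them. Here is a minimal sketch with illustrative numbers; the liquidity parameter B is an arbitrary choice.

```python
from math import exp, log

B = 100.0  # liquidity parameter: higher means prices move more slowly

def cost(q):
    """Market maker's cost function over outstanding shares per outcome."""
    return B * log(sum(exp(qi / B) for qi in q))

def prices(q):
    """Price of each outcome; prices sum to 1 and read as probabilities."""
    z = sum(exp(qi / B) for qi in q)
    return [exp(qi / B) / z for qi in q]

q = [0.0, 0.0]                 # two outcomes: event occurs / does not
print(prices(q))               # [0.5, 0.5] -- no information yet

shares = 50.0                  # an informed trader buys "event occurs"
paid = cost([q[0] + shares, q[1]]) - cost(q)
q[0] += shares
print(prices(q), f"trader paid {paid:.2f}")
# The quoted probability rises toward the trader's belief, and the
# payment is the "money on the line" that encourages honest revelation.
```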

If you're sufficiently motivated to start arguing against this idea, I will probably just point back into the literature (especially Hanson's) countering these complaints.

If you're wondering why I mention this at all, it ties into my mention of security breach derivatives in my post Excerpts from Ross Anderson / Tyler Moore Paper.

Wednesday, October 10, 2007

The Doomsday Clock

Tonight I finished watching a show called The Doomsday Clock, on the best TV channel (the History Channel, of course). I was vaguely aware of the clock, maintained by the Bulletin of the Atomic Scientists, but I didn't know the history of the project. According to Minutes to Midnight:

The Bulletin of the Atomic Scientists’ Doomsday Clock conveys how close humanity is to catastrophic destruction--the figurative midnight--and monitors the means humankind could use to obliterate itself. First and foremost, these include nuclear weapons, but they also encompass climate-changing technologies and new developments in the life sciences and nanotechnology that could inflict irrevocable harm.

Interesting -- you know what this is? It's a risk assessment. In my first book I defined risk as the probability of suffering harm or loss. The Doomsday Clock supposedly displays how close we are to world-ending catastrophe.

I find two aspects of the clock appealing.



First, as depicted by Information Aesthetics, the clock rapidly and clearly communicates its message. If you see fewer and fewer minutes until midnight, you sense something bad is about to happen. It's language-neutral and concise.



Second, the act of moving the hands and then tracking hand position over time provides a sense of risk trending. As depicted by Wikipedia above, you can get a historical reading of risk by watching the number of minutes to midnight rise and fall. The interval between the hand position changes is also significant.
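
As a toy illustration of that trending idea, the sketch below plots a few of the Bulletin's published settings (minutes to midnight by year). The series here is partial; consult the Bulletin's timeline for the full history. It assumes matplotlib is available.

```python
import matplotlib.pyplot as plt

# A few published settings (year: minutes to midnight).
settings = {1947: 7, 1949: 3, 1953: 2, 1963: 12, 1991: 17, 2007: 5}

years, minutes = zip(*sorted(settings.items()))
plt.step(years, minutes, where="post", marker="o")
plt.gca().invert_yaxis()  # higher on the chart = closer to midnight
plt.xlabel("Year")
plt.ylabel("Minutes to midnight")
plt.title("Doomsday Clock as a risk trend")
plt.show()
```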

The problem with the Doomsday Clock is the same problem found in many, if not most, risk assessments. It is more or less arbitrary. In fact, the creation of the clock and the initial position of its hands were completely arbitrary! The clock's designer, artist Martyl Langsdorf, created it for the June 1947 issue of the Bulletin. She positioned the hands to be aesthetically pleasing, not to show how close we were to destruction. When you consider the amount of time she could have worked with (12 hours), limiting herself to a fifteen-minute window set a precedent for the next sixty years. While the clock has moved outside this fifteen-minute window (for example, in 1991), the precedent was set too narrowly. What will the Bulletin do when even greater threats exist -- move to second and then nanosecond increments?

In response to the Soviets' 1949 detonation of their first atomic weapon, Bulletin founder and editor Eugene Rabinowitch told Langsdorf to move the hands from 7 minutes to midnight to 3 minutes to midnight. Again, this choice was made basically to convey urgency. Only when the hands were moved on the magazine cover did readers start to appreciate the information conveyed by the clock.

From that point forward, the hands have moved back and forth as Bulletin members and, more recently, outside parties have haggled over their position. I have a feeling these meetings would drive me crazy. It's a collection of people with opinions arguing about the location of hands on a clock originally created for artistic value. Still, as noted in my two "appealing" points, I think we can learn some lessons from the Doomsday Clock regarding the ability to quickly and powerfully communicate risk to others.

While researching this post I discovered that the ACLU jumped on the "clock bandwagon" with its Surveillance Society Clock. According to the ACLU, "It's six minutes before midnight as a surveillance society draws near within the United States." This is dumb for multiple reasons.

First, the ACLU chose a digital clock. I don't know about you, but for me a digital clock doesn't convey an amount of time as visually as an analog clock. It's like a speedometer; seeing the needle pegged to the right is more powerful than reading "101 MPH" or similar. Second, as Wired magazine astutely asked, how do we know when we're there? It's tough to ignore Armageddon; it's easy to ignore a "surveillance state." Third, the ACLU painted itself into the same corner as the Bulletin when it chose to set its initial time so close to midnight. What's the ACLU going to do with the clock when remote mind-reading is in use?

Be the Caveman Lawyer

A few weeks ago I recommended that security people at least Be the Caveman and perform basic adversary simulation / red teaming. Now I read Australia's top enterprises hit by laymen hackers in less than 24 hours:

A penetration test of 200 of Australia's largest enterprises has found severe network security flaws in 79 percent of those surveyed.

The tests, undertaken by University of Technology Sydney (UTS), saw 25 non-IT students breach security infrastructure and gain root or administration level access within the networks of Australia's largest companies, using hacking tools freely available on the Internet.

The students - predominately law practitioners - were given 24 hours to breach security infrastructure on each site and were able to access customer financial details, including confidential insurance information, on multiple occasions.

High-level business executives from the companies surveyed, rather than IT staff, were informed of the tests so the "day-to-day network security" of businesses could be tested.
(emphasis added)

Again, my advice is simple, but now it is modified. Be the Caveman Lawyer.

One other point from the article:

Most of the 21 percent of companies who passed the penetration tests owed their success to freeware Intrusion Detection Systems (IDSs), according to Ghosh.

Snort was mentioned earlier in the article. That means you can be a Cheap Caveman Lawyer and prepare for common threats.

Sunday, October 07, 2007

One Review and One Prereview

Amazon.com just published my five star review of Security Data Visualization by Greg Conti. From the review:

Security Data Visualization (SDV) is a great book. It's perfect for readers familiar with security who are looking to add new weapons to their defensive arsenals. Even offensive players will find something to like in SDV. The book is essentially an introduction to the field, but it is well-written, organized, and clear. I recommend all security analysts read SDV.

I give five star reviews to books that meet certain criteria. First, the book should change the way I look at a problem, or properly introduce me to thinking about a problem for which I have little or no frame of reference. Although I have been a security analyst for ten years, I have little visualization experience. Author Greg Conti spent just the right amount of time explaining the field, describing key terms (preattentive processing, occlusion, brushing) and displays (star plots, small multiples, TreeMaps). I loved the author's mention of Ben Shneiderman's visualization mantra: "overview first, zoom and filter, details on demand" (p 14).


I'd like to mention another great No Starch book called Linux Firewalls by my friend Mike Rash. Mike was kind enough to ask me to write the foreword. If you look at my quote on the front cover (click on the image) you might think "Wow, Bejtlich is creative." Here's the context for that quote, from the foreword:

I'd like to conclude these thoughts by speaking as a book reviewer and author. Between 2000 and mid-2007 I've read and reviewed nearly 250 technical books. I've also written several books, so I believe I can recognize a great book when I see it. "Linux Firewalls" is a great book. As a FreeBSD user, "Linux Firewalls" is good enough to make me consider using Linux in certain circumstances!

No Starch has several more great books on the way, including Absolute FreeBSD, 2nd Ed (on FreeBSD 7.x) and several others.

Saturday, October 06, 2007

Intruders Continue to Be Unpredictable

One of my three basic security principles is advanced intruders are unpredictable. Believing you can predict what intruders are going to do next results in soccer-goal security. As I said in Pescatore on Security Trends, advanced attackers are digital innovators. I think I will start calling advanced intruders intrupreneurs.

I just read and watched great examples of this principle in action courtesy of pdp at CITRIX: Owning the Legitimate Backdoor. I recommend reading the post and watching the two videos. If you are practicing Network Security Monitoring, I recommend querying your session data, as far back as you have stored it, for all incoming Citrix traffic, and looking for unusual or unexpected activity (a sketch appears below). If you are not practicing NSM already, I suggest beginning emergency NSM to watch your Citrix servers.
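
For illustration, here is a minimal sketch of such a query over session data exported to CSV. The column names and internal address prefix are hypothetical placeholders; the ports are the traditional Citrix ones (TCP 1494 for ICA, TCP 2598 for session reliability).

```python
import csv

CITRIX_PORTS = {1494, 2598}  # ICA and session reliability (CGP)

def incoming_citrix(path: str, internal_prefix: str = "10."):
    """Yield stored session records for inbound connections to Citrix ports."""
    with open(path, newline="") as f:
        # Hypothetical columns: start, src_ip, dst_ip, dst_port
        for row in csv.DictReader(f):
            if (int(row["dst_port"]) in CITRIX_PORTS
                    and not row["src_ip"].startswith(internal_prefix)):
                yield row

for rec in incoming_citrix("sessions.csv"):
    print(rec["start"], rec["src_ip"], "->", rec["dst_ip"])
```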

It's important to realize that you may not even know you have certain Citrix servers active on your network. The flip side of the "intruders are unpredictable" principle is that your network is probably unpredictable too! In other words, you could be happy thinking "we have no Citrix servers," but after looking via NSM you find you do. It's probable a bad guy found them before you did, but courtesy of NSM you have data about what happened. More often than not, that's the best you can do with your time and resources.

Monday, October 01, 2007

NSM and Sguil in October InfoSecMag

I just noticed that Russ McRee published an article titled Putting Snort to Work in the October 2007 Information Security Magazine, covering Network Security Monitoring and Sguil by way of Knoppix-NSM. I really enjoy Russ' Toolsmith articles in the ISSA Journal.

Someone Please Explain Threats to Microsoft

It's 2007 and some people still do not know the difference between a threat and a vulnerability. I know these are just the sorts of posts that make me all sorts of new friends, but nothing I say will change their minds anyway. To wit, Threat Modeling Again, Threat Modeling Rules of Thumb:

As you go about filling in the threat model threat list, it’s important to consider the consequences of entering threats and mitigations. While it can be easy to find threats, it is important to realize that all threats have real-world consequences for the development team.

At the end of the day, this process is about ensuring that our customer’s machines aren’t compromised. When we’re deciding which threats need mitigation, we concentrate our efforts on those where the attacker can cause real damage.

When we’re threat modeling, we should ensure that we’ve identified as many of the potential threats as possible (even if you think they’re trivial). At a minimum, the threats we list that we chose to ignore will remain in the document to provide guidance for the future.


Replace every single instance of "threat" in that section with "vulnerability" and the wording will make sense.

Not using the term "threat" properly is a hallmark of Microsoft publications, as mentioned in Preview: The Security Development Lifecycle. I said this in my review of Writing Secure Code, 2nd Ed:

The major problem with WSC2E, often shared by Microsoft titles, is the misuse of terms like "threat" and "risk." Unfortunately, the implied meanings of these terms varies depending on Microsoft's context, which is evidence the authors are using the words improperly. It also makes it difficult for me to provide simple substitution rules. Sometimes Microsoft uses "threat" when they really mean "vulnerability." For example, p 94 says "I always assume that a threat will be taken advantage of." Attackers don't take advantage of threats; they ARE threats. Attackers take advantage of vulnerabilities.

Sometimes Microsoft uses terms properly, like the discussion of denial of service as an "attack" in ch 17. Unfortunately, Microsoft's mislabeled STRIDE model supposedly outlines "threats" like "Denial of service." Argh -- STRIDE is just an inverted CIA AAA model, where STRIDE elements are attacks, not "threats." Microsoft also sometimes says "threat" when they mean "risk." The two are not synonyms. Consider this from p 87: "the only viable software solution is to reduce the overall threat probability or risk to an acceptable level, and that is the ultimate goal of 'threat analysis.'" Here we see Microsoft confusing threat and risk, and calling what is really risk analysis a "threat analysis." Finally, whenever you read "threat trees," think "attack trees" -- and remember Bruce Schneier worked hard on these but is apparently ignored by Microsoft.


These sentiments reappeared in my review of Security Development Lifecycle: Microsoft continues its pattern of misusing terms like "threat" that started with "Threat Modeling" and WSC2E. SDL demonstrates some movement on the part of the book's authors towards more acceptable usage, however. Material previously discussed in a "Threat Modeling" chapter in WSC2E now appears in a chapter called "Risk Analysis" (ch 9) -- but within the chapter, the terms are mostly still corrupted. Many times Microsoft misuses the term risk too. For example, p 94 says "The Security Risk Assessment is used to determine the system's level of vulnerability to attack." If you're making that decision, it's a vulnerability assessment; when you incorporate threat and asset value calculations with vulnerabilities, that's true risk assessment.

The authors try to deflect what I expect was criticism of their term misuse in previous books. On p 102 they say "The meaning of the word threat is much debated. In this book, a threat is defined as an attacker's objective." The problem with this definition is that it exposes the problems with their terminology. The authors make me cringe when I read phrases like "threats to the system ranked by risk" (p 103) or "spoofing threats risk ranking." On p 104, they are really talking about vulnerabilities when they write "All threats are uncovered through the analysis process." The one time they do use threat properly, it shows their definition is nonsensical: "consider the insider-threat scenario -- should your product protect against attackers who work for your company?" If you recognize that a threat is a party with the capabilities and intentions to exploit a vulnerability in an asset, then Microsoft is describing insiders appropriately -- but not as "an attacker's objective."

Don't get me wrong -- there's a lot to like about SDL. I gave the book four stars, and I think it would be good to read it. I fear, though, that this is another book distributed to Microsoft developers and managers riddled with sometimes confusing or outright wrong ways to think about security. This produces lasting problems that degrade the community's ability to discuss and solve software security problems.


No one is going to take us seriously until we use the right terms. Argh.