Saturday, September 29, 2007

Three Prereviews

I am fairly excited by several new books which arrived at my door last week. The first is Security Data Visualization by Greg Conti. I was pleased to see not only a book on visualization, but one printed in color! I expect to learn quite a bit from this book and hope to apply some of the lessons to my own work. The next book is End-to-End Network Security: Defense-in-Depth by Omar Santos. This book seems like a Cisco-centric approach to defending a network, but I decided to take a look when I noticed sections on forensics, visibility, and telemetry. The author includes several diagrams which show how to get information from a variety of devices in a manner similar to NSM. I hope to be able to operationalize this information as well. The last new book is LAN Switch Security: What Hackers Know About Your Switches by Eric Vyncke and Christopher Paggen. This book looks really interesting. It is probably going to be my favorite of these three. I don't spend much time in my classes talking about layer 2 defenses, so it is cool to see a modern book just about that topic. I believe most enterprises do little with layer 2 security, so perhaps this book can improve that situation.

Cyberinsurance in IT Security Management

One more thought before I retire this evening. I really enjoyed reading Cyberinsurance in IT Security Management by Walter S. Baer and Andrew Parkinson. Here are my favorite excerpts.

IT security has traditionally referred to technical protective measures such as firewalls, authentication systems, and antivirus software to counter such attacks, and mitigation measures such as backup hardware and software systems to reduce losses should a security breach occur. In a networked IT environment, however, the economic incentives to invest in protective security measures can be perverse. My investments in IT security might do me little good if other systems connected to me remain insecure because an adversary can use any unprotected system to launch an attack on others.

In economic terms, the private benefits of investment are less than the social benefits, making networked IT security a public good — and susceptible to the free-rider problem. As a consequence, private individuals and organizations won’t invest sufficiently in IT security to provide an optimal (or even adequate) level of societal protection.

In other areas, such as fire protection, insurance has helped align private incentives with the overall public good. A building owner must have fire insurance to obtain a mortgage or a commercial business license. Obtaining insurance requires that the building meet local fire codes and underwriting standards, which can involve visits from local government and insurance company inspectors. Insurance investigators also follow up on serious incidents and claims, both to learn what went wrong and to guard against possible insurance abuses such as arson or fraud. Insurance companies often sponsor research, offer training, and develop best-practice standards for fire prevention and mitigation.

Most important, insurers offer lower premiums to building owners who keep their facilities clean, install sprinklers, test their control systems regularly, and take other protective measures. Fire insurance markets thus involve not only underwriters, agents, and clients, but also code writers, inspectors, and vendors of products and services for fire prevention and protection. Although government remains involved, well-functioning markets for fire insurance keep the responsibility for and cost of preventive and protective measures largely within the private sector.

That is so compelling. Unfortunately, the cyberinsurance market is currently small:

[B]usinesses now generally buy stand-alone, specialized policies to cover cyberrisks. According to Betterley Risk Consultants surveys, the annual gross premium revenue for cyberinsurance policies has grown from less than US$100 million in 2002 to US$300 to 350 million by mid 2006. These estimates, which are based on confidential survey responses from companies offering cyberinsurance, are nearly an order of magnitude below earlier projections made by market researchers and industry groups such as the Insurance Information Institute.

But Betterley, like many other industry experts, believes that cyberinsurance will be one of the fastest growing segments of the property and casualty market over the next several years. With only 25 percent of respondents to the most recent Computer Security Institute/US Federal Bureau of Investigation Computer Crime and Security survey reporting that, “their organizations use external insurance to help manage cybersecurity risks,” the market has plenty of room for growth.

So what are the problems?

The reported 25 percent cyberinsurance adoption rate appears low to many observers, given well-publicized increases in IT security breaches and greater regulatory pressures to deal with them. Although we could partially attribute the slow uptake to how long it takes organizations to acknowledge new security risks and budget for them, several other factors seem to be of particular concern for cyberinsurance. They include problems of asymmetric information, interdependent and correlated risks, and inadequate reinsurance capacity...

Insurance companies feel the effect of asymmetric information both before and after a customer signs an insurance contract. They face the adverse selection problem—that is, a customer who has a higher risk of incurring a loss (through risky behaviors or other—perhaps innate—factors) will find insurance at a given premium more attractive than a lower-risk customer. If the insurer can’t differentiate between them—and offer differentiated premiums—it won’t be able to sustain a profitable business.

Of course, to some extent, insurance companies can differentiate between risk types; sophisticated models can predict risk for traditional property/casualty insurance, and health insurance providers try to identify risk factors through questionnaires and medical examinations. Insurers can also apply these mechanisms to cyberinsurance: they can undertake rigorous security assessments, examining in-depth IT deployment and security processes.

Although such methods can reduce the asymmetric information between insurer and policyholder, they can never completely eliminate it. Particularly in the information security field, because risk depends on many factors, including technical and human factors and their interaction, surveys can’t perfectly quantify risk, and premium differentiation will be imperfect.

The second impact of asymmetric information occurs after an insurance contract has been signed. Insured parties can take (hidden) actions that increase or decrease the risk of claiming (for example, in the case of car insurance, driving carelessly, not wearing a seatbelt, or failing to properly maintain the car), but the insurer can’t observe the insured’s actions perfectly. Under full insurance, an individual has little incentive to undertake precautionary measures because any loss is fully compensated—a problem economists term moral hazard.

Insurers may be able to mitigate certain actions through partial insurance (so making a claim carries a monetary or convenience cost) and clauses in the insurance contract—for example, policyholders must usually meet a set standard of care, and fraudulent or other criminal actions (such as arson) are prohibited. However, many actions remain unobservable, and it’s difficult to prove that a client didn’t meet a due standard of care.

Cyberinsurers could administer surveys at regular intervals and link coverage to a certain minimum standard of security. Although this might be feasible from a technical standpoint, human factors are often the weakest link in the chain and possibly unobservable, so the moral hazard problem might not be completely alleviated, implying that the purchase of cyberinsurance could in fact reduce efforts on information security. Nevertheless, purchasers also have incentives to increase effort—that is, to invest in security to obtain insurance or reduce premiums—that would outweigh moral hazard effects in a viable and well-functioning market.

The problem of asymmetric information is common to all insurance markets; however, most markets function adequately given the range of tactics used by insurance companies to overcome these information asymmetries. Many of these remedies have developed over time in response to experience and result in the well-functioning insurance markets we see today.

This gives me some hope. The article continues:

[G]overnment actions to spur development of the cyberinsurance market could include assigning liability for IT security breaches, mandating incident reporting, mandating cyberinsurance or financial responsibility, or facilitating reinsurance by indemnifying catastrophic losses. Clarifying liability law to assign liability “to the party that can do the best job of managing risk” would make good economic sense, but it seems a political nonstarter in the US—and the problem’s global nature would require a global response.

Similarly, government regulations that mandate reporting of cyberincidents (similar to that required for civil aviation incidents and contagious disease exposures) appear to have little political support. Probably more plausible in the short run would be contractual requirements that government contractors carry cyberliability insurance on projects highly dependent on IT security...

Jane Winn of the University of Washington School of Law has proposed a self-regulatory strategy, based on voluntary disclosures of compliance with security standards and enforcement through existing trade practices law, as a politically more viable alternative than new government regulation. Such a strategy would require increased public awareness of cybersecurity (with possible roles for government) as well as public demand that organizations disclose whether they comply with technical standards or industry best practices.

Disclosures would be monitored for compliance by their customers and competitors; and in the case of deceptive advertising, the US Federal Trade Commission could take enforcement action under existing regulation. This strategy could spur cyberinsurance adoption, which would indicate that the organization has passed a security audit or otherwise met underwriters’ security standards.

Perhaps the most important role for government would be to facilitate a full and deep cyberreinsurance market, as the UK and US have done for reinsurance of losses due to acts of terrorism.

What a great article. I recommend reading it.

Security Staff as Ultimate Insurance

I'm continuing to cite the Fifth Annual Global State of Information Security:

Speaking of striking back, the 2007 security survey shows a remarkable (some might say troubling) trend.

The IT department wants to control security again.

In the first year of collaboration on this survey, CIO, CSO and PWC noted that the more confident a company was in its security, the less likely that company's security group reported to IT. Those companies also spent more on security.

The reason CIO and CSO have always advocated for the separation of IT and security is the classic fox-in-the-henhouse problem. To wit, if the CIO controls both a major project dedicated to the innovative use of IT and the security of that project — which might slow down the project and add to its cost — he's got a serious conflict of interest. In the 2003 survey, one CISO said that conflict "is just too much to overcome. Having the CISO report to IT, it's a death blow."

Ouch. CIO continues:

What's going on here? Johnson has one theory: "Security seems to be following a trajectory similar to the quality movement 20 or 30 years ago, only with security it's happening much faster. During the quality movement, everyone created VPs of quality. They got CEO reporting status. But then in 10 years the position was gone or it was buried."

In the case of the quality movement, Johnson says, that may have been partly because quality became ingrained, a corporate value, and it didn't need a separate executive. But the evidence in the survey suggests that security is neither ingrained nor valued. It's not even clear companies know where to put security, which would explain the "gobs of dotted line" reporting structures.

That brings us to another theory: organizational politics. What if separating security from IT were creating checks on software development (not a bad thing, from a security standpoint)? What if all this security awareness the survey has indicated actually exposed the typical IT department's insecure practices?

One way for IT to respond would be to attempt to defang security. Keep its enemy close. Pull the function back to where it can be better controlled.

Interesting. The article finishes with these thoughts:

[M]aybe security was never as separate as it seemed. Companies created CISO-type positions but never gave them authority. "I continually see security people put in the position of fall guy," says Woerner of TD Ameritrade. "Maybe some of that separation was, subconsciously, creating a group to take the hit."

This leads me to the title of my post. What if security staff is the ultimate insurance -- for the CIO? In other words, what if the CIO performs "security theater," creating a CISO position and staff, but doesn't give the CISO the authority or resources to properly defend the enterprise? If no breaches occur (or none are noticed), then the CIO looks like a hero for keeping security spending low. If a breach does occur (and is discovered), the CIO blames the CISO. The CISO is fired and the CIO keeps his/her job -- at least for now. I don't see a CIO executing this strategy more than once successfully.

What do you think?

Friday, September 28, 2007

Visibility, Visibility, Visibility

CIO Magazine's Fifth Annual Global State of Information Security features an image of a happy, tie-wearing corporate security person laying bricks to make a wall, while a dark-clad intruder with a crowbar violates the laws of physics by lifting up another section of the wall as if it were made of fabric. That's a very apt reference to Soccer Goal Security, and I plan to discuss security physics in a future post. Right now I'd like to feature a few choice excerpts from the story:

Awareness of the problematic nature of information security is approaching an all-time high. Out of every IT dollar spent, 15 cents goes to security. Security staff is being hired at an increasing rate. Surprisingly, however, enterprise security isn't improving...

Are you feeling the disquiet that comes from knowing there's no reason why your company can't be the next TJX? The angst of knowing that these modern plagues — these spam e-mails, these bots, these rootkits — will keep coming at you no matter how much time and money you spend trying to stop them? The chill that comes from knowing how much you don't know...

You're undergoing a shift from a somewhat blissful ignorance of the serious flaws in computer security to a largely depressing knowledge of them...

"That next level of maturity has not been reached," says Mark Lobel, a principal with PWC's advisory services. "We have the technology but still don't have our hands around what's important and what we should be monitoring and protecting."

Not everyone has shifted from "somewhat blissful ignorance" to "largely depressing knowledge" yet, but they'll get there eventually.

Five years ago, 36 percent of respondents to the "Global State of Information Security" survey reported that they had suffered zero security incidents. This year, that number was down to 22 percent.

Does this mean there are more incidents? We don't think so. We believe it simply means that more companies are aware of the incidents that they've always suffered but into which, until recently, they had no visibility. Those once inexplicable network outages are now known to be security incidents. Perhaps a spam outbreak wasn't considered a security incident before, but now that it can deliver malware, it is. Awareness is higher, and that's because companies have spent the past five years building an infrastructure that creates visibility into their security posture.

That's right -- visibility. I love it.

This year marks the first time "employees" beat out "hackers" as the most likely source of a security incident. Executives in the security field, with the most visibility into incidents, were even more likely to name employees as the source.

Have employees suddenly turned more malicious? Are inside jobs suddenly more fashionable and productive than they used to be? Probably not. Most security experts will tell you that the insider threat is relatively constant and is usually bigger than its victims suspect. None of us wants to think we've hired an untrustworthy person.

This spike in assigning the blame for breaches and attacks to employees is probably more like the dip in companies that report zero incidents — a reflection of awareness, of managers' ability to recognize what was always there but what they couldn't previously determine.

I'd agree with that. I would also blame the misreporting of visits to pr0n sites and the like as "security incidents." CIO continues:

But here's an odd paradox: Despite the massive buildup of people, process and technology during the past five years, and fewer people reporting zero incidents, 40 percent of respondents didn't know how many incidents they've suffered, up from 29 percent last year.

The rate of "Don't know" for the type of incident and the primary method used to attack also spiked.

It doesn't bode well that after years of buying and installing systems and processes to improve security, close to half of the respondents didn't have a clue as to what was going on in their own enterprises. But when close to a third of CSOs and CISOs, who presumably should have the most insight into security incidents, said they don't know how many incidents they've suffered or how these incidents occurred, that's even worse...

The truth is, systems, processes, tools, hardware and software, and even knowledge and understanding only get you so far. As [Ron] Woerner puts it, "When you gain visibility, you see that you can't see all the potential problems. You see that maybe you were spending money securing the wrong things. You see that a good employee with good intentions who wants to take work home can become a security incident when he loses his laptop or puts data on his home computer. There's so much out there, it's overwhelming."

Woerner and others believe that the security discipline has so far been skewed toward technology—firewalls, ID management, intrusion detection—instead of risk analysis and proactive intelligence gathering.

Check this out, too. Someone recognizes the nature of Attacker 3.0:

Furthermore, even a cursory look at security trends demonstrates that adversaries, be they disgruntled employees or hackers, have far more sophisticated tools than the ones that have been put in place to stop them. Antiforensics. Mass distribution of malware through compromised websites. Botnets. Keyloggers. Companies may have spent the past five years building up their security infrastructure, but so have the bad guys. Awareness includes a new level of understanding of how little you know about how the bad guys operate. As arms races go, the bad guys are way ahead.

So what can we do about this? Say it isn't so:

What can be done about all this? Be strategic. Security investment must shift from the technology-heavy, tactical operation it has been to date to an intelligence-centric, risk analysis and mitigation philosophy.

Information and security executives should, for example, be putting their dollars into industry information sharing. "Collaboration is key," says Woerner. They should invest in security research and technical staff that can capture and dissect malware, and they should troll the Internet underground for the latest trends and leads.
(emphasis added)

I would add that it's only appropriate to turn to advanced sources when you have the security basics in place. It's no use trying to learn how to defend against Attacker 2.0 or 3.0 if you can't handle 1.0.

There's more to say about this survey, but I'll save the rest for a second post because the nature of it is so different from this one.

Excerpts from Ross Anderson / Tyler Moore Paper

I got a chance to read a new paper by one of my three wise men (Ross Anderson) and his colleague (Tyler Moore): Information Security Economics - and Beyond. The following are my favorite sections.

Over the last few years, people have realised that security failure is caused by bad incentives at least as often as by bad design. Systems are particularly prone to failure when the person guarding them does not suffer the full cost of failure...

[R]isks cannot be managed better until they can be measured better. Most users cannot tell good security from bad, so developers are not compensated for efforts to strengthen their code. Some evaluation schemes are so badly managed that ‘approved’ products are less secure than random ones. Insurance is also problematic; the local and global correlations exhibited by different attack types largely determine what sort of insurance markets are feasible. Cyber-risk markets are thus generally uncompetitive, underdeveloped or specialised...

One of the observations that sparked interest in information security economics came from banking. In the USA, banks are generally liable for the costs of card fraud; when a customer disputes a transaction, the bank must either show she is trying to cheat it, or refund her money. In the UK, the banks had a much easier ride: they generally got away with claiming that their systems were ‘secure’, and telling customers who complained that they must be mistaken or lying. “Lucky bankers,” one might think; yet UK banks spent more on security and suffered more fraud. This may have been what economists call a moral-hazard effect: UK bank staff knew that customer complaints would not be taken seriously, so they became lazy and careless, leading to an epidemic of fraud.

In 1997, Ayres and Levitt analysed the Lojack car-theft prevention system and found that once a threshold of car owners in a city had installed it, auto theft plummeted, as the stolen car trade became too hazardous. This is a classic example of an externality, a side-effect of an economic transaction that may have positive or negative effects on third parties. Camp and Wolfram built on this in 2000 to analyze information security vulnerabilities as negative externalities, like air pollution: someone who connects an insecure PC to the Internet does not face the full economic costs of that, any more than someone burning a coal fire. They proposed trading vulnerability credits in the same way as carbon credits...

Asymmetric information plays a large role in information security. Moore showed that we can classify many problems as hidden-information or hidden-action problems. The classic case of hidden information is the ‘market for lemons’. Akerlof won a Nobel prize for the following simple yet profound insight: suppose that there are 100 used cars for sale in a town: 50 well-maintained cars worth $2000 each, and 50 ‘lemons’ worth $1000. The sellers know which is which, but the buyers don’t. What is the market price of a used car? You might think $1500; but at that price no good cars will be offered for sale. So the market price will be close to $1000.

Hidden information, about product quality, is one reason poor security products predominate. When users can’t tell good from bad, they might as well buy a cheap antivirus product for $10 as a better one for $20, and we may expect a race to the bottom on price.

Hidden-action problems arise when two parties wish to transact, but one party’s unobservable actions can impact the outcome. The classic example is insurance, where a policyholder may behave recklessly without the insurance company observing this...

[W]hy do so many vulnerabilities exist in the first place? A useful analogy might come from considering large software project failures: it has been known for years that perhaps 30% of large development projects fail, and this figure does not seem to change despite improvements in tools and training: people just build much bigger disasters nowadays than they did in the 1970s. This suggests that project failure is not fundamentally about technical risk but about the surrounding socio-economic factors (a point to which we will return later).

Similarly, when considering security, software writers have better tools and training than ten years ago, and are capable of creating more secure software, yet the economics of the software industry provide them with little incentive to do so.

In many markets, the attitude of ‘ship it Tuesday and get it right by version 3’ is perfectly rational behaviour. Many software markets have dominant firms thanks to the combination of high fixed and low marginal costs, network externalities and client lock-in noted above, so winning market races is all-important. In such races, competitors must appeal to complementers, such as application developers, for whom security gets in the way; and security tends to be a lemons market anyway. So platform vendors start off with too little security, and such security as they do provide tends to be designed so that the compliance costs are dumped on the end users. Once a dominant position has been established, the vendor may add more security than is needed, but engineered in such a way as to maximise customer lock-in.

In some cases, security is even worse than a lemons market: even the vendor does not know how secure its software is. So buyers have no reason to pay more for protection, and vendors are disinclined to invest in it.

How can this be tackled? Economics has suggested two novel approaches to software security metrics: vulnerability markets and insurance...

Several variations on vulnerability markets have been proposed. Bohme has argued that software derivatives might be better. Contracts for software would be issued in pairs: the first pays a fixed value if no vulnerability is found in a program by a specific date, and the second pays another value if one is found. If these contracts can be traded, then their price should reflect the consensus on software quality. Software vendors, software company investors, and insurance companies could use such derivatives to hedge risks. A third possibility, due to Ozment, is to design a vulnerability market as an auction...

An alternative approach is insurance. Underwriters often use expert assessors to look at a client firm’s IT infrastructure and management; this provides data to both the insured and the insurer. Over the long run, insurers learn to value risks more accurately. Right now, however, the cyber-insurance market is both underdeveloped and underutilised. One reason, according to Bohme and Kataria, is the interdependence of risk, which takes both local and global forms. Firms’ IT infrastructure is connected to other entities – so their efforts may be undermined by failures elsewhere.

Cyber-attacks often exploit a vulnerability in a program used by many firms. Interdependence can make some cyber-risks unattractive to insurers – particularly those risks that are globally rather than locally correlated, such as worm and virus attacks, and systemic risks such as Y2K.

Many writers have called for software risks to be transferred to the vendors; but if this were the law, it is unlikely that Microsoft would be able to buy insurance. So far, vendors have succeeded in dumping most software risks; but this outcome is also far from being socially optimal. Even at the level of customer firms, correlated risk makes firms under-invest in both security technology and cyber-insurance. Cyber-insurance markets may in any case lack the volume and liquidity to become efficient.
(emphasis added)
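Akerlof's lemons arithmetic, quoted above, is easy to verify with a tiny fixed-point sketch. This is my own illustration in Python, not from the paper: buyers bid the average value of whatever is still for sale, and sellers withdraw any car worth more than the bid.

```python
# Akerlof's lemons market from the excerpt: 50 good cars worth $2000,
# 50 lemons worth $1000. Buyers can't tell them apart, so they bid the
# average value of whatever remains for sale; sellers withdraw any car
# worth more than the bid. Iterate until the price stabilizes.

def market_price(car_values, rounds=10):
    for_sale = list(car_values)
    price = sum(for_sale) / len(for_sale)  # initial naive bid: $1500
    for _ in range(rounds):
        # sellers pull any car worth more than the current offer
        for_sale = [v for v in for_sale if v <= price]
        price = sum(for_sale) / len(for_sale)
    return price

cars = [2000] * 50 + [1000] * 50
print(market_price(cars))  # only lemons remain, so the price is 1000.0
```

At the naive bid of $1500 every good car leaves the market, so the price collapses to the lemon value, exactly as the excerpt describes.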

If you made it this far, here's my small contribution to this paper: what about breach derivatives? To paraphrase the paper, contracts for companies would be issued in pairs: the first pays a fixed value if no breach is reported by a company by a specific date, and the second pays another value if one is reported. If these contracts can be traded, then their price should reflect the consensus on company security.
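To make the pricing concrete, here is a hedged sketch of how a traded pair of breach contracts could imply a consensus breach probability, analogous to binary options. The $100 payoff and the example prices are made-up assumptions of mine.

```python
# Hypothetical breach-derivative pair for one company: one contract pays
# $100 if no breach is reported by the expiry date, the other pays $100
# if a breach is reported. Ignoring interest and fees, traded prices
# behave like risk-neutral probabilities.

def implied_breach_probability(price_no_breach, price_breach):
    """Consensus probability of a reported breach implied by the prices."""
    # In a frictionless market the two prices sum to the payoff;
    # normalizing by their sum absorbs small deviations.
    return price_breach / (price_no_breach + price_breach)

# Example: the "no breach" leg trades at $82 and the "breach" leg at $18,
# so the market's consensus breach probability is 18%.
print(implied_breach_probability(82.0, 18.0))  # 0.18
```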

I understand the incentives for companies to stay quiet about breaches, but this market could encourage people to report. I imagine it could also encourage intruders to compromise a company intentionally, as the authors mention:

One criticism of all market-based approaches is that they might increase the number of identified vulnerabilities by motivating more people to search for flaws.

What do you think?

Microsoft's Anemone Project

While flying to Los Angeles this week I read a great paper by Microsoft and Michigan researchers: Reclaiming Network-wide Visibility Using Ubiquitous Endsystem Monitors. From the Abstract:

Network-centric tools like NetFlow and security systems like IDSes provide essential data about the availability, reliability, and security of network devices and applications. However, the increased use of encryption and tunnelling has reduced the visibility of monitoring applications into packet headers and payloads (e.g. 93% of traffic on our enterprise network is IPSec encapsulated). The result is the inability to collect the required information using network-only measurements.

To regain the lost visibility we propose that measurement systems must themselves apply the end-to-end principle: only endsystems can correctly attach semantics to traffic they send and receive. We present such an end-to-end monitoring platform that ubiquitously records per-flow data and then we show that this approach is feasible and practical using data from our enterprise network.

This is cool. How does it work?

Each endsystem in a network runs a small daemon that uses spare disk capacity to log network activity. Each desktop, laptop and server stores summaries of all network traffic it sends or receives. A network operator or management application can query some or all endsystems, asking questions about the availability, reachability, and performance of network resources and servers throughout the organization...

Ubiquitous network monitoring using endsystems is fundamentally different from other edge-based monitoring: the goal is to passively record summaries of every flow on the network rather than to collect availability and performance statistics or actively probe the network...

It also provides a far more detailed view of traffic because endsystems can associate network activity with host context such as the application and user that sent a packet. This approach restores much of the lost visibility and enables new applications such as network auditing, better data centre management, capacity planning, network forensics, and anomaly detection.

Using real data from an enterprise network we present preliminary results showing that instrumenting, collecting, and querying data from endsystems in a large network is both feasible and practical.

How practical?

For example, our own enterprise network contains approximately 300,000 endsystems and 2,500 routers. While it is possible to construct an endsystem monitor in an academic or ISP network there are significant additional deployment challenges that must be addressed. Thus, we focus on deployment in enterprise and government networks that have control over software and a critical need for better network visibility...

Even under ideal circumstances there will inevitably be endsystems that simply cannot easily be instrumented, such as printers and other hardware running embedded software. Thus, a key factor in the success of this approach is obtaining good visibility without requiring instrumentation of all endsystems in a network. Even if complete instrumentation were possible, deployment becomes significantly more likely where incremental benefit can be observed...

[I]nstrumenting just 1% of endsystems was enough to monitor 99.999% bytes on the network. This 1% is dominated by servers of various types (e.g. backup, file, email, proxies), common in such networks.

Wow -- in other words, just pick the right systems to instrument and you end up capturing a LOT of traffic.
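The mechanism the excerpt describes, each endsystem keeping compact summaries of its own traffic, reduces in essence to per-flow aggregation. Here is a minimal sketch (my own illustration, not the authors' Anemone code) that collapses per-packet observations into per-5-tuple packet and byte counts:

```python
from collections import defaultdict

def summarize(packets):
    """Aggregate per-packet records into per-flow summaries.

    Each packet is (src, dst, proto, sport, dport, nbytes). The summary
    keeps packet and byte counts per 5-tuple -- the kind of compact
    record an endsystem daemon could log to spare local disk.
    """
    flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for src, dst, proto, sport, dport, nbytes in packets:
        key = (src, dst, proto, sport, dport)
        flows[key]["packets"] += 1
        flows[key]["bytes"] += nbytes
    return dict(flows)

packets = [
    ("10.0.0.5", "10.0.0.9", "tcp", 51000, 80, 1500),
    ("10.0.0.5", "10.0.0.9", "tcp", 51000, 80, 400),
    ("10.0.0.5", "10.0.0.10", "udp", 53001, 53, 80),
]
print(len(summarize(packets)))  # 2 distinct flows
```

A real daemon would also timestamp and periodically export these records; the table above is simply the kind of data a query from a management application would hit.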

How heavy is the load?

To evaluate the per-endsystem CPU overhead we constructed a prototype flow capture system using the ETW event system [Event Tracing for Windows]. ETW is a low overhead event posting infrastructure built into the Windows OS, and so a straightforward usage where an event is posted per-packet introduces overhead proportional to the number of packets per second processed by an endsystem.

We computed observed packets per second over all hosts, and the peak was approximately 18,000 packets per second and the mean just 35 packets per second. At this rate of events, published figures for ETW [Magpie] suggest an overhead of no more than a few percent on a reasonably provisioned server...

[F]or a 1 second export period there are periods of high traffic volume requiring a large number of records be written out. However, if the export timer is set at 300 seconds, the worst case disk bandwidth required is ≃4.5 MB in 300 seconds, an average rate of 12 kBps.

The maximum storage required by a single machine for an entire week of records is ≃1.5 GB, and the average storage just ≃64 kB. Given the capacity and cost of modern hard disks, these results indicate very low resource overhead.

This is great. I emailed the authors to see if they have an implementation I could test. The home for this work appears to be the Microsoft Anemone Project.

Be the Caveman

I just read a great story by InformationWeek's Sharon Gaudin titled Interview With A Convicted Hacker: Robert Moore Tells How He Broke Into Routers And Stole VoIP Services:

Convicted hacker Robert Moore, who is set to go to federal prison this week, says breaking into 15 telecommunications companies and hundreds of businesses worldwide was incredibly easy because simple IT mistakes left gaping technical holes.

Moore, 23, of Spokane, Wash., pleaded guilty to conspiracy to commit computer fraud and is slated to begin his two-year sentence on Thursday for his part in a scheme to steal voice over IP services and sell them through a separate company. While prosecutors call co-conspirator Edwin Pena the mastermind of the operation, Moore acted as the hacker, admittedly scanning and breaking into telecom companies and other corporations around the world.

"It's so easy. It's so easy a caveman can do it," Moore told InformationWeek, laughing. "When you've got that many computers at your fingertips, you'd be surprised how many are insecure."
(emphasis added)

So easy a caveman can do it? Just what happened here?

The government identified more than 15 VoIP service providers that were hacked into, adding that Moore scanned more than 6 million computers just between June and October of 2005. AT&T reported to the court that Moore ran 6 million scans on its network alone...

Moore said what made the hacking job so easy was that 70% of all the companies he scanned were insecure, and 45% to 50% of VoIP providers were insecure. The biggest insecurity? Default passwords.

"I'd say 85% of them were misconfigured routers. They had the default passwords on them," said Moore. "You would not believe the number of routers that had 'admin' or 'Cisco0' as passwords on them. We could get full access to a Cisco box with enabled access so you can do whatever you want to the box...

He explained that he would first scan the network looking mainly for the Cisco and Quintum boxes. If he found them, he would then scan to see what models they were and then he would scan again, this time for vulnerabilities, like default passwords or unpatched bugs in old Cisco IOS boxes. If he didn't find default passwords or easily exploitable bugs, he'd run brute-force or dictionary attacks to try to break the passwords.

So, we have massively widespread scanning, discovery of routers, and attempted logins. No kidding this is caveman-fu.

And Moore didn't just focus on telecoms. He said he scanned "anybody" -- businesses, agencies and individual users. "I know I scanned a lot of people," he said. "Schools. People. Companies. Anybody. I probably hit millions of normal [users], too."

Moore said it would have been easy for IT and security managers to detect him in their companies' systems ... if they'd been looking. The problem was that, generally, no one was paying attention.

"If they were just monitoring their boxes and keeping logs, they could easily have seen us logged in there," he said, adding that IT could have run its own scans, checking to see logged-in users. "If they had an intrusion detection system set up, they could have easily seen that these weren't their calls."
(emphasis added)

Didn't someone tell Robert Moore that "IDS is dead?" Apparently all of these victim companies heard it, and turned off their visibility mechanisms.

My advice? Be the caveman. Perform adversary simulation. This is the simplest possible way to pretend you are a bad guy and get realistic, actionable results.

  1. Identify all of your external IP addresses.

  2. Scan them.

  3. Try to log into remote administration services you find in Step 2.

  4. Report your findings to device owners when you gain access.
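Steps 2 and 3 can be sketched as a plain TCP connect scan. This hedged Python example is a stand-in for a real tool like nmap (the post doesn't prescribe one); it reports which ports accept connections, and demonstrates against a listener it opens itself on localhost:

```python
import socket

def scan(host, ports, timeout=0.5):
    """Return the subset of ports on host that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

# Demonstrate against a listener we control on localhost.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))      # OS picks a free port
listener.listen(1)
port = listener.getsockname()[1]

print(scan("127.0.0.1", [port]) == [port])  # prints True
listener.close()
```

Step 3, trying default credentials against whatever remote administration services turn up, is exactly what Moore describes doing, so finding those ports open is a result worth reporting on its own.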

How difficult is that? This methodology is nowhere near effective against a targeted threat who wants to compromise you specifically, but it works against this sort of opportunistic threat.

PS: If I hear one more time that "scanning is too dangerous for our network" I will officially Lose It. Scanning of external systems happens 24x7. If you really don't want an authorized party to scan your external network, try setting up a passive detection system like PADS and wait for a bad guy to ignore the fragility of your systems and scan them for you. Gather his results passively and then act on them.
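The bookkeeping behind a passive asset-detection tool is minimal: record every (host, port, protocol) service observed in traffic, and flag first sightings. This sketch is my own illustration of the idea, not PADS code:

```python
def observe(asset_table, host, port, proto):
    """Record a service sighting; return True if it is new to the table."""
    service = (host, port, proto)
    if service in asset_table:
        return False
    asset_table.add(service)
    return True

assets = set()
print(observe(assets, "192.168.1.10", 80, "tcp"))   # True: first sighting
print(observe(assets, "192.168.1.10", 80, "tcp"))   # False: already known
print(observe(assets, "192.168.1.10", 22, "tcp"))   # True: new service on that host
```

Feed it responses elicited by someone else's scan and you learn your own exposure without sending a single packet.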

Snort Report 9 Posted

My 9th Snort Report on Snort's Stream5 and TCP overlapping fragments is now available online. From the start of the article:

It's important for value-added resellers and consultants to understand how Snort detects security events. Stream5 is a critical aspect of the inspection and detection equation. A powerful Snort preprocessor, Stream5 addresses several aspects of network-centric traffic inspection. Sourcefire calls Stream5 a "target-based" system, meaning it can perform differently depending on the directives passed to it. These directives tell Stream5 to inspect traffic based on its understanding of differences of behavior in TCP/IP stacks. However, if Stream5 isn't configured properly, customers may end up with a Snort installation that is running but not providing much real value. In this edition of Snort Report I survey a specific aspect of Stream5, found in Snort 2.7.x and 2.8.x.
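For reference, a Stream5 configuration in snort.conf looks roughly like the following. Treat this as a sketch: the option names follow the Snort 2.8-era documentation, so verify them against the manual for your build.

```
# Sketch of a Stream5 configuration. The policy keyword tells the
# preprocessor which TCP/IP stack behavior to emulate for a given
# set of targets -- the "target-based" directives described above.
preprocessor stream5_global: track_tcp yes, max_tcp 8192
preprocessor stream5_tcp: policy first, use_static_footprint_sizes
preprocessor stream5_tcp: bind_to, policy linux
```

The bind_to line shows how different reassembly policies can be applied to different hosts, so Snort resolves overlapping segments the same way the monitored endpoint would.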

I'm working on the next Snort Report, which will look at new features in Snort 2.8.

Tuesday, September 25, 2007

DHS Debacle

Thanks to the Threat Level story FBI Investigates DHS Contractor for Failing to Protect Gov't Computer I learned of the Washington Post story Contractor Blamed in DHS Data Breaches:

The FBI is investigating a major information technology firm with a $1.7 billion Department of Homeland Security contract after it allegedly failed to detect cyber break-ins traced to a Chinese-language Web site and then tried to cover up its deficiencies, according to congressional investigators.

At the center of the probe is Unisys Corp., a company that in 2002 won a $1 billion deal to build, secure and manage the information technology networks for the Transportation Security Administration and DHS headquarters. In 2005, the company was awarded a $750 million follow-on contract.

On Friday, House Homeland Security Committee Chairman Bennie Thompson (D-Miss.) called on DHS Inspector General Richard Skinner to launch his own investigation.

As part of the contract, Unisys, based in Blue Bell, Pa., was to install network-intrusion detection devices on the unclassified computer systems for the TSA and DHS headquarters and monitor the networks. But according to evidence gathered by the House Homeland Security Committee, Unisys's failure to properly install and monitor the devices meant that DHS was not aware for at least three months of cyber-intrusions that began in June 2006.

Through October of that year, Thompson said, 150 DHS computers -- including one in the Office of Procurement Operations, which handles contract data -- were compromised by hackers, who sent an unknown quantity of information to a Chinese-language Web site that appeared to host hacking tools.

The contractor also allegedly falsely certified that the network had been protected to cover up its lax oversight, according to the committee.

"For the hundreds of millions of dollars that have been spent on building this system within Homeland, we should demand accountability by the contractor," Thompson said in an interview. "If, in fact, fraud can be proven, those individuals guilty of it should be prosecuted."

Wow. This is huge. I cannot remember any case like it. So what happened?

In the 2006 attacks on the DHS systems, hackers often took over computers late at night or early in the morning, "exfiltrating" or copying and sending out data over hours -- in one case more than five hours, according to evidence collected by the committee.

Five hours. That suggests one means of detecting this sort of activity: time-based analysis of session records.
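That committee finding suggests a concrete heuristic over session records: flag sessions that are both long-lived and begin outside business hours. A hypothetical sketch, with field names of my own choosing:

```python
def suspicious_sessions(records, max_hours=2, work_start=7, work_end=19):
    """Flag sessions that run unusually long and begin off-hours.

    records: list of dicts with 'start_hour' (0-23 local time),
    'duration_s', and 'dst'. Thresholds are illustrative defaults.
    """
    flagged = []
    for r in records:
        long_lived = r["duration_s"] > max_hours * 3600
        off_hours = not (work_start <= r["start_hour"] < work_end)
        if long_lived and off_hours:
            flagged.append(r["dst"])
    return flagged

records = [
    {"start_hour": 2, "duration_s": 5 * 3600, "dst": "203.0.113.7"},  # 5-hour 2 a.m. transfer
    {"start_hour": 10, "duration_s": 300, "dst": "198.51.100.4"},     # normal daytime session
]
print(suspicious_sessions(records))  # ['203.0.113.7']
```

A five-hour transfer starting late at night, as described in the article, trips both conditions; the point is that session records alone, no payload required, were enough to notice this activity.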

In July 2006, a Unisys employee detected a possible intrusion but "downplayed it and low-level DHS security managers ignored it," the committee aide said.

It was not until Sept. 27, 2006, that two DHS systems managers noticed that their machines had been accessed with a hacking tool.

Unisys information technology employees began a probe and determined that the break-in affected more computers. They discovered that it reached back as far as June 13 that year and had continued through at least Oct. 1, eventually reaching 150 computers.

Among the security devices Unisys had been hired to install and monitor were seven "intrusion-detection systems," which flag suspicious or unauthorized computer network activity that may indicate a break-in. The devices were purchased in 2004, but by June 2006 only three had been installed -- and in such a way that they could not provide real-time alerts, according to the committee. The rest were gathering dust in DHS storage closets and under desks in their original packaging, the aide said.
(emphasis added)

This explains a lot!

Let's finish with this thought:

A Unisys spokeswoman, Lisa Meyer... said that Unisys has provided DHS "with government-certified and accredited security programs and systems, which were in place throughout 2006 and remain so today."

Exactly. C&A has absolutely zero operational security value, as I wrote in FISMA 2006 Scores.

I commend the Congressional committee tracking this problem and I welcome future reporting. I would love to be the expert witness in any trial between the government and Unisys, but that is outside the scope of my current employment!

Saturday, September 22, 2007

Review of Snort IDS and IPS Toolkit and One Prereview

My three star review of Snort IDS and IPS Toolkit was just posted. From the review:

Syngress published "Snort 2.0" in Mar 03, and I gave it a four star review in Jul 03. Syngress followed with "Snort 2.1" in May 04, and I gave it a four star review in Jul 04. I recommend reading those reviews, since the latest edition -- "Snort IDS and IPS Toolkit" (SIAIT) -- makes many of the same mistakes as its predecessors. Worse, it includes material that was already outdated in BOTH previous editions. If you absolutely must buy a book on Snort, this edition is your only real choice. Otherwise, I would stick with the manual and online articles.

SIAIT looks impressive page-wise, but it suffers from the multiple-author, no-editing, rush-to-production problems unfortunately inherent in many Syngress titles. One would think that including many contributing authors (11, apparently) would make for a strong book. In reality, the book contributes very little beyond what appears in "Snort 2.1," despite the fact that "only" chapters 8, 10, 11, and 13 appear to be repeats or largely rehashes of older material. Comparing to "Snort 2.1," these compare to old chapters 7, 10, 12, and 11, respectively.

The absolute worst part of this book is the re-introduction of all the outdated information in chapters 8 and 10. It is 2007 and we are STILL reading on p 353 that XML output is "our favorite and relatively new logging format" and on p 367 that "Unified logs are the future of Snort reporting." (I cited both of these as being old news in Jul 04!) I should note that these chapters are not entirely duplicates; if you compare output such as that on page 335 of "Snort 2.1" with page 365 in SIAIT you'll see the author replaced the original 2003 timestamps with 2006! This is the height of lazy publishing. Chapter 10 features similar tricks, where traffic is the same except for global replacements of IP addresses and timestamps; notice the ACK numbers are still the same and the test uses Snort 1.8.

You can read my reviews of Snort 2.1 and Snort 2.0 for reference. If I see Syngress publish another Snort book based on this line of material, I won't bother next time.

On a more positive note, thank you to O'Reilly for sending me a review copy of Security Power Tools. This book looks like it deserves a grunt from Tim the Toolman Taylor. The book appears to have lots of useful information, although why in Pete's name is there a chapter (11) on BO2k? Let it die, already. It's 2007.

Friday, September 21, 2007

Pescatore on Security Trends

The article Spend less on IT security, says Gartner caught my attention. Comments are inline, and my apologies if Mr. Pescatore was misquoted.

Organisations should aim to spend less of their IT budgets on security, Gartner vice-president John Pescatore told the analyst firm’s London IT Security Summit on 17 September.

In a keynote speech, he said that retailers typically spend 1.5% of revenue trying to prevent crime, then still lose a further 1.5% through shoplifting and staff theft, costing 3% in total.

Digital security is not comparable to shoplifting. It is not feasible for shoplifters to steal every asset from a company in a matter of seconds, or subtly alter all of the assets so as to render them untrustworthy or even dangerous. I would also hardly consider shoplifters an "intelligent adversary."

But Gartner’s research suggests that the average organisation spends 5% of its IT budget on security, even with disaster recovery and business continuity work excluded, and IT managers are tired of requests for more. Security has dropped from first (in 2005) to sixth (in 2007) in the firm’s annual survey of chief information officers’ technical concerns.

I concur with this, especially with regard to IPS and SIM/SEM/SIEM. Managers spent a lot of money several years ago on this technology and they are "still getting hacked."

Pescatore said that managers are not impressed by the claim that “security is a journey” without a destination. “Can you imagine, ‘profit is a journey’?” he asked, pointing out that other areas of IT are often able to offer their organisations more functionality for less money, or some other kind of business benefit.

This could be the single greatest problem I see in this whole article. Please tell me how profit is not a journey, unless the goal of your company is to 1) enjoy a really awesome quarter (or year, etc.) and then disappear; or 2) dash for the acquisition line and then cash out. The operative word in business is not profit but profitability. A stock price reflects future value. Turning strictly to the security aspect, I'd like to hear Mr. Pescatore or his upset managers describe when security can end. This statement is clearly troubling.

Growing efficiencies could be possible for IT security too: “I really don’t think most of us need more money and people,” he said, if organisations moved to a model he called ‘Security 3.0’. In this, IT security would anticipate threats, rather than fight them after they hit.

This is another poor statement. As I wrote in Attacker 3.0, security is at 1.0 (and that's being generous) while we approach Web 2.0 and fight Attacker 3.0. No one is ahead of the threat and no one could ever be. Advanced attackers are digital innovators. By definition they cannot be anticipated.

Pescatore said ways to prevent problems rather than fight them include buying and building secure systems, which means considering security during procurement and development, and rejecting products which are not adequately protected. This might mean spending more initially, but prevention is cheaper than cure.

This is all true and sounds nice, but it has never worked and will never work. Everyone is so excited to see the government finally working with Microsoft to secure the operating system, but at this point who really cares? It's all about applications now.

In response to a question, Pescatore dismissed the idea that insider threats are growing: he believes that attacks generated by malicious insiders are stable at 20-25%. Half come from mistakes made by insiders, while around 30% of attacks are made solely by outsiders, the majority of whom are cybercriminals.

I love to see the insider threat fans squashed.

Let's hear another view on this speech from Security to drop out of CIO spending top ten:

Security pros need to get more proactive about dealing with threats and adopt strategies to persuade their colleagues to take on security spending as part of their projects, according to analysts Gartner.

The changes in roles for security specialists come as the internet security market enters what Gartner described as the third major stage of its development.

Always a sector of the industry that relishes one-upmanship, the Web 2.0 phenomenon is accompanied by Security 3.0. The first stage of security, according to Gartner, belongs to the time of centralised planning and the mainframe. The widespread use of personal computers ushered in reactive security to deal with threats such as malicious computer hackers and worms (security 2.0). Security 3.0 is characterised by an era of more proactive security, according to John Pescatore, a VP and distinguished analyst at Gartner.

Security 3.0 involves an approach to risk management that applies security resources appropriately to meet business objectives. Instead of bolting security on as an afterthought, Security 3.0 integrates compliance, risk assessment and business continuity into every process and application.

For security managers the process involves persuading their counterparts in, for example, application development to include security functions in their projects. In this way security expenditure in real terms can go up even as security budgets (as such) stay flat or modestly increase. Security budgets freed from firefighting problems can then be invested with a view to managing future risks.

"Even a reduced security budget does not necessarily mean reducing security-related spending," Pescatore said. "Security professionals need to think in terms of changing who pays for security controls," so they can "move upstream" and spend their time and resources on more demanding projects, he added.

Now this makes sense to me. I do not understand why security as it relates to applications should be treated separately from those applications. Security should be another consideration that is built into the application, along with performance and other features. Security as an operational discipline doesn't need to be integrated into other businesses, but including security natively in projects is the right way forward.

Gartner predicts that security spending will rise 9.3 per cent in 2007, but will drop out of the first ten spending priorities for CIOs for the first time since the prolific internet worms of 2003. Malware threats these days have evolved into targeted attacks featuring malware payloads designed not to draw attention to themselves.

This "run silent, run deep" malware means that security is a less high-profile function than before, as improving business processes and reducing costs become the pre-eminent priorities for IT directors.

This is true and it is killing us. Security got plenty of attention when managers could see the sky was falling. In other words, when their email and their boss' email was inaccessible or filled with spam and malware, or they couldn't surf the Web because their pipe was filled by DoS traffic, security failures couldn't be ignored. Now enterprises are silently and completely owned, and no one cares.

Finally, a few more thoughts from Managing IT risk in unchartered waters of "Security 3.0":

Gartner research suggests that throwing money at security is not working. At the summit, the firm said that there is no correlation between security spending and the security level of a system. The firm added that progress in security should see a reduction in security spending, not an increase.

I agree with this. The reasons are complex, but a major problem is that managers have no idea if the money they apply makes any difference in their security posture. To the degree they measure at all, they measure inputs of questionable value and ignore the outputs. However, I don't see how Gartner can say that success in security means spending falls. This is not the so-called "war on drugs," where a rise in the price of a drug means interdiction could be restricting supply. Security spending is determined by management; it is not an output of the security process.

Overall, it must have been an interesting speech! I fear the take-away for managers will be the "spend less on security" and "employ fewer people" headlines. That may be appropriate if you know how spending and manpower affect security outputs, but that is not the case. I believe management is spending plenty of money on the wrong tools, and possibly the wrong people, and that directing resources to other functions would be more effective.

Tactical Network Security Monitoring Platform

I am working on both strategic and tactical network security monitoring projects. On the tactical side I have been looking for a platform that I can carry on a plane and fit in the overhead compartment, or at the very least under the seat in front of me. Earlier in my career I used Shuttle and Hacom boxes, but I'm always looking for something better.

People often ask "Why don't you use a laptop?" Reasons to not use a laptop include:

  • Laptops don't have PCI, PCI-X or PCI Express slots to accommodate extra NICs, especially for fiber connections.

  • Laptops are not designed to run constantly.

  • Laptop storage is not as robust as server storage, since laptops usually accommodate up to two internal hard drives, with some capacity for external storage.

  • Laptops are consumer devices and not generally built for server-type operations.

Today I think I found the device I needed: the NextComputing NextDimension Pro. The specs are as follows:

  • Single dual-core 2.2 GHz AMD Opteron 275/940

  • 4 GB RAM (2 GB x 2, PC3200/400 MHz DDRAM)

  • Two Marvell Yukon 88E8052 Gigabit Ethernet

  • One NVIDIA nForce4 CK804 MCP9 Networking Adapter (Marvell 88E1111 Gigabit PHY)

  • Two 160 GB 7200 RPM SATA 2.5" Seagate Momentus HDDs connected to on-board four port SATA controller

  • Four 160 GB 7200 RPM SATA 2.5" Seagate Momentus HDDs connected to PCI-X four port SATA RAID controller

  • Four USB 2.0

  • Two external SATA ports

  • One RS232 serial port and one RS232 serial port with RS422/485 adaptor

  • DVD drive

  • Two PCI-X slots OR two PCI Express slots OR one PCI-X and one PCI Express; mine has one 16x PCI Express slot and one PCI-X full length slot.

  • Graphics out via Nvidia

I tried FreeBSD 7.0-CURRENT-200709-amd64-disc1.iso on this machine and it installed flawlessly. If you want to see dmesg output please visit Dmesgd courtesy of NYCBUG.

Check out the storage available. If I need to, I could combine /nsm1 and /nsm2 into /nsm using gconcat.

$ df -h
Filesystem Size Used Avail Capacity Mounted on
/dev/ad4s1a 989M 194M 716M 21% /
devfs 1.0K 1.0K 0B 100% /dev
/dev/ad4s1e 9.7G 24K 8.9G 0% /home
/dev/ad4s1f 77G 4.0K 71G 0% /nsm1
/dev/da0s1d 577G 4.0K 531G 0% /nsm2
/dev/ad4s1g 9.7G 12K 8.9G 0% /tmp
/dev/ad4s1d 39G 1.2G 34G 3% /usr
/dev/ad6s1d 144G 258K 133G 0% /var

I am really pleased FreeBSD 7.0 installs on this machine. I may try the i386 version at some point, but I hope to stick with the AMD64 version if possible.

Security Jersey Colors

I realized after my previous post that not everyone may be familiar with the "color" system used to designate various military security teams. I referenced a "red team" in my post NSA IAM and IEM Summary, for example.

I thought it might be helpful to post my understanding of these colors and to solicit feedback from anyone who could clarify these statements.

  • Red Team: A Red Team is an adversary simulation team. The Red Team attacks the asset to meet an objective. This activity is called penetration testing in the commercial world.

  • Blue Team: A Blue Team is a security posture assessment and evaluation team. The Blue Team determines the vulnerabilities and exposures of an enterprise. This activity is called vulnerability assessment in the commercial world.

  • White Team: A White Team (or usually a "White Cell") controls the environment during an exercise. The White Cell provides the framework in which the Red Team attacks friendly forces. (Note that in some situations the friendly forces are called the "Blue Team." This is not the same Blue Team that conducts vulnerability assessments and evaluations. Blue in this case is simply used to differentiate from Red.)

  • Green Team: The Green Team is usually a training group that helps the asset owners. Alternatively, the Green Team helps with long-term vulnerability and exposure remediation, as identified by the Blue Team. These descriptions are open for discussion because I haven't seen too many green team activities.

Did I miss any colors?

Tactical Traffic Assessment

When I wrote Extrusion Detection in 2004-5 I used the term Traffic Threat Assessment to describe a means of inspecting network traffic for signs of malicious activity. I differentiated among various assessments using this terminology.

  1. A vulnerability assessment identifies vulnerabilities and exposures in assets.

  2. A penetration test identifies at least one way that an adversary could exploit vulnerabilities and exposures to compromise a target or satisfy a related objective.

  3. A traffic threat assessment identifies traffic that indicates a network has already been compromised.

The goal of the customer determined which of the actions to perform.

I was never really comfortable with the term "traffic threat assessment," so from now on I will use Tactical Traffic Assessment. The new term nicely differentiates between a short-term, focused, tactical effort and a long-term, enterprise-wide, strategic program like Network Security Monitoring.

Tactical Traffic Assessment also drops the "threat assessment" language, since threat assessment is really about characterizing the capabilities and intentions of an adversary, not determining whether he has already compromised the enterprise.

Tactical Traffic Assessment also leaves room for finding non-security issues like misconfigured devices or other troubleshooting-related network problems.

Wisdom from Ranum

The Face-Off article in the September 2007 Information Security Magazine contains a great closing thought by Marcus Ranum:

Will the future be more secure? It'll be just as insecure as it possibly can, while still continuing to function. Just like it is today.

"Continuing to function" is an interesting concept. The reason the "Internet" hasn't been destroyed by terrorists, organized crime, or others is that doing so would cut off a major communication and funding resource. Criminals and other adversaries have a distinct interest in keeping computing infrastructure working just well enough to exploit it.

Being "secure" is another wonderful idea. Marcus clearly shows that there is no secure -- i.e., there is no end game. None of us can retire "when our work is done." We will retire when we can hand off the problem to another generation.

Thursday, September 20, 2007

TFTPgrab

While I was teaching and speaking at conferences, I usually discussed research and coding projects with audience members. One of my requests involved writing a tool to reconstruct TFTP sessions. Because TFTP uses UDP, files transferred using TFTP cannot be rebuilt using Wireshark, TCPFlow, and similar tools. I was unaware of any tool that could rebuild TFTP transfers, despite the obvious benefit of being able to do so.

Today I was very surprised to receive an email from Gregory Fleischer, who directed me to his new tool TFTPgrab. He saw my ShmooCon talk earlier this year, heard my plea, and built a TFTP file transfer reconstruction tool! I downloaded and compiled it on FreeBSD 6.2 without incident, and here is how I tested it.
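For background, the core reassembly logic such a tool needs is straightforward in principle (this sketch is my own, not how TFTPgrab works internally): per RFC 1350, each TFTP DATA packet carries a 16-bit block number and up to 512 bytes of payload, and a block shorter than 512 bytes marks the end of the transfer.

```python
def reassemble(data_packets, blksize=512):
    """Rebuild file contents from TFTP DATA packets.

    data_packets: iterable of (block_number, payload_bytes), possibly
    out of order or duplicated by retransmission. Blocks start at 1;
    a payload shorter than blksize terminates the transfer.
    """
    blocks = {}
    for num, payload in data_packets:
        blocks.setdefault(num, payload)   # ignore duplicate retransmissions
    out = bytearray()
    num = 1
    while num in blocks:
        out += blocks[num]
        if len(blocks[num]) < blksize:    # short block: end of file
            break
        num += 1
    return bytes(out)

# A 451-byte file fits in a single DATA block, like the rss.gif below.
print(len(reassemble([(1, b"G" * 451)])))  # 451
```

The hard part in practice is what the block numbers don't give you: tracking the UDP transaction IDs so concurrent transfers aren't mixed together, which is the kind of state a real tool like TFTPgrab must keep.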

I ensured a TFTP server was running on a FreeBSD system. I identified a small .gif to upload and download using TFTP.

richard@neely:~$ md5sum rss.gif
01206e1a6dcfcb7bfb55f3d21700efd3 rss.gif
richard@neely:~$ tftp
tftp> binary
tftp> trace
Packet tracing on.
tftp> verbose
Verbose mode on.
tftp> connect hacom
tftp> put rss.gif
putting rss.gif to [octet]
sent WRQ <file=rss.gif, mode=octet>
received ACK <block=0>
sent DATA <block=1, 451 bytes>
received ACK <block=1>
Sent 451 bytes in 0.0 seconds [inf bits/sec]

After the file was uploaded to the TFTP server I changed to /tmp, then downloaded the copy on the TFTP server.

richard@neely:~$ cd /tmp
richard@neely:/tmp$ tftp
tftp> verbose
Verbose mode on.
tftp> binary
mode set to octet
tftp> connect hacom
tftp> get rss.gif
getting from to rss.gif [octet]
Received 451 bytes in 0.0 seconds [inf bits/sec]
tftp> quit
richard@neely:/tmp$ md5sum rss.gif
01206e1a6dcfcb7bfb55f3d21700efd3 rss.gif

Notice the file I uploaded is exactly the same as the downloaded version, per the MD5 hashes.

The traffic looked like this.

I will have to be honest here and say that I expected to see everything happening over port 69 UDP. I didn't expect to see the server choose another port, but it's completely within spec and normal according to RFC 1350. Before commenting with a lecture on how TFTP works, please be aware that I read the relevant section of the RFC and understand transaction IDs and how ports are chosen.

I copied the trace to a system with TFTPgrab and let it process the trace.

hacom:/root/tftpgrab-0.2# ./tftpgrab -h
Usage: ./tftpgrab [OPTION]... [-r FILE] [EXPRESSION]
Reconstruct TFTP file contents from PCAP capture file.
With no FILE, or when FILE is -, read standard input.
-r PCAP file to read
-f overwrite existing files
-c print TFTP file contents to console
-E exclude TFTP filename when reconstructing
-v print verbose TFTP exchanges (repeat up to three times)
-X dump TFTP packet contents
-B check packets for bad checksums
-d specify debugging level
hacom:/root/tftpgrab-0.2# ./tftpgrab -r /tmp/tftpgrab.lpc
reading from file /tmp/tftpgrab.lpc, using datalink type EN10MB (Ethernet)
hacom:/root/tftpgrab-0.2# file 192*
GIF image data, version 89a, 36 x 14
GIF image data, version 89a, 36 x 14
hacom:/root/tftpgrab-0.2# md5 192*
MD5 ( =
MD5 ( =

As you can see, TFTPgrab pulled two files out of the trace and saved them to disk. They are identical to each other and to the original.

Thanks again to Gregory Fleischer for writing TFTPgrab!

Radiation Detection Mirrors Intrusion Detection

Yesterday I heard part of the NPR story Auditors, DHS Disagree on Radiation Detectors. I found two Internet sources, namely DHS fudged test results, watchdog agency says and DHS 'Dry Run' Support Cited, and I looked at COMBATING NUCLEAR SMUGGLING: Additional Actions Needed to Ensure Adequate Testing of Next Generation Radiation Detection Equipment (.pdf), a GAO report.

The report begins by explaining why it was written:

The Department of Homeland Security’s (DHS) Domestic Nuclear Detection Office (DNDO) is responsible for addressing the threat of nuclear smuggling. Radiation detection portal monitors are key elements in our national defenses against such threats. DHS has sponsored testing to develop new monitors, known as advanced spectroscopic portal (ASP) monitors.

In March 2006, GAO recommended that DNDO conduct a cost-benefit analysis to determine whether the new portal monitors were worth the additional cost. In June 2006, DNDO issued its analysis. In October 2006, GAO concluded that DNDO did not provide a sound analytical basis for its decision to purchase and deploy ASP technology and recommended further testing of ASPs. DNDO conducted this ASP testing at the Nevada Test Site (NTS) between February and March 2007.

GAO's statement addresses the test methods DNDO used to demonstrate the performance capabilities of the ASPs and whether the NTS test results should be relied upon to make a full-scale production decision.

GAO recommends that, among other things, the Secretary of Homeland Security delay a full-scale production decision of ASPs until all relevant studies and tests have been completed, and determine in cooperation with U.S. Customs and Border Protection (CBP), the Department of Energy (DOE), and independent reviewers, whether additional testing is needed.
(emphasis added)

Notice that a risk analysis was not done. Rather, a cost-benefit analysis was done. This is consistent with the approach I liked in the book Managing Cybersecurity Resources, although in that book the practicalities of assigning certain values made the exercise fruitless. Here the cost-benefit approach has a better chance of working.

Next the report summarizes the findings:

Based on our analysis of DNDO’s test plan, the test results, and discussions with experts from four national laboratories, we are concerned that DNDO’s tests were not an objective and rigorous assessment of the ASPs’ capabilities. Our concerns with the DNDO’s test methods include the following:

  • DNDO used biased test methods that enhanced the performance of the ASPs. Specifically, DNDO conducted numerous preliminary runs of almost all of the materials, and combinations of materials, that were used in the formal tests and then allowed ASP contractors to collect test data and adjust their systems to identify these materials.

    It is highly unlikely that such favorable circumstances would present themselves under real world conditions.

  • DNDO’s NTS tests were not designed to test the limitations of the ASPs’ detection capabilities -- a critical oversight in DNDO’s original test plan. DNDO did not use a sufficient amount of the type of materials that would mask or hide dangerous sources and that ASPs would likely encounter at ports of entry.

    DOE and national laboratory officials raised these concerns to DNDO in November 2006. However, DNDO officials rejected their suggestion of including additional and more challenging masking materials because, according to DNDO, there would not be sufficient time to obtain them based on the deadline imposed by obtaining Secretarial Certification by June 26, 2007.

    By not collaborating with DOE until late in the test planning process, DNDO missed an important opportunity to procure a broader, more representative set of well-vetted and characterized masking materials.

  • DNDO did not objectively test the performance of handheld detectors because they did not use a critical CBP standard operating procedure that is fundamental to this equipment’s performance in the field.

(emphasis added)
Let's summarize.

  • DNDO helped the vendor tune the detector.

  • DNDO did not test how the detectors could fail.

  • DNDO did not test the detectors' resistance to evasion.

  • DNDO failed to follow an important standard operating procedure.

I found all of this interesting and relevant to discussions of detecting security events.

Monday, September 17, 2007

The Academic Trap

I really enjoyed Anton's post Once More on Failure of Academic Research in Security where he cites Ian Grigg's The Failure of the Academic Contribution to Security Science:

[A]cademics have presented stuff that is sometimes interesting but rarely valuable. They've pretty much ignored all the work that was done before hand, and they've consequently missed the big picture.

Why is this? One reason is above: academic work is only serious if it quotes other academic work. The papers above are reputable because they quote, only and fulsomely, other reputable work. And the work is only rewarded to the extent that it is quoted ... again by academic work.

The academics are caught in a trap: work outside academia and be rejected or perhaps worse, ignored. Or, work with academic references, and work with an irrelevant rewarding base. And be ignored, at least by those who are monetarily connected to the field.

By way of thought experiment, consider how many peer-review committees on security conferences include the experts in the field?

This is very interesting, but I'm not sure I agree. I think another reason might be the lack of ex-practitioners (with military and/or commercial hands-on experience) in the teaching ranks. Whatever the case, the problem should not be restricted to our field; there must be dozens of other professions with similar disconnects between academia and industry.

Incidentally, I was just invited to be on the peer-review committee for VizSec 2008, in conjunction with RAID 2008, in Boston next September. I am really excited to be attending both conferences. Maybe inviting me to be on the board is an indication of academia reaching out to industry?

A focus on practicality is one of the reasons I am drawn to the University of Cambridge Computer Laboratory, where the focus is on actionable security research, not theory.

Anton Chuvakin's Age of Compliance Reports

I didn't pay close enough attention when Anton Chuvakin first mentioned this series of articles he's writing. His "Age of Compliance" series addresses various operational security issues and then describes how certain legal frameworks (Federal Information Security Management Act, Payment Card Industry Data Security Standard, Health Insurance Portability and Accountability Act, etc.) influence those activities.

Thus far Anton has published:

These are great if you are trying to cite regulations for justifying security funding.

Friday, September 14, 2007

Hoff Interviews Andy Jaquith

Just a quick note -- Hoff conducted an excellent interview with Andy Jaquith at Take5 (Episode #6) - Five Questions for Andy Jaquith, Yankee Group Analyst and Metrician.... I liked this part (among others):

The arguments over metrics are overstated, but to the extent they are contentious, it is because "metrics" means different things to different people. For some people, who take a risk-centric view of security, metrics are about estimating risk based on a model. I'd put Pete Lindstrom, Russell Cameron Thomas and Alex Hutton in this camp.

For those with an IT operations background, metrics are what you get when you measure ongoing activities. Rich Bejtlich and I are probably closer to this view of the world. And there is a third camp that feels metrics should be all about financial measures, which brings us into the whole "return on security investment" topic. A lot of the ALE crowd thinks this is what metrics ought to be about. Just about every security certification course (SANS, CISSP) talks about ALE, for reasons I cannot fathom.

Once you understand that a person's point of view of "metrics" is going to be different depending on the camp they are in -- risk, operations or financial -- you can see why there might be some controversy between these three camps. There's also a fourth group that takes a look at the fracas and says, "I know why measuring things matter, but I don't believe a word any of you are talking about." That's Mike Rothman's view, I suspect.

China Cyberwar, or Not?

I've been writing about the Chinese threat for a while. I was glad to see Professor Spafford chime in with Who is Hacking Whom?:

It remains to be seen why so many stories are popping up now. It’s possible that there has been a recent surge in activity, or perhaps some recent change has made it more visible to various parties involved. However, that kind of behavior is normally kept under wraps. That several stories are leaking out, with similar elements, suggests that there may be some kind of political positioning also going on — the stories are being released to create leverage in some other situation.

Cynically, we can conclude that once some deal is concluded everyone will go back to quietly spying on each other and the stories will disappear for a while, only to surface again at some later time when it serves another political purpose. And once again, people will act surprised. If government and industry were really concerned, we’d see a huge surge in spending on defenses and research, and a big push to educate a cadre of cyber defenders.

You might also be wondering if the West and its allies are engaged in a "cyberwar" with China. Some might be asking if this is "information warfare." Here is my perspective.

DoD Joint Publication 3-13, Information Operations, differentiates between two sorts of offensive information operations.

  1. Computer Network Exploitation. Enabling operations and intelligence collection capabilities conducted through the use of computer networks to gather data from target or adversary automated information systems or networks. Also called CNE.

  2. Computer Network Attack. Actions taken through the use of computer networks to disrupt, deny, degrade, or destroy information resident in computers and computer networks, or the computers and networks themselves. Also called CNA.

You can think of CNE as spycraft, and CNA as warfare. In the physical world, the former is always occurring; the latter is hopefully much rarer. I would place all of the publicly reported activity from the last few months in the CNE category.

So why the war in the media over Chinese activity? I think this is part of the answer: what else can the West or China do? Consider similar situations and their consequences.

  • The UK seeks the extradition of Andrei Lugovoi for the murder of Alexander Litvinenko. Russia refuses, so the UK expels four Russian "diplomats." Russia responds by expelling four UK "diplomats."

  • Russian bombers encroach on the North Sea. The UK scrambles interceptors.

  • The FBI discovers Robert Hanssen is a Russian spy. The US expels six Russians, and the Russians seek to match that with their own expulsions.

This is how the international relations game is played. When the players have no way to express their concerns or make their intentions known, they are left with making statements to the media. The question is whether anything else might happen.

Thursday, September 13, 2007

US Needs Cyber NORAD

In addition to the previous Country v China stories I've been posting, consider the following excerpts. First, from China’s cyber army is preparing to march on America, says Pentagon:

Jim Melnick, a recently retired Pentagon computer network analyst, told The Times that the Chinese military holds hacking competitions to identify and recruit talented members for its cyber army.

He described a competition held two years ago in Sichuan province, southwest China. The winner now uses a cyber nom de guerre, Wicked Rose. He went on to set up a hacking business that penetrated computers at a defence contractor for US aerospace. Mr Melnick said that the PLA probably outsourced its hacking efforts to such individuals. “These guys are very good,” he said. “We don’t know for sure that Wicked Rose and people like him work for the PLA. But it seems logical. And it also allows the Chinese leadership to have plausible deniability.”

On one side we have the Chinese military organizing hackfests and sending work to the best. On the other side we have defense contractors often selected by lowest bidder. Worse, when those contractors are actually clueful and resourceful (like Shawn Carpenter), they are fired. From Cyberspies Target Silent Victims:

The U.S. Department of Defense confirmed last week that cyberspies have been sifting through some government computer systems. What wasn't said: The same spies may have been combing through the computer systems of major U.S. defense contractors for more than a year.

"There's been a massive, broad and successful series of attacks targeting the private sector," says Alan Paller, director of the SANS Institute, a Bethesda, Md.-based organization that hosts a response center for companies with cybersecurity crises. "No one will talk about it, but companies are creating a frenzy trying to stop it..."

None of the companies have publicly reported data breaches, though many have informed the Department of Defense. "Reporting an event like this would kill your stock price," says a source close to the military contractor industry who asked not to be named...

When Carpenter warned government officials in the Army and the FBI of his findings in 2004, he was fired. Sandia officials declined to comment on any subject relating to the Titan Rain hackings. Carpenter says his former employer's attempts to keep the incident quiet are typical.

In China as Victim I noted the following:

Lou said the electronic espionage against China has met with success. It therefore needs to be addressed by President Hu Jintao's government, he added, with additional investment in computer security and perhaps formation of a unified information security bureau.

That's China saying they need a high-level, concentrated group to protect Chinese assets. On what does the US rely? Apparently, the Department of Homeland Security and an assistant secretary for cyber-security and telecommunications.

Let's find this person on the DHS organizational chart.

Missed the assistant secretary for cyber-security and telecommunications? That's because he's not even in the top chart. He's working for the Under Secretary for National Protection Programs, whose peers include an Under Secretary for Management and an Under Secretary for Science and Technology. Seriously.

The more I think about it, the more of a disgrace this is. Consider: every single government agency uses computers. Not only that, every single US company uses computers. (If they don't, I doubt they qualify as a company!)

We often hear that the private sector should protect itself, since the private sector owns most of the country's critical infrastructure. By the same reasoning, I guess Ford defends the airspace over Dearborn, MI; Google protects Mountain View, CA; and so on.

No? (By the way, I know that the US through the FAA "owns" the airspace over the country, but it's literally not the airspace itself that matters; it's what is underneath -- people, buildings, resources, and so on.)

I plan to develop this thought further, but for now I take comfort in knowing the Air Force Cyber Command is coming. Remember the Air Force started as

a small Aeronautical Division to take "charge of all matters pertaining to military ballooning, air machines and all kindred subjects"

on 1 August 1907. 100 years later, Cyber Command is coming. Hopefully a "Cyber NORAD" might follow. Remember, monitor first.

We might eventually get a new Cyber Force focused solely on defending the digital realm. Stay tuned.

Australia v China

My blog readers are quick. No sooner do I ask about Australia than I get a link to China 'hacked Australian government computers':

CHINA has allegedly tried to hack into highly classified government computer networks in Australia and New Zealand as part of a broader international operation to glean military secrets from Western nations.

The Howard Government yesterday would neither confirm nor deny that its agencies, including the Defence Department, had been subject to cyber attack from China, but government sources acknowledge that thwarting such assaults is a continuous challenge.

"It's a serious problem, it's ongoing and it's real," one senior government source said...

Australian Attorney-General Philip Ruddock is sufficiently concerned about cyber attacks to be spending more than $70 million to improve the e-security of government and private computer networks.

Air Force Cyber Command Provisionally at Barksdale

What a busy night. I just read Wynne taps Barksdale to host Cyber Command:

The Air Force Cyber Command will be headquartered, at least on an interim basis, at Barksdale Air Force Base, La., Secretary of the Air Force Michael Wynne announced Wednesday while visiting Barksdale and the base’s surrounding communities.

Wynne is expected to offer more details about Cyber Command on Tuesday as the part of the Air Force’s Pentagon celebration of its 60th anniversary.

The command will likely be led by a two-star general, officials said. While four-star generals traditionally head Air Force major commands, commands with fewer members, such as Air Force Special Operations Command, have two- or three-star generals in charge.

Like the other major commands, Cyber Command will answer directly to the secretary and the chief of staff.

There had been some consideration that Cyber Command would come under Air Combat Command’s 8th Air Force, headquartered at Barksdale. The 8th oversees much of the service’s computer network defense and information warfare capabilities. The commander of the 8th, Lt. Gen. Robert Elder, has been the service’s point man for mapping out Cyber Command’s structure and requirements for training members and acquiring equipment.

Barksdale won on an interim basis because the Air Force Network Operations Center is there, although this AFNOC Fact Sheet mentions that the AFNOC Network Security Division is at Lackland Air Force Base, Texas and the AFNOC Network Operations Division is at Gunter Annex, Alabama.

I think Air Force Cyber Command will be permanently based in San Antonio (to leverage AFISR Agency) or potentially a base near DC, to facilitate coordination with Ft Meade.

I am really looking forward to attending Victory in Cyberspace, hosted by the Air Force Association:

The Eaker Institute will release a report “Victory in Cyberspace” and host a panel discussion about the Cyberspace domain at 1 p.m., Tuesday, Oct. 9, at the National Press Club...

This Eaker Institute Panel will discuss how cyberspace should become equal with air and space in the Air Force’s mission set and how that affects the airman’s profession and the nation’s security priorities. Participants include Lt. Gen. Elder, who commands the Air Force headquarters for cyberspace, global strike and network operations, including establishing a new Cyber Command; Gen. Jumper (ret.), former Chief of Staff of the Air Force; and Lt. Gen. Baker (ret.), former Vice Commander, Air Mobility Command [and one of my AIA commanders].

I expect to hear more about Air Force Cyber Command on the Air Force 60th Birthday on 18 September.