Tuesday, October 28, 2008

Vulnerabilities and Exploits Are Mindless

Jofny's comment on my post Unify Against Threats asked the following:

So, Richard, I'm curious which security people - who are decision makers at a business level - are focusing on vulnerabilities and not threats?

If there are people like that, they really need to be fired.

This comment was on my mind when I read the story FBI: US Business and Government are Targets of Cyber Theft in the latest SANS NewsBites:

Assistant Director in charge of the US FBI's Cyber Division Shawn Henry said that US government and businesses face a "significant threat" of cyber attacks from a number of countries around the world. Henry did not name the countries, but suggested that there are about two dozen that have developed cyber attack capabilities with the intent of using those capabilities against the US. The countries are reportedly interested in stealing data from targets in the US. Henry said businesses and government agencies should focus on shoring up their systems' security instead of on the origins of the attacks.

The editors' comments are the following:

(Pescatore): It really doesn't matter where the attacks come from, businesses have been getting hit by sophisticated, financially motivated, targeted attacks for several years now.
(Ullrich): A very wise remark. It doesn't matter who attacks you. The methods used to attack you and the methods used to defend yourself are the same. We spend too much time worrying about geographic origins. In cyberspace, nation states are a legacy concept.

This is the mindset that worries me, even though the FBI AD agrees. It ignores this fact: Vulnerabilities and exploits are mindless. On the other hand, intelligent adversaries are not. Therefore, if you are doing more than defending yourself against opportunistic, puerile attackers, it pays to know your enemy by learning about security threats (as shown on the book cover to the right).

Once your security program has matured to the point where not any old caveman can compromise you, it pays to put yourself in the adversary's place. Who might want to exploit your organization's data? What data would be targeted? How could you defend it? How could you detect failure? When complaining to the government and/or law enforcement, to whom can you attribute the attack? Knowing the enemy helps prioritize what to defend and how to do it.

About the AD telling businesses not to worry about threat sources: he's just quoting official FBI policy. I wrote about this in More Threat Reduction, Not Just Vulnerability Reduction:

Recently I attended a briefing where a computer crimes agent from the FBI made the following point:

Your job is vulnerability reduction. Our job is threat reduction.

In other words, it is beyond the legal or practical capability of most computer crime victims to investigate, prosecute, and incarcerate threats.

Let's briefly address the "In cyberspace, nation states are a legacy concept" comment. We've been hearing this argument for fifteen years or more. Last time I checked, nation states were alive and well and shaping the way cyberspace works. Just this morning I read the Economist article Information technology: Clouds and judgment; Computing is about to face a trade-off between sovereignty and efficiency:

The danger is less that the cloud will be a Wild West than that it will be peopled by too many sheriffs scrapping over the rules. Some enforcers are already stirring up trouble, threatening employees of online companies in one jurisdiction to get their employers based in another to fork over incriminating data for instance. Several governments have passed new laws forcing online firms to retain more data. At some point, cloud providers may find themselves compelled to build data centres in every country where they do business.

Finally, independent actors do not operate intelligence services that target our enterprises; nation states do. I've written about Counterintelligence and the Cyber Threat before. Part of the problem may stem from a distinction Ira Winkler made at RSA 2006, which I noted in my post RSA Conference 2006 Wrap-Up, Part 3:

I highly recommend that those of you who give me grief about "threats" and "vulnerabilities" listen to what Mr. Winkler has to say. First, he distinguishes between those who perform security functions and those who perform counter-intelligence. The two are not the same. Security focuses on vulnerabilities, while counter-intelligence focuses on threats.

Maybe I spend more time on the counterintelligence problem than others, but I can't see how vulnerability-centric security is a good idea -- except for those who sell "countermeasures."

Unify Against Threats

At my keynote at the 2008 SANS Forensics and IR Summit I emphasized the need for a change in thinking among security practitioners. Too often security and IT groups have trouble relating to other stakeholders in an organization because we focus on vulnerabilities. Vulnerabilities are inherently technical, and they mean nothing to others who might also care about security risks, like human resources, physical security, audit staff, legal staff, management, business intelligence, and others. I used the following slide to make my point:

My point is that security people should stop framing our problems in terms of vulnerabilities or exploits when speaking with anyone outside our sphere of influence. Rather, we should talk in terms of threats. This focuses on the who and not the what or how. This requires a different mindset and a different data set.

The business should create a strategy for dealing with threats, not with vulnerabilities or exploits. Notice I said "business" and not "security team." Creation of a business-wide strategy should be done as a collaborative effort involving all stakeholders. By keeping the focus on the threats, each stakeholder can develop detective controls and countermeasures as they see fit -- but with a common adversary in mind. HR can focus on better background checks; physical security on guns and guards; audit staff on compliance; legal staff on policies; BI on suspicious competitor activities, and so on. You know you are making progress when management asks "how are we dealing with state-sponsored competitors" instead of "how are we dealing with the latest Microsoft vulnerability?"

This doesn't mean you should ignore vulnerabilities. Rather, the common strategy across the organization should focus on threats. When it comes to countermeasures in each team, then you can deal with vulnerabilities and the effect of exploits.

Note that focusing on threats requires real all-source security intelligence. You don't necessarily need to contract with a company like iDefense, one of the few that does the sort of research I suggest you need. This isn't a commercial for iDefense and I don't contract with them, but their topical research reporting is an example of helpful (commercial) information. I would not be surprised, however, to find that much of the background you need is already held by the stakeholders in your organization. Unifying against the threats is one way to bring these groups together.

Monday, October 27, 2008

Trying Secunia Vulnerability Scanning

One feature most Unix systems possess, and most Windows systems lack, is a native means to manage non-base applications. If I install packages through apt-get or a similar mechanism on Ubuntu, the package manager notifies me when updates are needed and makes them easy to install. Windows does not natively offer this function, so third party solutions must be installed.

I had heard about Secunia's vulnerability scanning offerings, but I had never tried them. I decided to try the online version (free for anyone) and then the personal version on a home laptop I hadn't booted recently.

You can see the results for the online scanner below. All that was needed was a JRE install to get these results.

The online scanner noticed I was running an older version of Firefox, and I needed to apply recent Microsoft patches. The fact that it checked Adobe Flash and Acrobat Reader was important, since those are popular exploit vectors.

Next I tried the personal version and got the results below.

This scan added more results, but only after I unchecked "Show only 'Easy-to-Patch' programs" on the Settings tab. I like that Secunia told me that my Intel wireless NIC driver needed patching. If I look for details I see this:

Clicking on the Download Solution icon took me to an Intel Web page, but at that point I needed to know what NIC driver I needed. That's why Secunia says "If you have the technical knowledge to handle more difficult programs, then we strongly recommend that you disable this setting" with respect to the "Show only 'Easy-to-Patch' programs" option.

I noticed Secunia doesn't check to see if WinSCP is patched, so I used the easy "Program missing? Suggest it here!" link to offer that idea to Secunia.

What do you use to keep the various applications installed on Windows up-to-date?

Sunday, October 26, 2008

Review of OSSEC HIDS Guide Posted

Amazon.com just posted my five star review of OSSEC HIDS Guide. From the review:

I'm surprised no one has offered serious commentary on the only book dedicated to OSSEC, an incredible open source host-based intrusion detection system. I first tried OSSEC in early 2007 and wrote in my blog: "OSSEC is really amazing in the sense that you can install it and immediately it starts parsing system logs for interesting activity." Stephen Northcutt of SANS quotes this post in his foreword to the book on p xxv. Once you start using OSSEC, especially with the WebUI, you'll become a log addict. OSSEC HIDS Guide (OHG) is your ticket to taking OSSEC to the next level, even though a basic installation will make you stronger and smarter.

I'm not kidding about the log addict part. I find myself obsessively hitting the refresh button on my browser when viewing the OSSEC WebUI, even though it refreshes itself. Sad.

Comment on New Amazon Reviewer Ranking System

I just happened to notice a change to my Amazon.com reviews page. If you look at the image on the left, you'll see two numbers: "New Reviewer Rank: 481" and "Classic Reviewer Rank: 434". I found the following explanation:

You may have noticed that we've recently changed the way top reviewers are ranked. As we've grown our selection at Amazon over the years, more and more customers have come to share their experiences with a wide variety of products. We want our top reviewer rankings to reflect the best of our growing body of customer reviewers, so we've changed the way our rankings work. Here's what's different:

  • Review helpfulness plays a larger part in determining rank. Writing thousands of reviews that customers don't find helpful won't move a reviewer up in the standings.

  • The more recently a review is written, the greater its impact on rank. This way, as new customers share their experiences with Amazon's ever-widening selection of products, they'll have a chance to be recognized as top reviewers.

  • We've changed the way we measure review quality to ensure that every customer's vote counts. Stuffing the ballot box won't affect rank. In fact, such votes won't even be counted.

We're proud of all our passionate customer reviewers and grateful for their investment of time and energy helping other Amazon customers.

On my overall profile page I found a second statistic, shown at left, which says that 90% of my votes are considered "helpful." That's cool! I appreciate any helpful votes I get. It's the main feedback for reviews I write so I am glad anytime I see someone logged into Amazon.com who votes for my reviews.

Apparently you shouldn't vote too often for me, because under the new system you're considered a fan voter and ignored!

Fan voters are people who consistently appreciate the author's reviews. These votes are not reflected in the total vote count to provide our customers with the most unbiased and accurate information possible.

Right now I have 131 "fan voters," so that's another reason my ranking dropped from 434 to 481.

The proof for me, however, regarding the new ranking system would be the effect on someone I know who writes a dozen or more "reviews" per day, most of which I consider worthless. 4437 "reviews" (i.e., books read) since October 2002? That's two books per day -- no way! As you can see on the right, this person has fallen from number 11 under the Classic Ranking down to 521. Ha ha.

Looking at the profile statistic, you can see a 75% rating. That's higher than I expected, but it definitely had an effect on the overall ranking. I think what really hurt this guy is his "fan voter" count: 892. I have a feeling Amazon.com believes these fans are fake accounts under the control of the reviewer, so Amazon.com has decided to just ignore them. For someone like Mr. 521 with "892 fans," I could see how that would affect his rank.

There's a hot debate in the Amazon.com forums about this topic now. Some people are really bent out of shape over these changes. Take it easy -- it's just Amazon.com.

Saturday, October 25, 2008

Security Event Correlation: Looking Back, Part 3

I'm back with another look at security event correlation. This time it's a June 2008 review of SIEM technology by Greg Shipley titled SIEM tools come up short. The majority of the article talks about non-correlation issues, but I found this section relevant to my ongoing analysis:

"Correlation" has long been the buzzword used around event reduction, and all of the products we tested contained a correlation engine of some sort. The engines vary in complexity, but they all allow for basic comparisons: if the engine sees A and also sees B or C, then it will go do X. Otherwise, file the event away in storage and move onto the next. We'd love to see someone attack the event reduction challenge with something creative like Bayesian filtering, but for now correlation-based event reduction appears to be the de facto standard...

Ok, that sounds like "correlation" to me. Let's see an example.

For example, one of the use cases we tackled was the monitoring of login attempts from foreign countries. We wanted to keep a particularly close watch on successful logins from countries in which we don't normally have employees in. To do this, there are a few things that had to be in place: We had to have authentication logs from the majority of systems that would receive external logins (IPsec and SSL VPN concentrators, Web sites, any externally exposed *NIX systems); we had to have the ability to extract usernames and IP addresses from these logs; and, we had to have the ability to map an IP address to a country code. Not rocket science to do without a SIEM, but not entirely trivial, either.

That doesn't quite seem to match. This use case says "if any system to which a user could log in registers a login from a foreign country, generate an alert." This is simply putting login records from a variety of sources in one place so that a generic policy ("watch for foreign logins") can be applied, after which an alert is generated. Do you really need a SIEM for that?
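In fact, the whole use case can be sketched in a few lines without a SIEM. The code below is illustrative only: the geolocation step is faked with a tiny hardcoded prefix table, where a real deployment would consult a GeoIP database, and the function and table names are my own inventions, not anyone's product.

```python
# Sketch of "alert on successful logins from unexpected countries."
# The geolocation lookup is a stand-in: prefix -> country code.

EXPECTED_COUNTRIES = {"US", "GB"}

GEO_TABLE = {
    "192.0.2.": "US",
    "198.51.100.": "RO",
    "203.0.113.": "CN",
}

def country_for(ip):
    # A real implementation would query a GeoIP database here.
    for prefix, country in GEO_TABLE.items():
        if ip.startswith(prefix):
            return country
    return "??"

def foreign_login_alerts(events):
    """events: iterable of (user, ip, success) tuples from any log source."""
    alerts = []
    for user, ip, success in events:
        if success and country_for(ip) not in EXPECTED_COUNTRIES:
            alerts.append((user, ip, country_for(ip)))
    return alerts

events = [
    ("alice", "192.0.2.10", True),    # expected country, ignored
    ("bob", "198.51.100.7", False),   # failed login, ignored
    ("bob", "198.51.100.7", True),    # successful foreign login -> alert
]
print(foreign_login_alerts(events))   # [('bob', '198.51.100.7', 'RO')]
```

The hard work, as Greg notes, is getting the authentication logs collected and the usernames and IPs extracted in the first place; the "correlation" itself is a one-line policy check.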

Here's a thought experiment for those who think "prevention" is the answer: why aren't foreign logins automatically blocked? "If you can detect it, why can't you prevent it?" The key word in the example is "normally," meaning "we don't know our enterprise or business well enough to define normality, so we can't identify exceptions which indicate incidents." We can't block the activity, but we'd like to know when it happens, i.e., drop the P, put back the D between I and S.

Back to correlation -- I think a real correlation case would be "if you see a successful login, followed by access to a sensitive file, followed by the exfiltration of that file, fire an alert." Hold on, this is where it gets interesting.

There are three contact points here, assuming the foreign login is by an unauthorized party:

  1. Access via stolen credentials: If it's not the user, the credentials were stolen. However, you didn't stop it, because you don't know the credentials are stolen.

  2. Access to a sensitive file: How did you know it was sensitive? Because the intruder is impersonating a user whose status is assumed to permit access, you don't stop it.

  3. Exfiltration of the file: If this account (under legitimate or illegitimate control) shouldn't be removing this file, why is that allowed to happen? The answer is that you don't know beforehand that it's sensitive, and there is no real control at the file level for preventing its removal.

If you knew enough to identify that this activity is bad, at each contact point you should have stopped it. If you're not stopping it, why? It's probably because you don't know any of these contact points are bad. You don't know the credentials are stolen (yet). The impersonated user probably has legitimate access to a file, so you're not going to block that. Legitimate users also probably can move files via authorized channels (such as would be the case via this "login"), so you don't block that.

In other words, if you're not smart enough to handle this, why would correlation via a SIEM be any smarter?
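For concreteness, here is roughly what such a three-stage rule looks like as a per-user state machine. The event names, fields, and window are my own illustrations, not any SIEM vendor's schema; the point is how little "intelligence" the mechanism itself contains.

```python
# Minimal sketch of the three-stage correlation rule discussed above:
# alert only when login, sensitive-file access, and exfiltration are
# seen for the same user, in order, within a time window.

WINDOW = 3600  # seconds; arbitrary illustration
SEQUENCE = ["login_success", "sensitive_file_access", "file_exfiltration"]

def correlate(events):
    """events: list of (timestamp, user, event_type), assumed time-ordered."""
    state = {}   # user -> (stage index reached, time of first event)
    alerts = []
    for ts, user, etype in events:
        stage, start = state.get(user, (0, ts))
        if ts - start > WINDOW:          # window expired; start over
            stage, start = 0, ts
        if etype == SEQUENCE[stage]:
            stage += 1
            if stage == 1:
                start = ts               # window opens at the login
            if stage == len(SEQUENCE):
                alerts.append((user, ts))
                stage = 0
        state[user] = (stage, start)
    return alerts

events = [
    (100, "mallory", "login_success"),
    (200, "mallory", "sensitive_file_access"),
    (300, "alice",   "login_success"),
    (400, "mallory", "file_exfiltration"),
]
print(correlate(events))  # [('mallory', 400)]
```

Notice the machine fires only after the file is gone. Every input event looked legitimate on its own, which is exactly the problem described above.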

Cue my Hawke vs the Machine post from almost two years ago:

Archangel: They haven't built a machine yet that could replace a good pilot, Hawke.

Hawke: Let's hope so.

Back to Greg's case. It turns out that generic policy application against disparate devices appears to be the "win" here:

Q1 Labs' QRadar had all of the functionality to do this, and we were able to build a multi-staged rule that essentially said, "If you see a successful login event from any devices whose IP address does not originate from one of the following countries, generate an alert". Because of the normalization and categorization that occurs as events flow into the SIEM, it's possible to specify "successful login event" without getting into the nuances of Linux, Windows, IIS, VPN concentrators. This is the convenience that SIEM can offer. (emphasis added)

Is that worth the money?

Finally, I'm a little more suspicious about the following:

Most modern SIEM products also ship with at least a minimum set of bundled correlated rules, too. For example, when we brought a new Snort IDS box online, there was a deluge of alerts, the majority of them considered false-positives. Because of useful reduction logic, there was only one alert out of 6,000 that actually appeared on our console across all of the products tested. That alert was based on a predefined correlation rule that looked for a combination of "attack" activity and a successful set of logins within a set period of time.

It's more likely the SIEMs considered the "deluge" events to be of lower priority, so they never appeared on screen. Think of the myriad of ICMP, UPnP, and other alerts generated by any stock IDS ruleset being "tuned down" as "informational" so they don't make the front of the dashboard. If I knew this SIEM test correlated vulnerability data with IDS attack indications, the useful reduction logic would make more sense. I can't be sure but you can guess which way I'm leaning.

Security Event Correlation: Looking Back, Part 2

In my last post Security Event Correlation: Looking Back, Part 1 I discussed a story from November 2000 about security event correlation. I'd like to now look at Intrusion Detection FAQ: What is the Role of Security Event Correlation in Intrusion Detection? by Steven Drew, hosted by SANS. A look at the Internet Archive shows this article present as of August 2003, so we'll use that to date it.

[A]s pointed out by Steven Northcutt of SANS, deploying and analyzing a single device in an effort to maintain situational awareness with respect to the state of security within an organization is the "computerized version of tunnel vision". Security events must be analyzed from as many sources as possible in order to assess threat and formulate appropriate response... This paper will demonstrate to intrusion analysts why correlative analysis must occur in order to understand the complete scope of a security incident.

Ok, let's go. I'll summarize the article rather than post clips here because I can make the point in a few sentences. The article shows how an adversary scans for CGI scripts phf, formmail, and survey.cgi, and how four data sources -- a router, a firewall, an IDS, and a Web server -- see the reconnaissance events. First the author shows the view of the "incident" from the perspective of each of the four data sources. Next he describes how looking at all of the data together results in a better overall understanding of the incident. He provides this "Ven" (sic) diagram and the following text:

The diagram shows that removing the analysis of even just one of the device's log data, our understanding of the incident can drop dramatically. For example, if we remove the analysis of the web server error_log, we would not have known that the script access attempt failed. If we had not analyzed the router, we would not have known the probing host scanned the entire class C of addresses for web servers. If we had not analyzed the www access_log, we would not have known that the probing host was likely using Lynx as the web browser to check for the scripts. If we had not analyzed the network IDS logs, we may not have known that the activity was related to well known exploit attempts. (emphasis added)

Do you see the same problems with this that I do? This is my overall reaction: aside from the access failed ("404") messages from the Web server error logs, who cares? Port scans: who cares. Using Lynx: who cares. Snort saw it: who cares. All that matters is the activity failed. In fact, since it was only reconnaissance, who could care at all? People who spend time on this sort of activity should be doing something more productive.

It appears that getting to the heart of the matter, namely looking at the target application logs (here, Apache), yields the information one really needs for this sort of incident. In other words, correlation isn't the governing principle; access to the right sort of evidence dominates. If the analyst in this case didn't have access to the Web server logs, we'd be much more concerned (or maybe not).

Furthermore, notice there is zero mention of whether the target of this incident matters, or what compensating controls might exist, or a dozen other lacking contextual issues. As I mentioned in my last post, these sorts of problems are the true obstacle to security event correlation.
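For what it's worth, the mechanical side of the paper's cross-source view is trivial: normalize events from each log to a common shape and group them by source IP. The sketch below invents its own sample records and field names for illustration. The hard part, as argued above, is the missing context about targets and controls, not the merge.

```python
# Rough sketch of the four-source view from the paper: events from the
# router, IDS, and web server logs are normalized to
# (source_ip, device, detail) and grouped by source IP, so everything
# one probing host did appears in one place.

from collections import defaultdict

def merge_by_source(*logs):
    picture = defaultdict(list)
    for log in logs:
        for src_ip, device, detail in log:
            picture[src_ip].append((device, detail))
    return dict(picture)

# Invented sample records mirroring the paper's scenario.
router  = [("203.0.113.9", "router", "scanned class C for port 80")]
ids     = [("203.0.113.9", "ids", "WEB-CGI phf access attempt")]
web_err = [("203.0.113.9", "apache-error", "GET /cgi-bin/phf -> 404")]
web_acc = [("203.0.113.9", "apache-access", "User-Agent: Lynx")]

incident = merge_by_source(router, ids, web_err, web_acc)
for device, detail in incident["203.0.113.9"]:
    print(device, "-", detail)
```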

Security Event Correlation: Looking Back, Part 1

I've been thinking about the term "correlation" recently. I decided to take a look back to determine just what this term was supposed to mean when it first appeared on the security scene.

I found Thinking about Security Monitoring and Event Correlation by Billy Smith of LURHQ, written in November 2000. He wrote:

Security device logging can be extensive and difficult to interpret... Along with lack of time and vendor independent tools, false positives are another reason why enterprise security monitoring is not easy...

The next advance in enterprise security monitoring will be to capture the knowledge and analytical capabilities of human security experts for the development of an intelligent system that performs event correlation from the logs and alerts of multiple security technologies.

Ok, so far so good.

For example Company A has a screening router outside of their firewall that protects their corporate network and a security event monitoring system with reliable artificial intelligence. The monitoring system would start detecting logs where the access control lists or packet screens on the screening router were denying communications from a certain IP address. Because the intelligent system is intelligent it begins detailed monitoring of the firewall logs and logs of any publicly accessible servers for any communications destined for or originating from the IP address. If the intelligent system determined that there was malicious communication, the system would have the capability to modify the router access control lists or the firewall configuration to deny any communication destined for or originating from the IP address. (emphasis added)

Ok, you lost me. The enterprise is already "denying communications," implying an administrator already knew to configure defensive measures. Because denied traffic is logged, the correlation system looks for traffic somewhere else in the enterprise, and then modifies access control lists it finds that currently allow said traffic? What is this, a mistake detection mechanism?

Let's look at the next example.

What if the intelligent system began detecting multiple failed logins to an NT server by the president of the company? It would be useful for this technology to determine where these failed logins were originating from and "look for" suspicious activity from this IP and/or user for some designated timeframe. If this system determined that the failed logins originated from a user other than the president of the company, it could begin to closely monitor for a period of time all actions by this user and the company president (the user could be impersonating the president). This monitoring could include card readers, PBXs or voice mail access, security alarms from secured doors and gates and access to other servers. If the monitoring system were not correlating events the user impersonating the company president would probably bypass all access control and security monitoring devices because the user's actions appear as "normal" activity. (emphasis added)

This example is a little better, until the end. Failed logins happen every day, but an excessive number of failed logins can indicate an attack. I'm not exactly sure how the inclusion of other log sources is supposed to make a difference here, however. Furthermore, if the "user's actions appear as 'normal' activity," just how is it supposed to be identified as suspicious?
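The one piece of this example that does plausibly work, detecting an excessive number of failed logins within a short window, needs no intelligent system at all. This sketch uses a threshold and window I chose arbitrarily for illustration.

```python
# Flag accounts with too many failed logins inside a sliding window.

from collections import deque, defaultdict

THRESHOLD = 5    # failed attempts; arbitrary illustration
WINDOW = 300     # seconds

def detect_bruteforce(failures):
    """failures: time-ordered (timestamp, account) tuples for failed logins."""
    recent = defaultdict(deque)
    alerts = []
    for ts, account in failures:
        q = recent[account]
        q.append(ts)
        while q and ts - q[0] > WINDOW:   # drop attempts outside the window
            q.popleft()
        if len(q) >= THRESHOLD:
            alerts.append((account, ts, len(q)))
    return alerts

failures = [(t, "president") for t in (0, 30, 60, 90, 120)]
print(detect_bruteforce(failures))  # [('president', 120, 5)]
```

Correlating this alert with card readers or PBX logs, as the article proposes, only helps if you already know what "normal" presence looks like, which is precisely the knowledge the article assumes away.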

The correlation argument falls to pieces in the penultimate paragraph of this article:

Today there is one major obstacle to intelligent event correlation enterprise-wide. There is no standard for logging security related information or alerts. Every vendor uses their own logging or alerting methodology on security related events. In many cases there are inconsistent formats among products from the same vendor. These issues make enterprise security monitoring difficult and event correlation almost impossible with artificial intelligence. The industry will need to impose a standard method or protocol for logging and alerting security related events before an intelligent system can be developed and successfully implemented enterprise-wide. (emphasis added)

Wow, that is absolutely off-target. Lack of a logging standard is problematic, but the absolute worst problems involve having no idea 1) what assets exist; 2) what assets matter; 3) what activity is normal; 4) who owns what assets; 5) what to do about an incident.

So far there's nothing compelling about "correlation" here. The article hints that one might learn more about failed login attempts if an analyst could check physical access logs to verify the in-office presence of a person, but couldn't the source IP for the failed logins roughly indicate the same? Even if the company president is in the building, it doesn't mean he/she is at his/her computer.

In the next part of this article we'll move forward in time to look at more correlation history.

Thoughts on Security Engineering, 2nd Ed

One of my favorite all-time security books is Security Engineering by Prof Ross Anderson, which I read and reviewed in 2002. Earlier this year Wiley published Security Engineering, 2nd Ed. The first edition was a 612 page soft cover; the second edition is a massive 1040 page hard cover.

To learn more about the new edition, I recommend visiting Ross' book page. This title should be included in every academic security program. Cambridge University uses each of the three parts of the tome in three separate computer security classes, as noted on the book page. If you're in a formal security program and you've never heard of this book, ask your professors why it's not included. If your professors have never heard of this book, ask yourself why you are studying in that program.

Three years ago I posted What the CISSP Should Be, offering NIST SP 800-27, Rev. A, Engineering Principles for Information Technology Security (A Baseline for Achieving Security) as the basis for the CISSP. The CISSP should use Prof Anderson's book as an historical application and practical expansion of the core ideas of 800-27.

Security Engineering would make a great text for a year-long program that meets every other week, with participants reading one chapter for each session. (The book is 27 chapters, but the last is only a three page conclusion that could be wrapped into session 26.) Those taking the time to read, discuss, and understand the material in this book would know far more about security than anyone wasting time in a series of CISSP CBK cram sessions.

If you read the first edition, I still recommend buying and reading the second edition. As you'll see on the book page, Wiley allowed Ross to post 6 chapters in .pdf format, along with the table of contents, preface, acknowledgements, bibliography, and index. The entire first edition is also still online if you want to start there.

Security Book Publishing Woes

Practical UNIX and Internet Security, 2nd Ed (pub Apr 96) by Simson Garfinkel and Gene Spafford was the first computer security book I ever read. I bought it in late 1997 after hearing about it in a "UNIX and Solaris Fundamentals" class I took while on temporary assignment to JAC Molesworth. Although I never formally listed it in my Amazon.com reviews, I did list it first in my Favorite 10 Books of the Last 10 years in 2007.

Since reading that book, I've read and reviewed over 270 technical books, mostly security but some networking and programming titles. In 2008 I've only read 15 so far, but I'm getting serious again with plans to read 16 more by the end of the year. (We'll see how well I do. I only read 25 last year, but my yearly low was 17 in 2000. My yearly high was 52 in 2006, when I flew all over the world for TaoSecurity LLC and read on each flight.)

Security books are on my mind because I had a conversation with a book publisher this week. She told me the industry has been in serious decline for a while, meaning people aren't buying books. Apparently this decrease in sales is industry-wide, punishing both good books (those recognized as being noteworthy) and bad (which you would expect to sell poorly anyway).

Some people blame the book Hacking Exposed (6th edition due in Feb 09) for creating unrealistic expectations in the minds of book publishers. McGraw-Hill claims HE is the best-selling security book of all time. I've heard numbers between 500,000 and 1,000,000 copies across the editions (not counting the other titles in the HE line.) That blows away any other security book.

I've got about 50 titles on my reading list for the remainder of 2008 and the first half of 2009. About 1/3 are programming books, 1/4 are related to vulnerability discovery, 1/5 could be called "hacking" books, and the remainder deal with general security topics. I only plan to read what I would call "good books," so from my perspective there's plenty of good new-ish books around. However, thus far this year I've only read two five-star books, Applied Security Visualization and Virtual Honeypots.

What do you think of the security book publishing space? Are there too many books? Are there too few good books? Are books too expensive? What books would you like to see published?

Review of Applied Security Visualization Posted

Amazon.com just posted my five star review of Applied Security Visualization by Raffy Marty. From the review:

Last year I rated Greg Conti's Security Data Visualization as a five star book. I said that five star books 1) change the way I look at a problem, or properly introduce me to thinking about a problem for which I have little or no frame of reference; 2) have few or no technical errors; 3) make the material actionable; 4) include current research and reference outside sources; and 5) are enjoyable reads. Raffy Marty's Applied Security Visualization (ASV) scores well using these measures, and I recommend reading it.

Thursday, October 23, 2008

Windows Syslog Agents Plus Splunk

I've been mulling strategies for putting Windows Event Logs into Splunk. Several options exist.

  1. Deploy Splunk in forwarding mode on the Windows system.

  2. Deploy a Syslog agent on the Windows system.

  3. Deploy OSSEC on the Windows system and send OSSEC output to Splunk.

  4. Deploy Windows Log Parser to send events via Syslog on a periodic basis.

  5. Retrieve Windows Event Logs periodically using WMIC.

  6. Retrieve Windows Event Logs using another application, like LogLogic Lasso or DAD.
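
For option 1, a minimal sketch of what the forwarder's inputs.conf stanzas might look like on the Windows host follows. The stanza and attribute names follow Splunk's Windows Event Log input conventions, but treat the details as assumptions to verify against your Splunk version's documentation.

```
# Hypothetical inputs.conf on a Windows host running Splunk as a forwarder
[WinEventLog:Security]
disabled = 0

[WinEventLog:System]
disabled = 0

[WinEventLog:Application]
disabled = 0
```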

I'd tried option 2 before using NTSyslog, so I decided to see what newer Syslog agents for Windows are available.

I installed DataGram SyslogAgent, a free Syslog agent, onto a Windows XP VM.

It was very easy to set up. I pointed it toward a free Splunk instance running on my laptop and got results like the following.

I noticed some odd characters inserted in the log messages, but nothing too extraordinary.

Next I tried the other modern free Syslog agent for Windows, SNARE. Development seems very active. I configured it to point to my Splunk server.

Next I checked the Splunk server for results.

As you can see the messages appear to be formatted a little better (i.e., no weird characters).

I was able to find logon messages recorded at different times by different Syslog agents. In the following screen capture, the top message is from SNARE and the bottom is from SyslogAgent.

I think if I decide to use a Syslog agent on Windows, I'll spend more time validating SNARE.

CWSandbox Offers Pcaps

Thanks to Thorsten Holz for pointing out that the latest online CWSandbox provides network traffic in Libpcap format for recently submitted malware samples.

I decided to give this feature a try, so I searched the Spam folder for one of my Gmail accounts. I found a suitable "Watch yourserlf in this video man)" email from 10 hours ago and followed the link. I was quickly reminded by Firefox 3 that visiting this site was a Bad Idea.

It took me a little while to navigate past my NoScript and Firefox 3 warnings to get to a point where I could actually hurt myself.

After downloading the "viewer.exe" file, I uploaded it to CWSandbox. That site told me:

The sample you have submitted has already been analysed. Please see the sample detail page for further information.

If you visit that page you'll find a PCAP link.

I took a quick look at the file with Argus and filtered out port 1900 traffic.

$ argus -r analysis_612050.pcap -w analysis_612050.pcap.arg

$ ra -n -r analysis_612050.pcap.arg - not port 1900
23:23:57.745266 e igmp -> 2 108 INT
23:23:59.079832 e tcp 2 114 RST
23:23:59.735571 e tcp -> 78 67219 RST
23:23:59.757777 e tcp -> 116 101525 RST
23:24:00.103663 e tcp 2 319 RST
23:24:08.147828 e tcp 2 319 RST
23:24:13.463815 e tcp 4 427 RST
23:24:16.556555 e tcp 3 168 RST
23:24:18.791427 e tcp 5 481 RST
23:24:26.456790 e udp <-> 2 250 CON
23:24:26.458842 e tcp -> 26 17295 FIN
23:24:26.600712 e tcp -> 10 1544 FIN
23:24:26.743598 e tcp -> 10 2099 FIN
23:24:26.854732 e tcp -> 10 1284 FIN
23:24:26.965697 e tcp -> 10 1545 FIN
23:24:27.070573 e tcp -> 14 6828 FIN
23:24:27.180786 e tcp -> 26 18334 FIN
23:24:27.310872 e tcp -> 12 4822 FIN
23:24:27.422057 e tcp -> 14 7415 FIN
23:24:27.527325 e tcp -> 11 3078 FIN

Here's a list of HTTP requests as filtered by Tshark.

$ tshark -n -r analysis_612050.pcap -R 'http.request == true and tcp.dstport != 1900'
11 2.097490 -> HTTP GET /scan.exe HTTP/1.1
12 2.097563 -> HTTP GET /cgi-bin/index.cgi?test7 HTTP/1.1
29 2.212609 -> HTTP GET /g.exe\330 HTTP/1.1
36 2.266404 -> HTTP GET /l.exe HTTP/1.1
119 2.475539 -> HTTP GET /g.exe\330 HTTP/1.1
186 3.308669 -> HTTP GET /g.exe\330 HTTP/1.1
188 3.390001 -> HTTP GET /g.exe\330 HTTP/1.1
230 28.765013 -> HTTP GET /bild15_biz.php?NN=a119 HTTP/1.1
256 28.906713 -> HTTP GET /adult.txt HTTP/1.1
266 29.049951 -> HTTP GET /pharma.txt HTTP/1.1
276 29.160854 -> HTTP GET /finance.txt HTTP/1.1
286 29.271530 -> HTTP GET /other.txt HTTP/1.1
296 29.376465 -> HTTP GET /promo/aol.com-error.html HTTP/1.1
310 29.486416 -> HTTP GET /promo/gmail.com-error.html HTTP/1.1
336 29.616847 -> HTTP GET /promo/google.com-error.html HTTP/1.1
348 29.727475 -> HTTP GET /promo/live.com-error.html HTTP/1.1
362 29.832947 -> HTTP GET /promo/search.yahoo.com-error.html HTTP/1.1

Kudos to CWSandbox for adding this capability.

Wednesday, October 22, 2008

What To Do on Windows

Often when I teach classes where students attain shell access to a Windows target, students ask "now what?" I found the blog post Command-Line Kung Fu by SynJunkie to be a great overview of common tasks using tools available within cmd.exe. It's nothing new, but I thought the author did a good job outlining the options and showing what they look like in his lab.

Monday, October 20, 2008

Trying Firefox with CMU Perspectives

The October issue of Information Security Magazine brought CMU's Perspectives Firefox plug-in to my attention. By now most of us are annoyed when we visit a Web site like OpenRCE.org that presents a self-signed SSL certificate. Assuming we trust the site, we manually add an exception and waste a few seconds of our lives. I probably wouldn't follow this process for my online bank, but for a site like OpenRCE.org it seems like overkill.

Leveraging history appears to be one answer to this problem. That's what Perspectives does. As stated in this CMU article:

Perspectives employs a set of friendly sites, or "notaries," that can aid in authenticating websites for financial services, online retailers and other transactions requiring secure communications.

By independently querying the desired target site, the notaries can check whether each is receiving the same authentication information, called a digital certificate, in response. If one or more notaries report authentication information that is different than that received by the browser or other notaries, a user would have reason to suspect that an attacker has compromised the connection...

"When Firefox users click on a website that uses a self-signed certificate, they get a security error message that leaves many people bewildered," [author] Andersen said. Once Perspectives has been installed in the browser, however, it can automatically override the security error page without disturbing the user if the site appears legitimate.

The system also can detect if one of the certificate authorities may have been tricked into authenticating a bogus website and warn the Firefox user that the site is suspicious.

That sounds pretty cool. I installed Perspectives and revisited OpenRCE.org. This time I sailed right through to the main page. In the screenshot you can see Perspectives bypassed the Firefox warning.

I can manually review the notary server results to see how they made their decision.

One note: I collected traffic during this event and noticed Perspectives use UDP:

08:20:43.900355 IP > UDP, length 32
08:20:43.900408 IP > UDP, length 32
08:20:43.900432 IP > UDP, length 32
08:20:43.900456 IP > UDP, length 32
08:20:44.015193 IP > UDP, length 233
08:20:44.033771 IP > UDP, length 233
08:20:44.034334 IP > UDP, length 233
08:20:44.044537 IP > UDP, length 233

Here's one exchange in detail:

08:20:43.900355 IP > UDP, length 32
0x0000: 4500 003c 0000 4000 4011 a1f1 c0a8 0267 E..<..@.@......g
0x0010: d88b fd24 ed43 3b71 0028 31c5 0101 0020 ...$.C;q.(1.....
0x0020: 0009 0016 0000 7777 772e 6f70 656e 7263 ......www.openrc
0x0030: 652e 6f72 673a 3434 332c 3200 e.org:443,2.
08:20:44.033771 IP > UDP, length 233
0x0000: 4520 0105 3d1c 0000 2e11 b5ec d88b fd24 E...=..........$
0x0010: c0a8 0267 3b71 ed43 00f1 3200 0103 00e9 ...g;q.C..2.....
0x0020: 0009 0016 00ac 7777 772e 6f70 656e 7263 ......www.openrc
0x0030: 652e 6f72 673a 3434 332c 3200 0001 0010 e.org:443,2.....
0x0040: 036d c8b9 0d28 09dc d349 cc79 7885 fa9a .m...(...I.yx...
0x0050: 6e48 bef4 d548 fc34 3b00 93f9 2c6a e349 nH...H.4;...,j.I
0x0060: d1c8 555b 4a66 7123 0057 79ee a19b 5250 ..U[Jfq#.Wy...RP
0x0070: 4d44 5ce2 811d 3092 93d4 382a be6d a596 MD\...0...8*.m..
0x0080: 53be c708 e235 b791 c358 921e 85f5 31ee S....5...X....1.
0x0090: c4e6 d938 bc52 9251 3675 0ba1 04cb 7c48 ...8.R.Q6u....|H
0x00a0: a667 c9af 3893 3f24 9c55 97f7 ffe7 5e48 .g..8.?$.U....^H
0x00b0: ce7e ea16 42df c532 4b5c 07f1 0ea1 6d0d .~..B..2K\....m.
0x00c0: ebf4 0a77 a318 5e3e 301e c6c5 16ff 7e9e ...w..^>0.....~.
0x00d0: 164e d4e8 89b3 952f 0ff1 b207 c973 a8e3 .N...../.....s..
0x00e0: f757 0c4a 1b8c a768 6601 b0bf 8f0a 7f84 .W.J...hf.......
0x00f0: 8218 6dc5 7a62 c4b4 cfae 4154 a51d 13cc ..m.zb....AT....
0x0100: 4520 7c1a 68 E.|.h

You can see the fingerprint 6D:C8:B9:0D:28:09:DC:D3:49:CC:79:78:85:FA:9A:6E if you look close enough. I'm not sure how I feel about UDP for this system. I'd have to look at the system more closely (maybe even the source code) to determine if UDP is "reliable" enough.
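
As a sanity check, the fingerprint bytes can be pulled out of the payload above. This is a quick sketch: the 16 bytes were transcribed by hand from the hex dump (starting at the ".m..." run), and the offset was found by inspection.

```python
# The 16 fingerprint bytes as they appear in the notary reply above,
# transcribed by hand from the hex dump.
raw = bytes.fromhex("6dc8b90d2809dcd349cc797885fa9a6e")

# Render them in the colon-separated form certificate tools use.
fingerprint = ":".join(f"{b:02X}" for b in raw)
print(fingerprint)  # 6D:C8:B9:0D:28:09:DC:D3:49:CC:79:78:85:FA:9A:6E
```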

This approach is the future of security. Training the user is never 100% effective, so why not try other methods? We already benefit from this approach if we use systems like Gmail. If enough other users mark an email they receive as spam, the rest of us are far less likely to ever see it. We have to move beyond thinking about point defense and turn towards collaborative defense based on monitoring, the wisdom of the crowd, and reputation.

It's important to remember the problem this approach is trying to solve. The classic case is detecting and avoiding a man-in-the-middle attack against SSL while browsing at an Internet cafe. This approach will not help if someone creates a Web site advertising "avoid foreclosure!" if you visit their SSL-enabled site and enter your credit card details.

Thoughts on 2008 SANS Forensics and IR Summit

Last week I attended and spoke at the 2008 SANS WhatWorks in Incident Response and Forensic Solutions Summit organized by Rob Lee. The last SANS event I attended was the 2006 SANS Log Management Summit. I found this IR and forensics event much more valuable, and I'll share a few key points from several of the talks.

  • Steve Shirley from the DoD Cyber Crime Center (DC3) said "Security dollars are not fun dollars." In other words, what CIO/CTO wants to spend money on security when he/she could buy iPhones?

  • Rob Lee noted that an Incident Response Team (IRT) needs the independence to take actions during an emergency. I've called this authority the ability to declare a "Network State of Emergency" (NSOE). When certain preconditions are met, the IRT can ask a business owner to declare a NSOE, just like a state governor can declare a state of emergency during a forest fire or other natural disaster. The IRT can then exercise predefined powers (like host containment, memory acquisition, live response, etc.), acting under the business owner's authority without coordinating in the moment with IT or other parties. Rob also mentioned that SleuthKit 3 would arrive soon; it was released yesterday. Rob shared the idea that sharing IR information resembles the full disclosure debate.

  • Mike Poor from the newly renamed InGuardians provided the following advice when asked "what logs should we collect?" He responded: "Collect the logs to tell the story you want to tell." I thought this was a great response. Some enterprises don't want to tell a story. Some only know the middle, by virtue of being in the midst of an intrusion. Those who collect data that validates a successful resolution of an intrusion can tell the end of the story. Those with mature visibility and detection initiatives can tell the beginning of the story as well. Furthermore, during lunch Mike suggested I read Ed Skoudis' WMIC articles to understand Windows Management Instrumentation Commands.

  • Aaron Walters from Volatile Systems and Matt Shannon from F-Response announced that F-Response 2.0.3 can remotely acquire memory on target systems. Aaron mentioned that intruders have dynamically injected malicious code into processes, like Web servers, to offer one-time-use URLs that don't exist on disk. Aaron also noted cases where a system reports it is patched, but because of a driver conflict the system is really running vulnerable software. Aaron provided a short demo of Voltage, a commercial enterprise product for investigations. Aaron used the MIT Simile Timeline application to outline time series data visually.

  • Harlan Carvey cited Nick Petroni while defending the collection of memory on targets: "collecting memory now lets us answer new questions later." He said he sometimes arrives at a client site where all victim systems have been reinstalled and no logs are kept, yet the customer wants to know what happened.

  • Ovie Carroll, now Director of the Cybercrime Lab at U.S. Department of Justice Computer Crime and Intellectual Property Section, said he has been briefing judges on the need to collect volatile data during investigations. He said DoJ has to be ready to answer a defense attorney who says "by pulling the plug on my client's computer, you destroyed exculpatory evidence!" Ovie emphasized the importance of developing an investigative mindset in analysts, not simply concentrating on "data extraction." After his presentations we discussed how future investigations may have very little to do with individual PCs, since most of the interesting evidence might reside on provider applications and networks.

  • Mike Cloppert ruffled a few feathers (justifiably so) by stating "the advanced persistent threat has rendered the classical IR model obsolete." In other words, persistent threats make it difficult to start over when there is no end. Mike emphasized the need for "indicator management" and that "intelligence drives response." I agree; without having investigative leads, identifying intruders can be very difficult.

  • Eoghan Casey and Chris Daywalt warned against premature containment and remediation during an incident. Do we want to disrupt an intruder or eject him?

I believe my keynote on day 2 went well. Rob stated he plans to hold a second conference in July near Washington, DC next year, so I look forward to attending it.

Hop-by-Hop Encryption: Needed?

Mike Fratto's article New Protocols Secure Layer 2 caught my attention:

[T]wo protocols -- IEEE 802.1AE-2006, Media Access Control Security, known as MACsec; and an update to 802.1X called 802.1X-REV -- will help secure Layer 2 traffic on the wire... 802.1AE ensures the integrity and privacy of data between peers at Layer 2. The enhancements in 802.1X-REV automate the authentication and key management requirements for 802.1AE.

802.1AE protects data in transit on a hop-by-hop basis... ensuring that the frames are not altered between Layer 2 devices such as switches, routers, and hosts.

I think the diagram explains 802.1AE well, and Mike notes the problems with this approach:

The default encryption algorithm, AES-GCM, will require a hardware upgrade in network infrastructure and host network interface cards...

[A]ny products that transparently process network traffic, like load balancers, traffic shapers, and network analyzers, will be blind to 802.1AE-protected traffic.

That's a significant problem in my opinion. Already I hear from network administrators who complain about IPSec because it renders the same tools and techniques useless.

The diagram shows that a network analyzer attached to a SPAN port avoids the blind spots introduced by 802.1AE. Another approach would be to introduce an 802.1AE-aware network tap. I do not believe anything like that exists yet, but I would like to see vendors offer this feature.

It appears 802.1AE might defeat the old school layer 2 hacking that continues to surface on modern networks. We'll see how it performs in real life.

The encryption mechanics deserve some attention:

802.1AE is only half the story, however, because it deals only with encryption and integrity -- both of which require keys. 802.1X-REV provides key management--creation, distribution, deletion, and renewal of encryption keys...

Many organizations' physical wiring has one physical LAN port per desk or cubicle, and 802.1X on a wired network was originally designed to be deployed on a one-host-per-port basis. However, it's now common for sites to have multiple hosts per port...

802.1X-REV addresses these issues by allowing multiple hosts to authenticate on a port.

But authenticating multiple hosts isn't enough. If a workstation is connected to a VoIP phone and was properly authenticated, someone could simply clone the workstation's MAC address and connect to the network through that VoIP phone. The bogus workstation would have network access until 802.1X required a reauthentication.

Pairing 802.1X-REV with a workstation NIC that supports 802.1AE enables multiple hosts to be authenticated simultaneously, and each host can have its own encrypted session. More important, bogus workstations can't simply plug in, because the impersonators won't have the encryption keys and therefore can't communicate with the switch.

That last point is significant. I have not personally configured port-based security in production, but I do wonder how people using port-based security handle situations like this. A related issue involves virtual machines sharing a NIC, connected to one physical switch port. Is it acceptable to manually configure the right number of MAC addresses for that port?
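
For illustration, a port-security stanza of the sort I have in mind might look like the following. This is a hypothetical Cisco IOS sketch for a port carrying a VM host that presents several MAC addresses; verify the commands and limits against your platform's documentation before relying on them.

```
! Hypothetical IOS config: allow a fixed number of MACs on one access port
interface FastEthernet0/1
 switchport mode access
 switchport port-security
 switchport port-security maximum 4
 switchport port-security violation restrict
```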

Friday, October 17, 2008

BGPMon.net Watches BGP Announcements for Free

Thanks to Jeremy Stretch's blog for pointing me to BGPMon.net, a free route monitoring service. This looks like a bare bones, free alternative to Renesys, my favorite commercial vendor in this space.

I created an account at BGPMon.net and decided to watch for route advertisements for Autonomous System (AS) 80, which corresponds to the network my company operates. The idea is that if anyone decides to advertise more specific routes for portions of that net block, and the data provided to BGPMon.net by the Réseaux IP Européens (RIPE) Routing Information Service (RIS) notices the advertisements, I will get an email.

I noticed that RIPE RIS provides dashboards for the prefixes of AS 80 with interesting data.

Thursday, October 16, 2008

DHS to Fund Open Source Next Generation IDS/IPS

I checked in with the #emerging-threats IRC channel a few minutes ago and saw a link to www.openinfosecfoundation.org:

October 16, 2008 (LAFAYETTE, Ind.) – The Open Information Security Foundation (OISF, www.openinfosecfoundation.org) is proud to announce its formation, made possible by a grant from the U.S. Department of Homeland Security (DHS). The OISF has been chartered and funded by DHS to build a next-generation intrusion detection and prevention engine. This project will consider every new and existing technology, concept and idea to build a completely open source licensed engine. Development will be funded by DHS, and the end product will be made available to any user or organization.

According to Matt Jonkman, this project will not be a fork of existing code. The idea is to take a new approach, not just replicate something like Snort.

While I am excited by this development, I don't think it's the project I would have wanted to fund right now. Open source users already have Snort, Bro, and other open source security products. I would rather see DHS support a free alternative to Snort signatures or even Tenable vulnerability checks. Another possibility would be funding tools to manage and integrate existing open source technologies. Still, seeing DHS award a grant in the open source security space gives me hope that other activities could be forthcoming.

I'll report on this as events develop, but don't expect to see any code in the wild for months. This is a tough problem and the OISF is starting "from the ground up."

Friday, October 10, 2008

Traffic Talk 2 Posted

The second installment of my Traffic Talk series, titled Using Wireshark and Tshark display filters for troubleshooting, has been posted. From the article:

Welcome to the second installment of Traffic Talk, a regular SearchNetworkingChannel.com series for network solution providers and consultants who troubleshoot business networks. In these articles we examine a variety of open source network analysis tools. In this edition we explore Wireshark and Tshark display filters. Display filters are one of the most powerful, and sometimes misunderstood, features of the amazing Wireshark open source protocol analyzer. After reading this tip you'll understand how to use display filters for security and network troubleshooting.

Thursday, October 09, 2008

Whither Air Force Cyber?

I was disappointed to read in Air Force senior leaders take up key decisions that Air Force Cyber Command is effectively dead:

Leadership also decided to establish a Numbered Air Force for cyber operations within Air Force Space Command and discussed how the Air Force will continue to develop capabilities in this new domain and train personnel to execute this new mission.

Apparently that unit will be 24th Air Force. Since the Numbered Air Force is the unit by which the service presents combat forces to Unified Combatant Commanders in wartime, it makes some sense for cyber to at least be organized in that manner.

I guess the Air Force believes it needs to get its house in order before trying to establish a new command. The Air Force is also suffering the adverse effects of the way it advertised itself: by trying to steal the spotlight, it did not appear "Joint" enough.

I am most concerned with the effect of not having Cyber Command upon the proposed cyber career field. From what my sources had told me, it was turning into a rebranded communications AFSC. The biggest problem the Air Force faces in cyber is retaining skilled personnel; it suffers because there is no career path for cyber warriors. If that problem is not fixed, none of the reorganization efforts will matter anyway.

Saturday, October 04, 2008

FCW on Comprehensive National Cybersecurity Initiative

Brian Robinson's FCW article Unlocking the national cybersecurity initiative caught my attention. I found these excerpts interesting, although my late 2007 article Feds Plan to Reduce, Then Monitor discussed the same issues.

The cybersecurity initiative launched by the Bush administration earlier this year remains largely cloaked in secrecy, but it’s already clear that it could have a major and far-reaching effect on government IT operations in the future.

Everything from mandated security measures and standard desktop configurations across government to a recast Federal Information Security Management Act (FISMA) could influence the way agencies buy and manage their IT.

Overseeing all of this will be a central office run by the Homeland Security Department, the first time that the government’s efforts in cybersecurity will run through a single office tasked with coordinating the work of separate federal cybersecurity organizations...

[First was the] creation of a National Cybersecurity Center (NCSC), which will serve as the focus for improving federal government network defenses. Rod Beckstrom [mentioned here], a well-known technology entrepreneur, was appointed the center’s director in March...

[Second,] Trusted Internet Connections (TIC)... this program is designed to reduce the number of external connections that agencies have to the Internet to just a few centralized gateways that can be better monitored for security. In January, more than 4,300 agency Internet connections existed, and those had been cut to some 2,700 by June. The target is less than 100 connections.

[Third,] Einstein is a system that automatically monitors data traffic on government networks for potential threats. As a program under the CNCI, Einstein will be upgraded ["Einstein II"] to include intrusion-detection technology. [I thought that was dead!]

Also, participation in Einstein for those agencies managing Internet access points will no longer be voluntary, as it was before. If Einstein finds a connection is not being properly managed, DHS will be able to shut it down.

[Finally, the] Federal Desktop Core Configuration (FDCC)... initiated by OMB last year, mandates that agencies adopt a common [desktop]...

As part of the CNCI, NIST proposed in February to extend the FDCC to other operating systems, applications and network devices beyond the existing support for Windows XP and Vista.
(emphasis and comments added)

I loved this part:

The expansion of Einstein, for example, is a major change because it mandates the use of network security monitoring tools that are controlled by an entity outside the agencies.

“Before, they would do this [monitoring] themselves and not necessarily be forthcoming if anything happened,” he said. “Now it’s out of their hands.”

Now, my sources tell me that Einstein is basically garbage, and that the data from it (purely flows) is fairly useless unless it is used to identify traffic to or from known bad IP addresses. That's still worthwhile in my opinion, but it demonstrates why real NSM needs all four forms of data (alert, statistical, session, and full content) to have a chance at winning. What is more significant than simply deploying Einstein capabilities would be getting "hardware footholds" at each gateway.

If each TIC gateway is only forwarding flow data via router NetFlow exports, that's nice but insufficient. If each TIC gateway is tapped and connected to a stand-alone, preferably open source platform, then the game changes. Once a centralized monitoring agency can deploy its own tools and utilize its own tactics and techniques on a platform it controls, you will see real improvements in network-based enterprise visibility and situational awareness. That's what you get when you can implement what I've called self-reliant NSM, and the final story excerpt alluded to that idea as well.

Of course, it is important to implement these programs as openly as possible, with plenty of oversight and defined goals and governance. That is my biggest problem with the secrecy around CNCI. Overclassification breeds paranoia and ultimately reduces security by undermining the faith of the citizenry in our government.

Insider Threat Prediction Materializing

As we approach the end of the year, I'm looking to see if my Predictions for 2008 are materializing. My third prediction was:

Expect increased awareness of external threats and less emphasis on insider threats.

Accordingly, I was happy to see the story Targeted Attacks, DNS Issues Hit Home in New CSI Report contain the following subtitle:

Insider abuse shows marked drop-off in 13th annual survey by Computer Security Institute

Ho ho, what does that mean?

While some threats are on the increase, CSI also found that others are on the downturn. Insider abuse dropped from 59 percent in 2007 to 44 percent in 2008, the largest shift recorded in this year's survey.

"I think there was a lot of hype around this last year, and now it's coming back to reality," Richardson says. Insider abuse numbers hovered at around 42 percent to 48 percent in 2005 and 2006 and then spiked last year, he noted.
(emphasis added)

In my 2006 post 2006 CSI-FBI Study Confirms Insider Threat, I noted that the annual CSI study supported my position on the prevalence of the insider threat relative to other threats. Now, of course you can dispute the methodology of the CSI study, but even if it's only directionally correct it still supports my prediction.

After all the stories about attacks from a certain large far eastern country (documented in my "v China" posts), widespread reporting on named botnets (remember when malware was named, not botnets?), and very public stories on attacking core infrastructure (DNS, BGP, now TCP), it's nice to see CSI-FBI respondents at least realize they're far more likely to be victimized by one of these forces largely outside their control.

I've discussed Incorrect Insider Threat Perceptions before, specifically that the insider threat is the one threat you can really control. Unless you're a police or military organization, you can't do anything about external threats. Anyone with firing power can do something about internal threats.

Attacks Upon Integrity

Earlier this year I wrote First They Came for Bandwidth, where I described the motivation behind different sorts of attacks in an historical context:

First they came for bandwidth... These are attacks on availability, executed via denial of service attacks starting in the mid 1990's and monetized later via extortion. Next they came for secrets... These are attacks on confidentiality, executed via disclosure of sensitive data starting in the late 1990's and monetized as personally identifiable information and accounts for sale in the underground. Now they are coming to make a difference... These are attacks on integrity, executed by degrading information starting at the beginning of this decade.

When I wrote those words, the sorts of attacks on integrity I imagined involved changes to legitimate data. As is often the case with predictions, the reality has taken a similar but not exact direction. Attacks upon integrity are currently appearing as the introduction of outright falsehoods, whether by mistake, mischief, or malice. Examples include repostings about UAL bankruptcy; fake posts about Steve Jobs having a heart attack; a fake IAC press release; and so on.

The good news about these incidents is that they become easy to spot. As is often the case with the adversary, low-end means to achieve a goal are used first, followed by increasing sophistication as the targets become more vigilant and experienced. Think about the evolution of phishing as a popular example, but others abound. Currently fake news is being injected into the Internet as a complete package. I would expect the next round to involve subtle modifications to legitimate content. Once some sort of trust technology is applied (digital signatures and the like), then the adversary will have to find ways to subvert those mechanisms.

The winners will be those who best protect their brand by ensuring the integrity of information from them and about them.

Wednesday, October 01, 2008

DoS Me Like It's 1996

This one's in my wheelhouse, but details are sketchy. So far the best simple article is New attacks reveal fundamental problems with TCP by Dennis Fisher. Nick Weaver's Slashdot comment provides the best technical explanation of one of the attack vectors, I think:

The observation: You can use a SYN-cookie like trick on the client side as well for an attacker:

You send SYNs where the initial seq # = H(sip, dip, sport, dport).

Now when you get a SYN/ACK back, you can send the ACK to complete the handshake. You can use the ACK field back from the server to know where you are in what data to send (just subtract the value from the initial sequence # to know what the next piece of data to send is), and you can know where you are in the received data (if necessary) by storing just the server's initial sequence #.

As a result, you can now interact with the server without having to maintain ANY TCP session state, or just a single word (the server's initial seq #), allowing the attacker to use vastly fewer resources to tie up server resources.

On one hand, this is a cool trick, and potentially useful for an attacker: if you have only a couple of machines and really want to tie up server resources, you can use this quite quickly.

But OTOH, attackers already have so many zombie resources that this really doesn't necessarily buy the attacker all that much: If you have 10K machines banging on a server, the 10K machines have a good 2000x more state than the servers. So who cares about stateholding requirements on the zombie side? Thus I think it's only really relevant if you wanted to DOS google, akamai, or some similar very-high-resource infrastructure.

And as the attacker can't SPOOF packets with this (it needs to see the SYN/ACK), the zombies can be filtered if the DOS is detected and the attacker's identified as well.
(emphasis added)
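Weaver's client-side SYN-cookie trick can be sketched in a few lines. This is an illustrative sketch of the bookkeeping only, not the actual "sockstress" tool or any real packet handling: the hash construction, secret, and function names are my own assumptions.

```python
import hashlib
import struct

def initial_seq(sip: str, dip: str, sport: int, dport: int,
                secret: bytes = b"example-secret") -> int:
    """The client's ISN as H(sip, dip, sport, dport): recomputable
    from the connection 4-tuple at any time, so nothing is stored."""
    data = f"{sip}|{dip}|{sport}|{dport}".encode()
    digest = hashlib.sha256(secret + data).digest()
    return struct.unpack(">I", digest[:4])[0]

def next_payload_offset(ack_from_server: int, sip: str, dip: str,
                        sport: int, dport: int) -> int:
    """How many payload bytes the server has ACKed, held in zero state.

    Recompute the ISN, then subtract it (plus 1 for the SYN itself)
    from the server's ACK field, mod 2**32 for sequence wraparound."""
    isn = initial_seq(sip, dip, sport, dport)
    return (ack_from_server - isn - 1) % 2**32

isn = initial_seq("10.0.0.1", "192.0.2.1", 40000, 80)
# The server's SYN/ACK acknowledges isn + 1; no payload sent yet:
assert next_payload_offset(isn + 1, "10.0.0.1", "192.0.2.1", 40000, 80) == 0
# After the server ACKs 100 payload bytes:
assert next_payload_offset(isn + 101, "10.0.0.1", "192.0.2.1", 40000, 80) == 100
```

This is the asymmetry Weaver describes: the attacker derives his entire side of the conversation from a hash of the 4-tuple, while the server must hold a full TCB per connection.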

So that's pretty clever... clever like the original 1996 problem of SYN flooding, which exploited resource limitations on the server. See Mike Schiffman's original Phrack article for details, and remember that 12 years ago we worried about k1dd13z with dial-up modems DoSing NT 4 servers with 7 packets.

With regard to the latest attacks, we should give some credit to BindView RAZOR's Naptha research from 2000, as noted by Jose Nazario. A so-called Naptha attack is any mechanism that forces the victim's TCP/IP stack to consume more resources than the intruder's, eventually forcing the target to submit. Robert Graham's post implied that one attack vector acted like a reverse LaBrea implementation. I would like to incorporate this "sockstress" tool into my next TCP/IP Weapons School class.

I guess we can stay tuned to Robert E. Lee's blog. (Funny that a researcher in Sweden is named after the commander of the Confederate Army.)

Anyone who thinks this is, yet again, the end of the world should keep vulnerabilities in perspective. Also, this is about the best kind of vulnerability we could ask for: if you're vulnerable and you're attacked, you'll notice. Even the most rudimentary IT shop has availability monitoring in place. To quote Han Solo:

Bring 'em on! I prefer a straight fight to all this sneaking around.

I'd much rather deal with availability attacks than confidentiality or integrity attacks.