Monday, November 30, 2009

Real Security Is Threat-Centric

Apparently there's been a wave of house burglaries in a nearby town during the last month. As you might expect, local residents responded by replacing windows with steel panels, front doors with vault entrances, floors with pressure-sensitive plates, and whatever else "security vendors" recommended. Town policymakers created new laws to mandate locking doors, enabling alarm systems, and creating scorecards for compliance. Home builders decided they needed to adopt "secure building" practices so all these retrofitted measures were "built into" future homes.

Oh wait, this is the real world! All those vulnerability-centric measures I just described are what too many "security professionals" would recommend. Instead, police identified the criminals and arrested them. From Teen burglary ring in Manassas identified:

Two suspects questioned Friday gave information about the others, police said.

Now this crew is facing prosecution. That's a good example of what we need to do in the digital world: enable and perform threat-centric security. We won't get there until we have better attribution, and interestingly enough attribution is the word I hear most often from people pondering improvements in network security.

Friday, November 27, 2009

Celebrate FreeBSD 8.0 Release with Donation

With the announcement of FreeBSD 8.0, it seems like a good time to donate to the FreeBSD Foundation, a US 501(c)3 charity. The Foundation funds and manages projects, sponsors FreeBSD events and developer summits, and provides travel grants to FreeBSD developers. It also provides and helps maintain computers and equipment that support FreeBSD development and improvements.

I just donated $100. Will anyone match me? Thank you!

Historical Video on AFCERT circa 2000

I just uploaded a video that some readers might find entertaining. This video shows the United States Air Force Computer Emergency Response Team (AFCERT) in 2000. Kelly AFB, Security Hill, and Air Intelligence Agency appear. The colonel who leads the camera crew into room 215 is James Massaro, then commander of the Air Force Information Warfare Center. The old Web-based interface to the Automated Security Incident Measurement (ASIM) sensor is shown, along with a demo of the "TCP reset" capability to terminate TCP-based sessions.
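The mechanism behind any TCP reset injection tool, ASIM's included, is simple: a sensor watching a live session forges RST segments to each endpoint, using sequence numbers learned from the traffic so the endpoints accept the resets and tear down the connection. A minimal sketch of the sequence-number arithmetic follows; the function and variable names are mine, not ASIM's.

```python
def forge_rst_params(seq, ack, payload_len):
    """Given a sniffed TCP segment from host A to host B (its SEQ,
    ACK, and payload length), compute the sequence numbers a
    session-killing tool would place in its two forged RST packets.

    Returns (rst_to_b_seq, rst_to_a_seq): the RST sent to B spoofs A
    and must carry the sequence number B expects next (SEQ plus the
    payload just seen); the RST sent to A spoofs B and reuses the
    ACK value A itself advertised.
    """
    rst_to_b_seq = (seq + payload_len) % 2**32  # what B expects next
    rst_to_a_seq = ack % 2**32                  # what A expects next
    return rst_to_b_seq, rst_to_a_seq
```

If either forged sequence number falls outside the receiver's window, the reset is ignored, which is why these tools work from a sniffing position rather than guessing blindly.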

We have a classic quote about a "digital Pearl Harbor" from Winn Schwartau, "the nation's top information security analyst." Hilarious, although Winn nails the attribution and national leadership problems; note also the references to terrorists in this pre-9/11 video. "Stop the technology madness!" Incidentally, if the programs shown were "highly classified," they wouldn't be in this video!

I was traveling for the AFCERT when this video was shot, so luckily I am not seen anywhere...

Wednesday, November 25, 2009

Tort Law on Negligence

If any lawyers want to contribute to this, please do. In my post Shodan: Another Step Towards Intrusion as a Service, some comments claim "negligence" as a reason why intruders aren't really to blame. I thought I would share this case from Tort Law, page 63:

In Stansbie v Troman [1948] 2 All ER 48 the claimant, a householder, employed the defendant, a painter. The claimant had to be absent from his house for a while and he left the defendant working there alone. Later, the defendant went out for two hours leaving the front door unlocked. He had been warned by the claimant to lock the door whenever he left the house.

While the house was empty someone entered it by the unlocked front door and stole some of the claimant's possessions. The defendant was held liable for the claimant's loss for, although the criminal action of a third party was involved, the possibility of theft from an unlocked house was one which should have occurred to the defendant.

So, the painter was liable. However, that doesn't let the thief off the hook. If the police find the thief, they will still arrest, prosecute, and incarcerate him. The painter won't serve part of the thief's jail time, even though the painter was held liable in this case. So, even in the best case scenario for those claiming "negligence" for vulnerable systems, it doesn't diminish the intruder's role in the crime.

Review of Martin Libicki's Cyberdeterrence and Cyberwar

I just posted my three-star review of Martin Libicki's Cyberdeterrence and Cyberwar. I've reproduced the review in its entirety here because I believe it is important to spread the word to any policy maker who might read this blog or be directed here. I've emphasized a few points for readability.

As background, I am a former Air Force captain who led the intrusion detection operation in the AFCERT before applying those same skills to private industry, the government, and other sectors. I am currently responsible for detection and response at a Fortune 5 company and I train others with hands-on labs as a Black Hat instructor. I also earned a master's degree in public policy from Harvard after graduating from the Air Force Academy.

Martin Libicki's Cyberdeterrence and Cyberwar (CAC) is a weighty discussion of the policy considerations of digital defense and attack. He is clearly conversant in non-cyber national security history and policy, and that knowledge is likely to benefit readers unfamiliar with Cold War era concepts. Unfortunately, Libicki's lack of operational security experience undermines his argument and conclusions. The danger for Air Force leaders and those interested in policy is that they will not recognize that, in many cases, Libicki does not understand what he is discussing. I will apply lessons from direct experience with digital security to argue that Libicki's framing of the "cyberdeterrence" problem is misguided at best and dangerous at worst.

Libicki's argument suffers five key flaws. First, in the Summary Libicki states "cyberattacks are possible only because systems have flaws" (p xiii). He continues with "there is, in the end, no forced entry in cyberspace... It is only a modest exaggeration to say that organizations are vulnerable to cyberattack only to the extent they want to be. In no other domain of warfare can such a statement be made" (p. xiv). I suppose, then, that there is "no forced entry" when a soldier destroys a door with a rocket, because the owners of the building are vulnerable "to the extent they want to be"? Are aircraft carriers similarly vulnerable to hypersonic cruise missiles because "they want to be"? How about the human body vs bullets?

Second, Libicki's fatal misunderstanding of digital vulnerability is compounded by his ignorance of the role of vendors and service providers in the security equation. Asset owners can do everything in their power to defend their resources, but if an application or implementation has a flaw, it's likely that only the vendor or service provider can fix it. Libicki frequently refers to sys admins as if they have mystical powers to completely understand and protect their environments. In reality, sys admins are generally concerned with availability alone, since they are often outsourced to the lowest bidder and contract-focused, or too understaffed to do anything more than keep the lights on.

Third, this "blame the victim" mentality is compounded by the completely misguided notions that defense is easy and recovery from intrusion is simple. On p 144 he says "much of what militaries can do to minimize damage from a cyberattack can be done in days or weeks and with few resources." On p 134 he says that, following cyberattack, "systems can be set straight painlessly." Libicki has clearly never worked in a security or IT shop at any level. He also doesn't appreciate how much the military relies on civilian infrastructure for everything from logistics to basic needs like electricity. For example, on p 160 he says "Militaries generally do not have customers; thus, their systems have little need to be connected to the public to accomplish core functions (even if external connections are important in ways not always appreciated)." That is plainly wrong when one realizes that "the public" includes contractors who design, build, and run key military capabilities.

Fourth, he makes a false distinction between "core" and "peripheral" systems, with the former controlled by users and the latter by sys admins. He says "it is hard to compromise the core in the same precise way twice, but the periphery is always at risk" (p 20). Libicki is apparently unaware that one core Internet resource, BGP, is basically at constant risk of complete disruption. Other core resources, DNS and SSL, have been incredibly abused during the last few years. All of these are known problems that are repeatedly exploited, despite knowledge of their weaknesses. Furthermore, Libicki doesn't realize that so-called critical systems are often more fragile than user systems. In the real world, critical systems often lack change management windows, or are heavily regulated, or are simply old and not well maintained. What's easier to reconfigure, patch, or replace, a "core" system that absolutely cannot be disrupted "for business needs," or a "peripheral" system that belongs to a desk worker?

Fifth, in addition to not understanding defense, Libicki doesn't understand offense. He has no idea how intruders think or the skills they bring to the arena. On pp 35-6 he says "If sufficient expenditures are made and pains are taken to secure critical networks (e.g., making it impossible to alter operating parameters of electric distribution networks from the outside), not even the most clever hacker could break into such a system. Such a development is not impossible." Yes, it is impossible. Thirty years of computer security history have shown it to be impossible. One reason why he doesn't understand intruders appears on p 47 where he says "private hackers are more likely to use techniques that have been circulating throughout the hacker community. While it is not impossible that they have managed to generate a novel exploit to take advantage of a hitherto unknown vulnerability, they are unlikely to have more than one." This baffling statement shows Libicki doesn't appreciate the skill set of the underground.

Libicki concludes on pp xiv and xix-xx "Operational cyberwar has an important niche role, but only that... The United States and, by extension, the U.S. Air Force, should not make strategic cyberwar a priority investment area... cyberdefense remains the Air Force's most important activity within cyberspace." He also claims it is not possible to "disarm" cyberwarriors, e.g., on p 119 "one objective that cyberwar cannot have is to disarm, much less destroy, the enemy. In the absence of physical combat, cyberwar cannot lead to the occupation of territory." This focus on defense and avoiding offense is dangerous. It may not be possible to disable a country's potential for cyberwar, but an adversary can certainly target, disrupt, and even destroy cyberwarriors. Elite cyberwarriors could be likened to nuclear scientists in this respect; take out the scientists and the whole program suffers.

Furthermore, by avoiding offense, Libicki makes a critical mistake: if cyberwar has only a "niche role," how is a state supposed to protect itself from cyberwar? In Libicki's world, defense is cheap and easy. In the real world, the best defense is 1) informed by offense, and 2) coordinated with offensive actions to target and disrupt adversary offensive activity. Libicki also focuses far too much on cyberwar in isolation, while real-world cyberwar has historically accompanied kinetic actions.

Of course, like any good consultant, Libicki leaves himself an out on p 177 by stating "cyberweapons come relatively cheap. Because a devastating cyberattack may facilitate or amplify physical operations and because an operational cyberwar capability is relatively inexpensive (especially if the Air Force can leverage investments in CNE), an offensive cyberwar capability is worth developing." The danger of this misguided tract is that policy makers will be swayed by Libicki's misinformed assumptions, arguments, and conclusions, and believe that defense alone is a sufficient focus for 21st century digital security. In reality, a kinetically weaker opponent can leverage a cyber attack to weaken a kinetically superior yet net-centric adversary. History shows, in all theatres, that defense does not win wars, and that the best defense is a good offense.

Shodan: Another Step Towards Intrusion as a Service

If you haven't seen Shodan yet, you're probably not using Twitter as a means to stay current on security issues. Shoot, I don't even follow anyone and I heard about it.

Basically a programmer named John Matherly scanned a huge swath of the Internet for certain TCP ports (80, 21, 23 at least) and published the results in a database with a nice Web front-end. This means you can put your mind in Google hacking mode, find vulnerable platforms, maybe add in some default passwords (or not), and take over someone's system. We're several steps along the Intrusion as a Service (IaaS) path already!
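Replicating the core idea at small scale takes little more than a socket and some patience. Here is a hedged sketch that assumes nothing about Matherly's actual crawler: connect, collect whatever banner the service volunteers, and index the interesting parts.

```python
import socket

def grab_banner(host, port, timeout=3.0):
    """Connect to host:port and return whatever banner the service
    volunteers. FTP, telnet, and SSH daemons typically announce
    themselves on connect; HTTP servers answer a trivial request."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.settimeout(timeout)
        if port == 80:  # HTTP needs a request before it replies
            s.sendall(b"HEAD / HTTP/1.0\r\nHost: %s\r\n\r\n" % host.encode())
        try:
            return s.recv(1024).decode("latin-1", "replace")
        except socket.timeout:
            return ""

def server_header(banner):
    """Pull the Server: header out of an HTTP response banner --
    the sort of string a Shodan-style index makes searchable."""
    for line in banner.splitlines():
        if line.lower().startswith("server:"):
            return line.split(":", 1)[1].strip()
    return None
```

Run `grab_banner` across enough of the address space, store the results in a database, and you have the raw material for exactly the kind of "find vulnerable platforms" queries described above.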

Incidentally, this idea is not new. I know at least one company that sold a service like this in 2004. The difference is that Shodan is free and open to the public.

Shodan is a dream for those wanting to spend Thanksgiving looking for vulnerable boxes, and a nightmare for their owners. I would not be surprised if it disappears in the next few days after receiving a call or two from TLAs or LEAs or .mil's. I predict a mad scramble by intruders during the next 24-48 hours as they use Shodan to locate, own, and secure boxes before others do.

Matt Franz asked good questions about this site in his post Where's the Controversy about Shodan? Personally I think Shodan will disappear. Many will argue that publishing information about systems is not a problem. We hear similar arguments from people defending sites that publish torrents. Personally I don't have a problem with Shodan or torrent sites. From a personal responsibility standpoint, though, it would have been nice to delay notification of Shodan until after Thanksgiving.

Tuesday, November 24, 2009

I'm Surprised That Your Kung Fu Is So Expert

This story is so awesome.

Hacks of Chinese Temple Were Online Kung Fu, Abbot Says

A hacker who posted a fake message on the Web site of China's famous Shaolin Temple repenting for its commercial activities was just making a mean joke, the temple's abbot was cited as saying by Chinese state media Monday.

That and previous attacks on the Web site were spoofs making fun of the temple, Buddhism and the abbot himself, Shi Yongxin was cited as telling the People's Daily.

"We all know Shaolin Temple has kung fu," Shi was quoted as saying. "Now there is kung fu on the Internet too, we were hacked three times in a row."

Why am I not surprised that a Shaolin monk has a better grasp of the fundamentals of computer security than some people in IT?

Bonus: Props to anyone who recognizes the title of this post.

Monday, November 23, 2009

Control "Monitoring" is Not Threat Monitoring

As I write this post I'm reminded of General Hayden's advice:

"Cyber" is difficult to understand, so be charitable with those who don't understand it, as well as those who claim "expertise."

It's important to remember that plenty of people are trying to act in a positive manner to defend important assets, so in that spirit I offer the following commentary.

Thanks to John Bambanek's SANS post I read NIST Drafts Cybersecurity Guidance by InformationWeek's J. Nicholas Hoover.

The article discusses the latest draft of SP 800-37 Rev. 1:
DRAFT Guide for Applying the Risk Management Framework to Federal Information Systems: A Security Life Cycle Approach

I suspected this to be problematic given NIST's historical bias towards "controls," which I've criticized in Controls Are Not the Solution to Our Problem and Consensus Audit Guidelines Are Still Controls. The subtext for the article was:

The National Institute for Standards and Technology is urging the government to continuously monitor its own cybersecurity efforts.

As soon as I read that, I knew that NIST's definition of "monitor" and the article's definition of "monitor" did not mean the real sort of monitoring, threat monitoring, that would make a difference against modern adversaries.

The article continues:

Special Publication 800-37 fleshes out six steps federal agencies should take to tackle cybersecurity: categorization, selection of controls, implementation, assessment, authorization, and continuous monitoring...

Finally, and perhaps most significantly, the document advises federal agencies to put continuous monitoring in place. Software, firmware, hardware, operations, and threats change constantly. Within that flux, security needs to be managed in a structured way, Ross says.

"We need to recognize that we work in a very dynamic operational environment," Ross says. "That allows us to have an ongoing and continuing acceptance and understanding of risk, and that ongoing determination may change our thinking on whether current controls are sufficient."

The continuous risk management step might include use of automated configuration scanning tools, vulnerability scanning, and intrusion detection systems, as well as putting in place processes to monitor and update security guidance and assessments of system security requirements.

Note that the preceding text mentions "intrusion detection systems," but the rest of the text has nothing to do with real monitoring, i.e., detecting and responding to intrusions. I'm not just talking about network-centric approaches, by the way -- infrastructure, host, log, and other sources are all real monitoring, but this is not what NIST means by "monitoring."

To understand NIST's view of monitoring, try reading the new draft. I'll insert my comments.




A critical aspect of managing risk from information systems involves the continuous monitoring of the security controls employed within or inherited by the system.65

[65 A continuous monitoring program within an organization involves a different set of activities than Security Incident Monitoring or Security Event Monitoring programs.]

So, it sounds like activities that involve actually watching systems are not within scope for "continuous monitoring."

Conducting a thorough point-in-time assessment of the deployed security controls is a necessary but not sufficient condition to demonstrate security due diligence. An effective organizational information security program also includes a rigorous continuous monitoring program integrated into the system development life cycle. The objective of the continuous monitoring program is to determine if the set of deployed security controls continue to be effective over time in light of the inevitable changes that occur.

That sounds ok so far. I like the idea of evaluations to determine if controls are effective over time. In the next section below we get to the heart of the problem, and why I wrote this post.

An effective organization-wide continuous monitoring program includes:

• Configuration management and control processes for organizational information systems;

• Security impact analyses on actual or proposed changes to organizational information systems and environments of operation;67

• Assessment of selected security controls (including system-specific, hybrid, and common controls) based on the organization-defined continuous monitoring strategy;68

• Security status reporting to appropriate organizational officials;69 and

• Active involvement by authorizing officials in the ongoing management of information system-related security risks.

Ok, where is threat monitoring? I see configuration management, "control processes," reporting status to "officials," "active involvement by authorizing officials," and so on.

The next section tells me what NIST really considers to be "monitoring":

Priority for security control monitoring is given to the controls that have the greatest volatility and the controls that have been identified in the organization’s plan of action and milestones...

[S]ecurity policies and procedures in a particular organization may not be likely to change from one year to the next...

Security controls identified in the plan of action and milestones are also a priority in the continuous monitoring process, due to the fact that these controls have been deemed to be ineffective to some degree.

Organizations also consider specific threat information including known attack vectors (i.e., specific vulnerabilities exploited by threat sources) when selecting the set of security controls to monitor and the frequency of such monitoring...

Have you broken the code yet? Security control monitoring is a compliance activity. Granted, this is an improvement from the typical certification and accreditation debacle, where "security" is assessed via paperwork exercises every three years. Instead, .gov compliance teams will perform so-called "continuous monitoring," meaning more regular checks to see if systems are in compliance.

Is this really an improvement?

I don't think so. NIST is missing the point. Their approach advocates Control-compliant security, not field-assessed security. Their "scoreboard" is the result of a compliance audit, not the number of systems under adversary control or the amount of data exfiltrated or degraded by the adversary.

I don't care how well your defensive "controls" are informed by offense. If you don't have a Computer Incident Response Team performing continuous threat monitoring for detection and response, you don't know if your controls are working. The NIST document has a few hints about the right approach, at best, but the majority of the so-called "monitoring" guidance is another compliance activity.

Saturday, November 21, 2009

Audio of Bejtlich Presentation on Network Security Monitoring

One of the presentations I delivered at the Information Security Summit last month discussed Network Security Monitoring. The Security Justice guys recorded audio of the presentation and posted it here as Network Security Monitoring and Incident Response. The audio file is InfoSec2009_RichardBejtlich.mp3.

Traffic Talk 8 Posted

I just noticed that my 8th edition of Traffic Talk, titled How to use user-agent strings as a network monitoring tool, was posted this week. It's a simple concept that plenty of NSM practitioners implement, and I highly recommend it.
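The concept from that article can be sketched in a few lines: tally the User-Agent strings appearing in proxy or web server logs, then surface the rare ones for an analyst to review. The function name and threshold below are my own illustration, not from the article.

```python
from collections import Counter

def rare_user_agents(user_agents, threshold=2):
    """Tally User-Agent strings harvested from proxy or web server
    logs and return those seen fewer than `threshold` times. On a
    managed network most hosts present a handful of sanctioned
    browser UAs; a one-off string is often an unauthorized tool or
    malware speaking through a hand-rolled HTTP client."""
    counts = Counter(user_agents)
    return sorted(ua for ua, n in counts.items() if n < threshold)
```

The interesting analytical work is in the review step: deciding whether a rare UA is a niche utility, a typo'd client, or something that warrants an incident ticket.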

Monday, November 16, 2009

Extending Security Event Correlation

Last year at this time I wrote a series of posts on security event correlation. I offered the following definition in the final post:

Security event correlation is the process of applying criteria to data inputs, generally of a conditional ("if-then") nature, in order to generate actionable data outputs.

Since then what I have found is that products and people still claim this as a goal, but for the most part achieving it remains elusive.

Please also see that last post for what SEC is not, i.e., SEC is not simply collection (of data sources), normalization (of data sources), prioritization (of events), suppression (via thresholding), accumulation (via simple incrementing counters), centralization (of policies), summarization (via reports), administration (of software), or delegation (of tasks).

So is SEC anything else? Based on some operational uses I have seen, I think I can safely introduce an extension to "true" SEC: applying information from one or more data sources to develop context for another data source. What does that mean?

One example I saw recently (and this is not particularly new, but it's definitely useful), involves NetWitness 9.0. Their new NetWitness Identity function adds user names collected from Active Directory to the metadata available while investigating network traffic. Analysts can choose to review sessions based on user names rather than just using source IP addresses.

This is certainly not an "if-then" proposition, as sold by SIM vendors, but the value of this approach is clear. I hope my use of the word "context" doesn't attach too much historical security baggage to this conversation. I'm not talking about making IDS alerts more useful by knowing the qualities of a target of a server-side attack, for example. Rather, to stay with the server-side attack scenario, imagine replacing the source IP with the country "Bulgaria" and the target IP with "Web server hosting Application X" or similar. It's a different way for an analyst to think about an investigation.
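A sketch of what such context development might look like under the hood follows; the function and field names are hypothetical, and NetWitness's actual implementation is surely different.

```python
def enrich(event, users_by_ip, geo_by_ip):
    """Annotate a network event with context drawn from other data
    sources: a user name harvested from directory logins and a
    country from IP geolocation. No 'if-then' rule fires here --
    the analyst simply sees 'Bulgaria -> Web server hosting
    Application X, user jdoe' instead of two bare IP addresses."""
    out = dict(event)
    out["src_country"] = geo_by_ip.get(event["src_ip"], "unknown")
    out["user"] = users_by_ip.get(event["src_ip"], "unknown")
    return out
```

The design point is that the lookup tables come from *other* data sources (directory logs, geolocation feeds), which is exactly the extension to SEC proposed above.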

Friday, November 13, 2009

Embedded Hardware and Software Pen Tester Positions in GE Smart Grid

I was asked to help locate two candidates for positions in the GE Smart Grid initiative.

We're looking for an Embedded Hardware Penetration Tester (1080237) and an Embedded Firmware Penetration Tester (1080236).

If interested, search for the indicated job numbers, or go to the job site to get to the search function a little faster.

I don't have any other information on these jobs, so please work through the job site. Thank you.

Update Mon 16 Nov: As noted by Charlene in the comments below, the jobs are no longer posted. If I hear they are back I will post an update here.

Update Wed 18 Nov: I was just told the jobs are either open or will be soon. Thank you.

Tuesday, November 10, 2009

Reaction to 60 Minutes Story

I found the new 60 Minutes update on information warfare to be interesting. I fear that the debate over whether or not "hackers" disabled Brazil's electrical grid will overshadow the real issue presented in the story: advanced persistent threats are here, have been here, and will continue to be here.

Some critics claim APT must be a bogey man invented by agencies arguing over how to gain greater control over the citizenry. Let's accept agencies are arguing over turf. That doesn't mean the threat is not real. If you refuse to accept the threat exists, you're simply ignorant of the facts. That might not be your fault, given policymakers' relative unwillingness to speak out.

If you want to get more facts on this issue, I recommend the Northrop Grumman report I mentioned last month.

Saturday, November 07, 2009

Notes from Talk by Michael Hayden

I had the distinct privilege to attend a keynote by retired Air Force General Michael Hayden, most recently CIA director and previously NSA director. NetWitness brought Gen Hayden to its user conference this week, so I was really pleased to attend that event. I worked for Gen Hayden when he was commander of Air Intelligence Agency in the 1990s; I served in the information warfare planning division at that time.

Gen Hayden offered the audience four main points in his talk.

  1. "Cyber" is difficult to understand, so be charitable with those who don't understand it, as well as those who claim "expertise." Cyber is a domain like other warfighting domains (land, sea, air, space), but it also possesses unique characteristics. Cyber is man-made, and operators can alter its geography -- even potentially to destroy it. Also, cyber conflicts are more likely to affect other domains, whereas it is theoretically possible to fight an "all-air" battle, or an "all-sea" battle.

  2. The rate of change for technology far exceeds the rate of change for policy. Operator activities defy our ability to characterize them. "Computer network defense (CND), exploitation (CNE), and attack (CNA) are operationally indistinguishable."

    Gen Hayden compared the rush to develop and deploy technology to consumers and organizations to the land rushes of the late 1890s. When "ease of use," "security," and "privacy" are weighed against each other, ease of use has traditionally dominated.

    When making policy, what should apply? Title 10 (military), Title 18 (criminal), Title 50 (intelligence), or international law?

    Gen Hayden asked what private organizations in the US maintain their own ballistic missile defense systems. None of course -- meaning, why do we expect the private sector to defend itself against cyber threats, on a "point" basis?

  3. Cyber is difficult to discuss. No one wants to talk about it, especially at the national level. The agency with the most capability to defend the nation suffers because it is both secret and powerful, two characteristics it needs to be effective. The public and policymakers (rightfully) distrust secret and powerful organizations.

  4. Think like intelligence officers. I should have expected this, coming from the most distinguished intelligence officer of our age. Gen Hayden says the first question he asks when visiting private companies to consult on cyber issues is: who is your intelligence officer?

    Gen Hayden offered advice for those with an intelligence mindset who provide advice to policymakers. He said intel officers are traditional inductive thinkers, starting with indicators and developing facts, from which they derive general theories. Intel officers are often pessimistic and realistic because they deal with operational realities, "as the world is."

    Policymakers, on the other hand, are often deductive thinkers, starting with a "vision," with facts at the other end of their thinking. "No one elects a politician for their command of the facts. We elect politicians who have a vision of where we should be, not where we are." Policymakers are often optimistic and idealistic, looking at their end goal, "as the world should be."

    When these two world views meet, say when the intel officer briefs the policymaker, the result can be jarring. It's up to the intel officer to figure out how to present findings in a way that the policymaker can relate to the facts.

After the prepared remarks I asked Gen Hayden what he thought of threat-centric defenses. He said it is not outside the realm of possibility to support giving private organizations the right to more aggressively defend themselves. Private forces already perform guard duties; police forces don't carry the whole burden for preventing crime, for example.

Gen Hayden also discussed the developments which led from military use of air power to a separate Air Force in 1947. He said "no one in cyber has sunk the Ostfriesland yet," which was a great analogy. He also says there are no intellectual equivalents to Herman Kahn or Paul Nitze in the cyber thought landscape.

Bejtlich on Security Justice Podcast

After I spoke at the Information Security Summit in Ohio last month, the guys at the Security Justice podcast interviewed me and Tyler Hudak.

You can listen to the archive here. It was fairly loud in the room but you'd never know it listening to the audio. Great work guys.

We discuss open source software, vulnerability research and disclosure, product security incident response teams (PSIRTs), input vs output metrics, insourcing vs outsourcing, and building an incident response team.

DojoCon Videos Online

Props to Marcus Carey for live streaming talks from DojoCon. I appeared in my keynote, plus panels on incident response and cloud security. I thought the conference was excellent and many people posted their thoughts to #dojocon on Twitter.

Tuesday, November 03, 2009

Tentative Speaker List for SANS Incident Detection Summit

Thanks to everyone who attended the Bejtlich and Bradley Webcast for SANS yesterday.

We recorded that Webcast (audio is now available) to start a discussion concerning professional incident detection.

I'm pleased to publish the following tentative speaker list for the SANS WhatWorks in Incident Detection Summit 2009 on 9-10 Dec in Washington, DC.

We'll publish all of this information, plus the biographies for the speakers, on the agenda site, but I wanted to share what I have with you.

Day One (9 Dec)

  • Keynote: Ron Gula

  • Briefing: Network Security Monitoring dev+user: Bamm Visscher, David Bianco

  • Panel: CIRTs and MSSPs, moderated by Rocky DeStefano: Michael Cloppert, Nate Richmond, Jerry Dixon, Tyler Hudak, Matt Richard, Jon Ramsey

  • Cyberspeak Podcast live during lunch with Bret Padres and Ovie Carroll

  • Briefing: Bro introduction: Seth Hall

  • Panel: Enterprise network detection tools and tactics, potentially with a guest moderator: Ron Shaffer, Matt Olney, Nate Richmond, Matt Jonkman, Michael Rash, Andre Ludwig, Tim Belcher

  • Briefing: Snort update: Martin Roesch

  • Panel: Global network detection tools and tactics: Stephen Windsor, Earl Zmijewski, Andre' M. Di Mino, Matt Olney, Jose Nazario, Joe Levy

  • Panel: Commercial security intelligence service providers, moderated by Mike Cloppert: Gunter Ollmann, Rick Howard, Dave Harlow, Jon Ramsey, Wade Baker

  • Evening class: Advanced Analysis with Matt Richard

Day Two (10 Dec)

  • Keynote: Tony Sager

  • Briefing: Memory analysis dev+user: Aaron Walters, Brendan Dolan-Gavitt

  • Panel: Detection using logs: Jesus Torres, Nate Richmond, Michael Rash, Matt Richard, Ron Gula, J. Andrew Valentine, Alex Raitz

  • Panel: Network Forensics: Tim Belcher, Joe Levy, Martin Roesch, Ken Bradley

  • Briefing: Honeynet Project: Brian Hay, Michael Davis

  • Panel: Unix and Windows tools and techniques: Michael Cloppert, Patrick Mullen, Kris Harms

  • Panel: Noncommercial security intelligence service providers, moderated by Mike Cloppert: Andre' M. Di Mino, Jerry Dixon, Ken Dunham, Andre Ludwig, Jose Nazario

  • Panel: Commercial host-centric detection and analysis tools: Dave Merkel, Ron Gula, Alex Raitz

I'm thankful to have these excellent speakers and panel participants on board for this event. If you register and pay tuition by next Wednesday, 11 Nov, you'll save $250. Thank you.