Thursday, March 20, 2014

Are Nation States Responsible for Evil Traffic Leaving Their Networks?

During recent talks to various audiences, I've mentioned discussions within the United Nations. One point from these discussions involved certain nation states agreeing to modes of behavior in cyber space. I found the document containing these recent statements: A/68/98, Group of Governmental Experts on Developments in the Field of Information and Telecommunications in the Context of International Security (pdf). This document is hosted within the United Nations Office for Disarmament Affairs, in the developments in the field of information and telecommunications section.

Fifteen countries were involved in producing this document: Argentina, Australia, Belarus, Canada, China, Egypt, Estonia, France, Germany, India, Indonesia, Japan, the Russian Federation, the United Kingdom of Great Britain and Northern Ireland, and the United States of America.

Within the section titled "Recommendations on norms, rules and principles of responsible behaviour by States," I found the following noteworthy:

19. International law, and in particular the Charter of the United Nations, is applicable and is essential to maintaining peace and stability and promoting an open, secure, peaceful and accessible ICT environment...

23. States must meet their international obligations regarding internationally wrongful acts attributable to them. States must not use proxies to commit internationally wrongful acts. States should seek to ensure that their territories are not used by non-State actors for unlawful use of ICTs.

The first statement is important because it "imports" a large body of external law and agreements into the cyber field, for good or ill.

The second statement is important because, if States obey these principles, compliance will have interesting effects upon malicious activity leaving State networks. Collectively these sentences imply that States are responsible for their networks. States can't claim that they are only innocent intrusion victims, and that any malicious activity leaving their State isn't their fault or problem.

Whether States try to meet these obligations, and whether others call them out for not meeting them, is another matter.

Sunday, March 16, 2014

Five Thoughts from VADM Rogers Testimony

I had a chance to read Advance Questions for Vice Admiral Michael S. Rogers, USN (pdf) this weekend.

I wanted to share five thoughts based on excerpts from VADM Rogers' answers to written questions posed by the Senate Armed Services Committee.

1. The Committee asked: Can deterrence be an effective strategy in the absence of reliable attribution?

VADM Rogers responded: Yes, I believe there can be effective levels of deterrence despite the challenges of attribution. Attribution has improved, but is still not timely in many circumstances...

Cyber presence, being forward deployed in cyberspace, and garnering the indications and warnings of our most likely adversaries can help (as we do with our forces dedicated to Defend the Nation). (emphasis added)

I wonder if "cyber presence" and "being forward deployed in cyberspace" mean having access to adversary systems? There's little doubt as to the source of an attack if you are resident on the system launching the attack.

2. The Committee asked: Is it advisable to develop cyberspace officers as we do other combat arms or line officers? Why or why not?

VADM Rogers responded: ...We must find a way to simultaneously ensure combat arms and line officers are better prepared to contribute, and cyberspace officers are able to enjoy a long, meaningful career with upward mobility. A meaningful career should allow them to fully develop as specialized experts, mentor those around them, and truly influence how we ought to train and fight in this mission space. 

I am especially interested in the merit of how a visible commitment to valuing cyberspace officers in our ranks will affect recruitment and retention. I believe that many of today’s youth who are uniquely prepared to contribute (e.g. formally educated or self-developed technical expertise) do not feel there is a place for them in our uniformed services.

We must find a way to strengthen the message of opportunity and I believe part of the answer is to do our part to ensure cyberspace officers are viewed as equals in the eyes of line and combat arms officers; not enablers, but equals. Equals with capabilities no less valued than those delivered by professional aviators, special operators, infantry, or surface warfare. (emphasis added)

In my opinion, the best way to meet these goals is to create a separate Cyber Force. Please read the article Time for a US Cyber Force by Admiral James Stavridis (ret) and David Weinstein.

3. The Committee asked: The Unified Command Plan (UCP) establishes U.S. Cyber Command as a subunified command reporting to U.S. Strategic Command. We understand that the Administration considered modifying the UCP to establish U.S. Cyber Command as a full combatant command.
What are the best arguments for and against taking such action now?

VADM Rogers responded: ...The argument for full Unified Command status is probably best stated in terms of the threat. Cyber attacks may occur with little warning, and more than likely will allow only minutes to seconds to mount a defensive action seeking to prevent or deflect potentially significant harm to U.S. critical infrastructure. 

Existing department processes and procedures for seeking authorities to act in response to such emergency actions are limited to Unified Combatant Commanders. If confirmed, as the Commander of U.S. CYBERCOM, as a Sub-unified Combatant Commander I would be required to coordinate and communicate through Commander, U.S. Strategic Command to seek Secretary of Defense or even Presidential approval to defend the nation in cyberspace. 

In a response cycle of seconds to minutes, this could come with a severe cost and could even obviate any meaningful action. As required in the current Standing Rules of Engagement, as a Combatant Commander, I would have the requisite authorities to directly engage with SECDEF or POTUS as necessary to defend the nation. (emphasis added)

I'm dismayed but not surprised by this argument. I'm dismayed because it sounds like the most important reason to establish a unified cyber command is the perception that "cyber attacks...allow only minutes to seconds to mount a defensive action." This is just not true for any strategically significant attack.

If you only have "minutes to seconds" left for defense, you are way too far down the kill chain. You need to be intercepting the adversary in the reconnaissance phase, or at latest in the stage where the threat explores the target searching for critical elements. I fear the "minutes to seconds" camp is a legacy of the bad old days of Internet worms from 10 years ago.

4. The Committee asked: How could the Internet be redesigned to provide greater inherent security?

VADM Rogers responded: Advancements in technology continually change the architecture of the Internet. Cloud computing, for instance, is a significant change in how industry and individuals use Internet services... 

Several major providers of Internet services are already implementing increased security in email and purchasing services by using encryption for all transmissions from the client to the server. It is possible that the service providers could be given more responsibility to protect end clients connected directly to their infrastructures. 

They are in a position to stop attacks targeted at consumers and recognize when consumer devices on their networks have been subverted. The inability of end users to verify the originator of an email and for hackers to forge email addresses have resulted in serious compromises of end user systems... (emphasis added)

So, we see reference to cloud computing, encrypting client-to-server communications, ISPs protecting end users, and email verification. Think of all the tactical and technology options that were not mentioned here. Also notice the lack of discussion of better operations/campaigns and strategies. Finally, notice the Committee asked about redesigning the Internet, an engineering-focused approach.

5. I am glad to live in a country where a candidate to lead important military and intelligence agencies can be questioned in the open for public benefit. However, I am disappointed that the Unified Command Plan (UCP), referenced several times in the Q&A, remains a classified document.

The best we seem to have is The Unified Command Plan and Combatant Commands: Background and Issues for Congress (pdf), a 2013 Congressional Research Service document hosted by FAS, and History of the Unified Command Plan (pdf), a 2012 report also posted on the Web. It would be helpful to read an unclassified version of the next UCP, which seems to be due any time now.

PHOTO CREDIT: Gary Cameron, Reuters.

Saturday, March 08, 2014

Bejtlich Teaching at Black Hat USA 2014

I'm pleased to announce that I will be teaching one class at Black Hat USA 2014, on 2-3 and 4-5 August 2014, in Las Vegas, Nevada. The class is Network Security Monitoring 101. I've taught this class in Las Vegas in July 2013 and Seattle in December 2013. I posted Feedback from Network Security Monitoring 101 Classes last year as a sample of the student commentary I received.

This class is the perfect jumpstart for anyone who wants to begin a network security monitoring program at their organization. You may enter with no NSM knowledge, but when you leave you'll be able to understand, deploy, and use NSM to detect and respond to intruders, using open source software and repurposed hardware.

The first discounted registration deadline is 11:59 pm EDT June 2nd. The second discounted registration deadline (more expensive than the first but cheaper than later) ends 11:59 pm EDT July 26th. You can register here.

Please note: I have no plans to teach this class again in the United States. I haven't decided yet whether I will teach the class at Black Hat Europe 2014 in Amsterdam in October.

Since starting my current Black Hat teaching run in 2007, I've completely replaced each course every other year. In 2007-2008 I taught TCP/IP Weapons School version 1. In 2009-2010 I taught TCP/IP Weapons School version 2. In 2011-2012 I taught TCP/IP Weapons School version 3. In 2013-2014 I taught Network Security Monitoring 101. This fall I would need to design a brand new course to continue this trend.

I have no plans to design a new course for 2015 and beyond. If you want to see me teach Network Security Monitoring and related subjects, Black Hat USA is your best option.

Please sign up soon, for two reasons. First, if not enough people sign up early, Black Hat might cancel the class. Second, if many people sign up, you risk losing a seat. With so many classes taught in Las Vegas, the conference lacks the large rooms necessary to support big classes.

Several students asked for a more complete class outline. So, in addition to the outline currently posted by Black Hat, I present the following outline showing the sort of material I cover in the class.


Is your network safe from intruders? Do you know how to find out? Do you know what to do when you learn the truth? If you are a beginner, and need answers to these questions, Network Security Monitoring 101 (NSM101) is the newest Black Hat course for you. This vendor-neutral, open source software-friendly, reality-driven two-day event will teach students the investigative mindset not found in classes that focus solely on tools. NSM101 is hands-on, lab-centric, and grounded in the latest strategies and tactics that work against adversaries like organized criminals, opportunistic intruders, and advanced persistent threats. Best of all, this class is designed *for beginners*: all you need is a desire to learn and a laptop ready to run a virtual machine. Instructor Richard Bejtlich has taught over 1,000 Black Hat students since 2002, and this brand new, 101-level course will guide you into the world of Network Security Monitoring.


Day One

·         Introduction
·         Enterprise Security Cycle
·         State of South Carolina case study
·         Difference between NSM and Continuous Monitoring
·         Blocking, filtering, and denying mechanisms
·         Why does NSM work?
·         When NSM won’t work
·         Is NSM legal?
·         How does one protect privacy during NSM operations?
·         NSM data types
·         Where can I buy NSM?

·         Break

·         SPAN ports and taps
·         Making visibility decisions
·         Traffic flow
·         Lab 1: Visibility in ten sample networks
·         Security Onion introduction
·         Stand-alone vs server plus sensors
·         Core Security Onion tools
·         Lab 2: Security Onion installation

·         Lunch

·         Guided review of Capinfos, Tcpdump, Tshark, and Argus
·         Lab 3: Using Capinfos, Tcpdump, Tshark, and Argus

·         Break

·         Guided review of Wireshark, Bro, and Snort
·         Lab 4: Using Wireshark, Bro, and Snort
·         Using Tcpreplay with NSM consoles
·         Guided review of process management, key directories, and disk usage
·         Lab 5: Process management, key directories, and disk usage

Day Two

·         Computer incident detection and response process
·         Intrusion Kill Chain
·         Incident categories
·         CIRT roles
·         Communication
·         Containment techniques
·         Waves and campaigns
·         Remediation
·         Server-side attack pattern
·         Client-side attack pattern

·         Break

·         Guided review of Sguil
·         Lab 6: Using Sguil
·         Guided review of ELSA
·         Lab 7: Using ELSA

·         Lunch

·         Lab 8: Intrusion Part 1 Forensic Analysis
·         Lab 9: Intrusion Part 1 Console Analysis

·         Break

·         Lab 10: Intrusion Part 2 Forensic Analysis
·         Lab 11: Intrusion Part 2 Console Analysis


Students must be comfortable using command line tools in a non-Windows environment such as Linux or FreeBSD. Basic familiarity with TCP/IP networking and packet analysis is a plus.


NSM101 is a LAB-DRIVEN course. Students MUST bring a laptop with at least 8 GB RAM and at least 20 GB free on the hard drive. The laptop MUST be able to run a virtualization product that can CREATE VMs from an .iso, such as VMware Workstation (minimum version 8; version 9 or 10 preferred); VMware Player (minimum version 5 -- older versions do not support VM creation); VMware Fusion (minimum version 5, for Mac); or Oracle VM VirtualBox (minimum version 4.2). A laptop with access to an internal or external DVD drive is preferred, but not mandatory.

Students SHOULD test the open source Security Onion ( NSM distro prior to class. The students should try booting the latest version of the 12.04 64 bit Security Onion distribution into live mode. Students MUST ensure their laptops can run a 64 bit virtual machine. For help with this requirement, see the VMware knowledgebase article “Ensuring Virtualization Technology is enabled on your VMware host (1003944)” ( Students MUST have the BIOS password for their laptop in the event that they need to enable virtualization support in class. Students MUST also have administrator-level access to their laptop to install software, in the event they need to reconfigure their laptop in class.
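On a Linux host, one quick way to check that last requirement (a sketch on my part; the VMware knowledgebase article covers other platforms and the BIOS side) is to look for the CPU virtualization flags:

```shell
# A nonzero count of vmx (Intel) or svm (AMD) flags suggests the CPU can
# host 64 bit guests, provided the feature is also enabled in the BIOS.
grep -E -c 'vmx|svm' /proc/cpuinfo
```

If the count is zero, check the BIOS first; many vendors ship with virtualization support disabled.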


Students will receive a paper class handbook with printed slides, a lab workbook, and the teacher’s guide for the lab questions. Students will also receive a DVD with a recent version of the Security Onion NSM distribution.


Richard Bejtlich is Chief Security Strategist at FireEye, and was Mandiant's Chief Security Officer when FireEye acquired Mandiant in 2013. He is a nonresident senior fellow at the Brookings Institution, a board member at the Open Information Security Foundation, and an advisor to Threat Stack. He was previously Director of Incident Response for General Electric, where he built and led the 40-member GE Computer Incident Response Team (GE-CIRT). Richard began his digital security career as a military intelligence officer in 1997 at the Air Force Computer Emergency Response Team (AFCERT), Air Force Information Warfare Center (AFIWC), and Air Intelligence Agency (AIA). Richard is a graduate of Harvard University and the United States Air Force Academy. His fourth book is "The Practice of Network Security Monitoring" ( He also writes for his blog ( and Twitter (@taosecurity), and teaches for Black Hat.

Saturday, February 22, 2014

The Limits of Tool- and Tactics-Centric Thinking

Earlier today I read a post by Dave Aitel to his mailing list titled Drinking the Cool-aid. Because it includes a chart you should review, I included a screenshot of it in this blog, below. Basically Dave lists several gross categories of defensive digital security technology and tools, then lists what he perceives as deficiencies and benefits of each. Embedded in these pluses and minuses are several tactical elements as well. Please take a look at the original or my screenshot.

I had three reactions to this post.

First, I recognized that it's written by someone who is not responsible for defending any network of scale or significance. Network defense is more than tools and tactics. It's more often about people and processes. My initial response is unsatisfying and simplistic, however, even though I agree broadly with his critiques of anti-virus, firewalls, WAFs, and some traditional security technology.

Second, staying within the realm of tools and tactics, Dave is just wrong on several counts:
  • He emphasizes the role of encryption to defeat many defensive tools, but ignores that security and information technology architects regularly make deployment decisions to provide visibility in the presence of encryption.
  • He ignores or is ignorant of technology to defeat obfuscation and encryption used by intruders.
  • He says "archiving large amounts of traffic is insanely expensive and requires massive analytics to process," which is wrong on both counts. On a shoestring budget my team deployed hundreds of open source NSM sensors across my previous employer to capture data on gateways of up to multi-Gbps bandwidth. Had we used commercial packet capture platforms we would have needed a much bigger budget, but open source software like Security Onion has put NSM in everyone's hands, cheaply. Regarding "massive analytics," it's easier all the time to get what you need for solid log technology. You can even buy awesome commercial technology to get the job done in ways you never imagined.
I could make other arguments regarding tactics and tools, but you get the idea from the three I listed.
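The storage-cost claim is easy to sanity-check with back-of-envelope arithmetic. This sketch assumes a fully saturated 1 Gbps gateway, the worst case; real links average well below line rate:

```shell
# Rough full packet capture sizing: bytes stored per day at a given link rate,
# assuming the link runs saturated around the clock (worst case).
rate_gbps=1
bytes_per_day=$(( rate_gbps * 1000000000 / 8 * 86400 ))
echo "${bytes_per_day} bytes/day"   # 10800000000000, i.e. ~10.8 TB/day
```

At a more realistic 10 percent average utilization that drops to roughly 1 TB/day, which commodity disks on an open source sensor handle cheaply.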

Third, and this is really my biggest issue with Dave's post, is that he demonstrates the all-too-common tendency for security professionals to constrain their thinking to the levels of tactics and tools. What do I mean? Consider this diagram from my O'Reilly Webinar on my newest book:

A strategic security program doesn't start with tools and tactics. Instead, it starts with one or more overall program goals. The strategy-minded CISO gets executive buy-in to those goals; this works at a level understood by technicians and non-technicians alike. Next the CISO develops strategies to implement those goals, organizes and runs campaigns and operations to support the strategies, helps his team use tactics to realize the campaigns and operations, and procures tools and technology to equip his team.

Here is an example of one strategic security approach to minimize loss due to intrusions, using a strategy of rapid detection, response, and containment, and NSM-inspired operations/campaigns, tactics, and tools.

Now I don't want to seem too harsh, because tool- and tactics-centric thinking is not unique to the digital security world. I read how it played out during the planning and execution of the air campaign in the first Gulf War.

I read the wonderful John Warden and the Renaissance of American Air Power and learned how the US Air Force at the time suffered the same problems. The Air Force was very tactics- and technology-focused. They cared about how to defeat other aircraft in aerial combat and sought to keep the Army happy by making close air support their main contribution to the "joint" fight. The Air Force managed to quickly deploy planes to Saudi Arabia but had little idea how to use those forces in a campaign, let alone to achieve strategic or policy goals. It took visionaries like John Warden and David Deptula to make the air campaign a reality, and forever change the nature of air warfare.

I was a cadet when this all happened and remember my instructors exhibiting the contemporary obsession with tactics and tech we've seen in the security world for decades. Only later in my Air Force career did I see the strategic viewpoint gain acceptance.

Expect to hear more from me about the need for strategic thinking in digital security. I intend to apply to a PhD program this spring and begin research in the fall. I want to apply strategic thinking to private sector digital defense, because that is where a lot of the action is and where the need is greatest.

For now, I talked about the need for strategy in my O'Reilly Webinar.

Thursday, February 06, 2014

More Russian Information Warfare

In all the hype about "cyberspace" and "cyberwar," it's easy to forget about information warfare. This term was in vogue in the military when I was an Air Force intelligence officer in the 1990s. The Russians were considered to be experts at using information to their advantage and they appear to continue to wield that expertise on a regular basis. The latest incarnation goes like this:

1. Unknown parties, probably Russian SIGINT operators, intercept and record a phone call between US Assistant Secretary of State Victoria Nuland and US Ambassador to Ukraine, Geoffrey Pyatt. In the phone call, the parties use language which could be considered inflammatory or insulting to EU politicians.

2. The interceptors pass the phone call recording to a private third party.

3. Either that third party, or some recipient down the line, posts the audio and a video overlay on Youtube.

4. The third party Tweets about the video.

5. Russian-sponsored television begins broadcasting stories about the video.

6. Reputable news media begin broadcasting stories about the video.

7. The rift between American and European leaders widens (possibly).

I find several aspects of this story fascinating.

First, I am surprised that whoever intercepted the phone call decided it was worthwhile to probably burn an intelligence source. It's possible the Americans were using consumer cell phones, subject to monitoring by foreign intelligence services. If true, the Americans were not very OPSEC-aware. If the Americans were using a line which they thought was secure, then the interceptors just revealed they know how to access it.

Second, the use of third parties is characteristic of Russian activities. We are all familiar with the role patriotic hackers, youth groups, and similar actors play in routine "cyber" activities. This sort of propaganda activity, with direct ties to a probable SIGINT operation, is interesting.

Third, I wonder about the cost of this operation. In some ways it is very cheap -- Youtube, Twitter, etc. In other ways, it may be expensive -- interception and probable manual auditing of the audio to identify divisive and "offensive" content.

I don't pretend to be a Russian SIGINT expert, but I wanted to document this case in my blog. Constructive commentary is welcome but subject to moderation due to spam countermeasures. Incidentally, if I got the origin or order of any of these events wrong, I'm open to that too. I didn't ask my Russian-speaking friends to comment -- I'm just noting this story for future reference.

Update: I noticed that sources like Kyiv Post say:

Among the first to tweet the audio recording was an aide to Russian Deputy Prime Minister Dmitry Rogozin, named Dmitry Loskutov, who also wrote: "Sort of controversial judgment from Assistant Secretary of State Victoria Nuland speaking about the EU."

However, the timestamp on this Russian aide Tweet is "11:35 PM - 5 Feb 2014" whereas the private Tweet I mentioned earlier shows "9:36 pm - 4 Feb 2014" -- a day earlier.

Sunday, January 26, 2014

Quick Thought on Internet Governance and Edward Snowden

I am neither an Internet governance expert nor am I personally consumed by the issue. However, I wanted to point out a possible rhetorical inconsistency involving the Internet governance debate and another hot topic, theft of secret documents by insiders, namely Edward Snowden.

Let me set the stage. First, Internet governance.

Too often the Internet governance debate is reduced to the following. One side is characterized as "multi-stakeholder," consisting of various nongovernmental parties with overlapping agendas, like ICANN, IANA, IETF, etc. This side is often referred to as "the West" (thanks to the US, Canada, Europe, etc. being on this side), and is considered a proponent of an "open" Internet. The other side aligns with state governments and made its presence felt at the monumental December 2012 ITU World Conference on International Telecommunications (WCIT) meeting. This side is often referred to as "the East" (thanks to Russia, China, the Middle East, etc.), and is considered a proponent of a "closed" or "controlled" Internet.

Continuing to set the stage, let me now mention theft of secret documents.

One of the critiques of Edward Snowden involves the following. He stole documents of his own accord, claiming he had the right to do so by the "egregious" nature of what he found (or was sent to find). Critics reply that "no one elected Edward Snowden," but that the programs he exposed were authorized by all three branches of the US government. Because that government is elected by the people, one could say the government is speaking on behalf of the people, while Snowden is acting only on his own behalf.

Here's the problem.

If you believe that elected governments are the proper forum for expressing the wishes of their people, you should have a difficult time defending a "multi-stakeholder" model that puts groups like ICANN, IANA, IETF, etc. on equal footing (or even above) representatives of elected governments. If you believe in the primacy of the democratic system, you should also believe forums of elected representatives are the proper place to debate and decide Internet governance.

That chain of logic means Western democracies that support representative government should view government-centric bodies like the ITU in a more favorable light than they do presently. After all, who created the UN? Where is the organization's headquarters? Who pays its bills?

You probably detect the "escape hatch" for the multi-stakeholder proponents: my use of the term "elected governments." If a regime was not properly elected by its people, it should not have the right to speak for them. This applies to governments such as those in the People's Republic of China. Depending on your view of the legitimacy of the Russian election process, it may or may not apply to Russia. You can extend the argument as necessary to other countries.

The bottom line is this: be careful promoting multi-stakeholder Internet governance at the expense of representation by elected governments, if you also feel that Edward Snowden has no right to contravene the decision of a properly elected American government.

PS: If you want to know more about WCIT, try reading Summary Report of the ITU-T World Conference on International Telecommunications by Robert Pepper and Chip Sharp.

Saturday, January 25, 2014

Suricata 2.0beta2 as IPS on Ubuntu 12.04

Today I decided to install Suricata, the open source intrusion detection and prevention engine from the Open Information Security Foundation (OISF), as an IPS.

I've been running Suricata in IDS mode through Security Onion on and off for several years, but I never tried Suricata as an IPS.

I decided I wanted to run Suricata as a bridging IPS, such that it did not route traffic. In other words, I could place a Suricata IPS between, say, a router and a firewall, or between a router and a host, and neither endpoint would know the IPS was present.

Looking at available documentation across the Web, I did not see specific mention of this exact configuration. It's entirely possible I missed something useful, but most people running Linux as a bridge weren't using Suricata.

Those running Linux as a bridge sometimes enabled an IP address for the bridge, which is something I didn't want to do. (True bridges should be invisible to endpoints.)

Of course, to administer the bridge system itself, you ensure the box has a third interface and you assign that interface a management IP address.
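As a sketch of that arrangement (the interface name and addressing here are illustrative assumptions for a home lab, not values from my build), the bridged pair eth0/eth1 stays address-less while the third interface carries the management IP:

```shell
# Hypothetical example: out-of-band management via a third interface (wlan0
# here); eth0 and eth1 remain address-less so the bridge stays invisible.
sudo ifconfig wlan0 192.168.1.50 netmask 255.255.255.0 up
sudo route add default gw 192.168.1.1
```

SSH then reaches the box via the management address without touching the bridged path.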

I also noticed those using Suricata as an IPS tended to configure it as a router, assigning IP addresses to the internal and external interfaces. I wanted an invisible bridge, not a router.

The hardware I used for the bridge was a 2003-era Shuttle small form factor system with 512 MB RAM, two NICs (eth0 and eth1), and a wireless NIC (wlan0). I installed Ubuntu Server 12.04.3 LTS. I tried installing the 64 bit version but realized the box was too old for 64 bit. Once I tried a 32 bit installation I was working in no time.

The first step I took was to create the bridge. I wanted to deploy the system between a router and an endpoint host, like this:

router <-> eth0/Linux bridge/eth1 <-> endpoint

These are the commands to create the bridge. This how-to was useful.

$ sudo apt-get install bridge-utils
$ sudo brctl addbr br0
$ sudo brctl addif br0 eth0
$ sudo brctl addif br0 eth1
$ sudo ifconfig eth0 up
$ sudo ifconfig eth1 up
$ sudo ifconfig br0 up
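Note that these commands do not survive a reboot. One way to make the address-less bridge persistent on Ubuntu 12.04 (a sketch using the stock ifupdown stanzas provided by the bridge-utils package) is:

```shell
# Sketch: recreate the address-less bridge at boot via /etc/network/interfaces.
# "inet manual" keeps br0 free of an IP address, preserving invisibility.
sudo tee -a /etc/network/interfaces <<'EOF'
auto br0
iface br0 inet manual
    bridge_ports eth0 eth1
    bridge_stp off
EOF
```

After editing, `sudo ifup br0` (or a reboot) brings the bridge up automatically.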

With the bridge working, I could reach the endpoint host through the Ubuntu Linux bridge system. If I wanted to, I could watch traffic with Tcpdump on br0, eth0, or eth1.
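For example, to confirm frames actually traverse the bridge (tcpdump needs root; the interface and packet count below are arbitrary choices, not part of my original build):

```shell
# Sniff ten packets on the bridge interface; -n suppresses name resolution
# so the capture shows raw addresses, and -c stops after ten packets.
sudo tcpdump -n -i br0 -c 10
# Repeating the capture on eth0 and then eth1 should show the same frames
# entering one side of the bridge and leaving the other.
```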

Next I needed to install Suricata. I decided to use the beta packages published by OISF as described here. I also had to install python-software-properties as shown in order to have add-apt-repository available.

$ sudo apt-get install python-software-properties

$ sudo add-apt-repository ppa:oisf/suricata-beta
You are about to add the following PPA to your system:
 Suricata IDS/IPS/NSM beta packages

Suricata IDS/IPS/NSM - Suricata is a high performance Network IDS, IPS and Network Security Monitoring engine.

Open Source and owned by a community run non-profit foundation, the Open Information Security Foundation (OISF).
 Suricata is developed by the OISF, its supporting vendors and the community.

This engine is not intended to just replace or emulate the existing tools in the industry, but will bring new ideas
 and technologies to the field.

This new Engine supports:

Multi-Threading - provides for extremely fast and flexible operation on multicore systems.
File Extraction, MD5 matching - over 4000 types of file recognition/extraction transmitted live over the wire.
TLS/SSL certificate matching/logging
Automatic Protocol Detection (IPv4/6, TCP, UDP, ICMP, HTTP, TLS, FTP, SMB )
Gzip Decompression
Fast IP Matching
Hardware acceleration on CUDA and GPU cards

and many more great features -
 More info:
Press [ENTER] to continue or ctrl-c to cancel adding it

gpg: keyring `/tmp/tmpqk6Ubk/secring.gpg' created
gpg: keyring `/tmp/tmpqk6Ubk/pubring.gpg' created
gpg: requesting key 66EB736F from hkp server
gpg: /tmp/tmpqk6Ubk/trustdb.gpg: trustdb created
gpg: key 66EB736F: public key "Launchpad PPA for Peter Manev" imported
gpg: Total number processed: 1
gpg:               imported: 1  (RSA: 1)

$ sudo apt-get update
Now I was ready to install Suricata and Htp, a dependency.
$ sudo apt-get install suricata htp
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
  libhtp1 libnet1 libnetfilter-queue1 libnspr4 libnss3 libyaml-0-2
The following NEW packages will be installed:
  htp libhtp1 libnet1 libnetfilter-queue1 libnspr4 libnss3 libyaml-0-2
0 upgraded, 8 newly installed, 0 to remove and 0 not upgraded.
Need to get 2,510 kB of archives.
After this operation, 8,394 kB of additional disk space will be used.
Do you want to continue [Y/n]?
With this process done, I added rules from Emerging Threats. I found Samiux's blog post helpful.
$ cd /etc/suricata
$ sudo wget
$ sudo tar -xzf emerging.rules.tar.gz
$ sudo mkdir /var/log/suricata
$ sudo touch /etc/suricata/threshold.config

Now I had to edit /etc/suricata/suricata.yaml. The following diff shows the changes I made to the original file.

$ diff -u /etc/suricata/suricata.yaml.orig /etc/suricata/suricata.yaml
--- /etc/suricata/suricata.yaml.orig    2014-01-25 21:39:57.542801685 -0500
+++ /etc/suricata/suricata.yaml 2014-01-25 21:41:31.530801055 -0500
@@ -46,7 +46,7 @@

 # Default pid file.
 # Will use this file if no --pidfile in command options.
-#pid-file: /var/run/
+pid-file: /var/run/

 # Daemon working directory
 # Suricata will change directory to this one if provided
@@ -208,7 +208,7 @@

   # a line based information for dropped packets in IPS mode
   - drop:
-      enabled: no
+      enabled: yes
       filename: drop.log
       append: yes
       #filetype: regular # 'regular', 'unix_stream' or 'unix_dgram'
@@ -337,7 +337,7 @@

 # You can specify a threshold config file by setting "threshold-file"
 # to the path of the threshold config file:
-# threshold-file: /etc/suricata/threshold.config
+threshold-file: /etc/suricata/threshold.config

 # The detection engine builds internal groups of signatures. The engine
 # allow us to specify the profile to use for them, to manage memory on an
@@ -373,7 +373,7 @@
   - inspection-recursion-limit: 3000
   # When rule-reload is enabled, sending a USR2 signal to the Suricata process
   # will trigger a live rule reload. Experimental feature, use with care.
-  #- rule-reload: true
+  - rule-reload: true
   # If set to yes, the loading of signatures will be made after the capture
   # is started. This will limit the downtime in IPS mode.
   #- delayed-detect: yes
Next I added the following test rule to /etc/suricata/rules/drop.rules. The file location is arbitrary. I wrote a simple rule to alert on ICMP traffic from a test system. All of the following is one line; I broke it here only for readability.
alert icmp any -> any any (msg:"ALERT test ICMP ping from";
 icode:0; itype:8; classtype:trojan-activity; sid:99999998; rev:1;)
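If your Suricata build supports it, the -T option is a convenient way to confirm that the new rule file parses before going any further; it loads the configuration and rules, then exits without starting capture:

```shell
# Test mode: parse suricata.yaml and every referenced rule file, then exit.
# A non-zero exit status indicates a configuration or rule error.
sudo suricata -T -c /etc/suricata/suricata.yaml
```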

Notice I have no iptables rules loaded at this point:

$ sudo iptables -vnL
Chain INPUT (policy ACCEPT 5 packets, 392 bytes)
 pkts bytes target     prot opt in     out     source               destination 

Chain FORWARD (policy ACCEPT 4 packets, 240 bytes)
 pkts bytes target     prot opt in     out     source               destination 

Chain OUTPUT (policy ACCEPT 4 packets, 496 bytes)
 pkts bytes target     prot opt in     out     source               destination

Now I was ready to see if Suricata would at least see and alert on traffic matching my ICMP test rule. First I started Suricata and told it to watch br0, the bridge interface.

$ sudo suricata -c /etc/suricata/suricata.yaml -i br0

25/1/2014 -- 22:44:13 -  - This is Suricata version 2.0beta2 RELEASE
25/1/2014 -- 22:44:16 -  - [ERRCODE: SC_ERR_NO_RULES(42)] - No rules loaded from /etc/suricata/rules/emerging-icmp.rules
25/1/2014 -- 22:44:33 -  - [ERRCODE: SC_ERR_OPENING_RULE_FILE(41)] - opening rule file /etc/suricata/rules/dns-events.rules: No such file or directory.
25/1/2014 -- 22:44:51 -  - [ERRCODE: SC_ERR_PCAP_CREATE(21)] - Using Pcap capture with GRO or LRO activated can lead to capture problems.
25/1/2014 -- 22:44:51 -  - all 2 packet processing threads, 3 management threads initialized, engine started.
I don't care about the Warning or Error notices here. I could fix those but they are not germane to demonstrating the main point of this post.

On a separate system, I pinged the endpoint host.

$ ping -c 2
PING ( 56(84) bytes of data.
64 bytes from icmp_req=1 ttl=64 time=5.29 ms
64 bytes from icmp_req=2 ttl=64 time=4.03 ms

--- ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 4.030/4.663/5.297/0.637 ms
Then I checked my Suricata logs:
$ ls -al /var/log/suricata/
total 88
drwxr-xr-x  3 root root  4096 Jan 25 22:50 .
drwxr-xr-x 11 root root  4096 Jan 25 21:38 ..
-rw-r--r--  1 root root     0 Jan 25 22:15 drop.log
-rw-r--r--  1 root root   392 Jan 25 22:50 fast.log
-rw-r--r--  1 root root     0 Jan 25 21:42 http.log
-rw-r--r--  1 root root 66008 Jan 25 22:50 stats.log
drwxr-xr-x  2 root root  4096 Jan 25 22:15 .tmp
-rw-r--r--  1 root root   388 Jan 25 22:50 unified2.alert.1390708237

$ cat /var/log/suricata/fast.log
01/25/2014-22:50:40.510124  [**] [1:99999998:1] ALERT test ICMP ping from [**] [Classification: A Network Trojan was detected] [Priority: 1] {ICMP} ->
01/25/2014-22:50:41.510464  [**] [1:99999998:1] ALERT test ICMP ping from [**] [Classification: A Network Trojan was detected] [Priority: 1] {ICMP} ->
That worked as expected. I got alerts on the ICMP traffic matching the test ALERT rule.
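One quick way to confirm how many times the test rule fired is to grep fast.log for its sid. A sketch using a hypothetical two-line sample in place of the real log (actual alert lines carry more fields, as shown above):

```shell
# Build a small sample standing in for /var/log/suricata/fast.log
cat > /tmp/fast.log.sample <<'EOF'
01/25/2014-22:50:40.510124  [**] [1:99999998:1] ALERT test ICMP ping [**] {ICMP}
01/25/2014-22:50:41.510464  [**] [1:99999998:1] ALERT test ICMP ping [**] {ICMP}
EOF

# Count alerts from the test rule, keyed on its sid (99999998); prints 2
grep -c '1:99999998:1' /tmp/fast.log.sample
```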

Now it was time to drop traffic!

I added a new rule to drop.rules, again broken only for readability here:

drop icmp any -> any any (msg:"DROP test ICMP ping from";
 icode:0; itype:8; classtype:trojan-activity; sid:99999999; rev:1;)
I also disabled the previous ALERT rule by commenting it out.

Next I added an iptables rule to the FORWARD chain, which handles traffic traversing the bridge. This documentation was helpful.

$ sudo iptables -I FORWARD -j NFQUEUE

$ sudo iptables -vnL
Chain INPUT (policy ACCEPT 32 packets, 2752 bytes)
 pkts bytes target     prot opt in     out     source               destination 

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination 
    0     0 NFQUEUE    all  --  *      *              NFQUEUE num 0

Chain OUTPUT (policy ACCEPT 25 packets, 2600 bytes)
 pkts bytes target     prot opt in     out     source               destination 
Finally I restarted Suricata, this time telling it to read from queue 0, where the NFQUEUE target was queuing packets for inspection.
$ sudo suricata -c /etc/suricata/suricata.yaml -q 0
25/1/2014 -- 22:54:49 -  - This is Suricata version 2.0beta2 RELEASE
25/1/2014 -- 22:54:52 -  - [ERRCODE: SC_ERR_NO_RULES(42)] - No rules loaded from /etc/suricata/rules/emerging-icmp.rules
25/1/2014 -- 22:55:08 -  - [ERRCODE: SC_ERR_OPENING_RULE_FILE(41)] - opening rule file /etc/suricata/rules/dns-events.rules: No such file or directory.
25/1/2014 -- 22:55:26 -  - all 3 packet processing threads, 3 management threads initialized, engine started.
With Suricata running in IPS mode, I tried pinging from the same source system as I did earlier.
$ ping -c 2
PING ( 56(84) bytes of data.

--- ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1006ms
Nothing got through! I confirmed that I could ping the same box from another source IP address. In other words, only ICMP from the test source was blocked. Now check the Suricata logs:
$ ls -al /var/log/suricata/
total 152
drwxr-xr-x  3 root root   4096 Jan 25 22:57 .
drwxr-xr-x 11 root root   4096 Jan 25 21:38 ..
-rw-r--r--  1 root root    294 Jan 25 22:57 drop.log
-rw-r--r--  1 root root    798 Jan 25 22:57 fast.log
-rw-r--r--  1 root root      0 Jan 25 21:42 http.log
-rw-r--r--  1 root root 125812 Jan 25 22:57 stats.log
drwxr-xr-x  2 root root   4096 Jan 25 22:15 .tmp
-rw-r--r--  1 root root    388 Jan 25 22:50 unified2.alert.1390708237
-rw-r--r--  1 root root      0 Jan 25 22:55 unified2.alert.1390708526
-rw-r--r--  1 root root    360 Jan 25 22:57 unified2.alert.1390708633

$ cat drop.log
01/25/2014-22:57:17.031400: IN= OUT= SRC= DST= LEN=84 TOS=0x00 TTL=64 ID=36055 PROTO=ICMP TYPE=8 CODE=0 ID=59729 SEQ=256
01/25/2014-22:57:18.038179: IN= OUT= SRC= DST= LEN=84 TOS=0x00 TTL=64 ID=36056 PROTO=ICMP TYPE=8 CODE=0 ID=59729 SEQ=512
Cool, those are our dropped ICMP packets. Checking fast.log, we'll see the original two ALERT test messages, but check out the new DROP test messages too:
$ cat /var/log/suricata/fast.log
01/25/2014-22:50:40.510124  [**] [1:99999998:1] ALERT test ICMP ping from [**] [Classification: A Network Trojan was detected] [Priority: 1] {ICMP} ->
01/25/2014-22:50:41.510464  [**] [1:99999998:1] ALERT test ICMP ping from [**] [Classification: A Network Trojan was detected] [Priority: 1] {ICMP} ->
01/25/2014-22:57:17.031400  [Drop] [**] [1:99999999:1] DROP test ICMP ping from [**] [Classification: A Network Trojan was detected] [Priority: 1] {ICMP} ->
01/25/2014-22:57:18.038179  [Drop] [**] [1:99999999:1] DROP test ICMP ping from [**] [Classification: A Network Trojan was detected] [Priority: 1] {ICMP} ->
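Notice the [Drop] tag, which distinguishes packets Suricata actually blocked from alert-only matches, so you can filter on it directly. A sketch using a hypothetical mixed sample in place of the real fast.log:

```shell
# Sample mixing an alert-only line and a dropped line
cat > /tmp/fast.mixed <<'EOF'
01/25/2014-22:50:40.510124  [**] [1:99999998:1] ALERT test ICMP ping [**] {ICMP}
01/25/2014-22:57:17.031400  [Drop] [**] [1:99999999:1] DROP test ICMP ping [**] {ICMP}
EOF

# Only lines tagged [Drop] represent packets that were blocked; prints 1
grep -c '\[Drop\]' /tmp/fast.mixed
```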
So that's it.

Note that with this configuration, if you stop Suricata, the host it's "protecting" becomes totally unreachable: the iptables rule keeps diverting packets to NFQUEUE, but with no process servicing the queue, they are dropped. You can restore connectivity by flushing the iptables rules via this command:

$ sudo iptables -F
Now the endpoint is reachable while Suricata is not running. To re-enable the IPS, you have to set up the NFQUEUE via iptables again as shown previously.
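One way to make starting and stopping less error-prone is to wrap the iptables rule and Suricata in a single script, so the diversion rule is removed whenever Suricata exits. A minimal sketch, assuming the paths used above (run as root):

```shell
#!/bin/sh
# Divert bridged traffic to NFQUEUE, run Suricata inline, and remove
# the diversion rule again when Suricata exits for any reason.
set -e
iptables -I FORWARD -j NFQUEUE
trap 'iptables -D FORWARD -j NFQUEUE' EXIT
suricata -c /etc/suricata/suricata.yaml -q 0
```

Deleting the single rule with -D is gentler than iptables -F, which would also flush any unrelated rules in the filter table.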

Following these directions, you have the foundation for building a bridged IPS using Suricata on Ubuntu Server 12.04. The next steps would be to fix the configuration issues causing the start-up error messages, make the bridge, firewall, and Suricata components available at start-up, and then build your own set of DROP rules. There are probably also optimizations available via PF_RING and other performance features. Good luck!

Do you run Suricata as an IPS? How do you do it? Have you tried the new 2.x beta?