Wednesday, February 28, 2007

Sguil Client on Ubuntu

Inspired by an old post, by John Curry, and by David Bianco's NSM Wiki, I decided to install the Sguil client on Ubuntu. It was really easy.

First I edited the /etc/apt/sources.list file to include the "universe" package collections:

deb edgy universe
deb-src edgy universe

Next I updated the apt cache and added the libraries I needed.

richard@neely:~$ sudo apt-get update
richard@neely:~$ sudo apt-get install tclx8.4 tcllib iwidgets4 wireshark
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
itcl3 itk3 libadns1 libpcre3 tcl8.4 tk8.4 wireshark-common
Suggested packages:
itcl3-doc itk3-doc iwidgets4-doc tclreadline tclx8.4-doc
Recommended packages:
The following NEW packages will be installed:
itcl3 itk3 iwidgets4 libadns1 libpcre3 tcl8.4 tcllib tclx8.4 tk8.4 wireshark
0 upgraded, 11 newly installed, 0 to remove and 0 not upgraded.
Need to get 13.0MB of archives.
After unpacking 51.4MB of additional disk space will be used.
Do you want to continue [Y/n]? y

When that finished I downloaded the sguil-client-0.6.1.tar.gz archive and modified sguil.conf thus:

set ETHEREAL_PATH /usr/bin/wireshark

That's it. I was able to start Sguil and access servers.

New Laptop Configuration

Last year I bought a Lenovo X60s laptop to serve as a portable VMware server for my classes. Recently my seven-year-old Thinkpad a20p has been giving me trouble, like losing half its RAM. When you only have 512 MB, that's a big deal. I decided that it was time to move operations to the newer laptop, even though the screen is smaller than I prefer for daily use. I figure I can get by with the smaller screen at least through the end of the year, when I hope to buy my next dream laptop.

I decided this was the time to try a new laptop configuration. The X60s came with Windows XP SP2 preinstalled. Although the bottom of the laptop showed a product key, I used the Magical Jelly Bean Keyfinder v2.0 Beta 2½ to retrieve the key used by Windows internally.
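Keyfinder tools like this one read the binary DigitalProductId value from the registry and base-24 decode a slice of it. As a hedged illustration (the offsets and alphabet follow commonly published descriptions of the XP-era encoding, not anything in this post), the decode step looks roughly like this:

```python
# Hypothetical sketch of the XP-era product key decode performed by
# keyfinder utilities. Assumes the commonly documented layout: bytes
# 52-66 of the DigitalProductId registry value hold the encoded key.
KEY_CHARS = "BCDFGHJKMPQRTVWXY2346789"  # 24-character key alphabet

def decode_product_key(digital_product_id):
    key = bytearray(digital_product_id[52:67])  # 15 encoded bytes
    out = []
    for _ in range(25):
        acc = 0
        for j in range(14, -1, -1):  # long division by 24, MSB first
            acc = (acc << 8) | key[j]
            key[j] = acc // 24
            acc %= 24
        out.append(KEY_CHARS[acc])
    chars = "".join(reversed(out))
    return "-".join(chars[i:i + 5] for i in range(0, 25, 5))
```

Decoding an all-zero value yields "BBBBB-BBBBB-BBBBB-BBBBB-BBBBB", which makes the base-24 structure easy to sanity-check.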

I installed Ubuntu Desktop 6.10 but preserved the 5 GB IBM restore partition. I am really impressed by Ubuntu. I never use configuration GUIs for anything, but I did use Ubuntu's to set up wireless networking prior to the actual installation. I really like running a live CD prior to touching the hard drive; it allowed me to test wireless connectivity, X, other devices, and so on.

Here is the partition layout:

richard@neely:~$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 1.9G 229M 1.6G 13% /
varrun 756M 92K 756M 1% /var/run
varlock 756M 0 756M 0% /var/lock
procbususb 10M 132K 9.9M 2% /proc/bus/usb
udev 10M 132K 9.9M 2% /dev
devshm 756M 0 756M 0% /dev/shm
lrm 756M 18M 738M 3% /lib/modules/2.6.17-11-generic/volatile
/dev/sda10 14G 129M 13G 1% /data
/dev/sda7 721M 17M 666M 3% /home
/dev/sda2 4.9G 4.0G 856M 83% /media/sda2
/dev/sda8 287M 8.1M 264M 3% /tmp
/dev/sda5 5.0G 1.7G 3.1G 36% /usr
/dev/sda6 1.9G 363M 1.5G 20% /var
/dev/sda9 23G 129M 22G 1% /vmware

As implied by the /vmware partition, I installed VMware Workstation Beta 6. I plan to deploy two VMs -- Windows XP SP2 and FreeBSD -- and do all of my production work inside those VMs. I also have a /data partition. Inside the /data partition I'm going to use TrueCrypt to encrypt all my personal and customer data. I'm going to let the two VMs access that partition via Samba. In other words, Ubuntu will be a Samba server and I'll have Windows and FreeBSD mount the Samba drive for "home" directories.

To avoid being prompted to insert my Ubuntu CD-ROM, I commented out this line in /etc/apt/sources.list:

#deb cdrom:[Ubuntu 6.10 _Edgy Eft_ - Release i386 (20061025)]/ edgy main restricted

I needed the following to install VMware Workstation.

richard@neely:~$ uname -a
Linux neely 2.6.17-11-generic #2 SMP Thu Feb 1 19:52:28 UTC 2007 i686 GNU/Linux

richard@neely:~$ sudo apt-get install build-essential linux-headers-`uname -r`
Reading package lists... Done
Building dependency tree
Reading state information... Done
linux-headers-2.6.17-11-generic is already the newest version.
The following extra packages will be installed:
dpkg-dev g++ g++-4.1 libc6-dev libstdc++6-4.1-dev linux-libc-dev
Suggested packages:
debian-keyring gcc-4.1-doc lib64stdc++6 glibc-doc manpages-dev
The following NEW packages will be installed:
build-essential dpkg-dev g++ g++-4.1 libc6-dev libstdc++6-4.1-dev
0 upgraded, 7 newly installed, 0 to remove and 0 not upgraded.
Need to get 7932kB of archives.
After unpacking 30.4MB of additional disk space will be used.
Do you want to continue [Y/n]? y
Selecting previously deselected package linux-libc-dev.
Get:1 edgy-security/main linux-libc-dev [1770kB]
Get:2 edgy-updates/main libc6-dev 2.4-1ubuntu12.3 [1852kB]
Get:3 edgy/main libstdc++6-4.1-dev 4.1.1-13ubuntu5 [1619kB]
Get:4 edgy/main g++-4.1 4.1.1-13ubuntu5 [2573kB]
Get:5 edgy/main g++ 4:4.1.1-6ubuntu3 [1434B]
Get:6 edgy/main dpkg-dev 1.13.22ubuntu7 [110kB]
Get:7 edgy/main build-essential 11.3 [6974B]
Fetched 5735kB in 1m5s (87.5kB/s)
Selecting previously deselected package linux-libc-dev.
(Reading database ... 107084 files and directories currently installed.)
Unpacking linux-libc-dev (from .../linux-libc-dev_2.6.17.1-11.35_i386.deb) ...
Selecting previously deselected package libc6-dev.
Unpacking libc6-dev (from .../libc6-dev_2.4-1ubuntu12.3_i386.deb) ...
Selecting previously deselected package libstdc++6-4.1-dev.
Unpacking libstdc++6-4.1-dev (from .../libstdc++6-4.1-dev_4.1.1-13ubuntu5_i386.deb) ...
Selecting previously deselected package g++-4.1.
Unpacking g++-4.1 (from .../g++-4.1_4.1.1-13ubuntu5_i386.deb) ...
Selecting previously deselected package g++.
Unpacking g++ (from .../g++_4%3a4.1.1-6ubuntu3_i386.deb) ...
Selecting previously deselected package dpkg-dev.
Unpacking dpkg-dev (from .../dpkg-dev_1.13.22ubuntu7_all.deb) ...
Selecting previously deselected package build-essential.
Unpacking build-essential (from .../build-essential_11.3_i386.deb) ...
Setting up linux-libc-dev ( ...
Setting up libc6-dev (2.4-1ubuntu12.3) ...
Setting up dpkg-dev (1.13.22ubuntu7) ...
Setting up libstdc++6-4.1-dev (4.1.1-13ubuntu5) ...
Setting up g++-4.1 (4.1.1-13ubuntu5) ...
Setting up g++ (4.1.1-6ubuntu3) ...
Setting up build-essential (11.3) ...

Now it's time for VMware Workstation.

richard@neely:~$ sudo bash
root@neely:~# cd /usr/local/src
root@neely:/usr/local/src# mv /tmp/VMware-workstation-e.x.p-39849.i386.tar.gz .
root@neely:/usr/local/src# tar -xzpf VMware-workstation-e.x.p-39849.i386.tar.gz
root@neely:/usr/local/src# cd vmware-distrib
root@neely:/usr/local/src/vmware-distrib# ./

I accepted all of the defaults and everything worked as I hoped. Here are a few notes for my future reference.

Do you want networking for your virtual machines? (yes/no/help) [yes]

Configuring a bridged network for vmnet0.

Your computer has multiple ethernet network interfaces available: eth0, eth1.
Which one do you want to bridge to vmnet0? [eth0] eth1

The following bridged networks have been defined:

. vmnet0 is bridged to eth1

Do you wish to configure another bridged network? (yes/no) [no] yes

Configuring a bridged network for vmnet2.

The following bridged networks have been defined:

. vmnet0 is bridged to eth1
. vmnet2 is bridged to eth0

All your ethernet interfaces are already bridged.

Do you want to be able to use NAT networking in your virtual machines? (yes/no)

Configuring a NAT network for vmnet8.

Do you want this program to probe for an unused private subnet? (yes/no/help)

Probing for an unused private subnet (this can take some time)...
The subnet appears to be unused.

The following NAT networks have been defined:

. vmnet8 is a NAT network on private subnet

Do you wish to configure another NAT network? (yes/no) [no]

Do you want to be able to use host-only networking in your virtual machines?

Configuring a host-only network for vmnet1.

Do you want this program to probe for an unused private subnet? (yes/no/help)

Probing for an unused private subnet (this can take some time)...

The subnet appears to be unused.

The following host-only networks have been defined:

. vmnet1 is a host-only network on private subnet

Do you wish to configure another host-only network? (yes/no) [no]

Starting VMware services:
Virtual machine monitor done
Blocking file system: done
Virtual ethernet done
Bridged networking on /dev/vmnet0 done
Host network detection done
Host-only networking on /dev/vmnet1 (background) done
DHCP server on /dev/vmnet1 done
Bridged networking on /dev/vmnet2 done
Host-only networking on /dev/vmnet8 (background) done
DHCP server on /dev/vmnet8 done
NAT service on /dev/vmnet8 done

The configuration of VMware Workstation e.x.p build-39849 for Linux for this
running kernel completed successfully.

You can now run VMware Workstation by invoking the following command:

With VMware installed I turned to TrueCrypt. I needed one other package before deploying TrueCrypt.

root@neely:/usr/local/src# apt-get install dmsetup
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 26.6kB of archives.
After unpacking 90.1kB of additional disk space will be used.
Get:1 edgy/main dmsetup 2:1.02.07-1ubuntu2 [26.6kB]
Fetched 26.6kB in 0s (35.6kB/s)
Selecting previously deselected package dmsetup.
(Reading database ... 108832 files and directories currently installed.)
Unpacking dmsetup (from .../dmsetup_2%3a1.02.07-1ubuntu2_i386.deb) ...
Setting up dmsetup (1.02.07-1ubuntu2) ...
root@neely:/usr/local/src# mv /tmp/truecrypt-4.2a-ubuntu-6.10-x86.tar.gz .
root@neely:/usr/local/src# tar -xzvf truecrypt-4.2a-ubuntu-6.10-x86.tar.gz
root@neely:/usr/local/src# cd truecrypt-4.2a/
root@neely:/usr/local/src/truecrypt-4.2a# ls
License.txt Readme.txt truecrypt_4.2a-0_i386.deb
root@neely:/usr/local/src/truecrypt-4.2a# dpkg -i truecrypt_4.2a-0_i386.deb
Selecting previously deselected package truecrypt.
(Reading database ... 108839 files and directories currently installed.)
Unpacking truecrypt (from truecrypt_4.2a-0_i386.deb) ...
Setting up truecrypt (4.2a-0) ...
root@neely:/usr/local/src/truecrypt-4.2a# chmod u+s /usr/bin/truecrypt

At this point I decided to change my default prompt by creating /home/richard/.profile with the following.

PS1='`hostname -s`:$PWD$ '; export PS1

Now I set up TrueCrypt.

neely:/home/richard$ mkdir tc
neely:/home/richard$ sudo chown richard:richard /data
neely:/home/richard$ ls -ald /data
drwxr-xr-x 3 richard richard 4096 2007-02-28 08:49 /data
neely:/home/richard$ truecrypt -c
Volume type:
1) Normal
2) Hidden
Select [1]:

Enter file or device path for new volume: /data/tc1
1) FAT
2) None
Select [1]:
Enter volume size (bytes - size/sizeK/sizeM/sizeG): 4.2G

Hash algorithm:
1) RIPEMD-160
2) SHA-1
3) Whirlpool
Select [1]:

Encryption algorithm:
1) AES
2) Blowfish
3) CAST5
4) Serpent
5) Triple DES
6) Twofish
7) AES-Twofish
8) AES-Twofish-Serpent
9) Serpent-AES
10) Serpent-Twofish-AES
11) Twofish-Serpent

Enter password for new volume '/data/tc1':
Re-enter password:

Enter keyfile path [none]:

TrueCrypt will now collect random data.

Is your mouse connected directly to computer where TrueCrypt is running? [Y/n]: y

Please move the mouse randomly until the required amount of data is captured...
Mouse data captured: 100%
Done: 4095.10 MB Speed: 19.43 MB/s Left: 0:00:00
Volume created.
neely:/home/richard$ truecrypt -u /data/tc1 tc/
Enter password for '/data/tc1':
neely:/home/richard$ touch tc/test
neely:/home/richard$ ls -al tc/
total 5
drwxr-xr-x 2 richard richard 4096 2007-02-28 15:22 .
drwxr-xr-x 15 richard richard 1024 2007-02-28 15:05 ..
-rwxr-xr-x 1 richard richard 0 2007-02-28 15:22 test

With TrueCrypt installed I turned to Samba.

neely:/home/richard$ dpkg --list | grep -i samba
ii samba-common 3.0.22-1ubuntu4.1 Samba common files used by both the server a

It looks like Samba itself is not installed, although samba-common is. Weird.

neely:/home/richard$ sudo apt-get install samba
Reading package lists... Done
Building dependency tree
Reading state information... Done
Recommended packages:
The following NEW packages will be installed:
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 3028kB of archives.
After unpacking 7569kB of additional disk space will be used.
Get:1 edgy-security/main samba 3.0.22-1ubuntu4.1 [3028kB]
Fetched 3028kB in 14s (214kB/s)
Preconfiguring packages ...
Selecting previously deselected package samba.
(Reading database ... 108847 files and directories currently installed.)
Unpacking samba (from .../samba_3.0.22-1ubuntu4.1_i386.deb) ...
Setting up samba (3.0.22-1ubuntu4.1) ...
Generating /etc/default/samba...
TDBSAM version too old (0), trying to convert it.
TDBSAM converted successfully.
account_policy_get: tdb_fetch_uint32 failed for field 1 (min password length), returning 0
account_policy_get: tdb_fetch_uint32 failed for field 2 (password history), returning 0
account_policy_get: tdb_fetch_uint32 failed for field 3 (user must logon to change password), returning 0
account_policy_get: tdb_fetch_uint32 failed for field 4 (maximum password age), returning 0
account_policy_get: tdb_fetch_uint32 failed for field 5 (minimum password age), returning 0
account_policy_get: tdb_fetch_uint32 failed for field 6 (lockout duration), returning 0
account_policy_get: tdb_fetch_uint32 failed for field 7 (reset count minutes), returning 0
account_policy_get: tdb_fetch_uint32 failed for field 8 (bad lockout attempt), returning 0
account_policy_get: tdb_fetch_uint32 failed for field 9 (disconnect time), returning 0
account_policy_get: tdb_fetch_uint32 failed for field 10 (refuse machine password change), returning 0
* Starting Samba daemons... [ ok ]

I modified /etc/samba/smb.conf as shown.

neely:/home/richard$ diff /etc/samba/smb.conf.orig /etc/samba/smb.conf
< workgroup = MSHOME
> workgroup = TAOSECURITY
< ;[homes]
< ; comment = Home Directories
< ; browseable = no
> [homes]
> comment = Home Directories
> browseable = yes
< ; writable = no
> writable = yes

I added a richard user and reloaded Samba.

neely:/home/richard$ sudo smbpasswd -a richard
New SMB password:
Retype new SMB password:
neely:/home/richard$ sudo /etc/init.d/samba reload
* Reloading /etc/samba/smb.conf... [ ok ]

To test the Samba share I tried mounting it from a different FreeBSD box.

orr:/home/richard$ sudo mount_smbfs -I //richard@ /samba
orr:/home/richard$ ls /samba/
Desktop Examples tc
orr:/home/richard$ ls /samba/tc

Nifty. As you can see, the TrueCrypt directory is available. This is where I will have my Windows and FreeBSD VMs write sensitive data.

Once I have created the VMs I will modify smb.conf again so that Samba listens only on interfaces provided by VMware, such as the host-only network "".
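For reference, a minimal sketch of the smb.conf global settings that restrict Samba to chosen interfaces; the VMware interface names here are assumptions, and the actual host-only subnet is not shown above:

```ini
[global]
   ; listen only on loopback and the VMware host-only/NAT interfaces
   interfaces = lo vmnet1 vmnet8
   bind interfaces only = yes
```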

I think this setup will work. I will have instant access to Windows or FreeBSD via my VMware images. I will have all my sensitive data stored in the TrueCrypt file. I plan to use Ubuntu itself as little as possible, and instead do my work inside the two VMs.

Other notes:

Bluetooth off:

echo disabled | sudo tee -a /proc/acpi/ibm/bluetooth

Turn the backlight brightness down.

Enable hard-disk spin-down by setting:

richard@neely:~/.gnupg$ ls -al secring.gpg
lrwxrwxrwx 1 richard richard 35 2007-05-04 15:52 secring.gpg -> /home/richard/tc/.gnupg/secring.gpg

Backup to NetCenter:

sudo mount -t nfs /mnt

Thursday, February 22, 2007

Jose Nazario on Botnets

I recommend reading Black Hat: Botnets Go One-on-One by Kelly Jackson Higgins. She interviews Jose Nazario for a peek at findings from his talk at Black Hat DC next week. I won't be attending, although I plan to stop by Thursday evening to meet friends Erik Birkholz, Rohyt Belani, and any other ex-Foundstoners we can find.

Tuesday, February 20, 2007

Snort DCE/RPC Vulnerability Thoughts

Yesterday Sourcefire posted a new advisory on a vulnerability in the DCE/RPC preprocessor introduced in Snort 2.6.1. The vulnerability exists in 2.6.1,,, and 2.7 beta 1.

A look at the snort/src/dynamic-preprocessors/dcerpc/ directory of Snort CVS shows dcerpc.c and smb_andx_decode.c were modified three days ago to patch the vulnerability. You can check the diffs for dcerpc.c and smb_andx_decode.c to see how Sourcefire addressed the problem.

This level of transparency is one of my favorite aspects of open source projects. If you are so inclined you can check the source code to find the original vulnerability and then decide if the fix is proper.

There are probably a few dinosaurs out there who think this level of disclosure is too much, since it shows the adversary exactly where to find the problem. The truth is that several years of exceptionally effective reverse engineering of binary patches for closed, proprietary operating systems (and even creation of patches based on reverse engineering!) have demonstrated that hiding source code provides little to no secrecy. (Remember the source code for your favorite operating systems is probably already stolen anyway.)

The question is, what happens now? The slide below is from my TCP/IP Weapons School (layers 4-7) class. It's original research (meaning I didn't copy it from elsewhere, not that it's particularly awe-inspiring) based on analyzing Microsoft protocols. In the class we look at all of these protocols to see how they can be fragmented at the DCE/RPC and SMB layers. (For news on the next class in your area, visit my training schedule.) If you look at the slide you'll see DCE/RPC can appear in a variety of transports. This is worrisome given the vulnerability in Snort's DCE/RPC preprocessor. In 2005 in response to the Snort Back Orifice vulnerability I wondered if we might see a Snort worm. I don't think that will happen this time since it didn't happen last time.
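To make the fragmentation idea concrete, here is a minimal sketch, of my own construction rather than from the class materials, of splitting a DCE/RPC request stub across several PDUs using the first/last-fragment flags in the 16-byte connection-oriented header:

```python
import struct

# PFC flag bits from the DCE/RPC connection-oriented (v5) header
PFC_FIRST_FRAG = 0x01
PFC_LAST_FRAG = 0x02

def dcerpc_header(ptype, flags, frag_len, call_id):
    # version, minor version, packet type, flags, data representation,
    # fragment length, auth length, call id -- 16 bytes total
    return struct.pack("<BBBB4sHHI", 5, 0, ptype, flags,
                       b"\x10\x00\x00\x00", frag_len, 0, call_id)

def fragment(stub, frag_size, call_id=1):
    """Split a request stub across PDUs the way a fragmenting
    sender would, setting first/last flags appropriately."""
    chunks = [stub[i:i + frag_size] for i in range(0, len(stub), frag_size)]
    pdus = []
    for i, chunk in enumerate(chunks):
        flags = (PFC_FIRST_FRAG if i == 0 else 0) | \
                (PFC_LAST_FRAG if i == len(chunks) - 1 else 0)
        pdus.append(dcerpc_header(0, flags, 16 + len(chunk), call_id) + chunk)
    return pdus

pdus = fragment(b"A" * 100, 40)   # 100-byte stub in 40-byte fragments
```

An inspection engine that reassembles at the TCP layer but not at the DCE/RPC layer sees three separate PDUs here, which is exactly why preprocessors like Snort's exist.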

On a related note, I finished writing the fourth Snort Report today. I cover upgrading to Snort (which fixes the vulnerability) and how to check out Snort 2.7.0 from CVS to run a patched version of the 2.7.0 beta.

Monday, February 19, 2007

Bejtlich Teaching at Techno Security 2007

I've previously spoken at the Techno Security 2005 and Techno Security 2006 conferences. A visit to the Techno Security 2007 conference page shows I will be teaching TCP/IP Weapons School (Layers 2-3) at the 2007 event this summer. I'll be teaching 6 and 7 June at one of my favorite vacation spots, Myrtle Beach Marriott Resort at Grande Dunes. I'll also be speaking as part of the technical tracks on 5 June. If you'd like to register for TCP/IP Weapons School, please check out the details here and return the registration form (.pdf) to me as quickly as you can. I believe we have a limit of 20 seats, and at $995 per person you get to attend my two-day class and the entire Techno conference.

I'm working out the details for other public classes listed here, but it will be tough to beat this Myrtle Beach deal. If you're a security vendor, this is an excellent show to have a booth. There's a very high concentration of sharp security people and decision-makers at this conference. Email jack [at] thetrainingco [dot] com if you want to know more about attending the show or securing a booth, and tell Jack that Richard sent you.

Friday, February 16, 2007

Combat Insider Threats with Nontechnical Means

I've written many posts on insider threats, like How Many Spies and Of Course Insiders Cause Fewer Security Incidents. Recently a former Coca-Cola employee was found guilty of trying to steal Coke's trade secrets, with an intent to sell them to Pepsi. According to this story, detection of the plot was decidedly non-technical:

In May, a letter appeared at Pepsi's New York headquarters offering to sell the trade secret. But that's how the beverage superpowers learned of common corporate priorities: Pepsi officials immediately notified Coke of the breach; in turn, Coke executives contacted the FBI and a sting operation was put into play.

Today I read Insider Tries to Steal $400 Million at DuPont. The story claims a technical detection method:

Computer security played a key role in the case. The chemist, Gary Min, was spotted when he began accessing an unusually high volume of abstracts and full-text PDF documents from DuPont's Electronic Data Library (EDL), a Delaware-based database server which is one of DuPont's primary storage repositories for confidential information.

Between Aug. 2005 and Dec. 12, 2005, Min downloaded some 22,000 abstracts and about 16,700 documents -- 15 times the number of abstracts and reports accessed by the next-highest user of the EDL, according to documents unsealed yesterday by Colm Connolly, U.S. Attorney for the District of Delaware...

Min began downloading the documentation about two months before he received an official job offer from Victrex, a DuPont competitor, in October 2005. The new job was slated to begin in January of 2006, but Min did not tell DuPont he was leaving until December 2005, according to the documents. It was after he announced his departure that DuPont's IT staff detected the high volume of downloads from Min's computer.

That demonstrates DuPont was unaware of the activity (which started in August) until December. DuPont only started looking for odd activity once Min announced his departure, so again the initial detection method was non-technical. Something similar to the Coke case occurred, based on this line from the same story:

Victrex was not accused of conspiring with Min. In fact, the company assisted authorities in collecting evidence against him, according to the documents.

So, in both cases nontechnical means identified and caught insider threats.
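The download-volume anomaly in the DuPont case, one user pulling roughly 15 times more than the next-highest, is at least simple to express as a detection rule. A hedged sketch with made-up numbers:

```python
def flag_outliers(counts, ratio=10):
    """Flag any user whose volume is at least `ratio` times the
    next-highest user's -- the pattern in the DuPont story."""
    ranked = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
    flagged = []
    for i, (user, n) in enumerate(ranked):
        runner_up = ranked[i + 1][1] if i + 1 < len(ranked) else 0
        if runner_up and n >= ratio * runner_up:
            flagged.append(user)
    return flagged

# hypothetical document counts per user
print(flag_outliers({"min": 16700, "alice": 1100, "bob": 900}))
# -> ['min']
```

Of course, as the cases above show, a rule like this only helps if someone is actually watching its output before the insider resigns.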

This follow-up story, 10 Signs an Employee Is About to Go Bad, lists only two technical means to identify insider threats -- the remaining eight are all decidedly analog or physical. I recommend reading this list. It represents one of the better arguments I've seen for "convergence" between physical and digital security staffs.

Unfortunately, many companies are spending lots of money on products to supposedly combat insider threats, when the best approach is nontechnical. Meanwhile, these same companies are completely 0wn3d by outsiders in .ro, .ru, .cn, etc., but little attention is paid because external threats are not the "hot topic" right now. The only saving grace is that some of the technical methods that might be helpful against insiders may work against outsiders who control company assets.

Shawn Carpenter Vindicated

Two years ago I posted Real Threat Reporting. My story discussed Shawn Carpenter, formerly an analyst at Sandia National Labs who discovered Titan Rain activity at his site. After bringing news of the intrusions to the FBI, Sandia fired him.

According to these AP, ComputerWorld, and FCW stories, a New Mexico jury awarded Shawn "$35,661 for lost wages and benefits, $1,875 for counseling costs and $350,000 for emotional distress." The jury also awarded "$4.3 million in punitive damages" which makes "doing the right thing" a financially attractive proposition when your agency doesn't want you discussing national security failings with outside parties.

Thursday, February 15, 2007

Open Source Winners

The chart comes from How To Tell The Open Source Winners From The Losers by InformationWeek's Charles Babcock. You can more or less skip the article, but the chart is interesting. I don't think it's absolutely necessary to have a benevolent dictator if you have a core team like FreeBSD does. In fact, projects with benevolent dictators suffer from a single point-of-failure that might only be addressed by a fork or replacement by another like-minded individual.

February 2007 (IN)SECURE Magazine

The February 2007 (.pdf) issue of (IN)SECURE Magazine is available. This is a great magazine. Interesting articles include an interview with security researcher/ninja Joanna Rutkowska, discussions of Vista and Office 2007, and a neat overview of security careers by Mike Murray. (Note to Mike: I'd never heard of Tim Keanini until now. No offense, but I don't think he's up there with Marcus Ranum or Ron Gula.)

Tuesday, February 13, 2007

Binary Upgrade of FreeBSD 6.1 to 6.2

Last year I described performing a binary upgrade of FreeBSD 6.0 to 6.1. Today I tried a similar process for FreeBSD 6.1 to 6.2, using Colin Percival's instructions for 6.1 to 6.2-RC1.

shuttle01# mkdir /usr/upgrade
shuttle01# cd /usr/upgrade
shuttle01# fetch
upgrade-to-6.2.tgz 100% of 18 kB 120 kBps
shuttle01# tar -xzf upgrade-to-6.2.tgz
shuttle01# cd upgrade-to-6.2
shuttle01# sh -f freebsd-update.conf -d /usr/upgrade
-r 6.2-RELEASE upgrade
Looking up mirrors... 1 mirrors found.
Fetching public key from done.
Fetching metadata signature for 6.1-RELEASE from done.
Fetching metadata index... done.
Fetching 2 metadata files... done.
Inspecting system... done.

The following components of FreeBSD seem to be installed:
kernel/smp world/base world/dict world/doc world/manpages

The following components of FreeBSD do not seem to be installed:
kernel/generic src/base src/bin src/contrib src/crypto src/etc src/games
src/gnu src/include src/krb5 src/lib src/libexec src/release src/rescue
src/sbin src/secure src/share src/sys src/tools src/ubin src/usbin
world/catpages world/games world/info world/proflibs

Does this look reasonable (y/n)? y

Fetching metadata signature for 6.2-RELEASE from done.
Fetching metadata index... done.
Fetching 1 metadata patches. done.
Applying metadata patches... done.
Fetching 1 metadata files... done.
Inspecting system... done.
Preparing to download files... done.
Fetching 2703 patches.....10....20....30....40....50....60....70....80...
Applying patches... done.
Fetching 5315 files... done.
The following files will be removed as part of updating to 6.2-RELEASE-p1:
The following files will be added as part of updating to 6.2-RELEASE-p1:
The following files will be updated as part of updating to 6.2-RELEASE-p1:
shuttle01# sh -f freebsd-update.conf
-d /usr/upgrade install
Installing updates...
Kernel updates have been installed. Please
reboot and run " install" again to
finish installing updates.
shuttle01# cd /usr/upgrade/upgrade-to-6.2
shuttle01# sh -f freebsd-update.conf -d /usr/upgrade install
Installing updates... done.
shuttle01# uname -a
Fri Jan 12 11:05:30 UTC 2007 i386

That's it -- very easy. I believe we'll see this integrated into 6-STABLE and then appear in 6.3-RELEASE. I love it.

Monday, February 12, 2007

Another Anti-Virus Problem

Here's more evidence if you need to make a case that blindly requiring anti-virus or other agents on all systems is neither cost-free nor automatically justified, as I mentioned late last year. As reported by SANS @RISK (link will work shortly):

Trend Micro Antivirus, a popular antivirus solution, contains a buffer overflow vulnerability when parsing executables compressed with the UPX executable compression program. A specially-crafted executable could trigger this buffer overflow and execute arbitrary code with SYSTEM/root privileges, allowing complete control of the vulnerable system. Note that the malicious file can be sent to a vulnerable system via email (spam messages), web, FTP, Instant Messaging or Peer-to-Peer file sharing. UPX file format vulnerabilities have been widely-reported in the past, and UPX file fuzzers are commonly available.
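The "UPX file fuzzers" mentioned in the advisory are, at bottom, mutation loops over a packed file. A minimal, hypothetical sketch of the mutation step (the sample buffer is a stand-in, not the real UPX layout):

```python
import random

def flip_bytes(data, n=4, seed=0):
    """One mutation step of a dumb file-format fuzzer: copy the
    input and corrupt a few randomly chosen byte positions."""
    rng = random.Random(seed)   # seeded so any crash is reproducible
    out = bytearray(data)
    for _ in range(n):
        out[rng.randrange(len(out))] = rng.randrange(256)
    return bytes(out)

sample = b"UPX!" + bytes(28)    # stand-in for a packed-file header
mutant = flip_bytes(sample)     # feed this to the parser under test
```

Run in a loop with varying seeds against the target parser, this is enough to shake loose bugs like the one described above.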

Here's the Trend Micro advisory.


Late last year I mentioned I planned to read and review FISMA Certification & Accreditation Handbook by Laura Taylor. You know if I read a book on Cisco MARS on one leg of my last trip, I probably read a different book on the return leg. FISMA was that book. These comments are going to apply most directly to FISMA itself, based on what I learned reading Ms. Taylor's book. I'll save comments on the book itself for a later date.

Last year I wrote FISMA is a joke. I was wrong, and I've decided to revise my opinion. Based on my understanding of FISMA as presented in this book,

FISMA is a jobs program for so-called security companies without the technical skills to operationally defend systems. This doesn't mean that if you happen to conduct FISMA work, you're definitely without technical skills. I guarantee my friends at ClearNet Security are solid guys, just based on their ability to detect that the C&A project they joined was worthless. Anyway, I guess it's time to put on a flame-retardant suit.

Let's start with p xxiii in FISMA to understand the thought process of those who believe in it. Foreword author Sunil James, "Former Staff Director of the FDIC," writes that the FISMA "process has been proven to reduce risk to federal information systems." I think he means that FISMA reduces the risk of unemployment for those who perform C&A on federal information systems.

Without saying anything else, I think I know the problem with the FISMA-supporting crowd. I bet they do not do any other work, especially not technical work and certainly not incident response work for the agencies they "certify" and "accredit." If they did have operational security responsibilities for their C&A clients they'd wonder "why are these systems repeatedly being 0wned if their C&A packages are up-to-date?" Those that keep one foot in the C&A world and another in the operational world realize their FISMA feet are wet and smelly and not doing anyone any good, period.

Another sad truth about FISMA, despite Mr. Porter's unsubstantiated claim, is that there is zero connection between high FISMA scores and lower impact or number of intrusions. If you don't believe me, keep your eyes open for the next FISMA report card from the House Committee on Oversight and Government Reform. A look at the 2006 scores is interesting. Figure out who has good scores, and then see how they fared in this staff report on Federal breaches since January 2003. Hint: everyone's been 0wned, is 0wned, and will continue to be 0wned while money is spent paying consultants to write FISMA C&A packages.

Let's see what this FISMA book has to say about C&A packages. Page 34 claims a good ISSO can maintain C&A packages for 6 systems, and that writing each package can take 3-6 months. They need to be updated every 3 years. The C&A handbook guiding the writing of packages is usually 200+ pages and needs to be kept current. The packages themselves are usually 500+ (!) pages, and require 2-4 weeks to be read by the accreditors.
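Taking the book's figures at face value, the paperwork load per ISSO is easy to total (a back-of-the-envelope sketch of mine, not a calculation from the book):

```python
systems_per_isso = 6            # the book's page 34 figure
months_per_package = (3, 6)     # writing time per package, low/high
refresh_cycle_months = 36       # packages redone every 3 years

low, high = (systems_per_isso * m for m in months_per_package)
print(f"{low}-{high} writer-months per {refresh_cycle_months}-month cycle")
```

At the high end that is one person writing documentation full-time just to keep six packages current, which rather underlines the point.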

On p 34 we come to a root of the problem:

Since one C&A package could easily take a year for a well-versed security expert to prepare, it is considered standard and acceptable for ISSOs to hire consultants from outside the agency to prepare the Certification Package.

Page 38 continues:

Since C&A, if done properly, is usually a much bigger job than most people realize, I cannot emphasize enough the value in using outside consultants. Putting together a Certification Package is a full-time job.

I do not have a problem with consultants. Heck, I am a consultant. However, the vast majority of my work does not revolve around writing 500-page reports based on self-assessments every three years.

Laura Taylor writes on pp 8-9:

C&A is essentially a documentation and paperwork nightmare... prepare yourself to write, write, and write some more. If you detest writing, you're in the wrong business.

Basically, preparing a Certification Package is writing about security -- extensive writing about security. When you are preparing a Certification Package, you usually don't perform any sort of hands-on security. You review the existing security design and architecture documents, interview various IT support and development folks familiar with the infrastructure, and document your findings.
(emphasis added)

I am not making this up. The really sad part is this: the author then says

...why C&A exists -- it is a process that enables authorizing officials to discover the security truths about their infrastructure so that informed decisions can be made.

Security truths? What are they based on?

In chapter 8 we read about "security self-assessments." Maybe those are helpful? Hmm, probably not: "A security self-assessment is a survey-based audit that is essentially a long list of questions." What's worse, page 115 says:

[I]n September 2003 a report put out by the Office of Inspector General at the Environmental Protection Agency found that 36 percent of the responses to security self-assessments contained inaccurate information.

Ms. Taylor's recommendation?

Tip: Encourage self-assessment respondents to answer questions truthfully.

Maybe some other aspect of C&A and FISMA shows merit. Chapter 12, discussing Security Tests and Evaluation (ST&E), begins with this:

A Security Test & Evaluation, known among security experts as an ST&E, is a document that demonstrates that an agency has performed due diligence in testing security requirements and evaluating the outcome of the tests.

The ST&E is a C&A document that tends to give agencies lots of trouble. It's not clear to many agencies what tests they should be doing, who should be doing them, and what the analysis of the tests should consist of.

That's another winner in my book.

FISMA fans out there are going to cite the vulnerability scans which are usually part of ST&E as a sign that something technical happens during C&A. Believe me, I am sparing the author of this book and her "technical editors" by not reproducing their recommendations for assessments. (One word: Strobe.)

We could also look at the Privacy Impact Assessment, but guess what -- it's another self-survey.

The bottom line is that FISMA doesn't mention C&A at all, but the author thinks that's ok because C&A fulfills FISMA's goals. The reality is far different. According to the act itself, the first "purpose" is to:

provide a comprehensive framework for ensuring the effectiveness of information security controls over information resources that support Federal operations and assets. (emphasis added)

FISMA is failing miserably. It's ironic that this FISMA book begins each chapter with a quote, and this one begins chapter 2:

It is common sense to take a method and try it. If it fails, admit it frankly and try another. But above all, try something. -- FDR

It's time for the FISMA fans to admit this five-year FISMA experiment has been a waste of taxpayer money and agency resources.

Returning to my football analogy, C&A is an expensive, extensive practice session controlled by the players and overseen by agents who get paid the longer the team is on the field. Success is measured not by the score of the next game but by the number of worthless statistics written about the players prior to the first snap. Once the team takes to the field they are annihilated by the opposition, but the agents don't care because they're spending their money elsewhere.

If you are a C&A Package-writing company, I strongly recommend you gain some operational capabilities or look for a new line of business. I am committed to eliminating your position in the Federal government. Laura Taylor writes with some apparent regret that "most private and public companies don't put nearly as much time, effort, and resources into documenting their security as government agencies do." Let's keep it that way. At least it frees up resources for work that has a chance of stopping an intruder.

Do I have anything "nice" to say? Yes -- if you are so cursed as to be responsible for a FISMA C&A project, I do recommend reading this book. Forget the technical aspects and concentrate on understanding the FISMA maze. I thought Laura Taylor wrote a clear and well-organized book with lots of practical advice and good templates. I would much rather see her not have to write about this subject again, though!

Earth to MARS

Disclaimer: I'm going to single out a book by Cisco employees that talks about a Cisco product. I have no personal feelings about Cisco. I have friends there. I've done work for Cisco. Since I think Cisco is eventually going to own all network security functions in their switches, I may even work for Cisco one day.

This post is for all product vendors who approach understanding and defending the network in the ways described here. Wherever you read "Cisco" feel free to add products that share the characteristics I outline below.

Once again I found myself hanging in the sky last week. Trips to and from the West Coast gave me the opportunity to read Security Threat Mitigation and Response: Understanding Cisco Security MARS by Dale Tesch and Greg Abelar. This is mainly another Cisco marketing book, like Self-Defending Networks: The Next Generation of Network Security by Duane DeCapite. While I have a few thoughts on the book, I would much rather address the underlying philosophy presented by the authors. I'm fairly sure they're only repeating Cisco market-speak, but I hear the same message from many vendors, consultants, and individuals.

In this post I'd like to take issue with that message. In short, almost nothing about the approach taken by Cisco MARS (and similar products) is new or will "solve" your security problems. Unless you augment tactics and technologies like MARS, you will find yourself wondering why you spent time and effort to end up as frustrated and confused as you were pre-MARS.

I'll refer to this book as STM, short for the Security Threat Mitigation term in the title. STM is supposed to be "beyond Security Information Management" (SIM). According to STM, the five core SIM functions (pp 6-7) are:

  • Collect event data

  • Store data

  • Correlate to show relationships (what the book calls "true power")

  • Present data

  • Report on, alarm on, and/or notify about data

STM starts to knock the value of SIMs by introducing the idea of "garbage in, garbage out," saying "information or events from several different sources can be 'garbage' unless they are put together in a useful way." I disagree -- garbage in always produces garbage out. The idea that a ton of garbage can be turned into something valuable (like gold from coal) is the fallacy of SIM/SEM technology.

The book tries to position "STM" as "beyond SIM" by granting STM the following attributes:

  • Data reduction

  • Timely attack mitigation

  • End-to-end network awareness

  • Integrated vulnerability assessment

  • Session correlation

You are probably thinking what I am thinking: how is that really different from a SIM/SEM? I bet all the SIM/SEM vendors are saying "we do that already."

Things start to get really weird when the authors talk about the "advantages of a proactive security framework" (pp 14-15) compared to what everyone else must be using. STM says:

The key to this framework is the network's capability to behave actively. This does not necessarily mean to take action itself, but to automatically collect data from numerous sources and come to a decision... (emphasis added)

... so that a person can react like we've always done. It is intellectual dishonesty to claim this product (or any other) is acting "actively" when the end result is still waiting for a person to react. Mind you, I'm not knocking reaction. Too many people seem to think proactivity is king and reactivity is evil, but sometimes there's no other option. It bugs me that Cisco is packaging manual "shunning" in a new wrapping and calling it "self-defending" and "active" when the end result is a person having to make a reactive decision.

The real problem is far more insidious, however. As has always been the case with alert-centric products, there is not enough data available to make a decision. In other words, Cisco MARS, like other products, still cannot tell me if an attack was successful.

More absurdity appears in the section (p 15) on "false-positive reduction," where three methods are given for MARS to "determine the validity of event data." They are, basically:

  1. Network topology

  2. Vulnerability assessment via limited network scanning

  3. User determination

So, we have 1) could the attack reach the target? Guess what -- if it's a TCP connection it probably did. Next we have 2) scan to see if the IIS attack hit an Apache server; limited at best, worthless at other times. Finally -- and this kills me -- 3) let the user decide. How is this better than anything else again?
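Reduced to code, the three checks look something like this sketch. This is my paraphrase of the book's description, not MARS code, and every name here is hypothetical:

```python
# Toy paraphrase of the three MARS-style "false-positive reduction"
# checks described on p 15. All names are invented for illustration.

def validate_alert(reachable, service_matches, analyst_verdict=None):
    """Return what the three checks can actually conclude about an alert."""
    if not reachable:
        return "discard: target unreachable"        # 1) network topology
    if not service_matches:
        return "discard: service mismatch"          # 2) limited vuln scan
    if analyst_verdict is None:
        return "undecided: ask the user"            # 3) user determination
    return "valid" if analyst_verdict else "invalid"

# A completed TCP connection against the right service still lands
# right back in a human's lap:
print(validate_alert(reachable=True, service_matches=True))
# -> undecided: ask the user
```

Notice that for any attack that completes the connection and hits a matching service -- the cases that matter -- the pipeline terminates at step 3, a human guess.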

The section "Enhancing the Self-Defending Network" adds insult to injury by naming three "missing links" in the SDN addressed by STM:

  1. Automated log correlation

  2. Automated threat response

  3. Automated mitigation

We know SIM/SEM does #1. A product that accomplished #2 and #3 would be impressive, but guess again -- MARS does neither automatically:

Automated mitigation is not yet achievable because many security devices still put out false-positive alerts, but CS-MARS makes a recommendation for mitigation and offers security responders a single click to deploy commands on devices that will stop offending traffic after the responder has analyzed the attack data. (emphasis added)

Again, it all comes back to having the data necessary to make a decision, and then letting an analyst make the decision. There's nothing proactive or special here.

STM emphasizes the speed with which one can respond, but how can an analyst even know to start the escalation process? The end of chapter 3 presents a case that aims to show "ROI" for MARS based on shutting down a spreading virus faster than a non-MARS solution. Here's the core of the problem, however:

The virus starts trying to infect other hosts... [MARS sends] an email page... to a security responder 24 hours a day, 7 days a week... The security responder gets the page and logs into CS-MARS.

And that's faster or better how?

STM demonstrates real ignorance of the sorts of data an analyst needs to make decisions regarding security events. On MARS, "forensic analysis" is "visual tools for attack-path analysis and attack propagation" (p 82) or mapping NAT'd IPs to public IPs or IPs to MAC addresses. "Attack validation" (i.e., should I worry?) is knowing "if an attack actually reached the intended destination." Furthermore, p 89 says:

[U]pon detecting an anomalous behavior, CS-MARS starts to dynamically store the full NetFlow records for the anomalous activity. This intelligent collection system provides all the information that a security analyst needs. (emphasis added)

Wrong. What else does MARS offer? Page 127 says "retrieval of raw messages" is available as a download of a zipped text file! I bet that's easy to manipulate when trying to understand alerts. On pp 229-230 we see examples of the sort of "incident ID" data available to analysts -- almost all of which is unactionable and worthless. On MARS, the analysis process consists of looking at a rule MARS used to generate an alert and then guessing at the relationship between the rule and events listed in the "detailed data" section.

I think the real problem with this approach is demonstrated by the case studies in chapter 9, which I guess Cisco sees as a validation of their approach. We read about a Trojan found at a state agency via an IDS alert (no different from non-MARS). We see a .edu respond using MARS because phone calls prompted an investigation of traffic on port 25! We read about a hospital that used a spike in traffic to a Web server as a reason to investigate the Web server (wow). A financial firm investigates high ICMP traffic, and a small business sees odd DNS names in its weekly MARS reports to find another company using its wireless access point. Honestly, is this supposed to make me a believer?

If you want to know how I recommend dealing with this and similar situations, I've already written about it. In brief, analysts must have high-fidelity, original, content-neutral data sources to investigate. Content-neutral is the key term here. If you do not have such data (as provided by NSM) to investigate, you are doing alert management. And you will lose.

So what is MARS (and similar products) good for? In short, there is value in centralizing and presenting events from disparate security products. There is value in having a single window into that world, with some sort of accountability, escalation, and case management. There is value in being able to contain network-centric exploitation via centralized device control. Just remember that making the decisions to protect the enterprise requires having the proper data. NSM is one way to get that data.

Not Your Father's TCP/IP Stack

I sometimes hear of people talking about controlling TCP and UDP ports, as if that is the battleground for network access in 2007. Reality check -- that hasn't been true for years, unfortunately. Boy, I miss those days -- the days when defined applications used defined ports and blocking all but a few meant understanding the applications permitted in the enterprise. The Cisco IPJ article Boosting the SOA with XML Networking reminded me with this excellent diagram.

Those days are long gone, thanks to security monstrosities like those depicted next.

My gut tells me that when I see a bunch of terms squashed into one box, it's going to be a mess to understand, inspect, and control. I expect to hear from the development crowd that XML-fu is God's gift to the Green Earth, but it will take a miracle for me to believe that Everything-over-HTTP-over-port-80-TCP is a "good idea." We've got 65535 TCP ports to use and the whole world is collapsing onto one. Argh.

Incidentally, kudos to Cisco for publishing IPJ in such a Web-friendly format, as well as sending free printed copies.

Thursday, February 08, 2007

I See You

In recent posts like Consider This Scenario, I posted information collected from my live network connection. I don't worry about exposing real data, as long as it belongs to my own network. I obviously don't expose client data!

Today I received a new alert from OSSEC:

OSSEC HIDS Notification.
2007 Feb 08 09:46:13

Received From: macmini->/var/log/auth.log
Rule: 5701 fired (level 12) -> "Possible attack on the ssh server
(or version gathering)."
Portion of the log(s):

Feb 8 09:46:11 macmini sshd[21224]: Bad protocol version identification
'Yo. I just read your blog about this SSH server'
from ::ffff:

Interesting. Here is an OSSEC alert -- but is there anything else? How many people think I should check my macmini host again? Rather than poke around on that box, I first check my independent NSM Sguil sensor to see what it says about the event.

I didn't see any Snort alerts, so I did a session query and got one result.

Sensor:cel433 Session ID:5029174672303084694
Start Time:2007-02-08 14:46:16 End Time:2007-02-08 14:46:39 ->
Source Packets:6 Bytes:49
Dest Packets:6 Bytes:60

This is probably the connection that prompted the OSSEC alert. I can generate a human-readable transcript of the event. Here's what that looks like.

Sensor Name: cel433
Timestamp: 2007-02-08 14:46:16
Connection ID: .cel433_5029174672303084694
Src IP: (
Dst IP: (
Src Port: 60096
Dst Port: 22
OS Fingerprint: -
Linux 2.6, seldom 2.4 (older, 4) (NAT!) [priority1] (up: 33 hrs)
OS Fingerprint: ->
(distance 20, link: pppoe (DSL))

DST: SSH-2.0-OpenSSH_3.8.1p1 Debian-8.sarge.6
SRC: Yo. I just read your blog about this SSH server
DST: Protocol mismatch.

As you can see, someone from an IP in Brazil connected to port 22 TCP, entered the string you see, and then disconnected.
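The "Protocol mismatch." response is exactly what you'd expect: per RFC 4253, an SSH client's opening line must be an identification string beginning "SSH-". A simplified sketch of the server-side check (not OpenSSH's actual code) shows why sshd logged "Bad protocol version identification":

```python
# Simplified sketch of the banner check an SSH daemon performs on the
# client's first line, per RFC 4253. Not OpenSSH's actual code.

def check_client_banner(line):
    """Decide how a server would treat the client's opening line."""
    if line.startswith("SSH-2.0-") or line.startswith("SSH-1.99-"):
        return "proceed with version exchange"
    return "Protocol mismatch."   # what my Brazilian visitor received

print(check_client_banner("SSH-2.0-OpenSSH_3.8.1p1"))
# -> proceed with version exchange
print(check_client_banner("Yo. I just read your blog about this SSH server"))
# -> Protocol mismatch.
```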

The nice aspect of having this sort of data available is I can see exactly what transpired for this event. I queried and found only one session from the .br IP. I can query on the destination (my) IP for other connections to port 22 TCP, and see other activity from Hong Kong that resulted in no successful connections. There is no guesswork or assumptions that need to be made. I have real data and can make real judgments about what is happening.

Is this the latest and greatest uber 31337 attack? Of course not. Is this the ultimate mega network carrying umpteen billion bps? Nope. However, you will find these methods will help you when something more significant is happening. Here, as elsewhere in my blog, I use small, simple cases to try to illustrate lessons from bigger cases that may not be suitable for public discussion.

Wednesday, February 07, 2007

Arbor Launches ATLAS

If you didn't see the announcement, you might like perusing Arbor Network's new Active Threat Level Analysis System (ATLAS) Initiative, "a multi-stage project to develop the world’s first globally scoped threat analysis network with the help of the service provider community." I'm not sure I totally agree with that description, but the range of data available looks interesting. I plan to mine some of my NSM session data based on information from ATLAS. I applaud Arbor for making this sort of information publicly and freely available.

Tuesday, February 06, 2007

NoVA BUG Founded

If you visit or, you'll see I just created the northern Virginia BSD users group.

Two years ago I expressed interest in helping with this organization, but someone else registered and did nothing with the name or concept.

Following in the modest success of NoVA Sec, I thought it was time to create a BSD users group for the technical professionals in this area. I'll be looking for an organization to host our first meeting, probably in March. If you are interested in participating in these low-key yet high-value gatherings of like-minded BSD users, please leave a comment at the NoVA BUG blog. I think we'll be able to recruit someone to host a mailing list fairly soon. Thank you!

Snort Report 3 Posted

My third Snort Report has been posted. Using the snort.conf file built in the second Snort Report, I show how Snort can detect suspicious activity without using any rules or dynamic preprocessors. Granted, the examples are somewhat limited, but you get the idea. The purpose of these articles is to develop an intuitive understanding of Snort's capabilities, starting with the basics and becoming more complicated.

Friday, February 02, 2007

Single-Digit Security Service Providers

Yesterday I learned that more friends of mine from Foundstone have departed to start their own companies. I could probably list a dozen such companies with whom I do work, from whom I get leads, or to whom I pass leads. It seems this is a really popular way for security specialists to do work they enjoy without the burden of corporate management.

I think clients like this approach because they always interact directly with the people doing the work. They can target specialists and only bring in the people they need. When I am hired for a project that extends beyond network-centric monitoring, response, and/or forensics, I call on one or more friends I trust. For example, one client needs help with monitoring, infrastructure, and applications, so I am driving to the client with the best guys I know for each subject.

I wonder if it might be useful for all of us "single-digit security service providers" (i.e., those of us with fewer than ten employees) to meet, perhaps at Black Hat USA? So many people asked if I was attending Black Hat last year, but I didn't make it. This year I think I will attend, and it might be cool for all of the security small business owners to meet and share war stories and capabilities. I'd like to expand my list of trusted colleagues, but I usually only feel comfortable recommending another person after I've met them and hopefully seen what skills they offer. This is related to my personal LinkedIn policy.

While I know a lot of people at bigger companies, I'm never really going to call on a large company for help unless the project is beyond what I could do with a small team. So, please don't be offended if you want to attend this meeting but work for a big consulting firm or defense contractor. Your company doesn't need any help from my company, believe me!

If there's interest in large companies looking to subcontract work to small companies, I think we can talk about arranging a second meeting for that sort of social networking. I do that too and so do my friends. If you work at a large company and want to meet potential subcontractors, also please email me and we'll set up a second meeting to accommodate those interests.

If either of these meetings at Black Hat sound like a good idea, please comment here and/or email taosecurity [at] gmail [dot] com. Thank you.

Consider This Scenario

The other day I posted I Am Not Anti-Log. I alluded to the fact that I am not a big log fan but I do see the value of logs. This post will give you an indication as to why I prefer network data to logs.

Yesterday morning I installed OSSEC on the one system I expose to the Internet. OSSEC is really amazing in the sense that you can install it and immediately it starts parsing system logs for interesting activity.

The system on which I installed OSSEC only offers OpenSSH to the world. Therefore, you could say I was surprised when the following appeared in my Gmail inbox this morning:

OSSEC HIDS Notification.
2007 Feb 02 06:25:01

Received From: macmini->/var/log/auth.log
Rule: 40101 fired (level 12) -> "System user sucessfully logged to the system."
Portion of the log(s):

Feb 2 06:25:01 macmini su[14861]:
(pam_unix) session opened for user nobody by (uid=0)

I don't know what that means, but I don't feel good about it. At this point I know what everyone is thinking -- SSH to host macmini and look around! Don't do it. If my macmini box is owned, the last thing I want is for the intruder to know that I know. The only defense when you suspect a box is compromised is to play dead and stupid. So where do I go to learn more about what's happened?

I do have one log-centric option. If macmini is running Syslog and forwarding its logs to a central collector, I should check the collector for unusual logs from macmini. However, if I check the Syslog server and see nothing unusual, does that mean anything? Not really! The lack of log messages may indicate that whatever the intruder did was not logged by the victim. Maybe the first act involved killing remote logging! This is one of the reasons I am not a big fan of logs. The absence of logs does not confirm integrity. What can I do now?
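For reference, the forwarding setup is a one-liner in BSD-style syslog. This is a minimal sketch, assuming a stock /etc/syslog.conf; "loghost" is a placeholder for the collector's name or IP:

```
# /etc/syslog.conf fragment on macmini (sketch)
auth,authpriv.*          /var/log/auth.log
*.*                      @loghost
```

Even with this in place, the collector only ever sees what the victim chooses to send -- which is the whole problem.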

My best option is to hop onto my NSM sensor and look for suspicious connections to or from macmini around the time I got this OSSEC alert. For example, the following capture shows part of the screen output showing sessions from 0800 GMT to about 1600 GMT. My OSSEC event occurred at 1125 GMT. This search for inbound sessions shows only ICMP at the time of the OSSEC event. A similar check for outbound sessions did not reveal anything interesting. Therefore, the OSSEC event was probably caused by some sort of daemon running on macmini, not by a login from a remote unauthorized user.

Notice the difference between the data presented in the network sessions vs the host logs. The absence of suspicious session data means no remote intruder interacted with the system during the time in question. In other words, a lack of a record showing an inbound OpenSSH session does mean no OpenSSH sessions occurred at the time in question. If you wonder if some sort of advanced covert channel ICMP-fu is happening, I have the full content validating each of the ICMP "sessions" and could investigate them if I so desired.

Also keep in mind the value of collecting session and full content data independent of alerts. If I waited for an alert before starting to log sessions and/or full content, I'd be in the same boat with the host-centric loggers. By doing content-neutral logging (i.e., grab everything, continuously) I can look for suspicious activity regardless of the presence or absence of alerts.
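The difference is easy to see in miniature. These are toy records with invented field names, not real NSM data:

```python
# Toy contrast between alert-driven collection and content-neutral
# collection. Records and field names are invented for illustration.

sessions = [
    {"id": 1, "dst_port": 22, "alerted": False},
    {"id": 2, "dst_port": 22, "alerted": True},   # the one that fired
    {"id": 3, "dst_port": 22, "alerted": False},
]

# Alert-driven: only traffic that tripped a rule survives.
alert_driven = [s for s in sessions if s["alerted"]]

# Content-neutral: everything survives, so you can ask new questions later.
content_neutral = list(sessions)

print(len(alert_driven), len(content_neutral))   # -> 1 3
```

Only the content-neutral store can answer "what else touched port 22 that week?" -- the question that actually matters during an investigation.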

Thursday, February 01, 2007

TaoSecurity 2007 Training Schedule

I just posted the TaoSecurity 2007 Training Schedule on my company Web site. I didn't include all of the places I might be teaching this year. All of the public classes are tentative at this point, but I am working on securing hosting facilities. You'll notice I plan to conduct six public classes across the US, and I am appearing at a few overseas conferences too -- including a one-day public class in Sydney, Australia.

If you would like to support my bid to teach at Black Hat USA Training (28-31 July 2007) in Las Vegas, NV, please email Ping Look via ping [at] blackhat [dot] com.

Email training [at] taosecurity [dot] com for advance details on the classes listed below. Registration information for public classes will be posted shortly.

I maintain the latest schedule at TaoSecurity training.

If you would like me to conduct a private class at your facility, please email training [at] taosecurity [dot] com.

Thank you. I hope to meet you in 2007!

Enemy-Centric vs Population-Centric Security

Gunnar Peterson pointed me to a great blog post he wrote called Protect the Transaction. He quotes Dave Kilcullen's post Two Schools of Classical CounterInsurgency, which discusses the difference between “enemy-centric” and “population-centric” counter-insurgency operations.

I consider two responses to these posts. First, when monitoring, you can take a threat-centric or an asset-centric approach to monitoring insider threats. This is especially true when monitoring inside an organization. As I teach in my Network Security Operations class, threat-centric monitoring places sensors closer to the suspected intruders (rogue sys admins, curious call center workers, etc.) while asset-centric monitoring places sensors closer to valuable resources (source code repositories, payroll servers, etc.) Sometimes you can follow both approaches, but that usually ends up in a "monitor everywhere" style that can be cost- and operationally-prohibitive. Keep in mind that defenses are (or should be) collapsing around the item of value, which Gunnar calls the transaction. He and I would agree that data is the key resource, so resistance, detection, and response should focus on that element.

Second, in terms of threats and assets in general (i.e., "enemies" and "populations"), we as enterprise defenders can really only influence the asset or "population" variable. We address that aspect through design, architecture, secure coding, countermeasures, and so on. Only law enforcement or the military can address threats or "enemies" by prosecuting or eliminating them.

Keith Jones on Forensics

Keith Jones, my friend from Jones, Rose, Dykstra and Associates and coauthor of Real Digital Forensics, wrote The Real World of Computer Forensics for CMP. It's a good read.

Keith, Curtis (Rose) and I are discussing writing Real Digital Forensics 2, which will be fun to develop. We're considering writing a series of cases involving a single enterprise, but involving a wide variety of incident types and data sources. I don't see the book on shelves before 2008, though. It's a lot of work simply creating the evidence for analysis and inclusion on a DVD.