Saturday, March 31, 2007

Help Johnny Long Go to Uganda

Long-time readers of my blog know I severely limit the number of non-technical stories I write here. I've probably written fewer than a dozen in over four years. This one definitely deserves to be posted, however.

I shook hands with Johnny Long at ShmooCon last week, but we didn't get a chance to chat. If you don't know Johnny Long, you haven't paid attention to the scene during the last few years! In short, Johnny invented Google hacking, and he's one of the nicest guys you could meet at a security conference.

Today I received an email from Johnny stating that he and his wife Jen are flying to Uganda in May to do missionary work. He's working for AIDS Orphans Education Trust. In his usual low-key manner, he's asking for help. He didn't specifically ask people outside of his email addressees to help, but I figure there are a lot of people who could contribute a few dollars to help defray the costs he and his wife must bear to fly and live in Uganda.

His trip is going to cost $4200 and I can guarantee not a penny will be wasted. How often do you get a chance to personally assist someone you know? Johnny has decided to crawl out of his digital shell and try to make a difference in the real world. If you want to join me in helping Johnny and his wife, send a contribution via PayPal to johnny [at] ihackstuff [dot] com.

Thank you for your time.

Friday, March 30, 2007

Full Content Monitoring as a Wiretap

I received the following question today:

When installing Sguil, what legal battles have you fought/won about full packet capture and its vulnerability to open records requests from outside parties? I am getting concerns, from various management, regarding the legal ramifications of the installation of a system similar to Sguil in the state government arena. Do you have any advice for easing their worries? I know how important full data capture is to investigating incidents, and I consider it of paramount importance to the security of our state that we do so. Are there any legal precedents that can be cited?

Before I say anything else it is important to realize I am not a lawyer, I don't play one on YouTube, and I recommend you consult your lawyer rather than listen to anything I might say.

With that out of the way, I have written about wiretaps a few times before. Let me get these generic wiretapping issues out of the way before addressing the question specifically.

The pertinent Federal law is 18 U.S.C. §2511.

A great place to look for commentary and precedents on digital security issues is Orin Kerr's Computer Crime Case Updates. This search for wiretap may or may not be helpful.

Finally, for recent commentary by a lawyer (but not your lawyer), I recommend Sysadmins, Network Managers, and Wiretap Law (.pdf slides) by Alex Muentz. These notes from his LISA 2006 talk are helpful too.

I think the key element of the question originally posed was full packet capture and its vulnerability to open records requests from outside parties. It sounds like the questioner is worried about the discoverability of full content data. I touched on this briefly in The Revolution Will Be Monitored.

My answer to this problem is what I would consider both practical and technically limiting: do not store more full content data than you need. For any modern production network, capturing and storing days or weeks of full content traffic can be an expensive proposition. For example, in one client location I have about 200 GB of space available for full content storage. That space allows me to save a little more than 10 days of full content, even with fairly draconian BPFs limiting what is stored. If for some reason I needed to produce that data to management or attorneys, I could only provide the last 10 days of information. If the event in question occurred prior to that period, I just don't have it.

I do know of some locations that operate massive storage area networks to save TBs of full content. I do not advocate that for anyone but the most specialized of clients. I do recommend collecting the amount of full content (if possible, legally and technically) that works for your investigative window. For example, if you have a requirement to review your alert and session data such that you are never more than 5 days past an event of interest, you might want to save 7 days of full content. From an investigation point of view, more is always better. From a practical point of view, it might be too costly.
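The retention arithmetic above is simple enough to script. A minimal sketch, using the figures from the example (a 200 GB budget consumed in roughly 10 days implies about 20 GB of filtered full content per day); the numbers are placeholders for your own measurements:

```shell
# Estimate how many days of full content a storage budget will hold.
# Figures follow the example in the text; substitute your own observations.
BUDGET_GB=200   # disk available for full content storage
DAILY_GB=20     # filtered full content collected per day

DAYS=$((BUDGET_GB / DAILY_GB))
echo "Roughly $DAYS days of full content retention"
```

If the result falls short of your investigative window (say, 7 days for a 5-day review requirement), grow the budget or tighten the BPFs.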

Remember that any network data collection should be considered a wiretap. Full content is the form of network data that most resembles a wiretap.

With respect to session data, I recommend saving as much of that as possible. In practical terms it comes down to the amount of space you're willing to devote to database files. At the same client I am collecting as many sessions as I can, without filters. 30 days of such session data is producing about 20 GB of uncompressed MySQL table files. As you can see I can store many more days of session data as compared to full content data. That means much more session data is discoverable. I might choose to limit storage of that session data to meet whatever guidance corporate legal counsel might provide.
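The same back-of-the-envelope works for session data, using the figure above of about 20 GB of MySQL tables per 30 days; the 100 GB budget is a hypothetical value:

```shell
# Estimate session data retention from observed database growth.
MONTHLY_GB=20    # uncompressed MySQL table growth per 30 days (from the text)
BUDGET_GB=100    # hypothetical space reserved for session tables

# awk handles the fractional daily rate; shell arithmetic is integer-only
DAYS=$(awk -v b="$BUDGET_GB" -v m="$MONTHLY_GB" 'BEGIN { printf "%d", b * 30 / m }')
echo "Roughly $DAYS days of session data retention"
```

At that rate the session database quickly becomes the larger discovery exposure, which is why a retention limit from corporate counsel may matter more here than disk space.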

Session data is like pen register/trap and trace data, because it does not reveal content. I still treat it like a wiretap, but it probably does not meet the same standards.

Event data, i.e., IDS alerts, takes so little space that it requires no real storage consideration (compared to full content and session data). Therefore, the primary limiting factors are legal and policy, not technical.

I think anyone who really wants a better answer would do well to check out Prof. Kerr's list, and potentially ask him. Alex Muentz would be another good resource.

Threat Deterrence, Mitigation, and Elimination

A comment on my last post prompted me to answer here. My thesis is this: a significant portion, if not the majority, of security in the analog world is based on threat deterrence, mitigation, and elimination. Security in the analog world is not based on eliminating or applying countermeasures for vulnerabilities. A vulnerability-centric approach is too costly, inconvenient, and static to be effective.

Consider the Metro subway in DC, pictured above. There are absolutely zero physical barriers between the platform and the trains. If evil attacker Evelyn were so inclined, she could easily push a waiting passenger off the platform into the path of an arriving train, maiming or killing the person instantly.

Why does this not happen (regularly)? Evelyn is presumably a rational actor, and she is deterred by vigilante justice and the power of the legal system. If she killed a Metro passenger in the state of Virginia she would probably be executed herself, or at the very least spend the rest of her life in prison. Hopefully there are few people like Evelyn in the world, but would more Metro passengers be murdered if there were no attribution or apprehension of the killers?

How do you think the Metro board would react to such an incident?

  1. Build barriers to limit the potential for passengers to land in front of moving trains

  2. Screen passengers as they enter Metro stations

  3. Mandate trains to crawl within reach of waiting passengers

  4. Add Metro police to watch for suspicious individuals

  5. Add cameras to watch all Metro stations

  6. Lobby Congress to increase penalties


My ranking is intentional. 1 would never happen; it is simply too costly when weighed against the risks. 2 would be impossible to implement in any meaningful fashion and would provoke a public backlash. 3 might happen for a brief period, but it would be abandoned because it would reduce the number of trains carrying passengers. 4 might happen for a brief period as well, but the cost of additional personnel makes it an unlikely permanent solution; it's also ineffective unless an officer is right next to a likely incident. 5 and 6 could happen, but they are only helpful for deterrence -- which is not prevention.

Earlier I said Evelyn is a rational actor, so she could presumably be deterred. She could also be mitigated or eliminated. Imagine if Evelyn's action was a ritual associated with gang membership. Authorities could identify and potentially restrict gang members from entering the Metro. (Difficult? Of course. This is why deterrence is a better option.) Authorities could also infiltrate and/or destroy the gang.

Irrational actors cannot be deterred. They may be mitigated and/or eliminated.

Forces of nature cannot be deterred either. Depending on their scope they may be mitigated, but they probably cannot be eliminated. Evelyn's house cannot be built for a reasonable amount of money to withstand a Category 5 hurricane. Such a force of nature cannot be deterred or eliminated. Given a large enough budget Evelyn's house could be built to survive such a force, so mitigation is an option. Insurance is usually how threats like hurricanes are mitigated, however.

Everyone approaches this problem via the lens of their experience and capabilities. Coders think they can code their way out of this problem. Architects think they can design their way out. I am mainly an operator and, in some ways, an historian. I have seen in my own work that prevention eventually fails, and by learning about the past I have seen the same. In December 2005 I wrote an article called Engineering Disasters for a magazine, and in the coming weeks a second article with more lessons for digital security engineers will be published in a different venue.

I obviously favor whatever cost-effective, practical trade-offs (not solutions) we can implement to limit the risks facing digital assets. I am not saying we should roll over and die, hoping the authorities will catch the bad guys and prevent future crimes. Nevertheless, the most pressing problem in digital security is attribution and apprehension of those perpetrating crimes involving information resources. Until we take the steps necessary to address that problem, no amount of technical vulnerability remediation is going to matter.

Thursday, March 29, 2007

Remember that TJX Is a Victim

Eight years ago this week news sources buzzed about the Melissa virus. How times change! Vulnerabilities and exposures are being monetized with astonishing efficiency these days. 1999 seems so quaint, doesn't it?

With the release of TJX's 10-K to the SEC, all news sources are discussing the theft of over 45 million credit card numbers from TJX computers. I skimmed the 10-K but didn't find details on the root cause. I hope this information is revealed in one of the lawsuits facing TJX. Information on what happened is the only good that can come from this disaster.

It's important to remember that TJX is a victim, just as its customers are victims. The real bad guys here are the criminals who compromised TJX resources and stole sensitive information. TJX employees may be found guilty of criminal negligence, but that doesn't remove the fact that an unauthorized party attacked TJX and stole sensitive information. Unfortunately I believe the amount of effort directed at apprehending the offenders will be dwarfed by the resources directed at TJX. That will leave those intruders and others like them to continue preying on other weak holders of valuable information.

Update: At least US credit card holders don't have it as bad as our friends in the UK.

VMware Server 1.0.2 on Ubuntu 6.10

Previously I documented installing VMware Workstation 6 Beta on my Thinkpad x60s. I decided to uninstall Workstation and install VMware Server 1.0.2. I should have used the vmware-uninstall.pl script, but even without it I managed to remove the old Workstation installation without real trouble.

Running Server on Ubuntu 6.10 (desktop) required me to add a few packages. I found Martti Kuparinen's installation guide very helpful. I had to add the following packages to ensure a smooth Server installation.

sudo apt-get install xinetd
sudo apt-get install libX11-dev
sudo apt-get install xlibs-dev

I did not have to install linux-kernel-headers.

I was really impressed that Martti provided a patch for two scripts that did not work correctly out of the box. When I applied the patch I was able to start VMware's Web server and access it via my browser.

richard@neely:/tmp$ wget http://users.piuha.net/martti/comp/ubuntu/httpd.vmware.diff
--13:52:24-- http://users.piuha.net/martti/comp/ubuntu/httpd.vmware.diff
=> `httpd.vmware.diff'
Resolving users.piuha.net... 193.234.218.130
Connecting to users.piuha.net|193.234.218.130|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2,973 (2.9K) [text/plain]

100%[====================================>] 2,973 --.--K/s

13:52:25 (1.81 MB/s) - `httpd.vmware.diff' saved [2973/2973]

richard@neely:/tmp$ cd /
richard@neely:/$ sudo patch -b -p0 < /tmp/httpd.vmware.diff
Password:
patching file /etc/init.d/httpd.vmware
patching file /usr/lib/vmware-mui/src/lib/httpd.vmware
richard@neely:/$ sudo netstat -natup | grep vm
tcp 0 0 0.0.0.0:8333 0.0.0.0:*
LISTEN 5205/httpd.vmware
tcp 0 0 0.0.0.0:8222 0.0.0.0:*
LISTEN 5205/httpd.vmware

Thanks to this guide I made this addition to /etc/xinetd.d/vmware-authd so the vmware console on port 902 TCP didn't listen on all interfaces:

bind = 127.0.0.1
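For context, the resulting stanza might resemble the following; everything except the bind line is typical of a Server 1.0.x install and should be treated as illustrative rather than authoritative:

```
service vmware-authd
{
        disable         = no
        port            = 902
        socket_type     = stream
        protocol        = tcp
        wait            = no
        user            = root
        server          = /usr/sbin/vmware-authd
        type            = unlisted
        bind            = 127.0.0.1
}
```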

To prevent the Web server from starting at boot and potentially listening on a hostile network, I removed the x bit from its script in /etc/init.d. I can start it manually when needed.

richard@neely:~$ sudo chmod -x /etc/init.d/httpd.vmware
richard@neely:~$ sudo sh /etc/init.d/httpd.vmware start
Starting httpd.vmware: done

While installing the packages I noticed the suggestion to run apt-get autoremove, so I ran it once everything was installed.

richard@neely:~$ sudo apt-get autoremove
Password:
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages were automatically installed and are no longer required:
libnl1-pre6 network-manager libnm-util0 dhcdbd
The following packages will be REMOVED:
dhcdbd libnl1-pre6 libnm-util0 network-manager
0 upgraded, 0 newly installed, 4 to remove and 0 not upgraded.
Need to get 0B of archives.
After unpacking 1217kB disk space will be freed.
Do you want to continue [Y/n]? y
(Reading database ... 115360 files and directories currently installed.)
Removing network-manager ...
* Stopping NetworkManager daemon [ ok ]
* Stopping NetworkManager dispatcher [ ok ]
Removing dhcdbd ...
Removing libnl1-pre6 ...
Removing libnm-util0 ...

I have VMware Server running well on Ubuntu now.

Wednesday, March 28, 2007

Mesh vs Chain

When Matasano Chargen suggested reading Nate Lawson's blog, I immediately added it to my Bloglines collection. Today I read Building a Mesh Vs a Chain and Mesh Approach vs Defense-in-Depth. Nate's basic premise is this:

When explaining the desired properties of a security system, I often use the metaphor of a mesh versus a chain. A mesh implies many interdependent checks, protection measures, and stopgaps. A chain implies a long sequence of independent checks, each assuming or relying on the results of the others.

With a mesh, it’s clear that if you cut one or more links, your security still holds. With a chain, any time a single link is cut, the whole chain fails.


He explains why mesh != defense-in-depth:

A commenter suggested by email that the mesh concept in my previous post is very similar to defense-in-depth. While they are similar, there are some critical differences that are especially important when you apply them to software protection.

Defense-in-depth comes from military history where a defender would build a series of positions and then fall back each time the enemy advanced forward through the first positions. This works in security as well. For instance, a web server may be run in a restricted chroot environment so that if the web server is compromised, damage is limited to the files in the restricted directory, not the whole system.

The mesh model, on the other hand, involves a series of interlocking checks and enforcement mechanisms. There is nothing to fall back to because all the defenses are active at the same time, mutually reinforcing each other. This concept is less common than defense-in-depth for network security use due to the difficulty of incorporating it into system designs. However, it is extremely common in cryptography.


I suggest reading both posts for more information. I found this design idea very interesting, but I agree that implementing it outside of cryptography seems difficult. It would be neat to devise more mesh-based systems.

Security Operations Fundamentals

Last year I last wrote:

Marcus [Ranum] noted that the security industry is just like the diet industry. People who want to lose weight know they should eat less, eat good food, and exercise regularly. Instead, they constantly seek the latest dieting fad, pill, plan, or program -- and wonder why they don't get the results they want!

You might be wondering about the digital security equivalent to eating less, eating good food, and exercising regularly. Addressing that subject adequately would take more than this blog post, but I want to share the steps I use as a consultant when encountering a new client's enterprise.

You'll notice that these steps fit nicely within Mike Rothman's Pragmatic CSO construct. These are a little more specific and focused because I am not acting as a Chief Security Officer when I work as a consultant.

  1. Instrument sample ingress/egress points. What, monitor first? That's exactly right. Start collecting NSM data immediately (at least session data; preferably alert, full content, session, and statistical data). It's going to take time to progress through the rest of the steps that follow. While working on the next steps your network forensics appliance can be capturing data to be analyzed later.

  2. Understand business operations. Replace business with whatever term makes you more comfortable if you are a .gov, .mil, .edu, etc. You've got to know the purpose of the organization before you can understand the data it needs to do its job. This requires interviewing people who know this, preferably business owners and managers.

  3. Identify and prioritize business data. Once you understand the purpose of the organization, you should determine the data it needs to function. Not all data is equal, so perform a relative ranking to determine the most important down to least important. This work must be done with the cooperation of the businesses; it cannot be security- or consultant-driven.

  4. Identify and prioritize systems processing business data. By systems I mean an entire assemblage for processing data, not individual computers. Systems include payroll processing, engineering and development, finance projections, etc. Prioritize these systems as you did the data they carry. Hopefully these two sets of rankings will match, but perhaps not.

  5. Identify and prioritize resources comprising systems. Here we start dealing with individual servers, clients, and infrastructure. For example, the database containing payroll data is probably more important than the Web server offering access to clients. Here tech people are more important than managers because tech people build and maintain these devices.

  6. Define policy, profile resources, and identify violations. Steps 2-5 have gotten you to the point where you should have a good understanding of the business and its components. If you have a policy, review it to ensure it makes sense given the process thus far. If you haven't yet defined a policy for the use of your information resources, do so now.

    Next, profile how those resources behave to determine if they are supporting business operations or if they are acting suspiciously or maliciously. I recommend taking a passive, traffic-centric approach. This method has near-zero business impact and, if executed properly, can be done without alerting malicious insiders or outsiders. Here you use the data you started collecting in step 1.

  7. Implement short term incident containment, investigation, and remediation. I have yet to encounter an enterprise that doesn't immediately find a hot-button item in step 6. Put out those fires and score some early wins before moving on.

  8. Plan and execute instrumentation improvements. Based on step 7, you'll realize you want visibility across the entire enterprise. Increase the number of sensors to cover all of the areas you want. This step encompasses improved host-centric logging and other visibility initiatives.

  9. Plan and execute infrastructure improvements. You'll probably decide to implement components of my Defensible Network Architecture to take a more proactive stance towards defending the network. You may be able to reconfigure existing processes, products, and people to act in a more secure manner. You may need to design, buy, or train those elements.

  10. Plan and execute server improvements. Here you decide what, if any, changes should be made to the resources offering business data to users, customers, partners, and the like. Maybe you want to encrypt data at rest as well as in motion. Maybe you decide to abandon an old Web framework for a new one... and so on.

  11. Plan and execute user platform improvements. This step changes the gear users rely upon, so it's the last step. Users are most likely to resist that which they can immediately see, so tread carefully. Improvements here involve OS upgrades or changes, moves to thin clients, removal or upgrades of software, and similar issues.

  12. Measure results and return to step 1. I recommend using metrics like those I described here. Measure Days since last compromise of type X, System-days compromised, Time for a pen testing team of [low/high] skill with [internal/external] access to obtain unauthorized [unstealthy/stealthy] access to a specified asset using [public/custom] tools and [complete/zero] target knowledge, and so on.


You may notice steps 8-11 reflect my TaoSecurity Pyramid of Trust. That is no accident.

It is also important to realize that steps 8-11 are based on data collected in step 1 and analyzed in step 6. Enterprise security improvements should not be driven by the newest products or concepts. Improvements should be driven by understanding the enterprise, and specifically the network. Otherwise, you are playing soccer-goal security, making assumptions rather than judgments.

Only when you understand what is happening in the enterprise should you consider changing it. Only when you realize existing processes, products, and/or people are deficient should you consider changes or additions. Think in terms of what problem am I trying to solve, not what new process, product, or person is now available.

Tuesday, March 27, 2007

Ayoi on the Importance of NSM Data

At my ShmooCon talk I provided a series of case studies showing the importance of Network Security Monitoring data. The idea was to ask how it would be possible to determine if an IDS alert represented a real problem if high-quality data didn't exist. Alert management is not security investigation, and unfortunately most products and processes implement the former while the latter is truly needed.

I noticed that Ayoi in Malaysia posted a series of blog stories showing his investigative methodology using NSM data and Sguil (Not Only Alert Data parts I, II, and III). These posts demonstrate several alerts and compare data available via an alert management tool like BASE versus a security investigation tool like Sguil. I am glad to see these sorts of stories because they show how people in the trenches do their jobs.

I have yet to meet an analyst -- someone responsible for finding intrusions -- who rejects my methods or the need for collecting NSM data. Almost everyone who argues against these methods is not directly responsible for the technical aspects of personally detecting and responding to intrusions.

Monday, March 26, 2007

SANS Software Security Institute

Today I attended a free three-plus-hour seminar offered by the new SANS Software Security Institute. This is part of SANS dedicated to software security. I recommend reading their press release (.pdf) for the full scoop, but basically SANS is introducing a Secure Programming Skills Assessment, additional training (eventually), and a certification path. Other people will summarize the program, so I'd like to share a few thoughts from the speakers at today's event.

  • Michael Sutton from SPI Dynamics said that the idea of assembling a team of security people to address enterprise vulnerabilities worked (more or less) for network and infrastructure security because the team could (more or less) introduce elements or alter the environment sufficiently to improve their security posture. The same approach is not working and will not work for application security because addressing the problem requires altering code. Because code is owned by developers, the security team can't directly change it. This is an important point for those who think they can just turn their CSIRT loose on the software security problem in the same way they attacked network security.

    Michael also said no security is trustworthy until trusted. (He actually said "trusted." There's a difference. Anyone can "trust" software. The question is whether it is worthy of trust, i.e., "trustworthy.")

  • Alan Paller made a few comments. He said we have 1.5 million programmers in the world, so training all of them probably isn't an option. He said SANS is working with TippingPoint to create a "Programmer's @Risk" newsletter like the existing vulnerabilities @Risk newsletter. Alan repeated a recommendation made by John Pescatore that organizations should run security tests against bids as well as upon acceptance.

    Alan noted that software testing should be considered a part of a "building permit" (pre-development) and a second "occupancy permit" (deployment in the enterprise). Alan also said PCI is the only worthwhile security standard. Others just require writing about security, while PCI requires a modicum of doing security. (Mark Curphey disagrees!)

  • Jim Routh of DTCC said it's important for developers to recognize that security flaws are software defects, and not the security team's problem! His team of 450 in-house developers uses three stages of testing: 1) white box for developers; 2) black box for integrators; and 3) third party for deployment.

  • Mike Willburn from the FBI said FISMA C&A results in "well-documented" systems that score well on report cards but are "full of holes." Bravo.

  • Andrew Wing from Teranet said he doesn't let an in-house project progress to user acceptance testing unless it scores a certain rank using an automated software security assessment tool.

  • Jack Danahy from Ounce Labs stressed the importance of contract language for procurement. The OWASP Legal Project also offers sample language. Alan stressed the need to build security into contracts, rather than relying on the vague concept of "negligence" when security isn't explicitly included in a contract.

  • Michael Weider from Watchfire said he fears user-supplied content will be the next exploitation vector. I shuddered at the horror of MySpace and the like.

  • Steve Christey mentioned SAMATE (Software Assurance Metrics And Tool Evaluation).


That's what I can document given the time I have. Thanks to SANS for their leadership in this endeavor.

Manipulating Packet Captures

While capturing traffic at Hack or Halo I realized the timestamps on the packets were off by one hour. Apparently I didn't patch this infrequently used Hacom box for the recent DST change.

I captured traffic using Sguil's log_packets.sh script, which uses Snort to write a new full content trace every hour. For the first round of the contest, the script produced two traces. I combined them using Mergecap, bundled with Wireshark.

richard@neely:/var/tmp/shmoocon2007$ mergecap -w shmoocon_hack_rd1.pcap
snort.log.1174770982 snort.log.1174773600

The Capinfos program accompanying Wireshark summarizes the new trace:

richard@neely:/var/tmp/shmoocon2007$ capinfos shmoocon_hack_rd1.pcap
File name: shmoocon_hack_rd1.pcap
File type: Wireshark/tcpdump/... - libpcap
Number of packets: 719534
File size: 155340234 bytes
Data size: 143827666 bytes
Capture duration: 4587.056482 seconds
Start time: Sat Mar 24 17:17:41 2007
End time: Sat Mar 24 18:34:08 2007
Data rate: 31355.11 bytes/s
Data rate: 250840.89 bits/s
Average packet size: 199.89 bytes

I decided to alter the timestamps using Editcap, also packaged with Wireshark.

richard@neely:/var/tmp/shmoocon2007$ editcap -t 3600 shmoocon_hack_rd1.pcap
shmoocon_hack_rd1_timeadj.pcap

Now the timestamps are correct.

richard@neely:/var/tmp/shmoocon2007$ capinfos shmoocon_hack_rd1_timeadj.pcap
File name: shmoocon_hack_rd1_timeadj.pcap
File type: Wireshark/tcpdump/... - libpcap
Number of packets: 719534
File size: 155340234 bytes
Data size: 143827666 bytes
Capture duration: 4587.056482 seconds
Start time: Sat Mar 24 18:17:41 2007
End time: Sat Mar 24 19:34:08 2007
Data rate: 31355.11 bytes/s
Data rate: 250840.89 bits/s
Average packet size: 199.89 bytes

I'm getting these traces to Shmoo now so they can be shared.

Sunday, March 25, 2007

ShmooCon 2007 Wrap-Up

ShmooCon 2007 ended today. Only four talks occurred today (Sunday), and only two of them (Mike Rash, Rob King/Rohit Dhamankar) really interested me. Therefore, I went to church with my family this morning and took lead on watching the kids afterwards. I plan to watch those two interesting talks once they are released as video downloads. (It takes me 1 1/2 - 2 hours each way into and out of DC via driving and Metro, so I would have spent more time on the road than listening to speakers.)

I also left right after Bruce Potter's introductory comments on Friday afternoon. If it hadn't been for the NoVA Sec meeting I scheduled Friday at 1230, I probably would have only attended Saturday's sessions. I heard Avi Rubin's 7 pm keynote was good, and I would have liked to watch Johnny Long's talk. Otherwise I thought spending time with my family was more important.

That leaves Saturday. I spent the whole day at ShmooCon, from the first talk to the end of Hack or Halo. I began the day with Ofir Arkin from Insightix. (I actually spent about half an hour chatting with Ofir Friday afternoon, which was cool. I also spent time Friday speaking with several people I recognized.) Ofir demonstrated that just about all Network Admission Control concepts and implementations are broken. He only covered about half his material, but I left wondering who would bother spending thousands or millions on NAC when it doesn't seem to work and is fighting the last war anyway.

Ofir emphasized that knowledge of the enterprise is the key to network defense. He pointed out that NAC products which provide a shared medium quarantine area are exactly where an intruder wants his machine to be delivered. Once in that area he can attack the weakest, non-compliant systems on the same subnet or VLAN used by the quarantine. Using PVLANs can avoid this problem, but only if they are not subject to VLAN hopping attacks. Ofir questioned whether per-port security is ever feasible, especially in an age of increasing use of VMs.

One basic take-away for me was this: if I find myself on a network requiring NAC, do the following.

  1. Find the nearest printer.

  2. Unplug the network cable.

  3. Connect the network cable from the printer to a hub, and connect the hub to the network port.

  4. Connect my laptop to the hub.

  5. Sniff printer's MAC address and IP address.

  6. Disconnect the printer.

  7. Assign the printer's MAC and IP address to my laptop, and access the network.


While this will not work everywhere, it's probably going to work in enough places to make NAC a questionable prospect for physical defense. Hosts connecting via VPN are another issue.
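For the curious, steps 5 through 7 can be sketched with standard Linux commands. This is only an illustration; the interface name eth0 and the printer's MAC and IP addresses are placeholders for whatever you observe on the target network.

```shell
# Step 5: sniff through the hub to learn the printer's MAC and IP
# (-e prints link-level headers so the MAC address is visible).
sudo tcpdump -n -e -i eth0 -c 20

# Step 7: after disconnecting the printer, clone its identity.
# 00:11:22:33:44:55 and 192.168.1.50 are hypothetical values.
sudo ifconfig eth0 down
sudo ifconfig eth0 hw ether 00:11:22:33:44:55
sudo ifconfig eth0 192.168.1.50 netmask 255.255.255.0 up
sudo route add default gw 192.168.1.1
```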

After Ofir spoke I saw Joel Wilbanks, Matt Fisher, and Mike Murphy talk about incident response when Web applications are attacked. They made the point that Web app incidents don't usually leave artifacts (think files on the hard drive) on the victim. Web app forensics becomes a log analysis exercise. If no logs exist (Web, database, OS, etc.), you're hosed. They recommended populating database tables with honeytokens and writing custom IDS signatures to alert on the presence of those tokens in network traffic.
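As a rough sketch of that approach, a custom Snort signature watching for a seeded honeytoken might look like the following. The token string and SID are hypothetical; the point is that the token should never legitimately cross the wire, so any hit deserves investigation.

```
alert tcp $HOME_NET any -> $EXTERNAL_NET any \
  (msg:"Database honeytoken leaving the network"; \
  content:"HONEY-TOKEN-4421"; nocase; \
  classtype:policy-violation; sid:1000001; rev:1;)
```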

During their presentation several attendees questioned the role of SSL for inbound connections. The speakers recommended terminating SSL at an accelerator and passing the clear text by an IDS before sending it to the Web server or re-encrypting it. At least one of the attendees was shocked -- shocked -- to consider passing "sensitive" data in the clear like that. I have never understood this argument. The question is simple: do you care to know what is being carried in SSL, or do you not care? If you do care (and you should), architect your enterprise so you have visibility into what's happening. If you don't care, tell me so I can avoid doing business with you.

As far as SSL is concerned, I consider inbound SSL a solved problem. Outbound SSL, as might be used for a command and control channel, is not solved -- unless you want to break SSL and teach users to accept a man-in-the-middle attack scenario. I worry about outbound SSL, not inbound.

I had lunch with Joe Stewart, so in some sense I didn't really miss his talk. He was nice enough to share his thoughts with me on his next Sandnet and other projects.

My talk happened at 1300. This means I missed Billy Hoffman releasing Jikto, so I plan to download his talk (and Joe's) when available. I was really pleased by the outcome of my own talk. The room was totally filled and people were standing outside the room listening. Thanks to everyone who attended. I wish we had more time for questions, so feel free to leave a comment here or email if you have unanswered issues.

After my talk I listened to Raven talk about backbone security. She is fuzzing key routing protocols (RIP, OSPF, EIGRP, BGP, etc.) by mainly attacking open source implementations. She just got a Cisco 2600 series router so IOS is her next target. If she is getting results doing this work in her spare time sitting in airports, you can only imagine what funded, dedicated teams are doing with budgets for equipment and manpower.

I spent the next hour chatting with familiar faces in the area near the talks. Marty McKeay was there, along with Mike Rash, Jamie Butler, and Bret Padres and Ovie Carroll from the CyberSpeak Podcast. (Sorry I couldn't get back to you guys in time!)

At 1600 I squeezed into Dan Kaminsky's talk. Before he started I had a chance to chat briefly with Mike Poor and Ed Skoudis from Intel Guardians. Mike and Marc Sachs (who I saw independently) were not happy with my TCP options analysis. Oh well!

I felt bad for Dan. The poor guy showed remarkable resolve trying to speak, despite an attendee who felt compelled to interrupt every fifth sentence. Dan had to dodge plenty of Shmoo balls while explaining slides with way too many words on them. I think Dan's research is way outside the realm of what most security people do, but probably perfect for a paper at USENIX.

I stayed in the same room to listen to Josh Wright and Mike Kershaw talk about LORCON. As their Web page states: LORCON is "a generic library for injecting 802.11 frames, capable of injection via multiple driver frameworks, without forcing modification of the application code." Basically, if you write a wireless packet injector, you should use LORCON. Don't write something for a specific wireless driver -- let LORCON handle that for you. I was really impressed, especially since I had never seen Mike (author of Kismet) and Josh (lots of tools, cool research) in person. In addition to LORCON they mentioned this WiFi frame injection patch for Wireshark.

When their talk was done I headed over to the Hack or Halo room. I set up my Hacom Lex Twister on a SPAN port (argh, yes, I forgot a tap) and captured the traffic from the Hack contest. I monitored it live with Sguil, which was fun.

Overall, I was again impressed by the organization and manpower demonstrated by ShmooCon. I was less impressed by the overall slate of talks, but I think the quality of attendees compensated for that. The first ShmooCon in 2005 attracted about 350 people. The second had about 800. This year nearly 1200 people attended. I was very thankful to attend and speak and I look forward to at least attending next year.

Update: I forgot to ask -- if you liked my talk, please send feedback to feedback [at] shmoocon [dot] org. Thank you!

Saturday, March 24, 2007

Blogging from ShmooCon Hack or Halo

So much for my lousy camera phone. That's my best attempt to show Sguil monitoring traffic at the ShmooCon Hack or Halo contest. I plan to share the network traffic from the hacking contest when I get the opportunity. Thanks to WXS and the ShmooCon crew for letting me attach a sensor to the network.

Friday, March 23, 2007

Taking the Fight to the Enemy

ShmooCon started today. ShmooCon leader Bruce Potter finished his opening remarks by challenging the audience to find anyone outside of the security community who cares about security. I decided to take his idea seriously and I thought about it on the Metro ride home.

It occurred to me that the digital security community fixates on vulnerabilities because that is the only aspect of the Risk Equation we can influence. Lines of business control assets, so we can't decrease risk by making assets less valuable. (That doesn't even make sense.) We do not have the power or authority to remove threats, so we can't decrease risk by lowering the attacks against our assets. (Threat mitigation is the domain of law enforcement and the military.) We can only address vulnerabilities, but unless we develop the asset ourselves we're stuck with whatever security the vendor provided.

I would like to hear if anyone can imagine another realm of human endeavor where the asset owner or agent is forced to defend his own interests, without help from law enforcement or the military. The example can be historical, fictional, or contemporary. I'm reminded of Wells Fargo stagecoaches being robbed as they crossed the West, forcing WF to hire private guards with guns to defend company assets in transit. As a fictional example, Sherlock Holmes didn't work for Scotland Yard; victims hired the Great Detective to solve crimes that the authorities were too slow or unwilling to handle.

As I've said many times before, we are wasting a lot of time and money trying to "secure" systems when we should be removing threats. I thought of this again last night while watching Chris Hansen work with law enforcement to take more child predators off the streets. Imagine if I didn't have law enforcement deterring and jailing criminals like that. I'd have to wrap my kids in some sort of personal tank when I send them to school, and they'd still probably end up in harm's way. That's the situation we face on the Internet. There's no amount of bars over windows, high fences, or other defenses that will stop determined intruders. Removing or deterring the intruders is history's lesson.

This FCW article has the right idea:

The best defense against cyberattacks on U.S. military, civil and commercial networks is to go on the offensive, said Marine Gen. James Cartwright, commander of the Strategic Command (Stratcom), in March 21 testimony to the House Armed Services Committee.

“History teaches us that a purely defensive posture poses significant risks,” Cartwright told the committee. He added that if “we apply the principle of warfare to the cyberdomain, as we do to sea, air and land, we realize the defense of the nation is better served by capabilities enabling us to take the fight to our adversaries, when necessary, to deter actions detrimental to our interests...”

The Stratcom commander told the committee that the United States is under widespread, daily attacks in cyberspace. He added that the country lacks dominance in the cyberdomain and that it could become “increasingly vulnerable if we do not fundamentally change how we view this battle space.”


Put me in, coach. I'm ready to play, today.

Wireless Ubuntu on Thinkpad x60s

I'm used to doing everything manually when running wireless FreeBSD on older laptops. Running Ubuntu has shielded me from some of the command-line configuration I used to perform on FreeBSD. Linux uses different commands for certain tasks. My new laptop also has a different chipset from my old laptop, so I wanted to see if I could get Kismet working on it.

If I want to find wireless networks via the command line I use this command.

richard@neely:~$ sudo iwlist eth1 scan
eth1 Scan completed :
Cell 01 - Address: 00:13:10:65:2F:AD
ESSID:"shaolin"
Protocol:IEEE 802.11bg
Mode:Master
Channel:1
Encryption key:on
Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 6 Mb/s; 9 Mb/s
11 Mb/s; 12 Mb/s; 18 Mb/s; 24 Mb/s; 36 Mb/s
48 Mb/s; 54 Mb/s
Quality=76/100 Signal level=-58 dBm Noise level=-58 dBm
Extra: Last beacon: 68ms ago
...truncated...

If I want to associate with that WAP using WEP I use this command.

richard@neely:~$ sudo iwconfig eth1 essid shaolin channel 1 key KEYDIGITS

I am associated now.

richard@neely:~$ iwconfig eth1
eth1 IEEE 802.11g ESSID:"shaolin"
Mode:Managed Frequency:2.412 GHz Access Point: 00:13:10:65:2F:AD
Bit Rate:54 Mb/s Tx-Power:15 dBm
Retry limit:15 RTS thr:off Fragment thr:off
Power Management:off
Link Quality=76/100 Signal level=-58 dBm Noise level=-59 dBm
Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0
Tx excessive retries:0 Invalid misc:2909 Missed beacon:0

I can grab an IP via DHCP.

richard@neely:~$ sudo dhclient eth1
Internet Systems Consortium DHCP Client V3.0.4
Copyright 2004-2006 Internet Systems Consortium.
All rights reserved.
For info, please visit http://www.isc.org/sw/dhcp/

Listening on LPF/eth1/00:13:02:4c:30:2d
Sending on LPF/eth1/00:13:02:4c:30:2d
Sending on Socket/fallback
DHCPDISCOVER on eth1 to 255.255.255.255 port 67 interval 8
DHCPOFFER from 192.168.2.1
DHCPREQUEST on eth1 to 255.255.255.255 port 67
DHCPACK from 192.168.2.1
bound to 192.168.2.103 -- renewal in 42728 seconds.

Here is ifconfig output.

richard@neely:~$ ifconfig eth1
eth1 Link encap:Ethernet HWaddr 00:13:02:4C:30:2D
inet addr:192.168.2.103 Bcast:192.168.2.255 Mask:255.255.255.0
inet6 addr: fe80::213:2ff:fe4c:302d/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:4984 errors:19 dropped:2928 overruns:0 frame:0
TX packets:239 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:5491350 (5.2 MiB) TX bytes:188020 (183.6 KiB)
Interrupt:74 Base address:0xc000 Memory:edf00000-edf00fff

I can check my gateway.

richard@neely:~$ netstat -nr -4
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
192.168.2.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1
172.16.250.0 0.0.0.0 255.255.255.0 U 0 0 0 vmnet8
172.16.207.0 0.0.0.0 255.255.255.0 U 0 0 0 vmnet1
0.0.0.0 192.168.2.1 0.0.0.0 UG 0 0 0 eth1

I can change my IP from DHCP to static.

richard@neely:~$ sudo killall dhclient
richard@neely:~$ sudo ifconfig eth1 inet 192.168.2.8 netmask 255.255.255.0 up
richard@neely:~$ ifconfig eth1
eth1 Link encap:Ethernet HWaddr 00:13:02:4C:30:2D
inet addr:192.168.2.8 Bcast:192.168.2.255 Mask:255.255.255.0
inet6 addr: fe80::213:2ff:fe4c:302d/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:5625 errors:20 dropped:2929 overruns:0 frame:0
TX packets:245 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:5492954 (5.2 MiB) TX bytes:192494 (187.9 KiB)
Interrupt:74 Base address:0xc000 Memory:edf00000-edf00
richard@neely:~$ sudo route add default gw 192.168.2.1
richard@neely:~$ netstat -nr -4
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
192.168.2.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1
172.16.250.0 0.0.0.0 255.255.255.0 U 0 0 0 vmnet8
172.16.207.0 0.0.0.0 255.255.255.0 U 0 0 0 vmnet1
0.0.0.0 192.168.2.1 0.0.0.0 UG 0 0 0 eth1

Here are the changes I made to enable Kismet after checking my wireless card.

richard@neely:~$ sudo lshw -businfo | grep eth1
pci@03:00.0 eth1 network PRO/Wireless 3945ABG Network Connection

richard@neely:~$ diff -u /etc/kismet/kismet.conf.orig /etc/kismet/kismet.conf
--- /etc/kismet/kismet.conf.orig 2007-03-23 09:53:28.000000000 -0400
+++ /etc/kismet/kismet.conf 2007-03-23 09:56:00.000000000 -0400
@@ -7,10 +7,10 @@
version=2005.06.R1

# Name of server (Purely for organizational purposes)
-servername=Kismet
+servername=neely

# User to setid to (should be your normal user)
-#suiduser=your_user_here
+suiduser=richard

# Sources are defined as:
# source=sourcetype,interface,name[,initialchannel]
@@ -19,7 +19,7 @@
# The initial channel is optional, if hopping is not enabled it can be used
# to set the channel the interface listens on.
# YOU MUST CHANGE THIS TO BE THE SOURCE YOU WANT TO USE
-source=none,none,addme
+source=ipw3945,eth1,addme

Kismet works fine. While Kismet is operating, eth1 is in monitor mode.

richard@neely:~$ iwconfig eth1
eth1 unassociated ESSID:"shaolin"
Mode:Monitor Frequency=2.412 GHz Access Point: 00:13:10:65:2F:AD
Bit Rate:0 kb/s Tx-Power:16 dBm
Retry limit:15 RTS thr:off Fragment thr:off
Power Management:off
Link Quality:0 Signal level:0 Noise level:0
Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0

When Kismet exits I'm able to cleanly use my original connection.

Thursday, March 22, 2007

Committing Changes to CVS

In my last post I set up CVS so I could upload my Sguil scripts. I decided I would document how I make changes to those scripts and commit them to CVS.

First I needed to check out a copy of the scripts. I made a dev directory and will now use that for all future development.

richard@macmini:~$ export CVS_RSH=ssh
richard@macmini:~$ mkdir dev
richard@macmini:~$ cd dev
richard@macmini:~/dev$ cvs -z3 \
> -d:ext:taosecurity@taosecurity.cvs.sf.net:/cvsroot/taosecurity checkout -P \
> taosecurity_sguil_scripts
taosecurity@taosecurity.cvs.sf.net's password:
cvs checkout: Updating taosecurity_sguil_scripts
U taosecurity_sguil_scripts/README
U taosecurity_sguil_scripts/sancp
U taosecurity_sguil_scripts/sguil_client_install.sh
U taosecurity_sguil_scripts/sguil_database_install_pt1.sh
U taosecurity_sguil_scripts/sguil_database_install_pt2.sh
U taosecurity_sguil_scripts/sguil_sensor_install.sh
U taosecurity_sguil_scripts/sguil_sensor_install_patch.sh
U taosecurity_sguil_scripts/sguil_server_install.sh
U taosecurity_sguil_scripts/sguild_adduser.sh
U taosecurity_sguil_scripts/snort
U taosecurity_sguil_scripts/snort_src_install.sh
richard@macmini:~/dev$ cd taosecurity_sguil_scripts/
richard@macmini:~/dev/taosecurity_sguil_scripts$ ls
CVS sguild_adduser.sh sguil_sensor_install.sh
README sguil_database_install_pt1.sh sguil_server_install.sh
sancp sguil_database_install_pt2.sh snort
sguil_client_install.sh sguil_sensor_install_patch.sh snort_src_install.sh

With the scripts checked out I edit the 'snort' script and commit it.

richard@macmini:~/dev/taosecurity_sguil_scripts$ cvs commit -m \
> "Update for Snort 2.6.1.2." snort
taosecurity@taosecurity.cvs.sf.net's password:
Checking in snort;
/cvsroot/taosecurity/taosecurity_sguil_scripts/snort,v <-- snort
new revision: 1.2; previous revision: 1.1
done

When done I realize I need to change two other scripts, so I change them and commit them at the same time.

richard@macmini:~/dev/taosecurity_sguil_scripts$ cvs commit -m \
> "Update for Snort 2.6.1.3." snort_src_install.sh README
taosecurity@taosecurity.cvs.sf.net's password:
Checking in snort_src_install.sh;
/cvsroot/taosecurity/taosecurity_sguil_scripts/snort_src_install.sh,v <-- snort_src_install.sh
new revision: 1.2; previous revision: 1.1
done
Checking in README;
/cvsroot/taosecurity/taosecurity_sguil_scripts/README,v <-- README
new revision: 1.2; previous revision: 1.1
done
richard@macmini:~/dev/taosecurity_sguil_scripts$

That's it. The changes are visible immediately via Web-based CVS.
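If you'd like to review what changed between revisions later, CVS provides history commands. These are standard CVS operations, run from within the checked-out working copy:

```shell
# Show the full revision history and commit messages for the snort script.
cvs log snort

# Show a unified diff between two revisions.
cvs diff -u -r 1.1 -r 1.2 snort
```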

TaoSecurity CVS at Sourceforge

For a while I've maintained a set of fairly lame scripts for automating installation of certain Sguil components on FreeBSD. These scripts have previously been posted as .tar.gz archives in various places. Today I decided to make use of the TaoSecurity Sourceforge site I created a few months back. From now on you can access my scripts via CVS at that site.

My CVS experience is minimal, although I posted some notes from Sguil a few years ago.

I wanted to document how I set this up, because it was not intuitive. Thanks to Bamm for helping me via IRC. I also found this doc and this how-to helpful.

I decided to maintain my local repository on macmini. I wanted to experiment with a local repository before committing anything to Sourceforge. Here I set up that local repository. I have my scripts in a directory called taosecurity_sguil_scripts. I'm going to call the CVS module taosecurity_sguil_scripts too.

richard@macmini:~$ mkdir cvsroot
richard@macmini:~$ cvs -d /home/richard/cvsroot init
richard@macmini:~$ cd taosecurity_sguil_scripts

I wanted my scripts to have lines in them indicating the version number. Bamm pointed me towards this keyword list, which resulted in me adding the following:

richard@macmini:~/taosecurity_sguil_scripts$ cat sguild_adduser.sh
#!/bin/sh
#
# $Id$ #
#
SGUIL=sguil-0.6.1
LD_LIBRARY_PATH=/usr/local/lib/mysql
export LD_LIBRARY_PATH
cd /usr/local/src/$SGUIL/server/
./sguild -c sguild.conf -u sguild.users -adduser sguil
cp sguild.users /usr/local/etc/nsm/
chown sguil:sguil /usr/local/etc/nsm/sguild.users

That # $Id$ # will be transformed into what I want later.

Now I check the scripts into the local repository.

richard@macmini:~/taosecurity_sguil_scripts$ cvs -d /home/richard/cvsroot/ \
> import -m "Initial import." taosecurity_sguil_scripts TaoSecurity start
N taosecurity_sguil_scripts/sguil_sensor_install.sh
N taosecurity_sguil_scripts/snort_src_install.sh
N taosecurity_sguil_scripts/sguil_server_install.sh
N taosecurity_sguil_scripts/sguil_database_install_pt2.sh
N taosecurity_sguil_scripts/sguil_database_install_pt1.sh
N taosecurity_sguil_scripts/README
N taosecurity_sguil_scripts/sguild_adduser.sh
N taosecurity_sguil_scripts/sguil_client_install.sh
N taosecurity_sguil_scripts/sguil_sensor_install_patch.sh
N taosecurity_sguil_scripts/snort
N taosecurity_sguil_scripts/sancp

No conflicts created by this import

To experiment with checking scripts out of the local repository, I make a wc_tss working copy directory and try retrieving the files.

richard@macmini:~/taosecurity_sguil_scripts$ cd
richard@macmini:~$ mkdir wc_tss
richard@macmini:~$ cd wc_tss/
richard@macmini:~/wc_tss$ cvs -d /home/richard/cvsroot/ checkout \
> taosecurity_sguil_scripts
cvs checkout: Updating taosecurity_sguil_scripts
U taosecurity_sguil_scripts/README
U taosecurity_sguil_scripts/sancp
U taosecurity_sguil_scripts/sguil_client_install.sh
U taosecurity_sguil_scripts/sguil_database_install_pt1.sh
U taosecurity_sguil_scripts/sguil_database_install_pt2.sh
U taosecurity_sguil_scripts/sguil_sensor_install.sh
U taosecurity_sguil_scripts/sguil_sensor_install_patch.sh
U taosecurity_sguil_scripts/sguil_server_install.sh
U taosecurity_sguil_scripts/sguild_adduser.sh
U taosecurity_sguil_scripts/snort
U taosecurity_sguil_scripts/snort_src_install.sh
richard@macmini:~/wc_tss$ ls
taosecurity_sguil_scripts
richard@macmini:~/wc_tss$ cd taosecurity_sguil_scripts/
richard@macmini:~/wc_tss/taosecurity_sguil_scripts$ ls
CVS sguild_adduser.sh sguil_sensor_install.sh
README sguil_database_install_pt1.sh sguil_server_install.sh
sancp sguil_database_install_pt2.sh snort
sguil_client_install.sh sguil_sensor_install_patch.sh snort_src_install.sh

Great, that worked. Let's see if sguild_adduser.sh has the Id I expect.

richard@macmini:~/wc_tss/taosecurity_sguil_scripts$ cat sguild_adduser.sh
#!/bin/sh
#
# $Id: sguild_adduser.sh,v 1.1.1.1 2007/03/22 16:24:55 richard Exp $ #
#
SGUIL=sguil-0.6.1
LD_LIBRARY_PATH=/usr/local/lib/mysql
export LD_LIBRARY_PATH
cd /usr/local/src/$SGUIL/server/
./sguild -c sguild.conf -u sguild.users -adduser sguil
cp sguild.users /usr/local/etc/nsm/
chown sguil:sguil /usr/local/etc/nsm/sguild.users

Awesome. I think I'm ready to upload the scripts to Sourceforge.

richard@macmini:~$ export CVS_RSH=ssh
richard@macmini:~$ cd taosecurity_sguil_scripts
richard@macmini:~/taosecurity_sguil_scripts$ cvs \
> -d:ext:taosecurity@taosecurity.cvs.sf.net:/cvsroot/taosecurity import -m \
> "Initial import." taosecurity_sguil_scripts TaoSecurity start
The authenticity of host 'taosecurity.cvs.sf.net (66.35.250.90)' can't be
established.
RSA key fingerprint is 13:f1:65:c3:6c:b7:7e:a5:f0:f3:f5:19:f4:42:9c:4a.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'taosecurity.cvs.sf.net,66.35.250.90' (RSA)
to the list of known hosts.
taosecurity@taosecurity.cvs.sf.net's password:
N taosecurity_sguil_scripts/sguil_sensor_install.sh
N taosecurity_sguil_scripts/snort
N taosecurity_sguil_scripts/sancp
N taosecurity_sguil_scripts/sguil_client_install.sh
N taosecurity_sguil_scripts/sguil_server_install.sh
N taosecurity_sguil_scripts/sguil_database_install_pt1.sh
N taosecurity_sguil_scripts/sguil_database_install_pt2.sh
N taosecurity_sguil_scripts/snort_src_install.sh
N taosecurity_sguil_scripts/sguil_sensor_install_patch.sh
N taosecurity_sguil_scripts/sguild_adduser.sh
N taosecurity_sguil_scripts/README

No conflicts created by this import

That worked. I can now browse CVS and see my files.

To test checking them out, I go to another machine and try the following.

richard@neely:/tmp$ cvs \
> -d:pserver:anonymous@taosecurity.cvs.sourceforge.net:/cvsroot/taosecurity \
> login
Logging in to
:pserver:anonymous@taosecurity.cvs.sourceforge.net:2401/cvsroot/taosecurity
CVS password:
cvs login: CVS password file /home/richard/.cvspass does not exist
- creating a new file
richard@neely:/tmp$ cvs \
> -d:pserver:anonymous@taosecurity.cvs.sourceforge.net:/cvsroot/taosecurity \
> checkout taosecurity_sguil_scripts
cvs checkout: Updating taosecurity_sguil_scripts
U taosecurity_sguil_scripts/README
U taosecurity_sguil_scripts/sancp
U taosecurity_sguil_scripts/sguil_client_install.sh
U taosecurity_sguil_scripts/sguil_database_install_pt1.sh
U taosecurity_sguil_scripts/sguil_database_install_pt2.sh
U taosecurity_sguil_scripts/sguil_sensor_install.sh
U taosecurity_sguil_scripts/sguil_sensor_install_patch.sh
U taosecurity_sguil_scripts/sguil_server_install.sh
U taosecurity_sguil_scripts/sguild_adduser.sh
U taosecurity_sguil_scripts/snort
U taosecurity_sguil_scripts/snort_src_install.sh

That worked too. So, from now on, if you'd like to get my FreeBSD Sguil installation scripts, please retrieve them from CVS.

Gconcat on FreeBSD

The last time I wanted to combine two smaller drives into a single virtual drive on FreeBSD I used Gvinum. Ceri Davies posted a helpful comment indicating I should try using gconcat(8). I did that today and thanks to an insightful piece of advice from Robert Watson, I got it working.

This is what the drive looked like.

shuttle01# df -h
Filesystem Size Used Avail Capacity Mounted on
/dev/ad5s1a 496M 36M 420M 8% /
devfs 1.0K 1.0K 0B 100% /dev
/dev/ad5s1f 989M 22K 910M 0% /home
/dev/ad5s1h 54G 4.0K 50G 0% /nsm1
/dev/ad7s1d 361G 4.0K 332G 0% /nsm2
/dev/ad5s1g 989M 12K 910M 0% /tmp
/dev/ad5s1d 1.9G 531M 1.3G 29% /usr
/dev/ad5s1e 4.8G 1.6M 4.5G 0% /var

I want to create /dev/concat/nsm. However, if I try to do that while /nsm1 and /nsm2 are mounted I'll get errors like this:

shuttle01# gconcat label -v nsm ad5s1h ad7s1d
Can't store metadata on ad5s1h: Operation not permitted.

So, unmount /nsm1 and /nsm2 before continuing. Next:

shuttle01# umount /nsm1
shuttle01# umount /nsm2
shuttle01# gconcat label -v nsm ad5s1h ad7s1d
Metadata value stored on ad5s1h.
Metadata value stored on ad7s1d.
Done.
shuttle01# ls /dev/concat/nsm
/dev/concat/nsm
shuttle01# newfs /dev/concat/nsm
/dev/concat/nsm: 438631.6MB (898317520 sectors) block size 16384, fragment size 2048
using 2387 cylinder groups of 183.77MB, 11761 blks, 23552 inodes.
super-block backups (for fsck -b #) at:
160, 376512, 752864, 1129216, 1505568, 1881920, 2258272, 2634624, 3010976,
...truncated...

Now I can create /nsm and mount it.

shuttle01# mkdir /nsm
shuttle01# mount /dev/concat/nsm /nsm
shuttle01# df -h
Filesystem Size Used Avail Capacity Mounted on
/dev/ad5s1a 496M 36M 420M 8% /
devfs 1.0K 1.0K 0B 100% /dev
/dev/ad5s1f 989M 22K 910M 0% /home
/dev/ad5s1g 989M 12K 910M 0% /tmp
/dev/ad5s1d 1.9G 531M 1.3G 29% /usr
/dev/ad5s1e 4.8G 1.6M 4.5G 0% /var
/dev/concat/nsm 415G 4.0K 382G 0% /nsm
shuttle01# touch /nsm/test

Cool. If I don't want to use it again:

shuttle01# umount /nsm
shuttle01# gconcat stop nsm
shuttle01# gconcat unload
shuttle01# ls /dev/concat

To use it again:
shuttle01# gconcat load nsm
shuttle01# ls /dev/concat/nsm
/dev/concat/nsm
shuttle01# mount /dev/concat/nsm /nsm
shuttle01# ls /nsm
.snap test

Remember to enable gconcat in /boot/loader.conf:

shuttle01# cat /boot/loader.conf
geom_concat_load="YES"

Remember to edit /etc/fstab. The original looked like this:

# Device Mountpoint FStype Options Dump Pass#
/dev/ad5s1b none swap sw 0 0
/dev/ad5s1a / ufs rw 1 1
/dev/ad5s1f /home ufs rw 2 2
/dev/ad5s1h /nsm1 ufs rw 2 2
/dev/ad7s1d /nsm2 ufs rw 2 2
/dev/ad5s1g /tmp ufs rw 2 2
/dev/ad5s1d /usr ufs rw 2 2
/dev/ad5s1e /var ufs rw 2 2
/dev/acd0 /cdrom cd9660 ro,noauto 0 0

The new one looks like this:

# Device Mountpoint FStype Options Dump Pass#
/dev/ad5s1b none swap sw 0 0
/dev/ad5s1a / ufs rw 1 1
/dev/ad5s1f /home ufs rw 2 2
#/dev/ad5s1h /nsm1 ufs rw 2 2
#/dev/ad7s1d /nsm2 ufs rw 2 2
/dev/ad5s1g /tmp ufs rw 2 2
/dev/ad5s1d /usr ufs rw 2 2
/dev/ad5s1e /var ufs rw 2 2
/dev/concat/nsm /nsm ufs rw 2 2
/dev/acd0 /cdrom cd9660 ro,noauto 0 0

I'm deploying this system in a RAID 0 configuration because it's a Shuttle with two HDDs. I'm not worrying about recovering from a disaster with this box. It's going to help with the ShmooCon network. If I had three or more drives I'd consider using gstripe(8).
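For comparison, a gstripe(8) setup would look almost identical to the gconcat steps above. This is an untested sketch assuming the same ad5s1h and ad7s1d providers:

```shell
# Stripe the two providers instead of concatenating them.
gstripe label -v st0 ad5s1h ad7s1d
newfs /dev/stripe/st0
mkdir /nsm
mount /dev/stripe/st0 /nsm

# Load the module at boot by adding this line to /boot/loader.conf:
# geom_stripe_load="YES"
```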

Recovering from Corrupted MySQL Database

Today one of my clients ran into a problem with his Sguil installation. The server hosting his Sguil MySQL database experienced a crash, as shown by dmesg on reboot:

Trying to mount root from ufs:/dev/ad0s1a
WARNING: / was not properly dismounted
WARNING: /home was not properly dismounted
WARNING: /nsm was not properly dismounted
WARNING: /usr was not properly dismounted
WARNING: /var was not properly dismounted

The original error message said:

ERROR: loaderd: mysqlexec/db server: Incorrect key file for table
'./sguildb/sancp_sensor_20070322.MYI'; try to repair it

If the sensor crashed while SANCP data was loading, it would make sense that sancp_sensor_20070322.MYI was corrupted.

When trying to restart sguild, the following error appeared:

[user@sensor ~]$ ./sguild_start.sh
pid(3119) Loading access list: ./sguild.access
pid(3119) Sensor access list set to ALLOW ANY.
pid(3119) Client access list set to ALLOW ANY.
pid(3119) Adding AutoCat Rule:
pid(3119) Adding AutoCat Rule: ||ANY||ANY||ANY||ANY||ANY||ANY||tag:
Tagged Packet||1
pid(3119) Email Configuration:
pid(3119) Config file: ./sguild.email
pid(3119) Enabled: No
pid(3119) Connecting to localhost on 3306 as user
pid(3119) MySQL Version: version 5.0.27
pid(3119) SguilDB Version: 0.11
pid(3119) Creating event MERGE table.
pid(3119) Creating tcphdr MERGE table.
pid(3119) Creating udphdr MERGE table.
pid(3119) Creating icmphdr MERGE table.
pid(3119) Creating data MERGE table.
ERROR: loaderd: You appear to be using an old version of the
sguil database schema that does not support the MERGE sancp
table. Please see the CHANGES document for more information
.
SGUILD: Exiting...

That doesn't look good.

Whenever I encounter a database problem, I first run mysqlcheck (with the database running) like so:

[user@sensor ~]$ mysqlcheck -r sguildb -p
Enter password:
sguildb.data
note : The storage engine for the table doesn't support repair
sguildb.data_sensor_20070215 OK
sguildb.data_sensor_20070216 OK
...edited...
sguildb.sancp_sensor_20070215 OK
sguildb.sancp_sensor_20070216 OK
...edited...
sguildb.sancp_sensor_20070320 OK
sguildb.sancp_sensor_20070321 OK
sguildb.sensor OK
...truncated...

Note sguildb.sancp_sensor_20070322 isn't listed.

I stopped MySQL and then ran myisamchk, which showed the following:

sensor:/var/db/mysql/sguildb# myisamchk *.MYI
...edited...
Checking MyISAM file: sancp_sensor_20070322.MYI
Data records: 2687 Deleted blocks: 0
myisamchk: warning: Table is marked as crashed
myisamchk: warning: 1 client is using or hasn't closed the table properly
- check file-size
myisamchk: error: Size of indexfile is: 303104 Should be: 306176
- check record delete-chain
- check key delete-chain
- check index reference
- check data record references index: 1
myisamchk: error: Found key at page 295936 that points to
record outside datafile
MyISAM-table 'sancp_sensor_20070322.MYI' is corrupted
Fix it using switch "-r" or "-o"
...truncated...

I fixed it like this:

sensor:/var/db/mysql/sguildb# myisamchk -r sancp_sensor_20070322.MYI
- recovering (with sort) MyISAM-table 'sancp_sensor_20070322.MYI'
Data records: 2687
- Fixing index 1
- Fixing index 2
- Fixing index 3
- Fixing index 4
- Fixing index 5
- Fixing index 6
Data records: 2678

Next I restarted the database and re-ran my sguild startup script. Everything returned to normal as I had hoped.

This is another example of the idea that anyone who uses detection systems long enough eventually becomes a database admin!

Wednesday, March 21, 2007

Backscatter Detected

Recently I posted a conclusion to my backscatter investigation, where people were reporting backscatter from SYN and other DoS attacks to SANS ISC. When you monitor your own cable modem it's not common to see this sort of traffic unless you go explicitly looking for it. Today however I saw the following using Sguil.

Count:2 Event#1.204541 2007-03-20 18:04:19
BLEEDING-EDGE DROP Known Bot C&C Server Traffic (group 1)
69.143.202.28 -> 193.109.122.67
IPVer=4 hlen=5 tos=0 dlen=40 ID=2644 flags=2 offset=0 ttl=64 chksum=58655
Protocol: 6 sport=1024 -> dport=6667

Seq=2934031229 Ack=0 Off=5 Res=0 Flags=*****R** Win=0 urp=54297 chksum=0
Payload:
None.

What got my attention at first was an alert indicating my host (possibly some box behind my NAT gateway) appeared to initiate a connection to port 6667 TCP (IRC) on an IP that was a "Known Bot" IP for command and control. Looking at the packet data in the alert and seeing the RST flag, I guessed this wasn't a problem. Still, I wanted to know why one of my boxes would send a RST. Could that be some sort of phone-home method designed to elude uber-packet monkeys? (Conspiracy theory hat on!)

I decided to query for session data to see if I could find any other sessions involving 193.109.122.67 (ede.nl.eu.undernet.org):

------------------------------------------------------------------------
Sensor:cel433 Session ID:5044069116374886360
Start Time:2007-03-20 18:04:19 End Time:2007-03-20 18:04:19
193.109.122.67:6667 -> 69.143.202.28:1024
Source Packets:1 Bytes:0
Dest Packets:1 Bytes:0
------------------------------------------------------------------------
Sensor:cel433 Session ID:5044103368738397945
Start Time:2007-03-20 20:17:14 End Time:2007-03-20 20:17:14
193.109.122.67:6667 -> 69.143.202.28:3072
Source Packets:1 Bytes:0
Dest Packets:1 Bytes:0

All I found were the two sessions indicated by the aggregated Snort alerts. Notice the session data shows the source and destination each sent one packet. Interesting. One of the nice aspects of SANCP is that it can report a summary of the TCP flags seen during a session.

In this case the flags summary showed the source sent a SYN ACK and the dest sent a RST. The RST caused the alert. The SYN ACK triggered the RST.

Finally we can look at the full content:

14:04:19.731091 IP 193.109.122.67.6667 > 69.143.202.28.1024:
S 4257997451:4257997451(0) ack 2934031229 win 2048
14:04:19.731429 IP 69.143.202.28.1024 > 193.109.122.67.6667:
R 2934031229:2934031229(0) win 0

This doesn't tell us anything we didn't already know, but it's nice to see exactly what happened on the wire. Remember that if we weren't collecting independent, content-neutral, non-alert-triggered full content data, we wouldn't have the first SYN ACK packet. That is the key to knowing this is a backscatter event. Some unknown party spoofed my IP, 69.143.202.28, and SYN flooded 193.109.122.67 on port 6667 TCP. The SYN ACK was sent back to me, and my NAT gateway replied with a RST. Simple -- and no conspiracy. Hat off.
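The backscatter logic above can be sketched in a few lines: an inbound SYN ACK that answers no SYN we actually sent is a strong sign our address was spoofed in someone else's SYN flood. This is a hypothetical toy classifier for illustration, not Sguil or SANCP code.

```python
# Toy version of the backscatter reasoning above: an inbound SYN/ACK for a
# connection no host behind our gateway initiated means our address was
# spoofed in someone else's SYN flood. Flag values are from the TCP header.

SYN, ACK, RST = 0x02, 0x10, 0x04

def is_backscatter(inbound_flags, we_sent_syn):
    """Inbound SYN/ACK answering a SYN we never sent = backscatter."""
    syn_ack = (inbound_flags & (SYN | ACK)) == (SYN | ACK)
    return syn_ack and not we_sent_syn

# The sessions above: 193.109.122.67:6667 sends a SYN/ACK, but nothing
# behind the NAT gateway ever sent it a SYN.
print(is_backscatter(SYN | ACK, we_sent_syn=False))  # → True (backscatter)
print(is_backscatter(SYN | ACK, we_sent_syn=True))   # → False (normal handshake reply)
```

Note that only full content or session data collected before any alert fires lets you make this determination; the alert alone showed just the RST.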

When Lawsuits Attack

I haven't said anything about the intrusions affecting TJX until now because I haven't felt the need to contribute to this company's woes. Today I read TJX Faces Suit from Shareholder:

The Arkansas Carpenters Pension Fund owns 4,500 shares of TJX stock, and TJX denied its request to access documents outlining the company's IT security measures and its response to the data breach.

The shareholder filed the lawsuit in Delaware's Court of Chancery Monday afternoon under a law permitting shareholders to sue for access to corporate documents in certain cases, The Associated Press reported. The pension fund wants the records to see whether TJX's board has been doing its job in overseeing the company's handling of customer data, the news agency said.


Imagine having your security measures and incident response procedures laid bare for everyone to see. (It's possible there might not be anything to review!) How would your policies and procedures fare?

The following sounds like many incidents I've investigated.

The TJX breach was worse than first thought, TJX officials recently admitted. The company initially believed that attackers had access to its network between May 2006 and January 2007. However, the ongoing investigation has turned up evidence that the thieves also were inside the network several other times, beginning in July 2005.

Originally the company believed it was compromised for nine months, but the window may now extend back almost another year. The question is whether this is evidence of compromise by another group or the same one. In either case the company's security posture looks terrible.

The sad part about this sort of incident is that most if not all of the preventive systems TJX might have deployed are worthless for response and forensics. I'm guessing TJX is relying on host-centric forensics, like analysis of file MAC times and other artifacts on victim servers, to scope the incident. I bet TJX is paying hundreds of thousands of dollars in investigative consulting right now, beyond the damage to its brand and other technical and financial recovery costs.

Hopefully these lawsuits will shed some light on TJX's security practices so other companies can learn from their mistakes. This is the sort of incident that my future National Digital Security Board would do well to investigate and report.

Ubiquitous Monitoring on the Horizon

In January I wrote The Revolution Will Be Monitored. Today I read Careful, the Boss Is Watching:

Recently, software vendor Ascentive LLC installed its new BeAware employee monitoring application on all the PCs at one of its new corporate clients. The corporation notified its employees that their Web surfing habits -- as well as their email, instant messaging, and application usage -- were now being monitored and recorded.

"Internet usage at the corporation dropped by 90 percent almost overnight," recalls Adam Schran, CEO of Ascentive. "As soon as employees knew they were being monitored, they changed their behavior."


Wow, what a bandwidth saver. Who needs to upgrade the T-3 when you actually take measures to enforce your stated security policy? The story continues:

While tools for tracking employee network usage have been available for years, emerging products such as BeAware take monitoring to a whole new level. The new BeAware 6.7 lets managers track workers' activity not only on the network or in the browser, but also in email, chatrooms, applications, and shared files. And at any unannounced moment, a manager can capture an employee's screen, read it, and even record it for posterity.

Such exhaustive monitoring may seem a bit draconian to the uninitiated, but analysts and vendors all say the use of such "Big Brother" software can make a drastic impact on productivity and security. In a recent study by AOL and Salary.com, 44.7 percent of workers cited personal Internet use as their top distraction at work. A Gallup poll conducted in 2005 indicated that the average employee spends more than 75 minutes a day using office computers for non-business purposes.

Once employees know their activities are being monitored, however, their personal computer use is quickly curtailed, Schran observes.


This reminds me of an event that happened when I was working the night shift at the AFCERT in 1999. We had witnessed a rash of attacks against vulnerable Microsoft FrontPage installations. Around 2 or 3 am I noticed someone altering the Web site of an Air Force base in Florida. The source IP looked like it might belong to someone who worked on base. I managed to tie a home telephone number to the IP and called, asking if so-and-so was currently modifying the af.mil Web site. I remember a surprised lady answering the phone and asking, "So you can see what I'm doing right now?"

I have never been a fan of monitoring network traffic to reduce what .mil and .gov call "fraud, waste, and abuse." You won't read recommendations for using Network Security Monitoring to intercept questionable Web surfing, for example. However, this story is another data point for my prediction that we are moving to a workplace where everything is monitored, all the time.

If you try to implement this sort of monitoring, be sure you have an ironclad policy and support from your legal staff. I would call this level of invasion of privacy a wiretap.

Wine on Ubuntu

I'm finding more reasons to like running Ubuntu on the desktop. Two of my favorite Windows applications are MWSnap (a simple screen capture tool) and Irfanview (a simple image viewer and editor). (Gimp fans, please spare me your comments. I can't stand that program. It's a bulldozer when all I need is a garden shovel.)

I poked around looking for native Linux programs that might suit my needs, but then I thought "What about using Wine to run the Windows binaries on Linux?" I'd never used Wine before, but it was only an 'apt-get install wine' away from appearing on my Ubuntu laptop.

I first tried Irfanview, but I ran into the same issues as described here. After creating /home/richard/wine and putting mfc42.dll there with installation binaries for Irfanview and MWSnap, I was able to run Wine in that directory and install both programs.

Wine ended up creating the following directory structure.

richard@neely:~/.wine/drive_c/Program Files$ ls -al
total 5
drwxr-xr-x 5 richard richard 1024 2007-03-21 11:46 .
drwxr-xr-x 4 richard richard 1024 2007-03-21 11:42 ..
drwxr-xr-x 2 richard richard 1024 2007-03-21 11:42 Common Files
drwxr-xr-x 5 richard richard 1024 2007-03-21 11:43 IrfanView
drwxr-xr-x 3 richard richard 1024 2007-03-21 11:46 MWSnap

Running each program requires something like this:

richard@neely:~$ wine .wine/drive_c/Program\ Files/IrfanView/i_view32.exe
richard@neely:~$ wine .wine/drive_c/Program\ Files/MWSnap/MWSnap.exe

Overall I am really pleased to see this working so well.

ShmooCon Talk

If you're attending ShmooCon this weekend you may have seen that I am scheduled to speak at the same time as security ninjas Joe Stewart and Billy Hoffman. It's bad enough that people have to choose between Joe and Billy, even without my talk as a third option.

Joe and I asked the ShmooCon organizers if it might be possible to switch me to another slot, since I would like to see Joe's talk too. Based on feedback from many of you, it's clear you want to see Joe's talk as well. Unfortunately, the ShmooCon organizers did not find a way to change the schedule.

This is really bad because Billy is releasing Jikto at ShmooCon, so choosing between Joe and Billy is another lousy decision.

Given the feedback from you I've heard, I'm considering my options. They are:

  1. Talk at 1300 as scheduled.

  2. Give up my slot and volunteer to speak at 1200 Saturday during lunch.

  3. Give up my slot and volunteer to speak after the keynote Friday night.

  4. Other ideas?


What are your thoughts on this? Is it worth making waves in order to deal with this situation? Thank you.

Update: Thanks for your public and private feedback. I'll just appear at my talk and make the best of it!

Tuesday, March 20, 2007

Proactive vs Reactive Security

Whenever I hear someone talk about the merits of "proactive" security vs "reactive" security I will politely nod, but you may notice a tightening of my jaw. I can't stand these sorts of comparisons. When I hear people praise proactive measures they're usually talking about "stopping attacks" rather than "watching them." Since a good portion of my technical life is spent cleaning up the messes left by people who put faith in preventing intrusions, I am a little jaded. Before I go any further, believe me, I would much rather not have intrusions occur at all. I would much rather prevent than detect and respond to intrusions. The fact of the matter is that intrusions still happen and that proactive measures aren't always that great. In fact, sometimes so-called proactive measures are worse than reactive or passive ones. How can that be?

Kelly Jackson Higgins' latest article Grab Fingerprint, Then Attack provides an example. She writes the following:

First you determine if an IDS/IPS is sitting at the perimeter, and then "fingerprint" it to find out the brand of the device, says the hacker also known as Mark Loveless, security architect for Vernier Networks. By probing the devices, "You can extrapolate what brand of IPS is blocking them and use that to plan your attack."

Different IDS/IPS products block different threats, so an attacker can use those characteristics to gather enough intelligence to pinpoint the brand name, he says. And it's not hard to distinguish an IDS from an IPS: If you can access XYZ before the attack, but not after, it's an IPS. And if there are delays in blocking your traffic, it could be an admin reading the IDS logs, Loveless says.


This concept is as old as dirt, dating all the way back to fingerprinting firewalls. However, it illustrates my point very well. A "proactive" device like an IPS blocks traffic it deems malicious. An intruder smart enough to want to identify and evade said IPS can do so using test traffic, then launch an attack that sails through the IPS -- which at that point is ignorant and ineffective. The only reason the intruder can accomplish this is that the "proactive" nature of the IPS revealed its operation, thereby providing intelligence to the intruder. In aggregate, security has been degraded by a "proactive" device.
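The fingerprinting procedure Loveless describes boils down to a simple decision table: send probes an inline device may block, record which ones are dropped, and match that block pattern against known product behavior. Here is a minimal sketch of that logic; the vendor names, probe labels, and block patterns are invented for illustration, not real IPS characteristics.

```python
# Toy sketch of IPS fingerprinting by block pattern: which probes a device
# drops narrows down the product, and the attacker then picks an attack
# that product is known to miss. All signatures here are hypothetical.

KNOWN_BLOCK_PATTERNS = {
    "VendorA IPS": {"probe1", "probe3"},
    "VendorB IPS": {"probe2", "probe3"},
}

def fingerprint(blocked_probes):
    """Return candidate products whose block pattern matches what we observed."""
    return [name for name, pattern in KNOWN_BLOCK_PATTERNS.items()
            if pattern == blocked_probes]

# If probes 1 and 3 are dropped but probe 2 sails through, the attacker
# infers VendorA and plans accordingly. (Per Loveless, losing access to a
# target only *after* probing also distinguishes an IPS from a passive IDS.)
print(fingerprint({"probe1", "probe3"}))  # → ['VendorA IPS']
```

The point of the sketch is that every block is an observable response, and observable responses leak intelligence -- exactly the cost the passive appliance below never pays.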

Contrast that scenario with that of the lowly, "reactive," passive network forensics appliance. All it does is record what it sees. It doesn't stop anything. It's so quiet no one knows it is there -- including the intruder. Of course it isn't blocking anything, but it is providing Network Security Monitoring data. Properly configured and used, it can act as a sort of intrusion detection system as well. In aggregate, security has been improved by a "reactive" or passive device.

I hope this post has challenged the conventional wisdom in the same way that my diatribes against mandatory anti-virus installation may have done. I think one way to overcome the problems caused by the active device is to complement it with a passive one, but most organizations emphasize "prevention" over all else and discard detection and response.

Security Bloggers Network

I noticed security ninja and fellow former Foundstoner Mark Curphey mentioned me in a post on his departure from the Security Bloggers Network. You may wonder why I never joined SBN. When I was asked to join in December, I politely declined. I saw no benefit to myself or my readers to joining some kind of meta-feed hosted by Feedburner. Not joining SBN was probably as popular as my personal LinkedIn policy since it means I exercise some discretion regarding the parties with whom I associate.

My personal version of SBN is my Bloglines subscription. Anything I care to read is there. I probably only pay attention to 2/3 to 3/4 of the feeds on that list, so in some cases the lesser-noticed feeds are acting like bookmarks. (In other words, some of my friends may have blogs but I don't necessarily care that they trimmed their cat's toenails last weekend.)

I think it's healthy to have discussions about the state of our "security community." Debate is one of the features of a vibrant community. If no one read this blog I would still use it mainly as a way to record how I build or configure systems, or how I think about certain issues. (Prior to writing any book or major article I review my blog posts to refresh my memory on certain issues.) If you find such content helpful, great! If not, no problem!

Programming and Digital Security

I received the following question recently, so I thought I would anonymize the person asking the question but post my response publicly.

I have a question regarding programming languages and their relation to computer security research. I would appreciate your input on the following. In order for one to be able to "contribute" to security research, do you feel it is necessary for one to become familiar with programming languages?

I am fascinated by computer security and have read several books about stages of attack, malware, and defenses but have not read any books containing any code as I do not understand it. I therefore feel as if I am of no use if I cannot write tools or examine exploits on my own.

I would again really appreciate your input on this, and if you recommend learning programming languages, do you believe one can get away with knowing just one or do you feel an understanding of several is necessary (and if so, which one[s] would you suggest)?


These are great questions. I struggle with them as well because I am not a programmer. When I was a kid in 1980 I programmed my Timex Sinclair using BASIC. I remember looking at books on coding in Assembly for my Commodore 64 several years later so I could try writing cool "demo" programs to impress my BBS buddies. In high school I basically abandoned those sorts of pursuits and only used my C-64 for writing papers, much the same as I did in college from 1990-1994 with my first PC. At USAFA I took the mandatory programming course for freshmen, which basically involved me writing programs in PASCAL and then helping classmates get their versions to compile. (Talk about zero concepts of security!) I did manage to install an Ethernet NIC in my 486SX and play the first game of Doom on FalconNet at USAFA. I didn't own a computer for most of 1996 and early 1997, but by 1998 I was back online and doing my first hands-on security work at the Air Force CERT.

The Air Force formally trained me as an intelligence officer, so I've always been an analyst and operator. Since my first hands-on security work required inspecting network traffic, I've always been deeply involved with TCP/IP. My first jobs also held me responsible for detecting and responding to intrusions. I've enjoyed staying within defensive roles, although I've poked my head out to do "offensive" work occasionally.

I've managed to build a career without being a programmer, but I'm not satisfied with my level of skill or knowledge. If you review my terribly stalled reading list and my Amazon.com Wish List you'll see plenty of books on programming.

I think programming is important for several reasons.

  1. The development community is becoming security-aware. The big pushes I see for improvement lie with groups like US-CERT's Build Security In. Most of the interesting books being published cover software security of one sort or the other. If you want to really participate in this work you need to understand programming.

  2. Programmers can build tools to solve their problems. I have many ideas for cool tools but no ability to execute on them. Because I didn't study programming earlier I am stuck with learning a language to code my tool, then writing the tool itself. My lack of progress over the last several years indicates how tough it is to overcome this hurdle when one's primary work doesn't require programming. I end up using basic scripting capabilities to get other people's tools to solve some of my problems, which is sub-optimal.

  3. Programmers can deeply understand security-related code. Even if I can't code my own tools, it would be extremely helpful to be able to reverse engineer other people's tools or malware, see how they solve certain problems, discover flaws in open source code, and perform other code-centric functions. This is probably the most realistic place for me to apply some beginning programming knowledge. Understanding what someone else wrote can be easier than starting to write a program from scratch.


To directly answer your question, I don't think you need to be a programmer to "contribute," but if you want to be a researcher I think it would be extremely beneficial. I am not a researcher; I am an operator. Programming would still be helpful but it's not critical.

You are definitely not "of no use" because you can't program. I hope my background demonstrates you can be "useful" while not being a programmer.

Regarding languages, here are my two cents.

  • Assembly: If you want to understand shellcode, I recommend learning some Assembly.

  • C: If you want to write or understand operating systems and many security tools, learn C.

  • Perl: Perl seems to be a prerequisite for many security jobs and a lot of custom code is written in Perl. I would prefer to avoid it but I think some familiarity with Perl is helpful.

  • Python and/or Ruby: These two newer languages are very popular. Jose Nazario, for example, writes many cool Python tools. Metasploit 2.x was written in Perl but 3.x is written in Ruby.

  • Lua: I only learned about Lua recently, but it's apparently got a role in Snort 3.x and a researcher friend showed me how he uses it in his work.


I hope this answered your questions. I hope those of you with backgrounds or skills similar to mine will take heart, if you find yourselves doubting your worth compared to your programmer peers!