Monday, February 28, 2005
My instructor is Todd Lammle, author of the recently updated CCNA: Cisco Certified Network Associate, Deluxe Edition (640-801) study guide. Two weeks ago I saw Todd was personally teaching this class, so I immediately signed up. I'm probably not the easiest student to have in a networking class. When Todd asked if HTTP uses TCP, I felt it necessary to mention Universal Plug and Play (UPnP), which runs HTTP over UDP. I also mentioned that DNS uses TCP to answer queries when the response exceeds the 512-byte limit on UDP responses. Todd's tolerating me so far, but he said I have to provide a copy of my book. :)
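The DNS fallback works because a server that cannot fit its answer in a 512-byte UDP datagram sets the TC (truncated) bit in the response header, prompting the client to retry over TCP. A minimal sketch in Python; the hand-crafted header bytes below are my own illustration, not captured traffic:

```python
import struct

def is_truncated(dns_response: bytes) -> bool:
    """Check the TC (truncation) bit in a DNS response header.

    When TC is set, the UDP answer would have exceeded 512 bytes,
    so the client should retry the same query over TCP.
    """
    # The flags field is the second 16-bit word of the 12-byte header.
    flags = struct.unpack("!H", dns_response[2:4])[0]
    return bool(flags & 0x0200)  # TC is bit 9 of the flags field

# A minimal 12-byte header with the QR and TC bits set, crafted for illustration
hdr = struct.pack("!HHHHHH", 0x1234, 0x8200, 1, 0, 0, 0)
print(is_truncated(hdr))  # True
```

A real resolver would of course also parse the question and answer sections; this only shows the decision point that triggers the TCP retry.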
Why am I studying for the CCNA? Once in a while I find myself working on Cisco routers and switches, and it's been almost seven years since my last formal training on either platform. I believe the CCNA is a fairly well-respected certification, as far as entry-level certs go. By attending this class I also get a free copy of the latest deluxe study guide edition, which is tough to pass up when you're a book reader and reviewer!
Todd told me today that he's sold somewhere between 600,000-700,000 copies of his study guides over the last five years. That is absolutely amazing, considering that a technical book which sells more than 10,000 copies is regarded as a big hit. I will be reading and studying this book in preparation for my CCNA exam, which I plan to take some time next week. Todd is an excellent instructor, and he's already improved my subnet addressing skills by an order of magnitude. In other words, subnet questions which might have taken 60 seconds to answer will probably take about 6 seconds. This speed increase is important, as candidates have about a minute to answer each question. They can't return to skipped questions, so it pays to answer as rapidly as possible.
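The sort of subnet arithmetic the exam demands can be checked against Python's standard library. The /28 question below is my own example, not one of Todd's:

```python
import ipaddress

# Typical CCNA-style question: for host 172.16.45.14/28, what are the
# subnet address, broadcast address, and number of usable hosts?
iface = ipaddress.ip_interface("172.16.45.14/28")
net = iface.network

print(net.network_address)    # 172.16.45.0
print(net.broadcast_address)  # 172.16.45.15
print(net.num_addresses - 2)  # 14 usable hosts (subtract network and broadcast)
```

The trick Todd teaches is to do this in your head via the "block size" (256 minus the last octet of the mask, here 256 - 240 = 16), which is what turns a 60-second question into a 6-second one.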
Sunday, February 27, 2005
"Peter Szor's The Art of Computer Virus Research and Defense (TAOCVRAD) is one of the best technical books I've ever read, and I've reviewed over 150 security and networking books during the past 5 years. This book so thoroughly owns the subject of computer viruses that I recommend any authors seeking to write their own virus book find a new topic. Every technical computing professional needs to read this book, fast."
This book absolutely blew me away. Read my review to see why, then order a copy!
Thursday, February 24, 2005
I already have Richard Blum's Professional Assembly Language on my reading list, but Prof. Dandamudi's book offers a twist. It doesn't just explain assembly programming on the Intel x86 architecture. Prof. Dandamudi also covers MIPS R2000 programming using the SPIM simulator available in the FreeBSD ports tree. The R2000 is an old processor, part of the MIPS R series; however, it still makes a good programming example. I will post an Amazon.com review when done with this book.
I gave Kevin's earlier book four stars, and I plan to read the new book very soon. I think we security professionals benefit from reading books about threats as well as vulnerabilities. Those of you who have followed this blog or read my book know the difference. By learning how structured threats think and behave, we can better prepare our defenses. If even some of the stories in The Art of Intrusion are true, we will gain a very valuable insight into the adversary's mind. Stay tuned for a full review -- I've got this book on deck.
What do you think? Which group presents the bigger risk? I decided to frame this question with respect to risk, since one can estimate risk using the equation
risk = threat × vulnerability × asset value (i.e., the replacement cost of the asset)
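To see how the equation frames the question, here is a sketch comparing two threat groups. The 1-5 ordinal scores are entirely hypothetical, chosen only to show the mechanics, not to answer the question for you:

```python
def risk(threat, vulnerability, asset_value):
    """Simple multiplicative risk score: threat x vulnerability x asset value."""
    return threat * vulnerability * asset_value

# Hypothetical 1-5 scores for two groups attacking the same asset
group_a = risk(threat=2, vulnerability=4, asset_value=5)  # 40
group_b = risk(threat=4, vulnerability=3, asset_value=5)  # 60

print(group_a, group_b)
```

Note that because the terms multiply, a group facing fewer exploitable vulnerabilities can still present the bigger risk if its threat score (capabilities and intentions) is high enough.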
On a related note, I found this October 2004 article by Anton Chuvakin to be interesting: Issues Discovering Compromised Machines. He begins by questioning the claim made by the authors of the book Exploiting Software, that "Most of the global 2000 companies are currently infiltrated by hackers. Every major financial institution not only has broken security, but hackers are actively exploiting them." While this is plausible, the level of exploitation is uncertain. Do intruders have complete control of all of these organizations, or are they contained in some manner? We will probably never see proof of this, but who knows what could happen after the latest T-Mobile disclosures.
On another related note, Microsoft security expert and employee Robert Hensing has been on a blogging tear. He is posting details of some incident responses he has done. They make for good reading.
First we have the assessment approach. This involves probing target systems which may have been involved in the incident. Assessors look for security weaknesses in services and applications they believe could have yielded the information acquired by the intruders. Jack Koziol's recent blog entry is an example of this approach.
In my opinion this method is least likely to yield useful information, and is often a waste of time, as far as determining the details of the incident at hand. The assessment approach is largely speculation, albeit with access to some or all of the systems which could have been victimized. From a forensic standpoint, this is a poor way to investigate an intrusion. Assessors typically interact directly with victimized or potentially victimized systems. Their "investigation" risks damaging evidence that could be retrieved by a forensic investigator. Despite the harm caused by this method, I have read the CSO of an immense security company advocate this approach in her most recent book.
The assessment approach is useful for incident recovery. It is important to know the scope of a target's vulnerabilities before declaring a case "solved." It does no good to patch one hole if three remain open. I wrote about combining assessment with incident response in a whitepaper for Foundstone titled Expediting Incident Response with Foundstone ERS. Jack's probing of the T-Mobile site is valuable in that it shows they still have problems. The assessment method may in some cases yield the answer to a problem by constructing an experiment resembling the incident. Professor Feynman's O-ring in ice water experiment shows the power of doing "what-if" incident response. The problem I've seen in the digital realm is that the assessment-minded conduct their "investigation" on the original evidence (the victimized systems), thereby spoiling information for the next phase...
The second technical way to investigate an incident is the forensic method. This process centers on examining digital evidence collected from victimized or potentially victimized systems in a forensically sound manner. Evidence is acquired carefully, in accordance with procedures most likely to withstand an adversarial legal system. This contrasts starkly with the assessment method, where assessors typically "race to root" on the target and then declare "victory."
The weakness of the forensic method lies in the lack of evidence or an absence of useful evidence. I have performed many incident responses where I only acquired case-solving information by collecting it with my own products and processes. Frequently the victim has not enabled sufficient logging, or he has trounced the evidence by performing his own amateur investigation. While the former is usually inexcusable, the latter often cannot be avoided. If an administrator suspects something is wrong with one of her servers, she is most likely going to check it out before calling in outside forensic help. Unfortunately, this destroys evidence that could have been collected in a fairly easy manner.
The third way to investigate an intrusion is the law enforcement method. I do not necessarily mean law enforcement is involved, although they are most likely to follow this technique. Rather, I am referring to a non-technical, human source-oriented means of investigating an incident. This method relies on cultivating informants, interviewing various parties, and conducting open research on threats that may have had the capabilities and intentions to harm a victim.
Several examples can be found on the Web. Brian McWilliams reports the following:
"An anonymous source provided O'Reilly Network with a screen grab, proving he was able to access the contents of Hilton's T-Mobile inbox as of Tuesday morning. Another image confirmed that Hilton's 'secret answer' was her dog's name."
This Rootsecure.net story mixes the assessment and law enforcement methods, but it points to the existence of tMobile_exploit_tools.zip, a program to gain access to T-Mobile Web accounts.
Incidentally, CSC posted an advisory last August saying "T-mobile Wireless and Verizon Northwest are vulnerable to caller-ID authentication spoofing, enabling arbitrary compromise of customer voicemail/message center." Essentially, the phones can be set up to trust callers and play voicemail based on caller ID, which can be spoofed.
The law enforcement method can be the most successful means to resolve an intrusion. It is especially helpful when digital evidence is lacking. Often an investigator (most likely a real law enforcement agent) can acquire evidence pointing to the physical intruder, usually by speaking with informants. The law enforcement agents then obtain digital, hard-copy, and physical evidence by obtaining a search warrant for the suspected intruder's home or office. This is generally the only way to tie a person to a keyboard, which is the best means to successfully prosecute an intruder.
Wednesday, February 23, 2005
"GHH emulates a vulnerable web application by allowing itself to be indexed by search engines. It's hidden from casual page viewers, but is found through the use of a crawler or search engine. It does this through the use of a transparent link which isn't detected by casual browsing but is found when a search engine crawler indexes a site. The transparent link (when well crafted) will reduce false positives and avoid a fingerprint of the honeypot.
The honeypot connects to a configuration file, and the configuration file writes to a log file which is chosen during configuration. The log file contains information about the host, including IP address, referral information, and user agent.
Using the information gathered in the log file, an administrator can learn more about attackers doing reconnaissance against their site. An administrator can cross reference logs and view a better picture of specific attackers."
This is a really ingenious idea. You run this system to learn what Google hacking techniques potential intruders are running against your Web site. While the original Honeynet was designed to watch intruders scan and attack hosts, the GHH watches for people to scan and attack vulnerable applications.
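The "cross reference logs" step GHH describes is straightforward to sketch. The (ip, referer, user_agent) log format below is assumed for illustration, not taken from GHH's actual log layout:

```python
from collections import defaultdict

def by_attacker(entries):
    """Group honeypot log entries by source IP to build a picture
    of a specific attacker's reconnaissance activity."""
    picture = defaultdict(list)
    for ip, referer, user_agent in entries:
        picture[ip].append((referer, user_agent))
    return picture

# Hypothetical entries: the referer often reveals the Google hacking
# query that led the attacker to the honeypot in the first place.
log = [
    ("203.0.113.9", "http://www.google.com/search?q=intitle:index.of", "Mozilla/4.0"),
    ("203.0.113.9", "-", "Mozilla/4.0"),
]
print(len(by_attacker(log)["203.0.113.9"]))  # 2
```

The referer field is the interesting part: it shows not just that someone found your fake vulnerable page, but which search query they used to find it.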
Installation doesn't appear too complicated. Currently Apache and PHP are supported, with IIS and .NET "coming in a future release."
If anyone tries this out, please comment here.
Tuesday, February 22, 2005
"See, at first I decided I would use this Squil IDS thing but that crazy Russian guy that wrote down the docs said I needed to keep every packet in a database (who has time for being a packet rat like that?) to make sure I don't get hackered by the nerds! Well that makes a whole hell of a lot of sense! If you keep them online in a database and you get hacked then the hacker will be able to just copy and paste them packets and whammo! Instant replay attack! Maybe I should I gift wrap them too? Smart thinking there you Bolshevik dundernuts! First Northcut drops his drawers at SANS and now this Betjitch guy wants to pinch it off for the hackers! His book should be called Tao of Network Reach-arounds!"
My reply is also available.
This "disclosure cycle" is similar to the way exploits circulate through the underground. One or more people independently or jointly discover a vulnerability and code an exploit. They keep it closely guarded, perhaps using it to access sensitive targets. If they are professional black hats, they never reveal the fact they have the exploit. If they are not using the exploit to advance certain goals, or they feel the exploit's shelf life is expiring, they pass the exploit to others. That new group is more likely to circulate the exploit widely throughout the underground. Eventually one or more black hats down the distribution food chain decide to go public, perhaps to gain some notoriety for themselves or their group.
It's an example of intruders becoming more sophisticated in the way they publicize their ability to gain unauthorized access to important systems. Five to ten years ago they demonstrated their expertise by defacing Web sites. Now they show off their skills by posting sensitive information. I would expect to see more of this.
They are attracting attention for their "de-perimeterisation" and "open network" ideas, which their "Visioning White Paper" defines as follows:
"de-perimeterisation: the act of applying organisational and technical design changes to enable collaboration and commerce beyond the constraints of existing perimeters, through cross-organisational processes, services, security standards and assurance"
"open network: a network freely accessible at low or no cost to arbitrary communicating parties, such as but not limited to the public global Internet, with few or no inbuilt information security controls protecting the use of that network (although the network infrastructure itself will typically have some protection in order to support the provision of a service of useful quality)"
If you'd like to read more wordy explanations, I recommend diving into the 39-page "Visioning White Paper." It offers some of the most painful English I've seen. I think it could have been reduced to a quarter of its present size.
Sorting through the text, we see The Jericho Group intends to push de-perimeterisation as a means to achieve open networks. They cite "increasing on-line collaboration and trading among multiple business entities," "outsourcing and offshoring of support services," and "use of low cost open networks" as reasons to pursue de-perimeterisation. They believe "existing security approaches are a barrier to change because they assume... an organisation owns, controls, and is accountable for the ITC [information and communications technology] it employs... and all individuals sit within organisations." I do not disagree with either point.
As for the group's focus, we read "Jericho Forum will therefore primarily focus on information flows that span organisations and individuals and how to secure and manage these across open networks. The focus will be on business to business (B2B) and business to government (B2G) flows, but not exclusively."
The Jericho Group cites the following as evidence of the need for de-perimeterisation. "For complex networks, protocols, and application access requirements involving customers, business partners or suppliers, firewall complexity and cost of operation will rise... Many communication protocols now run within the web (HTTP) protocol to allow 'tunneling'; indeed arbitrary tunnelling is possible rendering 'layered' communications architectures meaningless... De-perimeterisation involves re-appraising where security controls are positioned, re-balancing cost and complexity. This may involve moving security controls from firewalls or proxies to internal end systems or applications, or if the confidentiality or integrity of data is paramount, to move controls from the systems and data repositories that hold data at rest to the data itself (i.e. using cryptographic techniques)."
Leaving clunky language aside, let's consider their argument. Although I do not see this mentioned in the group's paper, I would agree that individual hosts should be able to defend themselves. This has historically been a problem for operating systems not designed to survive the public Internet. I endorse making individual hosts and their applications more independent and reliable.
However, no organization that has spent hundreds of thousands of dollars on firewalls and other perimeter security devices is going to abandon them. Despite the starry-eyed cries of IPv6 developers who long for the days of unfettered end-to-end connectivity, most hosts on the Internet will continue to be separated by a wide variety of "middleboxes."
Anywhere that organizational access controls can be deployed, they should be deployed. When security rests entirely with the end host, the compromise of that end host means complete loss of control for the responsible enterprise. If a "de-perimeterised" company suffers a worm outbreak, and it has abandoned its perimeter access controls and segmented subnets, what will stop the worm from spreading? If that same organization is subjected to a denial of service attack, how will victim hosts on a "de-perimeterised" network defend themselves?
A principle of security that will not disappear is defense-in-depth. Hosts should be self-reliant and survivable, and they should function within perimeters, however porous various technologies may make those perimeters seem.
Other stories on the Jericho Forum can be found here, here, here, and here. Those needing a Biblical refresher to appreciate the significance of the name "Jericho" might find this link useful.
Monday, February 21, 2005
I will probably never read Mapping Security since it is a non-technical book for managers, and my reading list is stacked for the next year. I want to mention it here, however, because it is unique. Author Tom Patterson presents a global survey of doing security work in a variety of countries. I know of no other book like this, and I think it would be invaluable for managers of multinational corporations, international salespeople, and globe-trotting consultants. I do not fit into any of those categories. If I wanted to know how to conduct business outside the United States, I would definitely read Patterson's book.
Saturday, February 19, 2005
Politicians are getting angry, according to AP: "On Wednesday, Sen. Dianne Feinstein, D-Calif., called for hearings on her proposed national version of the California law, while Sen. Bill Nelson, D-Fla., asked federal regulators Friday to oversee data-brokering companies the same way they do other companies that handle financial and medical records. New York state legislator James Brennan asked his state to suspend an $800,000 ChoicePoint contract until the company agreed to warn any New York residents whose data might have been exposed."
Suspend until ChoicePoint sends a letter? How about cancelling the contract instead?
Update: Check out this great quote by Adam Shostack:
I hope Richard, at TaoSecurity, takes Choicepoint to IDS kindergarden.
Now I see that Mr. Gilligan has won the SC Magazine US Editors Award. Ostensibly Mr. Gilligan was given this award because he is working to standardize Microsoft software deployed across the Air Force. I would rather have seen him win the award for making a bold, and more correct, decision to implement a phase-out of Microsoft software. Unfortunately, it seems "no one is fired for buying Microsoft."
Incidentally, prior to becoming AF CIO in 2001, Mr. Gilligan served as CIO of the US Department of Energy -- the same DoE that has scored an F for computer security every year grades have been assessed, including 2000.
Contrast Mr. Gilligan's position with that of Richard Clarke, who is reported (also on Slashdot) to have said this about Microsoft at the RSA conference: "Given their record in the security area, I don't know why anybody would buy from them." I reported on a talk Mr. Clarke gave at RAID 2003, where he made very interesting and candid comments.
We have the Air Force barking up the wrong tree with new Microsoft purchases. The Navy and Marine Corps are stuck with a dysfunctional NMCI. I guess this leaves the Army to embark on a bold strategy that leaves the broken enterprise desktop computing model behind? Stay tuned.
The "Report Grading Elements" (.pdf) used the following major categories to grade agencies:
1. The percentage of the agency's programs and systems reviewed in FY04 by CIOs and IGs, including contractor operations or facilities.
2. The degree to which agency program officials and the agency CIO have used appropriate methods to ensure that contractor provided services or services provided by another agency are adequately secure and meet policy requirements.
3. The degree to which the agency used the NIST self-assessment guide or equivalent methodology to conduct its reviews.
4. The degree to which the agency developed Plans of Action and Milestones (POA&Ms) for each significant deficiency identified in FY04.
5. The degree to which the agency developed, implemented, and managed an agency-wide POA&M process.
6. Certification and accreditation topics.
7. The degree to which the CIO implemented agency-wide policies requiring detailed, specific security configurations, and the degree to which those configurations are implemented.
8. Incident detection, response, and reporting topics.
9. The CIO has ensured security training and awareness of all employees, including contractors and those with significant IT Security responsibilities.
10. The progress the agency has made to develop an inventory of major IT systems.
How were grades assigned? The Grading Methodology (.pdf) says:
"The Committee's computer security grades are based on information contained in the Federal Information Security Management Act (FISMA) reports from agencies and Inspectors General (IG) for fiscal year 2004. On December 17, 2002, the President signed into law the Electronic Government Act. Title III of that Act is the FISMA. FISMA lays out the framework for annual IT security reviews, reporting and remediation planning at federal agencies. FISMA requires that agency heads and IGs evaluate their agencies computer security programs and report the results of those evaluations to the OMB in September of each year along with their budget submissions. FISMA also requires that agency heads report the results of those evaluations annually to the Congress and the Government Accountability Office."
Notice that these grades do not reflect the effectiveness of any of these security measurements. An agency could be completely 0wn3d (compromised in manager-speak) and it could still receive high scores. I imagine it is difficult to grade effectiveness until a common set of security metrics is developed, including ways to count and assess incidents.
Here are the grades from previous years, courtesy of Homeland Security IntelWatch:
Friday, February 18, 2005
A route server is a router which peers with BGP routers for the purpose of letting researchers and others look at routing tables. For example, if you connect to a route server, you may be able to get a BGP summary like this:
route-server>sh ip bgp summary
BGP router identifier 220.127.116.11, local AS number 1838
BGP table version is 4152117, main routing table version 4152117
153391 network entries and 306780 paths using 28990811 bytes of memory
56813 BGP path attribute entries using 3182032 bytes of memory
28694 BGP AS-PATH entries using 820436 bytes of memory
21 BGP community entries using 520 bytes of memory
0 BGP route-map cache entries using 0 bytes of memory
0 BGP filter-list cache entries using 0 bytes of memory
BGP activity 370101/893167614 prefixes, 830387/523607 paths, scan interval 60 secs
Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd
18.104.22.168 4 17233 1504269 122790 4152117 0 0 6w0d 153390
22.214.171.124 4 17233 1599161 122790 4152117 0 0 6w0d 153390
126.96.36.199 4 64512 36971 443590 0 0 0 4w1d Active
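The neighbor table at the end of that output is easy to post-process. A rough sketch in Python; the column layout is assumed from the sample above, and a real parser would also need to handle Cisco's occasional line wrapping for long neighbor addresses:

```python
def parse_neighbor(line):
    """Parse one row of 'show ip bgp summary' neighbor output.

    Columns (whitespace-delimited, per the sample output):
    Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd
    """
    f = line.split()
    return {
        "neighbor": f[0],
        "as": int(f[2]),
        "up_down": f[8],
        # Either a prefix count (established session) or a state like "Active"
        "state_pfxrcd": f[9],
    }

row = parse_neighbor(
    "18.104.22.168 4 17233 1504269 122790 4152117 0 0 6w0d 153390"
)
print(row["as"], row["state_pfxrcd"])  # 17233 153390
```

Note the last column does double duty: a number means the session is established and shows prefixes received, while a word like "Active" (as in the third neighbor above) means the session is down and the router is still trying to connect.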
A Looking Glass is an interface that allows a researcher or others to see advertised routes to reach a specified IP. For example, visit the Qwest Looking Glass and see how to reach 188.8.131.52 (origin2.microsoft.com). Here are the results:
Route info for 184.108.40.206 from Atlanta
sh ip bgp 220.127.116.11
BGP routing table entry for 18.104.22.168/18, version 95828332
Paths: (3 available, best #1, table Default-IP-Routing-Table)
Not advertised to any peer
22.214.171.124 (metric 8163) from 126.96.36.199 (188.8.131.52)
Origin IGP, metric 601, localpref 80, valid, internal, best
Community: 209:888 209:889
Originator: 184.108.40.206, Cluster list: 220.127.116.11, 18.104.22.168
22.214.171.124 (metric 8163) from 126.96.36.199 (188.8.131.52)
Origin IGP, metric 601, localpref 80, valid, internal
Community: 209:888 209:889
Originator: 184.108.40.206, Cluster list: 220.127.116.11, 18.104.22.168
22.214.171.124 (metric 8163) from 126.96.36.199 (188.8.131.52)
Origin IGP, metric 601, localpref 80, valid, internal
Community: 209:888 209:889
Originator: 184.108.40.206, Cluster list: 220.127.116.11, 18.104.22.168
Using the route server we queried earlier, we get that system's perspective on reaching 22.214.171.124:
route-server>sh ip bgp 126.96.36.199
BGP routing table entry for 188.8.131.52/18, version 110936
Paths: (2 available, best #2, table Default-IP-Routing-Table)
Not advertised to any peer
17233 7018 8075 8070, (received & used)
184.108.40.206 from 220.127.116.11 (18.104.22.168)
Origin IGP, localpref 100, valid, external
Community: 7018:5000 17233:666 17233:1002 17233:7018
17233 7018 8075 8070, (received & used)
22.214.171.124 from 126.96.36.199 (188.8.131.52)
Origin IGP, localpref 100, valid, external, best
Community: 7018:5000 17233:666 17233:1001 17233:7018
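The route server's choice of "best" among those paths can be sketched as a comparison. This is a heavily simplified, hypothetical version of BGP best-path selection, covering only attributes visible in the output above (the real algorithm has many more tiebreakers, such as weight, origin type, and router ID):

```python
def best_path(paths):
    """Pick the best path: highest local preference wins, then shortest
    AS path, then lowest MED. A small subset of real BGP selection."""
    return min(
        paths,
        key=lambda p: (-p["localpref"], len(p["as_path"]), p["med"]),
    )

# Two hypothetical candidate paths to the same prefix
paths = [
    {"localpref": 80, "as_path": [7018, 8075, 8070], "med": 601},
    {"localpref": 100, "as_path": [17233, 7018, 8075, 8070], "med": 0},
]
print(best_path(paths)["localpref"])  # 100
```

This also explains the looking glass output: when local preference and AS-path length tie, as in the three internal paths Qwest showed, the router falls back to later tiebreakers to mark exactly one path "best."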
Possibly one of the coolest sites I've seen is BGPlay. As the site describes itself, "BGPlay is a Java application which displays animated graphs of the routing activity of a certain prefix within a specified time interval. Its graphical nature makes it much easier to understand how BGP updates affect the routing of a specific prefix than by analyzing the updates themselves." If you want to see how BGP and the "Big Internet" work, try out this Java applet in a Java-enabled browser.
Ref: Who owns an autonomous system number.
"I eagerly anticipated reading Jeanna Matthews' Computer Networking: Internet Protocols in Action (CN:IPIA). I am always looking for good networking books to recommend to people asking how to enter the digital security field. I am pleased to report that CN:IPIA is an excellent, hands-on, packet-oriented introduction to networking, suitable for all entry-level analysts. Even those with several years of experience may learn a trick or two, as I did."
This is a great book. I also learned that we can freely download copies of certain IEEE 802 standards from Get IEEE 802, like the ubiquitous 802.3 CSMA/CD standard. These are hundreds of pages long, and really only useful to hardware and protocol developers. However, if you need to reference an authoritative source, you can't beat these documents.
Thursday, February 17, 2005
In contrast, the Sun Ray does not run a conventional operating system. It doesn't run embedded Windows, Linux, or Solaris. There is enough logic on the Sun Ray to support a TCP/IP stack and display graphics. That's it. All of the work is done on the Sun Ray Server. With version 3.0, the server can run on Solaris or several Linux distros. I personally plan to run Red Hat or Fedora. You can confirm my claims by reading the Sun Ray Overview .pdf.
You could argue that the Sun Ray is running some sort of operating system, but I would counter by saying it's much more limited than Windows CE or Linux. Simple = more secure. Microsoft publishes security patches and service packs for Windows CE and Windows XP Embedded. Sun only publishes patches for its Sun Ray Server software. With a Wyse terminal, you may find yourself needing to patch their so-called "thin client." Patching dozens, hundreds, or thousands of "thin clients" sounds the same as patching general-purpose PCs. With the Sun Ray, you let a real thin client sit on a user's desk while you only patch the Sun Ray server.
I'm not arguing the Sun Ray is the answer to everyone's problems, but I do see it as several steps in the right direction.
Update: I just read Network Computing's review of "remote-display servers." These include products like Citrix Metaframe. Unfortunately, NWC conflates the terms "server-based computing" and "thin clients" throughout the article. Here's the deal: all of the so-called "thin clients" in this story are PCs running Windows 2000:
"For our performance tests, we set up 30 workstations running client software to connect to each of our five display servers. These workstations were all members of our test Active Directory domain, and each ran Windows 2000 Professional."
These workstations run special clients to connect to centralized servers running applications like Microsoft Office 2003 and so on. So, instead of having to install Office on every PC, you use the copy on the central server. This is a step in the right direction, but I would encourage NWC and others to avoid the term "thin client" when talking about these products and instead focus on the term "remote-display computing." These Windows 2000 desktops aren't thin at all; you still need to patch and secure Windows 2000 on every box.
Wednesday, February 16, 2005
This should have been done ten years ago when I was using Windows for Workgroups 3.11 as an Air Force lieutenant. This approach is fighting the last war, since it relies on running hundreds of thousands of personal computers with general purpose operating systems. All of these systems will still need applications installed, and those apps and the OS will have to be patched, updated, etc.
Instead of running PCs, .mil and .gov should adopt centrally managed thin clients. (No, I do not work for Sun, nor do I receive any compensation for pushing their Sun Rays!) Instead of wasting time shoring up a flawed PC-centric computing model, the military and government should run screaming away from Windows on PCs and embrace thin clients. I don't mean run an embedded Windows-based client either, like the Wyse terminals.
Expect to see more on Sun's thin clients as I deploy them here in my network operations center.
Tuesday, February 15, 2005
The same day I received this book, I also got the new issue of Cisco's IP Journal. This is a free quarterly newsletter I recommend every networking professional read. In a dash of Police-esque synchronicity, the first article is by Douglas Comer and introduces readers to network processors. I am looking forward to reading Prof. Comer's Network Systems Design with Network Processors, Agere Version. His article alludes to a version of that book for the Intel 2xxx family of network processors.
This is the first time I recall a vendor (at least Microsoft) denying access to a service because a user is running vulnerable software. This would be like refusing to let a person browse the Web because their version of Internet Explorer is too old, or refusing to let them check mail because Outlook is out-of-date. This is a form of "network admission control" (Cisco-speak) or "network access protection" (Microsoft-speak) taken to a whole new level. I hope to see more of this in the future. Of course, I would prefer all of this to be transparent to users who don't care. I would much rather have everyday email- and Web-checking users running centrally managed thin clients.
Update: According to this Microsoft Security Response Center Blog entry, "all 150 million MSN Messenger users worldwide are now updated and no longer subject to exploitation from this vulnerability. It was a big decision to make the upgrade mandatory in such a short period of time, but we collectively decided that the small inconvenience of having customers upgrade was the right thing to do to help protect them."
ChoicePoint claims the data was stolen through 50 fake companies that were set up to access the data. MSNBC says "The incident was discovered in October, when ChoicePoint was contacted by a law enforcement agency investigating an identity theft crime. In that incident, suspects had posed as a ChoicePoint client to gain access to the firm's rich consumer databases."
MSNBC also reports that ChoicePoint "says it has 10 billion records on individuals and businesses, and sells data to 40 percent of the nation's top 1,000 companies. It also has contracts with 35 government agencies, including several law enforcement agencies."
This is the same ChoicePoint that MSNBC profiled last month. In that story company vice president James A. Zimbardi said "We do act as an intelligence agency, gathering data, applying analytics."
If this private intelligence agency is going to collect and publish my personal information, it had better be held to a high standard. I bet that California residents aren't the only Americans affected by this incident. I have no insider information, but I expect to hear more details in the future.
This story comes on the heels of a Washington Post report that government contractor SAIC suffered a physical break-in at a San Diego facility on 25 January 2005. Thieves stole computers "containing the Social Security numbers and other personal information about tens of thousands of past and present company employees." Aside from this buried announcement, the reason we know of this intrusion is the California law requiring disclosure to those affected. In SAIC's case, that is 45,000 current and former employees.
Both of these incidents indicate that California's disclosure law needs to be expanded to the Federal level. How many other organizations are leaking personal data without our knowledge?
These two cases also demonstrate my security mantra that prevention eventually fails. Therefore, we need to have robust detection and response mechanisms in place. The best detection mechanism for an individual may be a service that provides access to your credit report (for a fee). This allows you to monitor access to your credit report and spot potentially fraudulent activity. Consumers in certain western US states are already entitled to an annual free credit report from each of the three credit bureaus. Check this Federal Trade Commission site for more details. It looks like those of us in the northeast will have to wait until 1 September 2005.
Once available, however, it looks like one could order one credit report from each bureau per year. It might be a good strategy to order one from Experian in, say, January, another from Equifax in May, and the third from TransUnion in September. The following year, repeat the cycle, in the same order. This strategy provides a look at your credit report every four months, as opposed to once per year.
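The rotation described above is simple enough to sketch in code. This is just an illustration of the staggered-request idea; the bureau names and four-month spacing come from the text, while the start year and date handling are invented for the example.

```python
from datetime import date

# Bureaus in the rotation order suggested above (January, May, September).
BUREAUS = ["Experian", "Equifax", "TransUnion"]

def report_schedule(start_year, start_month=1, years=2):
    """Yield (bureau, date) pairs spaced four months apart, cycling
    through the three bureaus so a free report arrives every four
    months instead of once per year."""
    schedule = []
    year, month = start_year, start_month
    for i in range(3 * years):
        schedule.append((BUREAUS[i % 3], date(year, month, 1)))
        month += 4
        if month > 12:
            month -= 12
            year += 1
    return schedule

for bureau, when in report_schedule(2006):
    print(bureau, when.isoformat())
```

Running this shows the same bureau order repeating each year, with exactly four months between requests.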
The only response strategy is to follow the Federal Trade Commission's identity theft advice.
Monday, February 14, 2005
"'Google Hacking for Penetration Testers' (GHFPT) should be a wake-up call for organizations that consider 'information leakage' a theoretical problem. 'Information leakage' refers to the unintentional disclosure of sensitive information to public forums, like the Web. Security staff can use the tools and techniques outlined in Johnny Long's GHFPT to assess the degree of information leakage affecting their organizations. They can then propose, implement, and test remedies. When Google says they are clean, they can be reasonably assured they are."
I recommend visiting the author's site at johnny.ihackstuff.com to download his Shmoocon slides. They are a good overview of the book.
First up is Beginning Perl, 2nd Ed by James Lee and published by Apress. James also co-wrote Hacking Linux Exposed, 2nd Ed, which I enjoyed. I do not plan to read this book and become a Perl guru. Instead, I hope to become familiar enough with Perl to understand applications that use the language. Oinkmaster, the Snort rules update script, is one example.
My plan to start seriously learning Python begins with Practical Python by Magnus Lie Hetland and published by Apress. I gained an introduction to Python when I read Learning Python, 2nd Ed by Mark Lutz. Two subjects which were never really addressed in that book, however, were accepting user input and network programming. I hope this Python book and those that follow help me to put Python to work.
My next Python book is Dive Into Python by Mark Pilgrim and published by Apress. This book expects readers to have some knowledge of programming, so there is less hand-holding than an introductory book might have. I am reading this and other Python books because the language seems like a good way to accomplish programming tasks that don't require the low-level bit handling power of C.
My last Python reference is Foundations of Python Network Programming by John Goerzen and also published by Apress. By now you should see I think Apress is bringing a lot of helpful programming texts to the world. I intend to read this book to learn how to write client-server networking programs.
My last book on interpreted languages is Practical Programming in Tcl and Tk, 4th Ed by Brent Welch, Ken Jones, and Jeffrey Hobbs, and published by PHPTR. I'd like to gain some familiarity with Tcl/Tk because it's the language in which Sguil is written. I would like to contribute more to Sguil than the few modifications I've already submitted.
Next is Beginning C, 3rd Ed by Ivor Horton and published by Apress. C is everywhere, from operating systems to security applications like Snort. As with Perl, I don't expect to read this book and become a C wizard. I already gained a passing familiarity with C by reading Stephen Prata's C Primer Plus, 4th Ed. I hope to read this new C book and improve my ability to understand other people's C, and perhaps make tweaks if needed.
I have similar hopes for Practical C Programming, 3rd Ed by Steve Oualline and published by O'Reilly. I actually started to read this book before any other book on C. I became frustrated when I found some of the exercises required knowledge of programming topics not yet introduced in the book. With a little more C understanding, I think I could complete the exercises and gain additional insights into C.
If you thought learning C would be tough, try a book on assembly like Professional Assembly Language by Richard Blum and published by Wrox, a Wiley imprint. I am definitely not reading this book to become an assembly programmer. I am also not reading the book to modify assembly produced by the compiler, as is the stated goal of the text. Rather, I frequently encounter assembly when looking at exploit code. I would like to be able to follow what the code is doing, and thereby improve my understanding of the enemy's capabilities.
The next series of books builds upon the programming knowledge gained from the previous titles. I start with Exploiting Software by Greg Hoglund and Gary McGraw, published by Addison-Wesley. This is a very popular book, but I have held off reading it until I have the necessary programming background to really appreciate it. This is in some ways the second book in a series on security programming; the first was Building Secure Software.
The next book in the attacking software category is Buffer Overflow Attacks by James C. Foster, et al, published by Syngress. I find it interesting to see an entire book devoted to this class of attack. I am looking forward to gaining a good understanding of this sort of exploit. The book features several strong contributing authors.
My last book on attacking software is the very popular Shellcoder's Handbook by Jack Koziol, et al, published by Wiley. This book is similar to Exploiting Software; I prefer not to read it until I have a better understanding of C and assembly. The book featured a number of zero-day vulnerabilities that were not fixed until after its publication.
I plan to move from attacking software to defending it by reading Writing Secure Code, 2nd Ed by Michael Howard and David C. LeBlanc, published by Microsoft Press. If anyone needs to read this book, it is certainly Microsoft. Whereas Building Secure Software was more UNIX-oriented, this book supposedly addresses Windows vulnerabilities.
I plan to temporarily leave the security world behind once I start reading the Pocket Guide to TCP/IP Sockets (C Version) by Michael J. Donahoo and Kenneth L. Calvert, published by Morgan Kaufmann. This is a short book, but it provides an introduction to socket programming in C. The authors assume some command of C, so my earlier reading should prepare me.
Next I hope to read Understanding UNIX/LINUX Programming: A Guide to Theory and Practice by Bruce Molay, published by PHPTR. I would like to gain a general appreciation for programming in the UNIX environment when reading this book. I am not planning to hack any kernels or userland applications, but I want to know more about what is happening under the hood of my UNIX systems.
The next book is one that some people thought might never be published. It's Unix Network Programming, Vol. 1: The Sockets Networking API, 3rd Ed, by the late W. Richard Stevens, and updated by Bill Fenner and Andrew M. Rudoff, published by Addison-Wesley. When Richard Stevens passed away in late 1999, the world lost an exceptionally talented author and person. This book is an update to his 2nd edition, and I look forward to reading it.
The last book waiting to be read on my bookshelf is BSD Sockets Programming from a Multi-Language Perspective by M. Tim Jones, published by Charles River Media. I think this is a good way to end my programming reading, because it shows how to accomplish network programming tasks in a comparative manner. Jones covers socket programming in C, Java, Ruby, Perl, Python, and Tcl. While I will not have any Java or Ruby experience, I expect to learn a lot by comparing the various approaches for the languages with which I am somewhat familiar.
I have other unread books on my shelf, but these are the ones I currently possess and plan to read. My Amazon.com Wish List shows over 30 other titles I hope to acquire in the coming months (or probably years). At some point I will integrate them into my upcoming reading list, or just do individual pre-reviews as I acquire them. Stay tuned. :)
Sunday, February 13, 2005
Update: It must be confusing to work for NetSec. One minute you're working for MCI, the next you're working for Verizon!
Friday, February 11, 2005
1. What is the cheapest switch you've found that offers a SPAN port?
2. Is anyone interested in writing a chapter providing an overview of peer-to-peer protocols? I have been unable to contact the subject matter expert I hoped would contribute this section to my new book. I am looking for someone with experience detecting, interpreting, and controlling peer-to-peer protocols on internal networks. I am interested in providing the reader the following:
- Overview of general p2p principles and networks
- Discussion of popular p2p implementations
-- Networks and clients
-- General analysis of packet traces via Ethereal or Tethereal or Tcpdump captures (save captures for inclusion in book, if possible)
- Ways to detect p2p activity
- Ways to control (but not eliminate) p2p on internal networks; in other words, allow BitTorrent for downloading .iso's, but don't let it consume too much bandwidth
- Other topics you find relevant and interesting
I recommend responding via comment for the first question, and emailing taosecurity at gmail dot com for the second. I've just sent an email to the guys at Slyck.com to see if they'd like to help.
Update: I found this listing of switches reportedly offering mirror ports at Colasoft.
Thursday, February 10, 2005
"[T]he search by the dog into, effectively, the entire contents of a closed container inside a locked trunk, without probable cause, was 'reasonable' even though the driver and society would consider the closed container 'private' because the search only revealed criminal conduct.
The same reasoning could easily apply to an expanded use of packet sniffers for law enforcement."
Since Rasch is a Senior Vice President and the Chief Security Counsel (i.e., a lawyer) at Solutionary Inc., he may be on to something. The comments on Mark's article by those not trained as lawyers are in some cases amusing. He responds to several of them.
A few people (here, here, and here) demonstrate an understanding of the difference between a logo and a mascot and have been brave enough to speak up.
Unfortunately, the FUD is already flying. Kon Wilms started the Help Save the FreeBSD Mascot online petition and has enjoyed a healthy number of sign-ups. I've started a Save the FreeBSD Mascot and Create a Logo petition to counter Mr. Wilms' effort. I'm not sure if it will be able to offset his call to "send a message to the FreeBSD Project administrators that they keep our mascot," even when we know the mascot will not change. According to the PetitionOnline FAQ, I might not see my petition posted until tomorrow or later.
My petition text says the following:
"The FreeBSD Project will soon officially release details of a contest to design the group's first-ever logo:
FreeBSD does not currently have a logo; it only has a mascot, the popular "daemon" named Beastie.
The logo will complement -- not replace or remove -- the existing "Beastie" FreeBSD mascot. Further, having a logo independent of the mascot will allow wider, unencumbered promotion of FreeBSD. The Beastie creator, Marshall Kirk McKusick, asserts copyright over the BSD Daemon:
This petition calls on FreeBSD users to express their support for the creation of an official FreeBSD logo. We make this call to counter the "Help Save the FreeBSD Mascot" petition, which has confused users by implying that they need to "send a message to the FreeBSD Project administrators that they keep our mascot." People signing that petition are being misled. They rightly fear the removal of Beastie the mascot -- which won't happen -- but are sending the wrong message.
Beastie fans have nothing to fear! By signing this petition, you show the FreeBSD Project your support for the FreeBSD logo contest and recognize Beastie is not going to disappear."
When the petition is active, I'll update this site with a link to it.
Tuesday, February 08, 2005
Feb. 23 newvers.sh starts to say 5.4-PRERELEASE
Mar. 2 RELENG_5 code freeze begins
Mar. 4 Public test release build called 5.4-PRERELEASE
Mar. 16 Branch RELENG_5_4, unfreeze RELENG_5
Mar. 18 5.4-RC1
Mar. 25 5.4-RC2
Apr. 4 5.4-RELEASE
You can watch the schedule and open issues pages as the release engineering process continues.
Monday, February 07, 2005
"'Internet Denial of Service' (IDOS) is an excellent book by expert authors. IDOS combines sound advice with a fairly complete examination of the denial of service (DoS) problem set. Although the authors write from the DoS point of view, as a network security monitoring advocate I found myself agreeing with many of their insights. Since there are no other books dedicated to DoS, I was very pleased to find this one is a powerful resource for managers and technicians alike."
The "RST scan" controversy mentioned in the review refers to my paper Interpreting Network Traffic. I discussed the issue in The Tao as well.
Two interesting projects I intend to research further are D-WARD and DefCOM.
Sunday, February 06, 2005
I started the day in a briefing by Joe Stewart and Mike Wisener of LURHQ. I attended this talk primarily because Joe has been my point of contact at LURHQ for contributing several malware analysis case studies to my next book, Extrusion Detection. LURHQ analysts do some of the best technical research publicly available, and they are going to share some original write-ups in the new book.
The title of the talk was "Binary Difference Analysis via Phase Cancellation." This didn't mean much to me initially, but I am definitely glad I attended. Joe and Mike explained a way to analyze compiled binaries. In other words, how does the code in compiled malware A resemble variant B? Alternatively, how does a patch to binary C change it into binary D?
Joe cited work by Halvar Flake of SABRE Security on function signature plug-ins for IDA Pro and Todd Sabin of Bindview on instruction comparison. Joe and Mike devised a method which takes advantage of the fact that adding two waveforms together, when they are 180 degrees out of phase, results in cancellation. The top "wave equation" in the figure at right shows this property. Adding two identical waveforms together produces a new wave with twice the amplitude.
Joe and Mike use math to convert assembly instructions obtained from a compiled binary into a wave-like representation. They then take the binary they want to compare with the first and convert it into a wave-like form as well. They apply an algorithm that assigns higher relevancy values to so-called "worker" assembly commands (like CALL) and lower relevancy values to so-called "helper" assembly commands (like PUSH). Using their algorithm they compare the waveforms of the assembly language of the two binaries and display differences.
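A toy sketch can illustrate the cancellation idea. To be clear, this is not LURHQ's actual algorithm: the mnemonic weights, the worker/helper split, and the position-by-position comparison are all invented for illustration. The point is only that identical instruction "waveforms" cancel when one is inverted and summed with the other, while differences leave a nonzero residue.

```python
# Invented weights: "worker" instructions (CALL, JMP) count more than
# "helper" instructions (PUSH, MOV, POP), per the relevancy idea above.
WEIGHTS = {"CALL": 8.0, "JMP": 6.0, "MOV": 2.0, "PUSH": 1.0, "POP": 1.0}

def waveform(instructions):
    """Map a sequence of mnemonics to a weighted numeric waveform."""
    return [WEIGHTS.get(op, 0.5) for op in instructions]

def diff_offsets(a, b):
    """Sum waveform(a) with the inverted (180-degree) waveform(b).
    Identical positions cancel to zero; nonzero residue marks a diff."""
    wa, wb = waveform(a), waveform(b)
    return [i for i, (x, y) in enumerate(zip(wa, wb)) if x + (-y) != 0.0]

original = ["PUSH", "MOV", "CALL", "POP"]
patched  = ["PUSH", "MOV", "JMP",  "POP"]   # CALL replaced by JMP
print(diff_offsets(original, patched))       # only offset 2 differs
```

Comparing a binary against itself cancels everywhere; comparing against a patched variant leaves residue only where the instructions changed.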
To implement this in code, the LURHQ analysts extended the free debugger OllyDbg to understand Perl plug-ins. Their OllyPerl API was inspired by IDA Python. They use OllyPerl to run their proof of concept tool WaveDiff. Joe and Mike showed how their tool found the deltas in a Microsoft binary before and after a patch was applied. The patch appeared to address the vulnerability stated by the vendor, but other differences were also apparent. This method shows a way to determine the extent of changes introduced by patches.
The LURHQ duo stated their tool is frustrated by obfuscation techniques. It depends on identical subroutine layouts and JMPing to different offsets is also a problem. They theorized that first using techniques developed by Sabin and Flake, like graph isomorphism, to isolate subroutines, and then detecting changes in those subroutines using phase cancellation, would make for a more robust solution.
After the LURHQ talk I went to the room where Marty Roesch was scheduled to talk about Snort developments. I ended up watching the end of a discussion of Renderman's WarPack. I can almost let this one speak for itself. The device is a mobile wireless tour-de-force. Look to see it in action at the next DefCon. I recommend keeping a safe distance. Renderman warned his audience that his biology has probably been adversely affected by having antennas pointed in certain directions.
Marty's Snort talk was a great look at future Snort developments involving target-based IDS (aka "T-IDS"). The problem is that traditional IDS lack context about the networks they monitor. In other words, how does the IDS know how an end host will handle traffic variations? For example, there are eight possible ways to reassemble IP fragments, with five commonly seen in real TCP/IP stacks. There are two ways to reassemble TCP streams, confounded by questions of handling resets in and out of the TCP window. TCP also features options involving maximum segment size, timestamps, inactivity timeouts, and PAWS (Protect Against Wrapped Sequences), to name a few. If the IDS reassembles IP or TCP fragments one way, and the end host uses another, an opportunity for evasion a la Ptacek and Newsham arises.
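A minimal sketch shows why these reassembly differences matter. Here two overlapping "fragments" are reassembled under an "old data wins" policy versus a "new data wins" policy; if the IDS follows one policy and the end host the other, they see different payloads. The fragment contents and the two policies are invented for the example -- real stacks exhibit the eight behaviors mentioned above.

```python
def reassemble(fragments, favor_new):
    """Reassemble (offset, data) fragments into one payload.
    favor_new=True lets later fragments overwrite overlapping bytes;
    favor_new=False keeps the first byte seen at each position."""
    buf = {}
    for offset, data in fragments:
        for i, byte in enumerate(data):
            pos = offset + i
            if favor_new or pos not in buf:
                buf[pos] = byte
    return "".join(buf[i] for i in sorted(buf))

# Bytes 5-9 overlap between the two fragments.
frags = [(0, "GET /index"), (5, "admin")]
print(reassemble(frags, favor_new=False))   # old data wins
print(reassemble(frags, favor_new=True))    # new data wins
```

The same wire traffic yields "GET /index" under one policy and "GET /admin" under the other, which is exactly the Ptacek and Newsham evasion opportunity.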
There are three ways to provide context to the IDS: active scanning, agents on monitored hosts, or passive identification. Active scanning suffers from a time lag problem, meaning scans 24 hours apart will miss activity by hosts active outside the scanning interval. Host agents are problematic because not all devices can accept them, like printers, embedded devices, and the like. (This is also a problem for network admission control schemes.) Passive identification has the best chance of offering real-time information for all hosts which appear online.
Implementing target-based IDS in Snort requires a large code overhaul. Marty is replacing the frag2 IP defragmenter with frag3 and the stream4 TCP reassembly module with stream5. Both are target-based and multiple instances of each can be run simultaneously. For example, one could run one instance of stream5 to handle TCP streams for Windows stacks and a second for Linux stacks.
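As a rough idea of what target-bound instances might look like in a snort.conf, consider the hypothetical fragment below. Since these preprocessors were still in development at the time of Marty's talk, the directive names and syntax here are guesses meant to convey the concept, not shipping configuration.

```
# Hypothetical syntax: bind reassembly policies to target subnets
preprocessor frag3_engine: policy linux, bind_to 10.1.1.0/24
preprocessor frag3_engine: policy windows, bind_to 10.1.2.0/24
preprocessor stream5_tcp: policy linux, bind_to 10.1.1.0/24
preprocessor stream5_tcp: policy windows, bind_to 10.1.2.0/24
```

The key point is that each subnet gets the reassembly behavior matching its hosts' actual TCP/IP stacks.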
These are the first steps in the drive to implement target-based traffic processing. The next will be application-based processing. The former has the IDS model TCP/IP stacks for various target hosts. The latter has the IDS model versions and features for various target applications, like IIS 4.0, 5.0, or 6.0, perhaps at varying patch levels.
In addition, Sourcefire is working on a new traffic data acquisition system, or 'daq_module'. This is an abstraction layer to make it easier for Snort to collect traffic other than Ethernet. For example, one could use the new module to grab traffic from a divert socket, or a custom NIC driver, or even the pflog output from the Pf firewall. The new Snort decoder will support various output formats, including "Snort classic" (the current version), plus Tcpdump and Tethereal.
Marty briefly previewed the new Snort rules syntax, called Snort Language 2.0. The new language is needed because the existing rule syntax is far overextended. Marty never anticipated that Snort rules would include stateful analysis, PCRE matching, and protocol decoding! The new rules will have an info - test - action structure. Beyond a new structure, the new rules will be less port-centric and instead be based on "discoverable attributes." When a packet arrives for inspection, the new Snort will check a "dispatcher," which will look for target attributes (like OS, applications, and so on) in an attribute table. Snort will apply "micropolicies" (for, say, Linux, or OpenSSH, or Win32, or IIS) to decide whether to alert on the packet or stream in question. Snort admins will still be able to write their own rules, along with the new micropolicies and attribute table. Sourcefire hopes users will buy a commercial RNA system to create the attribute table automatically.
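The dispatcher and micropolicy flow can be modeled in a few lines. Everything here (the attribute table layout, the micropolicy contents, the function names) is invented to illustrate the flow Marty described; it is not Snort code.

```python
# Hypothetical attribute table, of the kind RNA might populate:
# per-host OS and application attributes.
ATTRIBUTE_TABLE = {
    "10.1.1.5": {"os": "linux", "apps": ["openssh"]},
    "10.1.1.9": {"os": "win32", "apps": ["iis"]},
}

# Toy "micropolicies": one simple payload check per application.
MICROPOLICIES = {
    "iis":     lambda payload: b"cmd.exe" in payload,
    "openssh": lambda payload: b"SSH-1.5" in payload,
}

def dispatch(dst_ip, payload):
    """Look up the target's attributes, then apply only the
    micropolicies bound to that target. Returns the apps that fired."""
    attrs = ATTRIBUTE_TABLE.get(dst_ip, {})
    return [app for app in attrs.get("apps", [])
            if MICROPOLICIES.get(app, lambda p: False)(payload)]

attack = b"GET /scripts/..%c0%af../cmd.exe"
print(dispatch("10.1.1.9", attack))   # IIS target: policy fires
print(dispatch("10.1.1.5", attack))   # Linux/OpenSSH target: no alert
```

The same payload alerts against the IIS host but not the Linux host, which is the point of target-based inspection: fewer false positives for attacks the target cannot be vulnerable to.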
I asked Marty if or how his competitors were innovating. He said NFR continues to integrate OS fingerprinting into its IDS. He said Gartner reported IDS was "dead" because it took too much time to properly configure an IDS and interpret its results. Target-based IDS (especially combined with RNA) will make detecting suspicious and malicious activity much easier and more accurate. Marty cited my book with respect to my idea of "defensible networks" and mentioned Sourcefire is working on a Secure Sockets Layer (SSL) decryption device.
After Marty's talk I attended Johnny Long's Google hacking presentation. Johnny's Google-fu is impressive, and he is definitely a PowerPoint Ranger. In short, "all your information are belong to Google." I learned that you can remove image requests from views of Google's caches by appending "&strip=1" to the end of cache URLs. In other words, http://184.108.40.206/search?q=cache:5OTSvO8549oJ:www.taosecurity.com/+&hl=en displays www.taosecurity.com and pulls non-cached images from www.taosecurity.com, while http://220.127.116.11/search?q=cache:5OTSvO8549oJ:www.taosecurity.com/+&hl=en&strip=1 won't touch www.taosecurity.com and won't display images.
Johnny showed how to get around Google's attempts to suppress queries for the PHPNuke admin.php vulnerability. If you query for "inurl:admin.php" you see an error screen. If you change the case of, say, the first P, you get the results you want. Regarding what you can find with Google, the answer is everything -- Social Security numbers, credit cards, driver's licenses, etc. It's often not your fault, but the fault of ignorant or naive local governments, police departments, and others who collect and post such information without thinking of the consequences.
Johnny also showed how to use Sensepost.com's BiLE tool to perform link and relationship analysis of Web sites. You really need to read Johnny's new book, pictured earlier, to get the full idea.
Shmoocon concluded with the publication of an advisory on International Domain Name spoofing. Shmoo released this advisory to spur browser developers to fix the problem. This is a phisher's dream, so I hope Web browsers fix their IDN support soon.
On a related note, you might enjoy reading Security Best Practice: Host Naming & URL Conventions by Gunter Ollmann. He offers advice on naming hosts in order to make life more difficult for phishers.
Overall I very much enjoyed Shmoocon. I intend to attend next year and perhaps submit a proposal to talk about packet radio or another offbeat topic. I felt the theme of this con, in some respects, was "cross training." Beyond traditional host- and network-oriented presentations, I saw briefings on RF, IR, and using an audio noise reduction technique (phase cancellation) to find differences in compiled binaries. Talks on lock picking were offered. I think this cross-pollination is healthy for the digital security community because we are exposed to ideas outside our mental comfort zone.
For another look at Shmoocon, check out this Secureme Blog entry.
The day started with a rant by Riley "Caezar" Eller on the state of security. Caezar wrote Bypassing MSB Data Filters for Buffer Overflow Exploits on Intel Platforms and works for CoCo Corp. (CoCo appears to stand for Connection Optimizing Cryptographic Operator.)
He pleaded for someone to invent a new Internet and asked why other speakers at security conventions do not make similar requests. Such pleas are similar to those who call for replacement of gasoline-powered automobiles with hydrogen-powered vehicles. It's easy to create an end-user product like a hydrogen-powered car, assuming the extra costs could be reduced. However, who will finance and build the infrastructure that makes such a vehicle worth buying and driving? Therefore, we see more success with incremental products like the Toyota Prius, which leverage existing fuel infrastructure while offering a more fuel-efficient power system.
I next attended Roger Dingledine's talk on Tor, an anonymous communication system that implements onion routing. I saw Aaron Higbee of the SecureMe blog there. Roger's presentation was excellent. He was one of the few speakers who managed to speak clearly, at an understandable speed, in complete and thoughtful sentences, without wasting words. He said Tor has about 15,000 users currently with 100-200 routing servers. Tor is not just for hiding a client's Internet identity; servers can be hidden as well. For example, Bloggers Without Borders is operating a "hidden" Web server. Tor is not peer-to-peer in the sense that all clients are also servers. Rather, you can be a Tor user without carrying other people's traffic.
Next I attended a presentation by Lance James and Lucky225 of Secure Science Corporation. They demonstrated that telephone security was, is, and will continue to be broken. This talk was an eye-opener for me, since I don't spend any time on the voice side of the house. The pair showed how features of systems like Free World Dialup can be used to make free calls beyond the intended uses. They explained how Caller ID is completely worthless and trivially spoofed. Check VOIP-Info for an introduction to this world. They also mentioned K7.net, IPkall.com, Callwave.com, Ureach.com, Packet8.net, and the Kphone tool.
I had lunch with Andy Williams from Reuters (who wrote Hackers, Virus Writers Target Mobile Phones based on a Shmoocon briefing) and Marty Roesch, Snort creator and Sourcefire founder. (I saw Ron Gula of Dragon IDS and Tenable Security at the con, but couldn't find him for lunch.) Marty made an interesting comment on "intrusion prevention systems" (aka "layer 7 firewalls"). He said it is a commonly accepted security practice to implement access control via a "default deny, allow some" policy. Intrusion prevention systems completely break this best practice. They try to "deny some" and then they "allow all." The tragedy of the situation is compounded when organizations follow Gartner's advice to remove their intrusion detection systems and proxies (another "dead" security tool). Now we have exploit traffic that passed through the IPS with no way to audit or contain it. This argument is similar to my discussion Considering Convergence? where I recommend against collapsing access control and detection into one appliance.
I then saw Michael "Abadd0n" (zero or letter O, not sure) Lynn introduce radio frequency security issues. Mike wrote AirJack which he presented at Black Hat USA 2002. He is obviously a really smart guy, but he spent way too much time on RF theory and background issues. Even though he spoke extremely fast, he never really got to discuss anything very interesting. He was excited about the Universal Software Radio Peripheral and related GNU Radio projects.
A similar electro-magnetic theme followed, with a presentation on infrared hacking by Major Malfunction. I wish Mike's RF briefing had followed Major Mal's lead in terms of presentation style. Major Mal demonstrated how he completely owned the television systems in several hotels by figuring out the IR commands used by the TV remote. He also discussed brute-forcing and replaying codes to open garage doors, vending machines, auto alarms, and other devices. It seems IR is totally broken too.
The second-to-last talk for the day discussed a wireless IDS project by Laurent Butti and Franck Veysset from France Telecom. Their unnamed product is not yet open source and they do not have any public Web presence yet. (I saw the name "semper" in the title of the demo, so perhaps that is the name of their wireless IDS?) The Snort-Wireless project appears stalled, so it would be nice to see an alternative. They reminded me of an open source switch management project called Netdisco. An audience member piped up about the 3rd Generation Partnership Project (3GPP) and its support for embedding GPS coordinates in device messages.
I ended the day listening to David Hulton (aka H1kari) explain Field Programmable Gate Arrays. H1kari organizes Toorcon and wrote bsd-airtools. He builds embedded systems for Pico Computing and showed how to deploy FPGAs to create fast password-cracking systems. The boards he used are built by Xilinx. More information is available at OpenCores.org.