Tuesday, March 21, 2017

Cybersecurity Domains Mind Map

Last month I retweeted an image labelled "The Map of Cybersecurity Domains (v1.0)". I liked the way this graphic divided "security" into various specialties. At the time I did not do any research to identify the originator of the graphic.

Last night before my Brazilian Jiu-Jitsu class I heard some of the guys talking about certifications. They were all interested in "cybersecurity" but did not know how to break into the field. The domain image came to mind as I mentioned that I had some experience in the field. I also remembered an article Brian Krebs asked me to write titled "How to Break Into Security, Bejtlich Edition," part of a series on that theme. I wrote:

Providing advice on “getting started in digital security” is similar to providing advice on “getting started in medicine.” If you ask a neurosurgeon he or she may propose some sort of experiment with dead frog legs and batteries. If you ask a dermatologist you might get advice on protection from the sun whenever you go outside. Asking a “security person” will likewise result in many different responses, depending on the individual’s background and tastes.

I offered to help the guys in my BJJ class find the area of security that interests them and get started in that space. I thought the domains graphic might facilitate that conversation, so I decided to identify the originator so as to give proper credit.

It turns out that the CISO at Oppenheimer & Co., Henry Jiang, created the domains graphic. Last month at LinkedIn he published an updated Map of Cybersecurity Domains v2.0:

Map of Cybersecurity Domains v2.0 by Henry Jiang
If I could suggest a few changes for an updated version, I would try to put related disciplines closer to each other. For example, I would put the Threat Intelligence section right next to Security Operations. I would also swap the locations of Risk Assessment and Governance. Governance is closer to the Framework and Standard arena. I would also move User Education to be near Career Development, since both deal with people.

On a more substantive level, I am not comfortable with the Risk Assessment section. Blue Team and Red Team are not derivatives of a penetration test, for example. I'm not sure how to rebuild that section.

These are minor issues overall. The main reason I like this graphic is that it largely captures the various disciplines one encounters in "cybersecurity." I could point a newcomer to the field at this image and ask "does any of this look interesting?" I could ask someone more experienced "in which areas have you worked?" or "in which areas would you like to work?"

The cropped image at the top of this blog shows the Security Operations and Threat Intelligence areas, where I have the most experience. Another security person could easily select a completely different section and still be considered a veteran. Our field is no longer defined by a small set of skills!

What do you think of this diagram? What changes would you make?

Friday, March 17, 2017

Bejtlich Moves On

Exactly six years ago today I announced that I was joining Mandiant to become the company's first CSO. Today is my last day at FireEye, the company that bought Mandiant at the very end of 2013.

The highlights of my time at Mandiant involved two sets of responsibilities.

First, as CSO, I enjoyed working with my small but superb security team, consisting of Doug Burks, Derek Coulsen, Dani Jackson, and Scott Runnels. They showed that "a small team of A+ players can run circles around a giant team of B and C players."

Second, as a company spokesperson, I survived the one-of-a-kind ride that was the APT1 report. I have to credit our intel and consulting teams for the content, and our marketing and government teams for keeping me pointed in the right direction during the weeks of craziness that ensued.

At FireEye I transitioned to a strategist role because I was spending so much time talking to legislators and administration officials. I enjoyed working with another small but incredibly effective team: government relations. Backed by the combined FireEye-Mandiant intel team, we helped policy makers better understand the digital landscape and, more importantly, what steps to take to mitigate various risks.

Where do I go from here?

Twenty years ago last month I started my first role in the information warfare arena, as an Air Force intelligence officer assigned to Air Intelligence Agency at Security Hill in San Antonio, Texas. Since that time I've played a small part in the "cyber wars," trying to stop bad guys while empowering good guys.

I've known for several years that my life was heading in a new direction. It took me a while, but now I understand that I am not the same person who used to post hundreds of blog entries per year, and review 50 security books per year, and write security books and articles, and speak to reporters, and testify before Congress, and train thousands of students worldwide.

That mission is accomplished. I have new missions waiting.

My near-term goal is to identify opportunities in the security space which fit with my current interests. These include:
  • Promoting open source software to protect organizations of all sizes
  • Advising venture capitalists on promising security start-ups
  • Helping companies to write more effective security job descriptions and to interview and select the best candidates available
My intermediate-term goal is to continue my Krav Maga training, which I started in January 2016. My focus is the General Instructor Course process required to become a fully certified instructor. I will also continue training in my other arts, such as Brazilian Jiu-Jitsu. Krav, though, is the priority, thanks to the next goal.

My main instructor, Nick Masi (L) and his instructor, Eyal Yanilov (R)
My long-term goal is to open a Krav Maga school in the northern Virginia area in the fall of 2018. Accomplishing this goal requires completing the GIC process and securing a studio and students to join me on this new journey. I plan to offer private training, plus specialized seminars for other executives who feel burned out, or who seek self-defense or fitness. I will also offer classes tailored for kids and women, to meet the requirements of those important parts of our human family.

Anyone who has spoken with me about these changes has sensed my enthusiasm. I've also likely encouraged them to join me at my current Krav Maga school, First Defense in Herndon, VA. Tell them Richard sent you!

Change, while often uncomfortable, is a powerful growth accelerator. I am thankful that my family, and my wife Amy in particular, are so supportive of my initiatives.

If you would like to join me in any of these endeavors, please leave a comment here with your email address, or email me via taosecurity at gmail dot com. Best wishes to those remaining at FireEye!

Wednesday, March 15, 2017

The Missing Trends in M-Trends 2017

FireEye released the 2017 edition of the Mandiant M-Trends report yesterday. I've been a fan of this report since the 2010 edition, before I worked at the company.

Curiously for a report with the name "trends" in the title, this and all other editions do not publish the sorts of yearly trends I would expect. This post will address that limitation.

The report is most famous for its "dwell time" metric, which is the median (not average, or "mean") number of days an intruder spends inside a target company until he is discovered.
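The distinction between median and mean matters here: a single long-running intrusion can drag the mean far above the typical case. A toy illustration, using invented per-incident dwell times (not figures from any M-Trends report):

```python
import statistics

# Hypothetical dwell times (days) for five incidents; the values are
# invented for illustration, not taken from any M-Trends report.
dwell_times = [20, 35, 60, 99, 800]  # one long-running intrusion skews the mean

print(statistics.median(dwell_times))  # 60 -- robust to the outlier
print(statistics.mean(dwell_times))    # 202.8 -- dominated by the 800-day case
```

Reporting the median keeps one marathon intrusion from distorting the headline number.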

Each report lists the statistic for the year in consideration, and compares it to the previous year. For example, the 2017 report, covering incidents from 2016, notes the dwell time has dropped from 146 days in 2015 to 99 days in 2016.

The second most interesting metric (for me) is the split between internal and external notification. Internal notification means that the target organization found the intrusion on its own. External notification means that someone else informed the target organization. The external party is often a law enforcement or intelligence agency, or a managed security services provider. The 2016 split was 53% internal vs 47% external.

How do these numbers look over the years that the M-Trends report has been published? Inquiring minds want to know.

The 2012 M-Trends report was the first edition to include these statistics. I have included them for that report and all subsequent editions in the table below.

Year  Dwell Time (Days)  Internal (%)  External (%)
2011  416                 6            94
2012  243                37            63
2013  229                33            67
2014  205                31            69
2015  146                47            53
2016   99                53            47

As you can see, all of the numbers are heading in the right direction. We are finally into double digits for dwell time, but over 3 months is still far too long. Internal detection continues to rise as well. This is a proxy for the maturity of a security organization, in my opinion.
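The trend in the table can be computed directly. Here is a short script of my own (not something published in the reports) showing the year-over-year change in dwell time alongside the internal detection share:

```python
# Median dwell time (days) and internal detection share (%) by year,
# transcribed from the M-Trends figures in the table above.
data = {
    2011: (416, 6),
    2012: (243, 37),
    2013: (229, 33),
    2014: (205, 31),
    2015: (146, 47),
    2016: (99, 53),
}

years = sorted(data)
for prev, curr in zip(years, years[1:]):
    d0 = data[prev][0]
    d1, internal = data[curr]
    change = 100 * (d1 - d0) / d0
    print(f"{curr}: dwell {d1:>3} days ({change:+.0f}% vs {prev}), "
          f"internal detection {internal}%")
```

The biggest single-year improvements in dwell time came in 2012 and 2016.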

Hopefully future M-Trends reports will include tables like this.

Tuesday, March 14, 2017

The Origin of Threat Hunting

2011 Article "Become a Hunter"
The term "threat hunting" has been popular with marketers from security companies for about five years. Yesterday Anton Chuvakin asked about the origin of the term.

I appear to have written the first article describing threat hunting in any meaningful way. It was published in the July-August 2011 issue of Information Security Magazine and was called "Become a Hunter." I wrote it in the spring of 2011, when I was director of incident response for GE-CIRT. Relevant excerpts include:

"To best counter targeted attacks, one must conduct counter-threat operations (CTOps). In other words, defenders must actively hunt intruders in their enterprise. These intruders can take the form of external threats who maintain persistence or internal threats who abuse their privileges. Rather than hoping defenses will repel invaders, or that breaches will be caught by passive alerting mechanisms, CTOps practitioners recognize that defeating intruders requires actively detecting and responding to them. CTOps experts then feed the lessons learned from finding and removing attackers into the software development lifecycle (SDL) and configuration and IT management processes to reduce the likelihood of future incidents...

In addition to performing SOC work, CTOps requires more active, unstructured, and creative thoughts and approaches. One way to characterize this more vigorous approach to detecting and responding to threats is the term “hunting.” In the mid-2000s, the Air Force popularized the term “hunter-killer” for missions whereby teams of security experts performed “friendly force projection” on their networks. They combed through data from systems and in some cases occupied the systems themselves in order to find advanced threats. The concept of “hunting” (without the slightly more aggressive term “killing”) is now gaining ground in the civilian world.

2013 Book "The Practice of NSM"
If the SOC is characterized by a group that reviews alerts for signs of intruder action, the CIRT is recognized by the likelihood that senior analysts are taking junior analysts on “hunting trips.” A senior investigator who has discovered a novel or clever way to possibly detect intruders guides one or more junior analysts through data and systems looking for signs of the enemy. Upon validating the technique (and responding to any enemy actions), the hunting team should work to incorporate the new detection method into the repeatable processes used by SOC-type analysts. This idea of developing novel methods, testing them in the wild, and operationalizing them is the key to fighting modern adversaries."

The "hunting trips" I mentioned were activities that our GE-CIRT incident handlers -- David Bianco, Ken Bradley, Tim Crothers, Tyler Hudak, Bamm Visscher, and Aaron Wade -- were conducting. Aaron in particular was a driving force for hunting methodology.

I also discussed hunting in chapter 9 of my 2013 book The Practice of Network Security Monitoring, contrasting it with "matching" as seen in figure 9-2. (If you want to save 30% off the book at No Starch, use discount code "NSM101.")

The question remains: from where did I get the term "hunt"? My 2011 article stated, "In the mid-2000s, the Air Force popularized the term 'hunter-killer.'" My friend Doug Steelman, a veteran of the Air Force, NSA, and Cyber Command, provided a piece of the puzzle on Twitter. He posted a link to a 2009 presentation by former NSA Vulnerability and Analysis Operations (VAO) chief Tony Sager, a friend of this blog.

July 2009 Presentation by Tony Sager
In the mid-2000s I was attending an annual conference held by NSA called the Red Team/Blue Team Symposium, or ReBl for short. ReBl took place over a week's time at the Johns Hopkins University Applied Physics Lab in Laurel, MD. If you Google for the conference you will likely find WikiLeaks emails from the HBGary breach.

It was a mix of classified and unclassified presentations on network defense. During these presentations I heard the term "APT" for the first time. I also likely heard about the "hunt" missions the Air Force was conducting, in addition to probably hearing Tony Sager's presentation mentioning a "hunt" focus.

That is as far back as I can go, but at least we have a decent understanding of where I most likely first heard the term "threat hunting" used by practitioners. Happy hunting!

Monday, February 13, 2017

Does Reliable Real Time Detection Demand Prevention?

Chris Sanders started a poll on Twitter asking "Would you rather get a real-time alert with partial context immediately, or a full context alert delayed by 30 mins?" I answered by saying I would prefer full context delayed by 30 minutes. I also replied with the text at left, from my first book The Tao of Network Security Monitoring (2004). It's titled "Real Time Isn't Always the Best Time."

Dustin Webber then asked "if you have [indicators of compromise] IOC that merit 'real-time' notification then you should be in the business of prevention. Right?"

Long ago I decided to not have extended conversations over Twitter, as well as to not try to compress complex thoughts into 140 characters -- hence this post!

There is a difference, in my mind, between high-fidelity matching (using the vernacular from my newest book, The Practice of Network Security Monitoring, 50% off now with code RSAREADING) and prevention.

To Dustin's point, I agree that if it is possible to generate a match (or "alert," etc.) with 100% accuracy (or possibly near 100%, depending on the severity of the problematic event), i.e., with no chance or almost no chance of a false positive, then it is certainly worth seeking a preventive action for that problematic event. To use a phrase from the last decade, "if you can detect it, why can't you prevent it?"

However, there are likely cases where zero- or low-false positive events do not have corresponding preventive actions. Two come to mind.

First, although you can reliably detect a problem, you may not be able to do anything about it. The security team may lack the authority, or technical capability, to implement a preventive action.

Second, although you can reliably detect a problem, you may not want to do anything about it. The security team may desire to instead watch an intruder until such time that containment or incident mitigation is required.
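To make the distinction concrete, here is a hypothetical policy sketch (my own illustration, not any product's actual logic) showing how a reliably detected event might still not trigger prevention:

```python
def response_action(confidence: float, can_prevent: bool, watch_intruder: bool) -> str:
    """Map a detection to an action. Threshold and names are illustrative only."""
    if confidence < 0.99:
        return "alert"    # not high-fidelity enough to act on automatically
    if not can_prevent:
        return "alert"    # first case: no authority or capability to block
    if watch_intruder:
        return "monitor"  # second case: deliberately observe before containment
    return "block"        # reliable detection with a matching preventive action

# Reliable detection does not always mean prevention:
print(response_action(0.999, can_prevent=False, watch_intruder=False))  # alert
print(response_action(0.999, can_prevent=True, watch_intruder=True))    # monitor
print(response_action(0.999, can_prevent=True, watch_intruder=False))   # block
```

Only the last path ends in prevention; the other two are exactly the cases described above.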

This, then, is my answer to Dustin's question!

Sunday, February 12, 2017

Guest Post: Bamm Visscher on Detection

Yesterday my friend Bamm Visscher published a series of Tweets on detection. I thought readers might like to digest it as a lightly edited blog post. Here, then, is the first ever (as far as I can remember) guest post on TaoSecurity Blog. Enjoy.

When you receive new [threat] intel and apply it in your detection environment, keep in mind all three analysis opportunities: RealTime, Batch, and Hunting.

If your initial intelligence analysis produces high context and quality details, it's a ripe candidate for RealTime detection.

If analysts can quickly and accurately process events generated by the [RealTime] signature, it's a good sign the indicator should be part of RealTime detection. If an analyst struggles to determine if a [RealTime alert] has detected malicious activity, it's likely NOT appropriate for RealTime detection.

If [the threat] intelligence contains limited context and/or details, try leveraging Batch Analysis with scheduled data reports as a better detection technique. Use Batch Analysis to develop better context (both positive and negative hits) to identify better signatures for RealTime detection.

Hunting is the soft science of detection, and best done with a team of diverse skills. Intelligence, content development, and detection should all work together. Don't fear getting skunked on your hunting trips. Keep investing time. The rewards are cumulative. Be sure to pass Hunting rewards into Batch Analysis and RealTime detection operations in the form of improved context.

The biggest mistake organizations make is not placing emphasis outside of RealTime detection, and "shoe-horning" [threat] intelligence into RealTime operations. So called "Atomic Indicators" tend to be the biggest violator of shoe-horning. Atomic indicators are easy to script into signature driven detection devices, but leave an analyst wondering what he is looking at and for.

Do not underestimate the NEGATIVE impact of GOOD [threat] intelligence inappropriately placed into RealTime operations! Mountains of indiscernible events will lead to analyst fatigue and increase the risk of a good analyst missing a real incident.
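Bamm's routing of new intelligence across the three analysis opportunities could be summarized in a short sketch. This is my paraphrase of his tweets, not his tooling, and the labels are illustrative:

```python
def detection_tier(context_quality: str, fast_accurate_triage: bool) -> str:
    """Route a new indicator to RealTime, Batch, or Hunting (illustrative labels)."""
    if context_quality == "high" and fast_accurate_triage:
        return "RealTime"   # high context and analysts can process events quickly
    if context_quality in ("high", "medium", "low"):
        return "Batch"      # scheduled reports build context toward better signatures
    return "Hunting"        # no usable context yet; needs exploratory analysis

print(detection_tier("high", True))    # RealTime
print(detection_tier("low", False))    # Batch
print(detection_tier("unknown", False))  # Hunting
```

Note that even high-context intel falls back to Batch when analysts cannot triage its events quickly and accurately, which matches Bamm's warning about shoe-horning.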

Thursday, February 09, 2017

Bejtlich Books Explained

A reader asked me to explain the differences between two of my books. I decided to write a public response.

If you visit the TaoSecurity Books page, you will see two different types of books. The first type involves books which list me as author or co-author. The second involves books to which I have contributed a chapter, section, or foreword.

This post will only discuss books which list me as author or co-author.

In July 2004 I published The Tao of Network Security Monitoring: Beyond Intrusion Detection. This book was the result of everything I had learned since 1997-98 regarding detecting and responding to intruders, primarily using network-centric means. It is the most complete examination of NSM philosophy available. I am particularly happy with the NSM history appendix. It cites and summarizes influential computer security papers over the four decade history of NSM to that point.

The main problem with the Tao is that certain details of specific software versions are now very outdated. Established software like Tcpdump, Argus, and Sguil still functions much the same way, however, and the core NSM data types remain timeless. You would not be able to use the Bro chapter with modern Bro versions, for example. Still, I recommend that anyone serious about NSM read the Tao.

The introduction describes the Tao using these words:

Part I offers an introduction to Network Security Monitoring, an operational framework for the collection, analysis, and escalation of indications and warnings (I&W) to detect and respond to intrusions.  Part I begins with an analysis of the terms and theory held by NSM practitioners.  The first chapter discusses the security process and defines words like security, risk, and threat.  It also makes assumptions about the intruder and his prey that set the stage for NSM operations.  The second chapter addresses NSM directly, explaining why NSM is not implemented by modern NIDSs alone.  The third chapter focuses on deployment considerations, such as how to access traffic using hubs, taps, SPAN ports, or inline devices.  

Part II begins an exploration of the NSM “product, process, people” triad.  Chapter 4 is a case study called the “reference intrusion model.”  This is an incident explained from the point of view of an omniscient observer.  During this intrusion, the victim collected full content data in two locations.  We will use those two trace files while explaining the tools discussed in Part II.  Following the reference intrusion model, I devote chapters to each of the four types of data which must be collected to perform network security monitoring – full content, session, statistical, and alert data.  Each chapter describes open source tools tested on the FreeBSD operating system and available on other UNIX derivatives.  Part II also includes a look at tools to manipulate and modify traffic.  Featured in Part II are little-discussed NIDSs like Bro and Prelude, and the first true open source NSM suite, Sguil.

Part III continues the NSM triad by discussing processes.  If analysts don’t know how to handle events, they’re likely to ignore them.  I provide best practices in one chapter, and follow with a second chapter explicitly for technical managers.  That material explains how to conduct emergency NSM in an incident response scenario, how to evaluate monitoring vendors, and how to deploy a NSM architecture.

Part IV is intended for analysts and their supervisors.  Entry level and intermediate analysts frequently wonder how to move to the next level of their profession.  I offer some guidance for the five topics with which a security professional should be proficient: weapons and tactics, telecommunications, system administration, scripting and programming, and management and policy.  The next three chapters offer case studies, showing analysts how to apply NSM principles to intrusions and related scenarios.

Part V is the offensive counterpart to the defensive aspects of Parts II, III, and IV.  I discuss how to attack products, processes, and people.  The first chapter examines tools to generate arbitrary packets, manipulate traffic, conduct reconnaissance, and exploit flaws in Cisco, Solaris, and Microsoft targets.  In a second chapter I rely on my experience performing detection and response to show how intruders attack the mindset and procedures upon which analysts rely.

An epilogue on the future of NSM follows Part V.  The appendices feature several TCP/IP protocol header charts and explanations.  I also wrote an intellectual history of network security, with abstracts of some of the most important papers written during the last twenty-five years.  Please take the time to at least skim this appendix.  You'll see that many of the “revolutionary ideas” heralded in the press were in some cases proposed decades ago.

The Tao lists as 832 pages. I planned to write 10 more chapters, but my publisher and I realized that we needed to get the Tao out the door. ("Real artists ship.") I wanted to address ways to watch traffic leaving the enterprise in order to identify intruders, rather than concentrating on inbound traffic, as was popular in the 1990s and 2000s. Therefore, I wrote Extrusion Detection: Security Monitoring for Internal Intrusions.

I've called the Tao "the Constitution" and Extrusion "the Bill of Rights." These two books were written in 2004-2005, so they are tightly coupled in terms of language and methodology. Because Extrusion is tied more closely with data types, and less with specific software, I think it has aged better in this respect.

The introduction describes Extrusion using these words:

Part I mixes theory with architectural considerations.  Chapter 1 is a recap of the major theories, tools, and techniques from The Tao.  It is important for readers to understand that NSM has a specific technical meaning and that NSM is not the same process as intrusion detection.  Chapter 2 describes the architectural requirements for designing a network best suited to control, detect, and respond to intrusions.  Because this chapter is not written with specific tools in mind, security architects can implement their desired solutions regardless of the remainder of the book.  Chapter 3 explains the theory of extrusion detection and sets the stage for the remainder of the book.  Chapter 4 describes how to gain visibility to internal traffic.  Part I concludes with Chapter 5, original material by Ken Meyers explaining how internal network design can enhance the control and detection of internal threats.

Part II is aimed at security analysts and operators; it is traffic-oriented and requires a basic understanding of TCP/IP and packet analysis.  Chapter 6 offers a method of dissecting session and full content data to unearth unauthorized activity.  Chapter 7 offers guidance on responding to intrusions, from a network-centric perspective.  Chapter 8 concludes Part II by demonstrating principles of network forensics.

Part III collects case studies of interest to all types of security professionals.  Chapter 9 applies the lessons of Chapter 6 and explains how an internal bot net was discovered using Traffic Threat Assessment.  Chapter 10 features analysis of IRC bot nets, contributed by LURHQ analyst Michael Heiser. 

An epilogue points to future developments.  The first appendix, Appendix A, describes how to install Argus and NetFlow collection tools to capture session data.  Appendix B explains how to install a minimal Snort deployment in an emergency.  Appendix C, by Tenable Network Security founder Ron Gula, examines the variety of host and vulnerability enumeration techniques available in commercial and open source tools.  The book concludes with Appendix D, where Red Cliff Consulting expert Rohyt Belani offers guidance on internal host enumeration using open source tools.

At the same time I was writing Tao and Extrusion, I was collaborating with my friends and colleagues Keith Jones and Curtis Rose on a third book, Real Digital Forensics: Computer Security and Incident Response. This was a ground-breaking effort, published in October 2005. What made this book so interesting is that Keith, Curtis and I created workstations running live software, compromised each one, and then provided forensic evidence for readers on a companion DVD. 

This had never been done in book form, and after surviving the process we understood why! The legal issues alone were enough to almost make us abandon the effort. Microsoft did not want us to "distribute" a forensic image of a Windows system, so we had to zero out key Windows binaries to satisfy their lawyers. 

The primary weakness of the book in 2017 is that operating systems have evolved, and many more forensics books have been written. It continues to be an interesting exercise to examine the forensic practices advocated by the book to see how they apply to more modern situations.

This review of the book includes a summary of the contents:

• Live incident response (collecting and analyzing volatile and nonvolatile data; 72 pp.)
• Collecting and analyzing network-based data (live network surveillance and data analysis; 87 pp.)
• Forensic duplication of various devices using commercial and open source tools (43 pp.)
• Basic media analysis (deleted data recovery, metadata, hash analysis, “carving”/signature analysis, keyword searching, web browsing history, email, and registry analyses; 96 pp.)
• Unknown tool/binary analysis (180 pp.)
• Creating the “ultimate response CD” (response toolkit creation; 31 pp.)
• Mobile device and removable media forensics (79 pp.)
• On-line-based forensics (tracing emails and domain name ownership; 30 pp.)
• Introduction to Perl scripting (12 pp.)

After those three titles, I was done with writing for a while. However, in 2012 I taught a class for Black Hat in Abu Dhabi. I realized many of the students lacked the fundamental understanding of how networks operated and how network security monitoring could help them detect and respond to intrusions. I decided to write a book that would explain NSM from the ground up. While I assumed the reader would have familiarity with computing and some security concepts, I did not try to write the book for existing security experts.

The result was The Practice of Network Security Monitoring: Understanding Incident Detection and Response. If you are new to NSM, this is the first book you should buy and read. It is a very popular title and it distills my philosophy and practice into the most digestible form, in 376 pages.

The main drawback of the book is the integration of Security Onion coverage. SO is a wonderful open source suite, partly because it is kept so current. That makes it difficult for a print book to track changes in the software installation and configuration options. While you can still use PNSM to install and use SO, you are better off relying on Doug Burks' excellent online documentation. 

PNSM is an awesome resource for learning how to use SO and other tools to detect and respond to intrusions. I am particularly pleased with chapter 9, on NSM operations. It is a running joke that I tell people to "read chapter 9" whenever anyone asks me about CIRTs.

The introduction describes PNSM using these words:

Part I, “Getting Started,” introduces NSM and how to think about sensor placement.

• Chapter 1, “Network Security Monitoring Rationale,” explains why NSM matters, to help you gain the support needed to deploy NSM in your environment.
• Chapter 2, “Collecting Network Traffic: Access, Storage, and Management,”addresses the challenges and solutions surrounding physical access to network traffic.

Part II, “Security Onion Deployment,” focuses on installing SO on hardware and configuring SO effectively.

• Chapter 3, “Stand-alone NSM Deployment and Installation,” introduces SO and explains how to install the software on spare hardware to gain initial NSM capability at low or no cost.
• Chapter 4, “Distributed Deployment,” extends Chapter 3 to describe how to install a dispersed SO system.
• Chapter 5, “SO Platform Housekeeping,” discusses maintenance activities for keeping your SO installation running smoothly. 

Part III, “Tools,” describes key software shipped with SO and how to use these applications.

• Chapter 6, “Command Line Packet Analysis Tools,” explains the key features of Tcpdump, Tshark, Dumpcap, and Argus in SO.
• Chapter 7, “Graphical Packet Analysis Tools,” adds GUI-based software to the mix, describing Wireshark, Xplico, and NetworkMiner.
• Chapter 8, “NSM Consoles,” shows how NSM suites, like Sguil, Squert, Snorby, and ELSA, enable detection and response workflows.

Part IV, “NSM in Action,” discusses how to use NSM processes and data to detect and respond to intrusions.

• Chapter 9, “NSM Operations,” shares my experience building and leading a global computer incident response team (CIRT).
• Chapter 10, “Server-side Compromise,” is the first NSM case study, wherein you’ll learn how to apply NSM principles to identify and validate the compromise of an Internet-facing application.
• Chapter 11, “Client-side Compromise,” is the second NSM case study, offering an example of a user being victimized by a client-side attack.
• Chapter 12, “Extending SO,” concludes the main text with coverage of tools and techniques to expand SO’s capabilities.
• Chapter 13, “Proxies and Checksums,” concludes the main text by addressing two challenges to conducting NSM.

The Conclusion offers a few thoughts on the future of NSM, especially with respect to cloud environments. 

The Appendix, “SO Scripts and Configuration,” includes information from SO developer Doug Burks on core SO configuration files and control scripts.

I hope this post helps explain the different books I've written, as well as their applicability to modern security scenarios.