Showing posts from November, 2018

The Origin of the Term Indicators of Compromise (IOCs)

I am an historian. I practice digital security, but I earned a bachelor of science degree in history from the United States Air Force Academy. (1) Historians create products by analyzing artifacts, among which the most significant is the written word. In my last post, I talked about IOCs, or indicators of compromise. Do you know the origin of the term? I thought I did, but I wanted to rely on my historian's methodology to invalidate or confirm my understanding. I became aware of the term "indicator" as an element of indications and warning (I&W) when I attended Air Force Intelligence Officer's school in 1996-1997. I will return to this shortly, but I did not encounter the term "indicator" in a digital security context until I read the work of Kevin Mandia. In August 2001, shortly after its publication, I read Incident Response: Investigating Computer Crime, by Kevin Mandia, Chris Prosise, and Matt Pepe (Osborne/McGraw-Hill). I

Even More on Threat Hunting

In response to my post More on Threat Hunting, Rob Lee asked: [D]o you consider detection through ID’ing/“matching” TTPs not hunting? To answer this question, we must begin by clarifying "TTPs." Most readers know TTPs to mean tactics, techniques, and procedures, defined by David Bianco in his Pyramid of Pain post as: How the adversary goes about accomplishing their mission, from reconnaissance all the way through data exfiltration and at every step in between. In case you've forgotten David's pyramid, it looks like this. It's important to recognize that the pyramid consists of indicators of compromise (IOCs). David uses the term "indicator" in his original post, but his follow-up post from his time at Sqrrl makes this clear: There are a wide variety of IoCs ranging from basic file hashes to hacking Tactics, Techniques and Procedures (TTPs). Sqrrl Security Architect, David Bianco, uses a concept called the Pyramid of Pain to categorize

More on Threat Hunting

Earlier this week hellor00t asked via Twitter: Where would you place your security researchers/hunt team? I replied: For me, "hunt" is just a form of detection. I don't see the need to build a "hunt" team. IR teams detect intruders using two major modes: matching and hunting. Junior people spend more time matching. Senior people spend more time hunting. Both can and should do both functions. This inspired Rob Lee to blog a response, from which I extract his core argument: [Hunting] really isn’t, to me, about detecting threats... Hunting is a hypothesis-led approach to testing your environment for threats. The purpose, to me, is not in finding threats but in determining what gaps you have in your ability to detect and respond to them... In short, hunting, to me, is a way to assess your security (people, process, and technology) against threats while extending your automation footprint to better be prepared in the future. Or simply stated, it’s

Cybersecurity and Class M Planets

I was considering another debate about appropriate cybersecurity measures and I had the following thought: not all networks are the same. Profound, right? This is so obvious, yet so often forgotten. Too often when confronting a proposed defensive measure, an audience approaches the concept from its own preconceived notion of what assets need to be protected. Some think about an information technology enterprise organization with endpoints, servers, and infrastructure. Others think about an industrial organization with manufacturing equipment. Others imagine an environment with no network at all, where constituents access cloud-hosted resources. Still others think in terms of being that cloud hosting environment itself. Beyond those elements, we need to consider the number of assets, their geographic diversity, their relative value, and many other aspects that you can no doubt imagine. This made me wonder if we need some sort of easy reference term to capture the essenti