Tuesday, March 24, 2015

Can Interrogators Teach Digital Security Pros?

Recently Bloomberg published an article titled The Dark Science of Interrogation. I was fascinated by this article because I graduated from the SERE program at the US Air Force Academy in the summer of 1991, after my freshman year there. SERE teaches how to resist the interrogation methods used against prisoners of war. When I attended the school, the content was based on techniques used by Korea and Vietnam against American POWs in the 1950s-1970s.

As I read the article, I realized the subject matter reminded me of another aspect of my professional life.

In intelligence, as in the most mundane office setting, some of the most valuable information still comes from face-to-face conversations across a table. In police work, a successful interrogation can be the difference between a closed case and a cold one. Yet officers today are taught techniques that have never been tested in a scientific setting. For the most part, interrogators rely on nothing more than intuition, experience, and a grab bag of passed-down methods.

“Most police officers can tell you how many feet per second a bullet travels. They know about ballistics and cavity expansion with a hollow-point round,” says Mark Fallon, a former Naval Criminal Investigative Service special agent who led the investigation into the USS Cole attack and was assistant director of the federal government’s main law enforcement training facility. “What as a community we have not yet embraced as effectively is the behavioral sciences...”

Christian Meissner, a psychologist at Iowa State University, coordinates much of HIG’s research. “The goal,” he says, “is to go from theory and science, what we know about human communication and memory, what we know about social influence and developing cooperation and rapport, and to translate that into methods that can be scientifically validated.” Then it’s up to Kleinman, Fallon, and other interested investigators to test the findings in the real world and see what works, what doesn’t, and what might actually backfire.

Does this sound familiar? Security people know how many flags to check in a TCP header, or how many bytes to offset when writing shellcode, but we don't seem to "know" (in a "scientific" sense) how to "secure" data, networks, and so on.
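The TCP-flags point can be made concrete. Here is a minimal sketch (the helper and sample bytes below are my own illustration, not from the post) of decoding the six classic flag bits from a raw 20-byte TCP header, the kind of precise, countable knowledge the post contrasts with our fuzzier notion of "securing" things:

```python
def tcp_flags(header: bytes) -> dict:
    """Decode the six classic flag bits from a raw 20-byte TCP header."""
    # Byte 13 of the TCP header holds the flag bits
    # (after source/dest ports, seq/ack numbers, and data offset/reserved).
    flag_byte = header[13]
    names = ["FIN", "SYN", "RST", "PSH", "ACK", "URG"]  # bits 0..5
    return {name: bool(flag_byte & (1 << i)) for i, name in enumerate(names)}

# A SYN segment: only bit 1 (0x02) set in the flags byte.
header = bytes(13) + bytes([0x02]) + bytes(6)
print(tcp_flags(header))
```

This is exactly the kind of fact we can verify mechanically; the post's argument is that far less of our higher-level security practice is testable this way.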

One bright spot is the Security Metrics community. The mailing list is always interesting for those trying to bring counting and "science" to the digital security profession. Another great project is the Index of Cyber Security run by Dan Geer and Mukul Pareek.

I'm not saying there is a "science" of digital security. Others will disagree. I also don't have any specific recommendations based on what I read in the interrogation article. However, I did resonate with the article's message that "street wisdom" needs to be checked to see if it actually works. Scientific methods can help.

I am taking small steps in that direction with my PhD in the war studies department at King's College London.

2 comments:

~NorthernLitez~ said...

I would say that one should first understand the fundamentals of what is needed before being able to approach the subject practically. This is perhaps where digital/network forensics falls short. Once we understand how things work at the "micro" level, applying practical methods becomes much easier.

dre said...

From a theoretical standpoint, the issue we face is not only producing the right kind of cyber security science, but also ground-truthing the damage valuation standards.

Primary losses (as opposed to secondary losses, a distinction taken from FAIR) require actuarial tabulation. Secondary losses, which include intangible assets such as brand damage and reputation loss, have not been fully modeled. The science is a work in progress -- and changes too often to pin down. The Internet moves too fast -- even the advertisers can't keep up with their own brand and reputation trajectory changes.

Douglas Hubbard's books on measuring intangibles and understanding risk offer well-thought-out theories; however, they change rapidly, with new releases every three or so years. A small number of economists can calculate and narrow down the science of cyber risk, cyber insurance, et al. They need to publish alongside the other authors and thought leaders in this field. The coming together of minds is happening, but it is slower than Internet speed. The language and socializing of FAIR and similar threat and risk quantification practices need to spread across all verticals, and all audit, risk, regulatory, cyber insurance, and cyber threat experts need to start speaking this new language.