Brief Thought on Digital Security

I was asked to write an article for an upcoming issue of Information Security Magazine based on my Engineering Disasters blog post. I had the following thought after writing that article.

When an engineering catastrophe befalls the "real" or "analog" world, it's often very visible. Bridges collapse, levees break, sinkholes swallow buildings, and so on. If you look closely enough, you can see indications of impending doom prior to ultimate failure. Cracks appear in concrete, materials swell or contract, groaning noises abound, etc.

This is generally not the case in the digital world. It is possible for an enterprise to be completely owned by unauthorized parties without any overt signs. If one knows where to look, of course, indicators can be seen, and evidence of compromise can be gathered, analyzed, and escalated. This is the reason I advocate network security monitoring (NSM) and conducting traffic threat assessments (TTAs).
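As a rough illustration of what "knowing where to look" can mean, here is a minimal sketch of one NSM-style check: totaling outbound bytes per internal host from a connection log and flagging unusually heavy senders. The CSV column names, the 10. internal address prefix, and the 50 MB threshold are all illustrative assumptions on my part, not details from any particular NSM tool.

    # Minimal sketch: flag internal hosts sending unusually large volumes
    # outbound. Assumes a hypothetical CSV connection log with columns
    # src_ip, dst_ip, and bytes_out; format and threshold are illustrative.
    import csv
    from collections import defaultdict

    INTERNAL_PREFIX = "10."        # assumed internal address space
    BYTES_THRESHOLD = 50_000_000   # arbitrary cutoff: roughly 50 MB outbound

    def flag_outbound_volume(log_path):
        """Sum outbound bytes per internal source and return heavy senders."""
        totals = defaultdict(int)
        with open(log_path, newline="") as f:
            for row in csv.DictReader(f):
                src, dst = row["src_ip"], row["dst_ip"]
                # Count only traffic leaving the internal network.
                if src.startswith(INTERNAL_PREFIX) and not dst.startswith(INTERNAL_PREFIX):
                    totals[src] += int(row["bytes_out"])
        return {ip: b for ip, b in totals.items() if b > BYTES_THRESHOLD}

    if __name__ == "__main__":
        for ip, sent in flag_outbound_volume("conn_log.csv").items():
            print(f"indicator: {ip} sent {sent} bytes outbound -- worth a closer look")

A check like this proves nothing by itself; the point is that the evidence is there to be gathered, analyzed, and escalated if someone is actually looking.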

Comments

Anonymous said…
I think that there are indications of "impending doom" in the digital world...it's simply that most people either don't recognize them (due to lack of training/experience) or simply ignore them.

Sure, things that happen in the physical world will assault our physical senses...sounds, smells (leaking gas, flame/fire), and so on. In the digital world, many of us simply haven't set up our smoke detectors, and don't recognize the various warning signs.
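(As a toy example of such a "smoke detector," the sketch below counts failed SSH logins per source address. The Debian-style /var/log/auth.log path, the log-message regex, and the threshold are assumptions for illustration only, not anything from this discussion.)

    # Toy "smoke detector": report source IPs with repeated failed SSH logins.
    # Log path, message format, and threshold are illustrative assumptions.
    import re
    from collections import Counter

    FAILED = re.compile(r"Failed password for (?:invalid user )?\S+ from (\S+)")

    def noisy_login_sources(log_path="/var/log/auth.log", threshold=10):
        """Count failed-login lines per source IP and return the noisy ones."""
        counts = Counter()
        with open(log_path, errors="replace") as f:
            for line in f:
                match = FAILED.search(line)
                if match:
                    counts[match.group(1)] += 1
        return {ip: n for ip, n in counts.items() if n >= threshold}

    if __name__ == "__main__":
        for ip, n in noisy_login_sources().items():
            print(f"warning: {n} failed logins from {ip}")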

IMHO, another issue is that some folks who investigate incidents ignore Occam's Razor and assume the worst..."I can't immediately determine what's going on after a cursory glance, so therefore it must be a rootkit developed by an enemy nation-state." Too much of this desensitizes us to the real issues.

H. Carvey
"Windows Forensics and Incident Recovery"
http://www.windows-ir.com
http://windowsir.blogspot.com
John Ward said…
I agree with Harlan. Just like in the physical world, there are usually signs that something is wrong. For example, our company recently tried to roll out a new Learning Management System. There were telltale signs that this thing was going south, such as crashing once a day, losing data, and showing cached information to the wrong users. Sure enough, it had a catastrophic failure. A combination of poor engineering from the vendor and poor implementation led to its demise. And much like in real-world scenarios, the stakeholders were too stubborn and too proud to admit there was a problem.

I am glad that Occam's Razor was brought up, because it serves as a system design pitfall as well. Too often developers will overcomplicate a simple design scenario, such as writing a program for something that already exists, or choosing the wrong language for a task to showboat their “skills”. Simple solutions typically lead to simple problems; complex solutions lead to complex problems.
Anonymous said…
It's easy for a manager to bury his head in the sand when no one can see an intrusion and all of the data is still in the system (albeit copied to parts unknown). It reminds me of the bank that found out it was compromised only AFTER being told by the FBI: http://www.newsbits.net/2001/20010605.htm -- "Michelle Dietrich, technical manager for the bank, said bank executives first learned that their system had been targeted when FBI officials notified them in August of the break-in...".

How many managers, by default, are using the FBI as their "intrusion detection system"? I agree with Richard: NSM is much preferable to doing nothing. And Harlan has a good point about overreactions to indications. I think the answer is maturing processes along the lines of ITIL. NSM is good, but embedding it in operational processes is even better.
Anonymous said…
I like to say that security is like the sugar in a cake: you can bake a cake without it, and it may look perfectly fine -- until someone actually bites into it. And then it's too late; you can't just slap icing on top and solve the problem.
Anonymous said…
Not to beat a dead horse, but I wanted to present an example, based on what Richard used in his post.

"Cracks appear in concrete, materials swell or contract, groaning noises abound, etc."

I would submit that a great many sysadmins don't even see the digital version of these warning signs. From my experience, in many cases, if this sort of thing is even recognized at all, it's chalked up to user (or rather, in the popular vernacular, "luser") error.

I know that in some cases the warning signs are blown out of proportion, and the affected system is reimaged with no root cause analysis and put back into production.

Some of us will see cracks in the sidewalk around a building, and we may even think, "hhhmmm...this area has a history of sinkholes". We may even walk around the building and investigate these warning signs. Then we may go into the building and see if there are any effects there, as well...such as deformations in the structure, cracks in the walls, etc.

IMHO, in most cases, your normal, everyday sysadmin will trip over a crack in the sidewalk on the way into the building, and blame it on shoddy materials and lazy contractors...and then simply go about their day.

H. Carvey
"Windows Forensics and Incident Recovery"
http://www.windows-ir.com
http://windowsir.blogspot.com