Saturday, September 06, 2008

Request for Feedback on Deny by Default

A friend of mine is working on digital defense strategies at work. He is interested in your commentary and any relevant experiences you can share. He is moving from a "deny bad, allow everything else" policy to an "allow good, deny everything else" policy.

By policy I mean a general approach to most if not all defensive strategies. On the network, define which machines should communicate, and deny everything else. On the host, define what applications should run, and deny everything else. In the browser, define what sites can be visited, and deny everything else. That's the central concept, although expansions are welcome.
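The central concept above can be sketched in code. This is a minimal illustration, not anything from the post: the allowlists, addresses, and function names are all made up for the example.

```python
# Hypothetical sketch of "allow good, deny everything else" at three layers:
# network flows, host applications, and browser sites. Anything not on an
# explicit allowlist is denied -- there is no "block the bad" list at all.

ALLOWED_FLOWS = {("10.0.1.5", "10.0.2.8", 443)}   # (src, dst, port) tuples
ALLOWED_APPS = {"outlook.exe", "excel.exe"}
ALLOWED_SITES = {"intranet.example.com"}

def permit_flow(src: str, dst: str, port: int) -> bool:
    # Unlisted flows fall through to deny; nothing is assumed safe.
    return (src, dst, port) in ALLOWED_FLOWS

def permit_app(executable: str) -> bool:
    return executable in ALLOWED_APPS

def permit_site(hostname: str) -> bool:
    return hostname in ALLOWED_SITES
```

For example, `permit_flow("10.0.1.5", "10.0.2.8", 443)` is allowed because it is on the list, while `permit_flow("10.0.1.5", "8.8.8.8", 53)` is denied simply because nothing says otherwise.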

My friend would like to know if anyone in industry is already following this strategy, and to what degree. If you can name your organization all the better (even if privately to me, or to him once the appropriate introductions are made). Thank you.

12 comments:

Matt said...

I don't really think that you can fit one security strategy to every case. Different situations call for different tactics.

I do feel that in many cases, deny by default is a better tactic, and judging by the amount of ACL-type things which have an implicit drop at the end, I'm not alone.

To really give advice, I think we'd have to know more about the situation. Deny by default is probably better, but there's got to be a reason he didn't go with that in the beginning, and I'd be curious what that reasoning was.

yoshi said...

Deny by default is typically used in web farms in the DMZ.

It's not sustainable in most other situations, as it requires a higher level of skill and experience than your average outsourced Windows, Unix, or network admin can muster.

The secondary problem is that unless you start from that stance at the very beginning, it's very hard to go back and retrofit applications, or to sell management on that stance.

The third problem is that the approach costs more.

As Matt said - design your security profile to your situation. A one-size-fits-all strategy doesn't exist.

Rob Lewis said...

@yoshi,

I commented on Richard's ROI post that one of our customers used our solution and saved money by moving from dependence on a managed service provider to managing in-house. I would describe the in-house skill set overall as very average, and in the area of security, probably less than average.

The solution happens to be in the area of scalable multi-level security and kernel-level enforcement of access and audit control at the data-file level, on a per-user basis. In other words, deny by default at the data-file level.

Your points regarding complexity, cost (due to administrative load) and after the fact implementation are not untrue for traditional implementations of trusted computing.

I don't want to "sell" on Richard's board, but I do want to advise you that Trustifier technology is a one-fit strategy that does exist, one which removes the barriers of complexity and cost to provide military-grade security for the commercial space. What is more, it can be added to an existing IT infrastructure without ripping and replacing.

Josh said...

In regard to default deny on firewalls, I have always thought that was the best way to do it, but I think it was you, Richard, who mentioned a few months ago that this has driven quite a few apps to just tunnel their traffic through SSL on port 80. Would you see this being a big enough problem to switch to default allow? Or are most apps just going to tunnel through port 80 regardless of default deny or allow?

John A. said...

Wow... old theory, but still quite relevant. Heck, it even addresses two of Marcus Ranum's old "Six Dumbest Ideas in Computer Security" - "Default Permit" and "Enumerating Badness". After watching the evolution of attacks over the last few years, I've come to the conclusion that if you're REALLY serious about security, you pretty much have to go to a deny-all, permit-by-exception philosophy. There are too many ways attacks can be obfuscated for the traditional "define the bad and look for it / block it" philosophy to work. There will always be a new vulnerability, a new way to exploit an old vulnerability, or a new way to obfuscate an old exploit... and in the modern "targeted attack" era, you must assume you'll be the first one hit with the new "thing". Doing anything less just concedes that you'll always play catch-up, and you'll always risk compromised systems and stolen data before you can identify and mitigate the next evolution.

No one says it's easy. Yes, it's "hard" to define every little thing you'll need to permit (IPs, ports, protocols, URLs, processes, etc.) and still have everything work. Yes, enterprise networks these days usually involve incredible complexity - thank vendors for making their operating systems and apps so darn complicated. However, everyone pays lip service to how important it is to "know your network" (and by extension, your systems). If we are all managing networks the way we should be, why is it so hard to define what we need and allow/white-list just that?

I also love the "but we'll probably break something if we try that" argument - are we better off just letting the bad guys break it when they decide to? Quite frankly, I think the people (and I'm referring to organizational decision makers here, not the noble security admins/managers who fight valiantly but get vetoed) who shrug off white-listing as "too hard" or "not practical" just don't have the willpower to dedicate some resources to doing the legwork, or the guts to tell people that they can't do whatever they want, whenever they want, however they want. No one needs a true general-purpose system or network; the set of actions we need our computers and networks to take is finite, and much smaller than most people probably think.

Christian Folini said...

I believe the trend goes in the direction of default deny / positive security / whitelisting, or whatever you may call it.

It's complex and it is very time consuming. But it also gives you a very clear understanding of your systems and their interactions. I think that aspect is generally undervalued.

JimmytheGeek said...

I implemented a default deny in an academic environment, which is commonly considered hostile to this sort of thing. My approach was to be incredibly responsive in instituting exceptions. In particular, I made no attempt to validate the reasons for the exception. It wasn't my business, not my call.

What I did do was craft those exceptions narrowly, so the implications were narrow. I expect this could be done elsewhere.

Tim Armstrong said...

I manage an industrial control system at a 50+ year old coal fired power plant. The older stuff (thicknet and public IPs) I've had to leave alone due to a lack of knowledge, but on the newer stuff I've been able to implement a deny by default policy with good success.

I don't want my control system using ports 80, 443, 22, 23, etc. The necessary protocols are pretty specific and specialized. One incident that defined my approach was a contractor firing up Netscape (on a 15 year old Solaris box) to look for documentation within the network. I didn't even know that machine had a browser on it to begin with. What if he'd looked outward?

Like other commenters have already said, the amount of work required and the benefit available will vary widely with the environment. As a wise man once said, "your mileage may vary."

We can exchange more information if it would be of interest, but I shouldn't name the company in a blog.

Joe said...

Deny by default has worked for many years. Let's look at modern examples of the opposite: allow all, but attempt to block the bad.

* Anti-spam
* Anti-virus

Two broken industries. Vendors in these spaces have taken the WRONG approach to the problems both of these attempt to fix. So why should we ever do the same with other layers of security? The bad evolves unpredictably. You can't deny it. (pun intended)

Marcin Antkiewicz said...

1) I would not call the allowed "good", but "permitted", in order to avoid complacency. Thus, the policy allows select traffic without stating anything else but "it's on the checklist".

2) I see user expectations as a major problem. A problem that is definitely solvable, but will require a lot of buy-in.

3) A total default deny can be done in small improvements. Start egress filtering, use a tool that disallows installation of software not on a whitelist, and, for networks that are not firewalled, establish netflow baselines and alert on variations. Use experience and data from the initial effort to form an increasingly focused, longer-term strategy that best matches the company's business processes.
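The netflow-baseline step in point 3 might look something like this. This is a hypothetical sketch with made-up record shapes and thresholds; real netflow tooling (nfdump, SiLK, etc.) would supply the records.

```python
# Sketch: baseline traffic volume per (destination, port) and alert on
# variations, per point 3 above. Keys, values, and the 3-sigma threshold
# are illustrative assumptions, not a real tool's interface.
from statistics import mean, stdev

def build_baseline(history):
    """history: {(dst, port): [bytes_per_day, ...]} -> {(dst, port): (mean, stdev)}."""
    return {key: (mean(vals), stdev(vals)) for key, vals in history.items()}

def alert_on_variation(baseline, today, n_sigma=3.0):
    """Flag flows that are new, or whose volume today deviates beyond n_sigma."""
    alerts = []
    for key, observed in today.items():
        if key not in baseline:
            alerts.append((key, "new flow, not in baseline"))
            continue
        mu, sigma = baseline[key]
        if sigma and abs(observed - mu) > n_sigma * sigma:
            alerts.append((key, f"volume {observed} outside {n_sigma} sigma"))
    return alerts
```

A flow that has never been seen before is itself an alert here, which is the default-deny mindset applied to monitoring: novelty is suspect until someone permits it.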

4) Controlling network entropy to that level will be expensive, but should pay off in governance. A strong metrics program will let key stakeholders understand where their time and money went.

Anonymous said...

This comment has been removed by a blog administrator.

Allen Baranov, CISSP said...

I'm going to agree with Matt and disagree somewhat with Joe.

Default deny is definitely the better plan. It allows you to block ugly things that you don't know about and maybe don't even exist just yet.

It works best in a fairly static, fairly simple environment. That is why firewalls are a good example of where to use default deny. It is possible to see what is being blocked, see what it is for, and open it if need be. The number of open ports should not be too large, and it is easily managed.
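The firewall model described here - a small ordered rule set with an implicit deny at the end, where you can see what is being blocked and open it if need be - can be sketched as follows. The rule fields, ports, and comments are hypothetical.

```python
# Sketch of an ordered ACL with an implicit deny at the end. Denied traffic
# is logged so an admin can review it and decide whether to add an exception.
# Rule format and example ports are made up for illustration.

RULES = [
    {"action": "allow", "dst_port": 25,  "comment": "SMTP to mail relay"},
    {"action": "allow", "dst_port": 443, "comment": "HTTPS to web farm"},
]

denied_log = []

def evaluate(packet):
    """Return 'allow' or 'deny'; anything unmatched hits the implicit deny."""
    for rule in RULES:
        if packet["dst_port"] == rule["dst_port"]:
            return rule["action"]
    denied_log.append(packet)   # visible to the admin, reviewable for exceptions
    return "deny"
```

With this structure, `evaluate({"dst_port": 443})` is allowed, while an unexpected port falls through to the implicit deny and lands in `denied_log` for review.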

PCs are a bad example of where to use default deny because they are complex (lots of files), change frequently, and are not very standardised. The more you standardise your desktops, the easier it is to lock them down.

That is why antivirus (though not 100% effective) is the best tool for the job. If you lock your PCs down and know what files are on them and what they should be doing, then antivirus is probably not so useful.

Getting back to the network - a lot of online applications have popped up over the last few years, but most of them are either web-based or at least run on port 80, which means that although your firewall is properly configured, you are not really using a "default deny" policy. You probably back it up with an IPS and a web proxy with site-blocking technology, which takes you back to "default allow with blocking".