Request for Feedback on Deny by Default
A friend of mine is working on digital defense strategies at work. He is interested in your commentary and any relevant experiences you can share. He is moving from a "deny bad, allow everything else" policy to an "allow good, deny everything else" policy.
By policy I mean a general approach to most if not all defensive strategies. On the network, define which machines should communicate, and deny everything else. On the host, define what applications should run, and deny everything else. In the browser, define what sites can be visited, and deny everything else. That's the central concept, although expansions are welcome.
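To make the contrast concrete, here is a minimal sketch (purely illustrative; the rule tables and function names are my own invention, not anything specified by the poster) of the difference between the two policies expressed as a packet-filter decision:

```python
# Illustrative only: a toy egress filter showing the two policy styles.
# The rule tables and helper names here are hypothetical.

BLOCKLIST = {("any", 23)}                          # "deny bad, allow everything else"
ALLOWLIST = {("10.0.0.5", 443), ("10.0.0.7", 53)}  # "allow good, deny everything else"

def deny_bad_allow_rest(dst_ip, dst_port):
    """Default allow: traffic passes unless it matches a known-bad rule."""
    if ("any", dst_port) in BLOCKLIST or (dst_ip, dst_port) in BLOCKLIST:
        return "DENY"
    return "ALLOW"

def allow_good_deny_rest(dst_ip, dst_port):
    """Default deny: traffic passes only if it matches a known-good rule."""
    if (dst_ip, dst_port) in ALLOWLIST:
        return "ALLOW"
    return "DENY"

if __name__ == "__main__":
    # An unknown destination is the interesting case:
    print(deny_bad_allow_rest("203.0.113.9", 8080))   # ALLOW (unknown passes)
    print(allow_good_deny_rest("203.0.113.9", 8080))  # DENY  (unknown is blocked)
```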
My friend would like to know if anyone in industry is already following this strategy, and to what degree. If you can name your organization all the better (even if privately to me, or to him once the appropriate introductions are made). Thank you.
Comments
I do feel that in many cases deny by default is a better tactic, and judging by the number of ACL-type mechanisms that end with an implicit drop, I'm not alone.
To really give advice, I think we'd have to know more about the situation. Deny by default is probably better, but there's got to be a reason he didn't go with that in the beginning, and I'd be curious what that reasoning was.
It's not sustainable in most other situations, as it requires a higher level of skill and experience than your average outsourced Windows, Unix, or network admin can muster.
The second problem is that unless you start from that stance at the very beginning, it's very hard to go back and retrofit applications, or to sell management on that stance.
The third problem is that the approach costs more.
As Matt said, design your security profile to fit your situation. A one-size-fits-all strategy doesn't exist.
I commented on Richard's ROI post that one of our customers used our solution and saved money by moving from dependence on a managed service provider to managing in-house. I would describe the in-house skill set overall as very average, and in the area of security, probably less than average.
The solution happens to be in the area of scalable multi-level security and kernel level enforcement of access and audit control at the data file level on a per user basis. In other words, deny-by-default at the data file level.
Your points regarding complexity, cost (due to administrative load), and after-the-fact implementation are fair criticisms of traditional implementations of trusted computing.
I don't want to "sell" on Richard's board, but I do want to advise you that Trustifier technology is a one-size-fits-all strategy that does exist, and which removes the barriers of complexity and cost to provide military-grade security for the commercial space. What is more, it can be added to an existing IT infrastructure without ripping and replacing.
No one says it's easy. Yes, it's "hard" to define every little thing you'll need to permit (IPs, ports, protocols, URLs, processes, etc.) and still have everything work. Yes, enterprise networks these days usually involve incredible complexity - thank vendors for making their operating systems and apps so darn complicated. However, everyone pays lip service to how important it is to "know your network" (and by extension, your systems). If we are all managing networks the way we should be, why is it so hard to define what we need and allow/white-list just that?

I also love the "but we'll probably break something if we try that" argument - are we better off just letting the bad guys break it when they decide to? Quite frankly, I think the people who shrug off white-listing as "too hard" or "not practical" (and I'm referring to organizational decision makers here, not the noble security admins/managers who fight valiantly but get vetoed) just don't have the willpower to dedicate some resources to doing the legwork, or the guts to tell people that they can't do whatever they want, whenever they want, however they want. No one needs a true general-purpose system or network; the set of actions we need our computers and networks to take is finite, and much smaller than most people probably think.
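To make the host side of that argument concrete, here is a minimal sketch of application white-listing, assuming a hypothetical allow-list of binary hashes; a real control would be enforced by the operating system or a policy layer, not a script like this:

```python
# Illustrative sketch of host-level application whitelisting.
# The allow-list contents are hypothetical; a real control would be
# enforced by the OS (e.g. a kernel or policy layer), not a script.
import hashlib
import sys

ALLOWED_SHA256 = {
    # Hashes of the finite set of binaries the business actually needs.
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def is_allowed(path):
    """Permit execution only if the binary's hash is on the allow-list."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest in ALLOWED_SHA256

if __name__ == "__main__":
    target = sys.argv[1]
    print("ALLOW" if is_allowed(target) else "DENY", target)
```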
It's complex and very time-consuming. But it also gives you a very clear understanding of your systems and their interactions. I think that aspect is generally undervalued.
What I did do was craft those exceptions narrowly, so the implications were narrow. I expect this could be done elsewhere.
I don't want my control system using ports 80, 443, 22, 23, etc. The necessary protocols are pretty specific and specialized. One incident that defined my approach was a contractor firing up Netscape (on a 15-year-old Solaris box) to look for documentation within the network. I didn't even know that machine had a browser on it to begin with. What if he'd looked outward?
Like other commenters have already said, the amount of work required and the benefit available will vary widely with environment. As a wise man once said, 'your mileage may vary'.
We can exchange more information if it would be of interest, but I shouldn't name the company in a blog.
* Anti-spam
* Anti-virus
Two broken industries. Vendors in these spaces have taken the WRONG approach to the problems both of these attempt to fix, so why should we ever do the same with other layers of security? The bad evolves unpredictably. You can't deny it. (pun intended)
complacency. Thus, the policy allows select traffic without stating anything else but "it's on the checklist".
2) I see user expectations as a major problem, one that is definitely solvable but will require a lot of buy-in.
3) A total default deny can be done in small improvements. Start egress filtering, use a tool that disallows installation of software not on a whitelist, and for networks that are not firewalled, establish netflow baselines and alert on variations (a rough sketch of that step follows this list). Use experience and data from the initial effort to form an increasingly focused and longer-term strategy that best matches the company's business processes.
4) Controlling network entropy to that level will be expensive, but should pay off in governance. A strong metrics program will let key stakeholders understand where their time and money went.
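As promised above, here is a rough sketch of the baseline-and-alert step from point 3. The flow record fields and sample data are assumptions for illustration; it presumes flow records have already been collected elsewhere:

```python
# Rough sketch of baselining network flows and alerting on deviations.
# Field names and the data source are assumptions for illustration.
from collections import Counter

def build_baseline(flows):
    """Count how often each (src, dst, dst_port) conversation appears."""
    return Counter((f["src"], f["dst"], f["dst_port"]) for f in flows)

def alert_on_new_talkers(baseline, new_flows):
    """Flag any conversation never seen during the baseline period."""
    alerts = []
    for f in new_flows:
        key = (f["src"], f["dst"], f["dst_port"])
        if key not in baseline:
            alerts.append(key)
    return alerts

if __name__ == "__main__":
    history = [{"src": "10.0.0.5", "dst": "10.0.0.9", "dst_port": 1433}]
    today   = [{"src": "10.0.0.5", "dst": "203.0.113.7", "dst_port": 80}]
    for src, dst, port in alert_on_new_talkers(build_baseline(history), today):
        print(f"ALERT: new conversation {src} -> {dst}:{port}")
```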
Default deny is definitely the better plan. It allows you to block ugly things that you don't know about and that maybe don't even exist yet.
It works best in a fairly static, fairly simple environment. That is why firewalls are a good example of where to use default deny. It is possible to see what is being blocked, see what it is for, and open it if need be. The number of open ports should not be too large and can be easily managed.
PCs are a bad example of where to use default deny because they are complex (lots of files), change frequently and are not very standardised. The more you do standardise your desktops, the easier it would be to lock them down.
That is why antivirus (though not 100% effective) is the best tool for the job. If you lock your PCs down and know what files are on them and what they should be doing then antivirus is probably not so useful.
Getting back to the network: a lot of online applications have popped up over the last few years, but most of them are either web-based or at least run on port 80. That means that although your firewall is properly configured, you are not really using a "default deny" policy. You probably back it up with an IPS and a web proxy with site-blocking technology, which takes you back to "default allow with blocking".
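To illustrate that last distinction, here is a toy sketch of the two proxy policies; the domain lists and function names are invented for the example:

```python
# Toy web proxy decision logic; domain lists are invented for illustration.
BLOCKED_SITES = {"badsite.example"}                       # "default allow with blocking"
APPROVED_SITES = {"intranet.example", "vendor.example"}   # true default deny

def proxy_default_allow(host):
    """Everything over port 80 passes unless the site is on the block list."""
    return "DENY" if host in BLOCKED_SITES else "ALLOW"

def proxy_default_deny(host):
    """Only explicitly approved sites pass, regardless of port."""
    return "ALLOW" if host in APPROVED_SITES else "DENY"

if __name__ == "__main__":
    # A brand-new, unclassified web app sails through the first policy:
    print(proxy_default_allow("new-webapp.example"))  # ALLOW
    print(proxy_default_deny("new-webapp.example"))   # DENY
```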