Let a Hundred Flowers Blossom
I know many of us work in large, diverse organizations. The larger or more complex the organization, the more difficult it is to enforce uniform security countermeasures. The larger the population to be "secure," the more likely exceptions will bloom. Any standard tends to devolve to the least common denominator. There are some exceptions, such as FDCC (the Federal Desktop Core Configuration), but I do not know how widespread that standard configuration is inside the government.
Beyond the difficulty of applying a uniform, worthwhile standard, we run into the diversity vs monoculture argument from 2005. I tend to side with the diversity point of view, because diversity tends to increase the cost borne by an intruder. In other words, it's cheaper to develop exploitation methods for a target who 1) has broadly similar, if not identical, systems and 2) publishes that standard so the intruder can test attacks prior to "game day."
At the end of the day, the focus on uniform standards is a manifestation of the battle between two schools of thought: Control-Compliant vs Field-Assessed Security. The control-compliant team believes that developing the "best standard," and then applying that standard everywhere, is the most important aspect of security. The field-assessed team (where I devote my effort) believes the result is more important than how you get there.
I am not opposed to developing standards, but I do think that the control-compliant school of thought is only half the battle -- and that controls occupy far more time and effort than they are worth. If the standard withers in the face of battle, i.e., once field-assessed it is found to be lacking, then the standard is a failure. Compliance with a failed standard is worthless at that point.
However, I'd like to propose a variation of my original argument. What if you abandon uniform standards completely? What if you make the focus of the activity field-assessed instead of control-compliant, by conducting assessments of systems? In other words, let a hundred flowers blossom.
(If you don't appreciate the irony, do a little research and remember the sorts of threats that occupy much of the time of many this blog's readers!)
So what do I mean? Rather than making compliance with controls the focus of security activity, make assessment of the results the priority. Conduct blue and red team assessments of information assets to determine if they meet various resistance and (maybe) "survivability" metrics. In other words, we won't care how you manage to keep an intruder from exploiting your system, as long as it takes longer than time X for a blue or red assessor with skill level Y and initial access level Z to compromise it (or something to that effect).
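To make the X/Y/Z idea concrete, here is a minimal sketch of how such a pass/fail criterion could be recorded. Everything here is hypothetical: the scenario labels, the threshold numbers, and the class names are all invented for illustration, not taken from any real assessment program.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Scenario:
    """One blue/red assessment scenario (labels are invented examples)."""
    skill: str   # Y: "novice" or "expert"
    access: str  # Z: "external" or "internal"

@dataclass
class Assessment:
    """Outcome of one exercise against a single asset."""
    scenario: Scenario
    hours_to_compromise: float  # how long the assessor actually needed

# X: minimum survival time (hours) required per scenario; numbers are made up.
THRESHOLDS = {
    Scenario("novice", "external"): 80.0,
    Scenario("expert", "external"): 40.0,
    Scenario("novice", "internal"): 40.0,
    Scenario("expert", "internal"): 16.0,
}

def passes(a: Assessment) -> bool:
    """True if the asset survived at least the X hours required for its Y/Z scenario."""
    return a.hours_to_compromise >= THRESHOLDS[a.scenario]
```

The point of the sketch is only that the measure is an outcome (hours survived against a stated adversary model), not a checklist of inputs; an organization would pick its own scenarios and thresholds.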
In such a world, there's plenty of room for the person who wants to run Plan 9 without anti-virus, the person who runs FreeBSD with no graphical display or Web browser, the person who runs another "nonstandard" platform or system -- as long as their system defies the field assessment conducted by the blue and red teams. (Please note the one "standard" I would apply to all assets is that they 1) do no harm to other assets and 2) do not break any laws by running illegal or unlicensed software.)
If a "hundred flowers" is too radical, maybe consider 10. Too tough to manage all that? Guess what -- you are likely managing it already. So-called "unmanaged" assets are everywhere. You probably already have 1000 variations, never mind 100. Maybe it's time to make the system's inability to survive against blue and red teams the measure of failure, not whether the system is "compliant" with a standard?
Now, I'm sure there is likely to be a high degree of correlation between "unmanaged" and vulnerable in many organizations. There's probably also a moderate degree of correlation between "exceptional" (as in, this box is too "special" to be considered "managed") and vulnerable. In other instances, the exceptional systems may be impervious to all but the most dedicated intruders. In any case, accepting that diversity is a fact of life on modern networks, and deciding to test the resistance level of those assets, might be more productive than seeking to develop and apply uniform standards.
What do you think?
Comments
in that case, perhaps uniform assessment standards become necessary? you depend on either a set of best practices around your defenses / security controls, or around your assessment methodology. if your assessments suck, they won't reveal gaps in your "unique snowflake" defenses/controls either.
do you feel like it's possible to accomplish both? have a set of industry-wide controls that are both attainable for most organizations and able to stand up under internal assessments, penetration tests, and of course, actual attacks?
just some thoughts. good post!
I think you're in tune with what I'm saying. I was thinking in terms of an organization. I agree that if your blue and red assessments are horrible then you're not doing anything worthwhile! I should have stated that as an assumption. In the end I think it is a good idea to have standards AND assessments, but the main idea for this post was to move the ball towards the outcomes and away from the inputs.
I think the problem today is that standards are used to completely lock down everything. This robs flexibility and leads to the problems all of us are so familiar with.
"Maybe it's time to make the system's inability to survive against blue and red teams the measure of failure, not whether the system is "compliant" with a standard, the measure of failure?" ... I tend to agree with this view. All I do care about is real-world attack and blue/red teams are real-world (or almost) attackers. I've seen too many "standards" turning into huge, complex beasts and when something such as MS08-067 hits the dark alleys, they'd have difficulties coping with it. Think for a minute: a standard might be (should be) asking for patch management but usually there's some delay between the time a patch is available and the time it is applied. Moreover, some servers (usually some very juicy ones) use applications which vendors strongly recommend against patching...
Caterpillar Blue sums it up pretty nicely. We do need standards and "compliance", albeit light ones. And we do need diversity if we care about the overall system's survivability.
One might say "let that department figure out their own best way to improve," but I wonder if a company would really let this department stumble around in a state of poor security if the company already had a strong patch management system, a secure desktop config, etc? Seems like they'd just be asked/told to use that standard.
That said, the aspect of this that I really like is that given a hundred blossoms, at least one organization would inevitably find a way to do it better...the monolithic standards-definer is never going to be in the 99th percentile all the time.
Sure, I think that failure to sufficiently resist the blue and red assessments might mean that the asset owners forfeit the privilege of choosing their own standard, so they are then required to adopt the standard recommended by the blue and red team.
I'm arguing less for a "free for all" than for a refocus on the outcomes. If asset owners fail on the outcome side, they clearly need help. That help would probably take the form of the standard.
That may sound circular, but consider what is popular now.
1. Propose standard.
2. (Hopefully) determine compliance to standard.
3. End.
My post tries to put emphasis on outcomes, not compliance with inputs.
My approach, however, is different in regards to desktops. I think we should all operate on the premise that the desktop is owned and plan accordingly. Do things like stop all client-to-client traffic and prevent storage of any business data on the desktop. Every desktop should be an island with access to server resources only. If you can somehow reduce your reliance on service accounts, that's even better. Well, that's my pipe dream anyway: a return to the 70's dumb-terminal, thin-client model where everything is controlled on the servers and clients don't run roughshod over your network snooping, infecting, exfiltrating, etc. I can dream, right? :-)
Richard, this is probably your best post of 2009!
The problem is we do these things, not because it increases security but we need to pass an audit.
Now it is time to play devil's advocate. Any audit, whether checklist-based or more pen-test-based, is only a snapshot in time: things are looked at for one week every year (or whatever the interval is). All too often organizations rush to get compliant and then let things lapse. In that respect I feel checklist audits are better: checklist audits can check policies, CM processes, etc. Pen testing never will, unless it is completely random or done strategically -- for example, a patch is released and a month later the pen-test team unexpectedly runs that exploit across the whole network to see how many machines are vulnerable. If you go down this road, I feel that timeliness becomes more of an issue. The pen-test team needs to be embedded in the culture of the organization rather than someone hired every so often.
Richard, Great post!!!! I will be attending your class at BlackHat DC and am looking forward to getting more nuggets of wisdom from you.
I would agree with your ideas, but put yourself in a big company managing complex infrastructure and getting billions of logs every day; without generic security hardening and practices you cannot even eat, given the time you spend getting each box up and running.
I propose the establishment of configuration specifications for the various technologies an organization chooses to standardize on. The difference is that the contents of such a configuration specification are not just what SANS, NIST, vendors, etc. have to say about how to secure a platform, database, and so on; they are tested by field assessment teams. In this way, the field assessment team validates the configuration specifications prior to deployment, instead of a compliance function taking a document from SANS and saying: here's our standard, go forth and deploy.

Let me give you an example of what I mean. When a platform is attacked, certain things happen -- certain things the attacker needs to own the system, harvest useful information, and expand their influence on the network. One of those is the need to dump password hashes; another is the need to inject DLLs into running processes. To do either, the attacker often needs the "Debug programs" right on Windows, which the Administrator account has by default. What if, as part of field assessment, this reality is determined and a decision is made to remove the debug right from all users? Who cares, right? If an attacker gets Administrator or SYSTEM access they can just add it back. Yes, but the very act of adding the Debug programs right back to an account is exactly the kind of anomalous event which tells you something is wrong.

Beyond configuration settings, the configuration specifications should extend to the definition of log configuration elements: not just how to set logging on the platform itself, but what an analyst should configure on the SIEM, if you have one. The events of interest should be identifiable as the field assessment team's footprints are observed. In each case, intelligence is collected regarding what is done to attack the system.
Once the attack's various inputs and outputs are deconstructed, adjustments can be made to remove vulnerable aspects from the platform in question by modifying the configuration specification, deploying incremental changes, and adding the relevant log sequences to the resulting documentation and SIEM.
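The "Debug programs" example above can be sketched as a detection. Windows records Event ID 4704 ("A user right was assigned") when a privilege such as SeDebugPrivilege is granted; the snippet below scans a batch of events for exactly that. The events are plain dicts standing in for records pulled from a SIEM or an exported log, and the field names (`EventID`, `Account`, `PrivilegeList`) are simplified assumptions for illustration, not a real export schema.

```python
# Flag events that (re)grant SeDebugPrivilege -- an anomaly if the
# configuration specification stripped that right from every account.

def find_debug_grants(events):
    """Return the subset of events where Event ID 4704 assigned SeDebugPrivilege."""
    return [
        e for e in events
        if e.get("EventID") == 4704
        and "SeDebugPrivilege" in e.get("PrivilegeList", "")
    ]

# Synthetic sample: one ordinary logon, one suspicious privilege grant.
sample = [
    {"EventID": 4624, "Account": "CORP\\alice", "PrivilegeList": ""},
    {"EventID": 4704, "Account": "CORP\\svc-backup", "PrivilegeList": "SeDebugPrivilege"},
]
alerts = find_debug_grants(sample)
```

In a real deployment this logic would live in a SIEM correlation rule rather than a script; the sketch just shows how a field-assessment finding becomes a concrete detection.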
If proposed controls or aspects of a configuration specification are worthless, then let the field assessment teams identify the path into the light instead of just punting and throwing the environment into chaos with the Run What You Brung (RWYB) method.
Still, I do not think we can ignore the inputs either. I think it is far easier to fix one standard than it is to test 10, 50 or 100 deployed systems - not to mention the difficulty of operating so many different systems. (see Visible Ops for details on that)
Why not do both? After all, just because you have a standard, it doesn't mean that it is being followed, implemented or integrated correctly. Red and Blue team testing would be a great way to test your initial assumptions and how well you execute.
The majority of my career has been in small business where there are no such things. I deal with heterocultures (polycultures?) every day, simply because there aren't enough resources to deploy imaged systems in most small companies. I use a mix of tools to bring these under control, but really, it's more a case of "let's keep 80% of the bad stuff out and just deal with the 20% that get through".
It's much like betting on the agility of the antelope over the armor of the tortoise. Both are valid security strategies, but both have costs. The tortoise/monoculture will be safer against most threats but slower to react. The antelope will be more vulnerable and therefore has to be fast and agile in response.
The test, I think, to see if such an environment is really secure is whether the people managing the diverse network are fast and nimble when responding to events. If they are, then I agree that it could well be better than a monoculture. However, if they are the sort of organization that has ignored legacy systems hanging around everywhere, I fear that they could be much much worse.
My view has changed quite a bit since I started out. I'm at the point now where I'd rather hand out a 1-2 page hardening guideline that focuses on what really matters and complement that with regular automated vulnerability assessment and periodic manual penetration testing.
Of course, this thinking would have been heresy to me 10 years ago :).
The other angle with short-hand guidance is that system owners can reasonably argue they 'didn't know' about the need to deploy a defensive control. Now that they do, a decision needs to be made whether this is to be codified in the (ever expanding) guideline or not. A natural way to decide is based on the frequency of the vuln.
Control-Compliant vs. Field-Assessed will never be settled, and the two camps are similar to two opposing political parties. I believe if you are in a position to make decisions and steer an organization, you should consider the involvement of both ideologies and not go strictly in one direction. If an aggressor knows your validated product list only contains X, Y, Z, they only need to focus on vulnerabilities for those vendors, which is exactly the monoculture risk Richard described. Control-Compliant has the stench of old government practices, and fortunately most of the diehard supporters are on the way out from what I see at my location.
I've done a bit of research into security professionals' adoption of concepts such as these. One of the strongest bits of feedback I received was touched on in your comment chain...
It's a question of regulation. The government-mandated requirements pushed forth make it difficult to 'let a hundred flowers bloom' and still meet their arbitrary requirements.
That said, I am strongly in favor of this approach. I just want to adapt it to FSA- and US-regulated companies...