What Would Galileo Think

I love history. Studying the past constantly reminds me that we are no smarter than our predecessors, although we have more knowledge available. The challenge of history is to apply its lessons to modern problems in time to make a difference.

I offer this post in response to some of the reporting from the Gartner Security Summit 2008, where pearls of wisdom like the following appear:

What if your network could proactively adapt to threats and the needs of the business? That’s the vision of the adaptive security infrastructure unveiled by Gartner here today.

Neil MacDonald, vice president and fellow at Gartner, says this is the security model necessary to accommodate the emergence of multiple perimeters and moving parts on the network, and increasingly advanced threats targeting enterprises. “We can’t control everything [in the network] anymore,” MacDonald says. That’s why a policy-based security model that is contextual makes sense, he says.

“The next generation data center is adaptive – it will do workloads on the fly,” he says. “It will be service-oriented, virtualized, model-driven and contextual. So security has to be, too.”


Translation? Buzzword, buzzword, how about another buzzword? People are paying to attend this conference and hear this sort of "advice"?

I humbly offer the following free of charge in the hopes it makes a slight impact on your approach to security. I am confident these ideas are not new to those who study history (like my Three Wise Men and those who follow their lead).

Let's go back in time. It's the early 17th century. For literally hundreds of years, European "expertise" with the physical world has been judged by one's ability to recite and rehash Aristotelian views of the universe. In other words, one was considered an "expert" not because his (or her) views could be validated by outcomes and real life results, but because he or she could most accurately adhere to statements considered to be authoritative by a philosopher who lived in the fourth century BC. Disagreements were won by the party who best defended the Aristotelian worldview, regardless of its actual relation to ground truth.

Enter Galileo, his telescope, and his invention of science. Suddenly a man is defending the heliocentric model proposed by Copernicus using measurements and data, not eloquent speech and debating tactics. If you disagree with Galileo (and people tried), you have to debate his experimental results, not his rhetoric. It doesn't matter what you think; it matters what you can demonstrate. Amazing. The world has been different ever since, and for the better.

Let's return to the early 21st century. For the last several years, "expertise" with the cyber world has been judged by one's ability to recite and rehash audit and regulatory views of cyber security. In other words, one was considered an "expert" not because his (or her) views could be validated by outcomes and real life results, but because he or she could most accurately adhere to rules considered to be authoritative by regulators and others creating "standards." Disagreements were won by the party who best defended the regulatory worldview, regardless of its actual relation to ground truth.

Does this sound familiar? How many of you have attended meetings where participants debated password complexity policies for at least one hour? How many of you have wondered whether you need to deploy an IPS -- in 2008? How many of you buy new security products without having any idea if deploying said product will make any difference at all?

What would Galileo think?

Perhaps he might do the following. Galileo would first take measurements to identify the nature of the "cybersecurity universe," such as it is. (One method is described in my post How Many Burning Homes.) Galileo would then propose a statement that changes some condition, like "remove local Administrator access on Windows systems," and devise an experiment to identify the effect of such a change. One could select a control group within the population and contrast its state with a group that loses local Administrator access, assessing the security posture of each group after a period (say, a month or two). If the change resulted in a measurable security improvement, like fewer compromised systems, the result is used to justify further work in that direction. If not, abandon that argument.
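For concreteness, here is a minimal sketch of how one might evaluate such an experiment. The group sizes and compromise counts below are invented; the point is only that the comparison becomes a measurement rather than a debate.

```python
# A minimal sketch of the experiment above: compare compromise rates in a
# control group and a treatment group (local Administrator removed) with a
# one-sided two-proportion z-test. All counts below are hypothetical.
from math import sqrt, erf

def two_proportion_z(comp_ctrl, n_ctrl, comp_treat, n_treat):
    """Return rates, z-score, and one-sided p-value for the hypothesis
    that the treatment group's compromise rate is lower than the control's."""
    p_c, p_t = comp_ctrl / n_ctrl, comp_treat / n_treat
    p_pool = (comp_ctrl + comp_treat) / (n_ctrl + n_treat)  # pooled rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_ctrl + 1 / n_treat))
    z = (p_c - p_t) / se
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))  # P(Z > z), one-sided
    return p_c, p_t, z, p_value

# Hypothetical results after two months: 500 hosts per group, 40 compromised
# in the control group versus 25 in the group without local Administrator.
p_c, p_t, z, p = two_proportion_z(40, 500, 25, 500)
print(f"control {p_c:.1%}, treatment {p_t:.1%}, z={z:.2f}, p={p:.3f}")
```

With these invented numbers the one-sided p-value comes out near 0.03, so the data would justify extending the change; a large p-value would say abandon it.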

This approach sounds absurdly simple, yet we do not do it. We constantly implement new defensive security measures and have little or no idea whether the benefit, if any (who is measuring, anyway?), outweighs the cost (never mind just money -- what about inconvenience, and so on?). Instead of saying "I can show that removing local Administrator access will drive our compromised host ratio down 10%," we say "Regulation X says we need anti-virus on all servers" or "Guideline Y says we should have password complexity policy Z."

Please, let's consider changing the way we make security decisions. We have an excellent model to follow, even if it is four hundred years old.

Comments

Michael Janke said…
This sounds pretty difficult to implement. In the example you give, if I take away local admin on half my desktops, I have to control all the other variables (the users, the browsers, the sites browsed); otherwise I would not know whether the local admin change was the reason one set of desktops had fewer compromises.

Presumably if one had enough controls on the experiment, a large enough sample, and if one dug out their college stats texts, this could be done. But you'd have to do this for each and every change in the corporate security profile (Group Policy, anti-virus software, etc.). Heck -- you'd even have to set up a control group for each month's Windows patch rollout, so that you could demonstrate that the April patch set was really necessary before incurring the cost of the rollout. And because security incidents don't happen every day, a couple of months with no incidents would neither demonstrate nor disprove the value of the change.
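To put rough numbers on the college stats point, here's a minimal sketch (with invented compromise rates) of how big each group would need to be:

```python
# Rough sample-size estimate for a two-proportion comparison: how many hosts
# per group to detect a drop in monthly compromise rate from p1 to p2, at
# one-sided alpha = 0.05 and power = 0.80. The rates below are hypothetical.
from math import sqrt, ceil

def hosts_per_group(p1, p2, z_alpha=1.645, z_beta=0.84):
    p_bar = (p1 + p2) / 2
    num = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
           + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

# Hypothetical: monthly compromise rate falls from 8% to 5%.
print(hosts_per_group(0.08, 0.05))  # roughly 800+ hosts per group
```

Hundreds of hosts per group, per change -- which is exactly the problem.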

It sounds like you'd spend the next decade monitoring carefully controlled experiments instead of making rational improvements in security. You'd have the perfect security configuration for Win XP about the time that Windows 11 shipped.

An alternative would be a process that sets and enforces standards and guidelines, but also requires that any enforced standard or guideline be directly attributable to the mitigation of a specific threat or attack vector, preferably one that actually occurs or occurred. That sort of thought process, combined with a rational set of policies that allow for exceptions or deviations provided that alternative controls are in place to mitigate the actual threats, seems far more workable.

I know of a case where a difficult-to-implement, security-related configuration was rolled out to a somewhat (or very) hostile group of business units. The key to convincing the units was clearly demonstrating the threat to their data, using analysis from security incidents that had already occurred within the organization, and demonstrating that the proposed technology would mitigate the threat(s) in the least-cost, most-effective manner.

Of course my theory is backed by only one experiment with about 30 data points, so it isn't much to hang your hat on either. ;)
Joshua Rieken said…
I was staring at Galileo's telescopes at the Istituto e Museo di Storia della Scienza in Florence yesterday morning. Similar thoughts went through my mind. Let's hope for a security Enlightenment.
Anonymous said…
It's always nice to have people refer to history when explaining a fact or a development. I believe you reach the correct conclusion from the Galileo story.

However, as a historian by profession, I have to disagree on some of the assumptions.

It's not all darkness before Galileo: you mention Copernicus, and I'll quote Albertus Magnus: "When I am doing natural research, I am not interested in miracles." (De Generatione et Corruptione, 13th century)

Aristotle was very much interested in the examination of nature (as opposed to Plato). The problem is that Aristotle was followed blindly, without enough questioning. You could say that the problem with Gartner reports is not so much their content as the people following them blindly.

And now back to Galileo: he was not the first one doing proper research. In his childhood the Western world introduced the modern calendar, the one in use today, which is so exact that it accounts for the length of a year down to the second. And they did not get this information out of praying; it was hard natural research, done generations before. It just took centuries to get the world to accept the new calendar.

So you can say that the problem was politics rather than research. And that is also the problem with Galileo: his research was brilliant and widely accepted -- accepted until he foolishly challenged his own patrons. He challenged them in his famous dialogue (Dialogue Concerning the Two Chief World Systems), which is a brilliant piece of literature (not so much a brilliant piece of natural research in my eyes, but that's only me). This famous dialogue made a fool of the pope, his former friend. And boy did he fight back!

And it was this backfire that made Galileo the icon of modern natural research. Not because he really was such an icon, but because we need such a person to mark a change in beliefs that had started centuries before, in the Middle Ages.

But as I mentioned above: your conclusions are still correct. Starting with complex information, unreliable sources, and facts that are hard to verify, and still arriving at the correct conclusions -- that is what makes a good historian. Well done.

And if anybody is interested in more on this, then read the Cambridge Companion to Galileo, it's a very good read. And have a peek into the dialogue. It's really a great piece of writing and fairly easy to understand.
Christian Folini: I agree. I should have provided a disclaimer saying "massive historical simplification follows". :)
Anonymous said…
This post sounds suspiciously like The New School of Information Security by Shostack and Stewart -- specifically Chapter 3, 'On Evidence', which starts "In 1610 Galileo published his observations of Jupiter's moons. He used his findings to argue in favor of Copernicus' heliocentric model..." and goes on to discuss issues surrounding empiricism and "evidence" used to support conventional wisdom in the industry.

;)
Anonymous said…
If there was one post I could have IT/Security mgmt read this year it is this one. Great post!! Somebody needs to "call out" the industry.
Mad Irish -- No kidding! Scout's Honor, I have not read or even seen a copy of that book yet. Cool.
Anonymous said…
Galileo's "invention of science"?

Give the guy his due, but don't go overboard.
Anonymous said…
No kidding. I was recently criticized at a meeting for "wasting time" by questioning the assumptions behind our current password policy. It's as if security dogma like the "8 character complex password" is sacred even in today's world of quad-core consumer processors and rainbow tables. For this profession to remain relevant, we need to recognize that our policies and practices are based on certain assumptions (e.g., attacker resources, trust boundaries, security of third parties) and constantly question whether those assumptions hold. Unfortunately, I think it would require a paradigm shift in thinking and then a pragmatic reexamination to subsequently sell it to the business folks.
Re: password policy -- Agreed. I wonder if those advocating password policies even know where time-based requirements originate? If that logic were carried into 2008, passwords should be changed every second, or however long it takes to query a rainbow table!
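As a back-of-the-envelope illustration of questioning that assumption (the guess rate below is an invented figure; plug in your own threat model):

```python
# Rough arithmetic behind the "8-character complex password" rule.
# The offline guess rate is a hypothetical assumption, not a measurement.
charset = 26 + 26 + 10 + 32       # lowercase, uppercase, digits, symbols
keyspace = charset ** 8           # every possible 8-character password
guesses_per_second = 1e9          # assumed offline cracking rate

days = keyspace / guesses_per_second / 86400
print(f"keyspace {keyspace:.2e}, exhaustive search ~{days:.0f} days")
# And a precomputed rainbow table skips the search entirely for unsalted
# hashes, which is the commenter's point about stale assumptions.
```

Roughly seventy days of exhaustive search under those assumptions, and effectively zero with a table lookup. The policy's protective value depends entirely on assumptions nobody revisits.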
Anonymous said…
In The New School of Information Security, Shostack and Stewart argue that 'Best Practices' are the bane of computer security and that they're basically used as sticks to enforce behavior and rarely based on any empirical evidence. They point out that much of what we consider 'Best Practice' or 'collective wisdom' is based on dubious assumptions at best. I think anyone interested in this post should read the book -- I HIGHLY recommend it (not being paid to post this stuff, I promise :).

I do think the book reflects new trends emerging in infosec, though, rather than breaking new ground. I think as security evolves into a real field and matures (how many companies now have an actual CSO?) there are going to be some paradigm shifts. For one thing, lots of new practitioners of computer security have formal training in other fields, which gives them a certain perspective and objectivity on the industry. I think this fresh viewpoint is going to push a lot of old assumptions aside and hopefully change the just-buy-an-appliance-and-rack-it way many of us do things. At the very least, a lot of these new participants are asking "why" and pointing out that many of our 'best practices' aren't supported by any empirical evidence and that the scientific process is sadly lacking from much of what we do.
Anonymous said…
Re: invention (not father) of science

It's nice that Wikipedia reports that many call Galileo that. However, it's pretty clear that folks like Archimedes could also have a damn good case made for them.

Like Newton, Galileo had some shoulders to stand on.
Anonymous said…
Anonymous, I was going to write something to counter your argument, but I figure Einstein said it better...

"Propositions arrived at by purely logical means are completely empty as regards reality. Because Galileo realised this, and particularly because he drummed it into the scientific world, he is the father of modern physics—indeed, of modern science altogether." (1954, p.271).

Einstein, Albert (1954). Ideas and Opinions, translated by Sonja Bargmann, London: Crown Publishers. ISBN 0-285-64724-5.
Anonymous said…
"Best Practices" - I've worked at consulting firms that collectively abhor that term, mostly for legal reasons. I agree with the end result though. IMO, "Best Practices" should be called out as dogma that the recommender is too lazy (or unable) to explain the reasoning behind. Beware of consultants quoting Gartner or spouting "Best Practices." You can get that kind of advice for free from vendors ;)
JimmytheGeek said…
I have a little trouble with the empirical model suggested above, for reasons others have mentioned but I'll recap anyway.

It's not like you have repeatable experiments you can run on demand. 1337 haxors are like sunspots. They attack, but they might not attack YOU for a while. Meanwhile, the bots are buzzing constantly and the drive-by downloads are happening. The subtle stuff you might not even notice won't get detected and measured.

A variation of this approach is strictly reactive: "We know we had these N problems with this cause. Countermeasure Y reduced N to N_Y." Then Y is or is not worth the trouble. Apart from the reactivity, this makes sense to me.

And if a countermeasure doesn't prevent an attack (so your score is lower in that game), perhaps it was because you were hit by the one person who could get through. Does that make the countermeasure unsound?
