Friday, August 19, 2005

Thoughts on SANS .edu Security Debate

The 10 August 2005 issue of the SANS NewsBites newsletter featured this comment by John Pescatore:

"There has [sic] been a flood of universities acknowledging data compromises and .edu domains are one of the largest sources of computers compromised with malicious software. While the amount of attention universities pay to security has been rising in the past few years, it has mostly been to react to potential lawsuits do [sic] to illegal file sharing and the like - universities need to pay way more attention to how their own sys admins manage their own servers."

I agree with John's assessment, except for the last phrase that implies university sys admins "need to pay way more attention" to security. From my own view of the world, a lot of university system administrators read TaoSecurity Blog, attend my classes (especially USENIX), and read my books. I believe the fault lies with professors and university management who generally do not care about security and are unwilling to devote the will and resources to properly secure .edu networks.

The 17 August 2005 newsletter features a letter to the editor signed by eleven .edu security analysts. They take exception to Mr. Pescatore's comments. SANS is requesting comments on that letter. Here is my take on a few excerpts.

The letter states:

"Many of these schools are complex and most security implementations typically used at a corporate or government level don't fit a university model because a broader range of network activities is permitted on university networks, in large part due to a much more limited set of policies and controls compared to government and commercial entities."

The "broader range of network activities" is part of the problem. Most .edu networks apply very little inbound access control and hardly any outbound access control. (Sometimes that is reversed; one .edu I worked with implemented zero inbound control and a single outbound control, denying TFTP!)

Do .edu networks think the corporate world does not support a wide variety of protocols and services? I recently finished a traffic threat assessment for a client. I was surprised to see the number of protocols in use that I did not immediately recognize. This is no different from a .edu, except the .com had taken steps to restrict use of those protocols and services to defined partners. "I can't define who will access my data," a .edu might reply. If that is the case, the .edu has decided that anyone in the world can access potentially sensitive data. (See the section below on the "tenth planet" to read consequences of that stance.) In reality, the .edu is saying "it's too difficult" to define who should access data. That's a cop-out.

The "limited set of policies and controls" is not the fault of the administrators. It is the fault of management who refuse to rein in professors, or to force them to accept responsibility for operating insecure systems. If a professor is a prolific researcher, he or she is often given a "pass" to run whatever infrastructure he or she needs for research purposes. While research is obviously important, the professors and staff should realize that lack of security jeopardizes their research. How would they feel to know that a team of competing researchers, or even corporate spies, were stealing the next breakthrough in gene therapy from research systems?

We already know that so-called "tenth planet" discoverer Michael Brown was forced to rush his announcement for fear that "hackers" would reveal his work. I heard Mr. (Dr.?) Brown on NPR's Science Friday a few weeks ago, and he confirmed the story. He and his colleagues preferred to give an orderly press conference to inform the world of their discovery. Instead, Mr. Brown decided to rush the process. He feared a "hacker" would provide information on how to find the tenth planet to amateur astronomers, who might then take credit for its discovery! Security is not an inconvenience; it's a necessity.

The letter continues:

"Many times, the tools to secure these environments don't exist and changing the culture in these heterogeneous environments to one which promotes secure computing is very difficult."

Actually, all of the tools to secure a .edu exist. Almost all of them exist in open source form, too. Ten years ago this might not have been the case, but today one can employ open source countermeasures that in some cases exceed their commercial counterparts. The array of network-centric security capabilities offered by OpenBSD, for example, is amazing. Firewall? Pf. VPN? IPsec. Secure remote access? OpenSSH. Centralized time synchronization? OpenNTPD. I could continue at the host level if one needed a reliable platform for hosting Web sites, handling email, etc.
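To show how little is required to stand up such controls, here is a minimal Pf sketch of a default-deny inbound policy with one narrow exception for OpenSSH. The interface name and port choice are illustrative assumptions, not a recommendation for any particular network:

```
# /etc/pf.conf -- minimal sketch; interface name "em0" is an assumption
ext_if = "em0"

set skip on lo                    # do not filter loopback traffic

block in log on $ext_if all       # default-deny inbound; log drops
pass out on $ext_if keep state    # allow outbound, with stateful replies

# narrowly scoped exception: secure remote access via OpenSSH
pass in on $ext_if proto tcp to ($ext_if) port 22 keep state
```

A ruleset like this is loaded with `pfctl -f /etc/pf.conf` and enabled with `pfctl -e`; everything beyond these ten lines is policy decisions, not tooling.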

The tools exist, but the managerial will to implement them does not.

The letter continues:

"Our overall approach to our networking is about promoting research and information sharing and our security architecture needs to take that into account. Many schools uphold the concept of the 'End-to-End' nature of the original Internet for both research and communication of ideas. These ideas on full connectivity have merit and cannot be dismissed because the nature of faculty research or inter-university collaboration might rely on unfettered access to the Internet. The concept of a DMZ is not feasible for many schools compared to many in government and business which cannot live without one."

Immense multi-national organizations foster information sharing and research. While they admittedly are not perfect, many enterprises manage to maintain better security than .edu's. The "end-to-end" Internet is a myth to which too many people cling. That model may have worked when the Internet was a private network, but "end-to-end" today places no barriers between your system and anyone else in the world with an IP address.

The majority of hosts are not designed, configured, or deployed in a self-defending manner. Hosts that cannot protect themselves must be supported by additional security resources. Even if a system could be operated independently (e.g., an OpenBSD server), without any network-based access control, this is not a tenable defensive model. The .edu world needs to understand that defense-in-depth is one of the best ways to compensate for weak host software, potential misconfiguration, and aggressive intruders.

Finally, "the concept of a DMZ" is not feasible for many organizations, not just .edu's. Security zones, which group hosts of similar security requirements, are now the best way to offer network-centric access control and monitoring.

What are your thoughts?

6 comments:

Phil Hollows said...

Couldn't agree with you more, Richard. This isn't a technical problem, it's a management (read: faculty) problem. Gartner are dead wrong to blame admins - well, at least until faculty stop wrapping themselves in the "academic freedom" flag and actually educate themselves about IT in general (and infosec in particular).

I blogged on this back in March when the Harvard incident first surfaced. Plus my personal favorite, Guess I was wrong about locked university offices

My meagre 2c.

Anonymous said...

With every week that goes past and as new incidents blow by us, ignorance of the issues becomes a weaker and weaker defense, even at .edu's. In my mind, if a company or .edu has lacking security or resources behind security (this includes management apathy), that is an active decision to accept the risks.

At an .edu, I think one of the important steps is to determine the criticality of systems. I think security should start with those systems and information deemed critical, especially those with sensitive information. With the openness of networks at an .edu, you may as well secure what you deem most important, and let the rest be "open."

That would be my initial approach though, especially in the face of mgmt apathy and lack of resources.

So, with this sort of thinking, one can actually *begin* to blame sysadmins for not at least securing those critical assets... (as a sysadmin, I still sympathize with them, I have to point out)

-LonerVamp

Richard Johnson said...

I think the original comment by John Pescatore begs the question badly. He's apparently confusing willingness to disclose security issues with actual compromises.

By the way, his comment about the largest class of computers compromised with malicious software being at universities is factually incorrect. The largest class of compromised systems are instead broadband subscriber (home) systems in Korea, France, USA, Germany, etc.

I believe your profession and background are also leading you down the wrong path. Your view of "properly secure" may not be what makes sense at a university. University networks exist primarily to enable the open exchange of packets. It's a side-effect of universities existing to disseminate knowledge.

Prevention isn't always network traffic restrictions. In my experience, proper security in an open environment involves prevention along with better detection and response than I've ever seen in a corporation.

I bridge the corporate and .edu worlds. Speaking of prevention, I have yet to see a university that allows even a 'stellar mass scientist' to do whatever they want on the network. There are maintenance standards that require acknowledgement of security rules (patch application, accounts allowed, etc.), before the switch will even provide link to the lab.

On the other hand, I've seen senior management at corporations ignore all network safety rules, and get away with it; woe betide the security staff (such as me) who objects. Just as dangerous, I've seen what staff will do to subvert corporate network access restrictions (as they do other nonsensical bureaucratic rules) just so they can get their jobs done.

Wherever it happens, it doesn't matter whether a host or subnet is down because of a compromise that must be cleaned up, or whether it's down because of firewall rules. In either case, the host/net is not doing the job for which it was set up. In my experience, .edus do a better job of balancing that than most corporates, even including those portions of universities that do intensely proprietary research.

Perhaps examples illustrating the issues which really matter to me when I'm wearing both my incident response and my incident prevention hats will help elucidate the difference. First, let me contrast the typical response to our reports of an incident at each type:

.edu: Thanks for the notification. Here's what we found the attacker did here (flow logs, packet captures, binaries, source) [Followed by more information exchange and often revised threat models on both ends.]

corporate: Deafening silence.

Of course, the difference isn't just in the reactions after prevention inevitably fails. Let me contrast typical interconnect planning discussions:

.edu: Here's how we authenticate users. Here's the access control we use. Show us yours, and we'll design something that works for all our collaborating professors.

corporate: We have an IPS. It's therefore impossible for our internal hosts to be compromised. Your suggesting that they could be is insulting. Your objection to us terminating our VPN on your internal network must thus just be a malicious control issue on your part. We will now attempt to have your management overrule you.

Putting it very bluntly, .edu sites know how to set up firewalls, but they don't rely on them as some kind of panacea. They know how to keep students in line, and are in fact leading the way in network admission control systems (heck, some have had that kind of system in place for half a decade or more). Most corporate structures don't have such defense in depth, nor can they even acknowledge that it's necessary.

I'll interconnect with a more open network, where policies and procedures exist for both minimizing and handling the inevitable compromises, before I'll interconnect with a "I know nothink, nothink!" network that denies the possibility of compromise, and then almost always denies the compromise after the fact.

Don't mistake disclosure and discussion for compromise frequency. "Prevent, talk about failures, and fix quickly" is better than "head-in-sand, silence, and covering-of-asses" any day.

Anonymous said...

I read an article by someone (I think it was 'the' network admin for MIT, but I can't find the article to confirm) stating that firewalling was dead. His point was that he would always lose any battle with a Nobel Prize winner. My thinking: you do econ/chem/phys/whatever, I'll do netsec. Or we settle this with fisticuffs...

I work at a much smaller edu, so I happily concede alpha status to that guy. I deal with much smaller egos, as well. Still - how hard could it be to demonstrate that leaving the keys in the ignition and the door unlocked is an unwise way to leave your car? At a certain level, people skills are used to differentiate among the technically proficient. The guy DID thrive in an intensely political environment. Maybe it was by rolling over, dunno. His point, that we need to harden hosts instead, was lame. Is it easier to impose a patching/config regimen on prima donnas?

I have a default block. Now, I realize that we need to connect to stuff - so that default block is amended on request. We're tail, not dog. No problem, no attitude, maybe some education. A few crafted, narrowly targeted exceptions to the default block, and we're pretty secure and everybody's happy.
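The "default block, amended on request" approach the commenter describes could look something like the following Pf sketch; the interface name, addresses, and ports are hypothetical placeholders standing in for exceptions that have actually been requested and reviewed:

```
# default-deny inbound; exceptions added only on reviewed request
ext_if = "fxp0"    # assumed external interface name

block in log on $ext_if all
pass out on $ext_if keep state

# exception 1: campus web server (placeholder address)
pass in on $ext_if proto tcp to 192.0.2.10 port 80 keep state

# exception 2: SSH for one lab, restricted to a partner netblock
pass in on $ext_if proto tcp from 198.51.100.0/24 to 192.0.2.20 port 22 keep state
```

Each exception names a specific destination and service, so the ruleset documents exactly what was opened, for whom, and why the rest stays closed.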

seann@dorand.net said...

I've been a UNIX admin and security specialist for the last 12 or so years.

It seems that any large networked environment (.edu or otherwise) has so many use cases, user groups, business requirements, internal political battles, and a veritable cornucopia of other needs that it's nearly impossible to have widely effective security measures.

The client at which I'm currently working has an extremely strong firewall perimeter, DMZ structure, and IDS infrastructure. We obviously implement strict inbound filtering and have very tight egress as well from a protocol standpoint. We have internal firewalls that segment sensitive sections of the network from the ever expanding corporate network. All UNIX boxes are hardened as much as feasibly possible. Encryption is used on the network where appropriate.

The network insecurities always seem to boil down to the Windows based systems and how they're managed. I've never worked in an EDU network environment but I imagine the vast majority of security issues are related to Windows systems.

We all do our part beating up on Microsoft but I'd really really like to get to a point where some of these issues can be solved as the reality is that the Windows hosts aren't going to go away. I feel the pain of the Windows admins. I know *I* wouldn't want to have to support all that baggage.

Every time there's a new worm or something coming out, my boilerplate response to the Windows admins is usually as follows:

- Yes, that port is blocked at our perimeter.

Attack vectors will most likely come from:

- VPN users (home users or 3rd parties)

- Site to site vpn connections that management forced us to put in allowing M$ RPC traffic for file transfer.

- Dial-up users (even though their connections pass through a firewall... and no we can't shut down 135, 445, etc. because everyone will complain that they can't get to the domain and map drives, etc.)

- Consultants or 3rd parties physically attaching their infected systems to the network.

- Dual-homing idiots that are attached to the corporate network but dialing out to AOL or some 3rd party network.

On top of this it seems that the Windows guys have the firm belief that as long as they're patching systems routinely and running the latest virus signatures that they're fine.

Ok great. Today you're fine, but that doesn't change the fact that every single Windows host in the environment is listening on all the standard ports, and shutting them down will break your ability to manage them effectively.

What options do they even really have? They always want to manage some of the Windows systems that sit behind our internal enclave firewalls. When you try to explain that it's a bad idea to allow inbound access from the domain and associated support systems to the sensitive systems behind the internal firewalls, they start to get that deer-in-the-headlights look.

I really feel for the Windows guys. They can't just start shutting down ports without losing the ability to effectively manage 5000+ systems and support their user base.

In the UNIX world we learned running NIS was a bad idea. We got sick of patching all the time. This is the pain the Windows world is going through now with their inherent trust relationships amongst all domain hosts, etc.

I'd like to start implementing datacenter firewalls to put applications behind, segmenting those systems from the generalized network, but once you have to start poking the M$ protocols through there (especially to/from domain controllers) there's no hope.

What practical things could be done to improve this situation (aside from Microsoft redesigning their system and coming up with new management methodologies)?

Anonymous said...

The corporate world is motivated only by one thing: profits/revenues. If taking action to promote security is not worth the costs, then it simply will not be done. Corps are entirely economically influenced...

This is general, and there are companies that may spend more on security and also show some philanthropy, but generally speaking, corps are economic entities and respond only to those pressures. If embarrassment and loss of customer base is costly, yup, security will be a solution.

For .edus, that can be a little different, as the whole purpose for many of those people at the edu is vastly different.

LonerVamp