Network Computing Misses the Mark

I really enjoy reading the free IT magazine Network Computing. However, I believe comments by NWC authors in the last two issues demonstrate some fundamental misunderstandings of open source applications and system administration. These are not earth-shattering issues, but I thought I would share them with you.

First, the 27 October 2005 issue includes an article called "Open-Source Security Technology Joins Endangered List." Here are excerpts:

"For many users and vendors, network security is dependent on a collection of open-source programs that provide key capabilities, sometimes as standalone tools and sometimes as the basis for commercial products. Last month, however, the open-source status of two of those key technologies--Snort and Nessus--became threatened....

The moral is that heavy reliance on open source carries risk, and that the greatest insurance policy for open-source technology is participation by a large number of users and developers. If you're thinking of using open source, keep a close eye on what happens to both Snort and Nessus."

I would argue that open source carries much less "risk" than closed applications. The fact that the code is open is the "greatest insurance policy," not "participation by a large number of users and developers." If an open source program is no longer maintained, another developer can adopt it. Assuming the license is truly open, that new developer can resume the project, fork it, or rebuild it from scratch using the original as inspiration.

For example, Linux guru Tim Lawless started the Saint Jude project to protect the integrity of the Linux kernel, but had to abandon coding it in 2002. Last week Rodrigo Rubira Branco took over maintainership and released a new version. BASE, the replacement for the Web-based alert browser ACID, is a second example. The new version of SPADE hosted by Bleeding Snort is a third example. None of this would be possible with so-called less "risky" closed programs.

The second example of Network Computing missing the mark appeared in the following letter and response:

"I have a question concerning an application one of my consultancy clients needs that's targeted for Microsoft Data Center Server 2003, a product used to manage DPM, on Unisys 3S7000. The systems integrator is saying that 'for performance reasons,' it plans to 'modify the operating system' for the application.

It's been a long time since I've heard of any vendor advocating modification of a native OS to boost performance or achieve goals not supported by the OS. I've been all over Microsoft's OEM partner site and haven't read anything about using Data Center Server as an OEM product. Not even its predecessor, Data Center Server 2000, was ever available as a shrinkwrapped product; you had to have Microsoft services to implement it.

Have you ever heard of any vendor wanting to tweak the Windows kernel in order to support its application? Sounds risky...

Don MacVittie replies: Larry, your instincts are dead-on. Even in the Linux world, tweaking the OS for the application layer is generally considered taboo. There's just too much that can go wrong.

Are you sure the vendor is talking about making code changes to the kernel? Maybe what it has in mind is custom drivers, which are more acceptable, or a custom build, which is relatively common for OEMs.

If the vendor really does want to modify the kernel, you should tell your client to run away from it as fast as it can. There are enough good products out there to handle high-volume backups and replication without having to resort to such a drastic measure."

Good grief. "Even in the Linux world, tweaking the OS for the application layer is generally considered taboo. There's just too much that can go wrong." Like what, better performance? I do not know whether end users can modify the Windows kernel at all, perhaps via something like the sysctl mechanism found in BSD, so I do not fault the NWC writer for advising readers to stay away from Windows kernel tweaks.

Linux and BSD, however, are completely different beasts. I find the power to alter the kernel to be an advantage, not voodoo. In production I make few kernel customizations on BSD, not because I am scared and need to "run away," but because I only make the customizations with which I am familiar, such as adding support for IPsec or NAT. If I encountered a problem that could be addressed by customizing the kernel, I would take full advantage of the control an open source OS provides.
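To be concrete about what "customization" means here: these are kernel configuration options, not source patches. As a sketch, using FreeBSD 5.x-era option names (which vary by release), adding IPsec and NAT support means building a custom kernel from a GENERIC-derived config file containing lines like these:

```
options         IPSEC           # IP security
options         IPSEC_ESP       # encryption support for IPsec
device          crypto          # kernel cryptographic framework
options         IPFIREWALL      # ipfw packet filter
options         IPDIVERT        # divert sockets, used by natd for NAT
```

followed by the usual config(8) and make cycle. No kernel source code changes are involved.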

What are your thoughts on these issues?

Comments

John Ward said…
Rich,

I am going to play Devil's Advocate here for a minute. While I agree with you that Peer Review does provide a certain amount of insurance, and is generally good, I have to question why in your previous post about the Snort BO exploit (http://taosecurity.blogspot.com/2005/10/snort-bo-exploit-published-as-i.html), you basically insinuate a grand conspiracy on the part of Neel Mehta and ISS for doing the exact same thing you are advocating about OSS in this article. Why is peer review praised in some instances, but condemned in the case of Neel's discovery?

Also, are you actually advocating changing the kernel of a production OS release in a production environment, especially one that has undergone User Acceptance Testing? I would hesitate even to change parameters in the /proc filesystem on a production instance.

However, I do agree with you and feel NWC is a little off base with its FUD concerning OSS. I knew it was only a matter of time until someone put a spin on the Nessus situation, but what happened with Snort? As I remember the incidents of the past few months, Snort is still OSS and still has its support base, unless NWC is somehow equating SourceFire = Snort…
Anonymous said…
It seems to me the risk is in open source projects stagnating. To use your example, Saint Jude was without an active developer for three years. No bug fixes, no new features. The prime argument for Nessus going closed source was a lack of developer interest. Sure, a few "Keep Nessus Free" projects have sprung up, but how many will be actively maintained a year from now? The risk is considerably less than investing in a commercial product, but the fact is that most users of open source software are not developers.

Your second point is entirely semantic. Compiling in IPSec & NAT support is not the same as adding new code.
John,

Dude, what are you talking about? Peer review has nothing to do with this post. I am praising open source because it allows continuation of a project if the original developer abandons the project, fails to make a change desired by the user community, etc.

Regarding kernel modifications, the point I was making is that the power to modify a kernel is a benefit, not something to "run away" from. Of course you should test systems prior to deployment in production. I did not say anything about modifying kernels on production systems without testing.
To Anonymous:

Regarding stagnation: are you saying commercial products do not stagnate? Don't make me name commercial products that were lousy years ago, continue to be lousy, and have no hope of abandoning their lousy nature in the future!

About my second point: it is entirely not semantic. Nothing in the original letter and response, or in my comments, referred to adding new code.
John Ward said…
Rich,

I wasn't clear in my original post. The nature of OSS allows for two types of insurance: one provided by Peer Review, the other by allowing the continued lifespan of a project through forks, community development, etc. Both are benefits in my opinion. My question is, how can you point fingers in one aspect (peer review, bug finding, as in Neel's case) and not the other (continued lifespan for abandoned projects), when in fact they are one and the same, stemming from the same characteristic of OSS, namely freely available source code? Why are some instances of the same practice OK, while others are anti-competitive smear campaigns by rival companies?
Hey John,

My original post gave five reasons I thought a Snort worm could occur. Only one of those involved Snort being an open source project.

My second post defended my intuition that Neel Mehta found a vulnerability in Snort because he worked on behalf of his employer, ISS. Tom Ptacek continues to assert that "just because ISS published the advisory doesn't mean that Mehta was forced to research Snort." The alternative story I am supposed to accept is that Mehta found a vulnerability in Snort on his own, reported it to his employer, who then told US-CERT, who told Sourcefire.

This all happened one week after Checkpoint decided to acquire Sourcefire. Suddenly an $880 million market cap company, CKP, is playing in the same sandbox as a $1400 million market cap company, ISSX. Wouldn't it make sense to cast doubt on the new kid on the block by showing that its recently acquired product has security vulnerabilities?
Anonymous said…
Two points of clarification:

1) Check Point's market cap was $5542M at market close today, not $880M. :)

2) Nice to see that NWC doesn't take me or Gil Shwed at our word when we say that we're not planning on closing Snort anytime soon. Guess we'll just have to prove it to you...
Hi Marty,

If you read this -- where are you getting your market cap numbers? Yahoo finance shows CKP has "Market Cap: 877.82M" for today.
Oops -- CKP is "Checkpoint Systems Inc., a maker of electronic security devices used by retailers" -- not Check Point Software Technologies Ltd.
Got it -- CHKP. No wonder ISS is worried.
