Notes from NWC

I found some great stories in the 8 June 2006 issue of Network Computing. I liked their feature on The All IP Enterprise, which included 10GigE: Fast, Pricey and Coming to a Data Center Near You by Don MacVittie. He writes:

Seeing as those who forget history are doomed to repeat it and all that, remember that new network technologies generally follow a well-trodden path: Very expensive, seen as backbone only, considered ultra high bandwidth; still expensive but making its way into nonbackbone roles, nice to have if you can swing it (this is where 10GigE gear is now); prices in free-fall, widely used; cheaper than the technology it replaced, pervasive.

That's a really nice way to summarize technology adoption.

One significant difference between 10GigE and its forebears could affect this cycle: With Fast and Gigabit Ethernet, we just upgraded switches and cards, and possibly cables if they were really old, and voilà! But 10GigE runs best over fiber. Vendors will pitch you on copper, and under the proposed 10GigE standard, Cat 6 cable will run up to 55 meters, while Cat 6a and Cat 7 will run up to 100 meters, but we're not sold. Plus, the implementations we're aware of today require upgraded connectors. Expect to pull wire, or better yet, pull fiber.

This is indeed interesting. If you ever expect 10GigE to the desktop, you won't be able to use your Cat 5e cables.

Near-term, using 10GigE as a backbone makes perfect sense when aggregating the throughput of multiple networks. That's the first place to look for return on investment. Another use for 10GigE that's often overlooked is in server consolidation. We recently ran tests on an HP machine running eight dual-core AMD Opteron chips and found it to be network-bound. Considering the machine used four 1-gigabit network cards, we'd say we've reached the point with 1 gigabit that servers can be limited by their network connectivity. We sure hit a throughput wall with our testing, which was admittedly over-the-top in comparison to what you're likely to see in regular data-center use.

"10GigE is the next logical step for high-performance data centers," Conover says. "With massively scalable multiprocessor systems, say eight-core and up, eliminating the CPU ceiling for network performance, it's time to start thinking about 10-gigabit connectivity direct to hosts. Remember this simple rule of thumb: It takes 1 MHz of processing power to deliver 1 Mbps of throughput. With eight core systems available, running at 2 GHz each, it's feasible to support 10-Gbps interfaces on next-generation server computing platforms."
(emphasis added)

I think this rings true for packet inspection too. I have seen vendors with eight 2 GHz CPUs reporting packet inspection approaching 8 Gbps.
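
Here is a quick back-of-the-envelope sketch of that rule of thumb in Python. The clock speed I assume for the HP test box is a guess, since the article does not give it; treat the numbers as illustration, not measurement.

    # Back-of-the-envelope sketch of the "1 MHz of CPU for 1 Mbps of throughput"
    # rule of thumb quoted above. Figures are illustrative, not benchmark results.

    def rough_ceiling_gbps(cores, clock_ghz):
        """Throughput ceiling in Gbps if each MHz of CPU moves roughly 1 Mbps."""
        total_mhz = cores * clock_ghz * 1000   # convert aggregate GHz to MHz
        return total_mhz / 1000                # 1 Mbps per MHz, reported in Gbps

    # Conover's example: eight cores at 2 GHz each.
    print(rough_ceiling_gbps(8, 2.0))    # 16.0 Gbps -- headroom for a 10GigE interface

    # The packet inspection boxes I mention (eight 2 GHz CPUs) share that 16 Gbps
    # ceiling, so approaching 8 Gbps of inspected traffic is plausible.

    # The HP test box above: eight dual-core Opterons (16 cores); the clock speed
    # here is my guess, since the article does not give it.
    print(rough_ceiling_gbps(16, 2.2))   # 35.2 Gbps -- far beyond its four 1 GbE cards (4 Gbps)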

It occurred to me that one of the expected benefits of an all-IP enterprise would be easier troubleshooting. If it's all IP, you should be able to visually inspect the traffic to see what is happening. But is that possible at 10GigE speeds? For companies that care about being able to inspect traffic at arbitrary locations, on demand, I think matrix switches are going to become more popular.
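
For a sense of scale, here is a short Python sketch of what capturing a 10 Gbps link actually means in bytes; the utilization levels are hypothetical.

    # Rough numbers on what full-content inspection of a 10GigE link implies.
    # Utilization levels are hypothetical.

    def capture_load(link_gbps, utilization):
        """Return (MB per second, GB per hour) of capture at a given utilization."""
        bytes_per_sec = link_gbps * 1e9 * utilization / 8
        return bytes_per_sec / 1e6, bytes_per_sec * 3600 / 1e9

    for util in (0.1, 0.5, 1.0):
        mb_s, gb_h = capture_load(10, util)
        print(f"{util:.0%} utilized: {mb_s:,.0f} MB/s to disk, {gb_h:,.0f} GB per hour")

    # 10% utilized:    125 MB/s to disk,   450 GB per hour
    # 50% utilized:    625 MB/s to disk, 2,250 GB per hour
    # 100% utilized: 1,250 MB/s to disk, 4,500 GB per hour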

In IP PBXs Come Into Their Own, author Brian Riggs writes:

Although it may sound counterintuitive, the key value of IP PBXs is not that they run voice traffic over IP networks. It's that they run voice over the same infrastructure on which business applications depend. This lets you tightly integrate telephony software with business applications. CRM is a classic example--when a contact agent, sales rep or other customer-facing employee receives a call, the telephony software identifies the caller, whether through caller ID or by storing a contract number entered by the caller.

This is an example of tighter integration of applications, at the expense of isolation for security purposes. Expect to see intruders take advantage of this development.
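
To make the kind of coupling I mean concrete, here is a minimal, purely hypothetical Python sketch of the lookup an IP PBX might hand to a CRM application; none of these names or records come from the article.

    # Purely hypothetical sketch of the IP PBX-to-CRM lookup described above:
    # the telephony layer identifies the caller and the CRM pulls up the record.
    # All names and data here are invented for illustration.

    CRM_RECORDS = {
        "+15551230100": {"customer": "Acme Corp", "contract": "C-1001"},
        "C-2002":       {"customer": "Globex",    "contract": "C-2002"},
    }

    def identify_caller(caller_id, entered_number=None):
        """Resolve a call to a CRM record by caller ID, falling back to a number
        the caller keyed in -- exactly the sort of trust in caller-supplied data
        an intruder could abuse."""
        record = CRM_RECORDS.get(caller_id)
        if record is None and entered_number:
            record = CRM_RECORDS.get(entered_number)
        return record or {"customer": "unknown", "contract": None}

    print(identify_caller("+15551230100"))            # matched by caller ID
    print(identify_caller("+15550000000", "C-2002"))  # matched by keyed-in contract number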

Speaking of future attacks, consider this excerpt from iSCSI Takes on Fibre Channel for the All-IP Enterprise by Steven Hill:

Also in the product-development stage are single-chip, reprogrammable 10GigE extensions that provide IP off-loading and support for multiple protocols and take advantage of IP-specific features added to the RDMA (Remote Direct Memory Access) protocol; iWarp for TCP/IP and iSER (iSCSI Extensions for RDMA) for iSCSI.

RDMA: now that sounds secure. Some people like to blame users for security problems. I blame application and protocol developers. Sometimes I wish the feature train would just stop for a few years to let security types figure out all of the problems these new applications and protocols are introducing.

In How to Survive Data Breach Laws by Patrick Mueller, we read this advice:

Strategic Ignorance: The appropriate level of monitoring, whether by conventional host and network intrusion-detection products or by specialized database-security products, must be determined in the risk assessment. However, the implications flowing from this decision are not obvious. For example, focused monitoring will detect more intrusions. But because most breach laws provide no explicit guidance on what type of monitoring is required, you have wide discretion in choosing the specific technology controls (though you can't go too low--limitations on lax security can be enforced by your local state attorney general or the FTC).

Industry-specific regulations such as Gramm-Leach-Bliley (covering financial organizations) impose information security requirements. Compliance with these laws also often provides an exemption from breach law liability. Preferring to remain oblivious to some types of unauthorized disclosure may sound ridiculous, but after all the costs of a notification event are added up, a rational organization will find itself calculating these trade-offs.
(emphasis added)

Good grief. I'm hoping this is a description of "facts on the ground" and not a recommendation. This sounds like part of the reason one might choose an MSSP that looks good on paper but doesn't really find intrusions.

Comments

Anonymous said…
your posts aren't as technical as they used to be, and it's sort of disappointing.

for example, i disliked your bolding of the `1 MHz of processing ... 1 Mbps of throughput'. i think it's largely misleading, and you're better than that.

i also find it odd that after you've spent so much time thinking, talking, and posting about packet capture infrastructure... your primary and only solution mentioned in this post is "matrix switches". what about ids/ips load-balancing, netflow, rspan/erspan/vacls, ios ip traffic export, distributed traffic collection with pf dup-to, et al? or at least mention of cheaper regeneration taps?

you're surely correct about all-IP troubleshooting. the benefits of using HTTP and SIP for everything also mean that troubleshooting is all but standardized. however, these protocols aren't easily laughed at. they are still largely more complex than the out-of-the-box fire-and-forget systems of the past.

btw check out this blog post about tapping 10Gbps: http://thenetworkguy.typepad.com/nau/2006/05/tapping_10_gig.html

dre,

So sorry to disappoint you! I'll send a refund for the price of subscribing to my blog right away.
Anonymous said…
send me some money you get from adsense instead.

dre,

You bet -- I'll send a check for $0 right away!

(I have no AdSense ads here.)

By the way, when you say

"i also find it odd that after you've spent so much time thinking, talking, and posting about packet capture infrastructure... you're primary and only solution mentioned in this post is "matrix switches". what about ids/ips load-balancing, netflow, rspan/erspan/vacls, ios ip traffic export, distributed traffic collection with pf dup-to, et al? or at least mention of cheaper regeneration taps?"

you are ignoring entire books, articles, and other blog posts I've devoted to this subject. Not every blog post I write is the definitive, all-encompassing essay you expect.
