Matasano Is Right About Agents

I've been exceptionally busy teaching all week at USENIX LISA, so blogging has been pushed aside. However, I literally read the Matasano Blog first, of all the Bloglines feeds I watch. This evening I read their great post Matasano Security Recommendation #001: Avoid Agents. They really mean "Minimize Agents," as noted in their summary:

Enterprise security teams should seek to minimize their exposure to endpoint agent vulnerabilities, by:

1. Minimizing the number of machines that run agent software.
2. Minimizing the number of different agents supported in the enterprise as a whole.

I absolutely agree with these statements. One of the first signs that you are dealing with a clueless security manager is the requirement to run anti-virus on every system. I shared the pain of such a foolish idea yesterday with a student who is struggling to meet such a mandate. He must deploy anti-virus on his Unix-like servers (I forget what OS -- something not common, however), and he's not allowed to use any open source solution. He's ended up with the only vendor in the world who sells a so-called "AV" solution for his platform, and it's absolutely a waste of money.

Worse, as is the case any time you add code to a platform, you are adding vulnerabilities. Write the following on your security policy management clue-bat: Running AV is not cost-free. In other words, running AV on any system may introduce vulnerabilities that were not present before. Try perusing the results of querying Secunia or OSVDB to see lists of AV products with security problems -- some of them allowing privilege escalation and compromise.

The only problem I have with the Matasano approach is the slide I posted above. Agents or Enterprise Management Applications are never "threats." They may offer vulnerabilities which can be exploited by threats, but agents themselves are not a threat.

Copyright 2006 Richard Bejtlich


Anonymous said…
Get with the times, Rich... I don't think they meant it in the literal definition of "threat". I think he's just indicating that no matter what, EMA is a bad idea. The phrase "Threat or Menace" is an old joke. It's a rhetorical question from an obviously biased source. See "Spider-Man": J.J. Jameson's headline "Spider-Man: Threat or Menace?" indicates that no matter what, he is leading the public to a negative opinion of 'ole Spidey. In The Art of Unix Programming, the chapter entitled "Threads: Threat or Menace?" is ESR leading readers to the conclusion that threads should be avoided due to an overwhelming number of headaches and little benefit. Do a Google search; the phrase has something like 1.2 million hits.

You're supposed to chuckle, not take it literally ;)
Ok, that's fine. I haven't read comic books since I was 12.

It doesn't help the situation to see threat used in that way.

I didn't post this originally, but the Matasano "Threats" section of their story is mostly about vulnerabilities, e.g., "Agent implementations are often substantially homogenous, even across operating systems, enabling uniformly effective attacks against desktops, Windows servers, and Unix servers." That's a vulnerability, not a threat.
Anonymous said…
Interesting. I don't think you meant to say what you wrote. I don't think it makes you clueless to put AV on every system. However, the term "every" is arbitrary in many cases. I always recommend putting AV on "every" Windows PC/laptop in our enterprise, because trojans/spyware are real problems. I don't run AV on my Mac/*BSD/Linux boxes, not because they are more *secure*, but because I don't feel I need to, at this time. The future may change that.

I agree about the agent problem, but I can see where it is tough to get around this, such as the need for backup software agents (all platforms), patch management agents (windows), anti-virus (windows), and...well I guess that's all I can think of right now.
Hi Joe,

A policy mandating AV on EVERY system, without regard to peculiarities of the systems in question, is a bad idea. That is a sign of cluelessness.

Deciding to run AV on every system, after weighing the risks, is not clueless. I think we agree on that.
Anonymous said…
Richard, I totally agree with you regarding security policies and mandatory AV.

Many of us have to fight the FUD pretty hard in order to not waste money on AV for Unix/Linux boxes. Management is often under the impression that AV will fully mitigate the vulnerabilities that are exploited by the malware, or will protect the node from 'hackers'.

It might also be worth mentioning that the PCI Data Security Standard (1.1) for credit card transactions makes an exception for Linux/Unix systems in its AV standard.

Also, I saw the presentation given by Tom Ptacek and Dave Goldsmith from Matasano at BlackHat Vegas this year, entitled "Do Enterprise Management Agents Dream of Electric Sheep?"

Their discussion of the lack of a security development process in many commercial agents was alarming. They also pointed out that the attack surface is huge because every component in many of these agent/server architectures has a long list of vulnerabilities. I can also confirm from my experience receiving support for certain enterprise agents that the first solution to most support issues is to disable authentication and encryption.

Only two things hampered their presentation:

- having to follow Dan Kaminsky's Black Ops 2006 (not to mention his herd of fanboys);

- not being able to disclose specific information on any of the 30+ vulnerabilities that they had exploit code and advisories prepared for because of their policy of no public disclosure without a vendor fix.

They said that some of the vulnerabilities they found in enterprise management agents were disclosed to the vendor more than one year ago with little more than a grunt in response.
Anonymous said…
As with most things computer- and security-related, it depends. I don't care for absolute statements like always, every, etc. Each case needs to be evaluated on its unique characteristics and the solution tailored to meet that uniqueness.

While I agree that Linux/Unix systems don't really need AV to protect the OS, if they are providing file sharing services to Windows systems then it is a good idea to AV scan the files in those shares. This can be done by having the Unix/Linux OS run AV and scan the local file system that is shared, or by having a Windows server that already has AV installed scan the shares. The latter causes more traffic on the network, as every file on the share must cross the network from the Unix/Linux system to the Windows system performing the scan.
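As a toy illustration of the first option (doing the scan I/O locally on the file server instead of dragging every file across the wire), here is a minimal Python sketch that walks a shared directory and flags files containing the standard EICAR antivirus test string. This is a stand-in for a real engine such as ClamAV, not an actual scanner, and all names here are my own:

```python
# Toy "scan the share locally" sketch: walk a directory on the file
# server and flag files containing the standard EICAR AV test string.
# A real deployment would invoke an actual AV engine (e.g. ClamAV);
# this only illustrates doing the reads on the server rather than
# pulling every file over SMB to a Windows scanner.
import os

EICAR = (b"X5O!P%@AP[4\\PZX54(P^)7CC)7}$"
         b"EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*")

def scan_share(root):
    """Return paths under root whose contents contain the test signature."""
    flagged = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                if EICAR in f.read():
                    flagged.append(path)
    return flagged
```

The point is only the traffic trade-off: every byte read here stays on the file server, whereas the remote-scan option ships every byte across the network first.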
Anonymous said…
Hi Richard

I'm interested in your TCP/IP Weapons School course, but I live outside the USA and can't take it. Can you
help me with which books and labs
(materials) I can purchase to cover a big part of your course?
bamm said…
So why are corporations pushing out all these agents to thousands of hosts if security experts believe the practice is a "bad idea"? I'll wager that it's a control issue. Corporations who are anal about ensuring AV lives on every desktop probably aren't doing a good job of controlling the traffic that traverses their network. It's a stop-gap measure that makes a bad situation worse.

AV is a good example. Why does AV live outside of the company mailservers and web filter/proxies? Probably because the company allows users unrestricted access to webmail, pop, imap, etc. and does not take advantage of content filters that can be placed at ingress/egress points.

People, take control of your network and the assets that operate on them. And stop implementing technology before you develop the policies and processes needed to control them!

JimmytheGeek said…
So what's the alternative? Software vendors develop agents to control and collect data on otherwise unmanageable numbers of machines. There may not be built-in mechanisms externally available to take care of this stuff. I think, for the most part, these mechanisms are present in modern OSes. Essentially, the agent and its problems are already provided, so don't add more.

Also, I think the pull method, where an agent periodically checks in, requests updates, and reports status, is more robust than the push method, where a central server issues directives. In both cases, owning the server is tantamount to control, but there are some beneficial corner cases for the pull method. In the pull method, you could still provide a malicious update on the server that the client's agent would pull down and act on in good faith, but that's a little trickier than the Stalinist push method. Also, a pull method doesn't create a new listening service that can be attacked without controlling the server.
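The pull model described above can be sketched in a few lines. This is a hypothetical toy, not cfengine or puppet code; `fetch_manifest` stands in for an outbound HTTPS request to the management server, and all names are my own invention:

```python
# Toy sketch of a pull-model configuration agent (hypothetical names,
# not taken from any real product). The agent reaches OUT to fetch a
# desired-state manifest and reconciles local state against it, so
# the managed host never has to expose a listening service.

def fetch_manifest():
    """Stand-in for an outbound HTTPS GET against the management server."""
    return {"/etc/motd": "welcome\n",
            "/etc/issue": "authorized use only\n"}

def reconcile(local_state, manifest):
    """Return the files the agent would rewrite to match the manifest."""
    changes = {}
    for path, desired in manifest.items():
        if local_state.get(path) != desired:
            changes[path] = desired
    return changes

if __name__ == "__main__":
    local = {"/etc/motd": "welcome\n", "/etc/issue": "old banner\n"}
    pending = reconcile(local, fetch_manifest())
    print(sorted(pending))  # -> ['/etc/issue']
```

A push-model server, by contrast, would need each host to run a network-facing daemon accepting directives, which is exactly the new attack surface the pull model avoids.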

I believe cfengine and puppet both follow this approach, but I haven't used either.

Of course, the element of authoritarian direct control is part of the appeal of these systems. So push will probably win over pull due to its superficial appeal.
Re: question on TCP/IP Weapons School:

I am considering writing a book using the same ideas as those in the class. I have nothing definite yet though.
JD said…
Bamm has a point. I don't want to say "but" or "however". I will say that it can be difficult to take better control of the traffic travelling on your networks and systems when you have little to no support to do so.

The argument at that point is "so do what you have to, ethically, to get buy-in", I suppose.

But I'm hearing so many horror stories from my colleagues...upper-level management who are more willing to trust the SOX auditors from the consulting firm who forgot to "paste Nessus scan data here" in their final report than their own in-house security staff...desktop support techs doubling as the company janitor, working 12 hour days on salary, six days per week, being expected to keep up on the latest security stuff because the enterprise is too cheap to hire the correct number of staff...the untrained attempting to secure things they don't understand, creating more vulnerabilities on networks and individual systems...a CIO who believed you could fix a server that had been compromised and was running an IRC channel devoted to trading illegal software by scanning it with a free copy of AVG...

How do you take control of the situation when even blocking users from being able to access, say, is seen as a fireable offense? How do you take control of your traffic when you're not even allowed to monitor it because you're "just the janitor who's also that overpaid PC guy"? How do you get buy-in when the managers you are presenting your data to aren't seeing the potential costs, due to loss of business and potential fines/jail time, listed on your PowerPoint slide, but instead are seeing "nothing seems to have gone wrong so far", because they don't WANT to envision that their resources will be compromised, if they haven't been already?
Unknown said…
The future is mobile computing, unless we get a huge backlash into thin clients (which many managers seem to talk about, but realizing that is difficult...). With mobile computing, you have laptops and devices moving outside of your tightly controlled, AV-protected network. For devices like these, especially since most of them are Windows, you need some sort of agent AV (or standalone, I guess).

Agents are not evil in themselves, you sometimes need them, and other times they're simply the best product around to do what you want.

The problem comes with using 12 agents on every server or workstation. That's a lot of cost in resources, plus it means 12 possible openings for attackers. Yes, you do want to limit the number of agents you use.

The problem with this approach is that very, very few networks have the luxury of being built up from scratch. Five years ago an AV agent may have been deployed. Three years ago a backup agent. Two years ago log correlation agents. This year, people are thumping about HIDS and WAFs. These slowly pile up as the landscape changes. And while it might be nice to remove old agents and replace them with a more unified agent system, that is usually still a daunting dollar figure. It would take a matured TCO/ROI presence in an organization to make that recommendation fly.

To compound the problem with policies, many of those policies are made by managers who are mostly looking at workstations and normal users. In that case, yes, every system should have AV protection in this day and age. (And yes, network-based as well, as I agree that security will move to the switch/network, with the exception of mobile devices.) Many policies are written that way so that you don't have non-geek people reading "most systems should have AV" and then asking that theirs be one of them. D'oh!

(Holy crap, some of these blogger captchas even *I* can't decipher!)
Anonymous said…
Another thing about agent-based management is that 'integrators' often lump a couple of big COTS products together, wrap them with a couple of GUI applications, and call it a 'system' that defies most good system engineering practices, much less security engineering ones. These 'systems' invariably have multiple, severely easy-to-exploit attributes. Just take a hard look at the products, protocols/ports, and methods of operation. Most are botnets waiting to happen. And with full control (root/sys/admin) privileges on the boxen, it's the overlay network they put in for the intruder: free of charge, fully authorized, poorly/sloppily administered, and bypassing most/all monitoring routines because of its very nature. It's the 'ghost in the machine'. So argue about password length, strength of encryption keys, security configuration guides, antivirus, HIDS, NIDS, whatever. If the 'command and control' is not cleaned up and secured, the 'units' will be taking 'orders' from someone other than their ostensible 'chain of command'.
