Security Application Instrumentation
Last year I mentioned ModSecurity in relation to a book by its author. As mentioned on the project Web site, "ModSecurity is an open source web application firewall that runs as an Apache module." In a sense Apache is both defending itself and reporting on attacks against itself. I consider these features to be forms of security application instrumentation. In a related development, today I learned about PHPIDS:
PHPIDS (PHP-Intrusion Detection System) is a simple to use, well structured, fast and state-of-the-art security layer for your PHP based web application. The IDS neither strips, sanitizes nor filters any malicious input, it simply recognizes when an attacker tries to break your site and reacts in exactly the way you want it to. Based on a set of approved and heavily tested filter rules any attack is given a numerical impact rating which makes it easy to decide what kind of action should follow the hacking attempt. This could range from simple logging to sending out an emergency mail to the development team, displaying a warning message for the attacker or even ending the user’s session.
This sort of functionality needs to be built into every application. It is not sufficient (reasons to follow) but it is required.
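To make the idea concrete, here is a rough sketch in Python of impact-rated detection in the PHPIDS style. This is not PHPIDS's actual API; the rules, weights, and response thresholds are invented for illustration.

import re

# Hypothetical detection rules: each pairs a pattern with an impact weight.
RULES = [
    (re.compile(r"<script\b", re.IGNORECASE), 4),       # naive XSS probe
    (re.compile(r"union\s+select", re.IGNORECASE), 5),  # naive SQL injection probe
    (re.compile(r"\.\./"), 3),                          # path traversal attempt
]

def impact(value):
    """Sum the weights of every rule the input matches; never alter the input."""
    return sum(weight for pattern, weight in RULES if pattern.search(value))

def react(value):
    """Map the impact score to a response the deployer chooses."""
    score = impact(value)
    if score >= 8:
        return "end session and email the development team"
    if score >= 4:
        return "log and warn"
    return "allow"

print(react("id=1 UNION SELECT password FROM users"))  # -> log and warn

Note that, like PHPIDS, this approach detects and scores rather than stripping or sanitizing; the response policy is left to the deployer.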
We used to (and still do) talk about hosts defending themselves. I agree that hosts should be able to defend themselves, but that does not mean we should abandon network-level defenses (as the misguided Jericho Forum advocates).
Today we need to talk about applications defending themselves. When they are under attack they need to tell us, and when they are abused, subverted, or breached they would ideally also tell us.
In the future (now would be nice, but not practical yet) we'll need data to defend itself. That's a nice idea but the implementation isn't ready yet (or even fully conceptualized, I would argue).
Returning to applications: why is it necessary for an application to detect and prevent attacks against itself? Increasingly it is too difficult for third parties (think network infrastructure) to understand what applications are doing. If it's tough for inspection and prevention systems, it's even tougher for humans. The best people to understand what's happening to an application are (presumably) the people who wrote it. (If an application's creator can't even understand what he or she developed, that's a sign not to deploy it!) Developers must share that knowledge via mechanisms that report on the state of the application, but in a security-minded manner that goes beyond the mainly performance- and fault-oriented monitoring of today.
(Remember monitoring usually develops first for performance, then fault, then security, and finally compliance.)
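What might security-minded instrumentation look like inside an application? Here is a minimal Python sketch; the logger name, event fields, and escalation threshold are my assumptions, not an established schema.

import logging

logging.basicConfig(format="%(asctime)s %(name)s %(levelname)s %(message)s",
                    level=logging.INFO)
security_log = logging.getLogger("myapp.security")

def check_credentials(username, password):
    # Stand-in for a real credential check.
    return False

def login(username, password, attempts):
    ok = check_credentials(username, password)
    if not ok:
        # A security event, not merely a fault: the application itself
        # reports that it may be under attack.
        security_log.warning("auth_failure user=%s attempts=%d",
                             username, attempts)
        if attempts >= 5:
            # Hypothetical threshold for escalation.
            security_log.error("possible_brute_force user=%s", username)
    return ok

login("alice", "wrong-password", attempts=5)

The developer, who knows what "five failed logins" means for this application, encodes that knowledge so outside analysts don't have to guess.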
So why isn't security application instrumentation sufficient? The problem is that one should not place one's trust entirely in the hands of the target. One of Marcus Ranum's best pieces of wisdom for me was the distinction between "trusted" and "trustworthy." Just because you trust an application doesn't make it worthy of that trust. Just because you have no alternative but to "trust" an application doesn't make it trustworthy either. Trustworthy systems behave in the manner you expect and can be validated by systems outside the influence of the target.
For most of my career my mechanism for determining whether systems are trustworthy has been network sensors. That's why they sit at the top of my TaoSecurity Enterprise Trust Pyramid. In a host- and application-centric world I might consider a second system with one-way direct memory access to a target to be the most trusted source of information on the target, followed by a host reporting its own memory, then other mechanisms including application state, logs, etc.
You can't entirely trust the target because it can be compromised and told to lie. Of course all elements of my trust pyramid (or any trust pyramid) can be compromised, but the degree of difficulty should increase as isolation from the target increases.
I'll end this post with a plea to developers. Right now you're being taught (hopefully) "secure coding." I would like to see the next innovation be security application instrumentation, where you design your application to produce not only performance and fault logging, but also security and compliance logging. Ideally the application will be self-defending as well, perhaps reducing its vulnerability exposure as attacks increase (while remaining aware of DoS conditions, of course).
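A self-defending application could look something like the sliding-window sketch below. The thresholds and feature names are illustrative assumptions; the point is that optional, higher-exposure features shed first while core functions stay available, so the defense doesn't become a self-inflicted DoS.

import time
from collections import deque

WINDOW_SECONDS = 60
attack_events = deque()  # timestamps of recently detected attacks

def record_attack():
    attack_events.append(time.time())

def attack_rate():
    """Count detected attacks within the sliding window."""
    cutoff = time.time() - WINDOW_SECONDS
    while attack_events and attack_events[0] < cutoff:
        attack_events.popleft()
    return len(attack_events)

def feature_enabled(feature):
    """Shed optional, riskier features first as attack pressure rises."""
    rate = attack_rate()
    if feature == "file_upload":   # highest exposure, shed first
        return rate < 10
    if feature == "search":        # moderate exposure
        return rate < 50
    return True                    # core features stay available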
Eventually we should all be wearing the LogLogic banner at right, because security will be more about analyzing and acting on instrumented applications and data and less about inspecting a security product's interpretation of attacks.
I am not trying to revoke my Response to Bruce Schneier Wired Story. Security application instrumentation (SAI) doesn't mean the end of the security industry. I saw in this story that Bruce Schneier is still on another planet:
In a lunch presentation, security expert Bruce Schneier of BT Counterpane also predicted a sea change. "Long term, I don't really see a need for a separate security market," he said, calling for greater integration of security technology into everyday hardware, software, and services.
"You don't buy a car and then go buy anti-lock brakes from the company that developed them," Schneier quipped. "The safety features are bought and built in by the company that makes the car." Major companies such as Microsoft and Cisco are paving the way for this approach by building more and more security features directly into their products, he noted.
"That doesn't mean that security becomes less important or that there won't be innovation," Schneier said. "But in 10 years, I don't think we'll be going to conferences like these, that focus only on security. Those issues will be handled as part of broader discussions of business and technology."
Schneier needs to study more history. I'll be at Black Hat or its equivalent in ten years, and he'll probably be there as another keynote!
Pete Lindstrom reminds me of my post arguing that car analogies fail unless the security concern is caused by an intelligent adversary. Inertia is not an intelligent adversary, and it enjoys none of the advantages a human threat does.
One final note on adversaries: first they DoS'd us (violating availability). Now they're stealing from us (violating confidentiality). When will they start modifying our data in ways that benefit them in financial and other ways (violating integrity)? We will not be able to stop all of it and we will need our applications and data to help tell us what is happening.
Incidentally, since I'm on the subject of logs I wanted to briefly say why I usually disagree with people who use the term "Tcpdump logs" or "Pcap logs." If you're storing full content network traffic, you are not "logging." You are collecting the actual data that was transferred on the wire. That is collection, not logging. If I copy and store every fax that's sent to a department, I'm not logging the faxes -- I am collecting them. A log would say:
1819 Wed 13 Jun 07 FAX RMB to ARB 3 pgs

or similar. In this sense session data could be considered logging, since sessions are records of conversations and not the actual conversations.
That said, logs are great because a single good log message can be more informative than a ton of content. For example, I would much rather read a log that says file X was transferred via SMB from user RMB to user ARB, etc., than try to interpret the SMB traffic manually.
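To illustrate the distinction, the toy Python sketch below turns already-collected connection records into session-style log lines. The record fields are my assumptions; the output is a record of the conversation, not the conversation itself.

from dataclasses import dataclass

@dataclass
class Session:
    start: str       # when the conversation began
    proto: str       # application protocol, e.g. SMB
    src: str         # sending party
    dst: str         # receiving party
    bytes_sent: int  # volume transferred

def session_log_line(s):
    # One informative line per conversation; the content itself is not stored.
    return f"{s.start} {s.proto} {s.src} -> {s.dst} {s.bytes_sent}B"

print(session_log_line(Session("2007-06-13 18:19", "SMB", "RMB", "ARB", 4096)))
# -> 2007-06-13 18:19 SMB RMB -> ARB 4096B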
Comments
http://1raindrop.typepad.com/1_raindrop/2007/06/building_coordi.html
"Security Design Patterns" by Blakley and Heath is an excellent (and free) guide to a number of important patterns for building security into your system with examples from Java, J2EE and CORBA systems
http://www.opengroup.org/bookstore/catalog/g031.htm
There are a few other APIPS products, as I mentioned in Gunnar's blog post follow-up above.
On the other hand, edge, endpoint, or network security offers little in the way of information-centric security.
De-perimeterisation would require a layered defense that starts at the host and works out to the network to determine where sensitive data may be released. The last line of defense when constructing trust zones or channels becomes the firewall, which prevents the release of sensitive data out of the enterprise by working in conjunction with the trusted host servers to enforce policy and trust statements.
Thus, de-perimeterisation would seem to require an inside-out approach as opposed to outside-in that exists today.
Ironically, the Open Group has categorically rejected the possible use of scalable multi-level security (which is what we do), based on traditional barriers of cost and complexity, even though it would enable the goals of their organization to be realized.
Greetings from the Philippine Instrumentation and Control Society!
I think one of the best articles I've read recently is about logging:
http://www.securityfocus.com/infocus/1888
If developers used a framework such as the one described in the article, we might actually start seeing logs that are useful from a security perspective.