- Michael Sutton from SPI Dynamics said that the idea of assembling a team of security people to address enterprise vulnerabilities worked (more or less) for network and infrastructure security because the team could (more or less) introduce elements or alter the environment sufficiently to improve their security posture. The same approach is not working and will not work for application security because addressing the problem requires altering code. Because code is owned by developers, the security team can't directly change it. This is an important point for those who think they can just turn their CSIRT loose on the software security problem in the same way they attacked network security.
Michael also said no software is trustworthy until tested. (He actually said "trusted." There's a difference. Anyone can "trust" software. The question is whether it is worthy of trust, i.e., "trustworthy.")
- Alan Paller made a few comments. He said there are 1.5 million programmers in the world, so training all of them probably isn't an option. He said SANS is working with TippingPoint to create a "Programmer's @Risk" newsletter like the existing vulnerability @Risk newsletter. Alan repeated a recommendation made by John Pescatore that organizations should run security tests against software during bidding as well as at acceptance.
Alan noted that security testing should serve as both a "building permit" (before development) and a second "occupancy permit" (before deployment in the enterprise). Alan also said PCI is the only worthwhile security standard: others just require writing about security, while PCI requires a modicum of doing security. (Mark Curphey disagrees!)
- Jim Routh of DTCC said it's important for developers to recognize that security flaws are software defects, and not the security team's problem! His team of 450 in-house developers uses three stages of testing: 1) white box for developers; 2) black box for integrators; and 3) third party for deployment.
- Mike Willburn from the FBI said FISMA C&A results in "well-documented" systems that score well on report cards but are "full of holes." Bravo.
- Andrew Wing from Teranet said he doesn't let an in-house project progress to user acceptance testing unless it achieves a minimum score from an automated software security assessment tool.
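A gate like Andrew's could be wired into a build pipeline as a simple check on the assessment tool's output. This is only a sketch under assumed conventions: he didn't name the tool, its score scale, or the cutoff, so the JSON report shape, the `gate` function, and the 80-point minimum below are all hypothetical.

```python
import json

# Hypothetical minimum: the actual tool, score scale, and threshold
# were not named in the talk; 80 is purely illustrative.
MINIMUM_SCORE = 80

def gate(report_text: str, minimum: int = MINIMUM_SCORE) -> bool:
    """Return True only if the assessment report's score meets the minimum.

    Assumes the (hypothetical) assessment tool emits JSON such as
    {"score": 85}.
    """
    report = json.loads(report_text)
    return report["score"] >= minimum

# Only builds that clear the bar advance to user acceptance testing.
print(gate('{"score": 85}'))  # True  -> promote the build
print(gate('{"score": 50}'))  # False -> send it back to development
```

The point of the design is that promotion is mechanical: no build reaches user acceptance testing on a human's say-so alone.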
- Jack Danahy from Ounce Labs stressed the importance of contract language when procuring software. The OWASP Legal Project also offers sample language. Alan stressed the need to build security into contracts, rather than relying on the vague concept of "negligence" when security isn't explicitly included in a contract.
- Michael Weider from Watchfire said he fears user-supplied content will be the next exploitation vector. I shuddered at the horror of MySpace and the like.
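The standard defense against user-supplied content becoming an exploitation vector (as in the MySpace-era script-injection worms) is to encode untrusted input before rendering it. A minimal Python sketch, assuming user text is destined for an HTML page; the `render_comment` helper is hypothetical, and Python's standard-library `html.escape` stands in for whatever encoding a real site would use:

```python
from html import escape

def render_comment(user_text: str) -> str:
    """Render untrusted user content as inert HTML by escaping the
    characters (&, <, >, quotes) that could introduce markup."""
    return "<p>" + escape(user_text, quote=True) + "</p>"

# A script-injection attempt comes out as harmless visible text:
print(render_comment('<script>alert("owned")</script>'))
# → <p>&lt;script&gt;alert(&quot;owned&quot;)&lt;/script&gt;</p>
```

Encoding at output time, rather than trying to blacklist "dangerous" input, is what keeps user content from executing in other users' browsers.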
- Steve Christey mentioned NIST's SAMATE (Software Assurance Metrics And Tool Evaluation) project.
That's what I can document given the time I have. Thanks to SANS for their leadership in this endeavor.