Telling a Security Story with Charts
The Economist presents these charts for the following reason:
In the spring of 2011 the Pew Global Attitudes Survey asked thousands of people worldwide which country they thought was the leading economic power. Half of the Chinese polled reckoned that America remains number one, twice as many as said “China”. Americans are no longer sure: 43% of US respondents answered “China”; only 38% thought America was still the top dog. The answer depends on which measure you pick. (emphasis added)
The reason I like these charts is that they remind me of how many security practitioners think about "being secure." Managers often ask security staff "Are we secure?" The truth is that there is no single number, so anyone selling you a "risk" number is wasting your time (and probably your money). It would be much more useful to display a set of charts like the Economist's. The security staff could choose a dozen or more simple metrics to paint a picture, and let the viewer interpret the answer using his or her own emphasis and bias.
Another reason I like the Economist chart is that the magazine built it on stated assumptions about future activity, listed in the article. If you disagree with those assumptions, you can visit the second link I posted to devise your own charts. Although not shown here, it would be even more useful to render these charts as a time series, with snapshots for January, then February, and so on. This "small multiples" approach (promoted by Tufte) capitalizes on the skill of the human eye and brain at spotting differences in similar objects.
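To make the idea concrete, here is a minimal sketch of a small-multiples layout in Python with matplotlib, assuming hypothetical monthly snapshots of three invented, normalized metrics. The shared scale across panels is what lets the eye compare one month to the next:

import matplotlib.pyplot as plt

# Hypothetical monthly snapshots of a few security metrics, normalized
# to a 0-100 scale; real values would come from your own records.
snapshots = {
    "Jan": {"Incidents": 40, "Containment": 55, "Patch lag": 70},
    "Feb": {"Incidents": 35, "Containment": 60, "Patch lag": 65},
    "Mar": {"Incidents": 25, "Containment": 70, "Patch lag": 50},
}

# One panel per month, same y-axis everywhere, so the panels are comparable.
fig, axes = plt.subplots(1, len(snapshots), figsize=(10, 3), sharey=True)
for ax, (month, metrics) in zip(axes, snapshots.items()):
    ax.bar(list(metrics), list(metrics.values()))
    ax.set_title(month)
    ax.tick_params(axis="x", labelrotation=45)
plt.tight_layout()
plt.savefig("small_multiples.png")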
If you had to pick a dozen or so indicators of security for a chart, what would you depict? The two I consider non-negotiable are 1) incidents per unit time and 2) time to containment for incidents.
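Both of those indicators fall out of a simple incident log. A minimal Python sketch, assuming hypothetical (detected, contained) timestamp pairs:

from collections import Counter
from datetime import datetime

# Hypothetical incident records: (detected, contained) timestamps.
incidents = [
    (datetime(2011, 1, 3, 9, 0),   datetime(2011, 1, 4, 17, 0)),
    (datetime(2011, 1, 20, 14, 0), datetime(2011, 1, 21, 10, 0)),
    (datetime(2011, 2, 7, 8, 30),  datetime(2011, 2, 7, 16, 0)),
]

# Indicator 1: incidents per unit time (here, per month).
per_month = Counter(detected.strftime("%Y-%m") for detected, _ in incidents)

# Indicator 2: mean time to containment, in hours.
hours = [(contained - detected).total_seconds() / 3600
         for detected, contained in incidents]
mean_ttc = sum(hours) / len(hours)

print(dict(per_month))               # e.g. {'2011-01': 2, '2011-02': 1}
print(round(mean_ttc, 1), "hours")   # mean detection-to-containment time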
Comments
Although you kind of dismissed the management question, "Are we secure?", picking a dozen or so indicators of security seems to serve exactly that question. But I understand what you mean, and ultimately "Are we secure?" is what a high-level manager needs to know. Analysis merely serves the information requirements of the organization's leadership (and helps leadership understand what it needs to know).
In addition to observed threat activity, reflected in your selections (and defining "incident" as "successful unauthorized access to proprietary information systems" or something similar), the measure of an organization's security depends on how accessible its information and systems are to unauthorized parties. So, I would include measures that reflect basic security requirements: patch deployment speed, how far secure computing awareness has penetrated the institution, the network presence of legacy and high-risk systems and devices, etc.
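As a sketch of one of those measures, patch deployment speed reduces to the lag between a patch's release and its deployment. The records below are hypothetical:

from datetime import date

# Hypothetical patch records: (released, deployed) dates.
patches = [
    (date(2011, 3, 8),  date(2011, 3, 12)),
    (date(2011, 3, 8),  date(2011, 3, 25)),
    (date(2011, 4, 12), date(2011, 4, 15)),
]

lags = [(deployed - released).days for released, deployed in patches]
print(sum(lags) / len(lags), "days mean patch lag")
print(max(lags), "days worst-case patch lag")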
Still, any assessment must include the caveat that as good as the organization's security practices and record may appear on paper, there is always the chance that an intruder will get in and do damage or steal valuable information. A positive security assessment should not permit decision makers to relax.
However, if there is a definition of "secure," it's probably found in the insurance world, and if so it almost certainly includes cost and time as factors, at the very least.
For that reason, I would suggest that important indicators must help provide estimates for cost and time. Once we have those, estimates of efficiency may be appropriate additions.
Thus, the only non-negotiable indicator I can imagine would be cost per incident over some suitable period of time.
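A minimal Python sketch of that indicator, assuming hypothetical (period, cost) pairs where each incident's cost bundles response effort, downtime, recovery, and so on:

from collections import defaultdict

# Hypothetical per-incident costs, tagged by quarter.
incident_costs = [
    ("2011Q1", 12000), ("2011Q1", 4500), ("2011Q2", 30000),
]

totals = defaultdict(float)
counts = defaultdict(int)
for quarter, cost in incident_costs:
    totals[quarter] += cost
    counts[quarter] += 1

for quarter in sorted(totals):
    print(quarter, totals[quarter] / counts[quarter], "mean cost per incident")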
Personally, I think asking "how secure are we?" is a bad question. In terms of security, how do you define "secure"? There are quantifiable metrics that can tell the story of "How have we fared so far?" This brings me back to the old BATC days, where we had those god-awful reports to produce, but they supported several different stories, such as "How visible are we?", "How many external probes did we encounter?", "How many incidents were due to internal policy violations?", and "How many external intrusion attempts were successful?" With these metrics, you could easily tell good stories, such as "the security improvements we implemented have reduced external intrusions" and "our corporate policy changes and enforcement have reduced policy violations on the network by this much."

Answering "how secure are we?" from those metrics is a bad idea, however. You can say "oh, we're very secure, because we have reduced external intrusions to almost nothing, haven't had a policy violation in over a year, and we aren't getting probed as often thanks to new firewall rules," and then BAM, you get hit by some 0-day exploit, ruining all credibility in that statement with management.