Editor’s Note: This is the second part of a four-part series featuring an in-depth overview of Infosec’s (Information Security) Unified Security Metrics Program. In this second installment, we discuss where to begin measuring.
H. James Harrington, noted author of Business Process Improvement, once said, “Measurement is the first step that leads to control and eventually to improvement. If you can’t measure something, you can’t understand it. If you can’t understand it, you can’t control it. If you can’t control it, you can’t improve it.” It’s a good piece of wisdom, but where do you start? How do you mine data through metrics to gain greater insight into your organization’s security posture, while simultaneously using them as a vehicle to protect your most critical assets?
For Infosec’s Unified Security Metrics (USM) team, there are plenty of statistical data sources available to mine, particularly IT system logs and dashboards. In fact, early research conducted by the team identified 30 different types of meaningful data to track. Comprehensive, yes, but not feasible or sustainable to implement long-term across Cisco. The USM team’s solution centered on the primary outcomes they were trying to achieve, namely driving security process improvement behaviors and actions within IT. Subsequently, the list was narrowed down to five key measurements:
- Stack compliance: measures vulnerabilities found across the technology stack (e.g., network devices, operating systems, application servers, middleware)
- Anti-malware compliance: quantifies whether malware protection software has been properly installed and is up-to-date
- Baseline application vulnerability assessment: computes whether automated vulnerability scans have been performed in accordance with Cisco policy and whether any open security weaknesses remain post-scan
- Deep application vulnerability assessment: computes whether penetration testing has been performed on our most business-critical applications in accordance with Cisco policy and whether any open security weaknesses remain post-testing
- Design exceptions: measures the total number of open security exceptions, based on deviations from established security standards and best practices
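To make these five measurements concrete, here is a minimal Python sketch of what one measurement record per IT service might look like. It is purely illustrative: the identifiers, field names, and record shape are assumptions made for this post, not the USM team’s actual schema.

```python
from dataclasses import dataclass
from enum import Enum

class Measurement(Enum):
    """The five key measurements above (identifiers are hypothetical)."""
    STACK_COMPLIANCE = "stack_compliance"
    ANTI_MALWARE_COMPLIANCE = "anti_malware_compliance"
    BASELINE_APP_VULN_ASSESSMENT = "baseline_app_vuln_assessment"
    DEEP_APP_VULN_ASSESSMENT = "deep_app_vuln_assessment"
    DESIGN_EXCEPTIONS = "design_exceptions"

@dataclass
class MeasurementResult:
    service: str              # the IT service being measured
    measurement: Measurement  # which of the five measurements this result is for
    compliant: bool           # did the service meet Cisco policy on this check?
    open_findings: int        # open vulnerabilities or exceptions left afterward
```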
Numerous benefits came from using these measurements. All were readily available, provided good-quality data, and could be easily collected and correlated to existing IT service delivery success factors. A great starting point, yet how do you translate these measurements into meaningful security metrics? For USM, the data output from these baseline measurements was used to calculate two critical security metrics: 1) Vulnerability, which reveals how many vulnerabilities exist in a given service, and how many are infrastructure versus application related; and 2) On-time Closure, which answers the question “are vulnerabilities being closed within the team’s Service Level Agreement (SLA)?”
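As a sketch of how those two metrics could be derived, assume each vulnerability is tracked as a record with a category, open/close dates, and an SLA in days. All field names, the category labels, and the 30-day default below are assumptions for illustration, not the USM team’s published implementation:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Vulnerability:
    service: str
    category: str                  # "infrastructure" or "application" (assumed labels)
    opened: date
    closed: Optional[date] = None  # None means the vulnerability is still open
    sla_days: int = 30             # allowed days to close; real SLAs vary by team

def vulnerability_metric(vulns: list, service: str) -> dict:
    """Metric 1: how many vulnerabilities exist in a service, and how many
    are infrastructure versus application related."""
    open_vulns = [v for v in vulns if v.service == service and v.closed is None]
    return {
        "total_open": len(open_vulns),
        "infrastructure": sum(v.category == "infrastructure" for v in open_vulns),
        "application": sum(v.category == "application" for v in open_vulns),
    }

def on_time_closure_metric(vulns: list, service: str) -> Optional[float]:
    """Metric 2: of the vulnerabilities that have been closed, what fraction
    were closed within their SLA?"""
    closed = [v for v in vulns if v.service == service and v.closed is not None]
    if not closed:
        return None  # nothing closed yet, so no closure rate to report
    on_time = sum((v.closed - v.opened).days <= v.sla_days for v in closed)
    return on_time / len(closed)
```

On a model like this, the on-time closure figures discussed next are simply the second function’s output reported as a percentage each quarter.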
During the early rollout phase of this program, IT service owners were not fully convinced these security metrics would yield quantifiable information. However, when USM discovered that only 15% of vulnerabilities were actually closed on time, leaving Cisco exposed, IT service owners stepped up and raised this percentage to 85% within a year.
Twelve months later, despite these early program “growing pains,” IT service owners have embraced the value of these metrics and routinely use them in executive reviews. Before Infosec launched its USM program, IT service owners and executives had very limited visibility into their security posture and often falsely assumed their IT enterprise was uncompromised and secure. Today, with USM, they have greater confidence, insight, and understanding of what’s happening within the enterprise, which enables them to quickly diagnose and remediate security issues now and in the future.
Next installment: What makes USM effective?
Hi, Sujata –
Another good one. I am curious whether these metrics are your “runs scored” metrics. While I think they are extremely common, it seems to me they are more like batting average than anything.
Have you considered whether these numbers “increase” your “runs scored” for your program?
Regards,
Pete
Thanks, Pete, for your reply. Security is a full-season marathon, not just winning one game. The Vulnerability and On-time Closure security metrics, or batting average as you characterize them, are part of your overall security toolbox. We measure and report them every quarter for the life of the service. As we know, threats change constantly, and any product or software is going to have vulnerabilities. Equipped with tools like USM (and others), teams can quickly diagnose and remediate issues.
Basics matter in security, and good hygiene pays off. Ultimately, each vulnerability handled in a timely manner is a run scored!
Cheers,
Sujata
Xsan
I think we need a new approach to security: real-time protection against new threats, because hackers are evolving rapidly. With the new generation of CPUs, they have enough computing power to crack any password in a short period of time.