
Editor’s Note: This is the second part of a four-part series featuring an in-depth overview of Infosec’s (Information Security) Unified Security Metrics Program. In this second installment, we discuss where to begin measuring.

H. James Harrington, noted author of Business Process Improvement, once said, “Measurement is the first step that leads to control and eventually to improvement. If you can’t measure something, you can’t understand it. If you can’t understand it, you can’t control it. If you can’t control it, you can’t improve it.” Sound wisdom, but where do you start? How do you turn raw data into metrics that provide greater insight into your organization’s security posture while also serving as a vehicle to protect your most critical assets?

For Infosec’s Unified Security Metrics (USM) team, there are plenty of statistical data sources available to mine, particularly IT system logs and dashboards. In fact, early research conducted by the team identified 30 different types of meaningful data to track. Comprehensive, yes, but not feasible or sustainable to implement long-term across Cisco. The USM team’s solution centered on the primary outcomes it was trying to achieve, namely driving security process improvement behaviors and actions within IT. The list was subsequently narrowed down to five key measurements (a sketch of how they might be tallied follows the list):

  • Stack compliance: measures vulnerabilities found in the TCP/IP stack (i.e., network devices, operating systems, application servers, middleware, etc.)
  • Anti-malware compliance: quantifies whether malware protection software has been properly installed and is up to date
  • Baseline application vulnerability assessment: tracks whether automated vulnerability scans have been performed in accordance with Cisco policy and whether any security weaknesses found by those scans remain open
  • Deep application vulnerability assessment: tracks whether penetration testing has been performed on our most business-critical applications in accordance with Cisco policy and whether any weaknesses found in testing remain open
  • Design exceptions: measures the total number of open security exceptions, based on deviations from established security standards and best practices
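
To make the five measurements concrete, here is a minimal sketch of how per-asset compliance rates might be tallied. The asset names, field names, and values are illustrative assumptions, not Cisco’s actual data model.

```python
# Hypothetical per-asset records: which of the five USM measurements
# each asset currently passes. All fields and data are illustrative.
assets = [
    {"name": "web-frontend", "stack_compliant": True,
     "anti_malware_current": True, "baseline_scan_on_schedule": True,
     "pen_test_on_schedule": False, "open_design_exceptions": 0},
    {"name": "billing-db", "stack_compliant": False,
     "anti_malware_current": True, "baseline_scan_on_schedule": False,
     "pen_test_on_schedule": True, "open_design_exceptions": 2},
]

def compliance_rate(records, flag):
    """Percentage of assets that pass a given pass/fail measurement."""
    return 100.0 * sum(1 for r in records if r[flag]) / len(records)

for flag in ("stack_compliant", "anti_malware_current",
             "baseline_scan_on_schedule", "pen_test_on_schedule"):
    print(f"{flag}: {compliance_rate(assets, flag):.0f}%")

# Design exceptions are tracked as a raw count of open deviations,
# not as a pass/fail rate.
print("open design exceptions:",
      sum(r["open_design_exceptions"] for r in assets))
```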

Numerous benefits came from using these measurements. All were readily available, provided good-quality data, and could be easily collected and correlated to existing IT service delivery success factors. A great starting point, but how do you translate these measurements into meaningful security metrics? For USM, the data output from these baseline measurements was used to calculate two critical security metrics: 1) Vulnerability, which reveals how many vulnerabilities exist in a given service and how many are infrastructure versus application related; and 2) On-time Closure, which answers the question “are vulnerabilities being closed within the team’s Service Level Agreement (SLA)?”
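
As an illustration of how these two metrics could be derived from the baseline data, here is a small sketch. The record fields, 30-day SLA window, and sample dates are assumptions made for the example, not the USM implementation.

```python
from datetime import date

# Hypothetical vulnerability records for one IT service.
vulns = [
    {"id": "V-101", "layer": "infrastructure", "opened": date(2014, 1, 6),
     "closed": date(2014, 1, 20), "sla_days": 30},
    {"id": "V-102", "layer": "application",    "opened": date(2014, 1, 10),
     "closed": date(2014, 3, 1),  "sla_days": 30},
    {"id": "V-103", "layer": "application",    "opened": date(2014, 2, 2),
     "closed": None,              "sla_days": 30},  # still open
]

# Metric 1: Vulnerability -- how many exist, and the
# infrastructure-versus-application split.
open_vulns = [v for v in vulns if v["closed"] is None]
by_layer = {}
for v in vulns:
    by_layer[v["layer"]] = by_layer.get(v["layer"], 0) + 1
print(f"total: {len(vulns)}, open: {len(open_vulns)}, by layer: {by_layer}")

# Metric 2: On-time Closure -- share of closed vulnerabilities that
# were remediated within their SLA window.
closed = [v for v in vulns if v["closed"] is not None]
on_time = [v for v in closed
           if (v["closed"] - v["opened"]).days <= v["sla_days"]]
rate = 100.0 * len(on_time) / len(closed) if closed else 0.0
print(f"on-time closure: {rate:.0f}%")
```

In this toy data set, one of the two closed vulnerabilities beat its 30-day SLA, so the on-time closure rate prints as 50%.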

During the early rollout phase of this program, IT service owners were not fully convinced these security metrics would yield quantifiable information. However, when USM discovered that only 15% of vulnerabilities were actually being closed on time, leaving Cisco exposed, IT service owners stepped up and raised this percentage to 85% within a year.

Twelve months later, despite those early program “growing pains,” IT service owners have embraced the value of these metrics and routinely use them for executive reviews. Before InfoSec launched its USM program, IT service owners and executives had very limited visibility into their security posture and often falsely assumed their IT enterprise was uncompromised and secure. Today, with USM, they have greater confidence and a clearer understanding of what’s happening within the enterprise, enabling them to quickly diagnose and remediate security issues now and in the future.

Next installment: What makes USM effective?