Taking the Measure of a National Cyber Metrics Program
Cyber metrics appear to be having a moment.
Earlier this year, the Department of Homeland Security stood up its “Cyber Safety Review Board,” a multi-agency, public-private entity with a mandate to evaluate major cybersecurity incidents and offer the government advice on how to proceed. More recently, Congressman Jim Langevin (D-R.I.) introduced into the draft National Defense Authorization Act a proposal to create a “Bureau of Cybersecurity Statistics” (BCS)—an office mandated to collect and analyze key cyber statistical data.
Neither of these initiatives is starting completely from scratch. For years, a number of agencies and entities have been tracking national-level cybersecurity statistics within their own mandates—perhaps the most relevant of which is the work of the Office of the Chief Economist and the National Risk Management Center in the Cybersecurity and Infrastructure Security Agency (CISA), which is part of the Department of Homeland Security (DHS). Even beyond the government’s own efforts, there are private initiatives to further metrics development in the United States through organizations such as CyberGreen and the Global Cyber Alliance. (For more, please watch our recent panel discussion, “Measuring Success in Cybersecurity,” with leaders from the Office of the National Cyber Director, CISA, CyberGreen and GCA.)
These ongoing initiatives—together with the newly stood-up Cyber Safety Review Board and the potential Bureau of Cybersecurity Statistics—demonstrate a profound shift in the field. For years, decisions about cybersecurity, particularly at the national and federal policy level, have been treated as an “art” rather than a “science”—guided by individual judgment rather than by quantifiable measures of risk, failure and improvement. Today, the increasing prominence of national cyber metrics initiatives, and the further building out of established, effective measurements and scales, has the potential to offer policymakers a new “tool” in their decision-making toolbox.
It’s an exciting moment. For years, there was a general sense from government officials that more rules plus more funding inherently made for better cybersecurity. But now, a more sophisticated crop of policymakers, CEOs, CISOs and experts across the field want to know at a more specific level what works and why—and how to make it better.
But the ongoing success of this push for cyber metrics will rely, as most things do, not on the strength of a good idea but on the quality of its implementation. At a high level, several major guidelines should shape the design of a cyber statistics bureau—and indeed, any federal or national-level metrics program.
First, avoid over-promising. Not everything in cyber is quantifiable. And quantification itself can usher in bias, error and miscalculation of its own. Both proactive and retroactive measurements of cybersecurity can be useful to policymakers and other decision-makers, but they are not a full solution. Some elements of cybersecurity decision-making will remain an art form, and we can embrace that.
Second, focus on metrics that will change outcomes. Just because something can be measured does not mean that it is necessarily useful to do so. More information is not inherently better. Counting the number of cyberattacks against a given sector may have utility, but a much more effective measurement would likely be the average length of an outage across a sector following a cyber incident. The latter tracks the actual negative impact, and is something that people can work to mitigate in the future.
Third, proactively collaborate with industry on data collection. There is a substantial amount of skepticism, wariness and outright mistrust from the private sector on national-level metrics work. In part, there’s concern about the U.S. government’s interest in collecting data from companies, especially if that collection is not voluntary or on an opt-in basis. Questions of liability, anonymization of data, how data will be shared and how proprietary information will be protected may all prove obstacles to an effective system.
Fourth, revitalize information dissemination and sharing across the U.S. government. Earlier this year, an argument between CISA and the Department of Justice and FBI publicly broke out over how information would be shared within the government. In a similar vein, there has long been criticism from industry that the government does not share information across its departments as it should. Without effective information sharing—including the integration of metrics and measurements into policy-planning discussions more broadly—the impact of a metrics program will be blunted. Similarly, if information collected by the government is not effectively integrated into the policymaking process to foster better decisions, industry will lose faith in the efficacy of the process.
Conclusion: The hard work of private experts and researchers, ongoing efforts by quantitatively focused cybersecurity officials in the government, and executive and legislative branch willingness to invest in new initiatives make this a key moment for metrics. Taking full advantage of this opportunity will rely on the “hows” of the design and implementation process just as much as on any substantive decisions about what to measure.