Late last year, Congress came within inches of including a cyber incident reporting requirement in the must-pass annual national defense spending bill (2022 NDAA). This year, they’re trying again: just last week, Sens. Gary Peters (D-Mich.) and Rob Portman (R-Ohio) reintroduced the legislation in a new package of cybersecurity upgrades. As for the House, Reps. John Katko (R-N.Y.) and Yvette Clarke (D-N.Y.) verbally committed last month to pushing their version forward.

As we may be on the brink of seeing this bill passed, it’s worth taking a moment to clarify exactly what cyber incident reporting is, why we need it codified into law, what’s included in draft legislation and what’s not, and how we expect it to change the security landscape going forward.

This is part one of a two-part series on cyber incident reporting and focuses on the fundamentals of what incident reporting is and the current reporting landscape. For an analysis of the legislation currently being discussed in Congress, see part two.

The Big Idea Behind Cyber Incident Reporting

In the United States, private companies and public entities are generally responsible for their own cybersecurity. This, of course, makes sense to us intuitively because it’s the set-up we’ve always known: a hospital should lock down its patients’ personal healthcare data; an online tax preparation service should ensure social security numbers stay safe; and a local utility should guarantee there are no dangerous chemicals in the water supply. The federal or local government, in turn, is supposed to set minimum cybersecurity standards and offer support, information or guidance when it knows of an ongoing risk—or ensure accountability, if necessary, afterward.

But here’s the problem: when it comes to understanding the cyber threats to American businesses, infrastructure and civilians, the government often does not have a clear picture of what is happening in real time, which makes it very difficult to respond appropriately. Instead, it sees only a partial picture, pieced together from companies that choose to disclose cyber incidents; select sub-sectors or companies compelled to disclose incidents or breaches; and companies whose systems are so integral to the daily functioning of the U.S. economy that when they’re taken offline, they can’t really hide it. Here’s looking at you, Colonial Pipeline.

The current bicameral, bipartisan effort circulating in Congress to require certain companies to report cyber incidents directly to the Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) stems from this lack of sufficient and timely information. The goal is to create a new, robust and mandatory process through which these companies will directly tell CISA about any cyber incident deemed important enough that the government needs to hear about it.

The ideal here is an environment in which the federal government better understands ongoing cybersecurity threats to private companies and can respond in real time, work with these companies to mitigate the damage, stop attacks from snowballing, and identify better policies and solutions going forward.

Why Cyber Incident Reporting Can Be Difficult to Get Right

For those with limited experience in the cybersecurity industry, it might seem unbearably simple: if a company gets hacked, it should say something—or be forced to disclose. After all, it’s often our own security that’s at stake: social security numbers, digital accounts and consumer preferences, secrets and safety. But the situation is far more complicated, ambiguous and legally fraught for companies and for the U.S. government than it might first appear.

The easiest way to understand the challenges of reporting cyber incidents is to focus on why companies don’t report. Sometimes, they simply don’t realize they’ve been hit for days, months or even years. Identifying an incident and figuring out whether it’s ongoing, how attackers infiltrated and what they may have accessed is a time-consuming process. And often, this painstaking process is made more complicated when the systems responders typically rely on are also hamstrung—famously, when Facebook went down last year, some employees couldn’t physically reach the offline systems because building access relied on the very system that had failed.

The technical logistics of incident response aside, there are all kinds of incentives to avoid sharing information: questions of legal liability, requirements to uphold non-disclosure agreements, fears about putting off investors or tanking company stock, or simply the shame and stigma around being victimized in the first place. Or perhaps a company did report an incident—but the government or other entity to which it reported was neither timely nor effective in its response.

Finally, keep in mind that there’s a difference between reporting to the government and reporting to the public. For example, in 2015, a little-known U.S. agency called the Office of Personnel Management was breached—and some 22 million personnel records were affected. In this case, while the initial breach was identified in April, it was not disclosed to the public until June. While the handling of that incident was roundly criticized, there are sometimes perfectly good reasons to delay making a breach public: perhaps a company or the government wants to avoid panic, or hold off on tipping its hand and letting hackers know they’ve been found out.

Why We Don’t Already Have a Cyber Incident Reporting Law

Of course, all of this raises an obvious question: if most people agree that cyber incident reporting is useful and necessary, why doesn’t the United States already have a law for this?

Part of the reason is that cyber incident reporting legislation isn’t dropping into a total vacuum; a complex patchwork of reporting requirements and relationships already exists across various industries. Perhaps the best examples are breach notification laws, under which key sectors like finance and healthcare already have to report if they have reason to think the data they hold or process has been compromised. Some sectors are required to report not only specific breaches but also incidents more broadly—that is, they have to report even if they’re not sure that data was stolen or processes were disrupted. For example, the Securities and Exchange Commission (SEC) notes that public companies may have an obligation to disclose an incident to their investors. Other examples of mandated incident reporting apply to federal contractors, federal agencies and defense contractors.

The reason Congress is now looking at a law for cyber incident reporting is that the existing patchwork is insufficient and unsystematic. Some sectors have no reporting requirements at all. Others are subject to overlapping regulations that are resource-intensive to comply with. There’s also a fair bit of concern that, in many cases, existing reporting requirements are too narrow, insufficiently standardized or simply unclear—making them difficult both to follow and to enforce.

This is the environment into which current congressional legislation has been introduced. For an analysis of the legislation itself, please see part two.

