Insurance is about information. Outside of the technology industry itself, few sectors are better situated to seize the advantages of so-called “big data” than insurance.

Such was the consensus of a panel presenting before the Property/Casualty Insurance Committee of the National Conference of Insurance Legislators during the group’s recent spring meeting in Little Rock, Arkansas.

Chaired by state Rep. Matt Lehman, R-Ind., the committee heard testimony from various insurance trade association representatives, a risk analytics expert, an insurance regulator and a consumer advocate. Though they differed in the details, each spoke excitedly about the opportunities presented by this emerging field.

“Big data” generally is understood to refer to the emerging ability of powerful computers to crunch data sets that previously would have been unthinkably large and to find patterns, trends and associations that wouldn’t be obvious to casual observers. Applying big-data techniques to find credible linkages between human behaviors and interactions, on the one hand, and future claims, on the other, offers the potential to revolutionize underwriting strategies and dynamic pricing models. But while “big data” offers some exciting new horizons for insurance, it simultaneously is raising concerns among regulators and legislators for its potential to be both disruptive and opaque.

Frank O’Brien of the Property Casualty Insurers Association of America observed that the industry has occupied itself with collecting risk-related information since its inception. These latest tools are thus best thought of as evolutionary, not revolutionary, developments. At bottom, should novel tools used to capture data be treated any differently than those that have been employed for hundreds of years?

O’Brien’s assertion garnered nods of approval from the industry-heavy crowd. Yet while he’s not wrong about the underlying purpose insurers hope to achieve, that point nonetheless obscures the genuinely novel methodologies involved in incorporating “big data” into the business of insurance. The speed and persistence with which the industry is investigating new data-driven opportunities are sufficient evidence of this. The value proposition is enormous.

The challenge that confronts policymakers and regulators is how to differentiate among big-data approaches to identify those that might be problematic.

Wes Bissett of the Independent Insurance Agents and Brokers of America noted that no single law can hope to address all the various concepts included in the notion of “big data.” Birny Birnbaum of the Center for Economic Justice argued that insurance regulators must nonetheless adopt a broad approach to big-data concerns, including a constantly updated list of the data employed by insurers. While his proposal was light on details and drew skepticism from parts of the panel and audience, Birnbaum maintained that such a list would allow regulators and the public to ensure that data are not used for impermissible or discriminatory purposes.

Given that insurance is a highly regulated industry, it’s crucial for both regulators and legislators to comprehend exactly what novel uses of data will entail. Yet insurers are stuck between the need to be transparent with the public – after all, nothing elicits cynicism like “black box” pricing factors – and the need to protect the competitive advantages that bespoke “big data” strategies provide.

At the panel’s conclusion, Rep. Lehman urged committee members to work with interested parties to bring forward concrete proposals for model legislation, which could be considered when NCOIL meets this summer in Portland, Oregon. There’s not yet any indication of what kinds of models the committee might entertain, though a proposal concerning “telematics” (not limited to automotive technology) was endorsed by both Bissett and Birnbaum. Given the savings that deploying such technology could achieve, it’s as good a place to start as any.
