I write to you as director of finance, insurance and trade policy at the R Street Institute, a nonprofit, nonpartisan public policy research organization (“think tank”). Our mission is to engage in policy research and outreach to promote free markets and limited, effective government. Since our founding in 2012, R Street has built a successful record of research on public policy regarding the business of insurance. Our most notable contribution in this space has been our annual Insurance Regulation Report Card, whose eighth edition will be published in early December.

R Street is not engaged as an insurance underwriter, data firm or model provider. Our thoughts on the Task Force’s Predictive Model White Paper are solely our own, grounded in the principles of limited, effective and efficient government. With that said, we would like to share some of our concerns with this exposed white paper.

While framed as a simple “best practices” document focused on generalized linear models (GLMs) deployed in the personal auto and homeowners lines of business, the white paper’s scope clearly goes far beyond that. It proposes new rating standards that do not reflect any existing state actuarial review process for rating plans filed with a GLM. In some places, it even suggests that such standards be applied to other, and possibly even all, model types. Of particular concern is that the white paper in several places recommends that states require the collection of information that could breach confidentiality, expose trade secrets and conflict with long-established Actuarial Standards of Practice (ASOPs).

Among the problematic recommendations are the paper’s various instructions concerning raw data, which it defines as “data before scrubbing, transformation etc. takes place.” Regulators should, the paper notes, “review the geographic scope and geographic exposure distribution of the raw data for relevance to the state where the model is filed” as well as “ask for aggregated data… that allows the regulator to focus on the univariate distributions and compare raw data to adjusted/binned/transformed/etc. data.”

The goal appears to be to allow regulators to reproduce a model’s outputs, rather than simply review those outputs. But submitting raw data could pose security risks and may violate contractual obligations with third parties. Our view is that, as a general heuristic, regulators should temper their desire to know everything that goes on inside the proverbial “black box” and instead put the appropriate focus on what comes out of it; that is, focus on rates, not the models that produced them.

The confidentiality concerns extend beyond raw data to the models themselves. The paper has been amended to better recognize the need for confidentiality protections, but it continues to entrust regulators to determine where it is “appropriate” to guard intellectual property and trade secrets. Third-party vendors can fiscally justify their significant investments in proprietary algorithms only if they have certainty that such work product will be protected. Without that certainty, few new vendors will come to the market and existing models could be withdrawn. In either case, the effect would be to stifle innovation.

Needless to say, a withdrawal of existing GLMs would cause significant market disruption. State departments of insurance have been deploying well-established review processes for GLMs for years. The results have been vibrant and competitive insurance markets. As R Street has demonstrated in its annual Insurance Regulation Report Card, no state currently has either a personal auto or homeowners insurance market with a Herfindahl-Hirschman Index (HHI) score that would indicate it is highly concentrated. In our forthcoming edition, we find only Alaska, Louisiana and New York had moderately concentrated auto insurance markets in 2018 and only Alaska had a moderately concentrated homeowners market.
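For readers less familiar with the metric, the HHI is the sum of the squared market shares (in percentage points) of all firms in a market; under the commonly cited U.S. Department of Justice and Federal Trade Commission 2010 merger-guideline thresholds, a score below 1,500 indicates an unconcentrated market, 1,500 to 2,500 a moderately concentrated one and above 2,500 a highly concentrated one. The short sketch below, using entirely hypothetical market shares, simply illustrates the calculation behind those classifications; it is not drawn from the Report Card data.

```python
# Minimal illustration of the Herfindahl-Hirschman Index (HHI) calculation.
# Market shares below are hypothetical, expressed in percentage points.

def hhi(shares):
    """Return the HHI: the sum of squared market shares (percentage points)."""
    return sum(s ** 2 for s in shares)

def concentration(score):
    """Classify a market using the DOJ/FTC 2010 merger-guideline thresholds."""
    if score < 1500:
        return "unconcentrated"
    if score <= 2500:
        return "moderately concentrated"
    return "highly concentrated"

# Hypothetical state auto insurance market with ten insurers.
shares = [22, 18, 15, 12, 10, 8, 6, 4, 3, 2]  # sums to 100%
score = hhi(shares)
print(score, concentration(score))  # 1406 unconcentrated
```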

This is a marked difference from the situation that prevailed through the 1980s, when some states saw as much as half of all auto insurance consumers shunted into residual market mechanisms. By contrast, according to the Automobile Insurance Plans Service Office (AIPSO), as of 2018, residual markets accounted for less than 0.1% of the market in 34 of the 50 states. Just four states—Maryland, Massachusetts, Rhode Island and North Carolina—have residual markets that account for more than 1% of auto insurance policies. This greatly improved ability of insurers to segment, classify and price risk effectively can be traced directly to the emergence of dynamic models like credit-based insurance scores.

Regulators should be very cautious before adopting any changes that could reverse those victories.

To its credit, this updated version of the exposed draft does acknowledge a central weakness at the heart of the project, which is the degree to which regulators are expected to ask “the modeler to go beyond correlation and document their basic, causal understanding of how variables used in a model or rating plan are related to risk.” As the white paper notes, this approach significantly exceeds the requirements established in ASOP No. 12. It is, of course, reasonable to require model predictions to bear some resemblance to the subject being modeled, but causality is notoriously difficult to prove, and the standard proposed here could make the practice of modeling itself untenable.

It is also worth noting the irony that the white paper would have regulators insist upon filings that prove not only the credibility but also the causal nature of modeling assumptions, when the Task Force itself has not credibly demonstrated that such radical shifts in the rate-filing approval process would better serve markets or consumers.

Indeed, given the decades-long record of competitive insurance markets, there simply is no good reason to risk widespread market disruption via unprecedented information requests. Any best practices for the regulation of predictive modeling should begin by determining what information is truly “essential” to ensure that rates are adequate and neither excessive nor unfairly discriminatory.
