Testimony from:
Adam Thierer, Resident Senior Fellow, R Street Institute
Testimony in Opposition to CT SB 2: “An Act Concerning Artificial Intelligence”
February 26, 2025
Connecticut Joint General Laws Committee
Chairman Maroney, Chairman Lemar, and members of the committee:
My name is Adam Thierer and I am a Resident Senior Fellow in the Technology and Innovation program at the R Street Institute. The R Street Institute is a nonprofit, nonpartisan public policy research organization. Our mission is to engage in policy research and outreach to promote free markets and limited, effective government in many areas, including emerging technology.
This is why SB 2: “An Act Concerning Artificial Intelligence” is of great interest to our organization. The bill represents a significant expansion of state regulation of artificial intelligence (AI) systems that will discourage competition and investment. This would set back Connecticut and the nation, especially as the United States is in tight competition with China for leadership in the global AI market.[1] There are better ways for Connecticut to address concerns about AI systems that would not undermine growth and innovation the way SB 2 will.
SB 2 Undermines National AI Policy Goals
America must be prepared to meet the global AI challenge posed by China and other nations because it has profound ramifications for both our nation’s global competitiveness and geopolitical security.[2] This is why a bipartisan coalition of lawmakers recently noted in a 273-page report that, “the United States must take active steps to safeguard our current leadership position” to “help our country remain the world’s undisputed leader in the responsible design, development, and deployment of AI.”[3]
SB 2 runs counter to these important national priorities by erecting new barriers to AI growth and innovation. This measure, and other proposals like it pending in other states, threaten to duplicate the disastrous regulatory model developed by the European Union (EU).[4] The European regulatory approach for digital technology and AI essentially treats new algorithmic innovations as guilty of speculative future harms, even for products and services not yet on the market.
Unsurprisingly, the EU’s regulatory model has been devastating for competition, investment, and new start-up formation across Europe.[5] Scholars have called Europe “the biggest loser” in the global digital technology race today because of the way the continent’s regulatory environment has so severely undermined its ability to compete globally and attract new investment.[6] While America has 19 of the 25 largest digital companies in the world by market cap, Europe has only two.[7] In 2022 alone, U.S. digital technology firms contributed over $4 trillion of gross output for the nation, $2.6 trillion of value added (translating to 10 percent of U.S. GDP), $1.3 trillion of compensation, and 8.9 million jobs.[8] This remarkable growth came about thanks to the light-touch regulatory approach that America adopted for digital commerce and speech in the 1990s.[9]
SB 2 Emulates Europe’s Disastrous Regulatory Model
This is why the United States must reject efforts to import Europe’s failed policy model to our shores as SB 2 essentially proposes. The bill contains many confusing, open-ended provisions that would create formidable compliance headaches, especially for smaller companies. As with the EU’s new AI Act, the Connecticut law is rooted in fears about preemptively identifying and eradicating hypothetical, future harms associated with algorithmic systems. That approach is well-intentioned but misguided.
Specifically, the legislation is preoccupied with trying to eliminate any possibility of “algorithmic discrimination,” especially for “high-risk” use cases or applications where AI systems represent a “substantial factor” in making “consequential decisions.” Innovators are expected to use “reasonable care” as it pertains to “any known or reasonably foreseeable risks” from systems that might pose such concerns.
These open-ended requirements would create a litany of roadblocks and slow the pace of AI innovation in the state. In theory, almost every algorithmic system—like all technological systems more generally—poses some theoretical risk to society. But it would be a mistake to regulate AI through preemptive and highly restrictive layers of red tape based on speculative fears. The only innovators able to shoulder the compliance burdens will be the largest companies, with their massive compliance teams and the resources needed to absorb formidable regulatory and liability costs. “Startups don’t have these luxuries,” observes an analyst from a leading venture capital firm.[10] “They may have minimal legal resources they can devote to compliance, and some startups don’t even have a full-time lawyer on staff,” he notes. “If they need to make changes to their product to comply with a new state law, they would need to pull valuable engineers away from working on baseline elements of product development and monetization.”
The better way to address such concerns is to grant all AI innovators the widest latitude possible to bring new products and services to market, but then hold them accountable if they violate time-tested legal standards that protect consumers against harm and discrimination.[11]
SB 2 Follows Colorado’s Costly and Complicated AI Law
SB 2 is modeled after a similar measure in Colorado that was passed last year. But that Colorado law (SB24-205) serves as a cautionary tale for Connecticut and the nation because it illustrates how complicated and costly such AI regulation will be in practice.[12]
Before the Colorado legislature passed the bill, a group of smaller AI developers and tech entrepreneurs sent a letter to state lawmakers noting how the measure “would severely stifle innovation and impose untenable burdens on Colorado’s businesses, particularly startups.”[13] They cited the law’s “vague and overbroad” definitions and noted that predicting the “foreseeable” risks of general-purpose AI is “essentially impossible and invites litigation against fundamental and socially valuable innovations.” These innovators also cited First Amendment-related concerns with the Colorado bill.
Unfortunately, Colorado Gov. Jared Polis signed the measure into law anyway, but expressed deep reservations about the bill when doing so. In his signing statement, Polis said that the Colorado law would “create a complex compliance regime for all developers and deployers of AI” through “significant, affirmative reporting requirements,” and that he was “concerned about the impact this law may have on an industry that is fueling critical technological advancements across our state for consumers and enterprises alike.”[14] He went further and suggested the need for a “cohesive federal approach” that is “applied by the federal government to limit and preempt varied compliance burdens on innovators and ensure a level playing field across state lines along with ensuring access to life-saving and money-saving AI technologies for consumers.”
With these problems in mind, after the bill signing, Gov. Polis joined the sponsor of the bill and the state’s attorney general in sending a joint letter to the Colorado technology industry community promising to quickly form a Colorado AI Impact Task Force to address concerns “that an overly broad definition of AI, coupled with proactive disclosure requirements, could inadvertently impose prohibitively high costs on them, resulting in barriers to growth and product development, job losses, and a diminished capacity to raise capital.”[15] Unfortunately, that task force recently released its final recommendations and failed to address these problems. The task force cited many “issues with firm disagreement on approach and where creativity will be needed,” but offered no solutions.[16]
This makes it clear that the Colorado law is not a good model for Connecticut to be following.
Connecticut Has Better Ways to Address AI-Related Concerns
Connecticut has many other long-standing ways to address concerns about AI systems that would not involve a preemptive and paperwork-intensive regulatory system for the most important technology of modern times.[17]
While SB 2 looks to preemptively address “algorithmic discrimination,” the federal government and all U.S. states, including Connecticut, already have many civil rights laws and consumer protection regulations on the books that cover such concerns. Existing Connecticut laws already address discrimination in employment, housing, and public accommodations. AI systems are not exempt from these regulations. Any “algorithmic discrimination” that might be discovered is already flatly illegal, not only under these Connecticut civil rights statutes but also under federal civil rights laws.
The same is true for any other consumer harms that might occur because of faults with AI systems. Like every other state, the Connecticut code specifically includes extensive consumer protection regulations and penalties for unfair and deceptive practices. The Connecticut Attorney General’s office and state consumer protection offices can address AI-related harms if they are shown to exist.
But, again, SB 2 proposes to try to discern those harms preemptively for systems and applications that have not yet even been developed or deployed to the public. The state should not treat innovators as guilty of hypothetical harms when these more sensible and fair remedies already exist. Regulation is not a costless exercise. If Connecticut adopts the EU’s misguided approach to technology regulation, it will make new innovation, competition, and job creation in the state much more costly.
Lawmakers should focus on enforcing existing laws and then modify or supplement those laws as needed if gaps are found later.
Meanwhile, Connecticut’s AI policies should be pro-growth and pro-innovation so that the state and nation remain at the cutting edge of AI innovation globally.
Because SB 2 runs counter to that objective, I encourage you to oppose the measure.
Thank you,
Adam Thierer
Resident Senior Fellow, Technology and Innovation
R Street Institute
[1] Haiman Wong, “US May Be Losing the Race for Global AI Leadership,” Dark Reading, Sept. 25, 2024. https://www.rstreet.org/commentary/us-may-be-losing-the-race-for-global-ai-leadership.
[2] Adam Thierer, “Ramifications of China’s DeepSeek Moment, Part 3: What Both Parties Need to Accept and Do Next,” R Street Analysis, Feb. 18, 2025. https://www.rstreet.org/commentary/ramifications-of-chinas-deepseek-moment-part-3-what-both-parties-need-to-accept-and-do-next.
[3] U.S. House of Representatives, 118th Congress, “Bipartisan House Task Force on Artificial Intelligence,” Dec. 2024, https://www.speaker.gov/wp-content/uploads/2024/12/AI-Task-Force-Report-FINAL.pdf.
[4] Dean W. Ball, “The EU AI Act is Coming to America,” Hyperdimensional, Feb. 13, 2025. https://www.hyperdimensional.co/p/the-eu-ai-act-is-coming-to-america.
[5] David S. Evans, “Why Can’t Europe Create Digital Businesses?” S&P Global Market Intelligence, May 2, 2024. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4781503.
[6] “The Biggest Loser,” The International Economy (Spring 2022). https://www.international-economy.com/TIE_Sp22_EuropeTechLoser.pdf.
[7] “Largest Tech Companies by Market Cap,” last accessed Jan. 24, 2025. https://companiesmarketcap.com/tech/largest-tech-companies-by-market-cap.
[8] U.S. Bureau of Economic Analysis, “U.S. Digital Economy: New and Revised Estimates, 2017–2022,” Dec. 6, 2023. https://apps.bea.gov/scb/issues/2023/12-december/1223-digital-economy.htm.
[9] Adam Thierer, “The Policy Origins of the Digital Revolution & the Continuing Case for the Freedom to Innovate,” R Street Real Solutions, Aug. 15, 2024. https://www.rstreet.org/commentary/the-policy-origins-of-the-digital-revolution-the-continuing-case-for-the-freedom-to-innovate.
[10] Matt Perault, “Setting the Agenda for Global AI Leadership: Assessing the Roles of Congress and the States,” a16z blog, Feb. 4, 2025. https://a16z.com/setting-the-agenda-for-global-ai-leadership-assessing-the-roles-of-congress-and-the-states.
[11] Adam Thierer, “The Most Important Principle for AI Regulation,” R Street Institute Real Solutions, June 21, 2023. https://www.rstreet.org/commentary/the-most-important-principle-for-ai-regulation.
[12] Adam Thierer, “Colorado Opens Door to an AI Patchwork as Congress Procrastinates,” R Street Analysis, May 20, 2024. https://www.rstreet.org/commentary/colorado-opens-door-to-an-ai-patchwork-as-congress-procrastinates.
[13] Letter from AI developers and tech entrepreneurs to Colorado lawmakers. https://drive.google.com/file/d/1acltqBnwjnPjRoe2ZCW5Tww_iABMUUj3/view.
[14] Governor Jared Polis, “Letter to the Colorado General Assembly,” May 17, 2024. https://drive.google.com/file/d/1i2cA3IG93VViNbzXu9LPgbTrZGqhyRgM/view.
[15] Joint letter to the Colorado technology industry community. https://newspack-coloradosun.s3.amazonaws.com/wp-content/uploads/2024/06/FINAL-DRAFT-AI-Statement-6-12-24-JP-PW-and-RR-Sig.pdf.
[16] Colorado AI Impact Task Force, draft recommendations. https://leg.colorado.gov/sites/default/files/images/draft_ai_impact_task_force_recommendations.pdf.
[17] Adam Thierer, “Getting AI Innovation Culture Right,” R Street Institute Policy Study No. 281 (March 2023). https://www.rstreet.org/research/getting-ai-innovation-culture-right.