This analysis is in response to breaking news and subject to change. Please reach out to pr@rstreet.org to speak with the author.

While the House of Representatives has received most of the attention around data privacy this Congress—especially regarding a comprehensive federal privacy law—a July 11 hearing of the Senate Committee on Commerce, Science, & Transportation provided insight into where the other chamber stands. Notably, while many senators expressed broad support for the American Privacy Rights Act (APRA), ranking member Sen. Ted Cruz (R-Texas) does not support it.

The hearing, titled “The Need to Protect Americans’ Privacy and the AI Accelerant,” delved into how the rise of artificial intelligence (AI) heightens the need for comprehensive federal data privacy laws to protect individuals and guide businesses in AI development and use. Committee chair Sen. Maria Cantwell (D-Wash.) is a key contributor to the APRA. Several themes emerged during the hearing: the significance of data minimization, ensuring small businesses can innovate in the AI space, a need for data security guardrails, and the national security implications of not having a federal privacy law.

Witnesses included Ryan Calo, a professor at the University of Washington School of Law; Amba Kak, co-executive director of the AI Now Institute; Udbhav Tiwari, director of global product policy at Mozilla; and Morgan Reed, president of ACT | The App Association. All witnesses conveyed the urgent need for a comprehensive federal privacy law.

Federal data privacy and security law

The pressing need for comprehensive federal data privacy and security laws in the context of AI cannot be overstated. There are four main areas of concern here: privacy, security, misinformation and disinformation, and bias. However, it is essential that a federal privacy law remain focused on substantive data privacy provisions rather than provisions about emerging technologies like AI. For example, the APRA originally contained sections that prohibited the collection and use of data in a manner that discriminates against individuals, required impact assessments for covered algorithms, and instituted algorithm design evaluations. These provisions were removed from the APRA’s updated version entirely, with an aim toward more privacy-centric legislation. 

Data minimization

Trained on massive amounts of data, large language models (LLMs) can derive sensitive insights about consumers and businesses, raising concerns that AI could infer sensitive information and cause societal harm. Many witnesses conveyed that one benefit of a comprehensive federal data privacy law would be a data minimization standard. Calo stated that data minimization rules could help address AI’s “insatiable appetite for data,” and Kak emphasized that “data never collected is data that is never at risk.” However, it is important not to bar sensitive data from AI systems outright, as there are valid use cases, such as medical research and safety applications. For example, while people might agree that children’s data should receive heightened protection, a nuanced approach is essential: if an autonomous vehicle company wanted to train its AI systems to detect humans in the roadway, it should be allowed to use images or videos of both adults and children to ensure the vehicles can detect all humans adequately.

Small business

Many members and witnesses mentioned the importance of small businesses in the AI technology space. Reed explained that exempting small businesses from APRA requirements might not be the correct approach because, without the benefit of the law’s preemption, they would remain exposed to the growing, complex patchwork of state privacy laws. While large companies might have the resources to navigate these laws, small businesses do not. Reed also highlighted small businesses’ reliance on LLMs created by large AI companies, arguing that disrupting LLM innovation would also negatively affect small businesses.

Data security

Data privacy and data security are intricately linked. While the APRA included explicit data security compliance standards, other provisions, such as data minimization, also bolstered data security. Technology plays a role here as well. In his written testimony, Tiwari explained that “[w]hile legislation is essential, technical advances must work hand-in-hand with them to create a more safe and private future.” This includes developing and deploying privacy-enhancing technologies, which play a key role in data security and privacy protection for consumers in general.

National security

The R Street Institute has conducted extensive research on the intersection between data privacy and security, AI, and national security. The concerns about AI’s potential for consumer privacy abuse are valid, which is why R Street has championed the need for a comprehensive federal data privacy and security law. However, as Sen. Eric Schmitt (R-Mo.) noted during the hearing, AI plays a large role in our national security and defense systems. Privacy legislation must not hinder innovation, especially as other nations seek to get ahead of the United States in the AI race.

As policymakers contemplate federal data privacy and security legislation, it is important to note that these protections are crucial—not just for AI, but for all technological developments. Thus, privacy law provisions should not be specific to any emerging technology, including AI. This approach allows for responsible innovation while ensuring privacy protection across the broad technological landscape.

