Overcoming Paternalistic Barriers to Personalized Medicine in the AI Age
Should Americans be able to use artificial intelligence (AI) systems, including chatbots, to learn more about their health and potential treatments for their ailments? That question is becoming more pertinent as people are increasingly empowered with new AI tools that could expand personalized medicine and improve health outcomes, but that also carry potential risks.
Recently, sources reported that OpenAI had changed the terms of service for its ChatGPT large language model in ways that would limit personalized health and legal advice. OpenAI denied the claims, however, insisting that “model behavior remains unchanged.” Meanwhile, ordinary users on social media and Reddit threads disputed that denial, sharing examples of health-related queries that produced more limited results after recent updates.
It is a confusing situation, and one likely to intensify in the coming months and years as new regulations and court-based liability threaten to deter important new AI-based health capabilities. Policymakers should be careful about how they address these issues, because important free speech values and personal autonomy are at stake. Policy for AI-enabled medicine should ensure that people can freely learn about their health and treatment options. If lawmakers act on this front, it should be with an eye toward expanding health information and treatment options, not restricting them.
Mountains of Mandates and Costly Compliance Hassles
Consider the uncertain legal environment that AI developers face today. AI-enabled health innovation is threatened by the “race to the bottom” that will occur if developers get squeezed between the so-called “Brussels effect” of burdensome European Union (EU) mandates and the “Sacramento effect”: a patchwork of state AI mandates, led mainly by California.
The European Parliament passed the EU AI Act last year, which designates certain activities, such as giving personalized health, legal, and financial advice, as “high-risk.” With this designation comes a host of obligations, including technical documentation, transparency requirements, quality management systems, incident reporting, conformity assessments, and AI literacy requirements for employees who build AI systems.
Meanwhile, in the United States, many state governments are looking to regulate AI aggressively. In fact, over 1,100 AI-related bills were introduced across the nation this year. Most of them are state legislation, and over 250 pertain to health-related concerns. This confusing patchwork will decimate competition from “Little Tech” innovators, especially those using open-source AI capabilities, who will struggle with the enormous compliance costs.
As usual, California law will likely play an outsized role in dictating national market outcomes. The state has considered dozens of AI-related bills, and Governor Gavin Newsom recently signed several into law. One of those measures, Assembly Bill 489, regulates how AI chatbots represent themselves when delivering information to users. That sounds harmless enough, but it is part of a growing body of state policies aimed at AI chatbots, including a major Illinois law banning “AI therapy” chatbots altogether. In addition to these new regulatory enactments, AI innovators also face the omnipresent threat of liability in America’s overly litigious court system.
Information is Empowerment
AI has the potential to become what the late political scientist and communications theorist Ithiel de Sola Pool called a “technology of freedom”: a tool or platform that helps democratize access to information previously restricted by high costs and barriers to entry. People should have access to the personalized medical, legal, and financial advice that these tools can deliver at effectively no cost. Yet governments are eager to limit such easy access to information by preemptively regulating AI systems.
To be sure, new AI tools carry risks and are still improving. While there are documented cases of AI-generated health advice causing bad outcomes, the answer should not be overbearing, paternalistic policies that threaten consumers’ fundamental right to speech and information. Policing quackery is good, and real-world harms can and should be addressed through existing legal mechanisms and the judicial process. User education and technological literacy must also be part of the solution. By contrast, broad and burdensome regulations in the medical field reflect a pattern of governments viewing AI as an opportunity to control speech they disagree with or view as threatening.
Lawmakers must also avoid unthinking “robophobia,” the notion that AI systems must meet a radically different standard than the one we hold humans to before new technologies can be released. Studies on ChatGPT show that it “can match or even outperform physicians in terms of therapeutic decision accuracy, with a similar or slightly better success rate than human doctors.” This is partly because human doctors are prone to errors stemming from common human limitations such as fatigue, distraction, multitasking, aging, and information overload. AI doctors will have their own shortcomings, but they avoid these common human errors. One study of ChatGPT in sleep medicine found that it “demonstrates promising diagnostic potential in sleep medicine, with moderate to high accuracy in identifying differential and final diagnoses.”
Cost is another obvious hurdle many Americans face when accessing healthcare, in both time and money. Healthcare costs have risen steadily, and studies show that Americans spend over two hours, including travel and waiting time, for just 20 minutes with a doctor. AI can alleviate both burdens by providing low-cost, real-time medical information to users.
First Amendment and Free Speech Concerns
The reality of AI regulation is that innovators will likely adapt their systems to conform to the most restrictive legal regimes to avoid any chance of liability. That would preemptively undermine the speech-enhancing, life-enriching, even life-saving potential of these new technologies.
The debate over AI in medicine sits at the intersection of two fundamental, and sometimes competing, legal authorities. On one hand, the Tenth Amendment reserves to the states broad “police powers” to protect the health and safety of their citizens. On the other hand, the First Amendment protects the speech of all citizens. While there are some narrow exceptions, the First Amendment provides extremely broad protection, and it sets only a “floor” of speech protections that states can build upon. The central legal challenge is determining where the state’s valid power to regulate medical conduct ends and where unconstitutional infringement on protected speech begins.
Information produced or conveyed by AI systems should receive First Amendment protection, just as information shared through radio and television does. As one legal summary by the Foundation for Individual Rights and Expression puts it, “Any government restriction on the expressive use of AI needs to be narrowly tailored to serve a compelling government purpose, and the regulation must restrict as little expression as is necessary to achieve that purpose.” It is difficult to argue that the requirements in the EU AI Act or in various state bills are narrowly tailored enough to avoid infringing speech rights.
Several court cases affirm the right of citizens to access such health information. In Sorrell v. IMS Health, a state law prevented citizens and marketing companies from using data about how doctors prescribed medicine. The justices struck down that law because it imposed a content-based restriction on speech, stating:
This Court has held that the creation and dissemination of information are speech within the meaning of the First Amendment … Facts, after all, are the beginning point for much of the speech that is most essential to advance human knowledge and to conduct human affairs.
The Court drew a sharp distinction between the government’s ability to regulate conduct, such as performing surgery, and its ability to regulate information and speech. While the government may have a legitimate interest in regulating conduct, speech receives special protection that sets a higher standard for government involvement. AI generally sends and receives information, which is speech, not conduct, and when the state regulates speech, it faces a higher constitutional bar.
In a recent ruling in the so-called “death doula” case brought by the Institute for Justice (IJ), an appeals court struck down a state law requiring death counseling services to obtain funeral home licenses and to purchase or rent a funeral home, potentially costing hundreds of thousands of dollars in additional expenses. Death counseling services typically help families develop an end-of-life plan, including delegating tasks and money, choosing a funeral option, and supporting emotionally and financially distressed families while they make decisions. These are primarily speech activities, and death counseling does not involve technical work like transporting or embalming bodies.
The court invoked the First Amendment, finding that the law swept in vast amounts of speech not directly related to funeral home licensing and explaining that “This approach furthers the state’s interests the way an atom bomb would further the eradication of a residential ant infestation.” While some regulations of technical work are valid, laws that sweep in speech related to those activities trigger a higher threshold, under which the government must take the narrowest approach possible to achieve direct and legitimate interests. The parallels to AI are apparent: laws regulating the technical work of doctors may be valid, but a law preventing consumers from discussing their health with AI services veers into unconstitutional speech regulation.
In Hines v. Pardue, another IJ case, the court struck down a Texas law restricting a veterinarian from giving online pet health advice without first seeing the animal. The case reaffirmed the principle that states can regulate technical work, but when laws prevent otherwise legal speech, they violate the First Amendment. Similarly, AI does not actually perform any medical work, so laws restricting users’ ability to engage in speech with it also violate the First Amendment.
Moreover, AI systems are likely to qualify as an “interactive computer service” under Section 230 of the Communications Decency Act. Section 230 protects online media companies from liability for content posted by third parties (i.e., users of the service). While Section 230 as written may not cover all generative AI applications, the spirit of the law should apply to cases like personalized medicine. However much information and programming an AI system already contains, it still requires a user’s input to produce output. If, as proposed in Colorado and California, AI firms are held liable for every potentially harmful action users take when seeking personalized medicine, they will simply cease offering the service.
Some narrow instances may justify holding firms liable for speech they produce, such as cases of libel, but we cannot risk losing the massive benefits of personalized medicine and free speech because of a few bad outcomes. Section 230 and the First Amendment broadly protect the rights of firms and users to exchange information the way they see fit, even if the topic is a licensed activity like medicine.
Constructive Policy Options
It would be helpful if legislators took steps to extend free speech protections to more medical information. Arizona’s landmark “Free Speech in Medicine Act” lets pharmaceutical manufacturers engage in truthful promotion of “off-label” uses for drugs (off-label meaning a use not approved by the Food and Drug Administration). As off-label uses became more common and practical, restrictions on off-label communication increasingly limited the sharing of beneficial information about those uses. The Arizona law reaffirms the principle that people have a right to access speech, even if that speech is commercial in nature. Unfortunately, the idea has not caught on, and only Tennessee has passed comparable legislation. Policymakers should create additional protections that align with these measures and key Supreme Court precedents.
As the Supreme Court put it nearly 50 years ago in Virginia Board of Pharmacy v. Virginia Citizens Consumer Council, “Information is not in itself harmful … People will perceive their own best interests if only they are well enough informed, and … the best means to that end is to open the channels of communication rather than to close them.” This precedent makes clear that more information, more speech, and more education are the optimal solutions, not restrictions on legal speech.
Not only do restrictions on health and medical information threaten fundamental speech rights, they also have negative economic and competitive impacts. Restricting speech around licensed practices like medicine and law amplifies the problems of government gatekeeping in certain occupations, as the Obama administration acknowledged in a 2016 report on the costs of occupational licensing. Incumbent practitioners use the licensing and certification process to deter competition and raise the costs of entry. Personalized AI tools are already showing promise and represent a competitive threat to the traditional model of medicine because they offer a similar service at a fraction of the price. AI tools present a real chance to address the expensive and bloated medical system by offering far cheaper, faster, and more effective solutions to many needs. Yet some states, like North Carolina, have proposed making it illegal to “operate or distribute a chatbot that deals substantially with health information without first obtaining a health information chatbot license.”
Policymakers should take a more enlightened approach. Some experts have proposed model legislation to balance risk and innovation by “ensuring patients have the right to access, control, and share their individualized medical information, including algorithm-generated insights,” and “establishing that truthful and nonmisleading AI-generated healthcare outputs are protected speech.” Lawmakers could also require users to give special consent before using AI systems for personalized medicine without a doctor’s involvement. If users knowingly accept the technology’s potential risks, that would settle many of the liability issues.
Expanded health literacy, technology literacy, and risk communication can also be part of a constructive government response, and in April, President Trump signed an executive order calling for AI literacy efforts. Lawmakers can provide education to help people become more tech-savvy and better able to make wise choices using all the new information at their disposal.
Constructive solutions such as these begin from the premise that, as new technological capabilities become available, people deserve better options, not limits on their rights. The approach Americans deserve recognizes the potential for an exponential increase in well-being from new technology and crafts smart policies that capture those benefits while addressing harmful outcomes in a less restrictive fashion. AI represents a potentially paradigm-changing technology in healthcare, and it would be a tragedy for people to lose access to it because of misguided laws.