Current conversations around age verification often overlook the fact that the United States has tried to mandate age verification several times before. Throughout those previous attempts, the Supreme Court has repeatedly recognized that children have First Amendment rights, that a lack of anonymity online chills speech, and that chilling speech harms content providers. Indeed, the 1990s saw various laws mandating age verification for online content, all of which were struck down by courts. That precedent remains a helpful guide for today’s legislation, where age-verification methods raise a variety of problems beyond the First Amendment right to anonymity, including under-inclusivity; user access and website feasibility; and the overbreadth and vagueness doctrines.

A number of previous legal cases remain relevant to current age-verification issues. In an amicus brief, Professor Eric Goldman reminds readers of the 1996 Communications Decency Act (CDA), “which the Supreme Court largely struck down in Reno v. ACLU as a vague and content-based restriction of protected speech under the First Amendment.” Goldman writes that the law “criminalized the ‘knowing’ transmission of ‘obscene or indecent’ messages to minors over the Internet.” The law, however, gave websites an affirmative defense if they used age verification. “But the Court held that age-verification requirements ‘would not significantly narrow the statute’s burden on noncommercial speech,’” Goldman writes, because “it is not economically feasible for most noncommercial speakers to employ such verification.” Today, legislation requiring age verification for social media faces the same challenges because of the various burdens such laws would place on speech protected by the First Amendment.

Congress responded to the Court striking down most of the CDA with the 1998 Child Online Protection Act (COPA). That law, too, was challenged in various court cases and ultimately ruled unconstitutional by the Third Circuit, which found that its age-verification requirements did not comport with the First Amendment and echoed the district court’s conclusion that those requirements would burden speech. The court held that “users could be deterred from accessing the plaintiffs’ Web sites” because “many Web users are simply unwilling to provide identifying information in order to gain access to content, especially where the information they wish to access is sensitive or controversial.” Goldman notes that states had passed smaller versions of these bills, which were also ruled unconstitutional.

These decisions remain relevant, and today’s users may be similarly unwilling to verify their ages to access content. There are many scenarios in which someone might pause before verifying their age if intrusive means of verification were required. For example, a person might need to verify their age to access lewd content, to research an embarrassing health issue, or to seek out unpopular content such as Nickelback or Imagine Dragons songs. In cases like these, hackers might find that information valuable and leverage it for blackmail. Consider how someone might hesitate to engage on an HIV support forum or a forum for marital problems if they feared their posts would be tied to their identity. Indeed, the court in American Civil Liberties Union v. Gonzales added that age verification also deters “many users who are not willing to access information non-anonymously … from accessing the desired information.”

While anonymity has been a key issue in these debates, age-verification methods raise a series of other issues, such as under-inclusivity. The late Justice Antonin Scalia wrote the majority opinion in Brown v. Entertainment Merchants Association, in which the Court found that laws cannot condition children’s access to non-obscene speech on parental permission. “At the outset,” he writes, “we note our doubts that punishing third parties for conveying protected speech to children just in case their parents disapprove of that speech is a proper governmental means of aiding parental authority.” He explains that “California’s legislation straddles the fence between (1) addressing a serious social problem and (2) helping concerned parents control their children.” While he says both of these are legitimate ends, because pursuing them can burden First Amendment rights, they must be pursued in ways that are neither under-inclusive nor over-inclusive. Here, Scalia notes that the legislation was under-inclusive because it applied only to video games and not to other portrayals of violence in movies, television, books or other media. But it was also over-inclusive because it violated the First Amendment rights of minors whose guardians were not concerned about their children’s access to violent video games. For these reasons and more, he explains, the law “cannot survive strict scrutiny.”

Under-inclusivity concerns apply in contemporary age-verification debates, too. New laws are similarly under-inclusive in the content they target, as they focus only on social media. They do not address books, television, websites where users cannot create accounts, or even, ironically, video games, which are explicitly excluded from this and other age-verification legislation. The legislation is also similarly over-inclusive in preemptively blocking access to speech for: (1) minors whose parents are willing to allow them on social media but not willing to allow their children to use age-verification software, (2) those same parents who are not willing to use age-verification software themselves, and (3) adults without children who are unwilling to verify their ages.

Website traffic is another area where First Amendment concerns intersect with questions of feasibility for websites. The Third Circuit recognized that age verification will result in decreased traffic to certain websites. The court wrote that age-verification requirements “present their own First Amendment concerns by imposing undue burdens on Web publishers due to the high costs of implementing age-verification technologies and the loss of traffic that would result from the use of these technologies.”

Looking even deeper into the case law, we find the same functional problems that face us today. In 2008, a lower court found that the age-verification methods available at the time were unreliable at verifying age and that no products effectively blocked minors’ access to websites. The lower court added that “[t]he affirmative defenses cannot cure COPA’s failure to be narrowly tailored because they are effectively unavailable.”

While technology has improved since then, the problem now is a tradeoff: more effective age verification comes with greater privacy and security risks, while methods that pose fewer privacy and security risks are less effective at verifying age. The court in ACLU v. Gonzales summarized the issue, writing that “[b]ecause requiring age verification would lead to a significant loss of users, content providers would have to either self-censor, risk prosecution, or shoulder the large financial burden of age verification.”

This reasoning is directly relevant to today’s discussion. Part of First Amendment jurisprudence is ensuring that the government uses the least restrictive means available to achieve its goals. Writing for the majority in Ashcroft v. ACLU, a case challenging COPA, Justice Anthony Kennedy explains that the District Court considered the alternative of filtering software, which is “less restrictive than COPA” and more effective at restricting minors’ access to harmful material.

Kennedy explains the value of filters that parents can use to limit what their children access, noting that “[t]hey impose selective restrictions on speech at the receiving end, not universal restrictions at the source.” Without COPA and with filters, adults may access speech easily and choose to set restrictions for their children. Kennedy also points out that filters avoid the need to condemn broad categories of speech as criminal, thereby avoiding a chilling effect. He also cites a 2000 report to Congress by the Commission on Child Online Protection (a body Congress created to study COPA), which found that parental filters were far more effective than ID or credit card verification.

Although the current debate involves a different set of laws and proposals and a different generation of technology, parental content filters limiting what children may access are now more robust and can be applied at the router, device, browser or app level. In 2020, the Pew Research Center found that 72 percent of parents of children ages five to eleven used parental control apps to monitor their children’s internet use. One example of current, app-level technology is Instagram’s parental controls, which allow parents to place time limits on app use and view how their child interacts with the platform. Similarly, the iPhone has its own set of parental controls to limit screen time, prevent purchases, and block explicit content and other content that parents choose to restrict.

The laws currently under debate also appear to implicate the Supreme Court’s “overbreadth doctrine,” which applies when a “statute is facially invalid [because] it prohibits a substantial amount of protected speech.” The Third Circuit found COPA to be unconstitutionally overbroad because it limited access to swaths of speech that may not have been obscene. The District Court likewise found COPA to be overbroad for reasons including “that COPA could apply to a wide swath of the Web and thus COPA would prohibit and chill a substantial amount of constitutionally protected speech for adults.” Age-verification laws that block minors’ access to broad swaths of lawful speech, and that chill speech so broadly that adults are also affected, would seem to fail under this test. Furthermore, by banning companies from allowing children under the age of 13 onto their platforms, the Protecting Kids on Social Media Act and other legislation with a minimum age requirement, such as one Texas proposal that sets the age limit at 18, would likely meet the same fate.

A final potential constitutional issue relates not to the First Amendment but to the vagueness doctrine rooted in the Due Process Clause of the Fifth Amendment. Here, vagueness concerns “scienter requirements,” the standards that specify what intent or knowledge of wrongdoing a defendant must have. Because these standards may apply differently in different situations, they must be clearly defined. In American Civil Liberties Union v. Mukasey, the Third Circuit found that COPA was impermissibly vague for various reasons, including its failure to define the standard of “knowingly.” The current federal proposal (the Protecting Kids on Social Media Act) similarly does not define a “knowingly” standard. These ideas have already been tested in court in other forms and have manifestly failed.

Instead of treating this long line of precedent as a set of hazards to avoid in crafting legislation, bill authors have too often steered directly toward the same issues. New proposals must grapple with these past failures if they are to avoid the same pitfalls.

This is part of the series: “The Fundamental Problems with Social Media Age-Verification Legislation.”
