Of the various strains of bills circulating in state legislatures this year designed to protect kids and teens from harmful online content, one of the most common has been a mandate that mobile devices sold in the state come with an obscenity content filter pre-installed and activated by default. This sort of device-level content filtering may sound simple, even practical, to some lawmakers, but in reality, these proposals are likely to be expensive and difficult for manufacturers to implement, inconvenient for consumers, and challenging to enforce.

The inspiration for the current wave of device-filter bills is HB 72, which Utah passed into law in 2021. That bill’s enactment was made contingent upon at least five other states passing legislation “in substantially the same form” that itself takes effect, and in 2023, at least eight state legislatures attempted to move bills answering this call. These all vary slightly but generally share the same core requirements.

Most of these bills contain safe harbor language similar to Utah’s, which immunizes manufacturers whose devices deploy “a generally accepted and commercially reasonable method of filtration.”

Bad Incentives

Even the most sophisticated content-filtering programs will never be 100 percent successful at blocking all obscene content. Thus, some sort of “good faith” safe harbor is necessary to prevent a device-filter mandate from exposing companies to liability for every single mistake. An unintended consequence, however, is that such a safe harbor is likely to stifle innovation in what is currently a thriving market for private content-blocking software, as many parents will understandably assume that the mandated device-level filters are sufficient.

A related perverse incentive is that having these device-level content filters enabled by default may cause parents to become complacent, believing that their smart devices are sufficiently safe for their kids to use. However, not only are the filters themselves imperfect, but they are also easily circumvented by tech-savvy teens, and studies have shown that there is “little consistent evidence that filtering is effective at shielding young people from online sexual material.” Ultimately, technology is no substitute for parents supervising, educating, and communicating with their children about harmful content they may encounter online, and these mandates will not change that.

Compliance Difficulties

There are also numerous technical shortcomings that render these device filters impractical to implement, particularly on a state-by-state basis. First, most of these bills require that the device filter be installed and enabled by default on any smartphone or tablet bought or activated within the state. To comply with the law, a manufacturer would have to bake the filter into its operating system, yet a resident (including a minor) could easily buy a device out of state and activate it there, evading the mandate entirely.

In addition, many of these state laws tie the content these filters must block to the state’s own legal definition of “obscenity,” which frequently sweeps in material that is not pornographic. For example, Texas, which attempted to pass a device-filter law (SB 417) this year, defines “obscene” to include anything that “the average person, applying contemporary community standards, would find that taken as a whole appeals to the prurient interest in sex” and that, “taken as a whole, lacks serious literary, artistic, political, and scientific value.” Under SB 417, the boundaries of this definition would be enforced through private civil litigation, meaning that companies may feel compelled to block a great deal of non-pornographic content that merely contains nudity, from classical art to medical guidance, because algorithmic content filters cannot reliably make that distinction.

First Amendment Concerns

Putting aside all of the practical and technical problems these bills may create, they are quite simply unconstitutional under existing First Amendment case law. As NetChoice has ably laid out, these device-filter mandates work in essentially the same way as the requirements of the 1996 Communications Decency Act (CDA), which similarly aimed at the meritorious goal of protecting children from harmful content online. The majority of the CDA was struck down unanimously by the U.S. Supreme Court in Reno v. American Civil Liberties Union (ACLU).

One of the court’s main findings in Reno v. ACLU was that “[t]he CDA’s burden on adult speech is unacceptable if less restrictive alternatives would be at least as effective in achieving the Act’s legitimate purposes.” Following the demise of the CDA, Congress passed a more narrowly targeted bill, the Child Online Protection Act (COPA), which made any commercial website selling “material that is harmful to minors” liable unless it implemented some form of age verification or “other reasonable measures” to prohibit access by minors. Once again, the U.S. Supreme Court ruled that such a government mandate was an undue restriction on adults’ access to protected speech, in part because commercially available content filters “are less restrictive” and “may well be more effective than COPA.”

These state device-filter mandates would apply to every device bought or activated by every resident of the states in which they are in force, compelling adult owners to disable the filters to have uncensored internet access on their own devices. Meanwhile, a multitude of parental-control and content-filtering software options are available on the private market, on top of the bevy of controls that tech companies themselves build into their products. The courts will almost certainly recognize that these mandates do not constitute the “least restrictive means” of protecting children from content that is, for adults, protected speech under the First Amendment.

Parents, Not Governments, Must Decide What Is Best for Their Kids Online

In Ashcroft v. ACLU, the litigation that ultimately saw COPA permanently enjoined from taking effect, Justice Anthony Kennedy made the pointed observation that “COPA presumes that parents lack the ability, not the will, to monitor what their children see.” Ultimately, these device-filter mandates usurp parents’ own decisions about how, and to what degree, their children should be protected from the real dangers that exist on the internet.

For parents who want device-level, password-protected control over what their kids can access online, those tools exist, and they should be employed. But mandating that these protections be enabled by default on every mobile device goes too far. A childless person is not forced to buy a car with built-in child safety seats; nor should a mobile phone customer be obliged to enter a password to disable a filter on a device they own.

Thankfully, the majority of Utah-clone device-filter mandates at the state level have stalled or failed during the 2023 session, but they will certainly return in force in future years. Whichever state is first to implement such a measure will likely find itself embroiled in costly litigation that ultimately sees its law struck down by the courts. Lawmakers would do far better to consider alternatives than to enact technically dubious and unconstitutional legislation.
