As written, age-verification bills fall short of legislators’ aspirations in multiple regards. From promoting parental choice to restricting obscene content to curbing excessive video consumption, these bills fail to achieve the goals set by their authors.

These bills don’t promote parental choice—they eliminate it

While their ostensible purpose is to ensure parents have a say in what children see online, these bills effectively remove parental choice. Currently, parents can opt to use filters or blocking software to restrict certain apps and websites on their children’s devices. But age-verification legislation often explicitly or implicitly bans minors from social media altogether.

The Protecting Kids on Social Media Act would ban all children under age 13 from using social media platforms, though they could still view content without logging in or interacting with it. Texas legislation would ban anyone under age 18 from using social media, and some conservatives have advocated for similar bans. Rather than giving parents options for handling how their children engage online, these bills take them out of the equation completely.

A legislative proposal in Wisconsin could remove parental choice by imposing mandatory social media curfews on underage users. According to a Wisconsin Public Radio report, “Rep. David Steffen, R-Green Bay, says he’ll sponsor a bill that would give parents full control over their kids’ social media accounts and impose a curfew for social media users under 18.” But a variety of platforms and apps already enable parents to set strict time limits and curfews for their children. Each child is unique, and adults should be able to customize parental controls to meet their needs, whether that means limiting usage time for kids who struggle to put down their devices or permitting those with extracurricular commitments to use devices during “curfew” hours. There is no one-size-fits-all solution.

Another example is the California Age-Appropriate Design Code Act, which arbitrarily forces companies to design their services to “protect children.” Decisions as to what content might be harmful to a child’s mental or physical well-being are subjective, and they are different for every parent and child. But rather than empowering parents to protect their children, California places the burden on the online platforms themselves.

Even laws that don’t explicitly ban those under 18 from using social media may still have that effect. By now, both the regulatory compliance hurdles and the “high costs of serving teens” are clear. Meanwhile, users under 18 make up just a small fraction of ad audiences for these platforms (only on TikTok does this group break 30 percent). Bans on targeted advertising, coupled with the high liability cost of allowing minors to use social media, create a real incentive for platforms to simply ban these users outright. If this happens, parents will no longer have a choice when it comes to their kids’ social media use.

All of this is important in light of a new American Psychological Association (APA) report examining both the opportunities and dangers for children who use social media. The report explains how families can work together to encourage positive social media use that benefits children and prepares them to use these websites throughout their lives. 

They don’t shield children from obscenity, either

In a press release accompanying the Protecting Kids on Social Media Act, Sen. Tom Cotton (R-Ark.) complains about “explicit content” on social media. But while this legislation bars children from using social media or requires parental consent for them to do so, it does not entirely prevent them from accessing that content. According to the plain text of the bill, which exempts “merely viewing content, as long as such viewing does not involve logging in or interacting with the content or other users,” the ability of underage users to access explicit content will remain unaffected.

Plenty of websites devoted to explicit content do not require viewers to log in, which means the law cannot restrict this content. The only additional limitation would be that underage users may not hold accounts with pornographic websites that also function primarily as social media websites. But pornographic websites that limit user interaction (for example, ones that do not allow users to comment on posts, engage with other users or upload their own content) would not be covered by this law in any way.

Finally, they don’t curb excessive use (however you define that)

Though the aforementioned press release includes quotes from several sponsors regarding social media “addiction,” the legislation itself does nothing to address these concerns. The general theory of those concerned about social media addiction is that kids are unable to look away from the endless stream of videos on platforms like TikTok. Yet nothing in this bill would prevent young users from watching, so long as they do not log in or interact with the content.

Excessive use of social media may not even be accurately described as an addiction—in part because some people benefit from it while others face negative consequences. Many of the studies around young people’s social media use present mixed evidence, and the APA itself stresses that “[u]sing social media is not inherently beneficial or harmful to young people.”

To give the authors their due, it is entirely possible that they intentionally chose not to close these loopholes in order to avoid creating age-verification requirements for anonymous users, for those who choose not to log in or for non-social media websites. Indeed, U.S. courts ruled the Child Online Protection Act of 1998 unconstitutional in part because it penalized operators of websites that published sexually explicit content unless they required viewers to verify their ages with credit cards or other measures and blocked minors from viewing it.

In all of these cases, however, the authors should address their legislation’s shortcomings—or at least refrain from making claims it does not support.

This is part of the series: “The Fundamental Problems with Social Media Age-Verification Legislation.”
