As access to the internet and social media has become ubiquitous, it is natural and appropriate that concerns have arisen about the content and interactions our kids are exposed to online. Unfortunately, concerns about protecting children from new technologies often manifest as a desire to “do something,” a push for action that can outpace a careful, deliberate understanding of how proposed fixes will work in reality.

True to form, recent concerns about the impact of social media on children have produced a tide of proposed legislation, frequently bipartisan, at the state and federal levels that, however well-intentioned, threatens to degrade free speech and the usability of online platforms for all users, including kids. In this post, we provide a non-exhaustive list of five of the most common and most troublesome policies that legislators should avoid if they want to make online platforms safer for kids.

1. Don’t require age verification that destroys kids’ and adults’ privacy online.

One central issue with imposing child-specific prohibitions on online platforms is the difficulty of age verification. Most platforms that host user-generated content already prohibit minors under the age of 13 from creating a profile and, by default, filter out content inappropriate for users under 18. However, many kids under 13 can and do create social media accounts anyway (sometimes with parental consent). Thus, many of the proposed kids’ online protection laws obligate social media sites to verify their users’ ages in some reliable way, meaning self-identification does not suffice.

By its nature, age verification via government ID or other documentation forecloses the possibility of creating a fully anonymous profile on a site, for kids and adults alike. The right to speak anonymously has been treasured in this country since before there was a First Amendment, going back to pseudonymous publications like the Cato Letters and the Federalist Papers. Both the Supreme Court and lower courts have repeatedly upheld the right to anonymous speech in all but the narrowest circumstances. The burden that age verification systems would place on even adult access to online services, because of the loss of anonymity they imply, was a major reason that the courts blocked the Child Online Protection Act (COPA), which never took effect, and this current wave of age verification mandates may well meet a similar fate.

Platforms such as Instagram have experimented with privacy-protective ways to screen out minors accurately. Age verification that does not compromise anonymity, while theoretically possible, is imperfect at best, and it tends to require some intrusive component such as a facial scan or submitting a video of oneself for review. Yet many of the bills under consideration default to requiring documentary identity verification, such as a driver’s license; at a bare minimum, they should allow platforms to adopt privacy-maximizing alternatives instead.

2. Don’t force platforms to collect even more sensitive user data, which is ripe for data breaches.

Another side effect of online age verification is that it typically forces platforms to process sensitive, personally identifiable data about their users that they might not otherwise collect, whether about the children themselves or their parents. In the most extreme cases, like Utah’s SB 152, a law might actually require platforms to “securely” retain whatever identification materials they are provided.

Requiring TikTok or Snapchat to collect and store more information about their users, especially minors, seems to conflict with the goals of privacy and safety online. It is especially troubling that these proposed laws would compel TikTok, which has been banned on state government devices in more than half the states over security concerns related to China, to collect such information about its users. Moreover, any database of sensitive, personally identifiable information presents a tempting target for hackers. Consider how many times Facebook, one of the few social media platforms that already requires full user identity verification, has had its users’ data breached. At the very least, an age verification requirement should include a provision requiring sites to delete any personally identifiable information submitted once the user is approved.
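To make that data-minimization point concrete, here is a minimal, purely illustrative sketch of what such a verify-then-delete provision could require in practice. Every name in it is hypothetical and drawn from no real platform’s code; the idea is simply that the uploaded document is checked transiently and only the boolean result is ever persisted.

```python
from dataclasses import dataclass
from datetime import date, datetime, timezone

@dataclass
class VerificationRecord:
    """The only data a platform would retain after an age check."""
    user_id: str
    age_verified: bool      # the single fact worth keeping
    verified_at: datetime   # when the check happened

def _extract_birth_date(document_bytes: bytes) -> date:
    # Stand-in for real document parsing/validation (hypothetical);
    # here we pretend the document is just an ISO date string.
    return date.fromisoformat(document_bytes.decode())

def verify_and_discard(user_id: str, document_bytes: bytes,
                       minimum_age: int = 13) -> VerificationRecord:
    """Check age in memory; persist only the yes/no outcome."""
    birth_date = _extract_birth_date(document_bytes)
    today = date.today()
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day))

    record = VerificationRecord(
        user_id=user_id,
        age_verified=age >= minimum_age,
        verified_at=datetime.now(timezone.utc),
    )

    # Drop our local reference to the raw document; nothing from it
    # is written to long-term storage.
    del document_bytes
    return record
```

Under a rule like this, the platform’s database would contain only a user ID, a boolean and a timestamp; the driver’s license image itself would exist only in memory for the duration of the check.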

3. Don’t blame the algorithms.

Much of the urgency behind the many legislative proposals to protect kids online is driven by anecdotal evidence about how social media and the internet affect kids. An extreme example of this thinking in legislation is Minnesota’s HF 1503, which would ban all algorithms that direct user-specific content toward kids. Bills in a number of states would ban all targeted advertising to children, and others, such as the federal Kids Online Safety Act (KOSA, S. 3663), attempt to hold platforms liable for causing addiction or other mental harms to kids.

Thorough research on minors’ online access and social media use shows that it is overly simplistic to describe kids’ access to these technologies as merely harmful or dangerous. Repeated studies have found that social media does not pose a mental health danger unique to teens. In fact, some evidence shows that it can benefit their well-being, and teens themselves are much more likely to rate social media as mostly a benefit to them.

Algorithmic content recommendation is at the core of what makes online platforms useful to their users—it helps each of us sort through an unfathomable amount of content online to find what is actually enjoyable or useful. The way to combat kids spending too much time in front of a screen, or being sucked into content loops that affect their mental health, is better communication and education, not making online platforms functionally useless for kids.

4. Don’t take away parental choice.

Some of the bills advancing at the state level go so far as to ban minors’ access to social media services entirely, even with parental consent. Texas’ HB 896 would flatly ban kids under the age of 18 from having social media accounts, and the original text of Utah’s HB 311 (since revised) would have barred those aged 16 or younger. Regardless of whether such restrictions are advisable, the decision of whether, and when, to allow a child access to social media, and the internet more broadly, should lie with their parents or guardians, not the government.

In any case, the Supreme Court has found that attempts to restrict access to social media platforms abridge users’ free speech, and it has similarly held that laws conditioning a minor’s access to (non-obscene) speech on prior parental permission are unconstitutional.

5. Don’t make surveillance of kids a default.

Many online child safety proposals also take away parental choice by presuming to dictate how tightly controlled or restricted children’s accounts ought to be by default. Of course, some defaults make sense, like the way most social media platforms and search engines automatically filter out violent and pornographic content. But several state proposals default to forcing parental co-ownership of, or full access to, a minor’s social media account, sometimes up to the age of 18.

For example, Utah’s SB 152 explicitly requires that all parents or guardians who allow their children under the age of 18 to have a social media profile be given password access with full visibility into every post or message sent or received by their child’s account. Maryland’s HB 254 likewise appears to encourage parents of minors aged 13 to 18 to create joint social media accounts from which they can request all account data on demand. Parents will arrive at different value judgments about how much they trust their kids to be responsible online and what they ought to have access to, and lawmakers should not be the ones deciding where to draw that line.

Conclusion

There is little doubt that access by minors to the internet and social media raises real and significant concerns about their safety and mental well-being. However, not all of these perils can be solved by mere legislation, and to the extent that new laws are needed, they must take into account how they interact with other principles of good internet governance, such as privacy, security and free speech.

Overall, policies aimed at protecting children online should focus on giving parents better and easier-to-use tools to make their own informed decisions about how much access their kids should have to the internet, not on making those decisions for them in advance.