Though they did not warrant their own full posts in this series, three additional issues related to social media age-verification legislation deserve consideration as well. 

All types of legal guardian relationships should be accounted for

Children may live with legal guardians who are not their parents: grandparents, aunts or uncles, or foster parents, for example. Legislation should not deny these children the same opportunities to use social media that children who live with their parents have. Unfortunately, unless a law’s language gives guardians the same rights and responsibilities as parents, it does just that.

The Protecting Kids on Social Media Act, Utah’s Social Media Regulation Act and Arkansas’ Social Media Safety Act handle this issue effectively by consistently using “parent or guardian” rather than just “parent.” While Utah’s law is the only one that explicitly addresses emancipated minors, the other two appear to cover them under “guardian.”

Algorithms are not the enemy

Another issue is that social media laws often inadvertently ban basic functionality by restricting the use of algorithms. The Protecting Kids on Social Media Act states that platforms “shall not use the personal data of an individual in an algorithmic recommendation system unless the platform knows or reasonably believes that the individual is age 18 or older according to the age-verification process used by the platform.” This particular legislation exempts situations wherein recommendations are shown based on the context of a post rather than a user’s identity or personal information. 

While far more narrowly tailored than some legislation, the Protecting Kids on Social Media Act could effectively bar platforms from showing minors content based on their location, time zone, country, language or interests. A platform could not recommend, say, a local karate class to a minor because he is interested in karate and lives in a certain area. Nor could it recommend a nearby soccer league to a minor based on her location and her interest in soccer. This ban wouldn’t necessarily protect young users; it would just make the internet less useful for them.

Some drafters go further and suggest banning “algorithms” for children outright. But an algorithm is functionally just math; in the absence of an algorithm, all you get is raw, unsorted data. Even a “newest-first” stream of posts is one of countless possible algorithms. While targeting content to children may be unacceptable under certain conditions, banning content targeting based on location or interests doesn’t solve any problems or make the internet safer for children. Before lawmakers consider banning the use of personal information to recommend content to minors, they ought to establish a connection between that information and the potential harm it causes. That way, they could protect minors without making the internet less useful for them, and their proposals would be better positioned to withstand First Amendment scrutiny.
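To make that concrete, here is a minimal sketch in Python of a hypothetical “newest-first” feed (the Post record and function name are invented for illustration, not drawn from any platform). Even this trivial chronological sort is an algorithm: a rule that turns raw, unsorted posts into an ordered stream.

```python
# Illustrative only: a hypothetical, minimal "newest-first" feed.
# Even this trivial chronological sort is an algorithm -- a rule for
# turning raw, unsorted posts into an ordered stream.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    text: str
    posted_at: datetime

def newest_first_feed(posts: list[Post]) -> list[Post]:
    """Return posts in reverse-chronological order."""
    return sorted(posts, key=lambda p: p.posted_at, reverse=True)
```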

Algorithms often make platforms more useful. Some help keep users safe by identifying the likelihood that a piece of content or an account is spam or a scam. Others help people discover local businesses based on their location, interests and other information, including recent internet searches. For example, a bridal store in a specific city may only be recommended to people within a 25-mile radius who list their status as “engaged” on one or more platforms and/or have recently searched for wedding gowns.
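As a rough sketch of the kind of rule described above (the field names and precomputed distance are hypothetical, not any platform’s actual recommendation logic), the bridal-store example might boil down to a filter as simple as this:

```python
# Illustrative only: hypothetical user fields; not any platform's
# actual recommendation logic.
from dataclasses import dataclass, field

MAX_DISTANCE_MILES = 25.0  # the 25-mile radius from the example

@dataclass
class User:
    distance_from_store_miles: float   # assumed precomputed from location data
    relationship_status: str           # e.g. "engaged"
    recent_searches: list[str] = field(default_factory=list)

def should_recommend_bridal_store(user: User) -> bool:
    """Recommend only to nearby users who are engaged and/or have
    recently searched for wedding gowns."""
    nearby = user.distance_from_store_miles <= MAX_DISTANCE_MILES
    engaged = user.relationship_status == "engaged"
    searched_gowns = any("wedding gown" in s.lower() for s in user.recent_searches)
    return nearby and (engaged or searched_gowns)
```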

Social media recommendation algorithms are often designed to maximize engagement, which may lead some users to scroll mindlessly (and far too much). While this can become a serious problem, video games and other mediums can have the same effect, which suggests that algorithms themselves are not to blame.

Finally, algorithms are functionally speech: long, math-driven speech, but speech nonetheless. Regulating them would probably mean overcoming strict scrutiny under the First Amendment, which is no easy feat. Lawmakers must ensure these provisions are narrowly tailored if they are to survive that scrutiny.

Age-appropriate design codes are too vague

California’s Age-Appropriate Design Code Act attempts to ensure that online services are delivered to children in a form appropriate to their age group and safe for them to use, while also protecting their privacy. The intentions are pure, but the execution is severely flawed.

This design code and others like it implicitly require age verification, which makes them a core part of the recent wave of age-verification laws. After all, an online service cannot determine which form of its content to serve until it knows the user’s age, and that requirement creates all the same problems discussed earlier in this series. Even though the law only requires sites to estimate age with “a reasonable level of certainty,” that still amounts to age verification.

Because these codes are modeled after the United Kingdom’s Children’s code, they are written far more loosely and vaguely than most American laws. California’s code was actually drafted by a British baroness and co-sponsored by her organization, which may account for vague definitions like “likely to be accessed by children” or “exposing children to harmful, or potentially harmful, content on the online product.” 

Intentional or not, the law and similar legislative proposals sweep in nearly every website by using language like “likely to be accessed by children.” Every website would either need to verify each user’s age or treat all users as if they were children.

This also creates more privacy problems. For example, the founder of California-based Techdirt noted that the law applies to him because minors access his website. But to know which users are minors and which are not, he would have to start verifying ages and collecting more personal information about users than he would like. Essentially, Techdirt and other privacy-conscious sites would be forced to adopt privacy-violating practices.

These laws also place a massive burden on websites by requiring data protection impact assessments (DPIAs). Under the legislation, a DPIA must be completed before a website can launch any new feature. These assessments introduce substantial liability and discourage sites from adding new features, since each one effectively requires a lawyer. Even if a site has an idea for making its service more useful for children, will it want to risk thousands of dollars in legal fees, let alone a lawsuit? The sheer number of DPIAs the law demands makes it unworkable.

Unfortunately, this is not a comprehensive list of problems with age-appropriate design codes. While they are only one subset of age-verification laws, their popularity and massive flaws warrant attention.

Age-verification proposals are rife with problems that drafters must address. Thoughtful proposals will account for all guardian situations, will not demonize algorithms and will not contain language so vague that it runs afoul of the vagueness doctrine.

This is part of the series: “The Fundamental Problems with Social Media Age-Verification Legislation.”
