A long-standing proverb of tech politics is that it’s hard for a legislator to vote against kids. Little wonder, then, that California’s Age-Appropriate Design Code (AADC), AB-2273, was voted through by the legislature with zero “no” votes in August of this year. The bill’s objective is to require online service providers to configure their data collection and automation systems in favor of greater privacy and safety for under-18 users of their services. The nature of the AADC’s rules, however, has led to dire warnings that the bill will “break the internet.”

While the law does not take effect until mid-2024 and will certainly be challenged in court, it’s worth considering more than just the litigation to come. Given that the law establishes a working group to assess implementation and gives the attorney general authority to “solicit broad public participation and adopt regulations to clarify” its rules, this article recommends potential opportunities for improvement, focused on the law’s scope, its obligations, and the technologies used for age verification.

Greater age-gated regulation of company practices feels almost inevitable. California’s AADC follows on the heels of the United Kingdom adopting a similar, identically named code in 2020. Legislation along similar lines is pending before the U.S. Congress and likely to be revisited next year in the form of the Kids Online Safety Act (KOSA). The U.K. code focuses on privacy and data protection, limiting how data can be collected from and about users and governing other privacy-related policies and practices; both California’s law and KOSA push the implementation and scale of protection further in various ways, targeting a broader spectrum of platform behavior.

But for all of these laws, the formula in the abstract is the same and entails three pieces: scope, obligation and verification. Scope refers to the threshold that determines which services are subject to the law. Obligation specifies what in-scope services must do, or must do differently. Verification entails the mechanisms that identify the specific age contexts that trigger changed practices. Reviewing the AADC against this framework helps identify six opportunities for forward-looking improvement, which can also serve as guidelines for future legislative proposals, particularly in the event that the AADC is struck down or preempted.

1. Clarify the nature of weighting factors in applying the knowledge standard

The scope of the AADC raises as many questions as it answers: It applies to services “likely to be accessed by children.” This broad knowledge standard is intentional, meant to circumvent earlier laws that turned on whether a provider had “actual knowledge” that children were using the service, a test that created incentives to avoid acquiring any such knowledge. These perverse incentives are also addressed in the proposed federal privacy law, the American Data Privacy and Protection Act (ADPPA), through variable standards based on entity type. That bill is making progress and would, in many ways, improve on the fragmented, state-by-state future portended by the California law.

The AADC’s approach to the question of knowledge offers several “indicators” relevant to a knowledge determination, such as “advertisements marketed to children” and “design elements that are known to be of interest to children.” But as any fan of South Park knows, just making something a cartoon doesn’t indicate that it’s meant for children. The first opportunity to improve the AADC is to make clear that these are factors to be weighed rather than strict scope triggers. In other words, the presence of a single child-friendly design element or some level of similarity to a children’s service should not immediately trigger the law.
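As a purely hypothetical illustration of the difference (the statute prescribes no scoring formula, and the indicator names and weights below are invented), a weighted-factor reading aggregates the indicators before reaching a conclusion, while a strict-trigger reading pulls a service into scope on any single match:

```python
# Hypothetical illustration only: the AADC prescribes no scoring formula, and
# these indicator names and weights are invented for the example.
INDICATOR_WEIGHTS = {
    "ads_marketed_to_children": 0.40,
    "child_oriented_design_elements": 0.15,
    "similar_to_known_childrens_service": 0.20,
    "audience_research_shows_child_users": 0.25,
}

def strict_trigger(present: set[str]) -> bool:
    """In scope the moment any single indicator is present."""
    return any(i in INDICATOR_WEIGHTS for i in present)

def weighted_assessment(present: set[str], threshold: float = 0.5) -> bool:
    """In scope only if the weighted indicators together cross a threshold."""
    score = sum(INDICATOR_WEIGHTS.get(i, 0.0) for i in present)
    return score >= threshold

indicators = {"child_oriented_design_elements"}  # e.g., a cartoon mascot alone
print(strict_trigger(indicators))       # True  -- one element pulls the service in
print(weighted_assessment(indicators))  # False -- one factor, weighed, does not
```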

2. Clarify how fines are and aren’t triggered with more specificity

Ambiguity in scope also arises in the context of penalties. The AADC imposes a fine of $2,500 “per affected child” where the violation is “negligent” and a higher $7,500 fine per affected child where it is “intentional.” Notably, under certain circumstances, businesses have a 90-day right to cure violations and thereby avoid these penalties. Nevertheless, the penalty approach in the AADC introduces (at least) two significant and related questions: first, what constitutes negligence by a platform (because while intentionality is also murky, negligence is more so); and second, when a child is deemed “affected” by the act. The absurdity of estimating the latter with any semblance of accuracy, absent explicit identification of all users, counsels against relying on it in future policy proposals. The former, however, offers some opportunity for improvement.
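A back-of-the-envelope calculation, using an invented user count, shows how quickly per-child fines compound and why the “affected child” count matters so much:

```python
# Illustrative arithmetic only; the user count below is invented, and the statute
# frames these amounts as fines "per affected child," not as automatic totals.
NEGLIGENT_FINE_PER_CHILD = 2_500
INTENTIONAL_FINE_PER_CHILD = 7_500

affected_children = 100_000  # hypothetical estimate

print(f"Negligent exposure:   ${affected_children * NEGLIGENT_FINE_PER_CHILD:,}")
print(f"Intentional exposure: ${affected_children * INTENTIONAL_FINE_PER_CHILD:,}")
# Negligent exposure:   $250,000,000
# Intentional exposure: $750,000,000
```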

For example, if a service provider undertakes a thorough self-audit of its products and marketing practices, determines that its services are unlikely to be accessed by children under the law, and documents that assessment, then it should not be considered negligent even if its services are later found to have been accessed by children. The working group specified in the AADC could offer guidance for such a self-assessment, or even provide an online survey tool with questions for consideration, although a voluntary checklist alone would not define negligence under the law in a manner that provides true clarity for businesses. Clarity on how fines are triggered is the second opportunity for improvement.
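As a hypothetical sketch of what documenting such a self-assessment could look like in practice (the record structure and every field name are invented, not drawn from the law or any working group guidance):

```python
# Hypothetical record for documenting a "likely to be accessed by children"
# self-assessment; all field names are invented for illustration.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ScopeSelfAssessment:
    service_name: str
    assessment_date: date
    indicators_reviewed: dict[str, str]  # indicator -> finding
    conclusion: str                      # "likely" or "unlikely" to be accessed by children
    reviewed_by: str
    supporting_evidence: list[str] = field(default_factory=list)

record = ScopeSelfAssessment(
    service_name="ExampleCo photo editor",
    assessment_date=date(2023, 1, 15),
    indicators_reviewed={
        "advertisements marketed to children": "none in current or planned campaigns",
        "child-oriented design elements": "general-purpose UI; no child-directed themes",
    },
    conclusion="unlikely",
    reviewed_by="Privacy counsel",
    supporting_evidence=["2022 audience survey", "ad campaign briefs"],
)
```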

3. Focus obligations on targeted impact assessments and mitigation

Once the scope threshold is triggered, the AADC imposes 18 obligations. Some of these, notably the requirement to conduct and document impact assessments and to engage in tailored mitigation to reduce the risk of harm to children, are targeted to the purpose of the bill. Others, such as the generic requirements to enforce published privacy policies and terms and to offer privacy tools, merely restate existing practices and exert no practical influence on corporate behavior; they only create an unnecessary risk of liability. One obligation stands out as particularly difficult to implement: the requirement to provide privacy information “using clear language suited to the age of children likely to access that online service.” This seems to convey agency to children to make choices in the context of a bill that otherwise deprives them of such freedom, but more importantly, it is impossible to implement for services used by children too young to read. The working group therefore has an opportunity to improve the AADC by focusing its obligations on targeted impact assessments and mitigations rather than on requirements that principally add compliance burden and liability risk.

4. Encourage accurate descriptions of any tradeoffs in privacy choices

Consistently missing from many conversations around privacy are the inherent tradeoffs in functionality, including features and ease of use. For example, opting out of behavioral targeting of advertisements means that ads will be less likely to align with a user’s interests. To inform users fully of the consequences of any privacy-related choices they might make, service providers must be permitted to describe the tradeoffs involved, including the benefits, such as more relevant advertising, that come from relaxing privacy settings and allowing greater data collection and use. It is unclear under the AADC whether such education would be considered a dark pattern or some other unlawful attempt at manipulation. A fourth opportunity for the working group is to encourage businesses to describe privacy tradeoffs in functional terms sufficient to enable the most meaningful choices possible, and to make clear that such education is not inherently unlawful manipulation.

5. Take identity verification off the table as an age verification mechanism

Most of the discussion around age- and child-related legislative proposals deals with the third leg of the stool: verification. While some of the AADC’s obligations, such as the need to conduct impact assessments, operate at the system level, others, particularly the most onerous ones, kick in only during interactions with individual users who are assessed to be children. Under the AADC, businesses are given two nominal paths to make such assessments. In reality, there is only one path, as the second is essentially to treat everyone as a child, which is untenable for many current business models. The remaining path obligates a business to “[e]stimate the age of child users with a reasonable level of certainty appropriate to the risks that arise from the data management practices of the business.”

The fifth opportunity is to take identity verification entirely off the table as an approach to solving the age verification problem. In many countries, including the United Kingdom, online safety conversations create massive ancillary pressures to determine the real identity of internet users, even though research consistently shows that ending online anonymity would do more harm than good. Nothing in the AADC implies an obligation to determine user identity, which is fortunate considering the challenges such an obligation would pose for privacy (ironic in an ostensibly privacy-protecting bill). In fact, many of the provisions in the AADC regarding how data can be collected and used to estimate age would make it difficult to determine identity or other personal characteristics beyond age. But additional clarification from the working group would nevertheless be helpful.

6. Encourage the use of standards and widely accessible verification tools, and do not force adoption of expensive proprietary technologies

The AADC’s standard of a “reasonable level of certainty” is, on its face, reasonable, but it is completely unspecified in practice. It is little surprise, then, that one of the most explicit asks of the working group is to ensure “that age assurance methods used by businesses that provide online services, products, or features likely to be accessed by children are proportionate to the risks that arise from the data management practices of the business, privacy protective, and minimally invasive.” But it will be hard for the working group to create much more clarity, as the risks vary widely across businesses and are therefore difficult to cluster into clear buckets.

Ultimately, businesses will be expected to purchase or build in-house technologies to guess the age of individual users. This creates an inherent risk of regulatory advantage for large companies that can buy or build best-in-class verification tools to minimize their chance of liability, leaving smaller companies to bear proportionally higher costs or face greater risk. Increasingly, innovation in age estimation involves the use of artificial intelligence (AI) to guess age from a picture, which can be taken in real time from a mobile device or laptop camera. This is a modern-day twist on facial recognition, and one that will certainly not be free to the business, nor subsidized by the regulator demanding it.
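As a rough sketch of how such an estimation step might feed an access decision (the model itself is a placeholder for whatever a vendor or in-house team would supply, and the error-margin logic is an invented illustration, not an AADC requirement):

```python
# Sketch only: estimate_age_from_frame stands in for whatever licensed or in-house
# age-estimation model a business might use; nothing here is real product code.
def estimate_age_from_frame(frame: bytes) -> tuple[float, float]:
    """Return (estimated_age, margin_of_error) for a camera frame.

    Placeholder for a facial age-estimation model; not implemented here.
    """
    raise NotImplementedError("plug in a licensed or in-house model")

def treat_as_child(frame: bytes, adult_age: int = 18) -> bool:
    """Apply the estimate conservatively: uncertainty counts toward 'child'."""
    estimated_age, margin = estimate_age_from_frame(frame)
    # If the lower bound of the estimate falls below the threshold, the safer
    # (and, for the business, costlier) assumption is that the user is a child.
    return (estimated_age - margin) < adult_age
```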

The best alternative to inherently expensive and likely imperfect AI estimation is to rely on a third-party provider that has a legitimate reason to know the user’s age with reliability, such as a banking service. There are inherent privacy challenges in any technical system that creates and transmits a reliable age verification token, particularly the challenge of preventing the age verifier from learning which site the token will be used for. But where some amount of trust already exists between a user and a body that can issue an age verification credential, an active Internet Engineering Task Force draft is exploring the possibility of anonymous-credential authentication mechanisms. Such mechanisms could build off existing trust to establish reliable age verification for other online sites and services, which are not, and should never need to be, trusted in the same way.
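A deliberately oversimplified sketch conveys the shape of the idea: a trusted issuer, such as a bank, signs a minimal age claim bound to a one-time nonce, and the relying service checks the signature without learning who the user is. (This is not a true anonymous credential of the kind the IETF work contemplates, which would add blinding or zero-knowledge proofs so the issuer also cannot link the credential to where it is used; the token format and field names below are invented, and the example assumes the third-party `cryptography` Python library.)

```python
# Oversimplified, invented token format; NOT a real anonymous-credential scheme
# (no blinding or zero-knowledge proofs). Requires the third-party "cryptography" library.
import json
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- Issuer side (e.g., a bank that already knows the user's age) ---
issuer_key = Ed25519PrivateKey.generate()
issuer_public_key = issuer_key.public_key()  # published for relying services

def issue_age_token(age_over_13: bool) -> bytes:
    claim = {"age_over_13": age_over_13, "nonce": os.urandom(16).hex()}
    payload = json.dumps(claim, sort_keys=True).encode()
    return json.dumps({"claim": claim, "sig": issuer_key.sign(payload).hex()}).encode()

# --- Relying service: verifies the age claim without learning the user's identity ---
def verify_age_token(token: bytes) -> bool:
    envelope = json.loads(token)
    payload = json.dumps(envelope["claim"], sort_keys=True).encode()
    try:
        issuer_public_key.verify(bytes.fromhex(envelope["sig"]), payload)
    except InvalidSignature:
        return False
    return bool(envelope["claim"]["age_over_13"])

token = issue_age_token(age_over_13=True)
print(verify_age_token(token))  # True
```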

Such technical considerations are beyond the likely scope of the AADC working group. But ongoing technology research and innovation can be abstracted into a sixth opportunity for improvement: invest in standards for age estimation rather than relying on, or assuming the universal feasibility of, expensive, fragile, proprietary solutions that would create disproportionate compliance costs for small and medium-sized businesses.

Conclusion

Age-appropriate design is a very difficult, and yet perhaps inevitable, challenge facing lawmakers. Whether or not they reflect the right balance of policy interests, the U.K. and California rules have stepped forward, and the same issues appear likely to be picked up by the U.S. Congress in the near future. Thus, any progress the AADC working group can make in clarifying scope, balancing the impact of obligations, and shaping age verification technologies and safeguards will prove immensely valuable to businesses and to users of all ages.
