KOSA’s Re-Animated Threat to Free Speech and Innovation
The Kids Online Safety Act (KOSA) is back, and with it, a renewed sense of trepidation for anyone concerned about the future of online free expression. Despite revisions and reassurances from proponents, the core of the bill—a broadly defined “duty of care” imposed on online platforms—remains a threat to the open internet. Aimed at protecting minors, this well-intentioned provision could usher in an era of censorship and neuter the online spaces where young people learn, connect, and express themselves.
At first glance, the idea of requiring platforms to exercise “reasonable care in the creation and implementation of any design” to prevent and mitigate a range of potential harms to minors is commendable. But KOSA’s details are indeed devilish. The psychological harms companies must address include “anxiety,” “depression,” and “compulsive usage,” to name just a few. While some of these are formally defined in the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition, they have no concrete definition in an online context.
Under KOSA, all platforms—from major social media networks to niche online forums and gaming communities—would have to proactively identify and mitigate content that could foreseeably lead to these harms in minors. But how are they realistically supposed to do this? What constitutes “reasonable care” when dealing with millions of diverse users and types of content? Will platforms like Steam (a gaming marketplace), AllTrails (a social platform for hikers), and others be subject to the same requirements as companies like Apple and X?
The combination of “reasonable care” and “any design” gives the government wide-ranging authority to penalize any platform behavior it deems non-conforming. Platforms will have little choice but to deploy overly aggressive content moderation tools, relying heavily on automated systems that lack the nuance to understand context or intent.
A teenager’s post discussing their mental health struggles with a supportive online community could be flagged and removed for using the word “anxiety,” while a forum dedicated to eating disorder recovery could be deemed harmful because it contains the terms necessary for individuals seeking help to find it. And content related to LGBTQ+ identity—crucial for many young people exploring their own identities—could be suppressed under the guise of preventing anxiety or depression if interpreted through a biased lens.
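To see how blunt that kind of automated enforcement can be, consider a minimal keyword-matching sketch. It is purely illustrative; the term list and function here are hypothetical and not drawn from any real platform’s moderation stack, but they show how a context-blind filter treats a supportive post exactly like harmful content:

```python
# A minimal sketch (not any platform's actual system) of keyword-based
# moderation, illustrating why context-blind filters over-remove speech.
FLAGGED_TERMS = {"anxiety", "depression", "self-harm", "eating disorder"}

def should_flag(post: str) -> bool:
    """Flag a post if it contains any watch-list term,
    with no understanding of context or intent."""
    text = post.lower()
    return any(term in text for term in FLAGGED_TERMS)

# A supportive post seeking help is flagged just like harmful content:
print(should_flag("My anxiety has gotten better since I found this group."))       # True
print(should_flag("Recovering from an eating disorder: here is what helped me."))  # True
```

A filter this crude would never survive in isolation, of course, but even far more sophisticated classifiers err on the side of removal when the legal incentive is to avoid any content that might “foreseeably” contribute to a vaguely defined harm.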
Apple already offers screen-time warnings to track time spent on devices, as well as parental controls and app-by-app settings. But would that be enough to satisfy KOSA’s “reasonable care” standard? Policymakers have no clear answer.
Faced with the threat of litigation, platforms will inevitably resort to over-filtering and censorship. Compliance will also be immensely more expensive and difficult for smaller firms, chilling investment and innovation in the space.
Instead of this heavy-handed, censorship-inducing approach, we should focus on solutions that empower individuals and increase transparency.
One vital area for improvement is algorithmic transparency. Instead of broadly banning or restricting content based on vague harms, policymakers should push for platforms to be more open about how their algorithms work—particularly how they recommend and amplify content to minors. Understanding these systems can help parents and researchers identify potential issues and develop strategies to mitigate them. This transparency should not delve into proprietary code but should provide meaningful insights into the factors that influence content delivery and the steps platforms take to avoid amplifying potentially harmful material.
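What might such a disclosure look like in practice? One possibility, sketched below, is a simple machine-readable summary of ranking factors and minor-specific safeguards. Every field name and value here is hypothetical; neither KOSA nor any platform prescribes this format, and it is meant only to show that meaningful transparency need not expose proprietary code:

```python
# A hypothetical transparency disclosure for a recommendation system.
# Field names and values are illustrative only; no platform publishes
# this exact format, and KOSA does not prescribe one.
from dataclasses import dataclass, field

@dataclass
class RankingFactor:
    name: str           # e.g. "watch_time", "topic_similarity"
    weight_rank: int    # relative importance (1 = most influential)
    description: str    # plain-language explanation for parents and researchers

@dataclass
class MinorFeedDisclosure:
    platform: str
    factors: list[RankingFactor] = field(default_factory=list)
    # Content categories the platform does not algorithmically boost for minors.
    amplification_limits: list[str] = field(default_factory=list)

disclosure = MinorFeedDisclosure(
    platform="ExampleVideoApp",
    factors=[
        RankingFactor("watch_time", 1, "How long similar users watched this video."),
        RankingFactor("topic_similarity", 2, "Overlap with topics the user recently viewed."),
    ],
    amplification_limits=["extreme dieting content", "appearance-comparison challenges"],
)
```

A disclosure along these lines would give parents and researchers something concrete to scrutinize without forcing platforms to hand over their ranking code.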
Another essential component is empowering parents with better tools to manage their children’s online experiences. While many platforms already offer parental controls, they can be inconsistent, difficult to find, or lack sufficient granularity. This is why policymakers should encourage (and perhaps even mandate) the development of user-friendly, robust parental control dashboards that give parents and guardians clear visibility and control over their children’s activity, screen time, privacy settings, and content exposure.
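As a rough illustration of the granularity such a dashboard could offer, here is a sketch of a unified settings object a platform might expose. The fields are hypothetical and not drawn from any vendor’s actual controls; the point is simply that the relevant switches can live in one legible place:

```python
# A hypothetical, minimal parental-controls settings object - a sketch of the
# kind of granularity the article calls for, not any vendor's real API.
from dataclasses import dataclass

@dataclass
class ParentalControls:
    daily_screen_time_minutes: int      # hard cap enforced per day
    bedtime_window: tuple[str, str]     # e.g. ("21:00", "07:00"), app locked
    direct_messages_allowed: bool       # restrict DMs from unknown accounts
    content_maturity_level: str         # "child", "teen", or "unrestricted"
    share_activity_report: bool         # weekly summary sent to a guardian

settings = ParentalControls(
    daily_screen_time_minutes=90,
    bedtime_window=("21:00", "07:00"),
    direct_messages_allowed=False,
    content_maturity_level="teen",
    share_activity_report=True,
)
```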
Finally, we should explore the potential for industry self-regulation, perhaps drawing inspiration from models like the Entertainment Software Rating Board (ESRB) in the video game industry. As a self-regulatory body, the ESRB provides content ratings for video games based on their suitability for different age groups. While not perfect, this system offers parents valuable information and allows them to make informed choices about the games their children play. A similar model could work for online platforms: an independent, industry-led body could develop guidelines and best practices for content moderation, age-appropriateness, and design features, providing clear standards that platforms can adhere to without the looming threat of government lawsuits based on vague criteria. This approach would foster a sense of shared responsibility within the industry and allow for more flexible and adaptable standards than rigid government regulations.
KOSA’s reintroduction is a clear signal that concerns about minors’ online safety remain a priority—and rightly so. However, the bill’s sweeping “duty of care” provision prioritizes government control and censorship over individual liberty and the vibrancy of online spaces. The goal should be to create a safer online environment by providing individuals with the tools and information they need to navigate the digital world responsibly—not by silencing their speech. In its current iteration, KOSA will not achieve that goal; it will only lead to a more restricted, less dynamic, and ultimately less beneficial internet for everyone.