Continuing Texas’ tradition of attempting to enact sweeping internet regulations, State Rep. Shelby Slawson has introduced HB 18, dubbed the Securing Children Online Through Parental Empowerment (SCOPE) Act. Like so many of the well-intentioned bills being introduced across the country to protect children online, the SCOPE Act would create requirements that are functionally impossible to comply with and would drastically diminish the usability of many internet services for all users, not just children.

“A digital service provider,” HB 18 reads, “shall prevent physical, emotional, and developmental harm to a minor using a digital service.” That covers content that might promote self-harm, eating disorders or bullying; content that constitutes deceptive marketing practices; and content that might encourage “patterns of use that indicate addiction.” This duty of care is similar in many respects to California’s problematic Age-Appropriate Design Code (AADC) in that the scope of harms it demands websites prevent is broad and indefinite, and thus likely impossible to comply with. The content that may be considered harmful to any given child’s mental or emotional well-being is inherently subjective, leaving it to individual judges to define the law’s scope in practice.

The SCOPE Act does not apply only to social media sites; it encompasses essentially any website that might conceivably collect a minor’s information if the site “knows or should know that the digital service appeals to minors.” It then lists features that would make a site presumptively “tailored towards minors,” including merely containing “animated characters” or “colloquial use of language that is common among minors.” We explained last year why applying such an expansive knowledge standard doesn’t make sense in California’s AADC: it would impose burdens on many online services that likely pose little to no danger to minors.

And although HB 18 does not explicitly mandate that platforms employ intrusive age-verification techniques, it would certainly place major pressure on them to do so, which would endanger the ability of any child or adult to remain anonymous online. That is a serious problem, as the Supreme Court has recognized a right to speak anonymously. Further, the most accurate age-verification services are highly intrusive, relying on facial-recognition technology or government IDs. Adults and children alike would have to verify themselves in that way before accessing most websites. It is also worth noting that Pornhub’s parent company owns the most commonly used age-verification tool.

These issues of the law’s scope and obligations are crucial because any perceived violation can subject platforms to both private civil liability and state consumer protection enforcement. Faced with that exposure, websites will have little choice but to massively over-police content. For example, a filter designed to remove harmful content about eating disorders or mental health will necessarily also filter out self-help content that some teens might need. The private right of action would create a sort of heckler’s veto on content, where anyone who believes a given piece of content on a covered site could harm a minor’s mental or emotional health could sue, and platforms are more likely to simply take the content down than pay to litigate.

The SCOPE Act also places the same duty of care on a site’s algorithmic content recommendations and requires that sites enable maximum possible protections by default, which would presumably require any site that might conceivably host minors to disable content recommendations. This would seemingly apply to everything from social media news feeds to video streaming services and potentially even sales platforms. But the bill doesn’t exclude “time-ordered” content or “content from people the user follows” from the definition of “algorithm,” so the language would apply even to those circumstances. Sorting and ranking content are two of the fundamental functions that make online platforms useful to their users; forcing those functions to be disabled by default, even just for children, would effectively render many of these services useless.

Protecting children from the real dangers that exist online is a worthy goal, but in pursuing it, lawmakers need to consider whether the protections they seek are actually beneficial and reasonably possible to comply with. Many studies, commissions and panels of experts have tried to crack the code of how best to protect teens and youth online, and nearly all have agreed that a large component of any workable framework for online safety will ultimately continue to be parental supervision and education. The unintended consequences of ill-conceived interventions stand to make much of the internet less usable for children and adults alike.