Parents, Not Government, Should Protect Children Online
SACRAMENTO — Years ago, U.S. Rep. Maxine Waters (D-Calif.) warned attendees at a conference about the need to watch out whenever public officials claim to be doing something for the children. She was talking specifically about Los Angeles Unified School District’s eminent-domain policies and their impact on South Los Angeles neighborhoods. While it marked the rare instance when I agreed with the California Democrat, she had a point. I often apply that warning to other policy areas.
Lately, lawmakers on the left and right have been trying to protect The Children from the ravages of the online world. It’s true that social media poses myriad problems for teens in particular. Obviously, they are susceptible to deception by malevolent strangers. The Mayo Clinic warns that the internet is “distracting them, disrupting their sleep, and exposing them to bullying, rumor spreading, unrealistic views of other people’s lives and peer pressure.”
There’s no obvious solution other than the tried-and-true approach that conservatives have long understood: Parents need to be involved in their children’s lives, and that includes monitoring or limiting their online behavior. Whenever the government gets involved beyond the basics (such as applying existing laws to online predators, etc.), it creates a host of unintended consequences that protect almost no one and harm large swaths of the population.
California, for instance, has passed the Age-Appropriate Design Code Act, which is scheduled to go into effect next summer. It’s less onerous than some proposed federal efforts to essentially ban minors from using social media but still opens the door to myriad problems by making the state government the arbiter of age-appropriate material. As with all California tech-related laws, the standard here ultimately will become the national standard, given the size of our marketplace.
Based on restrictions in the United Kingdom and the European Union, Assembly Bill 2273 requires online services to include “strong privacy protections by design and by default, including by disabling features that profile children using their previous behavior, browsing history, or assumptions of their similarity to other children, to offer detrimental material.”
That sounds reasonable on its face, but it’s not reasonable given the power it places in the hands of regulators. It lets regulators pressure websites into taking down content they deem detrimental to the physical or mental health of a child, explains Josh Withrow, my R Street Institute colleague who specializes in technology and innovation policy. He notes that the law doesn’t clearly define such detriments.
Yet few legislators from either party want to go on the record opposing protections for children, which no doubt explains why the California legislation passed on the floor with zero “no” votes — even though Democratic civil libertarians and conservative skeptics of government regulation should have known better. Supporters echoed all the usual concerns that accompany every moral panic.
The assembly analysis quoted from a coalition of activist groups:
Children across the globe are facing an unprecedented mental health crisis. Even before the onset of COVID-19 and subsequent social distancing and isolation, teen suicide was on the rise; in the US the CDC found that between 2007 to 2017 the suicide rate among people aged 10 to 24 increased by 56%. And in the year between spring of 2020 and 2021 emergency room visits for girls ages 12 to 17 increased by 50%.
Those are tragic figures, but the new law is unlikely to fix any of that. The end result is that the government will compile more extensive information about internet users to ensure that only age-appropriate users are on the sites. Private companies’ moderation policies are problematic enough, but what happens when state bureaucrats and attorneys general get more involved in this process?
Only Parents Should Regulate Teens’ Internet Use
The libertarian Cato Institute also raises concerns about the government controlling these databases of user information — and we know how insecure those can be. Cato also argues that “age-appropriate design codes may hurt the very young people they are trying to help” by limiting teens’ access to useful information. If California and other governments are too heavy-handed — and when have they not been? — then companies will simply ban underage access rather than endure liability.
We’ll have to see if the law passes constitutional muster. (My cynical side believes that lawmakers passed the law to make a statement but know the courts will toss it out.) The federal lawsuit by the tech trade association NetChoice argues that the law “presses companies to serve as roving censors of speech on the Internet.… If firms guess the meaning of these inherently subjective terms wrong — or simply reach different conclusions than do government regulators — the state is empowered to impose crushing financial penalties.”
Withrow makes the most compelling point: “The effect social-media use has on teens and adults is very individualized in a way that belies a direct correlation between, for example, teen depression and screen time.” Indeed, living in a free-ish society means that individuals, not officials working in some censorship bureau, get to decide what is appropriate and best for themselves and their dependent children. There is no one-size-fits-all solution.
Government has never done a good job of acting as Big Brother. We need to be particularly on guard when officials claim to be regulating us in the name of The Children.