
Lawmakers “Move Fast and Break Things” when Regulating Social Media, but AI Can (and Should) Be Handled Differently

Author

Shoshana Weissmann
Director, Digital Media; Fellow


This report was originally authored for the TechREG Chronicle at Competition Policy International.

01 INTRODUCTION

“Move fast and break things” is the infamous onetime motto of social media giant Facebook (now Meta), which highlighted its commitment to pursuing innovation while acknowledging the potential for missteps along the way. Notably, this phrase also describes the American government’s approach toward regulating social media. Once legislators realized the political fruitfulness of targeting these platforms, bills to regulate and hearings to denigrate social media proliferated. But in many cases, these actions lacked a coherent goal and seemed reactionary — not to mention that proposed legislation would have created impossible compliance requirements if enacted. Simply put, legislators are moving fast and breaking things.

Meta has long abandoned its founding ethos. But although many have criticized it, that same attitude is not unusual in the technology space.[2],[3] Whatever issues people have with the approach when used in business, “move fast and break things” is a particularly dangerous philosophy for regulators to adopt. It can eliminate whole categories of businesses, criminalize innocent conduct, and prevent future innovations. Unfortunately, in this “do something” era of social media regulation, lawmakers have embraced it. Few legislative products in this vein have passed in the United States, but endless proposals continue to offer poor means-ends fits to the specific harms they target. This holds true across proposals targeting Section 230, algorithms, and age verification. Meanwhile, carefully considered ideas like creating a federal privacy law, closing Fourth Amendment loopholes, and preventing federal employees from pressuring companies to remove online speech have received less attention. But a new wave of mixed efforts concerning artificial intelligence (“AI”) draws further attention to the differences between careful and careless regulation.

02 SECTIONING OFF 230

For years, legislators blamed Section 230 of the Communications Decency Act of 1996 for every perceived harm posed by social media — from spreading misinformation to enabling censorship and hate speech.[4],[5],[6] During congressional hearings, Republicans regularly complained that social media companies removed specific user content while Democrats complained that the same content was not removed faster.[7]

In reality, many complaints about Section 230 are actually about the First Amendment. The U.S. Constitution protects the right to tell many kinds of lies and the right of social media companies to moderate content as they see fit.[8],[9] What many classify as “hate speech” is generally not unlawful, and government attempts to remove it would violate the First Amendment.[10]

A. Section 230 Protects Social Media Companies

Section 230 states that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”[11] It ensures that websites are not held liable for unlawful content posted by users and prevents website operators from being taken to court over speech they did not create or help develop.

This law is important because it solved something known as the “moderator’s dilemma.” Before Section 230 became law, the legal standard was that operators who moderated content or tried to keep platforms family-friendly were liable for any unlawful content they missed, while those who did not were assumed to be unaware of unlawful content and therefore not liable.

This standard disincentivized the creation of safe online communities. Both options were untenable: either forgo content moderation, allowing harassment, racism, spam, and other objectionable speech to pollute the platform; or moderate any and all content that could potentially prompt a lawsuit — including whistleblower posts, negative commentary about another person, and even off-color jokes. Notably, there is often no way to know whether certain user content is unlawful. Defamation claims are litigated in court, making it impossible for a website operator to know in advance whether a given post is true or defamatory.

In one key case that preceded the law, Stratton Oakmont — run by Jordan Belfort, colloquially known as the “Wolf of Wall Street” — sued a platform called Prodigy for defamation and won. A whistleblower had posted on a Prodigy message board that the firm was a “cult of brokers who either lie for a living or get fired.”[12] Prodigy had no way to know whether the statement was accurate, but because the platform attempted to reduce undesirable content, the court treated it as a publisher and held it liable for the allegedly libelous post. The ruling came before Stratton Oakmont was exposed for widespread fraud.

B. Dissatisfaction Drives Legislation

Despite the success of Section 230 in allowing social media to grow throughout the late 1990s and early 2000s, elected officials eventually joined a dissatisfied chorus of users who believed social media companies were removing or restricting too much — or too little — user content. This sentiment manifested in dozens of bills that either encouraged or discouraged content moderation by creating liability for platforms based on user speech.[13] Cosponsors often united behind a single piece of legislation to achieve inherently opposed goals without ever addressing the contradiction. Sponsors of the Internet Platform Accountability and Consumer Transparency Act, for instance, allude to the fact that their bill would force both more moderation and less moderation.[14],[15]

Sen. Amy Klobuchar (D-Minn.) introduced the Health Misinformation Act of 2021, which would have required the U.S. Secretary of Health and Human Services to work with other agencies and experts to determine what constitutes health misinformation. It would also have amended Section 230, allowing users to sue social media platforms for user-created content that contained “health misinformation” if an algorithm promoted that content. Though Sen. Klobuchar took extensive issue with President Donald Trump’s handling of the COVID-19 pandemic, her bill would have given his administration license to decide what constitutes misinformation about the virus.[16],[17]

Propelled by widespread disapproval of social media moderation practices, reactionary legislation often seeks to achieve opposing goals. By conditioning Section 230 protection on impossible or subjective metrics, many of these proposals would revert the regulatory environment to its pre-230 state.

03 PENALIZING BASIC MATH IN ALGORITHMS

Lawmakers have focused similar legislative efforts on algorithms. Too often, their rhetoric portrays algorithms as nefarious sets of instructions that harm users and exploit them for profit.[18],[19] In reality, an algorithm can be as innocuous as code instructing a platform’s home feed to show the newest posts first, as in the sketch below. Unfortunately, many proposals miss this point.
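To make the point concrete, the following is a minimal sketch in Python of such a feed algorithm; the Post type and newest_first function are hypothetical names for illustration, not any platform’s actual code.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    text: str
    created_at: datetime

def newest_first(feed: list[Post]) -> list[Post]:
    # The entire "algorithm": order the home feed by timestamp, newest first.
    return sorted(feed, key=lambda post: post.created_at, reverse=True)

feed = [
    Post("alice", "First post", datetime(2023, 11, 1, 9, 0)),
    Post("bob", "Newer post", datetime(2023, 11, 3, 14, 30)),
]
for post in newest_first(feed):
    print(post.created_at, post.author, post.text)
```

The whole “algorithm” is a single sort call, yet proposals like the one discussed next would treat exactly this kind of ordering as regulated conduct.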

In 2021, Sen. Marco Rubio (R-Fla.) introduced legislation that would make platforms liable for user-created content promoted to others using an algorithm.[20] The Disincentivizing Internet Service Censorship of Online Users and Restrictions on Speech and Expression Act would penalize platforms that sort data by accounts followed or location; in reverse chronological order; or to exclude spam and scam accounts. Content provided chronologically or alphabetically by an algorithm or the search function would be exempt.

The following year, Minnesota lawmakers considered banning social media platforms with more than 1,000,000 account holders from using algorithms to target user-created content to minors. But H.F. 3724 did not specify which algorithms would violate the law, which meant all of them would.[21] This could have proven disastrous for social media.

Even when narrowed, regulations on social media algorithms often keep platforms from showing users relevant content. Countless proposals attempt to prevent companies from targeting users based on personal information like location or interests. For example, the bipartisan Protecting Kids on Social Media Act would, among other actions, prevent social media algorithms from recommending content to minors based on “personal data.”[22] What this proposal fails to recognize is that targeted recommendations are not inherently harmful — they simply cater to a user’s expressed interests or personal details.

Algorithms also function as speech for First Amendment purposes, as U.S. courts have recognized.[23] All constitutional rights may face limitations, but the government must satisfy a heightened standard of review when regulating protected rights like the right to free speech. Algorithms can be regulated, but lawmakers would see more success if they focused their efforts on specific harms.

04 AGE VERIFICATION VERIFIES MORE THAN JUST AGE

A more recent trend in social media regulation is the push to require website operators to verify a user’s age before allowing them to log in. Lawmakers are motivated by the belief that social media harms minors (though the evidence is mixed) and that restricting minors’ access to social media — usually with an exception for parental consent — will address the problem.[24] Unfortunately, besides being functionally impossible to implement, most of these proposals also violate the U.S. Constitution and threaten cybersecurity.

A. Age Verification Is Difficult

From a functional standpoint, verifying user age is difficult. The Age Verification Providers Association has touted technology that can simply scan a person’s face to verify their age while purportedly protecting their privacy and identity.[25] But this use of biometrics inherently invades privacy, and facial recognition has proven inaccurate, with documented bias across gender, race, and other factors.[26]

Using government ID cards to verify age would also create problems, as millions of Americans do not have one.[27] Not only does conditioning access to social media in this way infringe upon First Amendment rights, but — as a federal court recognized years ago regarding credit cards as age verification — minors could simply use their parents’ cards to satisfy the requirement.[28]

Virtual private networks (“VPNs”) make it extremely difficult to verify age because users can mask their location to circumvent local laws. A person using a VPN in Utah, a state that has enacted age-verification requirements for social media, could avoid verifying their age by making it appear as though they were somewhere else.[29] Meanwhile, California law prohibits treating VPN traffic differently from other traffic.[30] Though it is impossible to detect all VPN use, Utah’s law functionally requires platforms to verify the ages of VPN users — thereby creating an unattainable compliance standard, as the sketch below illustrates.[31]
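The following minimal sketch uses hypothetical addresses and a stand-in geolocate lookup (real IP-geolocation services behave analogously): an IP-based check cannot tell a Utah resident behind a VPN from an out-of-state user.

```python
# Hypothetical mapping from IP address to the region a geolocation service
# would report. A VPN makes traffic appear to originate from the exit
# node's location, not the user's actual location.
VPN_EXITS = {"203.0.113.7": "US-CA"}  # a Utah user tunneling through California

def geolocate(ip_address: str) -> str:
    # Stand-in for a real IP-to-region lookup service.
    return VPN_EXITS.get(ip_address, "US-UT")

def must_verify_age(ip_address: str) -> bool:
    # Naive compliance rule: apply Utah's requirement only to traffic
    # that appears to come from Utah.
    return geolocate(ip_address) == "US-UT"

print(must_verify_age("198.51.100.2"))  # True: non-VPN Utah traffic is caught
print(must_verify_age("203.0.113.7"))   # False: the VPN masks the Utah user
```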

The Australian government abandoned plans to force age verification for explicit content in 2023.[32] Its official statement said, “[i]t is clear from the roadmap at present, each type of age verification or age assurance technology comes with its own privacy, security, effectiveness or implementation issues.”[33] This is a stunning admission from a country that has forcefully regulated social media.[34] Additionally, though France is moving ahead with age-verification policy, its data protection agency, the Commission nationale de l’informatique et des libertés (“CNIL”), has stated that no existing solution meets its requirements for accuracy, population coverage, and privacy protection.[35]

B. Age Verification Threatens Cybersecurity

Considering the lack of accurate and privacy-protecting solutions, it is clear that implementing age-verification requirements for social media would create major cybersecurity risk. Whether a proposal requires operators to verify age themselves or establishes a government-run method, whichever entity holds the data becomes a target for hacks and breaches. More than 80 percent of companies have been hacked, and more than half of consumers have experienced cybercrime.[36],[37] From 2020 to 2021, instances of cybercrime involving ransomware doubled.[38] Government agencies have also been breached hundreds of times, affecting millions of individuals.[39] In fact, cybercrime is estimated to have cost the government “over $26 billion from 2014 to October 2022.”[40] Even when aware of vulnerabilities, the government often fails to act.[41]

Cybersecurity risk is lower when less sensitive information is held in fewer places.[42] Poorly protected databases, misconfigured access points, and third-party access are all risks to consider before forcing operators (or the government) to collect personal data. Because age-verification policies compel users to share their most sensitive information, they create serious risk to all Americans.
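As one illustration of that principle, a platform could practice data minimization by retaining only the outcome of an age check rather than the underlying birthdate or ID. The sketch below is a hypothetical design with invented names, not a description of any statute’s requirements.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AccountRecord:
    # Long-term storage holds only the outcome, not the sensitive input.
    username: str
    age_verified: bool

def verify_and_discard(username: str, date_of_birth: date) -> AccountRecord:
    # Compute age from the supplied birthdate...
    today = date.today()
    age = today.year - date_of_birth.year - (
        (today.month, today.day) < (date_of_birth.month, date_of_birth.day)
    )
    # ...then retain only a boolean flag. A breach of the account database
    # exposes no birthdates, ID scans, or face templates.
    return AccountRecord(username=username, age_verified=age >= 18)

print(verify_and_discard("demo_user", date(2001, 6, 15)))
```

Even this minimized design still requires users to hand over sensitive data at verification time, which is why the anonymity concerns discussed next remain.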

C. Age Verification Infringes Upon Our Rights

These policies also violate the First Amendment. The right to anonymous speech is well established and applies online.[43] However, the most accurate age-verification methods — including government IDs, face scans, and credit cards — also verify identity. Identity verification is a necessary component of proposals that either ban minors from social media entirely or condition their access on guardian consent.[44] For a parent to consent to their child using social media, a platform must establish not only that the guardian is an adult, but also that they are the guardian of that specific child. Even if technology were available to safely and securely verify user age, operators would still need to verify identity to establish this relationship.

It is unlikely that these laws would pass constitutional muster because, if enacted, online speech would no longer be truly anonymous, inviting a “chilling effect.”[45] In simple terms, if users must prove their age via face scans or government identification, they are likely to censor their own speech to avoid criticism or retaliation. Because a chilling effect flows from a direct burden on First Amendment rights, it can lead courts to strike down the laws that create it.[46]

Another issue is that children also have First Amendment rights.[47] Justice Antonin Scalia wrote the majority opinion in Brown, et al. v. Entertainment Merchants Assn. et al., in which the Supreme Court ruled the government cannot require parental permission for minors’ access to non-obscene speech — a key ruling in this debate since the majority of social media content is not obscene.[48] In his opinion, Scalia wrote that the Court “doubts that punishing third parties for conveying protected speech to children just in case their parents disapprove of that speech is a proper governmental means of aiding parental authority.” He explained that the California law in question, which restricted the sale of violent video games to minors, attempted to achieve the legitimate ends of “(1) addressing a serious social problem and (2) helping concerned parents control their children.” However, the law failed because it focused only on video games and included children whose guardians approved their access to those games.

Some courts have recognized that age-verification requirements burden websites by reducing access to their content.[49] Others have found age-verification laws unconstitutional because their ineffective methods place an unnecessary burden on speech.[50] Lawmakers must weigh the decades of precedent on age verification and the First Amendment, along with the functional and cybersecurity concerns described above, when crafting social media regulation.

05 MOVING SLOWLY AND BUILDING THINGS

Unfortunately, in their quest to “move fast and break things,” lawmakers have disregarded carefully considered social media proposals, including a comprehensive federal data privacy and security law, a law to keep the government from evading the Fourth Amendment by purchasing data, and a law to prevent the government from pressuring social media companies to remove legal speech from their platforms.

A. Prioritizing Privacy and Security

While Congress has failed to pass a comprehensive privacy law, 13 states have done so.[51] Though these laws improve privacy policy, they also create a complicated regulatory landscape. Indeed, one major benefit of a national privacy law is that it would apply across the country. Clarifying a broader framework for rules and standards around online privacy would also help alleviate some of the concerns that lead to ineffective legislation.

Effective federal privacy legislation would address cybersecurity risk, which can become a national security risk.[52] Americans’ sensitive consumer data is currently vulnerable to foreign adversaries and could be used for espionage or to increase the effectiveness of other attacks.[53] Additionally, user health data has increasingly spread outside the traditional U.S. medical system. Apps allow users to track various biometric data, which could reveal sensitive health information. For example, most menstrual tracking apps share data with third parties.[54] In short, protecting personal data helps protect national security.

B. Stopping Government from Evading the Fourth Amendment

One proposal, the Fourth Amendment Is Not For Sale Act, would prevent the government from buying data it is otherwise prohibited from obtaining without a court order.[55] The accompanying press release explains that its purpose is to close “the legal loophole that allows data brokers to sell Americans’ personal information to law enforcement and intelligence agencies without any court oversight — in contrast to the strict rules for phone companies, social media sites and other businesses that have direct relationships with consumers.”[56]

Part of the legislation reads, “[a] law enforcement agency of a governmental entity and an element of the intelligence community may not obtain from a third party in exchange for anything of value a covered customer or subscriber record or any illegitimately obtained information.”[57] Ensuring Fourth Amendment protections do not end where other parties obtain user data and offer it for sale is a commonsense solution to a real problem.

C. Stopping Government from Jawboning

Another proposal seeks to explicitly prevent the federal government from “jawboning,” or pressuring social media companies to maintain or remove content from their platforms.[58] The idea is worth considering, even if the proposal itself needs extensive fine-tuning. A Cato Institute publication defines jawboning as “the use of official speech to inappropriately compel private action.”[59] One recent example is the Biden administration demanding that Facebook moderate speech to its liking and threatening to remove Section 230 as a punitive measure.[60] The Cato Institute has logged dozens of instances in which elected officials have demanded similar compliance from social media companies.[61]

A bill introduced by Rep. James Comer (R-Ky.) would require federal employees to submit an extensive report to the Office of Special Counsel and certain congressional committees at least 72 hours before “tak[ing] any action to censor any unlawful speech.” The deadline shifts to no more than 72 hours after the action in cases involving “properly classified national security information,” and the requirement does not apply in cases regarding child exploitation or certain other matters.

But information-sharing can be a legitimate government function. Platforms are not always aware of potentially unlawful activity, and informing them without accompanying threats would pose no issue. Another complicating factor is that federal employees do not lose their own First Amendment rights when they opt to work for the government. Therefore, if this line is drawn, it must be drawn carefully so as not to infringe upon their rights as private citizens.

06 A NEW EYE ON AI

Many lawmakers recognize the promise of AI technology, while others seek to destroy it. Regardless, their proposals generally represent a gentler and more careful approach toward technology regulation.

Some legislation reflects uncertainty about the future and direction of AI technology. Rep. Anna Eshoo (D-Calif.) introduced legislation to study AI’s ability to formulate bioweapons.[62] Another proposal seeks to dedicate resources for AI experimentation.[63] Bipartisan Senate legislation instructs the Secretary of Health and Human Services to develop a strategy to handle AI threats to public health.[64] Similar legislative proposals at the federal and state levels also shift responsibility from AI companies to government agencies.[65]

On the other side of the coin, Sens. Josh Hawley (R-Mo.) and Richard Blumenthal (D-Conn.) put forth a mangled proposal to remove Section 230 protection from any claims where “the conduct underlying the claim or charge involves the use or provision of generative artificial intelligence by the interactive computer service.”[66] While there is uncertainty as to how Section 230 applies to AI, this law could penalize companies if a user generates a fake short story about someone and then uses it in a libelous way, or even if a user runs an AI spellchecker over a document the platform cannot know describes tax fraud.

Notably, AI is already regulated in several ways.[67] As the R Street Institute noted, “The Food and Drug Administration (FDA) has been using its broad regulatory powers to review and approve AI and ML-enabled medical devices for many years already.”[68] Additionally, the Federal Trade Commission has touted its ability to handle AI used to deceive consumers.[69]

Though still somewhat mixed in their utility, AI regulation proposals have been less destructive overall than those aimed at social media. Lawmakers ought to slow down and embrace this new technology, concentrating on legislation that fills genuine gaps and advances AI’s most productive uses.

07 CONCLUSION

Nearly a decade of “moving fast and breaking things” in pursuit of social media regulation has done more harm than good. A more careful approach — moving slowly and building things — would produce fewer unintended consequences and effectively address those that remain. In AI regulation, Congress has the opportunity to pursue more productive legislation that protects the rights of all Americans.


[1] R Street Institute https://www.rstreet.org/people/shoshana-weissmann/.

[2] Nick Statt, Zuckerberg: ‘Move fast and break things’ isn’t how Facebook operates anymore, CNET (April 30, 2014), https://www.cnet.com/tech/mobile/zuckerberg-move-fast-and-break-things-isnt-how-we-operate-anymore.

[3] ‘Move Fast and Break Things’: Pros and Cons of the Concept, MasterClass (June 21, 2022), https://www.masterclass.com/articles/move-fast-and-break-things.

[4] 47 U.S.C. § 230(c)(1).

[5] Press Release, Office of U.S. Sen. Amy Klobuchar, Klobuchar, Luján Introduce Legislation to Hold Digital Platforms Accountable for Vaccine and Other Health-Related Misinformation (July 22, 2021), https://www.klobuchar.senate.gov/public/index.cfm/2021/7/klobuchar-luj-n-introduce-legislation-to-hold-digital-platforms-accountable-for-vaccine-and-other-health-related-misinformation.

[6] Rosie Moss, The Future of Section 230 | What Does It Mean For Consumers?, National Association of Attorneys General (July 21, 2023), https://www.naag.org/attorney-general-journal/the-future-of-section-230-what-does-it-mean-for-consumers.

[7] Mike Masnick, House Judiciary Spends 5.5 Hours Making Themselves Look Foolish, Without Asking Many Actual Tough Questions Of Tech CEOs, Techdirt (July 30, 2020), https://www.techdirt.com/2020/07/30/house-judiciary-spends-55-hours-making-themselves-look-foolish-without-asking-many-actual-tough-questions-tech-ceos.

[8] Angel Eduardo, Why (most) lies are protected speech, and why they should stay that way, Foundation for Individual Rights and Expression (June 29, 2023), https://www.thefire.org/news/why-most-lies-are-protected-speech-and-why-they-should-stay-way.

[9] Jess Miers, Your Problem Is Not With Section 230, But The 1st Amendment, Techdirt (Nov. 2, 2020), https://www.techdirt.com/2020/11/02/your-problem-is-not-with-section-230-1st-amendment.

[10] David Hudson, Is hate speech protected by the First Amendment?, Foundation for Individual Rights and Expression (Feb. 8, 2022), https://www.thefire.org/news/hate-speech-protected-first-amendment.  

[11] 47 U.S.C. § 230(c)(1).

[12] Stratton Oakmont, Inc. v. Prodigy Servs., 23 Media L. Rep. (BNA) 1794 (N.Y. Sup. Ct. 1995).

[13] Search conducted on Congress.gov for legislation related to “Section 230 of the Communications Decency Act” across various congresses (93rd-118th).

[14] Press Release, Office of U.S. Sen Tammy Baldwin, Baldwin Joins Legislation to Strengthen Rules, Transparency For Online Content Moderation, Hold Internet Companies Accountable (Feb. 21, 2023), https://www.baldwin.senate.gov/news/press-releases/baldwin-joins-legislation-to-strengthen-rules-transparency-for-online-content-moderation-hold-internet-companies-accountable.

[15] Press Release, Office of U.S. Sen. Bill Cassidy, Cassidy, Schatz, Thune Reintroduce Legislation to Strengthen Rules, Transparency for Online Content Moderation, Hold Internet Companies Accountable (Feb. 16, 2023), https://www.cassidy.senate.gov/newsroom/press-releases/cassidy-schatz-thune-reintroduce-legislation-to-strengthen-rules-transparency-for-online-content-moderation-hold-internet-companies-accountable.

[16] Amy Klobuchar (@amyklobuchar), “Seven months and over 5 million cases of coronavirus later, and the Trump administration still doesn’t have an adequate national testing strategy. That’s why we need @JoeBiden and @KamalaHarris in the White House.” Twitter, 8:58 PM, Aug. 12, 2020, https://perma.cc/B5PH-4G9T.

[17] Amy Klobuchar (@amyklobuchar), “Donald Trump knew how deadly COVID was back in February, but chose to say nothing. Now he’s holding a rally INDOORS. The American people deserve better than this.” Twitter, 9:45 PM, Sept. 13, 2020, https://perma.cc/5D37-RQHE.

[18] David Shepardson, U.S. senators say social media letting algorithms ‘run wild,’ Reuters (June 26, 2019), https://www.reuters.com/article/us-usa-tech-congress/u-s-senators-say-social-media-letting-algorithms-run-wild-idUSKCN1TQ21L.

[19] Elizabeth Nolan Brown, In Defense of Algorithms, Reason (January 2023), https://reason.com/2022/12/07/in-defense-of-algorithms.

[20] Disincentivizing Internet Service Censorship of Online Users and Restrictions on Speech and Expression Act, S. 2228, 117th Cong. (2021).

[21] Minnesota H.F. 3724, (92nd Sess. 2022).

[22] Protecting Kids on Social Media Act, S. 1291, 118th Cong. (2023).

[23] Alison Dame-Boyle, EFF at 25: Remembering the Case that Established Code as Speech, Electronic Frontier Foundation (April 16, 2015), https://www.eff.org/deeplinks/2015/04/remembering-case-established-code-speech.

[24] Mike Masnick, APA Report Says That Media & Politicians Are Simply Wrong About Kids & Social Media; Media Then Lies About Report, Techdirt (May 12, 2023), https://www.techdirt.com/2023/05/12/apa-report-says-that-media-politicians-are-simply-wrong-about-kids-social-media-media-then-lies-about-report.

[25] Mike Masnick, Age Verification Providers Say Don’t Worry About California Design Code; You’ll Just Have To Scan Your Face For Every Website You Visit, Techdirt (Aug. 29, 2022), https://www.techdirt.com/2022/08/29/age-verification-providers-say-dont-worry-about-california-design-code-youll-just-have-to-scan-your-face-for-every-website-you-visit.

[26] Jon Porter, Federal study of top facial recognition algorithms finds ‘empirical evidence’ of bias, The Verge (Dec. 20, 2019), https://www.theverge.com/2019/12/20/21031255/facial-recognition-algorithm-bias-gender-race-age-federal-nest-investigation-analysis-amazon.

[27] Maryland Today Staff, UMD Analysis: Millions of Americans Don’t Have ID Required to Vote, Maryland Today (April 13, 2023), https://today.umd.edu/umd-analysis-millions-of-americans-dont-have-id-required-to-vote.

[28] American Civil Liberties Union v. Gonzales, 478 F. Supp. 2d 775 (E.D. Pa. 2007).

[29] Utah S.B. 152, 2023 Gen. Sess. (March 23, 2023) (enacted), https://le.utah.gov/~2023/bills/static/SB0152.html (last accessed Nov. 19, 2023).

[30] SB 152: Restrictions on Social Media Use by Minors, Libertas Institute (2023), https://libertas.org/bill/sb-152-restrictions-on-social-media-use-by-minors.

[31] Shoshana Weissmann and Canyon Brimhall, Age-verification laws don’t exempt VPN traffic. But that traffic can’t always be detected., R Street Institute (Aug. 29, 2023), https://www.rstreet.org/commentary/age-verification-laws-dont-exempt-vpn-traffic-but-that-traffic-cant-always-be-detected.

[32] Josh Taylor, Australia will not force adult websites to bring in age verification due to privacy and security concerns, The Guardian (Aug. 30, 2023), https://www.theguardian.com/australia-news/2023/aug/31/roadmap-for-age-verification-online-pornographic-material-adult-websites-australia-law.

[33] Australian Government Department of Infrastructure, Transport, Regional Development, Communications and the Arts, Government response to the Roadmap for Age Verification (August 2023), https://s3.documentcloud.org/documents/23932898/government-response-to-the-roadmap-for-age-verification-august2023.pdf (last accessed Nov. 19, 2023).

[34] Monica Biddington, Regulation of Australian online content: cybersafety and harm, Parliamentary Library Briefing Book, Australian Parliament House (n.d.), https://www.aph.gov.au/About_Parliament/Parliamentary_Departments/Parliamentary_Library/pubs/BriefingBook46p/Cybersafety (last accessed Nov. 19, 2023).

[35] Commission nationale de l’informatique et des libertés, Online Age Verification: Balancing Privacy and the Protection of Minors (Sept. 22, 2022), https://www.cnil.fr/en/online-age-verification-balancing-privacy-and-protection-minors (last accessed Nov. 19, 2023).

[36] Duke University’s Fuqua School of Business, More Than 80 Percent of Firms Say They Have Been Hacked, Duke University/CFO Magazine Global Business Outlook Survey (n.d.), https://cfosurvey.fuqua.duke.edu/press-release/more-than-80-percent-of-firms-say-they-have-been-hacked.

[37] Clare Stouffer, 115 cybersecurity statistics + trends to know in 2023, Norton Blog (Sept. 1, 2022), https://us.norton.com/blog/emerging-threats/cybersecurity-statistics.

[38] Check Point Research Team, The New Ransomware Threat: Triple Extortion, Check Point Blog (May 12, 2021), https://blog.checkpoint.com/security/the-new-ransomware-threat-triple-extortion.

[39] Id.

[40] Id.

[41] U.S. Government Accountability Office, Cybersecurity High-Risk Series: Challenges in Establishing a Comprehensive Cybersecurity Strategy and Performing Effective Oversight, GAO-23-106415 (Jan. 19, 2023), https://www.gao.gov/products/gao-23-106415 (last accessed Nov. 19, 2023).

[42] Shoshana Weissmann, Age-verification legislation discourages data minimization, even when legislators don’t intend that, R Street Institute (May 24, 2023), https://www.rstreet.org/commentary/age-verification-legislation-discourages-data-minimization-even-when-legislators-dont-intend-that.

[43] Shoshana Weissmann, Age-verification methods, in their current forms, threaten our First Amendment right to anonymity, R Street Institute (June 1, 2023), https://www.rstreet.org/commentary/age-verification-methods-in-their-current-forms-threaten-our-first-amendment-right-to-anonymity.

[44] Michael Murney, Texas Bill Would Ban Social Media for Everyone Under 18, Governing (Dec. 13, 2022), https://www.governing.com/community/texas-bill-would-ban-social-media-for-everyone-under-18.

[45] Frank Askin, Chilling Effect, The First Amendment Encyclopedia (last updated Dec. 2, 2023), https://firstamendment.mtsu.edu/article/chilling-effect.

[46] Weissmann, supra note 42.

[47] Tinker v. Des Moines Independent Community School District, 393 U.S. 503 (1969).

[48] Brown, et al. v. Entertainment Merchants Assn., et al., 564 U.S. 786 (2011).

[49] American Civil Liberties Union v. Mukasey, 534 F.3d 181 (3d Cir. 2008).

[50] Id.

[51] Lewis Rice LLC, Cybersecurity and Data Privacy–U.S. State Privacy Laws (n.d.), https://www.lewisrice.com/u-s-state-privacy-laws (last accessed Nov. 19, 2023).

[52] Brandon Pugh and Steven Ward, R Street Institute Comments on Senate’s HELP Committee’s Request for Feedback on Health Data and Privacy, R Street Institute (Sept. 28, 2023), https://www.rstreet.org/outreach/r-street-institute-comments-on-senates-help-committees-request-for-feedback-on-health-data-and-privacy.

[53] Id.

[54] BBC Staff, Period trackers to be reviewed over data concerns, BBC News (Sept. 7, 2023), https://www.bbc.com/news/technology-66740184.

[55] Fourth Amendment Is Not For Sale Act of 2021, 117th Cong. (2021).

[56] Press Release, Office of U.S. Sen. Ron Wyden, Wyden, Paul and Bipartisan Members of Congress Introduce The Fourth Amendment Is Not For Sale Act (April 21, 2021), https://www.wyden.senate.gov/news/press-releases/wyden-paul-and-bipartisan-members-of-congress-introduce-the-fourth-amendment-is-not-for-sale-act-.

[57] Fourth Amendment Is Not For Sale Act of 2021, supra note 55.

[58] H.R. 140, 118th Cong. (2023).

[59] Will Duffield, Jawboning against Speech: How Government Bullying Shapes the Rules of Social Media, Policy Analysis No. 934 (Cato Institute, Sept. 12, 2022), https://www.cato.org/policy-analysis/jawboning-against-speech.

[60] Id.

[61] Duffield, Policy Analysis No. 934–Annex (n.d.), https://www.cato.org/sites/cato.org/files/2022-09/policy-analysis-934-annex.pdf (last accessed Dec. 6, 2023).

[62] H.R. 4704, 118th Cong. (2023).

[63] H.R. 5077, 118th Cong. (2023).

[64] S. 2346, 118th Cong. (2023).

[65] National Conference of State Legislatures, Artificial Intelligence 2023 Legislation (2023), https://www.ncsl.org/technology-and-communication/artificial-intelligence-2023-legislation (last accessed Nov. 19, 2023).

[66] S. 1993, 118th Cong. (2023).

[67] Adam Thierer, The Most Important Principle for AI Regulation, R Street Institute (June 21, 2023), https://www.rstreet.org/commentary/the-most-important-principle-for-ai-regulation.

[68] Id.

[69] Michael Atleson, Keep your AI claims in check, Federal Trade Commission (Feb. 27, 2023), https://www.ftc.gov/business-guidance/blog/2023/02/keep-your-ai-claims-check (last accessed Dec. 3, 2023); Michael Atleson, The Luring Test: AI and the engineering of consumer trust, Federal Trade Commission (May 1, 2023), https://www.ftc.gov/business-guidance/blog/2023/05/luring-test-ai-engineering-consumer-trust (last accessed Dec. 3, 2023); Elisa Jillson, Aiming for truth, fairness, and equity in your company’s use of AI, Federal Trade Commission (April 19, 2021), https://www.ftc.gov/business-guidance/blog/2021/04/aiming-truth-fairness-equity-your-companys-use-ai (last accessed Dec. 3, 2023).

Read the original report at Competition Policy International.