The future of tech policy involves CATS–and they will be SAD.
Five years ago, I made a prediction: I said that the next two to ten years of tech policy debates would be defined by CATS: Competition, Algorithms, Tracking and Speech. Halfway through the long end of that window, the major tech and internet policy conversations in the United States and Europe have, by and large, reflected that assessment. It’s time for an update: For the next few years, the tech policy CATS will be SAD–because they’re dealing with Surveillance, challenges to Anonymity and managing Disinformation on an international stage.
The CATS are still kicking, and defining tech and internet policy conversations.
Competition is big. A 2019 set of expert reports in the United States, United Kingdom and Europe prompted legislative action in Congress, enforcement initiatives by the British Competition and Markets Authority, and the adoption of the Digital Markets Act (DMA) in Europe. These moves are transforming the operating environment for internet companies in massive ways, though it will take years for the cycles of legislation and litigation to work themselves out.
Platform responsibility and accountability conversations have become increasingly focused on the role played by machine learning systems and automation, or in the parlance typically used by policymakers, “algorithms.” Among the dramatic revelations in the “Facebook Files” leaked by a whistleblower, the social media giant’s algorithms took center stage. The result was the introduction of the Algorithmic Accountability Act in Congress and, in the E.U.’s Digital Services Act (DSA), an emphasis on algorithmic transparency along with provisions for risk assessment and auditing of algorithms.
Distinct from government conversations, Apple’s decision in 2020 to make it easy for iOS users to block cross-app tracking has had massive consequences, including an alleged $10 billion in lost revenue for Facebook in 2022. The introduction of the American Data Privacy and Protection Act (ADPPA) has moved the United States closer to a comprehensive privacy law, and ADPPA would impose rules on data collection in the context of targeted advertising, impacting the technical methods of tracking in a meaningful way.
No issue, not even competition, has garnered more news headlines in recent years than online speech and harm. The E.U.’s DSA will in all likelihood set a global standard for digital platform accountability via new rules for online speech intermediary services, just as the earlier General Data Protection Regulation (GDPR) changed the paradigm for privacy. The United Kingdom has made substantial progress on its own aggressive Online Safety Bill (OSB). The United States has seen a wave of proposals related to the infamous “Section 230” statutory immunity, none of which has become law, though some states have adopted highly problematic (and certainly unconstitutional) “anti-censorship” bills.
None of these issues will resolve any time soon. In the near term, the most immediate developments to watch will be the resumption of the U.K.’s OSB discussions under a new prime minister, and the implementation of the DSA and DMA in the E.U. However, there are wildcard possibilities in Australia and the United States that could have a major normative and practical impact on any or all of the CATS. At the same time, the United States in particular could end up making very few significant public policy moves over the next few years, at least when counting only those moves that survive the inevitable court challenges.
While the CATS are still kicking, they are now SAD, because over the next few years they will be joined at the forefront of internet policy by surveillance, anonymity, and disinformation.
Surveillance: Our phones go everywhere with us, so they know a lot about us, and the government wants to know, too.
Following the Snowden revelations of 2013 and the passage of the USA FREEDOM Act in 2015, which reauthorized some government surveillance authorities, conversations within the internet policy community around surveillance became a bit less active. They remained so despite the renewal of another surveillance law, Section 702 of the Foreign Intelligence Surveillance Act, in 2018, and the continued operation of an executive order, EO 12333, which at one time raised grave public interest concerns.
But as mobile internet use becomes ever more pervasive, the threat landscape for surveillance continues to grow. In the United States, the recent Dobbs Supreme Court decision and the prospect of differential state-by-state abortion laws are raising new concerns regarding law enforcement access to location data from mobile phones and applications. Worldwide, transparency reports and other sources illustrate a trajectory towards more government surveillance and control; Freedom House’s recent Freedom on the Net report showed a decline in global internet freedom for the 11th straight year. Facebook, for example, reported more than 400,000 government requests for data in 2021, roughly two and a half times the approximately 160,000 requests it received in 2017.
Undoubtedly, the war on encryption will continue, as evidenced by the international intergovernmental statement in 2020 in which the United States, United Kingdom, India and other signing governments asserted that they “support strong encryption,” while nevertheless raising concerns regarding “public safety” arising from end-to-end encryption “precluding the ability of law enforcement agencies to access content.” Somehow, though, this encryption doesn’t seem to frustrate the ability of governments to issue more and more requests for data from the private sector every year.
Anonymity: Governments want to force platforms to know if you’re a dog, especially an underage dog.
Tied into concerns with government surveillance, but sufficient to merit separate consideration, is the rise in real identity mandates, including requirements to identify the age of a technology user with the goal of protecting minors from harm. The U.K.’s Age-Appropriate Design Code is the global leader on this, but the state of California has its own identically named effort. While differentiating services and regulations based on age has a long history—consider the U.S. Children’s Online Privacy Protection Act—the mechanics of proving age have historically been left to the discretion of the platform. The result has largely been voluntary self-attestation, as with alcohol-oriented websites that simply ask users to report their birthday.
The law is evolving towards greater verification of identity in general, not only to manage age-specific policies, but to minimize online harms attributed to anonymous users. Research consistently shows that forced use of real identity online accelerates harassment and harm. Yet many U.K. politicians continue to push forward misguided interventions, with some willing to consider banning anonymous social media accounts entirely. Australia, similarly, has proposed tying social media accounts to specific offline proof of identity, such as driver’s licenses.
Even in the United States, where the First Amendment offers uniquely strong protections for speech, a recent court decision rejected a Section 230 defense under the theory that the digital platform in question should have gone further to warn an underage user of the potential for abuse and to prevent minors from being connected through the service with predators. Such expectations seem difficult if not impossible to meet in practice without platforms intervening further to verify age and/or identity.
Disinformation: International discord risks new digital walls from Western democracies.
Disinformation is, of course, related to the ‘S’ of speech, and public policy proposals in that space will directly affect disinformation, including both the DSA and the on-point updated Code of Practice on Disinformation. But disinformation has grown beyond a concern over user relationships with internet platforms into a matter of nation-state contention. Notably, the United Kingdom took steps to tie its Online Safety Bill to its National Security Bill as a means of streamlining the government’s ability to force platforms to block foreign disinformation content.
This escalation is understandable. The effect of disinformation goes beyond individual online harm; it can undermine democracy at its core. At the same time, online speech policy is already a fragile and delicate balance in which multiple critical rights and economic considerations are at stake. Escalation brings national security into the mix, and however justifiable that may be, it will only make the balance harder to strike, raising the costs and risks facing free expression and commerce.
Consider the recent Council on Foreign Relations report, which proposes a shift in American foreign policy: responding to malicious nation-state conduct with a tougher stance toward attackers, including those working to proliferate disinformation. The report’s declaration that the era of the global internet is over risks legitimizing an “us vs. them” conflict and declaring the ultimate “Great War” for the future of the internet. As with any war, civilian casualties appear inevitable; here it is everyday freedoms of online speech and commerce that are at risk.
The evolution of technology and the professionalization of its practice may, eventually, deliver happy virtual puppies to replace the SAD CATS.
The road ahead for internet policy may be filled with potholes, but there are plenty of reasons for optimism. More than 63 percent of the world’s population is now connected to the internet in some fashion. And computer technology has not only improved dramatically but also somewhat plateaued, extending the viable life of laptops and smartphones compared to models from more than five years ago. New inexpensive smartphones provide access to the same content and opportunities as flagship models. Amidst high-profile conversations about economic policy and nation-state tensions, it’s important to remember the value of the internet in everyday life and to celebrate continued improvements in access to the internet’s benefits.
In addition, investment in a healthy internet has grown substantially in recent years, with a particular emphasis on improving trust and safety practices. The field of privacy has long had its communities of practice, notably the International Association of Privacy Professionals (IAPP), and its multistakeholder convening bodies like the Future of Privacy Forum (FPF). But other fields within internet policy have not reached the same level of professionalization. Filling that void today are the Trust & Safety Professionals Association, a shared community of practice for individual trust and safety practitioners; the Digital Trust and Safety Partnership, an organization of tech companies committed to shared best practices and continuous improvement; and the Integrity Institute, a community of people dedicated to improving the integrity and quality of the internet. Collectively, these institutions can make a big practical difference in both the short and long term.
Finally, innovation continues. While the crypto craze has burned up a lot of oxygen, setting the world’s virtual wallets on fire, the potential future of the metaverse and immersive reality technologies is brighter. Artificial intelligence grows more powerful and fantastical every day, with OpenAI’s DALL-E 2 generating imagery from nuanced keyword prompts, although the increasingly “datafied state” supporting the training of complex matching algorithms may well pose unknowable future challenges.
As is so often the case, the question remains: How can governments engage in a manner that mitigates harm and simultaneously aligns the positive forces that can be brought to bear by the market and by users, all within the complex system of governance that is the internet? Today, legislators are trying in many regions, but it will be many years before we can gauge the effects.