Last month, a coalition of European media companies filed a formal complaint against Google’s AI Overviews—which summarize answers to search queries—under the European Union’s Digital Services Act (DSA). This is troubling on several levels and reflects a deepening regulatory hostility toward AI and innovation in Europe that should alarm U.S. firms and policymakers alike.

The complaint alleges that AI Overviews are “traffic killers” for media and journalism websites because the technology gets its information from those sites, summarizes it, and places it above the original content on Google’s website. The complaint includes no data on the volume or scale of this phenomenon, and it also broadly accuses the technology of disseminating incorrect content and misinformation in its summaries—again, without any evidence or examples.

One concerning aspect of the situation is that the complaint wasn’t even brought by a regulator or court of law. Rather, a group of media companies and trade associations submitted it to Germany’s digital services coordinator (DSC), a regulatory proxy created under the DSA to evaluate and escalate complaints. DSCs are not elected officials, and they lack direct democratic accountability. Yet under the DSA, they have the power to recommend enforcement actions that could lead to enormous fines—up to 6 percent of a company’s global revenue. That’s not governance; it’s deputized policymaking via bureaucratic fiat.

Furthermore, a large language model (LLM) summarizing content is functionally no different from a human doing the same. When a journalist reads multiple sources and writes a summary, that’s considered fair use or original expression. LLMs perform a similar task, processing publicly available data and generating a new, transformative output. Preventing machines from summarizing because they are more efficient would be like banning calculators because they multiply faster than humans can. Regardless of whether people or software are doing it, summarization is an essential part of learning, reasoning, and informing.

As for misinformation, it’s crucial to remember that the concept is not always fixed or easily definable. In a free society, open discourse includes incorrect, controversial, or unverified claims. The ability to generate or repeat misinformation isn’t a bug—it’s a reflection of expressive freedom. The alternative is political control over what models are allowed to say, which creates serious risks of censorship and abuse. There are narrow exceptions, such as where libel law applies, but generally, LLMs should be free to reflect the messy, evolving nature of public knowledge without interference. Unfortunately, European citizens have no equivalent of the First Amendment or Section 230 to protect them.

For leading American technology firms, the cumulative impact of the EU’s digital regulatory regime is starting to look less like rule-of-law governance and more like an innovation tax—one paid by American firms and by consumers on both sides of the Atlantic.

Apple recently issued a rare public threat: If certain European regulations are not amended, it may stop selling iPhones in the EU altogether. Complying with the EU’s Digital Markets Act (DMA), which mandates sideloading and interoperability, could force Apple to overhaul core features of its operating system and undermine security and user experience. Numerous AI features in search engines, voice assistants, and productivity tools are already unavailable to European users because companies can’t square innovation with the EU’s legal obligations.

In effect, Europe is pushing these firms to choose between global rollout and EU compatibility. The outcome is that features are delayed or never launched. Entire products are withdrawn or region-locked. And the most exciting breakthroughs in consumer AI happen elsewhere, mostly in the United States and Asia. Without access to EU markets, American firms are limited in their ability to innovate and improve product designs around the world.

The broader economic logic of these policies is also deeply flawed. EU regulators claim they are defending competition and promoting innovation by reining in tech giants. But their approach is based on a zero-sum mentality—assuming that if a platform benefits from new features, someone else must be losing.

This view ignores the fundamental economics of innovation, especially in digital markets. New products like AI Overviews create value by lowering transaction costs, improving access to information, and enabling better decision-making. And these benefits aren’t confined to Google—they spill over to hundreds of millions of users and thousands of complementary businesses.

Media companies claim to be losing traffic, but search engines have always struck a balance between helping users find answers and directing them to third-party sites. There is no “right to traffic,” and U.S. courts have consistently rejected the idea that platforms owe traffic to publishers. If publishers are losing engagement, they should improve their offerings rather than seek regulatory power over platform design. Even if there is some redistribution of attention, consumer gains are likely far greater than losses to any one sector.

Lessons for U.S. Policymakers

As anti-Big Tech sentiment grows in the United States, some policymakers are looking to the EU as a regulatory template. That would be a serious mistake.

Europe’s DSA and DMA frameworks are not pro-consumer, pro-market, or pro-innovation. They are bureaucratic, protectionist, and deeply hostile to the dynamic evolution of digital platforms. They invite rent seeking from legacy industries, empower unelected regulators to shape software design, and impose costs that deter experimentation and worsen the consumer experience.

If the United States follows this path, we should expect the same outcomes: slower rollouts, fewer features, and weaker global leadership in AI. Instead, U.S. policy should focus on enabling experimentation, ensuring clear liability rules, and encouraging competition through ease of entry—not by micromanaging algorithms or enforcing design mandates.
