WORD ON THE STREET

Some links on patent reform

I’ve been digging back in on some materials related to patent policy and the case for patent reform, and thought it might be useful to others to post some links to R Street’s work over the years as well as works by other groups. Check them out below.

Publications with R Street scholars:

Publications by other center-right groups:

Academic and government publications:


How to talk to your family about net neutrality


It’s nearly Thanksgiving – that time of year when we all try to cram our families through airport security on the same day so we can gather around the table to argue about politics.

This year is likely to prominently feature wonky topics such as tax reform and – oddly enough – net neutrality. While telecom regulation isn’t normally a salient subject in family settings, that may change tomorrow; the Federal Communications Commission (FCC) just released its proposal to roll back Title II regulation of the Internet, aka “Net Neutrality” (our substantive thoughts on the issue can be found here).

Activists didn’t waste any time in attacking the plan and taking to various social media platforms to shout their objections. But not all of us agree with left-wing activists (and we suspect most of them don’t know much about telecom policy).

With all the confusion and misinformation pervading this discussion, here are some points to share with your family should the subject come up around the table this Thanksgiving.

1) It’s not the end of the Internet. The Internet as we know it was built without Title II regulation. In fact, the current regulations only took effect in mid-2015. Cases of net neutrality “violations” were few and far between in the decades before Title II regulation, and they were resolved without prescriptive regulation. Going back to the pre-2015 light-touch framework would hardly pose an existential threat to your favorite websites.

2) There will still be “cops on the beat.” Scary scenarios in which an ISP blocks content from its competitors will still be illegal. And even as the FCC steps aside from regulating the Internet, the Federal Trade Commission still has ample authority and expertise to hold ISPs to their promises and punish them if they engage in unfair methods of competition. State attorneys general also have the power to bring enforcement actions using state-level consumer protection laws.

3) The Internet has never “treated all traffic the same,” nor should it. Different kinds of data are sent over the Internet, and they don’t all need the same treatment. A half-second delay in delivering an email or part of a software update isn’t a big deal. The same delay for applications like real-time multiplayer games or video chats could render them unusable.

Additionally, some Internet applications are non-neutral. If you use T-Mobile’s Binge On, you get slightly lower-quality video in exchange for free streaming. That such a service hurts consumers would be news to those who have signed up for it in droves.

4) The issues you’re worried about might not be net neutrality concerns. We’ve all had bad experiences with our ISPs’ customer service departments, but those are separate issues. More regulation, especially the kind designed for a 1934 telephone monopoly, is not going to improve the situation.

5) More broadband deployment is the long-term solution. What will make things better is more competition in the marketplace, which means more broadband deployment from all sources, including wireline and wireless. Thus, instead of fixating on net neutrality, we should focus on removing barriers to deployment. The Title II regulations are one such barrier that has depressed investment. Repealing them will get us back on the road to faster Internet for all.

AT&T should acquire Time Warner despite DOJ challenge


The following post was co-authored by R Street Tech Policy Associate Joe Kane. 


This week, the Department of Justice (DOJ) formally challenged AT&T’s proposed acquisition of media conglomerate Time Warner by filing a complaint with the U.S. District Court for the District of Columbia to block the merger. Despite this challenge, the merger should be allowed to proceed, as both the facts and legal precedents strongly suggest that DOJ’s challenge lacks merit.

Time Warner produces video content and programming through its Warner Bros., Turner Broadcasting System and HBO subsidiaries. AT&T distributes content from Time Warner and other producers through its wireline, mobile and satellite networks. These two firms don’t compete against each other in any relevant market, so this represents a classic example of a vertical merger. It is very rare for the DOJ to challenge a vertical merger such as this, and even rarer for the courts to block one — it hasn’t happened in decades.

Vertical mergers are almost always pro-competitive and pro-consumer in nature. It’s horizontal mergers, in which competitors in the same market seek to combine, that are more likely to be problematic and thus subject to antitrust scrutiny. AT&T abandoned its attempt to acquire T-Mobile in 2011, for example, after the DOJ filed suit to block it. With vertical mergers, however, the combined firm can achieve valuable efficiencies that it can pass on to consumers in the form of lower prices and/or better products or services. And no firms exit the market, so consumer choice does not decrease. Thus, the benefits to consumer welfare from such mergers almost always exceed any corresponding harms.

Here, the efficiency gains that AT&T and Time Warner could achieve are both obvious and substantial. In addition to benefitting from economies of scale (e.g., by combining their legal teams or human resource departments), control over the entire chain of distribution for Time Warner’s premium video content — from the production studio to the viewer — could allow the combined firm to deliver premium content to AT&T subscribers at substantially lower costs, or to develop new service offerings to compete with the innovative video services being developed by the likes of Apple, Amazon, Netflix and Disney.

The combined AT&T-Time Warner may well have greater leverage and bargaining power in carriage negotiations — Time Warner may get better deals with other distributors when licensing its content and AT&T may get better deals with other programmers when licensing their content. That may squeeze competing programmers and distributors, including giants like Disney and Verizon, by eating into their profit margins and forcing them to innovate in order to survive in the market.

But the antitrust laws don’t protect competitors; they protect competition. The recent vertical merger between Comcast and NBCUniversal — which was allowed to proceed despite identical concerns about increased leverage in carriage negotiations — is indistinguishable from the proposed AT&T-Time Warner merger. There is simply no reason to change course now and block AT&T’s acquisition of Time Warner.

The DOJ surely knows how weak its case is, so expect to see further negotiations about merger conditions in the coming weeks. AT&T has already signaled that it’s unwilling to accept any structural conditions, such as divesting political lightning rod CNN, but a targeted behavioral condition governing the licensing of Time Warner’s content to competing online video distributors, like Sling TV, may be enough to grease the wheels and get this merger over the line.

Whether AT&T is willing to accept such conditions, or whether it presses its case in court to try to get the merger approved without any conditions, remains to be seen. Regardless, the benefits from the merger would be substantial and undeniable, far outweighing any likely harms. AT&T’s acquisition of Time Warner should be approved posthaste.


Kosar talks book publishing with CHCW podcast

R Street Vice President of Policy Kevin Kosar and food writer Monica Bhide were recent guests of the Charles Houston Community Writers, sitting down for an extended discussion of publishing on the group’s podcast. Video is embedded below:

Puerto Rico: Storms and savings


Puerto Rico has a long history of disastrous hurricanes, as demonstrated once again this year by the devastating Hurricane Maria. These disasters recur frequently, historically speaking, on an island located “in the heart of hurricane territory.” Some notable examples follow, along with descriptions excerpted from various accounts of them.

  • In 1867, “Hurricane San Narciso devastated the island.” (Before reaching Puerto Rico, it caused “600 deaths by drowning and 50 ships sunk” in St. Thomas.)
  • In 1899, Hurricane San Ciriaco “leveled the island” and killed 3,369 people, including 1,294 drowned.
  • In 1928, “Hurricane San Felipe…devastated the island”…“the loss caused by the San Felipe hurricane was incredible. Hundreds of thousands of homes were destroyed. Towns near the eye of the storm were leveled,” with “catastrophic destruction all around Puerto Rico.”
  • In 1932, Hurricane San Ciprian “caused the death of hundreds of people”…“damage was extensive all across the island” and “many of the deaths were caused by the collapse of buildings or flying debris.”
  • In 1970, Tropical Depression Fifteen dumped an amazing 41.7 inches of rain on Puerto Rico, setting the record for the wettest tropical cyclone in its history.
  • In 1989, Hurricane Hugo caused “terrible damage. Banana and coffee crops were obliterated and tens of thousands of homes were destroyed.”
  • In 1998 came Hurricane Georges, “its path across the entirety of the island and its torrential rainfall made it one of the worst natural disasters in Puerto Rico’s history”…“Three-quarters of the island lost potable water”…“Nearly the entire electric grid failed”…“28,005 houses were completely destroyed.”
  • In 2004, Hurricane Jeanne caused “severe flooding along many rivers,” “produced mudslides and landslides,” “fallen trees, landslides and debris closed 302 roads” and “left most of the island without power or water.”
  • And in 2017, as we know, there was Hurricane Maria (closely following Hurricane Irma), with huge destruction in its wake.

These are some of the worst cases. On this list, there are nine over 150 years. That is, on average, one every 17 years or so.

All in all, if we look at the 150-year record from 1867 to now, Puerto Rico has experienced 42 officially defined “major hurricanes”—those of Category 3 or worse. Category 3 means “devastating damage will occur.” Category 4 means “catastrophic damage will occur.” And Category 5’s catastrophic damage further entails “A high percentage of framed homes will be destroyed…Power outages will last for weeks to possibly months. Most of the area will be uninhabitable for weeks or months.”

Of the 42 major hurricanes since 1867 in Puerto Rico, 16 were Category 3, 17 were Category 4 and 9 were Category 5, according to the official Atlantic hurricane database.

Doing the arithmetic (150 years divided by 42), we see that there is on average a major hurricane on Puerto Rico about every 3.5 years.

There is a Category 4 or 5 hurricane every 5.8 years, on average.

And Category 5 hurricanes occur on average about every 17 years.
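The recurrence intervals above are simple division over the 150-year record; a quick sketch using the category counts cited from the hurricane database reproduces them:

```python
# Average recurrence intervals for major hurricanes in Puerto Rico,
# using the 150-year record (1867-2017) and the category counts
# cited above: 16 Category 3, 17 Category 4 and 9 Category 5 storms.
years = 150
cat3, cat4, cat5 = 16, 17, 9
major = cat3 + cat4 + cat5  # 42 major hurricanes in total

print(f"Any major hurricane: every {years / major:.1f} years")        # ~3.6
print(f"Category 4 or 5:     every {years / (cat4 + cat5):.1f} years")  # ~5.8
print(f"Category 5:          every {years / cat5:.1f} years")         # ~16.7
```

The text’s “about every 3.5 years” and “every 17 years” are the same quotients, rounded a little more loosely.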

There are multiple challenging dimensions to these dismaying frequencies: humanitarian, political, engineering and financial. To conclude with the financial question:

How can the repetitive rebuilding of such frequent destruction be financed? Thinking about it in the most abstract way, somewhere savings have to be built up. This may be either by self-insurance or by the accumulation of sufficiently large premiums paid for insurance bought from somebody else. Self-insurance can include the cost of superior, storm-resistant construction. Or funds could be borrowed for reconstruction, but they would have to be quite rapidly amortized before the next hurricane arrives. Or somebody else’s savings have to be taken in size to subsidize the recoveries from the recurring disasters.

Is it possible for Puerto Rico to have a long-term strategy for financing the recurring costs of predictably being in the way of frequent hurricanes, other than using somebody else’s savings?


Image by JEAN-FRANCOIS Manuel


Why cloture benefits both parties


Senate Rule XXII requires an affirmative vote of “three-fifths of the senators duly chosen and sworn” to invoke cloture, or end debate, on any “measure, motion, or other matter pending before the Senate … except on a measure or motion to amend the Senate rules, in which case the necessary affirmative vote shall be two-thirds of the senators present and voting.”

Consequently, cloture is typically understood today as making minority obstruction possible. A three-fifths vote is effectively required to schedule an up-or-down vote on most questions, absent the unanimous agreement of all 100 senators. However, ending debate on presidential nominations only requires a simple-majority vote. (Democrats used the nuclear option to reduce the threshold to invoke cloture on most nominees in 2013, and Republicans did the same for Supreme Court nominees earlier this year.)

Notwithstanding the recent use of the nuclear option, cloture remains a time-consuming process when the Senate is considering nominations and legislation. For most debatable measures, the entire process requires four calendar days to complete. This gives individual senators the ability singlehandedly to delay consideration of the majority’s agenda on the Senate floor simply by withholding their consent to expedite the decision-making process. Given this, the number of cloture votes is frequently cited as evidence of minority obstruction.

But there is more to cloture than minority obstruction.

It is certainly not incorrect to view cloture motions and minority obstruction as related. However, such a narrow focus overlooks the many advantages that the cloture rule offers Senate majorities. Then-Majority Leader Harry Reid, D-Nev., acknowledged these benefits in an exchange with then-Minority Leader Mitch McConnell, R-Ky., on the Senate floor in July 2012. “The filibuster was originally … to help legislation get passed. That is the reason they changed the rules here to do that.”

The majority, acting through its leadership, can use cloture to structure the legislative process to its advantage. When viewed from this perspective, the incidence of cloture votes also reflects an increase in the influence of the majority leader and, by extension, the majority party, in the Senate’s deliberations.

The evolution in the use of cloture during the second half of the 20th century increased the influence of the majority leader. Cloture is now utilized preemptively on a routine basis to speed consideration of legislation regardless of time spent on the floor. In this process, the majority limits the minority’s ability to debate measures freely and offer amendments pursuant to the Senate rules. Such behavior may simply result from the anticipation of expected obstruction by the minority party. It could also represent a genuine effort to push the majority’s agenda through the Senate unchanged in a timely manner. The restrictive process could also be utilized to defend carefully negotiated legislation from killer amendments or to protect majority party members from having to take tough votes.

The majority leader frequently uses cloture as a scheduling tool when the Senate considers major legislation. While filing cloture is a time-intensive process, it provides the only clearly established procedure for the resolution of debatable questions in the Senate. Thus, the cloture rule provides a small degree of certainty in an otherwise uncertain environment. The majority leader can use such certainty to his advantage by scheduling votes at the end of the week and immediately before a long recess to force an issue. Obstructing senators are less likely to risk the ire of their colleagues by forcing a rare weekend session.

The cloture rule also gives the majority leader the ability to impose a germaneness requirement on amendments to legislation post-cloture. Such a requirement may spare majority party members from having to take tough votes on nongermane amendments. It also protects carefully crafted legislation from poison-pill amendments unrelated to the underlying issue.

Finally, cloture is often utilized by the majority leader for symbolic purposes. By triggering an up-or-down vote on legislation, cloture establishes a clearly defined line of demarcation between the majority and minority parties on controversial issues. Such votes can be presented as take-it-or-leave-it propositions. The proponents of such measures can often portray the senators who vote against them as not supporting the underlying legislation.

Without the cloture process, the majority leader would not have these important, albeit limited, tools at his disposal, and he would thus be unable to structure the legislative process to the majority’s advantage using existing Senate rules. When combined with the practice of filling the amendment tree, the cloture process further allows the majority leader to limit the ability of individual senators to participate in the legislative process without having to change the Senate’s rules to reduce their procedural prerogatives.

The fact that the majority leader regularly files cloture early in the legislative process, before any actual obstruction can be said to have occurred on a measure, is illustrative of the benefits that Senate majorities derive from the cloture process. As the figure below demonstrates, the instances in which cloture has been utilized during the early stages of a measure’s consideration on the Senate floor have increased dramatically since 2001. This dynamic can be isolated, and the majority’s preemptive use of cloture more readily discerned, by comparing the total number of cloture motions filed in a Congress with the number remaining after excluding motions filed on the first day of a bill’s consideration or very early in the legislative process.

Cloture

The takeaway from this is that the cloture process may benefit both the majority and the minority parties in the Senate today.


Image by Jonathan O’Reilly


How the FCC’s media ownership reforms could save local news


The following post was co-authored by R Street Tech Policy Associate Joe Kane. 


Local news is in decline. As advertising revenues plummet and both reporters and subscription dollars increasingly flow to a handful of coastal media outlets, local newspapers and broadcasters throughout the rest of the United States struggle to get by.

Shifts in media consumption in the digital age are partly to blame, but these local news outlets also are hamstrung by arcane ownership restrictions that inhibit their ability to innovate and compete. The Federal Communications Commission’s decades-old restrictions on local media ownership may have made sense when Americans’ news outlets were limited to local newspapers, radio and three commercial TV broadcasters (ABC, CBS and NBC — Fox wasn’t formed until the mid-1980s). But with the rise of cable news and the commercial internet, these restrictions now skew the media marketplace and become more outdated every day.

Thankfully, this broken situation is about to be fixed. This week, the FCC is set to pass commonsense reforms to its local-media ownership rules that are long overdue. These updated rules will better reflect the realities of the current media landscape and allow local newspapers and broadcasters to compete with other media outlets on a level playing field. The changes include eliminating bans on media cross-ownership, updating local-broadcast merger rules and allowing broadcasters to enter into joint sales agreements (JSAs) for advertising without automatically qualifying as merged entities for purposes of the ownership restrictions.

FCC Chairman Ajit Pai recently outlined the importance of eliminating the cross-ownership bans. Like many FCC rules, the bans contemplated a siloed and heavily concentrated media market, which in no way resembles the cornucopia of media outlets available to Americans today. The cross-ownership bans date back to the 1970s, when local broadcasters and newspapers provided the only access to news in many markets. At that time, prohibiting any one owner from controlling both a radio station and a television station in the same market, or a newspaper and a television or radio station in the same market, was a way to ensure Americans had access to a diverse array of viewpoints and news sources.

However, with the rise of cable news and the internet, these cross-ownership bans no longer make any sense. Jeff Bezos (Amazon CEO and world’s richest man) was allowed to buy the Washington Post, and Facebook or Google legally could try to buy The New York Times. But a local broadcaster buying a struggling newspaper is strictly forbidden.

That simply makes no sense. Any merger that threatens to create a monopoly or substantially lessen competition (like those NYT hypotheticals) could still be blocked under general antitrust law. But many cross-ownership deals between local newspapers and broadcasters would raise few, if any, antitrust concerns, so the per se ban on them should be removed. Moreover, allowing cross-ownership between broadcasters and newspapers would likely lead to more coverage of local issues.

The FCC is also updating its rules for mergers among broadcasters, again to recognize the changing media marketplace. Previously, a top-four broadcaster and a smaller broadcaster were allowed to merge only if doing so left at least eight independently owned broadcast stations in the market. This so-called “Eight Voices Test” doesn’t count cable news or the internet as even a single “voice” in the market, which is absurd, given the effectively infinite capacity for independent voices on these platforms. Thankfully, the FCC is set to eliminate this outdated test and allow general antitrust law to govern these mergers instead.

Similarly, the FCC is relaxing its rule that prohibits all mergers between top-four broadcasters, choosing instead to review these mergers on a case-by-case basis. Currently, the FCC requires the four biggest TV broadcasters to be independently owned, regardless of how many other stations are in the market. This nationwide, bright-line rule is not appropriate in all markets. For example, in a market with two very large stations and several smaller stations, a merger between the third and fourth biggest stations could benefit both consumers and competition by putting greater pressure on the two biggest stations. In many cases, such a merger would be harmful, but employing case-by-case review will allow the FCC to evaluate actual market conditions, rather than sticking to a rigid line drawn in a bygone era.

Finally, the FCC is amending its rule that treats any broadcasters with JSAs as being under common ownership. Again, this is simply a case of the FCC modernizing its media ownership rules to bring them more in line with the antitrust rules that govern competition in every other sector. The current rules assume that if two broadcasters use a JSA in advertising sales, it automatically gives one station enough control over the other to amount to common ownership. It’s true that such arrangements can amount to collusion and unfair restraints on trade, depending on the degree of control they exert. But they can also greatly reduce costs for struggling broadcasters who cannot afford their own sales teams. The current restriction on JSAs harms the public interest by blocking these efficiency gains. Going forward, whether JSAs are attributable for purposes of ownership restrictions will be assessed under general antitrust standards.

The media marketplace is increasingly converging toward the internet and over-the-top services, yet the FCC’s local media ownership rules were devised before the internet even existed. The commonsense reforms the FCC has proposed for these antiquated rules are well overdue. By removing unnecessary restrictions and updating its standards, the FCC can balance the playing field, stimulate investment and help save local news media.


Image by Zerbor

Virtue signaling won’t save the planet, but state compacts might


Senior U.S. climate officials arrived Monday in Bonn, Germany, a week into the latest meeting of the United Nations-sponsored climate change project known as the Conference of Parties-23 (COP-23).

To no one’s surprise, the “rest of the world” (which is to say, Europe and the American political left, mostly) remains unhappy about the United States’ decision to withdraw from the Paris Climate Accord in June. Nonetheless, they are committed to finding a way to persuade the country (which is to say, the red states) to see the error of its ways.

Over the weekend, four Democratic governors from states with active environmental movements—Jay Inslee of Washington, Jerry Brown of California, Kate Brown of Oregon and Terry McAuliffe of Virginia—verbally thrashed the Trump administration, although Brown was taken aback when even he was booed and heckled by “climate justice” protestors.

But to no avail.

On Monday, as several of Trump’s most senior climate negotiators took part in a panel talk on “clean fossil fuels,” attendees started singing a clever protest song to the tune of Lee Greenwood’s “God Bless the U.S.A.”

But the Trump administration still plans to exit the Paris Agreement. What gives?

Suffice it to say, taking moral umbrage at the United States doesn’t have the same coercive power over American policy that the Pentagon’s nuclear umbrella over Europe, or the U.S. Navy and its 11 aircraft carriers keeping the world’s trade routes open, have had on global policy. Hence the distinction between “hard power” and “soft power” made many years ago by Harvard’s Joseph Nye.

The top-down approach to climate change the United Nations prefers was never going to work. Major climate meetings have been taking place for 23 years—hence the name COP-23—but have never succeeded in creating a workable international scheme. On two separate occasions, the United States has signed up for and then removed itself from a global climate agreement, first in 2001 under George W. Bush and now in 2017 under Trump.

Thankfully, a more decentralized approach to carbon policy is quietly gaining steam, as states and cities band together to pursue their own goals. Speaking during a panel discussion in Bonn, McAuliffe celebrated the recent election wins in Virginia, which ushered in a new swath of Democrats who will enjoy something like parity with Republicans in the Legislature’s lower house, not to mention a new Democratic governor, lieutenant governor and attorney general.

This means Virginia will likely become a member of the nine-state Regional Greenhouse Gas Initiative, which has had some success cutting emissions from the power sector. Carbon markets are less economically efficient than a carbon price, but since RGGI’s creation in 2005, carbon emissions in its member states have fallen 40 percent, thanks in large part to the development of natural gas reserves from hydraulic fracturing. While RGGI is not an ideal vehicle to place a market price on carbon, this type of compulsory, cost-sharing system is the longest-lasting successful carbon market still in existence.

Along with Virginia, the election of a Democrat to replace outgoing Republican Gov. Chris Christie of New Jersey also means that state may rejoin RGGI, after leaving the group in 2011.

In other words, the growth of regional carbon markets is still a going concern. It even could force real U.S. emissions reductions in the coming years, even as the sound and fury of U.N. meetings along the Rhine continue to signify nothing.


Image by r.classen


Massachusetts carbon tax bills are a mixed bag


The search for climate change solutions that keep science front and center and political preferences secondary has led to one frustration after another. But that soon may change. A revenue-neutral carbon tax holds the promise of reducing carbon emissions without increasing the size of government – the principal objection of conservatives who long have been skeptical of more prescriptive climate regulations.

If properly designed, a revenue-neutral carbon tax can employ market-based incentives, rather than government regulations and subsidies, to ensure that pollution is appropriately priced. While no U.S. state has adopted a carbon tax to date, much less a revenue-neutral version, Massachusetts is poised to become the first state to successfully pass a hybrid version.

Two carbon-pricing bills of note have been filed this legislative session. S.1821, filed by state Sen. Michael J. Barrett, D-Lexington, and H.1726, filed by state Rep. Jennifer Benson, D-Lunenburg, both seek to assess fees on carbon emissions. Yet the Senate bill is, as a matter of both politics and policy, by far the better of the two.

Barrett’s bill would simply assess a fee on emissions without adding to government bureaucracy. That way, Massachusetts taxpayers would pay only for the price of their pollution, and no more. Conversely, the House proposal would divert 20 percent of the revenue generated by the tax into a so-called “Green Infrastructure Fund” to support investments in “transportation, resiliency and clean energy projects that reduce greenhouse gas emissions, prepare for climate change impacts, assist low-income households and renters in reducing their energy costs, and create local economic development and employment.”

While that laundry list of well-intentioned spending certainly aspires to assist the commonwealth, there’s no indication that it will better dispose of the funds it collects than private actors would under a system in which carbon emissions are appropriately priced. In other words, a revenue-neutral carbon tax could achieve all of the benefits sought by establishing a green infrastructure fund, without creating new government programs or adding to government waste.

A revenue-neutral carbon tax need not harm Massachusetts’ economy, as evidenced by the well-balanced policy approach taken by the western Canadian province of British Columbia, which adopted a similar fee-and-dividend approach to carbon pricing in 2008. In fact, the United Nations Framework Convention on Climate Change estimates B.C.’s tax has reduced the province’s emissions by up to 15 percent, with no observable drag on overall economic performance. Indeed, between 2007 and 2014, British Columbia’s real gross domestic product grew by 12.4 percent, stronger than the Canadian average.

The only downside to the Senate bill is that it, unfortunately, does nothing to reduce the tax burden on Massachusetts residents. Rather than use the fee to lower, say, the income tax, the revenue would finance a Carbon Dioxide Emissions Charges Rebate Fund. All proceeds would be returned to residents and employers in the form of rebates. Analysis from the Center on Budget and Policy Priorities concludes that if large rebates were distributed through an efficient delivery system, they would be able to protect low-income households from the brunt of the tax, but would not fully cover households and businesses with large carbon footprints.

A better approach would be to apply the revenues to reduce or eliminate more destructive taxes like the corporate excise tax or the personal income tax. Taxing bad things, like carbon emissions, rather than good things, like labor and investment, would build ongoing support for the carbon tax among citizens and businesses and allow any negative effects to be more than offset by a growing state economy.

While critics in both parties and on both sides of the climate change debate may find fault with the Senate bill, it is a step in the right direction for the country and the commonwealth. If enforced properly, this legislation will reduce harmful carbon emissions and benefit Massachusetts residents and businesses, without contributing to the stream of wasteful government spending and unnecessary bureaucratic growth.

If legislators and environmental groups are serious about addressing climate change, they should do so in a way that truly benefits everyone. If, in fact, the goal is to reduce carbon emissions and bring economic benefits to residents and businesses, a revenue-neutral carbon tax is the best way forward.


Image by funnybear63

 

A new development involving the Congressional Review Act


A debate has broken out in the regulatory-reform community this past year over how properly to construe the reach of the Congressional Review Act. Traditionally, most observers have viewed the CRA as a tool by which Congress could repeal new regulations issued within the last 60 legislative days. But some legal scholars have argued that, while this is broadly correct, it’s far from clear when the CRA’s 60-day clock should start ticking.

Paul Larkin of the Heritage Foundation is among those to point out that, under the CRA’s text, the clock cannot start until the regulation in question has been submitted to Congress. Because many agency rules are never officially submitted to Congress—even ones promulgated many years ago—the 60-day clock was never activated for those rules, and Congress could thus still repeal them using the fast-track mechanism.

Another component of this debate has been clarifying what, exactly, constitutes a “rule” for CRA purposes. The text of the CRA incorporates the Administrative Procedure Act’s definition of “rule,” which as Larkin points out, “has been recognized as quite broad.” This broader interpretation of the term “rule” could encompass informal agency actions like policy statements or guidance, which do not go through the more formalized process of notice-and-comment rulemaking under the APA.

Congress has so far appeared reluctant to embrace this broader interpretation of the CRA’s text and use it to repeal rules and other agency action stretching back into previous administrations. But that could be changing. The Wall Street Journal editorial board and other media outlets are reporting that Sen. Pat Toomey, R-Pa., recently asked the Government Accountability Office to issue a determination as to whether a 2013 leveraged-lending guidance document from the Obama administration constituted a “rule” for CRA purposes.

The GAO has now issued its ruling, concluding that the lending guidance was, in fact, a rule under the CRA, meaning it is eligible for repeal under the act. Further, under Senate precedent, the publication of a GAO report such as this one is treated as the official trigger for the CRA’s 60-day legislative clock. As the nonpartisan Congressional Research Service has noted:

In some instances, an agency has considered an action not to be a rule under the CRA and has declined to submit it to Congress… In the past, when a Member of Congress has thought an agency action is a rule under the CRA, the Member has sometimes asked GAO for a formal opinion on whether the specific action satisfies the CRA definition of a ‘rule’ such that it would be subject to the CRA’s disapproval procedures.

GAO has issued 11 opinions of this type at the request of Members of Congress. In seven opinions, GAO has determined that the agency action satisfied the CRA definition of a ‘rule.’ After receiving these opinions, some Members have submitted CRA resolutions of disapproval for the “rule” that was never submitted…

Members have had varying degrees of success in getting resolutions recognized as privileged under the CRA even if the agency never submitted the rule to Congress. It appears from recent practice that, in these cases, the Senate has considered the publication in the Congressional Record of the official GAO opinions discussed above as the trigger date for the initiation period to submit a disapproval resolution and for the action period during which such a resolution qualifies for expedited consideration in the Senate…

It remains to be seen if Congress will pursue a resolution of disapproval under the CRA to repeal this particular rule on leveraged lending, but if it does, the potential implications run deep. Congressmen could ask GAO to issue more opinions determining whether past agency actions constitute rules for CRA purposes, and then seek to repeal them. The law firm Cleary Gottlieb observed in a memorandum on this development:

The GAO’s Leveraged Lending Opinion casts a shadow of uncertainty over the applicability and future viability of the Agencies’ leveraged loan supervision regime, and critically, other agency actions that could be characterized as ‘rules’ subject to Congressional disapproval. In fact, if Congress seeks to address other agency ‘rules’ that were never submitted to Congress under the CRA, the total volume of agency interpretations and statements of policy that could potentially become subject to Congressional disapproval would be very large indeed.

The Red Tape Rollback project (of which the R Street Institute is a partner) has been compiling a list of agency actions and rules that were never properly submitted to Congress and are therefore potentially still eligible for repeal via the CRA. We’ll see where Congress goes from here, but it’s possible it could be on the brink of adopting a broader interpretation of the CRA.


Image by iQoncept

 
