Omnibus provision expands public access to non-confidential CRS reports

Today Congress passed its $1.3 trillion omnibus spending bill. A small provision (pages 1082 and 1092 to 1104) will offer a significant public benefit: it directs the Congressional Research Service (CRS) to expand public access to its reports. This provision also requires the Librarian of Congress to work with CRS to establish a new website to host these newly accessible resources.

R Street has long helped spearhead the campaign for public access to CRS reports. R Street Vice President of Policy Kevin Kosar, himself a veteran of CRS, applauded this development:

For decades only lobbyists and insiders had access to CRS reports, which explain the workings of government agencies and policies. Happily, Congress has remedied this gross inequity by making the reports available to everyone.

Broader access to CRS’ nonpartisan reports will help media and the public better discern the truth in a world flooded with half-truths and outright nonsense.

R Street applauds the work of Sens. Patrick Leahy, D-Vt. and John McCain, R-Ariz., and Reps. Kevin Yoder, R-Kan.; Tim Ryan, D-Ohio; Leonard Lance, R-N.J.; Blake Farenthold, R-Tex.; and Mike Quigley, D-Ill., along with all of the members who were co-sponsors of standalone legislation and who played instrumental roles in this victory.


Don’t punish Uber and Lyft for DC Metro’s problems


A new proposal from Washington, D.C. Mayor Muriel Bowser is set to raise a variety of taxes, including increased fees for transportation network companies (TNCs) like Uber and Lyft, in order to help prop up D.C.’s floundering metro system – the Washington Metropolitan Area Transit Authority (WMATA). The proposal, which is part of the Mayor’s draft budget, would increase city fees on each trip from 1% to 4.75% – costs that will inevitably be passed on to consumers.

While it’s true that the city’s metro rail system has financial problems, an additional fee on TNCs is a misguided approach to fund and fix WMATA’s problems.

Part of the transit system’s struggles stem from efforts to improve system safety and catch up on neglected maintenance. But it is also because ridership, which has been in decline since 2012, is trending downward faster than ever (perhaps it’s all the fires?).


metro chart

Source: Understanding Rail and Bus Ridership, WMATA, October 2017.

Critics argue that part of this decline is due to TNCs taking away riders. Those critics are correct – at least in part. According to a report from WMATA, many would-be riders opt for TNCs over the metro – particularly at night. However, TNCs are just one of many factors that have contributed to WMATA’s falling ridership numbers – telework, cheaper gas prices, a drop in reliability, and safety concerns also play a role.

Additionally, the advent of TNC carpool services such as Lyft Line and UberPool has made it cheaper (in some circumstances) to take a TNC than to jump on the Metro. Via, a newer entrant to the market, offers shared rides anywhere in D.C. for $2.95 (a ride on Metrorail costs anywhere from $2 to $6). For many commuters, this is an attractive alternative to using the local bus or rail system.

A recent R Street paper – Beyond legal operation: The next ridesharing policy challenges – argues that cities should let TNCs compete with public transit, rather than trying to “even the playing field” through taxes or regulations. If TNCs provide a service that consumers prefer, despite hidden subsidies that drive artificially high demand for the alternative, those consumers should be free to choose that service without the paternalistic meddling of the city council.

Furthermore, the increasing popularity of ridesharing may not be totally at odds with public transportation. A recent study by the American Public Transportation Association found that people who use shared modes of transportation like TNCs are less likely to own a car and more likely to use public transit. If this dynamic is true in the D.C. metro area, imposing a fee on TNCs could do more harm than good to WMATA’s overall health. On the other hand, if more commuters were to use a combination of TNCs and transit, it could go a long way towards reducing the burden on public infrastructure (such as parking or peak strain on public transit).

To be clear, Mayor Bowser’s proposal unfairly punishes TNCs for offering a superior product, and will ultimately give us all worse options for getting where we need to go. If we have any hope of making services like the D.C. Metro more reliable, cleaner and safer (i.e. less likely to catch fire), we should allow TNCs and transit to compete.


Image credit: jorik

How much has the dollar shrunk since you were born?


The depreciation of the U.S. dollar’s purchasing power has been endemic from the post-World War II years up to today. It got completely out of control in the 1970s and has continued ever since, although at a lower rate. Our fiat-currency central bank, the Federal Reserve, has formally committed itself to perpetual depreciation of the currency’s purchasing power – otherwise known as 2 percent inflation – every year, forever.

It is hard to grasp intuitively how large the effects of compounding are over long periods of time, whether something is growing or shrinking. In this case, compounding determines how much average prices have multiplied and how much the dollar has shrunk.

The following table simply shows the Consumer Price Index over seven decades, starting with 1946. For each year, it calculates how many times average prices have multiplied from then to now, and how many cents were then equivalent to one 2017 dollar. For example, in 1948, I was in kindergarten. Since then, prices have multiplied by a factor of 10. Today’s $1 is worth what $0.10 was then. To take another example, in 1965, I graduated from college and luckily met my future wife. Prices have since multiplied 7.8 times. And so on.
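For readers who want to check the arithmetic, the table’s two figures for each year are just ratios of the Consumer Price Index. Here is a minimal sketch in Python, using approximate annual-average CPI-U values (the table’s exact source figures may differ slightly):

```python
# Approximate annual-average CPI-U values (1982-84 = 100); the table's
# exact source figures may differ slightly.
cpi = {1946: 19.5, 1948: 24.1, 1965: 31.5, 1977: 60.6, 2017: 245.1}

BASE_YEAR = 2017

for year in sorted(cpi):
    multiple = cpi[BASE_YEAR] / cpi[year]   # how many times average prices have multiplied
    cents_then = 100 / multiple             # cents in `year` equivalent to one 2017 dollar
    print(f"{year}: prices multiplied {multiple:.1f}x; {cents_then:.0f} cents then = $1 now")
```

Running this reproduces the examples above: prices have multiplied roughly 10 times since 1948 and about 7.8 times since 1965.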

You may find it interesting to pick a year—say the year you were born, graduated from high school, first got a regular paycheck, got married or bought a house—and see how much average prices have multiplied since. Next, see how many cents it took at that point to have the equivalent purchasing power of $1 now. In my experience, most people find these numbers surprising, including the changes from more recent times – say, the year 2000. They become inspired to start remembering individual prices of things at various stages of their own lives.

Multiplying Prices and the Shrinking Dollar over Time, 1946-2017

price level chart

You can also project the table into the future and see what will happen if the future is like the past.

Since average prices can go up over 10 times in the course of a single lifetime—as the table shows they, in fact, have—it is easy to see one reason it is hard to generate sufficient savings for retirement. You have to finance what things will cost in the future, when you are retired. In the last 40 years (see 1977 on the table), average prices have quadrupled. Then, $0.25 bought what $1 does now. So if you are 40 years old now, by the time you are 80, prices would quadruple again. Good luck with your 401(k)!
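Projecting the table forward is the same compounding in reverse. A small sketch, with the assumed inflation rate stated explicitly (it is an assumption, not a prediction):

```python
def price_multiple(annual_inflation: float, years: int) -> float:
    """How many times average prices multiply over `years` at a constant annual rate."""
    return (1 + annual_inflation) ** years

# The Fed's 2 percent target compounded over 40 years: prices a bit more than double.
print(round(price_multiple(0.02, 40), 2))   # ~2.21

# The roughly 3.5 percent average rate implied by prices quadrupling over the
# last 40 years would, if repeated, quadruple them again.
print(round(price_multiple(0.035, 40), 2))  # ~3.96
```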

Image by forestpath

Atlantans will soon behold a new sight lumbering down major thoroughfares – self-driving semi-trucks.

The sky-blue big rigs are part of a pilot program launched by Waymo, a pioneer in the development of highly automated vehicle technologies. Last year in Arizona, Waymo rolled out one of the first fleets of vehicles capable of operating without driver input. Since that time, the company has broadened the scope of its work from passenger vehicles to an application that will have immediate, widespread impact – heavy trucking.

After vetting jurisdictions nationwide, Waymo’s leadership concluded that Atlanta, as a logistics and research hub, was an ideal location to introduce the newest application of its self-driving systems. While some have bellowed about safety concerns, Atlantans should embrace these self-driving tractor trailers because of the promise they hold to cure some of the nation’s most pressing transportation issues. In time, the technologies being tested on Waymo semis may be a part of the solution to the very problems currently plaguing Atlanta roadways, including fatalities and traffic congestion.

When it comes to the first of those problems – roadway fatalities – 2017 was a grisly year. In fact, there were roughly 40,000 automobile accident deaths across the nation, including 1,550 in Georgia alone. And after years of decline, vehicle fatality rates have begun trending upward again, driven in part by distracted driving – often in connection with texting behind the wheel.

Self-driving technologies offer a solution to human error-related accidents in general, but their potential to prevent semi-truck fatalities is where the rubber really hits the road. This is because human error accounts for 94 percent of vehicle accidents. Truckers who are fatigued, impaired or merely distracted represent a heightened risk for those with whom they share the road, since big rigs are 20-30 times heavier than traditional passenger vehicles. When they crash, the results are often catastrophic. In 2016, semis accounted for more than 10 percent of all vehicular fatalities.

For that reason, the arrival of self-driving technologies will be a safety game-changer. Waymo’s new self-driving semis will aid truckers by cutting down on many of the factors that lead to accidents, relieving drivers of the need to maintain the same unbroken level of attention for hours on end.

Beyond essential life-saving benefits, self-driving semis also have the potential to alleviate the interstate gridlock for which Atlanta has become infamous. That’s because congestion is caused by a host of factors that are largely the result of human error, including accidents, rubbernecking and a phenomenon called a “phantom traffic jam” – in which drivers just inexplicably slow down. Fortunately, highly automated technologies are not prone to rubbernecking or any other delay-inducing behaviors that contribute to traffic congestion.

In fact, a recent study found that replacing as few as 5 percent of automobiles with self-driving vehicles would dramatically reduce traffic jams caused by the human penchant to “stop-and-go” in traffic. This will likely lower blood pressure for drivers throughout Georgia.

Be it for safety reasons or as a simple matter of convenience, Atlantans should celebrate the arrival of self-driving vehicles and self-driving semi-trucks in particular. As traditionally-operated vehicles are more frequently replaced by driverless alternatives, Georgians can look forward to a day in which traffic is no longer an unadulterated nightmare and vehicle-related fatalities are far less common.

What’s more, Georgia’s willingness to be a leader in the testing and deployment of highly automated vehicles serves as a model for the rest of the country. The fact that California firms would travel across the nation to refine their technologies in Georgia speaks volumes about the political environment our leaders have cultivated. Embracing an approach to innovation predicated on flexibility and freedom, rather than rigidity and regulatory permission, is what will position Georgia to be a leader in the 21st century. The shiny blue semis that will soon be traveling down our roads are something of which we should all be proud.


Occupational licensing hurts Georgians

Like many Georgians, I’ve found licensing requirements in other states somewhat amusing. For instance, Massachusetts requires fortune tellers to be certified to divine the future and Louisiana strangely requires florists to be licensed to sell their goods.

Yet Georgians are not immune from ludicrous licensing schemes. The Peach State has its fair share of similarly bizarre requirements. Pre-need cemetery sales agents, auctioneers and even librarians must be certified. Perhaps a license really is the only method of ensuring that librarians really grasp the antiquated Dewey Decimal System, but common sense suggests otherwise.

While at first blush these requirements are amusing, their effects are often as insidious as they are profound. Some licensing boards require applicants to clear costly and time-consuming hurdles, which often include mandated minimum education, annual continuing education, numerous exams and, of course, paying educational institutions and the state for the privilege of being allowed to work. The expenses associated with these requirements are far from minor. In fact, a recent study found that Georgia requires on average $185 in fees, over 460 days of education and the passage of two exams in order to acquire a license in any given profession.

The significance of the challenges presented by licensing regimes has grown over time as the number of professions subject to them has risen. In fact, since 1950, the percentage of workers required to obtain licenses to practice their profession has jumped from 5 percent to nearly 30 percent.

In practice, these mandates often create unnecessary, unfair and arbitrary barriers to employment. Consider that in Georgia, while emergency medical technicians must complete only 110 hours of education, barbers must complete a whopping 1,500 hours of education.

These requirements are more than simply arbitrary; they also prevent individuals from obtaining work and take those already employed away from their jobs to fulfill continuing education requirements. While these onerous mandates negatively impact a host of workers, more often than not, they have a heightened adverse effect on the underprivileged, who can ill-afford the educational and licensing expenses or missing work to satisfy the mandates.

My father once told me, “If you have to pay to work, then it is probably a scam.” Those sage words largely ring true here. Licensing requirements in many instances are not only wholly unnecessary, but costly, which sounds very much like a scam. In many cases, the mandates exist at the behest of industry professionals to ensure fewer individuals are admitted to their profession. This artificially provides seasoned practitioners with better job security and wages, given that it limits their competition.

Other licensing requirements purport to protect consumers from unqualified or unscrupulous practitioners. However, rather than relying on a bureaucratic government licensing system, non-government, third-party organizations can accomplish the same objectives without many of the burdens. Institutions like the Better Business Bureau, Yelp and Angie’s List already do this, which renders much of the state’s occupational licensing unnecessary. Alternatively, requiring private businesses and practitioners to be bonded and/or subject to public inspections could also encourage proper behavior and mitigate risks to consumers in a less-disruptive manner.

Occupational licensing has become unnecessarily onerous in Georgia and an undeniable burden on the Peach State’s workforce. Fortunately, help may soon be on the way. Earlier this legislative session, Rep. Buzz Brockway, R-Lawrenceville, introduced HR 744 to encourage the Georgia Occupational Regulatory Review Council to periodically audit current boards that grant individuals occupational licenses, and the House of Representatives approved the measure.

The resolution is intended to determine which, if any, occupational licenses are actually necessary. HR 744 is a positive, if symbolic, step in the right direction. Yet given the drag occupational licensing imposes on the state’s economy, the proliferation of such licensing demands more immediate and concrete legislative action than periodic reviews.

Georgia was recently rated the number one place to do business. While this is a proud distinction, Georgia is dubiously ranked as the 14th most burdensome state when it comes to licensing for lower-income occupations. Clearly more needs to be done, possibly including reducing the obstacles to obtain these licenses or repealing unnecessary licensing laws altogether. At least HR 744 starts the conversation.


The Roots of Section 230 Show Why We Shouldn’t Abandon Intermediary Liability Protection


Back in the early days of the Electronic Frontier Foundation (I was the organization’s first employee) we at EFF were united by the belief that the online world, whether we were talking about privately owned platforms or the open internet, was going to be a central locus for freedom of expression. EFF was founded in 1990, a few years before the internet was opened up for commercial development, which led unsurprisingly to an abrupt climb in public participation on the Net. So we had a head start when it came to anticipating how free-speech issues would play out online.

One early obvious source of social tension arising from growth of this new medium was government anxiety and intervention, first focused on the threat of hackers and computer intrusion. Quickly thereafter the calls for government intervention centered on issues such as encryption (which makes wiretapping harder), copyright infringement, the easy spread of pornographic content, the prospect of cyberterrorism, “bomb-making information on the internet” and so-called “hate speech.” In its first decade, EFF tackled those issues, partly to stop government overreach but also to create breathing space for individuals and companies to develop online services and internet “spaces” where free speech could flourish.

EFF’s primary focus in those years was challenging government impulses to control, constrain and regulate internet speech. But at the same time we saw cases arise from the actions and policies of then-dominant platform providers (in those days they included the highly-moderated Prodigy, plus the somewhat less moderated CompuServe and AOL). Because these companies hosting digital forums were not bound by the First Amendment—yet might, for all we knew, become the market-dominant platforms of the internet era—we at EFF believed it was best that they join a social consensus that allowed digital freedom, freedom of expression and freedom of association. The First Amendment is a limitation on government action but, we argued, private companies ought to value freedom and privacy too.

As I wrote in an (unsigned) editorial in EFF News (volume 1, number 0): “We at EFF do not dispute that Prodigy is acting within its rights as a private concern when it dictates restrictions on how its system is used. We do think, however, that the Prodigy experience has a bearing on EFF interests in a couple of ways. First, it demonstrates that there is a market – a perceived public need – for services that provide electronic mail and public conferencing. Second, it illustrates the fallacy that ‘pure’ market forces always can be relied upon to move manufacturers and service providers in the direction of open communications. A better solution, we believe, is a national network-access policy that, at the very least, encourages private providers to generate the kind of open and unrestricted network and mail services that the growing computer-literate public clearly wants.”

On the other hand, we knew early on that in order to liberate the platform and service providers—to give breathing space to free expression and privacy—it was critical that neither statute, nor regulation, nor caselaw compelled providers to move in the opposite, more restrictive, direction. Drawing upon earlier Supreme Court precedent, a federal district court case called Cubby, Inc. v. CompuServe, Inc. (1991) suggested that the best way to classify online platforms was as something not quite like a common carrier (e.g., a postal service or telephone company) and not quite like a publisher (Penguin or the New York Times) either.

In Cubby, a federal judge suggested (in a closely reasoned opinion) that the proper First Amendment model was the bookstore – bookstores, under American law, are a constitutionally protected space for hosting other people’s expression. But that case was misinterpreted by a later decision (Stratton Oakmont, Inc. v. Prodigy Services Co., 1995), so lawyers and policy advocates pushed to include platform protections in the Telecommunications Act of 1996 that amounted to a statutory equivalent of the Cubby precedent. Those protections, in Section 230, allowed platform providers to engage in certain kinds of editorial intervention and selection without becoming transformed by their actions into “publishers” of users’ content (and thus legally liable for what users say).

In short, we at EFF wanted platform providers to be free to create humane digital spaces without necessarily acquiring legal liability for everything their users said and did, and with no legal compulsion to invade users’ privacy. We argued from the very beginning about the need for service providers to be just, to support human rights even when they didn’t have to and to provide space and platforms for open creativity. The rules we worked to put into place later gave full bloom to the World Wide Web, to new communities on platforms like Facebook and Twitter and to collaborative collective enterprises like Wikipedia and open-source software.

In pure economic terms, Section 230 (together, it must be said, with the Digital Millennium Copyright Act’s notice-and-takedown provisions regarding copyrighted works) has been a success—the leading internet companies (among Western democracies at least) have been American. Section 230, with its bright-line rules barring internet services’ legal liability for content originated by a service’s users (rather than the services themselves) brought the Cubby model into the 21st century. Services could “curate” user content if they wanted to (just as a bookstore has a First Amendment-grounded right to choose which books it carries and sells), but wouldn’t be liable either for content they overlooked or for content they had (mis)judged to be lawful. In the digital world, Section 230 gave the platforms something like common-carriage legal protections but also autonomy to shape the character of their online “spaces.”

Because some platforms have been hugely successful, and because market shakeouts have left some players like Facebook and Google dominant (at least for now), other players have sought to roll back Section 230. Most recently the ostensible focus has been on sex trafficking (and commercial sexual services generally), which some critics believe has been made worse by online platforms like Backpage—even though Backpage almost certainly isn’t protected by Section 230 given the service’s role in originating sex-service content. But, really, the concern about internet sex-trafficking is being used as a stalking horse for players who are looking for opportunities either to sue the platforms and win big bucks or to impose stronger censorship obligations on the platforms for a variety of reasons—not the least of which is today’s moral panics about social media and big tech, which I’ve written about here and here.

This isn’t to say we should never revisit Section 230 and consider whether its protections need to be refined. Maybe they do. But given that there is a larger moral panic going on about social media (e.g. Russian-sponsored trolls using Facebook to push for Brexit or Donald Trump’s election), we shouldn’t rush to judgment about amending or repealing Section 230. Most ordinary internet users love Google and Facebook (even when they’re sometimes irritated by what they find on these and other platforms). We ought not to hurriedly undermine the legal protections that allowed these American success stories to flourish. Even if today’s internet giants can survive the loss of Section 230 and absorb the costs of censorship compliance, new market entrants likely can’t. Which means that hobbling 230 will stifle the competition that got us to today’s rich internet in the first place.



Image credit: kryzhov

Nothing is inevitable in politics, even in the Senate


The future is not inevitable. And it cannot be predicted with certainty. There is no iron law of history according to which events unfold inexorably.

This is especially true in Congress where, according to Stanford University political scientist Keith Krehbiel, members with incomplete information make policy decisions on a daily basis under conditions of uncertainty.

Yet despite this, a number of senators appear confident that they can predict with certainty what will happen in the future. In my previous post, I pointed to recent comments by Ted Cruz (R-Texas) regarding the filibuster. He confidently asserts that its days are numbered. “I think if the Democrats ever regain the majority, they’ll end legislative filibuster…That’s where their conference is.”

Cruz’s comments reflect the broader sentiment among Senate Republicans. That is, they are certain that their Democratic colleagues will eliminate the filibuster in the future when they retake the majority. The irony is that Republicans’ confidence about the fate of the filibuster has driven them to push for abolishing it first.

Political theorists call this dynamic the Oedipus effect, a term coined by Karl Popper. The Oxford English Dictionary defines the Oedipus effect as “the influence of a prediction on the predicted event.” Today, we commonly describe it as a self-fulfilling prophecy.

With regard to the filibuster, the fear that it will not exist in the future is leading many Republicans to call for its elimination today. And in doing so, they would partially confirm their earlier predictions – the only mistake being that Republicans, not Democrats, would be responsible for eliminating the filibuster.

The good news for those concerned about the filibuster’s fate is that its elimination is not inevitable. In my previous post, I listed a number of reasons why Republicans cannot assert with certainty what Democrats will do based solely on the fact that they nuked the filibuster in 2013.

Simply acknowledging Republicans’ inability to predict the future does little to reduce the anxiety they feel as a result of its uncertainty. In a sense, they should be heartened by that uncertainty. The fact that the future is not predetermined makes it possible for senators to influence what will happen in it.

The ability to obstruct, at least as we commonly think about it today, is rooted in the Standing Rules of the Senate. Specifically, the ability to filibuster is granted by Rule XXII. Rule XXII requires the support of two-thirds of all senators present and voting to end debate on a proposal to change the Standing Rules. This effectively precludes the majority party from altering the text of those rules over the objections of the minority party.

Yet notwithstanding these super-majoritarian hurdles, Senate majorities have always had the ability to determine the institution’s rules. They may overcome the super-majoritarian barriers erected by Rule XXII by establishing a new precedent by simple-majority vote. While minorities may filibuster such efforts, their appeals can be tabled without debate, also by a simple-majority vote. As a consequence, the majority has the technical means to overcome minority obstruction so long as it is willing to do so.

But Republicans may also deter a future Democratic majority from using its power to eliminate the filibuster. Specifically, they can make credible threats to retaliate should Democrats threaten the nuclear option. This links the Democratic majority’s efforts to go nuclear with sub-optimal outcomes for individual Democrats should they do so. The expectation of increased costs should deter a sufficient number of Democrats from supporting the nuclear option to the extent that the Republicans’ retaliatory threats persuade them that it would be more difficult to achieve their individual goals in a post-nuclear Senate.

Previous work in this area has focused almost exclusively on Rule XXII and the cloture process when explaining how Senate minorities may use parliamentary procedure to constrain the majority. Yet a limitation of such approaches is that they do not demonstrate precisely how those procedures ultimately circumscribe the ability of Senate majorities to change the institution’s rules in the first place. It is true that provisions of Rule XXII requiring a three-fifths super-majority to end debate on nominations and legislation and a two-thirds super-majority to end debate on proposals to change the Standing Rules empower the minority and constrain the majority. Yet Rule XXII itself is subject to alteration by a simple-majority. Thus, the super-majoritarian provisions of Rule XXII are themselves insufficient to prevent the utilization of the nuclear option. In short, Rule XXII itself cannot constrain a determined majority to the extent that it may be changed or circumvented by the nuclear option.

The Constitution, along with the relative importance of the majority’s agenda in a particular Congress, gives Senate minorities the necessary leverage with which to protect the procedural prerogatives granted to them by the institution’s rules and practices. Specifically, these are the provisions in Article I, section 3, clause 4 designating the Vice President as the Presiding Officer of the Senate and Article I, section 5, clause 3 stipulating that any member may call for a recorded vote with a sufficient second.

To the extent that these constitutional provisions make certain retaliatory tactics resistant to restriction via the nuclear option, and to the extent those tactics impose costs on members of the majority party, then the threat to use them makes it possible for Republicans to prevent an otherwise willing Democratic majority from eliminating the filibuster in the future.

Specifically, Republicans can increase the costs of processing nominations and considering routine legislation for Democrats – even in a post-nuclear, majoritarian Senate – by requiring a recorded vote for confirmation and passage, respectively. Doing so would increase the physical costs for senators and negatively impact other priorities on the Democrats’ legislative agenda due to the time required to conduct recorded votes.

Similarly, Republicans can increase the political costs of passing that agenda for individual Democrats by forcing votes in relation to politically difficult amendments. They can do so by offering a so-called third-degree amendment despite the majority leader filling the amendment tree and then appeal the subsequent ruling of the Chair that the amendment is not in order. Doing so forces a recorded vote in relation to the amendment and/or the majority to filibuster the appeal.

Both tactics are resistant to restriction via the nuclear option. The only real way to prevent individual senators from offering prohibited amendments and then appealing the ruling of the Chair to force a recorded vote is to have the Presiding Officer not call on senators seeking recognition. Setting aside the impracticality of barring some members, or all members, of the minority party from speaking on the Senate floor in perpetuity, the institution’s constitutional structure effectively precludes chamber majorities from delegating such authority to the Presiding Officer.

Article I, section 3, clause 4 of the Constitution stipulates: “The Vice President of the United States shall be President of the Senate.” The Constitution only allows the Senate to select its Presiding Officer in the absence of the Vice President. Yet any power senators delegate to the President pro tempore will also be available for the Vice President to use whenever he assumes his role as the Presiding Officer of the Senate. As the constitutionally designated Presiding Officer, the Vice President is thus charged with administering the Senate’s rules and ensuring order. Yet because the Vice President is not directly accountable to the Senate, its members have historically been unwilling to delegate significant authority to the President pro tempore, whom the Senate may select, because they cannot prevent the Vice President from assuming the Chair and exercising that authority in such a way that would be harmful to their interests.

While obstruction and the value of the Senate’s time have both increased significantly in recent years, it is unlikely that senators would reevaluate delegating significant authority to the Presiding Officer in the face of minority retaliation for going nuclear. Imagine a Democratic majority allowing Vice President Mike Pence, or a Republican majority allowing Vice President Elizabeth Warren, a significant voice in how the Senate sets its agenda and conducts its business! In the absence of a strong Presiding Officer, the tactic of appealing the ruling of the Chair cannot be restricted because members will always have recourse to the floor. The Chair may rule such appeals dilatory, and thus out of order. But those rulings may be appealed.

By threatening these tactics, Republicans can reduce the likelihood that Democrats will eliminate the filibuster over their objections in the future. They offer Republicans the procedural means to persuade rank-and-file Democrats that it is not in their interest to nuke the filibuster. But their effectiveness depends on the extent to which Republican senators are willing to defend the filibuster in the face of additional nuclear efforts to restrict their rights. The current push to further gut the filibuster suggests that both parties would like to see it limited, if not eliminated entirely, in the future.


What the Senate’s past tells us about a future nuclear option

“It is difficult to make predictions, particularly about the future.”

We would do well today to heed Mark Twain’s version of the well-known proverb regarding our inability to say with certainty what tomorrow will bring. Politics, after all, is a complex and dynamic process. Recent events alone should inject a dose of humility into our assumptions about what the future holds.

But in the Senate, members appear more certain than ever that they can see the future.

Take recent comments by Ted Cruz (R-Texas) regarding the filibuster. He confidently asserts that its days are numbered. “I think if the Democrats ever regain the majority, they’ll end legislative filibuster.” Moreover, Cruz believes that it is inevitable based on his understanding of where Democrats are on the question today. “That’s where their conference is. And it doesn’t make any sense for it be a one-way ratchet — for us to have our hands tied and for them to be able to pass with a simple majority.”

Cruz’s comments reflect the certainty with which many of his Republican colleagues believe that Democrats will eliminate the filibuster when they take over the majority at some point in the future. Such confidence is leading many of them to push for abolishing it first.

But senators cannot assert with any certainty what Democrats will do in the future based solely on the fact that Democrats went nuclear in the past. Nothing is inevitable.

Admittedly, the manner in which a party behaved in the past sheds some light on the way in which its members are likely to conduct themselves in the future. Yet the historical record suggests that future action is not determined solely, or even primarily, by past behavior.

Consider, for example, just three instances in which the Senate explicitly violated its Standing Rules by creating a new precedent via the nuclear option. In each case, rather than going nuclear again in the future, the Senate instead reversed the precedent it had established. The fact that successive majorities did not go nuclear again and again to gut the Standing Rules suggests that additional considerations should be taken into account before making predictions about what will happen to the filibuster whenever Democrats regain the majority in the Senate.

Rule XXII (1975)

The Senate created a new precedent in 1975 at the beginning of the 94th Congress that restricted the filibuster in violation of Rule XXII. Specifically, Senator James Pearson (R-Kansas) attempted to amend Rule XXII to reduce the threshold required to invoke cloture. Yet because the rule clearly required a two-thirds vote to end debate on such a proposal, Pearson’s effort was dependent on a ruling from the presiding officer (or a vote of the Senate) that a simple-majority could invoke cloture on a proposal to amend the Senate’s Standing Rules at the beginning of a new Congress (i.e. the nuclear option). Majority Leader Mike Mansfield (D-Montana) raised a point of order against Pearson’s motion on the grounds that it violated Rule XXII. The presiding officer declined to rule on the question and instead submitted it to the full Senate to be decided. A simple-majority of the Senate subsequently tabled the Mansfield point of order on February 20 by a vote of 51 to 42, thereby endorsing the argument that a simple-majority could end debate on a proposal to amend the Senate’s rules at the beginning of a new Congress.

However, the Senate moved to reconsider the vote by which it tabled the Mansfield point of order on March 3. And on the following day, the Senate voted to sustain Mansfield’s point of order by a vote of 53 to 43, thereby reversing the earlier precedent. This action brought Senate practice back into compliance with Rule XXII.

Rule XVI (1999)

Rule XVI of the Standing Rules prohibits legislating on an appropriations bill. But the disposition of an amendment offered by Senator Kay Bailey Hutchison (R-Texas) to the Emergency Supplemental Appropriations and Rescissions for the Department of Defense to Preserve and Enhance Military Readiness Act of 1995 (Public Law 104-6) during the 104th Congress established a precedent that superseded this prohibition. Specifically, the Hutchison amendment changed federal law regarding endangered species. Senator Harry Reid (D-Nevada) raised a point of order that the amendment violated Rule XVI, which the presiding officer subsequently sustained. Senator Hutchison then appealed this ruling to the full Senate, which overturned the ruling of the Chair by a vote of 57 to 42. The Hutchison amendment was subsequently adopted by voice vote. This action created a new precedent that permitted legislating on an appropriations bill, despite the fact that the decision of the presiding officer was correct technically and the Hutchison amendment was in direct violation of Rule XVI.

Yet as in the previous example, the Senate subsequently reversed the precedent, thereby bringing Senate practice back into compliance with Rule XVI. In the 106th Congress, Majority Leader Trent Lott (R-Mississippi) introduced a standing order as a simple resolution (S. Res. 160) that would have the effect of reversing the precedent established by the Hutchison amendment. The Senate passed S. Res. 160 on July 22, 1999 by a vote of 53 to 45.

Rule XXVIII (2000)

Rule XXVIII of the Standing Rules bars senators serving on joint House-Senate conference committees from airdropping provisions into the final version of legislation. In other words, matter not included in either the House or Senate legislation is not eligible to be included in the compromise agreement that is voted on in both chambers before being sent to the president to be signed into law.

Yet as with Rule XXII in 1975 and Rule XVI in 1999, the Senate created a new precedent (the “FedEx precedent”) that explicitly violated this provision of Rule XXVIII during the 104th Congress. During consideration of the Conference Report for the Federal Aviation Reauthorization Act of 1996 (Public Law 104-264), Majority Leader Lott raised a point of order that the Conference Committee exceeded the scope of conference by including provisions relating to Federal Express, thereby violating Rule XXVIII. The presiding officer subsequently sustained the point of order. In response, Lott appealed the ruling and the Senate overruled the presiding officer by a vote of 39 to 56. Consequently, the FedEx precedent superseded the provisions of Rule XXVIII prohibiting extraneous matter from being included in conference reports.

The Senate restored Rule XXVIII during the 106th Congress. Specifically, the Department of Commerce and Related Agencies Appropriations Act of 2001 (HR 5548) included the following provision reversing the precedent established during the 104th Congress:

Sec. 801. Beginning on the first day of the 107th Congress, the Presiding Officer of the Senate shall apply all of the precedents of the Senate under Rule XXVIII in effect at the conclusion of the 103rd Congress.

This provision was eventually included in the Conference Report to accompany the District of Columbia Appropriations Act for fiscal year 2001 (Public Law 106-553) that was signed into law on December 21, 2000. Additionally, an identical provision was included in the Consolidated Appropriations Act of 2001 (Public Law 106-554), which passed the Senate on December 15, 2000 and was also signed into law by the president on December 21. These actions brought Senate practice back into compliance with Rule XXVIII.

Additional Considerations

As these cases suggest, additional considerations must be taken into account when speculating on the likely parliamentary behavior of future Senate majorities.

First, Senate majorities are fluid. That is, they change over time. Intra-party dynamics may shift with the election of new members whose views on the filibuster differ from their more senior colleagues. And neither is the position of incumbent members on institutional questions like the filibuster static. It too may change with time in response to experiences like serving in the minority.

Second, the broader political environment will inevitably shape the views of the individual senators who will compose future Senate majorities. Claims that Democrats will inevitably move to eliminate the filibuster in the future must take into account additional considerations such as the geographic distribution of majority-held Senate seats (e.g. red-state Democrats vs. blue-state Democrats), presidential approval and behavior, public opinion on the filibuster, and overall congressional productivity.

For example, Democratic senators representing red states may be less likely to support the nuclear option in the future to empower a progressive president of their own party if doing so makes it more likely that policies opposed by their constituents will become law. Presidential popularity may also impact the ability of party leaders to corral the votes needed to go nuclear.

In addition, public opinion on the filibuster and an imperial presidency may deter individual senators from supporting the nuclear option if doing so is cast in terms of enacting the president’s agenda over all objections.

Finally, more general levels of congressional productivity may undermine arguments that obstruction is excessive or that presidents have no choice but to implement their agenda via unilateral executive action in the absence of efforts to eliminate the filibuster.

To be fair, acknowledging the limits of our ability to accurately predict the future does little to assuage Republicans’ frustration with their inability to legislate in the present. However, eliminating the filibuster is not the only alternative to acquiescing in the status quo.

The Senate’s rules and practices already give the majority a way to stop endless delays of the legislative process. And using those rules will be of greater benefit to Republicans, as well as the institution more broadly, than changing them via the destructive nuclear option. This is because enforcing the Senate’s current rules and practices makes obstruction costlier for members trying to slow down or stop the majority’s agenda.

By increasing the price members must pay to obstruct the Senate’s business successfully, the majority makes it likely that those members will be more selective about when, and how often, they do so.


Which House offices had the highest staff turnover in 2017?


Chart: House offices with the lowest and highest staff turnover rates, 2017

By Casey Burgat

Which House members churned through employees in 2017 and which retained staff at a higher rate than their colleagues?

Relying on disbursement data from the House of Representatives, cleaned and verified by LegiStorm, a new LegBranch analysis reveals which Representatives led offices with the highest and lowest staff turnover rates in 2017. This snapshot analysis is part of a larger (forthcoming) project on Hill staff turnover that includes over 15 years’ worth of data. 2017 turnover rates for every House member will be released next week.


Why does staff turnover matter?

Though Hill staff often remain anonymous to even the most watchful congressional observers, members of Congress rely heavily on them to execute the policy and representational work for which their offices are responsible. Staff in district and Hill offices are largely responsible for the day-to-day work of handling constituent requests, researching, drafting, and advancing policy, and running effective communications operations on behalf of their bosses.

Put simply, staff matter. The level of staff turnover in a member’s office can have serious consequences on a lawmaker’s ability to fulfill his or her representational and policy duties. High levels of staff turnover can decrease the efficiency and effectiveness of the entire office. Replacing staff requires that office attention be spent interviewing, hiring, and training new staff rather than on constituency service or policy. As private sector studies have regularly found, high employee turnover disrupts office divisions of labor, depresses morale, and undermines continuity of operations.

What’s more, Hill staff often develop and maintain expertise in a given role or on specific policy matters; when the office loses that expertise, the whole enterprise suffers from the departure of institutional memory.

What do the data show?

Turnover rates used in this analysis were constructed by dividing the number of staff that left a particular office (either voluntarily or involuntarily) by the total number of staff the office employed during the year. The median level of staff turnover for 2017 was 17.46 percent. Representatives who left Congress during 2017, and thus experienced 100 percent staff turnover, were not included in the analysis.
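For those who want to replicate the measure, the calculation is a simple per-office ratio. Below is a minimal sketch, using hypothetical field names in place of the actual LegiStorm-derived data:

```python
import pandas as pd

# Hypothetical per-staffer records; the real analysis uses House disbursement
# data cleaned and verified by LegiStorm, with different field names.
staff = pd.DataFrame({
    "office":   ["Rep. A", "Rep. A", "Rep. A", "Rep. B", "Rep. B"],
    "departed": [True, False, False, True, True],  # left the office during the year
})

turnover = (
    staff.groupby("office")["departed"]
         .agg(departures="sum", employed="count")
         .assign(turnover_rate=lambda d: d["departures"] / d["employed"])
)
print(turnover)
print("Median turnover rate:", turnover["turnover_rate"].median())
```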

The chart above, and corresponding tables below, show the House members with the lowest and highest levels of staff turnover in 2017. Rep. Joseph Crowley (D-NY) led the most stable House office, with only a single aide departing during 2017 (a turnover rate of 0.035, or 3.5 percent). Rep. David Young’s (R-IA) office experienced the highest level of staff turnover, with 50 percent of his aides departing in a single year. Forty-four Representatives had turnover rates above 30 percent in 2017, a pace that would result in the entire office turning over in just a three-year span.

Variations in the data by party and member gender

Nine of the 10 most stable House offices are held by Democrats. On the other end of the turnover spectrum, nine of the 10 offices with the highest staff turnover are Republican.

Of the 44 lawmakers with 30 percent or higher turnover, 34 (or 77 percent) are Republican. Some of this turnover can likely be attributed to the Trump administration hiring veteran Republican staff off the Hill to help fill White House and federal agency positions. But, as revolving door research suggests, higher turnover in Republican offices could also be due to majority-party staff cashing in on their connections and moving to positions with businesses and other special interest organizations.

Women lawmakers lead more stable offices. Female members represent 19.3 percent of House lawmakers, yet they lead 35.7 percent of the 14 offices with the lowest rates of staff turnover. Conversely, female members lead only two of the 14 offices (14.3 percent) with the highest turnover rates.

Tables 1 and 2 depict the Representatives with the highest and lowest staff turnover rates, respectively. Included in the tables are the number of staff employed and the number of departing aides.

Additionally, the tables show the number and percentages of staff that departed a lawmaker’s office, but remained working in Congress (i.e., left one member and joined another member’s office or a congressional committee). This measure allows a deeper look at intra-office environments in that it highlights the lawmakers whose departing staff decided to continue working in Congress versus those who decided to leave the institution altogether.

Table 1: Representatives with the highest staff turnover rates, 2017

Table 2: Representatives with the lowest staff turnover rates, 2017

Of course, not all turnover is bad. Offices should look to replace poorly performing staff and the addition of new people can bring fresh perspectives and more efficient operations.

But given the degree to which members of Congress heavily rely on their staff, and the growing concern among some congressional observers that Congress is increasingly run by young (and often inexperienced) staff, data on turnover rates are particularly relevant. Offices that experience higher rates of turnover, especially in higher level positions, will generally be less effective in providing constituent service and affecting public policy. Offices with more turnover are also more likely to turn to lobbyists and special interests to fill the information void left by departing staff.

High levels of turnover can stem from many causes, and almost none of them are good. Working in Congress can be unrewarding. Hill jobs have limited opportunities for advancement, yet typically require staff to work long hours in a high-pressure atmosphere characterized by ongoing political brinkmanship. Financial compensation for staff is low compared to many private sector jobs. Additionally, civic-minded staff are growing more and more frustrated by their inability to help develop and advance policy, as an increasing amount of policymaking is being done by party leaders with little input from the rank-and-file and their staffs. The environment is one where staff are sometimes treated horribly, as the growing number of cases of sexual harassment by members reveals.

We should want our best and brightest serving in the offices of our elected officials. Congress, lawmakers, and constituents are better served by Hill staff who find their work fulfilling and choose to stay beyond just a few years.


FWS wants to expand the Coastal Barrier Resources System by 136,000 acres


A net 135,705 acres and 37 units across four Northeast states would be added to the John H. Chafee Coastal Barrier Resources System, under a proposed remapping put forward by the U.S. Fish and Wildlife Service.

The plan, published in the March 12 edition of the Federal Register, marks good news for fans of a free-market approach to conservation. Signed by President Ronald Reagan in October 1982, the Coastal Barrier Resources Act sets aside 1.3 million acres of protected wetlands, coastal barrier islands and aquatic habitat as, effectively, a “federal subsidy-free” zone. Units within the system, which also includes an additional 1.8 million acres of state and federal parkland designated as “otherwise protected areas,” do not have access to federal funding for roads and other infrastructure, cannot participate in the National Flood Insurance Program and are ineligible for federal disaster relief under the Stafford Act.

In essence, the bargain struck by the CBRA is this: landowners are free to develop in whatever ways they deem appropriate, including within the protected coastal system. They just aren’t entitled to a single dime of taxpayer money to do it. We long have felt this paradigm strikes the right balance between respecting private property and avoiding spending that serves to encourage bad behavior. The CBRA can be, and has been, used as a model for other market-based approaches to the environment, from the U.S. Department of Agriculture’s Conservation Compliance program to the Florida Legislature’s 2014 decision (based on an R Street proposal) to bar new development seaward of the Coastal Construction Control Line from getting subsidized insurance from the state-run Citizens Property Insurance Corp.

The system’s benefits were on full view during the 2017 hurricane season. While Hurricane Harvey is estimated to have caused as much as $125 billion in damage, including massive flooding in and around Houston, the destruction would have been far worse were it not for where the Category 4 storm made landfall – at San José Island, an uninhabited barrier island that is entirely within the CBRS. Much of the coastal regions of surrounding Aransas County likewise fall within CBRS units, and are thus largely free of development.

The new updates stem from FWS’ Hurricane Sandy Remapping Project and cover changes to 112 existing units and 36 proposed units in Delaware, Massachusetts and New Jersey, as well as proposing for the first time a CBRS unit in New Hampshire. (Updates to CBRS maps covering Connecticut, Maryland, New York, Rhode Island and Virginia will be subject to public comment later this year.) The FWS plan calls for expanding the system’s acreage footprint in both protected land and aquatic habitat in each of the first four states:

cbrs table

FWS is required by law to review its maps at least once every five years to account for natural changes in the landscape. Additions to the system can be made where the area is reasonably considered a coastal barrier feature or related to a coastal barrier ecosystem, and where its inclusion would be related to the CBRA’s purpose of minimizing federal spending and protecting wildlife and natural resources. FWS can remove areas from the system where it identifies technical mapping errors.

The program also was directed by Congress in 2006 to undertake a pilot digitization project, which it completed in 2016. The original paper maps for Delaware and Massachusetts previously were converted to digital maps in 2013, while New Jersey’s was converted in 2015. This most recent update to the Massachusetts map also includes modifications that account for natural changes, voluntary additions and additions of excess federal property that weren’t included during the digitization process. Since the proposed New Hampshire unit is new, there was no paper map to digitize.

These updates all will ultimately have to be approved by Congress, and that can be a fraught process. Developers, who don’t tend to like being cut off from the spigot of taxpayer subsidies, inevitably argue for scaling back the existing system as much as possible. That’s why it’s crucial that a left-right alliance of environmentalists and taxpayer advocates stand up to defend the CBRS and to expand the model to new areas, such as those that face wildfire risk. Not only is this a conservation program that has proven to work, but it remains one of Ronald Reagan’s enduring legacies.

Map image from the U.S. Fish and Wildlife Service

What the new TV series “For the People” gets right – and wrong – about criminal justice in the U.S.


The latest Shondaland creation – “For the People” – premiered last Tuesday, with 3.2 million viewers getting a chance to experience high-stakes, high-profile federal litigation. The drama follows six new attorneys (three federal public defenders and three prosecutors) as they work in the “Mother Court” – the District Court for the Southern District of New York.

In the first episode (mild spoiler alert ahead), public defender Allison Adams and Assistant U.S. Attorney Seth Oliver argue at a pretrial hearing where the defendant is not being detained. The judge, Nicholas Byrne, huffs in exasperation at Oliver, given that the amount of damages at issue is only $9,000.

“For the People” gets a few things right about the criminal justice system – the tension between public defenders and prosecutors, the immense power of the government, the difficult decisions made during plea bargaining. The initial hearing featured in the pilot, however, is far from the reality most clients experience in criminal courts across America. In many jurisdictions, even the most minor of cases – homeless individuals trespassing to get shelter, public urination and shoplifting candy bars – can end up with a person remaining in detention. Here, the amounts of theft or damages are far, far less than $9,000.

Most courts across America still operate based on a cash bail system. This means that once arrested, individuals go in front of a judge or commissioner who is supposed to assess whether they are a public safety risk and whether they are likely to return to court for later proceedings. Too often, judges impose cash bail amounts that have nothing to do with whether a person is actually a public safety risk. Bails in the thousands and tens of thousands are not uncommon, even for nonviolent, low-level misdemeanors.

Cash bail might work for the well-off; it is supposed to serve as collateral, which is returned to defendants if they attend their next court date. The reality is, however, that most defendants are indigent and don’t have the money to put up for bail. Instead, they remain in detention for weeks and sometimes months at a time. Because they are incarcerated, they often lose their jobs, their homes and their children, while taxpayers spend thousands housing people who have not been convicted of any crime and pose no threat to public safety.

Instead of cash bail, a more objective analysis of a person’s case through a pretrial risk assessment would make more sense. If a person is actually a public safety risk, no amount of money should be sufficient for them to buy their freedom and put their neighbors at risk. For those who are not a public safety risk but need more oversight to guarantee they attend court, tools like GPS monitoring, cellphone court date reminders and pretrial supervision can provide assistance in lieu of inappropriate and costly pretrial confinement. Simply put, a person’s wealth should not determine their freedom; the time for pretrial reform is now.

“For the People” might be the usual juicy Shondaland fare, but it has the potential to change the way we think about the criminal justice system by bringing pretrial hearings into our living rooms. Maybe one day, more judges, legislators and stakeholders will be like the fictional Judge Byrne, focused on prioritizing the right cases and embracing smart, effective pretrial policies.

Image credit: sirtravelalot

Next generation wireless networks need updated regulations


5G promises to dramatically advance the technological frontier. This next generation of wireless connectivity will not only make our smartphones faster, it will open up possibilities for entirely new applications – whether it’s having fun with virtual reality or performing remote surgery.

But for these improvements to be realized, we need to make sure regulations keep pace with the evolution of technology. The Federal Communications Commission (FCC) is poised to take a significant step in that direction this week by voting on an order to clarify that fundamental differences in 5G technology mean it will require different regulatory treatment than its predecessors.

The 4G and 3G technology we use today relies mostly on large towers and antennas known as macro cells. Those deployments fall under both the National Environmental Policy Act (NEPA) and the National Historic Preservation Act (NHPA), which require review processes to make sure that deployments don’t have an undue impact on the environment or historical areas. These review processes, however, don’t make much sense for the small cells that will play a crucial role in the deployment and operation of 5G.

As the name implies, small cells are much smaller than those huge cell towers along the roadside. Often, they are no bigger than a shoebox or a gallon milk jug. And instead of being attached to new towers, small cells are often placed on existing structures like buildings or utility poles.

In short, small cells don’t have the same environmental or historical impact as macro cells and, therefore, the need for review is diminished. Nevertheless, these review processes are still required by FCC rules and pose a significant barrier to cell deployment. A recent Accenture Strategy report found that NEPA and NHPA reviews imposed average additional costs of $9,730 per small cell. Over 200,000 of these new small cells will be needed to fully deploy 5G networks in the next few years, and each will be subject to NEPA and NHPA review. That means the old rules present a $2 billion-plus hurdle to next-generation wireless services.
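
For a concrete sense of how that hurdle adds up, here is a back-of-the-envelope check (a rough sketch only, using the per-cell cost and the approximate deployment count cited above):

    # Rough arithmetic behind the "$2 billion-plus" figure cited above.
    # Both inputs are the estimates quoted in the text, not precise totals.
    cost_per_small_cell = 9_730        # average added NEPA/NHPA review cost, in dollars
    planned_small_cells = 200_000      # approximate small cells needed for 5G over the next few years

    total_review_cost = cost_per_small_cell * planned_small_cells
    print(f"Aggregate review cost: ${total_review_cost:,}")  # $1,946,000,000, and rising with every additional cell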

The FCC is poised to exempt small cells from these review processes. This is a sensible change because, despite their outsized cost, NEPA and NHPA reviews virtually never find fault with small cell deployments.

Sprint, for example, recently had to undergo NEPA review for 250 deployments. Not one of those reviews found a significant environmental impact. This is the predictable outcome for small cell deployments that are attached to existing structures or built on land that has already undergone prior review. Adding a small cell doesn’t make much of a difference, so the NEPA rules create onerous costs without doing much to protect the environment.

The same is true of NHPA reviews. Over 99 percent of the time, these reviews result in no changes because there was no significant historical impact. These reviews more often function as a revenue source for tribal groups that exploit the law for an unintended purpose. Tribal review isn’t required if someone wants to put up a billboard or a flagpole. The fact that a similar, or likely even less-disruptive, structure may have a radio attached is not a good reason to demand exorbitant fees.

Similarly, NHPA review isn’t required for Wi-Fi networks, but cell networks incur the burdensome review requirements simply because they use different spectrum bands. The spectrum that a radio uses makes no difference in its historical impact, so requiring additional assessment on that basis is nonsensical.

In addition to the review exemptions for small cells, the FCC order would make modest changes for traditional macro cell deployments. Rather than exempting larger deployments from NEPA and NHPA review entirely, the order would establish greater clarity and more-predictable timelines for those processes. Establishing deadlines is essential to speeding up processes that can drag on for well over a year. The order would also eliminate the need to review towers solely because they are located in floodplains, as long as they are deployed high enough off the ground.

The technological possibilities of tomorrow depend on getting the regulations right today. The FCC is right to exempt small cells from review processes intended for much larger deployments. Removing these regulatory barriers and accelerating deployment will be critical to ensuring that the United States leads the way in 5G.


Image credit: chombosan

Clark Packard on FBN Varney & Co

Clark Packard discusses the implications of President Trump’s announcement on steel and aluminum tariffs with Stuart Varney.


Competing Mortgage Credit Scores:  A decision for those who take the risk


The use of credit scores by Fannie Mae and Freddie Mac, as one part of their decisions about which mortgages they will buy and guarantee, is by nature an “inside baseball” mortgage-finance discussion, but it has made its way into the regulatory reform bill passed by the Senate March 14.

How such scores are statistically created, how predictive they are of loan defaults, how to improve their performance, whether to introduce new scoring methods and the relative predictive ability of alternative methods are above all technical matters of mortgage credit-risk management. These questions are properly decided by those who take the mortgage credit risk and make profits or losses accordingly. This applies to Fannie and Freddie (and equally to any holder of mortgage credit risk with real skin in the game). Those who originate and sell mortgages, but bear no credit risk themselves, and those with various political positions to advance, may certainly have interesting and valuable opinions, but are not the relevant decision makers.

Should Fannie and Freddie stick with their historic use of FICO credit scores, use VantageScores instead, or use both in some combination? Naturally, each score is favored by the company that produces it, and those companies should make the strongest cases they can. Should Fannie and Freddie, more experimentally, use other “alternative credit scores” different from either? This too can be argued, although it remains theoretical.

The Senate bill requires Fannie and Freddie to consider alternative credit-scoring models and to solicit applications from competitive producers of the scores for analysis and consideration. That is something a rational mortgage credit business would want to do from time to time in any case, and in fact, Fannie and Freddie have analyzed alternative credit scores. The bill further requires that the process of the review and analysis of credit score performance must itself be reviewed periodically, which is certainly reasonable.

Thus the bill would require a process. But when it comes to the actual decisions about which credit scores to use and how to use them in managing the credit risks they take, Fannie and Freddie themselves are the proper decision makers. In my view, they would not necessarily have to make the same decision. Moreover, either or both could decide to run pilot program experiments, if they found that useful.

The Federal Housing Finance Agency (FHFA), Fannie and Freddie’s regulator and conservator, has in process a thoughtful and careful project to consider these questions, has solicited and is gathering public comments from interested parties and displays a very good grasp of the issues involved. But I do not think that, at the end of this project, the FHFA should make the decision. Rather, Fannie and Freddie should make their own credit-scoring decisions, subject to regulatory review by the FHFA—and of course in accordance with the regulatory reform bill, if it becomes law, as I hope it will.



The Senate: At War with Itself!?

In this Legislative Branch Capacity Working Group session, Senate experts James Wallner and Molly Reynolds will consider the reasons for the Senate’s dysfunction today. The wide-ranging discussion will touch on partisan dynamics, recent procedural changes and the ways in which party leaders try to control the chamber.


Red State = Clean Energy: How Market Driven Clean Energy is Transforming the Texas Electric Grid

On March 8, R Street, the Texas Clean Energy Coalition and The American Conservative brought together a panel of experts to discuss how Texas has combined the nation’s most competitive electricity market and the highest level of wind generation with a cheap and reliable electricity system. Josiah Neeley moderated the panel, which included Mayor Dale Ross of Georgetown, Texas, former Texas Public Utilities Commissioner Ken Anderson, Cheryl Mele of the Electric Reliability Council of Texas, and Elizabeth Lippencott of the Texas Clean Energy Coalition.


Trump’s steel tariffs are different from Bush’s – they’re worse


President Donald Trump’s decision to move forward with tariffs of 25 percent on imported steel and 10 percent on imported aluminum has drawn comparisons to the steel tariffs President George W. Bush levied in 2002, which were widely panned as ineffective. As the Trump administration implements its own futile protectionist scheme, it is important to understand why these tariffs likely would inflict even more pain on American consumers than Bush’s did.

Legal Authority

President Bush’s tariffs on imported steel were imposed under Section 201 of the Trade Act of 1974. Section 201 tariffs, also known as “safeguards,” can be invoked when a surge of imports threatens to injure a domestic industry. Safeguards are temporary, are not targeted at individual countries, and the levy usually declines over time. Safeguards were recently imposed by the Trump administration on imported washing machines and solar products.

By contrast, Trump’s steel and aluminum tariffs will be implemented under Section 232 of the Trade Expansion Act of 1962. Section 232 authorizes the U.S. Commerce Department to investigate whether imports of a particular item pose a threat “to the national security.” If the department finds that they do, the president has 90 days to propose a remedy, with wide latitude for what that remedy might entail. Unlike safeguards, tariffs issued under Section 232 are not temporary and do not decline over time.

Safeguards are widely used internationally under rules recognized by the World Trade Organization, which also adjudicates disputes on their fairness. China, Japan and the European Union successfully challenged the 2002 steel safeguards at the WTO and the Bush administration withdrew the tariffs after only about 18 months.

The General Agreement on Tariffs and Trade, the precursor to the WTO, provides for a national security exemption, which was created at the United States’ insistence. Codified in Article XXI, the exemption is extremely broad and widely considered “self-judging” – that is, the WTO’s Dispute Settlement Body is exceedingly unlikely to strike down a country’s claim that the import of some item threatens its national security. Invocations of the national security exemption by member-nations have been challenged a handful of times, but there has never been a binding GATT or WTO decision against them.

For these reasons, many legal scholars believe the national security exemption is the single most powerful exception to international trade rules. Thankfully, the authority has been invoked only sparingly and generally in good faith over the past 70 years. As the world’s largest and most important economy, and as the architect of the national security exemption, the United States has a special responsibility to invoke Article XXI judiciously. A haphazard invocation made on weak grounds could jeopardize the rules-based global trading system and open the door to similar claims by other nations.

The Trump administration now threatens to upend that delicate balance, as the national security case to restrict steel imports is particularly thin. The largest U.S. suppliers of steel are allies and the nation has a number of agreements that require other countries to provide supplies in case of a true emergency. In a letter from Defense Secretary James Mattis to Commerce Secretary Wilbur Ross, the Pentagon chief noted that the department “does not believe that the findings in the [Section 232 report] impact the ability of DoD programs to acquire the steel or aluminum necessary to meet national defense requirements.” Mattis further urged restraint in imposing tariffs, so as not to damage relationships with key allies.

Job Loss

The data vary, but it’s clear the Bush steel tariffs sparked significant job losses, particularly in steel-consuming industries. This is important, because workers in those industries vastly outnumber steel-mill workers. Gary Hufbauer of the Peterson Institute for International Economics estimated about 3,500 steel-industry jobs were preserved by the 2002 tariffs, but 12,000 to 43,000 jobs were lost. According to the Trade Partnership, an economic consulting firm, about 200,000 jobs and $4 billion in wages were lost due to high steel prices. One author of that study pegged the number of jobs lost due directly to the steel tariffs at 60,000.

More recently, the Trade Partnership projected President Trump’s steel and aluminum tariffs will reduce net employment by nearly 146,000 jobs. Employment in the steel and aluminum sectors would rise by 33,464 jobs, but at the cost of 179,334 jobs throughout the rest of the economy. Two-thirds of the job losses would affect low-skilled and production workers. And these projections don’t include any of the effects from foreign retaliation against American exports, which already have been threatened for items ranging from bourbon to Harley-Davidsons.
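
To see how the headline figure follows from the two sector projections, here is a minimal arithmetic check (using only the Trade Partnership numbers quoted above):

    # Net employment effect implied by the Trade Partnership projections cited above.
    jobs_gained_metal_sectors = 33_464   # projected gains in the steel and aluminum sectors
    jobs_lost_elsewhere = 179_334        # projected losses throughout the rest of the economy

    net_change = jobs_gained_metal_sectors - jobs_lost_elsewhere
    print(f"Net change in employment: {net_change:,}")  # -145,870, i.e. nearly 146,000 jobs lost on net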

The Forest for the Trees

Bush’s steel tariffs came in concert with his administration’s work to secure Trade Promotion Authority, which passed in 2002 by narrow margins. The TPA was then used to negotiate various free-trade agreements, most notably the Dominican Republic-Central America Free Trade Agreement (CAFTA). In other words, Bush accepted some protectionist steel measures to serve the broader goal of trade liberalization. This wasn’t a unique phenomenon. President Ronald Reagan likewise countenanced certain protectionist measures while committing to broader trade liberalization, including laying the foundation for the Uruguay Round and the North American Free Trade Agreement (NAFTA).

That is not the case today. Trump has professed hostility to trade liberalization since the 1980s and his actions in office have matched his rhetoric. The United States backed out of the Trans-Pacific Partnership (TPP) and the administration has threatened to pull out of the U.S.-South Korea Free Trade Agreement (KORUS) and the WTO. It continues to threaten to withdraw from NAFTA while renegotiating the pact with unreasonable demands on automotive rules-of-origin and a sunset clause. Unless the president’s statement at Davos regarding the United States potentially rejoining the TPP comes to fruition, it’s hard to see how today’s steel and aluminum protectionism could serve the broader goal of trade liberalization.


The costs of protectionism are well-known – a loss of economic freedom; higher prices for consumers and businesses; foreign retaliation against exports; job losses; and declines in productivity and output. We can expect all of the above from the steel and aluminum tariffs. The Bush administration’s foray into steel protectionism failed spectacularly, but the toll the Trump tariffs will exact will likely be even worse.

Image by Joseph Sohm


William Murray on the Climate Lede: Is the Carbon Tax Dead?

On March 6, 2018, Federal Energy Policy Manager William Murray appeared on E&E’s podcast “The Climate Lede” to discuss the fate of the carbon tax. Murray spoke about the low chances of a carbon tax being passed at the state or federal level in the next several years in the wake of the Washington State Senate’s recent failure to pass a carbon tax proposal championed by Democratic Gov. Jay Inslee.


San Francisco wants redundant automated vehicle demo because reasons


*Marc Scribner co-authored this piece.

The City of San Francisco is special. How special? In spite of having no evident expertise, the City’s mayor, Mark Farrell, has asked the manufacturers of “autonomous vehicles” to submit to local “safety assessment exercises” before deploying their vehicles. While there will be a strong urge among many Bay Area-based firms to go along with the mayor’s demand, they should resist. Compliance may seem benign, but it risks setting a dangerous precedent for the industry as it moves toward nationwide deployment.

The crux of the problem is twofold. First, local jurisdictions simply are not qualified to evaluate what is “safe” behavior when it comes to vehicle design, safety or performance. For this reason, state transportation regulators have smartly deferred to their federal counterparts at the National Highway Traffic Safety Administration (“NHTSA”) on such issues. And, to their credit, both NHTSA and Congress have been actively working to bring highly automated vehicles (“HAVs”) into the existing regulatory fold.

For that reason, it’s astonishing, if unsurprising, that San Francisco’s political establishment seems to think it can master what state regulators recognize is beyond their core area of competency in the mere weeks before vehicles receive state deployment permits from the California Department of Motor Vehicles. To be clear, San Francisco cannot and will not be able to master anything of the sort.

What’s more, in the letter, the City provides a bogus rationale for the “safety assessment exercise.” The City claims that the exercise would provide manufacturers an opportunity to educate first responders and others about how to interact with highly-automated vehicles in the event of an emergency.

Yet, that requirement is already met by the filing of a “law enforcement interaction plan” at the state-level as a condition of being granted a permit. In other words, as soon as permits are granted, San Francisco will be notified of where vehicles will operate, how vehicles will operate, and the ways in which first responders should interact with them in the event of an emergency. Farrell’s “safety assessment exercise” is clearly redundant of existing state-level regulation.

Therefore, since none of this is actually about safety, it appears that the City is scrambling for authority that more appropriately rests elsewhere. That leads to the second major problem presented by Farrell’s request: should manufacturers comply, other sub-state jurisdictions will be empowered to make similar demands.

It is not clear that Farrell’s request has much legal significance, since he cites no binding authority and notes that the state has – rightly – not given him the power to regulate the technology. However, an act of compliance would certainly be given precedential significance in the eyes of other jurisdictions. Sad will be the corporate counsel who is faced with the prospect of explaining to Los Angeles why her firm only intends to comply with San Francisco’s request.

Central to the development of HAV regulation has been the attempt to avoid a “patchwork” approach to oversight. While the danger of a patchwork at the state level is significant, the danger of different requirements at the sub-state level is even greater. Consider that in California alone, there are 4,435 local governments. Each of those entities has the same legal, though not political, power as San Francisco. Therein lies the risk of compliance with Farrell’s demand. The sheer scale of the compliance demands created by taking such an approach could effectively grind HAV deployment to a halt.

To avoid the problems associated with a patchwork of local government regulations and requests, state legislators in Sacramento and around the nation should make clear the roles of various levels of government and work to prevent undue local discrimination against HAVs.

Several states have already enacted legislation preempting local authorities from meddling with HAV deployment, including Illinois, Nevada, North Carolina, Tennessee and Texas. These red, blue and purple states recognized that the best way to promote automated driving system development – and the resulting safety and mobility benefits – while ensuring effective government oversight, is to reinforce the traditional roles of federal, state and local vehicle and traffic regulators.

Just as it would be senseless for every state to attempt to reproduce NHTSA’s vehicle safety and performance efforts, having California’s 4,435 local governments attempt to reproduce authorities rightly possessed by state regulators runs counter to the goal of efficient, effective government oversight. Innovators would be strangled in red tape and inexperienced local regulators would be overworked on matters outside their areas of expertise. This is a recipe for less meaningful oversight of HAV testing and deployment, not more.


Image credit: Olivier Le Moal

Leadership super PACs and the further centralization of power

The direct primary was one of the Progressive Era’s most consequential reforms. Diminishing the role of parties in the nominating process meant that campaigns became more candidate centered and presumably more interesting to voters. By increasing popular participation in electoral politics, reformers hoped that government would become more representational, and that candidates would serve the interests of their constituents, not their parties.

For candidates, taking control over their campaigns meant having to raise money and hire staff. Candidate “branding” became important as the parties relinquished control over congressional elections and media began to play an increasingly influential role in politics. Today, candidates preside over sprawling and complex campaign operations. The average cost of winning a House seat is almost $1.5 million, while a Senate seat runs just over $12 million. The ability and willingness to raise enormous amounts of money is practically a prerequisite for serving in Congress today.

As the role of parties in elections shifted over the course of the 20th century, party committees became service-oriented organizations; rather than handpick candidates and direct their campaigns, they worked to elect the candidates chosen by voters. National party organizations still play an important role in congressional elections, but most of their financial and on-the-ground support is reserved for competitive general election races (though they occasionally do insert themselves into primary races). Fundraising is their chief enterprise.

The alliance between party committees and candidates isn’t always comfortable (Roy Moore’s candidacy for the Senate caused major rifts in the Republican party, for example), but for the most part, committees work to get as many of their members elected as possible. Maintaining or gaining majority control of the chamber is the overriding goal.

Candidates take their seats in Congress not “owing” the party anything because by getting elected, they helped the party achieve its team-oriented goal. There’s no personal attachment or bond formed between candidate and party organization; both play their parts in the election, then move on.

But what if it were party leaders – rather than party organizations – footing the bill for candidate campaigns? Would this dynamic personalize the donor/benefactor relationship in ways that might extend beyond the campaign? The oversized role that congressional leadership super PACs are currently playing in congressional elections may provide us with an opportunity to find out.

The Supreme Court’s Citizens United decision in 2010 opened the campaign finance floodgates to super PACs, committees that can raise unlimited amounts of money and make unlimited independent expenditures in elections. The decision led many to predict the (further) demise of parties, which are held to stricter fundraising standards. In short, why would big donors give limited amounts to party organizations when they could give unlimited amounts to super PACs?

Indeed, spending by congressional party organization committees increased only modestly between 2012 and 2016, particularly on the Republican side. The NRCC, for example, raised $162.8 million in 2012 and $170.6 million in 2016 – an increase of less than $8 million over four years. At the same time, spending by congressional leadership super PACs skyrocketed. In 2014, the four super PACs affiliated with the Democratic and Republican leaders of the House and Senate spent a combined total of $114 million; in 2016, these four PACs spent $232 million. Party committees, together with congressional leadership super PACs, outspent all non-party spenders combined by $29 million in 2014 and $132 million in 2016.

So much for the demise of the parties?

Spending by non-party super PACs in House races declined from 2014 to 2016. However, the opposite is true of the two super PACs affiliated with House leadership. Party spending, combined with spending by super PACs run by Paul Ryan and Nancy Pelosi, represented 88 percent of the independent spending in the 34 most competitive House races in 2016. Independent spending in these races exceeded candidate spending by a ratio of 1.31 to 1. Pelosi’s super PAC spent approximately $44 million on these races and Ryan’s super PAC spent approximately $40 million. In 2014, these super PACs spent $25 million and $10 million, respectively.

It’s worth noting that prior to 2004, candidate spending was never surpassed by outside spending. In 2008, five races broke that norm and in 2012, outside spending topped candidate spending in 10 races. In 2016, candidates in 27 races were outspent by outside groups. Most of this spending can be attributed to parties and leaders.

Given this trajectory, what can we expect from leadership super PACs in 2018? Paul Ryan’s Congressional Leadership Fund (CLF) provides us with one example.

Heading into the 2018 midterm election, the CLF launched a highly sophisticated operation, opening field offices in 27 competitive districts. Corry Bliss, CLF’s executive director, said that the committee has “rejected the traditional model of super PACs” and is “doing things differently by operating a national, data-driven field program.” The committee has already raised almost $37 million and spent over $11 million in three special election races. Eight more field offices are scheduled to open this year as part of the committee’s “$100 million campaign.”

Decisions about investing in these races will be made with an eye toward candidate loyalty. “When we allocate resources this year, heavy preference will go toward those who supported the speaker and president’s legislative agenda,” according to Corry Bliss. That message was made clear last year when the CLF pulled its support for Rep. David Young (R-IA) after he opposed his party’s healthcare bill. The committee closed its field office in Young’s district and transferred staff elsewhere. Bliss said the CLF would not support a candidate who cannot support the president and House leadership.

For the parties, elections are a numbers game; win majority control and move on to the next election. But for party leaders, elections are about building a loyal base as much as they’re about winning. Having served as Speaker since 2015, Paul Ryan certainly understands the challenges of managing divisions within his party. By choosing to throw his significant financial and operational support behind candidates who will toe the leadership line, he sends a clear signal to vulnerable incumbents and first-time candidates alike: Support me and I’ll support you.

There is absolutely nothing surprising about this dynamic. Party leaders have long used their candidate campaign and leadership PAC committees to build support networks in the chamber. Leadership super PACs, however, represent a significant new development in the role that party leaders play in elections. Leaders can give candidates $2,700 from their campaign committees and $5,000 from their leadership PACs, per election. But they can spend unlimited amounts on candidate campaigns via their super PACs. The $6.2 million Ryan’s super PAC spent getting Karen Handel (R-GA) elected last year, and the $3 million the committee has already spent trying to get Rick Saccone elected in Pennsylvania’s 18th district, are far more than the $5,000 his leadership PAC can spend on each candidate’s behalf.
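
To put that gap in perspective, here is a quick comparison (a rough sketch only, using the per-election limits and the two race totals cited above):

    # How leadership super PAC spending compares with the regulated per-candidate amounts cited above.
    campaign_committee_limit = 2_700    # per election, from a leader's campaign committee
    leadership_pac_limit = 5_000        # per election, from a leader's leadership PAC
    clf_handel_race = 6_200_000         # Congressional Leadership Fund spending to elect Karen Handel
    clf_saccone_race = 3_000_000        # CLF spending so far on the Saccone race

    print(f"Regulated giving per candidate: ${campaign_committee_limit + leadership_pac_limit:,}")  # $7,700
    print(f"Handel race vs. leadership PAC limit: {clf_handel_race / leadership_pac_limit:,.0f}x")  # 1,240x
    print(f"Saccone race vs. leadership PAC limit: {clf_saccone_race / leadership_pac_limit:,.0f}x")  # 600x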

By adding super PACs to their arsenals, party leaders are inserting themselves into candidate campaigns in ways that likely matter well beyond the campaign. Much has been written about the centralization of power in the House, mostly focusing on how leadership increasingly determines policy content and tightly controls floor debate and votes. In this environment, there are fewer entrepreneurial opportunities for rank-and-file members and that suits party leaders just fine. Better to elect and preside over a party of foot soldiers than renegades.

In the absence of legal intervention, unlimited independent spending will continue to dominate the campaign finance landscape. With majority control of both chambers at stake, the 2018 midterms are already shaping up to set new outside spending records, with parties and their leaders taking the lead. Given this dynamic, we should expect to see the ranks of party loyalists continue to swell as more experienced and independent-minded members head for the doors.


WBAL Morning News with Bryan Nehman: Clark Packard on Trump’s Steel and Aluminum Tariffs

Clark Packard joins WBAL’s Morning News with Bryan Nehman to discuss the impact that the steel and aluminum tariffs proposed by President Trump would have on the economy.

Dr. Megan Reiss on Cyberlaw Podcast

Dr. Reiss was part of The Cyberlaw Podcast’s news roundup on March 5, 2018. She discussed the attribution problem with the cyberattack on the Olympics and CrowdStrike’s new report on the blurred lines between state-sponsored cyberattacks and cybercrime.

AEI Event: Eliminating Fannie Mae and Freddie Mac without legislation

A panel of housing finance experts met at AEI last Tuesday to discuss how the government-sponsored enterprises (GSEs) Fannie Mae and Freddie Mac could be eliminated without legislation.  Moderated by R Street’s Alex J. Pollock, the panelists detailed the distortions of the current housing finance system dominated by Fannie and Freddie, and proposed a reform plan that protects homebuyers and taxpayers and does not require Congress to act.

The Bubble Economy – Is this time different?

Two decades after Alan Greenspan’s famous “irrational exuberance” speech at AEI in 1996, Dr. Greenspan spoke at AEI again, addressing record-high global stock and bond market prices following unprecedented central bank balance sheet expansions.  Following Greenspan’s keynote address, R Street’s Alex J. Pollock led an expert panel that discussed whether the world economy is now experiencing an asset market price bubble and what might be done about it.


Fannie has reached the 10% moment, after all


After receiving thoughtful inquiries from two diligent readers (to whom, many thanks) about our calculation of the U.S. Treasury’s internal rate of return (IRR) on its senior preferred stock investment in Fannie Mae, we have carefully gone back over all the numbers starting with 2008, found a couple of needed revisions, and recalculated the answer.

The result is that Fannie has indeed reached its “10 percent moment.” Even after its fourth quarter 2017 loss, and counting the resulting negative cash flow for the Treasury in 2018’s first quarter, we conclude that Treasury’s IRR on Fannie is 10.04 percent. Freddie, as we previously said, was already past 10 percent and remains so.

So the 10 percent Moment for both Fannie and Freddie has arrived. We believe the stage is thus set for major reform steps for these two problem children of the U.S. Congress, but that the most important reforms would not need congressional action. They could be taken by agreement between the Treasury as investor and risk taker, and the Federal Housing Finance Agency (FHFA) as conservator and regulator of Fannie and Freddie.

Since Treasury has received in dividend payments from both Fannie and Freddie the economic equivalent of repayment of all of the principal of their senior preferred stock plus a full 10 percent yield, it is now entirely reasonable for it to consider declaring the senior preferred stock retired—but only in exchange for three essential reforms. These could be agreed between Treasury and the FHFA and thus be binding on Fannie and Freddie. The Congress would not have to do anything in addition to existing law.

These reforms are:

  1. Serious capital requirements.
  2. An ongoing fee paid to Treasury for its credit support.
  3. Adjustment of Fannie and Freddie’s MBS guarantee fees in compliance with the law.

CAPITAL: Fannie and Freddie’s minimum requirement of equity to total assets should be set at the same level as for all other giant, too-big-to-fail regulated financial institutions. That would be 5 percent.

CREDIT SUPPORT FEE TO TREASURY: Neither Fannie nor Freddie could exist for a minute, let alone make a profit, without the guarantee of their obligations by the Treasury (and through it, the taxpayers), which, while not explicit, is entirely real. A free guarantee is maximally distorting and creates maximum moral hazard. Fannie and Freddie should pay a fair ongoing fee for this credit support, which is essential to their existence. Our guess at a fair fee is 15 to 20 basis points a year, assessed on total liabilities. To help arrive at the proper level, we recommend that Treasury formally request that the Federal Deposit Insurance Corp. apply to Fannie and Freddie its model for calculating required deposit insurance fees for large financial institutions. This would give us a reasonable estimate of the appropriate fee to pay for a government guarantee of institutions with $2 trillion and $3 trillion of credit risk, entirely concentrated in real estate exposure and, at the moment, with virtually zero capital. It would thus provide an unbiased starting point for negotiating the fee.
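
As a rough illustration of what that fee range would mean in dollars (a sketch only, assuming on the order of $3 trillion of total liabilities for Fannie and $2 trillion for Freddie, not exact balance-sheet figures):

    # Illustrative annual credit-support fee at 15 to 20 basis points on total liabilities.
    # The liability figures are assumed orders of magnitude (Fannie ~ $3T, Freddie ~ $2T),
    # not exact balance-sheet numbers.
    BASIS_POINT = 0.0001

    liabilities = {"Fannie Mae": 3.0e12, "Freddie Mac": 2.0e12}
    for name, total in liabilities.items():
        low_fee = total * 15 * BASIS_POINT
        high_fee = total * 20 * BASIS_POINT
        print(f"{name}: ${low_fee / 1e9:.1f} billion to ${high_fee / 1e9:.1f} billion per year")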

ADJUSTMENT OF MBS GUARANTEE FEES:  Existing law, as specified in the Temporary Payroll Tax Cut Continuation Act of 2011, requires that Fannie and Freddie’s fees to guarantee mortgage-backed securities be set at levels that would cover the cost of capital of private regulated financial institutions engaged in the same risk—this can be viewed as a private sector adjustment factor for mortgage credit. Whether you think this is a good idea (we do) or not, it is the law. But the FHFA has not implemented this clear requirement. It should do so in any case, but the settlement of the senior preferred stock at the 10 percent moment would make a good occasion to make sure this gets done.

These three proposed steps treat Fannie and Freddie exactly like the giant, too-big-to-fail, regulated, government guaranteed financial institutions they are. Upon retirement of the Treasury’s senior preferred stock with an achieved 10 percent return, the reformed Fannie and Freddie would be able to start accumulating retained earnings again, building their capital base over time. As their equity capital grows, the fair guarantee fee to be paid to the Treasury would decline.

The 10 percent moment is here. Now a deal to move forward on a sensible basis can be made.

A Feudal Feeling


Each year, I run a massive trade deficit with Target. I spend a lot there and the Minneapolis-based retailer buys NOTHING from me. It’s unfair. In order to keep my money at home, I will henceforth deal only with businesses based in my home state of Virginia. Furthermore, if I do ever go to Target, I’ll set up a system that takes money from me and spends it on things my elected representatives know are good for me. I don’t really object to this tax because, after all, it’s for my own good, enriches my neighbors and ultimately me.

And most importantly, Target is going to be hurt really badly. In fact, the new store the company wants to build near my house – which is really only a way of extracting even more money from my community – might not be built as a result. This will further reduce the drain of wealth on me and my neighbors.

And there are local alternatives, after all. Dollar Tree, which is Virginia-based, sells most of what I need, so I can start shopping only there. But thinking about it, that probably isn’t enough, since the company is based on the other side of the state.

I really want to keep my money in Northern Virginia where I live. There are some farmers’ markets that allow only local growers and at least one or two custom tailors in the area from which I can buy my clothes. Sure, it will cost more but I’ll be getting much better quality in organic produce and custom clothing and supporting local jobs at the same time. WIN!

But, of course, that really doesn’t go far enough either. The farmers mostly live a few counties over and the tailors might well decide to spend their money on a trip to Disney World or maybe even a cruise in the Caribbean on a ship that employs only a handful of Americans. They might even put it into a 401(k) that invests in other countries rather than right here in Northern Virginia.

The real solution, then, is that I should make everything myself and grow all of my own food. That way I can REALLY keep my money near home. I think I could probably dig up my backyard to plant potatoes and raise a few chickens in my kitchen. I don’t know how to sew at all, but maybe I can buy a sewing machine and learn. After all, I NEED to keep my money at home. That’s what’s most important.

Now there are risks here, of course, but there are ways to mitigate those too. We can increase social spending using some of this new tax money and provide better welfare benefits for everyone. As we know, however, the principle of subsidiarity strongly suggests that we’re best off administering them close to home. Maybe, instead of owning my own house/farm (it’s going to be tough making it on a half-acre, but I can manage), I could ask someone else for protection and use of some extra land. He could then run a business that would take a certain share of my crops and sometimes ask me to help out on certain projects, like building roads, and maybe pitch in if another nearby businessperson got into a disagreement with him. Since he’ll probably have more money than I do, maybe he could build a big house. In return, he’d protect me and provide me with food and other resources if I ever hit hard times. Everyone around me could get help from the same person. Maybe he could make an agreement with someone who had an even bigger business and more land elsewhere.

As such, my whole neighborhood could be self-sufficient for almost everything and VERY wealthy as a result. This is such a great idea, I don’t know why I hadn’t thought of it before.

The only thing I can’t think of is what I should call the system. But, in any case, it’s really awesome. No?


Image credit: Eugene Ivanov


R Street’s Guide to Policy Panels at SXSW 2018

In addition to its high-profile music and film festivals, Austin’s South By Southwest (SXSW) also hosts one of the biggest annual gatherings of technologists, activists, government officials, entrepreneurs and academics in the country.

While SXSW offers incredible opportunities to mingle with like-minded individuals and promote free-market ideas about tech policy, it can also be a difficult landscape to navigate. How does one choose from so many panels covering the same topic: blockchain? We’ve been told it’ll fix everything.

Each year, we publish a guide offering our assessments of the best policy-focused events. Check it out below. A (★) indicates a representative from the government will be participating. An (R) designates an R Street policy expert is a panelist.

March 9, 2018

March 10, 2018

March 11, 2018

March 12, 2018

March 13, 2018

March 15, 2018


Unofficial SXSW Stuff You Should Check Out:

March 10 – March 12, 2018

Event video: What is the NDAA? And why should you care?

On Feb. 26, 2018, Dr. Megan Reiss moderated a panel of cybersecurity experts who discussed whether the Defense Department has the tools, infrastructure and workforce to compete effectively in cyberspace. Panelists also discussed what steps, if any, Congress could take to support the Department of Defense’s efforts to bolster its cyber capabilities.

Watch the video here.

Congressional Budget Process Discussion

In this Legislative Branch Capacity Working Group session, the group takes stock of the broken congressional budget process and the likelihood of its reform with budget expert guests Maya MacGuineas and Philip Joyce. Discussion topics range from the latest shutdown threats and spending deals to perennial reform proposals such as biennial budgeting.

Rep. Darin LaHood on congressional dysfunction and the prospects for reform

Efforts to reform congressional dysfunction typically come from senior members who are tired of not getting anything done, former members who can speak openly (and critically) about what’s wrong with the institution, and various organizations dedicated to improving Congress’s capacity and performance. Not so with H. Con. Res. 28, which would establish a joint committee on the organization of Congress to study and make recommendations to improve the organization, operations, and functions of Congress. The resolution is sponsored by Rep. Darin LaHood (R-18th/IL), elected to Congress less than three years ago. Rep. Dan Lipinski (D-3rd/IL) is the resolution’s primary co-sponsor.

It’s been just over one year since LaHood introduced H. Con. Res. 28. We sat down with him to talk about the resolution, dysfunction in Congress, and the prospects for reform in the 115th.

Despite his relatively short stint in Congress, LaHood is well versed in the politics of dysfunction. Prior to winning a seat in the House, he focused on ethics and transparency issues as an Illinois state senator, and as a state and federal prosecutor. He worked as a Hill staffer in the 1990s, and his father, former Rep. Ray LaHood, represented Illinois’s 18th district from 1995-2009. He came to Washington with the perspective that government is supposed to be effective.

So far, H. Con. Res. 28 has 64 co-sponsors from both sides of the aisle, many of whom are freshmen. There are no leaders signed on to the bill and according to LaHood, this is by design. For now, he and Lipinski want the effort to be more organic and “grassroots driven.” Once they have at least 100 co-sponsors, they’ll determine a strategy for taking their resolution to leadership.

Asked if there was any one event or issue that served as the impetus for H. Con. Res. 28, LaHood said that for him, it was the “constant blowing of timelines and deadlines.” Whether it’s passing CRs, doing things by omnibus, dealing with fiscal cliffs or the debt ceiling, Congress isn’t doing what it’s supposed to do. LaHood said that dysfunction is the common denominator that needs to be addressed.

LaHood and many of his freshmen co-sponsors immediately recognized institutional dysfunction. New members are instantly aware of Congress’s low approval ratings and lack of productivity. There’s a strong desire, he said, to see a return to regular order and to establish a regular process so that members can serve more effectively. This doesn’t happen anymore – everyone knows it, yet dysfunction persists. LaHood says his colleagues have a lot to say about what’s wrong with Congress and a lot of good ideas.

The idea for a joint committee on the organization of Congress was born out of a desire to provide members from both sides of the aisle with a platform or mechanism for bringing ideas to the table. That doesn’t exist now, according to LaHood. “Members are here for 72 hours, running around like crazy, and there are no opportunities to sit down and discuss these issues in a cerebral way.”

He and Lipinski made a conscious choice to propose a forum for ideas rather than prescribe their own solutions to the problems. The committee would provide a much-needed setting for the exchange of ideas. Discussions would be bipartisan and bicameral, and take place at both the staff and member level. One goal is for members to build relationships across the aisle and across chambers—something that he says members have little time to do, but would go a long way in terms of improving dysfunction.

As for whether the current dysfunction can be blamed on the members themselves, LaHood says there’s plenty of blame to go around. There are members who resist change because the system as it currently works serves them well. And there are members who are ideologically extreme and don’t have much interest in working across the aisle. That said, he believes all members need to be accountable to their constituents and should embrace a transparent process that actually works.

In the absence of a dedicated congressional reform effort, LaHood isn’t very optimistic about Congress resolving its current dysfunction. Some members, according to LaHood, are looking hopefully at the recently established Joint Select Committee on Budget and Appropriations Process Reform. If that committee actually “works,” then there’s hope for a joint committee on congressional organization, operations, and functions. But institutional reform isn’t going to happen on its own — there are simply too many things that need to change.


Promoting Innovation at the FCC

*Joe Kane coauthored this piece.

In the United States, the Federal Communications Commission (FCC) must approve all new technologies or services that emit electromagnetic radiation — so, basically anything that uses electricity — before they can be offered to the public. There are very good reasons for this precaution, such as preventing cancer and harmful interference. But the process of obtaining FCC approval is typically long, arduous and fraught with uncertainty. Thankfully, the FCC is finally proposing to streamline and formalize this process.

Review of new technologies and services is governed under Section 7 of the Communications Act. This provision is barely 100 words long, but its message is clear: “It shall be the policy of the United States to encourage the provision of new technologies and services to the public.” Section 7 also provides a basic framework to review applications for new technologies or services according to that policy. However, the FCC has never codified specific rules to govern the Section 7 review process, forcing entrepreneurs to navigate a complex, opaque bureaucratic maze before they can bring their innovations to market.

Consider the recent case of LTE-U, a wireless technology developed by Qualcomm to increase spectral efficiency and provide additional throughput in unlicensed spectrum bands. Mobile carriers sought to deploy this technology in the United States as early as 2014, and internal tests showing that LTE-U could peacefully coexist with other unlicensed technologies, like Wi-Fi, were completed in early 2015. However, the FCC still had to take public comment on the matter. After concerns were raised by members of the Wi-Fi community, follow-up questions were asked, additional costly tests were done and a complicated pre-approval process was established. Final approval of LTE-U wasn’t issued until 2017, leading some carriers to pass on the technology and some industry analysts to question whether that three-year delay may have killed off LTE-U altogether.

The example with LTE-U isn’t even the worst of it. Another entrepreneur, LightSquared, sought to compete head-to-head with incumbent mobile carriers by launching an ancillary terrestrial component (ATC) to pair with its existing satellite network. LightSquared needed approval to modify its existing licenses, which was conditionally granted by the FCC in 2011 but rescinded after commercial GPS providers and the National Telecommunications and Information Administration (NTIA) complained of potential interference. That setback forced LightSquared into bankruptcy; its successor, Ligado Networks, is still waiting on FCC approval to this day.

Even if the FCC were to reject Ligado’s petition, the company would at least have the opportunity to challenge that decision in court. Instead, the petition has languished for over seven years. The FCC has never given a final answer either way, so Ligado is still stuck in limbo. That’s simply unacceptable.

Some bureaucratic oversight is needed to ensure new technologies and services don’t harm existing users, but the FCC’s current Section 7 review is too complex, unpredictable and slow. This creates regulatory uncertainty that stifles, rather than encourages, innovation. Reforming Section 7 and codifying its processes may not save Qualcomm’s LTE-U or Ligado’s ATC, but it will ensure that future innovators won’t suffer the same fate of watching their prized new technologies or services wither on the vine while undergoing endless regulatory review.



The Creative Side of R Street


Shoshana, R Street’s digital media specialist, mixes her own hair dyes and dyes her own hair. So she’s not a big fan of the lawyers and legislators who say that hair dyeing requires special licensing permission. Shoshana does all sorts of creative things. Besides custom-coloring her hair, she sews her own gowns. And her Twitter GIF game is unmatched.

But what if, as with hair-dyeing, the government regulated creativity? What if a law said that, before being creative, she had to get a license?

This week is Fair Use Week – a yearly celebration of important and essential limits on copyright laws. Laws that affect people like Shoshana. But she is far from the only creative person at the R Street Institute. The creativity starts at the top, with our president Eli, who writes restaurant and poetry reviews, and keeps the office full of pirate jokes and South Park–inspired portraits.

Some R Streeters are artists near professional levels: Nila sings Slavic folk songs in an a cappella trio and sells records, and Ann is a published photographer.

Others do it out of love of the craft: Kevin makes videos about fishing and wrote a book about whiskey, Jarrett and his wife created a recipe book as their wedding favor, and Christie makes miniature figurines out of modeling clay.

Easton carved a wooden chest for his goddaughter, and LT built a complete portable tiki bar out of PVC pipe.

Like all creative people, R Streeters depend, in making their own crafts, on the creations of others. Sometimes, this is literally turning old things into new ones. Erica makes parts of old books into new journals and art, and would make antique buttons into jewelry.

Lauren finds old furniture and refinishes it. Shoshana buys hair dye and mixes it with conditioner to make lighter colors.

Art builds upon art, and that is where copyright law comes in. Copyright law is supposed to help creators by making it illegal to copy their creative work without permission. This makes sense in the obvious cases—you should not be allowed to rip off books, songs or movies.

But not all copying is unproductive theft. When art builds upon art, some amount of copying is required to make future art. Because of that, copyright law, when taken too far, can actually hurt creators rather than helping them—the opposite of what copyright is supposed to do.

Consider Caroline, who taught pole dancing fitness, a creative endeavor in itself. She and her fellow instructors put videos of their dance routines up on Facebook, only to have them flagged because of copyrights in the background music. That’s one way copyright can interfere with creativity.

Fair use is a safety valve that keeps copyright law from going too far. It is an exception to the law, recognizing that some amount of copying must be allowed to serve the needs of ordinary people, and especially ordinary creative people.

Ask any copyright lawyer about how fair use works and you will be told that it is difficult to explain and unpredictable in practice. And it is true that, on the margins, courts can be indecisive about fair use. But fundamentally, the core purpose is simple: Fair use helps to ensure that all artists, big and small, can take part in creating art that builds upon the work of others.

The big creators, the Hollywood directors and executives, have the money and connections to hammer out complex copyright licensing deals for permission to create. Individual creators are just as important—a new study finds that almost 15 million independent creators earned almost $6 billion in 2016—but they can’t walk into Hollywood boardrooms and strike deals.

Instead, fair use allows these independent creators – like all the creators at R Street – to be creative.

Jon composes piano covers of hip-hop songs for his own personal entertainment. Fair use is for Jon.


Dan used to create his own electronic music for himself and small parties, including “plunderphonics” involving copious short music samples assembled into a totally new work. Fair use is for Dan.

And the GIFs and memes that Shoshana and her fellow R Streeters create, that put an entertaining and effective point on otherwise dry Washington policy topics? Fair use is for all of them.

Without fair use, copyright law would only allow those who can deal with the complexities of copyright licensing to create and build upon others’ work, as all creativity does. This sends a message to the small creators that the club of creators is limited to Hollywood executives. And that everyone else must sit back and receive whatever movies and TV shows we are handed.

The message of fair use is inclusive – to paraphrase Chef Gusteau in Ratatouille, “anyone can create.” That is the message for Jon’s covers and Dan’s music and Shoshana’s GIFs. That is the message for all the creators at R Street. And that is the message for all independent creators, who make this world a funnier, happier, prettier, better place.




Header image credit: Kirasolly

EPA Hearing in San Francisco May Excite and Inform (Or Not)

A listening tour by the Environmental Protection Agency (EPA) will arrive in San Francisco tomorrow to hear from the “Left Coast” about how to create a new Clean Power Plan that passes muster with the Trump administration.

The Bay Area has perhaps the most environmentally conscious electorate in the country. This means that the majority of session participants will likely view this as an opportunity to voice their unhappiness with the administration’s approach to environmental policy.

And the administration is playing its role as antagonist to a ‘T’. Trump has dramatically reversed the federal government’s approach to Obama-era policies not only on the Clean Power Plan, but also on the Clean Air Act, the Waters of the United States rule and fracking emissions from oil and gas drilling, and the administration is attempting to scale back the Endangered Species Act. And yes, Trump pulled the United States out of the Paris climate accord.

The original Clean Power Plan was a clever, well-designed and unprecedented regulation – originally written by a brain trust at the Natural Resources Defense Council. Its aim was to lower U.S. greenhouse gas emissions dramatically in order to conform to promises made under the 2015 Paris climate accord. It allowed states to pick from a menu of strategies to reduce emissions.

However, EPA Administrator Scott Pruitt now wants to downsize the plan dramatically. He and many others in the administration think that the Obama administration’s interpretation of Section 111(d) of the Clean Air Act was a dramatic overreach of power (the Supreme Court seems to have similar reservations – in early 2016 it stayed the regulation while a lower court reviewed it). Now, Trump’s EPA is working hard to scale back the plan to a more traditional interpretation of Section 111(d), under which rules are based on measures applied to a specific installation. This “inside the fence line” interpretation means the new proposal would largely concern coal-fired power plants.

Ultimately, this is a test of how urgently one views the need for action on climate change. For those on the political left, climate change has been a cause célèbre for many years, and some of its strongest voices call California home.

However, the Trump administration disputes many of the particulars of climate science – including the consensus that greenhouse gas emissions cause climate change and threaten the United States with sea-level rise.

Interestingly, there are very few coal plants in California. But the tour’s stops are strategic – there are natural political economies at play when selecting the sites of the hearings. The next stop is in Gillette, Wyoming, a very un-San Francisco town of 31,000 that just happens to sit atop the massive Powder River Basin coal fields, where roughly 40 percent of U.S. coal production takes place.

Do not expect a final decision to be issued anytime soon. There is a method to the madness of environmental regulation, and it involves public comment periods, draft proposals and A LOT of opportunity for federal courts to invoke stays, injunctions or vacate previous decisions before it is all done.

It’s quite possible the final judicial review of a new Clean Power Plan will outlast the Trump administration itself, if one can imagine life in America post-Trump.



C. Jarrett Dieterle Testifies on Occupational Licensing Before the Committee on Small Business

R Street Institute’s Director of Commercial Freedom, Jarrett Dieterle, testified before Congress about occupational licensing. He discussed the burdens occupational licensing places on individuals and entrepreneurs, while also laying out options the federal government could pursue to reform excessive occupational licensing.

Fannie falls further from its ’10 Percent Moment’


When Fannie Mae and Freddie Mac were bailed out by the U.S. Treasury, which bought enough of the firms’ senior preferred stock to bring the net worth of each up to zero, the original deal was that the Treasury, on behalf of taxpayers, would get a 10 percent return on that investment.

For some time now, Fannie, Freddie and their supporters have ballyhooed how many dollars they have paid the Treasury in dividends on that stock, but that is an incomplete statistic. The question is whether those dollars add up to a completed 10 percent return. For that to happen, the payments have to be the equivalent of retiring all the principal plus providing a 10 percent yield; this is what I call the “10 Percent Moment.” We can easily see if this has been achieved by calculating the internal rate of return (IRR) on the Treasury’s investment. Have Fannie and Freddie at this point provided a 10 percent IRR to the Treasury or not?
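To make the test concrete, the following is a minimal sketch in Python of how such an IRR check works. The cash flows here are purely hypothetical and are not the actual Treasury schedule; a negative number is cash the Treasury puts in, and positive numbers are dividends it receives back.

    # Hypothetical illustration of the "10 Percent Moment" test: does a stream of
    # dividends give the Treasury at least a 10 percent internal rate of return?
    # These cash flows are made up for illustration, not the actual Treasury data.

    def npv(rate, cash_flows):
        """Net present value of annual cash flows, with the first flow at time zero."""
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

    def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-9):
        """Internal rate of return by bisection (assumes one sign change in the flows)."""
        for _ in range(200):
            mid = (lo + hi) / 2
            if npv(lo, cash_flows) * npv(mid, cash_flows) <= 0:
                hi = mid
            else:
                lo = mid
            if hi - lo < tol:
                break
        return (lo + hi) / 2

    # A $100 billion injection at time zero, followed by ten years of hypothetical dividends.
    flows = [-100.0] + [12.0] * 9 + [52.0]
    print(f"IRR on these hypothetical flows: {irr(flows):.2%}")

If the computed IRR reaches 10 percent, the 10 Percent Moment has arrived; if it does not, then the dividends, however large in dollar terms, have not yet repaid the principal plus the promised yield.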

The answer is that Freddie has, but Fannie, by far the larger of the two, has not.

Freddie’s net loss in the fourth quarter of 2017 means the Treasury has to put $312 million back into it to get Freddie’s capital up to zero again. This negative cash flow for Treasury will reduce its IRR on the Freddie senior preferred stock, but only to 10.7 percent. Freddie has still surpassed the 10 percent hurdle return.

On the other hand, Fannie’s fourth quarter loss means the Treasury will have to put $3.7 billion of cash back into it, dropping the Treasury’s IRR on Fannie from 9.79 percent in the fourth quarter of 2017, to 9.37 percent. That’s not so far from the hurdle, but the fact is that, as of the first quarter of 2018, Fannie has not reached the 10 Percent Moment. Fannie and its private investors need to stop complaining about paying all its profits to the Treasury until it does.

When both Fannie and Freddie achieve the 10 Percent Moment, it would be reasonable for the Treasury to consider declaring its senior preferred stock in both fully retired, in exchange for needed reforms. At that point, Fannie and Freddie’s capital will still be approximately zero. They will still be utterly dependent on the Treasury’s credit and unable to exist even for a day without it. Reforms could be agreed to between the Treasury and the Federal Housing Finance Agency (FHFA)—as conservator and therefore boss of Fannie and Freddie—and carried out without needing the reform legislation, which is so hard to achieve. There will be a new director of the FHFA in less than 11 months.

Surely a restructured deal can emerge from this combination of factors.

Image by mspoint


Crippled Congress = expanded executive powers

Congress is unquestionably polarized. Many of the nation’s most pressing issues are mired in gridlock. These facets of our politics are understood, but the consequences of our current circumstances rarely receive the attention they clearly warrant.

Indiana University political scientists Edward Carmines and Matthew Fowler address why these consequences matter in their recently published paper, “The Temptation of Executive Authority.” The authors thoroughly detail a specific outcome of our current political environment that many on the congressional capacity bandwagon have warned of for years: when Congress doesn’t have the capacity to legislate, the president will.

Citing a perfect storm of increased polarization, contentious elections for congressional majorities and the White House, and regular instances of divided government, the authors argue that recent presidents of both parties have taken advantage of the inability and/or unwillingness of Congress to craft and pass legislation and have done so themselves. These reinforcing influences “have led to the expansion of executive authority at the expense of a diminished legislature.” In short, the mantra of three co-equal branches of government is becoming less accurate as the executive gains strength.

According to Carmines and Fowler, “Congress, in a nutshell, no longer seems up to the challenge of taking effective action to deal with the major problems facing the country.” The authors back up their argument with five indicators of decreased legislative productivity, including fewer bills passed by Congress, the declining percentage of bills going to conference, and even the increase in cloture votes taken to end debate on pending bills. By all measures, Congress’ capacity to legislate has taken a serious hit, giving the president an opportunity to take up the slack.

Important to keep in mind

We know Congress has cut its own resources, from technology, to support agency expertise, to staffing resources within its own offices. But, let’s remember an essential point: Congress can fix this. In fact, Congress can act alone to fix this.

In each instance where Carmines and Fowler highlight presidents making use of, and even expanding on, their executive authority, they couch the action as a response to congressional inaction. For example, they write that presidents have chosen to use executive authorities because they faced a “recalcitrant and uncooperative Congress,” or a Congress that “has become ideologically bifurcated and unwilling to compromise,” or a Congress that “lacks the capacity and perhaps the will to play a coequal role.”

And that’s the point. The policy vacuums filled by executive actions only exist because Congress creates them.

When we talk about legislative capacity, we should be aware of the likelihood that the current environment exists because enough members of Congress want it to exist. Legislating is hard; outcomes are uncertain, unintended consequences abound, and the politics are often dangerous given that bipartisan compromise is almost always necessary for bills to be signed into law. Many members have made the calculation that it is politically advantageous to let the president lead.

Congress can curb the use of executive action by taking action itself, but it has to want to. Though Congress can and should increase its own institutional capacity by hiring more staff and compensating them better, the will to take back legislative superiority must come first. As R Street Governance Senior Fellow James Wallner argues, and as you will read in much of his upcoming work, active congressional participation is best thought of as “the politics of effort.” And as Carmines and Fowler conclude, the executive, in the absence of effort from the legislature, will not hesitate to take and expand on his authorities to implement his policy preferences.


What is going on with WHOIS?

“WHOIS” is a database administered by the Internet Corporation for Assigned Names and Numbers (ICANN). It contains data such as who owns and operates a domain name and how to contact them. There has been much public chatter about WHOIS recently thanks to Europe’s General Data Protection Regulation (GDPR), which is slated to take effect on May 25, 2018. GDPR has heightened standards for what personal information may be made publicly available online, meaning that some of the data in WHOIS will likely run afoul of the new regulation. This, therefore, both necessitates and provides an opportunity to reevaluate the disclosure and use of WHOIS data.

ICANN President and CEO Göran Marby recently laid out three models for how to change WHOIS in order to bring it into compliance with GDPR. “Model 1” would apply only in the European Economic Area (EEA) — the area affected by GDPR. This model would withhold personal information from the public but allow access to anyone who self-certifies they have a legitimate interest in the data. This model is only a modest change from a completely publicly available WHOIS since there is no verification mechanism to see if a party’s interest is indeed legitimate. For this reason, it is questionable whether Model 1 is GDPR compliant.

“Model 2” has received significantly more interest. It would create a layered system in which most data are non-public, but certain, predefined groups would be able to gain more access after a formal accreditation process. This proposed model has two variations: 2A, in which the new process applies only to the EEA, and 2B, which applies to the whole WHOIS system.

A Model 2 approach is akin to ICANN’s ongoing effort to replace WHOIS altogether with Next-Generation Generic Top-Level Domain Registration Directory Services (RDS), which will likely extend some kind of layered access to classes of certified users. The main drawback of implementing this model now is the clock: there is not enough time to thoughtfully design and implement a fully developed, layered approach before GDPR takes effect in May.

That brings us to “Model 3,” which makes most data non-public and does not release it to anyone except to comply with a court order. This model most clearly complies with GDPR by closely tying access to the purpose of WHOIS. It would still allow the intended functions of WHOIS as a repository of data necessary for administrative functions, without making those data publicly available. Domain registrars need to be able to keep track of transfers of domain names and back up ownership records, but the registrars can do their jobs even if the data are not available to the public. Other interested parties, such as law enforcement, may want access to the data, but their goals must be considered separately from the purpose of WHOIS itself. Additionally, they would still be able to access the data if they get a court order. This model is supported by groups like the Electronic Frontier Foundation and the Internet Governance Project.
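For illustration only, here is a minimal sketch in Python of how a layered-access policy in the spirit of Models 2 and 3 might work. The field names, tiers and redaction rules are hypothetical and are not ICANN’s actual schema: the public sees only administrative data, an accredited requester sees contact data, and the full record is released only under legal process.

    # Hypothetical sketch of tiered WHOIS disclosure under a layered-access model.
    # Field names, tiers and rules are illustrative, not ICANN's actual schema.

    PUBLIC_FIELDS = {"domain", "registrar", "creation_date", "name_servers"}

    def redact(record, tier):
        """Return the view of a WHOIS record appropriate to the requester's tier."""
        if tier == "court_order":      # roughly Model 3: full record, only with legal process
            return dict(record)
        if tier == "accredited":       # roughly Model 2: predefined groups see contact data
            return {k: v for k, v in record.items() if k != "registrant_home_address"}
        # Default public view: personal data withheld, administrative data retained.
        return {k: (v if k in PUBLIC_FIELDS else "REDACTED") for k, v in record.items()}

    record = {
        "domain": "example.org",
        "registrar": "Example Registrar LLC",
        "creation_date": "2015-06-01",
        "name_servers": ["ns1.example.net"],
        "registrant_name": "Jane Doe",
        "registrant_email": "jane@example.org",
        "registrant_home_address": "123 Main St",
    }

    print(redact(record, "public"))
    print(redact(record, "accredited"))

The point of the sketch is simply that the same underlying record can support different disclosure rules for different requesters, which is what separates the layered models from an all-or-nothing WHOIS.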

As a long-term solution, Model 2B, in the form of a new RDS, is preferable, as it will account for the legitimate interests of all parties without publicly disclosing everything to everyone. 2B is preferable to its Model 2A counterpart because it provides a uniform, international standard rather than carving up domain name policy along political borders.

We also should not rush the process; a slapdash layered approach to WHOIS would likely create more problems than it solves. Therefore, in the meantime, ICANN should work diligently to complete the new RDS and adopt Model 3 as a stopgap measure in order to comply with GDPR.

The intersection of WHOIS and GDPR highlights the ways in which Internet governance is increasingly bumping into traditional regulation by nation-states. If it ever was, the Internet is no longer a domain outside the reach of governments. It is still a global ecosystem, but, as in this case, global policy can be swayed by regulations in a particular region. Maintaining the legitimacy of private Internet governance rather than government intervention is likely to become an increasingly difficult but important struggle.

Too Organic for the Government

One just can’t make this stuff up. Earlier this month, the U.S. Environmental Protection Agency (EPA) got a stay until May 1 of a court mandate requiring farmers to report government-declared “dangerous gases” released by decomposition of manure.

Under the U.S. Court of Appeals ruling, these so-called “hazardous substance releases” – which have been occurring since livestock appeared on earth – will have to be reported to first responders. The only question is when.

I take it back. There are some obvious additional questions: How will the first responders protect us from these dangerous agricultural releases?  What emergency response protocol springs into effect when they get notified – not of a sudden release happening after a train wreck or a plant explosion – but of a constant release that can’t even be quantified except by a formula taking into consideration the number of animals, the weather, the climate and the geographical area that manifests the hazard? Because “[t]he purpose of the notification is for federal, state, and local officials to evaluate the need for an emergency response to mitigate the effects of the release to the community.”

This is not the same issue as manure management to protect groundwater quality, which is a serious and widely-supported policy much more in line with the purposes of critical environmental protection. A 2005 spill in Lewis County, New York, for instance, is estimated by the state’s Department of Environmental Conservation to have killed 375,000 fish in the Black River and was judged instrumental in hastening promulgation of state waste management rules.

The EPA, lately much-criticized for mission creep, ruled that emergency releases of agricultural ammonia and hydrogen sulfide – which have been byproducts of farming since farming was a thing – were not something they needed to be especially concerned about under either the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA), also known as “Superfund,” or the Emergency Planning and Community Right-to-Know Act (EPCRA). Thus, agricultural releases were exempted from the final rule when it was adopted in 2008.

The EPA was subsequently sued by environmental and animal rights groups who are on a mission to wipe out livestock farming and were disturbed that agriculture got a pass on reporting these discharges. The final rule was struck down last year after several years of litigation. When the court’s ruling takes effect, the EPA’s regulatory exemption will no longer apply.

In 1978, President Jimmy Carter declared a landfill near Niagara Falls, New York, known as “Love Canal,” a “federal emergency area.” William T. Love was an entrepreneur who wanted to dig a canal to join the two levels of the Niagara River that were separated by Niagara Falls. The original project failed after a mile of digging, which produced a trench 10 feet deep and 15 feet wide. The U.S. Army apparently buried some waste from chemical weapons experiments there, and then from 1947 to 1952, Hooker Chemical and Plastics Corporation filled it up with toxic waste. Eventually hundreds of homes and a school were built on several layers of dirt sitting on top of 21,000 tons of toxic waste poured into the spot. When the school building’s foundation punctured the clay cap, people began to get sick, and around 800 families were relocated over the next couple of years.

CERCLA was enacted two years later to authorize the government to compel responsible parties to remediate toxic sites. Fees were instituted on the chemical and petroleum industries to pay for cleanup if the responsible parties could not, or failed to, clean up. In the aftermath of the worst industrial accident in modern history – the 1984 disaster at the Union Carbide pesticide plant in Bhopal, India (Union Carbide is now part of Dow Chemical), which left thousands dead from the gases leaking out of the plant – the CERCLA law was expanded by the Superfund Amendments and Reauthorization Act (SARA). Title III of this act, signed in 1986, is the community right-to-know law.

These laws provide important protections to the public and tools to mitigate some extremely damaging assaults on the natural environment. The need for these laws was widely-appreciated at the time and to this day. These fundamental environmental protections and remediation efforts brought back Lake Erie, cleaned up Superfund sites, reclaimed strip-mined areas and made thousands of river-miles swimmable and fishable. These real accomplishments are just not comparable to tweaks in deference to people who just don’t like the idea of farming livestock. The comparison makes this latest skirmish all the more pernicious.

I can’t blame the EPA because this is not their doing. In fact, they seem to be doing everything they can to be helpful, including allowing an annual report of “continuing emissions” instead of daily reporting. They are not requesting that producers monitor or reduce emissions.

A bipartisan group of senators including Sens. Joni Ernst, R-Iowa, Deb Fischer, R-Neb., Joe Donnelly, D-Ind., John Barrasso, R-Wyo., Mike Rounds, R-S.D., Pat Roberts, R-Kan., Heidi Heitkamp, D-N.D., Chris Coons, D-Del., and Tom Carper, D-Del., introduced a bill on Valentine’s Day to protect farmers, ranchers and livestock markets from these additional EPA reporting requirements. In press releases describing the introduction of the Fair Agricultural Reporting Method (FARM) Act, the words “common sense” were used repeatedly.

In the meantime, you will not be shocked to hear that while the EPA estimates that only 44,900 producers will meet the reporting threshold, the U.S. Poultry and Egg Association estimates that 141,000 poultry farms will have to report, and the National Cattlemen’s Beef Association thinks that over 68,000 of its members will be filling out the forms once released. But have no doubt: real environmental warnings to the community will inevitably be diluted, lost in the noise, once first responders start being notified that there is sometimes a lot of manure on farms with hoofstock or poultry.


Cryptocurrencies and blockchain: Techno-gold or fool’s gold?

A group of economic policy experts met at AEI on Monday to discuss the regulatory challenges that bitcoin and other cryptocurrencies pose.

Alex J. Pollock of the R Street Institute started by providing the historical context in which cryptocurrencies have gained popularity. Clemson University’s Jerry Dwyer explained key trends in global bitcoin exchanges and outlined several recommendations for cryptocurrency regulations.

Next, Bert Ely of Ely & Company Inc. highlighted the role that blockchain technology plays in underpinning cryptocurrencies and what it means for policymakers. He also discussed the role that speculation plays in driving the market value of cryptocurrencies and the current flaws in cryptocurrency mechanisms, pointing out that there is no argument for central banks to issue cryptocurrencies.

AEI’s Paul H. Kupiec described the function that currencies have played historically, from the time of Croesus to colonial America. He pointed out that, although cryptocurrencies might represent a financial bubble, such bubbles are in fact frequent in financial history.


Congressional Testimony of Paul Rosenzweig: Data Security in the Modern World

The Subcommittee on Financial Institutions and Consumer Credit called a hearing to examine the issue of data security in the financial sector. Senior Fellow Paul Rosenzweig testified in favor of a “light touch” approach based on standard-setting and guidance rather than hard regulatory mandates.



Click here to read the transcript.

Podcast: Grading California’s Wildfire Response

In the wake of California’s recent and devastating wildfires, the state’s legislature has responded with a slew of proposals that would insulate homeowners from the true cost of living in high-risk areas. R Street’s Adams discusses these proposals, and the Golden State’s uniquely problematic system of insurance regulation, with scholars at the Pacific Research Institute.

The Fed as a Piggy Bank?  Of Course!


The Bipartisan Budget Act passed last week had a little item in it to help government revenues by confiscating $2.5 billion of the retained earnings (they call it “surplus”) of the Federal Reserve Banks.  When they found out about it, the commercial banks, which own all the stock of the Federal Reserve Banks, weren’t happy.

“Critics say the plan is yet another example of Congress turning to the Fed as a source of funding,” the American Banker reported.  It is now “common to use the Fed as a piggy bank,” complained the Independent Community Bankers of America.

The Fed as a piggy bank?  Of course, what else?  According to the Federal Reserve’s own press release, the Federal Reserve Banks paid $80.2 billion of their 2017 profits to the Treasury.  In other words, 99 percent of their estimated net profit for the year of $80.7 billion goes to the government to help reduce the budget deficit.  To confiscate another $2.5 billion only increases the aggregate take by 3 percent.

The Federal Reserve Banks have the highest rate of profitability of any bank, with a 2017 return on equity of about 195 percent.  Of course, they are also astronomically leveraged, with assets of about 107 times equity, or a tiny capital ratio of about 0.9 percent.  Almost all of that leveraged profitability goes right into the Treasury, every year.

The Federal Reserve Banks paid aggregate dividends to their shareholders of $784 million in 2017, or less than 1 percent of what they paid the government, which is a greedy business partner, it seems.
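As a rough back-of-the-envelope check of those figures, here is a short Python sketch that uses only the numbers cited above (the reported 2017 figures, not a fresh pull of Federal Reserve data):

    # Back-of-the-envelope check of the figures cited above.
    # Dollar amounts are in billions; these are the reported 2017 numbers.
    net_profit   = 80.7    # estimated net profit of the Federal Reserve Banks
    remitted     = 80.2    # remittance to the Treasury
    confiscation = 2.5     # retained earnings taken by the Bipartisan Budget Act
    dividends    = 0.784   # aggregate dividends paid to member-bank shareholders
    roe          = 1.95    # roughly 195 percent return on equity
    leverage     = 107     # assets of roughly 107 times equity

    print(f"Share of profit remitted:  {remitted / net_profit:.0%}")    # about 99 percent
    print(f"Added take from surplus:   {confiscation / remitted:.0%}")  # about 3 percent
    print(f"Implied equity, billions:  {net_profit / roe:.0f}")         # about 41
    print(f"Implied capital ratio:     {1 / leverage:.1%}")             # about 0.9 percent
    print(f"Dividends vs. remittance:  {dividends / remitted:.2%}")     # just under 1 percent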

The Federal Reserve System is many things, but one of them is a way for the government to make a lot of money from the seigniorage arising from its currency monopoly.  The Fed creates money to buy bonds from the Treasury, collects the interest, then gives most of the interest back.  It also uses its money power to buy mortgage-backed securities from Fannie Mae and Freddie Mac, which are owned principally by the Treasury, collects the interest on them, and then sends most of it to the Treasury.

As my friend and banking expert Bert Ely always reminds me, it is easier to understand what is going on if you simply consider the Treasury and the Fed as one interacting financial operation, and consolidate their financial statements into one set of books, clarified by consolidating eliminations.  Then you can see that on a net basis, the consolidated government is creating money instead of borrowing from the public in order to finance its deficits and in order to generate vast seigniorage profits for itself.  The Fed makes a very useful front man for the Treasury in this respect.

The first congressional confiscation of Federal Reserve retained earnings was in 1933.  Then they were taken to provide the capital for the newly formed Federal Deposit Insurance Corporation.  So, as usual in financial history, taking the Fed’s retained earnings is not a new idea.  The Federal Reserve Banks are a politically useful piggy bank, to be sure.

The struggle between objectivity vs. neutrality continues at the Congressional Research Service


Recently, leadership of the Congressional Research Service and the Library of Congress were presented with a memorandum. It expressed concern that the agencies’ analysts, attorneys, and reference experts were being muzzled a bit.

“We are concerned that CRS risks falling short of its mission if it holds back the independent analysis that Congress has directed us to provide. Sparking our concern, CRS has appeared to avoid reaching conclusions in some topic areas with high potential for political controversy. In some such topic areas, CRS operates as a neutral compiler of facts and opinions, with little of the expert analysis, appraisal, and evaluation of their credibility that Congress requires. CRS also seems to have avoided a few topics or facets of topics almost entirely. Yet these risk-avoidant strategies, while certainly understandable, could in fact increase other risks such as under-utilizing CRS’s valuable personnel; contributing to polarization; and, ironically, inviting a perception of partisan bias. Perhaps worse, given the mission of CRS, is the risk of a slow slide into irrelevance.”

(Disclosure: I was shown the memorandum and signed it.)

This debate is not a new one at the agency. I first saw it erupt back in 2004, when nationally renowned senior specialist Louis Fisher was taken to task for expressing his concerns about executive branch encroachments on legislative branch authorities. CRS leadership produced a memorandum directing analysts to produce work that gave the appearance of neutrality, as opposed to objectivity. The former standard amounts to saying, “On the one hand X, on the other hand Y.” The latter standard says, “Here is what the facts and analysis indicate.”

Plainly, this dispute within CRS continues. Analysts want to be respected experts who can convey objective analysis to Congress while agency managers fear cuts to the agency’s budget and people losing their jobs. Yet as the January 12, 2018 memorandum makes plain, the stakes are no mere tempest in a Beltway teapot:

“As you know, the current climate of ‘alternative facts,’ ‘fake news,’ conspiracy theories, and declining trust in a common reality poses problems for the United States’ political system. While technological and social trends increase the need for information literacy, people across the political spectrum do not know where to turn for reliable information. Many end up in polarized ‘bubbles.’ These trends threaten democracy, in part by eliminating shared factual grounds on which people and their legislators can debate, compromise, and seek consensus. In this climate, CRS’s mission has never been more vital.”

Legislative support agencies occupy a particularly difficult position these days. The agencies, which include the Government Accountability Office, Congressional Budget Office, Library of Congress Law Library, and CRS, were established to add knowledge to the political process. Make Congress smarter — who could object to that?

As it turns out, plenty of folks can. Knowledge can be threatening. Facts can undermine arguments being made for or against a policy. It is a problem as old as politics. Remember what happened to Socrates when he employed reason to question Athenian conceptions of justice?

The problem of knowledge in politics becomes more acute in the era of hyper-partisanism. Anything one writes might be used by one faction or another as a club to pound the other. Politicians tend to feel besieged and want to control the narrative. So they sometimes lash out at anyone who writes or says anything to contradict that narrative. Last year some members of Congress advocated outsourcing the Congressional Budget Office’s budget scoring duties to private sector think-tanks. Why? Because they were upset about how the CBO tallied an Obamacare repeal bill. Whether this particular score was right or wrong can be debated, but gutting a legislative support agency over a single score is inarguably a gross overreaction.

Exacerbating the challenge further are the perceived stakes: namely, partisan control of Congress. Party control of our national legislature has switched back and forth rapidly since the early 1990s. We have not seen swings like this since the post-Civil War period, notes political scientist Frances Lee. Democrats are out, Republicans are in, then the GOP is out, and later back again. Now the focus is on whether Democrats will reclaim control of Congress after November 2018. One effect of this peculiar state of politics is that each party tends to view nearly everything through the prism of the next election. Which means legislative support agencies’ work too often gets viewed less for its intrinsic value and more as a bother or even a threat.

All legislative support agencies feel the threat of legislative retribution. In the 1990s, GAO had its budget cut 25 percent. The Office of Technology Assessment was zeroed out by Speaker Newt Gingrich in 1995, and its staff let go. By virtue of running Congress’ think-tank, CRS leaders feel especially vulnerable. CRS analysts and reference specialists interact with congressional staff every day. In FY2016, there were:

“more than 62,000 requests for custom analysis and research. The Service hosted more than 9,200 congressional participants at seminars, briefings, and training; published more than 3,500 new or updated reports; summarized more than 6,300 bills; and maintained nearly 10,000 products on its website for Congress, which received over 1.7 million views. Overall, CRS provided confidential, custom services to 100% of Member and standing committee offices.”

When the Internet began becoming ubiquitous two decades ago, CRS reports went from being hard copies that were read only by a small number of folks on the Hill to the subject of stories in the New York Times. The arrival of the World Wide Web, smartphones, and the bitterly contentious environment on the Hill, as I described elsewhere, slammed CRS. The once insular agency found its staff being trashed by legislators, media, and bloggers. All for doing their jobs. It was never enough for external critics to write, “CRS’s analysis is fair but falls short for the following reasons.” Instead they couched their critiques in terms of CRS being biased or in the bag of one party or the other.

Agency management struggled to respond to this development, which meant anything the agency wrote or said risked setting off a political firestorm. Over the past two decades, CRS leadership has confronted the hyper-partisan, Internet-connected world mostly by trying to hide staff from it. Once upon a time CRS’s experts regularly appeared on panels at academic conferences, wrote for journals and other public media, and spent long stretches working as detailees in the House or Senate. These days, such activities occur much less frequently. As management sees it, the less visible staff are, the less vulnerable the agency is.

While neutrality and invisibility might appear to be rational strategies from the perspective of the agencies’ higher-ups, it is soul-crushing to staff. Nowadays, CRS’s analysts feel pressured to frame everything as “some say this, and some say that” and to shrug in the face of legislators and staff when asked, “What do you think?” (Congressional staff hate such unresponsiveness, by the way.) Being an expert means reaching conclusions. It also means being able to write freely (but responsibly) and follow the facts and data where they lead. For these reasons, CRS has hemorrhaged talent, which is bad for the agency and bad for Congress.

There are no easy answers to this lamentable state of affairs. The Internet is not going away, and hyper-partisanism shows no signs of flagging. Hostility to facts and expert opinion is an ineradicable fact of life. Things could improve if both legislators and CRS management stepped back and took a deep breath.

The next Congress would do well to adopt an internal rule that legislators and staff will not publicly berate legislative support agencies or accuse them of being biased. Certainly, civil servants can get things less than 100 percent correct. But so can legislators. That they do explains why CRS and other agencies exist to begin with. Hence, if a legislator thinks a CRS report is objectionable, then he or she should put out a press release politely taking issue with the analysis’ framing or methodology, and leave it at that. If the critique is sound, sympathetic media will report it and the agency itself will take note and do better next time. This is not rocket science, it is civility. Besides, if a legislator’s case is so weak that they fear a CRS analysis might sink it, well, they probably should rethink what they’re advocating.

CRS leadership, for its part, needs to learn to better read the signals coming from Capitol Hill. If a staffer calls to grouse about a report, there’s no reason to pull a fire alarm. Even during the budget-slashing years of Speaker Gingrich, CRS’ budget was never axed. Indeed, so long as agency leadership maintains friendly relationships and an open line of communication with its authorizing committees and appropriators, there is no way one cranky legislator (or even a bunch of them) is going to hurt the agency. CRS leadership needs to be confident on this count, and to turn that confidence into a clear message to CRS staff: “We have your back.” Analysts and reference experts at CRS need to believe that, lest they continue to self-censor. Finally, CRS management also should recognize that its reputation on Capitol Hill very much depends on the agency being seen as useful and objective. That means CRS experts need to be permitted to write clearly and to share their objective assessments, even if those assessments are not neutral.


ICYMI: Top reads on Congress

Congressional capacity, retirements, parties

Matt Glassman, “Why Congress Doesn’t Always Do the Right Thing,” New York Times:

“Will arguments to “do the right thing” persuade lawmakers? Don’t hold your breath. Such exhortations are rarely heeded by politicians because the structural incentives of the institution usually trump policy considerations.”

James Wallner, “When hatred of Trump leads to disdain for debate,” Washington Examiner:

“By trying to delegitimize those with whom they disagree, commentators like Cohen shrink the political sphere to deny their opponents the right to participate in the first place. In the process, they conveniently sidestep the need to engage in a substantive debate over what’s acceptable presidential behavior or what constitutes good public policy.”

David A. Hopkins, “Don’t Expect Much Legislation from Congress in 2018,” Honest Graft:

“Even during normal political times, the internal operation of Congress gets much less than its rightful share of attention from the news media and public. With Donald Trump as president? Forget it. But amidst all the other drama of this eventful week, a few important clues emerged about the road ahead for Congress in 2018. They all seem to point in the same direction: to a relatively unproductive legislative year.”

John T. Bennett, “Nunes Memo Aftermath Could Stifle Legislative Agenda,” Roll Call:

“The memo’s release and the Democrats’ fiery response adds a flammable dispute to an ever-growing pile of political kindling only weeks into a midterm election year with control of both chambers in play.”

Tara Golshan, “The simple explanation for all the Republican retirements: Congress sucks,” Vox:

“But there are some overarching trends worth mentioning: Congressional leadership has increasingly centralized decision-making away from individual lawmakers, and there’s a growing understanding that House Republicans could slip into the minority after this midterm election cycle. Paired together, lawmakers are likely asking themselves the point of being in the Capitol, said Jason Roberts, a political scientist at the University of North Carolina Chapel Hill who studies Congress.”

Heather Caygle and John Bresnahan, “It will be an intraparty war,” Politico:

“A stealthy discussion is already underway within the Democratic Caucus, particularly among members whose only experience in Congress is in the minority. Assuming Pelosi either leaves on her own or is pressured to step down, her exit would trigger a messy battle between the party’s old guard, led by House Minority Whip Steny Hoyer (D-Md.), and the party’s younger members, represented by House Democratic Caucus Chairman Joe Crowley (D-N.Y.).”

Alexander Bolton, “Republican agenda clouded by division,” The Hill:

“The looming question, however, is whether McConnell and other GOP leaders are willing to risk a backlash from the conservative base by cutting deals with Democrats — especially with primary elections quickly approaching. Some suggest the answer is to let senators legislate on the floor, something McConnell has vowed to do on immigration.”

Jennifer Shutt, “The Appropriator in Winter: Frelinghuysen’s Last Stand,” Roll Call:

“The House Appropriations chairman is going out amid a blizzard of Republican infighting; lackluster presidential approval dragging down many of his “blue state” GOP colleagues; the increasing polarization of the electorate; and greater influence of Southern and Western conservatives at the expense of Northeastern moderates like himself. And then there is the long, slow decline of the appropriations process, which lost its sheen for many when earmarks were banned, discretionary spending was slashed to the bone and “government by CR” became the rule rather than the exception.”

Sharon LaFraniere and Nicholas Fandos, “How Partisan Has House Intelligence Panel Become? It’s Building a Wall,” New York Times:

“To committee members of both parties, the division of one room into two is emblematic of how far the panel, a longtime oasis of country-first comity in a bitterly divided Congress, has fallen since it began its Russian inquiry last year. Any pretense that committee members will come together to get to the bottom of that matter – or any other – has disappeared.”

Joe Lieberman, “We’re well beyond partisanship, our national government has lost civility,” The Hill:

“However, today we confront a more tempestuous political environment. The basic rhythms of the national legislative process — the norms that prompted Republicans and Democrats to work together in the service of the greater good — are gone. Our democracy is proving unable to meet the challenges of the moment. We face real trouble ahead.”

Sam Rosenfeld, “The Polarizers,” (podcast) New Books Network:

“Rosenfeld tracks the people—the Architects in his subtitle—who initiated changes in party rules and institutions that facilitated the development of the parties. The book is rich in historical details and meaning for our current political moment.”

GAI at Georgetown University, “Congress, Two Beers In,” (podcast)

GAI’s senior fellows discuss congressional politics on a weekly basis.


Ryan Kelly, “‘It’s the Custom of the House to Hear the Leader’s Remarks,’” Roll Call:

“It appears John Boehner set the precedent for Minority Leader Nancy Pelosi’s remarks on the House floor today. Back on June 26, 2009, then-Minority Leader John Boehner talked for over 20 minutes and received a ruling from the chair, when Democrats tried to interrupt him, that “it’s the custom of the House to hear the leader’s remarks” during morning hour speeches.”

Jordain Carney, “Senate headed for freewheeling debate on immigration bill,” The Hill:

“The Senate will be starting from scratch next week when it begins debating immigration legislation on the floor, a key choice that could impact the outcome. Senate Majority Leader Mitch McConnell (R-Ky.) said Wednesday he will use a nonimmigration bill as the starting point for floor debate, a decision in line with a weeks-long promise that the process will be “fair.””

Dean DeChiaro, “Senate Immigration Debate to Begin With Blank Slate,” Roll Call:

“Senate Majority Leader Mitch McConnell said Wednesday he will kick off next week’s debate over the fate of 690,000 “Dreamers” with a shell bill that does not include immigration-related language. The debate “will have an amendment process that will ensure a level playing field at the outset,” the Kentucky Republican said on the Senate floor.”

David Winston, “Opinion: To Filibuster or not to Filibuster,” Roll Call:

“To filibuster or not to filibuster. That is the question and only Senate Democrats can supply an answer. The choice is clear. More uncertainty for the country and putting economic growth at risk — or a willingness to accept compromise neither side may like but both can live with.”

Rachel Bovard, “Government shutdowns are the dysfunction of new Senate norm,” The Hill:

“Shutdowns, once reserved for dramatic standoffs and last resorts, are now becoming a normal way the Senate negotiates. That shutdowns have become so predictable is a reflection of the dysfunction of the Senate itself.”

Theresa Hebert, “Working with Coalitions in Congress,” Quorum:

“In Congress, little can be done alone. Every member in the House and Senate is part of a state delegation and a variety of committees and caucuses. Here is a glimpse at the groups Rep. Karen Bass (D-CA-37) is a part of in the House of Representatives and tips for how your organization can work with the respective coalitions.”

Budget, debt limit, earmarks

Philip Wallach, “Americans would have a patriotic duty to ignore a debt ceiling crisis,” Washington Post:

“For either party, consciously choosing to starve the nation of needed funds to make some point (even a broadly popular one such as the undesirability of enormous government debt) would incur huge political costs, much greater than any government shutdown. Members of Congress would have to be politically suicidal to keep up a standoff in which President Trump and his administration insist on the urgency (indeed, the national security imperative) of raising the debt ceiling quickly and without strings attached, so there would be every reason to expect a quick defusing of the crisis.”

James C. Capretta, “Scrap the U.S. Debt Limit Before It’s Too Late,” Real Clear Policy:

“The federal government has a major debt problem, but the solution is not the current statutory limitation on government borrowing, which is counterproductive and could inadvertently cause permanent damage to the U.S. economy. The limitation should be scrapped immediately and replaced with a less risky modification to the budget process, one that encourages political leaders to focus on long-term deficit reduction.”

No Labels, “Stop Continuing Continuing Resolutions,” Real Clear Policy:

“Lawmakers will likely pass another short-term spending bill — the fifth since September 30 — to keep the government functioning. But there is a renewed sense of urgency that a final budget, and not just another continuing resolution, is needed. While many lawmakers have been involved in budget negotiations, a few have had an outsize influence.”

Jennifer Shutt, “Five Continuing Resolutions? Par for the Course on Capitol Hill,” Roll Call:

“Should it be signed into law, the fifth stopgap measure — introduced late Monday — would expire on March 23. At that point, lawmakers would be 174 days into fiscal 2018, with none of the 12 appropriations bills enacted on time. But the fact is veteran lawmakers like Lowey, who first entered the House in 1989, and even newer members such as those elected in the tea party wave of 2010 that ushered in GOP control, have learned to live with “governing by CR.””

Casey Burgat, “Examining the Case for Biennial Budgeting,” R Street Institute:

“Biennial budgeting has been suggested for decades as a potential reform that would help alleviate many of the ills within the broken congressional budgeting process. This policy paper takes stock of the proposed advantages and criticisms of transitioning the federal government to a biennial, rather than annual cycle. Ultimately, I argue that a biennial budgeting model may provide some marginal benefits to a clearly dysfunctional budget process, but will do little to solve the more pressing problems, such as true spending priority differences between political parties.”

Tom Schatz, “How Congress Can Restrain the Executive Branch Without Reviving Earmarks,” The Federalist:

“The answer to members’ complaints about their supposed lack of control over executive branch spending is greater oversight and renewed efforts to authorize programs, not earmarks. There is no comprehensive list of oversight hearings or their outcome, or any comparison from one Congress to the next. Oversight hearings tend to repeat the same subject matter. Joint hearings within the House are rare, and joint hearings between the House and Senate are extremely rare.”

Congress and sexual harassment

Cristina Marcos, “House passes landmark bill to overhaul sexual harassment policy on Capitol Hill,” The Hill:

“The House passed landmark legislation on Tuesday to overhaul Capitol Hill’s sexual harassment policies following a string of recent revelations that multiple lawmakers engaged in misconduct. Passage of the bill by a voice vote means it now heads to the Senate, where its future is uncertain but could be helped by momentum from the “Me Too” movement highlighting sexual harassment.”

Michael Stern, “Sexual Harassment and the Office of Congressional Ethics,” Point of Order:

“My purpose here is not to analyze CARA’s proposed reforms or take a position on the bill. I merely observe that, on its face, CARA seems to be a textbook example of how “regular order” is supposed to work. Congress identifies a problem, holds hearings, and proposes a legislative solution, preferably reflecting a broad consensus within the committee of jurisdiction. CARA in fact is cosponsored by every member of the Committee on House Administration. It also very bipartisan, with 14 Republicans and 20 Democrats listed as sponsors or co-sponsors.”

Congress and big data

Michael J. Gaynor, “Can big data predict which bills will pass Congress?” Washington Post Magazine:

“In 2013, Tim Hwang and his childhood buddies Jonathan Chen and Gerald Yao came up with what they believed was a better way. The result was a company called FiscalNote, which aims to use data to shed light on the hidden components that help a bill become a law. FiscalNote’s software crawls government websites to pull data from over 1.5 million active bills across Congress, 50 state legislatures and 9,000 city councils — and seeks to predict the likelihood of each of those bills passing.”

Jennifer Victor, “Use big data to explain politics rather than predict it,” Vox:

“When social scientists use big data to engage in analyses of this type, the primary goal is to explain rather than predict. Prediction is fun but may not allow us to understand the underlying causes of a phenomenon or outcome. This is where the dissatisfaction comes in. Using the data to focus on developing a clearer understanding of how the world works, how humans interact in it, and how these interactions produce outcomes, can provide enlightenment. Ultimately, this enlightenment can arm us with higher quality information than prediction alone.”

Beth Simone Noveck, “Congress is Broken. CrowdLaw Could Help Fix It,” Forbes:

“Around the world, there are already over two dozen examples of local legislatures and national parliaments turning to the internet to improve the legitimacy and effectiveness of the laws they make; we need to do the same here if we are to begin to fix congressional dysfunction.”


Mandated Electronic Logging Devices will Pump the Brakes on the Trucking Industry

The economy is finally emerging from a period of its slowest growth in several decades, and guess what’s about to put the brakes back on?

Following a frenzied, multi-pronged effort by independent over-the-road truckers to stave off the new federal requirement that drivers use electronic logging devices (ELDs) to calculate hours of service, the Federal Motor Carrier Safety Administration (FMCSA) has opened a comment period to consider a five-year exemption for small businesses at the request of the Owner Operator Independent Drivers Association (OOIDA).

This is an unusual issue because the rule forces compliance with laws designed to safeguard public safety that have been on the books for years. Smaller-scale truckers are saying they can’t run a business without chiseling on federal requirements. And what little information has been recently forced out into the open suggests they’re right.

In some alternate universe, truckers drive when they feel up to it, sleep when they get tired and spend more time with their families, which is good for everybody. In latter-day America, one-size-fits-all federal regulations say that when your allotted hours are up, the truck must be parked for 10 hours. One recent graduate of a trucking academy was parked in front of three diesel pumps, and was afraid to move his truck so that others could refuel because one minute over the driving time could mean points on his commercial license. Points on a commercial license lead to fines and trouble at work.

Moreover, if you are transporting honeybees to pollinate for almond-growers in California – as thousands of semis are doing every year – you can’t stop for any length of time or the bees will come out of their hives. And you certainly can’t stop and unload a rhino, tiger or giraffe at a rest stop. Aquaculture transporters, horse transporters, and other transporters of living cargo are all impacted in a way that Washington doesn’t seem to understand. Thanks to the requirements, drivers can’t subtract even 10 minutes from a rest period and add it to the time it takes to get to the destination.

When court challenges to the rules failed and congressional relief got bogged down, the transportation world’s own version of the “woodwork” effect manifested, exposing a widespread work-around effort to keep the economy going. I am informed by a west-coast independent trucker taking a load of prison doors to North Dakota that he was instructed how to finagle a logbook as part of the training given at his trucking school. He also tells me that a company in his area just received 17 trucks back from drivers who have walked away from their 12-percent loans out of necessity because strict enforcement of the rules means one load a week instead of two, and they just can’t scratch out a living anymore.

As these effects ripple through the economy, shipping rates will rise more than they have already, the last thing the country needs as economic growth shows strength that even a Nobel-winning economist swore we would never see again.

Hence the additional comment period. Many of the comments indicate that, ironically, strict enforcement of hours-of-service rules does not enhance safety. When the anecdotal evidence is compiled, it appears that newly-credentialed drivers rushing to make the terminal by the expiration of their driving time are the ones mostly in the ditch nowadays.

Swift Transportation – which merged with Knight Transportation to form a $5 billion trucking operation that is now the third-largest in the American industry – has experienced 2,256 crashes in the last two years while using the mandated ELDs. According to information no longer available to the public on the FMCSA website but generated for litigation by the Truck Accident Attorneys Roundtable, this is an increase of 71 percent in fatal accidents compared with the 24-month period prior to 2012, before ELDs had been installed. Over the same period, injury accidents increased approximately 52 percent and overall trucking accidents increased approximately 54 percent.

There are other dislocations associated with the government cracking down on hours driven and mandating rest periods. A few of them are detailed in a letter Indiana Attorney General Curtis T. Hill, Jr. wrote to the FMCSA’s general counsel on behalf of the estimated 200,000 truckers residing in his state. (It is actually alarming that other state AGs have not taken up the cause of an industry that accounts for nearly one out of every 14 jobs in Indiana and 3.5 million jobs nationally.) His concerns are technical, having to do with the lack of oversight on self-certification by manufacturers of the devices; the inability to verify that the device is actually compliant with complex technical requirements; the costs associated, which “have the potential to put some carriers out of business”; and the short time (eight days) allowable for replacing a noncompliant device.

The FMCSA granted a 90-day waiver from the Dec. 18, 2017 implementation date for agricultural transportation, but at least 31 different organizations with agricultural interests have written the Department of Transportation and the FMCSA to articulate concerns and ask for a reasonable solution to the particular challenges of hauling fish, pets, wildlife, horses, honeybees, cattle, pigs and poultry. Approximately 250,000 livestock-haulers have to be not only qualified truckers, but also stockmen responsible for hauling animals with the least amount of stress, requiring additional training in animal welfare.

There are no good outcomes if animals must be parked for the 10-hour period following the expiration of hours-of-service. Doubling up on drivers is a challenge when drivers are already in short supply nationally and many are reportedly quitting over government regulation.

Moreover, the waiver only applies to equipment with sleeping quarters. Most of the hauling of horses, for instance, is done with trucks that do not have expensive sleeping berths; typically, horse-transporters who have sleeping space have it in their trailers. Most owners who show horses will therefore fall outside of the exemptions as currently written, and recent news articles have noted expanded enforcement of the underlying rule.

This is a fascinating example of government policy that has been in effect for years, rationalized on the basis of public safety but widely disregarded in order to do business in the real world. Having already suffered a large loss of revenue thanks to mandated inefficiencies in delivery, the major trucking companies are supporting the ELD mandate because it threatens to put many smaller competitors out of business. Rates are already going up, and pricing will turn very aggressive as smaller firms face an uneconomic landscape to continue serving their markets.

Delaying the impact, looking at further exemptions and measuring actual safety performance against projections would be a good first step. In the same sense that dynamic budget-scoring is now state-of-the-art over simple manipulation of tax rates, highway safety rules should be data-driven but informed by considering what people are actually doing to move freight. Mega-trucking companies willing to take a regulatory hit to put the independent owner-operators out of business should be thoroughly investigated. The unique needs of the trucking of live animals and insects also bear further investigation. Setting up a commission to review the success of the hours-of-service mandate in assuring public safety would allow the necessary public input and consideration of several years of data on file. If it turns out that new enforcement rules are causing too many drivers to barrel down the highway or to take turns at a higher speed than they otherwise would, a do-over is warranted.

It is obvious that hours-of-service is only one determinant of successful shipping. Truck driver is the most common job in my home state and in many others. We possess every motivation to sort this out on the best terms for them, other drivers on the highways, the shippers and the American economy.





Are Electric Vehicles a Threat to the Texas Electrical Grid?

Electric vehicles (EVs) can be a strangely polarizing topic among conservatives. Many conservatives remain skeptical of EV technologies and dislike the fact that they receive government subsidies. At R Street, we of course share their opposition to technology-specific subsidies, including subsidies for EVs. In some cases, however, dislike of subsidies for EVs has blurred into dislike of EVs themselves, even where that means rejecting the logic of market principles.

As an example, consider a recent study by Wood Mackenzie that looked at (among other things) the effect of EVs on the Texas electrical grid. The study found that 60,000 EVs charging simultaneously would draw 70 gigawatts of power, equal to the current peak electrical demand for the ERCOT* region of Texas. Since 60,000 vehicles would represent only around a quarter of a percent of Texas’ current 24 million-vehicle fleet, some have taken the study to mean that large-scale EV deployment is incompatible with a stable grid.

That conclusion, however, is mistaken. For one thing, the Wood Mackenzie study assumes each EV’s 100-kilowatt-hour battery can be fully charged in five minutes. Currently, a battery that size takes significantly longer to charge. And while average charging time is expected to decrease in the coming years, the less time it takes an EV to charge, the less likely it is that all EVs will be charging at the same time. By analogy, if every owner of a gasoline vehicle tried to refuel at the same time, the result would be massive gas lines and shortages. Yet in reality, this typically doesn’t happen because markets encourage more rational fueling patterns. In the same way, electricity markets send pricing signals that discourage charging during times of high system stress (high prices) and shift charging to lower-demand periods (low prices). For example, EVs tend to be charged at night, when there is currently plenty of surplus electrical supply.
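To see where a figure on that order comes from, here is a rough back-of-the-envelope sketch. The 100-kilowatt-hour battery and five-minute window come from the study description above; the variable names and the per-vehicle arithmetic are mine.

# Rough sketch of the simultaneous-charging arithmetic described above.
# Assumes a 100-kWh battery filled in five minutes, per the study's premise.
battery_kwh = 100          # energy per vehicle (kWh)
charge_hours = 5 / 60      # assumed five-minute charge window
vehicles = 60_000          # EVs charging at the same instant

power_per_vehicle_kw = battery_kwh / charge_hours       # 1,200 kW per vehicle
total_gw = power_per_vehicle_kw * vehicles / 1_000_000  # about 72 GW

print(f"{power_per_vehicle_kw:,.0f} kW per vehicle, {total_gw:.0f} GW in total")

Stretch the same charging energy over several hours instead of five minutes and the aggregate draw falls from roughly 70 gigawatts to a few gigawatts, which is the point made above.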

More importantly, assuming that a growth in EVs would imperil the electric grid overlooks the dynamic nature of the market. In Texas, electricity is deregulated, meaning that decisions about how much generation to build are made by the private sector. If demand for electricity is expected to increase due to EVs, generators will respond by building additional capacity to meet that demand. Markets have proven themselves quite capable of dealing with substantial changes in demand and generation mix, and there is no reason to think that increased demand from EVs would be radically different.

Nor does the fact that EVs receive government subsidies (as bad as they are) fundamentally change this equation. Subsidies for EVs mean that the effective consumer price of EVs is lower, which results in more purchases than would occur without the subsidy (although, given the high price-point of many EVs, the effect here is less than you might think). But a lower price for EVs could just as easily come through some private-sector breakthrough. If markets are capable of integrating demand from lower-priced EVs, then they can do so regardless of whether those lower prices are driven by technological improvements or by subsidies.

While opposing subsidies is good, we should not let this distract us from the adaptability of markets overall. Texas has a robust, competitive electricity system, and as long as this system is maintained, it is more than capable of dealing with growth in electric demand from EVs.

*ERCOT (the Electric Reliability Council of Texas) manages the grid for the Texas Interconnection, which represents approximately 85 percent of the state’s total electric load.


SOTN2018 – James Bessen and Charles Duan say “Poppycock!” to Techno-Pessimists

At State of the Net 2018, Charles Duan and James Bessen discuss the future of the labor markets in a world of artificial intelligence and automation on the Tech Policy Grind podcast.


EPA Enters New Phase over WOTUS; Enviro Markets are a Better Solution to the Problem


The Environmental Protection Agency (EPA) and the Army Corps of Engineers agreed on Jan. 31 to delay a rule regarding the Clean Water Act for two years as they first repeal and then replace it.

Cue the lawyers!

Ten states – led by New York, Massachusetts and California – are now suing the EPA in an attempt to get the agency to stop delaying a 2015 rule defining “waters of the United States,” or WOTUS, which supporters say was designed to limit pollution in about 60 percent of the country’s waterbodies.

It turns out that farmers and ranchers across the country were up in arms about the rule. A problem that plagued much of the Obama administration’s agenda also beset the Obama-era WOTUS rule: There was no obvious limiting principle to the government’s behavior. Government regulators could expand the rule’s jurisdiction any time they wanted without judicial intervention, and certain readings of the rule suggested that federal control could expand to cover virtually all the water, and much of the land, throughout the country.

EPA Administrator Scott Pruitt put a stop to the Obama-era rule after attorneys general from states like Mississippi, Texas and Louisiana (hello, red states!) argued it would be incorrectly applied to places far away from the “navigable waters” over which the feds have legal influence.

So in myriad ways, the environmental litigation in the Trump era is a bizarre mirror-image of the Obama era, with blue states boasting strong environmental groups suing the federal government to take action – the opposite of what occurred during the second half of the Obama administration.

It’s easy to forget that states like Oklahoma – with current EPA Administrator Pruitt as its then-attorney general – sued Obama’s EPA over a dozen times during the fight over the Clean Power Plan and mercury rules.

But if lawyers’ fees are the only good to come out of this political fight, why not ask if there’s a better way to actually protect the environment from farm-related pollution?

A better method would be to build up environmental markets – the kind that incentivize limiting pollution at the source rather than penalizing polluters later in court. A private marketplace for soil conservation, clean water and wildlife habitat already exists to the tune of at least $3 billion, and that amount could grow if properly supported by legislation.

As it happens, a new study co-written by R Street and the Center for American Progress (CAP) highlights how private investment could be leveraged for conservation. The report recommends a number of bipartisan, market-based approaches to increase the economic benefits of ecosystem conservation – all without government involvement!

What the WOTUS fight is actually over is how best to limit the use and abuse of inputs, like chemical fertilizers, that can turn into water pollutants. The status quo is for states and environmental plaintiffs to gum up the federal court system for years. A better answer is to let environmental markets mature so that environmental values can be priced. It’s a superior solution to endless lawsuits, that’s for sure.


Image credit: diy13

Ohio Maintains Its Standing


“We’re ahead of everybody,” Gov. John Kasich told me just after I attended the latest legislative briefing on self-driving vehicles. His effort to convert the Rust Belt state into a hotbed of investment in connected technology recently spawned an executive order creating “Drive Ohio,” the latest initiative to keep Ohio’s transportation profile state-of-the-art.

History records several pretenders to the title of motor car inventor among the Americans, Germans, Belgians, Swiss, Austrians and Frenchmen who designed pieces of the first self-powered vehicles. It is relatively unchallenged, however, that Ohio is credited with the first automobile accident, in 1891. New connected and driverless technology promises to atone for starting that trend by substantially lowering crash rates and by bringing other transformational changes to the landscape and lifestyle of the early 21st century.

The Ohio governor and the state’s legislative branch see a huge opportunity to convert the state’s second-place ranking among the 50 states for auto-related businesses, along with its first-place ranking for engine- and transmission-building, into a top seed in the race to bring in billions of technology investment dollars. Drive Ohio is but the latest thrust, and likely represents an inflection point in the effort to allow all Ohio public and private partners to offer expedited service to out-of-state collaborators.

Launched by executive order on Jan. 18, Drive Ohio combines four existing projects that total 164 miles of test-driving, including a peak-traffic shoulder lane to Columbus’s John Glenn International Airport and a stretch of I-90 along the top of the state (chosen specifically for its lake-effect snow). By bringing together developers of advanced mobility technology and those responsible for building infrastructure, Drive Ohio provides a one-stop government shop for companies that want to partner with any existing project.

Even prior to this executive order, the state has been busy teeing up an effort to attract production of the next generation of vehicles through both funding and facilities. The U.S. Department of Transportation awarded Columbus a large “Smart Cities Challenge” grant. Driverless vehicles – including semis using technology developed by Otto, a connected-vehicle trucking division of Uber – have been tested on a 35-mile strip of four-lane highway from the edge of Columbus to East Liberty for over a year. The Transportation Research Center (TRC); the Ohio State University; and Easton Town Center – a world-class shopping mall that hosts 25 million visitors annually – are all preparing to make history. The 241-mile Ohio Turnpike is already fitted end-to-end with fiber optic cable, and sensors will be implanted along a 60-mile stretch as well. It alone hosts a billion traveled miles and 11 million commercial truck trips annually.

East Liberty is the nearest town to the TRC. The TRC is the largest multi-user automotive proving-ground for drivers and vehicles in North America – a “gearhead nirvana,” according to The Columbus Dispatch. Nestled in a rural setting of 4,500 acres adjacent to the Honda Motor Company manufacturing facility and a plethora of automotive parts-suppliers, TRC is also the testing facility used by the National Highway Traffic Safety Administration’s Vehicle Research and Test Center, the federal vehicle test-laboratory for the nation.

A year ago, the TRC announced a $45 million, 540-acre expansion with funding from Ohio State University, JobsOhio and the state to create the Smart Mobility Advanced Research and Test (SMART) Center. This new facility will simulate different network connectivity and other infrastructure, varying climatic conditions indoors and outdoors, crashes and several levels of traffic conditions. It will have a 12-lane intersection and a test platform wider than 50 highway lanes.

Since Michigan and Pennsylvania have joined Ohio to form the Smart Belt Coalition, two more of the nation’s designated top automotive proving-grounds will be brought on board: the City of Pittsburgh and the Thomas D. Larson Pennsylvania Transportation Institute, and Michigan’s American Center for Mobility (ACM) at Willow Run. Carnegie-Mellon University will join the academic powerhouses at Ohio State and the University of Michigan. The transportation departments and turnpike commissions of all three states will be added to the effort.

Even though driving a truck is the most common job in Ohio, as it is in many states, trucking trade associations project the driver shortage to reach more than 174,000 nationally by 2026 if current trends hold. Nevertheless, at the legislative committee hearing, it was perhaps inevitable that one lawmaker would declare that, as any legislation moves forward, she would add a provision requiring a driver to be present in every truck, no matter what level of sophisticated autonomy engineers can develop.

This should provide an interesting debate on public policy as the state government continues to profess its desire to be a critical part of the nation’s transportation future. Also coming soon are more conversations about vehicle insurance and cybersecurity of the connected software. I am guessing that none of these collateral issues will be a deal-breaker for this new world of transportation.




CLOUD Act highlights need to modernize cross-border data framework


*This post was co-authored by Charles Duan, Associate Director of Technology and Innovation Policy at R Street.

On Wednesday, Sens. Orrin Hatch, R-Utah; Christopher Coons, D-Del.; Lindsey Graham, R-S.C.; and Sheldon Whitehouse, D-R.I., introduced the Clarifying Lawful Overseas Use of Data (CLOUD) Act, which deals with law enforcement’s access to communications information stored in the cloud. We think the CLOUD Act is an important first step to dealing with the difficult problem of cloud data stored overseas and encourage policymakers to use the bill as a key component in reforming the legal procedures for law-enforcement access to online communications.

This bill arises in the context of United States v. Microsoft, currently pending at the Supreme Court. That case will consider whether U.S. law enforcement can legally obtain emails stored on Microsoft’s cloud email service when those emails are physically stored on servers in a foreign country. The underlying issues are complex yet important to every American who uses cloud services. R Street filed an amicus brief with the Supreme Court to emphasize these complexities.

The depth and difficulty of the issue also highlights the need for a legislative solution. Indeed, as global demands for cross-border data increase, frustrations with the status quo will continue to worsen. If left unaddressed, these frustrations will push nations toward undesirable policy alternatives, including data-localization and stricter controls on the internet. Allowing the Supreme Court to be the final arbiter on cloud data-access would force a choice between two extremes, neither of which is desirable. It is incumbent on Congress to think prospectively and craft a path forward that accounts for the myriad technological and international legal ramifications of cloud data storage.

If enacted, the CLOUD Act would establish a framework for U.S. law enforcement to obtain emails stored on foreign cloud servers, as in the Microsoft case. The government is expected to withdraw the case if the bill is enacted. The framework largely mirrors the International Communications Privacy Act (ICPA), which R Street previously supported.

At the same time, the larger framework for law enforcement’s access to electronic communications is decades old and widely considered outdated. The CLOUD Act currently is limited to the extraterritoriality issues discussed above, and it neglects to address whether warrants or other showings of cause ought to be required as part of the procedure for accessing cloud-stored data.

R Street has been supportive of reforming the Electronic Communications Privacy Act (ECPA) – the law governing this larger communications framework – to extend the warrant requirement to all content data, not just those less than 180 days old. There is widespread support for such reform, with the Email Privacy Act having passed the House last year by voice vote. The right way forward, in our view, is to use the CLOUD Act not as a complete solution, but rather as a component of these broader efforts to bring electronic communications law into the 21st century.


A state-owned bank for New Jersey?


New Jersey Gov. Phil Murphy and some supportive state legislators are promoting the idea of establishing a bank owned by the state, which would hold the state’s deposits and make politically popular loans. Is this controversial proposal a good idea? It’s certainly not a new one.

In the 19th century, banks with majority ownership by the states were set up by Alabama, Georgia, Illinois, Indiana, Kentucky, Missouri, South Carolina, Tennessee, Vermont and Virginia. None of these has survived. An instructive case is the State Bank of Illinois, which “became entangled in public improvement schemes” and went bankrupt in 1842.

“In nearly all states” before the Civil War, report John Thom Holdsworth and Davis Rich Dewey, “provision was made in the charters requiring or permitting the State to subscribe for a portion of the stock of banks when organized.” Among the reasons were that the state “should share in the large profits” which were expected, and “because ownership would place the state in the light of a favored customer when it desired to borrow,” Dewey and Robert Emmert Chaddock note in their State Banking Before the Civil War.

Ay, there’s the rub. Such ideas led a number of states to sell bonds and invest the proceeds in bank stock, hoping the dividends on the stock would cover the interest on the debt. “Every new slave state in the South from Florida to Arkansas established one or more banks and supplied all or nearly all of their capital by a sale of state bonds.”

There is one (and only one) state-owned bank operating today, the Bank of North Dakota. The bank is owned 100 percent by the state and its governing commission is chaired by the governor of the state. Its deposits are not insured by the Federal Deposit Insurance Corporation, but are instead guaranteed by the State of North Dakota, which has bond ratings of AA+/Aa1. (In contrast, New Jersey’s bond ratings are A-/A3.) The bank has total assets of about $7 billion and is thus not a large bank. But it has strong capital and is profitable, the profits helped by being exempt from federal and state income taxes. The Bank of North Dakota was founded in 1919, so is almost a century old.

A less hopeful analogue is the Government Development Bank for Puerto Rico, owned by the Commonwealth of Puerto Rico and designed to operate as an inherent part of the government. It was established in 1942 under the leadership of Rexford Tugwell, the Franklin Roosevelt-appointed governor of the island, an ardent believer in central planning. The Government Development Bank, which had total assets of about $10 billion in 2014, has been publicly determined to be insolvent and will impose large losses on its creditors.

According to a report from the Federal Reserve Bank of Minneapolis, Alexander Hamilton, the father of the federally chartered and 20 percent-government/80 percent privately owned First Bank of the United States, “concluded that a national bank must be shielded from political interference: ‘To attach full confidence to an institution of this nature, it appears to be an essential ingredient in its structure that it shall be under a private not a public direction under the guidance of individual interest, not public policy.’” If this principle applies as well to state-owned banks, how is such a bank to devote itself to politically favored loans?

Would a Bank of New Jersey be more likely to resemble the Bank of North Dakota or the Government Development Bank for Puerto Rico? Or perhaps the State Bank of Illinois?

Image by sevenMaps7


The most important ‘shared mobility principle’ is freedom


To bring our collective visions of the future to fruition requires public policies that are humble enough to acknowledge just how much we can’t know about innovations and technologies that have not yet arrived.

That’s why it’s frustrating to see a collection of the most innovative and forward-thinking firms in the world—including Didi Chuxing, Lyft, Ola Cabs, Uber Technologies, Via Transportation and Zipcar—come together to support “shared mobility principles for livable cities” that would foreclose all sorts of opportunities for economic and technological progress.

Some of the 10 principles—which look to lay down literal rules of the road for autonomous vehicles and other emerging transportation technologies while upholding goals like lower emissions and greater data-sharing—are totally unobjectionable. Number four declares that signatories will “engage with stakeholders” when they “may feel direct impacts on their lives.” That’s good news, because the universe of “stakeholders” who would be affected by some of the other principles includes just about everyone.

No. 10 on the list is especially problematic. It proposes that autonomous vehicles in urban areas “should be operated only in shared fleets.” Among the sundry benefits the principles document proposes would flow from a shared fleet model are:

Shared fleets can provide more affordable access to all, maximize public safety and emissions benefits, ensure that maintenance and software upgrades are managed by professionals, and actualize the promise of reductions in vehicles, parking, and congestion, in line with broader policy trends to reduce the use of personal cars in dense urban areas.

All of those things may prove true, and fleet ownership may be the model that makes the most economic sense for many urban consumers. But the only way to test whether any of it is true is through the free choices of consumers and manufacturers, not command-and-control centralized planning. Prescribing a one-size-fits-all ownership model for everyone’s varied lifestyles and consumer preferences is the antithesis of the American way.

If it were actually true that what city dwellers really need is less freedom to make transportation choices that best fit their own needs and preferences, most of these firms—especially the transportation network companies—wouldn’t even exist today. In the half-decade since the TNCs first emerged, they have empowered the disadvantaged to reintegrate into society, helped cut the number of DUIs and provided meaningful employment opportunities to those with the fewest options. All of those benefits were, until recently, unimagined and unknowable. Now, we take them for granted.

But that’s why there’s a terrible irony in the fact that companies that came to prominence, in part, precisely because they didn’t have fleets to service would now seek to enshrine a shared fleet model as the only option.

Policies that hinder competition are bad both for innovation and for the city dwellers of tomorrow. Therefore, as a stakeholder in this debate, we at R Street propose a principle of mobility of our own:

  1. That free people be allowed to move freely and in their transportation mode of choice, while paying for any primary and secondary costs of their actions on others.

Achieving “sustainable, inclusive, prosperous, and resilient cities” of the future demands we do no less.

Image by posteriori

Power in the House: The House Freedom Caucus and Intraparty Organizations

In this Legislative Branch Capacity Working Group session, Professor Matthew Green of Catholic University discusses the House Freedom Caucus and the political and policy influence of intraparty organizations and coalitions in the House of Representatives. Prof. Green is the author of a book on the history of the speakership and a forthcoming volume on Newt Gingrich.


Hezbollah Probe Will Finally Receive the Support It Deserves

Attorney General Jeff Sessions recently announced that he is establishing a task force to investigate and combat the illicit activities of the Iran-backed terrorist organization, Hezbollah.

The task force was born of a Politico report finding that the Obama administration backed off of investigating Hezbollah out of fear of imperiling the Iran nuclear deal.

Because Hezbollah is capable of acting around the globe, the United States must take the threat seriously not only to protect our interests abroad, but to prevent Hezbollah from acting against us here at home. This task force is an important step toward achieving that goal.

The task force’s origin story is somewhat political in nature. Therefore, the team will have to be hyper-aware of any appearance of partisanship. It should not get caught up in potential political gains for the administration; instead, it should focus on the primary objective of protecting American interests. The goal is, and should be, combatting Hezbollah’s ability to participate in and gain from the drug trade and terrorist activities.

Hezbollah activities in Latin America are not a guarded secret. For instance, Navy Admiral Kurt W. Tidd, commander of Southern Command for the U.S. military, testified before the Senate Armed Services Committee in April 2017 that “Hezbollah has been present in small pockets scattered throughout the region for decades. They’ve been actively engaged largely in criminal activities to raise funds to support the terrorist activities of Hezbollah in other parts of the world.”

Yet even with reports of Hezbollah participating in money laundering and drug trafficking in such close proximity to the United States, stopping the group’s illicit activities has not been a major priority as compared to combatting other terrorist groups, most notably ISIS. The creation of this task force will ensure that the Justice Department gives appropriate attention to the Hezbollah threat.

Experts say that Hezbollah not only threatens U.S. efforts to stem the drug trade in Latin America, but poses a growing threat to U.S. national security interests in the Middle East. For instance, in addition to being militantly anti-American and anti-Israeli, Hezbollah actively supports Syrian President Assad’s regime. Last year, the Trump administration chose to strike Syria’s military in response to Assad’s use of chemical weapons against his own people and remains critical of the Assad regime.

The Trump administration has also initiated a public campaign to crack down on Hezbollah as part of a larger effort to counter Iran. The goal of the campaign is to “expose” Hezbollah officials for their illicit activities. Having become a leading political party in Lebanon, Hezbollah is now taking great strides to further establish its legitimacy. The administration’s campaign attempts to highlight not only for the United States and Lebanon, but for the whole world, that legitimating Hezbollah is unacceptable while it participates in terrorism, money laundering and the drug trade. To support the campaign, the administration is offering $10 million in cash reward bounties for aiding in the arrest of leading Hezbollah figures.

Now that the administration has renewed its focus on Hezbollah, Sessions’ decision to launch a task force may also have repercussions beyond the Justice Department. With such a public focus on combatting the Hezbollah threat, we should expect the Departments of Treasury, State, and Defense to similarly renew their focus on the terrorist organization. Additionally, this task force and the investigations it conducts should put Congress in a better position to develop ways to help combat terrorist financing efforts in the future.

It is in America’s interest to combat Hezbollah’s illicit activities. This task force has the potential to bring much-needed focus to that job.



Florida should limit the prosecution of children as adults


Florida’s lawmakers are currently considering two pieces of legislation – Senate Bill 936 and House Bill 509 – which would create guidelines to limit adult prosecution of children and keep more kids in the juvenile justice system.

In Florida, there is no minimum age for transferring a child to adult court. Children as young as 12 years old have been tried as adults. The crimes do not have to be violent; in fact, most children tried in adult court are there for non-violent offenses. Thanks to Florida’s direct-file statute, prosecutors have complete control over which children stay in juvenile court and which are transferred to the adult system.

Once transferred to adult court, children face harsh consequences. The adult system is ill-suited to serve young people. Science shows, as the Supreme Court has articulated time and time again, that kids are different. Young people’s brains, as well as their decision-making capabilities, are still developing. Yet in the adult system, youth are unlikely to receive educational opportunities to reach their full potential.

Charging children as adults is bad policy. Studies demonstrate that children who are in the adult system reoffend more quickly and go on to commit more serious crimes. In contrast, most youth in the juvenile justice system will never offend again. By sending children to the adult system, Floridians are actually creating more crime.

This may seem counterintuitive at first glance – after all, shouldn’t harsher penalties in the adult system deter children from committing crime?

In reality, juveniles who receive long prison sentences when tried as adults are not “scared straight,” they’re scared to death. Children in the adult system are at vastly higher risk of violence, sexual abuse and suicide than those in the juvenile system. Those who survive often become hardened criminals more likely to reoffend.

The bills under consideration would reduce adult prosecution of children by prohibiting children younger than 14 from being prosecuted as adults, eliminating direct file of 14- and 15-year-olds, and limiting the circumstances in which 16- and 17-year-olds can be prosecuted as adults. Judicial oversight is an important component of both bills.

Children who commit crimes should be held accountable, but the best way to do so – for them and for society – is to allow them to remain in the juvenile system. On Jan. 22, Florida’s Senate Criminal Justice Committee will consider SB 936. Legislators should seize the opportunity to reform direct file and lay the groundwork for a better juvenile justice policy.


R Sheet: GOP Tax Reform Impact on Booze


This past December, Congress passed and President Donald Trump signed the “Tax Cuts and Jobs Act,” which significantly reduced federal taxes across the board. While the legislation’s impact on corporations and individuals generally was the subject of substantial analysis, its overhaul of federal alcohol taxes has received much less attention.

However, the reform bill incorporated a version of the Craft Beverage Modernization and Tax Reform Act, which reduced federal excise taxes for all kinds of alcoholic beverages, from beer to distilled spirits and wine. This marks the first decrease in federal wine excise taxes in over 80 years, and the first in distilled spirits excise taxes since the Civil War.

Estimates have predicted that the tax reduction will save the alcohol industry up to $4.2 billion over the next two years. While substantial, this tax savings may also prove transient, as the alcohol tax reductions are slated to expire in two years, on December 31, 2019. It’s also worth keeping in mind that alcohol producers remain subject to state-level excise taxes—which can vary significantly from state to state—as well as state-imposed markups in control states.

This R Sheet summarizes the various tax treatments that alcoholic beverages will receive as a result of the Tax Cuts and Jobs Act.

Read full R Sheet here: 2018 R Sheet 2 GOP Tax Reform Impact on Booze

Is Science or Policy Preference Leading the Way? Comments on the FDA Medicinal Nicotine Webcast


In the early afternoon of Dec. 12, 2017, I sat in on the Duke/Margolis Center for Health Policy’s webcast, “FDA’s New Regulatory Framework for Tobacco and Nicotine: The Role of Medicinal Nicotine.” This two-hour webcast featured Dr. Scott Gottlieb, the new Food and Drug Administration (FDA) Commissioner, as keynote speaker, along with a distinguished panel of six individuals representing the FDA, Tobacco-Free Kids, GlaxoSmithKline Consumer Healthcare and the academic community. The panelists considered the role of medicinal nicotine in future tobacco control efforts, given the FDA’s commitment to a new nicotine-based policy and the reduction of cigarette nicotine to non-addictive levels.

My main interest in the webcast was to gain insight into the intentions of leading American tobacco-control advocates. Historically, policy has been guided by a preconceived notion that abstinence is the only appropriate course of action – and research has generally been aimed towards that end goal. I wanted to find out whether this administration’s tobacco control policy is based on science and public health principles or on fixed policy guidelines that dictate what research is to be considered or disregarded by federal authorities – and why.

The FDA has recently disregarded evidence demonstrating the efficacy of e-cigarettes, related vapor products (Electronic Nicotine Delivery Systems, or “ENDS”) and smokeless tobacco for both risk reduction and smoking cessation. The agency also believes that ENDS are attracting large numbers of young non-smokers to nicotine despite substantial evidence to the contrary.

The reality is that cigarettes are, by far, the most hazardous and most addictive tobacco-related products on the market. Though federal agencies recognize the distinctive risk posed by combustible cigarettes, they refuse to endorse ENDS and smokeless products for tobacco harm reduction, believing that such endorsement would recruit large numbers of non-smokers to tobacco use.

We can all agree that it is best never to use tobacco products or, once one has started, to quit. Unfortunately, most smokers cannot quit on their own or with the current medicinal nicotine products. For them, substituting cigarettes with a lower-risk product that satisfies their urge to smoke – i.e., tobacco harm reduction – would appear to be the best option.

However, several barriers to tobacco harm reduction efforts exist. One such barrier is the insistence that smoking is a disease. True, addiction to cigarettes is a disease for some smokers. For all, however, it is a behavior that can be addressed by encouraging simple substitution with lower-risk products that can satisfy the urge to smoke.

The second major barrier includes the fact that federally-funded research has mostly been limited to potential hazards of both tobacco and nicotine use. Combined with the government’s dismissal of data on the personal and public health benefits of tobacco harm reduction, this barrier results in research that starts with hypotheses seeking to confirm only one potential outcome of e-cigarettes and other reduced-risk products.

But perhaps the most important barrier to overcome is the fear of the gateway effect. The limited research that has been done in support of this hypothesis clearly shows that e-cigarettes are overwhelmingly used by current and former smokers, not teens taking up smoking for the first time. This being the case, the FDA’s new top priority should be setting aside the conceptual barriers noted above and determining whether e-cigarettes act as a gateway either towards or away from combustible cigarette use.

Overall, my attendance at the webcast and my reading of this literature have left me with the impression that, despite hype to the contrary, science has little to do with tobacco control policy as it relates to tobacco harm reduction. The FDA’s policy is instead based entirely on traditional thinking within the tobacco control community, with so-called “science” limited to promoting predetermined policy guidelines.

Image credit: LezinAV

Dow 36,000?


The Dow Jones industrial average today surged to more than 26,000. Observing this historic stock price boom should make us recall a book and a forecast published in 1999, at the top of the tech-stock bubble: Dow 36,000, by James Glassman and Kevin Hassett. As wildly over-optimistic as this book was in its day, does its forecast look less wild now, 18 years later?

How high can a price go? Higher than you thought. (Also lower, of course.) A price has no physical reality, but is the interaction of human expectations, strategies and emotions, naturally including periodic irrational exuberance.

Can you remember now how you felt about the stock market just two years ago? On Jan. 19, 2016, the DJIA closed at a little more than 16,000. On that day, what odds would you have set on its closing more than 60 percent higher than that by today, as it has in fact done? Not high odds, I’ll bet.

It would take another 38 percent increase from the current level to get to 36,000. Your odds on that happening, say, in the next two to three years, thoughtful reader?

Speaking of prices, used copies of Dow 36,000 are available from Amazon for as little as $1.99 (it was originally issued at $25). If the DJIA does continue its amazing ascent, I predict that the book’s secondary market price will rise accordingly.

Dow 36,000 suggested its prediction might be fulfilled in 10 years. At the end of 1999, the DJIA was 11,497. Ten years later, at the end of 2009, it was 10,428, one thousand points farther from 36,000 than it started off. In the meantime, it had seen the March 2009 low of 6,507. From that low, however, the DJIA has now covered 20,000 of the 30,000 points needed for the Dow 36,000 forecast one day to come true.

Or perhaps it hasn’t. A 1999 dollar is worth a lot more than a current dollar, needless to say. So let’s adjust Dow 36,000 for inflation.

In January 2018 dollars, the book would need to be retitled: Dow 53,000.
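As a rough check on that retitling, here is the adjustment sketched out. The CPI-U index levels below are approximate values I am supplying for illustration, not figures from the original post.

# Rough inflation adjustment; the index levels are approximate CPI-U values
# (December 1999 and January 2018) supplied for illustration.
dow_target_1999 = 36_000
cpi_dec_1999 = 168.3
cpi_jan_2018 = 247.9

inflation_factor = cpi_jan_2018 / cpi_dec_1999          # about 1.47
dow_target_2018 = dow_target_1999 * inflation_factor    # about 53,000

print(f"Dow 36,000 in January 2018 dollars is roughly {dow_target_2018:,.0f}")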

Image by Pavel Ignatov


There’s Oil in Them There … Ocean, If We Decide We Want It


Well, that was fast. Less than a week after the Trump administration dramatically reversed direction and proposed opening up roughly 90 percent of the U.S. coastline to offshore drilling out to the 200-mile limit, Florida quickly got an exception.

Interior Secretary Ryan Zinke had a quick sit-down with Florida’s Republican Governor Rick Scott in the Tallahassee Airport on Jan. 9 and was speedily convinced to remove the Sunshine State from the government proposal.

Cue the stampede. Within a day, five Democratic governors from New York, Oregon, North Carolina, Delaware and Washington state all requested – via Twitter, of course – that their states be exempted as well.

As with most things during the Trump era, the policy gets lost in the political spectacle. The hot take here is that Zinke made the decision to boost Scott’s credibility on the issue as he prepares to run for the U.S. Senate against Democrat Bill Nelson, an ardent opponent of offshore drilling.

But beyond the hot take, two interesting questions remain.

One: How much hydrocarbon is under the U.S. offshore continental shelf?

Two: Does the United States need more oil and gas?

In terms of oil and gas availability, there is currently quite a bit – possibly much more than has been forecast. The Interior Department in 2016 estimated U.S. offshore oil reserves to be 90 billion barrels and natural gas reserves to be 327 trillion cubic feet. By comparison, the U.S. uses about 7 billion barrels of oil a year, and 90 billion barrels of oil at today’s prices is worth a cool $5.4 trillion. The natural gas would add another several trillion dollars in wealth.
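For a sense of scale, here is the back-of-the-envelope math. The $60-per-barrel price is my approximation of the crude price implied by the figures above, not a number from the Interior Department estimate.

# Back-of-the-envelope valuation of the estimated offshore oil reserves.
# The $60/barrel price is an assumed approximation of early-2018 crude prices.
offshore_oil_barrels = 90e9          # Interior Department 2016 estimate
price_per_barrel = 60                # assumed price, $/barrel
annual_us_consumption = 7e9          # barrels of oil used per year

gross_value_trillions = offshore_oil_barrels * price_per_barrel / 1e12   # 5.4
years_of_supply = offshore_oil_barrels / annual_us_consumption           # about 13

print(f"Roughly ${gross_value_trillions:.1f} trillion, or about {years_of_supply:.0f} years of U.S. oil use")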

But here’s the rub. Nobody really knows how much is there. The potential supplies could be much more or much less; there have been no advanced 3-D seismic surveys done in the last 30 years that would give a definitive answer.

The Obama administration claimed in 2010 to want to allow seismic surveys of the Atlantic offshore, but then successfully dragged its feet for years until cancelling the effort altogether in early 2017.

Many people in the environmental community would like to keep the knowledge of the underlying hydrocarbon secret. And, given the ambiguity concerning the societal benefits of drilling, politicians often err on the side of caution, favoring the existing environmental endowment over the unknown potential income from offshore development.

In terms of whether the U.S. needs the oil, the geopolitical answer is definitely “yes,” and the economic answer is “probably.”

From a simple strategic standpoint, the United States will need oil and especially natural gas resources 30 years hence, given the current trend toward electrifying the passenger car fleet. It would be nice to drive automated cars in 2047 on electricity derived from renewables and natural gas, not crude oil. Global demand for petrochemicals, which produce all of the plastic the world uses, isn’t going anywhere but up in the next three decades.

Climate change is also a serious concern, but the problem with cracking down on the U.S. oil and gas industry is the free-rider syndrome concerning climate emissions around the world. Russia and Saudi Arabia will simply fill in any lost production from North America. It’s also arguable that allowing U.S. exports of natural gas to Asia, Africa and Latin America would keep nations in those areas from using coal-fired generation to expand their electricity grids.

Given the four-decade period of economic dependence on oil importers from the Middle East, it’s not a stretch to argue that the United States would be in a better position, both financially and politically, if it was not highly dependent on a resource dominated by Saudi, Russian and Iranian interests. The cost of U.S. military presence in the Persian Gulf – which is expressly necessary to keep open the Strait of Hormuz, through which about 20 percent of the world’s oil passes each day – is estimated to be in the tens of billions of dollars each year.

As things stand, Zinke doesn’t have to exempt any additional states from the drilling plan, although he may choose to do so. From the period 2019 to 2024, the new plan envisions over 40 individual lease sales covering all tracts of the outer continental shelf (OCS), spanning almost the entire U.S. coastline. It would also include development off of nearly the entire coast of Alaska.

Interestingly, there is no revenue-sharing for offshore drilling, except for a small revenue-sharing program for Gulf Coast states. This means that, by law, the federal government gets 100 percent of the revenues from any hydrocarbon production along the Atlantic, Pacific or Arctic coasts. This 100-0 split has made no sense for decades, given how much of the oil-spill risk falls on individual coastal states.

States in the west like New Mexico, Utah and Wyoming get a 50-50 split in royalties from the oil produced on federal lands within state boundaries. It would be unwise for coastal states to agree to any offshore drilling unless a proper revenue-sharing bill was passed through Congress.

So, don’t expect drilling off the coast of Santa Monica or New York’s Long Island anytime soon. The government is required to carefully consider eight factors before it can decide where new oil wells may be drilled. These include local geology and ecology, national energy needs, local ocean use, and oil and gas companies’ interests.

The industry’s top priorities are the eastern Gulf of Mexico (which is why the Florida exemption is significant), the Atlantic Coast from Georgia to Virginia, and anywhere off the coast of Alaska.

Presumably, this program will play out over the rest of President Trump’s tenure and beyond, since five-year drilling plans take so long to come to fruition. A future Democratic administration would likely pull the plug on much of the current plan. What Trump and his minions are trying to do – it seems – is gain enough additional knowledge about the scale of the resource offshore to potentially change the political calculations of future production.

Stay tuned.


Image Credit: Kurt Adams


A Prosecutor’s Case for FOSTA


*Lars Trautman cowrote this post.

Responding to issues of online sex trafficking from the likes of, the Senate and House have each produced legislation to expand civil and criminal liability for online hosting companies: the Stop Enabling Sex Traffickers Act (SESTA) in the Senate, and the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA) in the House. Both purport to provide prosecutors with the tools to target bad actors. But which one provides the tools that will work?

The answer is FOSTA. While SESTA writes a new, potentially more expansive definition into the existing sex-trafficking law, FOSTA uses clear language to give prosecutors a new way to catch websites enabling sex traffickers. Perhaps this is why FOSTA has garnered the support of the organization representing many of the prosecutors charged with enforcing these crimes, the National Association of Assistant United States Attorneys.

Both bills change criminal sex trafficking laws in an effort to reach malfeasant website operators, but they do so in very different ways. SESTA tinkers with the existing criminal statute by defining “participation in a venture” of sex trafficking to mean knowingly “assist[ing], support[ing], or facilitat[ing]” an act of sex trafficking. SESTA’s added definition uses broad terms that have resisted uniform interpretation in the courts. This ambiguity exacerbates the very uncertainty the bill attempts to resolve.

By contrast, FOSTA creates a brand new offense criminalizing any commercial endeavor that acts with “the intent to promote or facilitate” illegal prostitution, with aggravated penalties if it involves five or more people, or is done with reckless disregard of the fact that it contributes to sex trafficking. This language represents a clear, albeit high, bar that conduct must hurdle to be found criminal.

That clear standard is good news both for prosecutors and the general public. Vague and ambiguous laws are difficult to prosecute properly; they make it tough for individuals to avoid potentially criminal behavior, rely too heavily on prosecutorial discretion to determine culpable actors, and ultimately force the courts to write details into the law. All of this fosters the kind of uncertainty that is anathema to conscientious prosecution. As such, FOSTA’s higher criminal intent standard is well worth the clarity of language that accompanies it.

Not only is FOSTA’s language cleaner, its potential impact on real cases is more readily apparent. Consider, for example, the recent case against, which hosted content consisting of sexual solicitations and thinly veiled prostitution. The government was forced to settle for an illegal prostitution conviction despite having alleged that the website advertised minors for prostitution. While the Stop Advertising Victims of Exploitation (SAVE) Act of 2015’s addition of “advertising” to the list of culpable actions may help prosecutors bridge the gap from illegal prostitution to sex trafficking in instances like this, the dearth of prosecutions using this new language suggests it may not be enough.

If that’s the case, it’s difficult to imagine how SESTA’s new definition of “participation in a venture” would help. On the other hand, it’s easy to see how FOSTA’s new aggravated offense could be triggered, thereby enabling prosecutors to pursue sex trafficking charges against websites like

In attempting to catch guilty parties escaping conviction under present law, Congress needs to decide whether prosecutors would be better served by a bigger trap or a more precise one. If Congress is truly trying to empower prosecutors, FOSTA’s targeted approach is clearly superior.


Lars Trautman is a senior fellow at the R Street Institute and a former state prosecutor. Arthur Rizer is the director of criminal justice and security policy at the R Street Institute and a former federal prosecutor.



Image Credit: ChameleonsEye

Alex Pollock speaks at the House Financial Services Committee’s Subcommittee on Monetary Policy and Trade

R Street Distinguished Senior Fellow Alex J. Pollock’s opening statement to the House Financial Services Committee’s Subcommittee on Monetary Policy and Trade, at their Jan. 10, 2018 hearing “A Further Examination of Federal Reserve Reform Proposals.”

Protecting the Internet Economy

Sasha Moss speaks on protecting the Internet economy at CES. American internet leadership is based on wise policy decisions, like not holding social media platforms liable for the actions of their users. Experts discuss the rights and responsibilities of internet companies regarding user-generated content.

Podcast: Meltdown and Spectre with Joe Unsworth of Gartner and will GDPR spark a Data War in 2018?

“What will 2018 bring in the field of cybersecurity? Senior Fellow, Paul Rosenzweig, plays Karnak and predicts developments for the next year. Some are obvious (there will be more security breaches). Others less so, like his suggestion that a data trade war may be just around the corner.”

Energy markets excel through bomb cyclone


Another frigid test produced another reliable display from competitive electricity markets. Whether it’s a polar vortex or the latest bombogenesis—unquestionably the coolest word of the new year—markets are at their best when demand is high and supply is tight. In fact, the “bomb cyclone” underscores the robustness of markets without subsidies, despite heavy retirements of legacy power plants.

It’s important to note that no one or two power resources saved the day. Rather, a broad portfolio of resources combined to keep the lights on, as with all severe weather events. These portfolios have had extensive turnover since the 2014 polar vortex, and yet the lights stayed on thanks to voluntary, low-cost investments from the private sector. This makes the case for empowering markets, not subverting them through interventions to prop up politically favored resources.

Nevertheless, the cold has reheated degenerative arguments over fuel types. This is political banter, not legitimate policy conversation. But if the banter infects policy, such as with the U.S. Energy Department’s anti-competitive proposal to subsidize unprofitable coal and nuclear plants, then interventions undermine market signals for grid reliability and increase costs by billions.

The bomb cyclone demonstrated that markets reward any resource that performs well during stressed conditions. Power price escalations, which mirror spikes in natural gas prices, provide large revenue bumps for non-gas resources commensurate with their value during stressed grid conditions. For coal, a common profitability measure is the “dark spread,” the difference between wholesale power prices and a plant’s fuel costs. Using an average heat rate of 10,493 British thermal units per kilowatt-hour, the recent cold spell yielded profits for a typical coal plant far in excess of those in any other period this past year.

coal plant profit

Source: Derived from data in the Dark Spread Model of S&P Global Market Intelligence

Note: Cost of power based on energy prices in the Western Hub of PJM Interconnection, LLC
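As a rough illustration of the dark-spread arithmetic described above, here is a sketch. The coal and power prices are hypothetical placeholders I am supplying, not the S&P Global Market Intelligence figures behind the chart; only the heat rate comes from the text above.

# Illustrative dark-spread calculation; the coal and power prices are
# hypothetical placeholders, not the S&P Global figures behind the chart.
heat_rate_btu_per_kwh = 10_493     # average heat rate cited above
coal_price_per_mmbtu = 2.00        # assumed delivered coal cost, $/MMBtu
power_price_per_mwh = 200.00       # assumed cold-snap energy price, $/MWh

fuel_cost_per_mwh = heat_rate_btu_per_kwh / 1_000 * coal_price_per_mmbtu
dark_spread_per_mwh = power_price_per_mwh - fuel_cost_per_mwh

print(f"Fuel cost: ${fuel_cost_per_mwh:.0f}/MWh, dark spread: ${dark_spread_per_mwh:.0f}/MWh")
# Roughly $21/MWh in fuel cost and a $179/MWh dark spread at these assumed
# prices; at more typical $30-40/MWh power prices the spread is far thinner.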

Huge, temporary net revenues for power plants are not excessive windfall profits but rather key market signals that reward dependable power plants when they’re needed most. If that’s enough to keep them profitable, then they shouldn’t be retired. But if their revenues remain insufficient, the market has signaled that their value during cold spells isn’t enough to justify retaining them. Instead, lower-cost resources will take their place.

In the mid-Atlantic and Northeast, constrained pipelines caused spot prices for natural gas to escalate rapidly. This caused “price inversions,” in which oil and coal generators, which are typically more expensive to operate than efficient gas plants, temporarily became less expensive than gas plants. Markets swiftly accounted for this by dispatching coal and oil plants ahead of more expensive gas plants.
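A stylized comparison shows how such an inversion arises. The heat rates and prices below are illustrative assumptions, not observed market data.

# Stylized marginal-cost comparison during a gas price spike.
# Heat rates and prices are illustrative assumptions, not observed data.
gas_heat_rate = 7.0        # MMBtu per MWh for an efficient gas plant (assumed)
coal_heat_rate = 10.5      # MMBtu per MWh for a typical coal plant (assumed)
normal_gas_price = 2.5     # $/MMBtu (assumed)
spiked_gas_price = 35.0    # $/MMBtu with constrained pipelines (assumed)
coal_price = 2.0           # $/MMBtu (assumed)

print(f"Normal day: gas ${gas_heat_rate * normal_gas_price:.0f}/MWh vs. coal ${coal_heat_rate * coal_price:.0f}/MWh")
print(f"Gas spike:  gas ${gas_heat_rate * spiked_gas_price:.0f}/MWh vs. coal ${coal_heat_rate * coal_price:.0f}/MWh")
# At normal gas prices the gas plant is cheaper and dispatches first; during
# the spike its marginal cost jumps far above coal's, producing the inversion.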

Several media outlets and public officials conflated dispatch with generation availability, which misleads the reliability discussion. Just because coal and oil operate more during cold spells doesn’t mean they are more reliable. It just means it’s more economical to dispatch them ahead of gas plants when gas prices are high. Only if certain types of plants are less available to operate—for example, if there were a large disparity in power-plant outage rates across fuel types—could one begin to make a reliability critique by fuel type. Even then, the conversation should focus on whether markets send the right signals overall, not on intervening to save particular fuel types.

The only way markets undervalue resources during severe weather events is if market prices fail to reflect economic fundamentals or the scarcity value of grid conditions. Such price-formation problems wouldn’t mean certain fuel groups or technologies were specifically undervalued. It would mean any resource producing or reducing power was undervalued.

Market prices generally reflect system conditions quite accurately. Prices provide crucial signals of resource value to private capital during stressed conditions, which drives voluntary investments in a manner that meets reliability needs in innovative, low-cost ways. In contrast, monopolies use central planning to make investments that meet reliability needs at much higher cost. For example, monopolies often overprocure expensive “firm” pipeline service to fuel gas plants, whereas competitive generators procure the least-cost combination of firm pipeline service, oil-fired back-up generation, liquefied natural gas imports and other means to bolster gas supply.

Yet the market advantage diminishes when political interventions—such as subsidizing oil inventories, pipeline expansion or retention of unprofitable coal and nuclear plants—suppress price signals that deter voluntary investment. This raises the costs of meeting reliability needs and, in some cases, perversely affects reliability. In short, markets handle cold shocks just fine, but they struggle with political shocks.

Market rules have improved this decade to rectify price formation deficiencies, but some fuel-neutral reforms could further enhance the economic efficiency and reliability of markets. Ironically, political interventions harm price formation, working at cross-purposes with constructive policy reforms. Continued improvement in market rules, coupled with better political discipline, will ensure the market advantage grows to keep consumers warm, their options open and their pocketbooks full.

Image by Andrew F. Kazmierski


Economic analysis soon could play bigger role at FCC


Telecommunications in the 20th century offered case study after case study in what happens when economics are sidelined in policy discussions. The negative effects of decisions to protect the Bell telephone monopoly and allocate spectrum via “beauty contests,” rather than markets, still linger today in the form of inefficiencies and depressed competition.

It’s not that economics didn’t have contributions to make in these areas; they simply were not incorporated into Federal Communications Commission policy. Nobel laureate economist Ronald Coase showed in 1959 why it was nonsensical not to price the spectrum, but the first spectrum auction didn’t happen until 1994. This sad tale of eschewing economics continued in the 21st century with the “economics free zone” of the 2015 order that imposed heavier regulations on internet service.

The current FCC seeks to change this pattern. Chairman Ajit Pai announced last year that he wanted to “create a culture of economics at the FCC” and was seeking to develop an office in the FCC that would “give economists early input into the decision-making process.” The commission appears likely to do just that later this month, when it votes on an order that would create an Office of Economics and Analytics. The OEA would provide a focal point for FCC economists currently spread across different bureaus, allowing them to identify and analyze economic issues associated with everything from complex auctions to communications policy writ large.

The proposed order helpfully identifies many specific areas in which the OEA will coordinate and contribute economic analysis to FCC actions. It also directs the bureaus to collaborate with the OEA in carrying out their functions, suggesting the office will not be merely symbolic but will meaningfully impact the commission’s decision-making.

One result of a greater role for economics might be increased regulatory humility. As another Nobel Prize-winning economist, Friedrich Hayek, once wrote:

The curious task of economics is to demonstrate to men how little they really know about what they imagine they can design.

Pai and future FCC chairs should be prepared to incorporate economic analysis even (and especially) when it’s unpopular or doesn’t fit with their own designs. A more central role for economic analysis may uncover fundamental shortcomings in policies and programs that have worthy goals but unseen costs that make them economically harmful.

Some universal service programs may be examples. “Closing the digital divide” is good political rhetoric, but making it a reality through subsidies requires accounting for opportunity costs. As University of Florida professor Mark Jamison recently argued:

If we’re talking about taking money from one place and putting it to broadband that people are unwilling to pay for, then we get less housing, we get less food, we get less education — something else is given up.

Communications regulation is littered with such trade-offs. From paperwork to pole attachments, ferreting out opportunity costs and countervailing benefits is essential to making rational (let alone smart) policy. We should welcome an increased role for economics and hope the OEA becomes a significant voice at the FCC.

Image by wrangler

Coming to our (energy) senses


On Monday, the Federal Energy Regulatory Commission (FERC) rejected the Energy Department’s anti-competitive proposal to subsidize coal and nuclear plants in the false name of grid resiliency. In so doing, FERC’s integrity proved as resilient as the grid during the Bomb Cyclone, even in the face of intense pressure from allies of an administration that appointed four of its five sitting commissioners. Such an adherence to principles and evidence, rather than political leverage, warrants kudos to its leadership. Fans of fair competition, innovation and “consumer interests” generally can breathe again—but not too deeply.

In the decision, FERC sought exploration of the vague concept of “resiliency” by directing grid operators to submit information on certain resilience issues and concerns. Digging into resiliency is a credible pursuit in theory but, in practice, it can also be a precarious gateway to intrusive government. Specifically, it’s a launch pad to expand central planning that carries some degree of fuel-picking bias. For example, to some, “resiliency” means preparing for exceptionally unlikely scenarios that the private sector cannot foresee. Such a definition invites paternalistic arguments wherein the government defines whatever creative contingencies it wants with little validation of the likelihood of their occurrence. Recently we’ve seen suggestions to “Hollywoodize” electric system planning by using various doomsday scenarios more reminiscent of a Die Hard plot than anything believable. Electric planning already accounts for credible contingencies—it should not speculate wildly to include incredible ones.

For this reason, a constructive approach to resiliency begins not only with defining the concept but also with determining whether it differs from reliability—and then examining whether there’s any unique market failure present. If so, then proposals for market reforms must focus on aligning the economic incentives of market participants with the efficient and resilient operation of the electric system. Further, any effort to address a newly defined market failure must fully account for the potential of government failure. In any event, FERC must lay out economic principles that ensure proposed reforms enhance market performance, rather than placate incumbent interests.


To this end, FERC’s new proceeding on resiliency starts on the right track by aiming to:

  1. Develop a common understanding of bulk power resiliency;
  2. Understand how organized wholesale electricity markets assess resilience; and
  3. Evaluate whether additional Commission action regarding resilience is appropriate.


However, if FERC subsequently decides additional action is necessary, it should first advance a robust economic framing to ensure that the initiative does not stray from the principles of market design. After all, misguided resiliency reforms could create foundational flaws in market design that entrench anti-competitive rules for years, as parties that benefit from such rules are highly resistant to later efforts to change them. For this reason, new rules must age well. Efficacious market design should increasingly account for technologies that provide customers with more autonomy to differentiate the degree of reliable service they receive – if regulators permit it – and such a model should extend to any resiliency framework, as well.

Big decisions lie ahead for FERC and the electric industry. The resiliency initiative could emerge as a tool to placate rent-seekers and their political allies. But FERC’s decision offers a new hope that competitive forces have reawakened. Let’s hope cooler heads continue to prevail through future discussions on aligning incentives with resilient grid operations.


Testing Texas power


The Lone Star State approaches electricity policy—among other things—a bit differently. The Texas system creates a unique set of investment incentives for power plant and demand-side resource developers. Combined with shifting market fundamentals, this structure has contributed to the best electric industry performance of any state in the past decade, while simultaneously retiring old polluting facilities and ushering in a new wave of clean energy development. More than 5,600 megawatts of fossil-fuel capacity will retire or be mothballed in the coming year, while 2,200 megawatts of natural gas, wind and solar look to come online this winter, along with growing prospects for energy storage.

Such economic shifts foreshadow political ones. In a recent GreenTechMedia podcast, the R Street Institute and the Environmental Defense Fund noted the possibility of green and pro-market interests converging around competitive electricity markets, with Texas as the model to emulate. Ironically, Texans’ “freest” market for electricity boasts a philosophy that directly contradicts the shockingly anti-competitive subsidies proposed by the U.S. Energy Department, led by none other than the former Texas governor who helped to develop the state’s successful policy. As such, the next couple of years of market and policy developments in Texas could have large implications for the state, but even larger ramifications for policy elsewhere.

States rely on either regulated monopolies or one of two market models to drive resource investment. Texas is the only state that relies exclusively on wholesale prices to spur investment in power plants. The Electric Reliability Council of Texas (ERCOT) uses an energy market that reflects the marginal cost to operate the grid and employs shortage (or “scarcity”) pricing that administratively sets prices above marginal cost when resource reserves run short.
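To make the mechanics concrete, here is a minimal sketch of how an administrative scarcity adder can lift the market price above marginal cost as reserves run short. The threshold, linear curve and price cap below are invented for illustration and are not ERCOT’s actual operating reserve demand curve parameters.

```python
def energy_price(marginal_cost, reserves_mw, scarcity_threshold_mw=3000, price_cap=9000.0):
    """Illustrative energy-only price with an administrative scarcity adder.

    The threshold, linear adder and cap are hypothetical, chosen only to
    show how prices rise above marginal cost when reserves run short.
    """
    if reserves_mw >= scarcity_threshold_mw:
        return marginal_cost  # normal conditions: price tracks marginal cost
    shortfall_share = (scarcity_threshold_mw - reserves_mw) / scarcity_threshold_mw
    adder = shortfall_share * (price_cap - marginal_cost)
    return min(marginal_cost + adder, price_cap)

print(energy_price(25.0, reserves_mw=5000))  # ample reserves -> 25.0
print(energy_price(25.0, reserves_mw=500))   # tight reserves -> roughly 7504
```

Real scarcity pricing, such as ERCOT’s operating reserve demand curve, is more sophisticated, but the basic logic of higher prices as reserves tighten is the same.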

Other deregulated, or “restructured,” states use capacity markets, which procure a minimum level of resource capacity, in addition to energy markets. Capacity markets supplement energy markets, providing additional revenue to signal investment decisions. States that retain the monopoly utility model rely on inefficient regulatory processes to determine investments. The “energy-only” Texas model, capacity markets and monopoly regulation have all proven capable of facilitating investment, but the economic implications of each differ markedly.

Prices in energy-only markets more accurately reflect system scarcity than capacity markets, especially when it comes to the duration of shortages. This may prove particularly advantageous as resources with time-varying output profiles—think variable resources like wind and solar and use-limited resources like energy storage—become more economical. At the same time, inexpensive natural gas and resources with no fuel costs push marginal costs down, forcing power plants to receive more of their total revenue from shortage pricing and exposing any vulnerabilities an energy-only market has with inaccurate price formation.

ERCOT does have some design flaws that affect price formation. These are currently under review by state regulators, raising the question of whether the energy-only market’s weaknesses or its virtues will win out.

Thus far, the virtues are winning. ERCOT is capacity-long, and the net capacity decrease from forthcoming retirements and additions should put it close to the economically efficient reserve margin (i.e., the level that maximizes the benefits minus the costs of electric reliability). Yet this level, which economists at the Brattle Group estimate at around 10 percent, falls below the 14 percent reserve target established by an archaic industry standard. That standard fails to weigh the costs and benefits of electric reliability, instead aiming for the likelihood of a reliability event occurring only once a decade (if you think that sounds arbitrary, you’re onto something). All told, while economists may be giddy with recent developments, alarmists and industry traditionalists will raise red flags.
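As a rough sketch of that cost-benefit logic, the snippet below picks the reserve margin that maximizes net benefits, i.e., reliability benefits minus the cost of carrying spare capacity. The benefit and cost curves are invented purely to illustrate the calculation; they are not estimates for ERCOT.

```python
# Illustrative only: choose the reserve margin that maximizes net benefits.
# Both curves below are hypothetical, in arbitrary dollar units.
candidate_margins = [0.06, 0.08, 0.10, 0.12, 0.14, 0.16]

def reliability_benefit(margin):
    # Avoided outage costs rise with the margin but flatten out (diminishing returns).
    return 2000 * (1 - (1 - margin) ** 10)

def capacity_cost(margin):
    # Carrying cost grows roughly linearly with spare capacity.
    return 7200 * margin

efficient = max(candidate_margins, key=lambda m: reliability_benefit(m) - capacity_cost(m))
print(f"Efficient reserve margin under these assumptions: {efficient:.0%}")  # 10%
```

That the maximum lands at 10 percent here is purely a product of the invented curves; the point is only that the efficient margin comes from weighing costs against benefits, not from a once-a-decade rule of thumb.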

Not only does the adjusted level of resources in ERCOT appear reasonable, but the retirements are consistent with economic fundamentals. They come as no surprise, though the simultaneous timing is a bit sudden. Profitability analysis of baseload resources suggests that the day to exit the market was fast approaching. Had this capacity retired prior to the summer of 2016, ERCOT would have triggered scarcity pricing instead of experiencing humdrum prices. This led analysts at ICF International to estimate that the retirements may be worth billions in 2018, with potential for levels of scarcity pricing not seen since 2011. If this occurs, it’ll provide insight into recent developments in price-responsive demand, an immensely important area of development for electricity markets.

Another interesting question is to what extent, in what form and in what location new resources will come online. Forward prices, which provide the basis for resource valuation and investment decisions, have risen since the retirement announcements. ERCOT’s independent auditor and an economist on ERCOT’s board expressed optimism that this would signal entry of new resources, with the latter saying, “Now is the time for us to let the market work.”

Since the 2000s, natural gas generation has dominated new construction, while wind has seen strong gains in Texas. Continued gas and wind builds are expected, but supplementing this progress is an upsurge in the prospects for solar and energy storage. As costs and federal subsidies for renewables fall in parallel over the next few years, Texas will offer the most useful laboratory to see how well renewables compete on their merits. At the same time, energy storage developers have their sights set on displacing gas-fired combustion turbines to meet peak power needs.

While Texas should see continued wind growth, the drivers and nature of those projects represent a new era for wind to compete on its merits and maximize its market contributions. In the past, nearly half of all revenues for some wind projects in Texas came from the federal production tax credit. But as the credit phases out, wind developers will obsess less over siting projects to maximize output (and thus subsidy value) and will weigh market factors more heavily. These factors include locational energy pricing, as transmission congestion drives up the price and value for resources in “import-constrained” areas. This creates opportunities for wind growth in non-traditional locations, like coastal Texas, where output fetches premium market prices.

The potential for large solar additions to contribute to ERCOT’s reliability needs—measured by solar’s effect on scarcity pricing—will prove a fascinating development. Solar output tends to align with scarcity conditions far more than wind – 82 percent for solar and 16 percent for wind, according to an ICF analysis. However, this diminishes rapidly with moderate levels of systemwide solar penetration, unlike wind, as analysts from the National Renewable Energy Laboratory to the Institute for Energy Research have noted. Thus, the long-term fate of deep solar expansion is very sensitive to cost-effective storage developments or alternative solar output methods. ERCOT’s base design is ideally suited to signal this value, but enhancements would help considerably.

In order for Texas to signal new entry and retirements efficiently, the modest-to-moderate flaws in ERCOT’s market design require correction. An excellent report by Bill Hogan and Susan Pope outlines remedies that would improve price formation in ERCOT’s energy market and enhance transmission policy. This has spurred a thorough regulatory discussion likely to result in several changes to ERCOT’s market design.

ERCOT should enact reforms that enhance market efficiency, regardless of how they alter competitive relationships among technologies. For example, incorporating the cost of transmission-line losses in ERCOT’s energy market and using market-based policies for transmission planning, instead of socializing costs, would put resources to more productive uses but would disadvantage wind generation. Enhancing scarcity pricing and making it localized, instead of just systemwide, would further benefit market efficiency and improve prospects for energy storage, which can be sited near population centers that often experience localized scarcity.

All told, competitive electricity markets best serve our economic and environmental interests. Empowering markets spurs innovation and facilitates transitions to breakthrough technologies far more efficiently than the regulated monopoly model. With competitive markets like ERCOT driving costs down and clean energy investment up, a coalition of consumer, environmental and conservative interests has reinvigorated calls for electric competition outside Texas. Calls for state competitive reforms from Nevada to Florida need a role model. Meanwhile, tens of billions in cost overruns at several power plants in the Southeast provide a harsh déjà vu of the consequences of socializing risk under the monopoly model, leaving conservatives wondering why their red brethren in Texas have fared so well. And, of course, Texas provides learning value for federal policy.

While the anti-competitive Energy Department proposal has something for (almost) everyone to hate, the pro-competition Texas model has something for nearly everyone to love. The Sierra Club praised the beauty of ERCOT’s market leading to clean-energy development, while the conservative Texas Public Policy Foundation touts the market as the best place to decide the fate of generation fuels, not policy interventions that pick winners and losers. Greens and conservatives should closely monitor Texan developments, as they just might spark political convergence. This potential to align the conservative, consumer and green agendas is indeed energizing, and the workings of an unholy energy alliance have flashed in the wake of resistance to anti-competitive proposals at the state and federal levels.

James Wallner on Niskanen Center’s Podcast

On the latest episode of the Niskanen Center’s podcast, Political Research Digest, James Wallner discusses his new book, On Parliamentary War, and the prospects for the filibuster with host Matt Grossmann. The filibuster effectively requires 60 votes to take action on important issues in the Senate. But in recent years, both Democratic and Republican majorities have acted to restrict the practice by using the so-called nuclear option. Wallner provides insights into why Senate majorities resort to the nuclear option. He also finds that Senate minorities can deter the majority from going nuclear by threatening to retaliate in response.

American Spirit: A Story of Virginia’s Liquor Laws

Jarrett Dieterle, R Street’s Director of Commercial Freedom, appears in a documentary on distilling regulations produced by the Federalist Society. He’s featured along with producers, industry observers, and other policy experts discussing the current state of Virginia’s alcohol laws and how they hurt both producers and consumers.

R Street’s Pollock on jumpstart legislation, capital reserves for SIFIs

The podcast summarizes how to achieve realistic, fundamental reform of Fannie Mae and Freddie Mac. This requires having them pay a fair price for the de facto guarantee from the taxpayers on which they are utterly dependent, officially designating them as Systemically Important Financial Institutions (SIFIs), which they obviously are, and having Treasury exercise its warrants for 79.9% of their common stock. Given those three steps, when Fannie and Freddie reach the 10% Moment, which means they will have economically paid the Treasury a full 10% rate of return plus enough cash to retire the Treasury’s Senior Preferred Stock at par, Treasury should consider their Senior Preferred Stock retired. Then Fannie and Freddie could begin to accumulate retained earnings and begin building their capital in a sound and reformed context.
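As a loose, purely hypothetical illustration of the arithmetic behind the “10% Moment,” the sketch below checks whether cumulative cash paid to Treasury has covered a 10 percent annual return on the senior preferred principal plus retirement at par. The principal and payment figures are invented, and the simple (non-compounded) return is a simplification of the actual concept.

```python
# Hypothetical illustration of the "10% Moment": the year in which cumulative
# cash paid to Treasury covers a 10% annual return on the senior preferred
# principal plus enough to retire that principal at par. Figures are invented.
principal = 100.0                              # senior preferred outstanding, $B (hypothetical)
annual_payments = [15, 18, 22, 30, 40, 45]     # cash swept to Treasury each year (hypothetical)
required_return = 0.10

cumulative_paid = 0.0
for year, paid in enumerate(annual_payments, start=1):
    cumulative_paid += paid
    owed_so_far = principal * required_return * year + principal  # simple 10% per year, plus par
    if cumulative_paid >= owed_so_far:
        print(f"10% Moment reached in year {year}")               # year 6 with these numbers
        break
else:
    print("10% Moment not yet reached")
```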


Quick hit: Trump actually is outsourcing policymaking to Congress


All I want for Christmas is some cyber insurance


The following post was co-authored by Jennifer Huddleston Skees, a legal research assistant at the Mercatus Center at George Mason University.

What do you get the modern friend who has everything, and thus everything to lose? How about a hand-wrapped, tinsel-infused personal cyber insurance policy?

The market for commercial cyber insurance for businesses is already fairly robust and still growing rapidly. Now, a market for personal cyber insurance is emerging in response to risks associated with the ever-growing internet of things. Such offerings empower individuals to defend themselves against ransomware and identity theft, and eventually could evolve to include coverage for any hypothetical digital threat that the writers of Netflix’s Black Mirror can imagine.

Worried that the new gadget you got for the holidays leaves you at increased risk of identity theft? Think Alexa might be spying on you? Concerned about exposing the intricacies of your genome to 23andMe? A personalized solution awaits. Some policies even cover homeschooling for children affected by cyberbullying.

In general, these policies have come in one of two formats. American International Group has begun offering a standalone personal cyber insurance line. The other type of personal cyber insurance is offered as an endorsement or rider to a homeowners policy. For example, Chubb now offers coverage as part of its enhanced “family protection” product that allows for recovery for cyberbullying or other cybersecurity harms that result in a quantifiable injury. Like most new insurance products, there is still a good deal of variance in terms of policy limits and specific coverages.

Right now, most of these policies are targeted at wealthy individuals. But as risk profiles and data about the frequency, severity and cost of attacks become available, the prices of insurance policies will be more accurate and the type and number of insurance policies will expand. For most people, existing policy offerings do not make sense, because cybercriminals target databases or individuals with high-value information. However, in the future, they could become more commonplace.

While more than 84 percent of people surveyed report having concerns about online privacy or cybersecurity, few actually take proactive steps to mitigate these risks. Because consumers by and large aren’t protecting themselves, some advocates have called for governments to step in and regulate the privacy and security features of devices. Policymakers cite consumer protection as a reason to use taxpayer dollars to pursue restrictive regulations on privacy and the internet of things.

The growing availability of personal cyber-insurance options could empower consumers to protect themselves, and such policies provide a better solution than regulations that could impede innovation. Providing optional coverage with the existing insurance products we already buy, such as homeowners’ and renters’ insurance, could serve to spark a conversation about what is available. Rather than forcing a one-size-fits-all solution on society, these products allow individuals to select policies that fit their needs.

In these early stages, personal cyber insurance will appeal mostly to those who are truly risk-averse or who have a lot to lose. And investing in personal cyber insurance coverage likely won’t ever be as exciting as unwrapping a set of his and hers light sabers. But if you’re doing some last-minute shopping for the digitally savvy, you can now give the gift of security and peace of mind this holiday season with a two-pack of guaranteed compensation for cyber-related loss!

Image by Photon photo


Repowering Puerto Rico’s Future

The state of Puerto Rico’s electricity system remains deeply troubling, but restoration efforts offer a light at the end of the tunnel. Now is the time to start talking about what comes after power restoration: repowering Puerto Rico’s energy institutions. Doing so will require deep thinking on the role of government in electricity policy.

PREPA: A Case of Bad Governance

Hurricane Maria put a punctuation mark on a classic case of bad governance. For decades, the Puerto Rico Electric Power Authority (PREPA), the state-run utility, operated as a “monopoly that regulates itself; sets its own rates without actual oversight; incurs operational, managerial, and administrative deficiencies whose actual cost, at the end of the day, is borne directly by customers; and whose governance lacks transparency and citizen participation.” Favoritism and foul play were the norm.

In 2014, Puerto Rico established an energy commission to oversee the severely underperforming PREPA. However, PREPA’s reliability worsened after 2014 – marked by severe system reliability problems at all infrastructure levels – while its debt ballooned to $9 billion. Synapse Energy Associates found PREPA in dire need of monetary, intellectual and human capital infusion. Despite clear problems, corrective regulatory actions that would adversely affect PREPA were politically difficult to enact. PREPA declared bankruptcy right before Maria hit.

Good governance must be the theme of repowering Puerto Rico. That spans from basics like enforcing property rights – electricity theft is a major problem in Puerto Rico – to mechanisms for transparency and accountability, like independent audits, political independence and ethics rules. It also requires an economic paradigm capable of attracting capital and putting it to its most productive use.

Puerto Rico at a Crossroads

Post-restoration, Puerto Rico doesn’t need to be told what infrastructure to invest in. Rather, it would benefit from guidance on how countries have built modern energy institutions that achieve reliable service at an affordable rate. If the economic paradigm is solid, efficient investment decisions based on economic fundamentals, not political preferences, will follow.

The electric industry’s options include the market model – where government facilitates competition – or a centrally-planned approach, where government owns the resources or regulates a private monopoly to substitute for competition. The market model requires sophisticated institutions to design and competently administer markets. To pursue this option, Puerto Rico would require extensive institutional upgrades, a prerequisite that appears unattainable in the next five years. Thus, in the short term, the economic paradigm will have to adjust to something simpler, i.e., central planning.

The commission took a step towards improved central planning by requiring PREPA to develop an integrated resource plan (IRP). IRP is a central planning process for determining what resources will meet peak demand at the lowest reasonable cost. It emerged in the states in the 1980s in response to poor planning by monopoly utilities.

Indeed, in many ways, Puerto Rico is struggling to catch up to where states were 30 years ago. As it does so, the island will encounter the same challenges that states with a regulated monopoly model face today.

In recent years, monopolies have struggled to plan for declining demand and rising distributed resource value, both of which exist in Puerto Rico. In response to these issues, the energy commission opened a proceeding in October seeking suggestions on how to use distributed resources and microgrids. This could challenge PREPA’s monopoly on electric service. This underscores the importance of using competitive mechanisms within the monopoly paradigm.

While a degree of central planning is necessary, Puerto Rico should not wed itself to monopoly ownership of assets. To inject some degree of competition into the mix, the planning process should identify needs and hold competitive auctions to procure least-cost investments. Third-party procurement through contractual agreements reduces the likelihood and consumer consequences of mismanagement in project development. That seems particularly valuable in the case of PREPA. The cost of competitive procurement will reflect investor confidence. To attract private capital, the commission must enact a rate scheme sufficient to keep the utility financially sound and give investors confidence.
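As a minimal sketch of that competitive procurement logic, the snippet below fills an identified capacity need with the lowest-cost bids first. The bidders, quantities and prices are hypothetical, and it assumes bids can be accepted in part.

```python
# Hypothetical competitive procurement: fill an identified capacity need
# with the lowest-cost bids first. Bidder names and prices are invented.
need_mw = 300

bids = [
    {"bidder": "SolarCo",    "mw": 120, "price_per_mw_yr": 70_000},
    {"bidder": "GasPeaker",  "mw": 200, "price_per_mw_yr": 95_000},
    {"bidder": "StorageInc", "mw": 80,  "price_per_mw_yr": 85_000},
    {"bidder": "DieselGen",  "mw": 150, "price_per_mw_yr": 140_000},
]

selected, remaining = [], need_mw
for bid in sorted(bids, key=lambda b: b["price_per_mw_yr"]):
    if remaining <= 0:
        break
    take = min(bid["mw"], remaining)  # accept only what is still needed
    selected.append((bid["bidder"], take, bid["price_per_mw_yr"]))
    remaining -= take

total_cost = sum(mw * price for _, mw, price in selected)
print(selected)            # [('SolarCo', 120, 70000), ('StorageInc', 80, 85000), ('GasPeaker', 100, 95000)]
print(f"${total_cost:,}")  # $24,700,000
```

In practice, bid evaluation would also weigh deliverability, contract terms and performance risk, but the core discipline is the same: buy the identified need at the lowest cost, whoever offers it.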

The Path to Good Governance

As such, near-term goals for Puerto Rico include good governance basics, an efficient and sufficient ratemaking process, and use of best practices in integrated resource planning and competitive procurement. Puerto Rico should consider privatizing PREPA if sufficient political appetite exists. If institutional integrity and sophistication advance, the opportunity to entertain more advanced market mechanisms may unfold down the road.

Simply put, the state of economic development reflects the quality of institutions and policies. The electric industry is more sensitive to these factors than most. The more Puerto Rico embraces good governance, the better its economy’s outlook.

Joe Kane talks net neutrality on Mike Check

Technology Policy Associate Joe Kane was on the Mike Check radio show to talk about the FCC’s vote to restore light touch regulation of the Internet. He discussed why the old rules were harmful and how Internet users will still be protected going forward.

R Street hosts dinner on justice reform in Nashville

As part of a continuing series of stakeholder discussions aimed at addressing the justice reform movements in states and localities nationwide, R Street’s justice and national security policy director Arthur Rizer held a salon dinner in Nashville on Dec. 7, during ALEC’s States and Nation Policy Summit.

The event brought together representatives and justice policy experts from both state and national think tanks, advocacy groups, and foundations spanning the political spectrum to discuss ongoing and future reforms in Nashville.

The discussion focused on four different agenda items. First, participants examined pretrial reform and addressed common barriers to jail reform across municipalities and state legislatures. Policing reform, primarily the implications of militarization, and juvenile justice issues were also primary topics of interest. Finally, the dialogue centered on how best to cement existing reforms in instances of “tough on crime” backlash.

Attendees included: Lauren Krisai (Reason Foundation), Craig DeRoche and Kate Trammell (Prison Fellowship), Julie Warren (Right on Crime), Jenna Moll (Justice Action Network), Ron Shultis (Beacon Center of Tennessee), Michael Leland and Ken Hardy (Pew Charitable Trusts), Cameron Smith, Ian Adams, and Alan Smith (R Street Institute), Brianna Walden (Charles Koch Institute), Sal Nuzzo (James Madison Institute), and Daniel Dew (Buckeye Institute).


World Trade Organization conference could be consequential for e-commerce

* This piece was co-written by Farzaneh Badiei, who serves as the Executive Director of the Internet Governance Project.


The World Trade Organization (WTO) will be holding a ministerial conference from Dec. 10 to 13. This conference could be of high importance for e-commerce and internet governance.

The WTO has discussed the e-commerce work program at every ministerial conference – which occurs at least once every two years – since 1998. It has not, however, advanced any e-commerce-related discussions, and the discussions that have occurred have not been binding.

For the WTO to get involved with e-commerce in a more binding fashion, an agreement must be made. A WTO e-commerce agreement would prevent data localization and maintain better privacy protection for consumers. Additionally, the WTO should consider balancing intellectual property rights in the context of e-commerce and implementing strong fair-use provisions. 

A brief background on WTO activities on e-commerce

In 1998, the WTO issued a declaration that established a work program to identify trade issues related to e-commerce. Four councils were mandated to carry out the work program: the Council for Trade in Services, the Council for Trade in Goods, the Council for TRIPS and the Committee for Trade and Development. The WTO ministers have considered the work program at each of the ministerial conferences and have instructed the work program to continue.

However, there is no sign that the councils have taken binding action related to e-commerce. Despite initiating the work program early, the WTO’s involvement in setting trade rules regarding e-commerce has been minimal. The WTO’s only decision regarding e-commerce since 1998 was the Declaration on Electronic Commerce, which stated that “Members will continue their current practice of not imposing customs duties on electronic transmissions.” The declaration has remained unchanged and has had positive effects on the free flow of information and digital free trade.

WTO should have a more active role in e-commerce

The passive role of the WTO might not last; various trade agreements are being negotiated and discussed in different forums, and these negotiations include e-commerce chapters. Europe and the United States, among other countries, have already requested that e-commerce-related topics be discussed at the ministerial meeting this month. The flurry of interest in the issue makes now an excellent time for the WTO to look into developing trade-related e-commerce policies.

Member states have also raised the need to discuss the role of the WTO in e-commerce, and whether this role should change, at the WTO Goods Council. Some member states have agreed to discuss the formation of a working party on e-commerce. A working party at the WTO would have more authority to make decisions and start negotiations, and would thus represent a step toward deeper e-commerce involvement.

Why is the WTO a suitable forum to discuss e-commerce?

Data localization hampers digital trade, requires information services to incur substantial costs to provide their services globally, and defeats the very cross-border nature of the Internet. Additionally, data localization can have damaging effects on freedom of expression and other human rights. In countries with weak or no privacy protection laws, data localization can lead to surveillance and activist arrests. With the rise of internet-of-things (IoT) devices and cloud computing, cross-border data flow is gaining even more importance.

Historically, trade agreements have helped protect and sustain information services and the free flow of information. The WTO should agree on rules that facilitate cross-border data flow and prevent data localization. Data localization is a form of non-tariff trade barrier that can be framed as trade protectionism, and it will not contribute to the growth and expansion of the IoT industry.

Moreover, with a multilateral agreement on minimum privacy protection for consumers, the WTO can commit its members to considering privacy measures in their local laws. This measure would be especially beneficial to those countries with no privacy laws. The practice of not imposing customs duties on electronic transmissions should also be indefinitely binding on the member states.

Intellectual property rights, digital trade and the WTO

Intellectual property rights (IPR) are government-granted protections used to encourage innovation and creative output by ensuring monetary compensation for the use of a work. Since the WTO’s institution of the Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS), a balance has existed in the protection of works throughout the member countries. While TRIPS is broad, its light-touch approach has provided guidance for countries to cultivate their own domestic laws relating to intellectual property.

The protection and enforcement of IPR has long been a component of United States international trade policy. But as IPR creeps into more internationally traded goods – especially those conventionally seen as “low tech,” such as household goods and automobiles – it is incumbent upon the WTO to continue its broad approach to IPR protection. As more and more goods become “smart,” countries cannot allow the enforcement of IPR to put undue burdens on consumers, researchers and tinkerers. The WTO should be careful not to impose excessive copyrights or “digital locks” that keep users from accessing goods they legally purchase.

Since IPR has been stripped from the Trans-Pacific Partnership, the WTO would do well to retain a balanced approach to IPR. The WTO can achieve such a balance by promoting open and flexible general exceptions like the American four-factor fair use test. While it may be unrealistic to expect WTO agreements to include language mirroring U.S. law, the WTO could include similar language to the TPP’s call for all parties to “endeavor to achieve balance” in copyright. Currently, forty-seven countries have some form of fair use.

The WTO should also express caution when considering IPR term length. Under TRIPS, the copyright term for a work other than a photographic work or a work of applied art, when calculated on a basis other than the life of a natural person, must be no less than 50 years; the patent term runs 20 years from the filing date. Protecting works is important and this should be stressed; however, onerous term lengths will stifle derivative works and the ability of users to enjoy the fruits of a creator’s labor in new and innovative ways.



Fighting climate change does not mean going vegan

Here at R Street we like to think of ourselves as red meat conservatives. I mean that literally. On Friday, R Street will host its annual “Meatfest,” where R Street staff will celebrate the close of another successful year by heading to Fogo de Chão and eating unlimited quantities of tasty cooked animal flesh. Meat is to R Street what cowbell is to Blue Oyster Cult. You can never have enough.

So it was pretty disturbing for me to read that special taxes on meat are being contemplated in order to fight climate change:

Some investors are betting governments around the world will find a way to start taxing meat production as they aim to improve public health and hit emissions targets set in the Paris Climate Agreement. Socially focused investors are starting to push companies to diversify into plant protein, or even suggest livestock producers use a “shadow price” of meat — similar to an internal carbon price — to estimate future costs.

The idea is being analogized to taxes on tobacco and sugar.

Ideas like this give climate advocacy a bad name. Granted, climate change is a thing, and R Street has long supported a broad-based carbon fee to deal with the risks of climate change. But when governments selectively impose taxes on some sources of emissions but not others, they can give the impression that “fighting climate change” is less about protecting the planet than it is about waging culture war fights indirectly. As I wrote previously about a proposal to tax having children because of their carbon footprint:

Calling a tax on kids a carbon tax is a bit like calling a tax on Coke (but not Pepsi) a soda tax. The tax might reduce consumption of one type of soda (namely, the best kind), but it’s unclear the extent to which it would reduce overall soda consumption, as opposed to just encouraging people to drink other types of soda.

A lot of skepticism about climate change is driven by the idea that elites just want to tell people how to live their lives, what lightbulbs to use, what car to drive, what not to eat. And that’s bad! A plane trip to Burning Man has a larger carbon footprint than grilling steaks in your backyard. Ignoring the first while attacking the second is not only bad policy, it’s bad strategy. Because meat is delicious, and if you tell people they have to eat veggie burgers to stop climate change, they are going to tell you to scram.

Net neutrality’s effect on investment: It’s complicated


This week, the Federal Communications Commission (FCC) will vote to remove 2015 rules that regulated broadband service under Title II of the 1934 Communications Act. Throughout the debate over this move, there have been several attempts to portray the plan as mistaken, or its authors as outright lying, about the effect of Title II on investment. However, these claims continually make the mistake of looking at absolute numbers rather than what’s known as a counterfactual.

Consider an example: You’re in the apple business. Last year, you bought five apple trees from your supplier. Business was going well, so this year, we’d expect you to invest in 10 apple trees. But then someone imposes costly regulations on the apple industry, and you only buy eight trees. Did the regulation increase or decrease the number of trees you bought? Or did it have no effect? Can we even tell?

Clearly you increased your investment in apple trees compared to last year–eight is larger than five–but that’s not the right question. To determine the regulation’s effect, we have to ask: What would your investment in apple trees have been this year if the regulation had not gone into effect?

This is a much more difficult question to answer. The world is a complex place where numerous factors may impact investment decisions. Maybe the price of apple-tree-growing supplies has increased. Maybe the economy went into a recession. If all we know is that last year you bought five trees, the regulation went into effect, and then you bought eight trees the following year, neither opponents nor proponents of the regulation should wave around this correlation as absolute proof for their side.
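Here is the apple example in a few lines of code, just to underline the distinction: the regulation’s effect is the gap between observed investment and the (unobservable) counterfactual, not the year-over-year change.

```python
# The apple-tree example in numbers: the policy effect is measured against
# the counterfactual, not against last year.
last_year_trees = 5
observed_this_year = 8         # what actually happened under the regulation
counterfactual_this_year = 10  # what we expected absent the regulation (unobservable in practice)

year_over_year_change = observed_this_year - last_year_trees             # +3: "investment went up!"
estimated_policy_effect = observed_this_year - counterfactual_this_year  # -2: the regulation's effect

print(year_over_year_change, estimated_policy_effect)  # 3 -2
```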

Yet this is what we’ve seen time and again in the net neutrality debate. Article after article has claimed that Internet Service Providers (ISPs) “increased … capital expenditures” and “continued to invest” despite reclassification of broadband as a Title II service, so any suggestion that Title II hurt investment has been “proven indisputably false” and, in fact, the rules “haven’t affected overall industry investment.”

It’s also troubling that many of these stories rely so heavily on a Free Press report that makes such elementary blunders as failing to adjust for inflation. Adjusting for inflation reverses the study’s purported findings, showing a decline, rather than an increase, in investment.

Simply looking at what happened after the reclassification does not, in itself, tell us anything about what would have happened over the same time period without Title II regulation. Maybe these companies would have invested more in broadband infrastructure. Maybe their investment levels would have declined, but Title II regulation protected the “virtuous cycle” and actually increased investment. We can’t know just by looking at bottom-line expenditures.

Luckily, more sophisticated econometric methods can help us zero in on how the regulation affected broadband investment. Economist Dr. George Ford of the Phoenix Center conducted an insightful, methodologically sound study that accounts for an additional complicating factor: the fact that the FCC has had the Title II option on the table since 2010. That date is a better place to start looking for effects, since companies account for what might happen when making investment decisions rather than waiting for final rules to take effect. And, in fact, Ford found that the threat and later imposition of Title II regulation did decrease ISPs’ investment–by $160 billion to $210 billion from 2011 to 2015.

You need not agree with Ford’s conclusion here, but refuting him requires engaging in counterfactual analysis–trying to figure out not just what happened after we enacted the 2015 Title II rules, but what would have happened without the real possibility of Title II regulation. These aren’t questions that can be answered by looking at bottom-line numbers for particular years and seeing if they rose or fell; answers to these questions require econometric analysis of the sort Ford conducted. 

The other common feature of reporting on broadband investment is that many of these articles tout statements by ISPs to their investors as proof that broadband investments were not harmed by Title II. As we at R Street Institute document in our reply comments to the FCC, however, the statements themselves don’t actually support that portrayal. 

Some ISPs did say that their current business practices wouldn’t be affected by the new rules; that is, they wouldn’t have to stop blocking, throttling, or engaging in paid prioritization because they weren’t doing that in the first place. Of course, rules banning something that you don’t do won’t affect your day-to-day activities very much. ISPs’ statements to investors and the Securities and Exchange Commission about the effect of reclassification on long-term investment prospects, however, did clearly list Title II regulation as a significant threat.

Broadband investment is the best way to close the digital divide and create competition that will produce better quality services at lower prices. Regulations that create uncertainty and increase the cost of investment result in fewer people getting access to broadband and fewer options for those who have access. 

Regardless of your position on net neutrality, we should all take care to ask the right questions and embrace their complexity rather than cutting corners to score points for our side.

Shoshana Weissmann Joins Matt Lewis’s Podcast

RSI’s Digital Media Specialist joined Matt Lewis to talk about her career, Twitter, passion for sloths, and conservatism in cities.

Exceptions to the Rule: The Politics of Filibuster Limitations in the U.S. Senate

Hoover’s Director of Washington Programs, Michael G. Franc, interviews author Molly E. Reynolds on her latest book, Exceptions to the Rule: The Politics of Filibuster Limitations in the U.S. Senate.

Many people believe that, in today’s partisan environment, the filibuster prevents the Senate from acting on all but the least controversial matters. But that conventional wisdom is not exactly correct. In fact, since the 1970s the Senate has created a series of special rules, described as “majoritarian exceptions,” that limit debate on a wide range of measures.

Reynolds argues that these procedures represent a key instrument of majority party power in the Senate. They allow the majority, even if it does not have the 60 votes needed to overcome a filibuster, to produce policies that will improve its future electoral prospects and thus increase the chances that it remains the majority party.

Clean Energy Choices

Across the country, clean energy is growing rapidly in states that allow customers to choose their electricity supplier. What can lawmakers learn from these developments, and what further policy reforms are needed to unleash the power of competitive forces to deliver cheaper and cleaner energy? The following is a panel discussion, held at the Cannon House Office Building on Nov. 30, that explored these questions.

Panelists include:

Michelle Patron, Director of Sustainability Policy at Microsoft

Dylan Reed, Head of Congressional Affairs with Advanced Energy Economy (AEE)

Devin Hartman, Electricity Policy Manager and Senior Fellow with R Street Institute

Frank Caliva, Senior Spokesman with American Coalition of Competitive Energy Suppliers (ACCES)

Charles Hernick, Director of Policy and Advocacy with Citizens for Responsible Energy Solutions (Moderator)

Greentech Media Podcast: Beyond Subsidies

R Street senior fellow Devin Hartman and EDF’s Lenae Shirley discussed on Greentech Media’s The Interchange podcast how competitive electricity markets align the conservative and green agendas. The reasons are simple – competition and consumer choice drive economic development, innovation and the deployment of clean energy resources. This has resulted in the political right, the left and the clean energy industry expanding conversations on how clean energy can compete on its merits. Hartman discusses pro-market, pro-consumer reforms, while the hosts note political convergence around unease with the monopoly utility model.

AEI Event: Is the Bank Holding Company Act obsolete?

Most of America’s 4,969 banks are owned by holding companies, so the Bank Holding Company Act of 1956 is a key banking law. But do the prescriptions of six decades ago still make sense for the banks of today? For most of them, the act creates a costly and arguably unnecessary double layer of regulation. Its original main purpose of stopping interstate banking is now completely irrelevant. One of its biggest effects has been to expand the regulatory power of the Federal Reserve–is that good or bad? Does it simply serve as an anti-competitive shield for existing banks against new competition? Some banks have gotten rid of their holding companies–will that be a trend? This conference generated an informed and lively exchange among a panel of banking experts, including the recent Acting Comptroller of the Currency, Keith Noreika, and was chaired by R Street’s Alex Pollock.


Future of Internet Freedom with FCC Chairman Pai

R Street and the Lincoln Network co-hosted a Nov. 28 event on the future of telecom policy and Internet freedom, including the Federal Communications Commission’s upcoming agenda. The speakers included Federal Communications Commission Chairman Ajit Pai, Federal Trade Commission Chair Maureen Ohlhausen and FCC commissioners Mike O’Rielly and Brendan Carr.

Those speeches were followed by a panel discussion featuring: Tom Struble, technology policy manager at the R Street Institute; Brent Skorup, research fellow at the Mercatus Center at George Mason University; and Roslyn Layton, visiting scholar at the American Enterprise Institute. The panel was moderated by Jessica Melugin, adjunct fellow at the Competitive Enterprise Institute.

When it comes to criminal justice AI, we need transparency and accountability


A recent Wall Street Journal article makes the case that—in regulating artificial intelligence, including those applications used to aid the criminal justice system—we should emphasize accountability, rather than prescriptions to make every algorithm completely transparent. While authors Curt Levey and Ryan Hagemann make important points, the article misses key details about the state of machine learning and the fundamental differences between requirements demanded by government procurement agents and regulations that would affect the broader market.

Levey and Hagemann argue that calls for algorithmic transparency in areas like criminal justice risk assessment are misguided because they fail to account for the opaque nature of advanced machine-learning techniques. Furthermore, they believe transparency requirements—for both training data and source code—would unfairly undermine trade secrets and competitiveness in the market for such software.

Their argument about artificial intelligence regulation thus has three components:

  1. Transparency requirements will not be effective with machine-learning techniques, because each is a “black box.”
  2. Transparency requirements are undesirable because they undermine intellectual property and market competition.
  3. It is not appropriate for government to impose transparency requirements even on itself, including for risk assessments in the criminal justice system.

To be sure, there are good reasons to avoid broad-based algorithmic transparency requirements for every AI application. As I discussed in greater length at Cato Unbound earlier this year, such rules would stifle competitiveness and innovation.

But the criminal justice system is not an ordinary market, and the government is not an ordinary firm. Just as ordinary firms may and often do decide to use open source software, it is entirely appropriate for the government to make determinations about what it will require in contracts with its vendors.

Unlike ordinary firms, government also has constitutional obligations to be transparent, such as in upholding citizens’ rights to due process and equal protection under the law. Statutory obligations like the Freedom of Information Act and other “sunshine” laws; the jurisprudence of criminal procedure; 51 federal and state constitutions; and myriad court precedents all set out additional rules and protections. Notions of equity, predictability and, yes, transparency are at the heart of what our justice system strives to provide.

I’ve argued before that we should err on the side of transparency in the criminal justice system. This could be done by requiring, as part of procurement processes, that all algorithms that inform judicial decisionmaking in sentencing be built and operated on an open source software platform. Everything from the source code to the variable weights to the (anonymized) training data would be available for public scrutiny and could be audited for bias.

The government would likely have to pay more upfront for a transparent open source system, as it would essentially be buying the algorithms outright rather than renting them, and continued investment would be needed for their development. However, with a more open ecosystem, there are good reasons to think the costs to taxpayers could be offset by philanthropic investment and engagement from civil society.

There may indeed be mechanisms to validate risk-assessment software in criminal justice that stop short of disclosing training data, continuous outcome data or the underlying code. But such an approach requires taking unnecessary risks that the system will be abused, in addition to fomenting public backlash against the technology. Even setting aside civil liberties considerations, opting against transparency in criminal justice AI would keep a developing ecosystem opaque, when it could benefit from broad-based collaboration and the input of diverse stakeholders.

Thus, it seems that transparency would be desirable if feasible. Let’s address some of the feasibility concerns and proposed harms of transparency more specifically.

It’s first worth noting that the algorithms used in the criminal justice system today are relatively simple from a technical perspective, and do not rely on advanced neural networks. Their fixed algorithmic weights can be discovered via transparency, and these systems do not yet raise the concerns associated with “black box” machine learning. Most of these systems, like the Public Safety Assessment (PSA) tool used in New Jersey, can be calculated by hand in a short period of time if you have the relevant background and criminal history.

Systems like the COMPAS algorithm used in Wisconsin are proprietary, which makes it difficult to know exactly how they operate. However, based on sample tests obtained by ProPublica, they still seem to be within the realm of pen and paper.

The published PSA scoring sheet, a short point-based worksheet, illustrates how simple the tool really is.
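In the same spirit, here is a hypothetical point-based scorer loosely modeled on a failure-to-appear scale. The factors and weights are invented for illustration and are not the actual published PSA values.

```python
# Hypothetical point-based risk scorer in the spirit of pretrial tools like
# the PSA. Factors and weights below are invented for illustration only.
def failure_to_appear_score(defendant):
    score = 0
    if defendant["pending_charge"]:
        score += 1
    if defendant["prior_conviction"]:
        score += 1
    if defendant["prior_failure_to_appear_past_2yrs"] >= 2:
        score += 2
    elif defendant["prior_failure_to_appear_past_2yrs"] == 1:
        score += 1
    if defendant["prior_failure_to_appear_older"]:
        score += 1
    return score  # raw points, which a real tool would convert to a small scale

example = {
    "pending_charge": True,
    "prior_conviction": False,
    "prior_failure_to_appear_past_2yrs": 1,
    "prior_failure_to_appear_older": False,
}
print(failure_to_appear_score(example))  # 2
```

Real instruments typically convert the raw points to a small scale and pair them with a release-conditions framework, but nothing about them requires a black box.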


There are structural limits as to the kinds of variables in play in a risk assessment. While a judge can consider any number of extraneous factors, a computer system must rely on a uniform dataset. This might include such variables as age, ZIP code, a defendant’s first contact with the juvenile courts or any past jail time. How these are weighted may be opaque in a machine-learning context, but would nonetheless be possible to analyze. What would be prohibited from consideration are variables such as race or national origin, as well as any false data. If these were used in sentencing—or potentially, even if other factors were used that might be a close proxy for these prohibited variables—it would open a conviction to appeal. That’s why transparency is important for due process.

The future of risk-assessment algorithms likely will include greater and greater uses of machine-learning techniques, so it’s worth thinking about potential transparency and accountability trade-offs. A recent National Bureau of Economic Research paper, led by Cornell University’s Jon Kleinberg, showcased the incredible gains we can make in the pretrial system with more accurate risk predictions. In a policy simulation, the authors showed that their algorithm, trained through machine learning, could cut the jail population by 42 percent with no increase in the crime rate.

As Levey and Hagemann point out, the greater the degree of complexity in machine learning, the harder it is to peer into the inner workings of the algorithm and understand how it makes decisions. But with access to the training data and the specific machine-learning methods used, it would be entirely possible to replicate the model again and again to make sure there are no anomalies, or to create proxy models to test for different kinds of machine bias or common errors. Furthermore, we are quickly developing new methods of machine learning that are more amenable to transparency, explicability and interpretability.
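As a sketch of the kind of audit that transparency enables, the snippet below compares false positive rates across two demographic groups using a handful of hypothetical, anonymized records. A real audit would be far more involved, but the basic check becomes straightforward once predictions and outcome data are open.

```python
# Minimal bias audit: compare false positive rates across groups.
# Each record is (group, predicted_high_risk, actually_reoffended) -- hypothetical data.
records = [
    ("A", True, False), ("A", False, False), ("A", True, True), ("A", False, True),
    ("B", True, False), ("B", True, False), ("B", True, False), ("B", False, True),
]

def false_positive_rate(rows):
    non_reoffenders = [r for r in rows if not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]  # labeled high risk despite not reoffending
    return len(flagged) / len(non_reoffenders) if non_reoffenders else 0.0

for group in ("A", "B"):
    rows = [r for r in records if r[0] == group]
    print(group, round(false_positive_rate(rows), 2))
# Prints: A 0.5 and B 1.0 -- a disparity an auditor would want to investigate.
```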

Levey and Hagemann’s stated goal of accountability does not have to stand in opposition to the goal of transparency. Transparency is one method to achieve algorithmic accountability. In the context of criminal justice, it is a most worthwhile mechanism. More advanced machine learning is able to help only insofar as models are based on externally valid data. And even explainability protocols and internal diagnostic tools will not be able to alert the operator about invalid data, because a neural network has no concept of validity outside the dataset it has been trained on.

Risk assessment systems also must be calibrated to societal norms. For instance, we want to be more averse to releasing individuals who are likely to commit a murder than we are to releasing a nonviolent drug offender. But if a particular jurisdiction wants to take a hard line on marijuana use, there would be a public interest in knowing about it.

This brings us back to a larger point about the difference between government regulation and requirements built into the procurement process. In addition to not having access to profit and loss signals, the procurement process is rife with rent-seeking as private companies compete on the basis of political connections, rather than the quality of goods they are selling. As such, it is entirely appropriate for government to set procurement specifications to ensure that certain needs are met. We should not conflate this with more general forms of government regulation.

The application of AI in risk assessments in the justice system won’t be perfect, especially at first. Even if these tools have an overall positive effect, they may introduce hard questions about differing notions of fairness. As these technologies advance and become more opaque, complete openness is the best way to protect our civil liberties, ensure public trust and root out flaws. In the long term, with an open ecosystem, we can produce far better outcomes than the status quo.

While I largely agree with Levey and Hagemann about whether it’s wise to impose broad transparency mandates on private sector algorithms, we shouldn’t carelessly extend this thinking when it comes to the application of state power. In high stakes realms where the government can keep you locked up or otherwise take away your liberties, we should make our mantra: “Trust, but verify.”

Image by Phonlamai Photo


Molly Reynolds’ Exceptions to the Rule: The Politics of Filibuster Limitations in the U.S. Senate

The Senate has changed considerably in recent years. So too has our understanding of how its members make decisions.

Traditionally, the leading scholarship on the Senate has taken as its starting point the fact that its members possess considerable parliamentary rights under the institution’s Standing Rules. This was important because senators used those rights to obstruct legislation they opposed. To the extent that such treatments acknowledged political parties, they typically focused on the negative consequences of rising partisanship, which made it harder for Senate majorities to overcome obstruction by an increasingly unified minority party.

Yet an apparent decline in member autonomy and corresponding increase in party cohesion over the last two decades prompted scholars to re-examine how they thought about the Senate. The result of their efforts has been a reorientation in our understanding of the Senate. Today, the starting point for most scholarly inquiries is not so much the efforts of individual senators to achieve their goals in the institution as the collective behavior of the majority and minority parties more generally. The treatments that first adopted this perspective sought to adapt earlier work on party effects in the House of Representatives to explain developments in the Senate.

But unlike in the House, minorities in the Senate still have the ability to influence policy outcomes. The most widely known example of this is the filibuster, which permits senators to block the majority from passing legislation. The majority can overcome such obstruction by invoking cloture under Rule XXII. But doing so requires a three-fifths majority (typically 60 votes) to end debate before the Senate can proceed to an up-or-down vote on a bill (which typically requires 51). As such, the minority retains significant leverage in the legislative process so long as it is able to secure the votes needed to prevent the majority from invoking cloture (typically 41).

Senate majorities may curtail the minority’s ability to filibuster with a reform-by-ruling approach (i.e., the so-called nuclear option) to unilaterally create a new precedent that is inconsistent with, but nevertheless supersedes, Rule XXII. However, they have not often done so.

Admittedly, there have been exceptions to this reluctance in recent years. In 2013, Democrats used the nuclear option to limit the minority’s ability to obstruct most nominations. And Republicans did so earlier this year to preclude filibusters of Supreme Court nominees.

Notwithstanding this, neither party has elected to go further by eliminating the legislative filibuster. While there are several reasons for this, one of the most often overlooked explanations is that the Senate can exempt specific legislation from being filibustered in the future without using the nuclear option or going through the cumbersome rules-change procedure stipulated in Rule XXII. This reduces the demand for eliminating the legislative filibuster by offering determined majorities an alternative way to enact policy.

In Exceptions to the Rule: The Politics of Filibuster Limitations in the U.S. Senate, Molly E. Reynolds considers these special procedures, which she terms majoritarian exceptions. A fellow in Governance Studies at the Brookings Institution, Reynolds defines a majoritarian exception as “a provision, included in statutory law, that exempts some future piece of legislation from a filibuster on the floor of the Senate by limiting debate on that measure.”

Reynolds groups majoritarian exceptions into two categories: delegation exceptions and executive branch oversight exceptions.

With delegation exceptions, majorities empower designated actors to craft legislation addressing specific policy problems while simultaneously limiting the minority’s ability to obstruct the measure when it is eventually considered by the full Senate. The reconciliation process is an especially salient example of a delegation exception given recent Republican efforts to repeal and replace Obamacare and reform the tax code using the special process. In reconciliation, committees are authorized to craft legislation meeting specified budgetary targets. Floor debate on reconciliation bills is limited to 20 hours and the amendments senators are permitted to offer are restricted. These exceptions to the Senate’s Standing Rules were created when Congress passed the Congressional Budget and Impoundment Control Act of 1974 and the Omnibus Budget Reconciliation Act of 1990.

In contrast, oversight exceptions create a special fast-track process to approve or disapprove a presidential act after it has already occurred. These special procedures usually preclude amendments and limit overall debate time on the underlying legislation. Examples of oversight exceptions include legislation periodically passed by Congress giving the president authority to negotiate trade agreements and to expedite their consideration in the Senate (e.g., the 2015 Bipartisan Congressional Trade Priorities and Accountability Act). The elaborate disapproval process Congress utilized to raise the debt ceiling on a number of occasions during the Obama administration offers another example of an oversight exception.

In considering majoritarian exceptions as a distinct class of procedures that share certain identifiable features, Exceptions to the Rule makes an important contribution to our understanding of the relationship between partisanship and parliamentary procedure in the Senate. Reynolds highlights the utility of majoritarian exceptions to Senate majorities as well as their impact on policy outcomes, and provides an analysis that enables us to predict when Senate majorities will be more likely to propose such exceptions in the future.

But we should be careful not to overstate the value of majoritarian exceptions to Senate majorities more generally. The special procedures do not provide them with a reliable way to enact their agenda over the minority’s objections on a routine basis. This is because they must first be authorized by law and the legislation containing such provisions can be filibustered.

The repeated use of majoritarian exceptions, it is worth adding, may have important consequences for our politics more generally. For example, Congress cedes its authority to make law to the executive branch when it uses oversight exceptions. Doing so may be necessary to ensure action on an important public policy problem. But it also gives unilateral presidential action the imprimatur of legitimacy at a time when many observers are calling for Congress to reassert its authority.

And both oversight and delegation exceptions may exacerbate a growing accountability problem in our politics. It is harder for voters to assign responsibility to legislators for the policy outcomes produced via such processes.

More broadly, the restrictive rules Congress places on such processes distort the nature of Senate decision-making in subtle, yet nevertheless important, ways. The limits on debate and amendments upend the deliberative process in the institution and restrict the ability of rank-and-file senators to participate in it. In the case of reconciliation, fitting legislative proposals into the four corners of what is permitted by the special procedure supplants a more inclusive and adversarial process geared toward adjudicating the claims of senators and their constituents. Avoiding contentious debate in this way has the potential to make the policies enacted via majoritarian exceptions less stable over the long term, as opponents refuse to accept their legitimacy and instead wait for the chance to reverse them using the same process in the future.

The irony of the Senate’s increased use of majoritarian exceptions in recent years is that it has exposed the limits in the regnant approach to thinking about the institution. The spectacular failure of Republicans to repeal and replace Obamacare earlier this year and their ongoing struggle to reform the tax code using the reconciliation process suggest that the parties are not as unified as previously thought. If this is indeed the case, our identification of the filibuster as thwarting majority action and thus perpetuating gridlock and dysfunction may be incorrect.

Regardless of such concerns, Exceptions to the Rule should be required reading for anyone concerned about the state of the Senate today. Reynolds’ in-depth analysis of majoritarian exceptions offers valuable insight into a subset of parliamentary procedures that is sure to dominate Senate decision-making for years to come.

Congress helped create the CFPB’s leadership crisis. It can fix it.

Over the past few days, the D.C. news cycle has been dominated by the palace intrigue over who should be properly recognized as acting director of the Consumer Financial Protection Bureau. But few have considered Congress’ role in creating this situation—and the fact that it could now help fix it.

When outgoing CFPB Director Richard Cordray stepped down, President Donald Trump tapped Office of Management and Budget Director Mick Mulvaney to serve as the CFPB’s acting director while a permanent head was selected. But in a surprise twist, Cordray declared that the agency’s chief of staff, Leandra English, was actually the CFPB’s new leader.

This created a mini constitutional crisis, as it appeared that Mulvaney and English might enter into a power struggle over control of the agency. So far, the CFPB’s general counsel has sided with Mulvaney, advising all CFPB staff in a memo to “act consistently with the understanding that Director Mulvaney is the Acting Director of the CFPB.” English, for her part, filed a lawsuit asking a federal court for a restraining order to prevent Mulvaney from taking the post, which the court denied.

Legal scholars have been weighing in on the merits of who is legally correct in this scenario, and the dispute involves both constitutional as well as statutory concerns. (Jonathan Adler has a summary of the various legal positions and opinions over at The Volokh Conspiracy; for what it’s worth, I think Adam White has the best of the argument when he concludes that the Trump Administration should prevail).

Lost in all this back-and-forth, however, is the fact that this fiasco was both eminently predictable and preventable. The CFPB is sui generis in America’s system of governance in that it has unprecedented powers that are mostly incapable of being checked by the other branches of our government. What’s happening right now illustrates this: the outgoing agency leader is attempting to implement his own preferred succession plan over the wishes of the other political branches.

The CFPB was created by the 2010 Dodd-Frank Act, which dictated that it be led by a single director. To ensure the agency’s independence, the act specified that the director could only be removed by the president “for cause,” which insulates the agency’s leadership from presidential accountability. While “for cause” protection is common in so-called “independent agencies”—other examples include the Securities and Exchange Commission, the Federal Communications Commission and the Federal Trade Commission—it is unprecedented for an agency that operates under a single director rather than a multimember commission structure.

Last year, a panel of the D.C. Circuit found CFPB’s structure unconstitutional for this very reason (that decision is currently on appeal to the entire D.C. Circuit, which has yet to rule on it). By establishing an agency that is led by a single individual who cannot be removed except in special circumstances, Congress muddied the waters when it comes to who is ultimately in charge of the CFPB—the president or the agency’s director.

The uniquely unaccountable nature of the CFPB does not end with its leadership structure, either. Dodd-Frank specified that the agency was also to be funded outside Congress’ normal appropriations process, via revenues derived from the Federal Reserve System. This prompted the D.C. Circuit to quip that the agency’s funding structure was “extra icing on an unconstitutional cake already frosted.” This means that Congress succeeded in creating an agency that was unaccountable to both the executive and legislative branches.

This failure to ensure accountability at the CFPB is partially responsible for the current leadership struggle. In fact, Barney Frank, one of the principal drafters of Dodd-Frank, has suggested that the CFPB’s plan of succession was deliberately designed to insulate its leadership from presidential control. And now that a leadership struggle has occurred, both the president and Congress are mostly powerless to respond to it in effective fashion.

Despite its errors in creating the CFPB, it’s not too late for Congress to fix its mistakes. During the current Congress, the House has considered legislation that would convert the CFPB from a single-director model to a five-member commission structure, with each commissioner serving five-year staggered terms. Alternatively, the final version of the CHOICE Act, which passed the House earlier this year, clarifies that the CFPB director is fireable at will by the president. The CHOICE Act also would make the CFPB subject to the normal appropriations process.

Even if sweeping Dodd-Frank reforms like the CHOICE Act are not politically feasible right now, Congress should at least pursue these discrete structural reforms. In particular, converting the agency to a commission model would help avoid succession crises like the one the agency is currently undergoing, as the staggered terms of the commissioners would reduce surprise retirements and deemphasize the importance of any one officer or director at the agency. It would also pull the agency’s ethos in the direction of a truly non-partisan, independent entity, rather than a “political vehicle” masquerading as an independent agency.

If making the CFPB truly independent is not desirable, then it should be treated like any other executive branch agency and have a director that the president can remove at will. Such a goal could be bipartisan, too, as even Barney Frank’s former legislative aide has argued that “Congress should never again create an ‘independent’ agency with a sole director, particularly one not subject to the congressional appropriations process.” Likewise, both parties should be motivated by fears of the other party controlling as unaccountable and powerful a position as CFPB director.

The CFPB has been allowed to operate as a regulatory unicorn for far too long, and its recent leadership struggle is merely a symptom of its unaccountable structure. Congress created this mess, and now it’s time for it to fix it.

R Street panel discusses how to give companies clean choice


The renewable electricity industry has grown rapidly in the past decade. From their humble beginnings, wind and solar energy have more than doubled their combined share of power generation at utility-scale facilities in the United States.

In a perfect world, companies large and small would be able to purchase this growing abundance of clean energy straight from the grid. But the electricity industry’s monopoly model is “a hell of a drug,” and less than half of states have restructured their electricity markets to allow for more competition and consumer choice.

In the 1990s, roughly a dozen states – including Texas, Illinois and Ohio – passed laws that began to open up the electricity sector to competition. But the California electricity crisis of the early 2000s and the 2008 Great Recession stalled the restructuring movement.

In the years that followed, regulations in most states remained unchanged. The energy marketplace, on the other hand, did not. The growth of renewables, combined with renewable portfolio standards in 29 states and a more environmentally conscious corporate mindset, have changed the incentives around energy choice.

As a result, there are now many more potential customers with a strong – and growing – demand for clean energy. Major electricity users with clean-energy leanings like Microsoft and Amazon – whose internet cloud operations rely on mammoth server farms – have begun to push for clean-energy procurement. Other major energy consumers like Google, Apple and Johnson & Johnson are looking to join the clean-energy parade.

When these customers compare states that allow consumers to choose their electricity suppliers with states that retain the monopoly model of a single large electricity producer, it’s a no-brainer. From 2008 to 2016, the weighted-average price of electricity in monopoly states increased 15 percent, while in restructured states, prices fell by 8 percent.

In monopoly states, artificial barriers undermine competition and state legislatures are beholden to major legacy utility firms for much of their fundraising. Lobbyists for these companies have pseudo-official status in state capitols like Richmond, Raleigh and Atlanta, so the marketplace is currently balkanized. But for how much longer?

R Street will host a panel discussion on the future of clean-energy choice with panelists from Microsoft, Advanced Energy Economy (AEE), the American Coalition of Competitive Energy Suppliers (ACCES) and Citizens for Responsible Energy Solutions (CRES). The panel will take place at noon Nov. 30, at the Cannon House Office Building in Washington, D.C.

Among the questions to be asked are:

  • What is the value of choice overall and specifically to clean-energy procurement?
  • Microsoft’s story: What do big customers look for and what policies are needed to attract them?
  • Why is retail choice important for cheap, low-carbon technologies?
  • What does the political landscape tell us about the future of clean-energy choice?

Please join us to find out what the clean-energy choice movement has in store.

Image by zhangyang13576997233

Casey Burgat and Matt Glassman on Congress

In the first episode of a video series for the Legislative Branch Capacity Working Group, R Street Governance Fellow Casey Burgat interviews Matt Glassman, senior fellow at Georgetown University’s Government Affairs Institute, on all things Congress. Topics discussed include: common political misconceptions; issues and likelihood of congressional reform; congressional capacity; necessary changes to the committee structure; and much more.

Kosar talks postal reform on C-SPAN

R Street Vice President of Policy Kevin Kosar was a guest Nov. 24 on C-SPAN’s “Washington Journal” to discuss potential reforms to the U.S. Postal Service. Full video of the appearance is embedded below.

LIVE STREAM: The Future of Internet Freedom with FCC Chairman Pai, Commissioners O’Rielly and Carr, FTC Chairman Ohlhausen


Join the R Street Institute and the Lincoln Network for an event on the future of telecom policy and internet freedom.

Federal Communications Commission Chairman Ajit Pai, Commissioner Mike O’Rielly and Commissioner Brendan Carr, as well as Federal Trade Commission Acting Chairman Maureen Ohlhausen, will each deliver remarks on their agencies’ upcoming agendas.

The speeches by the Chairman and Commissioners will be followed by a discussion with our expert panel, featuring:

  • Tom Struble, technology policy manager at the R Street Institute
  • Brent Skorup, research fellow at the Mercatus Center at George Mason University
  • Roslyn Layton, visiting scholar at the American Enterprise Institute
  • Jessica Melugin, adjunct fellow at the Competitive Enterprise Institute (moderator)

The in person event is by invitation only (contact

Media contact: David Bahr (

Watch live beginning Tuesday, November 28th at 1:30pm ET. Use hashtag #InternetFreedom to join in the conversation:

Some links on patent reform

I’ve been digging back in on some materials related to patent policy and the case for patent reform, and thought it might be useful to others to post some links to R Street’s work over the years as well as works by other groups. Check them out below.

Publications with R Street scholars:

Publications by other center-right groups:

Academic and government publications:


How to talk to your family about net neutrality


It’s nearly Thanksgiving – that time of year when we all try to cram our families through airport security on the same day so we can gather around the table to argue about politics.

This year is likely to prominently feature wonky topics such as tax reform and – oddly enough – net neutrality. While telecom regulation isn’t normally a salient subject in family settings, that may change tomorrow; the Federal Communications Commission (FCC) just released its proposal to roll back Title II regulation of the Internet, aka “Net Neutrality” (our substantive thoughts on the issue can be found here).

Activists didn’t waste any time in attacking the plan and taking to various social media platforms to shout their objections. But not all of us agree with left-wing activists (and we suspect most of them don’t know much about telecom policy).

With all the confusion and misinformation pervading this discussion, here are some points to share with your family should the subject come up around the table this Thanksgiving.

1) It’s not the end of the Internet. The Internet as we know it was built without Title II regulation. In fact, the current regulations only took effect in mid-2015. Cases of net neutrality “violations” were few and far between in the decades before Title II regulation, and they were resolved without prescriptive regulation. Going back to the pre-2015 light-touch framework would hardly pose an existential threat to your favorite websites.

2) There will still be “cops on the beat.” Scary scenarios in which an ISP blocks content from its competitors will still be illegal. And even as the FCC steps aside from regulating the Internet, the Federal Trade Commission still has ample authority and expertise to hold ISPs to their promises and punish them if they engage in unfair methods of competition. State attorneys general also have the power to bring enforcement actions using state-level consumer protection laws.

3) The Internet has never “treated all traffic the same,” nor should it. Different kinds of data are sent over the Internet, and they don’t all need the same treatment. A half-second delay in delivering an email or part of a software update isn’t a big deal. The same delay for applications like real-time multiplayer games or video chats could render them unusable.

Additionally, some Internet applications are non-neutral. If you use T-Mobile’s Binge On, you get slightly lower-quality video in exchange for free streaming. That such a service hurts consumers would be news to those who have signed up for it in droves.

4) The issues you’re worried about might not be net neutrality concerns. We’ve all had bad experiences with our ISPs’ customer service departments, but those are separate issues. More regulation, especially the kind designed for a 1934 telephone monopoly, is not going to improve the situation.

5) More broadband deployment is the long-term solution. What will make things better is more competition in the marketplace, which means more broadband deployment from all sources, including wireline and wireless. Thus, instead of fixating on net neutrality, we should focus on removing barriers to deployment. The Title II regulations are one such barrier that has depressed investment. Repealing them will get us back on the road to faster Internet for all.

AT&T should acquire Time Warner despite DOJ challenge


The following post was co-authored by R Street Tech Policy Associate Joe Kane. 

This week, the Department of Justice (DOJ) formally challenged AT&T’s proposed acquisition of media conglomerate Time Warner by filing a complaint to block the merger with the U.S. District Court for the District of Columbia. Despite this challenge, the merger should be allowed to proceed, as both the facts and legal precedents strongly suggest that DOJ’s challenge lacks merit.

Time Warner produces video content and programming through its Warner Bros., Turner Broadcasting System and HBO subsidiaries. AT&T distributes content from Time Warner and other producers through its wireline, mobile and satellite networks. These two firms don’t compete against each other in any relevant market, so this represents a classic example of a vertical merger. It is very rare for the DOJ to challenge a vertical merger such as this, and even rarer for the courts to block one — it hasn’t happened in decades.

Vertical mergers are almost always pro-competitive and pro-consumer in nature. It’s horizontal mergers, in which competitors in the same market seek to combine, that are more likely to be problematic and thus subject to antitrust scrutiny. AT&T abandoned its attempt to acquire T-Mobile in 2011, for example, after the DOJ filed suit to block it. With vertical mergers, however, the combined firm can achieve valuable efficiencies that it can pass on to consumers in the form of lower prices and/or better products or services. And no firms exit the market, so consumer choice does not decrease. Thus, the benefits to consumer welfare from such mergers almost always exceed any corresponding harms.

Here, the efficiency gains that AT&T and Time Warner could achieve are both obvious and substantial. In addition to benefitting from economies of scale (e.g., by combining their legal teams or human resource departments), control over the entire chain of distribution for Time Warner’s premium video content — from the production studio to the viewer — could allow the combined firm to deliver premium content to AT&T subscribers at substantially lower costs, or to develop new service offerings to compete with the innovative video services being developed by the likes of Apple, Amazon, Netflix and Disney.

The combined AT&T-Time Warner may well have greater leverage and bargaining power in carriage negotiations — Time Warner may get better deals with other distributors when licensing its content and AT&T may get better deals with other programmers when licensing their content. That may squeeze competing programmers and distributors, including giants like Disney and Verizon, by eating into their profit margins and forcing them to innovate in order to survive in the market.

But the antitrust laws don’t protect competitors; they protect competition. The recent vertical merger between Comcast and NBCUniversal — which was allowed to proceed despite identical concerns about increased leverage in carriage negotiations — is indistinguishable from the proposed AT&T-Time Warner merger. There is simply no reason to change course now and block AT&T’s acquisition of Time Warner.

The DOJ surely knows how weak its case is, so expect to see further negotiations about merger conditions in the coming weeks. AT&T has already signaled that it’s unwilling to accept any structural conditions, such as divesting political lightning rod CNN, but a targeted behavioral condition governing the licensing of Time Warner’s content to competing online video distributors, like Sling TV, may be enough to grease the wheels and get this merger over the line.

Whether AT&T is willing to accept such conditions, or whether it presses its case in court to try to get the merger approved without any conditions, remains to be seen. Regardless, the benefits from the merger would be substantial and undeniable, far outweighing any likely harms. AT&T’s acquisition of Time Warner should be approved posthaste.



Kosar talks book publishing with CHCW podcast

R Street Vice President of Policy Kevin Kosar and food writer Monica Bhide were recent guests of the Charles Houston Community Writers and sat for an extended discussion of publishing for the group’s podcast. Video is embedded below:

Puerto Rico: Storms and savings


Puerto Rico has a long history of disastrous hurricanes, as demonstrated once again this year by the devastating Hurricane Maria. Historically speaking, these disasters recur frequently on an island located “in the heart of hurricane territory.” Some notable examples follow, along with descriptions excerpted from various accounts of them.

  • In 1867, “Hurricane San Narciso devastated the island.” (Before reaching Puerto Rico, it caused “600 deaths by drowning and 50 ships sunk” in St. Thomas.)
  • In 1899, Hurricane San Ciriaco “leveled the island” and killed 3,369 people, including 1,294 drowned.
  • In 1928, “Hurricane San Felipe…devastated the island”…“the loss caused by the San Filipe hurricane was incredible. Hundreds of thousands of homes were destroyed. Towns near the eye of the storm were leveled,” with “catastrophic destruction all around Puerto Rico.”
  • In 1932, Hurricane San Ciprian “caused the death of hundreds of people”…“damage was extensive all across the island” and “many of the deaths were caused by the collapse of buildings or flying debris.”
  • In 1970, Tropical Depression Fifteen dumped an amazing 41.7 inches of rain on Puerto Rico, setting the record for the wettest tropical cyclone in its history.
  • In 1989, Hurricane Hugo caused “terrible damage. Banana and coffee crops were obliterated and tens of thousands of homes were destroyed.”
  • In 1998 came Hurricane Georges, “its path across the entirety of the island and its torrential rainfall made it one of the worst natural disasters in Puerto Rico’s history”…“Three-quarters of the island lost potable water”…“Nearly the entire electric grid failed”…“28,005 houses were completely destroyed.”
  • In 2004, Hurricane Jeanne caused “severe flooding along many rivers,” “produced mudslides and landslides,” “fallen trees, landslides and debris closed 302 roads” and “left most of the island without power or water.”
  • And in 2017, as we know, there was Hurricane Maria (closely following Hurricane Irma), with huge destruction in its wake.

These are some of the worst cases. On this list, there are nine over 150 years. That is, on average, one every 17 years or so.

All in all, if we look at the 150-year record from 1867 to now, Puerto Rico has experienced 42 officially defined “major hurricanes”—those of Category 3 or worse. Category 3 means “devastating damage will occur.” Category 4 means “catastrophic damage will occur.” And Category 5’s catastrophic damage further entails “A high percentage of framed homes will be destroyed…Power outages will last for weeks to possibly months. Most of the area will be uninhabitable for weeks or months.”

Of the 42 major hurricanes since 1867 in Puerto Rico, 16 were Category 3, 17 were Category 4 and 9 were Category 5, according to the official Atlantic hurricane database.

Doing the arithmetic (150 years divided by 42), we see that there is on average a major hurricane on Puerto Rico about every 3.5 years.

There is a Category 4 or 5 hurricane every 5.8 years, on average.

And Category 5 hurricanes occur on average about every 17 years.
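For readers who want to reproduce these figures, the back-of-the-envelope arithmetic can be checked in a few lines of Python. The category counts below are the ones cited above from the official Atlantic hurricane database; nothing else is assumed.

```python
# Check the average return intervals cited above (150 years of records, 1867-2017).
years = 150
counts = {"Category 3": 16, "Category 4": 17, "Category 5": 9}

major = sum(counts.values())                          # 42 major hurricanes
cat45 = counts["Category 4"] + counts["Category 5"]   # 26 Category 4-5 storms

print(f"Major hurricane (Cat 3+): one every {years / major:.1f} years")   # ~3.6
print(f"Category 4 or 5: one every {years / cat45:.1f} years")            # ~5.8
print(f"Category 5: one every {years / counts['Category 5']:.1f} years")  # ~16.7
```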

There are multiple challenging dimensions to these dismaying frequencies–humanitarian, political, engineering, financial. To conclude with the financial question:

How can the repeated rebuilding after such frequent destruction be financed? Thinking about it in the most abstract way, somewhere savings have to be built up. This may be either by self-insurance or by the accumulation of sufficiently large premiums paid for insurance bought from somebody else. Self-insurance can include the cost of superior, storm-resistant construction. Or funds could be borrowed for reconstruction, but they would have to be amortized quite rapidly before the next hurricane arrives. Or somebody else’s savings have to be taken in size to subsidize the recoveries from the recurring disasters.

Is it possible for Puerto Rico to have a long-term strategy for financing the recurring costs of predictably being in the way of frequent hurricanes, other than using somebody else’s savings?

Image by JEAN-FRANCOIS Manuel


Why cloture benefits both parties


Senate Rule XXII requires an affirmative vote of “three-fifths of the senators duly chosen and sworn” to invoke cloture, or end debate, on any “measure, motion, or other matter pending before the Senate … except on a measure or motion to amend the Senate rules, in which case the necessary affirmative vote shall be two-thirds of the senators present and voting.”

Consequently, cloture is typically understood today as making minority obstruction possible. A three-fifths vote is effectively required to schedule an up-or-down vote on most questions, absent the unanimous agreement of all 100 senators. However, ending debate on presidential nominations only requires a simple-majority vote. (Democrats used the nuclear option to reduce the threshold to invoke cloture on most nominees in 2013 and Republicans did the same for Supreme Court nominees earlier this year.)

Notwithstanding the recent use of the nuclear option, cloture remains a time-consuming process when the Senate is considering nominations and legislation. For most debatable measures, the entire process requires four calendar days to complete. This gives individual senators the ability singlehandedly to delay consideration of the majority’s agenda on the Senate floor simply by withholding their consent to expedite the decision-making process. Given this, the number of cloture votes is frequently cited as evidence of minority obstruction.

But there is more to cloture than minority obstruction.

It is certainly not incorrect to view cloture motions and minority obstruction as related. However, such a narrow focus overlooks the many advantages that the cloture rule offers Senate majorities. Then-Majority Leader Harry Reid, D-Nev., acknowledged these benefits in an exchange with then-Minority Leader Mitch McConnell, R-Ky., on the Senate floor in July 2012. “The filibuster was originally … to help legislation get passed. That is the reason they changed the rules here to do that.”

The majority, acting through its leadership, can use cloture to structure the legislative process to its advantage. When viewed from this perspective, the incidence of cloture votes also reflects an increase in the influence of the majority leader and, by extension, the majority party, in the Senate’s deliberations.

The evolution in the use of cloture during the second half of the 20th century increased the influence of the majority leader. Cloture is now utilized preemptively on a routine basis to speed consideration of legislation regardless of time spent on the floor. In this process, the majority limits the minority’s ability to debate measures freely and offer amendments pursuant to the Senate rules. Such behavior may simply result from the anticipation of expected obstruction by the minority party. It could also represent a genuine effort to push the majority’s agenda through the Senate unchanged in a timely manner. The restrictive process could also be utilized to defend carefully negotiated legislation from killer amendments or to protect majority party members from having to take tough votes.

The majority leader frequently uses cloture as a scheduling tool when the Senate considers major legislation. While filing cloture is a time-intensive process, it provides the only clearly established procedure for the resolution of debatable questions in the Senate. Thus, the cloture rule provides a small degree of certainty in an otherwise uncertain environment. The majority leader can use such certainty to his advantage by scheduling votes at the end of the week and immediately before a long recess to force an issue. Obstructing senators are less likely to risk the ire of their colleagues by forcing a rare weekend session.

The cloture rule also gives the majority leader the ability to impose a germaneness requirement on amendments to legislation post-cloture. Such a requirement may spare majority party members from having to take tough votes on nongermane amendments. It also protects carefully crafted legislation from poison-pill amendments unrelated to the underlying issue.

Finally, cloture is often utilized by the majority leader for symbolic purposes. By triggering an up-or-down vote on legislation, cloture establishes a clearly defined line of demarcation between the majority and minority parties on controversial issues. Such votes can be presented as take-it-or-leave-it propositions. The proponents of such measures can often portray the senators who vote against them as not supporting the underlying legislation.

Without the cloture process, the majority leader would not have these important, albeit limited, tools at his disposal, and he would thus be unable to structure the legislative process to the majority’s advantage using existing Senate rules. When combined with the practice of filling the amendment tree, the cloture process further allows the majority leader to limit the ability of individual senators to participate in the legislative process without having to change the Senate’s rules to reduce their procedural prerogatives.

The fact that the majority leader regularly files cloture early in the legislative process, before any actual obstruction can be said to have occurred on a measure, is illustrative of the benefits that Senate majorities derive from the cloture process. As the figure below demonstrates, the instances in which cloture has been utilized during the early stages of a measure’s consideration on the Senate floor have increased dramatically since 2001. This dynamic can be isolated, and the majority’s pre-emptive use of cloture more readily discerned, by comparing the total number of cloture motions filed in a Congress with the number that remains after omitting motions filed on the first day of a bill’s consideration or very early in the legislative process. A minimal sketch of that comparison appears below.
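The sketch uses entirely hypothetical cloture-motion records, not actual Senate data; it simply tallies, for each Congress, the total motions filed and the number filed on or immediately after the first day of a measure's consideration.

```python
# Hypothetical records: (congress, legislative day of the measure's consideration
# on which the cloture motion was filed). Day 0 or 1 counts as "pre-emptive."
from collections import Counter

motions = [
    (114, 0), (114, 0), (114, 5), (114, 12),
    (115, 0), (115, 1), (115, 0), (115, 7), (115, 0),
]

total = Counter(congress for congress, _ in motions)
later = Counter(congress for congress, day in motions if day > 1)

for congress in sorted(total):
    preemptive = total[congress] - later[congress]
    print(f"{congress}th Congress: {total[congress]} motions filed, "
          f"{preemptive} filed pre-emptively")
```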


The takeaway from this is that the cloture process may benefit both the majority and the minority parties in the Senate today.

Image by Jonathan O’Reilly


How the FCC’s media ownership reforms could save local news


The following post was co-authored by R Street Tech Policy Associate Joe Kane. 

Local news is in decline. As advertising revenues plummet and both reporters and subscription dollars increasingly flow to a handful of coastal media outlets, local newspapers and broadcasters throughout the rest of the United States struggle to get by.

Shifts in media consumption in the digital age are partly to blame, but these local news outlets also are hamstrung by arcane ownership restrictions that inhibit their ability to innovate and compete. The Federal Communications Commission’s decades-old restrictions on local media ownership may have made sense when Americans’ news outlets were limited to local newspapers, radio and three commercial TV broadcasters (ABC, CBS and NBC — Fox wasn’t formed until the mid-1980s). But with the rise of cable news and the commercial internet, these restrictions now skew the media marketplace and become more outdated every day.

Thankfully, this broken situation is about to be fixed. This week, the FCC is set to pass commonsense reforms to its local-media ownership rules that are long overdue. These updated rules will better reflect the realities of the current media landscape and allow local newspapers and broadcasters to compete with other media outlets on a level playing field. The changes include eliminating bans on media cross-ownership, updating local-broadcast merger rules and allowing broadcasters to enter into joint sales agreements (JSAs) for advertising without automatically qualifying as merged entities for purposes of the ownership restrictions.

FCC Chairman Ajit Pai recently outlined the importance of eliminating the cross-ownership bans. Like many FCC rules, the bans contemplated a siloed and heavily concentrated media market, which in no way resembles the cornucopia of media outlets available to Americans today. The cross-ownership bans date back to the 1970s, when local broadcasters and newspapers provided the only access to news in many markets. At that time, prohibiting any one owner from controlling both a radio station and a television station in the same market, or a newspaper and a television or radio station in the same market, was a way to ensure Americans had access to a diverse array of viewpoints and news sources.

However, with the rise of cable news and the internet, these cross-ownership bans no longer make any sense. Jeff Bezos (Amazon CEO and world’s richest man) was allowed to buy the Washington Post, and Facebook or Google legally could try to buy The New York Times. But a local broadcaster buying a struggling newspaper is strictly forbidden.

That simply makes no sense. Any merger that threatens to create a monopoly or substantially lessen competition (like those NYT hypotheticals) could still be blocked under general antitrust law. But many cross-ownership deals between local newspapers and broadcasters would raise few, if any, antitrust concerns, so the per se ban on them should be removed. Moreover, allowing cross-ownership between broadcasters and newspapers would likely lead to more coverage of local issues.

The FCC is also updating its rules for mergers among broadcasters, again to recognize the changing media marketplace. Previously, a top-four broadcaster and a smaller broadcaster were allowed to merge only if doing so left at least eight independently owned broadcast stations in the market. This so-called “Eight Voices Test” doesn’t count cable news or the internet as even a single “voice” in the market, which is absurd, given the effectively infinite capacity for independent voices on these platforms. Thankfully, the FCC is set to eliminate this outdated test and allow general antitrust law to govern these mergers instead.

Similarly, the FCC is relaxing its rule that prohibits all mergers between top-four broadcasters, choosing instead to review these mergers on a case-by-case basis. Currently, the FCC requires the four biggest TV broadcasters to be independently owned, regardless of how many other stations are in the market. This nationwide, bright-line rule is not appropriate in all markets. For example, in a market with two very large stations and several smaller stations, a merger between the third and fourth biggest stations could benefit both consumers and competition by putting greater pressure on the two biggest stations. In many cases, such a merger would be harmful, but employing case-by-case review will allow the FCC to evaluate actual market conditions, rather than sticking to a rigid line drawn in a bygone era.

Finally, the FCC is amending its rule that treats any broadcasters with joint sales agreements (JSAs) as being under common ownership. Again, this is simply a case of the FCC modernizing its media ownership rules to bring them more in line with the antitrust rules that govern competition in every other sector. The current rules assume that if two broadcasters use a JSA in advertising sales, it automatically gives one station enough control over the other to amount to common ownership. It’s true that such arrangements can amount to collusion and unfair restraints on trade, depending on the degree of control they exert. But they can also greatly reduce costs for struggling broadcasters who cannot afford their own sales teams. The current restriction on JSAs harms the public interest by blocking these efficiency gains. Going forward, whether JSAs are attributable for purposes of ownership restrictions will be assessed under general antitrust standards.

The media marketplace is increasingly converging toward the internet and over-the-top services, yet the FCC’s local media ownership rules were devised before the internet even existed. The commonsense reforms the FCC has proposed for these antiquated rules are well overdue. By removing unnecessary restrictions and updating its standards, the FCC can balance the playing field, stimulate investment and help save local news media.

Image by Zerbor

Virtue signaling won’t save the planet, but state compacts might


Senior U.S. climate officials arrived Monday in Bonn, Germany, a week into the latest meeting of the United Nations-sponsored climate change project known as the Conference of Parties-23 (COP-23).

To no one’s surprise, the “rest of the world” (which is to say, Europe and the American political left, mostly) remains unhappy about the United States’ decision to withdraw from the Paris Climate Accord in June. Nonetheless, they are committed to find a way to persuade the country (which is to say, the red states) to see the error of its ways.

Over the weekend, four Democratic governors from states with active environmental movements—Jay Inslee of Washington, Jerry Brown of California, Kate Brown of Oregon and Terry McAuliffe of Virginia—verbally thrashed the Trump administration, although Brown was taken aback when even he was booed and heckled by “climate justice” protestors.

But to no avail.

On Monday, as several of Trump’s most senior climate negotiators took part in a panel talk on “clean fossil fuels,” attendees started singing a clever protest song to the tune of Lee Greenwood’s “God Bless the U.S.A.”

But the Trump administration still plans to exit the Paris Agreement. What gives?

Suffice it to say, taking moral umbrage at the United States has nowhere near the influence over American policy that hard assets like the Pentagon’s nuclear umbrella over Europe or the U.S. Navy’s 11 aircraft carriers keeping the world’s trade routes open have had on global affairs. Hence the distinction between “hard power” and “soft power” that Harvard’s Joseph Nye drew many years ago.

The top-down approach to climate change the United Nations prefers was never going to work. Major climate meetings have been taking place for 23 years—hence the name COP23—but have never succeeded in creating a workable international scheme. On two separate occasions, the United States has signed up and then removed itself from a global climate agreement, first in 2001 under George W. Bush and now in 2017 under Trump.

Thankfully, a more decentralized approach to carbon policy is quietly gaining steam, as states and cities band together to pursue their own goals. Speaking during a panel discussion in Bonn, McAuliffe celebrated the recent election wins in Virginia, which ushered in a new swath of Democrats who will enjoy something like parity with Republicans in the Legislature’s lower house, not to mention a new Democratic governor, lieutenant governor and attorney general.

This means Virginia will likely become a member of the nine-state Regional Greenhouse Gas Initiative (RGGI), which has had some success cutting emissions from the power sector. Carbon markets are less economically efficient than a direct carbon price, but since RGGI’s creation in 2005, carbon emissions in its member states have fallen 40 percent, thanks in large part to the development of natural gas reserves from hydraulic fracturing. While RGGI is not an ideal vehicle to place a market price on carbon, this type of compulsory, cost-sharing system is the longest-lasting successful carbon market still in existence.

Along with Virginia, the election of a Democrat to replace outgoing Republican Gov. Chris Christie of New Jersey also means that state may rejoin RGGI, after leaving the group in 2011.

In other words, the growth of regional carbon markets is still a going concern. It even could force real U.S. emissions reductions in the coming years, even as the sound and fury of U.N. meetings along the Rhine continue to signify nothing.

Image by r.classen


Massachusetts carbon tax bills are a mixed bag


The search for climate change solutions that keep science front and center and political preferences secondary has led to one frustration after another. But that soon may change. A revenue-neutral carbon tax holds the promise of reducing carbon emissions without increasing the size of government – the principal objection of conservatives who long have been skeptical of more prescriptive climate regulations.

If properly designed, a revenue-neutral carbon tax can employ market-based incentives, rather than government regulations and subsidies, to ensure that pollution is appropriately priced. While no U.S. state has adopted a carbon tax to date, much less a revenue-neutral version, Massachusetts is poised to become the first state to successfully pass a hybrid version.

Two carbon-pricing bills of note have been filed this legislative session. S.1821, filed by state Sen. Michael J. Barrett, D-Lexington, and H.1726, filed by state Rep. Jennifer Benson, D-Lunenburg, both seek to assess fees on carbon emissions. Yet the Senate bill is, as a matter of both politics and policy, by far the better of the two.

Barrett’s bill would simply assess a fee on emissions without adding to government bureaucracy. That way, Massachusetts taxpayers would pay only for the price of their pollution, and no more. Conversely, the House proposal would divert 20 percent of the revenue generated by the tax into a so-called “Green Infrastructure Fund” to support investments in “transportation, resiliency and clean energy projects that reduce greenhouse gas emissions, prepare for climate change impacts, assist low-income households and renters in reducing their energy costs, and create local economic development and employment.”

While that laundry list of well-intentioned spending certainly aspires to assist the commonwealth, there’s no indication that the state would put the funds it collects to better use than private actors would under a system in which carbon emissions are appropriately priced. In other words, a revenue-neutral carbon tax could achieve all of the benefits sought by establishing a green infrastructure fund, without creating new government programs or adding to government waste.

A revenue-neutral carbon tax need not harm Massachusetts’ economy, as evidenced by the well-balanced policy approach taken by the western Canadian province of British Columbia, which adopted a similar fee-and-dividend approach to carbon pricing in 2008. The United Nations Framework Convention on Climate Change estimates B.C.’s tax has reduced the province’s emissions by up to 15 percent with no observable drag on overall economic performance. Indeed, between 2007 and 2014, British Columbia’s real gross domestic product grew by 12.4 percent, stronger than the Canadian average.

The only downside to the Senate bill is that it, unfortunately, does nothing to reduce the tax burden on Massachusetts residents. Rather than use the fee to lower, say, the income tax, the revenue would finance a Carbon Dioxide Emissions Charges Rebate Fund. All proceeds would be returned to residents and employers in the form of rebates. Analysis from the Center on Budget and Policy Priorities concludes that if large rebates were distributed through an efficient delivery system, they would be able to protect low-income households from the brunt of the tax, but would not be able to fully cover households and businesses with large carbon footprints.

A better approach would be to apply the revenues to reduce or eliminate more destructive taxes like the corporate excise tax or the personal income tax. Taxing bad things, like carbon emissions, rather than good things, like labor and investment, would build ongoing support for the carbon tax among citizens and businesses and allow any negative effects to be more than offset by a growing state economy.

While critics in both parties and on both sides of the climate change debate may find fault with the Senate bill, it is a step in the right direction for the country and the commonwealth. If enforced properly, this legislation will reduce harmful carbon emissions and benefit Massachusetts residents and businesses, without contributing to the stream of wasteful government spending and unnecessary bureaucratic growth.

If legislators and environmental groups are serious about addressing climate change, they should do so in a way that truly benefits everyone. If, in fact, the goal is to reduce carbon emissions and bring economic benefits to residents and businesses, a revenue-neutral carbon tax is the best way forward.

Image by funnybear63


A new development involving the Congressional Review Act


A debate has broken out in the regulatory-reform community this past year over how properly to construe the reach of the Congressional Review Act. Traditionally, most observers have viewed the CRA as a tool by which Congress could repeal new regulations issued within the last 60 legislative days. But some legal scholars have argued that, while this is broadly correct, it’s far from clear when the CRA’s 60-day clock should start ticking.

Paul Larkin from the Heritage Foundation is among those to point out that, under the CRA’s text, the clock cannot start until the regulation in question has been submitted to Congress. Because many agency rules are never officially submitted to Congress—even ones that were promulgated many years ago—the 60-day clock was never activated for those rules and Congress could thus still repeal them using the fast-track mechanism.

Another component of this debate has been clarifying what, exactly, constitutes a “rule” for CRA purposes. The text of the CRA incorporates the Administrative Procedure Act’s definition of “rule,” which as Larkin points out, “has been recognized as quite broad.” This broader interpretation of the term “rule” could encompass informal agency actions like policy statements or guidance, which do not go through the more formalized process of notice-and-comment rulemaking under the APA.

Congress has so far appeared reluctant to embrace this broader interpretation of the CRA’s text and use it to repeal rules and other agency action stretching back into previous administrations. But that could be changing. The Wall Street Journal editorial board and other media outlets are reporting that Sen. Pat Toomey, R-Pa., recently asked the Government Accountability Office to issue a determination as to whether a 2013 leveraged-lending guidance document from the Obama administration constituted a “rule” for CRA purposes.

The GAO finally has issued its ruling, concluding that the lending guidance was, in fact, a rule under the CRA, meaning it is eligible for repeal under the act. Further, under Senate precedent, the publication of a GAO report such as this one is treated as the official trigger for the CRA’s 60-day legislative clock. As the nonpartisan Congressional Research Service has noted:

In some instances, an agency has considered an action not to be a rule under the CRA and has declined to submit it to Congress… In the past, when a Member of Congress has thought an agency action is a rule under the CRA, the Member has sometimes asked GAO for a formal opinion on whether the specific action satisfies the CRA definition of a ‘rule’ such that it would be subject to the CRA’s disapproval procedures.

GAO has issued 11 opinions of this type at the request of Members of Congress. In seven opinions, GAO has determined that the agency action satisfied the CRA definition of a ‘rule.’ After receiving these opinions, some Members have submitted CRA resolutions of disapproval for the “rule” that was never submitted…

Members have had varying degrees of success in getting resolutions recognized as privileged under the CRA even if the agency never submitted the rule to Congress. It appears from recent practice that, in these cases, the Senate has considered the publication in the Congressional Record of the official GAO opinions discussed above as the trigger date for the initiation period to submit a disapproval resolution and for the action period during which such a resolution qualifies for expedited consideration in the Senate…

It remains to be seen if Congress will pursue a resolution of disapproval under the CRA to repeal this particular rule on leveraged lending, but if it does, the potential implications run deep. Congressmen could ask GAO to issue more opinions determining whether past agency actions constitute rules for CRA purposes, and then seek to repeal them. The law firm Cleary Gottlieb observed in a memorandum on this development:

The GAO’s Leveraged Lending Opinion casts a shadow of uncertainty over the applicability and future viability of the Agencies’ leveraged loan supervision regime, and critically, other agency actions that could be characterized as ‘rules’ subject to Congressional disapproval. In fact, if Congress seeks to address other agency ‘rules’ that were never submitted to Congress under the CRA, the total volume of agency interpretations and statements of policy that could potentially become subject to Congressional disapproval would be very large indeed.

The Red Tape Rollback project (of which the R Street Institute is a partner) has been compiling a list of agency actions and rules that were never properly submitted to Congress and are therefore potentially still eligible for repeal via the CRA. We’ll see where Congress goes from here, but it’s possible it could be on the brink of adopting a broader interpretation of the CRA.

Image by iQoncept


NAFTA negotiators should respect domestic labor rules


The following blog post was co-authored by R Street Research Assistant Randy Loayza.

As the United States, Canada and Mexico continue to renegotiate the North American Free Trade Agreement, political posturing and protectionism hinder progress toward modernization and trade liberalization.

The question of labor unions has come to the forefront of Canada’s concerns. Specifically, Canadian trade representatives claim that the “right-to-work” (RTW) laws in 28 American states provide an unfair advantage to individual states over Canadian provinces by allowing states to curb the collective bargaining power of unions. Canada also seeks higher labor standards in Mexico as part of this protectionist push toward a more aggressive international harmonization of labor laws.

It is not surprising that Canada feels compelled to address states’ right-to-work laws, as these prevent unions from mandating dues payments from employees who opt not to join the industry’s union. Canadian labor laws allow for “a majority vote of the bargaining unit employees” to decide whether unions can collect mandatory dues from employees. This is also the standard in American states without RTW laws.

Canadian trade representatives hope to persuade the United States, as well as Mexico, to ratify the eight core conventions of the International Labor Organization (ILO) and various labor chapters from the Canada-EU Comprehensive Economic and Trade Agreement (CETA). Canadian trade representatives are even advocating federal legislation to bar RTW states from enforcing such laws, which would override the United States’ system of federalism.

It would be conjecture to assume that Canada’s protectionist pleas are solely a response to fear of being at a disadvantage if its labor markets compete with U.S. markets that operate under RTW laws, but the idea is not farfetched. The very notion that RTW laws provide an advantage to individual states is a testament to the value of open and free choice. In fact, states with RTW laws provide a better environment for investment opportunities and are less conducive to adversarial employer-employee relations. Moreover, states that mandate union membership as a condition of employment have seen labor migrate to states with RTW laws.

Labor unions see RTW laws as fundamental risks to their already declining membership base. The AFL-CIO, for instance, is extremely hostile toward states that employ such laws, citing the collective bargaining leverage and wage benefits that unions provide. As such, workers’ rights are at the forefront of the AFL-CIO’s concerns. Under the National Labor Relations Act, it is illegal in the United States for labor unions to mandate membership as a condition of employment. States without RTW laws do allow labor unions to bargain collectively with employers to mandate both membership and dues payments from any employee, although most arrangements require only the latter.

Supporters of Canada’s position on U.S. RTW laws focus narrowly on union membership as the largest contributing factor in wages and labor-participation rates. In reality, other factors like emerging markets, the Great Recession, modernization of integrated supply chains, globalization and the move toward higher-skilled industries arguably have led to greater changes in American employment and wages. Also, varying productivity rates determine wage rates in the labor market and across countries. Wage rates also drive immigration among countries—as trends in U.S.-Mexico migratory patterns have shown.

Evidence from around the globe has shown that economic development through increased international trade fosters higher labor standards. While there may be some concerns over unfair labor practices that disadvantage unions, such cases should not be used to threaten the progress made through NAFTA. Larger problems of inequality and contemporary productivity-wage disparities should also concern both NAFTA supporters and skeptics alike. But such concerns will not be addressed through international labor mandates. Instead, renegotiation toward a stable, modernized, market-driven and rules-based NAFTA must respect the sovereignty of domestic labor laws.

Image by DarwelShots


Big Ag reaping federal subsidy benefits


With the farm bill up for reauthorization in 2018, policymakers will soon have a chance to reassess farm subsidies and target the rampant waste and cronyism in our farm-support system.

The Environmental Working Group (EWG) this week published the most recent update to its Farm Subsidy Database, which serves as a useful guide to illustrate who is and who is not benefiting from the current system. It confirms the trend seen over decades of aggregated data on farming subsidies: the most successful agribusinesses receive the largest portion of federal farm subsidies.

EWG estimates that, between 1995 and 2016, farms that ranked in the top 10 percent by income received about 77 percent of “covered commodity” subsidies, or subsidies for crops such as corn and soybeans. To put that into perspective, these large-scale farms have an average household income of $1.1 million.

The Congressional Budget Office earlier this year projected that current farm subsidy programs would cost taxpayers $7.5 billion more than originally projected. But rather than address those spiraling costs as part of its deliberations on the 2014 farm bill, Congress proposed cutting other programs, notably the Supplemental Nutrition Assistance Program, which actually has proven to be less expensive than originally expected.

The current system represents outright cronyism. Congress has neglected proposals for pragmatic reform, with the ultimate effect of disadvantaging smaller and beginning farmers, who must cope with rising land prices and farm consolidation as the mega-farms get richer.

Fortunately, EWG’s newly updated database gives policymakers a reliable source of baseline information on farm subsidies. As the deadline to reauthorize the farm bill approaches, EWG’s work will be essential to the movement to reform federal agriculture programs and to quash agribusiness cronyism.

Image by Juergen Faelchle


Jon Coppage on the ‘Stacy on the Right’ show

R Street Visiting Senior Fellow Jonathan Coppage recently joined “Stacy on the Right” host Stacy Washington to discuss how accessory dwelling units were banned in many cities, but can be restored to provide families with financial and social flexibility across a home’s life-cycle. Video of the show is embedded below:

Latest reduction in duties shows promise in US-Canada softwood lumber trade


A recent decision by the U.S. Commerce Department could signal de-escalation of what has been one of the most fraught American trade disputes with Canada. Following the announcement of preliminary anti-dumping duties in June, the finalized revisions of total countervailing and anti-dumping duties—the total amount of tariffs on Canadian softwood lumber imports—reduce rates for most Canadian exporters.

This is the latest, most promising development in a long-running trade dispute. While the two countries had been trading in softwood lumber since the 1800s and first imposed tariffs on one another in the 1930s, the contemporary dispute started in 1982, when U.S. softwood lumber companies filed a complaint against competing Canadian exporters for unfair subsidization.

Subsequent claims have also contended Canadian provincial governments artificially set below-market prices for the use of public timberlands. These public lands comprise 94 percent of total Canadian lumber-yielding areas, while only 42 percent of U.S. timberlands are under government control.

Canadian provinces have since reformed their leasing and pricing systems and negotiated an agreement on market-share caps as part of prior attempts to avoid stricter tariffs. After five more cycles of duties claims and agreements, many of the same concerns over unfair pricing remain. American lumber companies also blame a favorable currency exchange rate for giving Canadian imports the advantage in American lumber markets.

Despite these protectionist pressures, the most recent revisions come at a good time for the American homebuilding industry. The National Association of Home Builders has been adamant about the need for lumber imports to meet domestic demand. It also estimates that, of the approximately 33 percent of lumber that was imported last year, 95 percent came from Canada.

Economists have long agreed on the benefits of eliminating tariffs. While this latest revision shows a step in the right direction, the continued insistence on lumber duties—especially in the wake of rising lumber demand due to natural disasters—remains concerning to those who favor free markets and mutually beneficial, harmonious trade.

Image by quangmooo


Locked Up Without Conviction: Pretrial Integrity and Safety Act, strategies for public safety

R Street hosted a recent Capitol Hill panel to discuss how reliance on cash bail as a primary determinant for detention in pretrial hearings can harm public safety and hamper individual liberty. The panel was moderated by Marc Howard, professor of government and law, Georgetown University.

The panelists considered how objective pretrial risk assessments could serve as a complement to bail. Sekwan Merritt, a former inmate, discussed how the jail and cash bail system impacts people of color, and how facilities can increase rehabilitative resources to stop the revolving door of local jails. The panel also discussed the Pretrial Integrity and Safety Act, recently introduced bipartisan legislation sponsored by Sens. Rand Paul, R-Ky., and Kamala Harris, D-Calif.

Other panelists included:

  • Arthur Rizer, R Street’s director of national security and justice policy
  • Ed Chung, vice president for criminal justice reform, Center for American Progress
  • Robert Green, director of the Montgomery County Department of Correction and Rehabilitation
  • Sekwan Merritt, reform advocate and former inmate

Video of the event can be found here.

The Library of Virginia’s ‘Teetotalers and Moonshiners’ exhibit


R Street’s team recently made a trip to Richmond, Virginia, to visit the Library of Virginia’s “Teetotalers & Moonshiners” exhibit. Given Virginia’s rich history with moonshine, Richmond is an ideal location for such an exhibit. The exhibit, which is open through Dec. 5, tells the story of how Prohibition started local before going national.

The display starts by recounting the time period right before national Prohibition, when Virginia already was starting to clamp down on booze. In 1877, the state Legislature started taxing alcoholic spirits and, by 1886, it gave counties the ability to shutter saloons and other drinking establishments within their borders (what became known as the “local option”). On the brink of Prohibition going national, nearly 90 percent of Virginia counties had shut down their drinking establishments.

Not satisfied with the local option, Prohibition fever—spurred by the Progressive and Temperance movements—quickly advanced to the state level, where in 1914 the Legislature approved a ballot referendum on statewide prohibition. State citizens voted in favor of the statewide booze ban, although the exhibit notes that African-Americans and the white working-class—two constituencies who disfavored the ban—were largely excluded from the vote.

While Prohibition wouldn’t take effect nationally until 1920, Virginia wasted little time in commencing its crackdown on bootleggers. Nov. 1, 1916, was the official “last call” for Virginia distillers, breweries and bars; most went out of business during the dry years, although some large companies were able to stay afloat by switching their production to beverages such as soda.

As followers of the Prohibition Era know, a black market of booze quickly sprang up, despite the government’s best enforcement efforts. As one placard at the exhibit described it:

Prohibition created a thriving underground economy and culture. Those in the know used passwords and secret knocks to access ‘nip joints’ and speakeasies. Moonshiners, makers or sellers of illicit whiskey, hid their operations in remote rural landscapes. Hidden compartments in clothing, everyday items, and even cars moved alcohol from place to place.

Virginia was ground zero for this moonshining culture, as remote regions of the Blue Ridge Mountains—places like Franklin County in southwest Virginia—made for ideal bootlegging locales. As I wrote recently in a piece on Virginia moonshine for National Public Radio’s “The Salt”:

[During Prohibition] Virginia’s backwoods distillers were forced underground. Moonshining in the Blue Ridge Mountains became so notorious that Franklin County, in the southwest corner of Virginia, was dubbed the “Moonshine Capital of the World,” after it was estimated that 99 out of 100 county residents were involved in the moonshine trade.

Before Prohibition, ‘getting moonshine in areas like Franklin County was not much different from buying eggs or milk,’ says Matt Bondurant. Bondurant’s novel, Wettest County in the World, is based on his grandfather’s moonshining exploits in Franklin County. Matt’s brother Robert now runs Bondurant Brothers Distillery, which distills unaged whiskey not far from Franklin. ‘But when Prohibition came around, then it became a potential money-making possibility,’ says Bondurant.

Money led to crime, which in turn led to the dramatic law-enforcement raids, car chases and prosecutions that so many Americans associate with Prohibition-era moonshining. Moonshiners were forced to operate in remote Appalachian regions like Franklin to avoid detection, and they went to great lengths to hide their efforts — including burying their stills underneath fake graveyards back in the mountains.

Perhaps the most interesting part of the exhibit was its discussion of Prohibition’s legacy. Although it notes that Prohibition “barely stemmed the flow of alcohol across the country,” it also points out that certain vestiges of Prohibition are still with us. Namely, when Prohibition was repealed in 1933, Virginia established its Alcoholic Beverage Control department to regulate booze within the state. Other states likewise passed comprehensive regulatory regimes governing alcohol in the immediate aftermath of repeal.

Unfortunately, this post-Prohibition regulatory structure remains mostly intact to this day. As R Street has noted in the past, Virginia labors under some of the worst alcohol laws in the country. It remains a “control state” in which distilled spirits can only be sold in government-operated liquor stores, and it severely taxes and handicaps its spirits producers.

Indeed, while it’s tempting to view exhibits like “Teetotalers & Moonshiners” as windows into a long-ago past, one that will remain relegated to the dustbin of history, the legacy of Prohibition still haunts us in many ways. More than anything, learning about the Prohibition Era underscores the importance of pursuing rational alcoholic beverage laws that promote free enterprise and consumer choice.

For more pictures from the exhibit (both ones taken by the team and ones provided by the exhibit), check out the images below:

Was the Bank of England right to lie for its country in 1914?


Jean-Claude Juncker, now president of the European Commission and formerly head of the group of European finance ministers, once sardonically observed of government officials trying to cope with financial crises: “When it becomes serious, you have to lie.” The underlying rationale is presumably that officials think stating the truth might make the crisis worse.

No one would be surprised by politicians lying, but Juncker’s dictum is the opposite of the classic theory of the Roman statesman Cicero, who taught that “What is morally wrong can never be expedient.” Probably few practicing politicians in their hearts agree with Cicero about this. But how about central bankers, for whom public credibility is of the essence?  Should they lie if things are too bad to admit?

An instructive moment, when things got serious enough to lie about, came for the Bank of England at the beginning of the First World War crisis in 1914. At the time, the bank was far and away the top central bank in the world, and London was the unquestioned center of global finance. One might reasonably have assumed the Bank of England to be highly credible.

A fascinating article, “Your country needs funds: The extraordinary story of Britain’s early efforts to finance the First World War,” published in Bank Underground, a blog for Bank of England staffers, reveals the less-than-admirable behavior of their predecessors at the bank a century before. Or, alternatively, do you, thoughtful reader, conclude that it was admirable to serve the patriotic cause by dishonesty?

Fraud is a crime, and the Bank of England engaged in fraud to deceive the British public about the failure of the first big government war-bond issue. This issue raised less than a third of its target, but the real result was kept hidden. Addressing “this failure and its subsequent cover-up,” authors Michael Anson, et al., reveal that “the shortfall was secretly plugged by the Bank, with funds registered individually under the names of the Chief Cashier and his deputy to hide their true origin.” In other words, the Bank of England bought and monetized the new government debt and lied about it to the public to support the war effort.

The lie passed into the Financial Times under the headline, “OVER-SUBSCRIBED WAR LOAN”—an odd description, to say the least, of an issue that in fact was undersubscribed by two-thirds. Imagine what the Securities and Exchange Commission would do to some corporate financial officer who did the same thing.

But the responsible officers of the British government and the Bank of England thought that speaking the truth would have been a disaster. Say the authors, “Revealing the truth would doubtless have led to the collapse of all outstanding War Loan prices, endangering any future capital raising. Apart from the need to plug the funding shortfall, any failure would have been a propaganda coup for Germany.” Which do you choose: truth or preventing a German propaganda coup?

We learn from the article that the famous economist John Maynard Keynes wrote a secret memo to His Majesty’s Treasury, in which he described the Bank of England’s actions as “compelled by circumstances” and noted that they had been “concealed from the public by a masterful manipulation.” A politic and memorable euphemism.

Is it right to lie to your fellow citizens for your country? Was it right for the world’s greatest central bank to commit fraud for its country?  The Bank of England thought so in 1914. What do central banks think now?

And what do you think, honored reader?  Suppose you were a senior British official not in on the deception in 1914, but you found out about it with your country enmeshed in the expanding world war. Would you choose the theory of Juncker or Cicero?

Image by sylv1rob1

James Wallner praises the value of political conflict on the Ezra Klein Show

Does politics, and Congress in particular, actually have too little conflict, rather than too much? That’s the argument R Street Senior Fellow James Wallner put forward in a recent episode of the Ezra Klein Show, a podcast produced by the Vox Media Network. From his study of congressional history and procedure, Wallner concludes that the leadership of both parties has been using the rules to stymie disagreement, and the effect has been to make it nearly impossible for positions to be made clear, compromises to be tested and ways forward to be found.

Congress uses the Congressional Review Act to abolish another regulation

During the early months of the Trump administration, Congress and the president made unprecedented use of the Congressional Review Act to repeal regulations. The CRA, which was enacted in 1996 with bipartisan support, allows Congress to use an expedited process to overturn regulations that agencies have enacted within the past 60 legislative days.

Despite the CRA being one of the most powerful regulatory oversight mechanisms available to Congress, it was a seldom-used tool until the current Republican-controlled Congress and White House deployed it 14 times this past spring to overturn Obama-era regulations. Because of the law’s 60-day time limit, however, many observers figured the CRA would go dormant after Congress repealed all the eligible Obama-era rules.

This assumption failed to consider the possibility of using the CRA to repeal rules from independent agencies, which can still act at cross-purposes with the current president. For example, the present head of the Consumer Financial Protection Bureau, Richard Cordray, is an Obama appointee who is only removable from office “for cause.” This limits President Donald Trump’s ability to insert a new agency head. Sure enough, this past July, the CFPB promulgated an arbitration rule that was opposed by the White House.

Despite its opposition to this rule, the only recourse available to the administration was enlisting Congress to pass a CRA resolution repealing the rule. This week, the Senate sent such a resolution to the president’s desk. If Trump signs the bill as expected, it will bring the CRA repeal count up to 15 since January.

‘Economic competitiveness’ doesn’t always mean stronger intellectual property laws


The following piece was co-authored by Joe Kane of R Street’s tech policy team.

In a recent letter, several dozen conservative leaders joined together to call on Republican policymakers to strengthen U.S. intellectual property laws. They lament the loss of U.S. manufacturing jobs that has turned “American communities into ghost towns” (about that) and warn this trend also will overtake the innovation sector as inventors move overseas.

In fact, patent applications are up from both U.S. and foreign applicants. Nonetheless, the letter’s authors contend that bolstering U.S. intellectual property laws must be just as core to the conservative agenda as deregulation and tax reform, lest we sacrifice our constitutional principles and global economic competitiveness.

The authors are correct to note that “patent protection was enshrined in our Constitution.” Where they are wrong is in their attempt to frame intellectual property as something the founders saw as an unfailing good or absolute right. Article 1, Section 8 of the U.S. Constitution gives Congress the power to create intellectual property protections like patents “[t]o promote the progress of science and the useful arts.” It is an enumerated power of the legislature, just like the power to regulate currency, to establish post offices and to tax. In other words, patents are something Congress may employ, specifically with the goal to foster innovation. The patent system is worth defending or even strengthening to the extent that doing so supports that goal. Insofar as it undermines innovation, it should be scaled back.

It would, for example, be false to assume that because innovation is desirable, we should always endeavor to grant more patents, irrespective of their quality, or that we should go to extreme lengths to enforce them or protect them from future challenge. An improperly structured patent system will stifle innovation, rather than encourage it, as the framers intended. Granting exclusive rights to an inventor always poses the risk of delaying when a particular innovation becomes widespread and making it costlier for others to build upon it. These costs are typically worthwhile where they serve as incentives for innovations that wouldn’t otherwise see the light of day. Patents are the means we use to create those incentives. But when low-quality or overly broad patents are issued, future innovation is stalled without countervailing benefits.

Patent examiners are fallible. The U.S. Patent and Trademark Office has issued numerous low-quality patents and the Government Accountability Office has cited the USPTO for inconsistent standards of quality and clarity. This uncertainty leads to litigation, which imposes greater costs on innovators. Indeed, patents have been granted for “inventions” that already exist or that were completely obvious, such as podcasts, swinging on a swing, cats chasing laser pointers and artificial sticks, to name a few.

Because challenging dubious patents through litigation is expensive, Congress moved in 2011 to create new avenues for expedited review of granted patents that may not meet the requirements of novelty and nonobviousness. One of these proceedings—inter partes review (IPR)—has proved highly successful in accomplishing this goal. As we argue in a recent R Street paper, this process offers a mechanism to challenge dubious patents that is much quicker and cheaper than traditional litigation. The median inter partes review challenge costs about $350,000 through the appeal phase, as compared to $3.1 million in district court.

The letter’s signatories, however, characterize the Patent Trial and Appeal Board, which conducts IPR, as a “patent death squad” that exists “for the sole purpose of invalidating patents.” They don’t appear even to consider the possibility that some improperly granted patents should be invalidated. They also do not posit what proportion of patents – or name even a single specific patent – invalidated by IPR they think should have remained in force.

No system is perfect, but we should discuss the benefits and costs of IPR, rather than assume it is an attack on intellectual property simply because it invalidates patents. Just as procedures to invalidate counterfeit or flawed titles in real estate are not an attack on real property rights, procedures to weed out bad patents are not an attack on innovation or on patents, in general. Supporters of strong patent laws should be especially interested in the quality and clarity of patents issued.

Of course, the reality that some patent abuse exists does not mean that we should abandon patents, but it does mean that we should carefully examine the system’s trade-offs and the externalities it creates. We should not assume that simply having more patents or stronger enforcement tools means more innovation. A world with 100-year patent terms or severe criminal penalties for infringement would inevitably be one with less innovation.

We all want an innovative society, and the patent system is an important means to further that goal. But we shouldn’t shut down conversations about patent reform or other improvements to the current system that could create more efficient ways to invalidate bad patents or otherwise lower transaction costs in litigation. To keep America on top, we must recognize and embrace the artful balance our founders intended. In other words, patents are a tool and an incentive that can be quantified and measured to inform our policymaking. They’re not an end in themselves.

Image by garagestock


The Senate Committee on Rules and Administration is hemorrhaging staff. Why?


The following post was co-authored by R Street Vice President of Policy Kevin Kosar. 

At the beginning of the 115th Congress, Sen. Richard Shelby, R-Ala., took over from Sen. Roy Blunt, R-Mo., as chairman of the Senate Rules and Administration Committee. Over the past year, the number of committee staff dropped from 20 to 14, a 30 percent decrease.

Of the 20 staffers employed as of Sept. 1, 2016, only six remain with the committee as of late last month, representing a 70 percent departure rate. Three of the committee’s new staff formerly worked in Sen. Shelby’s personal office and six of the 14 current staffers have been hired since February.

The committee’s jurisdiction includes federal elections, presidential succession and various legislative branch management duties (e.g., the Library of Congress and Architect of the Capitol). It has been tossed political hot potatoes, like bills requiring public release of presidents’ tax filings and the creation of an independent commission to investigate Russian interference in the election. Of particular interest is the Senate Rules Committee’s responsibility to:

…make a continuing study of the organization and operation of the Congress of the United States and shall recommend improvements in such organization and operation with a view toward strengthening the Congress, simplifying its operations, improving its relationships with other branches of the United States Government, and enabling it better to meet its responsibilities under the Constitution of the United States.

It is unclear why the committee’s staff cohort decreased so sharply and why its turnover is so high, but the trend is concerning. Rapid staff turnover can drain institutional memory and disrupt operations. Our findings are displayed in the figure below; readers wishing to understand our methodology should see the explanation that follows.



Using LegiStorm employment data, we took snapshots of the committee’s staffing directory at three different dates:

Date 1

  • Sept. 1, 2016: Comfortably before the outcome of the 2016 congressional elections was known, and before it was clear which party would control the Senate and the chairmanships of its committees.

Date 2

  • Feb. 1, 2017: Several weeks after the 115th Congress was sworn in and the new chair (Shelby) took over. The gap between the start of the new Congress and Feb. 1 gives Shelby time to make staffing changes and gives staffers a chance to experience the working environment in the new Congress and with the new chair.

Date 3

  • Sept. 27, 2017: Just over one year from Date 1, and more than 10 months since Date 2. This gap grants some time for staffer attrition due to firing, resignation or any other normal career or life circumstances (new job in or outside the chamber, moving from D.C., etc.).

Each committee staffer was assigned a number (e.g., Staffer 1) and his or her continued employment was examined at the aforementioned dates. Hence, we see in the figure above which staff stayed and which did not (e.g., Staffer 1 departed sometime after Feb. 1, 2017).
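For readers interested in the mechanics, the minimal sketch below illustrates the snapshot-comparison approach described above. The rosters are hypothetical placeholders chosen only to match the totals reported here (20 staffers on Date 1, 14 on Date 3, six of the originals remaining); they are not the actual LegiStorm records.

    # Minimal sketch of the snapshot-comparison method, using placeholder
    # staffer identifiers rather than actual LegiStorm data.
    originals = {f"Staffer {i}" for i in range(1, 21)}  # the 20 staffers employed on Date 1

    snapshots = {
        "2016-09-01 (Date 1)": originals,
        "2017-02-01 (Date 2)": {f"Staffer {i}" for i in range(1, 15)},   # hypothetical Date 2 roster
        "2017-09-27 (Date 3)": {f"Staffer {i}" for i in range(1, 7)}     # six originals remain,
                               | {f"New hire {i}" for i in range(1, 9)}, # plus eight new hires
    }

    # For each snapshot, count how many of the original staffers remain and
    # compute the departure rate relative to the Date 1 roster.
    for label, roster in snapshots.items():
        remained = originals & roster
        departure_rate = 1 - len(remained) / len(originals)
        print(f"{label}: {len(roster)} on staff; {len(remained)} of the original "
              f"{len(originals)} remain ({departure_rate:.0%} have departed)")

Run against the real LegiStorm data, the same comparison yields the drop from 20 to 14 staffers and the 70 percent departure rate reported above.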

Image by Eviart


On GATT’s 70th birthday, free trade remains the wellspring of peace


It was 70 years ago today—Oct. 27, 1947—that representatives from 23 nations gathered at the Palais des Nations in Geneva, Switzerland, to finalize and sign the General Agreement on Tariffs and Trade. This globally transformative trade agreement set into motion decades of growing economic prosperity around the globe.

Following the Second World War, these select countries sought to avoid the economically dismal ramifications of post-World War I protectionist policies. With the Great Depression’s shadow looming over the first conference, the conferees understood the importance of international economic cooperation and the negative externalities of restrictive trade policies. The goal of the conference was to pursue a free and open global market, reduce tariffs and dismantle trade barriers to allow the global economy to flourish.

These negotiations were both ambitious and historic. “It marks the completion of the most comprehensive, the most significant and the most far-reaching negotiations ever undertaken in the history of world trade,” Chairman Max Suetens, who presided over the trade negotiations along with his co-chair Sergio Clark, announced to the world, underscoring the significance of the feat. This achievement marked the beginning of the postwar economic boom, signaling to all countries that the global market was now a cooperative venture.

Indeed, the enactment of GATT propelled the global marketplace into decades of unprecedented economic growth. In the years to come, countries would continue to build on GATT’s accomplishments of tariff reductions and lower trade barriers, eventually leading to the Kennedy Round, the Tokyo Round and the Uruguay Round. These negotiations were grounded in adherence to a rules-based system rather than a more ad hoc approach to policymaking.

At the conclusion of the Uruguay Round, GATT was replaced with a more structured international body, the World Trade Organization (WTO). While the WTO may have replaced GATT, it adopted its overall framework. What began as an endeavor among 23 nations to address trade reform has evolved into an international collaborative trade effort, with all but three nations maintaining membership status as of today.

In the face of today’s renegotiation of the North American Free Trade Agreement (NAFTA) and the growing threat of protectionism across the globe, it is immensely valuable to look to the accomplishments made under GATT. This is particularly so in the case of the United States, which now is signaling a preference for bilateral rather than multilateral trade agreements. This is an unfortunate development, as all the progress made under both GATT and the WTO demonstrates. Such gains should not be easily dismissed with reckless political rhetoric.

Now is the time to remember that humans flourish when various impediments and barriers are lifted, allowing free and open trade for goods and services. Free trade remains the wellspring of peace.

Image by Lightspring


House subcommittee clears plans to unclog hydropower pipeline


With a 14-minute Oct. 26 markup session, the U.S. House Subcommittee on Energy sent a pair of deceptively simple bills to the full Energy and Commerce Committee, each of which could (should they ultimately become law) help to boost the U.S. hydropower sector.

Both measures look to cut through the thicket of regulations that has brought what once was at least incremental growth of U.S. hydropower capacity to a virtual standstill. H.R. 2872—sponsored by Rep. Larry Bucshon, R-Ind.—would give the Federal Energy Regulatory Commission discretion to exempt nonpowered dams that want to develop hydropower from various licensing requirements.

Meanwhile, H.R. 2880—sponsored by Rep. Morgan Griffith, R-Va.—would limit FERC’s authority, when it comes to closed-loop pumped-storage hydropower, to imposing only those licensing conditions that are needed for public safety, that protect fish and wildlife and that are reasonable and economically feasible.

Speaking at Thursday’s markup, Energy Subcommittee Chairman Fred Upton, R-Mich., called new turbines on existing hydro dams “a win-win. These projects cause minimal environmental impact, new investment, more jobs and added benefits to the grid.”

A 2014 U.S. Energy Department report found that streamlining rules to more easily permit the installation of hydro turbines at existing nonpowered dams could add more than 50 gigawatts of carbon-free power by 2030. This is equal to roughly 50 additional nuclear facilities, but at a fraction of the cost. It would boost total U.S. electricity capacity by 3.5 percent, all without any increase in emissions. And it would roughly match the amount of wind energy capacity built since 2010, but without the tens of billions of dollars of taxpayer subsidies.

R Street looked at the topic in an August 2017 white paper that recommended unclogging the regulatory bottlenecks in a licensing process that can take between seven and 10 years and sometimes makes the cost of upgrades too expensive to justify.

The amount of red tape to build or relicense a hydropower project reflects a slow accumulation of bureaucratic obstructions, combined with decades of congressional inaction. These bills represent a hopeful step toward more inexpensive clean energy being produced in the United States.

Image by Fedor Selivanov


Lyft driver’s past conviction doesn’t undermine background check system


The taxi industry and other foes of transportation network companies are sure to jump on a recent news story about a Chicago Lyft driver who passed the company’s background check, even though he had a federal conviction for “aiding an individual with ties to terrorism.” Expect the overheated statements from the taxi cartel, but don’t pay them any mind when they arrive.

According to WGN-TV in Chicago, Raja L. Khan “spent 30 years as a Chicago cab driver and attempted to reinstate his license. He was denied by the city and by ride share company Uber.”

Lyft said the independent company that handles its background checks missed the conviction and that it’s an isolated incident. Nevertheless, the San Francisco-based company said it is “re-running background checks for Chicago drivers” and “will also voluntarily allow the city of Chicago to audit our background checks on an ongoing basis at Lyft’s expense.” In the wake of the incident, Chicago regulators have cited Lyft and are imposing fines of up to $2 million.

No one was harmed, but the situation understandably raises public concern about the reliability of background checks. Still, this is the kind of human error that plagues any system, and, indeed, it was another ridesharing company that caught the problem. The incident doesn’t undermine the safety of Lyft or of this emerging industry.

After a tragic accident in San Francisco involving an Uber driver, taxi officials called for the city to regulate the TNCs tightly or even shut them down. But taxi owners will use any excuse to shut out competitors that are doing a remarkable job serving customers and providing a safe app-based transportation alternative. By the way, TNCs have had remarkable success in reducing drunken-driving incidents, which highlights the overall safety benefits they provide.

As I wrote for the San Diego Union-Tribune in 2014:

A search of ‘taxi’ and ‘car crashes’ will reveal a long list of troubling news stories. In San Francisco (in 2013), an Ohio couple died after a cab with bad brakes slammed into a concrete pillar. A year earlier there, a taxi driver who caused a deadly crash was identified as a man convicted in a notorious murder case, yet he passed the background checks.

Or, to take a more extreme case, in February 2013, licensed and vetted traditional cabbie Kashif Bashir of Alexandria, Virginia, shot an Alexandria police officer in the head when the officer responded to a call about Bashir harassing a store clerk and idling suspiciously in front of her storefront. Bashir was later cleared of attempted murder charges on grounds of insanity. Neither the city’s background check nor his taxi company employer had caught his long history of mental illness and alarming behavior.

The point is that a city-run taxicab background system is not necessarily a failsafe, either. The TNCs have every reason to scrutinize their drivers, to protect the safety of passengers and to avoid fines and citations. But it’s an imperfect world and mistakes happen, especially in a massive industry employing tens of thousands of drivers nationwide.

It’s best to do as Lyft and Chicago are doing here – dealing with any errors and using new procedures to assure they don’t happen again. Any attempt to tar a company or an entire industry for a modest mistake is irresponsible and almost certainly motivated by concerns about competition rather than a fastidious devotion to safety.

Image by Andrey_Popov


DOJ’s Rosenstein is set to jump back down the rabbit hole of opposing encryption in your smartphone

Also appeared in: TechDirt


Back in May, Deputy Attorney General Rod Rosenstein wrote the notorious disapproving memo that President Donald Trump used as pretext to fire FBI Director James Comey. But on at least one area of law-enforcement policy, Rosenstein and Comey remain on the same page.

The deputy AG set out earlier this month to revive the former FBI director’s efforts to limit encryption and other digital security technologies. In doing so, Rosenstein drew upon nearly a quarter century of the FBI’s anti-encryption tradition. But it’s a bad tradition.

Like many career prosecutors, Rosenstein is pretty sure he’s more committed to upholding the U.S. Constitution and the rule of law than most of the rest of us are. This was the thrust of his Oct. 10 remarks on encryption, delivered to an audience of midshipmen at the U.S. Naval Academy.

The most troubling aspect of Rosenstein’s speech was his insistence that, while the government’s purposes in defeating encryption are inherently noble, the motives of companies that provide routine encryption and other digital-security tools (the way Apple, Google and other successful companies now do) are inherently selfish and greedy.

At the same time, Rosenstein characterized those who disagree with him on encryption policy as a matter of principle—based on decades of grappling with the public-policy implications of strong encryption, weak encryption or no encryption at all—as “advocates of absolute privacy.” (We all know that absolutism isn’t good, right?)

Rosenstein implied in his address that federal prosecutors are devoted to the U.S. Constitution in the same way that Naval Academy students are:

Each Midshipman swears to ‘support and defend the Constitution of the United States against all enemies, foreign and domestic.’ Our federal prosecutors take the same oath.

Of course, he elides the fact that many whose views on encryption differ from his views—including yours truly, as a lawyer licensed in three jurisdictions—have also sworn, multiple times, to uphold the U.S. Constitution. What’s more, many of the constitutional rights we now regard as sacrosanct, like the Fifth Amendment privilege against self-incrimination, were only vindicated over time under our rule of law—frequently in the face of overreaching by law-enforcement personnel and federal prosecutors, all of whom also swore to uphold the Constitution.

The differing sides of the encryption policy debate can’t be reduced to those who support or oppose the rule of law and the Constitution. Rosenstein chooses to characterize the debate this way because, as someone whose generally admirable career has been entirely within government, and almost entirely within the U.S. Justice Department, he has simply never attempted to put himself in the position of those with whom he disagrees.

As I’ve noted, Rosenstein’s remarks draw on a long tradition. U.S. intelligence agencies, together with the DOJ and the FBI, long have resorted reflexively to characterizing their opponents in the encryption debate as fundamentally mercenary (if they’re companies) or fundamentally unrealistic (if they’re privacy advocates). Steven Levy’s 2001 book Crypto, which documented the encryption policy debates of the 1980s and 1990s, details how the FBI framed the question for the Clinton administration:

What if your child is kidnapped and the evidence necessary to find and rescue your child is unrecoverable because of ‘warrant-proof’ encryption?

The Clinton administration’s answer—deriving directly from George H.W. Bush-era intelligence initiatives—was to try to create a government standard built around a special combination of encryption hardware and software, labeled “the Clipper Chip” in policy shorthand. If the U.S. government endorsed a high-quality digital-security technology that also was guaranteed not to be “warrant-proof”—that allowed special access to government agents with a warrant—the administration asserted this would provide the appropriate “balance” between privacy guarantees and the rule of law. But as Levy documented, the government’s approach in the 1990s raised just as many questions then as Rosenstein’s speech raises now:

If a crypto solution was not global, it would be useless. If buyers abroad did not trust U.S. products with the [Clipper Chip] scheme, they would eschew those products and buy instead from manufacturers in Switzerland, Germany, or even Russia.

The United States’ commitment to rule of law also raised questions about how much our legal system should commit itself to enabling foreign governments to demand access to private communications and other data. As Levy asked at the time:

Should the United States allow access to stored keys to free-speech–challenged nations like Singapore, or China? And would France, Egypt, Japan, and other countries be happy to let their citizens use products that allowed spooks in the United States to decipher conversations but not their own law enforcement and intelligence agencies?

Rosenstein attempts to paint over this problem by pointing out that American-based technology companies have cooperated in some respects with other countries’ government demands—typically over issues like copyright infringement or child pornography, rather than digital-security technologies like encryption. “Surely those same companies and their engineers could help American law enforcement officers enforce court orders issued by American judges, pursuant to American rule of law principles,” he says.

Sure, American companies, like companies everywhere, have complied as required with government demands designed to block content deemed illegal in the countries where they operate. But demanding that these companies meet content restrictions—which itself, at times, also raises international rule-of-law issues—is a wholly separate question from requiring companies to enable law enforcement everywhere to obtain whatever information they want regarding whatever you do on your phone or on the internet.

This is particularly concerning when it comes to foreign governments’ demands for private content and personal information, which might include providing private information about dissidents in unfree or “partly free” countries whose citizens must grapple with oppressive regimes.

It is simply not true that technology companies are just concerned about money. In fact, it’s cheaper to exclude digital-security measures than to invent and install new ones (such as Apple’s 3D face-recognition technology set to be deployed in its new iPhone X). Companies build these protections anyway, not just to achieve a better bottom line but also to earn the trust of citizens. That’s why Apple resists pressure, both from foreign governments and from the U.S. government, to develop tools that governments (and criminals) could use to turn my iPhone against me.

This matters even more in 2017, and beyond. No matter how narrowly a warrant or wiretap order is written, access to my phone and other digital devices is access to more or less everything in my life. The same is true for most other Americans these days.

Rosenstein is certainly correct to say “there is no constitutional right to sell warrant-proof encryption”—but there absolutely is a constitutional right to write computer software that encrypts my private information so strongly that government can’t decrypt it easily (or at all). Writing software is generally understood to be presumptively protected expression under the First Amendment. And, of course, one needn’t sell it—many developers of encryption tools have given them away for free.

What’s more, our government’s prerogative to seek information pursuant to a court-issued order or warrant has never been understood to amount to a “constitutional right that every court order or search warrant be successful.” It’s common in our law-enforcement culture—of which Rosenstein is unquestionably a part and a partisan—to invert the meaning of the Constitution’s limits on what our government can do, so that law-enforcement procedures under the Fourth and Fifth Amendments are interpreted as a right to investigatory success.

We’ve known this aspect of the encryption debate for a long time, and you don’t have to be a technologist to understand the principle involved. Levy quotes Jerry Berman, then of the Electronic Frontier Foundation and later the founder of the Center for Democracy and Technology, on the issue:  “The idea that government holds the keys to all our locks, even before anyone has been accused of committing a crime, doesn’t parse with the public.”

As Berman bluntly sums it up, “It’s not America.”

Image by Victor Moussa

Data show how bad turnover was in Rep. Tim Murphy’s office


The well-known impetus for the forced retirement from Congress later this month of U.S. Rep. Tim Murphy, R-Pa., was the bombshell revelation that he urged his mistress to undergo an abortion, despite his staunch pro-life stance. Receiving far less attention, however, is a June 2017 memo authored by Murphy’s long-serving chief of staff, Susan Mosychuk, which suggests his systematic mistreatment of his staff might well have led to his downfall even if the affair scandal hadn’t come to light.

In the memo, Mosychuk details a “pattern of sustained inappropriate behavior and engagement from the Congressman to and with staff,” including mistreating and harassing aides and “unreasonable expectations and ongoing criticisms.” Mosychuk also writes of “abysmal office moral[e]” and an inordinate amount of staff turnover within the office (“near 100% turnover within one year’s time”), confirming Hill rumors that Murphy’s office hemorrhaged employees.

Is it true that Murphy, and his office environment, churned through congressional staffers, as Mosychuk suggests? If so, has he done so since first being elected in 2003, or was his reputation for doing so a recent development?

Using LegiStorm employment data, we took stock of Murphy’s staff employment trends throughout his congressional tenure. It is important to note that representatives are capped at employing 18 permanent employees at any given time, plus an additional four staffers serving as part-time, shared or temporary aides. Thus, members face a hard ceiling of 22 employees at any one time.


Figure 1 above shows the number of separated staffers—those leaving Murphy’s office—in each year of his career. On average, more than eight staffers left Murphy’s office per year, meaning that 39 percent of his aides separated annually, even using the most conservative staff total of 22 employees.

Only once in his 15-year career (2008) did Murphy have fewer than six staffers leave his office. During five years of Murphy’s career, 10 or more staffers left, with a high of 16 departing in 2004, just one year after his first year in office. All in all, Murphy aides separated from his office at an incredibly high, and fairly consistent, clip since his election in 2003.


As a second measure of office turnover, we calculated how long each of Rep. Murphy’s aides was employed by his office. Figure 2 presents the distribution of staffers’ tenure lengths. During his time in Congress, just under half of Murphy’s aides, 48.25 percent, worked in his office for less than one year. Another 25.67 percent stayed only between one and two years. Thus, of the 150 people Murphy employed throughout his time in Congress, only 25.17 percent remained on his staff for more than two years.
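Both measures can be tabulated directly from a table of employment records. The sketch below, which uses hypothetical names and start and end dates rather than the actual LegiStorm data, shows one way to count yearly separations against the 22-employee ceiling and to bucket tenure lengths as in Figure 2:

    # Hypothetical employment records (name, start date, end date); the real
    # analysis draws on LegiStorm data for all 150 of Murphy's aides.
    from collections import Counter
    from datetime import date

    records = [
        ("Aide A", date(2003, 1, 3), date(2003, 11, 30)),
        ("Aide B", date(2003, 1, 3), date(2005, 6, 15)),
        ("Aide C", date(2004, 2, 1), date(2004, 9, 1)),
        ("Aide D", date(2005, 3, 1), date(2010, 12, 31)),
    ]

    MAX_STAFF = 22  # 18 permanent employees plus four additional aides

    # Measure 1: separations per year, expressed against the 22-employee
    # ceiling (the most conservative denominator, as in Figure 1).
    separations = Counter(end.year for _, _, end in records)
    for year, count in sorted(separations.items()):
        print(f"{year}: {count} departures ({count / MAX_STAFF:.0%} of a full office)")

    # Measure 2: distribution of tenure lengths (as in Figure 2).
    buckets = Counter()
    for _, start, end in records:
        years = (end - start).days / 365.25
        if years < 1:
            buckets["less than one year"] += 1
        elif years < 2:
            buckets["one to two years"] += 1
        else:
            buckets["more than two years"] += 1

    for label, count in buckets.items():
        print(f"{label}: {count / len(records):.1%} of aides")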

Taken together, these measures of aide turnover give credence to Hill rumors and Mosychuk’s allegations that Murphy’s office was one that staffers purposefully avoided or quickly departed after signing on. Though news accounts also suggest Mosychuk herself contributed to Murphy’s “reign of terror,” the employment trends of those serving in Murphy’s office clearly show that the representative faced an inordinately high, and fairly constant, rate of staff replacement during his entire run in Congress.

Photo by Jonathan Ernst/Reuters

October Legislative Branch Capacity Working Group on ‘regular order’

In this Legislative Branch Capacity Working Group Session, Peter Hanson and R Street’s James Wallner examine and interpret calls for “regular order” in the appropriations process from members of Congress. What does “regular order” mean to members, and why has Congress abandoned it? What can be learned from the House practice of allowing open rules on appropriations bills about the impact of regular order on debate?

Kevin Kosar on FedTalk: Doing more with less

Kevin Kosar, the vice president of policy at the R Street Institute, joined FedTalk host Ben Carnes from Shaw Bransford & Roth and Kathy Goldschmidt, the president and CEO of the Congressional Management Foundation, to discuss the dwindling capacity of congressional members and their staffs to do their jobs effectively as responsibilities increase.

In defense of third-degree amendments


Senate majorities routinely restrict the ability of senators to participate in the legislative process.

The most common way they do so is when the majority leader fills the amendment tree to block senators from offering amendments. The maneuver prevents the underlying legislation from being changed and protects rank-and-file senators in the majority from having to cast votes that could be used against them in their future efforts to win re-election.

Yet senators do not need the majority leader’s permission to offer amendments to legislation pending on the Senate floor. Indeed, they can offer so-called third-degree amendments even though the amendment tree has been filled.

This confronts the majority leader with a unique challenge. If utilized on a regular basis, third-degree amendments could eventually undermine his ability to control what measures receive votes on the Senate floor. This would, by extension, weaken significantly his ability to prevent the underlying legislation from being changed and to protect rank-and-file members in the majority from voting on amendments.

Given this, some senators have opposed efforts by their colleagues to offer third-degree amendments. Their concerns are illustrated in the debate surrounding the effort by Sen. Ted Cruz, R-Texas, to offer a third-degree amendment in July 2015. In opposing the maneuver, Sen. Lamar Alexander, R-Tenn., warned his colleagues of the consequences that would result if they joined Cruz in voting to overturn the decision of the chair. Specifically, he made two claims regarding Cruz’s effort, and the tactic of offering third-degree amendments more broadly.

First, Alexander equated Cruz’s appeal with the nuclear option employed by Senate Democrats in November 2013. He suggested, “If…a majority of senators agree with the senator from Texas, the Senate will be saying that a majority can routinely change Senate rules and procedures anytime it wants on any subject it wants in order to get the result it wants.” Alexander’s goal was to link Cruz’s appeal with the effort of Senate Democrats to circumvent the filibuster for judicial and executive nominations on a simple-majority vote in the previous Congress; a move that had been widely criticized by Senate Republicans ever since. Doing so would make it less likely that Republican senators would vote to overturn the chair, regardless of how they felt about the substance of the underlying amendments.

Second, Alexander asserted that Cruz’s appeal would, if successful, “destroy a crucial part of what we call the rule of regular order in the U.S. Senate.” The consequence would be the creation of “a precedent that destroys the orderly consideration of amendments.” As such, he confidently predicted, “There will be unlimited amendments. There will be chaos.”

Notwithstanding Alexander’s reputation as an expert on the Senate’s rules, a closer examination of his two claims demonstrates that neither has much merit.

First, there are important distinctions between third-degree amendments and the nuclear option, even though both utilize the same mechanism (i.e., an appeal). Appealing the ruling of the chair that an amendment is not in order when the amendment tree has been filled is not synonymous with the nuclear option, because it does not violate the Standing Rules of the Senate. If successful, it would simply create a new precedent governing the amendment process. It would not violate any specific rule. The appeal would only be functionally equivalent with the nuclear option if the new precedent explicitly violated an existing provision of the Standing Rules. Otherwise, the creation of a new precedent on appeal is entirely consistent with Senate rules and past practices.

Second, a closer consideration of regular order in the context of the amendment process suggests that it would remain relatively unaffected by a successful appeal in this scenario. Alexander contends that the amendment trees make it possible for the Senate to function today. He predicts that floor debate on bills would be chaotic if the current amendment trees were altered by a successful appeal. The implication is that effectively removing the limits on the number of amendments that can be pending to legislation on the Senate floor would make it impossible to consider legislation in an orderly manner.

Yet the historical development of the Senate’s amendment process demonstrates that there is nothing inherently chaotic about expanding the number of amendments that can be pending simultaneously. The principles of precedence would still apply to any new branches created on the trees. As such, the framework for the orderly consideration of the pending amendments would be preserved.

Moreover, the amendment trees are adhered to literally in the contemporary Senate almost exclusively when the majority leader would like to block other senators from offering amendments. Instead of processing amendments by following the amendment trees, the practice most often followed is to process amendments by unanimous consent (e.g., "I ask unanimous consent to set aside the pending amendment and call up amendment No. 1234."). Thus, limiting the majority leader's ability to fill the amendment tree would simply force the Senate to return to the way in which it routinely processed amendments before the dramatic abuse of the amendment tree.

Indeed, the Senate has considered legislation for most of its history without utilizing the contemporary practice of routinely filling the amendment tree for the explicit purpose of blocking individual senators from offering their own amendments. While preventing the majority leader from being able to fill the tree routinely may make it more difficult for the Senate to block votes on amendments altogether, the Standing Rules and the institution’s precedents contain several tools that can be used to facilitate the orderly consideration of amendments on the Senate floor. These include (but are not limited to) the requirement that committee amendments to reported legislation be considered before the consideration of amendments from the floor, precedents prohibiting language previously amended from being amended again and the filing deadlines associated with Rule XXII.

The arguments advanced by proponents and opponents of using third-degree amendments to circumvent the majority leader’s ability to fill the amendment tree suggest two very different directions for the future course of the Senate’s development.

On one hand, equating precedents that fill in the gaps where the rules are silent with the Standing Rules themselves would effectively bind the Senate to how it operated in the past, regardless of new circumstances, how the original precedent was established, or whether that precedent had merit or violated the Standing Rules in the first place. This would further increase the majority leader's control over Senate decisionmaking by delegitimizing the efforts of individual members to adjudicate precedent or to protest what they perceive to be unfair or inaccurate rulings of the chair.

On the other hand, third-degree amendments could eventually undermine the majority leader's ability to control the amendment process. Challenging the ability to fill the amendment tree with a third-degree amendment thus has the potential to impose significant costs on the majority leader directly. If used on a routine basis, this tactic could weaken, or even end, the majority's ability to control outcomes in the Senate. As such, third-degree amendments could substantially alter the balance of power between the majority and minority parties in the institution, as well as between individual senators and the party leadership.

Image by Crush Rush


Senate finally poised to restore FTC to full strength


Earlier today, President Donald Trump formally announced the three candidates he’s nominating for the open seats at the Federal Trade Commission. Joseph Simons, Rohit Chopra, and Noah Phillips have diverse backgrounds and divergent political views, but they all have impeccable legal credentials and should be confirmed by the U.S. Senate without hesitation.

Not only will their confirmation put three more sets of steady hands at the wheel of the nation’s chief consumer protection and antitrust agency, but it also will finally restore the FTC to full strength, freeing it up to once again take on the kinds of hard cases that tend to split public opinion.

The FTC, which has jurisdiction over nearly every sector of the U.S. economy (with only a few limited exceptions), has had only two commissioners for most of 2017, ever since outgoing Chairwoman Edith Ramirez resigned in early February. To their credit, Acting Chairwoman Maureen Ohlhausen and Commissioner Terrell McSweeny have done an admirable job finding common ground and working together where possible, including by blocking an allegedly anticompetitive merger in daily fantasy sports, imposing structural-separation requirements on a key merger in the semiconductor industry, settling a privacy suit against a major ridesharing service and, most recently, launching an investigation into the Equifax breach.

However, with a partisan deadlock in place, the commission has been able to act only when its two commissioners are in unanimous agreement. This has left it unable to tackle difficult questions that truly push the bounds of precedent and drive the evolution of legal doctrine forward. By all accounts, Simons, Chopra and Phillips are all FTC scholars who should be ready to hit the ground running on day one. Each of them also has relevant personal experience that should stand them in good stead at the commission.

Joseph Simons, long-rumored to be Trump's pick for FTC chairman, comes most recently from the antitrust group at law firm Paul Weiss. He also spent time as director of the FTC's Bureau of Competition in the early 2000s, working extensively on mergers and other enforcement actions. Given the uptick in merger activity this year, Simons' experience in this area will surely come in handy at the FTC, which has a key role to play, along with the U.S. Justice Department, in reviewing proposed mergers and acquisitions to prevent potential harms to competition or consumers.

Rohit Chopra, the pick to fill the open Democratic slot, also has significant prior experience in the federal government. He served as assistant director of the Consumer Financial Protection Bureau and in 2011 was named by former Treasury Secretary Timothy Geithner to be the U.S. Treasury Department’s first student loan ombudsman. Chopra is considered a darling of key Democrats like Senate Minority Leader Chuck Schumer, D-N.Y., and Sen. Elizabeth Warren, D-Mass., for his efforts to combat student loan debt and other financial burdens affecting young people. While his stance on for-profit colleges may rankle some Senate Republicans, there is no reason to think he won’t be confirmed. After all, disagreements over policy aren’t a valid reason to deny confirmation of a qualified nominee (although members of both parties tend to forget that from time to time).

Finally, Noah Phillips was nominated to fill the remaining Republican vacancy at the FTC, and he also brings a decorated and interesting background to the table. Phillips previously spent time in civil litigation for both Steptoe & Johnson and Cravath, Swaine & Moore, but most recently has been serving as chief counsel for Senate Majority Whip John Cornyn, R-Texas, on the Senate Judiciary Committee. From that post, Phillips has worked on oversight of the U.S. legal system as well as intellectual property issues, which should come in handy as the FTC continues to engage in more patent work, such as its review of patent assertion entities and its ongoing case alleging anticompetitive abuse of patents underlying equipment used in smartphones.

With a full complement of qualified commissioners, the FTC can once again function as an agency with the skills and capacity to tackle key competition and consumer-protection issues. The Senate shouldn't delay in confirming all three nominees.

Image by Kevin Grant


Anxiety over NAFTA causing slide of the peso, and an increase in imports from Mexico

President Donald Trump and U.S. Trade Representative Robert Lighthizer have made reducing the trade deficit a central focus of the in-progress renegotiation of the North American Free Trade Agreement.

Last weekend’s round of negotiations in Washington, D.C., ended on a fairly sour note. As optimism about a reinvigorated NAFTA 2.0 fades, economic anxiety in Mexico is putting downward pressure on the peso, according to the Wall Street Journal.

As the peso declines versus the dollar, imports from Mexico become cheaper. As a result, our bilateral trade deficit with Mexico will expand! In other words, even if we withdrew from NAFTA and tariff rates spiked, the bilateral trade deficit with Mexico could still increase.

As virtually any economist worth his or her salt will tell you, trade deficits are driven by larger macroeconomic factors beyond trade policy. It is unlikely that trade deficits matter at all, but it is certain that bilateral trade deficits do not matter. The United States, for instance, has a trade surplus with Australia, which has a surplus with China, which has a surplus with the United States.

The sooner the Trump administration and the leadership at USTR recognize that attempting to address bilateral trade deficits through trade policy is a futile exercise, the sooner real progress on negotiation will be made.

The Russia investigation: Why the overseers need oversight


What’s going on with the Russia investigation? For most of us, the answer likely is, “Beats me.”

It seems every week or two there’s a media report about Congress holding a hearing or some member of Team Trump or other person being called in to testify: James Comey, Paul Manafort, Donald Trump Jr. The facts come out in drips: one reads of meetings with Russians during this past year’s presidential election; Facebook turning over information about shady campaign ads; Michael Flynn and possibly his son being subpoenaed to produce documents.

There are five congressional committees involved, to say nothing of special counsel Robert Mueller. Who is doing what, when and why is anything but obvious.

Especially concerning is that Congress’ inquiries are increasingly viewed through partisan lenses. CNN reports:

In the House and Senate, several Republicans who sit on key committees are starting to grumble that the investigations have spanned the better part of the past nine months, contending that the Democratic push to extend the investigation well into next year could amount to a fishing expedition. The concerns are in line with ones raised by President Donald Trump, who has publicly and privately insisted he’s the subject of a ‘witch hunt’ on Capitol Hill and by special counsel Robert Mueller. Democrats, meanwhile, are raising their own concerns that the congressional Russia probes are rushing witnesses – including the testimony of President Donald Trump’s son-in-law Jared Kushner – as well as stalling appearances of other key Trump associates.

President Trump often has denounced the Russia issue as a hoax, and some of his supporters view it as a Democratic-media-deep-state “witch hunt” and fishing expedition. On the left, one still hears griping that Russian hackers helped Trump to steal the election and that Republican congressional majorities will hide any revelations of serious wrongdoing by the president or his campaign.

Desperately needed is something to bolster faith in the process. If the Russia investigation turns out to be a big nothingburger, then the country benefits if that conclusion is broadly accepted. And if there really is a there there, then it could lead to impeachment or other severe consequences, which, again, will require collective faith that the process is fair.

To raise credibility, Congress should adopt the benchmarks advocated by a right-left coalition of former government officials and policy wonks. In short, each of the committees (Senate Select Committee on Intelligence; Senate Judiciary Committee; House Permanent Select Committee on Intelligence; House Committee on Oversight and Government Reform; and House Judiciary Committee) should commit to carry out their work in ways that demonstrate bipartisanship and the desire to keep the public informed.

To demonstrate bipartisanship, committee chairmen and ranking members should jointly hold press conferences and issue public communications under both their names. When calling witnesses or demanding documents, both the majority and minority should consent.

To increase the public’s understanding, the committees should report publicly and regularly on basic aspects of the investigation: What’s the scope of the investigation? How many witnesses have been interviewed? How many hearings (open or closed door) have been held? How much has been spent?

That is not much to ask of Congress, but the benefits could prove immense. A big part of the glue that holds us together as a nation is acceptance of the legitimacy of government. With the presidency itself at the center of the investigation, the stakes are very high.

Image by Lightspring


How senators can offer amendments without the majority leader’s permission


The demise of regular order in the Senate makes it harder for its members to participate in the legislative process. And members' efforts to participate anyway give rise to a destructive cycle that perpetuates dysfunction and gridlock.

While regular order is not easily defined, it is generally associated with an orderly process in which senators are able to participate at predictable points. Conversely, its absence is typically associated with a secretive process in which members are barred from offering amendments to legislation pending on the Senate floor. When confronted with legislation in such a process, senators are left with no choice but to “blow up” the bill to force the majority to allow them to offer amendments. This all-or-nothing approach breeds frustration among members and their constituents, thereby making it even harder to negotiate after the majority’s original plan has been thwarted.

Given this dynamic, irregular order is hardly the most productive way to make decisions. Instead of helping senators communicate across their differences, it encourages the kind of extreme position-taking and inflexibility that complicates a more deliberative process.

It should thus be no surprise that the Senate at present has difficulty passing legislation of any consequence and that its amendment process is in shambles. This is because the majority leader routinely blocks amendments and files cloture on important bills as soon as they are placed on the Senate floor. The only leverage senators have in such a scenario is their ability to block cloture on the underlying legislation.

Fortunately, there is another way for senators to amend bills on the floor without the majority leader’s permission to offer amendments. They can offer third degree amendments even when the tree has been filled and then appeal the subsequent ruling of the Senate’s presiding officer (i.e., chair) that the amendment is not in order. Doing so can force a recorded vote in relation to the amendment. The majority can prevent a vote on the appeal by filibustering it. Yet the majority’s filibuster would also prevent its bill from passing.

Offering a third-degree amendment in this scenario is consistent with the Senate’s rules and precedents as reflected in the historical development of its amendment process. It also reinforces a common minority critique of how the majority party runs the Senate. Most importantly, the tactic makes it easier for senators to participate in the legislative process, thereby avoiding the destructive cycle created by forcing them to block cloture on a bill just to get the opportunity to offer an amendment to it.

The Senate’s Standing Rules do not regulate the number of amendments that members are currently allowed to offer to legislation at the same time. Instead, that is governed by the four amendment trees followed in the Senate today. Those trees were created by precedent and evolved over time, only recently reaching their current shape.

Yet their evolution was not haphazard. The precedents that created the modern trees are based on general parliamentary law and serve to facilitate the orderly consideration of amendments on the Senate floor. For example, one precedent precludes so-called third-degree amendments. Specifically, the early Senate prohibited vertical third degree amendments (i.e., an amendment to an amendment to an amendment to the underlying legislation) and horizontal third degree amendments (i.e., a competing first- or second-degree amendment to the underlying legislation) because their use would make the floor debate on a bill too confusing.

In other words, the original prohibition on third degree amendments was not intended to block senators from offering amendments altogether. Rather, the expectation was that while a third-degree amendment would be out of order, an identical first- or second-degree amendment would be allowed once that branch on the tree opened.

Even so, senators soon realized that the amendment process was too cumbersome when the prohibition was applied strictly. As a result, the Senate facilitated more member participation and deliberation by expanding the amendment trees over time to permit vertical and horizontal third degree amendments where they had been previously prohibited. The primary motivation behind each expansion was the desire to make the amendment process more responsive to the needs of individual senators.

While the majority leader uses the same amendment trees today to block all amendments, senators retain the option to expand them again to make it easier to participate in the process and to increase deliberation. That is, they can offer their amendments even though the amendment tree has been filled.

The Senate's precedents stipulate that "Any senator recognized is entitled to offer an amendment when such amendment is otherwise in order, but he cannot offer an amendment unless he has been recognized or has the floor." The process of filling the tree follows precedent to block members from offering their own amendments. However, a senator may attempt to offer an amendment even though the tree has been filled. In such a situation, the chair would rule that the amendment is not in order pursuant to the Senate's precedents. At that point, the member could appeal the ruling of the chair and request a recorded vote. The appeal represents an adjudication of the qualifying phrase in the precedent quoted above ("when such amendment is otherwise in order"); namely, whether an amendment is in order even though the amendment tree has been filled.

Offering amendments despite the filled tree and appealing the ruling of the chair that they are not in order forces the majority to cast votes on procedural questions directly related to the amendment being offered. Procedural votes have come to be viewed as substantive votes when the question is directly related to the underlying policy and the tactic is utilized on a regular basis. For example, the perception of cloture has evolved from being simply a procedural vote to the point that it is viewed by many as a substantive vote today. Votes on third-degree amendments could thus come to be characterized as substantive votes.

As such, the threat to offer a third-degree amendment may encourage the majority to return to regular order. This is because the tactic gives the minority more leverage with which to gain the right to offer amendments without having to block cloture.

Image by mark reinstein

Steven Greenhut on American Family Radio

American Family Radio host Chris Woodward interviews R Street Western Region Director Steven Greenhut on the latest goings-on in California’s state Capitol. Woodward and Greenhut discuss the possible impact of the Trump administration’s tax plan, which will remove a key deduction that benefits Californians. This plan puts California Republicans in a tight spot, given that they want to support the president but it means a tax hike for California taxpayers. Woodward also asks Greenhut about a proposal to ban the sale of internal-combustion-engine vehicles by 2040 — something Greenhut explains is more about posturing than anything else given the rapid technological advancements in the auto industry.

Blocking amendments is a perversion of Senate rules and practices


The Senate today is an institution in decline. It is paralyzed – unable to legislate, much less deliberate.

The Senate’s plight is reflected in the near-total deterioration of its amendment process.

For example, senators offered a paltry 147 floor amendments between January and September of this year. Compare that to the 568 amendments they offered during the same period in 2015 and the 668 in 2009. At the present rate, Senate amendment activity could increase by as much as 250 percent over the next 15 months and still fall short of the level observed in the first nine months of 2015 alone.

This is the culmination of a broader trend going back three decades. During that time, Senate majorities have increasingly empowered the institution’s majority leader to prevent senators from offering amendments to achieve their legislative priorities.

The majority leader blocks senators from offering alternative proposals by filling the amendment tree, i.e., offering the maximum allowable number of amendments to legislation before other senators have had a chance to debate the measure and offer their own amendments.

Once used sparingly in extraordinary circumstances, the tactic is now routine and well-documented. But less appreciated is the extent to which its normalization in recent years represents a radical break from the Senate’s past practice. Also, less understood is how precisely the tactic empowers the majority to pass its agenda, given that the minority can still filibuster the underlying legislation.

Recent research suggests that the amendment process gradually evolved to facilitate the orderly consideration of the Senate’s business. The direction in which it evolved was informed by the Senate’s effort to balance the need for order in its work with the imperative of legislative deliberation.

While the Senate’s first amendment trees only permitted two amendments to be pending at the same time, they were expanded in response to member demands by adding new branches. The result was to increase the number of amendments that could be pending before the Senate simultaneously.

Notwithstanding this increase, members maintained order by adhering to the principles of precedence first compiled for the Senate in Thomas Jefferson’s A Manual of Parliamentary Practice for the Use of the Senate and still followed today. In general, those principles held that senators should have an opportunity to amend legislative text proposed to be stricken and/or inserted before the actual vote to strike and/or insert said text.

Analyzing how the Senate’s current amendment trees came to be underscores the extent to which using them to block amendments is a perversion of the Senate’s rules and practices. That is, the precedents underpinning the trees are now being used for a purpose fundamentally at odds with the one for which they were first created. Instead of facilitating the orderly consideration of amendments on the Senate floor, they are now being used to block the consideration of amendments altogether.

This suggests that the act of offering amendments no longer serves as a way in which the Senate can arrive at a greater understanding of what its members think about a given issue. Instead, the amendment process is commonly viewed as the last hurdle that must be surmounted before a preferred bill can be sent to the House or to the president's desk to be signed into law. To the extent that controversial amendments are permitted on legislation, frequently their consideration is structured in such a way as to guarantee their defeat. This requires channeling all decisions regarding which amendments can be offered to legislation through a single veto point (i.e., the party leaders or bill managers). Once established, such a veto point enables the leadership and/or bill managers to exercise disproportionate control over which amendments will be made pending to legislation on the Senate floor and to set the terms according to which those amendments will be disposed of.

Establishing a veto point is accomplished by putting the Senate in a parliamentary situation in which unanimous consent is needed to get an amendment pending under one of the four amendment trees. The primary tool utilized by the majority leader to accomplish this is the tactic of filling the amendment tree (or offering a blocker amendment in one of the available slots such that further amendments are precluded by the principles of precedence if that blocker amendment is pending). No amendments are in order once all the extant branches on the tree are occupied. At that point, the majority leader and/or bill manager is free to focus on negotiations with interested rank-and-file colleagues to reach a unanimous consent agreement that provides for several amendments and a vote on final passage without having to worry about a senator jeopardizing the legislation’s prospects by offering a controversial or otherwise unwanted amendment without permission.


As noted, the majority leader (or bill manager) may also offer a “blocker” amendment to establish the veto point. For example, an amendment offered to branch C in the chart above would serve as a blocker amendment if offered first and in the form of a motion to insert (or strike and insert). Once pending, any other amendment offered directly to the amendment in the nature of a substitute (ANS) would require consent to get pending (which would presumably be denied if the majority leader/bill manager wanted to block the amendment).

This tactic is less aggressive than completely filling the amendment tree, in that it typically leaves a few branches open for possible amendment. However, these branches are rarely connected to the ANS directly. For example, in the hypothetical example, the blocker amendment leaves branches E and F (on the left side of the amendment tree) open. Branch D (second degree to C on the right side) is also left open. These branches do not present the same challenges to proponents of the bill because their impact would be minimal if the amendments pending there prevailed. The majority leader could move to table C to prevent a vote on D on the right side of the tree. Additionally, adoption of E and F on the left side of the tree would be negated once the Senate adopts the ANS.

Once the Senate is in a parliamentary situation in which unanimous consent is needed to get an amendment pending to legislation on the floor, the majority leader can use his increased leverage to secure a higher vote threshold for adoption of an amendment. The majority’s desire to limit the minority’s ability to attach what it considers poison-pill amendments to legislation it supports is thus reflected in the dramatic increase in the use of unanimous consent agreements to set 60-vote thresholds for adopting amendments.  The majority leader uses the threat of not allowing amendments to get pending to compel individual senators to agree to the higher vote threshold on their amendment, even though doing so means that the amendment will most likely be rejected.

The routine practice of filling the amendment tree in the Senate today, coupled with the cloture process to end debate, effectively prevents members from being able to perfect legislation before it receives an up-or-down vote on final passage. Instead of a deliberative process designed to discern the true sense of the institution’s membership on an issue, senators are confronted with a fait accompli. This practice is inconsistent with the longstanding rules and practices on which the amendment process is based.

Structural imbalances in the Senate’s amendment process


The Senate is a pale imitation of what it once was.

A major reason for its current predicament is that senators are no longer freely able to amend the bills they consider. This is because the majority leader routinely blocks members from offering their own ideas on the Senate floor by filling the amendment tree.

While the tactic effectively precludes votes on unwanted amendments, the minority may still filibuster the underlying legislation in protest. This gives Senate minorities leverage to negotiate with the majority over what amendments will be permitted during a bill’s consideration, so long as 41 of its members are committed to blocking cloture until their demands are met.

But remaining united in opposition to cloture is not always easy, because the minority comprises individual senators who hold an array of policy views. Given this, the majority leader will negotiate directly with those members whose policy views are closest to his own when trying to secure the votes needed to invoke cloture.

The majority leader also can structure an amendment’s consideration in a way that makes its success less likely. This is done by setting a higher threshold for the amendment’s adoption in the unanimous consent agreement that typically schedules the vote on it. The utility of this approach to Senate majorities is reflected in the dramatic increase in its use in recent years to set 60-vote thresholds for passing amendments.


The earliest documented use of such a consent agreement occurred in the 102nd Congress. But it was a rare procedural tool until the 109th and 110th Congresses, when Majority Leaders Bill Frist, R-Tenn., and Harry Reid, D-Nev., respectively, began utilizing them on an increasing scale. In the 109th Congress, consent agreements were used in this manner in six instances. However, in the 110th Congress, their use increased significantly, totaling 37 instances. The use of the tactic remained relatively level in the 111th Congress at 38. In the 112th Congress, 60-vote thresholds were set on amendments on a staggering 111 occasions.

The tactic was utilized 35 times in the 113th Congress. The decline in amendments subject to a 60-vote threshold from the 112th to the 113th Congress is not as abrupt when viewed as a percentage of all amendments offered. This is because only 542 amendments were offered to legislation on the Senate floor during the 113th Congress (compared to 974 in the 112th).

Moreover, the share of roll call votes (RCVs) on amendments set at 60 by consent has increased since the 109th Congress. The routine utilization of the 60-vote threshold is particularly striking when RCVs on amendments to the budget resolution and reconciliation bills are omitted. Excluding budget and reconciliation amendments from the count yields a more accurate portrayal of the tactic’s centrality to decisionmaking in the Senate at present, because a member cannot be blocked, in theory, from offering amendments during the budget process’s vote-a-rama.


Pursuant to these unanimous consent agreements, the amendment is withdrawn if it does not get the requisite number of votes. The practice thus allows an amendment's supporters to demonstrate whether they could muster the votes needed for cloture without going through the time-consuming process of invoking it.

Amendments offered pursuant to such agreements, however, are seldom successful. In the 109th and 110th Congresses, amendments considered in this manner failed 100 percent and 78 percent of the time, respectively. In the 111th and 112th Congresses, the percentages of amendments considered in this manner that failed were 61 percent and 87 percent, respectively. Most recently, 77 percent of the amendments considered pursuant to this tactic failed in the 113th Congress.

The use of unanimous consent agreements to set 60-vote thresholds on amendments can thus be interpreted as allowing the majority to facilitate the passage of legislation by allowing the minority to offer amendments without risking the adoption of a poison pill. This process does not present a problem for members of the majority party because they typically oppose the amendment in question, and a 60-vote threshold means that it is unlikely to pass. In addition, members of the majority are more likely to have their priorities included in the underlying bill before it reaches the Senate floor for consideration.

Minority party members, as well as those in the majority party who are out of step with their colleagues on the policy question at hand, often support this process begrudgingly because it provides an opportunity to offer an amendment and get a vote on it, all without having to expend the necessary resources to filibuster the underlying legislation. They may not get the opportunity to offer the amendment at all if they reject the 60-vote threshold.

Setting 60-vote thresholds for amendments via unanimous consent agreements is central to the majority’s ability to control the agenda in the Senate today. Yet the tactic’s increased use in recent years is at odds with calls to reform, or eliminate, the legislative filibuster. This suggests that there is a growing constituency inside the Senate to increase the majority’s ability to control the legislative process while reducing the minority’s ability to leverage the filibuster to secure majority concessions. If this trend persists, the Senate risks becoming more majoritarian, and thus more dysfunctional, moving forward.

Ranking Member Cummings cites Lehrer on census

House Oversight Committee Ranking Member Elijah Cummings, D-Md., cited a recent op-ed by R Street President Eli Lehrer in his opening remarks at the panel’s Oct. 12 hearing on the 2020 U.S. Census.

EPA ends ‘sue and settle’ era


A new directive handed down Oct. 16 by Environmental Protection Agency Administrator Scott Pruitt pledges to put an end to the controversial practice of settling lawsuits with special interest groups behind closed doors, often while paying their attorneys’ fees.

These so-called “sue and settle” practices long have been criticized by businesses and conservative groups as a way to circumvent the normal regulatory process. Over its eight years, the Obama administration’s EPA chose not to defend itself in more than 100 lawsuits brought by special interest advocacy groups and paid out $13 million in attorneys’ fees in such cases.

Pruitt has had the tactic in his sights since his days as Oklahoma’s attorney general, when he sued the EPA in federal court more than a dozen times. In a letter this week to EPA managers, he said the practice “risks bypassing the transparency and due process safeguards enshrined in the Administrative Procedure Act and other statutes.” He also called it “regulation through litigation” and an “abusive” policy, in part because it excludes state involvement in any settlement between the EPA and private litigants.

The practice has not been confined just to the Obama administration, as the Bush EPA settled 64 cases over its two terms in office. But during the Obama years, "sue and settle" became one of the primary avenues to formalize major regulations, including the Clean Power Plan's proposed constraints on carbon emissions as well as recent mercury and air toxics standards.

Pruitt’s directive calls for improved transparency around litigation, with all potential settlement agreements open to a 30-day public comment period. The directive also calls for publishing attorneys’ fees, a break from the Obama administration practice of agreeing to fees “informally.” Pruitt also has instructed the EPA to reach out directly to states and regulated entities that would be affected by any given consent decree.

Given the litigiousness of environmental policy, it’s easy to see how the “sue and settle” process could be attractive for the agency. But as Pruitt rightly suggests, the process had become a way to circumvent the full regulatory process, which can take years, and essentially gives the executive branch control to shape legal settlements in complaints that are never even heard by the courts.

Given the Obama administration's clear tendency to replace legislative compromise with "pen and a phone" executive action, there is little doubt the "sue and settle" tactic was being abused in ways that had not been foreseen when the practice began. Good riddance.

Image by petrmalinak


Local e-cigarette crackdowns are misguided and counter-productive


In an unfortunate trend across the country, cities and towns have raced to institute new regulations and update existing laws that deal with e-cigarettes and vapor products, often with little consideration of the potential these products have to improve public health.

In Massachusetts, recent actions by local boards of health to label e-cigarettes as “tobacco products” are misleading, at best, and at worst, a move that limits access to far less-harmful alternatives to cigarettes. Many local policies aimed at protecting teens from smoking myopically disregard the effects on the adult smoking population.

Tobacco harm reduction is an approach to public health that seeks to reduce the incidence of cigarette use and smoking-related diseases by encouraging smokers to switch to less-harmful alternatives. These include e-cigarettes, vapor products and certain smokeless tobacco products that, while not completely without risk, are orders of magnitude less harmful to a person’s health than their combustible cousins.

Historically, American tobacco control policy has been based on the premise that all tobacco products are hazardous and that none can offer personal or public health benefits. However, peer-reviewed research by the United Kingdom's Royal College of Physicians has demonstrated that e-cigarettes are significantly safer than cigarettes, which continue to be both the most widely used and the most harmful tobacco products on the market.

That work is particularly notable given that it was the Royal College of Physicians that, decades ago, presented the first comprehensive study on the negative health impact of cigarette use.

More recently, in the United States, Food and Drug Administration Commissioner Scott Gottlieb echoed similar sentiments in a Washington Post interview. Gottlieb noted that most e-cigarettes contain nicotine, a known addictive substance, but that the real threat to human health comes from the carcinogens produced when tobacco is combusted. Electronic nicotine delivery systems, or "ENDS," provide a safer alternative for adults who still want access to nicotine but wish to avoid that mass of carcinogens.

While the relative safety of noncombustible products is not in doubt, many local boards of health continue to resist their use out of fear that they may lead to heightened incidence of tobacco use among teens. In particular, there are concerns that "flavored products" attract teens to smoking. In response, localities have issued broad prohibitions on the sale of such products, without differentiating between cigarettes and less-harmful alternatives. Recently in Massachusetts, the towns of Canton and Marion and the city of Gloucester all have considered regulations that, if approved, would greatly reduce access to a host of less-harmful, non-combustible alternatives.

The unintended consequence of such rules is that they could make those who already smoke less likely to transition away from cigarettes. Furthermore, a recent study by Saul Shiffman and colleagues that examined flavor preferences among adolescent nonsmokers found they had less interest in supposedly youth-targeted e-cigarette flavors than adult smokers did. In fact, the study concluded that teens preferred flavors that seemed more "adult-like." Thus, flavor bans not only fail to achieve their desired effect of preventing teens from smoking, they actually make it more difficult for adult smokers to improve their health. That's bad policy.

A holistic approach to harm reduction demands that, in addition to discouraging adolescents from nicotine and cigarette use, a significant goal of any tobacco regulation should be to encourage adult smokers to switch to safer alternatives. Greater flavor options provide smokers with more paths away from the most harmful and widely used tobacco products – cigarettes. Taking steps to make e-cigarettes less accessible to current and future smokers means failing to make progress on reducing future rates of smoking-related diseases, which collectively kill 480,000 people in the United States each year.

By focusing solely on minors, many of these local regulations disregard and discount cigarette use among adults. The measure of a successful public health policy should be the impact it has on the whole population, not just certain segments. While cigarette use in the United States is at an all-time low, the significant drop-off in smoking rates is due, at least in part, to the development of attractive (and much safer) alternatives.

Harris deserves praise for seeking middle ground on sex-trafficking bill


The Sacramento Bee implied in a recent article that Sen. Kamala Harris, D-Calif., was being inconsistent or unduly influenced by Silicon Valley campaign supporters because of her reluctance to back a far-reaching sex-trafficking bill. But Harris' approach of finding a middle ground is the only sensible course, especially given the potential harm to internet speech that could result from a hastily drafted law.

It’s really tough to stand up to “mom and apple pie” legislation such as this bill. Indeed, that’s why the “Stop Enabling Sex Traffickers Act” is the most dangerous sort of legislation, in that it uses legitimate fears of the scourge of sex trafficking to grant the government newfound powers to shut down online speech.

It also grants attorneys the ability to sue website operators, search engines, email providers and other online players into oblivion. Is it any wonder the president of a trial-lawyer-backed “consumer advocacy group,” Consumer Watchdog, was quoted by the Bee favoring the bill? The act would certainly be good for the trial bar given that it would obliterate longstanding federal protections for those web-based “intermediaries” that host third-party online speech.

Thanks to Section 230 of the federal Communications Decency Act of 1996, Facebook, Google and even the Bee itself are limited in their liability for the posts, images and comments made on their sites. In the name of combating sex trafficking, this bill would eviscerate those protections by opening up intermediaries to federal criminal prosecution and civil liability.

“Without this protection, intermediaries would face a potential lawsuit in each one of the thousands, millions or even billions of posts, images and video uploaded to their services every day,” according to a letter that privacy groups, including the American Civil Liberties Union, sent to the U.S. Senate leadership in August. Intermediaries would “err on the side of caution” and face an unending sea of litigation – something that will dangerously constrict speech on the internet.

It’s unclear what exact middle ground Harris is seeking, but there’s certainly nothing wrong with her listening to Bay Area tech firms on an issue that intimately involves them – and us. Sure, Harris seems to have changed her position from her days as attorney general, when she filed pimping charges against a website’s operator. A judge later tossed those charges for many of the same reasons free-speech advocates oppose this bill.

We should all be happy that Sen. Harris is growing in office. By all means, let’s clamp down on the human filth who operate as sex traffickers – but without threatening the kind of online free speech we’ve all come to expect on the internet.

Image by Vince360


Why do liquor rules vary drastically from state to state?

The R Street Institute’s Jarrett Dieterle appeared on Fox 5 DC’s “The Final Five” with Jim Lokay to discuss booze policy in America. They discussed the difficulty in reforming onerous state alcohol laws and how R Street’s website is helping to track reform efforts across all 50 states.

Perry questions value of ‘free market’ in energy

Also appeared in: Red, Green and Blue


Shakespeare’s adage about those who “doth protest too much” seems an appropriate response to Energy Secretary Rick Perry’s recent testimony on an administration proposal to change the way coal and nuclear power plants are compensated for sending electricity to the U.S. grid.

Perry's cryptic and somewhat baffling rhetoric Thursday in front of the House Energy and Commerce Committee's Energy Subcommittee came during tough questioning by members worried the proposal, if accepted by federal regulators, would undermine electricity markets throughout the country. In particular, the proposed rule by the U.S. Energy Department calls for subsidies for power plants that keep at least 90 days' worth of fuel stored on site. Such a rule would act as a subsidy for coal and nuclear interests over natural gas, solar, wind and other renewable energy providers, and could cost consumers up to $4 billion a year, according to analysts.

“I think you take costs in to account, but what’s the cost of freedom? What does it cost to build a system to keep America free? I’m not sure I want to put that straight out on the free market and build the cheapest delivery system here,” Perry retorted in response to a question from Rep. Paul Tonko, D-N.Y., about the potential for higher energy prices for consumers. “I think the cost-effective argument on this is secondary to whether or not the lights are going to come on.”

The DOE on Sept. 28 asked the Federal Energy Regulatory Commission (FERC) to consider new rules ensuring nuclear and coal-fired power plants are paid not just for the electricity they provide consumers, but the reliability they may provide to the electric grid. Former FERC commissioners have said such a rule could “blow up” wholesale electricity markets that have taken decades to design. Both coal and nuclear plant operators, meanwhile, have been shuttering inefficient plants over the past several years due to inexpensive natural gas-fired generation and government support for renewable generation.

It is true that fuel security is an important issue to evaluate, as long as it is evaluated objectively. Perry’s “Braveheart” moment regarding energy security suggests a certain irrationality that can only hurt electricity market operations and which, over time, would undermine fuel security as poor economic incentives become institutionalized.

The truth is that free and unfettered price discovery in electricity markets is the most important element in grid resiliency. Perry is involved in a subterfuge, a deception that even someone of his legitimate political skills has trouble pulling off. The administration is in the position of being forced to come up with creative ways to fulfill promises made directly by President Donald Trump to coal mine owners during the election campaign, even at the cost of free markets – a supposed core belief among Republicans and conservatives of all stripes.

This intellectual inconstancy is even more acute when one considers that Perry spent much of his 14 years as Texas governor praising and promoting the virtues of freer energy markets in the Lone Star State. Texas has the freest electricity marketplace in the country and hasn't faced any major reliability problems, even in the aftermath of major flooding by Hurricane Harvey in late August. (Of course, it should be noted that most of Texas would be exempt from the DOE's proposed rule because it maintains its own intrastate grid.)

Fortunately, efforts like this often come up against checks and balances that keep poor policies from being enacted. In response to the DOE proposal, a hitherto unprecedented coalition of 11 energy lobbying groups is asking FERC for a delay in processing the new rule so they can prepare arguments against it. The coalition includes major oil and gas associations, such as the American Petroleum Institute, as well as leading renewable-energy lobbyists like the American Wind Energy Association.

Because FERC is an independent regulator, the administration can't force the policy through by fiat. Final rules must be passed by a majority of FERC commissioners and the commission only recently received a quorum, after spending more than six months inactive. The likely postponement of quick action on the DOE proposal will allow the five FERC commissioners (two of whom still await confirmation) time to consider the full ramifications of such a rule. If the $4 billion annual cost estimate is even close to accurate, the commission's definition of what counts as "free" may be very different from Perry's.

Image by Andrew Cline

Remediation won’t cut it – we need cyber resilience


Since its cybersecurity kerfuffle in June, Equifax has become a four-letter word. And that word is “hack.”

CEO Richard Smith went to Washington this past week to testify in front of four different congressional committees about the perilous pairing of human and technological error that led to 2017’s largest data breach. Unrelenting members of Congress demanded regulation and remediation for consumers.

The hearing by the House Energy and Commerce Committee’s Digital Commerce and Consumer Protection Subcommittee focused attention on Equifax’s plan to remedy consumer confusion. The fact that Equifax is both a broker of identity information and a company that sells services to protect that information makes the aftermath of the hack particularly tricky to navigate.

More than 44 percent of Americans had a treasure trove of personal information stolen in the hack by criminal actors yet to be identified. The data include names, birthdates, Social Security numbers, addresses, driver's license information and credit information. Equifax added 2.5 million more consumers to the tally this week, bringing the total number affected by the data breach to 145.5 million, after cybersecurity firm Mandiant concluded its forensic investigation.

The news has prompted members of Congress to renew calls for legislation requiring companies to do more about cybersecurity. However, such approaches target the symptoms rather than the disease.

Rep. Jan Schakowsky, D-Ill., is sponsor of the recently reintroduced Secure and Protect Americans' Data Act, which would require any organization or company that holds personal information to develop a written security policy, implement extensive security procedures and assess their security program annually. In the event of a data breach, organizations would be compelled to notify consumers. The requirements set out in the Schakowsky legislation for "information brokers" are even more burdensome. The bill cedes power to the Federal Trade Commission to treat noncompliance with these rules as an "unfair and deceptive act."

While the bill is well-meaning, in practice, this regulation likely would result in more work, rather than more security, as organizations redirect resources to compliance.

Meanwhile, Rep. Ben Ray Luján, D-N.M., has proposed the Free Credit Freeze Act, which would require consumer reporting agencies to provide credit-freezing services free of charge in perpetuity. Equifax already has announced that it will be providing such a service, known as TrustedID Premier.

Both the Schakowsky and Luján bills are emblematic of a shortsighted approach of overemphasizing response, remediation and resistance over a long-term resilience-based approach to cybersecurity. Breach notification, security policies and credit-monitoring services may cure the headache but they will fall short of preventing the next big hack. In contrast, pursuing resilience means that the cybersecurity ecosystem can withstand stressors, adjust to adverse events and bounce back quickly. Government should focus on fostering a policy environment in which these capabilities are strengthened.

Building immunity from the bottom-up requires a layered approach that focuses on the incentives that face both the attacker and the defender, much like the layers of defense in a secure internet-enabled system. Overlapping efforts from a variety of actors—who must include industry, individuals, third parties and government—are the only way to provide a systemwide solution to what is a systemic problem.

Consumer awareness is one way to effect change in the cybersecurity ecosystem. The Promoting Good Cyber Hygiene Act—sponsored by Rep. Anna Eshoo, D-Calif.—identifies one area where government can play a positive role. It suggests the National Institute of Standards and Technology produce an accessible list of best practices, based on NIST's cybersecurity framework, which is currently in use by both companies and the government.

Creating guidelines for individuals takes this framework one step further and empowers consumers to improve their resilience to cyberattacks. Such guidelines would include information about what to do in the event of a data breach. They would allow consumers to better navigate Equifax’s bungled consumer-notification process and misleading landing page. Industry leaders such as Google, Facebook or Apple as well as third-party organizations like the Electronic Frontier Foundation or the Internet Society can also work to fill this information gap for consumers.

In a world in which a majority of Americans have personally been the victim of a major data breach, an approach that focuses on resilience can do more than merely treat the symptoms.

Image by Shawn Hill


If the rules are right, digital microlending could play role in subprime market


Well-functioning credit markets are essential tools for many people in times of personal economic instability or emergency. Unfortunately, some prospective borrowers with subpar credit ratings and credit histories do not qualify for the standard options of credit cards, secured loans or personal loans.

Credit unions frequently are the best available choice for those who have difficulty obtaining credit through traditional banks. But for some, digitally coordinated peer-to-peer lending agreements—inspired by microfinance arrangements for economically fragile communities internationally—also are proving to be an emerging option.

However, before these kinds of lending arrangements can be expected to expand domestically, digital rules will need to be established to give certainty to lenders and borrowers alike.

Subprime borrowers may have practiced poor financial habits or failed to meet their obligations, but this does not change their need for emergency credit when things get tight. Locked out of the prime credit market, these borrowers resort to payday loans, title loans and other products that come with very high interest rates and dubious collection methods. If they default on these loans, the interest and fees skyrocket, leaving them even worse off than before they took the loan. Most lenders must charge these high rates to compensate for the enormous risk they have undertaken to underwrite the loans.

Peer-to-peer digital microlending has the potential to fill a portion of the gap by providing this cohort with small, short-term loans that typically range from $100 to $500. While traditional peer-to-peer lending sites such as Lending Club target prime borrowers, other platforms are helping subprime borrowers.

One of the largest such peer-to-peer digital microlending platforms is the "r/borrow" section of reddit. This subreddit uses the reputational ecosystem within reddit to identify worthy borrowers, banning users who default or violate the terms of use. The subreddit facilitates the microloans and acts as a central database of transactions, coordinating more than $780,000 in loans in 2015.

If it can be properly scaled, peer-to-peer digital microlending could be a worthy option over payday loans for subprime borrowers. Unlike with payday loans, digital borrowers are not necessarily assessed hefty fines or fees for late payments. Instead, they negotiate directly with lenders to find an amicable solution. True enough, some borrowers will default on their commitments and walk away without harm to their credit scores. To compensate, most lenders on microlending platforms (including the "r/borrow" subreddit) charge high interest rates, ranging from 10 to 25 percent over several weeks or months. This isn't a problem for most borrowers, as their needs are typically for small, short-term amounts to get them through until their next source of income.

Barriers to the expansion of these platforms come in the form of the myriad usury laws on the books in most states. While banks and other financial institutions are exempt from such laws, individual lenders are not. Digital microlending transactions often happen across state lines, making it very difficult for lenders and potential borrowers to determine their proper jurisdiction and the interest rate restrictions that apply to them. This may be an opportunity for Congress to pre-empt such laws as a matter of interstate commerce. Legislation could provide a consistent standard for digital microlenders to follow, such as through the proposed Uniform Electronic Transactions Act (UETA).

While admittedly there are other challenges to overcome, such as developing a scalable peer-to-peer enforcement mechanism, additional legal certainty would help expand this credit option for borrowers who find themselves locked out of traditional credit markets.

Image by designer491

The 1986 tax reform effort shows that Republicans have a tough road ahead


If there is one thing that unites Republicans as different as President Donald Trump, House Speaker Paul Ryan, R-Wis., and Sen. Susan Collins, R-Maine, it is a general sense that taxes should be lower than their current levels. For all the party’s changes on social issues and foreign policy over the years, tax cuts have consistently been the GOP’s guiding light.

But that consensus is about to be tested. Trump and the Big Six—administration and congressional leaders on tax policy—have proposed a tax reform bill on a scale that has not been seriously considered since Ronald Reagan and a divided Congress pushed through the Tax Reform Act of 1986.

With the endorsement of the Big Six and the likely passage of a budget resolution with reconciliation instructions that will allow 50 senators and Vice President Mike Pence to advance the bill, the Republican tax plan is developing a sense of inevitability around it. After bungling the long-promised Affordable Care Act repeal, a losing effort on tax reform would seriously harm congressional Republicans’ credibility with the party faithful and may even trigger a revolt. If there was ever a time for Ryan and Senate Majority Leader Mitch McConnell, R-Ky., to get their caucuses together, it is now.

Despite the clear political incentives for Trump and congressional Republicans to deliver, the bill that the president introduced faces trouble ahead. While it’s likely that Republicans will pass some bill that affects taxation, a major tax reform bill is a longshot. Reforming the tax code usually means decreasing tax expenditures and closing loopholes in individual and corporate taxes, while lowering rates. Unfortunately for congressional Republicans, it is only the lower rates they agree on.

Once the thorny issue of loopholes comes up, members will find it difficult to come to a consensus. In fact, the 1986 Reagan bill is essentially the only time Congress has ever been able to enact a loophole-closing and rate-lowering tax-reform measure. As recounted in Showdown at Gucci Gulch (Vintage, 1988), the 1986 bill was nothing short of a miracle. What started out as an “ideal tax plan” from then-Treasury Secretary Don Regan was morphed by politics from the Reagan administration, the Democratic-controlled House and the Republican-controlled Senate until it eventually limped across the finish line and became law.

The final product established a two-rate structure for individuals and a 34 percent rate for corporations, and it repealed the individual deduction for state and local sales taxes as well as corporate tax breaks like the investment tax credit. However, as Gulch authors Jeffrey Birnbaum and Alan Murray note, the bill was a hodgepodge, and groups with clout, like the oil and gas industry, beat out those with less influence to keep the loopholes that mattered to them. Furthermore, the two-rate structure was a sham, as the bill included a surtax or “phantom rate” that applied to top earners.

Still, it did end many loopholes and helped ensure that companies and the wealthy couldn’t avoid their tax bills altogether. Moreover, many members bucked lobbyists and parochial interests from their districts to support a bill that was in the general interest.

Political scientists like David Mayhew have found that the general interest is usually not what sways members of Congress to support bills. According to tax scholars with similarly pessimistic views on the incentives for legislative action, the 1986 bill was an anomaly and tax policy will usually be made incrementally rather than in sweeping changes.

In her essay on tax reform in The Evolving Congress (Congressional Research Service, 2014), Jane Gravelle lists the conditions necessary for a reform bill to pass. The first is strong presidential leadership, which ideally should come from a popular president. Reagan was extremely popular in 1985 and 1986, having just been re-elected in a 49-state landslide (it wasn’t until after the tax-reform effort that the Iran-Contra scandal reared its head and his numbers began to slide). Although Reagan was not very immersed in the details of the plan, he did provide mostly consistent public support and gave tax reform major billing in his 1985 State of the Union.

The second condition is that the first draft should be free from political pressures. This allows the draft to set the agenda and to use popular provisions like the state and local tax deductions as bargaining chips to garner support. Don Regan led the Treasury Department in drafting the “ideal tax plan” and his successor as Treasury secretary, James Baker, also put together draft legislation that was mostly free from political pressure.

The third condition is that the plan must be large and sweeping enough that it looks like “real reform.” This gives members an incentive to support it, as they do not want to be seen as beholden to special interests. The 1986 bill certainly did this, especially once Senate Finance Chairman Bob Packwood, R-Ore., introduced the radical two-rate structure that showed senators who had been more focused on preserving their slice of the pie that reform was serious business.

The 1986 bill also benefited from the fact that control of Congress was split between Democrats and Republicans, and the parties were not yet so polarized that they could not work together. A particularly strong alliance formed between relatively liberal tax reformers like Sen. Bill Bradley, D-N.J., and Republican adherents to supply-side economics, who believed that lower rates must be achieved at any cost, even eliminating popular tax breaks. The fact that both parties had an interest in seeing the bill passed encouraged its shepherds to face the wrath of special interests in unison, rather than try to score political points by blaming the other side. It also showed that, despite relatively weak public support for tax reform (which persists today), members do not want to oppose a bill that pings special interests in favor of the everyday taxpayer.

So, keeping in mind the lessons of 1986, what should we expect in 2017 or 2018? The good news for Republicans is that tax reform was on the agenda during the Obama presidency and, thus, has received some attention from political elites. During his second term, Obama wanted to work on reforming corporate loopholes, while Ways and Means Chairman Dave Camp, R-Mich., was interested in dropping the top rate into the 25 to 28 percent range by eliminating a large swath of individual loopholes. However, the two proposals mostly stayed in their respective partisan enclaves and never gained traction.

The troubles of tax reform in the Obama years show one of the key weaknesses for the Trump plan: a lack of bipartisan consensus on what to do. Right now, Republicans are focused on giving corporations a tax cut. That is not in the interest of congressional Democrats, which means the GOP must use the reconciliation process to pass a bill on partisan lines. The last time this happened was early in the George W. Bush administration, an effort that focused more on tax-rate reductions than tax-code reforms.

So, if the partisan roadblock can be bypassed with reconciliation, what about the other 1986 conditions?

President Trump is historically unpopular at this point in his term, so that likely won’t boost the chances of reform passing. The effort might also be hurt by the optics of an extremely wealthy president trying to pass a bill that could give him or members of his family tax cuts. The worry that the current proposal is too favorable to the rich already has Republicans talking about keeping the top rate above 39 percent. Trump’s unpopularity could feed into the already lackluster support for tax reform. This is not necessarily detrimental (remember, tax reform was not popular in 1986 either), but having public opinion firmly behind a legislative initiative is never a bad thing.

The second condition for passing the bill is that it be drafted away from political pressures. The contents of the Trump bill are still somewhat unknown, as the Republicans have only released a framework, which is specific in some areas and lacking detail in others. So, we cannot make a judgment on the second condition quite yet. One of the most difficult political sells for Republicans will be the elimination or limitation of the deduction for state and local taxes. These are particularly important for Republicans from high-tax states like New York and California.

Their appeal goes beyond that though. When working on the 1986 bill, a New York coalition to preserve the state and local incentives teamed up with oil and gas interests from Texas to change the bill when it was going through the House. The coalition gained the support of other members because, as it turned out, the state and local incentives had widespread support, even in low-tax states. The coalition received 208 pledges from members who said they would not vote for a bill that eliminated the deduction.

If widespread opposition like this emerges to provisions in the general framework or in the actual bill, it could spell doom for reform. One of the difficulties for tax reform that Gravelle mentioned was that the 1986 bill eliminated most of the low-hanging fruit for loopholes. The ones that remain are popular and will probably have fierce advocates organizing opposition against their repeal.

The third condition is that the bill is sufficiently wide-ranging and appears to be “real reform” instead of a thinly veiled effort to benefit some narrow constituency. This will force members to either vote for it or incur the wrath of the average taxpayer. As of now, it does not appear that the Trump bill has that quality. For one thing, Republicans have very different ideas about what they want out of the bill.

Sen. Bob Corker, R-Tenn., who is retiring after the 2018 elections, has said that he will not support any bill that increases the deficit, a tough sell when initial estimates show the Trump proposal losing trillions. Another recalcitrant Republican is Kentucky Sen. Rand Paul, who claims the Trump plan does not help middle-class voters enough. He showed on the Graham-Cassidy rendition of the ACA repeal that he is not afraid to buck his party, even when it comes to longstanding goals or core principles like tax reform. Arizona Sen. John McCain is demanding that the bill go through regular order and might even lean toward a bipartisan package rather than the 50-vote deal that Republican leaders appear to be eyeing. He opposed the Bush tax cuts in the early 2000s, so his vote is by no means guaranteed. We have not heard much from moderates like Collins and Lisa Murkowski, R-Alaska, both of whom come from poorer states and might not be keen on a bill that favors the rich.

Whatever these individual senators might be thinking, the bill has clearly not yet reached the point of inevitability that the 1986 bill did when Packwood released his two-rate structure.

If history is any guide, the Trump tax reform plan has rough sailing ahead. It seems more likely that Republicans will use the reconciliation process to enact tax cuts without targeting many of the deductions or corporate loopholes that could offset some of the revenue losses.

If anything, 1986 showed us what a herculean effort it is to overhaul the tax code. It’s not impossible, but Republicans will probably need to give more thought to selling the effort to skeptical members and the public before they are able to pass the most sweeping changes the tax code has seen in over 30 years. It might take two years (or two terms and a few more seats in the Senate) before President Trump is able to achieve anything like what Ronald Reagan and the 99th Congress did.

Image by EtiAmmos

Where R Street stands on birth control issues


The following post was co-authored by R Street Policy Analyst Caroline Kitchens.

After Caroline Kitchens wrote about birth control access in August for The Hill, more than a few allies asked us other questions about what we think and where we stand on some related issues. We’re writing this post to clarify what we as an institution think and deal with—and what we don’t deal with—on birth control and related topics.

To put it simply: We think current rules regarding access to many forms of birth control are an example of government overregulation. As such, R Street wants to change them both for their own sake and because it will advance our overall deregulatory agenda. We don’t, however, take institutional positions on related issues, such as health care and abortion.

With regard to birth control, the current regulatory regime is deeply unjust and imposes needless burdens on the vast majority of sexually active Americans. Even though the decision to use birth control (or not) is one of the most private parts of life, access to all hormonal birth control requires a time-consuming, intrusive and often expensive doctor’s office visit. This happens even though consumers are able to self-diagnose the need for the medication (in this case, wanting to avoid unintended pregnancy) and the drugs carry no risk of overdose or addiction. While some risks do exist in hormonal birth control, there is no reason why pharmacists should not be able to deal with those risks on the basis of questionnaires or minor screenings.

OBGYNs and pharmacists themselves support this. Nearly all American pharmacists already can write prescriptions for many types of vaccinations. There’s no reason they shouldn’t be able to write prescriptions for birth control pills as well, as they already can in eight states. In the short term, we’d like to expand pharmacist scope-of-practice to include other hormonal birth control—including the injection, patch and vaginal ring—and look for ways to allow other professionals who are not doctors to write prescriptions for the same.

To those who might suspect that we’re doing this to advance a broader libertarian and deregulatory agenda rather than simply working to expand access to birth control itself…your suspicions are justified. Our ongoing and expanding work on professional regulation convinces us that this might be a good way to get more people talking about different ways that people should be able to make a living without government approval and to draw attention to a particularly egregious and harmful example of regulatory overreach. If this helps spark a conversation that eventually makes it easier for cosmetologists to practice their craft after having learned basic health precautions rather than having attended pointless and expensive classes, we’ll be delighted.

With all of that said, we don’t see why this agenda with regard to birth control and professional regulation should obligate us to take positions on related issues. Besides a few scattered comments on very narrow reinsurance topics, we’ve been silent on those pieces of health legislation that have come before Congress since we opened our doors a little over five years ago.

Insofar as there is to be a system that specifies a mandatory benefits package and requires zero co-pay preventative care, we have no objection to the inclusion of birth control in that package and think it is probably a good idea. Since we are not advocating that birth control be made truly “over the counter,” we do think it should be covered by insurance plans on, at minimum, the same basis as any other similar prescription, even if it doesn’t require a doctor’s office visit.

The broader questions of what the health-care system should be able to look like and how (and if) employers and individuals might shape benefits packages based on personal or religious preferences are outside of R Street’s expertise. In the long term, we might pursue health care as an issue area. But we’re not going to wade into a debate that’s this complicated and consequential without deep expertise on the topic. And we don’t have that right now.

While we might eventually work on health care, R Street will never have an institutional position on abortion, per se, or any other issue that defies a solution that’s primarily economic. In the case of abortion, this is partly a matter of comparative advantage: there are dedicated, sincere, hardworking, well-funded and committed groups on both sides of the debate over the termination of pregnancies. Starting a program devoted to the issue at R Street would not add anything.

Just as importantly, we’re a pragmatic think tank that looks for innovative, market-oriented solutions to problems. The important political debate over abortion, as it involves profound questions of individual autonomy and human life, may not be suited to a market-oriented solution. Trying to point out the advantages of “the market” would not and probably should not convince anybody to change his or her opinions, anyway.

In short, R Street favors faster, better, cheaper access to birth control and doesn’t think getting it should require a doctor’s office visit. We don’t see a need to wade into other related issues to do this and, for the time being, we won’t.

Image by Image Point Fr

Why is Richard Cordray voting on FSOC?


The Financial Stability Oversight Council (FSOC) just made the good decision to remove the designation of the insurance company American International Group as a “SIFI” or “systemically important financial institution.” This was the right call, because the notion that regulators meeting as a committee should have the discretion to expand their own power and jurisdiction was a bad idea in the first place – one of the numerous bad ideas in the Dodd-Frank Act. The new administration is moving in a sensible direction here.

The FSOC’s vote was 6-3. All three opposed votes were from holdovers from the previous Obama administration. No surprise.

One of these opposed votes was from Richard Cordray, the director of the Consumer Financial Protection Bureau (CFPB). Wait a minute! What is Richard Cordray doing voting on a matter of assessing systemic financial risk? Neither he nor the agency he heads has any expertise or any responsibility or any authority at all on this issue. Why is he even there?

Of course, Dodd-Frank, trying to make the CFPB important as well as outside of budgetary control, made its director a member of FSOC. But with what defensible rationale? Suppose it is argued that the CFPB should be able to learn from the discussions at FSOC. If so, its director should be listening, not voting.

Mr. Cordray, and any future director of the CFPB attending an FSOC meeting, should have the good grace to abstain from votes while there.

And when in the course of Washington events, the Congress gets around to reforming Dodd-Frank, it should remove the director of the CFPB from FSOC, assuming both continue to exist, and from the board of the Federal Deposit Insurance Corp. while it is at it, on the same logic.

New Resource: How wealthy are our representatives?


The following blog post was co-authored by Charles Hunt, a doctoral student at the University of Maryland at College Park.

It likely wouldn’t surprise anyone, much less a congressional scholar, to learn that most members of Congress are wealthier than the average American. What might be more surprising is just how much wealthier they are.

According to estimates calculated by the Center for Responsive Politics, the average net worth of a member of the U.S. House is around $8 million. That’s about 116 times as much as the net worth of the average American, which according to the U.S. Census Bureau’s most recent estimate is $69,000.

Even the median net worth of the top 20 percent of Americans, about $630,000, doesn’t come close to the median net worth of a member of Congress (about $880,000).

To help visualize this for individual members of Congress, we’ve put together an interactive graphic that displays data for all members of the 114th House of Representatives for whom data are available, including their state, district, party and estimated net worth. Each circle denotes a representative – the bigger the circle, the more he or she is worth. Hover over each circle to see the name of the representative and his or her net worth, and zoom in and out to get a better view of a region. Use the wealth slider to limit the representatives visible on the map.

Below is a summary graphic showing wealth ranges and the number of members that fall into each range. Nearly half the members of the House are millionaires, and nearly two-thirds are worth more than $500,000.


These net worth figures for members of Congress, given how out-of-sync they are with the wealth of their constituents, should give us pause and lead us to ask some important questions.

  • Why are members of Congress so much wealthier than average Americans?
  • Do voters care about this disparity, and should they?
  • What work experience created this kind of wealth for them, and what kinds of policy implications could this have?

Further analysis of our interactive graphic is likely to spur even more questions.




PRI Podcast: Steven Greenhut’s end-of-session wrap

R Street Western Region Director Steven Greenhut joins Pacific Research Institute’s Another Round podcast to discuss the California Legislature’s housing package, its recent cap-and-trade deal, bills that were overlooked in the recent legislative session and the impact of Proposition 54.

Rep. Tonko cites R Street’s energy research

Rep. Paul Tonko, D-N.Y., cites R Street research on electric reliability at an Oct. 3, 2017 hearing of the U.S. House Committee on Energy and Commerce’s Energy Subcommittee.

Prescribing on-site fuel storage is an unreasonable approach to grid resiliency


The U.S. Energy Department’s proposed rulemaking to the Federal Energy Regulatory Commission (FERC) is, at best, a myopic and inefficient approach to grid resiliency. The proposal prescribes one measure, among many options, to address a single, low-to-medium salience aspect of grid resiliency. That is, it proposes to compensate extended on-site fuel storage as the means to address the security of fuel supply across the power-plant fleet.

Fuel security is an important issue to evaluate objectively, but it has been politicized immensely by rent-seeking interests. This clearly influenced the DOE proposal, which cherry-picked evidence from the insightful DOE technical report issued in August to make its case. The proposal exalts the benefits of fuel-secure nuclear and coal, but ignores the report’s finding of substantial fuel-related outages at coal plants. Many coal plants couldn’t obtain fuel from their own on-site stockpiles because conveyor belts broke and coal piles froze.

Reducing fuel shortages at many power plants is not even a function of whether fuel is stored on-site. The biggest issue with natural gas plants lacking fuel – in the 2014 polar vortex or otherwise – is that they lacked the incentive to firm their fuel supply. Firming a fuel supply could come from on-site storage (e.g., backup oil) as well as off-site delivery, such as contracting for guaranteed pipeline service. Since the polar vortex, market reforms have increased the incentive to firm fuel supplies, and this has improved generator performance during severe weather events. In the PJM Interconnection, the largest grid operator, this largely came in the form of firming gas supplies using third-party marketers, which improves fuel security without increasing on-site supplies. Critically, this came from voluntary actions by the private sector, which creatively chose the lowest-cost ways to improve plant performance that fit their unique set of circumstances.

The DOE proposal cites the polar vortex as a cautionary tale of fuel insecurity. Yet the biggest issue was weather-related outages, as many plants couldn’t operate because temperatures dropped below a plant’s design basis (e.g., external instruments froze). If anything, it’d be more important to ensure weather-secure generation than fuel-secure generation.

Regardless, it shouldn’t be the role of government to compensate plant weatherization, on-site fuel storage or any other measure to possibly improve generator performance directly. Instead, regulators should ensure an incentive structure exists for the economically efficient level of weatherization, fuel assurance improvements and other performance-enhancing measures like improved maintenance. All these measures have costs, and only a well-functioning market should determine which costs are worth incurring to keep the lights on.

A market-based approach to reliability and resiliency values the performance or capability to provide a specific service. It does not explicitly value specific measures associated with performance or capabilities. Dozens of measures improve performance and capability, and the lowest-cost way of doing so is to provide proper incentives and let market participants decide their own course.

In contrast, the DOE proposal would result in compensation for one politically preferred measure. For government to favor a certain measure simply reveals the bias of central planning, which has a track record of raising costs unnecessarily. To the extent that on-site fuel improves generator performance, markets should reward the measure indirectly through a fuel- and technology-neutral paradigm that procures specific services.

Take “black-start” capability, for example. A power plant with black-start ability can start up without power assistance from the grid. This is critical for resilience, as it provides the ability to restore operations in case of a full grid blackout. Procurement of black-start capability predominantly occurs through administrative processes, rather than market mechanisms. Re-examining the determinants of black-start procurement and using a market approach may boost prospects for cost-effective resiliency.

A thoughtful, market-compatible approach to reliability and resiliency, like that recommended in the DOE technical report, is welcome. The current DOE proposal provides an example of what not to do. It is deeply flawed, rushed and anti-competitive. The fallout from the DOE’s proposal will hopefully encourage the administration to reinvent its strategy on resiliency to bolster market performance and empower consumers, rather than undercut them by prescribing actions. In the meantime, FERC must uphold market principles and push forward with an economically sound agenda. FERC only needs to cite the DOE’s technical report as an example of what to do, as it respectfully declines DOE’s political albatross.


James Wallner on the Senate filibuster

On the American Enterprise Institute’s Banter podcast, Peter Hanson and R Street Senior Fellow James Wallner discuss the Senate filibuster, how it operates, its impact on the Republicans’ agenda and ways to overcome it. The full audio is embedded below:

For connected cars, let the best technology win


Vehicle crashes are the leading cause of death among young people in the United States. It’s therefore crucial that we find ways to improve the safety of our roads if we want to save lives.

However, a proposal currently before the U.S. Transportation Department to mandate that all vehicles use a kind of vehicle-to-vehicle technology known as dedicated short-range radio communications, or DSRC, is the wrong approach to this issue. The mandate would hamper development of competing standards that may work better, in addition to creating potential security vulnerabilities.

Technology-specific mandates are always problematic. As a matter of process, bureaucratic decisionmaking is not well-suited to determine which technology is best for a specific need, or whether a need even exists at all. In the case of DSRC, there are technical reasons why other standards for vehicle-to-vehicle communication may prove more popular.

For instance, standards developed by organizations like 3GPP send signals over lower-band spectrum, which travel further and can penetrate obstacles like buildings or trees better than the high-band spectrum allocated for DSRC. These characteristics likely mean the lower-band spectrum options will be cheaper to deploy than DSRC, since the same area can be covered with fewer antennas. This standard already has broad support from tech companies and carmakers like Ford, Rolls-Royce, Audi and BMW. Mandating the use of DSRC, or any specific technology, would be unwise when the market already provides competitive alternatives.
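
To see why frequency matters for coverage, the rough sketch below applies the standard free-space path-loss formula to DSRC’s 5.9 GHz band and to a hypothetical 700 MHz lower band. It is a first-order approximation that ignores obstacles, antenna gains and power limits, but it illustrates why lower-band deployments can cover the same area with far fewer antennas.

```python
# Rough free-space illustration of why lower-band signals cover more area per antenna.
# The 700 MHz comparison band is a hypothetical example; real-world coverage also
# depends on obstacles, antenna gains and transmit-power limits ignored here.
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in decibels (Friis approximation)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.45

dsrc_mhz = 5900.0      # DSRC operates at 5.9 GHz
low_band_mhz = 700.0   # hypothetical lower-band alternative

# Extra loss a 5.9 GHz signal suffers over the same 1 km path.
extra_loss_db = fspl_db(1.0, dsrc_mhz) - fspl_db(1.0, low_band_mhz)

# For equal path loss, range scales inversely with frequency,
# and covered area scales with the square of range.
area_ratio = (dsrc_mhz / low_band_mhz) ** 2

print(f"Extra free-space loss at 5.9 GHz vs. 700 MHz: {extra_loss_db:.1f} dB")  # ~18.5 dB
print(f"Coverage area per antenna, low band vs. DSRC: ~{area_ratio:.0f}x")      # ~71x
```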

The DSRC mandate also raises security concerns. As security researcher Alex Kreilein notes, adding an interface with computers in other vehicles may improve safety on some counts, but it also creates new vulnerabilities. He argues the DSRC mandate would be especially risky in that it would create a monoculture in which all vehicles use the same technology. Compromising one car could, in fact, compromise all of them.

Kreilein further explains that it is dangerous to concentrate essential safety technology in one identifiable spectrum channel, where it can be more easily targeted by bad actors. We should allow the marketplace to consider and ultimately adopt competing standards using a variety of spectrum bands, rather than putting all our eggs in the DSRC basket.

Some developers of self-driving vehicle systems are avoiding the security issues associated with vehicle-to-vehicle communications entirely by designing their products to account for their surroundings without directly communicating with other vehicles. These systems use technologies like cameras, LIDAR, radar and sonar to achieve similar situational awareness without the additional complications. For these vehicles, any mandate would add unnecessary costs and security vulnerabilities, which would result in higher prices and less safety for consumers.

Spectrum for DSRC has been set aside since 1999, with almost nothing to show for it. Spectrum is a scarce resource and letting it remain underutilized has significant opportunity costs. The particular band allocated to DSRC (5.9 GHz) is adjacent to spectrum currently used for Wi-Fi. With demand for wireless bandwidth, including Wi-Fi, on the rise, the Federal Communications Commission could extend the available bandwidth for Wi-Fi to encompass the spectrum currently set aside for DSRC. While the FCC has been exploring ways to share this band between DSRC and Wi-Fi, we could maximize consumer benefits by abandoning the DSRC mandate and allowing the market to dictate how the spectrum should be used.

Thankfully, the DOT appears to be backing off the proposed mandate, moving it to the less urgent status of “undetermined.” The department should close the proceeding completely to create a level playing field that will allow the best technology to win and allocate spectrum to its most valuable uses.

Image by Zapp2Photo

National Flood Insurance Program, zoning, hurricanes: Lessons for lawmakers

In the wake of devastating storms in Texas, Florida, Puerto Rico and the U.S. Virgin Islands, the deeply indebted National Flood Insurance Program almost certainly will be forced to ask Congress to borrow even more money. Senior Fellow R.J. Lehmann took part Sept. 25 in a Capitol Hill discussion hosted by the Cato Institute to discuss ways the program could be reformed — and perhaps, eventually, completely privatized — ahead of its scheduled Dec. 8 expiration.

Full video of the panel is embedded below:

DOE proposal misframes grid resiliency  


U.S. Energy Secretary Rick Perry directed the Federal Energy Regulatory Commission (FERC) Friday to issue a rule to provide immediate cost recovery for power plants with extended on-site fuel supply. Read another way, the proposal is an arbitrary backdoor subsidy to coal and nuclear plants that risks undermining electrical competition throughout the United States.

The U.S. Energy Department proposal leverages a rarely used law that allows the department to propose its own rulemakings to FERC. The proposal, which is notable for its lack of detail, nevertheless calls for FERC to issue a new final rule within 60 days. While DOE has the legal authority to initiate proposed rulemakings, FERC retains ultimate discretion as to how to respond.

DOE’s proposal marks a deeply troubling departure from the thoughtful recommendations in its August technical grid report. That report sought to enhance the performance of electricity markets, whereas this overtly political proposal inflicts an impossible timeframe and concocts a recipe for wounding competitive markets, while potentially imposing billions of dollars in unnecessary costs for consumers.

Proponents of markets, consumer choice and limited government should shudder. Consumers would ultimately bear a hefty and unnecessary bill from any such draconian intervention, which would also raise capital borrowing costs and have a chilling effect on new investment. Proponents of good governance should also cringe, as the proposal calls for an unnecessarily rushed response in a timeframe completely unrealistic to enact reforms through the proper channels. To craft and implement sophisticated market rules requires working through a robust development process, often over the course of two years or more. The 60-day timeframe called for in the proposal is unprecedented.

When it came to the DOE’s technical report, a solid effort by the department’s technical team muted external suspicion of pro-coal and nuclear bias. This DOE proposal instead validates that suspicion. It is neither technically nor procedurally sound and has political fingerprints all over it. Clearly, the thinking behind the proposal bypassed that of the department’s own technical experts. The political proposal does a disservice to prior DOE work, to consumers, to good governance and to competitive markets.

The DOE proposal is long on hyperbole and short on technical backing. It seeks “immediate action” to address the “crisis at hand” as the “loss of fuel-secure generation must be stopped.” Yet there is no crisis, as affirmed by recent electric performance metrics, the latest congressional testimony of the CEO of the North American Electric Reliability Corp. and even the DOE’s own technical report. Critically, motivations for market reforms should never aim to adjust compensation with a pre-determined result. The whole purpose of markets is to let competitive forces determine resource allocations, which lowers costs and allocates risk to the private sector, in contrast to government-determined investments.

Market failures for electric reliability and resilience justify a limited role for government intervention to facilitate competition. Experts traditionally considered grid reliability and resiliency as “common goods,” because suppliers cannot limit receipt of the product to those who pay for it. This induces free ridership and causes chronic underinvestment. Thus, the fundamental issue is ensuring incentive compatibility, where market rules align the economic interests of participants with the efficient and reliable performance of the electric system.

Getting the incentives right begins with ensuring prices accurately reflect supply-demand fundamentals and that there are markets for discrete reliability and resiliency services. The DOE technical report hit this on the head, calling for improvements in energy price formation and valuation of essential reliability services (e.g., voltage support and frequency response), which does not include on-site fuel storage. An exercise that defines discrete products for reliability and resiliency to procure through fuel- and technology-neutral markets is fruitful. The DOE proposal does not call for that.

The proposal is incompatible with sound market economics. It actually promotes a gateway to expand cost-of-service regulation, where government substitutes for competition. Its definition of eligible units – those with a 90-day on-site fuel supply – is arbitrary and has no economic basis. Curiously, some coal plants wouldn’t even qualify. Some hold roughly 30 days of on-site fuel supply; however, many hold 70- to 100-day supplies.

With a splash of hyperbole, the proposal referred to the loss of “fuel secure” resources during the 2014 polar vortex as possibly “catastrophic,” by inaccurately citing the technical report. This doesn’t characterize the nature of temporary bulk power shortages correctly. When bulk demand exceeds supply, grid operators take emergency actions, the most severe being voltage reductions (brownouts) and rotating blackouts. Brief voltage reductions and even rotating 30-minute blackouts are not catastrophic, by any stretch. This is why economic studies reveal consumers would often rather have their power curtailed briefly than pay a hefty premium to keep the lights on.

Prolonged (multiday) power outages can be catastrophic, especially during severe weather. The predominant cause of these sustained outages is damage to transmission and distribution infrastructure – take the recent hurricanes, as an example. They rarely result from power plant outages, let alone those from lack of fuel. DOE’s proposal seeks to take emergency action on, at best, a low-to-medium level resiliency issue.

A resiliency initiative should prioritize mitigating transmission and distribution damage and accelerating restoration. The DOE technical report recommended that grid-resiliency efforts prioritize disaster-preparedness exercises and that NERC and grid operators define resilience criteria and examine resilience impacts. That’s a thoughtful approach, and the exact opposite of the unrefined DOE proposal, with its single DOE-determined criterion for resilience.

A thoughtful resiliency approach would take a market-compatible mindset and recognize that advances in technology have helped enable a degree of product differentiation, where consumers can pay for different levels of reliability and resiliency services. This creates the ability to cease treating aspects of reliability and resiliency as a “common good,” where a central authority substitutes their judgment on behalf of consumers. This prospect to “privatize the commons” creates a great opportunity for the Trump administration to reduce the role of government planning, not to deepen government’s dictation of private services.

Image by Christopher Halloran

Dear Senate, we want more Pai


The following blog post was co-authored by R Street Tech Policy Analyst Joe Kane.

The U.S. Senate will have a chance Monday to reconfirm Ajit Varadaraj Pai for another term as chairman of the Federal Communications Commission, but it will first have to move past some baseless accusations about his suitability for the post that have been hurled the chairman’s way by a few congressional Democrats and political groups who want to block his reconfirmation.

In fact, Pai is arguably the most well-qualified chairman the FCC has had in recent years. Arguments to the contrary amount to a smokescreen for underlying disagreements with the market-oriented policy decisions Pai and his fellow commissioners have been pursuing at the FCC. These arguments should be rejected. We want more Pai.


Hailing from Parsons, Kansas, Pai attended Harvard University and the University of Chicago Law School before embarking on his illustrious legal career. Pai’s experience includes a federal judicial clerkship in Louisiana, multiple stints at the U.S. Justice Department and the Senate Judiciary Committee, and several years in private practice, first as associate general counsel for Verizon and then as a partner at the law firm Jenner & Block. Pai first joined the FCC in the General Counsel’s Office in 2007 before being nominated by President Barack Obama to be a commissioner in 2011. In 2012, he was confirmed by the Democratic-controlled Senate by unanimous voice vote.

During his time as commissioner, Pai consistently pursued market-oriented policies and opposed expansive, heavy-handed regulation. It therefore should be no surprise that he has worked to implement these same policies as chairman. Additionally, Pai has prioritized closing the “digital divide,” incorporating rigorous cost-benefit analysis into agency rulemakings and implementing unprecedented transparency reforms, like publishing all pending orders on the FCC’s website three weeks prior to a vote. Pai’s actions prove he is an able public servant truly dedicated to pro-consumer policies.


Nonetheless, political opponents and activist groups are staunchly opposed to Pai’s FCC agenda. These groups have launched an all-out assault against the reconfirmation vote, forcing Senate Republicans to invoke cloture to even get a vote on Pai, which is scheduled for next Monday. The same senators who thought Pai was well-qualified when nominated as a commissioner should take the same view now.

While Senate Democrats may disagree with the policies Pai and his fellow Republican commissioners are advancing at the FCC, blocking a qualified public servant from office is not the proper response. Telecom policy is hugely important to all Americans, so it shouldn’t be relegated to bureaucratic rulemakings and squabbles over nominations. Ongoing debates over closing the digital divide and protecting net neutrality are vitally important. We need our leaders in Congress to pursue bipartisan legislation to settle these debates, not hold the current FCC Chairman hostage.


Cheesy dance moves aside, he is the best man for the job.

Image by Mark Van Scyoc

Things are getting weird in pipeline country


In an environment that only a lawyer looking for billable hours could love, federal courts are making a mess of executive branch guidance concerning whether federal agencies need to consider “indirect” climate effects when regulating pipeline construction.

The Obama administration in August 2016 finalized guidance on how agencies should consider climate change in project reviews. The guidance said federal agencies must consider the larger impact of greenhouse-gas emissions that occur from energy projects when completing their National Environmental Policy Act (NEPA) analyses.

The decision formalized executive action that President Barack Obama had informally created when he denied construction of the Keystone XL pipeline on climate-change grounds in November 2015. Obama then signed the United States up to substantial cuts in its greenhouse-gas emissions during the Paris Climate Accords in December 2015 and it all made sense.

But that was before Donald Trump came to town. In March, the White House rescinded the Obama guidance via an executive order, and in June, Trump announced the United States would leave the Paris Accord by the end of his first term. For outside observers, this would seem to shut down the possibility of the government taking climate change into consideration until at least another Democratic administration.

But this turns out not to be the case. For the last decade or so, some federal courts have rejected projects that the courts felt hadn’t taken the potential damage of indirect climate emissions into account. This gives plaintiffs the ability to argue to courts that there is legal precedent for blocking permits, even if the executive branch in charge of the permits changes hands and reverses the policy. The legal issues have never reached the U.S. Supreme Court for final adjudication.

The political battle over natural gas pipelines is where the sniper fire is hottest right now.

In August, the U.S. Court of Appeals for the D.C. Circuit ruled that the Federal Energy Regulatory Commission should have considered the impact of climate change in deciding whether to approve a 500-mile natural gas line serving the Southeast. It ordered FERC to redo the analysis.

But FERC, which is responsible for siting all interstate natural gas pipelines, has for years fought against including indirect emissions into its environmental analysis. Now, newly staffed with a majority of Republican commissioners appointed by Trump, FERC doesn’t look to be backing down.

On Sept. 15, FERC overruled New York State’s Department of Environmental Conservation (DEC), which had blocked an eight-mile extension of the Millennium Pipeline in upstate New York under its Clean Water Act authority. New York, which has banned hydraulic fracturing, argued in its rejection letter that FERC had earlier “failed to consider or quantify the indirect effects of downstream [greenhouse gas] emissions in its environmental review of the project.”

While pipeline builders were pleased with the FERC decision, the agency only overruled the state authority on a technicality, arguing that New York waited longer than the 12-month window allowed under statute before rejecting the application.

Two other pipeline companies have said they would seek similar waivers from FERC after being blocked by DEC using the same Clean Water Act authority. Yet it is unclear whether the same procedural violations have taken place, and courts have not supported FERC’s assertion that it shouldn’t take project emissions into account.

This means the Obama administration’s climate guidance is still operating through the U.S. court system, even though the Trump White House has rescinded it.

Again, things have gotten strange regarding pipeline siting in the United States – so much so that only a decision by the U.S. Supreme Court will likely straighten the rules out.

Image by Kodda

Rep. John Ratcliffe on the Separation of Powers Restoration Act

Earlier this year, Rep. John Ratcliffe, R-Texas, introduced the Separation of Powers Restoration Act. Unlike some bills, the act’s title precisely encapsulates its purpose: restoring balance to our system of separated powers.

As close observers of our political system know well, the power of the modern presidency has grown dramatically relative to that of Congress. While Congress itself deserves much of the blame for this state of affairs by over-delegating its powers to the executive branch, the third branch of our system has also been complicit. Under the judicial doctrine known as “Chevron deference,” the federal judiciary has systematically deferred to executive agencies when it comes to interpreting laws.

As R Street has noted previously, Chevron deference has become increasingly controversial in the legal community:

[Chevron deference means that] unless an agency’s interpretation of a statute is unreasonable, courts must adhere to it. Unsurprisingly, this allows agencies significant leeway to exercise their regulatory powers.

This level of deference to agency interpretations … has become contentious. There continues to be an ongoing debate among judges, legal scholars and practitioners about the propriety of according federal agencies such broad deference.

Rep. Ratcliffe’s bill addresses this issue by calling for an end to such deference; in its place, courts would review agency actions de novo (“from the beginning”). We recently spoke with Rep. Ratcliffe about his bill, which he feels would provide an “immediate and profound” step forward in the effort to rein in the executive branch. As Ratcliffe put it, Chevron deference gives agencies the ability to “grade their own paper,” since their interpretation of statutes within their jurisdiction usually prevails in court.

For Ratcliffe, eliminating judicial deference to agency legal interpretations strikes at the very heart of our constitutional framework. “The wisdom of the founding fathers was that there would be a system of checks and balances,” Ratcliffe notes. “This is what Chevron deference has thrown out of balance; it should be the legislature that writes the laws, not agencies.”

Despite the relatively simple nature of his bill—its entire text barely exceeds 150 words—it remains controversial. Ratcliffe notes, however, that a version of the bill passed the House with at least some bipartisan support from several Democrats. According to Ratcliffe, President Donald Trump has also been receptive to the bill, which puts the ball squarely in the Senate’s court.

Given the Senate’s busy calendar, it’s anyone’s guess whether it will take up and pass the Separation of Powers Restoration Act. But those interested in checking the growth of the executive branch will certainly be keeping watch.

Sen. Graham has a good idea on climate change: Here’s how to do it


Sen. Lindsey Graham certainly likes to be in the middle of things. The South Carolina Republican took time away from Washington, D.C., where he had been trying to shepherd passage of a major health care bill, to tell an audience that “a price on carbon – that’s the way to go in my view.”

Graham has been here before. Back in 2010, he was in the thick of negotiations over a national carbon-trading system that broke down when the Senate couldn’t find enough votes. Graham actually called for climate-change legislation during the 2016 election, but had not mentioned a price on carbon explicitly until just last week.

Meanwhile, the Republican Party and its voters have continued to move further away from promoting any climate change solution, even as Graham remains consistent in his belief that CO2 emissions generated by man are warming the earth.

Graham is completely correct that a carbon tax is the best way to control greenhouse gas emissions with as little impact as possible on the national economy. Many economists believe a carbon tax would be a much more efficient and elegant way to encourage cuts in carbon emissions than alternatives like a trading system or command-and-control regulation. Placing a fee on carbon would be more transparent, can be done with fewer transaction costs and would keep Wall Street from gaming a complex, opaque marketplace.

But the details matter. If a carbon tax merely served as a new source of revenue to fund wasteful government spending, it would be of dubious value. Any proposal to institute a carbon tax must not expand the overall size and scope of government, and ideally, should actually shrink it.

To be successful, a carbon tax should be revenue-neutral — that is, the revenue generated by the tax should be paired with cuts to taxes that are even more economically damaging. For example, R Street has proposed eliminating the corporate income tax altogether in combination with a meaningful carbon tax. A number of studies have shown that such a trade-off would boost conventional economic growth, in addition to cutting pollution.

Moreover, any carbon-tax plan ought to pre-empt existing regulations of greenhouse gases. Because a carbon tax is layered on top of the retail cost of any fuel, it captures the full externalized cost of the pollutant, meaning no additional regulatory costs should be imposed on companies or consumers.
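
To illustrate how transparently a carbon price flows through to fuel prices, the sketch below converts an assumed $40-per-metric-ton carbon price into an added cost per gallon of gasoline, using the commonly cited estimate of roughly 8.9 kilograms of CO2 released per gallon burned. The carbon price is a hypothetical example, not a rate proposed here or by Sen. Graham.

```python
# Illustrative only: translate a carbon price into an added cost per gallon of gasoline.
# The $40-per-ton price is an assumed example, not a rate proposed in this piece.
CO2_KG_PER_GALLON_GASOLINE = 8.9   # approximate combustion emissions per gallon
KG_PER_METRIC_TON = 1000.0

def added_cost_per_gallon(carbon_price_per_ton: float) -> float:
    """Added fuel cost per gallon for a carbon price in dollars per metric ton of CO2."""
    return carbon_price_per_ton * CO2_KG_PER_GALLON_GASOLINE / KG_PER_METRIC_TON

print(f"${added_cost_per_gallon(40):.2f} per gallon at a $40/ton carbon price")  # about $0.36
```

Because the charge shows up directly in the price at the pump, both the cost and the incentive to cut emissions are visible to consumers, which is part of what makes a tax more transparent than a cap-and-trade system.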

This means that much, if not all, of the administrative state apparatus created to control hydrocarbon pollution would have to be eliminated as a prelude to carbon pricing. These policies include pre-empting any future regulations of greenhouse gases under the Clean Air Act (CAA). It’s also possible a slew of other regulatory authorities would be on the chopping block, as well.

Sen. Graham doesn’t appear likely to revive that Republican health care bill from the dead, but perhaps he could still tempt fate and resurrect a carbon tax.

Image by arindambanerjee


Can police predictions create crime?

Technology has the power to make a lot of things better – including police work and crime-fighting. But it also has the power to “create” crime where it didn’t exist before. In a recent interview with the Brian Gongol Show on WHO Newsradio 1040 in Des Moines, Iowa, R Street Justice Policy Director Arthur Rizer explains how predictive policing can help or hurt the very communities that need the work of “peace officers” the most. Full audio of the piece can be found at this link.

Congressional procedure and policymaking

At a recent gathering of the Legislative Branch Capacity Working Group, Molly Reynolds, a fellow at the Brookings Institution, led a discussion on congressional procedures and their impact on policy creation and outcomes. Topics discussed include how procedures, especially in complicated situations like reconciliation, empower leaders versus rank-and-file members, and what should be done to increase staffers’ knowledge of procedures and their consequences.

Full video of the panel is embedded below.

How does the United States rank in homeownership?


There are a lot of different housing-finance systems in the world, but the U.S. system is unique in being centered on government-sponsored enterprises. These GSEs—Fannie Mae and Freddie Mac—still dominate the system even though they went broke and were bailed out when the great housing bubble they helped inflate then deflated.

They have since 2008 been effectively, though not formally, just part of the government. Adding together Fannie, Freddie and Ginnie Mae, which is explicitly part of the government, the government guarantees $6.1 trillion of mortgage loans, or 59 percent of the national total of $10.3 trillion.

On top of Fannie-Freddie-Ginnie, the U.S. government has big credit exposure to mortgages through the Federal Housing Administration, the Federal Home Loan Banks and the Department of Veterans Affairs. All this adds up to a massive commitment of financing, risk and subsidies to promote the goal of homeownership.

But how does the United States fare on an international basis, as measured by rate of homeownership? Before you look at the next paragraph, interested reader, what would you guess our international ranking on homeownership is?

The answer is that, among 27 advanced economies, the United States ranks No. 21. This may seem like a disappointing result, in exchange for so much government effort.

Here is the most recent comparative data, updated mostly to 2015 and 2016:


Advanced Economies: Homeownership Rates
Rank Country Ownership Rate Date of Data
1 Singapore 90.9% 2016
2 Poland 83.7% 2015
3 Chile 83.0% 2012
4 Norway 82.7% 2016
5 Spain 77.8% 2016
6 Iceland 77.8% 2015
7 Portugal 74.8% 2015
8 Luxembourg 73.2% 2015
9 Italy 72.9% 2015
10 Finland 71.6% 2016
11 Belgium 71.3% 2016
12 Netherlands 69.0% 2016
13 Ireland 67.6% 2016
14 Israel 67.3% 2014
15 Canada 67.0% 2015
16 Sweden 65.2% 2016
17 New Zealand 64.8% 2013
18 France 64.1% 2015
19 Mexico 63.6% 2015
20 United Kingdom 63.5% 2015
21 United States 63.4% 2016
22 Denmark 62.0% 2016
23 Japan 61.7% 2013
24 Austria 55.0% 2016
25 Germany 51.9% 2015
26 Hong Kong 48.9% 2017
27 Switzerland 43.4% 2015

Sources: Government statistics by country

It looks like U.S. housing finance needs some new ideas other than providing government guarantees.

Image by thodonal88

Hurricane Harvey isn’t about climate change, it’s about bad federal policy

In the wake of Hurricane Harvey, many have questioned the roles played by climate change and Houston’s loose zoning rules in the devastation that faced America’s fourth-largest city. R Street Senior Fellow R.J. Lehmann sat down with Nick Gillespie of the Reason podcast to discuss how explicit government policy encourages people to live in harm’s way and what can be done to reverse that trend. The full audio of that conversation is embedded below.

Section 230: When should online platforms be liable for the unlawful activity of their users?

When should online platforms be liable for unlawful activity? Section 230 of the Communications Decency Act (CDA 230) generally immunizes online platforms from liability when users engage in unlawful activity, but there are several exceptions to that immunity. Still, some websites have successfully hidden behind CDA 230 while sex traffickers and other criminal enterprises run rampant on their platforms. In response, several bills have been introduced in Congress that would narrow the scope of CDA 230’s immunity and expand potential liability for online platforms that harbor unlawful activity. A panel of legal and policy experts discusses the current scope of CDA 230 and what impacts the proposed amendments would likely have on law enforcement, victims of sex trafficking and the internet ecosystem writ large.


Elizabeth Nolan Brown, Associate Editor, Reason Magazine

Arthur Rizer, Director of National Security and Justice Policy, R Street Institute

Berin Szóka, President, TechFreedom

Jeff Kosseff, Assistant Professor, United States Naval Academy Center for Cyber Security Studies

Stacie Rumenap, President, Stop Child Predators

Mary Graw Leary, Professor of Law, Catholic University of America

Taina Bien-Aimé, Executive Director, Coalition Against Trafficking in Women (CATW)

Arthur Rizer talks jail reform on KJZZ

R Street Justice Policy Director Arthur Rizer appeared recently on KJZZ, a National Public Radio affiliate in Phoenix, Arizona, to discuss how reforms to the nation’s jail system can be the key to safer communities. Audio of the story is embedded below.

How supporting internet freedom in Cambodia makes America great


I’ve had the privilege of working on internet freedom issues in a range of foreign countries, but none of my partnerships abroad has meant more to me than my work in Cambodia. That is perhaps what you’d expect once you learn that, in the course of this work over the past three years, I met Sienghom, who became my wife just this summer.

I’ve written about my internet work in Cambodia here before. And I think Freedom House’s 2015 assessment that the internet “remains the country’s freest medium for sharing information” still holds true. That’s why I’ve generally been optimistic about Cambodia’s prospects for increasing internet freedom and democracy, as well as its increased engagement with the pan-Asian and world economies, which should lead to higher standards of living in the country generally.

It’s also why I was particularly troubled when Sienghom pointed out to me a range of disturbing news items emerging from Phnom Penh, starting just last month and continuing into this past week. The bad news started with the Cambodian government’s decision to shut down the U.S. Agency for International Development-funded National Democratic Institute in late August. NDI has focused on offering training and workshops for Cambodian politicians and would-be public servants—both in the majority Cambodian People’s Party (CPP) and in the opposition Cambodia National Rescue Party (CNRP)—aimed at enabling stakeholders to function effectively and democratically in a government framework that has been edging (thanks in part to internet engagement) toward a more truly representative parliamentary democracy. In response, USAID and the U.S. State Department both expressed their disappointment, while Cambodian Prime Minister Hun Sen—who in earlier decades sought to thaw U.S.-Cambodia relations—has ramped up criticism of the United States and of USAID in particular.

In August, The Cambodia Daily, an English-language independent newspaper, quoted University of New South Wales politics professor Carl Thayer on these latest trends: “[a]t this point, it looks like the U.S. is losing leadership by default and China’s gaining it by design.” But this past week, The Cambodia Daily itself was shut down, ostensibly for tax reasons. This represents a new wave of government actions designed to quell not just dissent, but any criticism whatsoever. In the same few days, the government arrested CNRP leader Kem Sokha, who is now charged with treason.

As Thayer remarked to The New York Times, “The current crackdown is far more extensive than ‘normal’ repression under the Hun Sen regime.”

But what’s been triggering this latest wave of repression in a country that, as a U.S. ally, has been inching, not always steadily, toward democracy in recent years? Longtime observers will point you first to the last round of elections in 2013; as I wrote here in 2015:

It hasn’t helped the current government’s sense of insecurity that the 2013 Assembly election was marked by civil protest, which the government is inclined to blame, along with its slipping majority, on the rise of social media like Facebook, where individual Cambodians have felt free to share their political views.

But there’s another, more recent factor at work—namely, the messages the Trump administration has been sending to Cambodia’s leadership. One obvious message, per a report in the Phnom Penh Post, is the administration signaling its intent to cut foreign aid to Cambodia to zero. Another is President Trump’s often antagonistic relationship with the American press, which Hun Sen interprets as legitimizing his own treatment of the Cambodian press.

President Trump’s relationship with American journalists may not improve anytime soon, but the president could reconsider whether to cut aid entirely. Understandably, Americans who feel they didn’t adequately benefit from the post-2008 economic recovery may favor the administration’s expressed commitment to disengage from (or at least reduce) the United States’ longstanding commitments both to our allies and to an international order aimed at increasing peace and promoting progress. The current “America First” foreign policy—combining promises of military strength with renegotiated trade deals—certainly resonated with these voters.

But there’s also a risk that disengagement from the role we’ve played in the international framework projects weakness rather than strength. That’s a message that can undercut the administration’s goal of a world that is “more peaceful and more prosperous with a stronger and more respected America.”

We may debate whether North Korea’s current in-your-face attitude about its nuclear weapons program has been improved or worsened by President Trump’s “fire and fury” threat last month. What’s less debatable is that the perception in many foreign countries is that the United States intends, if not to exit the world stage, then to reduce its role to a walk-on part. Whatever else that does, it doesn’t give the impression of a stronger, greater America.

Image by atdr


Prominent carbon tax skeptic admits it could increase economic growth


A lot of writing opposed to carbon taxes is, frankly, not of high quality. But there are exceptions. Bob Murphy, an economist with the Institute for Energy Research, has written some of the strongest and most sophisticated arguments for carbon-tax skepticism. So it was with interest that I read his latest broadside on the subject in the Canadian Free Press.

In the piece, Murphy focuses his ire on what might be called the nonenvironmental case for carbon taxes. Even if climate change were a hoax invented by the Chinese, a carbon tax still might be a net benefit to the economy if it allowed for cuts to more economically damaging taxes. As Murphy summarizes the case:

[W]hen proponents of a carbon tax pitch it to American conservatives and libertarians, they explain that if we have a revenue-neutral carbon tax where 100% of the proceeds are devoted to cutting taxes on capital, then reputable models show that this could boost even conventional economic growth, in addition to whatever environmental benefits accrue from reduced greenhouse gas emissions. This is called a ‘double dividend’ that arises when policymakers began to ‘tax bads, not goods.’

This sounds reasonable. And R Street has, of course, argued for swapping the corporate income tax for a carbon tax on precisely these grounds. But would it really work?

To show the limitations of the “double dividend” argument, Murphy highlights a chart from a 2013 analysis by Resources for the Future (RFF), showing the economic impact of instituting a carbon tax and using the revenue to reduce various other forms of taxation.

[Chart: Resources for the Future (2013) estimates of the GDP impact of a carbon tax under alternative uses of the revenue]

As the chart shows, it matters a great deal which tax you swap out for a carbon tax. Using carbon tax revenues to fund reductions in consumption taxes, for example, would be a net negative for the economy. Swapping a carbon tax for cuts to taxes on labor would have a smaller but still negative effect. And simply returning the money to people in the form of lump-sum payments would be worst of all.

But look at the blue line. If carbon tax revenues were used to cut taxes on capital, this would result in a net increase in gross domestic product. Murphy himself acknowledges this, stating that “the RFF model shows that only if carbon tax revenues were devoted entirely to a corporate income tax cut would the economy’s growth rise above the baseline.”

That’s overstating things a bit. For example, just eyeballing the chart, it looks like a plan that used half of the revenue from a carbon tax to cut taxes on capital and the other half to cut taxes on labor would still be a net positive for economic growth, albeit not as much of a positive as if all the money went to cutting capital taxes. I’m not saying that R Street would favor such a split, just noting that you could still end up ahead economically even if not all the money from the carbon tax went to cutting taxes on capital.
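To make that back-of-the-envelope point concrete, here is a minimal sketch with purely illustrative numbers. They are not taken from the RFF model; assume, for the sake of argument, that a full capital-tax swap lifts long-run GDP by 0.6 percent while a full labor-tax swap trims it by 0.2 percent.

# Illustrative only: hypothetical long-run GDP effects (in percent) of
# recycling all carbon tax revenue into one kind of tax cut.
effect_capital_swap = 0.6    # assumed gain if all revenue cuts capital taxes
effect_labor_swap = -0.2     # assumed loss if all revenue cuts labor taxes

# A 50/50 revenue split is roughly a weighted average of the two pure swaps.
share_to_capital = 0.5
blended = (share_to_capital * effect_capital_swap
           + (1 - share_to_capital) * effect_labor_swap)
print(round(blended, 2))  # 0.2: still positive, though smaller than 0.6

The magnitudes here are made up; the only point is that a blend weighted toward capital-tax cuts can stay above zero even when part of the revenue goes to a less growth-friendly use.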

And remember, the above analysis assumes no benefits to the economy from limiting climate change. To the extent one thinks there are risks from climate change that taxing carbon emissions could mitigate, the case becomes even stronger.

So why isn’t Murphy on board with swapping carbon taxes for capital taxes? Basically because he doesn’t think it’s politically realistic:

There is no way in the world that a massive new U.S. carbon tax is going to be implemented, in which all of the new revenues are devoted to cutting corporate income taxes… We can see that the ‘fashionable’ proposals that are anywhere close to actual political proposals do not consist entirely of tax cuts on corporations. For example, the recent Whitehouse-Schatz proposal, unveiled at the American Enterprise Institute, is ostensibly revenue neutral. Furthermore, one of its features is a reduction in the corporate income tax rate from 35 to 29 percent. So far, this sounds like it’s a ‘pro-growth’ measure, right?

But hold on. The Whitehouse-Schatz proposal would also use its revenues to fund a reduction in payroll taxes (but it is a flat $550 tax credit, so it lacks ‘supply-side’ incentives and acts as a lump-sum check), and to allocate $10 billion annually in grants to states to assist low-income people who will be hit the hardest by higher energy prices.

Murphy is right that the Whitehouse-Schatz proposal is flawed (we’ve written about why here). But I’m a bit surprised to hear him dismiss ideas on the grounds that they aren’t politically realistic. Murphy is an anarchist (not that there’s anything wrong with that). His preferred solution on climate is to abolish the government and have a system of private sector judges work everything out. Whatever the merits of that idea, I would submit it’s at least as unlikely as swapping a carbon tax for cuts to the corporate income tax.

More generally, lots of political ideas start out being unrealistic, only to become law later. People who advocate for Social Security privatization or drug legalization probably recognize the uphill struggle they face in advancing their views, but that hardly means they should just give up. As Milton Friedman famously said, the basic function of a policy advocate is “to develop alternatives to existing policies, [and] to keep them alive and available until the politically impossible becomes the politically inevitable.” I happen to think the time is a lot closer for revenue-neutral carbon taxes than Murphy probably does. But it’s only going to happen if people make the case.

The great Texas gas shortage


The great Texas gas shortage of September 2017 is over. But did it ever happen?

For me, it all began last Friday morning. As I was driving to my local coffee shop, I passed a gas station with a line of cars stretching out into the street. The next station I passed was even worse, with lines stretching around the block. The third station I passed had no line: it was out of gas completely. By the time I returned from my coffee run, the first two stations were out too.

The scene I witnessed that morning was playing out all over central and north Texas, as worries about supply disruptions from Hurricane Harvey led to the gasoline equivalent of a bank run. Worries that stations would soon run out of fuel became a self-fulfilling prophecy, as a cycle of panic buying caused shortages, leading to even more panic buying.

Soon, almost everywhere was out of fuel. One friend had to abandon their car part way between San Antonio and Austin because they couldn’t find gas. Another described the “post-apocalyptic” feel at a Buc-ee’s mega-gas station, which continued to be just as full of people as normal, but with empty pumps.

Public officials took to the airwaves to reassure people that there were no gas shortages. Whether this was true is mainly a matter of semantics. Claims that there was no shortage were correct in the sense that there hadn’t been a major disruption in supply. Texas is a big state, and much of the affected regions had escaped serious flooding. While some refineries were offline temporarily due to the storms, there was still plenty of fuel flowing.

The real problem was not falling supply so much as a spike in demand. Some of this spike was due to sheer stupidity (pictures circulated on the internet of people filling up garbage cans with gasoline; hint – don’t be that guy!). But this was only part of the problem. A bigger issue was a shift in demand. People normally wait to refill their gas tanks until they are mostly empty. Depending on the type of car and how much it gets used, a typical person might go a week or more between fill-ups. Gas stations thus ordinarily only need enough gas on any given day to fill the tanks of a small fraction of the local population.

The concerns over fuel shortages pulled much of that demand forward. Instead of waiting until the fuel light went on, people decided to fill up with half a tank or more remaining.
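A rough, purely hypothetical calculation shows how quickly that pulled-forward demand can swamp a station's normal daily volume. Suppose, for the sake of argument, that one driver in seven fills up on a normal day, buying about 12 gallons, while in a panic 60 percent of drivers top off with about 7 gallons each.

# Hypothetical illustration of demand being pulled forward (not real Texas data).
drivers = 100_000

# Normal day: roughly 1 in 7 drivers refuels, buying a near-empty tank's worth.
normal_gallons = (drivers / 7) * 12

# Panic day: most drivers top off early, each buying a smaller amount.
panic_gallons = (drivers * 0.60) * 7

print(panic_gallons / normal_gallons)  # about 2.45: well over double normal demand

Even with supply unchanged, a station stocked for the normal day runs dry well before the panic day is over.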

In a situation like this, what is collectively irrational can be individually rational. In fact, keeping a cooler head in such circumstances can leave you worse off, as the race goes to the swift. Luckily, in this case, the situation was short-lived. It stabilized after a few days and, by Tuesday, things were mostly back to normal. The experience, however, does not bode well for what might happen in the case of a real shortage.

There is, of course, a simple way to avoid fuel shortages when you have rising demand and steady or falling supply: raise prices. Higher prices would have encouraged people to conserve fuel and might even have blunted the cycle of panic buying in the first place. Higher prices also would have served as a signal to bring in more fuel to meet the higher demand. One of the strange features of the whole situation for me was how little the price of gas increased, given the lengths to which people went to get it.

The explanation is admittedly obvious. Stations were reluctant to raise prices lest they be charged with price gouging. Laws against gouging are supposed to protect consumers but, like all forms of price control, they can easily end up making consumers worse off by denying them access to the product at any price. It’s something to consider as we look to the likely strike of Hurricane Irma this weekend, and all the other storms in the months and years to come.



It’s crucial that STB noms support railroad deregulation


The Surface Transportation Board, a federal agency with broad authority over the nation’s railroads, is currently weighing a petition that could undo most of the progress made since railroad deregulation in the early 1980s. That makes it particularly crucial that the Senate think long and hard about two pending appointments to the STB, which are set to come before the Committee on Commerce, Science and Transportation in the near future.

Formed in 1996 as a successor to the Interstate Commerce Commission, the STB interprets laws, promulgates rules and settles disputes related to railroads. It’s crucial that it be run by people who understand the need for a light regulatory touch, because the industry that it oversees has been a poster child of the power of deregulation.

Congress achieved that substantial railroad deregulation with the Staggers Rail Act of 1980, which eliminated costly rate controls and regulatory review processes that needlessly drove prices upward. The law was an important step to ensure that privately operated railroads could sustain themselves in a competitive manner. In fact, in the decade following the law’s passage, the rail industry was able to cut its costs and prices by half. By some estimates, shipping rates have dropped 51 percent since the reforms went into effect.

But that all could change. Shipping interests who are reliant on moving their goods by rail are seeking a rule that would force railroads to lend their tracks to other railroads. This so-called “reciprocal switching” rule is based on a pair of faulty assumptions.

The first incorrect assumption is that rail lines are public property and should be treated the same as roads; they aren’t, and they shouldn’t be. For the most part, rail lines are owned by private firms. The second bad assumption is that railroads can’t coordinate use of each other’s rail lines on their own, even though they do it all the time.

President Donald Trump hasn’t yet made public his choices for the two STB seats that are set to be filled. It is vital that new members of the STB, whoever they may ultimately be, understand that a reciprocal switching rule would effectively re-regulate our nation’s rails. It is up to the Senate to ensure the nominees understand not only the details of the Staggers Act, but also its intent: to keep U.S. rails free and competitive.

Image by ideal_exclusive

The 9 lives of Richard Posner


The following blog post was co-authored by R Street Senior Fellow Ian Adams.

Love him or hate him, there is no disputing that Judge Richard A. Posner, who retired from the 7th U.S. Circuit Court of Appeals Sept. 2, is a legend of American jurisprudence. Known for his deep knowledge of economic theory, which he regularly wove into his opinions, Posner authored some of this generation’s most profound rulings in the fields of antitrust, copyright and patent.

Named by President Ronald Reagan to the 7th U.S. Circuit Court of Appeals in 1981, when he was just 42, Posner later became the favorite to replace Sandra Day O’Connor on the Supreme Court in 2005. Alas, his ascent to the nation’s highest court did not come to pass. Posner’s outspoken nature and personal disdain for the role of the high court—which he likened to “the House of Lords, a quasi-political body”—scuttled his candidacy before it could move forward in earnest.

Yet from his perch on the 7th Circuit, Posner was able to do more to develop his uniquely pragmatic and economically informed take on jurisprudence than many Supreme Court justices accomplish during their careers. His significance as a jurist is evidenced not only by his more than 3,300 opinions as a member of the federal bench, but also by the fact that he became the most cited legal scholar of the 20th century. In an era defined by “purposive” and “textual” jurisprudence, Posner followed a straightforward approach: find what is right and what is wrong, and express it in colloquial language familiar and accessible to those outside the legal profession.

Naturally, strict constructionists, who aspire to hew closely to the four corners of the U.S. Constitution, saw Posner as everything that is wrong with the third branch of government. His occasionally flippant disregard for the Constitution—once going so far as to say that he saw “no value to a judge” spending any amount of time studying the Constitution’s text—could not have been better designed to trigger outrage from his colleagues and friends on the right.

Perhaps because he was largely unmoored from the past, Posner’s jurisprudence translated well to new frontiers of legal thought. Throughout his career, he was an undisputed champion of user rights in the digital age. In 2012, he wrote that protections for copyright and especially patent had become excessive. His view was simple: when protections provide an inventor with more “insulation from competition” than needed, the result is increased prices and distortions in the market. As more companies seek overly broad patents, the parties who suffer most are consumers.

In his essay “Intellectual Property: The Law and Economics Approach,” Posner spoke openly about his views on limiting copyright terms, the idea/expression dichotomy and fair use, as well as laying out a novel approach to piracy. He maintained that the analogy to “piracy” was born of a misconception that intellectual property is indeed physical property. In Posner’s view, if an individual who was never going to buy a copy of a registered work illegally copies the work, there is no market deficit. It’s only when pirates make and sell copies to individuals who would otherwise buy the work that the copyright owner is harmed. Posner didn’t excuse bad actors, but he applied rigorous cost-benefit analysis to the parties and to judicial economy.

Posner also was a thoughtful academic with a longtime appointment at the University of Chicago Law School, where he was committed to mentoring legal talent. Lawrence Lessig, famous for his work on remix culture and as a co-founder of Creative Commons, once clerked for Posner. Posner has authored three dozen books thus far, on subjects that range from terrorism to sex. He was also co-creator of the Becker-Posner blog, which ran until Nobel laureate economist Gary Becker’s death in 2014. The blog provided an outlet for the two University of Chicago professors to muse over rulings, explore current events and show a human side to their work.

Despite this heady list of accomplishments, the single stance that may garner Posner the most lasting acclaim from law students is his hatred of the citation manual known as The Bluebook. In his essay “The Bluebook Blues,” he wrote—tongue firmly in cheek—that all copies of the style guide should be burned because it “exemplified hypertrophy in the anthropological sense.”

Posner’s legacy will be felt for generations to come. His opinions and his other writings make clear the law is as much a tool for learning as it is a tool for justice.

Pennsylvania should reject unconstitutional internet sales tax


For decades, one of the thorniest issues in all of state government has been how to be even-handed in the tax treatment of merchants who sell from within state borders versus those who market online from other places in the world. Unfortunately, an approach urged recently by the Pennsylvania Senate does not provide that balanced solution.

Under a provision added by the Senate to H.B. 542—a tax reform bill the state House passed in May—any intermediary that even merely facilitates a commercial transaction with a Pennsylvania resident would be required to collect and remit taxes, even if it lacks physical presence in the state. Legislation of this type adopted in other states has been held unconstitutional and should be rejected largely for that reason.

The bill incorporates provisions used by other states in laws that were drafted to challenge U.S. Supreme Court precedent, but this approach is both costly and unlikely to be successful. In South Dakota, a federal court recently enjoined a similar tax-remittance law that sought to extend the state’s taxing power beyond its borders, just as H.B. 542 proposes. Ultimately, by empowering Pennsylvania to collect taxes from businesses with no physical presence in the state, the rule immediately would draw the commonwealth into the potentially expensive and bitter cycle of litigation seen in other states. It’s a cycle unlikely to yield a positive result, because decades-old Supreme Court precedent makes clear that state taxing powers stop at the border’s edge.

This bill also imposes an undue burden on online marketplaces like eBay and Etsy, which are merely virtual storefronts that allow millions of small businesses to reach customers across the globe. H.B. 542 ignores the actual 21st century marketplace and creates new tax and compliance burdens not just on big internet companies, but also on craftsmen and entrepreneurs. It would be like making the King of Prussia Mall or the Millcreek Mall liable for all the sales taxes owed by its tenant stores anywhere in the country. Of course, that would be absurd.

Setting aside the bill’s obvious unconstitutionality, it would be decidedly unwise for Pennsylvania. By contributing to the erosion of borders as effective limits on state tax power, it will encourage poorly governed, tax-heavy states like California, New York and Illinois to unleash their aggressive tax collectors on Pennsylvania businesses and marketplace facilitators. Pennsylvanians could be subject to audit and enforcement actions in states all across the country in which they have no physical presence.

Moreover, citizens of the commonwealth largely oppose this tax grab. In a 2014 poll conducted by R Street and the National Taxpayers Union, overwhelming bipartisan majorities of Pennsylvania Republicans, Democrats, conservatives, moderates, liberals and independents answered “yes” to a question about whether “the internet should remain as free from government regulation and taxation as possible.” Moreover, by a margin of two to one, respondents said they opposed “federal legislation that changes how states collect sales tax from internet purchases.”

The U.S. Constitution was written to replace the Articles of Confederation in no small part because of the latter’s failure to prevent a spiraling internal “war” among states that asserted tax and regulatory authority outside their borders. While the Constitution’s Commerce Clause and subsequent jurisprudence make clear that taxing power must be limited by state borders, this bill seeks to wipe those limits away. The General Assembly should reject this provision and avoid the ensuing legal tangle.

Image by Andriy Blokhin


Massachusetts’ ‘millionaires tax’ is a major misstep


Trying to squeeze more money out of the top income earners is a poor fiscal strategy, whether you’re looking to close budget deficits or to subsidize pet projects. Even where the tactic accomplishes its stated purposes in the short term, soaking the rich creates an unreliable revenue stream that risks driving wealthy residents, and even businesses, to other states with more accommodating tax structures.

The latest revenue-raising proposal out of Beacon Hill falls squarely into that category.

Under the Massachusetts plan, a mere 19,600 tax filers, in a state of nearly 7 million people, would pay a new and higher rate. Of that group, just 900 filers, who are projected to make more than $10 million annually, would be responsible for 53 percent of the new tax revenue, or roughly $1 billion of the additional $1.9 billion projected from the surtax. A smaller fraction still, the top 100 earners in the state, would see their state income taxes rise from an average of $5 million to $9.3 million annually.
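To see how concentrated that revenue base is, here is a minimal back-of-the-envelope sketch using only the figures cited above.

# Figures cited above: who would supply the projected surtax revenue.
affected_filers = 19_600       # filers who would pay the new, higher rate
top_filers = 900               # filers projected to earn more than $10 million
total_new_revenue = 1.9e9      # projected additional revenue ($)
revenue_from_top = 1.0e9       # rough portion projected from the top 900 ($)

print(round(top_filers / affected_filers, 3))         # ~0.046 of affected filers...
print(round(revenue_from_top / total_new_revenue, 2)) # ...supply ~0.53 of the revenue

In other words, under 5 percent of the filers touched by the surtax would be counted on for more than half the money, which is why the relocation risk discussed below matters so much.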

The additional revenue is slated to fund transportation infrastructure and the commonwealth’s educational systems, but the promise of support rests on the none-too-certain assumption that the residents subject to the surtax will actually pay these higher rates for the privilege of continuing to live in the Bay State.

Analysis by the Massachusetts Taxpayers Foundation (MTF)—the state’s pre-eminent public policy organization dealing with state and local fiscal, tax and economic policies—found that if just one-third of the 900 tax filers projected to make more than $10 million annually were to relocate, total income tax revenues would drop by approximately $750 million. Such a shift would blow a hole in the budget.

There is precedent for exactly this dynamic. Massachusetts enjoyed a windfall when General Electric moved its headquarters north from Connecticut, and the reason for the move was clear enough: the Tax Foundation, an independent tax-policy nonprofit, ranks Connecticut 43rd of the 50 states in terms of tax climate (Massachusetts ranks 27th). But one cannot help but wonder whether GE would have made the move had the “millionaires tax” been pending, as it is now.

The proposal lacks not only policy wisdom but also a firm legal foundation; indeed, there is an open question about its constitutionality.

As written, the proposal violates the state Constitution because it is, in fact, a budget appropriation. Article XLVIII (48) of the Massachusetts Constitution lays out the guidelines for ballot initiatives and prohibits the use of such initiatives to make specific appropriations. Article 48 also mandates that ballot initiatives must have a common or related purpose. Education and transportation are unrelated matters, just as raising and appropriating funds are two separate actions. As written, this measure unconstitutionally binds voters who might want to vote for increased revenue generation, but would like it spent differently. With no precedent for this situation, it may be destined for a lengthy judicial controversy.

If, somehow, the initiative were to become law and survive judicial scrutiny, the people of Massachusetts would have real trouble undoing their mistake. Because the tax, as contemplated, would be passed in the form of a constitutional amendment, it would take a subsequent amendment to undo the millionaires’ tax. That would involve legislative approval of a subsequent constitutional amendment and a vote on the next general election ballot. In fact, should the initiative pass, the earliest a change could be made would be Jan. 1, 2023.

The need for flexibility is amplified in a region in which residents can travel from one state to the next in a matter of minutes. Consider Massachusetts’ neighbor, New Hampshire. The Tax Foundation ranks the Granite State seventh on the list of the 10 states with the best overall tax climates. New Hampshire politicos are not naïve. They certainly would work to capitalize on this misstep in “Taxachusetts” in a manner that should be familiar to our neighbors in Connecticut.

Massachusetts, and state legislatures across the country, should stop looking to the wealthy to solve budget and infrastructure woes. Even the best-laid plans have unintended consequences, and targeted tax hikes on a state’s highest earners can be disruptive to businesses and individuals alike.