R Street hosts dinner on justice reform in Nashville

As part of a continuing series of stakeholder discussions aimed at addressing justice reform movements in states and localities nationwide, R Street’s justice and national security policy director Arthur Rizer held a salon dinner in Nashville on Dec. 7, during ALEC’s States and Nation Policy Summit.

The event brought together representatives and justice policy experts from both state and national think tanks, advocacy groups, and foundations spanning the political spectrum to discuss ongoing and future reforms in Nashville.

The discussion focused on four agenda items. First, participants examined pretrial reform and addressed common barriers to jail reform across municipalities and state legislatures. Policing reform, particularly the implications of militarization, and juvenile justice issues were also major topics of interest. Finally, the dialogue centered on how best to cement existing reforms in the face of “tough on crime” backlash.

Attendees included: Lauren Krisai (Reason Foundation), Craig DeRoche and Kate Trammell (Prison Fellowship), Julie Warren (Right on Crime), Jenna Moll (Justice Action Network), Ron Shultis (Beacon Center of Tennessee), Michael Leland and Ken Hardy (Pew Charitable Trusts), Cameron Smith, Ian Adams, and Alan Smith (R Street Institute), Brianna Walden (Charles Koch Institute), Sal Nuzzo (James Madison Institute), and Daniel Dew (Buckeye Institute).

 

World Trade Organization conference could be consequential for e-commerce

* This piece was co-written by Farzaneh Badiei, who serves as the Executive Director of the Internet Governance Project.

Introduction

The World Trade Organization (WTO) will hold a ministerial conference from Dec. 10 to 13. This conference could be highly consequential for e-commerce and internet governance.

The WTO has discussed the e-commerce work program at every ministerial conference – which typically occurs every two years – since 1998. It has not, however, advanced any e-commerce-related discussions, and the discussions that have occurred have not produced binding outcomes.

For the WTO to get involved with e-commerce in a more binding fashion, an agreement must be reached. A WTO e-commerce agreement could prevent data localization and provide better privacy protection for consumers. Additionally, the WTO should consider balancing intellectual property rights in the context of e-commerce and implementing strong fair-use provisions.

A brief background on WTO activities on e-commerce

In 1998, the WTO issued a declaration that established a work program to identify trade issues related to e-commerce. Four councils were mandated to carry out the work program: the Council for Trade in Services, the Council for Trade in Goods, the Council for TRIPS and the Committee on Trade and Development. The WTO ministers have considered the work program at each ministerial conference and have instructed that it continue.

However, there is no sign that the councils have taken binding action related to e-commerce. Despite initiating the work program early on, the WTO has played only a minimal role in setting trade rules for e-commerce. The WTO’s only decision regarding e-commerce since 1998 was the Declaration on Electronic Commerce, which stated that “Members will continue their current practice of not imposing customs duties on electronic transmissions.” The declaration has remained unchanged and has had positive effects on the free flow of information and digital free trade.

WTO should have a more active role in e-commerce

The WTO’s passive role might not last; various trade agreements that include e-commerce chapters are being negotiated and discussed in other forums. The European Union and the United States, among others, have already requested that e-commerce-related topics be discussed at the ministerial meeting this month. This flurry of interest makes now an excellent time for the WTO to develop trade-related e-commerce policies.

Member states have also raised, at the WTO Goods Council, the need to discuss the role of the WTO in e-commerce and whether this role should change. Some member states have agreed to discuss the formation of a working party on e-commerce. A working party at the WTO would have more authority to make decisions and start negotiations, and would thus represent a step toward deeper e-commerce involvement.

Why is the WTO a suitable forum to discuss e-commerce?

Data localization hampers digital trade, requires information services to incur substantial costs to provide their services globally, and defeats the very cross-border nature of the Internet. Additionally, data localization can have damaging effects on freedom of expression and other human rights. In countries with weak or no privacy protection laws, data localization can lead to surveillance and activist arrests. With the rise of internet-of-things (IoT) devices and cloud computing, cross-border data flow is gaining even more importance.

Historically, trade agreements have helped protect and sustain information services and the free flow of information. The WTO should agree on rules that facilitate cross-border data flow and prevent data localization. Data localization is a form of non-tariff trade barrier, can be framed as trade protectionism and will hamper the growth and expansion of the IoT industry.

Moreover, with a multilateral agreement on minimum privacy protection for consumers, the WTO can commit its members to consider privacy measures in their local laws. This measure would be especially beneficial to those countries with no privacy laws. The practice of not imposing customs duties on electronic transmissions should also be made indefinitely binding on the member states.

Intellectual property rights, digital trade and the WTO

Intellectual property rights (IPR) are government-granted protections used to encourage innovation and creative output by ensuring monetary compensation for the use of a work. Since the WTO’s institution of the Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS), a balance has existed in terms of protected works throughout the member countries. While TRIPS is broad, its light-touch approach has provided guidance for countries to cultivate their own domestic laws relating to intellectual property.

The protection and enforcement of IPR has long been a component of United States international trade policy. But as IPR creeps into more internationally traded goods – especially those conventionally seen as “low tech,” such as household goods and automobiles – it is incumbent upon the WTO to continue its broad approach to IPR protection. As more and more goods become “smart,” countries cannot allow the enforcement of IPR to put undue burdens on consumers, researchers and tinkerers. The WTO should be careful not to impose excessive copyrights or “digital locks” that keep users from accessing goods they legally purchase.

Since IPR provisions have been stripped from the Trans-Pacific Partnership, the WTO would do well to retain a balanced approach to IPR. The WTO can achieve such a balance by promoting open and flexible general exceptions like the American four-factor fair use test. While it may be unrealistic to expect WTO agreements to include language mirroring U.S. law, the WTO could include language similar to the TPP’s call for all parties to “endeavor to achieve balance” in copyright. Currently, 47 countries have some form of fair use.

The WTO should also exercise caution when considering IPR term lengths. Under TRIPS, whenever the copyright term for a work other than a photographic work or a work of applied art is calculated on a basis other than the life of a natural person, it must be no less than 50 years. For patents, the term is 20 years from the filing date. Protecting works is important and this should be stressed; however, onerous term lengths will stifle derivative works and the ability of users to enjoy the fruits of a creator’s labor in new and innovative ways.

 

 

Fighting climate change does not mean going vegan

Here at R Street we like to think of ourselves as red meat conservatives. I mean that literally. On Friday, R Street will host its annual “Meatfest,” where R Street staff will celebrate the close of another successful year by heading to Fogo de Chão and eating unlimited quantities of tasty cooked animal flesh. Meat is to R Street what cowbell is to Blue Oyster Cult. You can never have enough.

So it was pretty disturbing for me to read that special taxes on meat are being contemplated in order to fight climate change:

Some investors are betting governments around the world will find a way to start taxing meat production as they aim to improve public health and hit emissions targets set in the Paris Climate Agreement. Socially focused investors are starting to push companies to diversify into plant protein, or even suggest livestock producers use a “shadow price” of meat — similar to an internal carbon price — to estimate future costs.

The idea is being analogized to taxes on tobacco and sugar.

Ideas like this give climate advocacy a bad name. Granted, climate change is a thing, and R Street has long supported a broad-based carbon fee to deal with the risks of climate change. But when governments selectively impose taxes on some sources of emissions but not others, they can give the impression that “fighting climate change” is less about protecting the planet than it is about waging culture war fights indirectly. As I wrote previously about a proposal to tax having children because of their carbon footprint:

Calling a tax on kids a carbon tax is a bit like calling a tax on Coke (but not Pepsi) a soda tax. The tax might reduce consumption of one type of soda (namely, the best kind), but it’s unclear the extent to which it would reduce overall soda consumption, as opposed to just encouraging people to drink other types of soda.

A lot of skepticism about climate change is driven by the idea that elites just want to tell people how to live their lives, what lightbulbs to use, what car to drive, what not to eat. And that’s bad! A plane trip to Burning Man has a larger carbon footprint than grilling steaks in your backyard. Ignoring the first while attacking the second is not only bad policy, it’s bad strategy. Because meat is delicious, and if you tell people they have to eat veggie burgers to stop climate change, they are going to tell you to scram.

Net neutrality’s effect on investment: It’s complicated


This week, the Federal Communications Commission (FCC) will vote to remove 2015 rules that regulated broadband service under Title II of the 1934 Communications Act. Throughout the debate over this move, there have been several attempts to portray the plan as either mistaken or outright dishonest about the effect of Title II on investment. However, these claims continually make the mistake of looking at absolute numbers rather than what’s known as a counterfactual.

Consider an example: You’re in the apple business. Last year, you bought five apple trees from your supplier. Business was going well, so this year, we’d expect you to invest in 10 apple trees. But then someone imposes costly regulations on the apple industry, and you only buy eight trees. Did the regulation increase or decrease the number of trees you bought? Or did it have no effect? Can we even tell?

Clearly you increased your investment in apple trees compared to last year–eight is larger than five–but that’s not the right question. To determine the regulation’s effect, we have to ask: What would your investment in apple trees have been this year if the regulation had not gone into effect?

This is a much more difficult question to answer. The world is a complex place where numerous factors may impact investment decisions. Maybe the price of apple-tree-growing supplies has increased. Maybe the economy went into a recession. If all we know is that last year you bought five trees, the regulation went into effect, and then you bought eight trees the following year, neither opponents nor proponents of the regulation should wave around this correlation as absolute proof for their side.
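To make the distinction concrete, here is a minimal sketch using the hypothetical numbers from the apple example above. The figures and variable names are purely illustrative.

```python
# Hypothetical apple-tree example: the naive comparison looks only at last
# year's purchase, while the counterfactual comparison asks what would have
# been bought this year absent the regulation. All numbers are illustrative.
last_year_trees = 5        # trees bought last year
expected_without_reg = 10  # counterfactual: expected purchase this year with no regulation
actual_with_reg = 8        # trees actually bought under the regulation

naive_change = actual_with_reg - last_year_trees                  # +3: "investment went up"
counterfactual_effect = actual_with_reg - expected_without_reg    # -2: the regulation's estimated effect

print(f"Naive year-over-year change: {naive_change:+d} trees")
print(f"Change versus the counterfactual: {counterfactual_effect:+d} trees")
```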

Yet this is what we’ve seen time and again in the net neutrality debate. Article after article has claimed that Internet Service Providers (ISPs) “increased … capital expenditures” and “continued to invest” despite reclassification of broadband as a Title II service, so any suggestion that Title II hurt investment has been “proven indisputably false” and, in fact, the rules “haven’t affected overall industry investment.”

It’s also troubling that many of these stories rely so heavily on a Free Press report that makes such elementary blunders as failing to adjust for inflation. Adjusting for inflation reverses the study’s purported findings, showing a decline, rather than an increase, in investment.
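To see why the inflation adjustment matters, here is a minimal sketch with made-up figures; deflating nominal spending by a price index can turn an apparent increase into a real decline. The numbers below are hypothetical, not the Free Press report’s actual data.

```python
# Made-up figures for illustration only: nominal capital spending appears to
# rise, but converting to constant dollars with a price index shows a decline.
nominal_capex = {2014: 100.0, 2016: 103.0}  # billions of dollars (hypothetical)
price_index = {2014: 236.7, 2016: 245.1}    # price index levels (hypothetical)

def to_base_year_dollars(nominal, year, base_year):
    """Deflate a nominal figure to base-year dollars."""
    return nominal * price_index[base_year] / price_index[year]

real_2016 = to_base_year_dollars(nominal_capex[2016], 2016, 2014)
print(f"Nominal change: {nominal_capex[2016] - nominal_capex[2014]:+.1f}B")
print(f"Real change (2014 dollars): {real_2016 - nominal_capex[2014]:+.1f}B")
```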

Simply looking at what happened after the reclassification does not, in itself, tell us anything about what would have happened over the same time period without Title II regulation. Maybe these companies would have invested more in broadband infrastructure. Maybe their investment levels would have declined, but Title II regulation protected the “virtuous cycle” and actually increased investment. We can’t know just by looking at bottom-line expenditures.

Luckily, more sophisticated econometric methods can help us zero in on how the regulation affected broadband investment. Economist Dr. George Ford of the Phoenix Center conducted an insightful, methodologically sound study that accounts for an additional complicating factor: the fact that the FCC has had the Title II option on the table since 2010. That date is a better place to start looking for effects, since companies account for what might happen when making investment decisions rather than waiting for final rules to take effect. And, in fact, Ford found that the threat and later imposition of Title II regulation did decrease ISPs’ investment – by $160 billion to $210 billion from 2011 to 2015.

You need not agree with Ford’s conclusion here, but refuting him requires engaging in counterfactual analysis–trying to figure out not just what happened after we enacted the 2015 Title II rules, but what would have happened without the real possibility of Title II regulation. These aren’t questions that can be answered by looking at bottom-line numbers for particular years and seeing if they rose or fell; answers to these questions require econometric analysis of the sort Ford conducted. 

The other common feature of reporting on broadband investment is that many of these articles tout statements by ISPs to their investors as proof that broadband investments were not harmed by Title II. As we at R Street Institute document in our reply comments to the FCC, however, the statements themselves don’t actually support that portrayal. 

Some ISPs did say that their current business practices wouldn’t be affected by the new rules; that is, they wouldn’t have to stop blocking, throttling, or engaging in paid prioritization because they weren’t doing that in the first place. Of course, rules banning something that you don’t do won’t affect your day-to-day activities very much. ISPs’ statements to investors and the Securities and Exchange Commission about the effect of reclassification on long term investment prospects, however, did clearly list Title II regulation as a significant threat.

Broadband investment is the best way to close the digital divide and create competition that will produce better quality services at lower prices. Regulations that create uncertainty and increase the cost of investment result in fewer people getting access to broadband and fewer options for those who have access. 

Regardless of your position on net neutrality, we should all take care to ask the right questions and embrace their complexity rather than cutting corners to score points for our side.

Shoshana Weissmann Joins Matt Lewis’s Podcast

RSI’s Digital Media Specialist joined Matt Lewis to talk about her career, Twitter, passion for sloths, and conservatism in cities.

Exceptions to the Rule: The Politics of Filibuster Limitations in the U.S. Senate

Hoover’s Director of Washington Programs, Michael G. Franc, interviews author Molly E. Reynolds on her latest book, Exceptions to the Rule: The Politics of Filibuster Limitations in the U.S. Senate.

Many people believe that, in today’s partisan environment, the filibuster prevents the Senate from acting on all but the least controversial matters. But that belief is not exactly correct. In fact, since the 1970s the Senate has created a series of special rules – described as “majoritarian exceptions” – that limit debate on a wide range of measures.

Reynolds argues that these procedures represent a key instrument of majority party power in the Senate. They allow the majority – even if it does not have the 60 votes needed to overcome a filibuster – to produce policies that will improve its future electoral prospects, and thus increase the chances it remains the majority party.

Clean Energy Choices

Across the country, clean energy is growing rapidly in states that allow customers to choose their electricity supplier. What can lawmakers learn from these developments, and what further policy reforms are needed to unleash the power of competitive forces to deliver cheaper and cleaner energy? The following is a panel discussion held at the Cannon House Office Building on Nov. 30.

Panelists include:

Michelle Patron, Director of Sustainability Policy at Microsoft

Dylan Reed, Head of Congressional Affairs with Advanced Energy Economy (AEE)

Devin Hartman, Electricity Policy Manager and Senior Fellow with R Street Institute

Frank Caliva, Senior Spokesman with American Coalition of Competitive Energy Suppliers (ACCES)

Charles Hernick, Director of Policy and Advocacy with Citizens for Responsible Energy Solutions (Moderator)

Greentech Media Podcast: Beyond Subsidies

R Street senior fellow Devin Hartman and EDF’s Lenae Shirley discussed on Greentech Media’s The Interchange podcast how competitive electricity markets align the conservative and green agendas. The reasons are simple: competition and consumer choice drive economic development, innovation and the deployment of clean energy resources. This has led the political right, the left and the clean energy industry to expand conversations on how clean energy can compete on its merits. Hartman discusses pro-market, pro-consumer reforms, while the hosts note political convergence around unease with the monopoly utility model.

AEI Event: Is the Bank Holding Company Act obsolete?

Most of America’s 4,969 banks are owned by holding companies, so the Bank Holding Company Act of 1956 is a key banking law. But the prescriptions of six decades ago may not still make sense for the banks of today. For most of them, the act creates a costly and arguably unnecessary double layer of regulation. Its original main purpose of stopping interstate banking is now completely irrelevant. One of its biggest effects has been to expand the regulatory power of the Federal Reserve – is that good or bad? Does it simply serve as an anti-competitive shield for existing banks against new competition? Some banks have gotten rid of their holding companies – will that be a trend? This conference generated an informed and lively exchange among a panel of banking experts, including the recent Acting Comptroller of the Currency, Keith Noreika, and was chaired by R Street’s Alex Pollock.

 

Future of Internet Freedom with FCC Chairman Pai

R Street and the Lincoln Network co-hosted a Nov. 28 event on the future of telecom policy and Internet freedom, including the Federal Communications Commission’s upcoming agenda. The speakers included Federal Communications Commission Chairman Ajit Pai, Federal Trade Commission Acting Chairman Maureen Ohlhausen and FCC Commissioners Mike O’Rielly and Brendan Carr.

Those speeches were followed by a panel discussion featuring: Tom Struble, technology policy manager at the R Street Institute; Brent Skorup, research fellow at the Mercatus Center at George Mason University; and Roslyn Layton, visiting scholar at the American Enterprise Institute. The panel was moderated by Jessica Melugin, adjunct fellow at the Competitive Enterprise Institute.

When it comes to criminal justice AI, we need transparency and accountability


A recent Wall Street Journal article makes the case that—in regulating artificial intelligence, including those applications used to aid the criminal justice system—we should emphasize accountability, rather than prescriptions to make every algorithm completely transparent. While authors Curt Levey and Ryan Hagemann make important points, the article misses key details about the state of machine learning and the fundamental differences between requirements demanded by government procurement agents and regulations that would affect the broader market.

Levey and Hagemann argue that calls for algorithmic transparency in areas like criminal justice risk assessment are misguided because they fail to account for the opaque nature of advanced machine-learning techniques. Furthermore, they believe transparency requirements—for both training data and source code—would unfairly undermine trade secrets and competitiveness in the market for such software.

Their argument about artificial intelligence regulation thus has three components:

  1. Transparency requirements will not be effective with machine-learning techniques, because each is a “black box.”
  2. Transparency requirements are undesirable because they undermine intellectual property and market competition.
  3. It is not appropriate for government to impose transparency requirements even on itself, including risk assessments in the criminal-justice system.

To be sure, there are good reasons to avoid broad-based algorithmic transparency requirements for every AI application. As I discussed at greater length at Cato Unbound earlier this year, such rules would stifle competitiveness and innovation.

But the criminal justice system is not an ordinary market, and the government is not an ordinary firm. Just as ordinary firms may and often do decide to use open source software, it is entirely appropriate for the government to make determinations about what it will require in contracts with its vendors.

Unlike ordinary firms, government also has constitutional obligations to be transparent, such as in upholding citizens’ rights to due process and equal protection under the law. Statutory obligations like the Freedom of Information Act and other “sunshine” laws; the jurisprudence of criminal procedure; 51 federal and state constitutions; and myriad court precedents all set out additional rules and protections. Notions of equity, predictability and, yes, transparency are at the heart of what our justice system strives to provide.

I’ve argued before that we should err on the side of transparency in the criminal justice system. This could be done by requiring, as part of procurement processes, that all algorithms that inform judicial decisionmaking in sentencing be built and operated on an open source software platform. Everything from the source code to the variable weights to the (anonymized) training data would be available for public scrutiny and could be audited for bias.

The government would likely have to pay more upfront for a transparent open source system, as it would essentially be buying the algorithms outright rather than renting them, and continued investment would be needed for their development. However, with a more open ecosystem, there are good reasons to think the costs to taxpayers could be offset by philanthropic investment and engagement from civil society.

There may indeed be mechanisms to validate risk-assessment software in criminal justice that stop short of disclosing training data, continuous outcome data or the underlying code. But such an approach requires taking unnecessary risks that the system will be abused, in addition to fomenting public backlash against the technology. Even setting aside civil liberties considerations, opting against transparency in criminal justice AI would keep a developing ecosystem opaque, when it could benefit from broad-based collaboration and the input of diverse stakeholders.

Thus, it seems that transparency would be desirable if feasible. Let’s address some of the feasibility concerns and proposed harms of transparency more specifically.

It’s first worth noting that the algorithms used in the criminal justice system today are relatively simple from a technical perspective and do not rely on advanced neural networks. Their fixed algorithmic weights can be discovered via transparency and do not yet raise the concerns associated with “black box” machine learning. Most of these systems, like the Public Safety Assessment (PSA) tool used in New Jersey, can be calculated by hand in a short period of time if you have the relevant background and criminal history information.

Systems like the COMPAS algorithm used in Wisconsin are proprietary, which makes it difficult to know exactly how they operate. However, based on sample tests obtained by ProPublica, they still seem to be within the realm of pen and paper.

Below is a part of the published PSA, which illustrates how simple it really is.

[Image: excerpt of the published Public Safety Assessment scoring factors]

There are structural limits as to the kinds of variables in play in a risk assessment. While a judge can consider any number of extraneous factors, a computer system must rely on a uniform dataset. This might include such variables as age, ZIP code, a defendant’s first contact with the juvenile courts or any past jail time. How these are weighted may be opaque in a machine-learning context, but would nonetheless be possible to analyze. What would be prohibited from consideration are variables such as race or national origin, as well as any false data. If these were used in sentencing—or potentially, even if other factors were used that might be a close proxy for these prohibited variables—it would open a conviction to appeal. That’s why transparency is important for due process.
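To illustrate just how simple these additive point systems are, here is a minimal sketch of a hypothetical pretrial risk score. The factors and point values are invented for illustration; they are not the actual PSA weights.

```python
# A hypothetical additive risk score in the spirit of pretrial tools like the
# PSA. Factors and point values are invented for illustration only; the real
# instrument publishes its own factors and weights.
def risk_score(defendant):
    score = 0
    if defendant["age_at_arrest"] < 23:
        score += 2
    if defendant["pending_charge_at_arrest"]:
        score += 1
    if defendant["prior_failure_to_appear"]:
        score += 2
    if defendant["prior_violent_conviction"]:
        score += 3
    return score

example = {
    "age_at_arrest": 21,
    "pending_charge_at_arrest": True,
    "prior_failure_to_appear": False,
    "prior_violent_conviction": False,
}
# With the whole score sheet public, every point can be checked by hand.
print(risk_score(example))  # 3
```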

The future of risk-assessment algorithms likely will include greater and greater use of machine-learning techniques, so it’s worth thinking about potential transparency and accountability trade-offs. A recent National Bureau of Economic Research paper, led by Cornell University’s Jon Kleinberg, showcased the incredible gains we can make in the pretrial system with more accurate risk predictions. In a policy simulation, the authors showed that their algorithm, trained through machine learning, could cut the jail population by 42 percent with no increase in the crime rate.

As Levey and Hagemann point out, the greater the degree of complexity in machine learning, the harder it is to peer into the inner workings of the algorithm and understand how it makes decisions. But with access to the training data and the specific machine-learning methods used, it would be entirely possible to replicate the model again and again to make sure there are no anomalies, or to create proxy models to test for different kinds of machine bias or common errors. Furthermore, we are quickly developing new methods of machine learning that are more amenable to transparency, explicability and interpretability.
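As a rough sketch of what such replication and auditing could look like, the following fits a stand-in model on synthetic data and compares false-positive rates across a sensitive group; the data, model and variables are all assumptions for illustration, not any vendor’s actual system.

```python
# Illustrative audit: fit a model on (synthetic) training data, then check
# whether error rates differ across a sensitive group the model never sees.
# Everything here is a toy stand-in, not an actual risk-assessment product.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 3))              # stand-ins for prior record, age, etc.
group = rng.integers(0, 2, size=n)       # sensitive attribute, excluded from the model
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 0).astype(int)

model = LogisticRegression().fit(X, y)
predicted_high_risk = model.predict(X)

for g in (0, 1):
    did_not_reoffend = (group == g) & (y == 0)
    false_positive_rate = predicted_high_risk[did_not_reoffend].mean()
    print(f"group {g}: false-positive rate = {false_positive_rate:.3f}")
```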

Levey and Hagemann’s stated goal of accountability does not have to stand in opposition to the goal of transparency. Transparency is one method to achieve algorithmic accountability. In the context of criminal justice, it is a most worthwhile mechanism. More advanced machine learning is able to help only insofar as models are based on externally valid data. And even explainability protocols and internal diagnostic tools will not be able to alert the operator about invalid data, because a neural network has no concept of validity outside the dataset it has been trained on.

Risk assessment systems also must be calibrated to societal norms. For instance, we want to be more averse to releasing individuals who are likely to commit a murder than we are to releasing a nonviolent drug offender. But if a particular jurisdiction wants to take a hard line on marijuana use, there would be a public interest in knowing about it.

This brings us back to a larger point about the difference between government regulation and requirements built into the procurement process. In addition to not having access to profit and loss signals, the procurement process is rife with rent-seeking as private companies compete on the basis of political connections, rather than the quality of goods they are selling. As such, it is entirely appropriate for government to set procurement specifications to ensure that certain needs are met. We should not conflate this with more general forms of government regulation.

The application of AI to risk assessments in the justice system won’t be perfect, especially at first. Even if these tools have an overall positive effect, they may introduce hard questions about differing notions of fairness. As these technologies advance and become more opaque, complete openness is the best way to protect our civil liberties, ensure public trust and root out flaws. In the long term, with an open ecosystem, we can produce far better outcomes than the status quo.

While I largely agree with Levey and Hagemann about whether it’s wise to impose broad transparency mandates on private sector algorithms, we shouldn’t carelessly extend this thinking when it comes to the application of state power. In high stakes realms where the government can keep you locked up or otherwise take away your liberties, we should make our mantra: “Trust, but verify.”



 

Molly Reynolds’ Exceptions to the Rule: The Politics of Filibuster Limitations in the U.S. Senate

The Senate has changed considerably in recent years. So too has our understanding of how its members make decisions.

Traditionally, the leading scholarship on the Senate has taken as its starting point the fact that its members possess considerable parliamentary rights under the institution’s Standing Rules. This was important because senators used those rights to obstruct legislation they opposed. To the extent that such treatments acknowledged political parties, they typically focused on the negative consequences of rising partisanship, which made it harder for Senate majorities to overcome obstruction by an increasingly unified minority party.

Yet an apparent decline in member autonomy and corresponding increase in party cohesion over the last two decades prompted scholars to re-examine how they thought about the Senate. The result of their efforts has been a reorientation in our understanding of the Senate. Today, the starting point for most scholarly inquiries is not so much the efforts of individual senators to achieve their goals in the institution as the collective behavior of the majority and minority parties more generally. The treatments that first adopted this perspective sought to adapt earlier work on party effects in the House of Representatives to explain developments in the Senate.

But unlike in the House, minorities in the Senate still have the ability to influence policy outcomes. The most widely known example of this is the filibuster, which permits senators to block the majority from passing legislation. The majority can overcome such obstruction by invoking cloture under Rule XXII. But doing so requires a three-fifths majority (typically 60 votes) to end debate and proceed to an up-or-down vote on a bill (which typically requires just 51). As such, the minority retains significant leverage in the legislative process so long as it is able to secure the votes needed to prevent the majority from invoking cloture (typically 41).

Senate majorities may curtail the minority’s ability to filibuster with a reform-by-ruling approach (i.e., the so-called nuclear option) to unilaterally create a new precedent that is inconsistent with, but nevertheless supersedes, Rule XXII. However, they have not often done so.

Admittedly, there have been exceptions to this reluctance in recent years. In 2013, Democrats used the nuclear option to limit the minority’s ability to obstruct most nominations. And Republicans did so earlier this year to preclude filibusters of Supreme Court nominees.

Notwithstanding this, neither party has elected to go further by eliminating the legislative filibuster. While there are several reasons for this, one of the most often overlooked explanations is that the Senate can exempt specific legislation from being filibustered in the future without using the nuclear option or going through the cumbersome process of changing the rules via the process stipulated in Rule XXII. This reduces the demand for eliminating the legislative filibuster by offering determined majorities an alternative way to enact policy.

In Exceptions to the Rule: The Politics of Filibuster Limitations in the U.S. Senate, Molly E. Reynolds considers these special procedures, which she terms majoritarian exceptions. A fellow in Governance Studies at the Brookings Institution, Reynolds defines a majoritarian exception as “a provision, included in statutory law, that exempts some future piece of legislation from a filibuster on the floor of the Senate by limiting debate on that measure.”

Reynolds groups majoritarian exceptions into two categories: delegation exceptions and executive branch oversight exceptions.

With delegation exceptions, majorities empower designated actors to craft legislation addressing specific policy problems while simultaneously limiting the minority’s ability to obstruct the measure when it is eventually considered by the full Senate. The reconciliation process is an especially salient example of a delegation exception given recent Republican efforts to repeal and replace Obamacare and reform the tax code using the special process. In reconciliation, committees are authorized to craft legislation meeting specified budgetary targets. Floor debate on reconciliation bills is limited to 20 hours and the amendments senators are permitted to offer are restricted. These exceptions to the Senate’s Standing Rules were created when Congress passed the Congressional Budget and Impoundment Control Act of 1974 and the Omnibus Budget Reconciliation Act of 1990.

In contrast, oversight exceptions create a special fast-track process to approve or disapprove a presidential act after it has already occurred. These special procedures usually preclude amendments and limit overall debate time on the underlying legislation. Examples of oversight exceptions include legislation periodically passed by Congress giving the president authority to negotiate trade agreements and to expedite their consideration in the Senate (e.g., the 2015 Bipartisan Congressional Trade Priorities and Accountability Act). The elaborate disapproval process Congress utilized to raise the debt ceiling on a number of occasions during the Obama administration offers another example of an oversight exception.

In considering majoritarian exceptions as a distinct class of procedures that share certain identifiable features, Exceptions to the Rule makes an important contribution to our understanding of the relationship between partisanship and parliamentary procedure in the Senate. Reynolds highlights the utility of majoritarian exceptions to Senate majorities as well as their impact on policy outcomes, and provides an analysis that enables us to predict when Senate majorities will be more likely to propose such exceptions in the future.

But we should be careful not to overstate the value of majoritarian exceptions to Senate majorities more generally. The special procedures do not provide them with a reliable way to enact their agenda over the minority’s objections on a routine basis. This is because they must first be authorized by law and the legislation containing such provisions can be filibustered.

The repeated use of majoritarian exceptions, it is worth adding, may have important consequences for our politics more generally. For example, Congress cedes its authority to make law to the executive branch when it uses oversight exceptions. Doing so may be necessary to ensure action on an important public policy problem. But it also gives unilateral presidential action the imprimatur of legitimacy at a time when many observers are calling for Congress to reassert its authority.

And both oversight and delegation exceptions may exacerbate a growing accountability problem in our politics. It is harder for voters to assign responsibility to legislators for the policy outcomes produced via such processes.

More broadly, the restrictive rules Congress places on such processes distort the nature of Senate decision-making in subtle, yet nevertheless important, ways. The limits on debate and amendments upend the deliberative process in the institution and restrict the ability of rank-and-file senators to participate in it. In the case of reconciliation, fitting legislative proposals into the four corners of what is permitted by the special procedure supplants a more inclusive and adversarial process geared toward adjudicating the claims of senators and their constituents. Not engaging in contentious debates in this way has the potential to make the policies enacted via majoritarian exceptions less stable over the long term, as opponents refuse to accept their legitimacy and instead wait for the chance to reverse them using the same process in the future.

The irony of the Senate’s increased use of majoritarian exceptions in recent years is that it has exposed the limits in the regnant approach to thinking about the institution. The spectacular failure of Republicans to repeal and replace Obamacare earlier this year and their ongoing struggle to reform the tax code using the reconciliation process suggest that the parties are not as unified as previously thought. If this is indeed the case, our identification of the filibuster as thwarting majority action and thus perpetuating gridlock and dysfunction may be incorrect.

Regardless of such concerns, Exceptions to the Rule should be required reading for anyone concerned about the state of the Senate today. Reynolds’ in-depth analysis of majoritarian exceptions offers valuable insight into a subset of parliamentary procedures that will be sure to dominate Senate decision-making for years to come.

Congress helped create the CFPB’s leadership crisis. It can fix it.

Over the past few days, the D.C. news cycle has been dominated by the palace intrigue over who should be properly recognized as acting director of the Consumer Financial Protection Bureau. But few have considered Congress’ role in creating this situation—and the fact that it could now help fix it.

When CFPB Director Richard Cordray stepped down, President Donald Trump tapped Office of Management and Budget Director Mick Mulvaney to serve as the CFPB’s acting director while a permanent head was selected. But in a surprise twist, Cordray declared that the agency’s chief of staff, Leandra English, was actually the CFPB’s new leader.

This set up a bit of a mini constitutional crisis, as it appeared that Mulvaney and English might enter into a power struggle over control of the agency. So far, the CFPB’s general counsel has sided with Mulvaney and, in a memo, advised all CFPB staff to “act consistently with the understanding that Director Mulvaney is the Acting Director of the CFPB.” English, for her part, initiated a lawsuit asking a federal court to issue a restraining order preventing Mulvaney from taking the post, which the court denied.

Legal scholars have been weighing in on the merits of who is legally correct in this scenario, and the dispute involves both constitutional as well as statutory concerns. (Jonathan Adler has a summary of the various legal positions and opinions over at The Volokh Conspiracy; for what it’s worth, I think Adam White has the best of the argument when he concludes that the Trump Administration should prevail).

Lost in all this back-and-forth, however, is the fact that this fiasco was both eminently predictable and preventable. The CFPB is sui generis in America’s system of governance in that it has unprecedented powers that are mostly incapable of being checked by the other branches of our government. What’s happening right now illustrates this: the outgoing agency leader is attempting to implement his own preferred succession plan over the wishes of the other political branches.

The CFPB was created by the 2010 Dodd-Frank Act, which dictated that it be led by an individual director. To ensure the agency’s independence, the act clarified that the director could only be removed by the president “for cause,” which insulates the agency’s leadership from presidential accountability. While “for cause” protection is common in so-called “independent agencies”—other examples include the Securities and Exchange Commission, the Federal Communications Commission and the Federal Trade Commission—it is unprecedented for an agency that operates under a single director rather than a multimember commission structure.

Last year, a panel of the D.C. Circuit found CFPB’s structure unconstitutional for this very reason (that decision is currently on appeal to the entire D.C. Circuit, which has yet to rule on it). By establishing an agency that is led by a single individual who cannot be removed except in special circumstances, Congress muddied the waters when it comes to who is ultimately in charge of the CFPB—the president or the agency’s director.

The uniquely unaccountable nature of the CFPB does not end with its leadership structure, either. Dodd-Frank specified that the agency was also to be funded outside Congress’ normal appropriations process, via revenues derived from the Federal Reserve System. This prompted the D.C. Circuit to quip that the agency’s funding structure was “extra icing on an unconstitutional cake already frosted.” This means that Congress succeeded in creating an agency that was unaccountable to both the executive and legislative branches.

This failure to ensure accountability at the CFPB is partially responsible for the current leadership struggle. In fact, Barney Frank, one of the principal drafters of Dodd-Frank, has suggested that the CFPB’s plan of succession was deliberately designed to insulate its leadership from presidential control. And now that a leadership struggle has occurred, both the president and Congress are mostly powerless to respond to it in effective fashion.

Despite its errors in creating the CFPB, it’s not too late for Congress to fix its mistakes. During the current Congress, the House has considered legislation that would convert the CFPB from a single-director model to a five-member commission structure, with each commissioner serving five-year staggered terms. Alternatively, the final version of the CHOICE Act, which passed the House earlier this year, clarifies that the CFPB director is fireable at will by the president. The CHOICE Act also would make the CFPB subject to the normal appropriations process.

Even if broad-sweeping Dodd-Frank reforms like the CHOICE Act are not politically feasible right now, Congress should at least pursue these discrete structural reforms. In particular, converting the agency to a commission model would help avoid succession crises like the one the agency is currently undergoing, as the staggered terms of the commissioners would reduce surprise retirements and deemphasize the importance of any one officer or director at the agency. It would also pull the agency’s ethos in the direction of a truly non-partisan, independent entity, rather than a “political vehicle” masquerading as an independent agency.

If making the CFPB truly independent is not desirable, then it should be treated like any other executive branch agency and have a director that the president can remove at will. Such a goal could be bipartisan, too, as even Barney Frank’s former legislative aide has argued that “Congress should never again create an ‘independent’ agency with a sole director, particularly one not subject to the congressional appropriations process.” Likewise, both parties should be motivated by fears of the other party controlling as unaccountable and powerful a position as CFPB director.

The CFPB has been allowed to operate as a regulatory unicorn for far too long, and its recent leadership struggle is merely a symptom of its unaccountable structure. Congress created this mess, and now it’s time for it to fix it.

R Street panel discusses how to give companies clean choice


The renewable electricity industry has grown rapidly in the past decade. From their humble beginnings, wind and solar energy have more than doubled their combined share of power generation at utility-scale facilities in the United States.

In a perfect world, companies large and small would be able to purchase this growing abundance of clean energy straight from the grid. But the electricity industry’s monopoly model is “a hell of a drug,” and less than half of states have restructured their electricity markets to allow for more competition and consumer choice.

In the 1990s, roughly a dozen states – including Texas, Illinois and Ohio – passed laws that began to open up the electricity sector to competition. But the California electricity crisis of the early 2000s and the 2008 Great Recession brought the restructuring movement to a halt.

In the years that followed, regulations in most states remained unchanged. The energy marketplace, on the other hand, did not. The growth of renewables, combined with renewable portfolio standards in 29 states and a more environmentally conscious corporate mindset, has changed the incentives around energy choice.

As a result, there are now many more potential customers with a strong – and growing – demand for clean energy. Major electricity users with clean-energy leanings like Microsoft and Amazon – whose internet cloud operations rely on mammoth server farms – have begun to push for clean-energy procurement. Other major energy consumers like Google, Apple and Johnson & Johnson are looking to join the clean-energy parade.

When these customers compare states that allow consumers to choose their electricity suppliers with states that retain the monopoly model of one large electricity producer, it’s a no-brainer. From 2008 to 2016, the weighted-average price of electricity in monopoly states increased 15 percent, while in restructured states, prices fell by 8 percent.

In monopoly states, artificial barriers undermine competition and state legislatures are beholden to major legacy utility firms for much of their fundraising. Lobbyists for these companies have pseudo-official status in state capitols like Richmond, Raleigh and Atlanta, so the marketplace is currently balkanized. But for how much longer?

R Street will host a panel discussion on the future of clean-energy choice with panelists from Microsoft, Advanced Energy Economy (AEE), the American Coalition of Competitive Energy Suppliers (ACCES) and Citizens for Responsible Energy Solutions (CRES). The panel will take place at noon Nov. 30 at the Cannon House Office Building in Washington, D.C.

Among the questions to be asked are:

  • What is the value of choice overall and specifically to clean-energy procurement?
  • Microsoft’s story: What do big customers look for and what policies are needed to attract them?
  • Why is retail choice important for cheap, low-carbon technologies?
  • What does the political landscape tell us about the future of clean-energy choice?

Please join us to find out what the clean-energy choice movement has in store.



Casey Burgat and Matt Glassman on Congress

In the first episode of a video series for the Legislative Branch Capacity Working Group, R Street Governance Fellow Casey Burgat interviews Matt Glassman, senior fellow at Georgetown University’s Government Affairs Institute, on all things Congress. Topics discussed include: common political misconceptions; the issues surrounding, and likelihood of, congressional reform; congressional capacity; necessary changes to the committee structure; and much more.

Kosar talks postal reform on C-SPAN

R Street Vice President of Policy Kevin Kosar was a guest Nov. 24 on C-SPAN’s “Washington Journal” to discuss potential reforms to the U.S. Postal Service. Full video of the appearance is embedded below.

LIVE STREAM: The Future of Internet Freedom with FCC Chairman Pai, Commissioners O’Rielly and Carr, FTC Chairman Ohlhausen


Join the R Street Institute and the Lincoln Network for an event on the future of telecom policy and internet freedom.

Federal Communications Commission Chairman Ajit Pai, Commissioner Mike O’Rielly and Commissioner Brendan Carr, as well as Federal Trade Commission Acting Chairman Maureen Ohlhausen, will each deliver remarks on the commission’s upcoming agenda.

The speeches by the Chairman and Commissioners will be followed by a discussion with our expert panel, featuring:

  • Tom Struble, technology policy manager at the R Street Institute
  • Brent Skorup, research fellow at the Mercatus Center at George Mason University
  • Roslyn Layton, visiting scholar at the American Enterprise Institute
  • Jessica Melugin, adjunct fellow at the Competitive Enterprise Institute (moderator)

The in-person event is by invitation only (contact smoss@rstreet.org).

Media contact: David Bahr (dbahr@rstreet.org)

Watch live beginning Tuesday, November 28th at 1:30pm ET. Use hashtag #InternetFreedom to join in the conversation:

Some links on patent reform

I’ve been digging back in on some materials related to patent policy and the case for patent reform, and thought it might be useful to others to post some links to R Street’s work over the years as well as works by other groups. Check them out below.

Publications with R Street scholars:

Publications by other center-right groups:

Academic and government publications:

 

How to talk to your family about net neutrality


It’s nearly Thanksgiving – that time of year when we all try to cram our families through airport security on the same day so we can gather around the table to argue about politics.

This year is likely to prominently feature wonky topics such as tax reform and – oddly enough – net neutrality. While telecom regulation isn’t normally a salient subject in family settings, that may change tomorrow; the Federal Communications Commission (FCC) just released its proposal to roll back Title II regulation of the Internet, aka “Net Neutrality” (our substantive thoughts on the issue can be found here).

Activists didn’t waste any time in attacking the plan and taking to various social media platforms to shout their objections. But not all of us agree with left-wing activists (and we suspect most of them don’t know much about telecom policy).

With all the confusion and misinformation pervading this discussion, here are some points to share with your family should the subject come up around the table this Thanksgiving.

1) It’s not the end of the Internet. The Internet as we know it was built without Title II regulation. In fact, the current regulations only took effect in mid-2015. Cases of net neutrality “violations” were few and far between in the decades before Title II regulation, and they were resolved without prescriptive regulation. Going back to the pre-2015 light-touch framework would hardly pose an existential threat to your favorite websites.

2) There will still be “cops on the beat.” Scary scenarios in which an ISP blocks content from its competitors will still be illegal. And even as the FCC steps aside from regulating the Internet, the Federal Trade Commission still has ample authority and expertise to hold ISPs to their promises and punish them if they engage in unfair competition methods. State attorneys general also have the power to bring enforcement actions using state-level consumer protection laws.

3) The Internet has never “treated all traffic the same,” nor should it. Different kinds of data are sent over the Internet, and they don’t all need the same treatment. A half-second delay in delivering an email or part of a software update isn’t a big deal. The same delay for applications like real-time multiplayer games or video chats could render them unusable.

Additionally, some Internet applications are non-neutral. If you use T-Mobile’s Binge On, you get slightly lower-quality video in exchange for free streaming. That such a service hurts consumers would be news to those who have signed up for it in droves.

4) The issues you’re worried about might not be net neutrality concerns. We’ve all had bad experiences with our ISPs’ customer service departments, but those are separate issues. More regulation, especially the kind designed for a 1934 telephone monopoly, is not going to improve the situation.

5) More broadband deployment is the long term solution. What will make things better is more competition in the marketplace, which means more broadband deployment from all sources, including wireline and wireless. Thus, instead of fixating on net neutrality, we should focus on removing barriers to deployment. The Title II regulations are one such barrier that has depressed investment. Repealing them will get us back on the road to faster Internet for all.

AT&T should acquire Time Warner despite DOJ challenge


The following post was co-authored by R Street Tech Policy Associate Joe Kane. 


This week, the Department of Justice (DOJ) formally challenged AT&T’s proposed acquisition of media conglomerate Time Warner by filing a complaint with the U.S. District Court for the District of Columbia to block the merger. Despite this challenge, the merger should be allowed to proceed, as both the facts and legal precedents strongly suggest that the DOJ’s challenge lacks merit.

Time Warner produces video content and programming through its Warner Bros., Turner Broadcasting System and HBO subsidiaries. AT&T distributes content from Time Warner and other producers through its wireline, mobile and satellite networks. These two firms don’t compete against each other in any relevant market, so this represents a classic example of a vertical merger. It is very rare for the DOJ to challenge a vertical merger such as this, and even rarer for the courts to block one — it hasn’t happened in decades.

Vertical mergers are almost always pro-competitive and pro-consumer in nature. It’s horizontal mergers, in which competitors in the same market seek to combine, that are more likely to be problematic and thus subject to antitrust scrutiny. AT&T abandoned its attempt to acquire T-Mobile in 2011, for example, after the DOJ filed suit to block it. With vertical mergers, however, the combined firm can achieve valuable efficiencies that it can pass on to consumers in the form of lower prices and/or better products or services. And no firms exit the market, so consumer choice does not decrease. Thus, the benefits to consumer welfare from such mergers almost always exceed any corresponding harms.

Here, the efficiency gains that AT&T and Time Warner could achieve are both obvious and substantial. In addition to benefitting from economies of scale (e.g., by combining their legal teams or human resource departments), control over the entire chain of distribution for Time Warner’s premium video content — from the production studio to the viewer — could allow the combined firm to deliver premium content to AT&T subscribers at substantially lower costs, or to develop new service offerings to compete with the innovative video services being developed by the likes of Apple, Amazon, Netflix and Disney.

The combined AT&T-Time Warner may well have greater leverage and bargaining power in carriage negotiations — Time Warner may get better deals with other distributors when licensing its content and AT&T may get better deals with other programmers when licensing their content. That may squeeze competing programmers and distributors, including giants like Disney and Verizon, by eating into their profit margins and forcing them to innovate in order to survive in the market.

But the antitrust laws don’t protect competitors; they protect competition. The recent vertical merger between Comcast and NBCUniversal — which was allowed to proceed despite identical concerns about increased leverage in carriage negotiations — is indistinguishable from the proposed AT&T-Time Warner merger. There is simply no reason to change course now and block AT&T’s acquisition of Time Warner.

The DOJ surely knows how weak its case is, so expect to see further negotiations about merger conditions in the coming weeks. AT&T has already signaled that it’s unwilling to accept any structural conditions, such as divesting political lightning rod CNN, but a targeted behavioral condition governing the licensing of Time Warner’s content to competing online video distributors, like Sling TV, may be enough to grease the wheels and get this merger over the line.

Whether AT&T is willing to accept such conditions, or whether it presses its case in court to get the merger approved without any conditions, remains to be seen. Regardless, the benefits from the merger would be substantial and undeniable, far outweighing any likely harms. AT&T’s acquisition of Time Warner should be approved posthaste.

 

 

Kosar talks book publishing with CHCW podcast

R Street Vice President of Policy Kevin Kosar and food writer Monica Bhide were recent guests of the Charles Houston Community Writers and sat for an extended discussion of publishing for the group’s podcast. Video is embedded below:

Puerto Rico: Storms and savings


Puerto Rico has a long history of disastrous hurricanes, as demonstrated once again this year by the devastating Hurricane Maria. These disasters recur frequently, historically speaking, on an island located “in the heart of hurricane territory.” Some notable examples follow, along with descriptions excerpted from various accounts of them.

  • In 1867, “Hurricane San Narciso devastated the island.” (Before reaching Puerto Rico, it caused “600 deaths by drowning and 50 ships sunk” in St. Thomas.)
  • In 1899, Hurricane San Ciriaco “leveled the island” and killed 3,369 people, including 1,294 drowned.
  • In 1928, “Hurricane San Felipe…devastated the island”…“the loss caused by the San Felipe hurricane was incredible. Hundreds of thousands of homes were destroyed. Towns near the eye of the storm were leveled,” with “catastrophic destruction all around Puerto Rico.”
  • In 1932, Hurricane San Ciprian “caused the death of hundreds of people”…“damage was extensive all across the island” and “many of the deaths were caused by the collapse of buildings or flying debris.”
  • In 1970, Tropical Depression Fifteen dumped an amazing 41.7 inches of rain on Puerto Rico, setting the record for the wettest tropical cyclone in its history.
  • In 1989, Hurricane Hugo caused “terrible damage. Banana and coffee crops were obliterated and tens of thousands of homes were destroyed.”
  • In 1998 came Hurricane Georges, “its path across the entirety of the island and its torrential rainfall made it one of the worst natural disasters in Puerto Rico’s history”…“Three-quarters of the island lost potable water”…“Nearly the entire electric grid failed”…“28,005 houses were completely destroyed.”
  • In 2004, Hurricane Jeanne caused “severe flooding along many rivers,” “produced mudslides and landslides,” “fallen trees, landslides and debris closed 302 roads” and “left most of the island without power or water.”
  • And in 2017, as we know, there was Hurricane Maria (closely following Hurricane Irma), with huge destruction in its wake.

These are some of the worst cases. On this list, there are nine over 150 years. That is, on average, one every 17 years or so.

All in all, if we look at the 150-year record from 1867 to now, Puerto Rico has experienced 42 officially defined “major hurricanes”—those of Category 3 or worse. Category 3 means “devastating damage will occur.” Category 4 means “catastrophic damage will occur.” And Category 5’s catastrophic damage further entails “A high percentage of framed homes will be destroyed…Power outages will last for weeks to possibly months. Most of the area will be uninhabitable for weeks or months.”

Of the 42 major hurricanes since 1867 in Puerto Rico, 16 were Category 3, 17 were Category 4 and 9 were Category 5, according to the official Atlantic hurricane database.

Doing the arithmetic (150 years divided by 42), we see that there is on average a major hurricane on Puerto Rico about every 3.5 years.

There is a Category 4 or 5 hurricane every 5.8 years, on average.

And Category 5 hurricanes occur on average about every 17 years.
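
For readers who want to check the arithmetic, here is a minimal Python sketch that reproduces the averages above from the category counts quoted from the hurricane database, treating 1867 to the present as a 150-year window:

    # Recurrence intervals implied by the counts quoted above.
    YEARS = 150
    counts = {"Category 3": 16, "Category 4": 17, "Category 5": 9}

    major = sum(counts.values())                                             # 42 major hurricanes
    print(round(YEARS / major, 1))                                           # 3.6 -- the "about every 3.5 years" above
    print(round(YEARS / (counts["Category 4"] + counts["Category 5"]), 1))   # 5.8 years for a Category 4 or 5
    print(round(YEARS / counts["Category 5"], 1))                            # 16.7 -- about every 17 years for a Category 5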

There are multiple challenging dimensions to these dismaying frequencies: humanitarian, political, engineering and financial. To conclude with the financial question:

How can the repetitive rebuilding after such frequent destruction be financed? Thinking about it in the most abstract way, savings have to be built up somewhere. This may be either by self-insurance or by the accumulation of sufficiently large premiums paid for insurance bought from somebody else. Self-insurance can include the cost of superior, storm-resistant construction. Or funds could be borrowed for reconstruction, but they would have to be amortized quite rapidly before the next hurricane arrives. Or somebody else’s savings have to be taken, in size, to subsidize the recoveries from the recurring disasters.
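
To make that savings logic concrete, here is an illustrative Python sketch. The rebuilding cost and return assumption are hypothetical placeholders rather than estimates for Puerto Rico; the storm interval is the 3.5-year average computed above.

    # Annual set-aside needed to accumulate an expected rebuilding cost before
    # the next major hurricane arrives; all dollar figures below are hypothetical.
    def required_annual_savings(expected_loss, years_between_storms, annual_return=0.0):
        if annual_return == 0.0:
            return expected_loss / years_between_storms
        # Future value of an ordinary annuity, solved for the annual payment.
        factor = ((1 + annual_return) ** years_between_storms - 1) / annual_return
        return expected_loss / factor

    # Hypothetical example: $10 billion of rebuilding every 3.5 years,
    # with savings earning 3 percent a year.
    print(required_annual_savings(10e9, 3.5, 0.03))   # roughly $2.75 billion per year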

Is it possible for Puerto Rico to have a long-term strategy for financing the recurring costs of predictably being in the way of frequent hurricanes, other than using somebody else’s savings?


Image by JEAN-FRANCOIS Manuel

 

Why cloture benefits both parties


Senate Rule XXII requires an affirmative vote of “three-fifths of the senators duly chosen and sworn” to invoke cloture, or end debate, on any “measure, motion, or other matter pending before the Senate … except on a measure or motion to amend the Senate rules, in which case the necessary affirmative vote shall be two-thirds of the senators present and voting.”

Consequently, cloture is typically understood today as making minority obstruction possible. A three-fifths vote is effectively required to schedule an up-or-down vote on most questions, absent the unanimous agreement of all 100 senators. However, ending debate on presidential nominations only requires a simple-majority vote. (Democrats used the nuclear option to reduce the threshold to invoke cloture on most nominees in 2013, and Republicans did the same for Supreme Court nominees earlier this year.)

Notwithstanding the recent use of the nuclear option, cloture remains a time-consuming process when the Senate is considering nominations and legislation. For most debatable measures, the entire process requires four calendar days to complete. This gives individual senators the ability to single-handedly delay consideration of the majority’s agenda on the Senate floor simply by withholding their consent to expedite the decision-making process. Given this, the number of cloture votes is frequently cited as evidence of minority obstruction.

But there is more to cloture than minority obstruction.

It is certainly not incorrect to view cloture motions and minority obstruction as related. However, such a narrow focus overlooks the many advantages that the cloture rule offers Senate majorities. Then-Majority Leader Harry Reid, D-Nev., acknowledged these benefits in an exchange with then-Minority Leader Mitch McConnell, R-Ky., on the Senate floor in July 2012. “The filibuster was originally … to help legislation get passed. That is the reason they changed the rules here to do that.”

The majority, acting through its leadership, can use cloture to structure the legislative process to its advantage. When viewed from this perspective, the incidence of cloture votes also reflects an increase in the influence of the majority leader and, by extension, the majority party, in the Senate’s deliberations.

The evolution in the use of cloture during the second half of the 20th century increased the influence of the majority leader. Cloture is now utilized preemptively on a routine basis to speed consideration of legislation regardless of time spent on the floor. In this process, the majority limits the minority’s ability to debate measures freely and offer amendments pursuant to the Senate rules. Such behavior may simply result from the anticipation of expected obstruction by the minority party. It could also represent a genuine effort to push the majority’s agenda through the Senate unchanged in a timely manner. The restrictive process could also be utilized to defend carefully negotiated legislation from killer amendments or to protect majority party members from having to take tough votes.

The majority leader frequently uses cloture as a scheduling tool when the Senate considers major legislation. While filing cloture is a time-intensive process, it provides the only clearly established procedure for the resolution of debatable questions in the Senate. Thus, the cloture rule provides a small degree of certainty in an otherwise uncertain environment. The majority leader can use such certainty to his advantage by scheduling votes at the end of the week and immediately before a long recess to force an issue. Obstructing senators are less likely to risk the ire of their colleagues by forcing a rare weekend session.

The cloture rule also gives the majority leader the ability to impose a germaneness requirement on amendments to legislation post-cloture. Such a requirement may spare majority party members from having to take tough votes on nongermane amendments. It also protects carefully crafted legislation from poison-pill amendments unrelated to the underlying issue.

Finally, cloture is often utilized by the majority leader for symbolic purposes. By triggering an up-or-down vote on legislation, cloture establishes a clearly defined line of demarcation between the majority and minority parties on controversial issues. Such votes can be presented as take-it-or-leave-it propositions. The proponents of such measures can often portray the senators who vote against them as not supporting the underlying legislation.

Without the cloture process, the majority leader would not have these important, albeit limited, tools at his disposal, and he would thus be unable to structure the legislative process to the majority’s advantage using existing Senate rules. When combined with the practice of filling the amendment tree, the cloture process further allows the majority leader to limit the ability of individual senators to participate in the legislative process without having to change the Senate’s rules to reduce their procedural prerogatives.

The fact that the majority leader regularly files cloture early in the legislative process, before any actual obstruction can be said to have occurred on a measure, is illustrative of the benefits that Senate majorities derive from the cloture process. As the figure below demonstrates, the instances in which cloture has been utilized during the early stages of a measure’s consideration on the Senate floor have increased dramatically since 2001. The majority’s pre-emptive use of cloture can be isolated, and more readily discerned, by comparing the total number of cloture motions filed in a Congress to the number remaining once motions filed on the first day of a bill’s consideration, or very early in the legislative process, are omitted.

[Figure: cloture motions filed per Congress, total vs. excluding first-day filings]
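
A minimal sketch of that comparison, using made-up records in place of the actual Senate floor data, might look like the following:

    # Each record: (congress, measure, floor days elapsed when cloture was filed).
    # The records are hypothetical; the real tallies come from Senate floor data.
    from collections import Counter

    motions = [
        (114, "S. 1", 0), (114, "S. 2", 6), (114, "S. 3", 0),
        (115, "S. 4", 0), (115, "S. 5", 0), (115, "S. 6", 1), (115, "S. 7", 9),
    ]

    total = Counter(congress for congress, _, _ in motions)
    later = Counter(congress for congress, _, days in motions if days > 0)  # omit first-day filings

    for congress in sorted(total):
        print(congress, "total:", total[congress], "excluding first-day filings:", later[congress])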

The takeaway from this is that the cloture process may benefit both the majority and the minority parties in the Senate today.


Image by Jonathan O’Reilly

 

How the FCC’s media ownership reforms could save local news


The following post was co-authored by R Street Tech Policy Associate Joe Kane. 


Local news is in decline. As advertising revenues plummet and both reporters and subscription dollars increasingly flow to a handful of coastal media outlets, local newspapers and broadcasters throughout the rest of the United States struggle to get by.

Shifts in media consumption in the digital age are partly to blame, but these local news outlets also are hamstrung by arcane ownership restrictions that inhibit their ability to innovate and compete. The Federal Communications Commission’s decades-old restrictions on local media ownership may have made sense when Americans’ news outlets were limited to local newspapers, radio and three commercial TV broadcasters (ABC, CBS and NBC — Fox wasn’t formed until the mid-1980s). But with the rise of cable news and the commercial internet, these restrictions now skew the media marketplace and become more outdated every day.

Thankfully, this broken situation is about to be fixed. This week, the FCC is set to pass commonsense reforms to its local-media ownership rules that are long overdue. These updated rules will better reflect the realities of the current media landscape and allow local newspapers and broadcasters to compete with other media outlets on a level playing field. The changes include eliminating bans on media cross-ownership, updating local-broadcast merger rules and allowing broadcasters to enter into joint sales agreements (JSAs) for advertising without automatically qualifying as merged entities for purposes of the ownership restrictions.

FCC Chairman Ajit Pai recently outlined the importance of eliminating the cross-ownership bans. Like many FCC rules, the bans contemplated a siloed and heavily concentrated media market, which in no way resembles the cornucopia of media outlets available to Americans today. The cross-ownership bans date back to the 1970s, when local broadcasters and newspapers provided the only access to news in many markets. At that time, prohibiting any one owner from controlling both a radio station and a television station in the same market, or a newspaper and a television or radio station in the same market, was a way to ensure Americans had access to a diverse array of viewpoints and news sources.

However, with the rise of cable news and the internet, these cross-ownership bans no longer make any sense. Jeff Bezos (Amazon CEO and world’s richest man) was allowed to buy the Washington Post, and Facebook or Google legally could try to buy The New York Times. But a local broadcaster buying a struggling newspaper is strictly forbidden.

That simply makes no sense. Any merger that threatens to create a monopoly or to lessen competition substantially (like those NYT hypotheticals) could still be blocked under general antitrust law. But many cross-ownership deals between local newspapers and broadcasters would raise few, if any, antitrust concerns, so the per se ban on them should be removed. Moreover, allowing cross-ownership between broadcasters and newspapers would likely lead to more coverage of local issues.

The FCC is also updating its rules for mergers among broadcasters, again to recognize the changing media marketplace. Previously, a top-four broadcaster and a smaller broadcaster were allowed to merge only if doing so left at least eight independently owned broadcast stations in the market. This so-called “Eight Voices Test” doesn’t count cable news or the internet as even a single “voice” in the market, which is absurd, given the effectively infinite capacity for independent voices on these platforms. Thankfully, the FCC is set to eliminate this outdated test and allow general antitrust law to govern these mergers instead.

Similarly, the FCC is relaxing its rule that prohibits all mergers between top-four broadcasters, choosing instead to review these mergers on a case-by-case basis. Currently, the FCC requires the four biggest TV broadcasters to be independently owned, regardless of how many other stations are in the market. This nationwide, bright-line rule is not appropriate in all markets. For example, in a market with two very large stations and several smaller stations, a merger between the third and fourth biggest stations could benefit both consumers and competition by putting greater pressure on the two biggest stations. In many cases, such a merger would be harmful, but employing case-by-case review will allow the FCC to evaluate actual market conditions, rather than sticking to a rigid line drawn in a bygone era.

Finally, the FCC is amending its rule that treats any broadcasters with joint sales agreements (JSAs) as being under common ownership. Again, this is simply a case of the FCC modernizing its media ownership rules to bring them more in line with the antitrust rules that govern competition in every other sector. The current rules assume that if two broadcasters use a JSA in advertising sales, it automatically gives one station enough control over the other to amount to common ownership. It’s true that such arrangements can amount to collusion and unfair restraints on trade, depending on the degree of control they exert. But they can also greatly reduce costs for struggling broadcasters who cannot afford their own sales teams. The current restriction on JSAs harms the public interest by blocking these efficiency gains. Going forward, whether JSAs are attributable for purposes of ownership restrictions will be assessed under general antitrust standards.

The media marketplace is increasingly converging toward the internet and over-the-top services, yet the FCC’s local media ownership rules were devised before the internet even existed. The commonsense reforms the FCC has proposed for these antiquated rules are well overdue. By removing unnecessary restrictions and updating its standards, the FCC can balance the playing field, stimulate investment and help save local news media.


Image by Zerbor

Virtue signaling won’t save the planet, but state compacts might


Senior U.S. climate officials arrived Monday in Bonn, Germany, a week into the latest meeting of the United Nations-sponsored climate change project known as the 23rd Conference of the Parties (COP23).

To no one’s surprise, the “rest of the world” (which is to say, Europe and the American political left, mostly) remains unhappy about the United States’ decision to withdraw from the Paris Climate Accord in June. Nonetheless, they are committed to find a way to persuade the country (which is to say, the red states) to see the error of its ways.

Over the weekend, four Democratic governors from states with active environmental movements—Jay Inslee of Washington, Jerry Brown of California, Kate Brown of Oregon and Terry McAuliffe of Virginia—verbally thrashed the Trump administration, although Brown was taken aback when even he was booed and heckled by “climate justice” protestors.

But to no avail.

On Monday, as several of Trump’s most senior climate negotiators took part in a panel talk on “clean fossil fuels,” attendees started singing a clever protest song to the tune of Lee Greenwood’s “God Bless the U.S.A.”

But the Trump administration still plans to exit the Paris Agreement. What gives?

Suffice it to say, taking moral umbrage at the United States doesn’t have the same coercive power over American policy that the Pentagon’s nuclear umbrella over Europe, or the U.S. Navy and its 11 aircraft carriers keeping the world’s trade routes open, has had on global policy. Hence the distinction between “hard power” and “soft power” made many years ago by Harvard’s Joseph Nye.

The top-down approach to climate change the United Nations prefers was never going to work. Major climate meetings have been taking place for 23 years—hence the name COP23—but have never succeeded in creating a workable international scheme. On two separate occasions, the United States has signed up and then removed itself from a global climate agreement, first in 2001 under George W. Bush and now in 2017 under Trump.

Thankfully, a more decentralized approach to carbon policy is quietly gaining steam, as states and cities band together to pursue their own goals. Speaking during a panel discussion in Bonn, McAuliffe celebrated the recent election wins in Virginia, which ushered in a new swath of Democrats who will enjoy something like parity with Republicans in the Legislature’s lower house, not to mention a new Democratic governor, lieutenant governor and attorney general.

This means Virginia will likely become a member of the nine-state Regional Greenhouse Gas Initiative, which has had some success cutting emissions from the power sector. Carbon markets are less economically efficient than a carbon price, but since RGGI’s creation in 2005, carbon emissions in its member states have fallen 40 percent, thanks in large part to the development of natural gas reserves through hydraulic fracturing. While RGGI is not an ideal vehicle to place a market price on carbon, this type of compulsory, cost-sharing system is the longest-lasting successful carbon market still in existence.

Along with Virginia, the election of a Democrat to replace outgoing Republican Gov. Chris Christie of New Jersey also means that state may rejoin RGGI, after leaving the group in 2011.

In other words, the growth of regional carbon markets is still a going concern. It even could force real U.S. emissions reductions in the coming years, even as the sound and fury of U.N. meetings along the Rhine continue to signify nothing.


Image by r.classen

 

Massachusetts carbon tax bills are a mixed bag


The search for climate change solutions that keep science front and center and political preferences secondary has led to one frustration after another. But that soon may change. A revenue-neutral carbon tax holds the promise of reducing carbon emissions without increasing the size of government – the principal objection of conservatives who long have been skeptical of more prescriptive climate regulations.

If properly designed, a revenue-neutral carbon tax can employ market-based incentives, rather than government regulations and subsidies, to ensure that pollution is appropriately priced. While no U.S. state has adopted a carbon tax to date, much less a revenue-neutral version, Massachusetts is poised to become the first state to successfully pass a hybrid version.

Two carbon-pricing bills of note have been filed this legislative session. S.1821, filed by state Sen. Michael J. Barrett, D-Lexington, and H.1726, filed by state Rep. Jennifer Benson, D-Lunenburg, both seek to assess fees on carbon emissions. Yet the Senate bill is, as a matter of both politics and policy, by far the better of the two.

Barrett’s bill would simply assess a fee on emissions without adding to government bureaucracy. That way, Massachusetts taxpayers would pay only for the price of their pollution, and no more. Conversely, the House proposal would divert 20 percent of the revenue generated by the tax into a so-called “Green Infrastructure Fund” to support investments in “transportation, resiliency and clean energy projects that reduce greenhouse gas emissions, prepare for climate change impacts, assist low-income households and renters in reducing their energy costs, and create local economic development and employment.”

While that laundry list of well-intentioned spending certainly aspires to assist the commonwealth, there’s no indication that it will better dispose of the funds it collects than private actors would under a system in which carbon emissions are appropriately priced. In other words, a revenue-neutral carbon tax could achieve all of the benefits sought by establishing a green infrastructure fund, without creating new government programs or adding to government waste.

A revenue-neutral carbon tax need not harm Massachusetts’ economy, as evidenced by the well-balanced policy approach taken by the western Canadian province of British Columbia, which adopted a similar fee-and-dividend approach to carbon pricing in 2008. The United Nations Framework Convention on Climate Change estimates B.C.’s tax has reduced the province’s emissions by up to 15 percent with no observable drag on overall economic performance. In fact, between 2007 and 2014, British Columbia’s real gross domestic product grew by 12.4 percent, stronger than the Canadian average.

The only downside to the Senate bill is that it, unfortunately, does nothing to reduce the tax burden on Massachusetts residents. Rather than use the fee to lower, say, the income tax, the revenue would finance a Carbon Dioxide Emissions Charges Rebate Fund. All proceeds would be returned to residents and employers in the form of rebates. Analysis from the Center on Budget and Policy Priorities concludes that if large rebates were distributed through an efficient delivery system, they would be able to protect low-income households from the brunt of the tax, but would not fully cover households and businesses with large carbon footprints.

A better approach would be to apply the revenues to reduce or eliminate more destructive taxes, like the corporate excise tax or the personal income tax. Taxing bad things, like carbon emissions, rather than good things, like labor and investment, would build ongoing support for the carbon tax among citizens and businesses and allow any negative effects to be more than offset by a growing state economy.

While critics in both parties and on both sides of the climate change debate may find fault with the Senate bill, it is a step in the right direction for the country and the commonwealth. If enforced properly, this legislation will reduce harmful carbon emissions and benefit Massachusetts residents and businesses, without contributing to the stream of wasteful government spending and unnecessary bureaucratic growth.

If legislators and environmental groups are serious about addressing climate change, they should do so in a way that truly benefits everyone. If, in fact, the goal is to reduce carbon emissions and bring economic benefits to residents and businesses, a revenue-neutral carbon tax is the best way forward.


Image by funnybear63

 

A new development involving the Congressional Review Act


A debate has broken out in the regulatory-reform community this past year over how properly to construe the reach of the Congressional Review Act. Traditionally, most observers have viewed the CRA as a tool by which Congress could repeal new regulations issued within the last 60 legislative days. But some legal scholars have argued that, while this is broadly correct, it’s far from clear when the CRA’s 60-day clock should start ticking.

Paul Larkin from the Heritage Foundation is among those to point out that, under the CRA’s text, the clock cannot start until the regulation in question has been submitted to Congress. Because many agency rules are never officially submitted to Congress—even ones that were promulgated many years ago—the 60-day clock was never activated for those rules and Congress could thus still repeal them using the fast-track mechanism.

Another component of this debate has been clarifying what, exactly, constitutes a “rule” for CRA purposes. The text of the CRA incorporates the Administrative Procedure Act’s definition of “rule,” which as Larkin points out, “has been recognized as quite broad.” This broader interpretation of the term “rule” could encompass informal agency actions like policy statements or guidance, which do not go through the more formalized process of notice-and-comment rulemaking under the APA.

Congress has so far appeared reluctant to embrace this broader interpretation of the CRA’s text and use it to repeal rules and other agency action stretching back into previous administrations. But that could be changing. The Wall Street Journal editorial board and other media outlets are reporting that Sen. Pat Toomey, R-Pa., recently asked the Government Accountability Office to issue a determination as to whether a 2013 leveraged-lending guidance document from the Obama administration constituted a “rule” for CRA purposes.

The GAO finally has issued its ruling, concluding that the lending guidance was, in fact, a rule under the CRA, meaning it is eligible for repeal under the act. Further, under Senate precedent, the publication of a GAO report such as this one is treated as the official trigger for the CRA’s 60-day legislative clock. As the nonpartisan Congressional Research Service has noted:

In some instances, an agency has considered an action not to be a rule under the CRA and has declined to submit it to Congress… In the past, when a Member of Congress has thought an agency action is a rule under the CRA, the Member has sometimes asked GAO for a formal opinion on whether the specific action satisfies the CRA definition of a ‘rule’ such that it would be subject to the CRA’s disapproval procedures.

GAO has issued 11 opinions of this type at the request of Members of Congress. In seven opinions, GAO has determined that the agency action satisfied the CRA definition of a ‘rule.’ After receiving these opinions, some Members have submitted CRA resolutions of disapproval for the “rule” that was never submitted…

Members have had varying degrees of success in getting resolutions recognized as privileged under the CRA even if the agency never submitted the rule to Congress. It appears from recent practice that, in these cases, the Senate has considered the publication in the Congressional Record of the official GAO opinions discussed above as the trigger date for the initiation period to submit a disapproval resolution and for the action period during which such a resolution qualifies for expedited consideration in the Senate…

It remains to be seen if Congress will pursue a resolution of disapproval under the CRA to repeal this particular rule on leveraged lending, but if it does, the potential implications run deep. Congressmen could ask GAO to issue more opinions determining whether past agency actions constitute rules for CRA purposes, and then seek to repeal them. The law firm Cleary Gottlieb observed in a memorandum on this development:

The GAO’s Leveraged Lending Opinion casts a shadow of uncertainty over the applicability and future viability of the Agencies’ leveraged loan supervision regime, and critically, other agency actions that could be characterized as ‘rules’ subject to Congressional disapproval. In fact, if Congress seeks to address other agency ‘rules’ that were never submitted to Congress under the CRA, the total volume of agency interpretations and statements of policy that could potentially become subject to Congressional disapproval would be very large indeed.

The Red Tape Rollback project (of which the R Street Institute is a partner) has been compiling a list of agency actions and rules that were never properly submitted to Congress and are therefore potentially still eligible for repeal via the CRA. We’ll see where Congress goes from here, but it’s possible it could be on the brink of adopting a broader interpretation of the CRA.


Image by iQoncept

 

NAFTA negotiators should respect domestic labor rules


The following blog post was co-authored by R Street Research Assistant Randy Loayza.


As the United States, Canada and Mexico continue to renegotiate the North American Free Trade Agreement, political posturing and protectionism hinder progress toward modernization and trade liberalization.

The question of labor unions has come to the forefront of Canada’s concerns. Specifically, Canadian trade representatives claim that the “right-to-work” laws in 28 American states provide an unfair advantage to individual states over Canadian provinces by allowing states to curb the collective bargaining power of unions. Canada also seeks higher labor standards in Mexico as part of this protectionist push toward a more aggressive international harmonization of labor laws.

It is not surprising that Canada feels compelled to address states’ right-to-work laws, as these prevent unions from mandating dues payments from employees who opt not to join the industry’s union. Canadian labor laws allow “a majority vote of the bargaining unit employees” to decide whether unions can collect mandatory dues from employees. This is also the standard for American states without RTW laws.

Canadian trade representatives hope to persuade the United States, as well as Mexico, to ratify the eight core conventions of the International Labor Organization (ILO) and various labor chapters from the Canada-EU Comprehensive Economic and Trade Agreement (CETA). Canadian trade representatives are even advocating federal legislation to bar RTW states from enforcing such laws, which would override the United States’ system of federalism.

Although it is conjecture to assume that Canada’s protectionist pleas are solely a response to the fear of being at a disadvantage when its labor markets compete with U.S. markets that operate under RTW laws, the inference is not farfetched. The very notion that RTW laws provide an unfair advantage to individual states is a testament to the value of open and free choice. In fact, states with RTW laws provide a better environment for investment opportunities and are less conducive to adversarial employer-employee relations. Moreover, states that mandate union membership as a condition of employment have seen labor migrate to states with RTW laws.

Labor unions see RTW laws as fundamental risks to their already declining membership base. The AFL-CIO, for instance, is extremely hostile toward states that have such laws, citing collective bargaining leverage, as well as wage benefits. As such, workers’ rights are at the forefront of the AFL-CIO’s concerns. Under the National Labor Relations Act, it is illegal in the United States for labor unions to mandate membership as a condition of employment. States without RTW laws do allow labor unions to bargain collectively with employers to mandate both membership and dues payments from any employee, although most arrangements only require the latter.

Supporters of Canada’s position on U.S. RTW laws focus narrowly on union membership as the largest contributing factor in wages and labor-participation rates. In reality, other factors like emerging markets, the Great Recession, modernization of integrated supply chains, globalization and the move toward higher-skilled industries arguably have led to greater changes in American employment and wages. Also, varying productivity rates determine wage rates in the labor market and across countries. Wage rates also drive immigration among countries—as trends in U.S.-Mexico migratory patterns have shown.

Evidence from around the globe has shown that economic development through increased international trade fosters higher labor standards. While there may be some concerns over unfair labor practices that disadvantage unions, such cases should not be used to threaten the progress made through NAFTA. Larger problems of inequality and contemporary productivity-wage disparities should also concern both NAFTA supporters and skeptics alike. But such concerns will not be addressed through international labor mandates. Instead, renegotiation toward a stable, modernized, market-driven and rules-based NAFTA must respect the sovereignty of domestic labor laws.


Image by DarwelShots

 

Big Ag reaping federal subsidy benefits


With the farm bill up for reauthorization in 2018, policymakers will soon have a chance to reassess farm subsidies and target the rampant waste and cronyism in our farm-support system.

The Environmental Working Group (EWG) this week published the most recent update to its Farm Subsidy Database, which serves as a useful guide to illustrate who is and who is not benefiting from the current system. It confirms the trend seen over decades of aggregated data on farm subsidies: the most successful agribusinesses receive the largest portion of federal farm subsidies.

EWG estimates that, between 1995 and 2016, farms that ranked among the top 10 percent of income received about 77 percent of “covered commodity” subsidies, or subsidies that cover corn and soybeans. To put that into perspective, these large-scale farms have an average household income of $1.1 million.

The Congressional Budget Office earlier this year projected that current farm subsidy programs would cost taxpayers $7.5 billion more than originally estimated. But rather than address those spiraling costs as part of its deliberations on the 2014 farm bill, Congress proposed cutting other programs, notably the Supplemental Nutrition Assistance Program, which has actually proven less expensive than originally expected.

The current system represents outright cronyism. Congress has neglected proposals for pragmatic reform, with the ultimate effect of disadvantaging smaller and beginning farmers, who must cope with rising land prices and farm consolidation as the mega-farms get richer.

Fortunately, EWG’s newly updated database is an incredibly reliable source for policymakers to acquire baseline information regarding farm subsidies. As the deadline to reauthorize the farm bill approaches, it is necessary to recognize their work as essential in the movement to reform federal agriculture programs and to quash agribusiness cronyism.


Image by Juergen Faelchle

 

Jon Coppage on the ‘Stacy on the Right’ show

R Street Visiting Senior Fellow Jonathan Coppage recently joined “Stacy on the Right” host Stacy Washington to discuss how accessory dwelling units were banned in many cities, but can be restored to provide families with financial and social flexibility across a home’s life-cycle. Video of the show is embedded below:

Latest reduction in duties shows promise in US-Canada softwood lumber trade


A recent decision by the U.S. Commerce Department could signal de-escalation of what has been one of the most fraught American trade disputes with Canada. Following the announcement of preliminary anti-dumping duties in June, the finalized revisions of total countervailing and anti-dumping duties—the total amount of tariffs on Canadian softwood lumber imports—reduce rates for most Canadian exporters.

This is the latest, most promising development in a long-running trade dispute. While the two countries had been trading in softwood lumber since the 1800s and first imposed tariffs on one another in the 1930s, the contemporary dispute started in 1982, when U.S. softwood lumber companies filed a complaint against competing Canadian exporters for unfair subsidization.

Subsequent claims have also contended Canadian provincial governments artificially set below-market prices for the use of public timberlands. These public lands comprise 94 percent of total Canadian lumber-yielding areas, while only 42 percent of U.S. timberlands are under government control.

Canadian provinces have since reformed their leasing and pricing systems and negotiated an agreement on market-share caps as part of prior attempts to avoid stricter tariffs. After five more cycles of duties claims and agreements, many of the same concerns over unfair pricing still remain. American lumber companies also blame a favorable currency exchange for giving Canadian imports the advantage in American lumber markets.

Despite these protectionist pressures, the most recent revisions come at a great time for the American homebuilding industry. The National Association of Home Builders has been adamant about the need for lumber imports to meet domestic demand. It also estimates that, of the approximately 33 percent of lumber that was imported last year, 95 percent came from Canada.

Economists have long agreed on the benefits of eliminating tariffs. While this latest revision shows a step in the right direction, the continued insistence on lumber duties—especially in the wake of rising lumber demand due to natural disasters—remains concerning to those who favor free markets and mutually beneficial, harmonious trade.


Image by quangmooo

 

Locked Up Without Conviction: Pretrial Integrity and Safety Act, strategies for public safety

R Street hosted a recent Capitol Hill panel to discuss how reliance on cash bail as a primary determinant for detention in pretrial hearings can harm public safety and hamper individual liberty. The panel was moderated by Marc Howard, professor of government and law, Georgetown University.

The panelists considered how objective pretrial risk assessments could serve as a complement to bail. Sekwan Merritt, a former inmate, discussed how the jail and cash bail system impacts people of color, and how facilities can increase rehabilitative resources to stop the revolving door of local jails. The panel also discussed the Pretrial Integrity and Safety Act, recently introduced bipartisan legislation sponsored by Sens. Rand Paul, R-Ky., and Kamala Harris, D-Calif.

Other panelists included:

  • Arthur Rizer, R Street’s director of national security and justice policy
  • Ed Chung, vice president for criminal justice reform, Center for American Progress
  • Robert Green, director of the Montgomery County Department of Correction and Rehabilitation
  • Sekwan Merritt, reform advocate and former inmate

Video of the event can be found here.

The Library of Virginia’s ‘Teetotalers and Moonshiners’ exhibit


R Street’s DrinksReform.org team recently made a trip to Richmond, Virginia, to visit the Library of Virginia’s “Teetotalers & Moonshiners” exhibit. Given Virginia’s rich history with moonshine, Richmond is an ideal location for such an exhibit. The exhibit, which is open through Dec. 5, tells the story of how Prohibition started local before going national.

The display starts by recounting the time period right before national Prohibition, when Virginia already was starting to clamp down on booze. In 1877, the state Legislature started taxing alcoholic spirits and, by 1886, it gave counties the ability to shutter saloons and other drinking establishments within their borders (what became known as the “local option”). On the brink of Prohibition going national, nearly 90 percent of Virginia counties had shut down their drinking establishments.

Not satisfied with the local option, Prohibition fever—spurred by the Progressive and Temperance movements—quickly advanced to the state level, where in 1914 the Legislature approved a ballot referendum on statewide prohibition. State citizens voted in favor of the statewide booze ban, although the exhibit notes that African-Americans and the white working-class—two constituencies who disfavored the ban—were largely excluded from the vote.

While Prohibition wouldn’t become nationalized until 1920, Virginia wasted little time in commencing its crackdown on bootleggers. Nov. 1, 1916, was the official “last call” for Virginia distillers, breweries and bars; most went out of business during the dry years, although some large companies were able to stay afloat by switching their production to beverages such as soda.

As followers of the Prohibition Era know, a black market of booze quickly sprang up, despite the government’s best enforcement efforts. As one placard at the exhibit described it:

Prohibition created a thriving underground economy and culture. Those in the know used passwords and secret knocks to access ‘nip joints’ and speakeasies. Moonshiners, makers or sellers of illicit whiskey, hid their operations in remote rural landscapes. Hidden compartments in clothing, everyday items, and even cars moved alcohol from place to place.

Virginia was ground zero for this moonshining culture, as remote regions of the Blue Ridge Mountains—places like Franklin County in southwest Virginia—made for ideal bootlegging locales. As I wrote recently in a piece on Virginia moonshine for National Public Radio’s “The Salt”:

[During Prohibition] Virginia’s backwoods distillers were forced underground. Moonshining in the Blue Ridge Mountains became so notorious that Franklin County, in the southwest corner of Virginia, was dubbed the “Moonshine Capital of the World,” after it was estimated that 99 out of 100 county residents were involved in the moonshine trade.

Before Prohibition, ‘getting moonshine in areas like Franklin County was not much different from buying eggs or milk,’ says Matt Bondurant. Bondurant’s novel, Wettest County in the World, is based on his grandfather’s moonshining exploits in Franklin County. Matt’s brother Robert now runs Bondurant Brothers Distillery, which distills unaged whiskey not far from Franklin. ‘But when Prohibition came around, then it became a potential money-making possibility,’ says Bondurant.

Money led to crime, which in turn led to the dramatic law-enforcement raids, car chases and prosecutions that so many Americans associate with Prohibition-era moonshining. Moonshiners were forced to operate in remote Appalachian regions like Franklin to avoid detection, and they went to great lengths to hide their efforts — including burying their stills underneath fake graveyards back in the mountains.

Perhaps the most interesting part of the exhibit was its discussion of Prohibition’s legacy. Although it notes that Prohibition “barely stemmed the flow of alcohol across the country,” it also points out that certain vestiges of Prohibition are still with us. Namely, in 1933, when Prohibition was repealed, Virginia established the Alcohol Beverage Control department to regulate booze within the state. Other states likewise passed comprehensive regulatory regimes governing alcohol in the immediate aftermath of repeal.

Unfortunately, this post-Prohibition regulatory structure remains mostly intact to this day. As R Street has noted in the past, Virginia labors under some of the worst alcohol laws in the country. It remains a “control state” in which distilled spirits can only be sold in government-operated liquor stores, and it severely taxes and handicaps its spirits producers.

Indeed, while it’s tempting to view exhibits like “Teetotalers & Moonshiners” as windows into a long-ago past—one that will remain relegated to the dustbin of history—the legacy of Prohibition still haunts us in many ways. More than anything, learning about the Prohibition Era underscores the importance of pursuing rational alcoholic beverage laws that promote free enterprise and consumer choice.

For more pictures from the exhibit (both ones taken by the DrinksReform.org team and ones provided by the exhibit), check out the images below:

Was the Bank of England right to lie for its country in 1914?


Jean-Claude Juncker, now the president of the European Commission and then head of the European finance ministers, sardonically observed about government officials trying to cope with financial crises:  “When it becomes serious, you have to lie.” The underlying rationale is presumably that the officials think stating the truth might make the crisis worse.

No one would be surprised by politicians lying, but Juncker’s dictum is the opposite of the classic theory of the Roman statesman Cicero, who taught that “What is morally wrong can never be expedient.” Probably few practicing politicians in their hearts agree with Cicero about this. But how about central bankers, for whom public credibility is of the essence?  Should they lie if things are too bad to admit?

An instructive moment of things getting seriously bad enough to lie came for the Bank of England at the beginning of the crisis of the First World War in 1914. At the time, the bank was far and away the top central bank in the world, and London was the unquestioned center of global finance. One might reasonably have assumed the Bank of England to be highly credible.

A fascinating article, “Your country needs funds: The extraordinary story of Britain’s early efforts to finance the First World War,” in Bank Underground, a blog for Bank of England staffers, has revealed the less-than-admirable behavior of their predecessors at the bank a century before. Or, alternatively, do you, thoughtful reader, conclude that it was admirable to serve the patriotic cause by dishonesty?

Fraud is a crime, and the Bank of England engaged in fraud to deceive the British public about the failure of the first big government war-bond issue. The issue raised less than a third of its target, but the real result was kept hidden. Addressing “this failure and its subsequent cover-up,” authors Michael Anson et al. reveal that “the shortfall was secretly plugged by the Bank, with funds registered individually under the names of the Chief Cashier and his deputy to hide their true origin.” In other words, the Bank of England bought and monetized the new government debt and lied about it to the public to support the war effort.

The lie passed into the Financial Times under the headline, “OVER-SUBSCRIBED WAR LOAN”—an odd description, to say the least, of an issue that in fact was undersubscribed by two-thirds. Imagine what the Securities and Exchange Commission would do to some corporate financial officer who did the same thing.

But it was thought by the responsible officers of the British government and the Bank of England that speaking the truth would have been a disaster. Say the authors, “Revealing the truth would doubtless have led to the collapse of all outstanding War Loan prices, endangering any future capital raising. Apart from the need to plug the funding shortfall, any failure would have been a propaganda coup for Germany.” Which do you choose: truth or preventing a German propaganda coup?

We learn from the article that the famous economist John Maynard Keynes wrote a secret memo to His Majesty’s Treasury, in which he described the Bank of England’s actions as “compelled by circumstances” and said they had been “concealed from the public by a masterful manipulation.” A politic and memorable euphemism.

Is it right to lie to your fellow citizens for your country? Was it right for the world’s greatest central bank to commit fraud for its country?  The Bank of England thought so in 1914. What do central banks think now?

And what do you think, honored reader?  Suppose you were a senior British official not in on the deception in 1914, but you found out about it with your country enmeshed in the expanding world war. Would you choose the theory of Juncker or Cicero?


Image by sylv1rob1

James Wallner praises the value of political conflict on the Ezra Klein Show

Does politics, and Congress in particular, actually have too little conflict, rather than too much? That’s the argument R Street Senior Fellow James Wallner put forward in a recent episode of the Ezra Klein Show, a podcast produced by the Vox Media Network. From his study of congressional history and procedure, Wallner concludes that the leadership of both parties has been using the rules to stymie disagreement, and the effect has been to make it nearly impossible for positions to be made clear, compromises to be tested and ways forward to be found.

Congress uses the Congressional Review Act to abolish another regulation

During the early months of the Trump administration, Congress and the president made unprecedented use of the Congressional Review Act to repeal regulations. The CRA, which was enacted in 1996 with bipartisan support, allows Congress to use an expedited process to overturn regulations that agencies have enacted within the past 60 legislative days.

Despite the CRA being one of the most powerful regulatory oversight mechanisms available to Congress, it was a seldom used tool until the current Republican-controlled Congress and White House deployed it 14 times this past spring to overturn Obama-era regulations. Because of the law’s 60-day time limit, however, many observers figured the CRA would go dormant after Congress repealed all the eligible Obama-era rules.

This assumption failed to consider the possibility of using the CRA to repeal rules from independent agencies, which can still act at cross-purposes with the current president. For example, the present head of the Consumer Financial Protection Bureau, Richard Cordray, is an Obama appointee who is only removable from office “for cause.” This limits President Donald Trump’s ability to insert a new agency head. Sure enough, this past July, the CFPB promulgated an arbitration rule that was opposed by the White House.

Despite its opposition to this rule, the only recourse available to the administration was enlisting Congress to pass a CRA resolution repealing the rule. This week, the Senate sent such a resolution to the president’s desk. If Trump signs the bill as expected, it will bring the CRA repeal count up to 15 since January.

‘Economic competitiveness’ doesn’t always mean stronger intellectual property laws


The following piece was co-authored by R Street Tech Policy Associate Joe Kane.


In a recent letter, several dozen conservative leaders joined together to call on Republican policymakers to strengthen U.S. intellectual property laws. They lament the loss of U.S. manufacturing jobs that has turned “American communities into ghost towns” (about that) and warn this trend also will overtake the innovation sector as inventors move overseas.

In fact, patent applications are up from both U.S. and foreign applicants. Nonetheless, the letter’s authors contend that bolstering U.S. intellectual property laws must be just as core to the conservative agenda as deregulation and tax reform, lest we sacrifice our constitutional principles and global economic competitiveness.

The authors are correct to note that “patent protection was enshrined in our Constitution.” Where they are wrong is in their attempt to frame intellectual property as something the founders saw as an unfailing good or absolute right. Article 1, Section 8 of the U.S. Constitution gives Congress the power to create intellectual property protections like patents “[t]o promote the progress of science and the useful arts.” It is an enumerated power of the legislature, just like the power to regulate currency, to establish post offices and to tax. In other words, patents are something Congress may employ, specifically with the goal to foster innovation. The patent system is worth defending or even strengthening to the extent that doing so supports that goal. Insofar as it undermines innovation, it should be scaled back.

It would, for example, be false to assume that because innovation is desirable, we should always endeavor to grant more patents, irrespective of their quality, or that we should go to extreme lengths to enforce them or protect them from future challenge. An improperly structured patent system will stifle innovation, rather than encourage it, as the framers intended. Granting exclusive rights to an inventor always poses the risk of delaying when a particular innovation becomes widespread and making it costlier for others to build upon it. These costs are typically worthwhile where they serve as incentives for innovations that wouldn’t otherwise see the light of day. Patents are the means we use to create those incentives. But when low-quality or overly broad patents are issued, future innovation is stalled without countervailing benefits.

Patent examiners are fallible. The U.S. Patent and Trademark Office has issued numerous low-quality patents and the Government Accountability Office has cited the USPTO for inconsistent standards of quality and clarity. This uncertainty leads to litigation, which imposes greater costs on innovators. Indeed, patents have been granted for “inventions” that already exist or that were completely obvious, such as podcasts, swinging on a swing, cats chasing laser pointers and artificial sticks, to name a few.

Because challenging dubious patents through litigation is expensive, Congress moved in 2011 to create new avenues for the expedient review of granted patents that may not meet the requirements of novelty and nonobviousness. One of these proceedings—inter partes review (IPR)—has proved highly successful in accomplishing this goal. As we argue in a recent R Street paper, this process offers a mechanism to challenge dubious patents that is much quicker and cheaper than traditional litigation. The median inter partes review challenge costs about $350,000 through the appeal phase, as compared to $3.1 million in the district court.

The letter’s signatories, however, characterize the Patent Trial and Appeal Board, which conducts IPR, as a “patent death squad” that exists “for the sole purpose of invalidating patents.” They don’t appear even to consider the possibility that some improperly granted patents should be invalidated. They also do not posit what proportion of patents – or name even a single specific patent – invalidated by IPR they think should have remained in force.

No system is perfect, but we should discuss the benefits and costs of IPR, rather than assume it is an attack on intellectual property simply because it invalidates patents. Just as procedures to invalidate counterfeit or flawed titles in real estate are not an attack on real property rights, procedures to weed out bad patents are not an attack on innovation or on patents, in general. Supporters of strong patent laws should be especially interested in the quality and clarity of patents issued.

Of course, the reality that some patent abuse exists does not mean that we should abandon patents, but it does mean that we should carefully examine the system’s trade-offs and the externalities it creates. We should not assume that simply having more patents or stronger enforcement tools means more innovation. A world with 100-year patent terms or severe criminal penalties for infringement would inevitably be one with less innovation.

We all want an innovative society, and the patent system is an important means to further that goal. But we shouldn’t shut down conversations about patent reform or other improvements to the current system that could create more efficient ways to invalidate bad patents or otherwise lower transaction costs in litigation. To keep America on top, we must recognize and embrace the artful balance our founders intended. In other words, patents are a tool and an incentive that can be quantified and measured to inform our policymaking. They’re not an end in themselves.


Image by garagestock

 

The Senate Committee on Rules and Administration is hemorrhaging staff. Why?


The following post was co-authored by R Street Vice President of Policy Kevin Kosar. 


At the beginning of the 115th Congress, Sen. Richard Shelby, R-Ala., took over from Sen. Roy Blunt, R-Mo., as chairman of the Senate Rules and Administration Committee. Over the past year, the number of committee staff dropped from 20 to 14, a 30 percent decrease.

Of the 20 staffers employed as of Sept. 1, 2016, only six remain with the committee as of late last month, representing a 70 percent departure rate. Three of the committee’s new staff formerly worked in Sen. Shelby’s personal office and six of the 14 current staffers have been hired since February.

The committee’s jurisdiction includes federal elections, presidential succession and various legislative branch management duties (e.g., the Library of Congress and Architect of the Capitol). It has been tossed political hot potatoes, like bills requiring public release of presidents’ tax filings and the creation of an independent commission to investigate Russian interference in the election. Of particular interest is the Senate Rules Committee’s responsibility to:

…make a continuing study of the organization and operation of the Congress of the United States and shall recommend improvements in such organization and operation with a view toward strengthening the Congress, simplifying its operations, improving its relationships with other branches of the United States Government, and enabling it better to meet its responsibilities under the Constitution of the United States.

Why the committee’s staff cohort decreased sharply and why its turnover is so high is unclear, but it is concerning. Rapid staff turnover can drain institutional memory and disrupt operations. Our findings are displayed in the figure below; the methodology behind them follows at the end of this post.

[Figure: Senate Rules and Administration Committee staff retention across the three snapshot dates, 2016–2017]

Methodology

Using LegiStorm employment data, we took snapshots of the committee’s staffing directory at three different dates:

Date 1

  • Sept. 1, 2016: Comfortably before the outcome of the 2016 congressional elections was known, and with it which party would control the Senate and chair its committees.

Date 2

  • Feb. 1, 2017: Several weeks after the 115th Congress was sworn in and the new chairman (Shelby) had taken over. The gap between the start of the new Congress and Feb. 1 gave Shelby time to make staffing changes and gave staffers a chance to experience the working environment in the new Congress and under the new chairman.

Date 3

  • Sept. 27, 2017: Just over one year from Date 1, and more than 10 months since Date 2. This gap grants some time for staffer attrition due to firing, resignation or any other normal career or life circumstances (new job in or outside the chamber, moving from D.C., etc.).

Each committee staffer was assigned a number (e.g., Staffer 1) and his or her continued employment was examined at the aforementioned dates. Hence, we see in the figure above which staff stayed and which did not (e.g., Staffer 1 departed sometime after Feb. 1, 2017).
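For readers who want to replicate this kind of snapshot comparison, the sketch below shows one way to compute retention and departure rates once rosters have been exported for the three dates. The names and dates are illustrative placeholders, not actual LegiStorm records.

```python
# Minimal sketch: compare staff rosters captured at three snapshot dates.
# The names and dates below are placeholders, not actual LegiStorm data.

snapshots = {
    "2016-09-01": {"Staffer 1", "Staffer 2", "Staffer 3", "Staffer 4"},
    "2017-02-01": {"Staffer 1", "Staffer 3", "Staffer 4", "Staffer 5"},
    "2017-09-27": {"Staffer 3", "Staffer 5", "Staffer 6"},
}

baseline_date = "2016-09-01"
baseline = snapshots[baseline_date]

for date, roster in snapshots.items():
    retained = baseline & roster   # baseline staffers still employed
    departed = baseline - roster   # baseline staffers no longer listed
    print(
        f"{date}: {len(retained)}/{len(baseline)} of the baseline cohort retained "
        f"({len(departed) / len(baseline):.0%} departure rate)"
    )
```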


Image by Eviart

 

On GATT’s 70th birthday, free trade remains the wellspring of peace


It was 70 years ago today—Oct. 27, 1947—that representatives from 23 nations gathered at the Palais des Nations in Geneva, Switzerland, to finalize and sign the General Agreement on Tariffs and Trade. This globally transformative trade agreement set into motion decades of growing economic prosperity around the globe.

Following the Second World War, these select countries sought to avoid the economically dismal ramifications of post-World War I protectionist policies. With the Great Depression’s shadow looming over the first conference, the conferees understood the importance of international economic cooperation and the negative externalities of restrictive trade policies. The goal of the conference was to pursue a free and open global market, reduce tariffs and dismantle trade barriers to allow the global economy to flourish.

These negotiations were both ambitious and historic. “It marks the completion of the most comprehensive, the most significant and the most far-reaching negotiations ever undertaken in the history of world trade,” Chairman Max Suetens, who presided over the trade negotiations along with his co-chair Sergio Clark, announced to the world, underscoring the significance of the feat. This achievement certainly marked the beginning of the postwar economic boom, signaling to all countries that the global market was now a cooperative venture.

Indeed, the enactment of GATT propelled the global marketplace into decades of unprecedented economic growth. In the years to come, countries would continue to build on GATT’s accomplishments of tariff reductions and lower trade barriers, eventually leading to the Kennedy Round, the Tokyo Round and the Uruguay Round. These negotiations were grounded in adherence to a rules-based system: their motions and reports rested on reasoned, agreed-upon principles rather than ad hoc policymaking.

At the conclusion of the Uruguay Round, GATT was replaced by a more structured international body, the World Trade Organization (WTO). While the WTO replaced GATT, it adopted GATT’s overall framework. What began as an endeavor among 23 nations to address trade reform has evolved into an international collaborative trade effort, with all but three nations maintaining membership as of today.

In the face of today’s renegotiation of the North American Free Trade Agreement (NAFTA) and the growing threat of protectionism across the globe, it is immensely valuable to look to the accomplishments made under GATT. This is particularly so in the case of the United States, which now is signaling a preference for bilateral rather than multilateral trade agreements. This is an unfortunate development, as all the progress made under both GATT and the WTO demonstrates. Such gains should not be easily dismissed with reckless political rhetoric.

Now is the time to remember that humans flourish when various impediments and barriers are lifted, allowing free and open trade for goods and services. Free trade remains the wellspring of peace.


Image by Lightspring

 

House subcommittee clears plans to unclog hydropower pipeline


With a 14-minute Oct. 26 markup session, the U.S. House Subcommittee on Energy sent a pair of deceptively simple bills to the full Energy and Commerce Committee, each of which could (should they ultimately become law) help to boost the U.S. hydropower sector.

Both measures look to cut through the thicket of regulations that has brought what once was at least incremental growth of U.S. hydropower capacity to a virtual standstill. H.R. 2872—sponsored by Rep. Larry Bucshon, R-Ind.—would give the Federal Energy Regulatory Commission discretion to exempt nonpowered dams that want to develop hydropower from various licensing requirements.

Meanwhile, H.R. 2880—sponsored by Rep. Morgan Griffith, R-Va.—would limit FERC’s authority, when it comes to closed-loop pumped storage hydropower, to imposing only those licensing conditions that are needed for public safety, that protect fish and wildlife, and that are reasonable and economically feasible.

Speaking at Thursday’s markup, Energy Subcommittee Chairman Fred Upton, R-Mich., called new turbines on existing hydro dams “a win-win. These projects cause minimal environmental impact, new investment, more jobs and added benefits to the grid.”

A 2014 U.S. Energy Department report found that streamlining rules to more easily permit the installation of hydro turbines at existing nonpowered dams could add more than 50 gigawatts of carbon-free power by 2030. This is equal to roughly 50 additional nuclear facilities, but at a fraction of the cost. It would boost total U.S. electricity capacity by 3.5 percent, all without any increase in emissions. And it would roughly match the amount of wind energy capacity built since 2010, but without the tens of billions of dollars of taxpayer subsidies.

R Street looked at the topic in an August 2017 white paper that recommended unclogging the regulatory bottlenecks in a licensing process that can take between seven and 10 years and that sometimes makes the cost of the upgrades too expensive to justify.

The amount of red tape to build or relicense a hydropower project reflects a slow accumulation of bureaucratic obstructions, combined with decades of congressional inaction. These bills represent a hopeful step toward more inexpensive clean energy being produced in the United States.


Image by Fedor Selivanov

 

Lyft driver’s past conviction doesn’t undermine background check system


The taxi industry and other foes of transportation network companies are sure to jump on a recent news story about a Chicago Lyft driver who passed the company’s background check, even though he had a federal conviction for “aiding an individual with ties to terrorism.” Expect the overheated statements from the taxi cartel, but don’t pay them any mind when they arrive.

According to WGN-TV in Chicago, Raja L. Khan “spent 30 years as a Chicago cab driver and attempted to reinstate his license. He was denied by the city and by ride share company Uber.”

Lyft said the independent company that handles its background checks missed the conviction and that it’s an isolated incident. Nevertheless, the San Francisco-based company said it is “re-running background checks for Chicago drivers” and “will also voluntarily allow the city of Chicago to audit our background checks on an ongoing basis at Lyft’s expense.” In the wake of the incident, Chicago regulators have cited Lyft and are imposing fines of up to $2 million.

No one was harmed, but the situation understandably creates public concern about the reliability of background checks. Still, this is the kind of human error that plagues any screening system, and, indeed, it was another ridesharing company that caught the problem. The incident doesn’t undermine the safety of Lyft or this emerging industry.

After a tragic accident in San Francisco involving an Uber driver, taxi officials called for the city to regulate the TNCs tightly or even shut them down. But taxi owners will use any excuse to shut out competitors that are doing a remarkable job serving customers and providing a safe app-based transportation alternative. By the way, TNCs have had remarkable success in reducing drunken-driving incidents, which highlights the overall safety benefits they provide.

As I wrote for the San Diego Union-Tribune in 2014:

A search of ‘taxi’ and ‘car crashes’ will reveal a long list of troubling news stories. In San Francisco (in 2013), an Ohio couple died after a cab with bad brakes slammed into a concrete pillar. A year earlier there, a taxi driver who caused a deadly crash was identified as a man convicted in a notorious murder case, yet he passed the background checks.

Or, to take a more extreme case, in February 2013, licensed and vetted traditional cabbie Kashif Bashir of Alexandria, Virginia, shot an Alexandria police officer in the head when the officer responded to a call about Bashir harassing a store clerk and idling suspiciously in front of her storefront. Bashir was later cleared of attempted murder charges on grounds of insanity. Neither the city’s background check nor his taxi company employer had caught his long history of mental illness and alarming behavior.

The point is that a city-run taxicab background system is not necessarily a failsafe, either. The TNCs have every reason to scrutinize their drivers, to protect the safety of passengers and to avoid fines and citations. But it’s an imperfect world and mistakes happen, especially in a massive industry employing tens of thousands of drivers nationwide.

It’s best to do as Lyft and Chicago are doing here – dealing with any errors and using new procedures to assure they don’t happen again. Any attempt to tar a company or an entire industry for a modest mistake is irresponsible and almost certainly motivated by concerns about competition rather than a fastidious devotion to safety.


Image by Andrey_Popov

 

DOJ’s Rosenstein is set to jump back down the rabbit hole of opposing encryption in your smartphone

Also appeared in: TechDirt


Back in May, Deputy Attorney General Rod Rosenstein wrote the notorious disapproving memo that President Donald Trump used as pretext to fire FBI Director James Comey. But on at least one area of law-enforcement policy, Rosenstein and Comey remain on the same page.

The deputy AG set out earlier this month to revive the former FBI director’s efforts to limit encryption and other digital security technologies. In doing so, Rosenstein drew upon nearly a quarter century of the FBI’s anti-encryption tradition. But it’s a bad tradition.

Like many career prosecutors, Rosenstein is pretty sure he’s more committed to upholding the U.S. Constitution and the rule of law than most of the rest of us are. This was the thrust of his Oct. 10 remarks on encryption, delivered to an audience of midshipmen at the U.S. Naval Academy.

The most troubling aspect of Rosenstein’s speech was his insistence that, while the government’s purposes in defeating encryption are inherently noble, the motives of companies that provide routine encryption and other digital-security tools (the way Apple, Google and other successful companies now do) are inherently selfish and greedy.

At the same time, Rosenstein characterized those who disagree with him on encryption policy as a matter of principle—based on decades of grappling with the public-policy implications of using strong encryption versus weak encryption, or no encryption—as “advocates of absolute privacy.” (We all know that absolutism isn’t good, right?)

Rosenstein implied in his address that federal prosecutors are devoted to the U.S. Constitution in the same way that Naval Academy students are:

Each Midshipman swears to ‘support and defend the Constitution of the United States against all enemies, foreign and domestic.’ Our federal prosecutors take the same oath.

Of course, he elides the fact that many whose views on encryption differ from his views—including yours truly, as a lawyer licensed in three jurisdictions—have also sworn, multiple times, to uphold the U.S. Constitution. What’s more, many of the constitutional rights we now regard as sacrosanct, like the Fifth Amendment privilege against self-incrimination, were only vindicated over time under our rule of law—frequently in the face of overreaching by law-enforcement personnel and federal prosecutors, all of whom also swore to uphold the Constitution.

The differing sides of the encryption policy debate can’t be reduced to those who support or oppose the rule of law and the Constitution. Rosenstein chooses to characterize the debate this way because, as someone whose generally admirable career has been entirely within government, and almost entirely within the U.S. Justice Department, he has simply never attempted to put himself in the position of those with whom he disagrees.

As I’ve noted, Rosenstein’s remarks draw on a long tradition. U.S. intelligence agencies, together with the DOJ and the FBI, long have resorted reflexively to characterizing their opponents in the encryption debate as fundamentally mercenary (if they’re companies) or fundamentally unrealistic (if they’re privacy advocates). In Steven Levy’s 2001 book Crypto, which documented the encryption policy debates of the 1980s and 1990s, he details how the FBI framed the question for the Clinton administration:

What if your child is kidnapped and the evidence necessary to find and rescue your child is unrecoverable because of ‘warrant-proof’ encryption?

The Clinton administration’s answer—deriving directly from George H.W. Bush-era intelligence initiatives—was to try to create a government standard built around a special combination of encryption hardware and software, labeled “the Clipper Chip” in policy shorthand. If the U.S. government endorsed a high-quality digital-security technology that also was guaranteed not to be “warrant-proof”—that allowed special access to government agents with a warrant—the administration asserted this would provide the appropriate “balance” between privacy guarantees and the rule of law. But as Levy documented, the government’s approach in the 1990s raised just as many questions then as Rosenstein’s speech raises now:

If a crypto solution was not global, it would be useless. If buyers abroad did not trust U.S. products with the [Clipper Chip] scheme, they would eschew those products and buy instead from manufacturers in Switzerland, Germany, or even Russia.

The United States’ commitment to rule of law also raised questions about how much our legal system should commit itself to enabling foreign governments to demand access to private communications and other data. As Levy asked at the time:

Should the United States allow access to stored keys to free-speech–challenged nations like Singapore, or China? And would France, Egypt, Japan, and other countries be happy to let their citizens use products that allowed spooks in the United States to decipher conversations but not their own law enforcement and intelligence agencies?

Rosenstein attempts to paint over this problem by pointing out that American-based technology companies have cooperated in some respects with other countries’ government demands—typically over issues like copyright infringement or child pornography, rather than digital-security technologies like encryption. “Surely those same companies and their engineers could help American law enforcement officers enforce court orders issued by American judges, pursuant to American rule of law principles,” he says.

Sure, American companies, like companies everywhere, have complied as required with government demands designed to block content deemed illegal in the countries where they operate. But demanding that these companies meet content restrictions—which itself, at times, raises international rule-of-law issues—is a wholly separate question from requiring companies to enable law enforcement everywhere to obtain whatever information it wants regarding whatever you do on your phone or on the internet.

This is particularly concerning when it comes to foreign governments’ demands for private content and personal information, which might include providing private information about dissidents in unfree or “partly free” countries whose citizens must grapple with oppressive regimes.

It is simply not true that technology companies are just concerned about money. In fact, it’s cheaper to exclude digital-security measures than to invent and install new ones (such as Apple’s 3D-face-recognition technology set to be deployed in its new iPhone X). Companies invest in these protections not just to improve the bottom line but also to earn the trust of their users. That’s why Apple resists pressure, both from foreign governments and from the U.S. government, to develop tools that governments (and criminals) could use to turn my iPhone against me.

This matters even more in 2017, and beyond. No matter how narrowly a warrant or wiretap order is written, access to my phone and other digital devices is access to more or less everything in my life. The same is true for most other Americans these days.

Rosenstein is certainly correct to say “there is no constitutional right to sell warrant-proof encryption”—but there absolutely is a constitutional right to write computer software that encrypts my private information so strongly that government can’t decrypt it easily (or at all). Writing software is generally understood to be presumptively protected expression under the First Amendment. And, of course, one needn’t sell it—many developers of encryption tools have given them away for free.
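To see how low the barrier really is, here is a minimal sketch using the open-source Python `cryptography` package, chosen only as one example of the many freely distributed encryption libraries; the message is a placeholder.

```python
# Minimal illustration: strong, authenticated symmetric encryption using a
# freely available open-source library. The plaintext is a placeholder.
from cryptography.fernet import Fernet

key = Fernet.generate_key()       # 32 random bytes, base64-encoded
cipher = Fernet(key)

token = cipher.encrypt(b"my private notes")   # ciphertext plus integrity tag
print(token)                                   # unreadable without the key

assert cipher.decrypt(token) == b"my private notes"
```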

What’s more, our government’s prerogative to seek information pursuant to a court-issued order or warrant has never been understood to amount to a “constitutional right that every court order or search warrant be successful.” It’s common in our law-enforcement culture—of which Rosenstein is unquestionably a part and a partisan—to invert the meaning of the Constitution’s limits on what our government can do, so that law-enforcement procedures under the Fourth and Fifth Amendments are interpreted as a right to investigatory success.

We’ve known this aspect of the encryption debate for a long time, and you don’t have to be a technologist to understand the principle involved. Levy quotes Jerry Berman, then of the Electronic Frontier Foundation and later the founder of the Center for Democracy and Technology, on the issue:  “The idea that government holds the keys to all our locks, even before anyone has been accused of committing a crime, doesn’t parse with the public.”

As Berman bluntly sums it up, “It’s not America.”


Image by Victor Moussa

Data show how bad turnover was in Rep. Tim Murphy’s office


The well-known impetus for the forced retirement from Congress later this month of U.S. Rep. Tim Murphy, R-Pa., was the bombshell revelation that he urged his mistress to undergo an abortion, despite his staunch pro-life stance. Receiving far less attention, however, is a June 2017 memo authored by Murphy’s long-serving chief of staff, Susan Mosychuk, which suggests his systematic mistreatment of his staff may well have led to Murphy’s downfall even if the affair scandal had never come to light.

In the memo, Mosychuk details a “pattern of sustained inappropriate behavior and engagement from the Congressman to and with staff,” including mistreating and harassing aides and “unreasonable expectations and ongoing criticisms.” Mosychuk also writes of “abysmal office morale” and an inordinate amount of staff turnover within the office (“near 100% turnover within one year’s time”), confirming Hill rumors that Murphy’s office hemorrhaged employees.

Is it true that Murphy, and his office environment, churned through congressional staffers, as Mosychuk suggests? If so, has he done so since first being elected in 2003, or was his reputation for doing so a recent development?

Using LegiStorm employment data, we took stock of Murphy’s staff employment trends throughout his congressional tenure. It is important to note that representatives are capped at employing 18 permanent employees at any given time, plus an additional four staffers serving as part-time, shared or temporary aides. Thus, members have a hard ceiling of 22 employees at any one time.

[Figure 1: Number of staffers separating from Rep. Murphy’s office, by year]

Figure 1 above shows the number of separated staffers—those leaving Murphy’s office—in each year of his career. On average, more than eight staffers left Murphy’s office per year, meaning roughly 39 percent of his aides separated annually, even using the most conservative staff total of 22 employees.

Only once in his 15-year career (2008) did Murphy have fewer than six staffers leave his office. During five years of Murphy’s career, 10 or more staffers left, with a high of 16 departing in 2004, just one year after his first year in office. All in all, Murphy aides separated from his office at an incredibly high, and fairly consistent, clip since his election in 2003.

[Figure 2: Distribution of tenure lengths among Rep. Murphy’s staffers]

As a second measure of office turnover, we calculated how long each of Rep. Murphy’s aides was employed by his office. Figure 2 presents the distribution of staffers’ tenure lengths. During his time in Congress, just under half of Murphy’s aides, 48.25 percent, worked in his office for less than one year. Another 25.67 percent stayed only between one and two years. Thus, of the 150 employees who worked for Murphy throughout his time in Congress, only 25.17 percent remained on his staff for more than two years.
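For readers curious how such a tenure distribution can be tabulated from employment spans, here is a rough sketch; the field layout and example records are illustrative placeholders, not the LegiStorm export format.

```python
# Sketch: bucket staffer tenures into under-one-year, one-to-two-year and
# two-plus-year bins. The records below are illustrative placeholders.
from datetime import date

records = [
    ("Aide A", date(2003, 1, 6), date(2003, 9, 30)),
    ("Aide B", date(2004, 2, 1), date(2005, 6, 15)),
    ("Aide C", date(2005, 1, 3), date(2009, 12, 31)),
]

bins = {"under 1 year": 0, "1-2 years": 0, "over 2 years": 0}
for _, start, end in records:
    years = (end - start).days / 365.25
    if years < 1:
        bins["under 1 year"] += 1
    elif years <= 2:
        bins["1-2 years"] += 1
    else:
        bins["over 2 years"] += 1

total = len(records)
for label, count in bins.items():
    print(f"{label}: {count / total:.2%} of staffers")
```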

Taken together, these measures of office aide turnover give credence to Hill rumors and Mosychuk’s allegations that Murphy’s office was one in which staffers purposefully avoided or quickly departed after signing on. Though new accounts also suggest Mosychuk was a contributor to Murphy’s “reign of terror,” the employment trends of those serving within Murphy’s office clearly show that the representative faced an inordinate and nearly constant high rate of replacement during his entire run in Congress.


Photo by Jonathan Ernst/Reuters

October Legislative Branch Capacity Working Group on ‘regular order’

In this Legislative Branch Capacity Working Group Session, Peter Hanson and R Street’s James Wallner examine and interpret calls for “regular order” in the appropriations process from members of Congress. What does “regular order” mean to members, and why has Congress abandoned it? What can be learned from the House practice of allowing open rules on appropriations bills about the impact of regular order on debate?

Kevin Kosar on FedTalk: Doing more with less

Kevin Kosar, the vice president of policy at the R Street Institute, joined FedTalk host Ben Carnes from Shaw Bransford & Roth and Kathy Goldschmidt, the president and CEO of the Congressional Management Foundation, to discuss the dwindling capacity of congressional members and their staffs to do their jobs effectively as responsibilities increase.

In defense of third-degree amendments


Senate majorities routinely restrict the ability of senators to participate in the legislative process.

The most common way they do so is when the majority leader fills the amendment tree to block senators from offering amendments. The maneuver prevents the underlying legislation from being changed and protects rank-and-file senators in the majority from having to cast votes that could be used against them in their future efforts to win re-election.

Yet senators do not need the majority leader’s permission to offer amendments to legislation pending on the Senate floor. Indeed, they can offer so-called third-degree amendments even though the amendment tree has been filled.

This confronts the majority leader with a unique challenge. If utilized on a regular basis, third-degree amendments could eventually undermine his ability to control what measures receive votes on the Senate floor. This would, by extension, weaken significantly his ability to prevent the underlying legislation from being changed and to protect rank-and-file members in the majority from voting on amendments.

Given this, some senators have opposed efforts by their colleagues to offer third-degree amendments. Their concerns are illustrated in the debate surrounding the effort by Sen. Ted Cruz, R-Texas, to offer a third-degree amendment in July 2015. In opposing the maneuver, Sen. Lamar Alexander, R-Tenn., warned his colleagues of the consequences that would result if they joined Cruz in voting to overturn the decision of the chair. Specifically, he made two claims regarding Cruz’s effort, and the tactic of offering third-degree amendments more broadly.

First, Alexander equated Cruz’s appeal with the nuclear option employed by Senate Democrats in November 2013. He suggested, “If…a majority of senators agree with the senator from Texas, the Senate will be saying that a majority can routinely change Senate rules and procedures anytime it wants on any subject it wants in order to get the result it wants.” Alexander’s goal was to link Cruz’s appeal with the effort of Senate Democrats to circumvent the filibuster for judicial and executive nominations on a simple-majority vote in the previous Congress; a move that had been widely criticized by Senate Republicans ever since. Doing so would make it less likely that Republican senators would vote to overturn the chair, regardless of how they felt about the substance of the underlying amendments.

Second, Alexander asserted that Cruz’s appeal would, if successful, “destroy a crucial part of what we call the rule of regular order in the U.S. Senate.” The consequence would be the creation of “a precedent that destroys the orderly consideration of amendments.” As such, he confidently predicted, “There will be unlimited amendments. There will be chaos.”

Notwithstanding Alexander’s reputation as an expert on the Senate’s rules, a closer examination of his two claims demonstrates that neither has much merit.

First, there are important distinctions between third-degree amendments and the nuclear option, even though both utilize the same mechanism (i.e., an appeal). Appealing the ruling of the chair that an amendment is not in order when the amendment tree has been filled is not synonymous with the nuclear option, because it does not violate the Standing Rules of the Senate. If successful, it would simply create a new precedent governing the amendment process. It would not violate any specific rule. The appeal would only be functionally equivalent with the nuclear option if the new precedent explicitly violated an existing provision of the Standing Rules. Otherwise, the creation of a new precedent on appeal is entirely consistent with Senate rules and past practices.

Second, a closer consideration of regular order in the context of the amendment process suggests that it would remain relatively unaffected by a successful appeal in this scenario. Alexander contends that the amendment trees make it possible for the Senate to function today. He predicts that floor debate on bills would be chaotic if the current amendment trees were altered by a successful appeal. The implication is that effectively removing the limits on the number of amendments that can be pending to legislation on the Senate floor would make it impossible to consider legislation in an orderly manner.

Yet the historical development of the Senate’s amendment process demonstrates that there is nothing inherently chaotic about expanding the number of amendments that can be pending simultaneously. The principles of precedence would still apply to any new branches created on the trees. As such, the framework for the orderly consideration of the pending amendments would be preserved.

Moreover, in the contemporary Senate the amendment trees are adhered to literally almost exclusively when the majority leader would like to block other senators from offering amendments. Instead of processing amendments by following the amendment trees, the practice most often followed is to process amendments by unanimous consent (e.g., “I ask unanimous consent to set aside the pending amendment and call up amendment No. 1234.”) Thus, limiting the majority leader’s ability to fill the amendment tree would simply force the Senate to return to the way in which it routinely processed amendments before the dramatic abuse of the amendment tree.

Indeed, the Senate has considered legislation for most of its history without utilizing the contemporary practice of routinely filling the amendment tree for the explicit purpose of blocking individual senators from offering their own amendments. While preventing the majority leader from being able to fill the tree routinely may make it more difficult for the Senate to block votes on amendments altogether, the Standing Rules and the institution’s precedents contain several tools that can be used to facilitate the orderly consideration of amendments on the Senate floor. These include (but are not limited to) the requirement that committee amendments to reported legislation be considered before the consideration of amendments from the floor, precedents prohibiting language previously amended from being amended again and the filing deadlines associated with Rule XXII.

The arguments advanced by proponents and opponents of using third-degree amendments to circumvent the majority leader’s ability to fill the amendment tree suggest two very different directions for the future course of the Senate’s development.

On one hand, equating precedents that fill in the gaps where the rules are silent with the Standing Rules would effectively bind the Senate to how it operated in the past, regardless of the development of new circumstances, the way the original precedent was established or the merits of the original precedent and whether it violated the Standing Rules in the first place. This would further increase the majority leader’s control over Senate decisionmaking by delegitimizing the efforts of individual members to adjudicate precedent or to protest what they perceived to be unfair or inaccurate rulings of the chair.

On the other hand, third-degree amendments could eventually undermine the majority leader’s ability to control the amendment process. Challenging the ability to fill the amendment tree with a third-degree amendment thus has the potential to impose significant costs on the majority leader directly. If used on a routine basis, this tactic could weaken, or even end, the majority’s ability to control outcomes in the Senate. As such, third-degree amendments could substantially alter the balance-of-power between the majority and minority parties in the institution, as well as between individual senators and the party leadership.


Image by Crush Rush

 

Senate finally poised to restore FTC to full strength


Earlier today, President Donald Trump formally announced the three candidates he’s nominating for the open seats at the Federal Trade Commission. Joseph Simons, Rohit Chopra, and Noah Phillips have diverse backgrounds and divergent political views, but they all have impeccable legal credentials and should be confirmed by the U.S. Senate without hesitation.

Not only will their confirmation put three more sets of steady hands at the wheel of the nation’s chief consumer protection and antitrust agency, but it also will finally restore the FTC to full strength, freeing it up to once again take on the kinds of hard cases that tend to split public opinion.

The FTC, which has jurisdiction over nearly every sector of the U.S. economy (with only a few limited exceptions), has had only two commissioners for most of 2017, ever since outgoing Chairwoman Edith Ramirez resigned in early February. To their credit, Acting Chairwoman Maureen Ohlhausen and Commissioner Terrell McSweeny have done an admirable job finding common ground and working together where possible, including by blocking an allegedly anticompetitive merger in daily fantasy sports, imposing structural-separation requirements on a key merger in the semiconductor industry, settling a privacy suit against a major ridesharing service and, most recently, launching an investigation into the Equifax breach.

However, with a partisan deadlock in place, the commission has only been able to act when it had unanimous consent. This has left it unable to tackle difficult questions that truly push the bounds of precedent and drive the evolution of legal doctrine forward. By all accounts, Simons, Chopra and Phillips are all FTC scholars who should be ready to hit the ground running on day one. Each of them also has relevant personal experience that should hold them in good stead at the commission.

Joseph Simons, long-rumored to be Trump’s pick for FTC chairman, comes most recently from the antitrust group at law firm Paul Weiss. He also spent time as director of the FTC’s Competition Bureau in the early 2000s, working deeply on both mergers and other enforcement actions. Given the uptick in merger activity this year, Simons’ experience in this area will surely come in handy at the FTC, which has a key role to play, along with the U.S. Justice Department, in reviewing proposed mergers and acquisitions to prevent potential harms to competition or consumers.

Rohit Chopra, the pick to fill the open Democratic slot, also has significant prior experience in the federal government. He served as assistant director of the Consumer Financial Protection Bureau and in 2011 was named by former Treasury Secretary Timothy Geithner to be the U.S. Treasury Department’s first student loan ombudsman. Chopra is considered a darling of key Democrats like Senate Minority Leader Chuck Schumer, D-N.Y., and Sen. Elizabeth Warren, D-Mass., for his efforts to combat student loan debt and other financial burdens affecting young people. While his stance on for-profit colleges may rankle some Senate Republicans, there is no reason to think he won’t be confirmed. After all, disagreements over policy aren’t a valid reason to deny confirmation of a qualified nominee (although members of both parties tend to forget that from time to time).

Finally, Noah Phillips was nominated to fill the final Republican vacancy at the FTC, and he also brings a decorated and interesting background to the table. Phillips previously spent time in civil litigation for both Steptoe & Johnson and Cravath, Swaine & Moore, but most recently has been serving as chief counsel for Senate Majority Whip John Cornyn, R-Texas, with the Senate Judiciary Committee. From his post on the Judiciary Committee, Phillips has oversight of the U.S. legal system as well as intellectual property, which should come in handy as the FTC continues to engage in more patent work, such as its review of patent assertion entities and its ongoing case alleging anticompetitive abuse of patents underlying equipment used in smartphones.

With a full complement of qualified commissioners, the FTC can once again function as an agency with the skills and capacity to tackle the key competition and consumer-protection issues of the day. The Senate shouldn’t delay in confirming all three nominees.


Image by Kevin Grant

 

Anxiety over NAFTA causing slide of the peso, and an increase in imports from Mexico

President Donald Trump and U.S. Trade Representative Robert Lighthizer have made reducing the trade deficit a central focus of the in-progress renegotiation of the North American Free Trade Agreement.

Last weekend’s round of negotiations in Washington, D.C., ended on a fairly sour note. As optimism about a reinvigorated NAFTA 2.0 fades, economic anxiety in Mexico is putting downward pressure on the peso, according to the Wall Street Journal.

As the peso declines versus the dollar, imports from Mexico become cheaper. As a result, our bilateral trade deficit with Mexico will expand! In other words, even if we withdrew from NAFTA and tariff rates spiked, the bilateral trade deficit with Mexico could still increase.
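To make the mechanism concrete, here is a toy calculation. The prices and exchange rates are invented for illustration, and the conclusion assumes U.S. demand for the good is at least somewhat price-elastic.

```python
# Toy illustration: a weaker peso lowers the dollar price of a Mexican good,
# so U.S. buyers purchase more of it. All numbers are invented.
price_in_pesos = 1_800          # sticker price of the good in Mexico

for pesos_per_dollar in (18.0, 20.0):   # peso weakens from 18 to 20 per dollar
    dollar_price = price_in_pesos / pesos_per_dollar
    print(f"At {pesos_per_dollar:.0f} pesos/dollar, the good costs ${dollar_price:.2f}")

# Output: $100.00 at 18 pesos/dollar versus $90.00 at 20 pesos/dollar.
# If U.S. buyers respond to the 10 percent price drop by purchasing more than
# roughly 11 percent additional units, total dollars spent on the import rise,
# and the measured bilateral deficit widens.
```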

As virtually any economist worth his or her salt will tell you, trade deficits are driven by larger macroeconomic factors beyond trade policy. It is unlikely that trade deficits matter at all, but it is certain that bilateral trade deficits do not matter. The United States, for instance, has a trade surplus with Australia, which has a surplus with China, which has a surplus with the United States.

The sooner the Trump administration and the leadership at USTR recognize that attempting to address bilateral trade deficits through trade policy is a futile exercise, the sooner real progress on negotiation will be made.

The Russia investigation: Why the overseers need oversight


What’s going on with the Russia investigation? For most of us, the answer likely is, “Beats me.”

It seems every week or two there’s a media report about Congress holding a hearing or some member of Team Trump or other person being called in to testify: James Comey, Paul Manafort, Donald Trump Jr. The facts come out in drips: one reads of meetings with Russians during this past year’s presidential election; Facebook turning over information about shady campaign ads; Michael Flynn and possibly his son being subpoenaed to produce documents.

There are five congressional committees involved, to say nothing of special counsel Robert Mueller. Who is doing what, when and why is anything but obvious.

Especially concerning is that Congress’ inquiries are increasingly viewed through partisan lenses. CNN reports:

In the House and Senate, several Republicans who sit on key committees are starting to grumble that the investigations have spanned the better part of the past nine months, contending that the Democratic push to extend the investigation well into next year could amount to a fishing expedition. The concerns are in line with ones raised by President Donald Trump, who has publicly and privately insisted he’s the subject of a ‘witch hunt’ on Capitol Hill and by special counsel Robert Mueller. Democrats, meanwhile, are raising their own concerns that the congressional Russia probes are rushing witnesses – including the testimony of President Donald Trump’s son-in-law Jared Kushner – as well as stalling appearances of other key Trump associates.

President Trump often has denounced the Russia issue as a hoax, and some of his supporters view it as a Democratic-media-deep-state “witch hunt” and fishing expedition. On the left, one still hears griping that Russian hackers helped Trump to steal the election and that Republican congressional majorities will hide any revelations of serious wrongdoing by the president or his campaign.

Desperately needed is something to bolster faith in the process. If the Russia investigation turns out to be a big nothingburger, then the country benefits if that conclusion is broadly accepted. And if there really is a there there, then it could lead to impeachment or other severe consequences, which, again, will require collective faith that the process is fair.

To raise credibility, Congress should adopt the benchmarks advocated by a right-left coalition of former government officials and policy wonks. In short, each of the committees (Senate Select Committee on Intelligence; Senate Judiciary Committee; House Permanent Select Committee on Intelligence; House Committee on Oversight and Government Reform; and House Judiciary Committee) should commit to carry out their work in ways that demonstrate bipartisanship and the desire to keep the public informed.

So, speaking to the former matter, committee chairmen and ranking members should jointly hold press conferences, and issue public communications under both their names. When calling witnesses or demanding documents, both the majority and minority should consent.

To increase the public’s understanding, the committees should report publicly and regularly on basic aspects of the investigation: What’s the scope of the investigation? How many witnesses have been interviewed? How many hearings (open or closed door) have been held? How much has been spent?

That is not much to ask of Congress, but the benefits could prove immense. A big part of the glue that holds us together as a nation is acceptance of the legitimacy of government. With the presidency itself at the center of the investigation, the stakes are very high.


Image by Lightspring

 

How senators can offer amendments without the majority leader’s permission


The demise of regular order in the Senate makes it harder for its members to participate in the legislative process. And the result of their efforts to do so gives rise to a destructive cycle that perpetuates dysfunction and gridlock.

While regular order is not easily defined, it is generally associated with an orderly process in which senators are able to participate at predictable points. Conversely, its absence is typically associated with a secretive process in which members are barred from offering amendments to legislation pending on the Senate floor. When confronted with legislation in such a process, senators are left with no choice but to “blow up” the bill to force the majority to allow them to offer amendments. This all-or-nothing approach breeds frustration among members and their constituents, thereby making it even harder to negotiate after the majority’s original plan has been thwarted.

Given this dynamic, irregular order is hardly the most productive way to make decisions. Instead of helping senators communicate across their differences, it encourages the kind of extreme position-taking and inflexibility that complicates a more deliberative process.

It should thus be no surprise that the Senate at present has difficulty passing legislation of any consequence and that its amendment process is in shambles. This is because the majority leader routinely blocks amendments and files cloture on important bills as soon as they are placed on the Senate floor. The only leverage senators have in such a scenario is their ability to block cloture on the underlying legislation.

Fortunately, there is another way for senators to amend bills on the floor without the majority leader’s permission to offer amendments. They can offer third degree amendments even when the tree has been filled and then appeal the subsequent ruling of the Senate’s presiding officer (i.e., chair) that the amendment is not in order. Doing so can force a recorded vote in relation to the amendment. The majority can prevent a vote on the appeal by filibustering it. Yet the majority’s filibuster would also prevent its bill from passing.

Offering a third-degree amendment in this scenario is consistent with the Senate’s rules and precedents as reflected in the historical development of its amendment process. It also reinforces a common minority critique of how the majority party runs the Senate. Most importantly, the tactic makes it easier for senators to participate in the legislative process, thereby avoiding the destructive cycle created by forcing them to block cloture on a bill just to get the opportunity to offer an amendment to it.

The Senate’s Standing Rules do not regulate the number of amendments that members are currently allowed to offer to legislation at the same time. Instead, that is governed by the four amendment trees followed in the Senate today. Those trees were created by precedent and evolved over time, only recently reaching their current shape.

Yet their evolution was not haphazard. The precedents that created the modern trees are based on general parliamentary law and serve to facilitate the orderly consideration of amendments on the Senate floor. For example, one precedent precludes so-called third-degree amendments. Specifically, the early Senate prohibited vertical third degree amendments (i.e., an amendment to an amendment to an amendment to the underlying legislation) and horizontal third degree amendments (i.e., a competing first- or second-degree amendment to the underlying legislation) because their use would make the floor debate on a bill too confusing.

In other words, the original prohibition on third degree amendments was not intended to block senators from offering amendments altogether. Rather, the expectation was that while a third-degree amendment would be out of order, an identical first- or second-degree amendment would be allowed once that branch on the tree opened.

Even so, senators soon realized that the amendment process was too cumbersome when the prohibition was applied strictly. As a result, the Senate facilitated more member participation and deliberation by expanding the amendment trees over time to permit vertical and horizontal third degree amendments where they had been previously prohibited. The primary motivation behind each expansion was the desire to make the amendment process more responsive to the needs of individual senators.

While the majority leader uses the same amendment trees today to block all amendments, senators retain the option to expand them again to make it easier to participate in the process and to increase deliberation. That is, they can offer their amendments even though the amendment tree has been filled.

The Senate’s precedents stipulate that “Any senator recognized is entitled to offer an amendment when such amendment is otherwise in order, but he cannot offer an amendment unless he has been recognized or has the floor.” The process of filling the tree follows precedent to block members from offering their own amendments. However, a senator may attempt to offer an amendment even though the tree has been filled. In such a situation, the chair would rule that the amendment is not in order pursuant to the Senate’s precedents. At that point, the member could appeal the ruling of the chair and request a recorded vote. The appeal represents an adjudication of the qualifying phrase in the precedent quoted above (“when such amendment is otherwise in order”); namely, whether an amendment is in order even though the amendment tree has been filled.

Offering amendments despite the filled tree and appealing the ruling of the chair that they are not in order forces the majority to cast votes on procedural questions directly related to the amendment being offered. Procedural votes have been viewed as substantive votes when the question is directly related to the underlying policy and the tactic is utilized on a regular basis. For example, the perception of cloture has evolved from being simply a procedural vote to the point that it is viewed by many as a substantive vote today. Votes on third degree amendments could thus be characterized as substantive votes.

As such, the threat to offer a third-degree amendment may encourage the majority to return to regular order. This is because the tactic gives the minority more leverage with which to gain the right to offer amendments without having to block cloture.


Image by mark reinstein

Steven Greenhut on American Family Radio

American Family Radio host Chris Woodward interviews R Street Western Region Director Steven Greenhut on the latest goings-on in California’s state Capitol. Woodward and Greenhut discuss the possible impact of the Trump administration’s tax plan, which would remove a key deduction that benefits Californians. The plan puts California Republicans in a tight spot: they want to support the president, but the change would mean a tax hike for many California taxpayers. Woodward also asks Greenhut about a proposal to ban the sale of internal-combustion-engine vehicles by 2040 — something Greenhut explains is more about posturing than anything else, given the rapid technological advancements in the auto industry.

Blocking amendments is a perversion of Senate rules and practices


The Senate today is an institution in decline. It is paralyzed – unable to legislate, much less deliberate.

The Senate’s plight is reflected in the near-total deterioration of its amendment process.

For example, senators offered a paltry 147 floor amendments between January and September of this year. Compare that to the 568 amendments they offered during the same period in 2015 and the 668 in 2009. At the present rate, Senate amendment activity could increase by as much as 250 percent over the next 15 months and still fall short of the level observed in the first nine months of 2015 alone.

This is the culmination of a broader trend going back three decades. During that time, Senate majorities have increasingly empowered the institution’s majority leader to prevent senators from offering amendments to achieve their legislative priorities.

The majority leader blocks senators from offering alternative proposals by filling the amendment tree, i.e., offering the maximum allowable number of amendments to legislation before other senators have had a chance to debate the measure and offer their own amendments.

Once used sparingly in extraordinary circumstances, the tactic is now routine and well-documented. But less appreciated is the extent to which its normalization in recent years represents a radical break from the Senate’s past practice. Also, less understood is how precisely the tactic empowers the majority to pass its agenda, given that the minority can still filibuster the underlying legislation.

Recent research suggests that the amendment process gradually evolved to facilitate the orderly consideration of the Senate’s business. The direction in which it evolved was informed by the Senate’s effort to balance the need for order in its work with the imperative of legislative deliberation.

While the Senate’s first amendment trees only permitted two amendments to be pending at the same time, they were expanded in response to member demands by adding new branches. The result was to increase the number of amendments that could be pending before the Senate simultaneously.

Notwithstanding this increase, members maintained order by adhering to the principles of precedence first compiled for the Senate in Thomas Jefferson’s A Manual of Parliamentary Practice for the Use of the Senate and still followed today. In general, those principles held that senators should have an opportunity to amend legislative text proposed to be stricken and/or inserted before the actual vote to strike and/or insert said text.

Analyzing how the Senate’s current amendment trees came to be underscores the extent to which using them to block amendments is a perversion of the Senate’s rules and practices. That is, the precedents underpinning the trees are now being used for a purpose fundamentally at odds with the one for which they were first created. Instead of facilitating the orderly consideration of amendments on the Senate floor, they are now being used to block the consideration of amendments altogether.

This suggests that the act of offering amendments no longer serves as a way in which the Senate can arrive at a greater understanding of what its members think about a given issue. Instead, the amendment process is commonly viewed as the last hurdle needed to be surmounted before a preferred bill can be sent to the House or to the president’s desk to be signed into law. To the extent that controversial amendments are permitted on legislation, frequently their consideration is structured in such a way as to guarantee their defeat. This requires channeling all decisions regarding which amendments can be offered to legislation through a single veto point (i.e., the party leaders or bill managers). Once established, such a veto point enables the leadership and/or bill managers to exercise disproportionate control over which amendments will be made pending to legislation on the Senate floor and to set the terms according to which those amendments will be disposed of.

Establishing a veto point is accomplished by putting the Senate in a parliamentary situation in which unanimous consent is needed to get an amendment pending under one of the four amendment trees. The primary tool utilized by the majority leader to accomplish this is the tactic of filling the amendment tree (or offering a blocker amendment in one of the available slots such that further amendments are precluded by the principles of precedence if that blocker amendment is pending). No amendments are in order once all the extant branches on the tree are occupied. At that point, the majority leader and/or bill manager is free to focus on negotiations with interested rank-and-file colleagues to reach a unanimous consent agreement that provides for several amendments and a vote on final passage without having to worry about a senator jeopardizing the legislation’s prospects by offering a controversial or otherwise unwanted amendment without permission.

[Chart: Hypothetical Senate amendment tree, with an amendment in the nature of a substitute (ANS) and branches C, D, E and F]

As noted, the majority leader (or bill manager) may also offer a “blocker” amendment to establish the veto point. For example, an amendment offered to branch C in the chart above would serve as a blocker amendment if offered first and in the form of a motion to insert (or strike and insert). Once pending, any other amendment offered directly to the amendment in the nature of a substitute (ANS) would require consent to get pending (which would presumably be denied if the majority leader/bill manager wanted to block the amendment).

This tactic is less aggressive than completely filling the amendment tree, in that it typically leaves a few branches open for possible amendment. However, these branches are rarely connected to the ANS directly. For example, in the hypothetical example, the blocker amendment leaves branches E and F (on the left side of the amendment tree) open. Branch D (second degree to C on the right side) is also left open. These branches do not present the same challenges to proponents of the bill because their impact would be minimal if the amendments pending there prevailed. The majority leader could move to table C to prevent a vote on D on the right side of the tree. Additionally, adoption of E and F on the left side of the tree would be negated once the Senate adopts the ANS.

Once the Senate is in a parliamentary situation in which unanimous consent is needed to get an amendment pending to legislation on the floor, the majority leader can use his increased leverage to secure a higher vote threshold for adoption of an amendment. The majority’s desire to limit the minority’s ability to attach what it considers poison-pill amendments to legislation it supports is thus reflected in the dramatic increase in the use of unanimous consent agreements to set 60-vote thresholds for adopting amendments.  The majority leader uses the threat of not allowing amendments to get pending to compel individual senators to agree to the higher vote threshold on their amendment, even though doing so means that the amendment will most likely be rejected.

The routine practice of filling the amendment tree in the Senate today, coupled with the cloture process to end debate, effectively prevents members from being able to perfect legislation before it receives an up-or-down vote on final passage. Instead of a deliberative process designed to discern the true sense of the institution’s membership on an issue, senators are confronted with a fait accompli. This practice is inconsistent with the longstanding rules and practices on which the amendment process is based.

Structural imbalances in the Senate’s amendment process


The Senate is a pale imitation of what it once was.

A major reason for its current predicament is that senators are no longer freely able to amend the bills they consider. This is because the majority leader routinely blocks members from offering their own ideas on the Senate floor by filling the amendment tree.

While the tactic effectively precludes votes on unwanted amendments, the minority may still filibuster the underlying legislation in protest. This gives Senate minorities leverage to negotiate with the majority over what amendments will be permitted during a bill’s consideration, so long as 41 of its members are committed to blocking cloture until their demands are met.

But remaining united in opposition to cloture is not always easy, because the minority comprises individual senators who hold an array of policy views. Given this, the majority leader will negotiate directly with those members whose policy views are closest to his own when trying to secure the votes needed to invoke cloture.

The majority leader also can structure an amendment’s consideration in a way that makes its success less likely. This is done by setting a higher threshold for the amendment’s adoption in the unanimous consent agreement that typically schedules the vote on it. The utility of this approach to Senate majorities is reflected in the dramatic increase in its use in recent years to set 60-vote thresholds for passing amendments.

[Chart: amendments subject to 60-vote thresholds set by unanimous consent, by Congress]

The earliest documented use of such a consent agreement occurred in the 102nd Congress. But such agreements remained a rare procedural tool until the 109th and 110th Congresses, when Majority Leaders Bill Frist, R-Tenn., and Harry Reid, D-Nev., respectively, began utilizing them on an increasing scale. In the 109th Congress, consent agreements were used in this manner in six instances. However, in the 110th Congress, their use increased significantly, totaling 37 instances. The use of the tactic remained relatively level in the 111th Congress at 38. In the 112th Congress, 60-vote thresholds were set on amendments on a staggering 111 occasions.

The tactic was utilized 35 times in the 113th Congress. The decline in amendments subject to a 60-vote threshold from the 112th to the 113th Congress is not as abrupt when viewed as a percentage of all amendments offered. This is because only 542 amendments were offered to legislation on the Senate floor during the 113th Congress (compared to 974 in the 112th).
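For readers who want to check that claim, here is a minimal sketch (in Python) that recomputes the shares from the counts cited above; the tallies are the figures reported in this section, and the script is purely illustrative.

# Recompute the share of floor amendments subject to 60-vote thresholds,
# using the counts cited above for the 112th and 113th Congresses.
counts = {
    "112th Congress": {"sixty_vote": 111, "all_amendments": 974},
    "113th Congress": {"sixty_vote": 35, "all_amendments": 542},
}

for congress, c in counts.items():
    share = c["sixty_vote"] / c["all_amendments"]
    print(f"{congress}: {c['sixty_vote']} of {c['all_amendments']} "
          f"amendments ({share:.1%}) were subject to a 60-vote threshold")

# Raw counts fall by roughly two-thirds between the two Congresses, while
# the share of all amendments falls less steeply (about 11.4% to 6.5%).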

Moreover, the share of roll call votes (RCVs) on amendments set at 60 by consent has increased since the 109th Congress. The routine utilization of the 60-vote threshold is particularly striking when RCVs on amendments to the budget resolution and reconciliation bills are omitted. Excluding budget and reconciliation amendments from the count yields a more accurate portrayal of the tactic’s centrality to decisionmaking in the Senate at present, because a member cannot be blocked, in theory, from offering amendments during the budget process’s vote-a-rama.

[Chart: amendments subject to 60-vote thresholds as a percentage of amendment roll call votes]

Pursuant to these unanimous consent agreements, the amendment is withdrawn if it does not get the requisite number of votes. The practice thus allows an amendment’s supporters to demonstrate support for cloture without going through the time-consuming process of invoking it.

Amendments offered pursuant to such agreements, however, are seldom successful. In the 109th and 110th Congresses, amendments considered in this manner failed 100 percent and 78 percent of the time, respectively. In the 111th and 112th Congresses, the percentages of amendments considered in this manner that failed were 61 percent and 87 percent, respectively. Most recently, 77 percent of the amendments considered pursuant to this tactic failed in the 113th Congress.

The use of unanimous consent agreements to set 60-vote thresholds on amendments can thus be interpreted as a way for the majority to facilitate the passage of legislation while still allowing the minority to offer amendments, without risking the adoption of a poison pill. This process does not present a problem for members of the majority party because they typically oppose the amendment in question, and a 60-vote threshold means that it is unlikely to pass. In addition, members of the majority are more likely to have their priorities included in the underlying bill before it reaches the Senate floor for consideration.

Minority party members, as well as those in the majority party who are out of step with their colleagues on the policy question at hand, often support this process begrudgingly because it provides an opportunity to offer an amendment and get a vote on it, all without having to expend the necessary resources to filibuster the underlying legislation. If they reject the 60-vote threshold, they may not get the opportunity to offer the amendment at all.

Setting 60-vote thresholds for amendments via unanimous consent agreements is central to the majority’s ability to control the agenda in the Senate today. Yet the tactic’s increased use in recent years is at odds with calls to reform, or eliminate, the legislative filibuster. This suggests that there is a growing constituency inside the Senate to increase the majority’s ability to control the legislative process while reducing the minority’s ability to leverage the filibuster to secure majority concessions. If this trend persists, the Senate risks becoming more majoritarian, and thus more dysfunctional, moving forward.

Ranking Member Cummings cites Lehrer on census

House Oversight Committee Ranking Member Elijah Cummings, D-Md., cited a recent op-ed by R Street President Eli Lehrer in his opening remarks at the panel’s Oct. 12 hearing on the 2020 U.S. Census.

EPA ends ‘sue and settle’ era


A new directive handed down Oct. 16 by Environmental Protection Agency Administrator Scott Pruitt pledges to put an end to the controversial practice of settling lawsuits with special interest groups behind closed doors, often while paying their attorneys’ fees.

These so-called “sue and settle” practices long have been criticized by businesses and conservative groups as a way to circumvent the normal regulatory process. Over its eight years, the Obama administration’s EPA chose not to defend itself in more than 100 lawsuits brought by special interest advocacy groups and paid out $13 million in attorneys’ fees in such cases.

Pruitt has had the tactic in his sights since his days as Oklahoma’s attorney general, when he sued the EPA in federal court more than a dozen times. In a letter this week to EPA managers, he said the practice “risks bypassing the transparency and due process safeguards enshrined in the Administrative Procedure Act and other statutes.” He also called it “regulation through litigation” and an “abusive” policy, in part because it excludes state involvement in any settlement between the EPA and private litigants.

The practice has not been confined just to the Obama administration, as the Bush EPA settled 64 cases over its two terms in office. But during the Obama years, “sue and settle” became one of the primary avenues to formalize major regulations, including the Clean Power Plan’s proposed constraints on carbon emissions as well as recent mercury and air toxics standards.

Pruitt’s directive calls for improved transparency around litigation, with all potential settlement agreements open to a 30-day public comment period. The directive also calls for publishing attorneys’ fees, a break from the Obama administration practice of agreeing to fees “informally.” Pruitt also has instructed the EPA to reach out directly to states and regulated entities that would be affected by any given consent decree.

Given the litigiousness of environmental policy, it’s easy to see how the “sue and settle” process could be attractive for the agency. But as Pruitt rightly suggests, the process had become a way to circumvent the full regulatory process, which can take years, and essentially gives the executive branch control to shape legal settlements in complaints that are never even heard by the courts.

Given the Obama administration’s clear tendency to replace legislative compromise with “pen and phone” executive action, there is little doubt the “sue and settle” tactic was being abused in ways that had not been foreseen when the practice began. Good riddance.


Image by petrmalinak

 

Local e-cigarette crackdowns are misguided and counter-productive


In an unfortunate trend across the country, cities and towns have raced to institute new regulations and update existing laws that deal with e-cigarettes and vapor products, often with little consideration of the potential these products have to improve public health.

In Massachusetts, recent actions by local boards of health to label e-cigarettes as “tobacco products” are misleading, at best, and at worst, a move that limits access to far less-harmful alternatives to cigarettes. Many local policies aimed at protecting teens from smoking myopically disregard the effects on the adult smoking population.

Tobacco harm reduction is an approach to public health that seeks to reduce the incidence of cigarette use and smoking-related diseases by encouraging smokers to switch to less-harmful alternatives. These include e-cigarettes, vapor products and certain smokeless tobacco products that, while not completely without risk, are orders of magnitude less harmful to a person’s health than their combustible cousins.

Historically, American tobacco control policy has been based on the premise that all tobacco products are hazardous and that none can offer personal or public health benefits. However, peer-reviewed research by the United Kingdom’s Royal College of Physicians has demonstrated that e-cigarettes are significantly safer than cigarettes, which continue to be both the most widely used and the most harmful tobacco products on the market.

That work by the Royal College of Physicians is particularly notable in light of the fact that it was they, decades ago, who presented the first comprehensive study on the negative health impact of cigarette use.

More recently, in the United States, Food and Drug Administration Commissioner Scott Gottlieb echoed similar sentiments in a Washington Post interview. Gottlieb noted that most e-cigarettes contain nicotine, a known addictive substance, but that the real threat to human health comes from the carcinogens produced when tobacco is combusted. Electronic nicotine delivery systems, or “ENDS,” provide a safer alternative for adults who still want access to nicotine but want to avoid that mass of carcinogens.

While the relative safety of noncombustible products is not in doubt, many local boards of health continue to resist their use out of fear that they may lead to heightened incidence of tobacco use among teens. In particular, there are concerns that “flavored products” attract teens to smoking. In response, localities have issued broad prohibitions on the sale of such products, without differentiating between cigarettes and less-harmful alternatives. Recently in Massachusetts, the towns of Canton and Marion and the city of Gloucester all have considered regulations that, if approved, would greatly reduce access to a host of less-harmful, non-combustible alternatives.

The unintended consequence of such rules is that they could make those who already smoke less likely to transition away from cigarettes. Furthermore, a recent study by Saul Shiffman and colleagues that examined flavor preferences among adolescent nonsmokers found they had less interest in supposedly youth-targeted e-cigarette flavors than adult smokers did. In fact, the study concluded that teens preferred flavors that seemed more “adult-like.” Thus, not only do flavor bans fail to have their desired effect of preventing teens from smoking, they actually make it more difficult for adult smokers to improve their health. That’s bad policy.

A holistic approach to harm reduction demands that, in addition to discouraging adolescents from nicotine and cigarette use, a significant goal of any tobacco regulation should be to encourage adult smokers to switch to safer alternatives. Greater flavor options provide smokers with more paths away from the most harmful and widely used tobacco products – cigarettes. Taking steps to make e-cigarettes less accessible to current and future smokers means failing to make progress on reducing future rates of smoking-related diseases, which collectively kill 480,000 people in the United States each year.

By focusing solely on minors, many of these local regulations disregard and discount cigarette use among adults. The measure of a successful public health policy should be the impact it has on the whole population, not just certain segments. Cigarette use in the United States is at an all-time low, and the significant drop-off in smoking rates is due, at least in part, to the development of attractive (and much safer) alternatives.

Harris deserves praise for seeking middle ground on sex-trafficking bill


The Sacramento Bee implied in a recent article that Sen. Kamala Harris, D-Calif., was being inconsistent or unduly influenced by Silicon Valley campaign supporters because of her reluctance to back a far-reaching sex-trafficking bill. But Harris’ approach of finding a middle ground is the only sensible course, especially given the potential harm to internet speech that could result from a hastily drafted law.

It’s really tough to stand up to “mom and apple pie” legislation such as this bill. Indeed, that’s why the “Stop Enabling Sex Traffickers Act” is the most dangerous sort of legislation, in that it uses legitimate fears of the scourge of sex trafficking to grant the government newfound powers to shut down online speech.

It also grants attorneys the ability to sue website operators, search engines, email providers and other online players into oblivion. Is it any wonder the president of a trial-lawyer-backed “consumer advocacy group,” Consumer Watchdog, was quoted by the Bee favoring the bill? The act would certainly be good for the trial bar given that it would obliterate longstanding federal protections for those web-based “intermediaries” that host third-party online speech.

Thanks to Section 230 of the federal Communications Decency Act of 1996, Facebook, Google and even the Bee itself are limited in their liability for the posts, images and comments made on their sites. In the name of combating sex trafficking, this bill would eviscerate those protections by opening up intermediaries to federal criminal prosecution and civil liability.

“Without this protection, intermediaries would face a potential lawsuit in each one of the thousands, millions or even billions of posts, images and video uploaded to their services every day,” according to a letter that privacy groups, including the American Civil Liberties Union, sent to the U.S. Senate leadership in August. Intermediaries would “err on the side of caution” and face an unending sea of litigation – something that will dangerously constrict speech on the internet.

It’s unclear what exact middle ground Harris is seeking, but there’s certainly nothing wrong with her listening to Bay Area tech firms on an issue that intimately involves them – and us. Sure, Harris seems to have changed her position from her days as attorney general, when she filed pimping charges against a website’s operator. A judge later tossed those charges for many of the same reasons free-speech advocates oppose this bill.

We should all be happy that Sen. Harris is growing in office. By all means, let’s clamp down on the human filth who operate as sex traffickers – but without threatening the kind of online free speech we’ve all come to expect on the internet.


Image by Vince360

 

Why do liquor rules vary drastically from state to state?

The R Street Institute’s Jarrett Dieterle appeared on Fox 5 DC’s “The Final Five” with Jim Lokay to discuss booze policy in America. They discussed the difficulty in reforming onerous state alcohol laws and how R Street’s DrinksReform.org website is helping to track reform efforts across all 50 states.

Perry questions value of ‘free market’ in energy

Also appeared in: Red, Green and Blue


Shakespeare’s adage about those who “doth protest too much” seems an appropriate response to Energy Secretary Rick Perry’s recent testimony on an administration proposal to change the way coal and nuclear power plants are compensated for sending electricity to the U.S. grid.

Perry’s cryptic and somewhat baffling rhetoric Thursday in front of the House Energy and Commerce Committee’s Energy Subcommittee came during tough questioning by members worried the proposal, if accepted by federal regulators, would undermine electricity markets throughout the country. In particular, the proposed rule by the U.S. Energy Department calls for subsidies for power plants that keep at least 90 days’ worth of fuel stored on site. Such a rule would act as a subsidy for coal and nuclear interests over natural gas, solar, wind and other renewable energy providers, and could cost consumers up to $4 billion a year, according to analysts.

“I think you take costs in to account, but what’s the cost of freedom? What does it cost to build a system to keep America free? I’m not sure I want to put that straight out on the free market and build the cheapest delivery system here,” Perry retorted in response to a question from Rep. Paul Tonko, D-N.Y., about the potential for higher energy prices for consumers. “I think the cost-effective argument on this is secondary to whether or not the lights are going to come on.”

The DOE on Sept. 28 asked the Federal Energy Regulatory Commission (FERC) to consider new rules ensuring nuclear and coal-fired power plants are paid not just for the electricity they provide consumers, but the reliability they may provide to the electric grid. Former FERC commissioners have said such a rule could “blow up” wholesale electricity markets that have taken decades to design. Both coal and nuclear plant operators, meanwhile, have been shuttering inefficient plants over the past several years due to inexpensive natural gas-fired generation and government support for renewable generation.

It is true that fuel security is an important issue to evaluate, as long as it is evaluated objectively. Perry’s “Braveheart” moment regarding energy security suggests a certain irrationality that can only hurt electricity market operations and which, over time, would undermine fuel security as poor economic incentives become institutionalized.

The truth is that free and unfettered price discovery in electricity markets is the most important element in grid resiliency. Perry is involved in a subterfuge, a deception that even someone of his legitimate political skills has trouble pulling off. The administration is in the position of being forced to come up with creative ways to fulfill promises made directly by President Donald Trump to coal mine owners during the election campaign, even at the cost of free markets – a supposed core belief among Republicans and conservatives of all stripes.

This intellectual inconstancy is even more acute when one considers that Perry spent much of his 14 years as Texas governor praising and promoting the virtues of freer energy markets in the Lone Star State. Texas has the freest electricity marketplace in the country and hasn’t faced any major reliability problems, even in the aftermath of major flooding by Hurricane Harvey in late August. (Of course, it should be noted that most of Texas would be exempt from the DOE’s proposed rule because it maintains its own intrastate grid.)

Fortunately, efforts like this often come up against checks and balances that keep poor policies from being enacted. In response to the DOE proposal, a hitherto unprecedented coalition of 11 energy lobbying groups is asking FERC for a delay in processing the new rule so they can prepare arguments against it. The coalition includes major oil and gas associations, such as the American Petroleum Institute, alongside the most important renewable-energy lobbyists, such as the American Wind Energy Association.

Because FERC is an independent regulator, the administration can’t force the policy through by fiat. Final rules must be passed by a majority of FERC commissioners, and the commission only recently regained a quorum after spending more than six months inactive. The likely postponement of quick action on the DOE proposal will allow the five FERC commissioners (two of whom still await confirmation) time to consider the full ramifications of such a rule. If the $4 billion annual cost estimate is even close to accurate, the commission’s definition of what counts as “free” may be very different from Perry’s.


Image by Andrew Cline

Remediation won’t cut it – we need cyber resilience


Since its cybersecurity kerfuffle in June, Equifax has become a four-letter word. And that word is “hack.”

CEO Richard Smith went to Washington this past week to testify in front of four different congressional committees about the perilous pairing of human and technological error that led to 2017’s largest data breach. Unrelenting members of Congress demanded regulation and remediation for consumers.

The hearing by the House Energy and Commerce Committee’s Digital Commerce and Consumer Protection Subcommittee focused attention on Equifax’s plan to remedy consumer confusion. The fact that Equifax is both a broker of identity information and a company that sells services to protect that information makes the aftermath of the hack particularly tricky to navigate.

More than 44 percent of Americans had a treasure trove of personal information stolen in the hack by criminal actors yet to be identified. The data include names, birthdates, Social Security numbers, addresses, driver’s license information and credit information. Equifax added 2.5 million consumers to its earlier estimate after cybersecurity firm Mandiant concluded its forensic investigation this week, bringing the total number of people affected by the data breach to 145.5 million.

The news has prompted members of Congress to renew calls for legislation requiring companies to do more about cybersecurity. However, such approaches target the symptoms rather than the disease.

Rep. Jan Schakowsky, D-Ill., is sponsor of the recently reintroduced Secure and Protect Americans’ Data Act, which would require any organization or company that holds personal information to develop a written security policy, implement extensive security procedures and assess their security program annually. In the event of a data breach, organizations would be compelled to notify consumers. The requirements set out in the Schakowsky legislation for “information brokers” are even more burdensome. The bill gives the Federal Trade Commission the power to enforce these rules by treating noncompliance as an “unfair and deceptive act.”

While the bill is well-meaning, in practice, this regulation likely would result in more work, rather than more security, as organizations redirect resources to compliance.

Meanwhile, Rep. Ben Ray Luján, D-N.M., has proposed the Free Credit Freeze Act, which would require consumer reporting agencies to provide credit-freezing services free of charge in perpetuity. Equifax already has announced that it will be providing such a service, known as TrustedID Premier.

Both the Schakowsky and Luján bills are emblematic of a shortsighted approach that overemphasizes response, remediation and resistance over long-term, resilience-based cybersecurity. Breach notification, security policies and credit-monitoring services may cure the headache, but they will fall short of preventing the next big hack. In contrast, pursuing resilience means that the cybersecurity ecosystem can withstand stressors, adjust to adverse events and bounce back quickly. Government should focus on fostering a policy environment in which these capabilities are strengthened.

Building immunity from the bottom up requires a layered approach that focuses on the incentives that face both the attacker and the defender, much like the layers of defense in a secure internet-enabled system. Overlapping efforts from a variety of actors—who must include industry, individuals, third parties and government—are the only way to provide a systemwide solution to what is a systemic problem.

Consumer awareness is one way to effect change in the cybersecurity ecosystem. The Promoting Good Cyber Hygiene Act—sponsored by Rep. Anna Eshoo, D-Calif.—identifies one area where government can play a positive role. It suggests the National Institute of Standards and Technology produce an accessible list of best practices, based on NIST’s cybersecurity framework, which currently is in use by both companies and the government.

Creating guidelines for individuals takes this framework one step further and empowers consumers to improve their resilience to cyberattacks. Such guidelines would include information about what to do in the event of a data breach. They would allow consumers to better navigate Equifax’s bungled consumer-notification process and misleading landing page. Industry leaders such as Google, Facebook or Apple as well as third-party organizations like the Electronic Frontier Foundation or the Internet Society can also work to fill this information gap for consumers.

In a world in which a majority of Americans have personally been the victim of a major data breach, an approach that focuses on resilience can do more than merely treat the symptoms.


Image by Shawn Hill

 

If the rules are right, digital microlending could play role in subprime market


Well-functioning credit markets are essential tools for many people in times of personal economic instability or emergency. Unfortunately, some prospective borrowers with subpar credit ratings and credit histories do not qualify for the standard options of credit cards, secured loans or personal loans.

Credit unions frequently are the best available choice for those who have difficulty obtaining credit through traditional banks. But for some, digitally coordinated peer-to-peer lending agreements—inspired by microfinance arrangements for economically fragile communities internationally—also are proving to be an emerging option.

However, before these kinds of lending arrangements can be expected to expand domestically, digital rules will need to be established to give certainty to lenders and borrowers alike.

Subprime borrowers may have practiced poor financial habits or failed to meet their obligations, but this does not change their need for emergency credit when things get tight. Locked out of the prime credit market, these borrowers resort to payday loans, title loans and other products that come with very high interest rates and dubious collection methods. If they default on these loans, the interest and fees skyrocket, leaving them even worse off than before they took the loan. Most lenders must charge these high rates to compensate for the enormous risk they have undertaken to underwrite the loans.

Peer-to-peer digital microlending has the potential to fill a portion of the gap by providing this cohort with small, short-term loans that typically range from $100 to $500. While traditional peer-to-peer lending sites such as Lending Club target prime borrowers, other platforms are helping subprime borrowers.

One of the largest such peer-to-peer digital microlending platforms is the r/borrow section of reddit.com. This subreddit uses the reputational ecosystem within reddit to identify worthy borrowers, banning users who default or violate the terms of use. The subreddit facilitates the microloans and acts as a central database of transactions, coordinating more than $780,000 in loans in 2015.

If it can be properly scaled, peer-to-peer digital microlending could be a worthy option over payday loans for subprime borrowers. Unlike payday borrowers, digital borrowers are not necessarily assessed hefty fines or fees for late payments. Instead, they negotiate directly with lenders to find an amicable solution. True enough, some borrowers will default on their commitments and walk away without harm to their credit scores. To compensate, most lenders on microlending platforms (including the r/borrow subreddit) charge high interest rates, ranging from 10 to 25 percent over several weeks or months. This isn’t a problem for most borrowers, as most of their needs are for short-term, small amounts to get them through until their next source of income.
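As a purely hypothetical illustration of what such terms mean in dollars (the loan amount and rate below are invented, though they fall within the $100-to-$500 and 10-to-25-percent ranges described above), the arithmetic works out as follows:

# Hypothetical example only: a small peer-to-peer microloan priced within
# the ranges described above (not data from any actual platform).
principal = 300.00   # assumed loan amount, within the typical $100-$500 range
rate = 0.15          # assumed 15% charge over the loan's short term

fee = principal * rate
total_repaid = principal + fee
print(f"Borrow ${principal:.2f}, repay ${total_repaid:.2f} "
      f"(${fee:.2f} in interest over a few weeks or months)")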

Barriers to the expansion of these platforms come in the form of the myriad usury laws on the books in most states. While banks and other financial institutions are exempt from such laws, individual lenders are not. Digital microlending transactions often happen over state lines, making it very difficult for lenders and potential borrowers to determine their proper jurisdiction and the interest rate restrictions that apply to them. This may be an opportunity for Congress to pre-empt such laws as a matter of interstate commerce. Legislation could provide a consistent standard for digital microlenders to follow, such as through the proposed Uniform Electronic Transactions Act (UETA).

While admittedly there are other challenges to overcome, such as developing a scalable peer-to-peer enforcement mechanism, additional legal certainty would help expand this credit option for borrowers who find themselves locked out of traditional credit markets.


Image by designer491

The 1986 tax reform effort shows that Republicans have a tough road ahead


If there is one thing that unites Republicans as different as President Donald Trump, House Speaker Paul Ryan, R-Wis., and Sen. Susan Collins, R-Maine, it is a general sense that taxes should be lower than their current levels. For all the party’s changes on social issues and foreign policy over the years, tax cuts have consistently been the GOP’s guiding light.

But we are about to see that consensus tested. Trump and the Big Six—administration and congressional leaders on tax policy—have proposed a tax reform bill on a scale that has not been seriously considered since Ronald Reagan and a divided Congress pushed through the Tax Reform Act of 1986.

With the endorsement of the Big Six and the likely passage of a budget resolution with reconciliation instructions that will allow 50 senators and Vice President Mike Pence to advance the bill, the Republican tax plan is developing a sense of inevitability around it. After bungling the long-promised Affordable Care Act repeal, a losing effort on tax reform would seriously harm congressional Republicans’ credibility with the party faithful and may even trigger a revolt. If there was ever a time for Ryan and Senate Majority Leader Mitch McConnell, R-Ky., to get their caucuses together, it is now.

Despite the clear political incentives for Trump and congressional Republicans to deliver, the bill that the president introduced faces trouble ahead. While it’s likely that Republicans will pass some bill that affects taxation, a major tax reform bill is a longshot. Reforming the tax code usually means decreasing tax expenditures and closing loopholes in individual and corporate taxes, while lowering rates. Unfortunately for congressional Republicans, it is only the lower rates they agree on.

Once the thorny issue of loopholes comes up, members will find it difficult to come to a consensus. In fact, the 1986 Reagan bill is essentially the only time Congress has ever been able to enact a loophole-closing and rate-lowering tax-reform measure. As recounted in Showdown at Gucci Gulch (Vintage, 1988), the 1986 bill was nothing short of a miracle. What started out as an “ideal tax plan” from then-Treasury Secretary Don Regan was morphed by politics from the Reagan administration, the Democratic-controlled House and the Republican-controlled Senate until it eventually limped across the finish line and became law.

The final product established a two-rate structure for individuals and a 34 percent rate for corporations, and repealed individual deductions for state and local sales taxes as well as corporate tax breaks like the investment tax credit. However, as Gulch authors Jeffrey Birnbaum and Alan Murray note, the bill was a hodgepodge, and groups with clout, like the oil and gas industry, beat out those with less influence to keep the loopholes that mattered to them. Furthermore, the two-rate structure was a sham, as the bill included a surtax or “phantom rate” that was applied to top earners.

Still, it did end many loopholes and helped ensure that companies and the wealthy couldn’t avoid their tax bills altogether. Moreover, many members bucked lobbyists and parochial interests from their districts to support a bill that was in the general interest.

Political scientists like David Mayhew have found that the general interest is usually not what sways members of Congress to support bills. According to tax scholars with similarly pessimistic views on the incentives for legislative action, the 1986 bill was an anomaly and tax policy will usually be made incrementally rather than in sweeping changes.

In her essay on tax reform in The Evolving Congress (Congressional Research Service, 2014), Jane Gravelle lists the conditions necessary for a reform bill to pass. The first is strong presidential leadership, which ideally should come from a popular president. Reagan was extremely popular in 1985 and 1986, having just been re-elected in a 49-state landslide (it wasn’t until after the tax-reform effort that the Iran-Contra scandal reared its head and his numbers began to slide). Although Reagan was not very immersed in the details of the plan, he did provide mostly consistent public support and gave tax reform major billing in his 1985 State of the Union.

The second condition is that the first draft should be free from political pressures. This allows the draft to set the agenda and to use popular provisions like the state and local tax deductions as bargaining chips to garner support. Don Regan led the Treasury Department in drafting the “ideal tax plan” and his successor as Treasury secretary, James Baker, also put together draft legislation that was mostly free from political pressure.

The third condition is that the plan must be large and sweeping enough that it looks like “real reform.” This gives members an incentive to support it, as they do not want to be seen as beholden to special interests. The 1986 bill certainly did this, especially once Senate Finance Chairman Bob Packwood, R-Ore., introduced the radical two-rate structure that showed senators who had been more focused on preserving their slice of the pie that reform was serious business.

The 1986 bill also benefited from the fact that control of Congress was split between Democrats and Republicans, and the parties were not yet so polarized that they could not work together. A particularly strong alliance formed between relatively liberal tax reformers like Sen. Bill Bradley, D-N.J., and Republican adherents to supply-side economics, who believed that lower rates must be achieved at any cost, even eliminating popular tax breaks. The fact that both parties had an interest in seeing the bill passed encouraged its shepherds to face the wrath of special interests in unison, rather than try to score political points by blaming the other side. It also showed that, despite relatively weak public support for tax reform (which persists today), members do not want to oppose a bill that pings special interests in favor of the everyday taxpayer.

So, keeping in mind the lessons of 1986, what should we expect in 2017 or 2018? The good news for Republicans is that tax reform was on the agenda during the Obama presidency and, thus, has received some attention from political elites. During his second term, Obama wanted to work on reforming corporate loopholes while Ways and Means Chairman Dave Camp, R-Mich., was interested in dropping the top rate into the 28 to 25 percent range by eliminating a large swath of individual loopholes. However, the two proposals mostly stayed in their respective partisan enclaves and never gained traction.

The troubles of tax reform in the Obama years show one of the key weaknesses of the Trump plan: a lack of bipartisan consensus on what to do. Right now, Republicans are focused on giving corporations a tax cut. That is not in the interest of congressional Democrats, which means the GOP must use the reconciliation process to pass a bill on partisan lines. The last time this happened was early in the George W. Bush administration, an effort that focused more on tax-rate reductions than tax-code reforms.

So, if the partisan roadblock can be bypassed with reconciliation, what about the other 1986 conditions?

President Trump is historically unpopular at this point in his term, so that likely won’t boost the chances of reform passing. The effort might also be hurt by the optics of an extremely wealthy president trying to pass a bill that could give him or members of his family tax cuts. The worry that the current proposal is too favorable to the rich already has Republicans talking about keeping the top rate above 39 percent. Trump’s unpopularity could feed into the already lackluster support for tax reform. This is not necessarily detrimental (remember, it was not popular in 1986 either), but having public opinion firmly behind a legislative initiative is never a bad thing.

The second condition for passing the bill is that it be drafted away from political pressures. The contents of the Trump bill are still somewhat unknown, as the Republicans have only released a framework, which is specific in some areas and lacking detail in others. So, we cannot make a judgment on the second condition quite yet. One of the most difficult political sells for Republicans will be the elimination or limitation of the deduction for state and local taxes. These are particularly important for Republicans from high-tax states like New York and California.

Their appeal goes beyond that though. When working on the 1986 bill, a New York coalition to preserve the state and local incentives teamed up with oil and gas interests from Texas to change the bill when it was going through the House. The coalition gained the support of other members because, as it turned out, the state and local incentives had widespread support, even in low-tax states. The coalition received 208 pledges from members who said they would not vote for a bill that eliminated the deduction.

If widespread opposition like this emerges to provisions in the general framework or in the actual bill, it could spell doom for reform. One of the difficulties for tax reform that Gravelle mentioned was that the 1986 bill eliminated most of the low-hanging fruit for loopholes. The ones that remain are popular and will probably have fierce advocates organizing opposition against their repeal.

The third condition is that the bill is sufficiently wide-ranging and appears to be “real reform” instead of a thinly veiled effort to benefit some narrow constituency. This will force members to either vote for it or incur the wrath of the average taxpayer. As of now, it does not appear that the Trump bill has that quality. For one thing, Republicans have very different ideas about what they want out of the bill.

Sen. Bob Corker, R-Tenn., who is retiring after the 2018 elections, has said that he will not support any bill that increases the deficit, a tough sell when initial estimates show the Trump proposal losing trillions. Another recalcitrant Republican is Kentucky Sen. Rand Paul, who claims the Trump plan does not help middle-class voters enough. He showed on the Graham-Cassidy rendition of the ACA repeal that he is not afraid to buck his party, even when it comes to longstanding goals or core principles like tax reform. Arizona Sen. John McCain is demanding that the bill go through regular order and might even lean toward a bipartisan package rather than the 50-vote deal that Republican leaders appear to be eyeing. He opposed the Bush tax cuts in the early 2000s, so his vote is by no means guaranteed. We have not heard much from moderates like Collins and Lisa Murkowski, R-Alaska, both of whom come from poorer states and might not be keen on a bill that favors the rich.

Whatever these individual senators might be thinking, the bill has clearly not yet reached the point of inevitability that the 1986 bill did when Packwood released his two-rate structure.

If history is any guide, the Trump tax reform plan has rough sailing ahead. It seems more likely that Republicans will use the reconciliation process to enact tax cuts without targeting many of the deductions or corporate loopholes that could offset some of the revenue losses.

If anything, 1986 showed us what a herculean effort it is to overhaul the tax code. It’s not impossible, but Republicans will probably need to give more thought to selling the effort to skeptical members and the public before they are able to pass the most sweeping changes the tax code has seen in over 30 years. It might take two years (or two terms and a few more seats in the Senate) before President Trump is able to achieve anything like what Ronald Reagan and the 99th Congress did.


Image by EtiAmmos

Where R Street stands on birth control issues


The following post was co-authored by R Street Policy Analyst Caroline Kitchens.


After R Street Policy Analyst Caroline Kitchens, who co-wrote this post, wrote about birth control access in August for The Hill, more than a few allies asked us other questions about what we think and where we stand on some related issues. We’re writing this post to clarify what we as an institution think and deal with—and what we don’t deal with—on birth control and related topics.

To put it simply: We think current rules regarding access to many forms of birth control are an example of government overregulation. As such, R Street wants to change them both for their own sake and because it will advance our overall deregulatory agenda. We don’t, however, take institutional positions on related issues, such as health care and abortion.

With regard to birth control, the current regulatory regime is deeply unjust and imposes needless burdens on the vast majority of sexually active Americans. Even though the decision to use birth control (or not) is one of the most private parts of life, access to all hormonal birth control requires a time-consuming, intrusive and often expensive doctor’s office visit. This happens even though consumers are able to self-diagnose the need for the medication (in this case, wanting to avoid unintended pregnancy) and the drugs carry no risk of overdose or addiction. While some risks do exist in hormonal birth control, there is no reason why pharmacists should not be able to deal with those risks on the basis of questionnaires or minor screenings.

OBGYNs and pharmacists themselves support this. Nearly all American pharmacists already can write prescriptions for many types of vaccinations. There’s no reason why they shouldn’t be able to do what they already can in eight states and write them for birth control pills as well. In the short term, we’d like to expand pharmacist scope-of-practice to include other hormonal birth control—including the injection, patch and vaginal ring—and look for ways to allow other professionals who are not doctors to write prescriptions for the same.

To those who might suspect that we’re doing this to advance a broader libertarian and deregulatory agenda rather than simply working to expand access to birth control itself…your suspicions are justified. Our ongoing and expanding work on professional regulation convinces us that this might be a good way to get more people talking about different ways that people should be able to make a living without government approval and to draw attention to a particularly egregious and harmful example of regulatory overreach. If this helps spark a conversation that eventually makes it easier for cosmetologists to practice their craft after having learned basic health precautions rather than having attended pointless and expensive classes, we’ll be delighted.

With all of that said, we don’t see why this agenda with regard to birth control and professional regulation should obligate us to take positions on related issues. Besides a few scattered comments on very narrow reinsurance topics, we’ve been silent on those pieces of health legislation that have come before Congress since we opened our doors a little over five years ago.

Insofar as there is to be a system that specifies a mandatory benefits package and requires zero co-pay preventative care, we have no objection to the inclusion of birth control in that package and think it is probably a good idea. Since we are not advocating that birth control be made truly “over the counter,” we do think it should be covered by insurance plans on, at minimum, the same basis as any other similar prescription, even if it doesn’t require a doctor’s office visit.

The broader questions of what the health-care system should be able to look like and how (and if) employers and individuals might shape benefits packages based on personal or religious preferences are outside of R Street’s expertise. In the long term, we might pursue health care as an issue area. But we’re not going to wade into a debate that’s this complicated and consequential without deep expertise on the topic. And we don’t have that right now.

While we might eventually work on health care, R Street will never have an institutional position on abortion, per se, or any other issue that defies a solution that’s primarily economic. In the case of abortion, this is partly a matter of comparative advantage: there are dedicated, sincere, hardworking, well-funded and committed groups on both sides of the debate over the termination of pregnancies. Starting a program devoted to the issue at R Street would not add anything.

Just as importantly, we’re a pragmatic think tank that looks for innovative, market-oriented solutions to problems. The important political debate over abortion, as it involves profound questions of individual autonomy and human life, may not be suited to a market-oriented solution. Trying to point out the advantages of “the market” would not and probably should not convince anybody to change his or her opinions, anyway.

In short, R Street favors faster, better, cheaper access to birth control and doesn’t think getting it should require a doctor’s office visit. We don’t see a need to wade into other related issues to do this and, for the time being, we won’t.


Image by Image Point Fr

Why is Richard Cordray voting on FSOC?


The Financial Stability Oversight Council (FSOC) just made the good decision to remove the designation of the insurance company American International Group as a “SIFI” or “systemically important financial institution.” This was a good idea, because the notion that regulators meeting as a committee should have the discretion to expand their own power and jurisdiction was a bad idea in the first place – one of the numerous bad ideas in the Dodd-Frank Act. The new administration is moving in a sensible direction here.

The FSOC’s vote was 6-3. All three opposed votes were from holdovers from the previous Obama administration. No surprise.

One of these opposed votes was from Richard Cordray, the director of the Consumer Financial Protection Bureau (CFPB). Wait a minute! What is Richard Cordray doing voting on a matter of assessing systemic financial risk? Neither he nor the agency he heads has any expertise or any responsibility or any authority at all on this issue. Why is he even there?

Of course, Dodd-Frank, trying to make the CFPB important as well as outside of budgetary control, made him a member of FSOC. But with what defensible rationale? Suppose it be argued that the CFPB should be able to learn from the discussions at FSOC. If so, its director should be listening and by no means voting.

Mr. Cordray, and any future director of the CFPB attending an FSOC meeting, should have the good grace to abstain from votes while there.

And when in the course of Washington events, the Congress gets around to reforming Dodd-Frank, it should remove the director of the CFPB from FSOC, assuming both continue to exist, and from the board of the Federal Deposit Insurance Corp. while it is at it, on the same logic.

New LegBranch.com Resource: How wealthy are our representatives?


The following blog post was co-authored by Charles Hunt, a doctoral student at the University of Maryland at College Park.


It likely wouldn’t surprise anyone, much less a congressional scholar, to learn that most members of Congress are wealthier than the average American. What might be more surprising is just how much wealthier they are.

According to estimates calculated by the Center for Responsive Politics, the average net worth of a member of the U.S. House is around $8 million. That’s about 116 times as much as the net worth of the average American, which according to the U.S. Census Bureau’s most recent estimate is $69,000.

Even the median net worth of the top 20 percent of Americans, about $630,000, doesn’t come close to the median net worth of a member of Congress (about $880,000).
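A quick back-of-the-envelope check of those comparisons, using the rounded figures cited above:

# Rough check of the ratios cited above (figures are the rounded estimates
# from the Center for Responsive Politics and the Census Bureau).
house_average = 8_000_000      # average House member net worth
american_average = 69_000      # Census estimate for the average American

house_median = 880_000         # median House member net worth
top_quintile_median = 630_000  # median for the top 20 percent of Americans

print(f"House average is about {house_average / american_average:.0f} times "
      f"the average American's net worth")
print(f"House median is about {house_median / top_quintile_median:.1f} times "
      f"the top quintile's median")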

To help visualize this for individual members of Congress, we’ve put together an interactive graphic that displays data for all members of the 114th House of Representatives for whom data are available, including their state, district, party and estimated net worth. Each circle denotes a representative – the bigger the circle, the more he or she is worth. Hover over each circle to see the representative’s name and net worth, and zoom in and out to get a better view of a region. Use the wealth slider to limit the representatives visible on the map.

Below is a summary graphic showing wealth ranges and the number of members that fall into each range. Nearly half the members of the House are millionaires, and nearly two-thirds are worth more than $500,000.

[Chart: number of House members by net worth range]

These net worth figures for members of Congress, given how out-of-sync they are with the wealth of their constituents, should give us pause and lead us to ask some important questions.

  • Why are members of Congress so much wealthier than average Americans?
  • Do voters care about this disparity, and should they?
  • What work experience created this kind of wealth for them, and what kinds of policy implications could this have?

Further analysis of our interactive graphic is likely to spur even more questions.

[Graphic: richest and poorest members of the House]


Image by DenisProduction.com

 

PRI Podcast: Steven Greenhut’s end-of-session wrap

R Street Western Region Director Steven Greenhut joins Pacific Research Institute’s Another Round podcast to discuss the California Legislature’s housing package, its recent cap-and-trade deal, bills that were overlooked in the recent legislative session and the impact of Proposition 54.

Rep. Tonko cites R Street’s energy research

Rep. Paul Tonko, D-N.Y., cites R Street research on electric reliability at an Oct. 3, 2017 hearing of the U.S. House Committee on Energy and Commerce’s Energy Subcommittee.

Prescribing on-site fuel storage is an unreasonable approach to grid resiliency


The U.S. Energy Department’s proposed rulemaking to the Federal Energy Regulatory Commission (FERC) is, at best, a myopic and inefficient approach to grid resiliency. The proposal prescribes one measure, among many options, to address a single, low-to-medium salience aspect of grid resiliency. That is, it proposes to compensate extended on-site fuel storage as the means to address the security of fuel supply across the power-plant fleet.

Fuel security is an important issue to evaluate objectively, but it’s been politicized immensely by rent-seeking interests. This clearly influenced the DOE proposal, which cherry-picked the evidence to make its case. The proposal selectively pulled information from the insightful DOE technical report issued in August. It exalts the benefits of fuel-secure nuclear and coal, but ignores that the report highlighted substantial fuel-related outages at coal plants. Many coal plants couldn’t obtain fuel from their own on-site stockpiles because conveyor belts broke and coal piles froze.

Reducing fuel shortages at many power plants is not even a function of whether fuel is stored on-site. The biggest issue with natural gas plants lacking fuel – in the 2014 polar vortex or otherwise – is that they lacked the incentive to firm their fuel supply. Firming a fuel supply could come from on-site storage (e.g., backup oil) as well as off-site delivery, such as contracting for guaranteed pipeline service. Since the polar vortex, market reforms have increased the incentive to firm fuel supplies, and this has improved generator performance during severe weather events. In the PJM Interconnection, the largest grid operator, this largely came in the form of firming gas supplies using third-party marketers, which improves fuel security without increasing on-site supplies. Critically, this came from voluntary actions by the private sector, which creatively chose the lowest-cost ways to improve plant performance that fit their unique set of circumstances.

The DOE proposal cites the polar vortex as a cautionary tale of fuel insecurity. Yet the biggest issue was weather-related outages, as many plants couldn’t operate because temperatures dropped below a plant’s design basis (e.g., external instruments froze). If anything, it’d be more important to ensure weather-secure generation than fuel-secure generation.

Regardless, it shouldn’t be the role of government to compensate plant weatherization, on-site fuel storage or any other measure to possibly improve generator performance directly. Instead, regulators should ensure an incentive structure exists for the economically efficient level of weatherization, fuel assurance improvements and other performance-enhancing measures like improved maintenance. All these measures have costs, and only a well-functioning market should determine which costs are worth incurring to keep the lights on.

A market-based approach to reliability and resiliency values the performance or capability to provide a specific service. It does not explicitly value the particular measures behind that performance or capability. Dozens of measures can improve performance and capability, and the lowest-cost path is to give market participants the proper incentives to decide their own course.

In contrast, the DOE proposal would result in compensation for one politically preferred measure. For government to favor a certain measure simply reveals the bias of central planning, which has a track record of raising costs unnecessarily. To the extent that on-site fuel improves generator performance, markets should reward the measure indirectly through a fuel- and technology-neutral paradigm that procures specific services.

Take “black-start” capability, for example. A power plant with black-start ability can start up without power assistance from the grid. This is critical for resilience, as it provides the ability to restore operations in case of a full grid blackout. Procurement of black-start capability predominantly occurs through administrative processes, rather than market mechanisms. Re-examining the determinants of black-start procurement and using a market approach may boost prospects for cost-effective resiliency.

A thoughtful, market-compatible approach to reliability and resiliency, like that recommended in the DOE technical report, is welcome. The current DOE proposal provides an example of what not to do. It is deeply flawed, rushed and anti-competitive. The fallout from the DOE’s proposal will hopefully encourage the administration to reinvent its strategy on resiliency to bolster market performance and empower consumers, rather than undercut them by prescribing actions. In the meantime, FERC must uphold market principles and push forward with an economically sound agenda. FERC only needs to cite the DOE’s technical report as an example of what to do, as it respectfully declines DOE’s political albatross.

 

James Wallner on the Senate filibuster

On the American Enterprise Institute’s Banter podcast,  Peter Hanson and R Street Senior Fellow James Wallner discuss the Senate filibuster, how it operates, its impact on the Republicans’ agenda and ways to overcome it. The full audio is embedded below:

For connected cars, let the best technology win

Vehicle crashes are the leading cause of death among young people in the United States. It’s therefore crucial that we find ways to improve the safety of our roads if we want to save lives.

However, a proposal currently before the U.S. Transportation Department to mandate that all vehicles use a kind of vehicle-to-vehicle technology known as dedicated short-range radio communications, or DSRC, is the wrong approach to this issue. The mandate would hamper development of competing standards that may work better, in addition to creating potential security vulnerabilities.

Technology-specific mandates are always problematic. As a matter of process, bureaucratic decisionmaking is not well-suited to determine which technology is best for a specific need, or whether a need even exists at all. In the case of DSRC, there are technical reasons why other standards for vehicle-to-vehicle communication may prove more popular.

For instance, standards developed by organizations like 3GPP send signals over lower-band spectrum; those signals travel further and penetrate obstacles like buildings or trees better than the high-band spectrum allocated for DSRC. These characteristics likely mean the lower-band options will be cheaper to deploy than DSRC, since the same area can be covered with fewer antennas. These standards already have broad support from tech companies and carmakers like Ford, Rolls-Royce, Audi and BMW. Mandating the use of DSRC, or any specific technology, would be unwise when the market already provides competitive alternatives.
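
A minimal sketch of the standard free-space path-loss formula illustrates why lower-band signals can cover the same area with fewer antennas. The comparison frequencies are assumptions for illustration only: 5,900 MHz for the DSRC band mentioned later in this piece, and 700 MHz as a stand-in for a lower cellular band.

```python
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB (distance in km, frequency in MHz)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

# Illustrative frequencies only: 5,900 MHz for DSRC and 700 MHz as an assumed
# stand-in for a lower cellular band.
for freq in (5900.0, 700.0):
    print(f"{freq:>6.0f} MHz: {fspl_db(1.0, freq):.1f} dB free-space loss at 1 km")

# The gap is 20*log10(5900/700), about 18.5 dB at any distance, which is why
# lower-band signals can reach further for the same transmit power.
print(f"Lower-band advantage: {20 * math.log10(5900 / 700):.1f} dB")
```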

The DSRC mandate also raises security concerns. As security researcher Alex Kreilein notes, adding an interface with computers in other vehicles may improve safety on some counts, but it also creates new vulnerabilities. He argues the DSRC mandate would be especially risky in that it would create a monoculture in which all vehicles use the same technology. Compromising one car could, in fact, compromise all of them.

Kreilein further explains that it is dangerous to concentrate essential safety technology in one identifiable spectrum channel, where it can be more easily targeted by bad actors. We should allow the marketplace to consider and ultimately adopt competing standards using a variety of spectrum bands, rather than forcing all our eggs into the DSRC basket.

Some developers of self-driving vehicle systems are avoiding the security issues associated with vehicle-to-vehicle communications entirely by designing their products to account for their surroundings without directly communicating with other vehicles. These systems use technologies like cameras, LIDAR, radar and sonar to achieve similar awareness of situations, without the additional complications. In the case of these vehicles, any mandate would add unnecessary costs and security vulnerabilities, which would result in higher prices and less safety for consumers.

Spectrum for DSRC has been set aside since 1999, with almost nothing to show for it. Spectrum is a scarce resource and letting it remain underutilized has significant opportunity costs. The particular band allocated to DSRC (5.9 GHz) is adjacent to spectrum currently used for Wi-Fi. With demand for wireless bandwidth, including Wi-Fi, on the rise, the Federal Communications Commission could extend the available bandwidth for Wi-Fi to encompass the spectrum currently set aside for DSRC. While the FCC has been exploring ways to share this band between DSRC and Wi-Fi, we could maximize consumer benefits by abandoning the DSRC mandate and allowing the market to dictate how the spectrum should be used.

Thankfully, the DOT appears to be backing off the proposed mandate, moving it to the less urgent status of “undetermined.” The department should close the proceeding completely to create a level playing field that will allow the best technology to win and allocate spectrum to its most valuable uses.


Image by Zapp2Photo

National Flood Insurance Program, zoning, hurricanes: Lessons for lawmakers

In the wake of devastating storms in Texas, Florida, Puerto Rico and the U.S. Virgin Islands, the deeply indebted National Flood Insurance Program almost certainly will be forced to ask Congress to borrow even more money. Senior Fellow R.J. Lehmann took part Sept. 25 in a Capitol Hill discussion hosted by the Cato Institute to discuss ways the program could be reformed — and perhaps, eventually, completely privatized — ahead of its scheduled Dec. 8 expiration.

Full video of the panel is embedded below:

DOE proposal misframes grid resiliency  

U.S. Energy Secretary Rick Perry directed the Federal Energy Regulatory Commission (FERC) Friday to issue a rule to provide immediate cost recovery for power plants with extended on-site fuel supply. Read another way, the proposal is an arbitrary backdoor subsidy to coal and nuclear plants that risks undermining electrical competition throughout the United States.

The U.S. Energy Department proposal leverages a rarely used law that allows the department to propose its own rulemakings to FERC. The DOE proposal, which is notable for its lack of detail, nevertheless calls for FERC to create a new final rule within 60 days. While DOE has the legal authority to initiate proposed rulemakings, FERC retains ultimate discretion as to how to respond.

DOE’s proposal marks a deeply troubling departure from the thoughtful recommendations in its August technical grid report. That report sought to enhance the performance of electricity markets, whereas this overtly political proposal inflicts an impossible timeframe and concocts a recipe for wounding competitive markets, while potentially imposing billions of dollars in unnecessary costs for consumers.

Proponents of markets, consumer choice and limited government should shudder. Consumers would ultimately bear a hefty and unnecessary bill from any such draconian intervention, which would also raise capital borrowing costs and have a chilling effect on new investment. Proponents of good governance should also cringe, as the proposal calls for an unnecessarily rushed response in a timeframe completely unrealistic to enact reforms through the proper channels. To craft and implement sophisticated market rules requires working through a robust development process, often over the course of two years or more. The 60-day timeframe called for in the proposal is unprecedented.

When it came to the DOE’s technical report, a solid effort by the department’s technical team muted external suspicion of pro-coal and nuclear bias. This DOE proposal instead validates that suspicion. It is neither technically nor procedurally sound and has political fingerprints all over it. Clearly, the thinking behind the proposal bypassed that of the department’s own technical experts. The political proposal does a disservice to prior DOE work, to consumers, to good governance and to competitive markets.

The DOE proposal is long on hyperbole and short on technical backing. It seeks “immediate action” to address the “crisis at hand” as the “loss of fuel-secure generation must be stopped.” Yet there is no crisis, as affirmed by recent electric performance metrics, the latest congressional testimony of the CEO of the North American Electric Reliability Corp. and even the DOE’s own technical report. Critically, motivations for market reforms should never aim to adjust compensation with a pre-determined result. The whole purpose of markets is to let competitive forces determine resource allocations, which lowers costs and allocates risk to the private sector, in contrast to government-determined investments.

Market failures for electric reliability and resilience justify a limited role for government intervention to facilitate competition. Experts traditionally considered grid reliability and resiliency “common goods,” because suppliers cannot limit receipt of the product to those who pay for it, which induces free ridership and chronic underinvestment. Thus, the fundamental issue is ensuring incentive compatibility, where market rules align the economic interests of participants with the efficient and reliable performance of the electric system.

Getting the incentives right begins with ensuring prices accurately reflect supply-demand fundamentals and that there are markets for discrete reliability and resiliency services. The DOE technical report hit this on the head, calling for improvements in energy price formation and valuation of essential reliability services (e.g., voltage support and frequency response), which does not include on-site fuel storage. An exercise that defines discrete products for reliability and resiliency to procure through fuel- and technology-neutral markets is fruitful. The DOE proposal does not call for that.

The proposal is incompatible with sound market economics. It actually promotes a gateway to expand cost-of-service regulation, where government substitutes for competition. Its definition of eligible units – those with a 90-day on-site fuel supply – is arbitrary and has no economic basis. Curiously, some coal plants wouldn’t even qualify. Some hold roughly 30 days of on-site fuel supply; however, many hold 70- to 100-day supplies.

With a splash of hyperbole, the proposal referred to the loss of “fuel secure” resources during the 2014 polar vortex as possibly “catastrophic,” by inaccurately citing the technical report. This doesn’t characterize the nature of temporary bulk power shortages correctly. When bulk demand exceeds supply, grid operators take emergency actions, the most severe being voltage reductions (brownouts) and rotating blackouts. Brief voltage reductions and even rotating 30-minute blackouts are not catastrophic, by any stretch. This is why economic studies reveal consumers would often rather have their power curtailed briefly than pay a hefty premium to keep the lights on.

Prolonged (multiday) power outages can be catastrophic, especially during severe weather. The predominant cause of these sustained outages is damage to transmission and distribution infrastructure – take the recent hurricanes, as an example. They rarely result from power plant outages, let alone those from lack of fuel. DOE’s proposal seeks to take emergency action on, at best, a low-to-medium level resiliency issue.

A resiliency initiative should prioritize mitigating transmission and distribution damage and accelerating restoration. The DOE technical report recommended that grid-resiliency efforts prioritize disaster-preparedness exercises and that NERC and grid operators define resilience criteria and examine resilience impacts. That’s a thoughtful approach, and the exact opposite of the unrefined DOE proposal with its single DOE-determined criterion for resilience.

A thoughtful resiliency approach would take a market-compatible mindset and recognize that advances in technology have helped enable a degree of product differentiation, where consumers can pay for different levels of reliability and resiliency services. This creates the ability to cease treating aspects of reliability and resiliency as a “common good” for which a central authority substitutes its judgment for that of consumers. The prospect of “privatizing the commons” creates a great opportunity for the Trump administration to reduce the role of government planning, not to deepen government’s dictation of private services.


Image by Christopher Halloran

Dear Senate, we want more Pai

The following blog post was co-authored by R Street Tech Policy Analyst Joe Kane.


The U.S. Senate will have a chance Monday to reconfirm Ajit Varadaraj Pai for another term as chairman of the Federal Communications Commission, but it will first have to move past some baseless accusations about his suitability for the post that have been hurled the chairman’s way by a few congressional Democrats and political groups who want to block his reconfirmation.

In fact, Pai is arguably the most well-qualified chairman the FCC has had in recent years. Arguments to the contrary amount to a smokescreen for underlying disagreements with the market-oriented policy decisions Pai and his fellow commissioners have been pursuing at the FCC. These arguments should be rejected. We want more Pai.

Hailing from Parsons, Kansas, Pai attended Harvard University and the University of Chicago Law School before embarking on his illustrious legal career. Pai’s experience includes a federal judicial clerkship in Louisiana, multiple stints at the U.S. Justice Department and the Senate Judiciary Committee, and several years in private practice, first as associate general counsel for Verizon and then as a partner at the law firm Jenner & Block. Pai first joined the FCC in the General Counsel’s Office in 2007 before being nominated by President Barack Obama to be a commissioner in 2011. In 2012, he was confirmed by the Democratic-controlled Senate by unanimous voice vote.

During his time as commissioner, Pai consistently pursued market-oriented policies and opposed expansive, heavy-handed regulation. It therefore should be no surprise that he has worked to implement these same policies as chairman. Additionally, Pai has prioritized closing the “digital divide,” incorporating rigorous cost-benefit analysis into agency rulemakings and implementing unprecedented transparency reforms, like publishing all pending orders on the FCC’s website three weeks prior to a vote. Pai’s actions prove he is an able public servant truly dedicated to pro-consumer policies.

Nonetheless, political opponents and activist groups are staunchly opposed to Pai’s FCC agenda. These groups have launched an all-out assault against the reconfirmation vote, forcing Senate Republicans to invoke cloture to even get a vote on Pai, which is scheduled for next Monday. The same senators who thought Pai was well-qualified when nominated as a commissioner should take the same view now.

While Senate Democrats may disagree with the policies Pai and his fellow Republican commissioners are advancing at the FCC, blocking a qualified public servant from office is not the proper response. Telecom policy is hugely important to all Americans, so it shouldn’t be relegated to bureaucratic rulemakings and squabbles over nominations. Ongoing debates over closing the digital divide and protecting net neutrality are vitally important. We need our leaders in Congress to pursue bipartisan legislation to settle these debates, not hold the current FCC Chairman hostage.

Cheesy dance moves aside, he is the best man for the job.


Image by Mark Van Scyoc

Things are getting weird in pipeline country

In an environment that only a lawyer looking for billable hours could love, federal courts are making a mess of executive branch guidance concerning whether federal agencies need to consider “indirect” climate effects when regulating pipeline construction.

The Obama administration in August 2016 finalized guidance on how agencies should consider climate change in project reviews. The guidance said federal agencies must consider the larger impact of greenhouse-gas emissions from energy projects when completing their National Environmental Policy Act (NEPA) analyses.

The decision formalized executive action that President Barack Obama had informally created when he denied construction of the Keystone XL pipeline on climate-change grounds in November 2015. Obama then signed the United States up to substantial cuts in its greenhouse-gas emissions during the Paris Climate Accords in December 2015 and it all made sense.

But that was before Donald Trump came to town. In March, the White House rescinded the Obama guidance via an executive order, and in June, Trump announced the United States would leave the Paris Accord by the end of his first term. For outside observers, this would seem to shut down the possibility of the government taking climate change into consideration until at least another Democratic administration.

But this turns out not to be the case. For the last decade or so, some federal courts have rejected projects that the courts felt hadn’t taken the potential damage of indirect climate emissions into account. This gives plaintiffs the ability to argue to courts that there is legal precedent for blocking permits, even if the executive branch in charge of the permits changes hands and reverses the policy. The legal issues have never reached the U.S. Supreme Court for final adjudication.

The political battle over natural gas pipelines is where the sniper fire is hottest right now.

In August, the U.S. Court of Appeals for the D.C. Circuit ruled that the Federal Energy Regulatory Commission should have considered the impact of climate change when deciding whether to approve a 500-mile natural gas line serving the Southeast. It ordered FERC to redo the analysis.

But FERC, which is responsible for siting all interstate natural gas pipelines, has for years fought against including indirect emissions into its environmental analysis. Now, newly staffed with a majority of Republican commissioners appointed by Trump, FERC doesn’t look to be backing down.

On Sept. 15, FERC overruled New York State’s Department of Environmental Conservation (DEC), which had blocked an eight-mile extension of the Millennium Pipeline in upstate New York under its Clean Water Act authority. New York, which has banned hydraulic fracturing, argued in its rejection letter that FERC had earlier “failed to consider or quantify the indirect effects of downstream [greenhouse gas] emissions in its environmental review of the project.”

While pipeline builders were pleased with the FERC decision, the agency only overruled the state authority on a technicality, arguing that New York waited longer than the 12-month window allowed under statute before rejecting the application.

Two other pipeline companies have said they would seek similar waivers from FERC after being blocked by DEC using the same Clean Water Act authority. Yet it is unclear whether the same procedural violations have taken place, and courts have not supported FERC’s assertion that it shouldn’t take project emissions into account.

This means the Obama administration’s climate guidance is still operating through the U.S. court system, even though the Trump White House has rescinded it.

Again, things have gotten strange regarding pipeline siting in the United States – so much so that only a decision by the U.S. Supreme Court will likely straighten the rules out.


Image by Kodda

Rep. John Ratcliffe on the Separation of Powers Restoration Act

Earlier this year, Rep. John Ratcliffe, R-Texas, introduced the Separation of Powers Restoration Act. Unlike some bills, the act’s title precisely encapsulates its purpose: restoring the balance of power in our system of separated powers.

As close observers of our political system know well, the modern presidency has grown precipitously compared to Congress. While Congress itself deserves much of the blame for this state of affairs by over-delegating its powers to the executive branch, the third branch of our system has also been complicit. Under the judicial doctrine known as “Chevron deference,” the federal judiciary has systematically deferred to executive agencies when it comes to interpreting laws.

As R Street has noted previously, Chevron deference has become increasingly controversial in the legal community:

[Chevron deference means that] unless an agency’s interpretation of a statute is unreasonable, courts must adhere to it. Unsurprisingly, this allows agencies significant leeway to exercise their regulatory powers.

This level of deference to agency interpretations … has become contentious. There continues to be an ongoing debate among judges, legal scholars and practitioners about the propriety of according federal agencies such broad deference.

Rep. Ratcliffe’s bill addresses this issue by calling for an end to such deference; in its place, the bill would require courts to review agency actions de novo (“from the beginning”) and without deference.

LegBranch.com recently spoke with Rep. Ratcliffe about his bill, which he feels would provide an “immediate and profound” step forward in the effort to rein in the executive branch. As Ratcliffe put it, Chevron deference gives agencies the ability to “grade their own paper,” since their interpretation of statutes within their jurisdiction usually prevails in court.

For Ratcliffe, eliminating judicial deference to agency legal interpretations strikes at the very heart of our constitutional framework. “The wisdom of the founding fathers was that there would be a system of checks and balances,” Ratcliffe notes. “This is what Chevron deference has thrown out of balance; it should be the legislature that writes the laws, not agencies.”

Despite the relatively simple nature of his bill—its entire text barely exceeds 150 words—it remains controversial. Ratcliffe notes, however, that a version of the bill passed the House with at least some bipartisan support from several Democrats. According to Ratcliffe, President Donald Trump has also been receptive to the bill, which puts the ball squarely in the Senate’s court.

Given the Senate’s busy calendar, it’s anyone’s guess whether it will take up and pass the Separation of Powers Restoration Act. But those interested in checking the growth of the executive branch will certainly be keeping watch.

Sen. Graham has a good idea on climate change: Here’s how to do it

Sen. Lindsey Graham certainly likes to be in the middle of things. The South Carolina Republican took time away from Washington D.C., where he had been trying to shepherd passage of a major health care bill, to tell an audience that “a price on carbon – that’s the way to go in my view.”

Graham has been here before. Back in 2010, he was in the thick of negotiations over a national carbon-trading system that broke down when the Senate couldn’t find enough votes. Graham actually called for climate-change legislation during the 2016 election, but had not mentioned a price on carbon explicitly until just last week.

Meanwhile, the Republican Party and its voters have continued to move further away from promoting any climate change solution, even as Graham remains consistent in his belief that CO2 emissions generated by man are warming the earth.

Graham is completely correct that a carbon tax is the best way to control greenhouse gas emissions with as little impact as possible on the national economy. Many economists believe a carbon tax would be a much more efficient and elegant way to encourage cuts in carbon emissions than alternatives like a trading system or command-and-control regulation. Placing a fee on carbon would be more transparent, can be done with fewer transaction costs and would keep Wall Street from gaming a complex, opaque marketplace.

But the details matter. If a carbon tax merely served as a new source of revenue to fund wasteful government spending, it would be of dubious value. Any proposal to institute a carbon tax must not expand the overall size and scope of government, and ideally, should actually shrink it.

To be successful, a carbon tax should be revenue-neutral — that is, the revenue generated by the tax should be paired with cuts to taxes that are even more economically damaging. For example, R Street has proposed eliminating the corporate income tax altogether in combination with a meaningful carbon tax. A number of studies have shown that such a trade-off would boost conventional economic growth, in addition to cutting pollution.
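
As a rough, back-of-envelope illustration of the scale such a swap implies — using round assumed figures of roughly $300 billion a year in corporate income tax receipts and roughly 5 billion metric tons of annual U.S. energy-related CO2, not numbers from R Street’s proposal or any pending bill — the arithmetic looks like this:

```python
# Back-of-envelope sketch only; both inputs are rounded assumptions,
# not figures from any specific proposal.
corporate_tax_receipts = 300e9   # assumed annual corporate income tax revenue, dollars
us_co2_emissions_tons = 5.0e9    # assumed annual U.S. energy-related CO2, metric tons

breakeven_price = corporate_tax_receipts / us_co2_emissions_tons
print(f"Carbon price needed to replace that revenue: about ${breakeven_price:.0f} per metric ton")
# Roughly $60 per ton under these assumptions, before accounting for emissions
# (and thus revenue) falling as the tax takes effect.
```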

Moreover, any carbon-tax plan ought to pre-empt existing regulations of greenhouse gases. Because a carbon tax is layered on top of the retail cost of any fuel, it captures the full externalized cost of the pollutant, meaning there should be no need to impose additional regulatory costs on companies or consumers.

This means that much, if not all, of the administrative state apparatus created to control hydrocarbon pollution would have to be eliminated as a prelude to carbon pricing. These policies include pre-empting any future regulations of greenhouse gases under the Clean Air Act (CAA). It’s also possible a slew of other regulatory authorities would be on the chopping block, as well.

Sen. Graham doesn’t appear likely to revive that Republican health care bill from the dead, but perhaps he could still tempt fate and resurrect a carbon tax.


Image by arindambanerjee

 

Can police predictions create crime?

Technology has the power to make a lot of things better – including police work and crime-fighting. But it also has the power to “create” crime where it didn’t exist before. In a recent interview with the Brian Gongol Show on WHO Newsradio 1040 in Des Moines, Iowa, R Street Justice Policy Director Arthur Rizer explains how predictive policing can help or hurt the very communities that need the work of “peace officers” the most. Full audio of the piece can be found at this link.

Congressional procedure and policymaking

At a recent gathering of the Legislative Branch Capacity Working Group, Molly Reynolds, a fellow at the Brookings Institution, led a discussion on congressional procedures and their impact on policy creation and outcomes. Topics discussed include how procedures, especially in complicated situations like reconciliation, empower leaders relative to rank-and-file members, and what should be done to increase staffers’ knowledge of procedures and their consequences.

Full video of the panel is embedded below.

How does the United States rank in homeownership?

There are a lot of different housing-finance systems in the world, but the U.S. system is unique in being centered on government-sponsored enterprises. These GSEs—Fannie Mae and Freddie Mac—still dominate the system even though they went broke and were bailed out when the great housing bubble they helped inflate then deflated.

They have since 2008 been effectively, though not formally, just part of the government. Adding together Fannie, Freddie and Ginnie Mae, which is explicitly part of the government, the government guarantees $6.1 trillion of mortgage loans, or 59 percent of the national total of $10.3 trillion.

On top of Fannie-Freddie-Ginnie, the U.S. government has big credit exposure to mortgages through the Federal Housing Administration, the Federal Home Loan Banks and the Department of Veterans Affairs. All this adds up to a massive commitment of financing, risk and subsidies to promote the goal of homeownership.

But how does the United States fare on an international basis, as measured by rate of homeownership?  Before you look at the next paragraph, interested reader, what would you guess our international ranking on home ownership is?

The answer is that, among 27 advanced economies, the United States ranks No. 21. This may seem like a disappointing result, in exchange for so much government effort.

Here is the most recent comparative data, updated mostly to 2015 and 2016:

 

Advanced Economies: Homeownership Rates
Rank Country Ownership Rate Date of Data
1 Singapore 90.9% 2016
2 Poland 83.7% 2015
3 Chile 83.0% 2012
4 Norway 82.7% 2016
5 Spain 77.8% 2016
6 Iceland 77.8% 2015
7 Portugal 74.8% 2015
8 Luxembourg 73.2% 2015
9 Italy 72.9% 2015
10 Finland 71.6% 2016
11 Belgium 71.3% 2016
12 Netherlands 69.0% 2016
13 Ireland 67.6% 2016
14 Israel 67.3% 2014
15 Canada 67.0% 2015
16 Sweden 65.2% 2016
17 New Zealand 64.8% 2013
18 France 64.1% 2015
19 Mexico 63.6% 2015
20 United Kingdom 63.5% 2015
21 United States 63.4% 2016
22 Denmark 62.0% 2016
23 Japan 61.7% 2013
24 Austria 55.0% 2016
25 Germany 51.9% 2015
26 Hong Kong 48.9% 2017
27 Switzerland 43.4% 2015

Sources: Government statistics by country

It looks like U.S. housing finance needs some new ideas other than providing government guarantees.


Image by thodonal88

Hurricane Harvey isn’t about climate change, it’s about bad federal policy

In the wake of Hurricane Harvey, many have questioned the roles played by climate change and Houston’s loose zoning rules in the devastation that faced America’s fourth-largest city. R Street Senior Fellow R.J. Lehmann sat down with Nick Gillespie of the Reason podcast to discuss how explicit government policy encourages people to live in harm’s way and what can be done to reverse that trend. The full audio of that conversation is embedded below.

Section 230: When should online platforms be liable for the unlawful activity of their users?

When should online platforms be liable for unlawful activity? Section 230 of the Communications Decency Act (CDA 230) generally immunizes online platforms from liability when users engage in unlawful activity, but there are several exceptions to that immunity. Still, some websites have successfully hidden behind CDA 230 while sex traffickers and other criminal enterprises run rampant on their platforms. In response, several bills have been introduced in Congress that would narrow the scope of CDA 230’s immunity and expand potential liability for online platforms that harbor unlawful activity. A panel of legal and policy experts discuss the current scope of CDA 230 and what impacts the proposed amendments would likely have on law enforcement, victims of sex trafficking, and the internet ecosystem writ large.

Panelists:

Elizabeth Nolan Brown, Associate Editor, Reason Magazine

Arthur Rizer, Director of National Security and Justice Policy, R Street Institute

Berin Szóka, President, TechFreedom

Jeff Kosseff, Assistant Professor, United States Naval Academy Center for Cyber Security Studies

Stacie Rumenap, President, Stop Child Predators

Mary Graw Leary, Professor of Law, Catholic University of America

Taina Bien-Aimé, Executive Director, Coalition Against Trafficking in Women (CATW)

Arthur Rizer talks jail reform on KJZZ

R Street Justice Policy Director Arthur Rizer appeared recently on KJZZ, a National Public Radio affiliate in Phoenix, Arizona, to discuss how reforms to the nation’s jail system can be the key to safer communities. Audio of the story is embedded below.

How supporting internet freedom in Cambodia makes America great

I’ve had the privilege of working on internet freedom issues in a range of foreign countries, but none of my partnerships abroad has meant more to me than my work in Cambodia. Which is what you’d expect when you find out that, in the course of this work over the past three years, I met Sienghom, who just this summer has become my wife.

I’ve written about my internet work in Cambodia here before. And I think Freedom House’s 2015 assessment that the internet “remains the country’s freest medium for sharing information” still holds true. That’s why I’ve generally been optimistic about Cambodia’s prospects for increasing internet freedom and democracy, as well as its increased engagement with the pan-Asian and world economies, which should lead to higher standards of living in the country generally.

It’s also why I was particularly troubled when Sienghom pointed out to me a range of disturbing news items emerging from Phnom Penh, starting just last month and continuing into this past week. The bad news started with the Cambodian government’s decision to shut down the U.S. Agency for International Development-funded National Democratic Institute in late August. NDI has focused on offering training and workshops for Cambodian politicians and would-be public servants—both in the majority Cambodian People’s Party (CPP) and in the opposition Cambodia National Reform Party (CNRP)—aimed at enabling stakeholders to function effectively and democratically in a government framework that has been edging (thanks in part to internet engagement) toward a more truly representative parliamentary democracy. In response, USAID expressed its disappointment, as did the U.S. State Department, while Cambodian Prime Minister Hun Sen—who in other decades has sought to thaw U.S.-Cambodia relations—has ramped up criticism of the United States and USAID in particular.

In August, The Cambodia Daily, an English-language independent newspaper, quoted University of New South Wales professor of politics Carl Thayer about these latest trends, saying “[a]t this point, it looks like the U.S. is losing leadership by default and China’s gaining it by design.” But this past week, The Cambodia Daily itself was shut down, ostensibly for tax reasons. This represents a new wave of government actions designed to quell not just dissent, but any criticism whatsoever. In the same few days, the government has arrested CNRP leader Kem Sokha, who is now charged with treason.

As Thayer remarked to The New York Times, “The current crackdown is far more extensive than ‘normal’ repression under the Hun Sen regime.”

But what’s been triggering this latest wave of repression in a country that, as a U.S. ally, has been inching, not always steadily, toward democracy in recent years? Longtime observers will point you first to the last round of elections in 2013; as I wrote here in 2015:

It hasn’t helped the current government’s sense of insecurity that the 2013 Assembly election was marked by civil protest, which the government is inclined to blame, along with its slipping majority, on the rise of social media like Facebook, where individual Cambodians have felt free to share their political views.

But there’s another, more recent factor at work—namely, the messages the Trump administration has been sending to Cambodia’s leadership. One obvious message, per a report in the Phnom Penh Post, is the administration signaling its intent to cut foreign aid to Cambodia to zero. Another is President Trump’s often antagonistic relationship with the American press, which Hun Sen interprets as legitimizing his own treatment of the Cambodian press.

President Trump’s relationship with American journalists may not be improved anytime soon, but the president could reconsider whether to cut aid entirely. Understandably, Americans who feel they didn’t adequately benefit from the post-2008 economic recovery may favor the administration’s expressed commitment to disengage from (or at least reduce) the United States’ longstanding commitments to both our allies and to an international order aimed at increasing peace and promoting progress. The current “America First” foreign policy—combining promises of military strength with renegotiated trade deals—certainly resonated with these voters.

But there’s also a risk that disengagement from the role we’ve played in the international framework projects weakness rather than strength. That’s a message that can undercut the administration’s goal of a world that is “more peaceful and more prosperous with a stronger and more respected America.”

We may debate whether North Korea’s current in-your-face attitude about its nuclear weapons program has been improved or worsened by President Trump’s “fire and fury” threat last month. What’s less debatable is that the perception in many foreign countries is that the United States intends, if not to exit the world stage, then to reduce its role to a walk-on part. Whatever else that does, it doesn’t give the impression of a stronger, greater America.


Image by atdr

 

Prominent carbon tax skeptic admits it could increase economic growth

A lot of writing opposed to carbon taxes is, frankly, not of high quality. But there are exceptions. Bob Murphy, an economist with the Institute for Energy Research, has written some of the strongest and most sophisticated arguments for carbon-tax skepticism. So it was with interest that I read his latest broadside on the subject in the Canadian Free Press.

In the piece, Murphy focuses his ire on what might be called the nonenvironmental case for carbon taxes. Even if climate change was a hoax invented by the Chinese, a carbon tax still might be a net benefit to the economy if it allowed for cuts to more economically damaging taxes. As Murphy summarizes the case:

[W]hen proponents of a carbon tax pitch it to American conservatives and libertarians, they explain that if we have a revenue-neutral carbon tax where 100% of the proceeds are devoted to cutting taxes on capital, then reputable models show that this could boost even conventional economic growth, in addition to whatever environmental benefits accrue from reduced greenhouse gas emissions. This is called a ‘double dividend’ that arises when policymakers began to ‘tax bads, not goods.’

This sounds reasonable. And R Street has, of course, argued for swapping the corporate income tax for a carbon tax on precisely these grounds. But would it really work?

To show the limitations of the “double dividend” argument, Murphy highlights a chart from a 2013 analysis by Resources for the Future, showing the economic impact of instituting a carbon tax and using the revenue to reduce various other forms of taxation.

[Chart: RFF estimates of GDP effects under alternative uses of carbon-tax revenue]

As the chart shows, it matters a lot what type of tax you are swapping out for a carbon tax. Using carbon tax revenues to offset reductions in consumption taxes, for example, would be a net negative for the economy. Swapping a carbon tax for cuts to taxes on labor would have a smaller, but still negative effect. And simply returning the money to people in the form of lump sum payments would be worst of all.

But look at the blue line. If carbon tax revenues were used to cut taxes on capital, this would result in a net increase in gross domestic product. Murphy himself acknowledges this, stating that “the RFF model shows that only if carbon tax revenues were devoted entirely to a corporate income tax cut would the economy’s growth rise above the baseline.”

That’s overstating things a bit. For example, just eyeballing the chart, it looks like a plan that used half of the revenue from a carbon tax to cut taxes on capital and the other half to cut taxes on labor would still be a net positive for economic growth, albeit not as much of a positive as if all the money went to cutting capital taxes. I’m not saying that R Street would favor such a split, just noting that you could still end up ahead economically even if not all the money from the carbon tax went to cutting taxes on capital.
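
To make that eyeballing concrete, here is a minimal sketch of the blended effect of a 50/50 revenue split, treating the per-option effects as roughly additive. The two effect sizes below are invented placeholders chosen only to mimic the direction of the RFF results; they are not numbers read off the chart.

```python
# Illustrative only: these per-option GDP effects are invented placeholders,
# not values from the RFF analysis.
gdp_effect_pct = {
    "capital_tax_cut": 0.5,    # assumed percent change in GDP vs. baseline
    "labor_tax_cut": -0.2,     # assumed
}

# A 50/50 split of carbon-tax revenue between the two recycling options,
# treating the effects as roughly additive:
weights = {"capital_tax_cut": 0.5, "labor_tax_cut": 0.5}
blended = sum(weights[option] * gdp_effect_pct[option] for option in weights)
print(f"Blended GDP effect: {blended:+.2f}% vs. baseline")
# +0.15% under these assumptions: still positive, though smaller than devoting
# all of the revenue to cutting taxes on capital.
```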

And remember, the above analysis assumes no benefits to the economy from limiting climate change. To the extent that one does think there are risks from climate change that taxing carbon emissions could mitigate, it makes the case even stronger.

So why isn’t Murphy on board with swapping carbon taxes for capital taxes? Basically because he doesn’t think it’s politically realistic:

There is no way in the world that a massive new U.S. carbon tax is going to be implemented, in which all of the new revenues are devoted to cutting corporate income taxes… We can see that the ‘fashionable’ proposals that are anywhere close to actual political proposals do not consist entirely of tax cuts on corporations. For example, the recent Whitehouse-Schatz proposal, unveiled at the American Enterprise Institute, is ostensibly revenue neutral. Furthermore, one of its features is a reduction in the corporate income tax rate from 35 to 29 percent. So far, this sounds like it’s a ‘pro-growth’ measure, right?

But hold on. The Whitehouse-Schatz proposal would also use its revenues to fund a reduction in payroll taxes (but it is a flat $550 tax credit, so it lacks ‘supply-side’ incentives and acts as a lump-sum check), and to allocate $10 billion annually in grants to states to assist low-income people who will be hit the hardest by higher energy prices.

Murphy is right that the Whitehouse-Schatz proposal is flawed (we’ve written about why here). But I’m a bit surprised to hear him dismiss ideas on the grounds that they aren’t politically realistic. Murphy is an anarchist (not that there’s anything wrong with that). His preferred solution on climate is to abolish the government and have a system of private sector judges work everything out. Whatever the merits of that idea, I would submit it’s at least as unlikely as swapping a carbon tax for cuts to the corporate income tax.

More generally, lots of political ideas start out being unrealistic, only to become law later. People who advocate for Social Security privatization or drug legalization probably recognize the uphill struggle they face in advancing their views, but that hardly means they should just give up. As Milton Friedman famously said, the basic function of a policy advocate is “to develop alternatives to existing policies, [and] to keep them alive and available until the politically impossible becomes the politically inevitable.” I happen to think the time is a lot closer for revenue-neutral carbon taxes than Murphy probably does. But it’s only going to happen if people make the case.

The Equifax Hack: Time to get serious about consumer data protection

What’s said about money can be said about data: No one treats other people’s information the way they treat their own.

This week, Equifax—one of the “big three” consumer credit rating and reporting agencies— disclosed a massive hack that compromised the personal information of 143 million U.S. consumers. What makes this hack so damaging is that Equifax’s databases contain a motherlode of information about consumers—names, addresses, dates of birth, Social Security numbers, bank accounts, credit cards and more—all in one place.

Such hacks fuel the supply side of identity fraud and theft. Criminal hackers then sell the information wholesale via the “dark web” to other criminals who then use it to create fraudulent credit cards or other financial accounts. The “street” value of personal data goes up the more information there is to connect to a specific individual. By itself, a credit card number has a small degree of value. Add the expiration date, and the value ticks up. Add the CVV code (the three-digit number of the back of the card), and the value ticks up more. Connect it with a name and address and Social Security number and the value skyrockets.

If you’re lucky, the process ends with a phone call from a credit-card issuer asking you to verify a big-ticket purchase in a far-flung foreign capital. If not, you can find yourself debited for thousands of dollars in purchases you did not make and face years of battling with banks to clean up your credit rating. In the worst case, your personal or business bank accounts may be accessed and drained.

The Equifax hack is damaging in at least three ways: the number of records stolen, the wealth of information they contain and the fact that, because Equifax is a major credit-reporting company, consumers are obliged to do business with it for everyday transactions, ranging from applying for retail credit to renting an apartment. This last point is critical, because it’s where the curmudgeonly criticism—that if you don’t want your data stolen, don’t put it online—breaks down. Consumers today increasingly have no choice but to put personal data online. The so-called “internet of things” will depend on it.

This is not meant as a slam. The internet of things will have enormous social benefits. Further development of the platform and accompanying applications should be encouraged. But a key element in making it work will be consumer confidence in the security of the personal data that’s collected as a matter of course.

This is why both government and commerce must address the Equifax hack as a significant problem. Although I tend to favor government taking a light hand with business, there needs to be a thorough investigation of how this hack happened. Unfortunately, if the past is any indication, the Equifax hack will likely be traced to disregard of internally published cybersecurity protocols. The hacker may have been clever enough to break through a firewall, but that breach probably was aided by system information exposed through the target’s carelessness, such as:

All these and more violate best practices for data protection that can be found on any basic list of ways to safeguard data, be it on a home PC or a corporate server farm. When there’s loss because of failure to follow established standards of behavior, whether or not encoded in law, it’s negligence. And negligence is actionable.

If consumers are to remain confident in the security of their data in an environment where they are asked to share it in greater quantities, policy attitudes must change. That starts with the government realizing that cybersecurity is too big to be managed top down by a single “office” or “czar.” Responsibilities, strategies and tools must be distributed throughout the federal and state levels of government with the understanding that different hackers have different objectives. The Equifax hack was motivated by criminal profit. That means detection, prevention, regulations and response should be quite different here than for other targets, such as the Pentagon or defense contractors (espionage) and critical infrastructure (terrorism and cyberwarfare).

For one, the Equifax hack should be treated as an international organized crime problem. Solutions call for multilateral efforts with Interpol as well as other national police agencies. Treaties and accords should be pursued, but cooperation is possible without them. A model could be the Virtual Global Taskforce, an international private-public partnership of law-enforcement agencies, nongovernmental organizations and industry that has successfully targeted child pornography and child sexual exploitation.

But the private sector should be held accountable as well, especially when breaches occur because internal cybersecurity protocols and processes have been routinely ignored. Prosecutors should push for stronger penalties and judges should be reluctant to approve defendant-friendly settlements that fail sufficiently to punish a company for its carelessness.

Legislators should enact laws that guarantee baseline protection for consumers and compensation when negligence leads to loss. When a company requests or requires valuable personal data, it should be treated as under contract to do its best to protect that data. The best practices are already there. All the public needs are legislative teeth to ensure they are followed.

In the end, this transcends Equifax or any single data breach. Policymakers are still coming to grips with how the internet has exponentially increased the value of personal information. If consumers have little or no confidence in those they must entrust with it, the digital economy will be worse for it.


Image by Michael D Brown

The great Texas gas shortage

The great Texas gas shortage of September 2017 is over. But did it ever happen?

For me, it all began last Friday morning. As I was driving to my local coffee shop, I passed a gas station with a line of cars stretching out into the street. The next station I passed was even worse, with lines stretching around the block. The third station I passed had no line: it was out of gas completely. By the time I returned from my coffee run, the first two stations were out too.

The scene I witnessed that morning was playing out all over central and north Texas, as worries about supply disruptions from Hurricane Harvey led to the gasoline equivalent of a bank run. Worries that stations would soon run out of fuel became a self-fulfilling prophecy, as a cycle of panic buying caused shortages, leading to even more panic buying.

Soon, almost everywhere was out of fuel. One friend had to abandon their car part way between San Antonio and Austin because they couldn’t find gas. Another described the “post-apocalyptic” feel at a Buc-ee’s mega-gas station, which continued to be just as full of people as normal, but with empty pumps.

Public officials took to the airwaves to reassure people that there were no gas shortages. Whether this was true is mainly a matter of semantics. Claims that there was no shortage were correct in the sense that there hadn’t been a major disruption in supply. Texas is a big state, and much of the affected regions had escaped serious flooding. While some refineries were offline temporarily due to the storms, there was still plenty of fuel flowing.

The real problem was not falling supply so much as a spike in demand. Some of this spike was due to sheer stupidity (pictures circulated on the internet of people filling up garbage cans with gasoline; hint – don’t be that guy!). But this was only part of the problem. A bigger issue was a shift in demand. People normally wait to refill their gas tanks until they are mostly empty. Depending on the type of car and how much it gets used, a typical person might go a week or more between fill-ups. Gas stations thus ordinarily only need enough gas on any given day to fill the tanks of a small fraction of the local population.

The concerns over fuel shortages pulled much of that demand forward. Instead of waiting until the fuel light came on, people decided to fill up with half a tank or more remaining.
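
A toy model makes the point about pulled-forward demand. Every figure in it is an assumption chosen for illustration, not Texas data: the mechanism, not the numbers, is what matters.

```python
# Toy model: all figures are assumptions for illustration, not actual Texas data.
drivers = 70_000                  # drivers served by a town's gas stations
daily_station_capacity = 12_000   # fill-ups the stations can supply per day

normal_demand = drivers / 7       # normally roughly 1 in 7 drivers tops off on a given day
panic_demand = drivers * 0.6      # panic: 60 percent try to fill up the same day

print(f"Normal day: {normal_demand:,.0f} fill-ups vs. capacity of {daily_station_capacity:,}")
print(f"Panic day:  {panic_demand:,.0f} fill-ups vs. capacity of {daily_station_capacity:,}")
# Normal demand (about 10,000) fits under capacity; panic demand (42,000) overwhelms it,
# even though total fuel consumed over the week is unchanged.
```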

In a situation like this, what is collectively irrational can be individually rational. In fact, keeping a cooler head in such circumstances can leave you worse off, as the race goes to the swift. Luckily, in this case, the situation was short-lived. It stabilized after a few days and, by Tuesday, things were mostly back to normal. The experience, however, does not bode well for what might happen in the case of a real shortage.

There is, of course, a simple way to avoid fuel shortages when you have rising demand and steady or falling supply: raise prices. Higher prices would encourage people to conserve fuel and might even have blunted the cycle of panic buying in the first place. Higher prices also would have served as a signal to bring in more fuel to meet the higher demand. One of the strange features of the whole situation for me was how little the price of gas increased, given the lengths to which people went to get it.

The answer to this is admittedly obvious. Stations were reluctant to raise prices lest they be charged with price gouging. Laws against gouging are supposed to protect consumers but, like all forms of price control, they can easily end up making consumers worse off by denying them access to the product at any price. It’s something to consider as we look to the likely strike of Hurricane Irma this weekend, and all the other storms in the months and years to come.


Image by AHMAD FAIZAL YAHYA

 

It’s crucial that STB noms support railroad deregulation

The Surface Transportation Board, a federal agency with broad authority over the nation’s railroads, is currently weighing a petition that could undo most of the progress made since railroad deregulation in the early 80s. That makes it particularly crucial that the Senate think long and hard about two pending appointments to the STB, which are set to come before the Committee on Commerce, Science and Transportation in the near future.

Formed in 1996 as a successor to the Interstate Commerce Commission, the STB interprets laws, promulgates rules and settles disputes related to railroads. It’s crucial that it be run by people who understand the need for a light regulatory touch, because the industry that it oversees has been a poster child of the power of deregulation.

Congress was able to achieve that substantial railroad deregulation with the Staggers Rail Act of 1980, which eliminated costly rate controls and regulatory review processes that needlessly drove prices upward. The law was an important step to ensure that privately operated railroads could sustain themselves in a competitive manner. In fact, in the decade following the law’s passage, the rail industry was able to cut its costs and prices by half. By some estimates, shipping rates have dropped 51 percent since reforms went into effect.

But that all could change. Shipping interests who are reliant on moving their goods by rail are seeking a rule that would force railroads to lend their tracks to other railroads. This so-called “reciprocal switching” rule is based on a pair of faulty assumptions.

The first incorrect assumption is that rail lines are public property and should be treated the same as roads; they aren’t, and they shouldn’t be. For the most part, rail lines are owned by private firms. The second bad assumption is that railroads can’t coordinate use of each other’s rail lines on their own, even though they do it all the time.

President Donald Trump hasn’t yet made public his choices for the two STB seats that are set to be filled. It is vital that new members of the STB, whoever they may ultimately be, understand that a reciprocal switching rule would effectively re-regulate our nation’s rails. It is up to the Senate to ensure the nominees understand not only the details of the Staggers Act, but also its intent: to keep U.S. rails free and competitive.


Image by ideal_exclusive

The 9 lives of Richard Posner

The following blog post was co-authored by R Street Senior Fellow Ian Adams.


Love him or hate him, there is no disputing that Judge Richard A. Posner, who retired from the 7th U.S. Circuit Court of Appeals on Sept. 2, is a legend of American jurisprudence. Known for his deep knowledge of economic theory, which he regularly wove into his opinions, Posner authored some of this generation’s most profound rulings in the fields of antitrust, copyright and patent.

Named by President Ronald Reagan to the 7th U.S. Circuit Court of Appeals in 1981, when Posner was just 42, he later became the favorite to replace Sandra Day O’Connor on the Supreme Court in 2005. Alas, his ascent to the nation’s highest court did not come to pass. Posner’s outspoken nature and personal disdain for the role of the high court—which he likened to “the House of Lords, a quasi-political body“—scuttled his candidacy before it could move forward in earnest.

Yet from his perch on the 7th Circuit, Posner was able to do more to develop his uniquely pragmatic and economically informed take on jurisprudence than many Supreme Court justices accomplish during their careers. His significance as a jurist is evidenced not only by his more than 3,300 opinions as a member of the federal bench, but also by the fact that he became the most cited legal scholar of the 20th century. In an era defined by “purposive” and “textual” jurisprudence, Posner followed a straightforward approach: find what is right and what is wrong, and express it in colloquial language familiar and accessible to those outside the legal profession.

Naturally, strict constructionists, who aspire to hew closely to the four corners of the U.S. Constitution, saw Posner as everything that is wrong with the third branch of government. His occasionally flippant disregard for the Constitution—once going so far as to say that he saw “no value to a judge” spending any amount of time studying the Constitution’s text—could not have been better designed to trigger outrage from his colleagues and friends on the right.

Perhaps because he was largely unmoored from the past, Posner’s jurisprudence translated well to new frontiers of legal thought. Throughout his career, he was an undisputed champion for user-rights in the digital age. In 2012, he wrote that protections for copyright and especially patent had become excessive. His view was simple: when protections provide an inventor with more “insulation from competition” than needed, it will result in increased prices and distortions in the market. As more companies seek overly broad patents, the parties who suffer most are consumers.

In his essay "Intellectual Property: The Law and Economics Approach," Posner spoke openly about his views on limiting copyright terms, the idea/expression dichotomy and fair use, as well as laying out a novel approach to piracy. He maintained that the analogy to "piracy" was born of a misconception that intellectual property is equivalent to physical property. In Posner's view, if an individual who was never going to buy a copy of a registered work illegally copies the work, there is no market deficit. It's only when pirates make and sell copies to individuals who would normally buy the work that the copyright owner is affected. Posner didn't excuse bad actors, but applied rigorous cost/benefit analysis to the parties and judicial economy.

Posner also was a thoughtful academic with a longtime appointment at the University of Chicago Law School, and he was committed to mentoring legal talent. Lawrence Lessig—famous for his work on remixed works and as the creator of Creative Commons—once clerked for Posner. He has authored three dozen books thus far, on subjects that range from terrorism to sex. He was also the co-creator of the Becker-Posner blog, which ran until Nobel laureate economist Gary Becker's death in 2014. The blog provided an outlet for the University of Chicago professors to muse over rulings, explore current events and show a human side to their work.

Despite this heady list of accomplishments, the single stance that may garner Posner the most ongoing acclaim from law students is his hatred of the citation manual known as the Bluebook. In his essay, "The Bluebook Blues," he wrote—tongue firmly in cheek—that all copies of the style guide should be burned because it "exemplified hypertrophy in the anthropological sense."

Posner’s legacy will be felt for generations to come. His opinions and his other writings make clear the law is as much a tool for learning as it is a tool for justice.

Pennsylvania should reject unconstitutional internet sales tax


For decades, one of the thorniest issues in all of state government has been how to be even-handed in the tax treatment of merchants who sell from within state borders versus those who market online from other places in the world. Unfortunately, an approach urged recently by the Pennsylvania Senate does not provide that balanced solution.

Under a provision added by the Senate to H.B. 542—a tax reform bill the state House passed in May—any intermediary that merely facilitates a commercial transaction with a Pennsylvania resident would be required to collect and remit taxes, even if it lacks physical presence in the state. Legislation of this type adopted in other states has been held unconstitutional and should be rejected largely for that reason.

The bill incorporates provisions used by other states in laws that were drafted to challenge U.S. Supreme Court precedent, but this approach is both costly and unlikely to be successful. In South Dakota, a federal court recently enjoined a similar tax-remittance law that sought to extend the state’s taxing power beyond its borders, just as H.B. 542 proposes. Ultimately, by empowering Pennsylvania to collect taxes from businesses with no physical presence in the state, the rule immediately would draw the commonwealth into the potentially expensive and bitter cycle of litigation seen in other states. It’s a cycle unlikely to yield a positive result, because decades-old Supreme Court precedent makes clear that state taxing powers stop at the border’s edge.

This bill also imposes an undue burden on online marketplaces like eBay and Etsy, which are merely virtual storefronts that allow millions of small businesses to reach customers across the globe. H.B. 542 ignores the actual 21st century marketplace and creates new tax and compliance burdens not just on big internet companies, but also on craftsmen and entrepreneurs. It would be like making the King of Prussia Mall or the Millcreek Mall liable for all the sales taxes owed by their tenant stores anywhere in the country. Of course, that would be absurd.

Setting aside the bill’s obvious unconstitutionality, it would be decidedly unwise for Pennsylvania. By contributing to the erosion of borders as effective limits on state tax power, it will encourage poorly governed, tax-heavy states like California, New York and Illinois to unleash their aggressive tax collectors on Pennsylvania businesses and marketplace facilitators. Pennsylvanians could be subject to audit and enforcement actions in states all across the country in which they have no physical presence.

Moreover, citizens of the commonwealth largely oppose this tax grab. In a 2014 poll conducted by R Street and the National Taxpayers Union, overwhelming bipartisan majorities of Pennsylvania Republicans, Democrats, conservatives, moderates, liberals and independents answered "yes" to a question about whether "the internet should remain as free from government regulation and taxation as possible." Further, by a margin of two to one, respondents said they opposed "federal legislation that changes how states collect sales tax from internet purchases."

The U.S. Constitution was written to replace the Articles of Confederation, in no small part, due to the latter's failure to prevent a spiraling interior "war" among states that could assert tax and regulatory authority outside their borders. While the Constitution's Commerce Clause and subsequent jurisprudence make clear that taxing power must be limited by state borders, this bill seeks to wipe those limits away. The General Assembly should reject this law and avoid the ensuing legal tangle.


Image by Andriy Blokhin

 

Massachusetts’ ‘millionaires tax’ is a major misstep


Trying to squeeze more money out of the top income earners is a poor fiscal strategy, whether you’re looking to close budget deficits or to subsidize pet projects. Even where the tactic accomplishes its stated purposes in the short term, soaking the rich creates an unreliable revenue stream that risks driving wealthy residents, and even businesses, to other states with more accommodating tax structures.

The latest revenue-raising proposal out of Beacon Hill falls squarely into that category.

Under the Massachusetts plan, a mere 19,600 tax filers, in a state of nearly 7 million, would pay a new and higher rate. Of that fraction, just 900 filers, who are projected to make more than $10 million annually, would be responsible for contributing 53 percent of new tax revenues, or roughly $1 billion of the additional $1.9 billion projected from the surtax. A smaller fraction still, the top 100 earners in the state, would see their state income taxes rise from an average of $5 million to $9.3 million annually.

The additional revenue is slated to fund transportation infrastructure and the commonwealth’s educational systems, but the promise of support rests on the none too certain assumption that those residents subject to the surtax will actually pay these higher rates for the privilege of continuing to live in the Bay State.

Analysis by the Massachusetts Taxpayers Foundation (MTF)—the state’s pre-eminent public policy organization dealing with state and local fiscal, tax and economic policies—found that if just one-third of the 900 tax filers projected to make more than $10 million annually were to relocate, total income tax revenues would drop by approximately $750 million. Such a shift would blow a hole in the budget.

There is precedent for exactly this dynamic. Massachusetts enjoyed a windfall when General Electric moved its headquarters north from Connecticut. The reason for the move was clear enough. Data from the Tax Foundation, an independent tax-policy nonprofit, ranks Connecticut 43rd of the 50 states in terms of tax climate (Massachusetts ranks 27th). But one cannot help but wonder whether GE would have made the move had the "millionaires tax" been pending, as it is now.

What the proposal lacks in policy wisdom, it also lacks in legal foundation: there is an open question about its constitutionality.

As written, the proposal violates the state Constitution because it is, in fact, a budget appropriation. Article XLVIII (48) of the Massachusetts Constitution lays out the guidelines for ballot initiatives and prohibits the use of such initiatives to make specific appropriations. Article 48 also mandates that ballot initiatives must have a common or related purpose. Education and transportation are unrelated matters, just as raising and appropriating funds are two separate actions. As written, this measure unconstitutionally binds voters who might want to vote for increased revenue generation, but would like it spent differently. With no precedent for this situation, it may be destined for a lengthy judicial controversy.

If, somehow, the initiative were to become law and survive judicial scrutiny, the people of Massachusetts would have real trouble undoing their mistake. Because the tax, as contemplated, would be passed in the form of a constitutional amendment, it would take a subsequent amendment to undo the millionaires’ tax. That would involve legislative approval of a subsequent constitutional amendment and a vote on the next general election ballot. In fact, should the initiative pass, the earliest a change could be made would be Jan. 1, 2023.

The need for flexibility is amplified in a region in which residents can travel from one state to the next in a matter of minutes. Consider Massachusetts’ neighbor, New Hampshire. The Tax Foundation ranks the Granite State seventh on the list of the 10 states with the best overall tax climates. New Hampshire politicos are not naïve. They certainly would work to capitalize on this misstep in “Taxachusetts” in a manner that should be familiar to our neighbors in Connecticut.

Massachusetts, and state legislatures across the country, should stop looking to the wealthy to solve budget and infrastructure woes. Even the best laid plans have unintended consequences, and targeted tax hikes on a state’s highest earners can be disruptive to both businesses and individuals. The Bay State can and should avoid the uncertainty inherent in this budgeting approach. Its long-term fiscal health depends on it.


Image by pathdoc

Hurricane Harvey shows electrical grid resiliency is key to swift recovery


The similarities are striking between Hurricane Katrina, which devastated New Orleans in 2005, and this week's strike of Hurricane Harvey on Houston and Southeast Texas. Many residents of the Houston area—home to between 6 and 11 million people, depending on which way one counts—can look forward to weeks, if not months, of economic and personal discomfort and hardship as life slowly normalizes.

But even at this early date, some lessons learned over the past 12 years are resonating. The most important of these just might be the resilience of Houston’s electricity grid. Recent investments improved the grid’s operation in ways that kept electricity flowing for more than 90 percent of the area’s customers throughout the hurricane, even as more than 4 feet of water fell from the sky over two days.

By contrast, electricity didn’t return to New Orleans for weeks, a scenario that contributed directly to much of the havoc and breakdown in civil order that came to represent the post-Katrina crisis in late August and early September 2005.

It turns out that with a continued flow of electricity comes all the other attributes of modern civil life. Cell phones and land lines work, giving first responders proper direction to those most in need. Stores stay open, which depresses the likelihood of looting. People with homes that aren’t flooded can invite neighbors and displaced strangers to come to where the showers and refrigerators still work.

Houston didn't immediately learn how to adapt its electrical grid after Katrina, but Hurricane Ike in 2008 gave the city a gut punch, taking 2 million people offline for several days. In response, the city's power company, CenterPoint, spent nearly $500 million to reinforce the system, raising substations in low-lying areas and cutting down tens of thousands of trees along grid corridors.

As Congress meanders its way toward what could be a $1 trillion infrastructure bill in 2018, more attention should be given to grid resiliency, not just along the Gulf Coast, but everywhere where energy infrastructure is vulnerable to natural disasters.


Image by AMFPhotography

 

A wonderful confession


Adair Turner is an obviously very intelligent man who graduated from Cambridge University with a double first in history and economics, and whose distinguished career has included serving as chairman of the British Financial Services Authority, director-general of the Confederation of British Industry and chairman of the Pensions Commission, as well as becoming Baron Turner of Ecchinswell.

He begins his recent book, Between Debt and the Devil, with a remarkable and highly instructive mea culpa about September 2008, which includes the following:

I had no idea we were on the verge of disaster.

Nor did almost everyone in the central banks, regulators, or finance ministries, nor in financial markets or major economics departments.

Neither official commentators nor financial markets anticipated how deep and long lasting would be the post-crisis recession.

Almost nobody foresaw that interest rates in major advanced economies would stay close to zero for at least 6 [now 8] years.

Almost no one predicted that the Eurozone would suffer a severe crisis.

I held no official policy role before the crisis.  But if I had, I would have made the same errors.

To draw the necessary implication of this wonderful confession: If you think that the superior knowledge, foresight and wisdom of government financial regulators and central banks are going to save you from getting into trouble, you are suffering from a strange, misguided and irrational faith.


Image by Rawpixel.com

 

R.J. Lehmann: Trump policies could undermine post-Harvey rebuilding

R Street Senior Fellow R.J. Lehmann appeared recently on National Public Radio’s All Things Considered program, discussing the impact President Donald Trump’s order to rescind the Federal Flood Risk Management Standard will have in the recovery from Hurricane Harvey, as well as the tactical mistake former President Barack Obama made in selling the FFRMS as a climate change adaptation measure. The full spot is embedded below:

Presidential signing statements are declining, but why?


The following post was co-authored by Megha Bhattacharya, outreach and communications policy research assistant at the R Street Institute.


Earlier this month, President Donald Trump signed into law the Countering America’s Adversaries Through Sanctions Act, which strengthened sanctions against Russia, North Korea and Iran. While some observers speculated Trump might veto the bill, he was faced with veto-proof majorities in both houses of Congress, which likely forced his hand.

Even though he ultimately signed the bill, Trump issued a signing statement—the second of his presidency—claiming that several portions of the law were unconstitutional infringements on his presidential power to conduct foreign affairs.

Presidents have issued signing statements for many reasons throughout history. They can be used to criticize discrete provisions in a law, clarify how the law’s text should be interpreted or even declare a portion of the law unconstitutional. Starting with President Ronald Reagan, signing statements enjoyed an uptick in popularity among modern presidents. But by the end of the George W. Bush presidency their usage had started to decline again. A new paper by Joel Sievert and Ian Ostrander examines this drop and attempts to uncover its cause.

As Sievert and Ostrander recount, presidents traditionally have used signing statements as mechanisms to assert presidential prerogatives, including assertions that a particular piece of legislation may raise constitutional concerns. While James Monroe was the first president to issue a signing statement, the practice became more consistent during the 20th century. Their use picked up substantially during the 1980s and continued through the 2000s, culminating in a series of smaller showdowns during the Bush administration.

After Bush issued a signing statement for the Department of Defense Appropriations Act for FY 2006—objecting to provisions Sen. John McCain, R-Ariz., had inserted into the bill that restricted the use of certain interrogation techniques on enemy combatants—Congress began to more formally criticize and fight back against the use of presidential signing statements. Specifically, Congress started to convene oversight hearings regarding the practice of signing statements and even introduced legislation to regulate the president’s ability to issue such statements.

Sievert and Ostrander note that it was around this time that the Bush administration began to curtail its use of signing statements. During the first six years of his presidency, Bush issued 149 signing statements, compared to just 16 over his last two years, a trend which (for the most part) continued into the Obama administration. The authors argue that this decline can be attributed to a simple cost-benefit framework: as Congress began to push back against presidents using signing statements, the costs of issuing the statements increased significantly. As a result, presidents began to decrease their reliance on signing statements and switch to other, less controversial tools to advance presidential prerogatives.

The authors point out that presidential tools are incredibly malleable and can evolve or die out over time. Scholars have suggested that presidents increasingly have relied instead on statements of administration policy (SAPs)—which are issued while a bill is moving through Congress, rather than once it reaches the president’s desk—to take the place of signing statements.

Given the malleability of presidential tools, the question arises whether the debate over formal signing statements is a distraction from larger issues. As noted, presidents issue signing statements for many reasons, such as to influence how a law's text is interpreted or to impact how an agency implements a portion of a law. But presidents can advance these goals through other means, suggesting that presidential actions rather than signing statements are where the real focus should be. For example, presidents can use surrogates or speeches to air any objections to a particular law, and they can use tools like SAPs or even internal communications to agencies to influence how a law's text is interpreted and implemented.

Of course, the most intense debates surrounding signing statements arise when presidents use them to lodge constitutional objections to portions of a law that they don’t otherwise want to veto in totality. It remains a controversial question whether presidents can merely decline to enforce parts or all of a law they view as unconstitutional. But even in these cases, signing statements themselves take a back seat to the president’s actual on-the-ground actions.

As the Congressional Research Service's Todd Garvey has noted: "If an action taken by a President in fact contravenes legal or constitutional provisions, that illegality is not augmented or assuaged merely by the issuance of a signing statement." In other words, signing statements themselves matter less than whether a president takes tangible steps not to enforce portions of a law he or she finds unconstitutional or undesirable. This was seen most recently in the Obama administration's decision not to enforce certain parts of the Affordable Care Act during its implementation stage—an action taken without any signing statement indicating that the president would do so. Garvey continues:

It can be argued that the appropriate focus of congressional concern should center not on the issuance of signing statements themselves, but on the broad assertions of presidential authority forwarded by Presidents and the substantive actions taken to establish that authority. Accordingly, a robust oversight regime focusing on substantive executive action, as opposed to the vague and generalized assertions of authority typical of signing statements, might allow Congress in turn to more effectively assert its constitutional prerogatives and ensure compliance with its enactments.

While Congress might be best served to focus its ire on presidential actions, rather than statements, it is noteworthy to see the national legislature stand up to the executive branch in any realm. Sievert and Ostrander suggest that signing statements are “one of the most recent fronts” in the power balance between the legislative and executive branches, and their decline shows Congress can act effectively to curb executive power. As Sievert and Ostrander put it, the decline in signing statements in the wake of greater congressional pushback and oversight “demonstrates that executive power does not increase monotonically or proceed inevitably toward aggrandizement.”

Whether such aggrandizement continues apace will depend on Congress’ willingness to push back against other instances of executive overreach as vigorously as it has against signing statements.


Image by OPOLJA

DOE study provides insight, despite controversy


The U.S. Energy Department's much-discussed grid study, released this week amid a swirl of pre-publication controversy, offers a thoughtful and empirically based approach to examining current and future issues facing the electric grid. There are insights here for Congress and the Trump administration's executive agenda, and the study provides a starting point for a civil dialogue on electricity policy under this administration.

Critics decried the study long before it saw the light of day, calling it, among other things, a “fake study” and pro-coal propaganda. Some of the initial concern was valid, as when Energy Secretary Rick Perry suggested possible federal intervention to prevent unprofitable coal and nuclear plants from retiring in the name of national security (for the record – there is no case for doing that).

But while reasonable people can disagree with aspects of the study, it is not reasonable to dismiss it as propaganda. Ironically, all the chuffing has made critics look hysterical. The ball is now in their court to respond productively. Taken together with President Donald Trump’s decision to reject a moratorium on coal plant retirements, this report signals that this administration is serious about pursuing market-enhancing policies.

The study sets the right tone by identifying market forces, not environmental regulations or subsidies, as the principal drivers of coal and nuclear retirements. It even broached the sore subject of workforce transition, which is a difficult but appropriate conversation as market dynamism drives creative destruction in electricity markets. It accurately notes that neither a growing amount of renewables nor a trend of baseload retirements has created problems with reliability. At the same time, force-feeding changes in the generation mix—using subsidies and mandates that outpace the ability of market design and utility planning processes to adjust—could cause those problems. The study examined the implications of regulations and subsidies with appropriate tenacity and offered a reasonable set of recommendations.

The recommendations are consistent with industry experience and empirical evidence. They emphasize improvements to price formation in wholesale markets and encourage the Federal Energy Regulatory Commission to study creating market mechanisms for essential reliability services. The report also encourages efforts to examine resiliency, a distinct concept from reliability, which is another reasonable request for electric industry stakeholders. In the report’s cover letter, Perry recognizes that it’s important consumers know a resilient grid comes at a price. This also should be encouraging, as all too often, the electric industry fails to balance the benefits of reliability and resiliency with the costs.

The report's infrastructure development recommendations prioritize some no-brainers long overdue for reform. It calls attention to the challenges that face fossil-fuel generators under New Source Review regulations. This Clean Air Act program can have perverse effects on environmental outcomes and create excessive industry burdens. Scholars have proposed methods to revise the program to reduce those burdens while maintaining environmental quality. The report also offers helpful ideas for a forward-looking research and development agenda, improving the coordination of the electric and natural gas industries and areas for further research.

Addressing the plight of nuclear, the report correctly notes the need to revisit safety regulations under a risk-based approach. Similarly, hydropower regulatory reform is well past due. An R Street paper released the same day as the DOE study identified 12 priorities to stem the flow of hydropower red tape. Our report agrees with the DOE about the need to reduce regulatory burdens on hydropower licensing and relicensing processes.

Altogether, the DOE study injects civility and reasonableness into the dialogue of our electricity future, at a time when such policy too often comes down to emotion over empirics. In particular, a cultural battle over picking favorite fuel types has resulted in a maze of distorting subsidies and preferential treatment, while distracting us from the imperative of bolstering market performance.

One can only hope that folks across the spectrum will use the DOE study to launch a sustained dialogue that guides the kinds of sensible legislative and regulatory reforms that can empower markets and consumer choice.


Image by pan demin

Indiana’s embrace of harm reduction could save lives


Indiana’s recent move allowing counties and municipalities across the state to approve syringe access program (SAP) operations without first obtaining state approval has been ruffling some feathers. Some even suggest that programs like Indiana’s, which provide access to clean syringes, actually increase HIV incidence.

It’s not difficult to find the evidence showing that this claim amounts to baseless fear-mongering. The overwhelming data demonstrates empirically that clean syringe access decreases HIV incidence. Indiana’s change in SAP policy reflected the real need to address the spread of HIV among injection drug users – particularly in light of the 2015 outbreak of HIV in the state’s Scott and Jackson counties. The Indiana General Assembly and Gov. Eric Holcomb did the right thing by passing and signing H.B. 1438.

But to see the data through an objective lens, it's important first to recognize whom harm-reduction programs and services actually help. They are our neighbors, family members, colleagues and friends – members of our community.

Opposition to increased availability of SAPs often is based on a fundamental misunderstanding of the philosophy of harm reduction. It does not hold that people who use drugs are incapable of making healthy choices, leaving us no choice as a society but to enable them. To the contrary, harm reduction is based on the premise that people want to make choices that promote their health, even if they are unwilling or unable ultimately to quit using drugs. The people who take time out of their day to get clean needles, naloxone and condoms, or who attend clinic events, prove that proposition.

Syringe access programs work. Drug users who do not have needles are far more likely to share than those who do; it’s that simple. The availability of SAPs in Vancouver, British Columbia, helped decrease needle sharing among HIV-positive injection drug users from 37 percent in 1996 to 2 percent in 2014. In fact, difficult access to clean needles makes it 3.5 times more likely that a person will share needles, while access to needle-exchange programs makes it less than half as likely that a person will share a needle.

Extending these findings to incidence of HIV supports the idea that decreased needle sharing results in decreased transmission of infectious diseases. In New York City, syringe-exchange rates have correlated strongly with decreases in HIV incidence. In 1992, when 750,000 clean needles were distributed, HIV incidence rates were at 3.7 per 100 person-years. Just 10 years later, HIV incidence rates had fallen to 0.75 per 100 person-years, after needle distribution increased to 3 million.

Similar results can be found globally. In Dublin, Ireland, hepatitis C prevalence fell 24.2 percent after similar programs were introduced. In Lang Son, Vietnam, HIV prevalence dropped from 41 to 27 percent among injection drug users following the implementation of SAPs. Studies comparing clean needle distribution in Scotland, Ireland, England, China, France, Spain, Quebec and Australia all support the idea that decreased needle sharing decreases transmission of both HIV and hepatitis C virus.

SAPs are also cost effective. In fact, they’ve been cited as one of the most cost-effective public health interventions ever funded. It is calculated that it costs an SAP between $4,000 and $12,000 to prevent one HIV seroconversion, which is far below the estimated $385,000 it costs to treat one diagnosis of HIV. In 2008, Washington, D.C., allocated $650,000 of municipal revenue to fund SAPs. It’s estimated that, within two years, this policy change averted 120 new cases of HIV, for a projected cost savings of $44 million.
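
The arithmetic behind the Washington, D.C. example is easy to check. Below is a rough back-of-envelope sketch in Python, using only the per-case figures cited above; the published $44 million estimate presumably reflects more detailed epidemiological modeling and discounting than this simple multiplication.

```python
# Rough check of the Washington, D.C. cost-savings figures cited above.
# Inputs are the article's numbers; real cost-effectiveness studies discount
# future treatment costs and model transmission dynamics more carefully.
program_cost = 650_000                # D.C. SAP funding allocated in 2008
cases_averted = 120                   # estimated HIV infections averted within two years
treatment_cost_per_case = 385_000     # estimated cost of treating one HIV diagnosis

gross_savings = cases_averted * treatment_cost_per_case  # $46,200,000
net_savings = gross_savings - program_cost                # $45,550,000

print(f"Gross savings: ${gross_savings:,}")
print(f"Net savings:   ${net_savings:,}")
```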

Opponents of harm reduction programs often suggest that treating addiction is, perhaps, a better long-term goal. There, we can agree. Several analyses of existing programs conclude that, rather than tacitly tolerating drug use and allowing addiction to take over communities, harm reduction programs actually correlate with increased entry into treatment. In Baltimore, people who visit SAPs are more likely to enter treatment than those who do not. In Seattle, people who use SAP services are more likely to remain on methadone treatment.

This result makes sense. Harm reduction programs, including SAPs, provide a point of intervention where a person who uses drugs can interact with a nurse, a counselor or volunteers who have the best interests of the person in mind. This is true both in the short term, with regard to providing clean syringes, and over the long term, where such services can provide options for treatment programs and facilities. This is the bare minimum we owe to our friends, neighbors and family members who struggle with this horrendous disease.


Image by sumroeng chinnapan

The Future of Traditional Urbanism: Conservatism in cities and towns

In a joint forum with The American Conservative magazine, the R Street Institute hosted a pair of July 31 panels at the Hillsdale College Kirby Center in Washington, D.C., to explore the conservative case for traditional urbanism, and outline the way forward for responsible development of U.S. cities and towns.

In the first panel—which also featured Gracy Olmstead, associate managing editor of The Federalist; Jason Segedy, director of planning and urban development for the City of Akron, Ohio; and Lewis McCrary, executive editor of The American Conservative—R Street Visiting Senior Fellow Jonathan Coppage made the case that conservatives should acknowledge and be concerned about the ways that decades of wrong-headed government planning often served to destroy architecture, cultures and vibrant neighborhood institutions that are the lifeblood of American cities.

Jon also moderated a second panel featuring New York Times columnist Ross Douthat; Benjamin Schwarz, national editor of The American Conservative; and Aaron Renn, senior fellow at the Manhattan Institute.

Full video of both panels is embedded below:

Ohio’s opioid agony


As college football season looms, hope springs eternal here in Columbus that our premier land-grant institution of higher learning will be atop the pile at the end of a glorious, or even a workmanlike season.

Unfortunately, Ohio currently is instead leading the pack in the metrics for two national crises: at-risk student loans and drug overdose fatalities. My Buckeye state experienced nearly 17,000 deaths from drug overdoses from 2010 to 2016, and led the nation last year with more than 4,050. The Ohio Department of Health is still compiling the data, so the total could even go higher still.

I have attended four national conferences of state legislators this summer, and these lawmakers are well-aware that the street drug problem has gotten much worse in the last couple of years, with the addition of fentanyl to many of the traditional offerings. As a synthetic opioid, it can be manufactured and transported anywhere. It can reportedly even be bought online, if one knows where to look. It’s easy to understand the dangers posed by a substance 50 times as powerful as heroin, which currently is being mixed with other street drugs—cocaine, heroin and even marijuana—to hook users.

To illustrate how powerful fentanyl is, consider the story of Patrolman Chris Green, an East Liverpool, Ohio, officer who made national news in May when he overdosed after wiping just a small amount of fentanyl off his shirt. He had worn gloves and a mask while searching a stopped vehicle, but got the drug on his skin later when he brushed it off. He had to be administered multiple doses of naloxone before he recovered.

The worst stories, in terms of the major public policy problems that surround opioids, are stories of people who have had to be administered lifesaving antidotes up to three separate times in the same 24-hour period. What can be done about those so determined to injure themselves is certainly beyond the scope of this piece, and is going to require a level of resources that is difficult to imagine.

There are 41 drug-related task forces in Ohio, and law enforcement is overwhelmed by the task of trying to “serve and protect” our citizens whose lives are dictated by access to opioids. John Born, director of the Ohio Department of Public Safety, was quoted in the Columbus Dispatch recently claiming that 20 kilos of fentanyl were taken off the streets in Ohio last week. Gov. John Kasich issued a plan in March limiting prescriptions of opioid analgesics to seven days for adults and five days for children. Ohio’s lawmakers are tackling the problem in every way they can imagine, as President Donald Trump mulls whether to declare a “national emergency.”

An extra $20 million was added to the Ohio biennial budget to be used for innovation in opioid product development, an area where there had been some progress recently. At the epicenter of the problem, scientists are working as hard as policymakers to mitigate the challenges.

The use of opioid analgesics for chronic and acute pain management—including neuropathic pain and post-surgical pain—has become commonplace and, while effective, can have unwanted side effects. The most extreme of these side effects is addiction, to which people can transition following recreational use, misuse or abuse of these drugs.

The Sunday edition of the Columbus Dispatch separately carried a hopeful report on a novel drug-delivery system. The company formed to manufacture this new product was initially funded by Ohio's Third Frontier program, a ballot issue approved by Ohioans several years ago to help bring innovative products to the marketplace.

A pellet about half the size of a grain of rice delivers an analgesic that is designed to be nonaddictive. In a trial of 55 people, this analgesic has been injected into the intrathecal space of the lumbar spinal cord to mitigate pain from sciatica for up to a year with no major side effects. If it passes all the tests for efficacy and safety, this could offer at least one solution to one part of the problem we face as a nation.


Image by tab62

 

War for the Web: Countering ISIS and violent extremism online

In the wake of the recent terrorist attacks in London, U.K. Prime Minister Theresa May has been at the forefront of international calls for technology companies to do more to combat online extremism. The British government announced its intent to stamp out extremism “in all its forms, both across society and on the internet.”

In the United States, the Department of Homeland Security just announced a $10 million two-year grant to organizations that work to improve cybersecurity and thwart terrorism. Countering violent extremism, specifically online, requires taking proactive steps to halt extremist groups from being able to recruit and radicalize followers worldwide. This effort, now more than ever, requires increasing cooperation between the private, public and academic sectors, among others. For their part, tech companies have been experimenting with new techniques and guidelines.

These are complex issues at the intersection of freedom of expression and national security. How will all of the proposed changes and solutions express themselves online, domestically and abroad? How do these efforts to identify and prevent early online radicalization square with the First Amendment and notions of freedom of expression?

Arthur Rizer, R Street’s director of national security and justice policy, took part in a July 21, 2017 panel discussion on these and related issues hosted by the Advisory Committee to the Congressional Internet Caucus. Other panelists included Kevin Adams of the British Embassy, Alexander Meleagrou-Hitchens of George Washington University’s Project on Extremism, Mark MacCarthy of the Software & Information Industry Association and Clara Tsao of DHS’ Countering Violent Extremism Task Force.

Video of the discussion is embedded below:

Trump wisely rejects emergency order for coal


The Trump administration this week confirmed it has rejected a coal industry request seeking an emergency order for a two-year moratorium on coal-plant closures. This avoids what would have been an unprecedented and economically damaging intervention in electricity markets, without even the benefit of greater reliability. The move marks a sharp break from the all-tools-considered approach to reinvigorating coal, as the president reportedly had previously committed to the measure in private conversations with industry executives.

The Federal Power Act grants the U.S. Energy Department emergency authority to order continued operations of power facilities. In April, Energy Secretary Rick Perry announced the possibility of federal intervention to protect coal and nuclear plants in the name of national security, which would pre-empt state policies. The announcement coincided with the launch of an Energy Department study on so-called “baseload” power-plant retirements.

A massive moratorium on power-plant closures, especially those brought about by market forces, would heavily distort electricity markets and deter, if not outright freeze, new capital investment. Fatally undermining an investment climate could paradoxically worsen energy reliability by undermining the price signals that competitive electricity markets use to meet reliability requirements. Furthermore, using a national security mechanism when there is no national security concern would be an abuse of the emergency authority. Doing so while overriding the states would also leave a deep federalist scar.

This may even beat coal-production subsidies as the worst energy policy idea. Fortunately, many productive energy-policy corrections are on the table for the administration.

A reset on coal policy should be consistent with market principles, not a form of reverse-industrial policy to counter the prior administration’s favoritism to renewables. Thoughtful deregulation is an appropriate approach. So is lifting restrictions on coal exports or international financing for coal development. But subsidies and knee-jerk responses—a protectionist emergency order being the worst among them—would be deeply damaging and harm the economy.

One hopes this is a sign the president and his senior energy advisors recognize that economic transitions are necessary and healthy when they are supported by market forces. Coal's biggest foe is shifting market fundamentals – namely inexpensive natural gas and declining demand. Subsidies for coal's competitors are a lesser factor, and the administration should deal with them in a manner that predictably and sustainably reduces the subsidy regime, not further entrenches it.

The surge in coal-plant retirements this decade was due mostly to a combination of environmental rules and market forces, with the latter being the main driver going forward. The mid-Atlantic region experienced more than 20 gigawatts of coal retirements already (equivalent to about three-quarters of New England’s peak demand). Markets facilitated new resources to take coal’s place. As leading industry economists note, the emergence of these alternative resources has been surprisingly robust and posed no clear reliability concern. Overall, most electric reliability metric trends are stable or improving.

Clearly, the doomsday reliability claims (e.g., coal retention as a national security issue) of some uncompetitive industries have proven unfounded. Still, achieving continued reliability requires market rules and monopoly-utility planning processes to evolve as unconventional resources become more economical. The administration can aid this by listening to industry experts, not the desperate claims of rent-seeking industry members.

The dismissal of a blatantly anti-market idea could, one hopes, point the way toward a more refined approach for this administration’s energy policy. The forthcoming U.S. Energy Department study has much potential to assess the regulatory and market environment fairly and to suggest market-enhancing improvements. Further work to improve the alignment of wholesale electricity market rules with electric reliability requirements is one such path to let markets, not government, decide the fate of the coal industry and all other power sources.


Image by Rudmer Zwerver

 

Trump’s ‘energy dominance’ strategy starting to crack Eastern European markets


The U.S. Energy Department announced Aug. 21 that a cargo ship full of Pennsylvania coal would be sailing out of Baltimore, 5,600 nautical miles across the Atlantic Ocean, Mediterranean and Black Seas to Ukraine, the first such shipment of its kind.

Such shipments hold significance in a variety of ways, and offer a possible window into the Trump administration’s desire to use energy trade to offset the aggressive geopolitical behavior of Russian President Vladimir Putin. In March, Ukraine cut off deliveries of coal from the Russian-controlled region of Donbass, where much of Ukraine’s coal industry resided before the 2014 conflict between Ukraine and Russia began.

Centrenergo, the Ukrainian power utility, has been struggling since March to replace the blocked coal supplies. Now, Latrobe, Pennsylvania-based XCoal Energy will send 700,000 tons of anthracite coal to Odessa over the next several months. The agreement follows a June meeting between Ukrainian President Petro Poroshenko and U.S. President Donald Trump.

This week also marks the first shipment of liquefied natural gas from the United States to the Baltic state of Lithuania, a former Soviet satellite, which until recently was completely dependent on Russian gas supplies. Lithuania and the other Baltic states of Latvia and Estonia have at times been under withering political pressure from their former Cold War patrons, particularly since Moscow occupied the Crimea in 2014.

Trump’s visit to Poland in early July included a speech that highlighted his administration’s desire to exert a counter-force on Russia through the energy markets, “so that you can never be held hostage to a single supplier,” Trump said.

Poland received its first U.S.-based shipment of LNG in July, and the single U.S. LNG export terminal at Sabine Pass on the Texas-Louisiana border has sent out more than 160 cargoes since starting up in February. Several of the cargoes have reached Spain, Italy, the United Kingdom and the Netherlands in the past 18 months.

U.S. LNG export capacity is set to grow five-fold by 2020, potentially making Eastern European economies awash in natural gas, just as many of the long-term delivery contracts signed between Eastern Europe and Russia’s natural gas monopoly Gazprom are set to expire in the early 2020s.


Image by Anatoly Menzhiliy

 

Rep. Meadows introduces bill to lock in regulatory budgeting


Those who champion slowing the growth of the regulatory state earned a victory earlier this year with President Donald Trump’s “two-out-one-in” executive order, requiring federal agencies to eliminate two old regulations for every new one they enact. The order also established a type of regulatory budget that caps the amount of regulatory costs agencies can impose on the economy during a given year.

But as R Street previously has argued, such executive branch actions, particularly in the area of deregulation, are unlikely to be lasting unless they are codified. Codification ensures that deregulatory efforts are locked in and not subject to reversal by a future president.

Toward that end, the latest good news is that H.R. 2623, legislation that effectively would codify Trump’s order, has been introduced in the U.S. House by Rep. Mark Meadows, R-N.C., chairman of the House Freedom Caucus.

Unlike past regulatory budgeting legislation, Meadows' bill would not task Congress with setting the regulatory budget, instead granting that responsibility to the White House Office of Management and Budget. While such a structure may be the best short-term option to codify a regulatory budget, Congress ideally would be the branch responsible for setting how much regulatory cost agencies could impose each year. A further concern is ensuring that OMB has the resources and manpower necessary to institute the regulatory budget.

Regardless, the Meadows bill should be welcomed as a step toward a more sustainable deregulatory effort.


Image by Maythaphorn Piyaprichart

New internet tax threatens privacy of Washington customers


In their zeal to shake a few more tax dollars out of Washington residents’ pockets, state lawmakers are brushing aside legitimate privacy concerns raised recently by civil-liberties groups. Under the new internet sales-tax law signed by Gov. Jay Inslee last month, the Washington Department of Revenue could learn more than most of us want it to know about our online purchases.

State officials vow the information provided to them by online retailers to facilitate the collection of the so-called “use” tax will be held in the utmost confidence. But from police agencies to the Internal Revenue Service, government bureaucracies have far from an unblemished record when it comes to protecting private records.

If you’ve bought nothing weird, then maybe you’ve got nothing to hide. And maybe the retailers, or third-party websites like eBay will do the state’s bidding and collect the tax for the department without turning over any information. Or maybe not. The new law gives out-of-state sellers the option to “voluntarily” collect Washington sales taxes or to provide the names, addresses and purchase information to the revenue folks in Olympia. As a consumer, the decision won’t be yours to make.

The stores must provide purchase amounts rather than a list of specific items. But this can be small comfort for those who patronized, e.g., a mental-illness center, a paraphernalia shop or a company that sells sex toys. Current law requires sellers with a brick-and-mortar presence in the state to collect taxes from in-state consumers, but the tax collectors say they want to “level the playing field.” Your privacy wasn’t much of a concern when they passed the law.

Even if your raciest online purchase is a calendar with cutesy cat photos, you ought to be concerned about the costly implications. There are your personal costs. The new law is, after all, a tax increase on Washington residents’ purchases. Then there’s the likely cost to the general fund, as state officials defend it for years in the federal courts. The state balanced its budget based on revenue assumptions from the tax (an estimated $1 billion over the next four years), but those collections will be put on hold through the length of the trials.

Lawmakers are confident that they are on solid legal ground, because the federal 10th Circuit Court of Appeals, after six and a half years of litigation, upheld Colorado's internet sales tax law. The U.S. Supreme Court recently refused to review the Colorado decision, which cleared the way for that state to begin collecting use taxes on purchases its residents make from out-of-state online retailers.

But there's no guarantee the federal 9th Circuit Court of Appeals, which oversees Washington state matters, will reach the same conclusion. There are other significant differences between the Washington and Colorado laws, even though Washington legislators used Colorado as a model. Those differences, too, could lead to a different outcome.

Both states require sellers that don’t collect the sales tax to provide personal information about online purchasers to their respective revenue departments. But the Washington law applies to companies that gross more than $10,000 a year in sales to in-state residents, whereas Colorado’s threshold is 10 times higher. “Washington’s puts more responsibility on so-called ‘marketplace facilitators’ and other internet ‘middlemen,'” according to a Tacoma News Tribune report.

That Colorado case centered on the U.S. Constitution's Commerce Clause, which governs business among the states. The Direct Marketing Association challenged the law based on a seminal 1992 U.S. Supreme Court ruling (Quill Corp. v. North Dakota) holding that state officials can collect sales taxes from a business only if it has a physical presence in that state. For instance, Seattle-based Amazon has long collected taxes on sales to residents living in Washington.

But the 10th Circuit ruled that “Quill applies only to the collection of sales and use taxes, and the Colorado law does not require the collection or remittance of sales and use taxes. Instead, it imposes notice and reporting obligations.”

Nevertheless, there are many reasons to question the Washington law. The $10,000 threshold imposes a burden on small businesses, given that they will need to maintain detailed reports on buyers in the state. If this law withstands court scrutiny, similar tax schemes will spread like a bad internet rumor. Even the tiniest enterprises, located here and elsewhere, will have to collect data to meet the varied demands of 50 state revenue offices – or face $20,000-plus penalties.

All U.S. internet companies, not just the small ones, will face disadvantages given that Olympia’s tax grabbers will not be able to enforce the statutes on sellers based in Shanghai or New Delhi. And Commerce Clause arguments won’t be the only ones that might tie it up in court.

In 2010, the U.S. district court in Seattle rebuffed efforts by the North Carolina secretary of revenue to receive detailed purchase information from Amazon as part of a tax audit of North Carolinians’ purchases. The court found that “citizens are entitled to receive information and ideas through books, films, and other expressive materials anonymously.”

That's true even when the government is seeking the information for tax purposes rather than to censor. Do we really trust any government agency with such personal information? Unfortunately, we're now at the mercy of the courts and Congress to protect such privacy rights.


Image by koya979

 

The FDA’s words and actions do not match


In a new paper in the New England Journal of Medicine, newly minted Food and Drug Administration Commissioner Scott Gottlieb and Mitch Zeller, longtime director of the FDA Center for Tobacco Products, commit themselves to a science-based regulatory framework that takes into account “the continuum of risk for nicotine-containing products” to reduce tobacco-related addiction, illness and death. To do this, the paper promises an FDA commitment to reduce nicotine levels in cigarettes to “non-addictive levels” and to “foster innovation in less harmful nicotine delivery.”

The proposal to reduce the nicotine content of cigarettes is based on two recent papers by Eric C. Donny and colleagues – one that also appeared in the NEJM and another that appeared in the journal Preventive Medicine. The Preventive Medicine paper outlines the issues that need to be addressed for such regulations to be developed. The NEJM paper details a six-week trial in which smokers were given free low-nicotine cigarettes to use, in addition to their usual cigarettes. The trial showed that smokers of the low-nicotine cigarettes did not smoke more cigarettes on the last day of the trial than they had on the first day. Neither paper provides a sound basis to replace traditional cigarettes as we know them with a product that does not deliver significant amounts of nicotine.

For the past six years that Zeller has directed the Center for Tobacco Products, he has spoken of a "continuum of risk" and said he favored "fostering innovation." Unfortunately, what has been reflected in FDA policy is the direct opposite of those notions. The FDA has imposed no regulatory burden on cigarettes that wasn't already imposed before adoption of the Tobacco Control Act of 2009. Meanwhile, the FDA continues to impose nearly impossible-to-meet requirements for approving any new product for the marketplace, including products that claim lower risk than cigarettes.

Despite overwhelming evidence that e-cigarettes are far lower in risk than cigarettes, and that they do not recruit teens to nicotine who otherwise would not have smoked, the FDA has done nothing to confirm or deny these research findings. Indeed, it has just announced a new anti-e-cigarette campaign.

The smokeless tobacco products currently on the American market have long been known to be far lower in risk than cigarettes. The FDA has done nothing to share this information with the public, and has even proposed a new set of smokeless tobacco regulations that threaten to remove almost all current smokeless products from the market.

FDA regulations continue to mandate that even the smallest change to any tobacco-related product now on the market would require immediate removal of that product, pending a new costly application for FDA approval of the modified product. So much for encouraging innovation.

The bottom line is this: neither science nor the fine words in this latest NEJM piece have anything to do with FDA policies that continue to protect cigarette sales and profits from competition from lower-risk nicotine-delivery products.


Image by Gustavo Frazao

 

How big a bank is too big to fail?


The notion of “too big to fail”—an idea that would play a starring role in banking debates from then to now—was introduced by then-Comptroller of the Currency Todd Conover in testimony before Congress in 1984. Conover was defending the bailout of Continental Illinois National Bank. Actually, since the stockholders lost all their money, the top management was replaced and most of the board was forced out, it was more precisely a bailout of the bank’s creditors.

Continental was the largest crisis at an individual bank in U.S. history up to that time. It has since been surpassed, of course.

Conover told the House Banking Committee that “the federal government won’t currently allow any of the nation’s 11 largest banks to fail,” as reported by The Wall Street Journal. Continental was No. 7, with total assets of $41 billion. The reason for protecting the creditors from losses, Conover said, was that if Continental had “been treated in a way in which depositors and creditors were not made whole, we could very well have seen a national, if not an international crisis the dimensions of which were difficult to imagine.” This is the possibility that no one in authority ever wants to risk having happen on their watch; therefore, it triggers bailouts.

Rep. Stewart McKinney, R-Conn., responded during the hearing that Conover had created a new kind of bank, one “too big to fail,” and the phrase thus entered the lexicon of banking politics.

It is still not clear why Conover picked the largest 11, as opposed to some other number, although presumably it was because he needed to make Continental appear somewhere toward the middle of the pack. In any case, here were the 11 banks said to be too big to fail in 1984, with their year-end 1983 total assets – which, to current banking eyes, look medium-sized:

[Chart: The 11 banks deemed too big to fail in 1984, with year-end 1983 total assets]

If you are young enough, you may not remember some of the names of these once prominent banks that were pronounced too big to fail. Only two of the 11 still exist as independent companies: Chemical Bank (which changed its name to Chase in 1996 and then merged with J.P. Morgan & Co. in 2000 to become JPMorgan Chase) and Citibank (now Citigroup), which has since been bailed out, as well. All the others have disappeared into mergers, although the acquiring bank adopted the name of the acquired bank in the cases of Bank of America, Morgan and Wells Fargo.

The Dodd-Frank Act is claimed by some to have ended too big to fail, but the relevant Dodd-Frank provisions are actually about how to bail out creditors, just as was the goal with Continental. Thus in the opposing view, it has simply reinforced too big to fail. I believe this latter view is correct, and the question of who is too big to fail is very much alive, controversial, relevant and unclear.

Just how big is too big to fail?

Would Continental’s $41 billion make the cut today? That size now would make it the 46th biggest bank.

If we correct Continental’s size for more than three decades of inflation and express it in 2016 dollars, it would have $97 billion in inflation-adjusted total assets, ranking it 36th as of the end of 2016. Is 36th biggest big enough to be too big to fail, assuming its failure would still, as in 1984, have imposed losses on hundreds of smaller banks and on large amounts of uninsured deposits?
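
As a rough illustration of the mechanics of that adjustment, the sketch below restates the 1983 figure in 2016 dollars using a consumer-price-index ratio. The CPI values shown are approximate annual averages chosen purely for illustration, which is why the result lands near, but not exactly at, the $97 billion figure cited above.

```python
# Restating Continental Illinois' 1983 assets in 2016 dollars with a
# price-index ratio. The CPI-U values below are approximate annual
# averages, so the output differs slightly from the article's $97 billion.
assets_1983 = 41e9          # Continental Illinois total assets, year-end 1983
cpi_1983 = 99.6             # approximate CPI-U annual average, 1983
cpi_2016 = 240.0            # approximate CPI-U annual average, 2016

assets_2016_dollars = assets_1983 * (cpi_2016 / cpi_1983)
print(f"${assets_2016_dollars / 1e9:.0f} billion")  # roughly $99 billion
```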

If a bank is a “systemically important financial institution” at $50 billion in assets, as Dodd-Frank stipulates, does that mean it is too big to fail?  Is it logically possible to be one and not the other?

Let us shift to Conover’s original cutoff, the 11th biggest bank. In 2016, that was Bank of New York Mellon, with assets of $333 billion. Conover would without question have considered that—could he have imagined it in 1984—too big to fail. But now, is the test still the top 11?  Is it some other number?

Is $100 billion in assets a reasonable round number to serve as a cutoff? That would give us 35 too big to fail banks. At $250 billion, it would be 12. That’s close to 11. At $500 billion, it would be six. We should throw in Fannie Mae and Freddie Mac, which have been demonstrated beyond doubt to be too big to fail, and call it eight.

A venerable theory of central banking is always to maintain ambiguity. A more recent theory is to have clear communication of plans. Which approach is right when it comes to too big to fail?

My guess is that regulators and central bankers would oppose anything that offers as bright a line as “the 11 biggest”; claim to reject too big to fail as a doctrine; strive to continue ambiguity; and all the while be ready to bail out whichever banks turn out to be perceived as too big to fail whenever the next crisis comes.


Image by Steve Heap

 

PACER might be the government’s worst website


The following is a guest post by Tom Lee, former chief technology officer for the Sunlight Foundation.


When hackers are able to steal your money, it’s usually safe to call that a website’s least appealing feature. Astonishingly, that’s not true of PACER—the Public Access to Court Electronic Records system, run by the Administrative Office of the Courts—which charges for downloads of essential federal court records. In its case, hackability comes second to the bad and perhaps even illegal deal that it offers the public.

The exploit is real, mind you. The good people of the Free Law Project uncovered it months ago as part of their work to democratize legal information. Now that PACER has patched the vulnerability, FLP has disclosed the gory details.

The problem revolves around a cross-site request forgery attack. When you connect to a website, it’s normally able to store small amounts of data called “cookies” on your computer. Any time your browser makes a request to that site, it will send those cookies, along with the request. Sites can tell if a request comes from a logged-in user by examining the request for unique cookie values that were set after a successful authentication attempt and comparing those values to copies stored in the site’s database.

Code running on a different malicious website that you visit can’t look at the cookies of other websites. But it can make requests to other websites, and those requests will carry the other sites’ cookies. If those cookies identify a logged-in user, the malicious site can make invisible requests that trigger real actions on that user’s behalf on the target site.

There are standard ways to detect and defend against this, but PACER hadn’t used them. Although there is no proof that it happened, a malicious site could have made requests on behalf of logged-in users, downloading documents and racking up fees.
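
For readers curious what those standard defenses look like, below is a minimal sketch of the common “synchronizer token” approach, written in Python with Flask. The route names, form and billing message are illustrative assumptions for this post, not PACER’s actual code: the point is only that a forged cross-site request can send the victim’s cookies but cannot read the secret token embedded in the page, so it cannot echo that token back.

```python
# Minimal sketch of the "synchronizer token" CSRF defense (illustrative only).
import secrets
from flask import Flask, session, request, abort, render_template_string

app = Flask(__name__)
app.secret_key = "replace-with-a-real-secret"  # signs the session cookie

FORM = """
<form method="post" action="/download">
  <input type="hidden" name="csrf_token" value="{{ token }}">
  <button type="submit">Download document</button>
</form>
"""

@app.route("/")
def show_form():
    # Generate a random token, remember it in the session, embed it in the page.
    token = secrets.token_hex(32)
    session["csrf_token"] = token
    return render_template_string(FORM, token=token)

@app.route("/download", methods=["POST"])
def download():
    # A forged request from a malicious site carries the victim's cookies,
    # but it cannot read this page, so it cannot supply the matching token.
    if request.form.get("csrf_token") != session.get("csrf_token"):
        abort(403)
    return "Document served (and fees accrued)."  # the real, intended action
```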

That’s bad. But it’s not the worst thing about PACER—that would be the fees themselves. PACER makes some kinds of documents free, but for many others, it charges 10 cents per page. Barring some truly incredible technical mistakes, that number is vastly more than the cost of serving a page of content. And it has remained at that level for many years, despite advancing technology and falling bandwidth and storage costs.

Legal actions often involve huge page counts, which means that PACER fees add up. And they render some kinds of research and scholarly work totally impractical.

Even worse, those fees might be illegal. The Administrative Office of the Courts is barred by the E-Government Act of 2002 from charging more for PACER than it costs to maintain the system. But there is evidence that AO is not in compliance with the law. In 2014, PACER collected $145 million in fees. Five years earlier, it had been projected to cost $30 million per year to maintain. Many suspect that PACER fees are being used to subsidize other line items in the agency’s budget.

A class-action lawsuit is underway that aims to untangle all of this; if you used PACER between 2010 and 2016, you might be a part of it. But even if you’re not, you can still help to democratize the system’s information. Since the government doesn’t hold copyright over PACER records, there’s nothing stopping you from sharing them with the world after you pay your 10 cents per page. The RECAP project is run by the Free Law Project and Princeton University’s CITP program, and provides browser extensions that automate and centralize this process. It will let you download records from the RECAP archive when they’re available, or contribute newly purchased PACER records to the archive automatically when they’re not.

PACER doesn’t charge for balances less than $15 per quarter, so if you’re feeling civic-minded, why not download RECAP, make a PACER account and liberate some court records for the public good? Now that they’ve patched their vulnerability, it might even be safe to do so.


Image by fizkes

 

How messaging smart flood planning as ‘climate’ policy led to its demise


When a bunch of reporters called me to discuss President Donald Trump’s decision to turn back Obama-era flood protection standards, I was happy to criticize the administration, because I think the standards were one of the few unalloyed good things the Obama administration did. They sent a clear message that federal taxpayers won’t pay to build in flood-prone areas and that federally funded infrastructure will be designed to stand up to nature.

The Federal Flood Risk Management Standards, promulgated by a January 2015 executive order, drew on the principles of President Ronald Reagan’s Coastal Barrier Resources Act, which forbade development subsidies for barrier islands and barrier beaches while leaving the private sector free to do as it pleased. This is a great policy.

But as I wrote in the Weekly Standard not long after the standards came out, the Obama administration made a serious political (and, arguably, factual) error by choosing in their public statements to label the standards a climate-change-adaptation measure. Now, it’s absolutely true that greenhouse gas emissions have resulted in thermal expansion of seawater and some ice melt in polar regions. These factors (mostly the former) have resulted in sea-level rise. This results in more flooding. In fact, an increase in “sunny day” flooding is one of the very few easy-to-observe widespread phenomena that we can link to greenhouse gas emissions in a convincing fashion.

That said, the areas most at risk now and in the near future are almost all places where climate change isn’t the dominant concern. Changes in the levels of continental plates, as well as land loss caused by hydrological projects and other human activity, can have local impacts hundreds of times larger than those caused by global warming. Purely natural processes like erosion and seasonal plant growth also can change which particular areas will flood, how badly and how often. In any given area, these factors are often far more likely to make the difference than sea-level rise, which generally proceeds at a scale noticeable only after decades have passed. The folks who wrote the Obama executive order—I talked with them a bunch—knew this well and wrote the order in a neutral fashion to deal with whatever was causing flooding.

In its press statements and publicity, however, the Obama administration insisted on positioning the EO as a response to climate change. While any number of factors—including a genuine desire to cut red tape surrounding infrastructure projects, pressure from builders and his own career as a real-estate developer—each played a role in Trump’s decision to rescind the order, I can’t help but think that a simple distaste for anything the Obama administration labeled as “climate policy” may have been the driving motivation to repeal the standards.

In part because climate change policy has become such a political hot potato—and because so many on the left have turned it into a culture war issue—focusing on climate change was clearly the wrong move for the Obama administration. As a result, the wrong messaging may have contributed to a very unfortunate policy decision.


Image by MaryValery

 

Clark Packard talks NAFTA renegotiation on Fox


Representatives of President Donald Trump, Mexican President Enrique Peña Nieto and Canadian Prime Minister Justin Trudeau are set to meet in Washington today, and over the next three days, for the first round of talks to renegotiate the North American Free Trade Agreement.

R Street Trade Policy Analyst Clark Packard, who back in June co-authored R Street’s comments to the Office of the U.S. Trade Representative on the subject of NAFTA renegotiation, discusses the history of the agreement, its benefits and ways it still could be improved in a new FoxNews.com video profile, embedded below.

Andrew Heaton on how to stop patent trolls

In his latest video for Reason’s Mostly Weekly series, R Street Associate Fellow Andrew Heaton takes on the subject of patent trolls and what to do about them — particularly in light of a recent decision by the U.S. Court of Appeals for the Federal Circuit that Personal Audio LLC doesn’t own the patent on the entire podcasting industry.

Kosar talks postage rates on APM’s Marketplace

R Street Vice President of Policy Kevin Kosar appeared on American Public Media’s Marketplace Morning Report to discuss efforts by the U.S. Postal Service to gain more flexibility to raise rates without congressional approval, and how that could allow it to cross-subsidize lines of business in which it competes directly with the private sector.

R Street’s voting guide for SXSW panels!


VOTING ENDS AUGUST 25!!!!

We’ve put together some great policy panels for next year’s SXSW conference in Austin, Texas. BUT WE NEED YOUR HELP to get in the final conference program!

Please vote for us and help bring free-market ideas to Austin’s annual gathering of technologists, activists and entrepreneurs.

Panels featuring R Streeters:

Global Ecosystems and the Policies that Support Them: CLICK TO VOTE!

  • Featuring: Melissa Blaustein, founder and CEO, Allied for Startups;
  • Zach Graves, tech policy program director and senior fellow, R Street Institute;
  • David McCabe, technology reporter, Axios; and
  • U.S. Rep. Blake Farenthold, R-Texas.

How Scientology and Porn Shaped the Internet: CLICK TO VOTE!

  • Featuring: Sasha Moss, technology policy manager, R Street Institute;
  • Christian Dawson, co-founder and executive director, Internet Infrastructure Coalition (i2C);
  • Aaron Perzanowski, professor of law, Case Western Law School; and
  • Katie Oyama, Google.

RoboCop: Is Artificial Intelligence the Future of Criminal Justice? CLICK TO VOTE!

  • Featuring: Arthur Rizer, national security and justice policy director, R Street Institute;
  • Ryan Calo, assistant professor, University of Washington School of Law;
  • Heather West, senior policy manager, Americas principal, Mozilla; and
  • Vikrant Reddy, senior research fellow, Charles Koch Institute.

Virtual Reality Codes of Conduct in the Virtual Wild West: CLICK TO VOTE!

  • Featuring: Anne Hobson, associate fellow in technology policy, R Street Institute;
  • James Hairston, head of public policy, Oculus;
  • Alexis Kramer, legal editor, Bloomberg BNA; and
  • Matthew Schruers, adjunct professor of law, Georgetown University Law Center.

We’ve also put together a great list of policy panels from our friends! 

If you have any panels you’d like us to add to this list, please email Sasha Moss: smoss@rstreet.org.

 

Congress may be more bipartisan than you think


At the Library of Congress’ Congress and History conference, political scientists James Curry and Frances Lee presented their working paper “Non-Party Government: Bipartisan Lawmaking and Theories of Party Power in Congress.” In the paper, the authors examine the degree to which increases in polarization and the centralization of power in Congress have resulted in strictly partisan lawmaking.

In short, they want to know if the common characterization of Congress is accurate: in our current era of hyperpolarization and confrontational politics, do majorities in Congress skip bipartisan legislating and pass bills over the strong objections of the minority? Turns out – not so much. Curry and Lee “find that lawmaking today is not significantly more partisan than it was in the 1970s and 1980s.”

Such a conclusion is a bit counterintuitive, given seemingly constant claims that parties are unwilling and unable to work together. Both parties have accused the other of ramming legislation down the throats of the minority without even a semblance of compromise or debate. Democrats have most recently leveled that charge at the GOP’s maneuvers regarding the American Health Care Act.

The perception that majorities run roughshod over minorities is based on a couple of observable characteristics of recent Congresses. First, institutionally, increased polarization has diminished the overlap in policy preferences between the parties, theoretically decreasing the likelihood of reaching bipartisan agreements. Additionally, stronger, more cohesive political party organizations have developed, which have subsequently centralized power in leadership offices in order to facilitate partisan lawmaking. As articulated by the authors, “Members have provided their leaders a bevy of procedural and agenda-setting tools to structure the legislative process in ways that stand to benefit the majority party.” Among these tools are the bypassing of the traditional committee-driven legislative process in favor of leadership-managed policy creation and the granting to leadership of a near monopoly on deciding what issues come up for a vote.

Both of these factors—polarization and more cohesive parties with centralized power—lead observers to hold two important expectations:

  1. Bills that are actually signed into law are likely to be passed without bipartisan support;
  2. The majority party is more effective at realizing its legislative agenda, in spite of minority opposition.

Curry and Lee, however, show that neither of these expectations is supported by the data.

For their analysis, the authors compile all passage votes in both chambers for bills that became law in the 93rd-113th congresses (1973-2014). Additionally, Curry and Lee use a subset of bills identified as “landmark legislation” by fellow political scientist David Mayhew to examine whether these more significant bills received less bipartisan support due to their increased impact and salience.

A brief discussion of three key findings from the paper is below, all of which suggest that lawmaking in Congress still generally requires, and receives, bipartisan support.

[Figure: Average percentage of minority-party support on House passage votes for bills that became law, 1973-2014]

Most laws, including landmark legislation, are passed with strong bipartisan support. The above figure shows the average percentage of minority-party support on all bills that became law during each congress from 1973 to 2014 in the House of Representatives. Contrary to expectations, the figure shows no clear trend line of decreased minority support. On all bills that became law during this period, more than 60 percent of minority lawmakers voted in favor of passage on average, and in many congresses more than 80 percent of the minority voted yes. In fact, in the most recent congresses where polarization is most intense, we find the percentage of minority support is even higher than in less-partisan congresses of previous decades.

On landmark laws we see more variation in minority support across congresses, but still find that, on average, more than 65 percent of minority lawmakers vote in favor of these laws. Only in two congresses, the 103rd and the 111th, does the percentage of minority support fall below 50 percent. Similar patterns are found in the Senate, though not discussed here. (Please see the linked paper for the data and analysis for the upper chamber.)

[Figure: Percentage of House-passed laws on which a majority of the minority party voted no (minority roll rate), 1973-2014]

Only rarely does the majority pass laws over the opposition of a majority of the minority party. The above figure shows the percent of laws that were passed in the House despite a majority of the minority voting no – this is referred to as the minority getting rolled by the majority. On average, the minority roll rates were less than 15 percent for all laws passed during the period under study. In only a handful of congresses does the roll rate get above 25 percent, with the 103rd Congress showing the highest roll rate of more than 30 percent. Again, we see no upward trend in roll rates despite stronger parties and increased centralized power in leadership offices.

Roll rates are moderately higher in the House on landmark laws, particularly in more recent congresses. However, even on these major bills, the minority is rolled only about 30 percent of the time. Notable exceptions are the 103rd and 111th congresses, in which the minority was rolled on more than 70 percent of landmark laws.

In the Senate, there is more variation in roll rates across congresses, but on average, the minority is rolled on less than 15 percent of all laws. On landmark laws in the Senate, there is only a slight increase in roll rates, with 19 percent of major bills being passed with the majority of the minority voting no.

[Figure: Share of majority-party agenda items on which the majority achieved most, some or none of its policy goals, 1973-2017]

Despite increased majority party tools, congressional majorities do not pass a greater portion of their legislative agenda than congresses in less partisan eras. In addition to looking at levels of minority support on legislation and roll rates, Curry and Lee also assess the degree to which majorities are able to enact their legislative agendas. Because of the increased cohesion of parties and tools granted majority leaders, we would expect to find that majorities are more effective in realizing their policy goals. Instead, the authors find that “congressional majorities rarely are able to enact new laws addressing priority agenda items that achieve most of what they set out to achieve. Far more frequently, majorities achieve none of what they set out to achieve or just some of it.”

The figure above displays the percentage of majority party agenda items enacted from 1973 to 2017, categorizing the majority’s success in accomplishing some, most or none of its policy goals on prioritized issues. While there is notable variation in the majority party’s ability to implement its agenda, the most frequent outcome is that the majority realizes none of its legislative goals. Only rarely does the majority get most of what it wants on agenda items, particularly in more recent congresses. Even in congresses with unified party control, the majority struggles to get even some of what it’s after. Having congressional majorities—as Senate Majority Leader Mitch McConnell, R-Ky., and House Speaker Paul Ryan, R-Wis., could tell you—does not automatically translate to the majority dictating policy terms to the minority. Instead, it appears the majority must make concessions on policy goals to ensure passage.

In spite of stronger, more cohesive parties, as well as more powerful leaders with tools to execute partisan lawmaking, most laws passed in Congress still win the votes of large percentages of the minority. Contrary to consistent claims of majority party dominance over the minority, laws, including landmark bills, are typically passed with majorities of both parties in support.

Here’s the bottom line, in the words of the authors:

After decades of partisan change and institutional evolution in Congress, lawmaking remains a process of bipartisan accommodation.


Image by Lightspring

 

Courts deal another blow to Obama climate legacy


Attempts by the Environmental Protection Agency to regulate greenhouse gases suffered another setback Tuesday, when a panel of the U.S. Court of Appeals for the D.C. Circuit invalidated an Obama-era EPA rule governing the use of hydrofluorocarbons (HFCs).

HFCs are a greenhouse gas. They’re less well-known than, say, carbon dioxide, but they still have a warming effect when present in the atmosphere, and the rapid rise of HFC emissions in recent years has been a growing cause of concern for policymakers.

Ironically, HFC use has been encouraged by EPA regulation, which authorizes manufacturers to use HFCs as a replacement for other substances that negatively affect the ozone layer. The regulation struck down this week was EPA’s belated attempt to walk back this legacy, telling companies to forget what it said previously, because HFCs are bad now.

The problem is that the statute EPA claimed gave it the authority to restrict HFCs is about restricting ozone-depleting substances. But as everyone (including the EPA) concedes, HFCs don’t deplete ozone. According to the court, since the EPA had already OK’d manufacturers using HFCs as replacements for actual ozone-depleting substances, it couldn’t use the law governing ozone to bootstrap regulation of HFCs.

All this is somewhat technical, but it raises a broader issue. The EPA’s HFC regulation is one example of a larger strategy adopted by the Obama administration and some in the environmental movement to circumvent Congress when it comes to climate change policies. Instead of working out a viable legislative solution that would deal with the problem, the administration looked for ways to commandeer existing statutory and regulatory provisions as a basis for limiting greenhouse gas emissions. Often, this involved stretching the meaning or purpose of particular provisions until they bore little resemblance to how they traditionally were used. The biggest example of this, of course, was the Clean Power Plan.

Now I can almost hear the shouting as I type these words. Obama had no choice! Republicans in Congress were obstructionists, and never would have passed anything. This overlooks that Democrats controlled the House of Representatives and had a filibuster-proof majority during the first years of Obama’s presidency and still couldn’t enact their climate plan, but let’s leave that aside. My point is this: whatever the rationale of trying to act on climate without Congress, recent events have shown that this is a very fragile strategy.

When the EPA stretches its authority to act without congressional sanction, it risks having its work undone by the courts. And even where an EPA action might survive judicial scrutiny, it is vulnerable to being revoked by a future EPA with a different political bent. What can be done without Congress probably can be undone without Congress. This week’s court decision is simply more evidence that any lasting action on climate is going to have to involve Congress.


Image by Evan El-Amin

Diverse voices unite to ask Congress not to gut Section 230


It’s hard to argue against a bill as unassailably titled as the Stop Enabling Sex Traffickers Act, introduced in the Senate last week as S. 1693. The measure already enjoys broad bipartisan support and boasts 27 cosponsors.

However, in its effort to punish online sex traffickers, this legislation appears likely to have unintended damaging consequences wholly unrelated to the issue. Since its introduction, a large array of voices—including civil liberties groups, think tanks, startups and tech industry groups—have come out, despite obvious reputational risks, to point out ways the bill would be counterproductive and damaging to internet freedom.

The proposed legislation includes overly broad language that would modify Section 230 of the Communications Decency Act, which provides online platforms a limited liability shield for user-generated content. If Section 230 were revised as proposed, online platforms would be liable for the behavior of their users. Critics of the legislation agree that without these protections, America’s unique and innovative internet ecosystem will collapse.

As R Street wrote in a bipartisan coalition letter with other think tanks and civil-society organizations, this well-intentioned bill threatens to weaken the pillars of internet freedom. Human rights and civil liberties organizations have voiced concerns that the bill would lead to increased censorship across the web. Moreover, it would hinder existing voluntary incentives to stop trafficking and discourage platforms’ proactive efforts to address evidence of trafficking, for fear of being implicated and prosecuted.

Currently, online communications pass through multiple intermediaries—including web-hosting platforms, email providers, messaging services, search engines, online advertisers and more—all of whom depend on protection from misdirected legal threats. Without the protection of Section 230, each intermediary could face potential lawsuits based on the millions of videos, posts and pictures uploaded to their platforms every day. Many stakeholders have pointed out that it’s unlikely the bill will do anything to combat trafficking, but it will certainly invite trial lawyers to bring a deluge of frivolous lawsuits that target law-abiding platforms.

The Electronic Frontier Foundation cited Section 230 as “one of the most important laws protecting free expression online.” To clarify, Section 230 does not provide blanket immunity and has never prevented intermediaries from facing federal criminal charges. The U.S. Justice Department has every right to pursue anyone who violates trafficking statutes on internet platforms without making any changes to existing law.

“If online intermediaries were held responsible for the actions of each and every user, the potential liability would be so massive that no reasonable person would start or invest in such a business,” the Consumer Technology Association stated.

A multitude of tech coalitions have also highlighted how the overly broad legislation would harm legitimate U.S. tech companies. Without the protection provided by Section 230, all internet platforms would be forced into self-censorship and resource-intensive review of all user-generated content. While some tech giants might be able to shoulder the cost, the burden undoubtedly would stifle development of smaller websites and startups. Law-abiding citizens would be left dealing with the repercussions, while bad actors could easily escape by moving abroad or changing their URLs.

Section 230 promotes positive legal behavior. The tech industry has been cooperative in the fight against trafficking, working closely with law enforcement to identify potential illegal activities. The Copia Institute and Engine Advocacy highlighted in their letter how the tech industry has created its own tools, combining cutting-edge technology and big data to eradicate trafficking in the online sphere. This bill could have a chilling effect on the industry’s relationship with law enforcement. Trade associations spanning the breadth of the U.S. media and technology industries have described how the measure would be counterproductive to those companies’ efforts to combat sex trafficking. Ultimately, it would create incentives not to filter proactively for evidence that might implicate companies in criminal lawsuits.

New legislation is not necessary to hold actors accountable for their participation in illegal activity. The internet is the product of user-generated content, and the ramifications of a bill like this would be devastating.


Image by KreativKolors

 

Free to Brew: Alabama’s war on margaritas

Cameron Smith uncovered Alabama’s overzealous alcohol control board attempting to ban the sale of pitchers of margaritas to adults. He explains how his team was able to help pressure the nannies in Alabama to reverse their decision and let consenting adults voluntarily purchase pitchers of margaritas once again. He also talks about how people can replicate that success elsewhere!

Why should conservatives care about urbanism and city development?

Jonathan Coppage, visiting senior fellow with the R Street Institute, where he researches urbanism and the built environment, joins host Gracy Olmstead on this episode of Federalist Radio. They discuss the ways that design can have impact on our communities and neighborhoods.

“Building a house to engage and to face the street is the first step of reviving a public space,” he said. “Having a public space that orients people towards it is not just part of good community foundation…it’s part of public safety.”

https://thefederalist.com/2017/07/31/conservatives-care-urbanism-city-development/

They discuss Jane Jacobs, Wendell Berry, and others who have written about the spaces in which we live.

What the budget process can tell us about the state of the Senate


Congress is running out of time to fund the federal government for the upcoming fiscal year that begins Oct. 1.

In July, the House of Representatives passed four appropriations bills bundled together in a so-called minibus. But senators chose to leave town for their August recess rather than take up that spending package.

And there won’t be much time to do so when they return in September. The Senate is currently scheduled to be in session for only 17 days next month. The House and Senate will be on the job at the same time for only 12 of those days.

That doesn’t leave a lot of time for the Senate to take up and debate the House-passed minibus, much less the other eight appropriation bills that have yet to be considered by the full House or Senate. A short-term continuing resolution to keep the government open while Congress finishes its work appears inevitable.

Often overlooked in reporting on this state of play is the fact that Congress has yet to pass a budget resolution for the fiscal year that begins at the end of next month. This is significant, because the budget provides the framework in which the appropriations process unfolds. That is, it governs annual spending decisions in the House and Senate. As such, its consideration is meant to precede that of the appropriations bills.

But that rarely happens these days.

Instead, Congress routinely fails to pass a budget at all. For example, Congress passed only two budgets in the seven years since 2010. And only one of those (in 2016) can be thought of as a budget in any meaningful sense. Congress passed the other one (in 2017) simply to make it possible for Republicans in the House and Senate to repeal and replace Obamacare via reconciliation. Members were focused on the budget’s reconciliation instructions and not its top-line spending, revenue and debt numbers.

A recent paper from Brookings Institution Fellow Molly Reynolds and the Center for Effective Public Management tackles this phenomenon and, in the process, provides valuable insight into why the Senate has been reluctant to take up a budget in recent years.

According to Reynolds, two developments are to blame. First, the budget process has become a partisan exercise. This aligns with how we typically think about the resolution itself. That is, as a symbolic document reflecting the priorities and governing agenda of the majority party. It is also hard to imagine a policy area that generates a comparable degree of conflict on such a consistent basis, given the controversial nature of our budgetary politics today.

As a result, budget votes have become party-line affairs, where senators from one side of the aisle reflexively line up in opposition to those on the other. In this environment, members of the minority party rarely cross over to support the majority’s budget.

One consequence of this is that it is now harder for Senate majorities to pass a budget when they are divided. Achieving party unity is made even more difficult with the strict statutory limits placed on defense and nondefense discretionary spending by the Budget Control Act of 2011.

Given that Senate minorities cannot obstruct budget resolutions, this dynamic also provides insight into how we should expect the institution to operate if a majority uses the nuclear option to eliminate the legislative filibuster in the future. If recent experience with the budget is any guide, empowering a majority to pass measures in the Senate unencumbered by the minority will not necessarily guarantee a sudden burst of legislative productivity.

Reynolds also suggests that the Senate’s reluctance to consider the budget resolution may be driven by the broader breakdown in the institution’s decision-making process more generally. That is, members increasingly offer more floor amendments during the consideration of the budget because it represents one of the few instances when they know they will have the opportunity to do so.

Overall amendment activity in the Senate has declined. While the number of amendments that are filed to legislation considered on the floor has remained relatively consistent, the number of those amendments that are eventually offered (i.e., made pending) to bills has dropped considerably. The reason is that leaders from both parties have utilized a complex assortment of rules and practices to exert greater control over the Senate floor than at any point in the institution’s history. The principal means by which they establish such control is their ability to fill the amendment tree, or offer the maximum allowable number of amendments to legislation. No amendments are in order once all the extant branches on the tree are occupied. As a result, senators are blocked from offering their own amendments.

But it is harder for leaders to block amendments during the budget’s consideration because members can continue offering amendments during the so-called vote-a-rama period once all debate time on the resolution has expired. The budget thus offers members a relatively easy way to engage in credit-claiming and position-taking activities on the Senate floor.

In highlighting these problems, Reynolds underscores the various ways in which the contemporary budget process is in tension with itself. Acknowledging the trade-offs inherent in such contradictions is an important first step in designing reforms that can help reverse Congress’ current trend of not considering a budget.

Several of these reforms are reviewed in the paper, including setting an overall limit on the number of amendments a senator may offer during floor consideration and creating a cloture-like filing deadline for those amendments to give members more time to review them before having to cast their votes.

Another possibility is to revise the contents of a budget resolution to include more information to help rank-and-file members and their staff independently assess the budget. Currently, budget enforcement mechanisms are tied to committee allocations, but few members (and few staff outside of the leadership and budget committees) fully understand how those allocations relate to the functional categories in the budget resolution text. They are not publicly available until they are published in the conference report’s statement of managers at the end of the process. Requiring the budget’s major functional categories to be replaced in the text, or at least supplemented, with specific committee allocations for budget authority, outlays, contract authority (where appropriate), and revenues (where appropriate) would enhance senators’ ability to evaluate the impact of any amendments offered, as well as the underlying resolution itself, on their priorities for the upcoming year.

Reforms like these would certainly make it easier for members to weigh the merits of various amendments and the budget resolution itself. But Reynolds concludes with the astute observation that such changes may be insufficient, so long as senators are not able to offer amendments freely to other measures on the Senate floor. That is, the budget resolution and vote-a-rama are likely to remain an outlet for pent-up member demand to participate in the legislative process without changes to how the Senate makes decisions more generally.


Image by nelzajama

 

Juvenile justice reform finally clears its U.S. Senate hurdle


The following post was co-authored by R Street Research Assistant Megha Bhattacharya.


It’s been 10 years since the expiration of the Juvenile Justice and Delinquency Prevention Act, which created America’s federal standards for the treatment of juvenile offenders. Efforts to reauthorize the legislation have failed repeatedly.

However, the Senate last week passed its version of the JJDPA reauthorization bill—S. 860, the Juvenile Justice and Delinquency Prevention Reauthorization Act—a development that gives hope to juvenile justice reform advocates across the country.

Previous reauthorization attempts faced significant hurdles. Sen. Tom Cotton, R-Ark., held the bill last year over an objection to the phase-out of the valid court order (VCO) exception. VCOs allow state and local systems to detain youth for committing so-called “status offenses” like running away from home, truancy, underage smoking and curfew violations – things that wouldn’t be crimes but for the age of the perpetrator. The VCO exception, Cotton argued, grants state courts additional options when dealing with juvenile offenders.

But Cotton’s opposition prompted a hold from Sen. Rand Paul, R-Ky., who stated he would not support the bill without the phase-out. Ultimately at an impasse, last year’s negotiations ran out of time.

For reauthorization to be successful, both the House and Senate bills must be agreed upon in conference committee and then passed by both chambers of Congress. Leaders on the issue in the House released a statement shortly after news of S. 860’s passage, expressing commitment to crafting a final reauthorization bill alongside their Senate colleagues.

Senate Judiciary Committee Chairman Chuck Grassley, R-Iowa, and Sen. Sheldon Whitehouse, D-R.I., are leading the Senate effort. It is anticipated the bill will reach the president’s desk before the end of this congressional session.


Image by Air Images

Virgin Islands follow Puerto Rico into the debt day of reckoning


What do Puerto Rico and the U.S. Virgin Islands have in common?  They are both islands in the Caribbean, they are both territories of the United States and they are both broke.

Moreover, they both benefited (or so it seemed in the past) from a credit subsidy unwisely granted by the U.S. Congress: having their municipal bonds be triple-tax exempt everywhere in the country, something U.S. states and their component municipalities never get. This tax subsidy helped induce investors and savers imprudently to overlend to both territorial governments, to finance their ongoing annual deficits and thus to create the present and future financial pain of both.

Puerto Rico, said a Forbes article from earlier this year—as could be equally said of the Virgin Islands—“could still be merrily chugging along if investors hadn’t lost confidence and finally stopped lending.” Well, of course:  as long as the lenders foolishly keep making you new loans to pay the interest and the principal of the old ones, the day of reckoning does not yet arrive.

In other words, both of these insolvent territories experienced the Financial Law of Lending. This, as an old banker explained to me in the international lending crisis of the 1980s, is that there is no crisis as long as the lenders are merrily lending. The crisis arrives when they stop lending, as they inevitably do when the insolvency becomes glaring. Then everybody says how dumb they are for not having stopped sooner.

Adjusted for population size, the Virgin Islands’ debt burden is of the same scale as that of Puerto Rico. The Virgin Islands, according to Moody’s, has public debt of $2 billion, plus unfunded government pension liabilities of $2.6 billion, for a total $4.6 billion. The corresponding numbers for Puerto Rico are $74 billion and $48 billion, respectively, for a total $122 billion.

The population of the Virgin Islands is 106,000, while Puerto Rico’s is 3.4 million, or 32 times bigger. So we multiply the Virgin Islands’ obligations by 32 to see how they compare. This gives us a population-adjusted comparison of $64 billion in public debt, and unfunded pensions of $83 billion, for a total $147 billion. They are in the same league of disastrous debt burden.
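
The back-of-the-envelope arithmetic behind that comparison can be reproduced in a few lines. The figures below are simply the ones cited above, and rounding the population ratio to 32 matches the text.

```python
# Population-adjusted comparison of Virgin Islands and Puerto Rico obligations.
# Dollar figures are in billions, as cited in the text (Moody's for the VI).
vi_debt, vi_pensions = 2.0, 2.6
pr_debt, pr_pensions = 74.0, 48.0
vi_pop, pr_pop = 106_000, 3_400_000

scale = round(pr_pop / vi_pop)  # Puerto Rico is roughly 32 times as populous
print(f"VI public debt, scaled up:  ${vi_debt * scale:.0f} billion")              # ~$64 billion
print(f"VI pensions, scaled up:     ${vi_pensions * scale:.0f} billion")          # ~$83 billion
print(f"VI total, scaled up:        ${(vi_debt + vi_pensions) * scale:.0f} billion")  # ~$147 billion
print(f"Puerto Rico total:          ${pr_debt + pr_pensions:.0f} billion")        # $122 billion
```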

What comes next? The Virgin Islands will follow along Puerto Rico’s path of insolvency, financial crisis, ultimate reorganization of debt, required government budgetary reform and hoped-for economic improvements.

A final similarity: The Virgin Islands’ economy, like that of Puerto Rico, is locked into a currency union with the United States from which, in my opinion, it should be allowed to escape. This would add external adjustment to the imperative internal adjustment, as the debt day of reckoning arrives.


Image by Peter Hermes Furian

 

Free-marketers, environmentalists both have reasons to hate the RFS


The Renewable Fuel Standard, created more than a decade ago, remains the source of strong divisions today. But as an Aug. 1 hearing of the Environmental Protection Agency showed, it also can be the source of rare bipartisan agreement, with experts from across the political spectrum testifying to the need to update and reform the RFS.

Under terms of the Energy Policy Act of 2005, the RFS “requires a certain volume of renewable fuel to replace or reduce the quantity of the petroleum-based transportation fuel, heating fuel, or jet fuel.” Two years later, the Energy Independence and Security Act of 2007 updated the RFS and set a projection for the volume of renewable fuels, particularly ethanol, that are mandated to be mixed into the nation’s fuel supply.

Under current projections, by 2022, 15 billion gallons of corn-based ethanol and 2.1 billion gallons of non-corn biofuel will be required in the nation’s fuel supply. While these numbers simply continue existing statutory requirements, both environmental and free-market groups have noted the updated volumes will have harmful effects on the fuel market and on car engines, as well as contribute to pollution from farm runoff.

Before the RFS was passed, oil companies already had been producing gasoline with a 10 percent blend of ethanol—what’s commonly called E10—as corn-based ethanol is largely cheaper than its counterparts derived from petroleum. However, the RFS mandates do not stop at E10. In the effort to “create a market” for advanced fuels, the RFS now calls for blending more ethanol into gasoline than consumers are willing to buy.

Most vehicles on the road can use E10 because it is the highest ethanol blend that does not void vehicle warranties. Many car engines are not warrantied for higher ethanol blends, and running them on those blends can cause severe damage and corrosion.

“We were pleased to see that the Environmental Protection Agency acknowledged ‘real constraints’ in the market, in terms of demand, infrastructure and production, toward accommodating higher blends of ethanol,” the National Taxpayers Union’s Nan Smith testified before the EPA. “If admitting you have a problem is the first step toward recovery, this and the slightly lower [renewable volume obligations] recommended in the 2018 proposal are good signs for taxpayers.”

Unfortunately, the RFS itself makes no consideration for the consequences faced by consumers. Due to the strict requirements built into the law, the EPA is unable to adjust the volume requirements downward in the face of lower-than-expected demand. This endless cycle leaves companies scrambling for ways to comply, rather than dedicating their energies toward real, market-driven innovation.

These market distortions alone would be reason enough to oppose the RFS, but regrettably, it turns out the mandate is also damaging to the environment, particularly because it encourages the use of nitrogen-rich fertilizers to grow corn. The RFS’s call for ever more corn-based ethanol creates larger demand for corn and more pollution from its production. The runoff from large farms in the Midwest and Great Plains makes its way into the Mississippi River and has created a large dead zone in the Gulf of Mexico.

While environmental groups overall are split on the effectiveness of the RFS mandates, Friends of the Earth has opposed the standards because of the pollution they cause. “As it ignores the significant environmental damage created by runoff from biofuels production, the RFS will likely exacerbate the problem,” the group notes.

The RFS safeguards do require that biofuels meet a greenhouse gas emissions-reduction standard for each biofuel type. Ethanol made from corn must reduce greenhouse gas emissions by 20 percent; advanced biofuels must reduce greenhouse gas emissions by 50 percent; and cellulosic biofuel must reduce greenhouse gas emissions by 60 percent. These are good standards to have, but they have loopholes that cause the effort to fall short of its desired effect. As it stands, 15 billion gallons of corn ethanol are exempt from the safeguards. FOE adds that the EPA uses flawed data on the true impact of biofuels:

For example, the EPA uses a questionable analysis to predict that corn ethanol will produce less pollution than regular gasoline one day in the future, and then uses that analysis to excuse the use of extremely dirty corn ethanol today.

Rather than hold RFS volumes steady, the EPA should work with Congress to correct what is a fundamentally flawed statute, with the goal of creating an environment where market innovation is encouraged, rather than creating fake markets for industries with powerful lobbyists. As R Street’s Lori Sanders testified at the EPA’s recent RFS hearing:

Rather than continue down this failed path, we at R Street encourage the EPA to work with Congress to pass reforms that work. The federal government does have a role to play in creating an environment in which new fuels and technologies can take root in the marketplace and, in the process, reduce emissions and preserve the environment for generations of citizens to come. Sadly, the RFS does not fit the bill, and the new EPA should seek better solutions.


Image by Jonathan Weiss

 

U.S. steel requirements for pipelines undermine American energy, trade

shutterstock_390174634

The United States is a free-trading nation, regardless of what President Donald Trump says on any given day. Any doubters about current U.S. trade policy should look no further than an Aug. 1 op-ed in The Wall Street Journal written by U.S. Commerce Secretary Wilbur Ross entitled “Free Trade is a Two-Way Street.”

The article and associated graph clearly show how much lower U.S. tariffs are for nearly all imported products from the European Union and China than vice versa, with China being the bigger protectionist. The Trump administration is preparing to launch a major attack on China’s trade barriers, but the trade barrier proposals the president has made at home are deeply inconsistent with free trade in ways that undermine U.S. jobs and energy security.

In particular, the Commerce Department is expected to submit a proposal that would require that domestic steel be used in all domestic pipelines, a requirement that could dramatically upend the ability of pipeline operators to source materials at a time of booming demand.

The United States has been in a pipeline boom this past decade thanks to the shale gas and tight oil booms, with roughly 20,000 miles of oil pipeline added since 2010 and more than 10,000 miles of natural gas pipeline added each year since 2008, according to the U.S. Transportation Department.

But few U.S. firms make the type of steel pipe used in large pipelines, and 77 percent of the steel used in line pipe comes, one way or another, from foreign sources: particularly China, Japan, Turkey and South Korea.

According to ICF International, requiring domestic steel could add dramatically to pipeline costs, both in money and time, since disrupting the current international supply chain would cause shortages and possibly curtail future pipeline investments.

Depending only on U.S.-produced pipe “could lead to long construction delays and higher costs, potentially canceling planned pipeline project or blocking new projects,” wrote a group of oil and gas associations to the Chamber of Commerce back in April. Pipe operators cannot simply substitute other materials or products when constructing and repairing pipelines, ICF wrote.

Such restrictions on trade fly in the face of everything the U.S. energy space has learned since the marriage of hydraulic fracturing and horizontal drilling caused oil and gas development to explode forward around 2008.

Since that time, $1 trillion in capital—much of it foreign investment—has been raised and spent to boost the drilling and transportation of oil and gas from shale fields around the country. As we speak, five separate pipelines—the Atlantic Sunrise Pipeline, the Nexus Pipeline, the Dakota Access Oil Pipeline, the Rover Pipeline and the Mariner East II—are either complete or within months of completion, moving tens of millions of dollars of oil and gas to market every day using steel sourced from around the world.

Trump’s attention (some say fixation) on the United States’ structural trade deficit and his proposals to address it no doubt are among the reasons for his election. But it makes no sense to place trade restrictions on the energy supply chain when the products being produced, oil and gas, have much higher value and can have a dramatically greater impact on the country’s long-term health than mandating the use of domestic steel pipe.


Image by fuyu liu

 

Congressional Pit Stop: How legislative dysfunction deters young talent


Young people yearn to enact change and make their mark upon the world. Many of them, however, no longer see government as a viable arena in which to do so, in no small part due to congressional dysfunction.

Nurtured in a country constantly at war for most of my life, and thrust into adulthood during the worst financial crisis in decades, my generation has developed a deep sense of political skepticism. Large swaths of young Americans no longer possess faith in political institutions and processes, and view the government as powerless to combat injustice or solve problems.

Yet without fail, throughout the school year, the University of Chicago Institute of Politics invites myriad political speakers to campus. From members of Congress to idealist activists, their message remains unanimous: There is an unmet need for a new generation of public servants.

Each summer, D.C. is inundated with an influx of young student interns and staffers looking to make a difference. And while Congress remains a powerful attraction, more people are pursuing options beyond the Hill: turning down competitive government internships in favor of more fulfilling private-sector opportunities. As someone who’s made this exact decision, I am a part of the problem. The decision should not come at a surprise when many congressional internships have become dreary positions filled with administrative work and little connected to professional development.

And while interning itself is a temporary commitment, the disinterest in long-term governmental work among young people is indicative of a larger problem among congressional staffers. Surrounded by high disapproval ratings, political gridlock and hyperpartisanship, the frustration within government is palpable, particularly among individuals my age. The decline of faith in political institutions, combined with a growth of opportunities to enact societal change outside of government, has led to millennials choosing private-sector missions in growing numbers.

Though Congress will have little trouble filling many of its staffing positions, a serious underlying issue remains: are positions being filled by the most qualified candidates? Feelings of pessimism make it hard to attract young people to serve in Congress, and even harder to retain them. As a result, it is difficult to generate institutional growth if each new wave of public servants views its time in our national legislature as a steppingstone to other opportunities with more meaning.

Congress is supposed to be the foundation upon which the rest of the government edifice rests. It is the first branch, and was designed to be the driving force of policymaking, the repository of national powers and the channel of popular energy. Article I assigned Congress diverse and immense powers to govern so as to properly reflect property, people and political communities. Congress was once the bedrock institution but has fallen victim to its vices.

Established to make policy and respond to shifting social and economic needs, our national legislature is gridlocked by ideological strife. Because of this, Congress does not offer younger candidates an environment conducive to sustainable or meaningful growth. But more than that, the inability to govern signals a lack of congressional demand for the ready supply of ideas and talent – talent that therefore flows to workplaces off Capitol Hill.

While recent attention has been focused on President Donald Trump’s inability to fill high-level government positions, the bigger story is that decades of disinvestment in Congress have left rampant staffing problems within its daily structure. Legislative branch staffing has not grown proportionally with the expanding size of the government or the U.S. population, which has weakened the most democratic branch of government.

Experienced staff has become a rarity. By the time congressional staffers gain high-level expertise, they’ve typically already begun cycling out of the institution to pursue other prospects. The continuous influx of bright and energetic staff is not an ideal replacement for staffers with policy experience. Disinvesting in the legislative branch talent pool has led to a dependence on external resources—mainly interest groups—which have smarts but inevitably have an agenda. The decay of institutional knowledge is hampering effective governance.

Congressional reform should focus on resisting these external pressures and shoring up crumbling institutional structures by hiring more motivated staff and focusing on retention. While social and political issues continue growing in complexity, Congress remains unable to address them properly. The government is responsible for processing more information than ever before, and is doing so with even fewer resources. Why should Congress continue to rely on private research, elite op-eds and corporate lobbyists when it can strengthen itself from within?

Young professionals are demoralized by the behavior of Washington officials, but their disengagement is rooted in frustration, not apathy. It is misguided to fault millennials for remaining disengaged from the Hill when the government itself has repeatedly and publicly disinvested in young talent. Without a clear solution, the dysfunction of Congress is condemned to spiral further. Congress should instead invest in creating long-term career paths and continuing educational opportunities for staffers. This is what congressional internships should be about.

A job on the Hill should be more than a pit stop. But it won’t be anything but that until Congress reforms itself.

Alex Pollock on the Peak Prosperity podcast

Appearing on the Peak Prosperity podcast, R Street Distinguished Senior Fellow Alex Pollock details his assessment of the Federal Reserve’s major transgressions against the interests of the general public. Perhaps more interestingly, he shares his observations from a recent hearing of the House Financial Services Committee on the same topic (at which he testified) and how it struck him that many of the members of Congress who convened it appear to be growing increasingly concerned about the Fed’s lack of accountability, as well as its potential fallibility.

For Harry Potter’s birthday, try on the federal affairs Sorting Hat


Today is Harry Potter’s 37th birthday. In honor of The Boy Who Lived and savior of the wizarding world, we had some administration officials and members of Congress try on the Sorting Hat to determine which house of Hogwarts is their true home.

As a proud Slytherin, I’d like to remind everyone that this is all in good fun, and each of the four houses has its merits. (Even Hufflepuff; J.K. Rowling herself would have been one.)

Do you agree with our sorting? Who did we miss? Let us know in the comments or tweet to us at @RSI!

Gryffindor

Sen. Mike Lee, R-Utah

Energy Secretary Rick Perry

Rep. Darrell Issa, R-Calif.

Sen. John McCain, R-Ariz.

 

Hufflepuff

Education Secretary Betsy DeVos

Rep. Blake Farenthold, R-Texas

Housing and Urban Development Secretary Ben Carson

 

Slytherin

Sen. Al Franken, D-Minn.

Sen. Tom Cotton, R-Ark.

Rep. Justin Amash, R-Mich.

Sen. Ted Cruz, R-Texas

 

Ravenclaw

Rep. Jared Polis, D-Colo.

Rep. Bob Goodlatte, R-Va.

Sen. Ron Wyden, D-Ore.

Transportation Secretary Elaine Chao

Sen. Ben Sasse, R-Neb.

 

Whitehouse-Schatz carbon tax moves in right direction, but falls far short


Sens. Sheldon Whitehouse, D-R.I., and Brian Schatz, D-Hawaii, are serious about tackling the challenge of climate change and they’re out this year with another carbon proposal intended to be an “olive limb” to the right. As Whitehouse describes it:

Virtually every person on the Republican side who has thought the climate change problem through to a solution has come to the same place: price carbon emissions to encourage cleaner energy and return the revenue to the American people.

That’s just what their new legislation intends to do. From 10,000 feet, it’s a promising start. The proposal imposes a tax on carbon emissions from fossil-fuel combustion and other major emitters; establishes a border adjustment to address concerns about competitiveness; and returns all the revenue, keeping none for the federal coffers.

The devil, however, is in the details. And that’s where the American Opportunity Climate Fee Act falls short.

First, there are the revenues. We know from the literature that a revenue-neutral carbon price can boost economic growth if revenues are devoted to cutting taxes on capital. Other ways of recycling the revenue—cutting payroll taxes, offering lump-sum rebates or reducing sales taxes—all rein in the economy. The Whitehouse-Schatz proposal spends the revenue several ways: it reduces the top corporate income tax rate to 29 percent; offers a refundable tax credit to working Americans; offers additional payments to Social Security and veterans’ benefits recipients; and delivers $10 billion in annual block grants to the states.

The cuts to the corporate income tax rate are a good start, but insufficient. Any redesign of the corporate income tax should make the United States a more competitive place to do business; the Whitehouse-Schatz proposal would leave the United States with a tax rate that’s still 50 percent higher than the European average. That’s not exactly the ground-breaking shift we’re looking for.

Refundable tax credits to workers and additional payments to Social Security and veterans’ benefits recipients are intended to address the regressivity of a new tax on carbon. That’s a worthy goal; reducing greenhouse gas emissions shouldn’t increase the burden on those least able to pay. But the senators’ proposed structure creates a national constituency for something akin to a new entitlement. That constituency will support a tax just high enough to maintain annual payments and just low enough to not actually phase down the greenhouse gas emissions that support the new annual payment.

Lastly, the $10 billion in annual block grants, distributed on a per-capita basis, is intended to fund individual states’ efforts to help those who can least afford to pay the new taxes on energy, or those whose industries are hardest hit. That creates a serious issue for the most rural states with the lowest populations – Alaska, the Dakotas, Montana and Wyoming. These states would also be disproportionately impacted; energy development is among the top five industries in Alaska, North Dakota and Wyoming.

Then there’s the matter of the tax itself. Whitehouse-Schatz would start at $49/ton of carbon dioxide in 2018, rising 2 percent above inflation year-over-year until an emissions target is attained. That’s a pretty high starting value: when the Congressional Budget Office modeled the Waxman-Markey cap-and-trade proposal in 2009, it estimated first year prices around $15/ton.
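
To see how that escalator compounds, here is a minimal sketch of the fee path. The 2 percent inflation rate and the 10-year horizon are our own illustrative assumptions, not terms of the bill, which ties the escalator to an emissions target rather than a fixed horizon.

    # Rough illustration of the fee schedule described above: a $49/ton starting
    # fee that rises 2 percent above inflation each year. The 2 percent inflation
    # rate and the 10-year horizon are assumptions for illustration only.
    def fee_schedule(start=49.0, years=10, real_escalator=0.02, inflation=0.02):
        """Return the nominal fee per ton of CO2 for each year."""
        fees = []
        fee = start
        for _ in range(years):
            fees.append(round(fee, 2))
            fee *= (1 + real_escalator) * (1 + inflation)  # real increase plus inflation
        return fees

    print(fee_schedule())  # ends around $70/ton in year 10 under these assumptions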

More troubling, however, is how the tax is applied. The good news: it’s designed to be administratively simple, capturing emissions at as few collection points as possible and as accurately as possible. The bad news: in capitulating to environmentalists’ demands, it actually discourages industry best practices and safe, clean infrastructure. Whitehouse-Schatz requires that the tax be applied to “greenhouse gases that escape throughout the fossil fuel supply chains.” It would not be applied at the points of emission; rather, an adjustment to the tax would be applied equally to all producers and importers of fuel. Companies that employ the best practices and the most advanced infrastructure with the fewest leaks will pay just as stiff a penalty as companies that rationally decline to invest in equipment from which they would see no benefit.

Finally, the Whitehouse-Schatz proposal doesn’t include any mechanism for regulatory preemption. The Environmental Protection Agency is obligated to regulate greenhouse gas emissions under the Clean Air Act, a mandate that created the faulty, expensive and ineffective Clean Power Plan. No tool within the CAA creates a proper framework for a regulatory solution. Even the Waxman-Markey cap-and-trade bill included language that would prevent the EPA from regulating carbon under certain provisions of the Clean Air Act. The senators, however, see this regulatory burden as a bargaining chip, not a problem to remedy.

For all its faults, the Whitehouse-Schatz proposal is promising in one respect: it demonstrates that motivated environmentalists know that market-based instruments can address the climate challenge effectively. An appropriately designed revenue-neutral carbon price can encourage economic growth, draw investment, boost innovation and achieve more emissions reductions at a lower cost than the regulatory machine. Toward that end, R Street has proposed a carbon tax that would finance the outright elimination of the corporate income tax, a proposal we believe would unleash capital markets and boost employment while untethering economic growth from a carbon-based fuel supply.

Sen. Whitehouse is right – conservative solutions can work. The American Opportunity Climate Fee Act, however, is a far cry from conservative.


Image by visualdestination

The Hillsborough PTC is dead; long live the Hillsborough PTC


After years of tormenting ridesharing companies Uber and Lyft, as well as their customers, with burdensome regulations designed to prop up area taxi cab companies, the Hillsborough County Public Transportation Commission is set to be dissolved later this year by an act of the Legislature. Founded in 1987, the Hillsborough PTC regulates ground transportation companies such as cabs and limousines, as well as overseeing tow-truck companies in the Tampa Bay area.

Now that the Hillsborough PTC’s days are numbered, some of its remaining proponents warn that consumers will lack the kinds of protections that apparently only the PTC can provide. A recent local news report homed in on the PTC’s oversight of tow-truck companies as an example.

Indeed, tow-truck company activities can and should be regulated by local and state authorities. However, it does not take an entire government agency to do just that. In fact, the PTC was the only such local transportation board in the entire state of Florida. Other counties delegate ground transportation, towing and other such oversight and regulation to police departments, consumer-protection bureaus and other departmental offices within county government.

In Miami-Dade County, for example, tow-truck companies are regulated by the Department of Regulatory and Economic Resources, which also enforces consumer-protection measures like maximum towing rates, background checks on tow-truck operators, vehicle-safety standards, insurance requirements and other protections and remedies established by the Miami-Dade County Commission for consumers who have been towed. Orange County, which includes Orlando, has a consumer fraud unit that deals with all sorts of consumer-related issues, ranging from house repairs and construction to towing grievances.

Many municipalities also enact their own regulations that either work in harmony with the county’s or add additional layers to them. Florida state law also establishes basic guidelines. While towing is an industry inherently prone to angry customers, Florida’s is a relatively stable market.

The Hillsborough County Commission is currently exploring ways to distribute the PTC’s regulatory responsibilities across existing county agencies.  Tow-truck oversight, for example, is likely being transferred to the Sheriff’s Office. The commission is set to consider this and other staff recommendations related to the PTC’s impending dissolution at its next meeting Aug. 16.

Residents should praise the Legislature for dissolving an obsolete, unnecessary government agency that had been undermining competition and restricting transportation choice. However, county residents should remain vigilant about commission proceedings to ensure the county preserves the rules and regulations the PTC enacted once upon a time—those that were reasonable and worked. This exercise should not be used as an opportunity by local politicians, bureaucrats and entrenched interests to foist on the public the kinds of unnecessary, burdensome regulations that led to the PTC’s dissolution in the first place.


Image by CrispyPork

 

Great ECPA expectations


When the Electronic Communications Privacy Act first was passed back in 1986, lawmakers mostly didn’t even imagine that email might play a central role in American life. Scarcely anyone in 1986—whether inside or outside of Congress—foresaw a day when we’d use the internet to help us find our misplaced phones and watches.

The digital landscape for Americans has vastly changed over the last three decades, but the central law spelling out when government needs to get a warrant to capture electronic communications has not. Because the internet is central to most of our lives, and because the potential scope of government intrusion on our lives has thus become vastly greater, it’s high time (or, really, past time) for Congress to update ECPA. That’s why we are pleased to see today’s introduction of the ECPA Modernization Act of 2017 by Sens. Mike Lee, R-Utah, and Patrick Leahy, D-Vt.

Congress is now poised to update the law in ways that reflect how pervasively we use digital communications and tools (computers, phones, watches, fitness trackers, and many other devices) in our everyday lives. The act aims to fix some serious flaws in the older law. The ECPA Modernization Act is not just about the content of digital communications; it’s also about the geolocation features (and other non-email, non-messaging features) that internet services increasingly offer us.

That’s not to say that the ECPA Modernization Act is perfect. It is a fundamental principle of liberal democracy that there should be limits on what government can grab from your digital world. These limits are essential to understanding the Fourth Amendment in the 21st century. Even as we see progress toward updating digital-privacy laws, it’s essential to point out that plenty of issues, such as the gathering and analysis of metadata, still need to be revisited and more thoroughly reviewed from a pro-privacy standpoint. (I’ve written about the underlying problems with ECPA’s inadequate protections for metadata here.)

And as Chris Calabrese of the Center for Democracy and Technology testified in 2015, the last time the Senate considered updating ECPA, the consequence of failing to update this creaky 1980s statute has been ambiguity and inconsistency. Is a Google Doc subject to the law if you’re only using Google Docs to store a document for later editing? Or, if it isn’t, does it become subject to ECPA provisions when you share the document for others to edit? Inquiring minds wanted to know.

This latest ECPA-revision language takes steps toward addressing both my concerns about metadata and Calabrese’s concerns about ambiguity. It adds warrant requirements for information stored in the cloud and for location information, as well as adding new limits on metadata collection. The ECPA Modernization Act may not be perfect (and what legislation is, really?), but it’s a good start, and it ought to serve as a good reminder that we shouldn’t wait another three decades—or even another three years—before we take another comprehensive look at how our individual privacy, and Fourth-Amendment-based limits on government snooping on citizens, should be updated for our fast-evolving digital landscape.


Image by Maksim Kabakou

 

Why city officials should welcome the autonomous revolution


The following post was co-authored by R Street Tech Policy Associate Caleb Watney. 


With tech and car companies racing to advance the state of self-driving car technology, the House Energy and Commerce Committee just gave the burgeoning industry a measure of regulatory certainty. Earlier today, the committee marked up and unanimously passed H.R. 3388, the Safely Ensuring Lives Future Deployment and Research In Vehicle Evolution (SELF DRIVE) Act, a draft version of which previously moved through the panel’s Digital Commerce and Consumer Protection Subcommittee.

The bill would reserve for states and localities the power to regulate their streets and the rules of the road, as is appropriate. But when it comes to regulating vehicle design, performance and safety standards, the federal government would continue to take the lead through the National Highway Traffic Safety Administration.

Even though that basic division of regulatory labor has been a successful model for 60 years, groups representing city and transportation departments, along with allied activists, are sounding an alarmist warning that the House bill would “preempt state and local governments from regulating their own streets.” A joint letter from the National Association of City Transportation Officials, National League of Cities, Transportation for America and the Natural Resources Defense Council proclaims:

The bill would allow autonomous vehicle companies to self-certify the safety of their vehicles without an independent reviewer, and would severely limit any government from protecting the well-being of its citizens. This is akin to trusting the fox to protect the hen house, and would clear the way for automakers and tech companies to deploy hundreds of thousands of automated vehicles without adhering to stringent safety standards.

In fact, traditionally operated vehicles aren’t subject to pre-market approval either, because that would be a slow and costly system without any concrete benefit. What’s more, the safety standards already in place for traditionally operated vehicles also would apply to autonomous vehicles under the committee’s bill, just as they do now. Manufacturers of autonomous vehicles must go through a lengthy regulatory process to receive exemption from any NHTSA safety standard and must justify each deviation by demonstrating that an exempted development provides an equivalent level of safety. Ultimately, if manufacturers fail to live up to the agreement they make during the exemption process, or if vehicles prove to be problematic in practice, NHTSA still would have full authority to take them off the road using its expansive recall authority.

The legislation thus leaves the federal government well-positioned to continue protecting the well-being of all Americans with regard to vehicle safety—autonomous or otherwise—just as it has been doing with human-piloted vehicles for decades. Because the bill raises the cap on exemptions, companies will be able to conduct much more rigorous testing and deploy autonomous technologies more quickly. And because it avoids a patchwork of design, performance and safety standards promulgated by local governments, they will not be driven to “shop” for friendlier regulatory environments across state lines or be forced into the compliance nightmare presented by the development of 50 or more conflicting standards.

NACTO and allies rightly point out that local governments “have made great strides to manage traffic congestion, reduce emissions and air pollution, and improve safety and mobility for people accessing jobs and opportunities.” After decades when American street design and transportation planning lagged behind international standards, many localities are catching up by implementing effective road diets, narrowing lanes and making multi-modal accommodations. But this legislation does nothing to interfere with that fine work. In fact, it relieves city and transportation planners of responsibilities that are beyond both their budgets and their core competencies.

Nothing in the House legislation prevents state and local governments from continuing to enhance the safety of their streets through improved design and regulation. Autonomous vehicles, just like human-piloted vehicles, will be responsible for following “rules of the road,” including speed limits and rights of way. And in fact, testing thus far shows that autonomous vehicles promise to be far more compliant with road regulations than citizen drivers and provide dramatically better safety outcomes.

With more than 40,000 auto fatalities in 2016, 94 percent of which were due to human error, every day that autonomous vehicles aren’t on the road means lives are lost. No one knows the safety dangers posed by human-operated automobiles better than the transportation officials that NACTO represents. Those officials should welcome the addition of highly autonomous vehicles to the toolkit of advocates for street safety.


Image by Scharfsinn

 

How Congress can use evidence-based policymaking

The Legislative Branch Capacity Working Group examined the use of data and analyses in policymaking at the group’s July 17 meeting, including exploring the challenges Congress faces in attempting to implement evidence-based policymaking and how increasing congressional capacity could lead to more and better evidence-based lawmaking.

Collectively, panelists Lucas Hitt of the Commission on Evidence-Based Policymaking, Andrew Reamer of George Washington University, Timothy Shaw of the Bipartisan Policy Center and R Street Vice President of Policy Kevin R. Kosar noted that Congress always has sought data and evidence to help it make policy, but legislators will disregard that evidence for at least a few reasons: values, distrust, and parochial and other pluralistic interests.

The Commission on Evidence-Based Policymaking is to release its report this fall, which will advise Congress on how to increase the use of data and research in legislating and oversight.

Video of the panel is embedded below:

 

Moss on whether copyright is a property right

With Congress possibly set to consider new ideas on copyright, R Street Tech Policy Manager Sasha Moss participated in a recent panel convened by America’s Future Foundation to debate the constitutional and philosophical underpinnings of intellectual property and explore whether today’s copyright laws are excessive or insufficiently protective. Alongside co-panelist Kristian Stout of the International Center for Law and Economics and moderator Jim Harper of the Competitive Enterprise Institute, Sasha observed that current U.S. copyright law is not in line with what the founders intended. Full video is embedded below:

Is the real estate double bubble back?


Average U.S. commercial real estate prices are now far over their 2007 bubble peak, about 22 percent higher than they were in the excesses of a decade ago, just before their last big crash. In inflation-adjusted terms, they are also well over their bubble peak, by about 6 percent.

In the wake of the bubble, the Federal Reserve set out to create renewed asset-price inflation. It certainly succeeded with commercial real estate – a sector often at the center of financial booms and busts.

Commercial real estate prices dropped like a rock after 2007, far more than did house prices, falling on average 40 percent to their trough in 2010. Since then, the asset price inflation has been dramatic: up more than 100 percent from the bottom. In inflation-adjusted terms, they are up 83 percent.
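
For readers who want to check the arithmetic behind these nominal-versus-real comparisons, here is a minimal sketch; the cumulative inflation figures in it are illustrative assumptions, not the actual price indexes used in the text.

    # Sketch of how a cumulative nominal price change translates into an
    # inflation-adjusted (real) change. The cumulative inflation figures below
    # are illustrative assumptions, not the actual price indexes used above.
    def real_change(nominal_change, cumulative_inflation):
        """Convert a cumulative nominal change into a real change."""
        return (1 + nominal_change) / (1 + cumulative_inflation) - 1

    # Commercial real estate is ~22% above its 2007 peak in nominal terms;
    # with roughly 15% cumulative inflation since 2007, that is ~6% in real terms.
    print(f"{real_change(0.22, 0.15):.1%}")

    # Prices are up more than 100% from the 2010 trough; with ~10% cumulative
    # inflation since then, the real gain is on the order of the 83% cited above.
    print(f"{real_change(1.00, 0.10):.1%}")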

This remarkable price history is shown in Graph 1.

[Graph 1]

Bank credit to commercial real estate has been notably expanding. It is up $238 billion, or 21 percent, since the end of 2013 to $1.35 trillion. It has grown in the last two years at more than 7 percent a year, which is twice the growth rate of nominal gross domestic product, although not up to the annual loan growth rate of more than 9 percent in the bubble years of 2000-2007.

The Federal Reserve also succeeded in promoting asset-price inflation in houses. U.S. average house prices are also back over their bubble peak—by about 2 percent, in this case. They have rebounded 41 percent from their 2012 trough. In inflation-adjusted terms, house prices have climbed back to the level of 2004, when we were about two-thirds of the way into the bubble. See Graph 2.

[Graph 2]

The rapid house price increases since 2012 have not been matched by growth in bank residential mortgage loans or aggregate mortgage credit. Banks’ total residential mortgage loans were $2.45 trillion in 2012 and $2.41 trillion in the first quarter of 2017. Total U.S. 1-4 family mortgages outstanding went from $10.04 trillion to $10.33 trillion in the same period. Thus, there is a marked difference between the two real estate markets, with commercial real estate having even more price inflation and more bank credit expansion than houses. The interest rate environment is, of course, the same for both.

House prices and commercial real estate prices are closely related. As shown in Graph 3, they made an obvious double bubble, a double collapse and a double big rebound. The statistical correlation between the two since 2001 is 86 percent.

[Graph 3]

Is what we have now a new double bubble, or something else?  Considering where these charts may go from here, we may ponder three key questions:

  1. If interest rates go up 1 percent or 2 percent, what will happen to commercial real estate and house prices?
  2. If the Fed stopped being a big buyer of mortgage-backed securities and bonds, what would happen to interest rates?
  3. Having driven asset prices up, by buying and maintaining huge long positions, can the Fed get out of these positions without driving prices down?

We will know the answers when, sometime in the future, somebody explains it all to us ex post. For now, we know that real estate prices are back to the levels of the last bubble, reflecting the Federal Reserve’s production of asset-price inflation through its interest rate and bond market manipulations.


Image by Noah Wegryn

 

New DOJ asset-forfeiture rules trample basic rights


In a speech Monday to the National District Attorneys Association annual conference, Attorney General Jeff Sessions announced the U.S. Justice Department plans to ramp up the use of civil asset forfeiture to “combat crime.”

If this sounds like a cliché ripped from a 1980s political speech, that’s not far off. The truth is, the DOJ’s new effort has less to do with fighting crime than it does with funding law enforcement.

Sadly, what Sessions actually is doing is green-lighting escalation of DOJ and local law-enforcement efforts to seize property from people who have never been convicted of a crime, thus allowing government agencies to reap major monetary rewards. To put it another way, if the government can’t convict you of a crime, they will just take your stuff instead.

One could argue the road to asset forfeiture was paved with good intentions. The practice re-emerged at the height of the 1980s drug war, when law-enforcement agencies across the country were trying to bring down the drug trade. Civil asset forfeiture programs gave government agencies the power to seize cash, cars, guns or anything else of value that was potentially bought with drug money. Suspected drug dealers would then be forced to prove in civil court that they obtained everything legally. Once seized, the cash and other items would be used to fund both federal and local agencies’ drug war efforts, creating something of a vicious circle.

Like any power the government is granted, the practice has been expanded massively, with the end result being blatant violations of Americans’ civil rights. This country was founded on the principles of property rights and protection from unreasonable government search and seizure. Indeed, we have drifted a long way from the guarantee, outlined in our founding documents, that all people are protected by the due process of law.

Unsurprisingly, asset forfeiture has become a cash cow for the federal government and a slush fund for local law-enforcement agencies across the country. Local agencies construct their budgets based on expected seizures, which has created incentives to seize assets just to keep the lights on. All in all, civil asset forfeiture is a $5 billion “industry.” The government has so perfected the art of seizure that it now outperforms actual criminals. In 2014 alone, the government seized more assets than actual burglars did.

For a while, things had been looking up. During the Obama administration, the Justice Department took some real steps toward curbing civil asset forfeiture. More importantly, many states across the country started to take a stand by passing laws to make it tougher for the government to seize assets. As of today, according to the Institute for Justice, 13 states require a criminal conviction before the government can take someone’s property. However, these state-level reforms are about to become moot thanks to the Justice Department.

Along with this increased interest in asset forfeiture, Sessions and the DOJ announced Wednesday that the department will also reinstate “adoptive” forfeiture, which gives state and local agencies a workaround to state-level restrictions by allowing them to use a federal statute to seize property. Not only is this a direct challenge to states’ rights, it also provides incentives for local agencies to continue to pursue these actions with little regard for civil liberties.

Few think criminals should profit from their crimes. There’s also no doubt that it is challenging for state and federal law enforcement agencies to investigate and prosecute complex criminal enterprises like drug cartels and human traffickers. But the current system violates some of the basic principles this nation was built upon—due process of law, innocent until proven guilty and freedom—all in the pursuit of innocent people’s property.


Image by hafakot

 

Using the CPP to boost coal is just as bad


President Donald Trump has spoken repeatedly of his support for coal mining, pledging publicly that “we will put our miners back to work.”

It probably should not be surprising, then, that the White House would give serious consideration to a pitch made by several coal-mining union representatives to the Office of Management and Budget that would see the Environmental Protection Agency rewrite the Obama administration’s Clean Power Plan in ways that help the coal industry.

Alas, the ends the industry wants to achieve using the CPP are at least as wrongheaded as the command-and-control model that was used to craft the emissions plan in the first place.

What the proposal by the AFL-CIO, the International Brotherhood of Electrical Workers and the Utility Workers Union of America recommends is for EPA Administrator Scott Pruitt to initiate only the first of the CPP’s four “building blocks.” Such a plan would reward coal-fired power plants if they improved their boiler heat-rate efficiency, even though the improvements could only cut greenhouse gas emissions by 2-3 percent, as opposed to the additional 10-12 percent the previous administration wanted to see.

The CPP’s other three building blocks—natural gas switching, renewable energy and energy-efficiency programs—would be eliminated, leaving a rump emissions plan that could pass muster in the courts.

Unlike the recent decision to exit the Paris Climate Accords, in which the United States simply said it wouldn’t follow through on a prior commitment, the Clean Power Plan’s regulation of existing power plants was finalized in June 2015. That makes it legally hazardous to jettison the plan, which remains before the U.S. Supreme Court, without a replacement. Only an unprecedented legal stay issued by the court in February 2016 – shortly before the death of Justice Antonin Scalia – kept the regulations from coming into force.

It’s worth remembering that the Clean Power Plan was the Obama administration’s grand attempt to regulate emissions from coal-fired power plants. The White House sought to expand the scope of the Clean Air Act beyond “the fence line” of power plants to cut state-level emissions coercively, whether states agreed to the federal actions or not.

But just because the revised rule wouldn’t be as powerful doesn’t mean it wouldn’t be just as damaging to the economy over the long run. Dictating winners and losers in energy markets is always a bad idea. This is as true of the bias against coal and nuclear energy shown by regulators during the second Obama term as it would be of this new proposal to upgrade coal-powered electricity plants to a point where they still won’t be as clean as a new natural gas-fired plant.

The natural gas fracking revolution – driven entirely by market forces and private property rights – has contributed to the 14 percent reduction in energy-related U.S. carbon emissions since 2005, leaving us roughly in the same position we were in during the early 1990s. Leaving an ineffective regulatory structure in place of the original CPP may save the Trump administration a lot of time and effort, but it isn’t the principled approach to energy development this country needs in the 21st century.


Image by 1968

Microsoft’s alternative power deal could be breakthrough for consumer choice


Washington state regulators approved a settlement last week between Microsoft Corp. and its monopoly utility, Puget Sound Energy Inc. (PSE), to enable Microsoft to buy its own wholesale energy or develop its own supply. The agreement represents a more cordial approach amid a widespread trend of large customers seeking alternative power suppliers, but it underscores the inherent choice-constraining limitations of the monopoly model, even with favorable amendments.

The monopoly model, premised on a single power provider with captive customers, does not easily accommodate customer preferences. However, a glimmer of choice has emerged recently. Microsoft is just one of many corporate customers to pursue third-party purchases or direct-access policies that enable one-off customer choice within a monopoly footprint.

Spurred by less expensive alternative suppliers and corporate commitments to clean energy, corporations have procured more than 6 gigawatts of wind and solar in the last two years alone. In 2016, Microsoft and Amazon led the pack in corporate clean-energy procurement. Based on public commitments, this trend looks likely to continue, with the likes of Google, Apple, Johnson & Johnson and more committing to source all of their consumption from renewables.

At a time when climate and clean-energy policy too often reverts to a culture war, voluntary clean-energy procurement by corporate leaders marks a refreshing intersection of the conservative and green agendas. Bill Hogan, a Harvard professor and electricity markets expert, emphasizes that customers spending their own money to contract for green power is consistent with market principles. He clarifies that the “problem comes when governments spend other people’s money, using their power to mandate, that is a public policy concern.”

This may blossom into the new chapter of voluntary environmentalism, which has roots in the kinds of conventional pollution reduction (beyond legal requirements) that preceded today’s amplified climate discussion. For some companies, the reputational or branding benefits of contributing to a cleaner environment can provide substantial incentives. It appears those benefits are magnifying at the same time that the cost of renewables has fallen, spearheaded by merchant wind developers providing very competitive power purchase agreements.

Some have voiced concerns that an exodus of big customers from monopoly service may leave other customers with higher bills. A large customer’s departure could create stranded costs for the utility, which it will shift to other customers if permitted by regulators. To cover these costs, regulators may require customers seeking to leave the monopoly to pay exit fees. Companies like Microsoft might even go beyond the exit fee by pledging support for local community programs.

Proper exit fees can prove technically challenging to calculate. In addition, monopoly utilities often leverage those fees to impose a regulatory barrier to exit. In particular, they frequently will underplay the benefits to their remaining customers of the reduced costs and expanded opportunities to sell excess power.

Litigated exit fee cases have proven contentious and inefficient. In Nevada, numerous cases have led to prolonged regulatory battles and deterred some companies (e.g., Las Vegas Sands Corp.) from seeking to buy power on the open wholesale market. In a recent filing before the Nevada Public Utilities Commission (PUC), Wynn Las Vegas argued the exit fee imposed by the PUC—whose staff changed their methodology from the one applied to the previous exit request of the data storage company Switch—was unfair and discriminatory.

In fact, Switch incurred regulatory headaches of its own. The PUC rejected its initial proposal to switch to an alternative provider in 2015. Other Nevada resorts and casinos, including Caesars Entertainment Corp., are either considering or already have applied to leave monopoly service, with the MGM Grand agreeing to pay an $87 million exit fee.

Even with direct access, regulatory delays and inflated exit fees can serve as chronic limits to customer choice, not to mention that clinging to the monopoly model results in an underdeveloped market for alternative suppliers. Even the Microsoft settlement revealed differences between the customer and the utility over how to calculate the exit fees. In its initial testimony, Microsoft argued that its departure would provide a net benefit and estimated that, using generally accepted rate-setting standards, the utility would compensate Microsoft between $15 million and $35 million to leave (the two sides differed over the timeframe used to calculate the useful life of the utility’s assets and market value of excess generation).

However, in the end, Microsoft agreed to pay an inflated $24 million exit fee. The settlement represents a deal between numerous parties that is likely more efficient than prolonged litigation. Such a collaborative approach may serve as the preferred interim model in monopoly states (i.e., negotiated special contracts), short of a new customer tariff that would streamline the process.

Despite the niceties of settlements, such agreements retain undertones of the fundamental rift between increasingly heterogeneous customers and the choice-constraining monopoly model. In restructured or “retail choice” states, customers choose their power provider freely, and large customers often negotiate contract terms tailored to their unique profile.

Restructured states present a big advantage for corporate consumers, and policymakers increasingly have noted this advantage for retaining and attracting businesses. Enabling third-party service or direct access is certainly not the “end game” regulatory structure, but it offers a great incremental step to introduce customer choice, with benefits both for customers and for the environment.


Image by Katherine Welles

Jonathan Coppage all over your TV screen


Visiting Senior Fellow Jonathan Coppage’s recent Washington Post op-ed taking apart the alarmist coverage of a purported trend of millennials living at home as adults (tl;dr, it’s a normal thing, historically, and there’s a lot to recommend it in practice) drew quite a bit of attention, earning Jon invitations to sit down on a pair of national cable news shows. First, there was a two-part spot on CNBC’s Squawk Box:

Next, he was on CNN, discussing the piece with Smerconish host Michael Smerconish:

Why quality will trump quantity in the net-neutrality debate

Also appeared in: TechDirt


If you count just by numbers alone, net-neutrality activists have succeeded in their big July 12 push to get citizens to file comments with the Federal Communications Commission. As I write this, it looks as if 8 million or more comments have now been filed on FCC Chairman Ajit Pai’s proposal to roll back the expansive network-neutrality authority the commission asserted under its previous chairman in 2015.

There’s some debate, though, about whether the sheer number of comments—which is unprecedented not only for the FCC, but for any other federal agency—is a thing that matters. I think it does, but not in any simple way. If you look at the legal framework under which the FCC is authorized to regulate, you see that the commission has an obligation to open its proposed rulemakings (or revisions or repeals of standing rules) for public comments. In the internet era, of course, this has meant enabling the public (and companies, public officials and other stakeholders) to file online. So naturally enough, given the comparative ease of filing comments online, controversial public issues are going to generate more and more public comments over time. Not impossibly, this FCC proceeding—centering as it does on our beloved public internet—marks a watershed moment, after which we’ll see increasing flurries of public participation in agency rulemakings.

Columbia University law professor Tim Wu—who may fairly be considered the architect of net neutrality, thanks to his having spent a decade and a half building his case for it—tweeted July 12 that it would be “undemocratic” if the commission ends up “ignoring” the (as of then) 6.8 million comments filed in the proceeding.

But a number of critics immediately pointed out, correctly, that the high volume of comments (presumed mostly to oppose Pai’s proposal) doesn’t entail that the commission bow to the will of any majority or plurality of the commenters.

I view the public comments as relevant, but not dispositive. I think Wu overreaches to suggest that ignoring the volume of comments is “undemocratic.” We should keep in mind that there is nothing inherently or deeply democratic about the regulatory process – at least at the FCC. (In fairness to Wu, he could also mean that the comments need to be read and weighed substantively, not merely be tallied and dismissed.)

But I happen to agree with Wu that the volume of comments is relevant to regulators, and that it ought to be. Chairman Pai (whose views on the FCC’s framing of net neutrality as a Title II function predate the Trump administration) has made it clear, I think, that quantity is not quality with regard to comments. The purpose of saying this upfront (as the chairman did when announcing the proposal) is reasonably interpreted by Wu (and by me and others) as indicating he believes the commission is at liberty to regulate in a different way from what a majority (or plurality) of commenters might want. Pai is right to think this, I strongly believe.

But the chairman also has said he wants (and will consider more deeply) substantive comments, ideally based on economic analysis. This seems to me to identify an opportunity for net-neutrality advocates to muster their own economists to argue for keeping the current Open Internet Order or modifying it more to their liking. And, of course, it’s also an opportunity for opponents of the order to do the same.

But it’s important for commenters not to miss the forest for the trees. The volume of comments both in 2014 and this year (we can call this “the John Oliver Effect”) has in some sense put net-neutrality advocates in a bind. Certainly, if there were far fewer comments (in number alone) this year, it might be interpreted as showing declining public concern over net neutrality. Obviously, that’s not how things turned out. So the net-neutrality activists had to get similar or better numbers this year.

At the same time, advocates on all sides shouldn’t be blinded by the numbers game. Given that the chairman has said the sheer volume of comments won’t be enough to make the case for Title II authority (or other strong interventions) from the commission, it seems clear to me that while racking up a volume of comments is a necessary condition to be heard, it is not a sufficient condition to ensure the policy outcome you want.

Ultimately, what will matter most, if you want to persuade the commissioners one way or another on the net-neutrality proposal, is how substantive, relevant, thoughtful and persuasive your individual comments prove to be. My former boss at Public Knowledge, Gigi Sohn, a net-neutrality advocate who played a major role in crafting the FCC’s current Open Internet Order, has published helpful advice for anyone who wants to contribute to the debate. I think it ought to be required reading for anyone with a perspective to share on this or any other proposed federal regulation.

If you want to weigh in on net neutrality and the FCC’s role in implementing it—whether you’re for such regulation or against it, or if you think it can be improved—you should follow Sohn’s advice and file your original comments no later than Monday, July 17, or reply comments no later than Aug. 16. If you miss the first deadline, don’t panic—there’s plenty of scope to raise your issues in the reply period.

My own feeling is, if you truly care about the net-neutrality issue, the most “undemocratic” reaction would be to miss this opportunity to be heard.


Image by Inspiring

 

Alabama backs down on targeting margarita pitchers


In these hot summer months, nothing refreshes like a margarita. But in Alabama, the state Alcoholic Beverage Control Board had banned pitchers of this limey and refreshing libation. Seriously.

R Street’s Cameron Smith exposed the ban and advocated for its repeal in AL.com after a series of email exchanges with ABC representatives:

The Alabama Alcoholic Beverage Control Board (ABC) doesn’t want you wasting away in Margaritaville, so they’ve banned pitchers of the frozen concoction outright.

No, I’m not joking.

But we shouldn’t be surprised. This is the ABC that cracked down on people drinking while dining on the sidewalks in Mobile. It’s the same ABC that cut a deal to impose a 5 percent liquor mark-up to help the legislature and the governor enact a back-door tax hike.

Now the agency has taken to reminding licensees of its legal ‘interpretation’ that beer is the only alcoholic beverage that may be served in a pitcher…

ABC claimed it was concerned with the tequila in margarita pitchers “settling” over time, which could lead to situations where the first few drinks poured from the pitcher had less alcohol than the ones from the bottom of the pitcher. As Smith pointed out, this amounted to an argument that a group of legal adults “can’t figure out how to handle a pitcher of margaritas shared among them.”

Smith’s column generated enough outcry among Alabama residents that Dean Argo, ABC’s government relations manager, took to AL.com to announce that the board would no longer target margarita pitchers. In short, ABC has backed off, at least for now. (The Associated Press also covered the reversal).

While this was a clear win for margarita lovers across the state, Argo ominously suggested that the state may still draw a line between which types of drinks can be served in pitchers and which cannot. The dividing line would appear to be if the drink in question is “customarily” served in pitchers. So, margaritas and beer would seem to be safe, but what about less clear cases like mojitos? Mojitos are certainly served in pitchers sometimes, but is it “customary” to serve them that way? And how about bottled cocktails, which have become all the rage in the cocktail world? Are they a “pitcher,” and if so, are they “customary”?

The ABC’s decision to draw the line at what types of drinks are “customarily” put into pitchers rests on the sort of ambiguous legal phrase that only a government lawyer could love. Call it “pitcher ambiguity,” and suffice it to say R Street’s team will be the first to blow the whistle if more pitcher shenanigans go down in Alabama.

Note: Cameron Smith has also been tracking and writing about the Alabama ABC’s attempt to enact a stealth tax increase by increasing the state liquor mark-up. Read more about that here.


Image by Danny E Hooks

 

Welcome to Climate Junior High


The new kid in the class is glib and loud, while the gal in charge of the “cool kids” pretends he hasn’t even entered the classroom. At least, that’s the way it seemed watching President Donald Trump and German Chancellor Angela Merkel in Hamburg last weekend at the Group of 20 (G-20) summit involving a majority of the world’s most industrialized countries.

In the weeks before the meeting, analysts and partisans were praying for some kind of moral reckoning for Trump on his arrival in Hamburg, the heart of Germany’s political left. Trump’s withdrawal from the Paris Climate Accords in early June had sent many European leaders into a state of shock, given that the European Union’s plan to cut its climate emissions dramatically is its pre-eminent geopolitical strategy.

Speaking before the German Parliament in late June, Merkel said of the U.S. withdrawal that “the climate treaty is irreversible and is not negotiable” – a direct rebuke of Trump’s decision to go it alone concerning climate change.

In other words, a beat-down in the lunchroom was expected.

Nevertheless, Trump and Merkel played nice in front of the dignitaries during the July 8-9 summit and the United States dissented from the 19 other countries’ consensus language on climate change in the final joint declaration with relative ease. The White House even was allowed to insert language saying the United States “will endeavor to work closely with other countries to help them access and use fossil fuels more cleanly and efficiently.”

The addition of the clean fossil fuel language was a “poker tell” revealing the radically divergent strategies at the heart of the chasm between the United States and the European Union on energy and climate policy. The problems undermining the Paris Accord—its voluntary and top-down nature, in particular—have been highlighted repeatedly by R Street and others. The facts of the case remain unchanged.

The United States, through the development of hydraulic fracturing and subsequent very low natural gas prices, has cut its energy-related carbon emissions more than any other member of the G-20 since 2005. The reason has nothing to do with international agreements or top-down approaches.

Instead, market forces drove natural gas drillers in the late 2000s to develop the hydraulic fracturing of shale basins in Pennsylvania and Texas. The explosion of natural gas supplies soon made it the fossil fuel of choice, over coal, for electricity plants around the country. The rest is history.

Since peaking in 2007, U.S. energy-related carbon emissions are down roughly 14 percent, while Germany, which sees itself as the world leader on climate change, saw its carbon emissions fall 7 percent during the same period.

Given the size of the U.S. economy, the scale of the emissions savings has been enormous, with U.S. emissions falling 600 million metric tons compared to Germany’s 70 million tons over the same time period. All this while the European Union spent $1.2 trillion on wind, solar and bio-energy subsidies and an emissions trading scheme (ETS) that priced carbon too low to be effective.

Merkel waited until the very end of the summit to express her disdain: “Unfortunately – and I deplore this – the United States of America left the climate agreement,” she said in her closing statement.

As it stands, the differences in energy and climate outlook between the United States and Europe could not be wider. The United States looks to export both oil and natural gas into Europe. Meanwhile, both Germany and France are constraining both nuclear power and all fossil fuel use, as they aim for a dramatic cut in emissions by midcentury.

Perhaps French President Emmanuel Macron, who is also a new kid in the class, has a different plan to bring Trump into the climate club when he hosts Trump for Bastille Day celebrations in Paris July 14.


Image by Rawpixel.com

 

Kosar talks CRS reports on FedSoc podcast

In episode 2 of the Federalist Society’s Necessary & Proper podcast, the R Street Institute’s Kevin Kosar discusses the Congressional Research Service, a nonpartisan government think tank in the Library of Congress. CRS assists Congress in lawmaking and oversight, yet lamentably, Congress has downsized the agency. CRS also has struggled to adapt to the hyper-partisan, internet-connected Hill environment.

The full episode is embedded below:

South Miami solar mandate would trample property rights


Expanding solar energy to rely less on oil, gas and other nonrenewable resources is an almost universal goal, regardless of one’s political persuasion.  Indeed, with growing concerns about climate and the economic and even national security implications of relying on nonrenewable and oftentimes foreign energy sources, it makes sense to look at solar as a viable means to power more of society’s needs.

But as noble as the expansion of solar energy might be, its pursuit should never infringe on individual rights, as some local governments appear to be doing. For example, the City of South Miami is considering an ordinance that would require installation of solar panels on all newly constructed homes, as well as older homes whose owners elect to renovate 50 percent or more of the square footage.

Indeed, although the cost of solar-energy-generating devices has dropped in recent years, they still remain cost-prohibitive for most. This ordinance would not only increase the price of homes in a city where the cost of living is already well above the national average, but may actually serve as a disincentive to existing homeowners who wish to make their older homes more energy efficient. Residents who might otherwise consider remodeling their homes with energy-efficient doors, windows, roof shingles, insulation and appliances may think twice if they are also forced to purchase expensive solar panels.

But even that is not the point.

This is a clear and egregious example of government trampling on individual property rights. Local and state authorities can and should develop building codes to ensure safety; Miami-Dade County already has a strict building code due to its vulnerability to hurricanes. However, residents should not be forced to purchase an expensive product that serves no health or safety purpose as a condition to develop or improve their own properties, just so politicians can feel good about themselves.

It is fair to debate how to expand solar-energy production and who should pay for it. Should government subsidize research? Should government grants or tax credits be offered to entice individuals to install solar panels? Should utility companies purchase excess power generated by privately owned solar devices?

These are all relevant public-policy issues that well-intended people with differing opinions can debate, and they all revolve around the notion that solar-device installation is a choice, not a mandate. Government should not pick one industry over another through subsidies or unfair incentives or penalties. Allowing energy producers to compete on a level playing field will encourage them to innovate and make their products more efficient and thus more economically viable over time.


Image by ND700

 

What’s in the FY2018 House legislative branch appropriation?


The House Appropriations Committee approved the Fiscal Year 2018 legislative branch appropriations bill via a June 29 voice vote. The bill calls for $3.58 billion in funding for House and joint-chamber operations (Senate-specific items are not included), a full $100 million more than the enacted FY2017 funding levels. It should be noted, however, that the FY2018 appropriation remains much lower than the FY2010 appropriation.

On the same day, the committee released a full report explaining the appropriating rationale.

What is actually included in the bill? Who won and who lost the funding battles?

Big Winners

Security: In light of the recent shooting of Rep. Steve Scalise, R-La., staffer Zachary Barth, and Capitol Police officers Crystal Griner and David Bailey, the committee clearly saw a need to boost various forms of security for members and the government. The Capitol Police received an increase of $29 million, the House sergeant-at-arms budget was upped $5 million to $20.5 million, and $10 million of that was itemized to enhance the cybersecurity program of the chief administrative officer (CAO).

Architect of the Capitol (AOC): The stewards of the capitol complex, from building maintenance to landscaping, received a $48.4 million increase in funds over FY2017 enacted levels. The committee instructed the AOC to spend the appropriated $577.8 million on efforts that “promote the safety and health of workers and occupants, decrease the deferred maintenance backlog, and invest to achieve future energy savings.”

Library of Congress (LOC): For FY2018, the Library of Congress’ appropriations were increased $16.9 million to $648 million in an effort to modernize information technology and copyright efforts, as well as provide more funds ($3.5 million) to Congress’ nonpartisan think tank, the Congressional Research Service (CRS). Additionally, $29 million of the AOC’s appropriation was itemized for improvement and maintenance of LOC buildings and grounds.

Transparency: After years of debating the issue, the appropriators directed CRS to make all of its nonconfidential reports available to the public. The agency was given 90 days to submit an implementation plan, including cost estimates, to its oversight committees.

Big Losers

House Office Buildings: Despite the AOC receiving a sizable bump in appropriations, the amount allocated for the maintenance and care of the four House office buildings initially was cut by $27.4 million from FY2017 levels. A later voice-vote amendment restored $4 million, leaving the cut at $23.4 million.

Members’ Representational Allowance (MRA): The funding stream that allows members of the House to hire more staff and better compensate current congressional aides remained at FY2017 levels ($562.6 million). “This level of funding will allow the MRAs to operate at authorized levels as approved by the Committee on House Administration,” declared the committee. What it will not do is reverse the long decline in congressional staff levels and salaries.

Government Accountability Office (GAO): Though not an outright cut, GAO was granted only a $450,000 bump in funds, despite requesting a $46 million increase over FY2017 enacted levels. The agency sought the substantial increase for FY2018 chiefly to add staff in order to reduce the amount of improper governmental payments, identify ways to close the gap between taxes owed and taxes paid, and assist Congress in determining “policy implications of increasingly complex and rapidly evolving development of science and technology.” In contrast to the comparatively larger increases enjoyed by sister agencies CRS ($3.5 million) and the Congressional Budget Office ($2 million), GAO’s appropriation remained relatively flat at $568 million.

Legislative Branch Appropriation Bill Specifics

Capitol Police: FY2018 funding levels increased $29 million to $422.5 million, including an increase of $7.5 million to “enhance Member protection, increased training, equipment and technology-related support items”; an increase of $13.2 million for Capitol Police buildings and grounds; and half-year funds to hire 48 additional sworn officers.

House Sergeant-at-Arms: An increase of $5 million with the “intent of enhancing security for Members when they are away from the Capitol complex. The Committee is aware that a specific plan is still evolving and once fully developed a plan will be presented to the Committee.”

Members’ Representational Allowance (MRA): Though the MRA remains at FY2017 levels ($562.6 million), “the Committee has provided resources necessary to support the Committee on House Administration’s plan to increase Member’s Representational Allowance (MRA) by $25,000 per account this year for the purpose of providing Member security when away from the Capitol complex.”

Chief Administrative Officer (CAO): The CAO received an additional $10 million for strengthened cybersecurity measures. Additionally, the committee suggested that “with effective management of the program and continued support in appropriations, sufficient funding exists” to increase the number of two-year fellows partaking in the CAO’s Wounded Warrior Program from 54 to 85.

House Leadership Offices: FY2018 funding levels remained constant at $22.3 million.

House Committees: Appropriations for the salaries and expenses of House committees decreased by $45,004, from $150,324,377 in FY2017 to $150,279,373 for FY2018.

Joint Committees: The Joint Committee on Taxation received an increase of $360,000 to $10.46 million, while the Joint Economic Committee’s funding remained at $4.2 million.

Congressional Budget Office (CBO): Funding levels increased $2 million, from $46.5 million in FY2017 to $48.5 million for FY2018.

Architect of the Capitol (AOC): FY2018 funding levels increased $48 million to $578 million, including a $12.7 million increase for care and maintenance of the U.S. Capitol; a $20 million increase in funding for the Capitol Power Plant; a $29 million increase for Library of Congress buildings and grounds; and a decrease of $27.4 million for House office buildings maintenance.

Congressional Research Service (CRS): Funding levels increased $3.5 million from $108 million in FY2017 to $111.5 million for FY2018.

Government Publishing Office (GPO): FY2018 funding levels remained constant at $117 million.

Office of Compliance: FY2018 funding levels remained flat at $3.6 million.

Amendments 

Two amendments to the FY2018 legislative branch appropriations bill were adopted by the Appropriations Committee.

  1. The manager’s amendment from Rep. Kevin Yoder, R-Kan., added $4 million to House office building maintenance. Instead of a decrease of $27.4 million, the amendment makes the decrease $23.4 million.
  2. Rep. Barbara Lee, D-Calif., sponsored an amendment directing the CAO to submit a report to the committee within 90 days “addressing the ways in which Members and staff who have hiring and management responsibilities can be given the tools to combat unconscious bias in hiring and promotion, and with education on the negative impact of bias.”

Image by Golden Brown

 

States still stuck when it comes to pension plan fixes


I spoke recently with Bill Howell, the longtime speaker of Virginia’s House of Delegates. While he is not standing again for election, he is the kind of person who wants to use the last portion of his authority with the state government to work on the most important issues facing his state.

Number one on his list is pension reform. Nobody will be able to pin on him the consequences of inaction today or the failure of an unsustainable system over time. Making the choice to spend the last months of his time in office with a virtual shovel on his shoulder is leadership one doesn’t see much across the Potomac these days.

Other places will certainly provide “canary in the coal mine” warnings about the fiscal challenges of our retirement security system, but our political system and culture are generally less responsive to these kinds of virtually certain problems than they are to perceived future environmental hazards. As one example, due in large part to the one-child policy instituted in 1979, China is now contemplating the “4:2:1” situation of one grandchild in the workforce struggling to support two parents and four grandparents. For perspective, China is physically roughly the same size as the United States, with more than four times its population. That country alone is projecting a population over age 60 of more than 300 million people by 2024. The pressure on offspring to care for this number of elderly is mirrored in public programs.

Somewhere in my files is a page of dates in the not-so-distant future that represent each state’s technical bankruptcy, if something isn’t done in the meantime to alter the math. There is also Medicaid, of course, the budget issue du jour, but these dates are only based on pensions and state employee health care. In those jurisdictions where local governments participate in the state systems, their figures are included.

Pennsylvania is a good example of the political and financial pressures on governments to keep promises to their employees. Having barely celebrated passage of needed reforms a few days ago, there is already serious discussion of allowing the state to borrow the money it just required itself to put aside to fund the reforms.

Not even a month ago, Pennsylvania lawmakers enacted bipartisan legislation that required them to fully fund the employer (state) share of their defined contribution plan. When Gov. Tom Wolf signed the bill his public comment was: “Here in Harrisburg we can get important things done in a way that I think a lot of other places cannot.”

The new law provides that only hazardous-duty state employees, such as law enforcement, will stay eligible for the once-ubiquitous defined benefit plans that defined public pensions for decades but have been mostly phased out in the private sector. Both state and school employees who start jobs in 2019 will have three retirement options, and current employees will have to choose one, as well. Two of the new plans combine features of a guaranteed pension amount with an investment vehicle similar to private-sector plans. The third is a full defined contribution plan like a 401(k) plan, where the state pays 2 percent of salaries into the plan for school employees and 3.25 percent for other state workers to match their 7.5 percent minimum contributions.
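To put those percentages in concrete terms, here is a minimal sketch of the annual contribution split under the defined contribution option, assuming a hypothetical $50,000 salary (the salary figure is an illustration, not a number from the Pennsylvania law):

```python
# Rough illustration of the annual defined-contribution split described above.
# The $50,000 salary is a hypothetical assumption, not a figure from the law.

def dc_contributions(salary, employer_rate, employee_rate=0.075):
    """Return (employer, employee) annual dollar contributions."""
    return salary * employer_rate, salary * employee_rate

salary = 50_000
school = dc_contributions(salary, 0.02)    # school employees: state pays 2 percent
state = dc_contributions(salary, 0.0325)   # other state workers: state pays 3.25 percent

print(f"School employee: state ${school[0]:,.0f}, worker ${school[1]:,.0f}")  # $1,000 / $3,750
print(f"State worker:    state ${state[0]:,.0f}, worker ${state[1]:,.0f}")    # $1,625 / $3,750
```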

Now there are rumblings that the state will authorize—as Illinois and other states with shaky financials have—sales of pension obligation bonds to lay against a portion of its share. It is theoretically possible for the invested bond proceeds to earn a return greater than the interest owed on the bonds, but successes are few, and the risk to future workers and taxpayers is accordingly great. Both Illinois and New Jersey have sold billions of dollars of pension obligation bonds. This year, 80 percent of the money paid out by Illinois for state teacher pension payments is going toward the unfunded liability. The state has never paid its full share, according to the Teachers’ Retirement System. Racking up long-term losses on these instruments, Illinois jacked up its income tax by 66 percent in 2011, and enacted another 32 percent increase over Gov. Bruce Rauner’s veto this past week. These are not unrelated stories.
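To see why the arbitrage is risky, consider a minimal sketch with illustrative numbers; the principal, coupon and return figures below are hypothetical assumptions, not actual Illinois or New Jersey data:

```python
# Hypothetical pension obligation bond arbitrage: borrow at a fixed rate, invest
# the proceeds and hope the market beats the debt service. All figures are
# illustrative assumptions, not actual state numbers.

principal = 1_000_000_000   # $1 billion bond issue
bond_rate = 0.05            # 5 percent owed on the bonds
years = 10

for market_return in (0.07, 0.03):  # one optimistic decade, one weak decade
    assets = principal * (1 + market_return) ** years
    debt = principal * (1 + bond_rate) ** years
    outcome = "gain" if assets > debt else "loss"
    print(f"{market_return:.0%} return: {outcome} of ${abs(assets - debt) / 1e9:.2f} billion")
```

If returns fall short of the borrowing rate, the shortfall lands on future taxpayers on top of the original unfunded liability.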

New Jersey has suffered the indignity of being sued by federal regulators for securities fraud in its pension bond sales. The Garden State’s pension system was rated dead last among the 50 states in the most recent Pew Charitable Trusts national study. State workers have been paying in higher amounts since 2011 reforms, but the state has not kept up its commitment. Ironically, the latest reform proposal for the worst-funded pension system among the states is to give it the billion-dollar lottery. This would increase the funded rate immediately to 65 percent – a dramatic improvement. If there is a better metaphor for a New Jersey solution, I don’t know what it would be. People in the Garden State will be encouraged to keep on gambling.

Pennsylvania should stay the course, and allow the reforms to nudge the retirement plans for state workers and teachers back toward stability.


Image by Aaban

 

Carbon tax versus clean tax cuts policy wonk rumble

Back in April, R Street Energy Policy Director Josiah Neeley moderated a panel at Earth Day Texas in Dallas. Billed as a “Policy Wonk Rumble,” the panel compared the merits of different ways to use the tax code to encourage clean energy and reduce greenhouse-gas emissions. Also featured on the panel were Peter Bryn of Citizens Climate Lobby, Travis Bradford of Columbia University, Rob Sisson of ConservAmerica, and Rod Richardson of the Grace Richardson Fund.

The future of aviation demands privatized air-traffic control


American air-traffic control is safe, but as currently constituted, the system won’t be able to keep up with the increasing demand for domestic and international air travel. To ensure the Federal Aviation Administration can continue to modernize and operate efficiently, free of budget uncertainty and political interference, air-traffic control should be turned over to an independent nonprofit corporation, as proposed by H.R. 2997, the 21st Century Aviation Innovation, Reform, and Reauthorization Act.

From 1996 to 2012, the FAA’s budget doubled, even though staff levels stayed roughly constant and the agency’s productivity actually fell. A 2016 inspector-general’s report found that, of the FAA’s 15 most recent major system acquisitions, eight had gone over budget by a total of $3.8 billion and eight were behind schedule by an average of more than four years. These sorts of problems illustrate the difficulties the FAA faces in adapting to new market conditions driven by higher and more complex demand.

The 21st Century AIRR Act—sponsored by Rep. Bill Shuster, R-Pa., chairman of the House Transportation and Infrastructure Committee, which cleared the bill June 27 in a 32-25 vote—would assign oversight of America’s air-traffic control system to a new nonprofit corporation, with a CEO who is answerable to a board of directors made up of “a diverse cross-section of the aviation system’s stakeholders and users.” The act would refocus the FAA on federal safety oversight and streamline the FAA certification process, making it easier for companies to get their products out on time. This would encourage innovation in aviation technology by lowering the cost of implementation.

The proposal has support from President Donald Trump, who included a version of it in his proposed FY 2018 budget. As the National Taxpayers Union Foundation detailed in a recent piece, “the budget forecasts that taxes would be reduced by $115 billion from FY 2021 to FY 2027. The FAA’s budget for ATC would be reduced by $70 billion, leaving the agency to focus on regulating aviation safety.”

[Chart: projected FAA air-traffic control budget under the FY2018 proposal]

But the measure also faces pushback from a variety of aviation interests. They prefer the Senate’s FAA reauthorization bill from Sen. John Thune, R-S.D., which does not include air-traffic control privatization. The Shuster proposal should be considered commonsense legislation, not only cutting government waste but making the world a little bit safer. Let’s hope it moves on the House floor soon.


Image by Stoyan Yotov

 

Private flood insurance should be allowed to compete on a level playing field


Since 1968, the National Flood Insurance Program (NFIP)—in a well-intentioned but ill-designed effort to help home and business owners in flood-prone regions—has provided flood insurance at below-market rates. Predictably, the program has racked up a significant amount of debt, discouraged private competition and innovation and distorted consumers’ ability to calculate the risk of living and building in flood-prone areas.

As Congress considers NFIP reauthorization this summer and fall, lawmakers ought to implement structural reforms that will benefit both insurance consumers and the American taxpayers.

It is a well-known economic adage that “if you subsidize something, you get more of it.” In this case, the NFIP’s practice of subsidizing insurance premiums for high-risk areas has created a moral hazard problem where the government insurance program actually encourages higher levels of risk-taking. This has turned out to be quite costly for the American taxpayer, as the NFIP is now over $25 billion in debt to the U.S. Treasury. The Government Accountability Office has found the program is unlikely ever to generate enough revenue to cover its costs, exposing the federal government to further financial risk.

Yet, the subsidies keep flowing to areas where floods are common, and where it may not otherwise be cost effective to rebuild. There is no better evidence that the NFIP is encouraging risk than the fact that 25-30 percent of flood insurance claims in the NFIP system are generated by a mere 1 percent of properties that have government-backed insurance. This distortion of risk will continue to make the program fiscally unsustainable until the government ceases to offer insurance premiums at significantly below-market rates.

Unsurprisingly, regulations on what kinds of private market insurance lenders can accept, along with the subsidized rates, historically have made it difficult for insurance companies to offer competitive flood insurance plans. Private companies do not have the luxury of losing $25 billion. Though previous reforms sought to level the playing field and move the NFIP toward risk-based rates, unclear language has continued to stymie private market development, limiting choice for consumers and putting taxpayers at continued risk. Among the issues that put private entities at a disadvantage is that NFIP policyholders who make the switch to private insurance are not considered to have continuous coverage, and therefore may have to pay significantly more should they ever decide to switch back.

Congress should look to Florida as an example of how to salvage a failing insurance system. Before the state enacted reforms in 2010, Florida’s public insurance program, Citizens Property Insurance Corp., was fiscally unsound and Florida taxpayers were exposed to high levels of risk in the event of another hurricane. State lawmakers incrementally raised premiums to be in line with market rates and allowed private companies to assume many of the policies previously written by Citizens. As they did, the fiscal burden shifted from taxpayers to private entities.

The Flood Insurance Market Parity and Modernization Act, submitted in both the House and the Senate, would be an important first step to enable private market insurance to compete on a level playing field with government insurance. It would clarify federal lending rules, allow insurers who participate in the NFIP’s Write Your Own program to also underwrite private flood insurance and end the practice of penalizing those who choose to purchase private coverage. It would also further the move toward a less distorted system and thus shift some of the burden off the taxpayers.

Despite passing the House unanimously in 2016, and passing the House Financial Services Committee unanimously last month as part of its package to reauthorize the NFIP, the bill has not yet moved in the Senate. The Senate Banking Committee should take a lesson from its House colleagues and include this important clarification in its own legislation to reauthorize the NFIP. Failing to do so would only ensure that, for many years to come, American homeowners will continue to be at the mercy of a failing government program, all on the taxpayers’ dime.


Image by humphery

Pollock before the Subcommittee on Monetary Policy and Trade

R Street Distinguished Senior Fellow Alex Pollock testifies before the House Financial Services Committee’s Subcommittee on Monetary Policy and Trade in a June 28 hearing on “The Federal Reserve’s Impact on Main Street, Retirees and Savings.”

Coppage at R Street-CNU event in Salt Lake City

The R Street Institute recently co-hosted an event in Salt Lake City with the Utah chapter of the Congress for the New Urbanism on how to make both housing affordability and strong communities possible in a red-state boom town like Utah’s capital. Alongside Sutherland Institute Director of Public Policy Derek Monson and Heath Hansen, a staffer to Sen. Mike Lee, R-Utah, R Street Visiting Fellow Jonathan Coppage reviewed the need to allow for small solutions to big problems, such as relegalizing accessory dwelling units and “missing middle” housing forms.

The downsides of using executive agency detailees

In a previous post, I recounted the advantages of using executive detailees as a means to combat staffing shortages on Capitol Hill. In short, agency detailees can serve as a free source of policy expertise to Congress, providing committees with experience and insight into agency decisionmaking and likely responses to congressional actions.

But, as with all governing arrangements, executive-branch detailees are not always an unalloyed good. Detailees, as some Hill veterans will explain, can come with costs.

  1. Detailees can have divided loyalties

Detailees can have a hard time shedding their agency allegiances, ultimately resulting in divided loyalties between their parent agency and their new congressional committee. These allegiances may be unconscious byproducts of spending a career in the executive branch.

Other agency employees, however, may have more deliberate congressional prejudices. Such detailees view Congress and its committees as institutions unfamiliar with the intricate inner-workings of their agency, and ones attempting to encroach on their expertise and operations with new laws and a constant barrage of oversight information requests. In these instances, detailees may struggle to work in support of the institutional interests of Congress.

  2. Detailees can have fixed policy preferences

Relatedly, borrowed agency employees may bring with them explicit policy preferences, often within specific issue areas they handled within their parent agency. Serving as a policy expert on a relevant committee may provide an opportunity to grind such a policy ax and, in turn, warp the policymaking processes within their new committee.

  3. Detailees often need training

Detailees are often unfamiliar with the legislative process and require basic training in congressional procedures once they get to the Hill. Given that committee resources are already severely strapped, providing such training further saps the time of permanent committee staff.

The time and resources spent bringing detailees up to speed on the ways of the Hill can result in a small return on the investment for Congress. What’s more, because detailees are loaned out for a limited time—often a year or less before returning to the executive branch—a constant cycle of orientation, training, working and departing can develop where very little time is spent on intricate policymaking.

  4. Detailees can mute the call to increase staffing capacity

A growing dependence on detailees as a means to compensate for decreasing congressional capacity may prompt some to argue that increasing the number of permanent congressional staff isn’t necessary. Detailees are seen by some as capacity Band-Aids covering up the more threatening conditions of limited expertise and too few staff in Congress. Increasing committee reliance on their use may perpetuate a situation of inadequate congressional staffing levels.

Agency detailees can be a source of policy expertise for congressional committees, but their contributions can’t be assumed. Detailees, themselves, can be a drain on the already limited capacity of Congress, and ultimately make Congress less effective, less productive and more susceptible to outside influence.

FDA misinterprets massive victory on teen smoking


As detailed this morning by the Food and Drug Administration, cigarette smoking by U.S. high school students has been cut in half since 2011—from 15.8 percent to 8.0 percent—a remarkable and previously unanticipated public health victory.

Unfortunately, it appears federal authorities may be misattributing the cause. In his announcement earlier today, FDA Commissioner Scott Gottlieb attributes most, if not all, of this reduction in smoking to a federally sponsored program that has only been in place since 2014. Despite substantial evidence in federally sponsored surveys in the United States and abroad showing that remarkable reductions in teen and adult smoking have been concurrent with the increasing popularity of e-cigarettes, the FDA announcement makes no reference to the possibility that much, if not most, of the recent reductions in teen smoking may be attributable to e-cigarettes.

In fact, Gottlieb urges continuing efforts to reduce teen use of all nonpharmaceutical nicotine delivery products, while endorsing expanded efforts at smoking cessation that rely on the pharmaceutical nicotine gums, patches and other products that have proved to be of only marginal effectiveness over the past four decades.

This public health victory is too important to leave to chance and guesswork. If Commissioner Gottlieb has evidence to support the claim that The Real Cost campaign “has already helped prevent nearly 350,000 kids from smoking cigarettes since it launched in 2014,” he should present it to the public. Regulators and public health authorities also should present and discuss the evidence for and against the possibility that the availability of e-cigarettes and related vapor products may, in fact, have played a major role in securing these reductions in smoking.

This is not an academic question. Recently promulgated regulations from Gottlieb’s own FDA threaten to eliminate more than 99 percent of e-cig products from the marketplace before the end of 2018, including all or almost all of the vape-shop component of this industry. The limited data available strongly suggest that the vape-shop products—with their ability to customize devices, flavors and strengths of nicotine to satisfy the preferences of each smoker, and to modify the flavors and strength of nicotine over time to prevent relapse to cigarettes—may be more effective than the mass-market products in achieving and maintaining reductions in smoking in both youth and adults.


Image by Sabphoto

 

Harm reduction is about making better choices, not perfect ones


Dr. Mark Boom, president and CEO of the Houston Methodist hospital system in Texas, suggests in a recent piece in The Hill that proponents of vaping are simply ignoring evidence that vapor products are not 100 percent safe.

Of course, people in the vaping community do not think that e-cigarettes are 100 percent safe. Nor does anyone want these products to increase the incidence of teen smoking of combustible cigarettes.

However, Boom appears to misunderstand the philosophy of harm reduction. Boom no doubt would encourage his patients who use intravenous drugs to, at the very least, use clean needles, rather than sharing. If he did not, he would be grossly abusing his privileged position as a healthcare authority. Similarly, applying a harm reduction philosophy by encouraging smokers to switch to e-cigarettes could save the vast majority of the 480,000 lives taken by combustible cigarettes every year.

As Boom rightly points out, e-cigarettes do, in fact, contain toxins. These are, however, at a very low concentration in the excipients – the substances that make up the aerosol suspension that delivers the active ingredient, nicotine. What he neglects to add is that the excipients in nicotine liquid are strikingly similar to those in asthma inhalers. We certainly wouldn’t suggest that an asthma patient forgo their medication because they are also inhaling toxins.

As a pharmacologist, I would encourage every person who ingests toxins to stop doing so. Of course I would. But my years in addiction research have made clear that you cannot simply tell someone to not pick up that cigarette, syringe or beer. Until that is possible, we have to encourage people to make better choices – which, unsurprisingly, is very easy to do.

When people do things we don’t approve of, we often write them off as not caring about their own health or personhood. But having worked at community organizations that distribute clean needles to curb transmission of infectious disease, naloxone to reverse overdoses and HIV drugs to prevent new infections, I have seen that people do recognize the risks they take every day and embrace opportunities to reduce the consequences associated with risky behaviors.


Image by Grey Carnation

 

Setting the record straight on copyright modernization


There’s a lot to be said for the adage that “we shouldn’t let the perfect be the enemy of the good.” While true in many situations, it also requires that there be enough “good” to justify the effort, rather than wasting energy better deployed doing something else.

In a recent blog post on Truth on the Market, Kristian Stout of the International Center for Law and Economics takes issue with my framing of a bill that would require the register of copyrights—the person who heads the Copyright Office within the Library of Congress—to be a presidential appointment. I should add the proposal comes during a time when President Donald Trump is considerably behind in selecting and confirming his appointees to a broad range of executive branch positions.

Unfortunately, Stout mischaracterizes and misreads my position. In my TechDirt piece, I described both points of view about the bill, writing that “opponents argue the bill will make the register and the Copyright Office more politicized and vulnerable to capture by special interests.” Stout takes this out of context and represents it as my position, rather than a description of what others have said.

There are a number of other issues with Stout’s piece, not all of which are worth addressing. But I will tackle the main ones.

It’s true, as Stout claims, that the idea for making the register a nominated and confirmed position has been under discussion for several years as part of the House Judiciary Committee’s copyright review, but so were a lot of other things that didn’t come to fruition. My point is not that this idea is totally new, but that the impetus for the bill to be rushed through now is motivated by the political dynamic between Congress and Librarian of Congress Carla Hayden, as well as her removal last year of then-Register of Copyrights Maria Pallante. Stout attests Hayden’s nomination was not politicized, when in fact, it was. The Heritage Foundation, among other conservative groups, argued against her confirmation. Heritage Action even urged senators to vote “no” on her nomination, a position with which we disagreed.

To set the record straight — I don’t think it’s a terrible bill. As I’ve argued in TechDirt and The Hill, there are some reasonable arguments in its favor. There are also some plausible arguments against it. I simply don’t think it does much to move the ball either way.

The main point of the bill, according to many of its proponents, would be to make the Copyright Office position more politically accountable. In theory, with congressional input, stakeholders on all sides would have an opportunity to weigh in on who gets confirmed for the position. This could limit edge cases where there is a truly awful candidate. But the Senate rarely, if ever, rejects presidential appointments who are otherwise broadly qualified — particularly for what is not a Cabinet-level position. And there wouldn’t be many groups capable of mounting a successful opposition fight over this position, as they might over a Supreme Court seat (even then, it’s rarely the primary factor). Even for Heritage, likely the most powerful conservative group in Washington, key-vote scoring against Hayden in a Republican-controlled Senate only got them 18 votes.

This, in itself, is not much of a justification for a bill.

One of the key points of Stout’s argument for the legislation is that: “Separating the Copyright Office from the Library is a straightforward and seemingly apolitical step toward modernization.” But changing who appoints the register shouldn’t be conflated with separation or modernization. Indeed, the librarian of Congress still has final authority over all of the office’s substantive regulatory powers. Changing who picks the register also has nothing to do with meeting the challenges of modernizing the office’s information technology infrastructure. If an independent office is what you want, this bill isn’t that.

For the record, we at R Street are not necessarily opposed to an independent (or relocated) Copyright Office. Some scholars, including former Register Pallante, make a plausible case that the systemic bureaucracies of the Library are part of what’s holding the Copyright Office back. But it’s also hard to separate the Library’s well-documented IT problems from the decadeslong tenure of the previous librarian, James Billington. Additionally, there are IT modernization challenges at every level of the federal government, including independent agencies, and it may be worth giving the new librarian a chance to fix them.

At heart, the location of the Copyright Office is a complex question of public administration that is worthy of deep consideration and review. An immediate step I have suggested in conversations with colleagues is to have Congress ask the National Academy of Public Administration to conduct a review of the internal structural challenges of the Library and its component agencies (as it did for the PTO in 2005). This would inject a much-needed dose of objectivity into a discussion that has unfortunately served as another proxy battle between the entrenched sides of the intellectual property debate.

In his conclusion, Stout makes an excellent point: “Sensible process reforms should be implementable without the rancor that plagues most substantive copyright debates.” I agree. Regardless of how strong you think our nation’s copyright laws ought to be, you should be in favor of making the system’s core functions work better. This bill will do little, if anything, to advance that goal. I look forward to working with stakeholders on all sides, including Stout, to find solutions that do.


Image by Jirsak

 

PACE Act would prosecute teen sexting as kiddie porn


Crimes against children, particularly those that involve sexual exploitation, are beyond the pale. But while society needs to make sure it protects children from sexual abuse, recent legislation passed by the U.S. House could cause more problems than it solves – hurting minors, expanding mandatory minimums and creating redundant federal authority where similar laws already exist at the state level.

By a 368-51 margin, the House voted May 25 to approve H.R. 1761, the Protecting Against Child Exploitation (PACE) Act of 2017. The bill is intended to strengthen federal laws dealing with the production and distribution of child pornography by making the transmission of sexual images of minors a federal crime. The measure has moved on to the upper chamber, where it will be considered by the Senate Judiciary Committee.

While the bill’s purpose is to punish child predators, its unintended consequence will be to create more criminals out of teenagers whose main crime is simply lacking common sense.

As written, the law could apply to minors who send sexual images to other minors, or what is commonly referred to as “sexting.” The House-passed bill provides no exemption or provision to deal with minors who engage in sexting, meaning they could be subject to a mandatory minimum sentence of 15 years in prison and lifetime registration as a sex offender. Because of how broadly the text is written, even a teenager who merely views a sexual image or requests that one be sent could be subject to the mandatory minimum.

Sexting among teenagers increasingly has become the norm. While the phenomenon is worth a larger discussion, most would agree that locking teenagers up for 15 years is not the best way to handle the situation. Few believe these minors are committing crimes on a par with actual child predators. They should not be treated the same way under the law.

Teenagers are still minors in the eyes of the court. By creating an inflexible law that cannot take into account the ages of those involved, the law will force the courts to punish minors for having poor judgment. For numerous other crimes, the court system is purposely designed differently when it comes to how and whether to prosecute and sentence minors. Judges are given more tools to keep them out of jail and without criminal records. By retaining local jurisdiction, communities can respond more effectively to offenders and victims, as well as to the community at large. Child pornography laws should protect children from terrible acts, not punish teenagers for lapses in judgment.

Such concerns could have been addressed in the PACE Act, were it not for pure laziness on the part of the House of Representatives. The bill was passed without any hearings or input from experts, and approved as members fled Washington for their Memorial Day recess. The American people deserve better than that.

There is still hope that the Senate will take notice of these issues. Law enforcement at both the state and federal level already have multiple tools at their disposal to prosecute child predators. This expansion of federal power is nothing but Congress creating a solution to a problem that did not exist.


Image by nito

 

Dual-class shares and the shareholder empowerment movement


The shareholder empowerment movement has renewed its effort to eliminate, restrict or, at the very least, discourage use of dual-class share structures—that is, classes of common stock with unequal voting rights—in initial public offerings. Of particular interest to the movement, which is made up primarily of public pension funds and union-related funds that collectively hold more than $3 trillion in assets, was the recent Snap Inc. IPO that sold nonvoting stock to the public, a first for IPOs with dual-class shares.

Typically, a company will issue a class of common stock (“ordinary shares”) to the public that carries one vote per share, as Facebook Inc. did in its IPO, while reserving a separate “super-voting” class that provides founders like Mark Zuckerberg with at least 10 votes per share. This structure allows the founders to maintain control of the company without having to own the majority of outstanding common stock.
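A simple sketch shows how the math works. With hypothetical share counts (not any particular company’s capitalization), a founder holding 15 percent of the shares outstanding can command nearly two-thirds of the votes:

```python
# Hypothetical capitalization showing how a 10-votes-per-share class preserves
# founder control; the share counts are illustrative, not any real company's.

founder_shares = 15_000_000   # super-voting class, 10 votes per share
public_shares = 85_000_000    # ordinary shares, 1 vote per share

founder_votes = founder_shares * 10
total_votes = founder_votes + public_shares * 1

economic_stake = founder_shares / (founder_shares + public_shares)
voting_power = founder_votes / total_votes

print(f"Founder economic stake: {economic_stake:.0%}")  # 15%
print(f"Founder voting power:   {voting_power:.0%}")    # 64%
```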

Even though it offered no voting rights in the shares sold to the public, the Snap IPO was a huge success. Snap priced its IPO at $17 per share, giving it a market valuation of roughly $24 billion. The book was more than 10 times oversubscribed, and Snap could have priced the offering as high as $19 per share.

The Council of Institutional Investors, the trade organization that represents the shareholder empowerment movement, has asked the S&P Dow Jones Indices, MSCI Inc. and FTSE Russell to exclude Snap Inc. and other companies with nonvoting stock from their indices unless they include extremely restrictive provisions, such as maximum sunset provisions—triggers that would terminate the super-voting characteristics of the founders’ shares—of three to five years. Moreover, consistent with the CII’s general policy, the letters the council sent also advocate for a forced conversion of all dual-class share structures to one-share, one-vote, unless the majority of ordinary shares vote to extend the dual-class structures for a maximum of five years.

The movement’s advocacy is not confined to those IPOs with dual-class shares listed on the U.S. stock exchanges. It also is attempting to persuade the Singapore stock exchange not to allow dual-class share structures of any kind.

If the movement is successful, this shift would not be trivial, as many of our most valuable and dynamic companies have gone public by offering shares with unequal voting rights. Besides Snap and Facebook, other companies that have gone public with dual-class shares include Alphabet Inc. (Google); LinkedIn (acquired by Microsoft for $26 billion in 2016); Comcast; Zoetis Inc.; Nike, Inc.; and Alibaba Group Holding Ltd. Two of these companies, Alphabet and Facebook, rank in the top 10 in the world based on market valuation. Berkshire Hathaway Inc., which also uses a dual-class share structure, likewise ranks in the top 10, although it only adopted the structure after Warren Buffett bought control of the company.

Public companies with dual-class share structures have an aggregate market value of close to $4 trillion. As reflected in their market valuations, they are some of our most important companies, helping to fuel the growth of the economy.

The movement’s vigorous response to Snap’s hugely successful IPO was unsurprising. The CII, since its founding in 1985, has promoted a “one-share, one-vote” policy as one of its bedrock principles. But this policy of “shareholder democracy” should not be confused with political democracy, where each person gets one vote. In shareholder democracy, voting power is assigned according to property ownership – i.e., how many shares the person or entity owns. Dual-class share structures clearly violate the CII’s policy of shareholder democracy and are an obvious threat to the movement’s power. That is, the more public companies that utilize a dual-class share structure, the more controlled companies exist and the less power the movement has.

Most importantly, the movement’s advocacy comes into strong conflict with what many believe to be the great strength of our system of corporate governance: the private ordering of corporate governance arrangements, with dual-class share structures being an optimal result of that ordering. Consistent with this understanding, NASDAQ Inc. recently declared:

One of America’s greatest strengths is that we are a magnet for entrepreneurship and innovation. Central to cultivating this strength is establishing multiple paths entrepreneurs can take to public markets. Each publicly-traded company should have flexibility to determine a class structure that is most appropriate and beneficial for them, so long as this structure is transparent and disclosed up front so that investors have complete visibility into the company. Dual class structures allow investors to invest side-by-side with innovators and high growth companies, enjoying the financial benefits of these companies’ success.

At its core, the shareholder empowerment movement advocates shifting corporate decision-making authority to shareholders, and thus away from boards of directors and executive management, the most informed loci of corporate authority. Shareholder empowerment, not maximizing shareholder wealth, is the movement’s objective. This movement must be stopped from opportunistically interfering with the use of dual-class share structures in IPOs.


Image by create jobs 51

 

Lehmann before the House Financial Services Committee

R Street Senior Fellow R.J. Lehmann testifies before a June 7 hearing of the House Financial Services Committee on “Flood Insurance Reform: A Taxpayer’s Perspective.”

How Congress became colonized by the imperial presidency


Ever since Arthur Schlesinger’s 1973 book coined the phrase, the so-called “imperial presidency” has been a perennial topic of our national political discourse. At a time when the American branches of government are separate but unequal, the seven essays collected in The Imperial Presidency and the Constitution trace when fears of an imperial presidency first arose, the extent to which such fears are justified and what can be done about it.

Adam J. White’s contribution, “The Administrative State and the Imperial Presidency,” cautions not to conflate the “imperial presidency” with the administrative state itself. As White points out, the administrative state is “first and foremost a creation of Congress,” and “to at least some extent, a necessary creation.”

By contrast, the imperial presidency refers to the power the president wields through his office. While this power can be channeled and enhanced through the apparatus of the administrative state, an imperial presidency also “can restrain the administrative state, as in the Reagan administration … and, less obviously, the administrative state can restrain an imperial president.”

In modern times, of course, the power of the presidency and the administrative state have grown in tandem. “The president wields executive power broadly to expand the administrative state, and the administrative state acts in service of the current president’s agenda,” White writes.

After various failed attempts by Congress itself to act as an administrative body during the Articles of Confederation era, the U.S. Constitution provided for an energetic executive, which Alexander Hamilton described as “essential to the steady administration of the laws.” Despite this, the Constitution offered little in the way of an affirmative vision of the administrative bureaucracy, an omission some scholars have referred to as “the hole in the Constitution.”

Although there were earlier antecedents, Congress’ creation of the Interstate Commerce Commission in 1887 marked the modern administrative state’s arrival. Over time, the ICC’s powers were enhanced by Congress to encompass both judicial and legislative functions, given its ability to both set rates and adjudicate disputes. During the Progressive Era and through the New Deal, more administrative agencies were built on the ICC model, including the Federal Trade Commission and Federal Communications Commission.

Importantly, these agencies were distinct from the traditional executive branch departments and thus operated “outside of the direct oversight of the president,” White notes. Progressive policymakers—starting with some in the Franklin Roosevelt administration—quickly grew frustrated with the agencies’ ability to “impede an energetic liberal president’s regulatory agenda.”

Years later, conservatives also began to bemoan the independent nature of certain agencies. As the Reagan administration sought to cut back on the regulatory state, it attempted to increase the president’s power over the administrative state through mechanisms such as centralized regulatory review under the Office of Information and Regulatory Affairs. Since Reagan, presidents of both parties increasingly have embraced greater presidential control over federal agencies. Some used that control to expand the administrative state’s power, while others have sought to curtail it.

The “most straightforward” way to shrink the administrative state, White argues, “would be for Congress to do the work of taking delegated powers away from the agencies, by amending statutes.” Since many legislators prefer to delegate their power in an effort to avoid responsibility, White views this option as unrealistic.

This leads White to the “second best option,” which is to pass some form of broad regulatory reform legislation that revamps the processes through which agencies enact rules. He mentions the REINS Act and the Regulatory Accountability Act as two possible options. R Street actually has identified a whole menu of options from which Congress feasibly could choose.

More broadly, White points out that using the imperial presidency as a means to control and direct the administrative state is no longer an effective mechanism to rein it in. Rather, it’s far past time that the other branches assert themselves and join the fray. One possibility is for the judicial branch to revisit its doctrines that grant significant deference to federal agencies.

In many ways, Andrew Rudalevige’s contribution, “Constitutional Structure, Political History and the Invisible Congress,” picks up where White’s essay leaves off. When the system of separated powers works as intended, the legislative and executive branches operate as “rivals for power,” making their relationship contentious, rather than cooperative. Although the Founding Fathers were more concerned about the legislature accreting power than the executive, Rudalevige’s chapter retraces how both structural and political factors have created the exact opposite dynamic.

Rudalevige lays out an obvious—but often underappreciated—truth: the president has a built-in advantage in that he is just a single person. By contrast, Congress must function as a 535-member conglomeration of legislators spread across two different chambers and hailing from different political parties and geographical regions. Given that each member carries “their own localized electoral incentives,” they will “rarely find it in their interests to work together, much less to confront the executive branch.”

Another factor Rudalevige pinpoints for Congress’ decline is the rise of political polarization. Politics has increasingly become a team sport: “A vote against presidential overreach is now seen by the president’s party colleagues as damaging to the party brand, and thus to their own careers.” The result is that legislators are more likely to toe the party line in pursuit of short-term policy victories, rather than vote to strengthen Congress as an institution.

Rudalevige also highlights how modern travel has allowed congressmen to transit back-and-forth from their home districts to Washington with relative ease. This has led to the rise of the “Tuesday-Thursday club of drop-in legislators,” who spend more time pressing the flesh with donors and constituents back home than doing the hard work of hammering out legislative compromises. One option is for Congress to extend its work weeks, which could increase the amount of floor time available to conduct legislative business.

Exercising more effective oversight doesn’t just mean finding more time; it also requires more capacity. Rudalevige cites R Street’s Kevin Kosar, who has chronicled the decline in congressional staff and pay levels over the past 40 years. Beefing up congressional staff, as well as support systems like the Congressional Research Service, would help address this deficiency.

Other possibilities include forming new institutions such as a Congressional Regulation Office—as proposed by Kosar and the Brookings Institution’s Phillip Wallach—to provide independent cost-benefit analyses and retrospective reviews of regulations. A final idea—and one long advocated by policy wonks—is a return to “regular order” budgeting, in which Congress breaks the federal budget into bite-sized pieces rather than relying on last-second, thousand-page omnibus spending bills to keep the government’s lights on.

While all of these ideas are available and ready for the picking, Rudalevige admits that “current returns are unpromising” that Congress will actually implement any of them. Nonetheless, he’s correct in warning that “the matter demands our attention even so.” Let’s hope Congress—and the American citizenry—heeds his call.


Image by Ed-Ni Photo

 

How executive ‘detailees’ could help ease Congress’ staffing problems


It is becoming more widely acknowledged that Congress has a staffing problem. While the executive branch employs more than 4 million people, the legislative branch has only about 30,000. This number includes personnel toiling for agencies that do not readily come to mind as legislative, like the Government Publishing Office, the Architect of the Capitol and the U.S. Capitol Police.

While congressional capacity advocates shout for more funding and personnel to be allocated to the legislative branch, political scientists Russell Mills and Jennifer Selin examine the use of an often-overlooked stream of expertise available to congressional committees: federal agency detailees. Detailees are executive agency personnel with a particular policy mastery who are temporarily loaned out to congressional committees. The typical detailee assignment runs one year.

Hill operators and observers have long known policy expertise resides primarily in congressional committee staff. Compared to House and Senate personal office aides, committee staffers typically have more experience and narrower portfolios, both of which enhance the abilities of committees and their members to conduct oversight, draft legislation and develop fruitful lines of communication with relevant agency stakeholders.

However, as Mills and Selin point out in a recent piece in Legislative Studies Quarterly, there are only about half as many committee staff as there were in 1980, while inflation-adjusted pay levels have fallen 20 percent for many committee aides. This reduction in resources has hampered committees’ oversight capabilities, in addition to abetting the centralization of policymaking in leadership offices or its complete delegation to the executive branch.

House versus Senate committee staff, 1977-2014


SOURCE: Russell Mills and Jennifer Selin, 2017

Mills and Selin argue detailees offer at least three specific benefits to supplement Congress’ legislative and oversight responsibilities:

  1. Detailees provide additional legislative support. Though committee staffers are usually issue specialists, “detailees often have specialized, expert knowledge of a policy, [and] they are able to provide awareness more traditional congressional staff may not have.” Moreover, given their personal experience within the agencies, detailees offer committees important insight into the decision-making processes and likely agency responses to potential congressional action.
  2. Detailees assist with executive branch oversight. “The process for securing information through requests directly to a federal agency is slower and involves agency coordination with the presidential administration. Detailees provide a way around these problems.” Simply having agency contacts and being able to connect committee staffers directly to those agency personnel most likely to respond quickly with accurate information can expedite the frustratingly slow information-gathering process vital to conducting effective congressional oversight.
  3. Detailees supplement interest-group engagement. In developing policy, committee staffers spend much of their time meeting with relevant policy stakeholders. “Committee staff routinely assists members of Congress by meeting with interest groups to gather their input for legislative initiatives as well as to hear their objections or support for actions taken by executive agencies.” Detailees provide the committee more, and different, stakeholder contacts established from the agency perspective, which allows for better information filtering and a more informed assessment of legislative potential.

Finally, and importantly, Mills and Selin point out that use of detailees is a rare win-win for both the legislative and the executive branches. The benefits to Congress are clear: committees gain expert-level staffers with experience and connections to the agencies under the committee’s purview, all on the agencies’ dime. Sen. Susan Collins, R-Maine, has noted:

These detailees apply their expertise in researching issues, staffing hearings, and working on legislation. In return, they gain valuable experience, which develops their careers and benefits their agencies.

The gains for the executive branch are less intuitive. After all, the agency loses a competent staffer who then offers Congress firsthand insight into agency operations, even potentially providing increased oversight to the very agency from which the staffer originated.

But Mills and Selin note that, from qualitative interviews they conducted with current and former detailees, they discovered that “detailees gain experience in the legislative process, can represent the interests and perspectives of the agency, and give the agency a conduit to committee decision making.”

In other words, just as detailees provide insider information to committees on agency operations, agencies profit from their detailees returning to the agency with intelligence on committee decision-making, policymaking and oversight capabilities. All of which our personnel-strapped national legislature badly needs.

Five years of R Street


Five years ago today, Deborah Bailin, Christian Cámara, Julie Drenner, R.J. Lehmann, Alan Smith and I resigned our jobs at the Heartland Institute over a horrifically ill-advised billboard advertisement and began a new think tank called R Street. Tonight, we’ll celebrate our fifth anniversary.

We’re now almost 40 strong and have a budget about 10 times that of our first year. In honor of our anniversary, here are five bits of trivia about R Street that I like to share:

  1. R Street’s first hire was Erica Schoder, now our senior vice president for operations. Our first office, previously the Heartland Institute’s Washington office, was a converted art gallery above a vintage clothing store.
  2. Some other names we considered were the Metis Institute (after the Greek goddess of common sense) and JuneFirst (after the day we officially opened). Our offices were near R Street and R is the first mostly residential street off Connecticut Avenue, which is arguably the main street in Washington. So it’s the place where real life begins in the nation’s capital.
  3. One huge advantage of the name R Street was that we could get the short URL rstreet.org. That’s actually a big deal. It makes our email addresses much easier to type. Many other think tanks that have started recently have long and unwieldy URLs. We don’t.
  4. To my knowledge, we remain the only right-of-center think tank that both reimburses bike sharing and maintains a gender-identity nondiscrimination policy. I’m a cyclist and support civil rights protections for the gender nonconforming. But I’d argue that both policies are simply grounded in common sense.
  5. We believe that pirates are much cooler than ninjas. By a lot.

Image by Africa Studio

 

Reports of the taxi industry’s death have been greatly exaggerated

Co-written with Jonathan Haggerty 

It seems like nearly every time ridesharing comes up in New York City, someone inevitably points to the dramatic decline in taxi medallion prices. Dubbed the “Uber effect” by American Enterprise Institute scholar Mark Perry, the theory is that increased competition from companies like Uber and Lyft has eroded the legal monopoly that taxi medallion holders previously exerted in the on-demand automobile transport market.

By competing against this once-isolated market, transportation network companies like Uber and Lyft have made these medallions significantly less valuable. One proxy for this decline can be found in the share price of Medallion Financial Corp., a publicly traded consumer and commercial lending firm that is a major creditor in the taxi medallion lending business. Looking at the period from 2013 to 2016, the decline certainly looks precipitous:

[Chart: Medallion Financial Corp. share price, 2013–2016]

This may not be the complete story, however. After all, the stock price may vary depending on the specific quality of loans the company issues, its underlying cost of capital and general market confidence. Furthermore, the stock price doesn’t make any distinction across the numerous categories of medallion ownership.

To the extent that news reports cite changes in the actual market value of a medallion, they usually do so anecdotally, comparing the peak value in 2014 of more than $1 million to the current trough of under $300,000.

Given the clamor and potential policy implications, a more detailed analysis seemed appropriate. We examined medallion price trends over time and differentiated across the different medallion categories. NYC’s Taxi and Limousine Commission compiles monthly records of medallion transactions for each of six categories: individual unrestricted, handicap accessible and fuel alternative, as well as corporate (minifleet) unrestricted, accessible and fuel alternative. Unrestricted cabs are the general-purpose yellow taxis that everyone thinks of, handicap accessible cabs are specially retrofitted to allow persons with disabilities easier access, and fuel alternative cabs have specific fuel requirements tilted toward being more environmentally friendly.

The primary breakdown is between individual and minifleet. Where an individual medallion owner has to spend a minimum number of hours per year (usually the equivalent of 200 separate nine-hour shifts) driving the cab, a minifleet owner can lease out taxis to other drivers.
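
To illustrate the kind of breakdown we ran, the sketch below shows how monthly transfer records could be grouped by medallion category to produce average monthly sale prices. It is a minimal sketch, not our actual analysis script, and the file and column names (tlc_medallion_transfers.csv, transfer_date, category, price) are assumptions about the data layout rather than the TLC’s real headers.

```python
# Minimal sketch of the grouping described above -- not our actual analysis
# script. File and column names are hypothetical; the TLC's real export
# may use different headers.
import pandas as pd

# Load the monthly medallion transfer records (hypothetical file name).
transfers = pd.read_csv("tlc_medallion_transfers.csv", parse_dates=["transfer_date"])

# Drop zero-price transfers (e.g., family transfers) that would skew averages.
sales = transfers[transfers["price"] > 0]

# Average sale price and number of sales per month, for each of the six
# medallion categories (individual/minifleet x unrestricted/accessible/alt-fuel).
monthly = (
    sales.groupby([pd.Grouper(key="transfer_date", freq="M"), "category"])["price"]
    .agg(["mean", "count"])
    .reset_index()
)

print(monthly.head())
```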

By far, the largest categories are the individual and minifleet unrestricted licenses, and the general decline here tracks fairly well with Medallion Financial’s stock price:

[Chart: Transfer prices for individual and minifleet unrestricted medallions over time]

Immediately we can see that there is a clear and substantial price premium for minifleet licenses over individual licenses. This makes sense intuitively: a license with strict personal driving requirements is going to be more restrictive on your time, and less valuable, than one without. Another factor that stands out is how messy the data is, with transfers at price points both significantly cheaper and significantly more expensive than the average in any given month. Unfortunately, it’s difficult to tell whether these outliers reflect issues with the TLC’s data recording or external factors, like family transfer discounts or business liquidations.

However, it is important to recognize that the towering price high in 2014 was spurred partially by fleet owners borrowing against the rising value of the medallions they already owned to finance further purchases. So while medallion prices are undoubtedly dropping, the decline may look worse because prices were experiencing a bit of a bubble in the first place. Indeed, a former head of the TLC stated in April that “the (taxi) industry’s performance has not been as bad as the decline in medallion prices would suggest.” In other words, don’t mistake the price of medallions for the health of the industry overall.

Another obvious factor here is the decrease in liquidity since 2014. One sale in March and two in February of 2017 means one of two things is happening: either medallion owners can’t find buyers, or owners are holding on because they expect prices to rise or stabilize. The prospect of a bailout could keep buoying prices, while easing restrictions on medallion transfers has increased the potential pool of buyers.

Unfortunately, there were so few alternative fuel licenses released or transferred that there was not much data to analyze. Handicap accessible licenses, however, had a more interesting story to tell:

[Chart: Transfer prices for handicap accessible medallions over time]

Here you can see that the handicap accessible licenses have actually appreciated in value over the same timeframe. (If the graph looks funky with the straight lines, that’s due to the initial auctions where these licenses were sold.) This is not an apples-to-apples comparison, because we have so little data post-2014, but the total lack of sales for minifleet accessible licenses may be an indication that owners don’t see them as assets worth liquidating.

One reason for this may be that Uber partners with cab drivers who own these handicap accessible licenses to help provide rides on its platform to users with disabilities. It seems intuitive, then, that these specific medallions would continue to hold value.

But perhaps the most important factor in all this is the total size of the market. The market share of taxis has shrunk with the emergence of Uber and Lyft, but the overall size of the market is larger today:

[Chart: Combined taxi and TNC trip volume over time]

Note that taxi trip volume has begun to level out in late 2016 and 2017. Taxis can coexist with TNCs in some markets, especially in densely populated cities where the value of a street hail is higher.

Put all of this together, and it appears the reports of taxi death have been greatly exaggerated. While some form of the Uber effect certainly exists insofar as general medallion prices are concerned, the decline is not quite as precipitous as some have reported, and taxi ride volume is not disappearing overnight. Furthermore, from here on out the price of these medallions will likely depend more on the success or failure of autonomous vehicles than on competition from ridesharing services.

 

 

The data we compiled for the piece can be found here.


Image by Cameris

Even without Durbin Amendment repeal, Congress should pass the CHOICE Act

The following post was co-authored by R Street Outreach Manager Clark Packard.


House Financial Services Committee Chairman Jeb Hensarling, R-Texas, has done the yeoman’s work of putting together a host of fundamental conservative reforms in the CHOICE Act. Although repeal of the Durbin amendment would have been a positive, pro-market reform, Congress should pass the bill even if this repeal is not included.

The most important provision of the bill allows banks the very sensible choice of maintaining substantial equity capital in exchange for a reduction in onerous and intrusive regulation. This provision puts before banks a reasonable and fundamental trade-off: more capital, less intrusive regulation. This is reason enough to support the CHOICE Act. Its numerous other reforms, including improved constitutional governance of administrative agencies, are a further reason to support the bill.

Accountability of banks

The 10 percent tangible leverage capital ratio, conservatively calculated, as proposed in the CHOICE Act, is a fair and workable level.

A key lesson of the housing bubble was that mortgage loans made with 0 percent skin in the game are much more likely to cause trouble. To be fully accountable for the credit risk of its loans, a bank can keep them on its own balance sheet. This is 100 percent skin in the game. The CHOICE Act rightly gives relief to banks holding mortgage loans in portfolio from regulations that try to address problems of a zero skin in the game model – problems irrelevant to the incentives of the portfolio lender.
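
To make the trade-off concrete, here is a simplified illustration of how a 10 percent tangible leverage test works; the bill’s actual definitions of tangible equity and tangible assets are more detailed than this sketch assumes, and the numbers below are purely illustrative.

```python
# Simplified illustration of the 10 percent tangible leverage threshold.
# The CHOICE Act's actual definitions of tangible equity and tangible
# assets are more detailed than this sketch assumes.

def tangible_leverage_ratio(tangible_equity: float, tangible_assets: float) -> float:
    """Tangible equity capital divided by tangible assets."""
    return tangible_equity / tangible_assets

# Example: a bank with $10 billion of tangible equity against $100 billion
# of tangible assets sits exactly at the 10 percent threshold.
equity = 10e9
assets = 100e9
ratio = tangible_leverage_ratio(equity, assets)
print(f"{ratio:.0%}")   # 10%
print(ratio >= 0.10)    # True -> would qualify for the regulatory off-ramp
```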

Accountability of regulatory agencies

The CHOICE Act is Congress asserting itself to clarify that regulatory agencies are derivative bodies accountable to the legislative branch. They cannot be sovereign fiefdoms, not even the dictatorship of the Consumer Financial Protection Bureau. The most classic and still most important power of the legislature is the power of the purse. The CHOICE Act accordingly puts all the financial regulatory agencies under the democratic discipline of congressional appropriations. This notably would end the anti-constitutional direct draw on public funds that was granted to the CFPB precisely to evade the democratic power of the purse.

The CHOICE Act also requires of all financial regulatory agencies the core discipline of cost-benefit analysis. Overall, this represents very significant progress in the governance of the administrative state and brings it under better constitutional control.

Accountability of the Federal Reserve

The CHOICE Act includes the text of The Fed Oversight Reform and Modernization Act, which improves governance of the Federal Reserve by Congress. As a former president of the New York Federal Reserve Bank once testified to the House Committee on Banking and Currency: “Obviously, the Congress which set us up has the authority and should review our actions at any time they want to, and in any way they want to.” That is entirely correct. Under the CHOICE Act, such reviews would happen at least quarterly. These reviews should include having the Fed quantify and discuss the effects of its monetary policies on savings and savers.

Reform for community banks

A good summary of the results of the Dodd-Frank Act is supplied by the Independent Community Bankers of America’s “Community Bank Agenda for Economic Growth.” “Community banks,” it states, “need relief from suffocating regulatory mandates. The exponential growth of these mandates affects nearly every aspect of community banking. The very nature of the industry is shifting away from community investment and community building to paperwork, compliance and examination,” and “the new Congress has a unique opportunity to simplify, streamline and restructure.”

So it does. The House of Representatives should pass the CHOICE Act.


Image by Erce

 

How congressional power became separate, but unequal